diff --git "a/abs_29K_G/test_abstract_long_2405.00747v1.json" "b/abs_29K_G/test_abstract_long_2405.00747v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00747v1.json" @@ -0,0 +1,438 @@ +{ + "url": "http://arxiv.org/abs/2405.00747v1", + "title": "Soft Preference Optimization: Aligning Language Models to Expert Distributions", + "abstract": "We propose Soft Preference Optimization (SPO), a method for aligning\ngenerative models, such as Large Language Models (LLMs), with human\npreferences, without the need for a reward model. SPO optimizes model outputs\ndirectly over a preference dataset through a natural loss function that\nintegrates preference loss with a regularization term across the model's entire\noutput distribution rather than limiting it to the preference dataset. Although\nSPO does not require the assumption of an existing underlying reward model, we\ndemonstrate that, under the Bradley-Terry (BT) model assumption, it converges\nto a softmax of scaled rewards, with the distribution's \"softness\" adjustable\nvia the softmax exponent, an algorithm parameter. We showcase SPO's\nmethodology, its theoretical foundation, and its comparative advantages in\nsimplicity, computational efficiency, and alignment precision.", + "authors": "Arsalan Sharifnassab, Sina Ghiassian, Saber Salehkaleybar, Surya Kanoria, Dale Schuurmans", + "published": "2024-04-30", + "updated": "2024-04-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "We propose Soft Preference Optimization (SPO), a method for aligning\ngenerative models, such as Large Language Models (LLMs), with human\npreferences, without the need for a reward model. 
SPO optimizes model outputs\ndirectly over a preference dataset through a natural loss function that\nintegrates preference loss with a regularization term across the model's entire\noutput distribution rather than limiting it to the preference dataset. Although\nSPO does not require the assumption of an existing underlying reward model, we\ndemonstrate that, under the Bradley-Terry (BT) model assumption, it converges\nto a softmax of scaled rewards, with the distribution's \"softness\" adjustable\nvia the softmax exponent, an algorithm parameter. We showcase SPO's\nmethodology, its theoretical foundation, and its comparative advantages in\nsimplicity, computational efficiency, and alignment precision.", + "main_content": "Introduction The alignment problem focuses on adjusting a generative model (e.g., Large Language Models (LLMs)) to align its outputs with human preferences and ethical standards, or to tailor the model for specific tasks; it is especially important after pretraining on extensive datasets with mixed-quality samples. A widely embraced approach involves refining these models based on expert (i.e., human) preferences, typically expert-provided comparisons of pairs of model-generated outputs [1]. Given a preference dataset D and a pretrained model \u03c0ref, preference alignment seeks to train a new model, \u03c0\u03b8, whose outputs are better aligned with the preferences in D [2, 3]. A notable advancement in this field has been the application of Reinforcement Learning from Human Feedback (RLHF), which involves training a reward model based on actions preferred by humans and then optimizing \u03c0\u03b8 to maximize these learned rewards while ensuring closeness to the initial model behaviors [4]. 
Despite the effectiveness of RLHF in addressing the alignment problem, RLHF involves a relatively complex pipeline, susceptible to propagation of the reward model\u2019s biases over to the policy optimization. Recently, several studies have introduced methods for the direct optimization of preferences, including Direct Preference Optimization (DPO) among others [5, 6, 7, 8, 9, 10, 11]. These approaches eliminate the need for a separate reward-model training phase, instead adjusting the model directly using preference data, and often outperform RLHF-based approaches. These reward-model-free methods enjoy advantages over RLHF-based approaches, such as simplified pipelines, reduced computational complexity, and avoidance of bias transfer from the reward model to policy optimization. Indeed, the rationale for incorporating an additional component, the reward model, into a supervised learning context with a supervised dataset, is debatable. In this work, we propose a simple and effective reward-model-free alignment method, termed Soft Preference Optimization (SPO). SPO seeks to align the model\u2019s preference estimates (detailed in Section 3) with the expert preferences in D, by minimizing a loss function of the form AlignmentLoss(\u03c0\u03b8, \u03c0ref, D) = PreferenceLoss(\u03c0\u03b8, D) + Regularizer(\u03c0\u03b8, \u03c0ref), (1) where the Regularizer may be chosen as the KL divergence. We discuss natural choices for the model\u2019s preference estimates and the preference loss function in Sections 3 and 4. Unlike RLHF and DPO, the development of SPO does not rely on assumptions regarding the existence of underlying rewards, such as the Bradley-Terry (BT) model [12]. (\u2217Correspondence to sharifna@ualberta.ca.) 
Nevertheless, we demonstrate that if the BT model is applicable and given an asymptotically large preference dataset, SPO is theoretically guaranteed to converge to a softmax of the rewards, which inspires the designation \u201cSoft Preference Optimization\u201d. Unlike DPO, which tends toward a deterministic model even with an extremely large dataset if the regularization coefficient is nearly zero, SPO allows for the adjustment of the softmax\u2019s exponent through an input parameter, thereby offering flexibility in modulating the \u201csoftness\u201d of the output distribution. SPO has two main differences from existing reward-model-free alignment methods. The first distinction involves the choice of a preference loss that aligns the model\u2019s preference estimates with the expert\u2019s preferences, resulting in a favorable fixed point as discussed in the previous paragraph. The other distinction of SPO from DPO and similar algorithms lies in the application of regularization. DPO restricts regularization to the preference dataset, which is counter-intuitive since the dataset already provides specific data points for the model to fit; thus, additional regularization within this limited scope is unnecessary. More critically, since the preference dataset represents a tiny subset of the potential outputs of the model, focusing regularization solely within this subset can lead to an undesirable, extensive shift in the model\u2019s distribution outside of the dataset, resulting in incoherent behaviour. Acknowledging this limitation, SPO applies regularization across the entire output distribution of the model, not just within the confines of the preference dataset. 2 Background Consider a finite context (or query) space X and a finite action (or response) space Y. For a given query x \u2208X, a behavior policy (such as a pre-trained model) is employed to generate responses y1, y2 \u2208Y. 
These responses are subsequently evaluated by expert raters (e.g., humans) to determine which of y1 or y2 constitutes a more appropriate response to the query x. We adopt the notation y1 \u227by2 to denote that y1 is preferred over y2 in a specific context. The true expert preferences are typically represented by a probability, p\u2217(y1 \u227by2|x), reflecting the inherent randomness due to the variable nature of the experts, who may be a group of humans with slightly differing preferences. A preference dataset, D, is compiled by collecting expert preferences for multiple (x; y1, y2) tuples. In detail, D comprises tuples (x; yw, yl), where yw \u227byl indicates the preferred (winner) and less preferred (loser) responses based on expert evaluations. RLHF comprises two main phases: reward modeling and reinforcement learning (RL) fine-tuning. The initial phase, reward modeling, operates under the assumption that there exist latent rewards r(y|x) that form the basis of expert preferences. This phase aims to develop a model capable of closely approximating these underlying rewards. A widely accepted method for defining these latent rewards is through the Bradley-Terry (BT) model [12], alongside the Plackett-Luce ranking models, which serve as a generalization of the BT model for data involving rankings [13, 14]. The BT model posits that the distribution of expert preferences, p\u2217, is characterized by the following equation: p\u2217(y1 \u227by2|x) = \u03c3( r(y1|x) \u2212 r(y2|x) ) = exp( r(y1|x) ) / ( exp( r(y1|x) ) + exp( r(y2|x) ) ), (2) where \u03c3(\u00b7) represents the sigmoid function. Subsequently, the reward model r\u03d5(y|x) is trained to minimize the negative log-likelihood loss, \u2212E(x;yw,yl)\u223cD [ log \u03c3( r\u03d5(yw|x) \u2212 r\u03d5(yl|x) ) ]. 
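As a concrete illustration of the BT preference probability (2) and the reward-modeling loss above, the following is a minimal sketch in plain Python (function names are ours, not from the paper):

```python
import math

def bt_preference_prob(r_w, r_l):
    """Bradley-Terry probability (Eq. 2) that the response with reward
    r_w is preferred over the one with reward r_l: sigma(r_w - r_l)."""
    return 1.0 / (1.0 + math.exp(-(r_w - r_l)))

def reward_model_nll(reward_pairs):
    """Reward-modeling loss: -E[log sigma(r(y_w|x) - r(y_l|x))],
    averaged over (winner_reward, loser_reward) pairs."""
    return -sum(math.log(bt_preference_prob(rw, rl))
                for rw, rl in reward_pairs) / len(reward_pairs)

print(bt_preference_prob(2.0, 0.0))                # sigma(2) ~ 0.881
print(reward_model_nll([(2.0, 0.0), (1.0, 1.0)]))  # ~ 0.410
```

A pair whose winner already has a higher reward contributes a small loss; a tied pair contributes log 2.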
The RL fine-tuning phase aims to train a model, \u03c0\u03b8, to minimize a loss function of the form LRLHF(\u03c0\u03b8, \u03c0ref, r\u03d5) = \u2212E x\u223cD, y\u223c\u03c0\u03b8(\u00b7|x) [ r\u03d5(y|x) ] + \u03b2 DKL(\u03c0\u03b8 \u2225 \u03c0ref), (3) where \u03b2 is a non-negative constant, and \u03c0ref is a reference policy often acquired through pre-training on high-quality data and is typically identical to the behavior policy. The DKL term in the loss function acts as a regularizer, ensuring the model does not significantly deviate from the distribution where the reward model is most accurate. RL fine-tuning employs reinforcement learning algorithms, like PPO [15], to optimize the above loss function [4], introducing significant complexity into the RLHF pipeline. Additionally, the RLHF framework allows for the propagation of any generalization errors from the reward model to the RL fine-tuned model. The DPO framework [5] addresses these challenges by simplifying the problem into a single-phase supervised learning approach, thus avoiding the pitfalls associated with separate reward modeling and RL fine-tuning phases. DPO circumvents the need for a reward model by directly optimizing the following loss function: LDPO(\u03c0\u03b8, \u03c0ref, D) = \u2212E [ log \u03c3( \u03b2 log( \u03c0\u03b8(yw|x) / \u03c0ref(yw|x) ) \u2212 \u03b2 log( \u03c0\u03b8(yl|x) / \u03c0ref(yl|x) ) ) ]. (4) It was demonstrated in [5] that LDPO has the same minimizer as LRLHF, under the conditions of the BT model, an asymptotically large dataset, and a sufficiently large model capacity (i.e., a tabular model that encodes the probability \u03c0\u03b8(y|x) for all x \u2208X and y \u2208Y into a vector). The DPO framework was further extended in [16], aiming to directly maximize the win-rate of \u03c0\u03b8 against \u03c0ref. 
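The DPO loss (4) for a single pair reduces to simple arithmetic on log-probabilities. A minimal sketch (names and the toy numbers are ours, not from the paper):

```python
import math

def dpo_loss(lp_w, lp_l, lp_ref_w, lp_ref_l, beta=0.1):
    """DPO loss (Eq. 4) for one (x; y_w, y_l) pair, computed from the
    log-probabilities of the two responses under pi_theta (lp_*) and
    pi_ref (lp_ref_*); -log sigma(margin) = log(1 + exp(-margin))."""
    margin = beta * (lp_w - lp_ref_w) - beta * (lp_l - lp_ref_l)
    return math.log(1.0 + math.exp(-margin))

# The loss shrinks as pi_theta raises the winner relative to pi_ref
# while lowering the loser (toy log-probabilities).
print(dpo_loss(-3.0, -5.0, -4.0, -4.0))  # margin 0.2 -> ~0.598
print(dpo_loss(-4.0, -4.0, -4.0, -4.0))  # margin 0   -> log 2
```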
3 SPO Following (1), we consider a loss function of the form: LSPO(\u03c0\u03b8, \u03c0ref, D) = Lpref(\u03c0\u03b8, D) + Reg(\u03c0\u03b8, \u03c0ref), (5) where Lpref and Reg stand for the preference loss and the regularizer, respectively. We proceed to further detail these components. The regularization term, Reg(\u03c0\u03b8, \u03c0ref), aims to ensure that \u03c0\u03b8 avoids producing outputs that are highly improbable under \u03c0ref. A common and effective choice is the KL divergence, DKL(\u03c0\u03b8 \u2225\u03c0ref), although other regularization options are viable [17]. Importantly, Reg(\u03c0\u03b8, \u03c0ref) does not incorporate the preference dataset D as an input. This is because within D, the model aims to fit the target preferences, making additional regularization within D unnecessary. In fact, the regularization term primarily aims to regularize \u03c0\u03b8 outside D. This approach diverges from the DPO and several other existing loss functions (detailed in Section 7), which only consider the divergence of \u03c0\u03b8 from \u03c0ref within the preference dataset. We now turn our attention to the preference loss. Given a query x, let \u03c0\u03b8(y|x) denote the probability that model \u03c0\u03b8 generates output y. When presented with a query x and two responses, y1 and y2, we define the probability that \u03c0\u03b8 prefers y1 over y2 as P\u03c0\u03b8(y1 \u227by2 | x) def= P( output of \u03c0\u03b8(\u00b7|x) is y1 | output of \u03c0\u03b8(\u00b7|x) is in {y1, y2} ) = \u03c0\u03b8(y1|x) / ( \u03c0\u03b8(y1|x) + \u03c0\u03b8(y2|x) ), (6) where the last equality follows from the definition of conditional probability. We can then employ the log-likelihood loss to measure the alignment of the preference probabilities with the preference-dataset labels, \u2212E(x;yw,yl)\u223cD [ log P\u03c0\u03b8(yw \u227byl | x) ]. 
(7) We consider a preference loss Lpref(\u03c0\u03b8, D) that extends the above cross-entropy loss by employing arbitrary exponents for \u03c0\u03b8. Specifically, for any \u03b1 > 0 we let L\u03b1 pref(\u03c0\u03b8, D) def= \u2212(1/\u03b1) E(x;yw,yl)\u223cD [ log ( \u03c0\u03b8(yw|x)^\u03b1 / ( \u03c0\u03b8(yw|x)^\u03b1 + \u03c0\u03b8(yl|x)^\u03b1 ) ) ], (8) and for \u03b1 = 0, L0 pref(\u03c0\u03b8, D) def= \u2212(1/2) E(x;yw,yl)\u223cD [ log ( \u03c0\u03b8(yw|x) / \u03c0\u03b8(yl|x) ) ]. (9) This L\u03b1 pref(\u03c0\u03b8, D) contains the cross-entropy loss in (7) as a special case when \u03b1 = 1. The \u03b1 parameter allows for tailoring the model to exhibit different entropies; models minimized under L\u03b1 pref will display higher entropy for larger \u03b1 values, gradually moving towards a deterministic model akin to DPO as \u03b1 approaches zero, as established in the next theorem. Although the SPO framework does not rely on the existence of underlying reward functions, and in particular on the BT assumption, it is insightful to study the preference loss L\u03b1 pref under conditions where the BT model assumption is valid. Intuitively, for a BT expert model, defined as \u03c0(y|x) = exp(r(y|x))/Z(x) with Z(x) being the partition function, the preference probability in (6) would be identical to the BT preference formula (2). In the next theorem, we further study the landscape of L\u03b1 pref under the BT model assumption. To eliminate local minima and saddle points that arise from nonlinear model spaces such as neural networks, in the theorems we consider a tabular model that encodes the probability \u03c0\u03b8(y|x) for all x \u2208X and y \u2208Y into a large vector. Theorem 1. 
Suppose that the BT model holds with rewards r(\u00b7|x), and fix any probability distribution D over X \u00d7 Y \u00d7 Y that has full support2 and is consistent with the BT assumption.3 Then, for any \u03b1 \u22650, L\u03b1 pref has a unique minimizer Softmax(r(\u00b7|x)/\u03b1) (reducing to argmax r(\u00b7|x) for \u03b1 = 0). Furthermore, this minimizer is globally absorbing, and the landscape of L\u03b1 pref contains no other first-order stationary point (i.e., no other local minima, local maxima, or saddle points). The proof is provided in Appendix A. According to Theorem 1, the minimizer of L\u03b1 pref is the softmax of the BT rewards divided by \u03b1, where \u03b1 controls the entropy of the final model. Specifically, in the asymptotically large dataset regime, when \u03b1 = 1, the preference loss reaches its minimum at the hypothetical BT expert model that generates the preference dataset\u2019s labels, defined as Softmax(r(\u00b7|x)). The gradient of the preference loss L\u03b1 pref, for any \u03b1 \u22650, is given by \u2212\u2207\u03b8L\u03b1 pref(\u03c0\u03b8, D) = E(x;yw,yl)\u223cD [ ( \u03c0\u03b8(yl|x)^\u03b1 / ( \u03c0\u03b8(yw|x)^\u03b1 + \u03c0\u03b8(yl|x)^\u03b1 ) ) ( \u2207\u03b8 log \u03c0\u03b8(yw|x) \u2212 \u2207\u03b8 log \u03c0\u03b8(yl|x) ) ]. Here, \u03c0\u03b8(yl|x)^\u03b1 / ( \u03c0\u03b8(yw|x)^\u03b1 + \u03c0\u03b8(yl|x)^\u03b1 ) serves as a measure of the model\u2019s error in preferring yw over yl. Consequently, the magnitude of this preference error proportionally scales the adjustment \u2207\u03b8 log \u03c0\u03b8(yw|x) \u2212 \u2207\u03b8 log \u03c0\u03b8(yl|x), leading to larger updates when the error is large. 4 Weighted SPO We further expand the preference loss of SPO by considering a weighting over different samples, where the weights can depend on \u03c0\u03b8. This weighting only affects (improves) the optimization process without changing the fixed point, as we show in this section. 
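The preference loss (8)-(9) can be computed stably from log-probabilities, since \u2212log( \u03c0\u03b8(yw|x)^\u03b1 / (\u03c0\u03b8(yw|x)^\u03b1 + \u03c0\u03b8(yl|x)^\u03b1) ) = softplus(\u03b1 (log \u03c0\u03b8(yl|x) \u2212 log \u03c0\u03b8(yw|x))). A minimal sketch (ours, not the paper's code), with an optional per-pair weight anticipating the weighted variant of this section:

```python
import math

def softplus(z):
    """Numerically stable log(1 + exp(z))."""
    return math.log1p(math.exp(-abs(z))) + max(z, 0.0)

def spo_pref_loss(logp_pairs, alpha=1.0, mu=None):
    """SPO preference loss of Eqs. (8)-(9) from per-pair log-probabilities
    (log pi(y_w|x), log pi(y_l|x)). An optional symmetric positive weight
    mu gives the weighted loss of Section 4; in an autograd framework mu
    would be detached (no gradient is propagated through it)."""
    if mu is None:
        mu = [1.0] * len(logp_pairs)  # uniform weights recover Eqs. (8)-(9)
    total = 0.0
    for (lp_w, lp_l), w in zip(logp_pairs, mu):
        if alpha > 0:
            total += w * softplus(alpha * (lp_l - lp_w)) / alpha
        else:  # alpha = 0 limit, Eq. (9)
            total += 0.5 * w * (lp_l - lp_w)
    return total / len(logp_pairs)

pairs = [(-2.0, -4.0), (-3.0, -3.0)]
print(spo_pref_loss(pairs, alpha=1.0))                 # alpha = 1 recovers the cross-entropy loss (7)
print(spo_pref_loss(pairs, alpha=1.0, mu=[2.0, 0.5]))  # down-weight the tied pair
```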
We call a function \u00b5 : Y \u00d7 Y \u00d7 X \u2192R+ symmetric positive if \u00b5(y1, y2 | x) = \u00b5(y2, y1 | x) > 0, for all x \u2208X and all y1, y2 \u2208Y. Given a symmetric positive function \u00b5 and an \u03b1 \u22650, we define the weighted preference loss as L\u03b1,\u00b5 pref(\u03c0\u03b8, D) def= \u2212(1/\u03b1) E(x;yw,yl)\u223cD [ \u00b5(yw, yl | x) log ( \u03c0\u03b8(yw|x)^\u03b1 / ( \u03c0\u03b8(yw|x)^\u03b1 + \u03c0\u03b8(yl|x)^\u03b1 ) ) ] (10) if \u03b1 > 0, and for \u03b1 = 0 we let L0,\u00b5 pref(\u03c0\u03b8, D) def= \u2212(1/2) E(x;yw,yl)\u223cD [ \u00b5(yw, yl | x) log ( \u03c0\u03b8(yw|x) / \u03c0\u03b8(yl|x) ) ]. (11) The weight function \u00b5 controls the impact of individual samples within the loss calculation. The utility of \u00b5 emerges from the observation that not all sample pairs in the preference dataset hold equivalent significance. For instance, diminishing the weights of dataset samples (x; yw, yl) where both responses yw and yl are of low quality (e.g., low probability) can be particularly advantageous. This can be achieved, for example, by setting \u00b5(yw, yl | x) \u221d ( \u03c0\u03b8(yw|x) + \u03c0\u03b8(yl|x) )^\u03b3, with \u03b3 > 0. Refer to Section 6 and Appendix B for choices of \u00b5 that have demonstrated enhanced performance in our experimental evaluations. While \u00b5 may depend on \u03c0\u03b8, it is important to note that gradient propagation through \u00b5 is not permitted. Specifically, the gradient \u2207\u03b8L\u03b1,\u00b5 pref(\u03c0\u03b8, D) is given by \u2212E(x;yw,yl)\u223cD [ \u00b5(yw, yl|x) ( \u03c0\u03b8(yl|x)^\u03b1 / ( \u03c0\u03b8(yw|x)^\u03b1 + \u03c0\u03b8(yl|x)^\u03b1 ) ) ( \u2207\u03b8 log \u03c0\u03b8(yw|x) \u2212 \u2207\u03b8 log \u03c0\u03b8(yl|x) ) ]. (12) 2Full support in this context means that the probability distribution assigns a non-zero sampling probability to all (x; yw, yl) \u2208X \u00d7 Y \u00d7 Y. 
3Consistency with the BT model holds if the relative probability of outcomes is determined by a logistic function of the reward differences. More specifically, D(x; y1, y2)/D(x; y2, y1) = exp( r(y1 | x) \u2212 r(y2 | x) ), for all (x; y1, y2) \u2208X \u00d7 Y \u00d7 Y, where r(\u00b7|\u00b7) is the reward function in the BT model. Interestingly, the weight function, \u00b5, mainly influences the optimization process, not the ultimate fixed point, in the tabular setting and under an asymptotically large preference dataset, as we show in the next theorem. Theorem 2. Suppose that the conditions of Theorem 1 hold. Then for any \u03b1 \u22650 and any symmetric positive function \u00b5, the softmax of the BT rewards divided by \u03b1, Softmax(r(\u00b7|x)/\u03b1) (reducing to argmax r(\u00b7|x) for \u03b1 = 0), is the unique globally absorbing fixed point of the differential equation \u02d9\u03c0 = Q( \u2212\u2207\u03b8L\u03b1,\u00b5 pref(\u03c0\u03b8, D) ), where Q(\u00b7) stands for projection onto the probability simplex, and the gradient is given in (12). The proof is given in Appendix A. 5 Comparative Analysis: SPO Versus DPO This section contrasts the SPO method with the DPO algorithm at a conceptual level. A detailed empirical comparison with DPO will follow in Section 6. A key distinction between SPO and DPO lies in the application of regularization (i.e., DKL(\u03c0\u03b8 \u2225 \u03c0ref)). The DPO loss function (4) incorporates regularization over preference dataset samples only. This approach is suboptimal since the primary objective of alignment is to align with the preferences in the dataset, and regularization within the preference dataset undermines this objective. The inclusion of DKL in Reinforcement Learning from Human Feedback (RLHF) originally aimed to mitigate the risk of \u03c0\u03b8 diverging significantly from \u03c0ref in unexplored regions of the response space, which could lead to unexpected distribution shifts. 
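A KL regularizer over the full response space, DKL(\u03c0\u03b8 \u2225 \u03c0ref) = E y\u223c\u03c0\u03b8 [ log \u03c0\u03b8(y|x) \u2212 log \u03c0ref(y|x) ], is typically estimated with samples drawn from \u03c0\u03b8. A toy Monte-Carlo sketch (ours; the paper's practical estimators are in its Appendix B):

```python
import math
import random

def kl_estimate(logp_pairs):
    """Monte-Carlo estimate of KL(pi_theta || pi_ref): the average of
    log pi_theta(y|x) - log pi_ref(y|x) over responses y ~ pi_theta(.|x)."""
    return sum(lt - lr for lt, lr in logp_pairs) / len(logp_pairs)

# Toy check against the exact KL of two distributions over 3 responses.
p = [0.7, 0.2, 0.1]  # pi_theta(.|x)
q = [0.5, 0.3, 0.2]  # pi_ref(.|x)
exact = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
ys = random.choices(range(3), weights=p, k=20000)  # samples from pi_theta
est = kl_estimate([(math.log(p[i]), math.log(q[i])) for i in ys])
print(exact, est)  # the estimate approaches the exact value
```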
In the same vein, SPO incorporates a global regularizer, acting across the entire response space rather than being confined to the preference dataset. Another advantage of SPO over DPO and RLHF emerges from the tendency of DPO and RLHF models towards determinism. Consider a hypothetical scenario where the preference dataset is significantly larger than the data used for pretraining. In such a context, the preference dataset itself provides ample information for model alignment, rendering the regularization or DKL unnecessary; equivalently, \u03b2 can be set to a very small value or zero. In this case, under the BT-model assumption, the minimizers of the RLHF and DPO loss functions are deterministic models that, for each query x, return a response y that maximizes r(y|x). In general, the RLHF and DPO loss functions have an inherent tendency towards low-entropy policies [16], which results in a constrained range of responses, leading to potential mode collapse, where the model\u2019s outputs, although high-quality, are limited to a narrow set. In contrast, SPO, through its alignment loss in (5), allows for entropy control of the output solution via the \u03b1 parameter in (8), even when \u03b2 = 0 (see Theorem 1). This capacity to preserve information diversity makes SPO more adaptable for continual learning scenarios, enabling the model to evolve over time and be applied to subsequent alignments without significant loss of potential sets of responses. It is noteworthy that, unlike RLHF and DPO, the SPO framework does not presuppose the existence of an underlying reward model, nor does it rely on assumptions such as the BT model. Instead, SPO\u2019s preference loss aims to directly align \u03c0\u03b8 with the preferences in the preference dataset. This distinction potentially facilitates the adaptation of SPO to broader alignment contexts. Furthermore, the choice of regularization is not confined to DKL. 
This stands in contrast to the DPO and IPO frameworks, which fundamentally depend on employing DKL for the derivation of their loss functions. We further observe that the DPO loss does not allow for separation into components like (5), namely as a sum of a preference loss that is independent of \u03c0ref and a regularizer such as DKL. To understand why, consider a scenario where \u03c0\u03b8(yw|x) = \u03c0\u03b8(yl|x) for a given sample (x; yw, yl) \u2208D. In this instance, the alignment loss remains symmetrical with respect to \u03c0ref(yw|x) and \u03c0ref(yl|x), because swapping the values of \u03c0ref(yw|x) and \u03c0ref(yl|x) would not alter either the preference loss or DKL. This symmetry is absent in the DPO framework, as evident from the DPO loss formulation in (4). Despite the benefits of regularizing \u03c0\u03b8 across the entire response space, as opposed to restricting DKL solely to the preference dataset\u2014a point highlighted earlier in this section\u2014this approach can lead to significant computational overhead. This is particularly true for sequential generative models such as language models, where generating sequences is computationally more intensive than calculating the probability of a given sequence. In Appendix B, we discuss methods to resolve this and other practical concerns of SPO. 6 Experiments We evaluated the performance of SPO in a story generation task, using models pretrained on the TinyStories dataset [18], a synthetic collection of brief stories designed for children aged 3 to 4. This dataset proves effective for training and evaluating language models that are smaller than the current state-of-the-art, yet capable of crafting stories that are not only fluent and coherent but also diverse. Preference Dataset: We generated a preference dataset for aligning the stories to older age groups. 
In particular, we used the following prompt to generate the preference dataset: given a pair of stories, we asked GPT3.5-Turbo to assess them in terms of plot coherency and language skills, and whether each story is interesting and engaging for the age group of 16 years old. A high school teacher has asked two 16 year-old students to write a short story. Your task is to decide which story is better for publication in the high school newspaper, with absolutely no further editing. Story 1: \u201cOnce upon a time, there was a big balloon. It was red and round and made a funny noise. A little girl named Lily loved to watch it float in the sky. One day, Lily\u2019s mom had a meeting and needed to go. She told Lily to stay inside and play with her toys. But Lily wanted to see the balloon so badly that she sneaked outside and followed it. As she followed the balloon, she noticed that the sky was getting darker and thicker. She started to feel scared. Suddenly, the balloon started to shrink and get smaller and smaller. Lily was so scared that she started to cry. But then, a kind police officer found her and took her back home. Lily learned that it\u2019s important to listen to her mom and stay safe. And she also learned that balloons can be filled with air, but they can also be filled with heavy water.\u201d Story 2: \u201cOnce upon a time, there was a little girl named Lily. She loved animals and had a pet bunny named Fluffy. One day, she saw an amazing birdcage in the store. It was shiny and big, and had many colorful birds inside. Lily wanted the birdcage so much, but she didn\u2019t have enough money to buy it. She felt sad and cried a little. But then, Fluffy came to her and started cuddling with her. Lily felt happy again, and she realized that having Fluffy was more than just a pet store. It was her favorite thing. From that day on, Lily and Fluffy would sit together and watch the birds in the amazing birdcage. 
They didn\u2019t need to buy it, they just needed each other. And they lived happily ever after.\u201d Please provide your general assessment of each story, including whether it is interesting and engaging for the age group of 16 years (not being too childish), has a coherent plot, and has good language skills. Then, assign each story a score between 0 and 10. A story should get a higher score if it is better in all aspects considered in the general assessment. Story 1: The plot is a bit confusing and jumps around a bit with Lily following the balloon and then suddenly being rescued by a police officer. The lesson about listening to her mom and staying safe is good, but the addition of the balloon shrinking and being filled with heavy water feels a bit random and out of place. Language skills could be improved with more descriptive language and better flow. Story 2: The plot is more coherent and focuses on a simple yet heartwarming relationship between Lily and her pet bunny, Fluffy. The message about appreciating what you have rather than always wanting more is clear and well-delivered. The language used is more engaging and suitable for the age group of 16 years. Final estimates: Score of story 1: 5 Score of story 2: 8 Preference: Story 2 is better for publication in the high school newspaper. We generated a preference dataset of 500k story pairs using the above prompt, where each story was independently generated using a 110M-parameter pretrained model [19]. To improve the quality of preference data, we evaluated each story pair twice (flipping the stories in the second evaluation), and kept only those pairs for which both evaluations have a consistent preference and the difference between the story 1 and story 2 scores is at least two in each of the two runs (at least 3 if story 2 is the winner, due to the statistically significant bias of GPT3.5 towards favoring story 2 over story 1). After this filtration, about 100k pairs remained to be used in the alignment phase. 
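The filtering rule just described can be sketched as follows (function name and tuple layout are hypothetical; the paper does not specify an implementation):

```python
def keep_pair(eval_a, eval_b):
    """Filter for one story pair given two GPT evaluations (the second run
    flips the story order; scores here are assumed already mapped back to
    the original story indexing). Each evaluation is a tuple
    (score_story1, score_story2, winner) with winner in {1, 2}.
    Keep the pair only if both runs agree on the winner and the score gap
    is at least 2 in each run (at least 3 when story 2 wins, offsetting
    the judge's bias toward story 2)."""
    if eval_a[2] != eval_b[2]:
        return False  # inconsistent preference across the two runs
    min_gap = 3 if eval_a[2] == 2 else 2
    return all(abs(s1 - s2) >= min_gap for s1, s2, _ in (eval_a, eval_b))

print(keep_pair((5, 8, 2), (4, 8, 2)))  # True: consistent winner, gaps 3 and 4
print(keep_pair((5, 7, 2), (4, 7, 2)))  # False: a story-2 win needs a gap >= 3
```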
Implementation: Building on the implementation in [20], and starting from the pretrained 110M-parameter model [19], we aligned a 110M-parameter model using SPO and DPO. The alignment training used a batch size of 128 story-pairs for 40k iterations, optimizing the SPO and DPO loss functions via AdamW with 5k warm-up iterations. For SPO, we searched \u03b1 \u2208{0.003, 0.01, 0.03, 0.1}, and used uniform weighting and SoftImportance weighting (see Appendix B for definition) for the weight function \u00b5. The regularization coefficient \u03b2 was swept over \u03b2 \u2208{0.003, 0.01, 0.03, 0.1} for both the DPO and SPO algorithms. (Figure 1: Win rates of SPO and DPO against reference model.) Refer to Appendix B for more implementation details. In both experiments we considered a reference model identical to the pretrained model. Results: We computed the win rates of SPO and DPO against the reference model using GPT3.5-Turbo, at different stages of training. Each win rate was averaged over 1000 story-pair instances. Figure 1 illustrates win rates versus training steps. The peak win rates of SPO and DPO are 72% and 68%, respectively. Moreover, DPO has the disadvantage that its win rate quickly drops below 50% after a narrow peak, whereas SPO exhibits a higher and wider peak, as well as much better long-term performance. 7 Related Works Reinforcement Learning from Human Feedback (RLHF) aims to align AI systems with human preferences, relying on human judgments rather than manual rewards or demonstrations. This method has been successfully applied in fine-tuning large language models (LLMs) [21, 22, 4], but faces challenges including data quality issues, reward misgeneralization, and policy optimization complexities. Research to enhance RLHF includes methods such as rejection sampling for response generation [23, 22], where the highest-reward response from a fixed number is selected for finetuning. 
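The rejection-sampling (best-of-n) scheme just mentioned can be sketched as follows (names and the toy reward are ours, not from the cited works):

```python
from itertools import cycle

def best_of_n(prompt, sample_fn, reward_fn, n=8):
    """Rejection sampling (best-of-n): draw n responses from the policy
    and keep the one the reward model scores highest, e.g. as a target
    for further finetuning."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda y: reward_fn(prompt, y))

# Toy usage: a stand-in "policy" cycling through canned responses and a
# stand-in "reward" that just scores length (both hypothetical).
canned = cycle(["short", "medium story", "a much longer story"])
pick = best_of_n("some prompt", lambda p: next(canned),
                 lambda p, y: len(y), n=3)
print(pick)  # "a much longer story"
```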
The reference [24] simplified instruction alignment with language models into a goal-oriented reinforcement learning task, utilizing a two-phase approach of high-temperature online sampling and supervised learning with relabeled data during offline training. A two-loop learning algorithm, Grow and Improve, has also been proposed for iterative model alignment and training on a fixed dataset [25]. The Grow loop leverages the existing model to create and sample a dataset, while the Improve loop iteratively trains the model on a fixed dataset. Given the challenges of RLHF, reward-model-free alignment methods emerged fairly recently and have gained a lot of popularity. The reward-model-free approach to alignment was popularized specifically after the introduction of DPO in [5], which is briefly outlined in Section 2. Recently, several works have proposed methods to improve DPO. In [16], the authors considered an objective called \u03a8PO for learning from human preferences that is expressed in terms of pairwise preferences, without assuming the BT model. The authors focused on a specific instance, IPO, of \u03a8PO by setting \u03a8 as the identity, aiming to mitigate the overfitting and tendency-towards-deterministic-policies issues observed in DPO. The work in [7] proposed a loss function which is an unbiased estimate of the original DPO loss, and aims to alleviate sensitivity to flipped labels due to labeling noise. It was proposed in [6] to add an offset term within the sigmoid function in the DPO loss. In this manner, the model puts more weight on the winning response. In order to reduce the memory usage of DPO, [8] approximated the DPO loss by replacing \u03c0ref with a uniform reference model, eliminating the need to store and evaluate the reference model. Inspired by cringe loss, previously proposed for binary feedback, [10] adapted cringe loss for the pairwise preference context. 
More specifically, the cringe loss applies standard likelihood training to winning responses. For a losing response, it contrasts each token in the losing sequence against other likely tokens predicted by the model, aiming to discourage the losing sequence from being the top-ranked sequence. In [17], the authors proposed a separable alignment technique, called SLiC, where, similar to SPO, the alignment loss is the sum of two terms: a calibration loss that contrasts the winner and loser responses, encouraging the model \u03c0\u03b8 to assign more probability to the winner, and a regularizer term. SLiC was further developed in [26] for alignment to preference data, where the authors proposed the SLiC-HF algorithm. SLiC-HF uses a rectified contrastive loss as its calibration loss and a log-likelihood term as the regularization. Aside from different choices of preference loss and regularization, SLiC-HF diverges from the SPO framework in that the regularization in SLiC-HF is limited to the preference or pretraining datasets, rather than using online samples from \u03c0\u03b8 as in the DKL regularizer. In practice, the performance of an alignment technique highly depends on the quality of the human preference dataset. Noisy preference pairs can limit a language model's ability to capture human intention. In [27], DPO was used in conjunction with a preference dataset improved via a rejection sampling technique, arguing that DPO suffers from a mismatch between the sampling distribution and the policy corresponding to true expert preferences. In [11], the authors formed a dataset of conservative pairs by collecting AI feedback through an ensemble of chat model completions, followed by GPT-4 scoring; they then employed DPO for alignment to this improved dataset. The work in [9] leveraged semantic correlations of prompts in the dataset to form more conservative response pairs.
for a given prompt (x; yw, yl), a prompt x\u2032 with a similar semantic from a tuple (x\u2032; y\u2032 w, y\u2032 l) is used to form more conservative pairs. In particular, they propose a weighted version of the DPO loss where for a given labeled data (x; yw, yl), yw is approved while yl and any y\u2032 l (from a similar prompt x\u2032) are penalized.", + "additional_graph_info": { + "graph": [ + [ + "Arsalan Sharifnassab", + "Saber Salehkaleybar" + ], + [ + "Arsalan Sharifnassab", + "Sina Ghiassian" + ], + [ + "Saber Salehkaleybar", + "Kun Zhang" + ], + [ + "Saber Salehkaleybar", + "Jalal Etesami" + ], + [ + "Sina Ghiassian", + "Richard S. Sutton" + ], + [ + "Sina Ghiassian", + "Adam White" + ], + [ + "Sina Ghiassian", + "Banafsheh Rafiee" + ], + [ + "Sina Ghiassian", + "Andrew Patterson" + ], + [ + "Sina Ghiassian", + "Martha White" + ] + ], + "node_feat": { + "Arsalan Sharifnassab": [ + { + "url": "http://arxiv.org/abs/2402.02342v3", + "title": "MetaOptimize: A Framework for Optimizing Step Sizes and Other Meta-parameters", + "abstract": "This paper addresses the challenge of optimizing meta-parameters (i.e.,\nhyperparameters) in machine learning algorithms, a critical factor influencing\ntraining efficiency and model performance. Moving away from the computationally\nexpensive traditional meta-parameter search methods, we introduce MetaOptimize\nframework that dynamically adjusts meta-parameters, particularly step sizes\n(also known as learning rates), during training. More specifically,\nMetaOptimize can wrap around any first-order optimization algorithm, tuning\nstep sizes on the fly to minimize a specific form of regret that accounts for\nlong-term effect of step sizes on training, through a discounted sum of future\nlosses. 
We also introduce low complexity variants of MetaOptimize that, in\nconjunction with its adaptability to multiple optimization algorithms,\ndemonstrate performance competitive to those of best hand-crafted learning rate\nschedules across various machine learning applications.", + "authors": "Arsalan Sharifnassab, Saber Salehkaleybar, Richard Sutton", + "published": "2024-02-04", + "updated": "2024-05-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC" + ], + "main_content": "Introduction Optimization algorithms used in machine learning involve meta-parameters (i.e., hyperparameters) that substantially influence their performance. These meta-parameters are typically identified through a search process, such as grid search or other trial-and-error methods, prior to training. However, the computational cost of this meta-parameter search is significantly larger than that of training with optimal meta-parameters (Dahl et al., 2023; Jin, 2022). Meta-parameter optimization seeks to streamline this process by concurrently adjusting meta-parameters during training, moving away from the computationally expensive and often sub-optimal trial and error search methods. Meta-parameter optimization is particularly important in continual learning (De Lange et al., 2021), its primary domain, where dynamic environments or evolving loss functions necessitate meta-parameters, like step sizes, to adapt to optimal time-varying values rather than settling on a static value as in the stationary case. Nevertheless, this work concentrates on the stationary scenario, demonstrating the competitiveness of meta-parameter optimization even in this case. In this work, we propose MetaOptimize as a framework for optimizing meta-parameters to minimize a notion of regret, specifically accounting for the long-term influence of step sizes on future loss. 
The framework is applicable to a broad range of meta-parameters, however the primary focus of this paper is on step sizes as a critical meta-parameter that is universally present. MetaOptimize brings additional benefits beyond simplifying the search process. Firstly, it enables a dynamic step-size adjustment during training, potentially accelerating the learning process. Traditional methods typically require manual customization of learning rate schedules for each problem, often following an optimal pattern of initial increase and subsequent decay (Amid et al., 2022). As our experiments show, step sizes obtained from MetaOptimize follow similar patterns automatically. \u2217Correspondence to sharifna@ualberta.ca Preprint. Under review. arXiv:2402.02342v3 [cs.LG] 23 May 2024 \fSecondly, varying step sizes across different blocks of a neural network, such as layers or neurons, has been shown to improve performance (Singh et al., 2015; Howard & Ruder, 2018). Manually tuning or using grid search for block-wise step-sizes is impractical in networks with numerous blocks. MetaOptimize framework can automatically manage blockwise step-sizes. The concept of meta step-size optimization can be traced back to (Kesten, 1958), Delta-bar-Delta (Sutton, 1981; Jacobs, 1988), and its incremental variant, IDBD (Sutton, 1992). Over the years, numerous methods have been developed to address this challenge, detailed further in Section 8. This research distinguishes itself from prior work through the following key aspects: \u2022 We introduce a formalization of step-size optimization as minimizing a specific form of regret, essentially a discounted sum of future losses. We demonstrate how to handle this minimization in a causal manner, by introducing the MetaOptimize framework. 
\u2022 MetaOptimize framework is general in the sense that it can wrap around any first-order optimization algorithm, also called base update, (such as SGD, RMSProp (Hinton et al., 2012), Adam (Kingma & Ba, 2014), or Lion (Chen et al., 2023))), for which it optimizes step sizes via an algorithm of desire (such as SGD, Adam, RMSProp, or Lion), called the meta update. \u2022 We develop approximation methods (Section 6), that when integrated into MetaOptimize, lead to computationally efficient algorithms that outperform state-of-the-art automatic hyperparameter optimization methods on CIFAR10, ImageNet, and language modeling applications (refer to experiments in Section 7). \u2022 We show that some existing methods (like IDBD, its extension (Xu et al., 2018), and hypergradient descent (Baydin et al., 2017)) are specific instances or approximations within the MetaOptimize framework (see Section 5). 2 Problem Setting We introduce a general continual optimization setting that, for a given sequence of loss functions ft(\u00b7) : Rn \u2192R, t = 0, 1, 2, . . ., aims to find a sequence of weight vectors w1, w2, w3, . . . to minimize a discounted sum of future loss: F \u03b3 t def = (1 \u2212\u03b3) X \u03c4>t \u03b3\u03c4\u2212t\u22121f\u03c4(w\u03c4), (1) where \u03b3 \u2208[0, 1) is a fixed constant, often very close to 1, called the discount factor. As an important special case, the above setting includes stationary supervised learning if ft are sampled from a static distribution, for all t. In this case, minimizing F \u03b3 t results in rapid minimization of expected loss. Consider an arbitrary first order optimization algorithm (including but not limited to SGD, RMSProp, Adam, or Lion) for updating wt. 
At each time t, this algorithm takes the gradient \u2207ft(wt) of the immediate loss function, along with an m-dimensional vector \u03b2t of meta-parameters, and updates wt and possibly some internal variables (e.g., momentum in Adam or trace of gradient squares in RMSProp), based on a fixed update rule Algbase, referred to as the base-update, xt = Algbase(xt, \u2207ft(wt), \u03b2t), (2) where xt def = Stack(wt, \u02dc xt) is an \u02dc n-dimensional vector obtained by stacking wt and all internal variables of the algorithm that are being updated (e.g., momentum), denoted by \u02dc xt. The goal of the MetaOptimize framework is to find a sequence of meta-parameters \u03b2t, for t = 1, 2, . . ., such that when plugged into the base update, (2), results in relative minimization of F \u03b3 t defined in (1). Step-size optimization is a special case of the above framework where at each time t, the m dimensional vector \u03b2t is used to determine the n-dimensional (weight-wise) vector \u03b1t of step sizes (typically m \u226an), through a fixed function \u03c3 : Rm \u2192Rn, \u03b1t = \u03c3(\u03b2t). (3) A typical choice is to partition weights of the neural network into m blocks and use step-size exp(\u03b2) within each block for some entry \u03b2 of \u03b2. Depending on m, this can result in a single shared scalar step-size, or layer-wise, node-wise, or weight-wise step sizes. It is particularly beneficial to consider a function \u03c3 of the exponential form, mentioned above, because of two reasons (Sutton, 1992). First, it ensures that \u03b1t will always be positive. Second, a constant change in \u03b2t would lead to a multiplicative change in \u03b1t, making it suitable for adapting step sizes with different orders of magnitude. 2 \f3 Forward and Backward Views Since the definition of F \u03b3 t in (1) relies on information forward into the future, minimizing it in a causal way necessitates alternative views; discussed in this section. 
In order to motivate our approach, we start by considering a hypothetical meta-parameter optimization algorithm that has oracle access to future information (e.g., future loss), and updates \u03b2t along the gradient of F \u03b3 t with respect to \u03b2t; that is for t = 0, 1, 2, . . ., \u03b2t+1 = \u03b2t \u2212\u03b7 d d \u03b2t F \u03b3 t = \u03b2t \u2212\u03b7 (1 \u2212\u03b3) X \u03c4>t \u03b3\u03c4\u2212t\u22121 d d \u03b2t f\u03c4(w\u03c4), (4) for some fixed meta step-size, \u03b7 > 0. This forward-view update however requires that at time t, we have access to f\u03c4(\u00b7) and w\u03c4 for all \u03c4 > t, which are typically unavailable. To circumvent this problem, we adopt an idea similar to eligibility traces in reinforcement learning (Sutton, 1988; Sutton & Barto, 2018). More specifically, instead of the forward-view update, we introduce an update of the following type, which we call the backward-view update. At time \u03c4 = 0, 1, 2, . . ., we let \u03b2\u03c4+1 \u2190\u03b2\u03c4 \u2212\u03b7 (1 \u2212\u03b3) X t<\u03c4 \u03b3\u03c4\u2212t\u22121 d d \u03b2t f\u03c4(w\u03c4). (5) Note that every term \u03b3\u03c4\u2212t\u22121 d f\u03c4 (w\u03c4 ) d \u03b2t in the right hand side of (4) also appears in (5), but is applied at time \u03c4 instead of time t, which is the earliest time that all required information for computing this term is available. Consequently, the [ \u2207\u03b2F \u03c4 defined below serves as a causal proxy for d F \u03b3 \u03c4 /d \u03b2\u03c4; [ \u2207\u03b2F \u03c4 def = (1 \u2212\u03b3) \u03c4\u22121 X t=0 \u03b3\u03c4\u2212t\u22121 d d \u03b2t f\u03c4(w\u03c4). (6) It follows from chain rule that [ \u2207\u03b2F \u03c4 = HT \u03c4 \u2207f\u03c4(w\u03c4). (7) where H\u03c4 def = (1 \u2212\u03b3) \u03c4\u22121 X t=0 \u03b3\u03c4\u2212t\u22121 dw\u03c4 d \u03b2t . (8) The dw\u03c4/d \u03b2t in (8) denotes the Jacobian matrix of w\u03c4 with respect to \u03b2t. 
Therefore, H\u03c4 is an n \u00d7 m matrix such that H\u03c4 v, for any m \u00d7 1 vector v, equals the change in w\u03c4 if we increment all past \u03b2t along \u03b3\u03c4\u2212t v. 4 MetaOptimize The general formulation of MetaOptimize framework is given in Algorithm 1. The idea is to update \u03b2t via any first order optimization algorithm to minimize F \u03b3 t , while using the surrogate gradient [ \u2207\u03b2F t in place of \u2207\u03b2F \u03b3 t , to preserve causality of the updates. More specifically, for t = 1, 2, . . ., let yt+1 = Algmeta \u0000yt, [ \u2207\u03b2F t \u0001 = Algmeta \u0000yt, HT t \u2207ft(wt) \u0001 (9) be the meta update, where yt def = Stack(\u03b2t, \u02dc yt) is an \u02dc m-dimensional vector obtained from stacking \u03b2t and all other internal variables \u02dc yt of the Algmeta algorithm (e.g., momentum), and the second equality follows from (7). Examples of Algmeta include SGD, RMSprop, Adam, and Lion algorithms. Note that in all cases, we pass [ \u2207\u03b2F to the algorithm as the gradient. In each iteration, after performing the base update (2), we compute HT t \u2207ft(wt) and plug it into (9) to update y (and in particular \u03b2).In the rest of this section, we present incremental updates for Ht. Let ht be an nm-dimensional vector obtained by stacking the columns of the n \u00d7 m matrix Ht. It follows from the chain rule that for any times t and \u03c4 with t \u2265\u03c4, d yt+1 d \u03b2\u03c4 = d yt+1 d yt d yt d \u03b2\u03c4 + d yt+1 d xt d xt d \u03b2\u03c4 + d yt+1 d ht d ht d \u03b2\u03c4 , d xt+1 d \u03b2\u03c4 = d xt+1 d yt d yt d \u03b2\u03c4 + d xt+1 d xt d xt d \u03b2\u03c4 + d xt+1 d ht d ht d \u03b2\u03c4 , d ht+1 d \u03b2\u03c4 = d ht+1 d yt d yt d \u03b2\u03c4 + d ht+1 d xt d xt d \u03b2\u03c4 + d ht+1 d ht d ht d \u03b2\u03c4 . 3 \fAlgorithm 1 MetaOptimize Framework (for general meta-parameters) Given: Base-update Algbase, meta-update Algmeta, Parameters: Discount-factor \u03b3 \u22641. 
Initialize: X0 = 0(n+\u02dc n)\u00d7m, Y0 = \u0002 Im\u00d7m | 0m\u00d7 \u02dc m \u0003T , and Q0 = 0nm\u00d7m. for t = 0, 1, 2, . . . do xt+1 \u2190Algbase(xt, \u2207ft(wt), \u03b2t). Ht = sub-matrix of Xt, constituting its first n rows. yt+1 \u2190Algmeta \u0000yt, HT t \u2207ft(wt) \u0001 . Update Xt, Yt, and Qt from (15), using Gt in (10). end for Letting Gt def = \uf8ee \uf8ef \uf8ef \uf8f0 d yt+1 d yt d yt+1 d xt d yt+1 d ht d xt+1 d yt d xt+1 d xt d xt+1 d ht d ht+1 d yt d ht+1 d xt d ht+1 d ht \uf8f9 \uf8fa \uf8fa \uf8fb, (10) the above set of equations can be equivalently written as \uf8ee \uf8ef \uf8ef \uf8f0 d yt+1 d \u03b2\u03c4 d xt+1 d \u03b2\u03c4 d ht+1 d \u03b2\u03c4 \uf8f9 \uf8fa \uf8fa \uf8fb= Gt \uf8ee \uf8ef \uf8ef \uf8f0 d yt d \u03b2\u03c4 d xt d \u03b2\u03c4 d ht d \u03b2\u03c4 \uf8f9 \uf8fa \uf8fa \uf8fb. It follows that t X \u03c4=0 \u03b3t\u2212\u03c4 \uf8ee \uf8ef \uf8ef \uf8f0 d yt+1 d \u03b2\u03c4 d xt+1 d \u03b2\u03c4 d ht+1 d \u03b2\u03c4 \uf8f9 \uf8fa \uf8fa \uf8fb= Gt \uf8ee \uf8ef \uf8ef \uf8f0 d yt d \u03b2t d xt d \u03b2t d ht d \u03b2t \uf8f9 \uf8fa \uf8fa \uf8fb+ Gt t\u22121 X \u03c4=0 \u03b3t\u2212\u03c4 \uf8ee \uf8ef \uf8ef \uf8f0 d yt d \u03b2\u03c4 d xt d \u03b2\u03c4 d ht d \u03b2\u03c4 \uf8f9 \uf8fa \uf8fa \uf8fb. (11) Let Yt def = (1 \u2212\u03b3) t\u22121 X \u03c4=0 \u03b3t\u2212\u03c4\u22121 d yt d \u03b2\u03c4 (12) Xt def = (1 \u2212\u03b3) t\u22121 X \u03c4=0 \u03b3t\u2212\u03c4\u22121 d xt d \u03b2\u03c4 , (13) Qt def = (1 \u2212\u03b3) t\u22121 X \u03c4=0 \u03b3t\u2212\u03c4\u22121 d ht d \u03b2\u03c4 . (14) Note also that d xt/d \u03b2t = 0, d ht/d \u03b2t = 0, and d yt/d \u03b2t = d Stack(\u03b2t, \u02dc yt)/d \u03b2t = Stack(I, 0). Plugging these into (11), we obtain \" Yt+1 Xt+1 Qt+1 # = Gt \uf8eb \uf8ec \uf8ec \uf8ed\u03b3 \" Yt Xt Qt # + (1 \u2212\u03b3) \uf8ee \uf8ef \uf8ef \uf8f0 \u0014 I 0 \u0015 0 0 \uf8f9 \uf8fa \uf8fa \uf8fb \uf8f6 \uf8f7 \uf8f7 \uf8f8. 
(15) Matrices Xt, Yt, Qt can be computed iteratively using (15). The matrix Ht in (8) is then obtained from the sub-matrix constituting the first n rows of Xt, because xt = Stack(wt, \u02dc xt). To complete Algorithm 1, it only remains to compute the matrix Gt in (10). In Appendix A, we calculate Gt for common choices of base and meta updates: SGD, AdamW, and Lion. Notably, the first row of Gt blocks depends only on Algmeta, and the rest of Gt blocks depend only on Algbase. This simplifies the derivation and implementation for various base and meta algorithm combinations. 4 \fAlgorithm 2 MetaOptimize with 2\u00d72 approx., (Algbase, Algmeta)= (SGD, SGD), and scalar step-size Initialize: H0 = 0n\u00d71, Y0 = 1. for t = 1, 2, . . . do \u03b1t = e\u03b2t Base update: wt+1 = wt \u2212\u03b1t\u2207ft(wt) Ht+1 = \u03b3 \u0000I \u2212\u03b1t\u22072ft(wt) \u0001 Ht \u2212Yt\u03b1t\u2207ft(wt) Yt+1 = \u03b3Yt + (1 \u2212\u03b3) \u2212\u03b3\u03b7HT t \u22072ft(wt)Ht # For L-approximation let Yt+1 = 1 Meta update: \u03b2t+1 = \u03b2t \u2212\u03b7 HT t \u2207ft(wt) end for 5 Reducing Complexity The matrix Gt in (10) is typically large, reducing the algorithm\u2019s practicality. We discuss two approximations of Gt for more efficient algorithms. 2\u00d72 approximation: The vector ht, formed by stacking Ht\u2019s columns, has length mn, making Gt\u2019s last row and column of blocks very large. Moreover, as shown in Appendix A, the term dht+1/dxt typically involves third order derivatives of ft with respect to wt, which is not practically computable. In the 2\u00d72 approximation, we resolve the above problems by completely zeroing out all blocks in the last row and also in the last column of blocks of Gt in (10). Consequently, we can also remove Qt from the algorithm. This appears to have minimal impact on the performance, as we empirically observed in simple settings. Intuitively, the block d, xt+1/d, ht in Gt is zero, as Ht doesn\u2019t affect the base update (2). 
Thus, Q affects X only indirectly, via Y. L-approximation: Herein, we take a step further, and in addition to the last row and the last column of blocks of Gt, we also zero out the block in the first row and second column of Gt. In other words, we let GL_t = [ d y_{t+1}/d y_t , 0 ; d x_{t+1}/d y_t , d x_{t+1}/d x_t ], (16) and simplify (15) as Stack(Y_{t+1}, X_{t+1}) = GL_t ( \u03b3 Stack(Y_t, X_t) + (1 \u2212 \u03b3) Stack(I, 0) ). (17) We have empirically observed that the resulting algorithm typically performs as well as the 2\u00d72 approximation, and even results in improved stability in some cases. Intuition of MetaOptimize updates: Algorithm 2 provides a 2\u00d72 approximation of MetaOptimize for the case where both base and meta updates use SGD, under a scalar step-size (detailed derivation in Appendix A). It shows that Ht traces past gradients, decaying at rate \u03b3(I \u2212[\u03b1] \u22072ft). This decay ensures that if past gradients poorly approximate future ones due to a large \u22072ft or \u03b1, their influence fades more rapidly. If the current gradient aligns positively with past gradients (i.e., \u2212H^T_t \u2207ft > 0), the algorithm increases the step-size \u03b1 for quicker adaptation; if negatively correlated, it reduces the step size to prevent issues like zigzagging. Yt in (12) reflects the impact of changes in past \u03b2 on the current value of \u03b2, amplifying the increment in the Ht+1 update if \u03b2 has been consistently rising or falling over the recent past. It is also worth noting that in Algorithm 2, under the L-approximation, Yt remains constant, equal to I. A similar phenomenon occurs when Adam, RMSProp, or Lion algorithms are used instead of SGD. Containing some existing algorithms as special cases: Special cases of the above L-approximation method include the IDBD algorithm (Sutton, 1992) and its extension (Xu et al., 2018), obtained when both Algbase and Algmeta are limited to the SGD algorithm.
Refer to Appendix B.1 for more details and proofs. MetaOptimize also contains the hypergradient-descent algorithm (Baydin et al., 2017) as a special case, when using SGD for both base and meta updates of MetaOptimze with \u03b3 = 0. Hypergradientdescent updates step size towards minimizing the immediate loss ft rather than discounted sum of future losses, F \u03b3 t , ignoring long-term effects of step size on future loss. See Appendix B.2 for details. 5 \f6 Hessian-Free MetaOptimize The step-size optimization algorithms discussed so far typically involve Hessian, \u22072ft(wt), of the loss function. In particular, the Hessian matrix typically appears in the middle column of blocks in the Gt matrix; e.g., in the dwt+1/dwt block where wt+1 = wt \u2212\u03b1t\u2207ft(wt). Consequently, the update in (15) involves a Hessian-matrix-product of the form \u22072ft(wt)Ht, which increases per-step computational complexity of the algorithm. The added computational overhead would be still manageable if m is small. In particular for m = 1 (i.e., the case that a scalar step-size is used for update of all weights), Ht would be a vector; and one can leverage efficient Hessian-vector-product computation techniques that have the same complexity as gradient computation (Pearlmutter, 1994). Interestingly, for certain base and meta algorithms, we can eliminate the Hessian without much compromising the performance. An example of such (base or meta) algorithms is the Lion algorithm (Chen et al., 2023). The Lion algorithm, when used as the base algorithm, updates wt as mt+1 = \u03c1 mt + (1 \u2212\u03c1) \u2207ft(wt), wt+1 = wt \u2212\u03b1t Sign \u0000c mt + (1 \u2212c)\u2207ft \u0001 \u2212\u03ba\u03b1twt, where \u03c1, c \u2208[0, 1), \u03ba is a nonnegative weight-decay parameter, and Sign(\u00b7) is the entry-wise sign function. 
In the special cases of c = 0 or \u03c1 = 0, mt can be eliminated and the above update simplifies to wt+1 = wt \u2212\u03b1t Sign \u0000\u2207ft \u0001 \u2212\u03ba\u03b1twt. In this case, it is easy to see that the derivatives of xt in (10) are Hessian-free. The above argument can be extended to arbitrary values of c and \u03c1. In Appendix A.1.3 (respectively Appendix A.3.2), we show that if Algmeta (Algbase) is the Lion algorithm, then the first row (second and third rows) of blocks in G would be Hessian-free. In summary, Algorithm 1 turns Hessian-free, if Lion is used in both base and meta updates. This elimination of Hessian results from flatness of the Sign function when ignoring the discontinuity at 0. For other algorithms, we may consider their Hessian-free approximation by zeroing out any Hessian term in Gt. The Hessian-free approximation turns out to be a good approximation, especially for base and meta algorithms that involve gradient normalization, like RMSProp and Adam. Note that, the sign function used in the Lion algorithm is an extreme form of normalization that divides a vector by its absolute value. We could instead use softer forms of normalization, such as normalizing to square root of a trace of squared vector, vt, as in RMSProp. Such normalizations typically result in two opposing Hessian-based terms in Ht\u2019s update (stemming from d wt+1 d wt and d wt+1 d vt blocks of matrix Gt), aiming to cancel out, particularly when consecutive gradients are positively correlated. The main advantage of Hessian-free methods lies in their computational congeniality. For base and meta updates including SGD, RMSProp, AdamW, and Lion, the Hessian-free 2\u00d72 approximation has low computational complexity, requiring only a few vector-products per iteration beyond the computations required for the base and meta updates. 
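As a point of reference, the Lion base update written above (a sign step on an interpolation of momentum and the current gradient, plus decoupled weight decay) can be sketched as follows. The helper name and default coefficients are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lion_base_update(w, m, grad, alpha, rho=0.99, c=0.9, kappa=0.0):
    """One step of a Lion-style base update (sketch).

    The weight step uses only the sign of c*m + (1-c)*grad, so its
    derivative with respect to w and m is flat almost everywhere --
    the source of the Hessian-free property discussed above.
    """
    w_next = w - alpha * np.sign(c * m + (1.0 - c) * grad) - kappa * alpha * w
    m_next = rho * m + (1.0 - rho) * grad  # momentum trace of gradients
    return w_next, m_next

# With zero momentum and no weight decay, each coordinate moves by
# exactly alpha against the sign of its gradient:
w_next, _ = lion_base_update(np.zeros(3), np.zeros(3),
                             np.array([0.5, -2.0, 3.0]), alpha=0.01)
assert np.allclose(w_next, [-0.01, 0.01, -0.01])
```

Because the step magnitude is fixed at alpha per coordinate, only the step-size parameter (here alpha, i.e., \u03c3(\u03b2)) controls how far the weights move, which is what the meta update adapts.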
When Hessian terms in 2 \u00d7 2 approximation of Gt are zeroed out; the blocks in Gt, and therefore the blocks in Xt and Yt, become diagonal. Thus, Xt and Yt matrices can be simplified to vector forms, eliminating costly matrix multiplications. The same holds for general blockwise step-sizes (e.g., layer-wise and weight-wise step-sizes), leading to computational overheads on par with the scalar case. We note also that for the meta updates mentioned above if we use no weight-decay in the meta update, Hessian-free 2\u00d72 approximation becomes equivalent to Hessian-free L-approximation. Algorithm 3 presents Hessian-free approximations for some selected base and meta updates: SGD with momentum (SGDm), AdamW, and Lion. 7 Experiments In this section, we evaluate the MetaOptimize framework on image classification and language modeling benchmarks. Out of several possible combinations of base and meta algorithms and approximations, we report a few Hessian-free combinations from Algorithm 3 that showed better performance. In all experiments, we set the initial step-sizes of MetaOptimize to one or two orders of magnitudes smaller than the range of good fixed step-sizes, with no specific tuning. We compare MetaOptimize against some popular baselines whose meta-parameters are well-tuned for each task separately. Refer to Appendix C for further experiment details. Codes are available at (Anonymous, 2024). 6 \fAlgorithm 3 Hessian-free MetaOptimize algorithms with 2\u00d72 approximation used in experiments Parameters: \u03b7 > 0 (default 10\u22123), \u03b3 \u2208[0, 1] (default 1) Initialize: h0 = 0n\u00d71. for t = 1, 2, . . . 
do Base update \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u03b1t = \u03c3(\u03b2t) # exponential scalar/blockwise mt+1 = \u03c1mt + (1 \u2212\u03c1)\u2207ft(wt) if Algbase is SGDm then \u2206w = \u2212\u03b1tmt \u2212\u03ba\u03b1twt if Algbase is Lion then \u2206w = \u2212\u03b1t Sign \u0000c mt + (1 \u2212c)\u2207ft \u0001 \u2212\u03ba\u03b1twt if Algbase is AdamW then vt+1 = \u03bb vt + (1 \u2212\u03bb)\u2207ft(wt)2 \u00b5t = \u221a 1 \u2212\u03bbt/(1 \u2212\u03c1t), \u2206w = \u2212\u03b1t\u00b5tmt/\u221avt \u2212\u03ba\u03b1twt wt+1 = wt + \u2206w ht+1 = \u03b3(1 \u2212\u03ba\u03b1t)ht + \u2206w Meta update \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 z = ht \u2207ft(wt) \u00af mt+1 = \u00af \u03c1 \u00af mt + (1 \u2212\u00af \u03c1) z if Algmeta is Lion then \u03b2t+1 = \u03b2t \u2212\u03b7 Sign \u0000\u00af c \u00af mt + (1 \u2212\u00af c)z \u0001 if Algmeta is Adam then \u00af vt+1 = \u00af \u03bb \u00af vt + (1 \u2212\u00af \u03bb) z2 \u00af \u00b5t = p 1 \u2212\u00af \u03bbt/(1 \u2212\u00af \u03c1t) \u03b2t+1 = \u03b2t \u2212\u03b7 \u00af \u00b5t \u00af mt/ p \u00af vt end for 7.1 CIFAR10 dataset The first set of experiments involve training ResNet-18 with batch size of 100 on the CIFAR10 (Krizhevsky et al., 2009) dataset. Fig. 1 depicts the learning curves of four combinations of (base, meta) algorithms for Hessian-free MetaOptimize, along with the corresponding baselines with well-tuned fixed step sizes. For MetaOptimize, in addition to scalar step-sizes, we also considered block-wise step-sizes by partitioning layers of the ResNet18 network into six blocks (first and last linear blocks and 4 ResNet blocks). Fig. 1 demonstrates that each tested base-meta combination of MetaOptimize, whether scalar or blockwise, surpasses the performance of the corresponding fixed step-size baseline. 
Figure 1: Learning curves for selected (base, meta) combinations in CIFAR10: (Lion, Lion), (AdamW, Adam), (RMSProp, Adam), and (SGDm, Adam). Figure 2: Robustness to initial step-sizes, for (Lion, Lion) as the (base, meta) update in CIFAR10. Figure 3: Evolution of blockwise step-sizes during training, for (SGDm, Adam) as the (base, meta) update in CIFAR10. Interestingly, as demonstrated in Fig. 2, the MetaOptimize algorithms show remarkable robustness to initial step-size choices, even for initial step sizes that are several orders of magnitude smaller than the optimal fixed step-size. Fig. 3 depicts the blockwise step-sizes for (SGDm, Adam) across different blocks, showing an increasing trend from the first to the last block (output layer), which is generally a desirable pattern. In contrast, in the blockwise versions of the (AdamW, Adam), (Lion, Lion), and (RMSProp, Adam) updates, we empirically observed that the first five blocks exhibit similar trends and values, while the last block follows a distinct trend, growing larger and rising at a later time. 7.2 ImageNet dataset We trained ResNet-18 with batch size 256 on ImageNet (Deng et al., 2009). We compared MetaOptimize with a scalar step-size against four state-of-the-art hyperparameter optimization algorithms, namely DoG (Ivgi et al., 2023), gdtuo (Chandra et al., 2022), Prodigy (Mishchenko & Defazio, 2023), and mechanic (Cutkosky et al., 2024), as well as AdamW and Lion baselines with fixed step-sizes, and AdamW with a well-tuned cosine decay learning rate scheduler with 10k warmup iterations. Learning curves and complexity overheads are shown in Fig. 4 and Table 1, respectively, showcasing the advantage of MetaOptimize algorithms (the learning curve of DoG is not depicted due to its relatively poor performance). Unlike CIFAR10, here the blockwise versions of MetaOptimize showed no improvement over the scalar versions. Refer to Appendix D for further details. Figure 4: ImageNet learning curves.
Figure 5: TinyStories learning curves. Table 1: Per-iteration wall-clock-time and GPU-space overhead (compared to AdamW).
                                      ImageNet       TinyStories
                                      Time   Space   Time    Space
AdamW (fixed step-size)               0%     0%      0%      0%
DoG (Ivgi et al., 2023)               +45%   1.4%    +268%   0%
gdtuo (Chandra et al., 2022)          +85%   64%     +150%   21%
mechanic (Cutkosky et al., 2024)      +42%   88%     +9%     0%
Prodigy (Mishchenko & Defazio, 2023)  +42%   13%     +9%     0%
MetaOptimize (AdamW, Lion)            +44%   33%     +13%    0%
7.3 Language modeling For language model experiments, we used the TinyStories dataset (Eldan & Li, 2023), a synthetic collection of brief stories designed for children aged 3 to 4. This dataset proves effective for training and evaluating language models that are significantly smaller than the current state-of-the-art, yet capable of crafting stories that are not only fluent and coherent but also diverse. We used the implementation in (Karpathy, 2024) for training a 15M parameter model with a batch size of 128 on the TinyStories dataset. Two combinations of Hessian-free MetaOptimize with scalar step sizes were tested against Lion and AdamW with well-tuned fixed step sizes, AdamW with a well-tuned cosine decay learning rate scheduler with 1k warmup iterations, and the four state-of-the-art step-size adaptation algorithms mentioned in the previous subsection. According to the learning curves, shown in Fig. 5, within 30k iterations MetaOptimize outperforms all baselines (with an initial delay due to small initial step-sizes) except for the well-tuned learning rate scheduler. 8 Related Works Automatic adaptation of step sizes has been an important research topic in the literature of stochastic optimization.
Several works aimed to remove the manual tuning of learning rates via adaptations of classical line search (Rolinek & Martius, 2018; Vaswani et al., 2019; Paquette & Scheinberg, 2020; Kunstner et al., 2023) and the Polyak step size (Berrada et al., 2020; Loizou et al., 2021), stochastic proximal methods (Asi & Duchi, 2019), stochastic quadratic approximation (Schaul et al., 2013), hyper-gradient descent (Baydin et al., 2017), nested hyper-gradient descent (Chandra et al., 2022), distance-to-a-solution adaptation (Ivgi et al., 2023; Defazio & Mishchenko, 2023; Mishchenko & Defazio, 2023), and online convex learning (Cutkosky et al., 2024). A limitation of most of these methods is their potential underperformance when their meta-parameters are not optimally configured for specific problems (Ivgi et al., 2023). Moreover, the primary focus of most of these methods is on minimizing immediate loss rather than considering the long-term effects of step sizes on future loss. Normalization techniques proposed over the past few years, such as AdaGrad (Duchi et al., 2011), RMSProp, and Adam, have significantly enhanced the training process. While these algorithms show promise in stationary problems, these normalization techniques do not optimize effective step sizes and are prone to sub-optimal performance, especially in continual learning settings (Degris et al., 2024). An early practical step-size optimization method was the Incremental-Delta-Bar-Delta (IDBD) algorithm, proposed in (Sutton, 1992), aiming to optimize the step-size vector to minimize a special form of quadratic loss functions in the continual setting. This algorithm was then extended in (Xu et al., 2018) for use in neural networks, and in (Mahmood et al., 2012) (respectively (Javed, 2020)) for some specific meta update (respectively base update) other than SGD.
However, the development of IDBD and its extensions included some implicit assumptions, notably overlooking the impact of step-size dynamics on the formulation of step-size update rules. Our paper extends the IDBD research, significantly broadening the framework and establishing a solid basis for the derivations. IDBD and its extensions have been used in various machine learning tasks, including independent component analysis (Schraudolph & Giannakopoulos, 1999), human motion tracking (Kehl & Van Gool, 2006), classification (Koop, 2007; Andrychowicz et al., 2016), and reinforcement learning (Xu et al., 2018; Young et al., 2018). Refer to (Sutton, 2022) for a comprehensive history of step-size optimization. There is also a line of research on so-called parameter-free optimization that aims to remove the need for step-size tuning with almost no knowledge of the problem properties. Most of these methods are primarily designed for stochastic convex optimization (Luo & Schapire, 2015; Orabona & P\u00e1l, 2016), while more recent ones (Orabona & Tommasi, 2017; Ivgi et al., 2023) were applied to supervised learning tasks with small or moderate sample sizes. 9 Limitations and Future Works Our work represents a step toward unlocking the potential of meta-parameter optimization, with substantial room for further exploration, some of which we outline here: Hessian: We confined our experiments to Hessian-free methods for practicality, though Hessian-based algorithms could offer superior performance. These methods, however, face challenges requiring additional research. The Hessian matrix is notably noisy, impacting Ht+1 multiplicatively, necessitating smoothing and clipping techniques. Additionally, the Hessian approximates the loss landscape\u2019s curvature but fails to account for non-differentiable curvatures, such as those from ReLU unit breakpoints, which become significant at the end of training.
From a computational perspective, developing low-complexity methods for approximate Hessian matrix products, especially for adjusting step-sizes at the layer and weight levels, is essential. Blockwise step-sizes: While step sizes can vary in granularity, our experiments focused on scalar and blockwise step-sizes. While increasing the number of step sizes is anticipated to enhance performance, our experimental findings in Section 7 reveal that this improvement is not consistent across the MetaOptimize approximations evaluated. Further investigation is needed in future research. Other approximations: We explored a limited set of MetaOptimize\u2019s possible approximations, leaving a comprehensive analysis of various approximations for future research. Other meta-parameters: Our study was limited to differentiable meta-parameters, not covering discrete ones like batch size or network layer count. We also did not investigate several significant differentiable meta-parameters beyond step-sizes, deferring such exploration to future work. Continual learning: Although continual step-size optimization is primarily aimed at continual learning, this study focused on the stationary case, demonstrating MetaOptimize\u2019s competitiveness in a context that is particularly challenging for it. Investigating the framework within continual learning presents a promising direction for future research." }, { "url": "http://arxiv.org/abs/2301.13757v3", "title": "Toward Efficient Gradient-Based Value Estimation", "abstract": "Gradient-based methods for value estimation in reinforcement learning have\nfavorable stability properties, but they are typically much slower than\nTemporal Difference (TD) learning methods. We study the root causes of this\nslowness and show that Mean Square Bellman Error (MSBE) is an ill-conditioned\nloss function in the sense that its Hessian has large condition-number.
To\nresolve the adverse effect of poor conditioning of MSBE on gradient based\nmethods, we propose a low complexity batch-free proximal method that\napproximately follows the Gauss-Newton direction and is asymptotically robust\nto parameterization. Our main algorithm, called RANS, is efficient in the sense\nthat it is significantly faster than the residual gradient methods while having\nalmost the same computational complexity, and is competitive with TD on the\nclassic problems that we tested.", "authors": "Arsalan Sharifnassab, Richard Sutton", "published": "2023-01-31", "updated": "2023-07-23", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "main_content": "Introduction Value estimation is a core problem in reinforcement learning (Sutton & Barto, 2018), and is a key ingredient in several policy optimization methods, e.g., (Bhatnagar et al., 2009; Minh et al., 2015; Lillicrap et al., 2015). The popular class of value estimation algorithms based on temporal difference learning via forward bootstrapping, including TD(\u03bb) (Sutton, 1988), Expected Sarsa (van Seijen et al., 2009), and Q-learning (Watkins & Dayan, 1992), have found substantial empirical success when combined with proper policy optimization (Minh et al., 2015; Minh et al., 2016; Lillicrap et al., 2015). Nevertheless, these algorithms are not gradient-based optimization methods (Barnard, 1993) and their convergence cannot be guaranteed for the general function approximation setting (Baird, 1995; Tsitsiklis & Van Roy, 1997; Brandfonbrener & Bruna, 2019). The stability problem of TD learning has inspired other classes of value estimation algorithms that involve optimizing a loss function through gradient updates. 1Authors are with the Department of Computing Science, University of Alberta, Canada. Correspondence to: Arsalan Sharifnassab.
This includes the Residual Gradient (RG) algorithm for minimizing Mean Squared Bellman Error (MSBE) (Baird, 1995), Gradient-TD algorithms for minimizing projected Bellman error (Sutton et al., 2009; Maei et al., 2010; Maei, 2011; Hackman, 2012), and their extensions for optimizing a dual formulation of BE2 (Liu et al., 2015; Macua et al., 2014; Dai et al., 2017). These algorithms enjoy the general robustness and convergence properties of Stochastic Gradient Descent (SGD), but are known to be slower than TD in tabular and linear function approximation settings (Baird, 1995; Schoknecht & Merke, 2003; Gordon, 1999; Ghiassian & Sutton, 2021). In this paper, we investigate the root causes of the slowness problem of gradient-based value estimation by taking a deeper look into the landscape of MSBE, and propose linear complexity methods to alleviate these problems. We provide theoretical results showing that MSBE is an ill-conditioned loss function in the sense that the condition-number of its Hessian matrix is typically very large. This explains the slowness of gradient-based value estimation methods, because gradient descent in general is slow in minimizing ill-conditioned loss functions. In contrast, algorithms like Newton and Gauss-Newton methods are invariant to conditioning of the loss. Unfortunately, a direct implementation of these methods requires matrix inversion, which is computationally costly even if computed incrementally. We propose a linear complexity incremental algorithm, called Residual Approximate Gauss-Newton (RAN), that incorporates a trace to approximate the Gauss-Newton direction and then updates the weights along that trace. We show that RAN can be equivalently formulated as a batch-free proximal algorithm. A weakness of RAN is that it requires double sampling (Baird, 1995), which limits its use in stochastic environments. We propose a double-sampling-free extension of RAN by following similar ideas that underlie GTD-type methods.
The resulting algorithms significantly outperform RG and GTD2, being orders of magnitude faster on the simple classic environments that we tested, while having almost the same computational complexity as RG and GTD2. We then turn our focus to a second cause of slowness of gradient-based value estimation: under function approximation, sample gradients of MSBE involve large outliers that carry important information, resulting in large variance of stochastic updates. Outliers of this type often appear in every episode (usually at pre-terminal transitions), and are specific to gradient-based value estimation methods (i.e., such outliers do not appear in TD learning). We propose a general technique called outlier-splitting, which results in no information loss as opposed to the standard clipping methods. Our main value estimation algorithm, called RAN with outlier-Splitting (RANS), has linear computational complexity and has only one effective hyper-parameter (and some other hyper-parameters that can be set to their default values), thanks to its adaptive step-size mechanism. Our empirical results on a few classic control environments with neural network function approximation show significant improvement over RG and competitive performance with TD. 2. Background We consider a discounted Markov Decision Process (MDP) defined by the tuple (S, A, R, p, \u03b3), where S is a finite set of states, A is a finite set of actions, R is a set of rewards, p : S \u00d7 A \u00d7 S \u00d7 R \u2192 [0, 1] is the environment dynamics determining the probability of the next state and immediate reward given a current state and action pair, and \u03b3 \u2208 [0, 1] is a discount factor. We fix a stationary policy \u03c0 : S \u00d7 A \u2192 [0, 1], and let p\u03c0(s\u2032, a\u2032, r|s, a) = p(St+1 = s\u2032, Rt+1 = r|St = s, At = a)\u03c0(At+1 = a\u2032|St+1 = s\u2032).
We consider an episodic and online setting where a data stream (S1, A1, R1), (S2, A2, R2), . . . is generated according to the policy \u03c0. The action-value function q\u03c0 : S \u00d7A \u2192 R, at each state s and action a, is the expected discounted sum of rewards obtained by starting from state s and action a and following policy \u03c0. We then define the value function v\u03c0 : S \u2192R as v\u03c0(s) = Ea\u223c\u03c0(\u00b7|s)[q\u03c0(s, a)]. In value estimation, we aim to obtain an estimate of the true action-values q\u03c0, usually through a function qw : S \u00d7 A \u2192R parameterized by a d-dimensional weight vector w. Corresponding to qw is a Bellman residual at each state and action pair (s, a), defined as \u03b4w(s, a) def = Es\u2032,a\u2032,r\u223cp\u03c0(\u00b7,\u00b7,\u00b7|s,a) \u0002 r+\u03b3qw(s\u2032, a\u2032)\u2212qw(s, a) \u0003 . According to Bellman equations (Sutton & Barto, 2018; Bertsekas & Tsitsiklis, 1996), qw = q\u03c0 if and only if \u03b4w(s, a) = 0 for all (s, a) \u2208S \u00d7 A. In this view, MSBE(w), defined below, serves as a proxy for the quality of estimates w: MSBED(w) def = E(s,a)\u223cD \u0002 \u03b4w(s, a)2\u0003 , (1) where D is some distribution over states and action pairs. When the distribution is online, we drop the subscript D and write MSBE(\u00b7). For simplicity of notation, we also write Es,a[\u00b7] to denote the expectation with respect to state and action pairs sampled from the online distribution. In the same vein, we consider a parameterized estimate vw : S \u2192 R of value function v\u03c0, and let MSBEV D(w) def = Es\u223cD \u0002 \u03b4w(s)2\u0003 , (2) where \u03b4w(s) = Ea\u223c\u03c0(\u00b7|s)[\u03b4w(s, a)], and D is some distribution over states. Gradient-based value estimation methods use gradientbased optimization algorithms to minimize MSBE or other related objectives such as MSPBE (Sutton et al., 2009). The first and simplest method in this category is the RG algorithm (Baird, 1995). 
In this algorithm, to obtain an unbiased sample estimate of \u2207w(\u03b4w(St, At)2), we require independent samples (St+1, At+1, Rt) and (S\u2032 t+1, A\u2032 t+1, R\u2032 t) from p\u03c0(\u00b7, \u00b7, \u00b7|St, At). For simplicity of notation, at time t, we let \u03b4t def = Rt + \u03b3qw(St+1, At+1) \u2212qw(St, At), (3) \u03b4\u2032 t def = R\u2032 t + \u03b3qw(S\u2032 t+1, A\u2032 t+1) \u2212qw(St, At), (4) where wt = w. The RG update is then w \u2190w \u2212\u03b1\u03b4\u2032 t\u2207w\u03b4t. (5) The requirement for two independent sample transitions at time t is called double sampling (Sutton & Barto, 2018). In stochastic environments, double sampling is possible only if we have a correct model of the world. In other words, MSBE minimizer is not learnable if an exact model of the underlying stochastic environment is not available, which is the case in real-world applications (Sutton & Barto, 2018). A general technique to circumvent double sampling is using Fenchel duality to obtain an equivalent saddle point formulation of MSBE (Dai et al., 2017, Du et al., 2017) as min w max \u02c6 \u03b4(\u00b7,\u00b7) Es,a \u0014 \u03b4w(s, a) \u02c6 \u03b4(s, a) \u22121 2 \u02c6 \u03b4(s, a)2 \u0015 , (6) where \u02c6 \u03b4(s, a) is an auxiliary variable that serves as a proxy of \u03b4w(s, a). In practice, one can consider a parametric approximation \u02c6 \u03b4\u03b8(\u00b7, \u00b7) of \u02c6 \u03b4(\u00b7, \u00b7), and perform gradient updates on the resulting minimax problem minw max\u03b8 Es,a h \u03b4w(s, a) \u02c6 \u03b4\u03b8(s, a) \u22121 2 \u02c6 \u03b4\u03b8(s, a)2i : w \u2190w \u2212\u03b1\u02c6 \u03b4\u03b8(St, At)\u2207w\u03b4t, \u03b8 \u2190\u03b8 + \u03b7 \u0000\u03b4t \u2212\u02c6 \u03b4\u03b8(St, At) \u0001 \u2207\u03b8\u02c6 \u03b4\u03b8(St, At), (7) (Sutton et al., 2009; Liu et al., 2020; Dai et al., 2017). 
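As a concrete illustration of the RG update (5), here is a minimal tabular sketch on a deterministic 5-state chain (a toy construction of ours, not an environment from the text); because the environment is deterministic, the two independent transition samples required by (5) coincide, so no double sampling is needed.

```python
import numpy as np

# RG update (5) with tabular values on a deterministic chain:
# state s -> s+1 with reward 0; the last state terminates with reward 1.
n, gamma, alpha = 5, 0.9, 0.1
v = np.zeros(n)
for _ in range(3000):                    # sweeps over the states
    for s in range(n):
        r = 1.0 if s == n - 1 else 0.0
        v_next = 0.0 if s == n - 1 else v[s + 1]
        delta = r + gamma * v_next - v[s]      # Bellman residual
        grad = np.zeros(n)                     # gradient of delta w.r.t. v
        grad[s] = -1.0
        if s < n - 1:
            grad[s + 1] = gamma
        v -= alpha * delta * grad              # w <- w - a * delta' * grad(delta)
print(np.round(v, 3))                    # approaches gamma**(n-1-s)
```

The tabular minimizer of MSBE here is the true value function gamma**(n-1-s); the many sweeps needed even on this tiny chain foreshadow the conditioning issue analyzed in the next section.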
Intuitively, this is similar to the RG algorithm in (5) except for using the parametric approximation \u02c6\u03b4\u03b8(St, At) instead of \u03b4\u2032t, and updating \u02c6\u03b4\u03b8(s, a) by SGD on Es,a[(\u02c6\u03b4\u03b8(s, a) \u2212 \u03b4w(s, a))2]. The GTD2 algorithm (Sutton et al., 2009) is a special case of (7) in which qw and \u02c6\u03b4\u03b8 are linear approximations of the form qw(s, a) = \u03d5T s,a w and \u02c6\u03b4\u03b8(s, a) = \u03d5T s,a \u03b8, for feature vectors \u03d5s,a (Liu et al., 2020; Dai et al., 2017). 3. MSBE loss is ill-conditioned The condition-number of a symmetric square matrix, H, is defined as the ratio of its largest to smallest singular values, maxx:\u2225x\u2225=1 |xT Hx| / miny:\u2225y\u2225=1 |yT Hy|. For a quadratic function f(x) = xT Hx, we define the condition-number, C(f), of f as the condition-number of its Hessian matrix H. Intuitively, level sets (or contours) of a convex quadratic function have an elliptical shape, and the condition-number C(f) equals the squared ratio between the largest and the smallest diameters of each of these ellipsoids (see Fig. 1). We say that f is ill-conditioned if C(f) is very large. Then, the level sets of an ill-conditioned quadratic function have the shape of ellipsoids that are thin. It is known that the convergence rate of gradient descent on a quadratic loss f scales with C(f) (Polyak, 1964), which can be very slow for ill-conditioned loss functions. In this section, we consider linear function approximation. In this case, MSBEV D(\u00b7) defined in (2) is a convex quadratic function. We denote the condition-number of MSBEV D(\u00b7) under uniform distribution D by C. We let l be the average episode length, defined as the expected time until termination when starting from a state, uniformly averaged over all states.
We also let h def= Es\u223cunif[P(St+1 = s|St = s)] be the average self-loop probability. Note that h is typically much smaller than 1. Theorem 3.1. In the tabular case, the following statements hold for any discount factor \u03b3 \u2208 [0, 1]: a) For any MDP and under any policy, we have C \u2265 ((1 \u2212 \u03b3h)2/4) min(1/(1 \u2212 \u03b3)2, l2), (8) where l is the average episode length and h is the average self-loop probability. b) For any n \u2265 1, there exists an n-state MDP and a policy for which C \u2265 \u03b34n2/(1 \u2212 \u03b3)2. The proof is given in Appendix A.1. A similar result also holds for the condition-number of MSBE defined in (1) (see Proposition A.1 in Appendix A.2). Theorem 3.1 shows that MSBE is typically ill-conditioned in the tabular case. This explains the slow convergence of gradient-based methods for minimizing MSBE. As an example, the bound in (8) implies that for \u03b3 = .99 and for any MDP and policy pair with average episode length at least 100 and average self-loop probability no larger than 0.1, we have C > 2000. Moreover, Theorem 3.1 (b) implies that for \u03b3 = .99, there is a 100-state Markov chain for which C > 96,000,000. Lower bounds similar to Theorem 3.1 are not possible for non-tabular linear function approximation. This is because different feature representations can improve or worsen the condition-number. Figure 1. Level sets of MSBE (gray curves) in a 2-state loop environment with p(s0 \u2192 s1) = p(s1 \u2192 s0) = 1 for \u03b3 = 0.8. Here, the condition number of MSBE is 81, and is equal to the squared ratio between the diameters (red) of each ellipsoid. The solution trajectory of RG (blue) for \u03b1 = .9 and the Gauss-Newton direction (green) are also depicted. In this environment, C = O(1/(1\u2212\u03b3)2) (see Theorem 3.1), which rapidly grows for larger \u03b3.
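Both points can be checked numerically on a toy chain of our own (not one of the MDPs from the theorem's proof): the tabular Hessian of MSBE^V, proportional to (I \u2212 \u03b3P)T(I \u2212 \u03b3P), becomes badly conditioned as \u03b3 grows, while a feature matrix of \u03a6 = (I \u2212 \u03b3P)\u22121 makes the condition-number exactly 1.

```python
import numpy as np

def msbe_cond(n, gamma, phi=None):
    """Condition-number of MSBE^V under uniform D for an n-state chain
    (s -> s+1, last state terminal), with optional feature matrix phi."""
    P = np.diag(np.ones(n - 1), 1)            # transition matrix
    A = np.eye(n) - gamma * P
    M = A if phi is None else A @ phi         # MSBE^V = |M w|^2 / n
    sv = np.linalg.svd(M, compute_uv=False)
    return (sv[0] / sv[-1]) ** 2              # cond. number of Hessian M^T M

for g in (0.9, 0.99):                         # tabular case: blows up with gamma
    print(g, round(msbe_cond(100, g)))
n, gamma = 100, 0.99                          # phi = (I - gamma*P)^{-1} gives C = 1
P = np.diag(np.ones(n - 1), 1)
phi = np.linalg.inv(np.eye(n) - gamma * P)
print(round(msbe_cond(n, gamma, phi), 6))
```

This is only a sketch of the phenomenon; the constants in (8) concern general MDPs, not just this chain.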
To see why, note that in the linear function approximation case and under uniform state distribution, MSBEV unif(w) = wT \u03a6T (I \u2212 \u03b3P)T (I \u2212 \u03b3P)\u03a6w, where \u03a6 is an n \u00d7 d matrix, each row of which is a feature vector of a state; and P is the transition matrix. For the specific choice \u03a6 = (I \u2212 \u03b3P)\u22121 we obtain C = 1, while for the case that \u03a6 is not full-rank, we have C = \u221e. In general, since underparameterized function approximation reduces the parameter dimension, it usually improves the condition-number. Fig. 2 illustrates the dependence of C on the number of features, d, in an extended Boyan chain environment (Boyan, 2002) with 200 states and with random binary features (see Appendix G.1 for details). We observe that smaller d results in better condition-number; but this comes at the cost of larger value-error at MSBE minimum (the red curve in Fig. 2), where by value-error we mean Es\u223cunif[Ea\u223c\u03c0(\u00b7|s)[\u2225qw(s, a) \u2212 q\u03c0(s, a)\u22252]]. See Appendix B for more experiments on condition number under linear function approximation. 4. A review of the Gauss-Newton method Consider an expected loss function of the form F(w) = Ef[f 2(w)], and the associated Hessian matrix HF = E[\u2207f\u2207f T ] + E[f Hf], where Hf denotes the Hessian of the sample function f. The first term on the right hand side, E[\u2207f\u2207f T ], is called the Gauss-Newton matrix and is denoted by G. The Gauss-Newton algorithm then updates w as w \u2190 w \u2212 \u03b1G\u22121\u2207F(w). In the special case that the functions f are linear, we have Hf = 0 and thereby HF = G. In this case, Gauss-Newton and Newton methods become equivalent. However, the Gauss-Newton algorithm has two advantages in the non-linear case. Firstly, G\u22121\u2207F(w) is Figure 2.
Condition-number of MSBE (blue) and value-error at MSBE minimizer (red) versus number of features, in a 200-state extended Boyan chain with random binary features. always a descent direction, as opposed to the Newton updates that may climb uphill and converge to local maxima or saddle points (Nesterov & Polyak, 2006). Secondly, G can be computed in terms of gradients, while H entails second order derivatives which are not as easily accessible in certain settings (Nocedal & Wright, 1999). As opposed to gradient descent, which is prohibitively slow in ill-conditioned problems, Newton and Gauss-Newton methods are invariant to conditioning of the loss function. Some recent works proposed using the Gauss-Newton method for value estimation (Gottwald et al., 2021). However, these algorithms require matrix inversion, which is computationally costly even if computed incrementally via the Sherman-Morrison formula with quadratic complexity (Sherman & Morrison, 1950). In the next section, we propose an incremental low-complexity approach that approximately follows the Gauss-Newton direction. 5. Our first algorithm: RAN The Gauss-Newton direction for minimizing MSBE is mGN(w) = G\u22121 w \u2207MSBE(w)/2, where Gw = Es,a[\u2207\u03b4w(s, a) \u2207\u03b4w(s, a)T ] is the Gauss-Newton matrix and \u2207MSBE(w) = 2Es,a[\u03b4w(s, a) \u2207\u03b4w(s, a)]. Then, mGN is the minimizer of the following quadratic function: L(m) def= (1/2) Es,a[(\u03b4w(s, a) \u2212 \u2207\u03b4w(s, a)T m)2]. (9) This is because for any m, \u2207mL(m) = Es,a[(\u2207\u03b4w(s, a)T m \u2212 \u03b4w(s, a)) \u2207\u03b4w(s, a)] = Gwm \u2212 \u2207MSBE(w)/2, and therefore \u2207mL(mGN) = 0. We follow a two-time-scale approach (Yao et al., 2009; Bhatnagar et al., 2009; Dabney & Thomas, 2014) to incrementally find an approximate minimizer m of L and update w along that direction.
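Before the incremental approximation, it may help to see the exact Gauss-Newton step in the tabular linear case, where HF = 2G and a single step lands directly on the MSBE minimizer. The chain environment below is our toy example, not one from the text.

```python
import numpy as np

# One exact Gauss-Newton step on tabular MSBE^V of a 5-state chain.
# delta = r + gamma*P v - v = r - A v with A = I - gamma*P, so
# G = A^T A / n and grad MSBE = -2 A^T (r - A v) / n; since MSBE is
# quadratic, w - G^{-1} grad/2 jumps straight to the minimizer.
n, gamma = 5, 0.9
P = np.diag(np.ones(n - 1), 1)
A = np.eye(n) - gamma * P
r = np.zeros(n); r[-1] = 1.0
v = np.zeros(n)                              # arbitrary starting point
G = A.T @ A / n                              # Gauss-Newton matrix
grad = -2.0 / n * A.T @ (r - A @ v)          # grad of MSBE at v
v_new = v - np.linalg.solve(G, grad / 2.0)   # Gauss-Newton step
print(np.round(v_new, 4))                    # the true values gamma**(n-1-s)
```

The catch, as noted above, is the matrix solve: RAN's contribution is to track this direction incrementally without ever forming or inverting G.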
More concretely, given a \u03b2 > 0 and \u03bb \u2208 [0, 1], at time t, we update m along an unbiased sample gradient of \u03b2L(m) + (1 \u2212 \u03bb)\u2225m\u22252, m \u2190 \u03bbm + \u03b2 (\u03b4\u2032t \u2212 mT \u2207\u03b4\u2032t) \u2207\u03b4t, (10) where \u03b4t and \u03b4\u2032t are defined in (3) and (4), and (1 \u2212 \u03bb)\u2225m\u22252 is a Levenberg\u2013Marquardt regularizer (Marquardt, 1963)1. We then update w along m, i.e., w \u2190 w \u2212 \u03b1m. Algorithm 1 gives the pseudo code of the RAN algorithm.
Algorithm 1 RAN
Parameters: step-sizes \u03b1, \u03b2, and decay parameter \u03bb
Initialize: m = 0 and w
for t = 1, 2, . . . do
  consider \u03b4t and \u03b4\u2032t defined in (3) and (4), respectively
  m \u2190 \u03bbm + \u03b2 \u03b4\u2032t \u2207\u03b4t
  m \u2190 m \u2212 \u03b2(mT \u2207\u03b4t)\u2207\u03b4t
  w \u2190 w \u2212 \u03b1m
end for
For better stability and faster convergence, the update of m in RAN is of the form m \u2190 \u03bbm + \u03b2 (\u03b4\u2032t \u2212 mT \u2207\u03b4t) \u2207\u03b4t, (11) which is the same as (10) except for using \u2207\u03b4t instead of \u2207\u03b4\u2032t. The updates in (11) have lower variance compared to (10), and additionally \u2207\u03b4t\u2207\u03b4T t in (11) is positive semidefinite, as opposed to the finite sample estimate of the Gauss-Newton matrix (1/\u03c4) \u03a3\u03c4 t=1 \u2207\u03b4t\u2207\u03b4\u2032T t in (10). For fixed w, the update in (11) is in expectation along \u2212(Et[\u2207\u03b4t\u2207\u03b4T t ] + (1 \u2212 \u03bb)I)\u22121\u2207MSBE(w). (12) In Appendix C, we provide further intuition for RAN, by presenting a derivation of Algorithm 1 as a proximal method with momentum for minimizing MSBE. In this view, m serves as a momentum of MSBE gradients, to which we add a correction term equal to the gradient of a penalty function that aims to regularize the change in \u03b4w(s, a) for all state action pairs (s, a).
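Algorithm 1 is easy to run on the deterministic toy chain used above, where \u03b4\u2032t = \u03b4t; the step-sizes below are our own untuned guesses, not values from the text.

```python
import numpy as np

# Algorithm 1 (RAN), tabular, on a deterministic 5-state chain
# (state s -> s+1, reward 0; last state terminates with reward 1).
n, gamma = 5, 0.9
alpha, beta, lam = 0.02, 0.5, 0.99
v, m = np.zeros(n), np.zeros(n)
for _ in range(5000):
    for s in range(n):
        r = 1.0 if s == n - 1 else 0.0
        v_next = 0.0 if s == n - 1 else v[s + 1]
        delta = r + gamma * v_next - v[s]
        grad = np.zeros(n)                  # grad of delta w.r.t. v
        grad[s] = -1.0
        if s < n - 1:
            grad[s + 1] = gamma
        m = lam * m + beta * delta * grad   # momentum-like trace
        m = m - beta * (m @ grad) * grad    # correction toward Gauss-Newton dir.
        v = v - alpha * m
print(np.round(v, 3))
```

Note that beta * ||grad(delta)||^2 < 1 here, which keeps the correction step from overshooting; Section 7 discusses what goes wrong when that fails.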
Convergence of the RAN algorithm can be shown in the two-time-scale regime where \u03b1t, \u03b2t \u2192 0, with \u03b1 diminishing faster than \u03b2 (i.e., \u03b1t/\u03b2t \u2192 0)2. Convergence of such two-time-scale algorithms is well-studied (Kushner & Yin, 2003; Konda & Tsitsiklis, 1999; Bhatnagar et al., 2009), under some smoothness and irreducibility conditions. In Appendix D, we discuss different conditions for convergence of Algorithm 1 in the two-time-scale regime. Moreover, in this regime, RAN is robust to reparameterization: Proposition 5.1 (Informal). For \u03bb = 1 and asymptotically small step-sizes \u03b1 \u2192 0 and \u03b1/\u03b2 \u2192 0, the trajectory of w in the RAN algorithm is invariant to any differentiable and bijective non-linear transformation on parameterization. The formal version of Proposition 5.1 and its proof are given in Appendix E. We evaluated the performance of RAN in a simple benchmark environment. Consider an environment with n states and one action, in which each state i = 1, 2 . . . , n transits to state min(i + 1, n) with probability 1 \u2212 \u03f5, and transits to a terminal state with probability \u03f5, for some \u03f5 \u2208 [0, 1). This is a generalization of the Hallway environment (Baird, 1995), and is known to be a challenging task for the RG algorithm (Baird, 1995). Figure 3. The Hallway experiment discussed in Section 5. 1We have empirically observed that \u03bb = 1 often leads to slow convergence, because it causes large inertia in m, and therefore large oscillations in w. The best performance is achieved for \u03bb \u2208 (0.99, 0.9999). 2Note that the two-time-scale view is only for the purpose of convergence analysis, and in practice we consider fixed or adaptive step-sizes whose ratio need not go to zero.
We tested Algorithm 1 in this environment with n = 50, \u03f5 = 0.01, and \u03b3 = 0.99 in the tabular setting (see Appendix G.2 for the details of this experiment). The learning curves are depicted in Fig. 3. We observe that, in this experiment, Algorithm 1 is about 30 times faster than RG, and reaches a convergence rate close to TD(0) (Sutton, 1988; Sutton & Barto, 2018). 6. Double-sampling-free RAN algorithm In Algorithm 1, we require double sampling to compute \u03b4\u2032t. In this section, we propose a Double-Sampling-Free version of RAN, called DSF-RAN. Double sampling is easily doable in deterministic environments (Saleh & Jiang, 2019; Zhang et al., 2020), in which case \u03b4\u2032t can be computed using an independent sample A\u2032t+1 from the policy. However, for double sampling in stochastic environments, we require a model to get an independent sample S\u2032t+1 of the next state, which is typically possible only in simulated environments. To resolve the double sampling issue of RAN in stochastic environments, we use the technique discussed in Section 2, which was also used in the GTD2 algorithm. More specifically, instead of \u03b4\u2032t in Algorithm 1, we use a parametric approximation \u02c6\u03b4\u03b8(St, At) of \u03b4w(St, At), parameterized by \u03b8. Similar to GTD2 (see (7)), we then learn \u03b8 through SGD on Es,a[(\u02c6\u03b4\u03b8(s, a) \u2212 \u03b4w(s, a))2].
Algorithm 2 DSF-RAN
Parameters: step-sizes \u03b1, \u03b2, \u03b7, and decay parameter \u03bb
Initialize: m = 0, w, \u03b8.
for t = 1, 2, . . . do
  \u2207\u03b4t = \u03b3\u2207wqw(St+1, At+1) \u2212 \u2207wqw(St, At)
  m \u2190 \u03bbm + \u03b2 \u02c6\u03b4\u03b8(St, At) \u2207\u03b4t
  m \u2190 m \u2212 \u03b2(mT \u2207\u03b4t)\u2207\u03b4t
  w \u2190 w \u2212 \u03b1m
  \u03b8 \u2190 \u03b8 + \u03b7 (\u03b4t \u2212 \u02c6\u03b4\u03b8(St, At)) \u2207\u03b8\u02c6\u03b4\u03b8(St, At)
end for
Figure 4. The Baird\u2019s star experiment discussed in Section 6.
Pseudo code of DSF-RAN is given in Algorithm 2. We tested the RAN and DSF-RAN algorithms on Baird\u2019s Star environment (Baird, 1995), which is a Markov chain with six states, each represented by seven features (see Appendix G.3 for details of this experiment). The results are illustrated in Fig. 4. We observe that in this environment, RAN and DSF-RAN converge about 200 times faster than the RG and GTD2 algorithms, respectively. It is well-known that off-policy TD(0) is unstable in this environment (Baird, 1995). 7. The problem of outliers In this section, we argue that the gradient of MSBE involves large outliers and discuss its impact on the RAN algorithm. For simplicity, temporarily suppose that the set of actions is a singleton, A = {a}. In the function approximation case, successive states St and St+1 often have similar representations. As a result, \u2207qw(St, a) and \u03b3\u2207qw(St+1, a) are often similar, rendering \u2207\u03b4t = \u03b3\u2207qw(St+1, a) \u2212 \u2207qw(St, a) small (Zhang et al., 2020). This would not have been problematic if \u2207\u03b4t was small for all t, in which case we could compensate by increasing the step-size. However, \u2207\u03b4t can occasionally be large, for example when St+1 is a terminal state, in which case \u2207\u03b4t = \u2212\u03b3\u2207qw(St, a), or when St+1 is far from St (e.g., in large jump transitions). Although these outliers occur with low probability, they carry important information. For example, the pre-terminal transitions are important because they pin down the estimated values to the terminal values. In environments with larger action sets, if the policy has small entropy, At+1 and At would have similar representations with high probability, causing \u2207\u03b4t to be small. We now discuss how these outliers affect RAN.
The updates of m in Algorithm 1 involve a momentum (of MSBE gradient) term \u03bbm + \u03b4\u2032t\u2207\u03b4t and a correction term \u2212\u03b2(\u2207\u03b4T t m)\u2207\u03b4t that aims to slowly modify m towards the approximate Gauss-Newton direction. However, when \u2207\u03b4t is an outlier, \u03b2(\u2207\u03b4T t m)\u2207\u03b4t can grow very large, cause an overshoot, and completely change the direction of m. In particular, if \u03b2\u2225\u2207\u03b4t\u22252 > 1, then the magnitude of the correction term would be larger than the projection of m on \u2207\u03b4t, i.e., |\u27e8\u03b2(\u2207\u03b4T t m)\u2207\u03b4t, \u2207\u03b4t\u27e9| > |\u2207\u03b4T t m|, (13) which results in an overshoot along \u2207\u03b4t. Such overshoots hinder m from tracking the approximate Gauss-Newton direction. To reduce the adverse effect of outliers, one can reduce the step-size \u03b2, at the cost of slowed-down learning. Another popular solution is gradient clipping (Zhang et al., 2019). However, as discussed in the first paragraph of this section, the outliers in our problem carry important information, which can be lost via gradient clipping. 8. Outlier-splitting We now propose outlier-splitting as a general meta-technique for stochastic optimization, appropriate for the case that data contains rare sample functions with abnormally large gradients, and these sample functions carry important information that would be lost in gradient clipping. We first explain the key idea by an example. Consider minimizing f1 + \u00b7 \u00b7 \u00b7 + fn for smooth functions f1, . . . , fn. Suppose that f1 is an outlier in the sense that the norm of its gradient is locally k times larger than the gradient norms of other functions, for some integer k > 1. The idea is that instead of applying SGD on f1 + \u00b7 \u00b7 \u00b7 + fn, we break down f1 into k copies of f1/k and apply SGD on f1/k + \u00b7 \u00b7 \u00b7 + f1/k + f2 + \u00b7 \u00b7 \u00b7 + fn in a random order.
The latter updates are outlier-free while being equivalent to the former updates in expectation. We now proceed to a formal description. Consider SGD on an objective function F = E[f]. For any sample function f and any point w, we consider a non-negative measure \u03be(f, w); e.g., \u03be(f, w) = \u2225\u2207f(w)\u2225 or \u2225\u2207f(w)\u22252. Let \u00af\u03be be a trace of \u03be, updated by \u00af\u03be \u2190 \u03bb\u03be \u00af\u03be + (1 \u2212 \u03bb\u03be)\u03be(ft, wt), where \u03bb\u03be \u2208 (0, 1) is a constant close to 1. We say that ft is an outlier if \u03be(ft, wt) \u2265 \u03c1\u00af\u03bet, for some outlier threshold \u03c1 > 1. The pseudo code of the outlier-splitting method for online SGD is given in Algorithm 3.
Algorithm 3 Outlier-splitting for online SGD, applied to loss function F = E[f]
Parameters: step-size \u03b2, outlier threshold \u03c1, trace parameter \u03bb\u03be, outlier sampling probability \u03c3.
Initialize: \u02c6\u03be = 0, w.
for t = 1, 2, . . . do
  \u02c6\u03be \u2190 \u03bb\u03be \u02c6\u03be + (1 \u2212 \u03bb\u03be)\u03be(ft, w)
  \u00af\u03be = \u02c6\u03be/(1 \u2212 \u03bb\u03be^t) \u25b7 bias-corrected trace estimate
  k = \u230a\u03be(ft, w)/(\u03c1\u00af\u03be)\u230b + 1
  w \u2190 w \u2212 (\u03b2/k)\u2207ft(w)
  if k > 1 then Store (f, k, k \u2212 1) in the outlier buffer end if
  With probability min(1, \u03c3 \u2217 length of outlier buffer):
    Sample (f, k\u2032, j) uniformly from the outlier buffer
    k\u2032\u2032 = max(k\u2032, \u230a\u03be(f, w)/(\u03c1\u00af\u03be)\u230b + 1)
    w \u2190 w \u2212 (\u03b2/k\u2032\u2032)\u2207f(w)
    if j > 1 then Replace (f, k\u2032, j) with (f, k\u2032, j \u2212 1) in the buffer
    else if j = 1 then Remove (f, k\u2032, j) from the outlier buffer end if
end for
At time t of this algorithm, we let k = \u230a\u03be(ft, wt)/(\u03c1\u00af\u03bet)\u230b + 1. (14) If ft is an outlier (equivalently k > 1), instead of ft we pretend to have k copies of ft/k.
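A minimal scalar sketch of this splitting rule is below. It is simplified relative to Algorithm 3: gradients here are fixed numbers (so k is not recomputed for buffered copies) and the buffered update is always taken when the buffer is non-empty; the stream of gradients is synthetic data of ours.

```python
import numpy as np
from collections import deque

def split_sgd(grads, beta=0.1, rho=3.0, lam=0.99):
    """SGD with outlier-splitting: a gradient whose magnitude exceeds rho
    times its running trace is split into k parts; one part is applied
    now, the other k - 1 are queued and applied one per iteration."""
    w, xi_hat = 0.0, 0.0
    buffer = deque()
    max_step = 0.0                              # largest single update taken
    for t, g in enumerate(grads, start=1):
        xi = abs(g)                             # outlier measure xi(f, w)
        xi_hat = lam * xi_hat + (1 - lam) * xi
        xi_bar = xi_hat / (1 - lam ** t)        # bias-corrected trace
        k = int(xi // (rho * xi_bar)) + 1       # splitting factor, eq. (14)
        w -= (beta / k) * g
        max_step = max(max_step, abs(beta / k * g))
        buffer.extend([g / k] * (k - 1))        # defer the remaining copies
        if buffer:                              # at most one buffered update
            gb = buffer.popleft()
            w -= beta * gb
            max_step = max(max_step, abs(beta * gb))
    return w, max_step

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
g[50::100] += 50.0                              # rare outlier gradients
w, max_step = split_sgd(g)
print(round(max_step, 3), "vs naive", round(0.1 * np.abs(g).max(), 3))
```

Unlike clipping, every split copy is eventually applied, so no gradient information is discarded; only the per-iteration step magnitude is bounded.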
We use one of these copies to perform a gradient update at time t, and store the remaining k − 1 copies in a buffer to use for future updates. These copies are stored in one cell of an outlier buffer as a tuple (f_t, k, k − 1), where the last entry indicates the number of remaining copies to be used in future updates. In each iteration we perform one update based on the online sample, and at most one update based on a sample from the buffer. More concretely, in each iteration t, after applying a gradient update w ← w − (β/k)∇f_t(w), we take a sample (f, k_f, j) from the outlier buffer with some positive probability, and perform a gradient update w ← w − (β/k_f)∇f(w). We now show that the outlier buffer is stable. The expected number of copies, k − 1, added to the buffer at time t satisfies
E[k − 1] ≤ E_t[ξ(f_t, w_t)/(ρ ξ̄_t)] ≃ E_t[ξ(f_t, w_t)]/(ρ E[ξ̄_t]) = 1/ρ < 1,
where the inequality is due to (14) and the approximate equality holds because ξ̄_t is a long-time average. On the other hand, as the length of the outlier buffer increases, the probability of performing a sample update from the buffer goes to 1. In this case, the arrival rate to the buffer, 1/ρ, is smaller than its departure rate, 1, implying stability of the outlier buffer.
Toward Efficient Gradient-Based Value Estimation
9. Our main algorithm: RANS
Our final algorithm, RAN with outlier Splitting (RANS), is a combination of the RAN, outlier-splitting, and adaptive step-size ideas. In order to improve the updates of m, we employ an adaptive vector step-size β that evolves according to a mechanism quite similar to RMSProp (Kochenderfer & Wheeler, 2019), as we discuss next.
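The splitting loop described above can be sketched as follows (a minimal plain-Python illustration of the Algorithm 3 scheme; the quadratic toy losses and all parameter values are our own assumptions, not the paper's):

```python
import random

# Minimal sketch (ours) of outlier-splitting online SGD. Each sample
# (a, b) defines the loss f(w) = 0.5 * a * (w - b)^2, so that
# grad f(w) = a * (w - b); the outlier measure is xi(f, w) = |grad f(w)|.
def outlier_splitting_sgd(samples, w0=0.0, beta=0.05, rho=2.0,
                          lam_xi=0.99, sigma=0.5, seed=0):
    rng = random.Random(seed)
    w, xi_hat, buf = w0, 0.0, []
    for t, (a, b) in enumerate(samples, start=1):
        grad = a * (w - b)
        xi = abs(grad)
        xi_hat = lam_xi * xi_hat + (1 - lam_xi) * xi
        xi_bar = xi_hat / (1 - lam_xi ** t)        # bias-corrected trace
        k = int(xi / (rho * xi_bar)) + 1 if xi_bar > 0 else 1
        w -= (beta / k) * grad                     # one of the k split updates
        if k > 1:
            buf.append([(a, b), k, k - 1])         # store remaining copies
        if buf and rng.random() < min(1.0, sigma * len(buf)):
            i = rng.randrange(len(buf))
            (a2, b2), k2, j = buf[i]
            g2 = a2 * (w - b2)
            k2 = max(k2, int(abs(g2) / (rho * xi_bar)) + 1 if xi_bar > 0 else 1)
            w -= (beta / k2) * g2
            if j > 1:
                buf[i][2] = j - 1                  # one fewer copy left
            else:
                buf.pop(i)
    return w

# Regular samples (a = 1) with rare steep outliers (a = 50), all minimized
# at b = 1; plain SGD with this step-size would oscillate on the outliers
# (beta * a = 2.5 > 2), while splitting keeps the updates contractive.
samples = [(50.0, 1.0) if i % 20 == 0 else (1.0, 1.0) for i in range(400)]
w_final = outlier_splitting_sgd(samples)
```

With these toy losses, w_final lands near the common minimizer 1.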
Consider a trace vector ν_t of (∇δ_t)², updated according to ν_t ← λ′ν_{t−1} + (1 − λ′)(∇δ_t)², where (∇δ_t)² is the entrywise square of the vector ∇δ_t, and λ′ ∈ [0, 1) is a constant. We consider the outlier measure
ξ_t = ⟨(1/√ν_t) ⊙ ∇δ_t, ∇δ_t⟩,  (15)
where 1/√ν_t is taken entrywise, and ⊙ and ⟨·, ·⟩ denote the entrywise product and the inner product of two vectors, respectively. We then compute the trace ξ̄ and k as in Section 8: ξ̄_t ← λ′ξ̄_t + (1 − λ′)ξ_t and k = ⌊ξ_t/(ρ ξ̄_t)⌋ + 1. We finally fix an η ∈ (0, 1) and choose the step-size
β_t = (η/(ρ ξ̄_t)) · (1/√ν_t).  (16)
The pseudocode of RANS is given in Algorithm 4 in Appendix F. The algorithm applies the outlier-splitting method to the updates of m in RAN, and uses the adaptive step-size in (16). We now show that the outlier-splitting mechanism in RANS effectively prevents overshoots of type (13) in the updates of m. Given the above choice of β_t, we have
(1/k)⟨β_t ⊙ ∇δ_t, ∇δ_t⟩ = (1/k) · (η/(ρ ξ̄_t)) · ⟨(1/√ν_t) ⊙ ∇δ_t, ∇δ_t⟩ ≤ (ρ ξ̄_t/ξ_t) · (η/(ρ ξ̄_t)) · ⟨(1/√ν_t) ⊙ ∇δ_t, ∇δ_t⟩ = η,
where the first equality is from the definition of β_t in (16), the inequality is due to the definition of k (which gives 1/k ≤ ρ ξ̄_t/ξ_t), and the last equality follows from the definition of ξ_t in (15). This implies that
|⟨(1/k)β_t ⊙ (∇δ_t^T m)∇δ_t, ∇δ_t⟩| ≤ η|∇δ_t^T m|.  (17)
Therefore overshoots of type (13) do not occur in RANS.
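The bound (17) can also be checked numerically; the following toy script (ours, with arbitrary synthetic gradients) verifies that the step-size (16) combined with the split factor k keeps the scaled correction below η|∇δ_t^T m|:

```python
import math, random

# Toy numeric check (ours, synthetic gradients) of the bound (17): the
# step-size beta_t of (16) together with the split factor k guarantees
#   |<(1/k) * beta_t ⊙ (grad . m) grad, grad>| <= eta * |grad . m|.
rng = random.Random(1)
eta, rho, lam = 0.2, 1.2, 0.9
d = 5
nu = [1.0] * d        # trace of entrywise squared gradients (nu_t)
xi_bar = 1.0          # trace of the outlier measure xi_t

for t in range(200):
    # occasionally draw a large outlier gradient
    scale = 10.0 if t % 17 == 0 else 1.0
    grad = [rng.gauss(0, scale) for _ in range(d)]
    m = [rng.gauss(0, 1) for _ in range(d)]
    nu = [lam * v + (1 - lam) * g * g for v, g in zip(nu, grad)]
    xi = sum(g * g / math.sqrt(v) for g, v in zip(grad, nu))    # eq. (15)
    k = int(xi / (rho * xi_bar)) + 1
    beta = [eta / (rho * xi_bar) / math.sqrt(v) for v in nu]    # eq. (16)
    gm = sum(g * mi for g, mi in zip(grad, m))
    corr = abs(sum(b * gm * g * g / k for b, g in zip(beta, grad)))
    assert corr <= eta * abs(gm) + 1e-9                         # bound (17)
    xi_bar = lam * xi_bar + (1 - lam) * xi
```

The assertion holds on every iteration because 1/k ≤ ρ ξ̄_t/ξ_t by construction of k.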
The RANS algorithm has hyperparameters α, η, ρ, λ, λ′, and σ (the outlier sampling probability). Setting η = 0.2 and ρ = 1.2 is always a good choice. Furthermore, our experiments show that the parameters λ, λ′, and σ can be set to the default values λ = 0.999, λ′ = 0.9999, and σ = 0.02 without much performance degradation. In this case, the RANS algorithm has essentially one hyperparameter, α, just like the RG and TD algorithms with the Adam optimizer (Kingma & Ba, 2014). The per-iteration computational complexity of RANS is at most twice that of the RG algorithm with the Adam optimizer.
10. Experiments
Like TD, the RANS algorithm can be utilized within any control loop. More specifically, one can use RANS instead of TD to estimate Q-values and plug these estimates into the policy update of interest, including actor-critic algorithms like A3C (Mnih et al., 2016), deterministic policy gradient algorithms (Silver et al., 2014) like DDPG (Lillicrap et al., 2015), and greedy/softmax policy updates like DQN (Mnih et al., 2015). In this section, we assess the performance of softmax policy updates and deterministic policy gradient methods when the Q-functions in these algorithms are computed using RANS. For softmax policy updates, we conducted experiments on the Acrobot and Cartpole environments. We used a single-layer neural network with 64 hidden units with ReLU activation to learn the action-values via three algorithms: TD(0), RG, and RANS. Actions are chosen according to a softmax distribution on the action-values. Fig. 5 illustrates expected returns versus the number of steps. We trained TD(0) and RG using the Adam optimizer. Refer to Appendix G.4 for complementary experimental results and details of the experiments. The results show that the RANS algorithm outperforms RG and TD on these environments.
Figure 5. Performance of RANS, TD(0), and RG on classic control tasks. A single-layer neural network with 64 hidden ReLU units was used to learn the Q-values, and a softmax distribution on the Q-values was used as the policy.
For the deterministic policy gradient actor updates, we conducted experiments on two MuJoCo environments: Hopper and HalfCheetah. We employed a two-layer feedforward neural network with 400 and 300 hidden ReLU units, respectively, for both the actor and critic, and a final tanh unit following the output of the actor. The actor was trained using deterministic policy gradient updates (Silver et al., 2014; Lillicrap et al., 2015), while the critics were trained by three algorithms: RANS, Adam TD with delayed target network updates, and Adam RG. We considered an on-policy setting where samples are drawn from the current policy in an online manner and are directly fed into the actor and critic training algorithms. We did not use replay buffers or batch updates. Fig. 6 depicts the learning curves of these algorithms. Refer to Appendix G.4 for additional experimental results and details of the experiments. The results indicate that the RANS algorithm surpasses TD and RG in these environments. It is important to note that the results cannot be fairly compared to the state of the art because our setting is on-policy and does not take advantage of replay buffers and batch updates. We leave the integration of these techniques into the RANS ideas for future work.
11. Related works
Poor conditioning of MSBE was previously observed by Wang and Ueda (2021) through the study of example Markov chains. More specifically, Wang and Ueda (2021) analyzed a particular n-state Markov chain and showed that the condition number of MSBE in this Markov chain scales with n². They also showed that the condition number scales with 1/(1 − γ)² in another example Markov chain.
In comparison, our lower bound in Theorem 3.1(a) holds for every Markov chain, and the lower bound in Theorem 3.1(b) scales with n²/(1 − γ)². A prevalent explanation for the slowness of gradient-based value estimation methods is the so-called flow of information in the wrong direction (Baird, 1995). More concretely, each update in RG can be decomposed into a forward bootstrapping component (a TD update) and a backward bootstrapping component (the so-called wrong direction of information flow). A common approach for accelerating the gradient updates is to suppress the second component (e.g., via some form of combination with TD updates), especially in the early stages of training. The acceleration gained in the residual algorithm (Baird, 1995), TDC (Sutton et al., 2009), and TDRC and QRC (Ghiassian et al., 2020) can be understood from this perspective. In contrast, the acceleration gained in our algorithms does not rely on combinations with TD updates. Use of the Gauss-Newton method for value estimation was recently proposed explicitly in (Gottwald et al., 2021; Gottwald & Shen, 2022). Value estimation algorithms based on the Kalman filter (Choi & Van Roy, 2006; Geist & Pietquin, 2010) are also known to have a form equivalent to online Gauss-Newton updates (Geist & Pietquin, 2010). Sun and Bagnell (2015) studied MSBE minimization with Newton's method. However, all of the above methods involve approximating a variant of the Hessian or Gauss-Newton matrix and solving a system of linear equations in each iteration, which is computationally costly. Yao et al. (2009) proposed a low-complexity two-time-scale method, called LMS-2, for stochastic linear regression. Our RAN algorithm can be perceived as a generalization of the LMS-2 algorithm to MSBE minimization under non-linear function approximation.
Several other algorithms, including least-squares TD (Sutton & Barto, 2018) and (Devraj & Meyn, 2017), also leverage a matrix gain for improved convergence under linear function approximation. In the same spirit, natural gradient methods (Amari, 1998; Kakade, 2001; Martens, 2020) also enjoy robustness to parameterization. Dabney and Thomas (2014), Knight and Lerner (2018), and Achiam et al. (2019) proposed natural gradient algorithms for value estimation. Dabney and Thomas (2014) also proposed a low-complexity two-time-scale implementation that has high-level algorithmic similarities to the RAN algorithm. In Section 5 and Appendix C we showed that the RAN algorithm can be perceived as a proximal method. A proximal method for value estimation, called GTD2-MP, was proposed in (Liu et al., 2020; Mahadevan et al., 2014). However, these works consider a Bregman divergence that does not depend on the value estimates. In fact, as the step-size goes to zero, the update direction of GTD2-MP tends to the expected GTD2 update direction. Schulman et al. (2015), Sun and Bagnell (2015), and Zhu and Murray (2022) considered proximal methods with value-dependent penalties of the form E[(v_{w_{t+1}}(S_t) − v_{w_t}(S_t))²]. Although the resulting expected update direction E_s[∇v_w(s)∇v_w(s)^T]^{−1}∇MSBE(w) is robust to parameterization, it is not robust against poor conditioning. For example, in the tabular case, this expected update direction simplifies to ∇MSBE(w), which is the same as RG.
Figure 6. Performance of RANS, TD with target networks, and RG algorithms on simple MuJoCo environments. The Q-values, trained via these algorithms, were integrated into a standard deterministic policy gradient control loop for policy training. The experiments are online, not utilizing replay buffers or batch updates.
In contrast, in the proximal view of the RAN algorithm, we used penalties of the form E[(δ_{w_{t+1}}(S_t, A_t) − δ_{w_t}(S_t, A_t))²], which provides robustness to the conditioning of MSBE, as discussed in Section 5 and Appendix C. The control algorithm SBEED (Dai et al., 2018) involves mirror-descent RG along with some other ideas, including entropy regularization, akin to SAC (Haarnoja et al., 2018), and blending RG with naive residual gradient (Sutton and Barto, 2018, Chapter 11). The entropy regularization technique, in particular, is known to produce significant performance gains. It is noteworthy that the entropy regularization technique can be used in conjunction with the RANS and DFSRAN algorithms as well. Karampatziakis and Langford (2010) and Tian and Sutton (2019) proposed a method, called sliding-step, to reduce the adverse effect of outliers in certain problems. This method is quite similar to the outlier-splitting algorithm, with the difference that in the sliding-step method all k updates w ← w − ∇f_t(w)/k are applied sequentially and before time t + 1, while the outlier-splitting method spreads these updates over a long time. Another simple approach is to use momentum to reduce the variance of updates. However, smoothing large outliers requires large momentum parameters, in which case the delayed effect of gradients propagates far into the future and becomes outdated, pushing w along an outdated outlier gradient even if the outlier gradient at the current w is reversed.
12. Future works and discussion
In this paper, we highlighted causes that underlie the slowness of gradient-based value estimation methods, and proposed low-complexity techniques to resolve them. Our focus was on the on-policy case; however, the proposed algorithms are readily applicable to off-policy learning when combined with standard importance sampling techniques. We provided evidence for the potential of the proposed algorithms via experiments on a few classic environments.
Other than applying standard techniques (such as batch updates, replay buffers, different forms of step-size adaptation, etc.) and testing the algorithms on more complex environments, there are several directions for future research. This includes adopting the unbiased gradient estimate of (9) in (10) instead of the biased estimate in (11), and comparing these methods with other means of solving (9), including conjugate gradient and low rank approximation of the Gauss-Newton matrix. Another important direction is further exploration of the proposed double-sampling-free methods in stochastic environments with neural network function approximation. On the theory side, it would be interesting to study condition-number of MSBE, and in general the shape of MSBE landscape, under linear and non-linear function approximation under common feature representations in asymptotically large environments. Moreover, non-asymptotic behavior and finite sample complexity analysis of the proposed methods would be very helpful for understanding the effectiveness of these algorithms in reducing sensitivity to condition-number. 13. Acknowledgments The authors want to thank Yi wan, Sina Ghiassian, John N. Tsitsiklis, and Saber Salehkaleybar for their valuable 9 \fToward Efficient Gradient-Based Value Estimation feedback in various stages of development of this work." + }, + { + "url": "http://arxiv.org/abs/1911.00731v1", + "title": "Order Optimal One-Shot Distributed Learning", + "abstract": "We consider distributed statistical optimization in one-shot setting, where\nthere are $m$ machines each observing $n$ i.i.d. samples. Based on its observed\nsamples, each machine then sends an $O(\\log(mn))$-length message to a server,\nat which a parameter minimizing an expected loss is to be estimated. 
We propose\nan algorithm called Multi-Resolution Estimator (MRE) whose expected error is no\nlarger than $\\tilde{O}\\big(m^{-{1}/{\\max(d,2)}} n^{-1/2}\\big)$, where $d$ is\nthe dimension of the parameter space. This error bound meets existing lower\nbounds up to poly-logarithmic factors, and is thereby order optimal. The\nexpected error of MRE, unlike existing algorithms, tends to zero as the number\nof machines ($m$) goes to infinity, even when the number of samples per machine\n($n$) remains upper bounded by a constant. This property of the MRE algorithm\nmakes it applicable in new machine learning paradigms where $m$ is much larger\nthan $n$.", + "authors": "Arsalan Sharifnassab, Saber Salehkaleybar, S. Jamaloddin Golestani", + "published": "2019-11-02", + "updated": "2019-11-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The rapid growth in the size of datasets has given rise to distributed models for statistical learning, in which data is not stored on a single machine. In several recent learning applications, it is commonplace to distribute data across multiple machines, each of which processes its own data and communicates with other machines to carry out a learning task. The main bottleneck in such distributed settings is often the communication between machines, and several recent works have focused on designing communication-ef\ufb01cient algorithms for different machine learning applications [Duchi et al., 2012, Braverman et al., 2016, Chang et al., 2017, Diakonikolas et al., 2017, Lee et al., 2017]. In this paper, we consider the problem of statistical optimization in a distributed setting as follows. Consider an unknown distribution P over a collection, F, of differentiable convex functions with Lipschitz \ufb01rst order derivatives, de\ufb01ned on a convex region in Rd. There are m machines, each observing n i.i.d sample functions from P. 
Each machine processes its observed data and transmits a signal of a certain length to a server. The server then collects all the signals and outputs an estimate of the parameter θ* that minimizes the expected loss, i.e., min_θ E_{f∼P}[f(θ)]. See Fig. 1 for an illustration of the system model. We focus on the distributed aspect of the problem, considering an arbitrarily large number of machines (m), and (a) present an order-optimal algorithm with b = O(log mn) bits per transmission, whose estimation error is no larger than Õ(m^{−1/max(d,2)} n^{−1/2}), meeting the lower bound in [Salehkaleybar et al., 2019] up to a poly-logarithmic factor (cf. Theorem 1); and (b) present an algorithm with a single bit per message whose expected error is no larger than Õ(m^{−1/2} + n^{−1/2}) (cf. Proposition 1).
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. arXiv:1911.00731v1 [cs.LG] 2 Nov 2019
1.1 Background
The distributed setting considered here has recently been employed in a new machine learning paradigm called Federated Learning [Konečný et al., 2015]. In this framework, training data is kept on users’ computing devices due to privacy concerns, and the users participate in the training process without revealing their data. As an example, Google has been working on this paradigm in their recent project Gboard [McMahan and Ramage, 2017], the Google keyboard. Besides communication constraints, one of the main challenges in this paradigm is that each machine has a small amount of data. In other words, the system operates in a regime where m is much larger than n [Chen et al., 2017]. A large body of the distributed statistical optimization/estimation literature considers the “one-shot” setting, in which each machine communicates with the server merely once [Zhang et al., 2013].
In these works, the main objective is to minimize the number of transmitted bits while keeping the estimation error as low as that of a centralized estimator, in which the entire data is co-located at the server. If we impose no limit on the communication budget, then each machine can encode its entire data into a single message and send it to the server. In this case, the server acquires the entire data from all machines, and the distributed problem reduces to a centralized problem. We call the sum of the observed functions at all machines the centralized empirical loss, and refer to its minimizer as the centralized solution. It is part of the folklore that the centralized solution is order optimal and its expected error is Θ(1/√(mn)) [Lehmann and Casella, 2006, Zhang et al., 2013]. Clearly, no algorithm can beat the performance of the best centralized estimator. Zhang et al. [2012] studied a simple averaging method where each machine obtains the empirical minimizer of its observed functions and sends this minimizer to the server through an O(log mn)-bit message. The output of the server is then the average of all received empirical minimizers. Zhang et al. [2012] showed that the expected error of this algorithm is no larger than O(1/√(mn) + 1/n), provided that (1) all functions are convex and twice differentiable with Lipschitz continuous second derivatives, and (2) the objective function E_{f∼P}[f(θ)] is strongly convex at θ*. Under the extra assumption that the functions are three times differentiable with Lipschitz continuous third derivatives, Zhang et al. [2012] also present a bootstrap method whose expected error is O(1/√(mn) + 1/n^{1.5}). It is easy to see that, under the above assumptions, the averaging method and the bootstrap method achieve the performance of the centralized solution if m ≤ n and m ≤ n², respectively. Recently, Jordan et al.
[2018] proposed to optimize a surrogate loss function using a Taylor series expansion. This expansion can be constructed at the server by communicating O(m) d-dimensional vectors. Under similar assumptions on the loss function as in [Zhang et al., 2012], they showed that the expected error of their method is no larger than O(1/√(mn) + 1/n^{9/4}). It therefore achieves the performance of the centralized solution for m ≤ n^{3.5}. However, note that when n is fixed, all aforementioned bounds remain lower bounded by a positive constant, even when m goes to infinity. For the problem of sparse linear regression, Braverman et al. [2016] proved that any algorithm that achieves the optimal minimax squared error requires communicating Ω(m × min(n, d)) bits in total from the machines to the server. Later, Lee et al. [2017] proposed an algorithm that achieves the optimal mean squared error for the problem of sparse linear regression when d < n. Recently, Salehkaleybar et al. [2019] studied the impact of communication constraints on the expected error over a class of first-order differentiable functions with Lipschitz continuous derivatives. In parts of their results, they showed that under the assumptions of Section 2 of this paper, in the case of a log mn bits communication budget, the expected error of any estimator is lower bounded by Ω̃(m^{−1/max(d,2)} n^{−1/2}). They also showed that if the number of bits per message is bounded by a constant and n is fixed, then the expected error remains lower bounded by a constant, even when the number of machines goes to infinity. Other than one-shot communication, there is another major communication model that allows for several transmissions back and forth between the machines and the server.
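For contrast with these one-shot schemes, the simple averaging baseline of Zhang et al. [2012] discussed above can be sketched in a few lines (our own illustration on synthetic quadratic losses, not code from any of the cited papers):

```python
import random

# Sketch (ours) of the one-shot averaging baseline of Zhang et al. [2012]
# on synthetic quadratic losses: machine i holds samples b_ij of losses
# f(theta) = (theta - b_ij)^2, computes its local empirical minimizer
# (the sample mean), and sends only that; the server averages the
# received minimizers.
def averaging_estimator(machine_samples):
    local_minimizers = [sum(s) / len(s) for s in machine_samples]
    return sum(local_minimizers) / len(local_minimizers)

rng = random.Random(0)
m, n = 1000, 5
# Samples centered at theta* = 0.3; for symmetric losses like these the
# local minimizer is unbiased, so averaging works -- the footnote example
# in Section 2 shows how it fails for asymmetric losses when n = 1.
data = [[0.3 + rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
theta_hat = averaging_estimator(data)
```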
Most existing works of this type [Bottou, 2010, Lian et al., 2015, Zhang et al., 2015, McMahan et al., 2017] involve variants of stochastic gradient descent, in which, at each iteration, the server queries the machines for the gradient of the empirical loss at certain points. The gradient vectors are then aggregated at the server to update the model’s parameters. The expected error of such algorithms typically scales as O(1/k), where k is the number of iterations.
1.2 Our contributions
We study the problem of one-shot distributed learning under milder assumptions than previously available in the literature. We assume that the loss functions f ∈ F are convex and differentiable with Lipschitz continuous first-order derivatives. This is in contrast to the works of [Zhang et al., 2012] and [Jordan et al., 2018], which assume Lipschitz continuity of the second or third derivatives. The reader should keep this difference in models in mind when comparing our bounds with the existing results. Unlike existing works, our results concern the regime where the number of machines m is large, and our bounds tend to zero as m goes to infinity, even if the number of per-machine observations n is bounded by a constant. This is contrary to the algorithms in [Zhang et al., 2012], whose errors tend to zero only when n goes to infinity. In fact, when n = 1, a simple example¹ shows that the expected errors of the simple averaging and bootstrap algorithms in [Zhang et al., 2012] remain lower bounded by a constant, for all values of m. The algorithm in [Jordan et al., 2018] suffers from the same problem, and its expected error may not go to zero when n = 1. In this work, we present an algorithm with O(log(mn)) bits per message, which we call the Multi-Resolution Estimator for Convex landscapes with log mn bits communication budget (MRE-C-log) algorithm.
We show that the estimation error of the MRE-C-log algorithm meets the aforementioned lower bound up to a poly-logarithmic factor. More specifically, we prove that the expected error of the MRE-C-log algorithm is no larger than Õ(m^{−1/max(d,2)} n^{−1/2}). In this algorithm, each machine reports not only its empirical minimizer, but also some information about the derivative of its empirical loss at a randomly chosen point in a neighborhood of this minimizer. To provide insight into the underlying idea behind the MRE-C-log algorithm, we also present a simple naive approach whose error tends to zero as the number of machines goes to infinity. Compared with the lower bound in [Salehkaleybar et al., 2019], the expected error of the MRE-C-log algorithm meets the lower bound up to a poly-logarithmic factor. Moreover, for the case of a constant number of bits per message, we present a simple algorithm whose error goes to zero at rate Õ(m^{−1/2} + n^{−1/2}) when m and n go to infinity simultaneously. We evaluate the performance of the MRE-C-log algorithm on two different machine learning tasks and compare it with the existing methods in [Zhang et al., 2012]. We show via experiments, for the n = 1 regime, that the MRE-C-log algorithm outperforms these algorithms. The observations are also in line with the expected error bounds we give in this paper and those previously available. In particular, in the n = 1 regime, the expected error of the MRE-C-log algorithm goes to zero as the number of machines increases, while the expected errors of the previously available estimators remain lower bounded by a constant.
1.3 Outline
The paper is organized as follows. We begin with a detailed model and problem definition in Section 2. In Section 3, we present our algorithms and main upper bounds. We then report our numerical experiments in Section 4. Finally, in Section 5, we discuss our results and present open problems and directions for future research.
The proofs of the main results and of the optimality of the MRE-C-log algorithm are given in the appendix.
2 Problem Definition
Consider a positive integer d and a collection F of real-valued convex functions over [−1, 1]^d. Let P be an unknown probability distribution over the functions in F. Consider the expected loss function
F(θ) = E_{f∼P}[f(θ)], θ ∈ [−1, 1]^d.  (1)
Our goal is to learn a parameter θ* that minimizes F:
θ* = argmin_{θ ∈ [−1,1]^d} F(θ).  (2)
¹Consider two convex functions f_0(θ) = θ² + θ³/6 and f_1(θ) = (θ − 1)² + (θ − 1)³/6 over [0, 1]. Consider a distribution P that associates probability 1/2 to each function. Then, E_P[f(θ)] = f_0(θ)/2 + f_1(θ)/2, and the optimal solution is θ* = (√15 − 3)/2 ≈ 0.436. On the other hand, in the averaging method proposed in [Zhang et al., 2012], assuming n = 1, the empirical minimizer of each machine is either 0 (if it observes f_0) or 1 (if it observes f_1). Therefore, the server receives messages 0 and 1 with equal probability, and E[θ̂] = 1/2. Hence, E[|θ̂ − θ*|] > 0.06, for all values of m.
Figure 1: A distributed system of m machines, each having access to n independent sample functions from an unknown distribution P. Each machine sends a signal to a server based on its observations. The server receives all signals and outputs an estimate θ̂ for the optimization problem in (2).
The expected loss is to be minimized in a distributed fashion, as follows. We consider a distributed system comprising m identical machines and a server.
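The footnote's counterexample is easy to verify numerically (our own check, not from the paper):

```python
import math

# Numeric check (ours) of the footnote example: for P uniform on
# f0(x) = x^2 + x^3/6 and f1(x) = (x - 1)^2 + (x - 1)^3/6 over [0, 1],
# F'(x) = 0 reduces to 2x^2 + 6x - 3 = 0, i.e. theta* = (sqrt(15) - 3)/2,
# while one-shot averaging with n = 1 returns 1/2 in expectation.
theta_star = (math.sqrt(15) - 3) / 2

def F_prime(x):
    # derivative of (f0(x) + f1(x)) / 2
    return (2 * x + x ** 2 / 2 + 2 * (x - 1) + (x - 1) ** 2 / 2) / 2

assert abs(F_prime(theta_star)) < 1e-12
avg_method_output = 0.5          # server averages minimizers 0 and 1
bias = abs(avg_method_output - theta_star)
assert bias > 0.06               # constant error, independent of m
```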
Each machine i has access to a set of n independently and identically distributed samples {f^i_1, . . . , f^i_n} drawn from the probability distribution P. Based on these observed functions, machine i then sends a signal Y^i to the server. We assume that the length of each signal is limited to b bits. The server then collects the signals Y^1, . . . , Y^m and outputs an estimate of θ*, which we denote by θ̂. See Fig. 1 for an illustration of the system model.²
Assumption 1 We let the following assumptions on F and P be in effect throughout the paper.
• Every f ∈ F is once differentiable and convex.
• Each f ∈ F has bounded and Lipschitz continuous derivatives. More concretely, for any f ∈ F and any θ, θ′ ∈ [−1, 1]^d, we have |f(θ)| ≤ √d, ‖∇f(θ)‖ ≤ 1, and ‖∇f(θ) − ∇f(θ′)‖ ≤ ‖θ − θ′‖.
• The distribution P is such that F (defined in (1)) is strongly convex. More specifically, there is a constant λ > 0 such that for any θ_1, θ_2 ∈ [−1, 1]^d, we have F(θ_2) ≥ F(θ_1) + ∇F(θ_1)^T(θ_2 − θ_1) + λ‖θ_2 − θ_1‖².
• The minimizer of F lies in the interior of the cube [−1, 1]^d. Equivalently, there exists θ* ∈ (−1, 1)^d such that ∇F(θ*) = 0.
3 Algorithms and Main Results
In this section, we propose estimators to minimize the expected loss, organized in a sequence of three subsections. In the first subsection, we consider the case of a constant number of bits per signal transmission, whereas in the last two subsections we allow for log mn bits per signal transmission. For the latter regime, we first present, in Subsection 3.2, a simple naive approach whose estimation error goes to zero for large values of m, even when n = 1.
Afterwards, in Subsection 3.3, we describe our main estimator, establish an upper bound on its estimation error, and show that it is order optimal.
3.1 Constant number of bits per transmission
Here, we consider a simple case with a one-dimensional domain (d = 1) and a one-bit signal per transmission (b = 1). We show that the expected error can be made arbitrarily small as m and n go to infinity simultaneously.
Proposition 1 Suppose that d = 1 and b = 1. There exists a randomized estimator θ̂ such that E[(θ̂ − θ*)²]^{1/2} = O(1/√n + 1/√m).
²The model considered here is similar to the one in [Salehkaleybar et al., 2019].
The proof is given in Appendix A. There, we assume for simplicity that the domain is the [0, 1] interval and propose a simple randomized algorithm in which each machine i first computes an O(1/√n)-accurate estimate θ^i based on its observed functions. It then sends the signal Y^i = 1 with probability θ^i. The server then outputs the average of the received signals as the final estimate. Based on Proposition 1, there is an algorithm that achieves any desired accuracy even with a budget of one bit, provided that m and n go to infinity simultaneously. In contrast, it was shown in Proposition 1 of [Salehkaleybar et al., 2019] that no estimator yields error better than a constant if n = 1 and the number of bits per transmission is a constant independent of m. We conjecture that the bound in Proposition 1 is tight. More concretely, for a constant number of bits per transmission and any randomized estimator θ̂, we have E[(θ̂ − θ*)²]^{1/2} = Ω̃(1/√n + 1/√m).
3.2 A simple naive approach with log mn bits per transmission
We now consider the case where the number of bits per transmission is O(log m).
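A minimal sketch (our own illustration, not the paper's code) of the randomized one-bit scheme behind Proposition 1: each machine quantizes its local estimate into a single Bernoulli bit, and the server averages the received bits.

```python
import random

# Sketch (ours) of the 1-bit scheme behind Proposition 1, on the domain
# [0, 1]: machine i holds an O(1/sqrt(n))-accurate local estimate theta_i
# and sends the single bit Y_i = 1 with probability theta_i; the server
# outputs the average of the received bits, whose expectation equals the
# average of the theta_i.
def one_bit_estimator(local_estimates, seed=0):
    rng = random.Random(seed)
    bits = [1 if rng.random() < th else 0 for th in local_estimates]
    return sum(bits) / len(bits)

rng = random.Random(1)
theta_star, m = 0.7, 20000
# Each machine's local estimate is theta* plus O(1/sqrt(n)) noise,
# clipped to [0, 1]:
local_ests = [min(1.0, max(0.0, theta_star + rng.gauss(0, 0.05)))
              for _ in range(m)]
theta_hat = one_bit_estimator(local_ests)
```

The estimation error here has the two additive components of Proposition 1: the per-machine estimation noise and the O(1/√m) quantization noise of the bits.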
In order to set the stage for our main algorithm given in the next subsection, here we present a simple algorithm and show that its estimation error decays as $O(m^{-1/3})$. The underlying idea is that, unlike existing estimators, in this algorithm each machine encodes in its signal some information about the shape of its observed functions at a point that is not necessarily close to its own private optimum. To simplify the presentation, here we confine our setting to a one-dimensional domain (d = 1), with each machine observing a single sample function (n = 1). The algorithm is as follows. Consider a regular grid of size $m^{1/3}/\\log(m)$ over the [\u22121, 1] interval. Each machine i selects a grid point $\\theta^i$ uniformly at random. The machine then forms a signal comprising two parts: (1) the location of $\\theta^i$, and (2) the derivative of its observed function $f^i$ at $\\theta^i$. In other words, the signal $Y^i$ of the i-th machine is an ordered pair of the form $(\\theta^i, f'^i(\\theta^i))$, where $f'^i(\\theta^i)$ is the derivative of $f^i$ at $\\theta^i$. In this encoding, we use $O(\\log m)$ bits to represent both $\\theta^i$ and $f'^i(\\theta^i)$. At the server, for each grid point $\\theta$, the average of $f'^i$ is computed over all machines i with $\\theta^i = \\theta$. We denote this average by $\\hat{F}'(\\theta)$. The server then outputs a point $\\theta$ that minimizes $|\\hat{F}'(\\theta)|$. This algorithm learns an estimate of the derivative of F, and finds a point that minimizes the size of this derivative. The following proposition shows that the estimation error of this algorithm is $\\tilde{O}(1/m^{1/3})$. The proof is given in Appendix B. Proposition 2 Let $\\hat{\\theta}$ be the output of the above estimator. For any $\\alpha > 1$, $\\Pr\\big(|\\hat{\\theta} - \\theta^*| > 3\\alpha \\log(m)/(\\lambda m^{1/3})\\big) = O\\big(\\exp(-\\alpha^2 \\log^3 m)\\big)$.
Consequently, for any $k \\ge 1$, we have $E\\big[|\\hat{\\theta} - \\theta^*|^k\\big] = O\\big((\\log(m)/m^{1/3})^k\\big)$. We now turn to the general case with arbitrary values of d and n, and present our main estimator. 3.3 The Main Algorithm In this part, we propose our main algorithm and an upper bound on its estimation error. In the proposed algorithm, the transmitted signals are designed such that the server can construct a multi-resolution view of the gradient of the function $F(\\theta)$ around a promising grid point. Accordingly, we call the proposed algorithm \u201cMulti-Resolution Estimator for Convex landscapes with log mn bits communication budget (MRE-C-log)\u201d. The description of MRE-C-log is as follows. Each machine i observes n functions and sends a signal $Y^i$ comprising three parts of the form $(s, p, \\Delta)$. The signals are of length $O(\\log(mn))$ bits, and the three parts s, p, and $\\Delta$ are as follows. (Figure 2: An illustration of grid G and cube $C_s$ centered at point s for d = 2. The point p belongs to $\\tilde{G}^2_s$ and $p'$ is the parent of p.) \u2022 Part s: Consider a grid G with resolution $\\log(mn)/\\sqrt{n}$ over the d-dimensional cube. Each machine i computes the minimizer of the average of its first n/2 observed functions, $\\theta^i = \\arg\\min_{\\theta \\in [-1,1]^d} \\sum_{j=1}^{n/2} f_j^i(\\theta)$. (3) It then lets s be the closest grid point to $\\theta^i$. \u2022 Part p: Let $\\delta \\triangleq 4\\sqrt{d}\\, \\big(\\log^5(mn)/m\\big)^{1/\\max(d,2)}$. (4) Note that $\\delta = \\tilde{O}(m^{-1/\\max(d,2)})$. Let $t = \\log(1/\\delta)$. Without loss of generality, we assume that t is an integer. Let $C_s$ be a d-dimensional cube with edge size $2\\log(mn)/\\sqrt{n}$ centered at s. Consider a sequence of t+1 grids on $C_s$ as follows. For each $l = 0, \\dots$
, t, we partition the cube $C_s$ into $2^{ld}$ smaller equal sub-cubes with edge size $2^{-l+1}\\log(mn)/\\sqrt{n}$. The l-th grid $\\tilde{G}^l_s$ comprises the centers of these smaller cubes. Then, each $\\tilde{G}^l_s$ has $2^{ld}$ grid points. For any point $p'$ in $\\tilde{G}^l_s$, we say that $p'$ is the parent of all $2^d$ points in $\\tilde{G}^{l+1}_s$ that are in the $\\big(2^{-l} \\times 2\\log(mn)/\\sqrt{n}\\big)$-cube centered at $p'$ (see Fig. 2). Thus, each point of $\\tilde{G}^l_s$ (l < t) has $2^d$ children. To select p, we randomly choose an l from $0, \\dots, t$ with probability $2^{(d-2)l}/\\big(\\sum_{j=0}^{t} 2^{(d-2)j}\\big)$. We then let p be a uniformly chosen random grid point in $\\tilde{G}^l_s$. Note that $O(d\\log(1/\\delta)) = O(d\\log(mn))$ bits suffice to identify p uniquely. \u2022 Part $\\Delta$: We let $\\hat{F}^i(\\theta) \\triangleq \\frac{2}{n}\\sum_{j=n/2+1}^{n} f_j^i(\\theta)$, (5) and refer to it as the empirical function of the i-th machine. If the p selected in the previous part is in $\\tilde{G}^0_s$, i.e., p = s, then we set $\\Delta$ to the gradient of $\\hat{F}^i$ at $\\theta = s$. Otherwise, if p is in $\\tilde{G}^l_s$ for $l \\ge 1$, we let $\\Delta \\triangleq \\nabla\\hat{F}^i(p) - \\nabla\\hat{F}^i(p')$, where $p' \\in \\tilde{G}^{l-1}_s$ is the parent of p. Note that $\\Delta$ is a d-dimensional vector whose entries are in the range $\\big(2^{-l}\\sqrt{d}\\log(mn)/\\sqrt{n}\\big) \\times [-1, +1]$. This is due to the Lipschitz continuity of the derivatives of the functions in $\\mathcal{F}$ (cf. Assumption 1) and the fact that $\\|p - p'\\| = 2^{-l}\\sqrt{d}\\log(mn)/\\sqrt{n}$. Hence, we can use $O(d\\log(mn))$ bits to represent $\\Delta$ within accuracy $2\\delta\\log(mn)/\\sqrt{n}$. At the server, we choose an $s^* \\in G$ that has the largest number of occurrences in the received signals.
Then, based on the signals corresponding to $\\tilde{G}^0_{s^*}$, we approximate the gradient of F at $s^*$ as $\\hat{\\nabla}F(s^*) = \\frac{1}{N_{s^*}} \\sum_{\\text{signals of the form } Y^i=(s^*,s^*,\\Delta)} \\Delta$, where $N_{s^*}$ is the number of signals containing $s^*$ in the part p. Then, for any point $p \\in \\tilde{G}^l_{s^*}$ with $l \\ge 1$, we compute $\\hat{\\nabla}F(p) = \\hat{\\nabla}F(p') + \\frac{1}{N_p} \\sum_{\\text{signals of the form } Y^i=(s^*,p,\\Delta)} \\Delta$, (6) where $N_p$ is the number of signals having point p in their second argument. Finally, the server lets $\\hat{\\theta}$ be a grid point p in $\\tilde{G}^t_{s^*}$ with the smallest $\\|\\hat{\\nabla}F(p)\\|$. In the MRE-C-log algorithm, the signals are of length $\\frac{d}{d+1}\\log m + d\\log n$ bits, which is no larger than $d\\log mn$. Please refer to Section 5 for discussions on how the MRE-C-log algorithm can be extended to work under more general communication constraints. Theorem 1 Let $\\hat{\\theta}$ be the output of the above algorithm. Then, $\\Pr\\Big(\\|\\hat{\\theta} - \\theta^*\\| > \\frac{8d\\, \\log^{\\frac{5}{\\max(d,2)}+1}(mn)}{\\lambda\\, m^{\\frac{1}{\\max(d,2)}}\\, n^{\\frac{1}{2}}}\\Big) = \\exp\\big(-\\Omega(\\log^2(mn))\\big)$. The proof is given in Appendix C. The proof goes by first showing that $s^*$ is a closest grid point of G to $\\theta^*$ with high probability. We then show that for any $l \\le t$ and any $p \\in \\tilde{G}^l_{s^*}$, the number of received signals corresponding to p is large enough so that the server obtains a good approximation of $\\nabla F$ at p. Once we have a good approximation $\\hat{\\nabla}F$ of $\\nabla F$ at all points of $\\tilde{G}^t_{s^*}$, a point at which $\\hat{\\nabla}F$ has the minimum norm lies close to the minimizer of F. Corollary 1 Let $\\hat{\\theta}$ be the output of the above algorithm. There is a constant $\\eta > 0$ such that for any $k \\in \\mathbb{N}$, $E\\big[\\|\\hat{\\theta} - \\theta^*\\|^k\\big] < \\eta \\Big(\\frac{8d\\, \\log^{\\frac{5}{\\max(d,2)}+1}(mn)}{\\lambda\\, m^{\\frac{1}{\\max(d,2)}}\\, n^{\\frac{1}{2}}}\\Big)^k$.
Moreover, $\\eta$ can be chosen arbitrarily close to 1 for large enough values of mn. The upper bound in Theorem 1 matches the lower bound in Theorem 2 of [Salehkaleybar et al., 2019] up to a polylogarithmic factor. In this view, the MRE-C-log algorithm has order optimal error. Moreover, as we show in Appendix C, in the course of the computations, the server obtains an approximation $\\hat{F}$ of F such that for any $\\theta$ in the cube $C_{s^*}$, we have $\\|\\nabla\\hat{F}(\\theta) - \\nabla F(\\theta)\\| = \\tilde{O}(m^{-1/d} n^{-1/2})$. Therefore, the server not only finds the minimizer of F, but also obtains an approximation of F at all points inside $C_{s^*}$. In the special case that n = 1, we have $C_{s^*} = [-1, 1]^d$, and as a result, the server acquires an approximation of F over the entire domain. This observation suggests the following insight: in the extreme distributed case (n = 1), finding an $O(m^{-1/d})$-accurate minimizer of $\\nabla F$ is as hard as finding an $O(m^{-1/d})$-accurate approximation of F for all points in the domain. 4 Experiments We evaluated the performance of MRE-C-log on two learning tasks and compared it with the averaging method (AVGM) of [Zhang et al., 2012]. Recall that in AVGM, each machine sends the empirical risk minimizer of its own data to the server, and the average of the received parameters at the server is returned as the output. The first experiment concerns the problem of ridge regression. Here, each sample (X, Y) is generated based on a linear model $Y = X^T\\theta^* + E$, where X, E, and $\\theta^*$ are sampled from $N(0, I_{d\\times d})$, $N(0, 0.01)$, and the uniform distribution over $[0, 1]^d$, respectively. We consider the square loss function with $l_2$ norm regularization: $f(\\theta) = (\\theta^T X - Y)^2 + 0.1\\|\\theta\\|_2^2$.
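The data-generating process of this first experiment is simple to reproduce. The following is a minimal numpy sketch; the function names and the sample count are ours and purely illustrative, not from the paper:

```python
import numpy as np


def sample_ridge_data(theta_star, num_samples, rng):
    """Generate (X, Y) pairs from the linear model Y = X^T theta* + E, with
    X ~ N(0, I_d) and E ~ N(0, 0.01), as in the ridge-regression experiment."""
    d = theta_star.size
    X = rng.standard_normal((num_samples, d))
    Y = X @ theta_star + np.sqrt(0.01) * rng.standard_normal(num_samples)
    return X, Y


def ridge_loss(theta, X, Y):
    """f(theta) = (theta^T X - Y)^2 + 0.1 ||theta||_2^2, averaged over samples."""
    residual = X @ theta - Y
    return np.mean(residual ** 2) + 0.1 * np.sum(theta ** 2)


rng = np.random.default_rng(1)
theta_star = rng.random(2)  # theta* uniform over [0, 1]^d, with d = 2
X, Y = sample_ridge_data(theta_star, 5000, rng)
```

Evaluating `ridge_loss` at the true `theta_star` yields a near-minimal value, which is a quick sanity check on the generator.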
In the second experiment, we perform a logistic regression task, considering a sample vector X generated according to $N(0, I_{d\\times d})$ and labels Y randomly drawn from {\u22121, 1} with probability $\\Pr(Y = 1|X, \\theta^*) = 1/(1 + \\exp(-X^T\\theta^*))$. In both experiments, we consider a two-dimensional domain (d = 2) and assume that each machine has access to one sample (n = 1). (Figure 3: The average error of the MRE-C-log and AVGM algorithms versus the number of machines in two different learning tasks: (a) ridge regression, (b) logistic regression.) In Fig. 3, the average of $\\|\\hat{\\theta} - \\theta^*\\|_2$ is computed over 100 instances for different numbers of machines in the range $[10^4, 10^6]$. Both experiments suggest that the average error of MRE-C-log keeps decreasing as the number of machines increases. This is consistent with the result in Theorem 1, according to which the expected error of MRE-C-log is upper bounded by $\\tilde{O}(1/\\sqrt{mn})$. It is evident from the error curves that MRE-C-log outperforms the AVGM algorithm in both tasks. This is because when m is much larger than n, the expected error of the AVGM algorithm typically scales as O(1/n), independently of m. 5 Discussion We studied the problem of statistical optimization in a distributed system with one-shot communications. We proposed an algorithm, called MRE-C-log, with $O(\\log(mn))$ bits per message, and showed that its expected error is optimal up to a poly-logarithmic factor. Aside from being order optimal, the MRE-C-log algorithm has the advantage over existing estimators that its error tends to zero as the number of machines goes to infinity, even when the number of samples per machine is upper bounded by a constant.
This property is in line with the outperformance of the MRE-C-log algorithm in the $m \\gg n$ regime, as discussed in our experimental results. The main idea behind the MRE-C-log algorithm is that it essentially computes, in an efficient way, an approximation of the gradient of the expected loss over the entire domain. It then outputs a norm-minimizer of this approximate gradient as an estimate of the minimizer of the expected loss. Therefore, MRE-C-log carries out the intricate and seemingly redundant task of approximating the loss function for all points in the domain, in order to resolve the apparently much easier problem of finding a single approximate minimizer of the loss function. In this view, it is quite counterintuitive that such an algorithm is order optimal in terms of expected error and sample complexity. This observation provides the interesting insight that in a distributed system with one-shot communication, finding an approximate minimizer is as hard as finding an approximation of the function derivatives for all points in the domain. Our algorithms and bounds are designed and derived for a broader class of functions with Lipschitz continuous first-order derivatives, compared to the previous works that consider function classes with Lipschitz continuous second- or third-order derivatives. This assumption is both practically important and technically challenging. For example, it is well known that the loss landscapes involved in learning applications and neural networks are highly non-smooth. Therefore, relaxing assumptions on higher-order derivatives is a practically important improvement over the previous works. On the other hand, assuming Lipschitzness only for the first-order derivative renders the problem considerably more difficult.
To see this, note that when n > m, the existing upper bound $O(1/\\sqrt{mn} + 1/n)$ for the case of Lipschitz second derivatives goes below the $\\Omega(m^{-1/d} n^{-1/2})$ lower bound for the case of Lipschitz first derivatives. A drawback of the MRE-C-log algorithm is that each machine requires knowledge of m in order to set the number of levels for the grids. This, however, can be resolved by considering an infinite number of levels, and letting the probability that p is chosen from level l decrease exponentially with l. Moreover, although the communication budget of the MRE-C-log algorithm is $O(d\\log mn)$ bits per signal, the algorithm can be extended to work under more general communication constraints, by dividing each signal into subsignals of length $O(d\\log mn)$, each containing an independent signal of the MRE-C-log algorithm. The expected loss of this modified algorithm can be shown to still match the existing lower bounds up to logarithmic factors. Please refer to Salehkaleybar et al. [2019] for a thorough treatment. We also proposed, for d = 1, an algorithm with a communication budget of one bit per transmission, whose error tends to zero at a rate of $O(1/\\sqrt{m} + 1/\\sqrt{n})$ as m and n go to infinity simultaneously. We conjecture that this algorithm is order-optimal, in the sense that no randomized constant-bit algorithm has expected error smaller than $O(1/\\sqrt{m} + 1/\\sqrt{n})$. There are several open problems and directions for future research. The first group of problems involves the constant-bit regime. It would be interesting to verify whether or not the bound in Proposition 1 is order optimal. Moreover, the constant-bit algorithm in Subsection 3.1 is designed for one-dimensional domains and one bit per transmission. Decent extensions of this algorithm to higher dimensions with vanishing errors under the one-bit-per-transmission constraint seem to be non-trivial.
Investigating the power of more bits per transmission (constants larger than one bit) in reducing the expected error is another interesting direction. Another important group of problems concerns the more restricted class of functions with Lipschitz continuous second-order derivatives. Despite several attempts in the literature, the optimal scaling of the expected error for this class of functions in the $m \\gg n$ regime is still an open problem. Acknowledgments This research was supported by Iran National Science Foundation (INSF) under contract No. 970128." + } + ], + "Saber Salehkaleybar": [ + { + "url": "http://arxiv.org/abs/2205.08253v3", + "title": "Momentum-Based Policy Gradient with Second-Order Information", + "abstract": "Variance-reduced gradient estimators for policy gradient methods have been\none of the main focuses of research in reinforcement learning in recent years,\nas they allow accelerating the estimation process. We propose a\nvariance-reduced policy-gradient method, called SHARP, which incorporates\nsecond-order information into stochastic gradient descent (SGD) using momentum\nwith a time-varying learning rate. The SHARP algorithm is parameter-free, achieving an\n$\\epsilon$-approximate first-order stationary point with $O(\\epsilon^{-3})$\ntrajectories, while using a batch size of $O(1)$ at each iteration.\nUnlike most previous work, our proposed algorithm does not require importance\nsampling, which can compromise the advantage of the variance reduction process.\nMoreover, the variance of the estimation error decays at the fast rate of\n$O(1/t^{2/3})$, where $t$ is the number of iterations. 
Our extensive\nexperimental evaluations show the effectiveness of the proposed algorithm on\nvarious control tasks and its advantage over the state of the art in practice.", + "authors": "Saber Salehkaleybar, Sadegh Khorasani, Negar Kiyavash, Niao He, Patrick Thiran", + "published": "2022-05-17", + "updated": "2023-11-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction Reinforcement Learning (RL) has achieved remarkable success in solving various complex tasks in games (Silver et al., 2017), autonomous driving (Shalev-Shwartz et al., 2016), and robot manipulation (Deisenroth et al., 2013), among other fields. In the RL setting, an agent tries to learn the best actions by interacting with the environment and evaluating its performance based on reward signals. More specifically, in Markov Decision Processes (MDPs), the mathematical formalism for RL, after taking an action, the state changes according to a transition probability model and a reward signal is received based on the action taken and the current state. The main goal of the learner is to find a policy that maps the state space to the action space, maximizing the expected cumulative reward as the objective function. Policy gradient methods (Sutton et al., 2000) are often used for obtaining good policies in MDPs, especially for high-dimensional continuous action spaces. In policy gradient methods, the policy is parameterized by an unknown parameter \u03b8 and is directly optimized using stochastic first-order gradients of the cumulative reward, as it is infeasible to compute the gradient exactly. REINFORCE (Williams, 1992), PGT (Sutton et al., 2000), and GPOMDP (Baxter & Bartlett, 2001) are some classical methods that update the policy by applying a stochastic gradient ascent step.
These methods generally require a large number of trajectories due to the large variance of the gradient estimates, stemming from the randomness of transitions along trajectories. In the RL literature, several methods have been proposed to reduce the variance in policy gradient methods. For instance, Sutton et al. (2000) proposed to consider a baseline in order to reduce the variance of gradient estimation. Konda & Tsitsiklis (2000) presented an actor-critic algorithm that estimates the value function and uses it to mitigate the effect of large variance. Schulman et al. (2015b) proposed GAE to control both bias and variance by exploiting a temporal-difference relation for the advantage function approximation. More recent examples include TRPO (Schulman et al., 2015a), which adds a Kullback-Leibler (KL) divergence penalty term in order to ensure that the updated policy remains close to the current policy, and PPO (Schulman et al., 2017), which uses a clipped surrogate objective function to achieve the same goal. In practice, it has been shown that these algorithms perform better than the vanilla policy gradient method. Most stochastic-gradient-based policy methods need $O(\\epsilon^{-4})$ trajectories in order to achieve an \u03f5-approximate first-order stationary point (\u03f5-FOSP) of the objective function J(\u03b8), i.e., $E[\\|\\nabla J(\\theta)\\|] \\le \\epsilon$ (Ghadimi & Lan, 2013; Shani et al., 2020). In recent years, there have been several attempts to reduce the variance of policy gradients by adapting variance reduction techniques previously proposed in the supervised learning context (a list of previous work is given in Section 4). These methods can achieve a sample complexity of $O(\\epsilon^{-3})$ in the RL setting, and this rate is optimal in stochastic optimization under some mild assumptions on the objective function and the stochastic gradients (Arjevani et al., 2020).
In supervised learning problems, the objective function is oblivious, in the sense that the randomness that selects the loss function does not depend on the parameters that are to be optimized. On the other hand, in the RL setting, the distribution over trajectories is non-stationary and changes over time as the parameters of the policy are updated. To resolve this issue, most previous work utilized importance sampling techniques, which may degrade the effectiveness of the variance reduction process (Yang et al., 2019). Moreover, to analyze the convergence rate of these methods, a strong assumption on the variance of the importance sampling weights is made, which may not hold in the RL setting. Most importantly, these methods often need huge batch sizes, which is highly undesirable in practice. In this paper, we propose the Stochastic Hessian Aided Recursive Policy gradient (SHARP) algorithm, which incorporates second-order information into SGD with momentum. Our main contributions are summarized as follows: \u2022 Under some common regularity assumptions on the parameterized policy, SHARP reaches an \u03f5-FOSP with a sample complexity of $O(\\epsilon^{-3})$. Moreover, our algorithm does not use importance sampling techniques. As a result, we can relax the strong additional assumptions on importance sampling weights customary in the literature. \u2022 The batch size of SHARP is O(1) and it does not require checkpoints, thanks to the use of a second-order term in the updates and a time-varying learning rate and momentum weight. \u2022 SHARP is parameter-free in the sense that the initial learning rate and momentum weight do not depend on the parameters of the problem. Moreover, the variance of the estimation error decays at the rate of $O(1/t^{2/3})$, where t is the number of iterations. \u2022 Our experimental results show that SHARP outperforms the state of the art on various control tasks, with remarkable performance in more complex environments.
The rest of this paper is organized as follows. In Section 2, we define the problem and provide some notation and background on variance reduction methods in supervised learning. In Section 3, we describe the proposed algorithm and analyze its convergence rate. In Section 4, we give a summary of previous work and discuss how our proposed algorithm differs from it. In Section 5, we evaluate the performance of the proposed algorithm against the related work experimentally. Finally, we conclude the paper in Section 6. 2 Preliminaries 2.1 Notations and problem definition Consider a discrete-time MDP $M = \\{S, A, P, R, \\gamma, \\rho\\}$ that models how an agent interacts with a given environment. S and A are the state space and the action space, respectively. $P(s'|s, a)$ denotes the probability of transitioning to state $s'$ from s after taking action a. The reward function R returns reward $r(s, a)$ when action a is taken in state s. The parameter $\\gamma \\in (0, 1)$ denotes the discount factor and $\\rho$ is the distribution of the starting state. Actions are chosen according to a policy $\\pi$, where $\\pi(a|s)$ is the probability of taking action a in a given state s. Here, we assume that the policy is parameterized with a vector $\\theta \\in \\mathbb{R}^d$ and use the shorthand notation $\\pi_\\theta$ for $\\pi_\\theta(a|s)$. For a given time horizon H, according to policy $\\pi_\\theta$, the agent observes a sequence of state-action pairs $\\tau = (s_0, a_0, \\dots, s_{H-1}, a_{H-1})$, called a trajectory. The probability of observing a trajectory $\\tau$ under a policy $\\pi_\\theta$ is: $p(\\tau|\\pi_\\theta) = \\rho(s_0) \\prod_{h=0}^{H-1} P(s_{h+1}|s_h, a_h)\\, \\pi_\\theta(a_h|s_h)$. (1) The discounted cumulative reward of a trajectory $\\tau$ is defined as $R(\\tau) := \\sum_{h=0}^{H-1} \\gamma^h r(s_h, a_h)$, and the expected return of a policy $\\pi_\\theta$ is: $J(\\theta) := E_{\\tau \\sim \\pi_\\theta}[R(\\tau)]$. (2) The main goal in policy-based RL is to find $\\theta^* = \\arg\\max_\\theta J(\\theta)$.
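For concreteness, $R(\tau)$ and a Monte-Carlo estimate of $J(\theta)$ from equation (2) can be sketched as follows. The "environment" here is our own stand-in that emits i.i.d. Bernoulli rewards over a fixed horizon, purely for illustration; nothing about it comes from the paper:

```python
import numpy as np


def discounted_return(rewards, gamma):
    """R(tau) = sum_h gamma^h r(s_h, a_h) for one trajectory."""
    return sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards))


def estimate_J(sample_trajectory, gamma, num_trajectories, rng):
    """Monte-Carlo estimate of J(theta) = E_tau[R(tau)]."""
    returns = [discounted_return(sample_trajectory(rng), gamma)
               for _ in range(num_trajectories)]
    return float(np.mean(returns))


# Toy "environment": rewards are i.i.d. Bernoulli(0.5) over a horizon H = 10.
H, gamma = 10, 0.9
toy_sampler = lambda rng: rng.integers(0, 2, size=H)
rng = np.random.default_rng(2)
J_hat = estimate_J(toy_sampler, gamma, 4000, rng)
```

For this toy reward process, $J(\theta) = 0.5\,(1-\gamma^H)/(1-\gamma)$ in closed form, so the Monte-Carlo estimate can be checked directly.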
As in many applications, J(\u03b8) is non-convex, and we settle instead for obtaining an \u03f5-FOSP $\\hat{\\theta}$ such that $E[\\|\\nabla J(\\hat{\\theta})\\|] \\le \\epsilon$. It can be shown that: $\\nabla J(\\theta) = E\\big[\\sum_{h=0}^{H-1} \\Psi_h(\\tau) \\nabla \\log \\pi_\\theta(a_h|s_h)\\big]$, (3) where $\\Psi_h(\\tau) = \\sum_{t=h}^{H-1} \\gamma^t r(s_t, a_t)$. Therefore, for any trajectory $\\tau$, $g(\\tau; \\theta) := \\sum_{h=0}^{H-1} \\Psi_h(\\tau) \\nabla \\log \\pi_\\theta(a_h|s_h)$ is an unbiased estimator of $\\nabla J(\\theta)$. The vanilla policy gradient method updates \u03b8 as follows: $\\theta \\leftarrow \\theta + \\eta\\, g(\\tau; \\theta)$, (4) where $\\eta$ is the learning rate. The Hessian matrix of J(\u03b8) can be written as follows (Shen et al., 2019): $\\nabla^2 J(\\theta) = E[\\nabla\\Phi(\\theta; \\tau) \\nabla \\log p(\\tau|\\pi_\\theta)^T + \\nabla^2 \\Phi(\\theta; \\tau)]$, (5) where $\\Phi(\\theta; \\tau) = \\sum_{h=0}^{H-1} \\sum_{t=h}^{H-1} \\gamma^t r(s_t, a_t) \\log \\pi_\\theta(a_h|s_h)$. For a given trajectory $\\tau$, $B(\\tau; \\theta) := \\nabla\\Phi(\\theta; \\tau) \\nabla \\log p(\\tau|\\pi_\\theta)^T + \\nabla^2 \\Phi(\\theta; \\tau)$ is an unbiased estimator of the Hessian matrix. 2.2 Variance-reduced methods for gradient estimation Variance-reduced methods for estimating the gradient vector were originally proposed for the stochastic optimization setting: $\\min_{\\theta \\in \\mathbb{R}^d} E_{z \\sim p(z)}[f(\\theta, z)]$, (6) where a sample z is drawn from a distribution p(z) and $f(\\cdot, z)$ is commonly assumed to be a smooth and non-convex function of \u03b8. This setting is mainly considered in the supervised learning context, where \u03b8 corresponds to the parameters of the training model and z = (x, y) is the training sample, with x the feature vector of the sample and y the corresponding label. In this setting, the distribution p(z) is invariant with respect to the parameter \u03b8. The common approach for reducing the variance of gradient estimation is to reuse past gradient vectors.
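Returning to the estimator $g(\tau; \theta)$ defined above (equation (3)), it can be computed with a reversed cumulative sum for the tail terms $\Psi_h$. The sketch below is our own hypothetical instantiation for a simple Bernoulli policy $\pi_\theta(a=1|s) = \sigma(\theta^T s)$; the policy form and all numbers are assumptions, not the paper's:

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def grad_log_pi(theta, s, a):
    """Score function of a Bernoulli policy pi_theta(a=1|s) = sigmoid(theta^T s)."""
    return (a - sigmoid(theta @ s)) * s


def g_estimate(theta, states, actions, rewards, gamma):
    """GPOMDP-style unbiased gradient estimate (equation 3):
    g(tau; theta) = sum_h Psi_h(tau) * grad log pi_theta(a_h | s_h),
    with Psi_h(tau) = sum_{t >= h} gamma^t r(s_t, a_t)."""
    H = len(rewards)
    discounted = gamma ** np.arange(H) * np.asarray(rewards, dtype=float)
    psi = np.cumsum(discounted[::-1])[::-1]  # Psi_h as tail sums of gamma^t r_t
    return sum(psi[h] * grad_log_pi(theta, states[h], actions[h])
               for h in range(H))


rng = np.random.default_rng(3)
theta = np.zeros(4)
H, gamma = 5, 0.95
states = rng.standard_normal((H, 4))
actions = rng.integers(0, 2, size=H)
rewards = rng.random(H)
g = g_estimate(theta, states, actions, rewards, gamma)
```

A useful sanity check on the implementation: a trajectory with all-zero rewards must produce an exactly zero gradient estimate.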
The pseudo-code for this general framework for variance reduction is given in Algorithm 1. Algorithm 1 Common framework in variance reduction methods. 1: for $t = 0, \\dots, T-1$ do 2: $h_t = \\frac{1}{|B_{check}|}\\sum_{z\\in B_{check}} \\nabla f(\\theta_t, z)$ if $t \\equiv 0 \\pmod Q$, (7) and $h_t = h_{t-1} + \\frac{1}{|B|}\\sum_{z\\in B} \\big(\\nabla f(\\theta_t, z) - \\nabla f(\\theta_{t-1}, z)\\big)$ otherwise. (8) 3: $\\theta_{t+1} \\leftarrow \\theta_t - \\eta h_t$ 4: end for 5: Return $\\theta_t$ with t chosen randomly from $\\{0, \\dots, T-1\\}$. After every pre-determined number of iterations Q, there is a checkpoint, at which an unbiased estimate of the gradient, denoted by $h_t$, is obtained at the current parameter $\\theta_t$ by taking a batch of samples $B_{check}$. Between any two consecutive checkpoints, the gradient at the parameter $\\theta_t$ is estimated according to equation 8 by taking a batch of samples B drawn from p(z). The above framework appeared in several previous variance reduction methods in stochastic optimization, such as SARAH (Nguyen et al., 2017) and SPIDER (Fang et al., 2018). Zhang (2021) discusses how to choose the size of the batches and the parameters Q and $\\eta$. In fact, there is a trade-off between $\\eta$ and |B|. If a small batch size is used, then $\\eta$ is also required to be small. The two extremes are SpiderBoost (Wang et al., 2019) ($|B| = O(\\epsilon^{-1})$, $\\eta = O(1)$) and SARAH (Nguyen et al., 2017) ($|B| = O(1)$, $\\eta = O(\\epsilon)$). Very recently, Li et al. (2021) proposed PAGE, where in each iteration t, either a batch of samples is taken with probability $p_t$ to update the gradient, or, with probability $1 - p_t$, the previous estimate of the gradient is used with a small adjustment. In the context of RL, a sample z corresponds to a trajectory $\\tau$. Unlike supervised learning, the distribution of these trajectories depends on the parameters of the policy generating them.
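In the supervised setting just described, Algorithm 1 can be instantiated in a few lines. The following is a toy sketch under assumptions of our own (a scalar quadratic per-sample loss $f(\theta, z) = (\theta - z)^2$, illustrative batch sizes and step size), not the configuration used in any cited method:

```python
import numpy as np


def variance_reduced_sgd(data, theta0, eta, Q, T, batch, rng):
    """Toy instantiation of Algorithm 1: at checkpoints (t = 0 mod Q) the
    estimate h_t is recomputed from a larger batch B_check (equation 7);
    in between, it is updated recursively from gradient differences on a
    small batch B (equation 8). Per-sample loss: f(theta, z) = (theta - z)^2."""
    grad = lambda theta, z: 2.0 * (theta - z)
    theta_prev, theta = theta0, theta0
    h = 0.0
    for t in range(T):
        if t % Q == 0:
            z_check = rng.choice(data, size=10 * batch)            # B_check
            h = grad(theta, z_check).mean()                        # equation (7)
        else:
            z = rng.choice(data, size=batch)                       # B
            h = h + (grad(theta, z) - grad(theta_prev, z)).mean()  # equation (8)
        theta_prev, theta = theta, theta - eta * h
    return theta


rng = np.random.default_rng(5)
data = rng.standard_normal(10000) + 2.0  # minimizer of E[(theta - z)^2] is ~2
theta_hat = variance_reduced_sgd(data, theta0=0.0, eta=0.1, Q=50, T=500,
                                 batch=8, rng=rng)
```

Because the toy gradient is linear in theta, the difference term in equation (8) is noiseless here, which makes the variance-reduction effect of reusing past gradients easy to see.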
Therefore, in the second term in the sum in equation 8, namely $\\nabla f(\\theta_{t-1}, z)$, the sample z (or trajectory $\\tau$ in the RL context) is generated according to policy $\\pi_{\\theta_t}$, while $\\theta_{t-1}$ is the parameter of the policy at the previous iteration. In the RL setting, the importance sampling technique is commonly used to account for this distribution shift as follows: $h_t = h_{t-1} + \\frac{1}{|B|}\\sum_{\\tau\\in B}\\big(g(\\theta_t; \\tau) - w(\\tau|\\theta_t, \\theta_{t-1})\\, g(\\theta_{t-1}; \\tau)\\big)$, (9) with the weights $w(\\tau|\\theta_t, \\theta_{t-1}) = \\prod_{h=0}^{H-1} \\frac{\\pi_{\\theta_{t-1}}(a_h|s_h)}{\\pi_{\\theta_t}(a_h|s_h)}$. As we shall see in Section 4, nearly all variance reduction approaches in RL employing the general framework of Algorithm 1 use an importance sampling technique. This could significantly degrade the performance of the approach, as the gradient estimates depend heavily on these weights (Yang et al., 2019). Besides, these variance reduction methods often need giant batch sizes at checkpoints, which is not practical in the RL setting. Finally, the hyper-parameters of these approaches must be selected carefully, as they often use non-adaptive learning rates. To resolve the issue of requiring huge batch sizes, in the context of stochastic optimization, a variance reduction method called STORM (Cutkosky & Orabona, 2019) was proposed, with the following update rule: $h_t = (1-\\alpha_t)h_{t-1} + \\alpha_t \\nabla f(\\theta_t, z_t) + (1-\\alpha_t)\\big(\\nabla f(\\theta_t, z_t) - \\nabla f(\\theta_{t-1}, z_t)\\big)$, $\\theta_{t+1} \\leftarrow \\theta_t - \\eta_t h_t$, (10) where $z_t$ is the sample drawn at iteration t, and $\\alpha_t$ and $\\eta_t$ are the adaptive momentum weight and learning rate, respectively. Compared with SGD with momentum, the main difference in STORM is the correction term $\\nabla f(\\theta_t, z_t) - \\nabla f(\\theta_{t-1}, z_t)$ in equation 10.
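A minimal, self-contained sketch of the STORM recursion (equation 10) on a toy one-dimensional stochastic quadratic follows. For simplicity we freeze $\alpha_t$ and $\eta_t$ at constants rather than using the adaptive schedules; the objective and all constants are our own illustrative choices:

```python
import numpy as np


def storm_step(h_prev, theta_t, theta_prev, grad, z_t, alpha, eta):
    """One STORM update (equation 10):
    h_t = (1-a) h_{t-1} + a grad f(theta_t, z_t)
          + (1-a) (grad f(theta_t, z_t) - grad f(theta_{t-1}, z_t)).
    Note that both gradients in the correction use the SAME sample z_t."""
    g_t = grad(theta_t, z_t)
    g_prev = grad(theta_prev, z_t)
    h_t = (1 - alpha) * h_prev + alpha * g_t + (1 - alpha) * (g_t - g_prev)
    return h_t, theta_t - eta * h_t


# Toy objective: E_z[(theta - z)^2] with z ~ N(1, 1); the minimizer is theta* = 1.
grad = lambda theta, z: 2.0 * (theta - z)
rng = np.random.default_rng(4)
theta_prev, theta = 5.0, 5.0
h = grad(theta, rng.standard_normal() + 1.0)
for _ in range(2000):
    z = rng.standard_normal() + 1.0
    h, theta_new = storm_step(h, theta, theta_prev, grad, z, alpha=0.1, eta=0.05)
    theta_prev, theta = theta, theta_new
```

Because the toy gradient is linear in theta, the correction term is exactly $2(\theta_t - \theta_{t-1})$ with no sample noise, which is what lets the recursion keep a low-variance running estimate.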
Cutkosky & Orabona (2019) showed that, by adaptively updating $\\alpha_t$ and $\\eta_t$ based on the norm of the stochastic gradient in previous iterations, STORM can achieve the same convergence rate as previous methods without requiring checkpoints or a huge batch size. Later, a parameter-free version, called STORM+ (Levy et al., 2021), was introduced using a new adaptive learning rate and momentum weight. However, to adapt these methods to the RL setting, we still need to use importance sampling techniques because of the term $\\nabla f(\\theta_{t-1}, z_t)$. Recently, Tran & Cutkosky (2022) showed that the correction term can be replaced with a second-order term $\\nabla^2 f(\\theta_t, z_t)(\\theta_t - \\theta_{t-1})$, under the additional assumption that the objective function is second-order smooth. Besides, the above Hessian-vector product can be computed in O(Hd) (similar to the computational complexity of obtaining the gradient vector) by executing Pearlmutter\u2019s algorithm (Pearlmutter, 1994). Algorithm 2 The SHARP algorithm. Input: Initial point $\\theta_0$, parameters $\\alpha_0$, $\\eta_0$, and number of iterations T. 1: Sample trajectory $\\tau_0$ with policy $\\pi_{\\theta_0}$ 2: $v_0 \\leftarrow g(\\tau_0; \\theta_0)$ 3: $\\theta_1 \\leftarrow \\theta_0 + \\eta_0\\, v_0/\\|v_0\\|$ 4: for $t = 1, \\dots, T-1$ do 5: Sample $b_t \\sim U(0, 1)$ 6: $\\theta^b_t \\leftarrow b_t\\theta_t + (1-b_t)\\theta_{t-1}$ 7: Sample trajectories $\\tau_t$ and $\\tau^b_t$ with policies $\\pi_{\\theta_t}$ and $\\pi_{\\theta^b_t}$, respectively 8: $\\eta_t \\leftarrow \\eta_0/t^{2/3}$, $\\alpha_t \\leftarrow \\alpha_0/t^{2/3}$ 9: $v_t \\leftarrow (1-\\alpha_t)\\big(v_{t-1} + B(\\tau^b_t; \\theta^b_t)(\\theta_t - \\theta_{t-1})\\big) + \\alpha_t g(\\tau_t; \\theta_t)$ 10: $\\theta_{t+1} \\leftarrow \\theta_t + \\eta_t\\, v_t/\\|v_t\\|$ 11: end for 12: Return $\\theta_t$ with t chosen randomly from $\\{0, \\dots, T-1\\}$
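The Hessian-vector products discussed above never require forming the full Hessian. In autodiff frameworks this is Pearlmutter's trick (a double backward pass); as a framework-free stand-in, a central finite difference of the gradient approximates the same quantity. This is an illustrative sketch with an assumed quadratic test function, not the paper's implementation:

```python
import numpy as np


def hvp_fd(grad, theta, v, eps=1e-5):
    """Hessian-vector product via a central finite difference of the gradient:
    H(theta) v ~= (grad(theta + eps v) - grad(theta - eps v)) / (2 eps).
    A stand-in for Pearlmutter's O(d)-cost trick, which autodiff frameworks
    implement exactly without finite differencing."""
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2.0 * eps)


# Check on a quadratic f(theta) = 0.5 theta^T A theta, whose Hessian is A.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
grad = lambda theta: A @ theta
theta = np.array([0.3, -0.7])
v = np.array([1.0, 2.0])
hv = hvp_fd(grad, theta, v)
```

For the quadratic, the gradient is linear, so the finite difference reproduces $A v$ up to floating-point rounding.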
3 The SHARP Algorithm In this section, we propose the SHARP algorithm, which incorporates second-order information into SGD with momentum, and provide a convergence guarantee. The SHARP algorithm is presented in Algorithm 2. At each iteration t, we draw a sample $b_t$ from a uniform distribution on the interval [0, 1] (line 5) and then obtain $\\theta^b_t$ as the linear combination of $\\theta_{t-1}$ and $\\theta_t$ with coefficients $1-b_t$ and $b_t$ (line 6). In line 7, we sample trajectories $\\tau_t$ and $\\tau^b_t$ according to policies $\\pi_{\\theta_t}$ and $\\pi_{\\theta^b_t}$, respectively. Afterwards, we update the momentum weight $\\alpha_t$ and the learning rate $\\eta_t$ (line 8) and then compute the estimate of the gradient at time t, i.e., $v_t$, using the Hessian-vector product $B(\\tau^b_t; \\theta^b_t)(\\theta_t - \\theta_{t-1})$ and the stochastic gradient $g(\\tau_t; \\theta_t)$ (line 9). Finally, we update $\\theta_t$ based on a normalized version of $v_t$ in line 10. Remark 1 By choosing a point uniformly at random on the line between $\\theta_{t-1}$ and $\\theta_t$, we can ensure that $B(\\tau^b_t; \\theta^b_t)(\\theta_t - \\theta_{t-1})$ is an unbiased estimate of $\\nabla J(\\theta_t) - \\nabla J(\\theta_{t-1})$ (see equation 22 in Appendix B). As mentioned before, in the context of stochastic optimization, Tran & Cutkosky (2022) used the second-order term $\\nabla^2 f(\\theta_t, z_t)(\\theta_t - \\theta_{t-1})$, which is biased, as the Hessian-vector product is evaluated at the point $\\theta_t$. As a result, in order to provide the convergence guarantee, it is further assumed in (Tran & Cutkosky, 2022) that the objective function is second-order smooth. Remark 2 To give an intuition for why the second-order term is helpful in the update in line 9, consider the following error term: $\\epsilon_t = v_t - \\nabla J(\\theta_t)$.
(11) We can rewrite this error term as follows:

$\epsilon_t = (1 - \alpha_t)\big(v_{t-1} - \nabla J(\theta_t) + B(\tau^b_t; \theta^b_t)(\theta_t - \theta_{t-1})\big) + \alpha_t\big(g(\tau_t; \theta_t) - \nabla J(\theta_t)\big).$ (12)

Now, for a moment, suppose that $E[v_{t-1}] = E[\nabla J(\theta_{t-1})]$ (with total expectation on both sides). Then,

$E\big[v_{t-1} - \nabla J(\theta_t) + B(\tau^b_t; \theta^b_t)(\theta_t - \theta_{t-1})\big] = 0.$ (13)

As $v_0$ is an unbiased estimate of the gradient at $\theta_0$, we can easily show by induction, using the above equation, that $E[v_t] = E[\nabla J(\theta_t)]$ for any $t \geq 0$. In the next part, we provide a theoretical guarantee on the convergence rate of the SHARP algorithm.

3.1 Convergence Analysis

In this part, we analyze the convergence rate of Algorithm 2 under a bounded reward function and some regularity assumptions on the policy $\pi_\theta$.

Assumption 1 (Bounded reward) For all $s \in S$ and $a \in A$, $|R(s, a)| < R_0$, where $R_0 > 0$ is some constant.

Assumption 2 (Parameterization regularity) There exist constants $G, L > 0$ such that for any $\theta \in \mathbb{R}^d$ and for any $s \in S$, $a \in A$: (a) $\|\nabla \log \pi_\theta(a|s)\| \leq G$, (b) $\|\nabla^2 \log \pi_\theta(a|s)\| \leq L$.

Assumptions 1 and 2 are common in the RL literature (Papini et al., 2018; Shen et al., 2019) for analyzing the convergence of policy gradient methods. Under these assumptions, the following upper bounds can be derived on $E[\|g(\tau; \theta) - \nabla J(\theta)\|^2]$ and $E[\|B(\tau; \theta) - \nabla^2 J(\theta)\|^2]$.

Lemma 1 (Shen et al. (2019)) Under Assumptions 1 and 2:

$E[\|g(\tau; \theta) - \nabla J(\theta)\|^2] \leq \sigma_g^2, \qquad E[\|B(\tau; \theta) - \nabla^2 J(\theta)\|^2] \leq \sigma_B^2,$ (14)

where $\sigma_g^2 = \frac{G^2 R_0^2}{(1-\gamma)^4}$ and $\sigma_B^2 = \frac{H^2 G^4 R_0^2 + L^2 R_0^2}{(1-\gamma)^4}$.
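The unbiasedness used in the induction above (Remark 1 and equation 13) can be checked numerically: averaging the Hessian-vector term over $b \sim U(0, 1)$ recovers the gradient difference, since $\int_0^1 \nabla^2 J(b\theta_t + (1-b)\theta_{t-1})(\theta_t - \theta_{t-1})\, db = \nabla J(\theta_t) - \nabla J(\theta_{t-1})$. The sketch below uses a toy one-dimensional objective $J(\theta) = \theta^3$, which is an illustrative choice and not from the paper.

```python
import random

def dJ(th):
    return 3.0 * th ** 2        # gradient of theta^3

def d2J(th):
    return 6.0 * th             # Hessian of theta^3

def mc_correction(theta_prev, theta_t, n=200_000, seed=0):
    # Monte Carlo average of d2J(theta_b) * (theta_t - theta_prev)
    # with theta_b = b * theta_t + (1 - b) * theta_prev, b ~ U(0, 1).
    rng = random.Random(seed)
    diff = theta_t - theta_prev
    total = 0.0
    for _ in range(n):
        b = rng.random()
        total += d2J(b * theta_t + (1 - b) * theta_prev) * diff
    return total / n
```

With `theta_prev = 0` and `theta_t = 1`, the estimate concentrates around `dJ(1.0) - dJ(0.0) = 3`.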
Based on these bounds, we can provide the following guarantee on the convergence rate of the SHARP algorithm. All proofs are provided in the appendix.

Theorem 1 Suppose that the initial momentum weight $\alpha_0 \in (2/3, 1]$ and the initial learning rate $\eta_0 > 0$. Under Assumptions 1 and 2, Algorithm 2 guarantees that:

$E\left[\frac{1}{T}\sum_{t=1}^{T} \|\nabla J(\theta_t)\|\right] \leq \frac{8\sqrt{C} + 9C_J/\eta_0}{T^{1/3}} + \frac{6\sigma_B \eta_0}{T^{2/3}},$ (15)

where $C = 3\alpha_0\big(48\sigma_B^2\eta_0^2/\alpha_0 + (6\alpha_0 + 1/\alpha_0)\sigma_g^2\big)/(3\alpha_0 - 2)$ and $C_J = R_0/(1-\gamma)$.

Corollary 1 The right-hand side of equation 15 is dominated by the first term. If we set $\eta_0$ in the order of $\sqrt{C_J/\sigma_B}$, then the number of trajectories for achieving an $\epsilon$-FOSP would be $O\big(\frac{1}{(1-\gamma)^2\epsilon^3}\big)$, where we assume that the time horizon $H$ is set in the order of $1/(1-\gamma)$.

Remark 3 Along the iterations of the SHARP algorithm, it can be shown that the following inequality holds for any $t \geq 1$ (see equation 20 in Appendix B):

$E[\|\epsilon_t\|^2] \leq (1 - \alpha_t)E[\|\epsilon_{t-1}\|^2] + O(\eta_t^2).$ (16)

Therefore, the variance of the estimation error decays at the rate $O(1/t^{2/3})$ (see Appendix C for the proof). To the best of our knowledge, existing variance reduction methods only guarantee the decay of the accumulative variance. This appealing property of SHARP is largely due to the use of an unbiased Hessian-aided gradient estimator and normalized gradient descent. Moreover, as a byproduct of these desirable properties, our convergence analysis turns out to be simpler than in existing work (Cutkosky & Orabona, 2019; Tran & Cutkosky, 2022). This could be of independent interest for a better theory of variance-reduced methods.

Remark 4 The SHARP algorithm is parameter-free in the sense that $\alpha_0$ and $\eta_0$ are constants that do not depend on other parameters of the problem.
Therefore, for any choice of $2/3 < \alpha_0 \leq 1$ and $\eta_0 > 0$, we can guarantee convergence to an $\epsilon$-FOSP with a sample complexity of $O(\epsilon^{-3})$. However, in practice, it is desirable to tune these constants to obtain smaller constants in the numerators of the convergence rates in equation 15. For instance, $\sigma_B$ might be large in some RL settings, and we can control the constant in the first term on the right-hand side of equation 15 by tuning $\eta_0$. It is noteworthy that STORM+ is also parameter-free, but it requires an adaptive learning rate and momentum weight that depend on the stochastic gradients of previous iterations.

Remark 5 Regarding the dependency on $\epsilon$, in the context of stochastic optimization, Arjevani et al. (2020) have shown that under some mild assumptions on the objective function and the stochastic gradient, the rate of $O(1/\epsilon^3)$ for obtaining an $\epsilon$-FOSP is optimal and cannot be improved with stochastic $p$-th order methods for $p \geq 2$.

4 Related Work

In recent years, several variance-reduced methods have been proposed to accelerate existing PG methods. Papini et al. (2018) and Xu et al. (2020) proposed the SVRPG algorithm based on SVRG (Johnson & Zhang, 2013), with sample complexities of $O(1/\epsilon^4)$ and $O(1/\epsilon^{10/3})$, respectively. This algorithm requires importance sampling techniques as well as the following additional assumption to guarantee convergence to an $\epsilon$-FOSP:

- Bounded variance of importance sampling weights: For any trajectory $\tau$, it is assumed that:

$\mathrm{Var}\left(\frac{p(\tau|\pi_{\theta_1})}{p(\tau|\pi_{\theta_2})}\right) \leq W, \quad \forall \theta_1, \theta_2 \in \mathbb{R}^d,$ (17)

where $W < \infty$ is a constant. The above assumption is fairly strong, as the importance sampling weight can grow exponentially with the horizon length $H$ (Zhang et al., 2021). In order to remove importance sampling weights, Shen et al. (2019) proposed the HAPG algorithm, which uses second-order information and achieves a better sample complexity of $O(1/\epsilon^3)$.
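The exponential growth cited above can be illustrated with a toy computation: the importance weight of a trajectory is a product of $H$ per-step likelihood ratios, and for i.i.d. unit-mean ratios its variance is $E[r^2]^H - 1$. The per-step distribution below (ratio 0.5 or 1.5 with equal probability, so $E[r] = 1$ and $E[r^2] = 1.25$) is an illustrative choice, not taken from the paper.

```python
def is_weight_variance(H, second_moment=1.25):
    # Variance of a product of H i.i.d. unit-mean ratios: E[r^2]^H - 1.
    return second_moment ** H - 1.0

# Variance explodes as the horizon grows, which is why a uniform bound W
# in assumption (17) is restrictive for long horizons.
variances = [is_weight_variance(H) for H in (1, 10, 50)]
```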
However, HAPG still needs checkpoints and large batch sizes of $|B| = O(1/\epsilon)$ and $|B_{check}| = O(1/\epsilon^2)$. In Table 1, we compare the main variance-reduced policy gradient methods achieving an $\epsilon$-FOSP in terms of sample complexity and batch size.¹ In this table, all the variance reduction methods proposed after HAPG (Shen et al., 2019) achieve a similar sample complexity. The orders of batch sizes are also the same as in HAPG. Xu et al. (2019) proposed SRVR-PG, which uses stochastic path-integrated differential estimators for variance reduction. This algorithm uses importance sampling weights, and the required batch sizes are of the order $|B| = O(1/\sqrt{\epsilon})$ and $|B_{check}| = O(1/\epsilon)$. Later, Pham et al. (2020) proposed HSPGA by adapting the SARAH estimator to reduce the variance of REINFORCE. HSPGA still needs importance sampling weights, but the batch size is reduced to $O(1)$. Huang et al. (2020) proposed three variants of momentum-based policy gradient (called MBPG), which are based on the STORM algorithm (Cutkosky & Orabona, 2019). Thus, the required batch size is of the order $O(1)$, similarly to STORM. However, these methods still need importance sampling weights. In (Zhang et al., 2020), an update for the stochastic gradient estimate similar to the one in SHARP was explored for Frank-Wolfe type algorithms in the context of constrained optimization. Later, Zhang et al. (2021) proposed TSIVR-PG with a gradient truncation mechanism in order to resolve some of the issues pertaining to the use of importance sampling weights. However, their convergence analysis is restricted to soft-max policies with some specific assumptions on the parameterization functions. More recently, two mirror-descent methods based on the Bregman divergence, called VR-BGPO (Huang et al., 2022) and VR-MPO (Yang et al., 2022), have been proposed, achieving an $\epsilon$-FOSP if the mirror map in the Bregman divergence is the $\ell_2$-norm.
Very recently, based on PAGE (Li et al., 2021), Gargiani et al. (2022) proposed the PAGE-PG algorithm, which takes a batch of samples of size $O(\epsilon^{-2})$ for updating the parameters with probability $p_t$, or reuses the previous gradient estimate with a small adjustment with probability $1 - p_t$. The proposed algorithm requires importance sampling weights, and thus the additional assumption in equation 17, to guarantee convergence to an $\epsilon$-FOSP with a sample complexity of $O(\epsilon^{-3})$.

¹Please note that the sample complexity also depends on other parameters such as the horizon length $H$ and the discount factor $\gamma$. Here, we only mention the dependency of the sample complexity on $\epsilon$.

Table 1: Comparison of main variance-reduced policy gradient methods to achieve an $\epsilon$-FOSP based on sample complexity (SC), batch size ($|B|$), batch size at checkpoints ($|B_{check}|$), and the need for checkpoints, importance sampling (IS), and additional assumptions.

| Method | SC | $|B|$ | $|B_{check}|$ | Checkpoint | IS | Further Assump. |
| SVRPG (Xu et al., 2020) | $O(1/\epsilon^{10/3})$ | $O(1/\epsilon^{4/3})$ | $O(1/\epsilon^{4/3})$ | Needed | Needed | Assump. in equation 17 |
| HAPG (Shen et al., 2019) | $O(1/\epsilon^3)$ | $O(1/\epsilon)$ | $O(1/\epsilon^2)$ | Needed | Not needed | |
| SRVR-PG (Xu et al., 2019) | $O(1/\epsilon^3)$ | $O(1/\sqrt{\epsilon})$ | $O(1/\epsilon)$ | Needed | Needed | Assump. in equation 17 |
| HSPGA (Pham et al., 2020) | $O(1/\epsilon^3)$ | $O(1)$ | | Not needed | Needed | Assump. in equation 17 |
| MBPG (Huang et al., 2020) | $\tilde{O}(1/\epsilon^3)$ | $O(1)$ | | Not needed | Needed | Assump. in equation 17 |
| VRMPO (Yang et al., 2022) | $O(1/\epsilon^3)$ | $O(1/\epsilon)$ | $O(1/\epsilon^2)$ | Needed | Not needed | |
| VR-BGPO (Huang et al., 2022) | $O(1/\epsilon^3)$ | $O(1)$ | | Not needed | Needed | Assump. in equation 17 |
| PAGE-PG (Gargiani et al., 2022) | $O(1/\epsilon^3)$ | $O(1)$ | $O(1/\epsilon^2)$ | Needed | Needed | Assump. in equation 17 |
| This paper | $O(1/\epsilon^3)$ | $O(1)$ | | Not needed | Not needed | |

There is some recent work on the global convergence of policy gradient methods. For instance, Liu et al.
(2020) showed the global convergence of policy gradient, natural policy gradient, and their variance-reduced variants in the case of a positive definite Fisher information matrix of the policy. Chung et al. (2021) studied the impact of baselines on the learning dynamics of policy gradient methods and showed that using a variance-minimizing baseline can lead to convergence to a sub-optimal policy. Recently, Ding et al. (2022) studied soft-max and Fisher non-degenerate policies, and showed that adding a momentum term improves the global-optimality sample complexities of vanilla PG methods by $\tilde{O}(\epsilon^{-1.5})$ and $\tilde{O}(\epsilon^{-1})$, respectively. The above discussion of the main methods is summarized in Table 1. For each method, we mention whether it needs checkpoints and importance sampling weights.² All the aforementioned methods require Assumptions 1 and 2. In the last column, additional assumptions besides these two are listed for each method. Comparing the sample complexity of our algorithm with previous work, note that all the algorithms (including ours) below SVRPG in Table 1 achieve the rate $O(1/\epsilon^3)$ or $\tilde{O}(1/\epsilon^3)$. Without any further assumption, our proposed method is the only one that requires no checkpoints and no importance sampling weights, and has a batch size of the order $O(1)$. As we will see in the next section, besides these algorithmic advantages, it has remarkable performance compared to the state of the art on various control tasks.

²To be more precise, although PAGE-PG has no fixed checkpoints, it takes a batch of $O(\epsilon^{-2})$ samples to get an unbiased estimate of the gradient with probability $p_t$. Therefore, in this sense, it requires checkpoints.
5 Experiments

In this section, we evaluate the performance of the proposed algorithm and compare it with previous work on control tasks in the MuJoCo simulator (Todorov et al., 2012), a physics engine suitable for simulating robotic tasks with good accuracy and speed in the RL setting. We implemented SHARP in the Garage library (garage contributors, 2019), as this allows maintaining and integrating it into future versions of the Garage library for easier dissemination. We utilized a Linux server with an Intel Xeon CPU E5-2680 v3 (24 cores) operating at 2.50GHz with 377 GB of DDR4 memory, and an Nvidia Titan X Pascal GPU. The implementation of SHARP is available as supplementary material. We considered the following four control tasks with continuous action spaces: Reacher, Walker, Humanoid, and Swimmer. In Reacher, an arm with two degrees of freedom aims to reach a target point in the two-dimensional plane. A higher reward is attained as the arm gets closer to the target point. In Walker, a humanoid walker tries to move forward in a two-dimensional space, i.e., it can only fall forward or backward. The state contains the velocities of different parts of the body and the joint angles, and the actions represent how to move the leg and foot joints. The reward signal is based on the current velocity of the agent. In Humanoid, a three-dimensional bipedal robot is trained to walk forward as fast as possible without falling over. The state space is 376-dimensional, containing the position and velocity of each joint, the friction of the actuator, and contact forces. The action space is a 17-dimensional continuous space. Finally, in Swimmer, the agent is in a two-dimensional pool, and the goal is to move as fast as possible in the right direction.
We compared the SHARP algorithm with PG methods that provide theoretical guarantees of converging to an approximate FOSP: PAGE-PG (Gargiani et al., 2022), IS-MBPG (Huang et al., 2020), which is based on STORM, HAPG (Shen et al., 2019), which does not require IS weights, and VR-BGPO (Huang et al., 2022), which is a mirror-descent-based algorithm. We also considered REINFORCE (Sutton et al., 2000) as a baseline algorithm. There are some other approaches in the literature with theoretical guarantees, such as VRMPO (Yang et al., 2022) and STORM-PG (Yuan et al., 2020), but their official implementations are not publicly available, and our request to access the code from the authors remained unanswered. For each algorithm, we used the same set of Gaussian policies parameterized by neural networks with two layers of 64 neurons each. Baselines and environment settings (such as the maximum trajectory horizon and reward intervals) were the same for all algorithms. We chose a maximum horizon of 500 for Walker, Swimmer, and Humanoid, and 50 for Reacher. More details about the experiments are provided in Appendix E. In the literature, it has been observed that most PG methods are quite sensitive to parameter initialization and random seeds (Henderson et al., 2018). Hence, it might be challenging in some cases to reproduce previous results. Moreover, it is not clear how to compare methods in terms of performance (e.g., the average episode return) and robustness (such as the standard deviation (STD) of the return) at the same time. To resolve these issues, we considered the performance-robustness (PR) metric proposed in (Khorasani et al., 2023), which captures both the average return and the STD of the return of an algorithm. In particular, for any algorithm A, after observing t state-action pairs (which we call system probes), the lower bound of the 90-percent confidence interval of the average return over n runs of the algorithm (denoted by $LCI_A(n, t)$) is computed.
The PR metric is defined by averaging $LCI_A(n, t)$ over all system probes $t = 1, \cdots, T$ as follows:

$PR_A(n) = \frac{1}{T}\sum_{t=1}^{T} LCI_A(n, t),$ (18)

where $T$ is the maximum number of system probes. We used grid search to tune the hyper-parameters of all the considered algorithms. For the algorithms except ours, the search space for each hyper-parameter was chosen based on the one in the original paper. For each configuration of the hyper-parameters, we ran each algorithm A five times and computed $PR_A(5)$. We selected the configuration that maximized $PR_A(5)$ and then reported $PR_A(10)$ of each algorithm for the selected configuration, based on 10 different runs, in Table 2. Our proposed method achieved the highest PR in all environments compared with the other algorithms.

Figure 1: Comparison of SHARP with other variance reduction methods on four control tasks: (a) Reacher, (b) Walker, (c) Humanoid, (d) Swimmer.

Table 2: Comparison of SHARP with other variance-reduced methods in terms of PR. In each environment, the highest PR is in bold.

| Method | Reacher | Walker | Humanoid | Swimmer |
| HAPG | -19.51 | 104.296 | 161.12 | 70.62 |
| IS-MBPG | -21.76 | 204.32 | 201.01 | 27.44 |
| PAGE-PG | -17.39 | 247.58 | 356.58 | 19.26 |
| REINFORCE | -20.10 | 79.98 | 156.21 | 20.28 |
| VR-BGPO | -15.15 | 320.51 | 409.67 | 38.24 |
| SHARP (our algorithm) | -10.34 | 325.98 | 422.98 | 83.45 |

We considered the confidence interval of the performance (average return) to show the sensitivity of an algorithm to random seeds. As can be seen in Figure 1, SHARP not only achieves the highest average return after 10 million system probes but also has a relatively small confidence interval.
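Equation (18) can be sketched directly in Python. Here the lower confidence bound $LCI_A(n, t)$ is computed with a normal approximation, mean $- z \cdot \mathrm{std}/\sqrt{n}$ with $z \approx 1.645$ for a one-sided 90% bound; this is an assumption for illustration, as the exact construction in the paper follows Khorasani et al. (2023). `returns[r][t]` is assumed to hold the return of run `r` after `t + 1` system probes.

```python
import math

Z90 = 1.645  # one-sided 90% normal quantile (illustrative choice)

def lci(values, z=Z90):
    # Lower confidence bound of the mean of `values` across runs.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean - z * math.sqrt(var / n)

def pr_metric(returns):
    # PR_A(n) = (1/T) * sum_t LCI_A(n, t), averaged over system probes.
    n_runs, T = len(returns), len(returns[0])
    return sum(lci([returns[r][t] for r in range(n_runs)])
               for t in range(T)) / T
```

Two algorithms with the same average return but different run-to-run spread then get different PR values: the noisier one is penalized through the lower confidence bound.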
10 \f6" + }, + { + "url": "http://arxiv.org/abs/1908.03932v1", + "title": "Learning Linear Non-Gaussian Causal Models in the Presence of Latent Variables", + "abstract": "We consider the problem of learning causal models from observational data\ngenerated by linear non-Gaussian acyclic causal models with latent variables.\nWithout considering the effect of latent variables, one usually infers wrong\ncausal relationships among the observed variables. Under faithfulness\nassumption, we propose a method to check whether there exists a causal path\nbetween any two observed variables. From this information, we can obtain the\ncausal order among them. The next question is then whether or not the causal\neffects can be uniquely identified as well. It can be shown that causal effects\namong observed variables cannot be identified uniquely even under the\nassumptions of faithfulness and non-Gaussianity of exogenous noises. However,\nwe will propose an efficient method to identify the set of all possible causal\neffects that are compatible with the observational data. Furthermore, we\npresent some structural conditions on the causal graph under which we can learn\ncausal effects among observed variables uniquely. We also provide necessary and\nsufficient graphical conditions for unique identification of the number of\nvariables in the system. Experiments on synthetic data and real-world data show\nthe effectiveness of our proposed algorithm on learning causal models.", + "authors": "Saber Salehkaleybar, AmirEmad Ghassami, Negar Kiyavash, Kun Zhang", + "published": "2019-08-11", + "updated": "2019-08-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction One of the primary goals in empirical sciences is to discover casual relationships among a set of variables of interest in various natural and social phenomena. Such causal relationships can be recovered by conducting controlled experiments. 
However, performing controlled experiments is 1 arXiv:1908.03932v1 [cs.LG] 11 Aug 2019 \f\ud835\udc49 \u0b35 \ud835\udc49 \u0b37 \ud835\udc49 \u0b36 \ud835\udc49 \u0b35 \ud835\udc49 \u0b37 \ud835\udc49 \u0b36 (b) (a) Figure 1: An example of causal graphs: V1 and V2 are observed variables while V3 is latent. often expensive or even impossible due to technical or ethical reasons. Thus, it is vital to develop statistical methods for recovering causal relationships from non-experimental data. Probabilistic graphical models are commonly used to represent causal relations. Alternatively, Structural Equation Models (SEM) which further specify mathematical equations among the variables can be used to represent probabilistic causal in\ufb02uences. Linear SEMs are a special class of SEMs where each variable is a linear combination of its direct causes and an exogenous noise. Under the causal suf\ufb01ciency assumption, by utilizing conventional causal structure learning algorithms such as PC (Spirtes et al., 2000) and IC (Pearl, 2009), we can identify a class of models that are equivalent in the sense that they represent the same set of conditional independence assertions obtained from data. If we have background knowledge about the data-generating mechanism, we may further narrow down the possible models that are compatible with the observed data (Peters et al., 2016; Ghassami et al., 2018; Salehkaleybar et al., 2018; Zhang et al., 2017; Peters and B\u00a8 uhlmann, 2013; Zhang and Hyv\u00a8 arinen, 2009; Hoyer et al., 2009; Janzing et al., 2012). For instance, Shimizu et al. (2006) proposed a linear non-Gaussian acyclic model (LiNGAM) discovery algorithm that can identify causal structure uniquely by assuming non-Gaussian distributions for the exogenous noises in the linear SEM model. 
However, the LiNGAM algorithm and its regression-based variant, DirectLiNGAM (Shimizu et al., 2011), rely on the causal sufficiency assumption, i.e., that no unobserved common causes exist for any pair of variables under consideration in the model. In the presence of latent variables, Hoyer et al. (2008) showed that a linear SEM can be converted to a canonical form where each latent variable has at least two children and no parents. Such latent variables are commonly called "latent confounders". Furthermore, they proposed a solution which casts the problem of identifying causal effects among the observed variables into an overcomplete-ICA problem and returns multiple causal structures that are observationally equivalent. The time complexity of searching for such structures can be as high as $\binom{p}{p_o}$, where $p_o$ and $p$ are the number of observed and total variables in the system, respectively. Entner and Hoyer (2010) proposed a method that identifies a partial causal structure among the observed variables by recovering all the unconfounded sets¹ and then learning the causal effects for each pair of variables in each set. However, their method may return an empty unconfounded set if latent confounders cause most of the observed variables in the system, as in the simple example of Figure 1. Chen and Chan (2013) showed that a causal order and the causal effects among the observed variables can be identified if the latent confounders have Gaussian distributions and the exogenous noises of the observed variables are simultaneously super-Gaussian or sub-Gaussian. In (Tashiro et al., 2014), the ideas of DirectLiNGAM were extended to the case where latent confounders exist in the system. The proposed solution first tries to find a root variable (a variable with no parents). Then, the effect of such a variable is removed by

¹A set of variables is called unconfounded if there is no variable outside the set which is a confounder of some variables in the set.
In Figure 1, variable V3 is a confounder of variables V1 and V2, but it is not observable. Thus, the set of variables V1 and V2 is not unconfounded.

regressing it out. This procedure continues until any variable and its residual become dependent. Subsequently, a similar iterative procedure is used to find a sink variable and remove its effect from the other variables. However, this solution may not recover the causal order in some causal graphs, such as the one in Figure 1.² Shimizu and Bollen (2014) proposed a Bayesian approach for estimating the causal direction between two observed variables when the sum of the non-Gaussian independent latent confounders has a multivariate t-distribution. They compute log-marginal likelihoods to infer causal directions. Rather surprisingly, although the causal structure is in general not fully identifiable in the presence of latent variables, we will show that the causal order among the observed variables is still identifiable under the faithfulness assumption. In order to obtain a causal order, we first check whether there exists a causal path between any two observed variables. Subsequently, from this information, we obtain a causal order among them. Having established a causal order, we aim to determine whether the causal effects are uniquely identifiable from observational data. We show by an example that the causal effects among the observed variables are not uniquely identifiable even if the faithfulness assumption holds and the exogenous noises are non-Gaussian. We propose a method to efficiently identify the set of all possible causal effects that are compatible with the observational data. Furthermore, we present some structural conditions on the causal graph under which the causal effects among the observed variables can be identified uniquely. We also provide necessary and sufficient graphical conditions under which the number of latent variables is uniquely identifiable.
The rest of this paper is organized as follows. In Section 2, we define the problem of identifying causal orders and causal effects in linear causal systems with latent variables. In Section 3, we propose our approach to learn the causal order among the observed variables and provide necessary and sufficient graphical conditions under which the number of latent variables is uniquely identifiable. In Section 4, we present a method to find the set of all possible causal effects which are consistent with the observational data and give conditions under which the causal effects are uniquely identifiable. We conduct experiments to evaluate the performance of the proposed solutions in Section 5 and conclude in Section 6.

2. Problem Definition

2.1 Notations

In a directed graph $G = (V, E)$ with the vertex set $V = \{V_1, \cdots, V_p\}$ and the edge set E, we denote a directed edge from $V_i$ to $V_j$ by $(V_i, V_j)$. A directed path $P = (V_{i_0}, V_{i_1}, \cdots, V_{i_k})$ in G is a sequence of vertices of G such that there is a directed edge from $V_{i_j}$ to $V_{i_{j+1}}$ for any $0 \leq j \leq k - 1$. We define the set of variables $\{V_{i_1}, \cdots, V_{i_{k-1}}\}$ as the intermediate variables on the path P. We use the notation $V_i \rightsquigarrow V_j$ to denote that there exists a directed path from $V_i$ to $V_j$. If there is a directed path from $V_i$ to $V_j$, then $V_i$ is an ancestor of $V_j$ and $V_j$ is a descendant of $V_i$. More formally, $anc(V_i) = \{V_j \mid V_j \rightsquigarrow V_i\}$ and $des(V_i) = \{V_j \mid V_i \rightsquigarrow V_j\}$. Each variable $V_i$ is an ancestor and a descendant of itself. We denote vectors and matrices by boldface letters. The vectors $\mathbf{A}_{i,:}$ and $\mathbf{A}_{:,i}$ represent the i-th row and column of matrix A, respectively. The (i, j) entry of matrix A is denoted by $[\mathbf{A}]_{i,j}$. For an $n \times m$

²In Figure 1, the root variable (V3) is latent, and the regressor of the sink variable V2 and the residual are not independent without considering the latent variable V3 in the set of regressors.
Thus, no root or sink variable can be identified in the system.

matrix A and an $n \times p$ matrix B, the notation [A, B] denotes horizontal concatenation. For an $n \times m$ matrix A and a $p \times m$ matrix B, the notation [A; B] denotes vertical concatenation.

2.2 System Model

Consider a linear SEM among a set of variables $V = \{V_1, \cdots, V_p\}$:

$\mathbf{V} = \mathbf{A}\mathbf{V} + \mathbf{N},$ (1)

where the vectors V and N denote the random variables in V and their corresponding exogenous noises, respectively. The entry (i, j) of matrix A is the strength of the direct causal effect of variable $V_j$ on variable $V_i$. We assume that the causal relations among the random variables can be represented by a directed acyclic graph (DAG). Thus, the variables in V can be arranged in a causal order such that no later variable causes any earlier variable. We denote such a causal order on the variables by k, where k(i), $i \in \{1, \cdots, p\}$, is the position of variable $V_i$ in the causal order. The matrix A can be converted to a strictly lower triangular matrix by permuting its rows and columns simultaneously according to the causal order.

Example 1 Consider the following linear SEM with four random variables $\{V_1, \cdots, V_4\}$:

$\begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ V_4 \end{bmatrix} = \begin{bmatrix} 0 & e & 0 & d \\ 0 & 0 & 0 & 0 \\ 0 & a & 0 & 0 \\ 0 & b & c & 0 \end{bmatrix} \begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ V_4 \end{bmatrix} + \begin{bmatrix} N_1 \\ N_2 \\ N_3 \\ N_4 \end{bmatrix},$

where a, b, c, d, and e are some constants (see Figure 2). A causal order in this SEM would be: k(1) = 4, k(2) = 1, k(3) = 2, k(4) = 3. Hence, the matrix $\mathbf{P}\mathbf{A}\mathbf{P}^T$ is strictly lower triangular, where P is the permutation matrix associated with k, defined by the following non-zero entries: $\{(k(i), i) \mid 1 \leq i \leq 4\}$.
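Example 1's claim can be verified numerically. The sketch below builds A with arbitrary nonzero coefficient values (illustrative choices, not from the paper), forms the permutation matrix from the causal order, and checks that $\mathbf{P}\mathbf{A}\mathbf{P}^T$ is strictly lower triangular.

```python
# Check of Example 1: with causal order k(1)=4, k(2)=1, k(3)=2, k(4)=3,
# the permuted matrix P A P^T is strictly lower triangular.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def perm_matrix(k):
    # permutation matrix with non-zero entries at positions (k(i), i)
    n = len(k)
    P = [[0.0] * n for _ in range(n)]
    for i, ki in enumerate(k):
        P[ki - 1][i] = 1.0
    return P

a, b, c, d, e = 0.9, -0.7, 0.5, 0.3, 1.1   # arbitrary nonzero strengths
A = [[0.0, e, 0.0, d],
     [0.0, 0.0, 0.0, 0.0],
     [0.0, a, 0.0, 0.0],
     [0.0, b, c, 0.0]]

P = perm_matrix([4, 1, 2, 3])
L = matmul(matmul(P, A), transpose(P))
# strictly lower triangular: zeros on and above the diagonal
strictly_lower = all(L[i][j] == 0.0 for i in range(4) for j in range(i, 4))
```

The entry $[\mathbf{A}]_{1,2} = e$ lands at row k(1) = 4, column k(2) = 1 of the permuted matrix, below the diagonal as expected.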
We split the random variables in V into an observed vector $\mathbf{V}_o \in \mathbb{R}^{p_o}$ and a latent vector $\mathbf{V}_l \in \mathbb{R}^{p_l}$, where $p_o$ and $p_l$ are the numbers of observed and latent variables, respectively. Without loss of generality, we assume that the first $p_o$ entries of V are observable, i.e., $\mathbf{V}_o = [V_1, \cdots, V_{p_o}]^T$ and $\mathbf{V}_l = [V_{p_o+1}, \cdots, V_p]^T$. Therefore,

$\begin{bmatrix} \mathbf{V}_o \\ \mathbf{V}_l \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{oo} & \mathbf{A}_{ol} \\ \mathbf{A}_{lo} & \mathbf{A}_{ll} \end{bmatrix} \begin{bmatrix} \mathbf{V}_o \\ \mathbf{V}_l \end{bmatrix} + \begin{bmatrix} \mathbf{N}_o \\ \mathbf{N}_l \end{bmatrix},$ (2)

where $\mathbf{N}_o$ and $\mathbf{N}_l$ are the vectors of exogenous noises of $\mathbf{V}_o$ and $\mathbf{V}_l$, respectively. Furthermore, we have $\mathbf{A} = [\mathbf{A}_{oo}, \mathbf{A}_{ol}; \mathbf{A}_{lo}, \mathbf{A}_{ll}]$. The causal order k among all variables induces a causal order $k_o$ among the observed variables as follows: for any two observed variables $V_i, V_j$, $1 \leq i, j \leq p_o$, $k_o(i) < k_o(j)$ if $k(i) < k(j)$. Similarly, k induces a causal order among the latent variables, which we denote by $k_l$. It can easily be shown that $\mathbf{A}_{oo}$ and $\mathbf{A}_{ll}$ can be converted to strictly lower triangular matrices by permuting rows and columns simultaneously according to the causal orders $k_o$ and $k_l$, respectively.

Example 2 In Example 1, suppose that only variables V1 and V2 are observable. Then, the causal order among the observed variables would be $k_o(1) = 2$ and $k_o(2) = 1$. Thus, $\mathbf{P}\mathbf{A}_{oo}\mathbf{P}^T$ is a strictly lower triangular matrix, where $\mathbf{P} = [0, 1; 1, 0]$. For the latent variables, $k_l(3) = 1$ and $k_l(4) = 2$.

Figure 2: Causal graph of Example 1.

In the remainder of this section, we briefly describe the LiNGAM algorithm, which is capable of recovering the matrix A uniquely if all variables in the model are observable and the exogenous noises are non-Gaussian (Shimizu et al., 2006). The vector V in Equation (1) can be written as a linear combination of the exogenous noises as follows:

$\mathbf{V} = \mathbf{B}\mathbf{N},$ (3)

where $\mathbf{B} = (\mathbf{I} - \mathbf{A})^{-1}$.
The above equation fits into the standard linear Independent Component Analysis (ICA) framework, where the independent non-Gaussian components are the variables in N. By utilizing statistical techniques from ICA (Hyvärinen et al., 2004), the matrix B can be identified up to scaling and permutation of its columns. More specifically, the independent components of ICA, as well as the estimated B matrix, are not uniquely determined, because permuting and rescaling them does not change their mutual independence. So, without knowledge of the ordering and scaling of the noise terms, the following general ICA model for V holds:

$\mathbf{V} = \tilde{\mathbf{B}}\tilde{\mathbf{N}},$ (4)

where $\tilde{\mathbf{N}}$ contains independent components, and these components (resp. the columns of $\tilde{\mathbf{B}}$) are a permuted and rescaled version of those in N (resp. the columns of B). In what follows, we use B for the matrix $\mathbf{B} = (\mathbf{I} - \mathbf{A})^{-1}$, while $\tilde{\mathbf{B}}$ is the mixing matrix of the ICA model, as given in (4). Hence $\tilde{\mathbf{B}}$ can be written as $\tilde{\mathbf{B}} = \mathbf{B}\mathbf{P}\mathbf{\Lambda}$, where P is a permutation matrix and $\mathbf{\Lambda}$ is a diagonal scaling matrix. Yet the corresponding causal model, represented by A, can be uniquely identified because of its acyclicity constraint. In particular, the inverse of B can be converted uniquely to a lower triangular matrix with all-ones on its diagonal by some scaling and permutation of the rows.

3. Identifying Causal Orders among Observed Variables

Since the graph with adjacency matrix A is acyclic, there exists an integer d such that $\mathbf{A}^d = \mathbf{0}$. Thus, we can rewrite B in the following form:

$\mathbf{B} = (\mathbf{I} - \mathbf{A})^{-1} = \sum_{k=0}^{d-1} \mathbf{A}^k.$ (5)

It can be seen that there exists a causal path of length k from the exogenous noise of variable $V_i$ to variable $V_j$ if entry (j, i) of the matrix $\mathbf{A}^k$ is nonzero. We define $[\mathbf{B}]_{j,i}$ as the total causal effect of variable $V_i$ on variable $V_j$.
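Equation (5) can be checked numerically on the SEM of Example 1 (coefficient values below are arbitrary nonzero choices). Since A is nilpotent with $\mathbf{A}^4 = \mathbf{0}$ here, the finite sum $\mathbf{I} + \mathbf{A} + \mathbf{A}^2 + \mathbf{A}^3$ equals $(\mathbf{I} - \mathbf{A})^{-1}$, and the entry $[\mathbf{B}]_{1,2}$ accumulates the products along all directed paths from V2 to V1.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

n = 4
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
a, b, c, d, e = 0.9, -0.7, 0.5, 0.3, 1.1   # arbitrary nonzero strengths
A = [[0.0, e, 0.0, d],
     [0.0, 0.0, 0.0, 0.0],
     [0.0, a, 0.0, 0.0],
     [0.0, b, c, 0.0]]

# B = I + A + A^2 + A^3 (equation 5 with d = 4, since A^4 = 0)
B, Ak = [row[:] for row in I], I
for _ in range(3):
    Ak = matmul(Ak, A)
    B = [[B[i][j] + Ak[i][j] for j in range(n)] for i in range(n)]

I_minus_A = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
check = matmul(I_minus_A, B)   # should equal the identity matrix
```

The directed paths from V2 to V1 are the direct edge (weight e), V2 → V4 → V1 (weight b·d), and V2 → V3 → V4 → V1 (weight a·c·d); their sum is the total causal effect $[\mathbf{B}]_{1,2}$.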
Assumption 1 (Faithfulness assumption) The total causal effect from variable Vi to Vj is nonzero if there is a causal path from Vi to Vj. Thus, we have [B]j,i ≠ 0 if Vi ⇝ Vj.

In the following lemma, we list two consequences of the faithfulness assumption that are immediate from the definition.

Lemma 1 Under the faithfulness assumption, for any two observed variables Vi and Vj, 1 ≤ i, j ≤ po, the following holds: (i) Suppose that Vi ⇝ Vj. If [B]i,k ≠ 0 for some k ≠ j, then [B]j,k ≠ 0. (ii) If there is no causal path between Vi and Vj, then [B]i,j = 0 and [B]j,i = 0.

Based on Equation (2), we can write Vo in terms of No and Nl as follows:

Vo = (I − D)^{-1} No + (I − D)^{-1} Aol (I − All)^{-1} Nl,   (6)

where D = Aoo + Aol(I − All)^{-1} Alo. Let Bo := (I − D)^{-1}, Bl := (I − D)^{-1} Aol (I − All)^{-1}, and N := [No; Nl]. Thus Vo = B′N, where B′ := [Bo, Bl]. This equation fits into a linear overcomplete ICA model in which the exogenous noises are non-Gaussian and the number of observed variables is less than the number of variables in the system. The following proposition states when the columns of the matrix B′ are still identifiable up to permutations and scaling.

Definition 2 (Reducibility of a matrix) A matrix is reducible if two of its columns are linearly dependent.

Proposition 3 ((Eriksson and Koivunen, 2004), Theorem 3) In the linear overcomplete ICA problem, the columns of the mixing matrix can be identified up to scaling and permutation if the matrix is not reducible.

Lemma 4 The columns of B′ corresponding to any two observed variables are linearly independent.

Proof Consider any two observed variables Vi and Vj. We know that [B′]i,i and [B′]j,j are nonzero. Furthermore, B′ is a submatrix of B. Hence, based on Lemma 1 (ii), if there is no causal path between Vi and Vj, we have [B′]i,j = 0 and [B′]j,i = 0.
Thus, [B′]:,i and [B′]:,j are not linearly dependent. Furthermore, if one of the variables is an ancestor of the other, say Vi ∈ anc(Vj), then according to Lemma 1 (i), [B′]j,i ≠ 0 while [B′]i,j = 0. Thus [B′]:,i and [B′]:,j are not linearly dependent in this case either, and the proof is complete.

Although the columns of B′ corresponding to the observed variables are pairwise linearly independent, a column corresponding to a latent variable Vi might be linearly dependent on a column corresponding to an observed or latent variable Vj (see Example 3). In that case, we can remove the column [B′]:,i and the entry Ni from the matrix B′ and the vector N, respectively, and replace Nj by Nj + αNi, where α is a constant such that [B′]:,i = α[B′]:,j. We continue this process until all remaining columns are pairwise linearly independent. Let B′′ and N′′ be the resulting mixing matrix and exogenous noise vector, respectively. According to Lemma 4, all the columns of B′ corresponding to observed variables are in B′′. We utilize the matrix B′′ to recover a causal order among the observed variables. Since the matrix B′′ is not reducible, its columns can be identified up to scaling and permutation according to Proposition 3. Let B̃′′ be the recovered matrix containing the columns of B′′. Consider any two observed variables Vi and Vj, i.e., 1 ≤ i, j ≤ po, and extract the two rows of B̃′′ corresponding to Vi and Vj. Let n0∗ be the number of columns of [B̃′′_{i,:}; B̃′′_{j,:}] whose first entry is zero but whose second entry is nonzero. Similarly, let n∗0 be the number of columns whose first entry is nonzero but whose second entry is zero.
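These two counts, and the path criterion established in Lemma 5 below (n0∗ > 0 and n∗0 = 0 if and only if Vi ⇝ Vj), are easy to compute from the support of the recovered mixing matrix. A minimal sketch (the helper name, toy matrix, and zero tolerance are ours):

```python
import numpy as np

def causal_path(B_tilde, i, j, tol=1e-8):
    """Return True iff the support pattern indicates V_i ~~> V_j:
    some column is zero in row i but nonzero in row j (n0* > 0),
    and no column is nonzero in row i but zero in row j (n*0 = 0)."""
    ri = np.abs(B_tilde[i]) > tol
    rj = np.abs(B_tilde[j]) > tol
    n_0star = np.sum(~ri & rj)   # zero in row i, nonzero in row j
    n_star0 = np.sum(ri & ~rj)   # nonzero in row i, zero in row j
    return bool(n_0star > 0 and n_star0 == 0)

# Toy 2 x 3 recovered matrix for two observed variables (values ours):
# row 0 is V1, row 1 is V2, with V1 an ancestor of V2.
B2 = np.array([[1.0, 0.0, 0.8],
               [0.7, 1.0, 1.2]])
assert causal_path(B2, 0, 1)        # V1 ~~> V2
assert not causal_path(B2, 1, 0)    # but not V2 ~~> V1
```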
The following lemma asserts that the existence of a causal path between Vi and Vj can be checked from n0∗ and n∗0 (or, equivalently, from B̃′′).

Lemma 5 Under the faithfulness assumption, the existence of a causal path between any two observed variables can be inferred from the matrix B̃′′.

Proof. First, we show that if Vi ⇝ Vj, then n0∗ > 0 and n∗0 = 0. We know that the matrix [B̃′′_{i,:}; B̃′′_{j,:}] can be converted to [B′′_{i,:}; B′′_{j,:}] by some permutation and scaling of its columns. Moreover, B′′ contains some of the columns of B′, including all the columns corresponding to the observed variables. Thus, from Lemma 1, we know that if [B′′]i,k ≠ 0 for any k ≠ j, then [B′′]j,k ≠ 0. Moreover, we have [B′′]j,j ≠ 0 and [B′′]i,j = 0. Hence we can conclude that n0∗ > 0 and n∗0 = 0.

Next, we show that if n0∗ > 0 and n∗0 = 0, then Vi ⇝ Vj. By contradiction, suppose that there is no causal path between Vi and Vj, or that Vj ⇝ Vi. The second case (Vj ⇝ Vi) cannot happen due to what we just proved. Furthermore, from Lemma 1, we know that [B′′]i,i ≠ 0 and [B′′]i,j = 0. Therefore n∗0 > 0, which contradicts our assumption. Hence, n0∗ > 0 and n∗0 = 0 if and only if Vi ⇝ Vj.

We can construct an auxiliary directed graph whose vertices are the observed variables and which has a directed edge from Vi to Vj if Vi ⇝ Vj (which we can infer from n∗0 and n0∗). Any causal order over the auxiliary graph is a correct causal order among the observed variables Vo.

Example 3 Consider the causal graph in Figure 3. Suppose that variables V3 and V4 are latent. The matrix B′ is

[1, 0, 0, a; d, 1, e, c + ad + be].

We can remove the third column from B′ and update the vector N to [N1; N2 + eN3; N4].
Thus, the matrix B′′ is equal to

[1, 0, a; d, 1, c + ad + be],

which is not reducible. Without loss of generality, assume that the recovered matrix B̃′′ is equal to B′′. Therefore n0∗ = 1 and n∗0 = 0. Hence, we can infer that there is a causal path from V1 to V2.

[Figure 3: Causal graph of Example 3.]

Recovering the Number of Variables in the System. According to Proposition 3, the number of variables in the system can be recovered if and only if the matrix B′ is not reducible. Furthermore, Equation (6) implies that the matrix B′ is not reducible if and only if the columns of the following matrix are pairwise linearly independent: [I_{po×po} | Aol(I − All)^{-1}]. In the rest of this section, we present equivalent necessary and sufficient graphical conditions under which the number of variables in the system can be uniquely identified. But before that, we present a simple example where [I_{po×po} | Aol(I − All)^{-1}] is reducible, and give a graphical interpretation of it.

Example 4 Consider a linear SEM with three variables V1, V2, and V3, where V3 = N3, V1 = αV3 + N1, and V2 = βV1 + N2. Thus the corresponding causal graph is V3 → V1 → V2. Suppose that V3 is the only latent variable. Hence All = 0, Aol = [α; 0], and Aol(I − All)^{-1} = [α; 0], which is linearly dependent on the first column of I. In fact, the latent variable V3 can be absorbed into variable V1 by changing the exogenous noise of V1 from N1 to N1 + αN3. Thus, the number of variables in this model cannot be identified uniquely.

Definition 6 (Absorbing) Variable Vi is said to be absorbed into variable Vj if the exogenous noise of Vi is set to zero, Ni ← 0, and the exogenous noise of Vj is replaced by Nj ← Nj + [B]j,i Ni.
We define absorbing a variable into ∅ as setting its exogenous noise to zero.

Definition 7 (Absorbability) Let P′_Vo be the joint distribution of the observed variables after absorbing Vi into Vj. We say Vi is absorbable into Vj if P′_Vo = P_Vo.

The following theorem characterizes the graphical conditions under which a latent variable is absorbable. The proof of the theorem is given in Appendix A.

Theorem 8 (a) A latent variable is absorbable into ∅ if and only if it has no observable descendant. (b) A latent variable Vj is absorbable into a variable Vi (observed or latent) if and only if all paths from Vj to its observable descendants go through Vi.

Example 5 Consider a linear SEM with the causal graph in Figure 4, where V1 and V2 are the only observed variables. V7 satisfies condition (a), so its exogenous noise can be set to zero. Furthermore, V3 and V4 satisfy condition (b) with respect to V5, and they can be absorbed into V5 by setting the exogenous noise of V5 to N5 + (αγ + β)N3 + γN4. Finally, V6 satisfies condition (b) and can be absorbed into V2. Note that V8 and V5 cannot be absorbed into V1 or V2.

[Figure 4: Causal graph of Example 5. V1 and V2 are the only observed variables.]

Definition 9 We say a causal graph is minimal if none of its variables is absorbable.

Based on the above definition, a causal graph is minimal if none of the latent variables satisfies the conditions in Theorem 8. We borrowed the terminology of minimal causal graphs from Pearl (1988) for polytree causal structures. In Pearl (1988), a causal graph is called minimal if it has no redundant latent variables, in the sense that the joint distribution without latent variables remains a connected tree. Later, Etesami et al.
(2016) showed that in minimal latent directed information polytrees, each node has at least two children. The following lemma asserts that the same statement holds for the non-absorbable latent variables in our setting. The proof of the lemma is given in Appendix B.

Lemma 10 A latent variable is non-absorbable if it has at least two non-absorbable children.

The next theorem gives necessary and sufficient graphical conditions for non-reducibility of the matrix B′. The proof of the theorem is given in Appendix C.

Theorem 11 The matrix B′ is not reducible almost surely if and only if the corresponding causal graph G is minimal.

Corollary 12 Under the faithfulness assumption and non-Gaussianity of the exogenous noises, the number of variables in the system is identifiable almost surely if the corresponding graph is minimal.

Proof. Based on Theorem 11, we know that the matrix B′ is not reducible almost surely if the corresponding causal graph G is minimal. Furthermore, according to Proposition 3, the number of variables in the system is identifiable if the matrix B′ is not reducible. This completes the proof.

4. Identifying Total Causal Effects among Observed Variables

In this section, we first show by an example that total causal effects among observed variables cannot be identified uniquely under the faithfulness assumption and non-Gaussianity of the exogenous noises.³ However, we can obtain all the possible solutions. Furthermore, under some additional assumptions on the linear SEM, we show that one can uniquely identify the total causal effects among the observed variables.

3. This example has also been studied in (Hoyer et al., 2008).
[Figure 5: An example of non-identifiability of causal effects from observed variable Vi to observed variable Vj.]

4.1 Example of Non-Uniqueness of Total Causal Effects

Consider the causal graph in Figure 5, where Vi and Vj are observed variables and Vk is a latent variable. The direct causal effects from Vk to Vi, from Vk to Vj, and from Vi to Vj are α, γ, and β, respectively. We can write Vi and Vj in terms of the exogenous noises of their ancestors as follows:

Vi = αNk + Ni,   Vj = βNi + (αβ + γ)Nk + Nj.   (7)

Now, we construct a second causal graph, also depicted in Figure 5, in which the exogenous noises of variables Vi and Vk are changed to αNk and Ni, respectively. Furthermore, we set the direct causal effects from Vk to Vi, from Vk to Vj, and from Vi to Vj to 1, −γ/α, and β + (γ/α), respectively. It can be seen that the equations in (7) do not change, while the direct causal effect from Vi to Vj becomes β + (γ/α) in the second causal graph. Thus, we cannot identify the causal effect from Vi to Vj merely from observational data on Vi and Vj. In Appendix D, we extend this example to the case where there might be multiple latent variables on the paths from Vk to Vi and Vj, and from Vi to Vj.

The above example shows that causal effects may not be identifiable, even assuming non-Gaussianity of the exogenous noises, if there are latent variables in the system. In the following, we first show that the set of all possible total causal effects can be identified.
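Before moving on, the equivalence of the two parameterizations above can be checked numerically. The sketch below (assuming NumPy; the values of α, β, γ are ours) draws shared noise samples and confirms that both models generate the identical observed pair (Vi, Vj), even though the direct effect Vi → Vj differs.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, gamma = 0.8, 0.5, 0.3
Nk, Ni, Nj = rng.uniform(-1, 1, (3, 1000))   # shared noise draws

# First model (Equation (7)).
Vi1 = alpha * Nk + Ni
Vj1 = beta * Ni + (alpha * beta + gamma) * Nk + Nj

# Second model: noises of V_i and V_k swapped to alpha*Nk and Ni;
# edge weights (Vk->Vi, Vk->Vj, Vi->Vj) = (1, -gamma/alpha, beta + gamma/alpha).
Nk2, Ni2 = Ni, alpha * Nk
Vi2 = 1.0 * Nk2 + Ni2
Vj2 = (beta + gamma / alpha) * Vi2 + (-gamma / alpha) * Nk2 + Nj

# Sample by sample, the observed pair (Vi, Vj) is identical, although
# the direct effect Vi -> Vj is beta in one model and beta + gamma/alpha
# in the other.
assert np.allclose(Vi1, Vi2)
assert np.allclose(Vj1, Vj2)
```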
Afterwards, we will present a set of structural conditions under which we can uniquely identify the total causal effects among the observed variables.

4.2 Identifying the Set of All Possible Total Causal Effects

Since the subgraph corresponding to All is a DAG, there exists an integer dl such that All^{dl} = 0. Hence, we can rewrite the matrix D given in (6) as follows:

D = Aoo + Σ_{k=0}^{dl−1} Aol All^k Alo.   (8)

Lemma 13 The matrix D in (6) can be converted to a strictly lower triangular matrix by permuting columns and rows simultaneously based on the causal order ko.

Proof. Let P be the permutation matrix corresponding to the causal order ko. We want to show that PDP^T is strictly lower triangular. It suffices to prove that P Aol All^k Alo P^T is strictly lower triangular for any 0 ≤ k ≤ dl − 1. Suppose that there exists a nonzero entry (i, j) in P Aol All^k Alo P^T with j ≥ i. Then there must be a directed path of length k + 2 through latent variables from the observed variable V_{ko^{-1}(j)} to V_{ko^{-1}(i)} in the causal graph, where ko^{-1}(i) is the index of the observed variable whose order is i in the causal order ko. This means that variable V_{ko^{-1}(j)} should come before variable V_{ko^{-1}(i)} in any causal order, which violates the causal order ko.

Previously, we showed that the existence of a causal path between any two observed variables Vi and Vj can be determined by performing overcomplete ICA. Let deso(Vi) be the set of all observed descendants of Vi, i.e., deso(Vi) = {Vj | Vi ⇝ Vj, 1 ≤ j ≤ po}. We will utilize the sets deso(Vi) to enumerate all possible total causal effects among the observed variables.

Remark 14 From Lemma 4, we have deso(Vi) ≠ deso(Vj) for any 1 ≤ i, j ≤ po with i ≠ j.

As we discussed in Section 3, under non-Gaussianity of the exogenous noises, the columns of B′′ can be determined up to some scalings and permutations by solving an overcomplete ICA problem. Let pr be the number of columns of B′′.
Furthermore, without loss of generality, assume that Vpo+1, Vpo+2, ..., Vpr are the latent variables in the system whose corresponding columns remain in B′′.

Theorem 15 Let ri := |{j : deso(Vi) = deso(Vj), 1 ≤ j ≤ pr}| for any 1 ≤ i ≤ po. Under the faithfulness assumption and non-Gaussianity of the exogenous noises, the number of all possible matrices D that can generate the same distribution for Vo according to (2) is equal to Π_{i=1}^{po} ri.

Proof. According to Proposition 3, under non-Gaussianity of the exogenous noises, the columns of B′′ can be determined up to some scalings and permutations by solving an overcomplete ICA problem. Furthermore, for the column corresponding to the noise Ni, 1 ≤ i ≤ po, we have ri possible candidates with the same set of indices of nonzero entries, all of which are pairwise linearly independent. Let B′o be a po × po matrix obtained by selecting one of the candidates for each column corresponding to a noise Ni, 1 ≤ i ≤ po. Thus, we have Π_{i=1}^{po} ri possible matrices.⁴ Now, for each B′o, we just need to show that there exists an assignment for Aoo, Alo, Aol, and All such that (6) is satisfied, and such that Aoo and All can be converted to strictly lower triangular matrices by simultaneous permutations of columns and rows. Let Alo = 0_{pl×po} and All = 0_{pl×pl}. Assume that B′l consists of the remaining columns which are not in B′o; we also add the columns corresponding to the absorbed latent variables to B′l. Now, we set Aoo and Aol to I − B′o^{-1} and B′o^{-1} B′l, respectively. With these assignments, the proposed matrix A = [Aoo, Aol; Alo, All] satisfies (6). Thus, we just need to show that I − B′o^{-1} can be converted to a strictly lower triangular matrix by some permutations.
To do so, first note from Lemma 13 that the matrix D can be converted to a strictly lower triangular matrix by a permutation matrix P. This property of D implies D^{po} = 0. Thus, we can write:

P(I − D)^{-1}P^T = Σ_{k=0}^{po−1} P D^k P^T = Σ_{k=0}^{po−1} (P D P^T)^k.

Since the matrix (PDP^T)^k is lower triangular for any k ≥ 0, (I − D)^{-1} can be converted to a lower triangular matrix by the permutation matrix P. Furthermore, the set of nonzero entries of B′o is the same as that of (I − D)^{-1}. Thus, P B′o P^T is also a lower triangular matrix whose diagonal elements are all equal to one. Hence, we can write B′o in the form B′o = I + B′′o, where P B′′o P^T is strictly lower triangular. Therefore, we have:

P(I − B′o^{-1})P^T = P(I − Σ_{k=0}^{po−1} (−1)^k B′′o^k)P^T = P(Σ_{k=1}^{po−1} (−1)^{k+1} B′′o^k)P^T,   (9)

where the last expression shows that I − B′o^{-1} can be converted to a strictly lower triangular matrix, and the proof is complete.

4. Please note that the diagonal entries of B′o should be equal to one; otherwise, we can normalize each column by its diagonal entry.

Comparing our results with (Hoyer et al., 2008): we can obtain all the sets deso(Vi) and determine which columns can be selected as the columns corresponding to observed variables in O(po² pr) time, and then enumerate all the possible total causal effects, while the algorithm proposed in (Hoyer et al., 2008) must search a space of C(pr, po) different possible choices. Moreover, we can identify a causal order uniquely with the same time complexity by utilizing the method proposed in Section 3.

4.3 Unique Identification of Causal Effects under Structural Conditions

Based on Theorem 15, in this part we propose a method to identify total causal effects uniquely under some structural conditions.
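Before turning to those conditions, note that the count in Theorem 15, Π ri, is straightforward to compute once the observed-descendant sets of the recovered columns are known. A minimal sketch (the sets below are illustrative, chosen by us):

```python
from math import prod

# des_o(.) for each of the pr = 4 recovered columns; the first
# po = 2 columns correspond to observed variables.
po = 2
deso = [
    {1},        # column of V1: observed descendants of V1
    set(),      # column of V2: no observed descendants
    {1},        # latent column with the same footprint as V1's
    {0, 1},     # latent column with a distinct footprint
]

# r_i = number of recovered columns sharing V_i's observed-descendant set.
r = [sum(1 for s in deso if s == deso[i]) for i in range(po)]
assert r == [2, 1]

# Theorem 15: number of candidate matrices D compatible with the data.
assert prod(r) == 2
```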
Assumption 2 For any observed variable Vi and any latent variable Vk, we have deso(Vk) ≠ deso(Vi).

Assumption 2 is a very natural condition that one expects to hold for unique identifiability of causal effects: if Assumption 2 fails, then based on Theorem 15 there are multiple sets of total causal effects compatible with the observed data.

Theorem 16 Under Assumptions 1-2 and non-Gaussianity of the exogenous noises, the total causal effect between any two observed variables can be identified uniquely.

Proof. Let the matrix [B̃′′]_{po×pr} be the output of the overcomplete ICA problem, whose columns are the columns of the matrix B′′. We define Ii as the set of indices of the nonzero entries of column B̃′′_{:,i}, i.e., Ii = {k | [B̃′′_{:,i}]k ≠ 0}. We know that Ii = deso(Vj) if B̃′′_{:,i} corresponds to the observed variable Vj. Moreover, under Assumption 2, any observed variable Vi and any variable Vj (observed or latent) have different sets deso(Vi) and deso(Vj). Thus, each set Ii equals exactly one of the sets deso(Vj). The column B̃′′_{:,i}, normalized by [B̃′′_{:,i}]j, gives the total causal effects from variable Vj to the other observed variables.

The description of the proposed solution in Theorem 16 is given in Algorithm 1. It is noteworthy that the example in Section 4.1 (given in Figure 5) violates the conditions of Theorem 16, since there we have deso(Vk) = deso(Vi); we have shown for this example that the causal effect from Vi to Vj cannot be identified uniquely.

Algorithm 1
1: Input: Collection of the sets deso(Vi), 1 ≤ i ≤ po.
2: Run an overcomplete ICA algorithm over the observed variables Vo and obtain the matrix B̃′′.
3: for i = 1 : pr do
4:   Ii = {k | [B̃′′_{:,i}]k ≠ 0}
5:   for j = 1 : po do
6:     if Ii = deso(Vj) then
7:       [B̂o]:,j = B̃′′_{:,i} / [B̃′′_{:,i}]j
8:     end if
9:   end for
10: end for
11: Output: B̂o

5. Experiments

In this section, we first evaluate the performance of the proposed method in recovering causal orders from synthetic data generated according to the causal graph in Figure 1. Our experiments show that the proposed method returns a correct causal order while, as discussed in the Introduction, the previous methods (Entner and Hoyer, 2010; Shimizu et al., 2011) cannot identify it. We also consider another causal graph that satisfies Assumption 2 and demonstrate that the proposed method returns the correct causal effects. Next, we evaluate the performance of the proposed method for different numbers of variables in the system. Afterwards, for real data, we consider the daily closing prices of five world stock indices and check the existence of causal paths between any two indices. The results are compatible with common beliefs in economics.

5.1 Synthetic data

First, for the causal graph in Figure 1, we generated 1000 samples of the observed variables V1 and V2, where the nonzero entries of the matrix A are equal to 0.9. We utilized the Reconstruction ICA (RICA) algorithm (Le et al., 2011) to solve the overcomplete ICA problem as follows. Let vo be a po × n matrix containing the observational data, where [vo]i,j is the j-th sample of variable Vi and n is the number of samples. First, the sample covariance matrix of vo is eigendecomposed, i.e., 1/(n − 1)(vo − v̄o)(vo − v̄o)^T = UΣU^T, where U is an orthogonal matrix, Σ is a diagonal matrix, and v̄o is the sample mean vector. Then the observed data is pre-whitened as follows: w = Σ^{-1/2} U^T (vo − v̄o).
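Algorithm 1 above transcribes almost line for line. The sketch below (assuming NumPy; the zero-test tolerance and the toy matrices are ours, not from the paper) recovers the normalized total-effect matrix B̂o from a recovered mixing matrix and the observed-descendant sets.

```python
import numpy as np

def algorithm1(B_tilde, deso, tol=1e-8):
    """Recover total causal effects among observed variables (Algorithm 1).
    B_tilde: po x pr output of overcomplete ICA.
    deso[j]: set of row indices of the observed descendants of V_j
             (including j itself)."""
    po, pr = B_tilde.shape
    B_hat = np.zeros((po, po))
    for i in range(pr):
        Ii = {k for k in range(po) if abs(B_tilde[k, i]) > tol}
        for j in range(po):
            if Ii == deso[j]:
                # Normalize the column by its j-th entry (line 7).
                B_hat[:, j] = B_tilde[:, i] / B_tilde[j, i]
    return B_hat

# Toy chain V1 -> V2 -> V3 with effect 0.9 per edge, plus one latent
# column with a distinct support; columns arbitrarily scaled.
B_tilde = np.array([[1.0,  0.0,  0.0, 0.5],
                    [0.9,  2.0,  0.0, 0.0],
                    [0.81, 1.8, -1.0, 0.0]])
deso = [{0, 1, 2}, {1, 2}, {2}]
B_hat = algorithm1(B_tilde, deso)
expected = np.array([[1.0,  0.0, 0.0],
                     [0.9,  1.0, 0.0],
                     [0.81, 0.9, 1.0]])
assert np.allclose(B_hat, expected)
```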
The RICA algorithm finds a matrix Z minimizing the following objective function:

minimize_Z  Σ_{i=1}^{n} Σ_{j=1}^{pr} g(Z_{:,j}^T w_{:,i}) + (λ/n) Σ_{i=1}^{n} ∥Z Z^T w_{:,i} − w_{:,i}∥₂²,

where the parameter λ controls the weight of the reconstruction penalty term. We estimated the matrix B̃′′ by UΣ^{1/2}Z*, where Z* is the optimal solution of the above optimization problem. In order to estimate the number of columns of B̃′′, we held out 250 samples for model selection. More specifically, we solved the overcomplete ICA problem for different numbers of columns, evaluated the fitness of each model by computing the objective function of RICA over the hold-out set, and selected the model with minimum cost.

[Figure 6: An example of causal graphs satisfying the structural conditions.]

In order to check whether an entry is equal to zero, we used the bootstrapping method (Efron and Tibshirani, 1994), generating 10 bootstrap samples by sampling with replacement from the training data. For each bootstrap sample, we executed the RICA algorithm to obtain an estimate of B̃′′. Since the columns of each estimate are in an arbitrary permutation, we need to match similar columns across the estimates of B̃′′. To do so, in each estimate we divided all entries of a column by the entry with the maximum absolute value in that column. Then we picked each column of one estimated mixing matrix, computed its ℓ2 distance to each column of another estimated mixing matrix, and matched it to the one with minimum distance. Afterwards, we used a t-test with confidence level 95% to check whether an entry is equal to zero across the bootstrap samples. An estimate of B̃′′ from one bootstrap sample is:

[−0.0272, 0.5238, 1; 1, 1, 0.8579].
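The column-matching step across bootstrap estimates can be sketched as follows (the helper name and toy matrices are ours; the paper describes the procedure but gives no code):

```python
import numpy as np

def match_columns(ref, est):
    """Normalize each column by its maximum-|entry|, then match every
    column of `est` to the closest column of `ref` in l2 distance.
    Returns, for each column of est, the index of its match in ref."""
    norm = lambda M: M / np.abs(M).max(axis=0, keepdims=True)
    R, E = norm(ref), norm(est)
    return [int(np.argmin(np.linalg.norm(R - E[:, [j]], axis=0)))
            for j in range(E.shape[1])]

ref = np.array([[1.0, 0.0],
                [0.5, 1.0]])
# Same mixing matrix with columns permuted and rescaled, plus small noise.
est = np.array([[0.02, 2.0],
                [1.98, 1.01]])
assert match_columns(ref, est) == [1, 0]
```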
Moreover, experimental results showed that the correct support of B̃′′, i.e., [0, 1, 1; 1, 1, 1], can be recovered with merely 10 bootstrap samples. Thus, there is a causal path from V1 to V2. Furthermore, for the causal graph V1 ← V3 → V2, in which V3 is the only latent variable, we repeated the same procedure explained above. An estimate of B̃′′ from one of the bootstrap samples is:

[1, −0.046, 0.9838; −0.031, 1, 1].

From the experiments, the estimated support of B̃′′ from the bootstrap samples is [0, 1, 1; 1, 0, 1]. Thus, we can conclude that there is no causal path between V1 and V2.

Next, we considered the causal graph in Figure 6, where V4 is the only latent variable. The direct causal effects of all directed edges are equal to 0.9. An estimate of B̃′′ from one of the bootstrap samples is:

[−0.049, 0.892, 1, 1; −0.024, 1, 0.523, −0.042; 1, −0.02, 0.527, −0.032].

[Figure 7: Average normalized error versus number of samples, for p = 6, 8, 10.]

Thus, we can infer that there is only a causal path from V2 to V1. We can also estimate the total causal effects between the observed variables, since this causal graph satisfies Assumption 2. The output of Algorithm 1 is:

[1, 0.892, −0.049; −0.042, 1, −0.024; −0.032, −0.02, 1],

which is close to the true causal effects. We generated 1000 DAGs of size p = 6, 8, 10 by first selecting a causal order among the variables at random and then connecting each pair of variables with probability 0.3. We generated data from a linear SEM where the nonzero entries of the matrix A are equal to 0.9 and the exogenous noises have a uniform distribution. In each generated DAG, we selected pl = p/2 variables at random as latent variables.
We checked whether there is a causal path between any two observed variables by a procedure similar to the one described for the previous examples. We define the normalized error as the number of pairs (Vi, Vj) such that there is a causal path from Vi to Vj in the true causal graph but we report that there is none (or vice versa), divided by the total number of pairs, p(p − 1). In Figure 7, the average normalized error of our approach is plotted against the number of samples. As can be seen, the average normalized error is fairly low for sufficiently large sample sizes. Furthermore, performance is better when there are fewer variables in the system.

5.2 Real data

We considered the daily closing prices of the following world stock indices from 10/12/2012 to 10/12/2018, obtained from the Yahoo financial database: Dow Jones Industrial Average (DJI) in the USA, Nikkei 225 (N225) in Japan, Euronext 100 (N100) in Europe, Hang Seng Index (HSI) in Hong Kong, and the Shanghai Stock Exchange Composite Index (SSEC) in China.

[Figure 8: The causal relationships among five world stock indices obtained from the proposed method in Section 3.]

Let ci(t) be the closing price of the i-th index on day t. We define the corresponding return as Ri(t) := (ci(t) − ci(t − 1))/ci(t − 1). We treated the returns of the indices as observational data and applied the method proposed in Section 3 to check the existence of a causal path between any two indices. Figure 8 depicts the causal relationships among the indices; there is a directed edge from index i to index j if we find a causal path from i to j. As can be seen, there are causal paths from DJI to HSI, N225, and N100, which is commonly known to hold in the stock market (Hyvärinen et al., 2010).
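The normalized-error metric used in Figure 7 can be sketched as follows (the helper and toy reachability matrices are ours):

```python
import numpy as np

def normalized_error(reach_true, reach_est):
    """Fraction of ordered pairs (i, j), i != j, on which the estimated
    causal-path (reachability) relation disagrees with the true one."""
    p = reach_true.shape[0]
    off_diag = ~np.eye(p, dtype=bool)
    disagreements = np.sum(reach_true[off_diag] != reach_est[off_diag])
    return disagreements / (p * (p - 1))

# True paths: V1 -> V2 -> V3 (so V1 also reaches V3).
reach_true = np.array([[0, 1, 1],
                       [0, 0, 1],
                       [0, 0, 0]], dtype=bool)
# An estimate that misses the indirect path V1 ~~> V3.
reach_est = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [0, 0, 0]], dtype=bool)
# One disagreement out of p(p-1) = 6 ordered pairs.
assert np.isclose(normalized_error(reach_true, reach_est), 1 / 6)
```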
Furthermore, HSI is in\ufb02uenced by all other indicies and SSEC only affects HSI which these \ufb01ndings are compatible with the previous results in (Hyv\u00a8 arinen et al., 2010). 6." + }, + { + "url": "http://arxiv.org/abs/1905.04634v5", + "title": "One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them", + "abstract": "We consider distributed statistical optimization in one-shot setting, where\nthere are $m$ machines each observing $n$ i.i.d. samples. Based on its observed\nsamples, each machine sends a $B$-bit-long message to a server. The server then\ncollects messages from all machines, and estimates a parameter that minimizes\nan expected convex loss function. We investigate the impact of communication\nconstraint, $B$, on the expected error and derive a tight lower bound on the\nerror achievable by any algorithm. We then propose an estimator, which we call\nMulti-Resolution Estimator (MRE), whose expected error (when $B\\ge\\log mn$)\nmeets the aforementioned lower bound up to poly-logarithmic factors, and is\nthereby order optimal. We also address the problem of learning under tiny\ncommunication budget, and present lower and upper error bounds when $B$ is a\nconstant. The expected error of MRE, unlike existing algorithms, tends to zero\nas the number of machines ($m$) goes to infinity, even when the number of\nsamples per machine ($n$) remains upper bounded by a constant. This property of\nthe MRE algorithm makes it applicable in new machine learning paradigms where\n$m$ is much larger than $n$.", + "authors": "Saber Salehkaleybar, Arsalan Sharifnassab, S. Jamaloddin Golestani", + "published": "2019-05-12", + "updated": "2019-12-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC", + "stat.ML" + ], + "main_content": "Introduction In recent years, there has been a growing interest in various learning tasks over large scale data generated and collected via smart phones and mobile applications. 
In order to carry out a learning task over this data, a naive approach is to collect the data in a centralized server, which might be infeasible or undesirable due to communication constraints or privacy reasons. For learning statistical models in a distributed fashion, several works have focused on designing communication-efficient algorithms for various machine learning applications (Duchi et al., 2012; Braverman et al., 2016; Chang et al., 2017; Diakonikolas et al., 2017; Lee et al., 2017).

∗. Parts of this work (including weaker versions of Theorems 3 and 6) are presented in Sharifnassab et al. (2019) at Neurips 2019.

[Figure 1: A distributed system of m machines, each having access to n independent sample functions from an unknown distribution P. Each machine sends a signal to a server based on its observations. The server receives all signals and outputs an estimate θ̂ for the optimization problem in (2).]

In this paper, we consider the problem of statistical optimization in a distributed setting as follows. Consider an unknown distribution P over a collection F of differentiable convex functions with Lipschitz first order derivatives, defined over a convex region in R^d. There are m machines, each observing n i.i.d. sample functions from P. Each machine processes its observed data and transmits a signal of a certain length to a server. The server then collects all the signals and outputs an estimate of the parameter θ∗ that minimizes the expected loss, i.e., min_θ E_{f∼P}[f(θ)]. See Fig. 1 for an illustration of the system model.
We focus on the distributed aspect of the problem, considering an arbitrarily large number of machines (m), and present tight lower bounds and matching upper bounds on the estimation error. In particular, \u2022 Under a general communication budget of $B \\ge d \\log mn$ bits per transmission, we present a tight lower bound and an order-optimal estimator that achieves this bound up to polylogarithmic factors. More specifically, we show that $\\|\\hat{\\theta}-\\theta^*\\| = \\tilde{\\Theta}\\big(\\max\\big(n^{-1/2}(mB)^{-1/d},\\, (mn)^{-1/2}\\big)\\big)$. \u2022 For the regime in which the communication budget is very small, with a constant number of bits per transmission, we present upper and lower bounds on the estimation error and show that the error can be made arbitrarily small if m and n tend to infinity simultaneously. \u2022 Compared to previous works that consider function classes with Lipschitz continuous second or third order derivatives, our algorithms and bounds are designed and derived for a broader class of functions with Lipschitz continuous first order derivatives. This brings our model closer to real-world learning applications, where the loss landscapes involved are highly non-smooth. 1.1 Background The distributed setting considered here has recently been employed in a new machine learning paradigm called Federated Learning (Kone\u010dn\u00fd et al., 2015). In this framework, training data is kept on users' computing devices due to privacy concerns, and the users participate in the training process without revealing their data. As an example, Google has been working on this paradigm in their recent project, Gboard, the Google keyboard (McMahan and Ramage, 2017). Besides communication constraints, one of the main challenges in this paradigm is that each machine has a small amount of data. In other words, the system operates in a regime where m is much larger than n (Chen et al., 2017).
A large body of the distributed statistical optimization/estimation literature considers the \u201cone-shot\u201d setting, in which each machine communicates with the server merely once (Zhang et al., 2013). In these works, the main objective is to minimize the number of transmitted bits while keeping the estimation error as low as the error of a centralized estimator, in which the entire data is co-located on the server. If we impose no limit on the communication budget, then each machine can encode its entire data into a single message and send it to the server. In this case, the server acquires the entire data from all machines, and the distributed problem reduces to a centralized problem. We call the sum of the observed functions at all machines the centralized empirical loss, and refer to its minimizer as the centralized solution. It is part of the folklore that the centralized solution is order optimal and its expected error is $\\Theta\\big(1/\\sqrt{mn}\\big)$ (Lehmann and Casella, 2006; Zhang et al., 2013). Clearly, no algorithm can beat the performance of the best centralized estimator. 1.1.1 UPPER BOUNDS Zhang et al. (2012) studied a simple averaging method where each machine obtains the empirical minimizer of its observed functions and sends this minimizer to the server through an $O(\\log mn)$-bit message. The output of the server is then the average of all received empirical minimizers. Zhang et al. (2012) showed that the expected error of this algorithm is no larger than $O\\big(1/\\sqrt{mn} + 1/n\\big)$, provided that: (1) all functions are convex and twice differentiable with Lipschitz continuous second derivatives, and (2) the objective function $E_{f\\sim P}[f(\\theta)]$ is strongly convex at $\\theta^*$. Under the extra assumption that the functions are three times differentiable with Lipschitz continuous third derivatives, Zhang et al. (2012) also present a bootstrap method whose expected error is $O\\big(1/\\sqrt{mn} + 1/n^{1.5}\\big)$.
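The averaging method described above is straightforward to sketch. The following toy simulation is illustrative code, not the authors' implementation: it uses scalar quadratic losses $f_z(\theta) = (\theta - z)^2$ with z drawn from a Gaussian, for which each machine's empirical minimizer is its local sample mean and the server's output is the average of those minimizers (quantization to O(log mn) bits is omitted for simplicity).

```python
import random

def avgm(samples_per_machine):
    # Each machine holds n scalar samples z; the empirical minimizer of
    # sum_j (theta - z_j)^2 is the local sample mean.
    local_minimizers = [sum(z) / len(z) for z in samples_per_machine]
    # The server outputs the average of the received minimizers.
    return sum(local_minimizers) / len(local_minimizers)

random.seed(0)
mu = 0.7          # true minimizer of F(theta) = E[(theta - z)^2]
m, n = 1000, 10   # machines, samples per machine
data = [[random.gauss(mu, 1.0) for _ in range(n)] for _ in range(m)]
theta_hat = avgm(data)
print(abs(theta_hat - mu))
```

Note that for these symmetric quadratic losses the average of the local minimizers coincides with the centralized solution (the grand sample mean), so the error behaves like $1/\sqrt{mn}$; the extra $1/n$ term in the general bound comes from curvature asymmetries of the losses.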
It is easy to see that, under the above assumptions, the averaging method and the bootstrap method achieve the performance of the centralized solution if m \u2264n and m \u2264n2, respectively. Recently, Jordan et al. (2018) proposed to optimize a surrogate loss function using Taylor series expansion. This expansion can be constructed at the server by communicating O(m) number of d-dimensional vectors. Under similar assumption on the loss function as in (Zhang et al., 2012), they showed that the expected error of their method is no larger than O \u00001/\u221amn + 1/n9/4\u0001 . It, therefore, achieves the performance of the centralized solution for m \u2264n3.5. However, note that when n is \ufb01xed, all aforementioned bounds remain lower bounded by a positive constant, even when m goes to in\ufb01nity. In (Sharifnassab et al., 2019), we relaxed the second order differentiability assumption, and considered a model that allows for convex loss functions that have Lipschitz continuous \ufb01rst order derivatives. There we presented an algorithm (called MRE-C-log) with the communication budget of log mn bits per transmission, and proved the upper bound \u02dc O \u0000m\u22121/max(d,2)n\u22121/2\u0001 on its estimation error. In this work we extend this algorithm to general communication budget of B bits per transmission, for arbitrary values of B \u2265log mn. We also derive a lower bound on the estimation error of any algorithm. This lower bound meets the error-upper-bound of the MRE-C algorithm, showing that the MRE-C estimator has order optimal accuracy up to a poly-logarithmic factor. 1.1.2 LOWER BOUNDS Shamir (2014) considered various communication constraints and showed that no distributed algorithm can achieve performance of the centralized solution with budget less than \u2126(d2) bits per 3 \fmachine. For the problem of sparse linear regression, Braverman et al. 
(2016) proved that any algorithm that achieves the optimal minimax squared error must communicate $\\Omega(m \\times \\min(n, d))$ bits in total from the machines to the server. Later, Lee et al. (2017) proposed an algorithm that achieves the optimal mean squared error for the problem of sparse linear regression when $d < n$. Zhang et al. (2013) derived an information-theoretic lower bound on the minimax error of parameter estimation in the presence of communication constraints. They showed that, in order to achieve the same precision as the centralized solution for estimating the mean of a d-dimensional Gaussian distribution, the machines must transmit at least $\\Omega\\big(md/\\log(m)\\big)$ bits in total. Garg et al. (2014) improved this bound to $\\Omega(dm)$ bits using direct-sum theorems (Chakrabarti et al., 2001). 1.1.3 ONE-SHOT VS. SEVERAL-SHOT MODELS Besides the one-shot model, there is another communication model that allows for several transmissions back and forth between the machines and the server. Most existing works of this type (Bottou, 2010; Lian et al., 2015; Zhang et al., 2015; McMahan et al., 2017) involve variants of stochastic gradient descent, in which the server queries the machines at each iteration for the gradient of the empirical loss at certain points. The gradient vectors are then aggregated at the server to update the model's parameters. The expected error of such algorithms typically scales as $\\tilde{O}\\big(1/\\sqrt{k}\\big)$, where k is the number of iterations. The bidirectional communication in the several-shot model makes it convenient for the server to guide the search by sending queries to the machines (e.g., asking for gradients at specific points of interest). This powerful ability typically leads to more efficient communication in the case of convex loss landscapes.
However, the two-way communication requires the users (or machines) to be available during the time of training, so that they can respond to the server's queries in real time. Moreover, in such iterative algorithms, the users should be willing to reveal the parts of their information asked for by the server. In contrast, in the one-shot setting, because of the one-way communication, SGD-like iterative algorithms are not applicable. The one-shot setting calls for a totally different type of algorithms and lower bounds. 1.2 Our contributions We study the problem of one-shot distributed learning under milder assumptions than previously available in the literature. We assume that the loss functions, $f \\in F$, are differentiable with Lipschitz continuous first order derivatives. This is in contrast to the works of (Zhang et al., 2012) and (Jordan et al., 2018), which assume Lipschitz continuity of second or third derivatives. The assumption is practically important, since the loss landscapes involved in several learning applications are highly non-smooth. The reader should keep this model difference in mind when comparing our bounds with existing results. See Table 1 for a summary of our results. We consider a setting where the loss landscape is convex, and derive a lower bound on the estimation error under a communication budget of B bits per transmission, for all $B \\ge d \\log mn$. We also propose an algorithm (which we call the Multi-Resolution Estimator for Convex setting (MRE-C)), and show that its estimation error meets the lower bound up to a poly-logarithmic factor. Table 1 (summary of our results): (a) for $B \\ge d \\log(mn)$, the lower bound $\\|\\hat{\\theta}-\\theta^*\\| = \\tilde{\\Omega}\\big(\\max\\big(n^{-1/2}(mB)^{-1/d}, (mn)^{-1/2}\\big)\\big)$ (Theorem 1) and the matching upper bound $\\|\\hat{\\theta}-\\theta^*\\| = \\tilde{O}\\big(\\max\\big(n^{-1/2}(mB)^{-1/d}, (mn)^{-1/2}\\big)\\big)$ (Theorem 3); (b) for constant B and $n = 1$, $\\|\\hat{\\theta}-\\theta^*\\| = \\Omega(1)$ (Theorem 5); (c) for $B = d$, $\\|\\hat{\\theta}-\\theta^*\\| = O\\big(1/\\sqrt{n} + 1/\\sqrt{m}\\big)$ (Theorem 6). Therefore, the MRE-C algorithm has order optimal accuracy. Combining these lower and upper bounds, we show that for any communication budget B no smaller than $d \\log mn$, we have $\\|\\hat{\\theta}-\\theta^*\\| = \\tilde{\\Theta}\\big(\\max\\big(n^{-1/2}(mB)^{-1/d}, (mn)^{-1/2}\\big)\\big)$. Moreover, the computational complexity of the MRE-C algorithm is polynomial in m, n, and d. Our results also provide the minimum communication budget required for any estimator to achieve the performance of the centralized algorithm. We also study a regime with a tiny communication budget, where B is bounded by a constant. We show that when B is a constant and $n = 1$, the error of any estimator is lower bounded by a constant, even when m tends to infinity. On the other hand, we propose an algorithm with a budget of $B = d$ bits per transmission and show that its estimation error is no larger than $O\\big(n^{-1/2} + m^{-1/2}\\big)$. We evaluate the performance of the MRE-C algorithm on two different machine learning tasks (with convex landscapes) and compare it with the existing methods in (Zhang et al., 2012). We show via experiments, for the $n = 1$ regime, that the MRE-C algorithm outperforms these algorithms. The observations are also in line with the expected error bounds we give in this paper and those previously available. In particular, in the $n = 1$ regime, the expected error of the MRE-C algorithm goes to zero as the number of machines increases, while the expected errors of the previously available estimators remain lower bounded by a constant. Unlike existing works, our results concern a regime where the number of machines m is large, and our bounds tend to zero as m goes to infinity, even if the number of per-machine observations (n) is bounded by a constant.
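As a quick numeric companion to the combined bound, the two terms $n^{-1/2}(mB)^{-1/d}$ and $(mn)^{-1/2}$ balance exactly at $B = m^{d/2-1}$, which is the order-level minimum budget at which the bound matches the centralized rate. The snippet below is an illustration with assumed example values of m, n, d, not figures from the paper; constants and logarithmic factors are dropped.

```python
def error_bound(m, n, B, d):
    # Two terms of the order-optimal bound (constants / log factors dropped).
    term_comm = n ** -0.5 * (m * B) ** (-1.0 / d)
    term_centralized = (m * n) ** -0.5
    return max(term_comm, term_centralized)

m, n, d = 10 ** 6, 10, 4
B_star = m ** (d / 2 - 1)  # balance point: (m * B)^(1/d) = sqrt(m)
t1 = n ** -0.5 * (m * B_star) ** (-1.0 / d)
t2 = (m * n) ** -0.5
print(B_star, t1, t2)  # at B_star the two terms coincide
```

Increasing B beyond this balance point does not improve the bound, since the centralized term then dominates.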
This is contrary to the algorithms in (Zhang et al., 2012), whose errors tend to zero only when n goes to infinity. In fact, when $n = 1$, a simple example (see Footnote 1 below) shows that the expected errors of the simple Averaging and Bootstrap algorithms in (Zhang et al., 2012) remain lower bounded by a constant, for all values of m. The algorithm in (Jordan et al., 2018) suffers from a similar problem, and its expected error may not go to zero when $n = 1$. 1.3 Outline The paper is organized as follows. We begin with a detailed model and problem definition in Section 2. We then propose our lower bound on the estimation error in Section 3, under general communication constraints. In Section 4, we present the MRE-C algorithm and its error upper bound. Section 5 then provides our results for the regime where the communication budget is limited to a constant number of bits per transmission. After that, we report our numerical experiments in Section 6. Finally, in Section 7 we conclude the paper and discuss several open problems and directions for future research. All proofs are relegated to the appendix for improved readability. Footnote 1: Consider two convex functions $f_0(\\theta) = \\theta^2 + \\theta^3/6$ and $f_1(\\theta) = (\\theta-1)^2 + (\\theta-1)^3/6$ over $[0, 1]$. Consider a distribution P that associates probability 1/2 to each function. Then, $E_P[f(\\theta)] = f_0(\\theta)/2 + f_1(\\theta)/2$, and the optimal solution is $\\theta^* = (\\sqrt{15}-3)/2 \\approx 0.436$. On the other hand, in the averaging method proposed in (Zhang et al., 2012), assuming $n = 1$, the empirical minimizer of each machine is either 0 if it observes $f_0$, or 1 if it observes $f_1$. Therefore, the server receives messages 0 and 1 with equal probability, and $E[\\hat{\\theta}] = 1/2$. Hence, $E[|\\hat{\\theta} - \\theta^*|] > 0.06$, for all values of m. 2. Problem Definition Consider a positive integer d and a collection F of real-valued convex functions over $[-1, 1]^d$.
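The footnoted counterexample for the n = 1 averaging method can be checked numerically. The sketch below (illustrative code, not the authors') locates the minimizer of $F = (f_0 + f_1)/2$ by bisection on its derivative and verifies that the averaging estimator, whose output concentrates at 1/2, is off by more than 0.06.

```python
def F_prime(theta):
    # F = (f0 + f1)/2 with f0 = t^2 + t^3/6 and f1 = (t-1)^2 + (t-1)^3/6
    d0 = 2 * theta + theta ** 2 / 2
    d1 = 2 * (theta - 1) + (theta - 1) ** 2 / 2
    return (d0 + d1) / 2

# Bisection works because F_prime is increasing on [0, 1] (F is strongly convex).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if F_prime(mid) < 0:
        lo = mid
    else:
        hi = mid
theta_star = (lo + hi) / 2

theta_avg = 0.5  # averaging with n = 1: machines report 0 or 1, each w.p. 1/2
print(theta_star, abs(theta_avg - theta_star))  # ~0.436 and ~0.064 > 0.06
```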
Let P be an unknown probability distribution over the functions in F. Consider the expected loss function F(\u03b8) = Ef\u223cP \u0002 f(\u03b8) \u0003 , \u03b8 \u2208[\u22121, 1]d. (1) Our goal is to learn a parameter \u03b8\u2217that minimizes F: \u03b8\u2217= argmin \u03b8\u2208[\u22121,1]d F(\u03b8). (2) The expected loss is to be minimized in a distributed fashion, as follows. We consider a distributed system comprising m identical machines and a server. Each machine i has access to a set of n independently and identically distributed samples {fi 1, \u00b7 \u00b7 \u00b7 , fi n} drawn from the probability distribution P. Based on these observed functions, machine i then sends a signal Y i to the server. We assume that the length of each signal is limited to b bits. The server then collects signals Y 1, . . . , Y m and outputs an estimation of \u03b8\u2217, which we denote by \u02c6 \u03b8. See Fig. 1 for an illustration of the system model.2 We let the following assumptions be in effect throughout the paper: Assumption 1 (Differentiability) We assume: \u2022 Each f \u2208F is once differentiable and its derivatives are bounded and Lipschitz continuous. More concretely, for any f \u2208F and any \u03b8, \u03b8\u2032 \u2208[\u22121, 1]d, we have |f(\u03b8)| \u2264 \u221a d, \u2225\u2207f(\u03b8)\u2225\u22641, and \u2225\u2207f(\u03b8) \u2212\u2207f(\u03b8\u2032)\u2225\u2264\u2225\u03b8 \u2212\u03b8\u2032\u2225. \u2022 The minimizer of F lies in the interior of the cube [\u22121, 1]d. Equivalently, there exists \u03b8\u2217\u2208 (\u22121, 1)d such that \u2207F(\u03b8\u2217) = 0. In Assumption 1 we consider a class of functions with Lipschitz continuous \ufb01rst order derivatives, compared to previous works that consider function classes with Lipschitz continuous second or third order derivatives (Zhang et al., 2013; Jordan et al., 2018). 
This broadens the scope and applicability of our model to learning tasks where the loss landscape is far from being smooth (see Section 7 for further discussions). Assumption 2 (Convexity) We assume: \u2022 Every f \u2208F is convex. \u2022 Distribution P is such that F (de\ufb01ned in (1)) is strongly convex. More speci\ufb01cally, there is a constant \u03bb > 0 such that for any \u03b81, \u03b82 \u2208[\u22121, 1]d, we have F(\u03b82) \u2265F(\u03b81) + \u2207F(\u03b81)T (\u03b82 \u2212 \u03b81) + \u03bb\u2225\u03b82 \u2212\u03b81\u22252. 2. The considered model here is similar to the one in (Salehkaleybar et al., 2019). 6 \fThe convexity assumption (Assumption 2) is common in the literature of distributed learning (Zhang et al., 2013; Jordan et al., 2018). When F is strongly convex, the objective is often designing estimators that minimize E \u0002 \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u22252\u0003 . Given the upper and lower bounds on the second derivative (in Assumptions 1 and 2), this is equivalent (up to multiplicative constants) with minimization of E \u0002 F(\u02c6 \u03b8) \u2212F(\u03b8\u2217) \u0003 . Note also that the assumption \u2225\u2207F(x)\u2225\u22641 (in Assumption 2) implies that \u03bb \u2264 1 \u221a d . (3) This is because if \u03bb > 1/ \u221a d, then \u2225\u2207F(x)\u2225> 1, for some x \u2208[\u22121, 1]d. 3. Main Lower Bound The following theorem shows that in a system with m machines, n samples per machine, and B bits per signal transmission, no estimator can achieve estimation error less than \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225= \u02dc \u2126 \u0000max \u0000n\u22121/2(mB)\u22121/d, (mn)\u22121/2\u0001\u0001 . The proof is given in Appendix B. Theorem 1 Suppose that Assumption 2 is in effect for \u03bb \u22641/ \u000010 \u221a d \u0001 . 
Then, for any estimator with output \u02c6 \u03b8, there exists a probability distribution over F such that Pr \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225= \u02dc \u2126 \u0012 max \u0012 1 \u221an (mB)1/d , 1 \u221amn \u0013\u0013 ! \u22651 3. (4) More speci\ufb01cally, for large enough values of mn, for any estimator there is a probability distribution over F such that with probability at least 1/3, \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225\u2265max 1 640 \u00d7 501/d d2.5 log2+3/d(mn) \u00d7 1 \u221an (mB)1/d , \u221a d 5\u221amn ! (5) In light of (3), the assumption \u03bb \u22641/ \u000010 \u221a d \u0001 in the statement of the theorem appears to be innocuous, and is merely aimed to facilitate the proofs. The proof is given in Section B. The key idea is to show that \ufb01nding an O \u0000n\u22121/2m\u22121/d)-accurate minimizer of F (i.e., \u2225\u02c6 \u03b8\u2212\u03b8\u2217\u2225= O \u0000n\u22121/2m\u22121/d)) is as dif\ufb01cult as \ufb01nding an O \u0000n\u22121/2m\u22121/d)-accurate approximation of \u2207F for all points in an n\u22121/2neighborhood of \u03b8\u2217. This is quite counter-intuitive, because the latter problem looks way more dif\ufb01cult than the former. To see the unexpectedness more clearly, it suggests that in the special case where n = 1, \ufb01nding an m\u22121/d-approximation of \u2207F over the entire domain is no harder than \ufb01nding an m\u22121/d-approximation of \u2207F at a single (albeit unknown) point \u03b8\u2217. This provides a key insight bene\ufb01cial for devising estimation algorithms: Insight 1 Finding an \u02dc O \u0000n\u22121/2m\u22121/d)-accurate minimizer of F is as dif\ufb01cult as \ufb01nding an O \u0000n\u22121/2m\u22121/d)accurate approximation of \u2207F over an n\u22121/2-neighborhood of \u03b8\u2217. This inspires estimators that \ufb01rst approximate \u2207F over a neighborhood of \u03b8\u2217and then choose \u02c6 \u03b8 to be a point with minimum \u2225\u2207F\u2225. 
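Insight 1 suggests an estimator template: approximate the gradient of F at every point of a grid around the optimum, then return the grid point whose approximate gradient has the smallest norm. A minimal 1-D sketch follows; the toy objective F(theta) = (theta - 0.3)^2 and the worst-case perturbation standing in for approximation error are assumptions for illustration, not the paper's construction.

```python
def grad_F(theta):
    # True gradient of the toy objective F(theta) = (theta - 0.3)^2.
    return 2 * (theta - 0.3)

eps = 0.01  # accuracy of the gradient approximation
grid = [i / 100 for i in range(-100, 101)]  # grid over [-1, 1]
# Adversarially perturbed gradient values, off by at most eps everywhere.
approx = {p: grad_F(p) + (eps if p < 0.3 else -eps) for p in grid}
# Estimator: grid point with the smallest approximate gradient magnitude.
theta_hat = min(grid, key=lambda p: abs(approx[p]))
print(theta_hat)
```

With a lambda-strongly convex F, any point where the approximate gradient norm is minimal lies within roughly (grid spacing + eps / lambda) of the true minimizer, which is the mechanism the lower-bound argument exploits.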
We follow a similar idea in Section 4 to design the MRE-C algorithm with order optimal error. As an immediate corollary of Theorem 3, we obtain a lower bound on the moments of estimation error. 7 \fCorollary 2 For any estimator \u02c6 \u03b8, there exists a probability distribution over F such that for any k \u2208N, E h \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225ki = \u02dc \u2126 max \u0012 1 \u221an (mB)1/d , 1 \u221amn \u0013k! . (6) In view of (6), no estimator can achieve performance of a centralized solution with the budget of B = O(log mn) when d \u22653. As discussed earlier in the Introduction section, this is in contrast to the result in (Zhang et al., 2012) that a simple averaging algorithm achieves O(1/\u221anm) accuracy (similar to a centralized solution), in a regime that n > m. This apparent contradiction is resolved by the difference in the set of functions considered in the two works. The set of functions in (Zhang et al., 2012) are twice differentiable with Lipschitz continuous second derivatives, while we do not assume existence or Lipschitz continuity of second derivatives. 4. MRE-C Algorithm and its Error Upper Bound Here, we propose an order optimal estimator under general communication budget B, for B \u2265 d log mn. The high level idea, in view of Insight 1, is to acquire an approximation of derivatives of F over a neighborhood of \u03b8\u2217, and then letting \u02c6 \u03b8 be the minimizer of size of these approximate gradients. For ef\ufb01cient gradient approximation, transmitted signals are designed such that the server can construct a multi-resolution view of gradient of function F(\u03b8) around a promising grid point. Thus, we call the proposed algorithm \u201cMulti-Resolution Estimator for Convex loss (MRE-C)\". The description of MRE-C is as follows: Each machine i observes n functions and sends a signal Y i comprising \u2308B/(d log mn)\u2309subsignals of length \u230ad log mn\u230b. 
Each sub-signal has three parts of the form (s, p, \u2206). The three parts s, p, and \u2206are as follows. \u2022 Part s: Consider a grid G with resolution log(mn)/\u221an over the d-dimensional cube [\u22121, 1]d. Each machine i computes the minimizer of the average of its \ufb01rst n/2 observed functions, \u03b8i = argmin \u03b8\u2208[\u22121,1]d n/2 X j=1 fi j(\u03b8). (7) It then lets s be the closest grid point to \u03b8i. Note that all sub-signals of a machine have the same s-part. \u2022 Part p: Let \u03b4 \u225c2d log3(mn) max 1 (mB)1/d , 2d/2 m1/2 ! . (8) Let t = log(1/\u03b4). Without loss of generality we assume that t is a non-negative integer.3 Let Cs be a d-dimensional cube with edge size 2 log(mn)/\u221an centered at s. Consider a sequence of t+1 grids on Cs as follows. For each l = 0, . . . , t, we partition the cube Cs into 2ld smaller equal sub-cubes with edge size 2\u2212l+1 log(mn)/\u221an. The lth grid \u02dc Gl s comprises the centers of 3. If \u03b4 > 1, we reset the value of \u03b4 to \u03b4 = 1. It is not dif\ufb01cult to check that the rest of the proof would not be upset in this spacial case. 8 \f\ud835\udc60 \ud835\udc60 \ud835\udc5d\u2032 \ud835\udc5d 2 log \ud835\udc5a\ud835\udc5b/ \ud835\udc5b Grid \ud835\udc3a Cube \ud835\udc36\ud835\udc60 Figure 2: An illustration of grid G and cube Cs centered at point s for d = 2. The point p belongs to \u02dc G2 s and p\u2032 is the parent of p. these smaller cubes. Then, each \u02dc Gl s has 2ld grid points. For any point p\u2032 in \u02dc Gl s, we say that p\u2032 is the parent of all 2d points in \u02dc Gl+1 s that are in the \u00002\u2212l \u00d7 (2 log mn)/\u221an \u0001 -cube centered at p\u2032 (see Fig. 2). Thus, each point \u02dc Gl s (l < t) has 2d children. In each sub-signal, to select p, we randomly choose an l from 0, . . . , t with probability Pr(l) = 2(d\u22122)l Pt j=0 2(d\u22122)j (9) We then let p be a uniformly chosen random grid point in \u02dc Gl s. 
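The level-selection rule in Part p, with probability proportional to $2^{(d-2)l}$ for levels l = 0, ..., t, can be sketched as follows. This is illustrative code with the grid bookkeeping simplified: a point of the level-l grid, which has $2^{ld}$ points, is represented by a flat index.

```python
import random

def level_probs(d, t):
    # Pr(l) proportional to 2^((d - 2) * l), for l = 0, ..., t  (Eq. (9)).
    weights = [2 ** ((d - 2) * l) for l in range(t + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def sample_subsignal_point(d, t, rng):
    probs = level_probs(d, t)
    l = rng.choices(range(t + 1), weights=probs)[0]
    # p is a uniformly random point of the level-l grid (2^(l*d) points).
    p_index = rng.randrange(2 ** (l * d))
    return l, p_index

rng = random.Random(0)
print(level_probs(3, 4), sample_subsignal_point(3, 4, rng))
```

Note that for d = 2 all weights equal 1, so levels are chosen uniformly, while for d >= 3 finer levels receive exponentially more mass, matching the fact that a finer grid has exponentially more points to cover.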
The level l and point p chosen in different sub-signals of a machine are independent and have the same distribution. Note that O(d log(1/\u03b4)) = O(d log(mn)) bits suf\ufb01ce to identify p uniquely. \u2022 Part \u2206: We let \u02c6 F i(\u03b8) \u225c2 n n X j=n/2+1 fi j(\u03b8), (10) and refer to it as the empirical function of the ith machine. For each sub-signal, if the selected p in the previous part is in \u02dc G0 s, i.e., p = s, then we set \u2206to the gradient of \u02c6 F i at \u03b8 = s. Otherwise, if p is in \u02dc Gl s for l \u22651, we let \u2206\u225c\u2207\u02c6 F i(p) \u2212\u2207\u02c6 F i(p\u2032), where p\u2032 \u2208\u02dc Gl\u22121 s is the parent of p. Note that \u2206is a d-dimensional vector whose entries range over \u00002\u2212l\u221a d log(mn)/\u221an \u0001 \u00d7 \u0002 \u22121, +1 \u0003 . This is due to the Lipschitz continuity of the derivative of the functions in F (see Assumption 1) and the fact that \u2225p\u2212p\u2032\u2225= 2\u2212l\u221a d log(mn)/\u221an. Hence, O(d log(mn)) bits suf\ufb01ce to represent \u2206within accuracy 2\u03b4 log(mn)/\u221an. At the server, we choose an s\u2217\u2208G that has the largest number of occurrences in the received signals. Then, based on the signals corresponding to \u02dc G0 s\u2217, we approximate the gradients of F over Cs\u2217as follows. We \ufb01rst eliminate redundant sub-signals so that no two surviving sub-signals from a same machine have the same p-parts (consequently, for each machine, the surviving sub-signals are distinct). We call this process \u201credundancy elimination\u201d. 
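The claim above, that O(d log mn) bits suffice to represent Delta to the stated accuracy, is a standard uniform-quantization argument: with k bits per coordinate on a known range [-R, R], the worst-case rounding error is R / 2^k. The helper below is a hypothetical sketch, not the paper's exact encoding.

```python
def quantize(x, R, k):
    # Uniform k-bit quantizer on [-R, R]: 2^k cells of width 2R / 2^k,
    # each value mapped to its cell center (max error R / 2^k).
    cells = 2 ** k
    width = 2 * R / cells
    i = min(int((x + R) / width), cells - 1)  # cell index, clipped at the edge
    return -R + (i + 0.5) * width

R, k = 1.0, 10
worst = max(abs(quantize(x / 1000 * R, R, k) - x / 1000 * R)
            for x in range(-1000, 1001))
print(worst)  # stays below R / 2**k
```

Applied per coordinate of Delta, whose entries shrink by a factor 2^(-l) with the level l, this gives the accuracy stated in the text with a budget logarithmic in mn.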
We then let Ns\u2217be the total number of 9 \fsurviving sub-signals that contain s\u2217in their p part, and compute \u02c6 \u2207F(s\u2217) = 1 Ns\u2217 X Subsignals of the form (s\u2217,s\u2217,\u2206) after redundancy elimination \u2206, Then, for any point p \u2208\u02dc Gl s\u2217with l \u22651, we let \u02c6 \u2207F(p) = \u02c6 \u2207F(p\u2032) + 1 Np X Subsignals of the form (s\u2217,p,\u2206) after redundancy elimination \u2206, (11) where Np is the number of signals having point p in their second argument, after redundancy elimination. Finally, the sever lets \u02c6 \u03b8 be a grid point p in \u02dc Gt s\u2217with the smallest \u2225\u02c6 \u2207F(p)\u2225. Theorem 3 Let \u02c6 \u03b8 be the output of the above algorithm. Then, Pr \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225> 4d1.5 log4(mn) \u03bb max 1 \u221an (mB)1/d , 2d/2 \u221amn !! = exp \u0010 \u2212\u2126 \u0000log2(mn) \u0001\u0011 . The proof is given in Appendix G. The proof goes by \ufb01rst showing that s\u2217is a closest grid point of G to \u03b8\u2217with high probability. We then show that for any l \u2264t and any p \u2208\u02dc Gl s\u2217, the number of received signals corresponding to p is large enough so that the server obtains a good approximation of \u2207F at p. Once we have a good approximation \u02c6 \u2207F of \u2207F at all points of \u02dc Gt s\u2217, a point at which \u02c6 \u2207F has the minimum norm lies close to the minimizer of F. Corollary 4 Let \u02c6 \u03b8 be the output of the above algorithm. There is a constant \u03b7 > 0 such that for any k \u2208N, E \u0002 \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225k\u0003 < \u03b7 4d1.5 log4(mn) \u03bb max 1 \u221an (mB)1/d , 2d/2 \u221amn !!k . Moreover, \u03b7 can be chosen arbitrarily close to 1, for large enough values of mn. The upper bound in Theorem 3 matches the lower bound in Theorem 1 up to a polylogarithmic factor. In this view, the MRE-C algorithm has order optimal error. 
Moreover, as we show in Appendix G, in the course of computations, the server obtains an approximation \u02c6 F of F such that for any \u03b8 in the cube Cs\u2217, we have \u2225\u2207\u02c6 F(\u03b8) \u2212\u2207F(\u03b8)\u2225= \u02dc O \u0000m\u22121/dn\u22121/2). Therefore, the server not only \ufb01nds the minimizer of F, but also obtains an approximation of F at all points inside Cs\u2217. This is in line with our previous observation in Insight 1. 5. Learning under Tiny Communication Budget In this section, we consider the regime that communication budget per transmission is bounded by a constant, i.e., B is a constant independent of m an n. We present a lower bound on the estimation error and propose an estimator whose error vanishes as m and n tend to in\ufb01nity. We begin with a lower bound. The next theorem shows that when n = 1, the expected error is lower bounded by a constant, even if m goes to in\ufb01nity. 10 \fTheorem 5 Let n = 1 and suppose that the number of bits per signal, B, is limited to a constant. Then, there is a distribution P over F such that expected error, EP h \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u2225 i , of any randomized estimator \u02c6 \u03b8 is lower bounded by a constant, for all m \u22651. The constant lower bound holds even when d = 1. The proof is given in Appendix K. There, we construct a distribution P that associates non-zero probabilities to 2b + 2 polynomials of order at most 2b + 2. Theorem 5 shows that the expected error is bounded from below by a constant regardless of m, when n = 1 and B is a constant. We now show that the expected error can be made arbitrarily small as m and n go to in\ufb01nity simultaneously. Theorem 6 Under the communication budget of B = d bits per transmission, there exists a randomized estimator \u02c6 \u03b8 such that E h \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u22252i1/2 = O \u0012 1 \u221an + 1 \u221am \u0013 . The proof is given in Appendix L. 
There, we propose a simple randomized algorithm in which each machine i first computes an $O(1/\\sqrt{n})$-accurate estimate $\\theta^i$ based on its observed functions. It then generates as its output signal a random binary sequence of length d whose jth entry is 1 with probability $(1 + \\theta^i_j)/2$, where $\\theta^i_j$ is the jth entry of $\\theta^i$. The server then computes $\\hat{\\theta}$ based on the average of the received signals. 6. Experiments We evaluated the performance of MRE-C on two learning tasks and compared it with the averaging method (AVGM) of (Zhang et al., 2012). Recall that in AVGM, each machine sends the empirical risk minimizer of its own data to the server, and the average of the received parameters is returned as the output. The first experiment concerns the problem of ridge regression. Here, each sample $(X, Y)$ is generated based on a linear model $Y = X^T\\theta^* + E$, where X, E, and $\\theta^*$ are sampled from $N(0, I_{d\\times d})$, $N(0, 0.01)$, and the uniform distribution over $[0, 1]^d$, respectively. We consider the square loss function with $\\ell_2$-norm regularization: $f(\\theta) = (\\theta^T X - Y)^2 + 0.1\\|\\theta\\|_2^2$. In the second experiment, we perform a logistic regression task, with sample vectors X generated according to $N(0, I_{d\\times d})$ and labels Y randomly drawn from $\\{-1, 1\\}$ with probability $\\Pr(Y = 1 | X, \\theta^*) = 1/(1 + \\exp(-X^T\\theta^*))$. In both experiments, we consider a two-dimensional domain (d = 2) and assume that each machine has access to one sample (n = 1). In Fig. 3, the average of $\\|\\hat{\\theta} - \\theta^*\\|_2$ is computed over 100 instances, for numbers of machines in the range $[10^4, 10^6]$. Both experiments suggest that the average error of MRE-C keeps decreasing as the number of machines increases. This is consistent with the result in Theorem 3, according to which the expected error of MRE-C is upper bounded by $\\tilde{O}(1/\\sqrt{mn})$.
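Returning to the randomized d-bit scheme behind Theorem 6, it admits a very short sketch (illustrative code): each machine rounds its local estimate in [-1, 1]^d to one random bit per coordinate, and the rescaled average of the bits is unbiased, since E[2 * Bernoulli((1 + t)/2) - 1] = t. The local estimates are taken noise-free here to isolate the encoding step.

```python
import random

def encode(theta_i, rng):
    # One bit per coordinate: bit_j = 1 with probability (1 + theta_j) / 2.
    return [1 if rng.random() < (1 + t) / 2 else 0 for t in theta_i]

def server_decode(signals):
    m, d = len(signals), len(signals[0])
    # Unbiased reconstruction: E[2 * bit - 1] = theta_j.
    return [2 * sum(sig[j] for sig in signals) / m - 1 for j in range(d)]

rng = random.Random(1)
theta = [0.4, -0.2]  # each machine's local estimate (noise-free here)
signals = [encode(theta, rng) for _ in range(200_000)]
theta_hat = server_decode(signals)
print(theta_hat)  # close to [0.4, -0.2]
```

The per-coordinate variance of a single signal is at most 1, so averaging over m machines contributes the O(1/sqrt(m)) term of the bound, on top of the O(1/sqrt(n)) accuracy of the local estimates.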
It is evident from the error curves that MRE-C outperforms the AVGM algorithm in both tasks. This is because, when m is much larger than n, the expected error of the AVGM algorithm typically scales as $O(1/n)$, independent of m. Figure 3: The average error of the MRE-C and AVGM algorithms versus the number of machines in two different learning tasks: (a) ridge regression; (b) logistic regression. 7. Discussion We studied the problem of statistical optimization of convex loss landscapes in a distributed system with one-shot communications. We presented matching upper and lower bounds on the estimation error under general communication constraints. We showed that the expected error of any estimator is lower bounded by $\\|\\hat{\\theta}-\\theta^*\\| = \\tilde{\\Omega}\\big(\\max\\big(n^{-1/2}(mB)^{-1/d}, (mn)^{-1/2}\\big)\\big)$. We proposed an algorithm called MRE-C, whose estimation error meets the above lower bound up to a poly-logarithmic factor. More specifically, the MRE-C algorithm has error no larger than $\\|\\hat{\\theta}-\\theta^*\\| = \\tilde{O}\\big(\\max\\big(n^{-1/2}(mB)^{-1/d}, (mn)^{-1/2}\\big)\\big)$. Aside from being order optimal, the MRE-C algorithm has the advantage over the existing estimators that its error tends to zero as the number of machines goes to infinity, even when the number of samples per machine is upper bounded by a constant and the communication budget is limited to $d \\log mn$ bits per transmission. This property is in line with the outperformance of the MRE-C algorithm in the $m \\gg n$ regime, as verified in our experimental results.
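The persistent bias of averaging in the m >> n regime is easy to reproduce in a 1-D analogue of the ridge experiment (a simplification of the paper's 2-D setup, used here only as an illustration): with n = 1, each machine's regularized minimizer of f(theta) = (theta * x - y)^2 + 0.1 * theta^2 is x * y / (x^2 + 0.1), which systematically shrinks toward 0, so the averaged estimate stays biased no matter how large m is.

```python
import random

def local_ridge_minimizer(x, y, lam=0.1):
    # argmin over theta of (theta * x - y)^2 + lam * theta^2  (1-D closed form)
    return x * y / (x * x + lam)

random.seed(2)
theta_true = 0.8
m = 100_000
estimates = []
for _ in range(m):           # n = 1: a single sample per machine
    x = random.gauss(0.0, 1.0)
    y = x * theta_true       # noiseless labels, to isolate the shrinkage bias
    estimates.append(local_ridge_minimizer(x, y))
theta_avgm = sum(estimates) / m
print(theta_avgm)  # bounded away from 0.8 even as m grows
```

Every local minimizer equals x^2 * theta_true / (x^2 + 0.1) < theta_true, so the average underestimates by a constant independent of m, which is exactly the gap the MRE-C curves in Figure 3 avoid.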
The key insight behind the proof of the lower bound and the design of our algorithm is an observation that emerges from our proofs: in the one-shot model, finding an $O(n^{-1/2}m^{-1/d})$-accurate minimizer of $F$ is as difficult as finding an $O(n^{-1/2}m^{-1/d})$-accurate approximation of $\nabla F$ for all points in an $n^{-1/2}$-neighborhood of $\theta^*$. Capitalizing on this observation, the MRE-C algorithm computes, in an efficient way, an approximation of the gradient of the expected loss over a neighborhood of $\theta^*$. It then outputs a minimizer of the approximate gradient norms as its estimate of the loss minimizer. It is quite counterintuitive that, while the MRE-C algorithm carries out an intricate and seemingly redundant task of approximating the loss function for all points in a region, it is still very efficient, and indeed order optimal in terms of estimation error and sample complexity. This remarkable observation is in line with the above insight that finding an approximate minimizer is as hard as finding an approximation of the function over a relatively large neighborhood of the minimizer. We also addressed the problem of distributed learning under a tiny (constant) communication budget. We showed that when the budget $B$ is a constant and $n = 1$, the expected error of any estimator is lower bounded by a constant, even when $m$ goes to infinity. We then proposed an estimator with a budget of $B = d$ bits per transmission and showed that its expected error is no larger than $O\big(n^{-1/2} + m^{-1/2}\big)$. Our algorithms and bounds are designed and derived for a broader class of functions with Lipschitz continuous first order derivatives, compared to previous works that consider function classes with Lipschitz continuous second or third order derivatives. This assumption is both practically important and technically challenging.
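The output step described above, returning a minimizer of the approximate gradient norms, can be illustrated on a toy quadratic loss. This is a hedged sketch with an assumed grid, noise level, and loss function; it is not the full MRE-C construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss F(theta) = ||theta - theta*||^2 / 2, so grad F(theta) = theta - theta*.
theta_star = np.array([0.4, -0.2])

# A grid of candidate points standing in for the neighborhood of theta*.
xs = np.linspace(-1.0, 1.0, 41)
grid = np.array([(a, b) for a in xs for b in xs])

# Noisy gradient approximations at every grid point (the algorithm's gradient
# estimates are modelled here as the true gradient plus small noise).
grad_hat = (grid - theta_star) + 0.01 * rng.standard_normal(grid.shape)

# Output a grid point minimizing the approximate gradient norm.
theta_hat = grid[np.argmin(np.linalg.norm(grad_hat, axis=1))]
```

Even with noisy gradients, the point with the smallest approximate gradient norm lands close to the true minimizer, which is the intuition behind the output rule.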
For example, it is well known that the loss landscapes involved in learning applications and neural networks are highly non-smooth. Therefore, relaxing assumptions on higher order derivatives is a practically important improvement over previous works. On the other hand, assuming Lipschitzness only for the first order derivative renders the problem considerably more difficult. To see this, note that when $n > m$, the existing upper bound $O\big((mn)^{-1/2} + n^{-1}\big)$ for the case of Lipschitz second derivatives goes below the $O(m^{-1/d}n^{-1/2})$ lower bound for the case of Lipschitz first derivatives. A drawback of the MRE algorithms is that each machine needs to know $m$ in order to set the number of levels for the grids. This, however, can be resolved by considering an infinite number of levels and letting the probability that $p$ is chosen from level $l$ decrease exponentially with $l$. The constant lower bound in Theorem 5 decreases exponentially with $B$. This is expected, because when $B = d \log mn$, the error of the MRE-C algorithm is proportional to an inverse polynomial of $m$ and $n$ (see Theorem 3), and therefore decays exponentially with $B$. There are several open problems and directions for future research. The first group of problems involves the constant-bit regime. It would be interesting to verify whether or not the bound in Theorem 6 is order optimal. We conjecture that this bound is tight, and that no estimator has expected error smaller than $o\big(n^{-1/2} + m^{-1/2}\big)$ when the communication budget is bounded by a constant. This would essentially be an extension of Theorem 5 to $n > 1$. Another interesting problem involves the regime $B < d$: what is the best accuracy achievable with fewer than $d$ bits per transmission? As for the MRE-C estimator, its estimation error is optimal up to polylogarithmic factors in $m$ and $n$. However, the bounds in Theorem 3 have an extra exponential dependency on $d$.
Removing this exponential dependency is an important problem to address in future work. More importantly, an interesting problem involves relaxing the convexity assumption (Assumption 2) and finding tight lower bounds and order-optimal estimators for general non-convex loss landscapes in the one-shot setting. We address this in an upcoming publication (see Sharifnassab et al. (2020) for first drafts). Another important group of problems concerns a more restricted class of functions with Lipschitz continuous second order derivatives. Despite several attempts in the literature, the optimal scaling of expected error for this class of functions in the $m \gg n$ regime is still an open problem. Acknowledgments This research was supported by INSF under contract No. 97012846. The first author thanks Nitin Vaidya for giving invaluable insights about the problem of distributed learning. The second author thanks John N. Tsitsiklis and Philippe Rigollet for fruitful discussions on Lemma 10." }, { "url": "http://arxiv.org/abs/1703.08831v1", "title": "Token-based Function Computation with Memory", "abstract": "In distributed function computation, each node has an initial value and the\ngoal is to compute a function of these values in a distributed manner. In this\npaper, we propose a novel token-based approach to compute a wide class of\ntarget functions to which we refer as \"Token-based function Computation with\nMemory\" (TCM) algorithm. In this approach, node values are attached to tokens\nand travel across the network. Each pair of travelling tokens would coalesce\nwhen they meet, forming a token with a new value as a function of the original\ntoken values. In contrast to the Coalescing Random Walk (CRW) algorithm, where\ntoken movement is governed by random walk, meeting of tokens in our scheme is\naccelerated by adopting a novel chasing mechanism.
We prove that, compared to\nthe CRW algorithm, the TCM algorithm results in a reduction of time complexity\nby a factor of at least $\sqrt{n/\log(n)}$ in Erd\"os-Renyi and complete\ngraphs, and by a factor of $\log(n)/\log(\log(n))$ in torus networks.\nSimulation results show that there is at least a constant factor improvement in\nthe message complexity of the TCM algorithm in all considered topologies.\nRobustness of the CRW and TCM algorithms in the presence of node failure is\nanalyzed. We show that their robustness can be improved by running multiple\ninstances of the algorithms in parallel.", "authors": "Saber Salehkaleybar, S. Jamaloddin Golestani", "published": "2017-03-26", "updated": "2017-03-26", "primary_cat": "cs.DC", "cats": [ "cs.DC", "stat.ML" ], "main_content": "INTRODUCTION Distributed function computation is an essential building block in many network applications in which it is required to compute a function of the initial values of the nodes in a distributed manner. For instance, in wireless sensor networks, distributed inference algorithms can be executed by computing the average of the sensor measurements as a subroutine. Examples of distributed inference in sensor networks include transmitter localization [1], parameter estimation [2], and data aggregation [3]. As another application, consider a network with $n$ processors in which each processor has a local utility function and the goal is to obtain the optimal solution of the sum of the utility functions subject to some constraints. This problem arises frequently in network optimization algorithms such as distributed learning [4], link scheduling [5], and network utility maximization [6]. All these algorithms utilize a distributed sum or average computation subroutine in solving the optimization problems. Consider the problem of computing a target function $f_n(v^0_1, \cdots, v^0_n)$ in a network with $n$ nodes, where $v^0_i$ is the initial value of node $i$.
A common approach is based on constructing spanning trees [7], [8]. In this solution, the values are sent toward the root, where the final result is computed and sent back to all nodes over the spanning tree. Although the spanning tree-based solution is quite efficient in terms of message and time complexities, it is not robust against network perturbations such as node failures or time-varying topologies. For example, the final result may be dramatically corrupted if a node close to the root fails. To overcome the above drawback of spanning tree-based solutions, recent approaches take advantage of local interactions between nodes [9]. In these approaches, each node $i$ which has a value chooses one of its neighbors, say node $j$; the two nodes then update their values based on a predefined rule function $g(\cdot,\cdot)$ which is determined by the target function $f_n(\cdot)$ (see Lemma II.1). By iterating this process over the entire network, the target function is computed in a distributed manner. Let $v_i$ and $v_j$ be the current values of nodes $i$ and $j$, respectively. Two possible options for executing the rule function $g(v_i, v_j)$ are: 1) $v^+_i = v^+_j = g(v_i, v_j)$; 2) $v^+_i = e$, $v^+_j = g(v_i, v_j)$, (1) where $v^+_i$ and $v^+_j$ are the updated values of nodes $i$ and $j$, respectively. The value $e$ is the identity element of the rule function $g(\cdot,\cdot)$, i.e., $g(v, e) = g(e, v) = v$ for any value $v$. The first option in (1) corresponds to the class of distributed algorithms commonly called gossip algorithms [9]. The main advantage of these algorithms is that they are robust against network perturbations due to their simple structure. However, this robust structure is obtained at the expense of huge time and message complexities [9]. For the first option, various updating rule functions have been proposed for specific target functions like average [10], min/max, and sum [11].
For instance, the updating rules $g(v_i, v_j) = (v_i + v_j)/2$ and $g(v_i, v_j) = \min(v_i, v_j)$ can be used to compute the average and min functions, respectively. The second updating option can compute a wide class of target functions, including the ones computable by gossip algorithms (see Lemma II.1), and it is much more energy-efficient than the gossip algorithms [12]. This approach can be easily implemented by a token-based algorithm: suppose that each node has a token at the beginning of the algorithm and passes its initial value to its token. A node is said to be inactive when it does not have a token. If the local clock of an active node $i$ ticks, it chooses a random neighbor node, say node $j$, and sends its token carrying its value. Upon receiving the token, node $j$ updates its value and becomes active (if it is not already). Then, node $i$ sets its own value to $e$ and becomes inactive. From a token's view, each token walks in the network randomly until it meets another token. The two tokens then coalesce and form a token with an updated value. This process continues until the result is aggregated in one token. Finally, the last active node can broadcast the result by a controlled flooding mechanism. This computation scheme is called the Coalescing Random Walk (CRW) algorithm, after the coalescing random walks of [13]. The CRW algorithm offers comparable performance to spanning tree-based solutions in terms of message complexity [12], making it much more energy-efficient than the gossip algorithms. However, it is still slow due to deficiency in token coalescence when only a few tokens remain in the network. Hence, the authors in [12] modified the CRW algorithm in order to improve its running time.
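The two update options in (1) can be sketched as follows. This is a minimal illustration: the rule functions shown are the average and sum rules from the text, and the node values are toy data.

```python
def gossip_update(v_i, v_j, g):
    """Option 1 (gossip): both nodes keep the combined value g(v_i, v_j)."""
    w = g(v_i, v_j)
    return w, w

def token_update(v_i, v_j, g, e):
    """Option 2 (token-based): node i resets to the identity element e,
    node j keeps the combined value g(v_i, v_j)."""
    return e, g(v_i, v_j)

avg_rule = lambda a, b: (a + b) / 2  # gossip rule for the average
sum_rule = lambda a, b: a + b        # token rule for the sum (identity e = 0)

# Option 2 preserves the network-wide sum at every interaction:
values = [5.0, 1.0, 2.0]
values[0], values[1] = token_update(values[0], values[1], sum_rule, 0.0)
values[1], values[2] = token_update(values[1], values[2], sum_rule, 0.0)
# values is now [0.0, 0.0, 8.0]: the sum has been aggregated into one token.
```

Note how under Option 2 the total sum is invariant under each interaction, so the last active node ends up holding $f_n$ of all initial values.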
In the modified algorithm, which we call the truncated CRW algorithm, at some point in time the execution of the CRW algorithm is terminated and each active node broadcasts the value of its token via a controlled flooding mechanism, leaving the completion of the computation to each network node. However, this solution does not lead to a significant improvement in time or message complexity [12]. In this paper, we propose a mechanism to speed up the coalescence of tokens. Suppose that each token has a unique identifier (UID) besides its carried value. In the proposed mechanism, each node registers the maximum UID of the tokens seen so far, and the outgoing edge taken by the token with the maximum UID. When a token enters a node previously visited by a token with a higher UID, it follows the registered outgoing edge. Otherwise, it goes to a randomly chosen neighbor node, according to a predefined probability. Figure 1 illustrates a scenario where two tokens are left in the network and shows how coalescing is expedited in the proposed scheme.

(Footnote 1: In the case of computing the sum function, the updating rule function $g(v_i, v_j)$ is $v_i + v_j$ and the identity element is equal to zero. Footnote 2: In Section II, we explain how the last active node broadcasts the final result.)

Figure 1. An example of the execution of the TCM algorithm in a torus network. Suppose that two tokens are left in the network, with UIDs $ID_1$ and $ID_2$, where $ID_1 > ID_2$. Nodes with shaded patterns are the nodes that token $ID_1$ has visited so far. The arrows show the most recent direction taken by token $ID_1$. If token $ID_2$ chooses its left neighbor node in the next step, it is trapped in the set of shaded nodes and follows a path to token $ID_1$.

Since nodes memorize the outgoing edge of the token with the maximum UID they have seen, we call the proposed scheme the “Token-based function Computation with Memory” (TCM) algorithm.
It is interesting to mention an analogy between this scheme and cosmology. Think of the tokens in the network as cosmic dust in space. Accordingly, the process of function computation is like forming a planet from cosmic dust. As the TCM algorithm runs, tokens with small UIDs (light dust) are trapped in the set of nodes visited by tokens with higher UIDs (in the gravitational field of heavy dust). The coalescing process continues until a single token is left, similar to the birth of a planet. The main contributions of the paper are as follows: • We show that the proposed TCM algorithm, by accelerating the coalescing of tokens, reduces the average time complexity by a factor of $\sqrt{n/\log(n)}$ in complete graphs and the Erdős–Rényi model compared to the CRW algorithm and its truncated version. Furthermore, there is at least a $\log(n)/\log(\log(n))$ factor improvement in torus networks. Simulation results show that the TCM algorithm also outperforms the CRW algorithm in terms of message complexity. • In the CRW and TCM algorithms, the final result may be corrupted if an active node fails. Hence, it is quite important to study the robustness of these algorithms under node failures. In this regard, we evaluate the performance of the CRW and TCM algorithms based on a proposed robustness metric. We show that the robustness can be substantially improved by running multiple instances of the TCM and CRW algorithms in parallel. We prove that, for the CRW algorithm, the number of instances required to tolerate a failure rate of $\alpha/n$ in complete graphs is of the order $O(n^\alpha)$, while the TCM algorithm needs to run only $O(1)$ instances in parallel. • We study the performance of the TCM and CRW algorithms under the random walk mobility model [14]. Simulation results show that both algorithms can successfully compute the class of target functions defined in Lemma II.1 even under high-mobility conditions.
The remainder of the paper is organized as follows. In Section II, the TCM algorithm is described. In Section III, the performances of the TCM and CRW algorithms are analyzed and compared for different network topologies. In Section IV, we study the robustness of both algorithms in complete graphs. In Section V, the performances of the TCM and CRW algorithms are evaluated through simulations and compared with the analytical results. Finally, we conclude with Section VI.

II. THE TCM ALGORITHM

A. System model

Consider a network of $n$ nodes, where each node $i$ has an initial value $v^0_i$ and the goal is to compute a function $f_n(v^0_1, \cdots, v^0_n)$ of the initial values in a distributed manner. The topology of the network is represented by a bidirected graph $G = (V, E)$, with vertex set $V = \{1, \ldots, n\}$ and edge set $E \subseteq V \times V$, such that $(i, j) \in E$ if and only if nodes $i$ and $j$ can communicate directly. We index the ports of node $i$ with $\{1, \cdots, d_i\}$, where $d_i$ is the degree of node $i$. It is assumed that the function $f_n(\cdot)$ is symmetric under any permutation $\pi$ of the set $\{1, \cdots, n\}$, i.e., $f_n(v^0_1, \cdots, v^0_n) = f_n(v^0_{\pi_1}, \cdots, v^0_{\pi_n})$. This means that it does not matter which node of the network holds which part of the initial values.

B. Description of the TCM algorithm

Assume that a UID is assigned to each node $i$. At the beginning of the algorithm, each node has a token to which it passes its UID and initial value. It is also assumed that each node has an independent clock which ticks according to a Poisson process with rate one. Let the value and UID of the token at node $i$ be $value(i)$ and $ID(i)$, respectively. We denote the token at node $i$ by the vector $[value(i), size(i), ID(i)]$. The role of the parameter $size(i)$ will be explained in the next part. The TCM algorithm computes the target function $f_n(\cdot)$ by passing and merging tokens in the network.
When a node does not have a token, it becomes inactive until a neighbor node gets in contact with it. Let $memory(i)$ be the maximum UID of the tokens that node $i$ has seen so far. Algorithm 1 describes how and when an active node $i$ sends or merges tokens. The subroutine SEND() is executed on each tick of the local clock, while the subroutine RECEIVE() is activated upon receiving a token from some neighbor node. Suppose that the local clock of an active node $i$ ticks. Node $i$ decides to send the token $[value(i), size(i), ID(i)]$ to a neighbor node. In this respect, we distinguish between two cases.

Case 1 ($memory(i) = ID(i)$): In this case, node $i$ decides to pass the token to a random neighbor node with probability $p_{send}$. Thus, node $i$ waits for $1/p_{send}$ clock ticks on average before sending out the token. To implement the waiting mechanism, node $i$ exits the subroutine SEND() with probability $1 - p_{send}$ each time its clock ticks (line 6). Otherwise, it chooses a random port $j$, sets $path(i)$ to $j$, and sends the token on that port (lines 7-8).

Case 2 ($ID(i) < memory(i)$): In this case, node $i$ sends the token on the port $path(i)$ with probability one.

Now, suppose that node $i$ receives a token $[value, size, ID]$. If node $i$ is inactive, the received token remains unchanged. Otherwise, it coalesces with the token at node $i$, and the token with the greater UID remains in the network (line 15). Then, the parameters $value(i)$, $size(i)$, and $memory(i)$ are updated to $g(value(i), value)$, $size(i) + size$, and $\max(memory(i), ID)$, respectively (lines 16-18). The updating rule function $g(\cdot,\cdot)$ is determined by the target function

(Footnote 3: One can use randomized algorithms to assign UIDs. Each node randomly chooses an integer in the set $\{1, \cdots, kn^2\}$. From the birthday problem [15], it can be shown that each node gets a unique UID with high probability if $k$ is large enough. Furthermore, each node can encode its UID with $O(\log(n))$ bits.)
Algorithm 1 The TCM algorithm
1: Initialization: $memory(i) \leftarrow ID(i)$, $path(i) \leftarrow \{\}$, $value(i) \leftarrow v^0_i$, $size(i) \leftarrow 1$, $\forall i \in \{1, \cdots, n\}$.
2: Node $i$ generates token $[value(i), size(i), ID(i)]$.
3: procedure SEND( )
4:   if $ID(i) \neq 0$ then  ▷ $ID(i)$: the UID of the token now at node $i$; it is equal to zero for inactive nodes.
5:     if $memory(i) = ID(i)$ then
6:       Break with probability $1 - p_{send}$.
7:       Choose a random port, say $j$.
8:       $path(i) \leftarrow j$  ▷ $path(i)$: the port of node $i$ through which the token with the highest UID has passed.
9:     end if
10:    Send token $[value(i), size(i), ID(i)]$ on port $path(i)$.
11:    $ID(i) \leftarrow 0$, $value(i) \leftarrow e$, $size(i) \leftarrow 0$.
12:  end if
13: end procedure
14: procedure RECEIVE($[value, size, ID]$)
15:   $ID(i) \leftarrow \max(ID(i), ID)$
16:   $value(i) \leftarrow g(value(i), value)$
17:   $size(i) \leftarrow size(i) + size$
18:   $memory(i) \leftarrow \max(memory(i), ID)$  ▷ $memory(i)$: the maximum UID that node $i$ has ever seen.
19: end procedure

$f_n(\cdot)$, as explained in Lemma II.1. Furthermore, the value $e$ is the identity element of the rule function $g(\cdot,\cdot)$, i.e., $g(v, e) = g(e, v) = v$ for any value $v$. From a top view, each token walks randomly in the network until it enters a node visited by a token with a higher UID (Case 1). Then, it follows a path to meet the token with the higher UID (Case 2). We call the walking modes in the first and second cases the random walk and chasing modes, respectively. In the random walk mode, a token moves at the lower speed $p_{send}$; thus, tokens with lower UIDs can catch up with it more quickly.

C. Termination of the TCM algorithm

The process in Algorithm 1 continues until a few tokens remain in the network. In order to terminate the algorithm, we consider two options: • Option 1: Assume that the exact network size $n$ is known by all nodes. Furthermore, each node $i$ has, besides its initial value, a parameter $size(i)$, which is equal to one at the beginning.
The sum of the parameters $\{size(i),\ i \in \{1, \cdots, n\}\}$ can be computed in parallel with the target function. If the parameter $size$ at an active node reaches $n$, that node can identify itself as the unique active node in the network. It then broadcasts the output of the TCM algorithm to all nodes by controlled flooding, further explained below. • Option 2: Suppose that there exists an upper bound on the network size. Then, the execution time of the TCM algorithm can be adjusted to a time $T_{run}$ such that, on average, at most a constant number of active nodes remain after time $T_{run}$. Afterwards, each active node broadcasts the value of its token, including the UID. All nodes can obtain the final result by combining the values received from the active nodes. In analyzing the performances of the CRW and TCM algorithms, we consider the first option. In controlled flooding, an active node $i$ sends the value and UID of its token to all neighbor nodes. Each node $j$, upon receiving this message from a node $k$ for the first time, forwards it to all its neighbor nodes except node $k$. Since each message is transmitted on each edge at most twice, the time and message complexities of controlled flooding are $\Theta(diam(G))$ and $\Theta(|E|)$, respectively. The memory allocated at node $i$ is $(memory(i), path(i), size(i), value(i))$, where the possible values of the first three entries are in the set $\{1, \cdots, n\}$. Thus, the TCM algorithm requires at most $\Theta(\log(n))$ bits more storage capacity than the CRW algorithm. The next lemma identifies the class of target functions $f_n(v^0_1, \cdots, v^0_n)$ that can be computed by the TCM algorithm. Lemma II.1. The TCM algorithm can compute a collection of symmetric functions $\{f_n(\cdot)\}$ if there exists an updating rule function $g(\cdot,\cdot)$
such that for any permutation $\pi$ of the set $\{1, \cdots, n\}$, we have $f_n(v^0_1, \cdots, v^0_n) = g\big(f_k(v^0_{\pi_1}, \cdots, v^0_{\pi_k}),\ f_{n-k}(v^0_{\pi_{k+1}}, \cdots, v^0_{\pi_n})\big)$ for all $1 \le k \le n$ and all $n$. Proof. The proof is the same as that of Lemma 3.1 in [12]. A wide class of target functions fulfil these requirements, such as min/max, average, sum, and exclusive OR. For instance, the updating rule functions $g(v_i, v_j) = v_i + v_j$, $g(v_i, v_j) = \max(v_i, v_j)$, and $g(v_i, v_j) = v_i \oplus v_j$ are used for computing the sum, maximum, and exclusive OR functions, respectively. The average function can also be computed by dividing the output of the sum function by the network size, which is obtained by summing the parameter $size$ of the nodes in parallel with computing the sum function.

III. PERFORMANCE ANALYSIS OF THE CRW AND TCM ALGORITHMS

In this section, we study the performances of the CRW and TCM algorithms in complete graphs, the Erdős–Rényi model, and torus networks. The considered network topologies may resemble different practical networks. For instance, the topology of a wireless network in which all stations are in transmission range of each other is typically modeled by a complete graph. A peer-to-peer network in which all nodes can communicate with each other in the overlay network is another example of a complete graph. As we explain later, the Erdős–Rényi model is frequently used as a model of social networks. Furthermore, the torus is a simple structure widely used to model distributed processing systems with a grid layout or grid-based wireless sensor networks. As a prelude to analyzing the performance of the TCM algorithm, we first present an analysis of the time and message complexities of the CRW algorithm for complete graphs, although the CRW algorithm has already been analyzed in [17]. Then, we study the time complexity of the TCM algorithm in complete graphs.
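The per-node state and the SEND/RECEIVE subroutines of Algorithm 1 can be sketched in a few lines. This is a simplified, single-threaded illustration with an assumed sum rule; ports are modelled as neighbour names, and the Poisson clocks and message delivery are left out.

```python
import random

class TCMNode:
    """Per-node state of Algorithm 1 (field names follow the text)."""
    def __init__(self, uid, v0, ports, p_send=0.5, identity=0):
        self.uid = uid          # UID of the held token; 0 means inactive
        self.value = v0
        self.size = 1
        self.memory = uid       # largest UID seen so far
        self.path = None        # port taken by the highest-UID token seen
        self.ports = ports
        self.p_send = p_send
        self.e = identity

    def send(self):
        """Called on a clock tick; returns (target_port, token) or None."""
        if self.uid == 0:
            return None
        if self.memory == self.uid:            # random-walk mode
            if random.random() > self.p_send:  # wait with prob. 1 - p_send
                return None
            self.path = random.choice(self.ports)
        # chasing mode reuses the stored path unconditionally
        token = (self.value, self.size, self.uid)
        target = self.path
        self.uid, self.value, self.size = 0, self.e, 0
        return target, token

    def receive(self, token, g):
        value, size, uid = token
        self.uid = max(self.uid, uid)          # the larger UID survives
        self.value = g(self.value, value)      # e.g. the sum rule
        self.size += size
        self.memory = max(self.memory, uid)

random.seed(1)
a = TCMNode(uid=2, v0=5, ports=["b"], p_send=1.0)
b = TCMNode(uid=1, v0=3, ports=["a"], p_send=1.0)
target, token = a.send()                        # a hands its token to port "b"
b.receive(token, lambda x, y: x + y)            # tokens coalesce at b
```

After the coalescence, node `b` holds a token with the larger UID, the combined value, and a `size` field counting the aggregated initial values, which is exactly the state used by termination Option 1.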
We also give a naive analysis of the message complexity of the TCM algorithm in complete graphs, and of the time/message complexities of both algorithms in the Erdős–Rényi model and torus networks. A summary of the time and message complexities of the TCM algorithm and the CRW/truncated CRW algorithms is given in Table I. In complete graphs and the Erdős–Rényi model, the TCM algorithm reduces the time complexity by a factor of at least $\sqrt{n/\log(n)}$. In the case of torus networks, there is an improvement by a factor of at least $\log(n)/\log(\log(n))$ with respect to the CRW algorithm. Furthermore, the message complexity of the TCM algorithm is at most the same as that of the CRW and truncated CRW algorithms. Simulation results show that there is at least a constant factor improvement in message complexity by employing the TCM algorithm in all considered topologies. In analyzing the CRW and TCM algorithms, we assume that each token is transmitted instantaneously. Furthermore, passing a token is counted as sending one message in the network.

(Footnote 4: In complete graphs, we can employ the gossip algorithm proposed in [16] to broadcast the output with time and message complexities of the order $O(\log(n))$ and $O(n \log(n))$, respectively.)

Table I. Performance comparison of the TCM and CRW algorithms in terms of time and message complexities.
(a) Time complexity (complete graphs / Erdős–Rényi model / torus networks):
TCM: $O(\sqrt{n \log(n)})$ / $O(\sqrt{n \log(n)})$ / $O(n \log(\log(n)))$
CRW: $\Theta(n)$ / $\Theta(n)$ / $\Theta(n \log(n))$ [12]
Truncated CRW: $\Theta(n)$ / $\Theta(n)$ / $\Theta(n)$ [12]
(b) Message complexity (complete graphs / Erdős–Rényi model / torus networks):
TCM: $O(n \log(n))$ / $O(n \log(n))$ /
CRW: $\Theta(n \log(n))$ / $\Theta(n \log(n))$ / $\Theta(n \log^2(n))$ [12]
Truncated CRW: $\Theta(n \log(n))$ / $\Theta(n \log(n))$ / $\Theta(n \log^2(n))$ [12]

A. Time and message complexities of the CRW algorithm on complete graphs

Let $T_{CRW}$ and $M_{CRW}$ be the average time and message complexities of the CRW algorithm, respectively.
The next theorem gives tight bounds on $T_{CRW}$ and $M_{CRW}$.

Theorem III.1. The average time and message complexities of the CRW algorithm in complete graphs are of the orders $\Theta(n)$ and $\Theta(n \log(n))$, respectively.

Proof. We can represent the process of token coalescing by a Markov chain whose state is the number of active nodes remaining in the network (see Fig. 2). The chain undergoes a transition from state $k$ to state $k - 1$ when a token chooses an active node for its next step, which occurs with rate $\frac{k(k-1)}{n-1}$. Let $T_k$ be the sojourn time in state $k$. Then the average time complexity is $T_{CRW} = \sum_{k=2}^{n} E\{T_k\} = \sum_{k=2}^{n} \frac{n-1}{k(k-1)} = (n-1)(1 - 1/n) \approx n - 2$. (2) Besides, in state $k$, on average $(n-1)/(k-1)$ messages are transmitted before a coalescing event is observed. Therefore, the average message complexity is $M_{CRW} = \sum_{k=2}^{n} \frac{n-1}{k-1} \approx (n-1)(\log(n-1) + 0.577)$, (3) using $\sum_{k=1}^{n} 1/k \approx \log(n) + c$, where $c \approx 0.577$ is the Euler–Mascheroni constant. Thus, the average time and message complexities of the CRW algorithm are of the orders $\Theta(n)$ and $\Theta(n \log(n))$, respectively.

Figure 2. Markov chain model for the process of token coalescing in the CRW algorithm. State $k$ corresponds to $k$ active nodes in the network; the transition rate from state $k$ to state $k-1$ is $\frac{k(k-1)}{n-1}$.

B. Time complexity of the TCM algorithm on complete graphs

Let the UIDs of the $n$ tokens at the beginning of the algorithm be denoted $ID_1, \cdots, ID_n$. Without loss of generality, assume that $ID_1 > \cdots > ID_n$. Throughout this section, we also assume that $p_{send} = \frac{1}{2}$.

Definition III.1. Let $T_{coal}(ID_i)$, $i = 2, \cdots, n$, denote the time at which token $ID_i$ coalesces with a token with a larger UID. Thus, the algorithm running time is $T_{run}(n) = \max_{i \in \{2, \cdots, n\}} T_{coal}(ID_i)$.
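The sums in (2) and (3) can be checked numerically against the closed forms quoted above (a small verification script; the choice $n = 1000$ is arbitrary):

```python
import math

def crw_expected_time(n):
    # mean sojourn time in state k is (n - 1) / (k * (k - 1))
    return sum((n - 1) / (k * (k - 1)) for k in range(2, n + 1))

def crw_expected_messages(n):
    # (n - 1) / (k - 1) transmissions on average per coalescing event in state k
    return sum((n - 1) / (k - 1) for k in range(2, n + 1))

n = 1000
t_crw = crw_expected_time(n)
m_crw = crw_expected_messages(n)

# (2): the telescoping sum equals (n - 1)(1 - 1/n), which is about n - 2
assert abs(t_crw - (n - 1) * (1 - 1 / n)) < 1e-9
# (3): (n - 1) * H_{n-1} is within O(1) of (n - 1)(log(n - 1) + 0.577)
assert abs(m_crw - (n - 1) * (math.log(n - 1) + 0.577)) < 1.0
```

The first sum telescopes exactly; the second is $(n-1)$ times a harmonic number, which the logarithmic approximation matches to within a constant.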
In the TCM algorithm, token $ID_1$ walks randomly in the network. In each step, it chooses a random node from the whole set of network nodes except the node where it currently resides. After taking $j$ steps, the average number of nodes visited by token $ID_1$ is $n - (n-1)(1 - 1/(n-1))^j$.

Definition III.2. We call the set of nodes visited by token $ID_1$ during its first $j$ movements the event horizon of $ID_1$, and denote it by $EH_1(j)$.

Notice that, in the TCM algorithm, once a token gets into the event horizon of token $ID_1$, it cannot escape and will eventually coalesce with token $ID_1$. We borrowed the term event horizon from general relativity, where it refers to “the point of no return”.

Lemma III.1. The size of the event horizon of token $ID_1$ after taking $2j$ steps, i.e., $|EH_1(2j)|$, is at least $E\{|EH_1(j)|\} \approx n(1 - (1 - 1/n)^j)$ with probability greater than $1 - e^{-n/4 - j\eta}$, where the constant $\eta \ge 0.05$.

Proof. See Appendix A in the supplemental material.

From Lemma III.1, we can now obtain an upper bound on the average time complexity of the TCM algorithm.

Theorem III.2. In complete graphs, the average time complexity of the TCM algorithm is of the order $O(\sqrt{n \log(n)})$.

Proof. For a complete proof, see Appendix B in the supplemental material. Here, in order to provide better insight into the algorithm, we present a naive analysis based on a modified model of the network, in which the Poisson assumption for clock ticks is relaxed. Instead, we adopt a slotted model of time, in which each token in the chasing mode takes one step per time slot. Furthermore, in the random walk mode, we replace the assumption $p_{send} = \frac{1}{2}$ with sending the token every other slot. Tokens that are scheduled to move in a time slot take their steps in a random order.
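The expected event-horizon size in Lemma III.1 can be checked by simulating a single random walker on the complete graph (an illustrative Monte Carlo check; the values of $n$, $j$, and the number of trials are arbitrary choices for the demo):

```python
import random

def event_horizon_size(n, steps, rng):
    """Number of distinct nodes visited by a random walker on the complete
    graph that moves to a uniformly random node other than its current one."""
    cur = 0
    seen = {cur}
    for _ in range(steps):
        nxt = rng.randrange(n - 1)  # uniform over the other n - 1 nodes
        if nxt >= cur:
            nxt += 1
        cur = nxt
        seen.add(cur)
    return len(seen)

rng = random.Random(0)
n, j, trials = 1000, 500, 200
avg = sum(event_horizon_size(n, j, rng) for _ in range(trials)) / trials
approx = n * (1 - (1 - 1 / n) ** j)  # the approximation in Lemma III.1
```

Averaged over many runs, the simulated horizon size agrees with $n(1 - (1 - 1/n)^j)$ to within a few percent.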
In our analysis, we utilize the following inequality, which we trust is correct based on intuition and simulation verification: $\Pr\{T_{coal}(ID_i) \le t\} \ge \Pr\{T_{coal}(ID_2) \le t\}$, for $2 \le i \le n$. (4) As an example, simulation results are given for a network with $n = 100$ nodes in Fig. 3.

Figure 3. Cumulative distribution functions $\Pr\{T_{coal}(ID_i) \le t\}$ of the coalescing times of tokens $ID_2$, $ID_3$, $ID_4$, $ID_5$, and $ID_{10}$, for $n = 100$.

First, we derive an upper bound on the probability that token $ID_2$ gets into the event horizon of $ID_1$ after time slot $t$. According to the simplified timing model, token $ID_1$ moves at even time slots, and token $ID_2$ tries to get into the event horizon of token $ID_1$ at the same time slots. In order to obtain the upper bound, we wait for $2k$ time slots so that the event horizon of token $ID_1$ is large enough. Since the size of the event horizon in the next $2k$ time slots is at least as large as the one at time slot $2k$, the probability of not hitting the event horizon in the time interval $[2k, 4k]$ is less than $(1 - |EH_1(k)|/n)^k$. By bounding $|EH_1(k)|$ from below (see Lemma III.1), we have, for $k \ge 2\sqrt{n \log(n)}$:

$\Pr\{T_{EH_1}(ID_2) > 4k\} \le \big(1 - \tfrac{E\{|EH_1(k/2)|\}}{n}\big)^k \times \Pr\{|EH_1(k)| \ge E\{|EH_1(k/2)|\}\} + \Pr\{|EH_1(k)| \le E\{|EH_1(k/2)|\}\} \times 1 \le \big(1 - \tfrac{E\{|EH_1(k/2)|\}}{n}\big)^k + e^{-n/4 - \eta k/2} \le e^{-\sqrt{\log(n)/n}\,k} + e^{-n/4 - \eta k/2}$, (5)

where the last inequality is obtained by substituting $E\{|EH_1(k/2)|\} \ge E\{|EH_1(\lfloor\sqrt{n \log(n)}\rfloor)|\}$, which holds for $k \ge 2\sqrt{n \log(n)}$. When token $ID_2$ reaches the event horizon of token $ID_1$ at time slot $4k$, it takes at most another $4k$ time slots to coalesce with token $ID_1$, because the size of $|EH_1(k)|$ is at most $2k$ and the relative velocity of the two tokens is $1/2$. From this fact, we have $\Pr\{T_{coal}(ID_2) \le 8k\} \ge \Pr\{T_{EH_1}(ID_2) \le 4k\}$.
From (5), we can obtain the following:

Pr{Tcoal(ID2) > k} < e^{−√(log(n)/n) k/8} + e^{−n/4 − ηk/16},  k ≥ 16√(n log(n)).  (6)

Now, an upper bound can be derived on the average time complexity:

E{Trun(n)} = Σ_{k=1}^{∞} Pr{Trun(n) > k}
= Σ_{k=1}^{∞} Pr{ max_{i∈{2,...,n}} Tcoal(IDi) > k }
≤ Σ_{k=1}^{∞} min(1, Σ_{i∈{2,...,n}} Pr{Tcoal(IDi) > k})
≤(a) 16√(n log(n)) + ∫_{16√(n log(n))}^{∞} min(1, (n − 1) × (e^{−√(log(n)/n) t/8} + e^{−n/4 − ηt/16})) dt
≤(b) 16√(n log(n)) + 8/√(n log(n)) + (16n/η) e^{−n/4 − η√(n log(n))}.  (7)

(a) From the inequalities in (4) and (6). (b) Due to (n − 1) × (e^{−√(log(n)/n) t/8} + e^{−n/4 − ηt/16}) ≤ 1 for t ≥ 16√(n log(n)). From (7), we conclude that the average time complexity is of the order O(√(n log(n))). Compared with the CRW algorithm, the TCM algorithm improves the time complexity by at least a factor of √(n/log(n)).

C. Message complexity of TCM algorithm on complete graphs

In this part, we give a naive analysis of the message complexity of the TCM algorithm in complete graphs. To obtain the bound on message complexity, we will show that the average number of messages sent in the TCM algorithm until a coalescing event is observed is less than that of the CRW algorithm.

Proposition III.1. The average message complexity of the TCM algorithm is of the order O(n log(n)) in complete graphs.

Proof. Assume that the clock of an active node i ticks at time t and k tokens remain in the network. Suppose that token IDr is in node i. Token IDr may be in one of two modes: walking randomly, or following another token with a higher UID. In the first mode, it chooses any node j with probability 1/(n − 1).
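As an aside, step (b) of (7) replaces the tail integral by a closed form. The sketch below (our own check, with η = 0.05; not from the paper) evaluates the tail integral numerically and compares it with the closed-form expression:

```python
import math

def tail_bound_closed_form(n, eta=0.05):
    """Closed-form value of the tail integral in (7):
    ∫_{T}^{∞} (n-1)(e^{-√(log n / n) t/8} + e^{-n/4 - η t/16}) dt,
    with T = 16√(n log n)."""
    T = 16 * math.sqrt(n * math.log(n))
    a = math.sqrt(math.log(n) / n) / 8          # decay rate of the first term
    term1 = (n - 1) / a * math.exp(-a * T)
    term2 = 16 * (n - 1) / eta * math.exp(-n / 4 - eta * T / 16)
    return term1 + term2

def tail_bound_numeric(n, eta=0.05, steps=200000, span=50.0):
    """Trapezoidal approximation of the same integral, truncated once both
    exponentials are negligible (span decay constants past T)."""
    T = 16 * math.sqrt(n * math.log(n))
    a = math.sqrt(math.log(n) / n) / 8
    upper = T + span / min(a, eta / 16)
    h = (upper - T) / steps
    total = 0.0
    for i in range(steps + 1):
        t = T + i * h
        f = (n - 1) * (math.exp(-a * t) + math.exp(-n / 4 - eta * t / 16))
        total += f if 0 < i < steps else f / 2
    return total * h
```

For n = 100 the two agree to a fraction of a percent, and the first term indeed behaves like 8/√(n log n).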
Thus, the probability of coalescing is: 1 n \u22121 X j\u2208{1,\u00b7\u00b7\u00b7 ,n}\\{i} Pr{\u03b6j(t) = 1}, (8) where \u03b6j(t) is an indicator parameter which is equal to one if node j is active at time t and otherwise, it is zero. But the expected number of active nodes excluding node i is: X j\u2208{1,\u00b7\u00b7\u00b7 ,n}\\{i} 1\u00d7 Pr{\u03b6j(t) = 1} = k \u22121. Hence, the probability of coalescing in this mode is (k \u22121)/(n \u22121). In the second mode, token IDr follows another token with higher UID and decided to go to a neighbor node, let say node l. We know that there exist k \u22121 tokens excluding token IDr which walk randomly or follow another token on a trajectory of a random walk. Thus, node l is active with probability at least (k \u22121)/(n \u22121). Following the same arguments in analyzing the message complexity of the CRW algorithm, the message complexity is of the order O(n log(n)). \f11 D. Time and message complexities of TCM and CRW algorithms in Erd\u00a8 os-Renyi model In some network applications, it is required to compute a speci\ufb01c function in social networks, such as majority voting [18]. Hence, it is quite important to study the performances of TCM and CRW algorithms in these scenarios. Erd\u00a8 os-Renyi model is frequently used as a simple model to represent social networks [19]. In this part, we use this model to give a naive analysis on the time and message complexities of TCM and CRW algorithms in social networks. In Erd\u00a8 os-Renyi model, there exists an edge between any two nodes with probability p. It can be shown that the graph is almost certainly connected, if p \u22652 log(n)/n [20]. The next two propositions give upper bounds on the time and message complexities of CRW and TCM algorithms. Proposition III.2. In the Erd\u00a8 os-Renyi model, the average time and message complexities of CRW algorithm are of the order O(n) and O(n log(n)), respectively. Proof. Assume that k tokens remain in the network. 
Consider token IDi walks randomly until it meets another token. In each step, it may be located in any node. From the token\u2019s view, it seems that edges are randomly established with probability p in each step. Suppose that token IDi is in node l at time t. It will choose an active node with probability, Pselec: Pselec = X m\u2208{q|\u03b6q(t)=1} n\u22122 X j=0 p \u00d7 Pr{d\u2032 l = j} \u00d7 1/(j + 1) = (k \u22121) \u00d7 p \u00d7 E{1/(d\u2032 l + 1)}, (9) where d\u2032 l is the degree of node l excluding an active node m. The \ufb01rst term in summation shows the probability of having an edge between two nodes l and m. The second term represents the probability that node l has j number of neighbor nodes excluding the node m and the last term is the probability that node l chooses active node m from the set of its neighbor nodes. From Jensen\u2019s inequality and convexity of function f(x) = 1/(x + 1) over x > 0, we have: Pselec \u2265(k\u22121)p/(E{d\u2032 l}+1) = (k\u22121)/(n\u22122+1/p) \u2265(k\u22121)/(n\u22122+n/(2 log(n))). It can be easily veri\ufb01ed that Pselec \u2265(k \u22121)/(1.12(n \u22121)) = \u0398((k \u22121)/(n \u22121)) for n \u2265100. Following the same arguments in analyzing the performance of CRW algorithm in complete graphs, we can deduce that the time and message complexities are of the order O(n) and O(n log(n)), respectively. Proposition III.3. In the Erd\u00a8 os-Renyi model, the average time and message complexities of TCM algorithm are of the orders O( p n log(n)) and O(n log(n)), respectively. Proof. Suppose that the token IDi is in random walk mode. In each step, it visits each node with probability p\u00d7E{1/(d\u2032 l+1)} \u22651/(n\u22122+1/p) \u22481/(n\u22121) for large enough n. Intuitively, we still have the same bounds on the probabilities Pr{Tcoal(IDi) > t}, 2 \u2264i \u2264n. 
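As an aside, the Jensen step applied to (9) can be checked exactly, since E{1/(X+1)} for X ~ Bin(m, p) has the closed form (1 − (1−p)^{m+1})/((m+1)p). A small sketch (our own verification, not from the paper):

```python
def p_selec_exact(n, k, p):
    """Exact P_selec = (k-1) * p * E{1/(d'+1)} with d' ~ Bin(n-2, p), using
    the identity E{1/(X+1)} = (1 - (1-p)^(m+1)) / ((m+1) p) for X ~ Bin(m, p)."""
    m = n - 2
    e_inv = (1 - (1 - p) ** (m + 1)) / ((m + 1) * p)
    return (k - 1) * p * e_inv

def p_selec_jensen(n, k, p):
    """Jensen lower bound from the text: (k-1) / (n - 2 + 1/p)."""
    return (k - 1) / (n - 2 + 1 / p)
```

The exact value dominates the Jensen bound for every p, and at the connectivity threshold p ≈ 2 log(n)/n the bound is indeed at least (k − 1)/(1.12(n − 1)) for n ≥ 100, as claimed.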
By the same arguments as for the case of complete graphs, the time and message complexities are of the order O(√(n log(n))) and O(n log(n)), respectively.

E. Time complexity of TCM algorithm on torus networks

In this part, we give a naive analysis of the time complexity of the TCM algorithm in torus networks. We will show that the average running time of the algorithm is of the order O(n log(log(n))). To obtain the bound, we first need to review two lemmas about single random walks.

[Figure 4. After k steps, the region of nodes visited by token ID1 is approximately a disc of radius r = √(k/(n log(k))). The visited nodes are shown by dashed patterns. The average time for a token (depicted in black) to hit the disc is Θ(n log(r^{−1})).]

Lemma III.2. [21] Consider a √n × √n discrete torus. Let Thit be the average time for a single random walk to hit the set of nodes contained in a disc of radius r < R/2 around a point x, starting from the boundary of a disc of radius R around x. Then, we have: E{Thit} = Θ(n log(r^{−1})).

Lemma III.3. [22] Let Vk be the number of nodes visited by a single random walk on Z² after k steps. Then, we have: E{Vk} = πk/log(k), and the variance satisfies Var(Vk) = O(k² log(log(k))/log(k)³).

Proposition III.4. In torus networks, the average time complexity of the TCM algorithm is of the order O(n log(log(n))).

Proof. Consider the token ID1. From Lemma III.3, πk/log(k) nodes are visited on average by token ID1 after k steps. To simplify the analysis, we approximate the region of visited nodes by a disc of radius √(k/(n log(k))) on a unit torus (see Fig. 4). Hence, after k = βn steps, the radius of the disc would be √(β/log(βn)), where β << 1. Furthermore, any other token IDi (i ≥ 2) walks randomly or follows another token on a trajectory of a random walk.
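Lemma III.3's leading term πk/log(k) converges slowly (the known correction is of order k log log k / log² k), so for moderate k a simulation sits noticeably above it. A rough Monte Carlo sketch (ours, for intuition only):

```python
import math
import random

def asymptotic_range(k):
    """Leading-order asymptotic E{V_k} ≈ πk / log k for the number of
    distinct sites visited by a simple random walk on Z^2 after k steps."""
    return math.pi * k / math.log(k)

def simulate_range(k, trials=20, seed=0):
    """Monte Carlo estimate of E{V_k} for the simple random walk on Z^2."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0
    for _ in range(trials):
        x = y = 0
        visited = {(0, 0)}
        for _ in range(k):
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            visited.add((x, y))
        total += len(visited)
    return total / trials
```

At k in the tens of thousands the simulated mean is typically 10–20% above πk/log(k), which is consistent with the size of the lower-order correction.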
Hence, from Lemma III.2, token IDi hits the disc after \u0398(n log(log(n))) average time units if it does not coalesce with any other token during this time interval. Following that, at most 2n time slots are required to reach token ID1. Therefore, the time complexity is of the order O(n log(log(n))). IV. ROBUSTNESS ANALYSIS In this section, we study the robustness of CRW and TCM algorithms. In the literature of distributed systems, identifying robust algorithms is done mostly from a qualitative rather than quantitative perspective. For instance, there is a common belief that gossip algorithms have a robust structure against network perturbations such as node failures or time-varying topologies [9]. Nevertheless, this advantage is achieved by huge time and message complexities [9]. To the best of our knowledge, there exist a few works [23], [24] on analyzing the robustness of distributed function computation (DCF) algorithms. One of the main challenges is that it is dif\ufb01cult to devise a well de\ufb01ned robustness metric. Despite the challenges, there exist some \f13 methodologies for de\ufb01ning a robustness metric in a computing system [25], [26]. Here, we follow the same approach in these methodologies. To do so, three steps should be taken: 1) First, a metric should be considered for the system performance. In our case, we consider it as the probability of successful computation at the end of the algorithm, i.e. Pr{vi = f(v0 1, \u00b7 \u00b7 \u00b7 , v0 n), \u2200i \u2208{1, \u00b7 \u00b7 \u00b7 , n}, node i has not failed} where vi is the output of node i. Note that the correct result is a function of initial values of whole nodes. 2) In the second step, network perturbations should be modelled. In the CRW and TCM algorithms, the \ufb01nal result may be corrupted if an active node fails. Thus, studying the impact of such event on the robustness of these algorithms is quite important. 
In order to model node failures, we assume that each node may crash according to exponential distribution with rate \u03bb. Therefore, the average lifespan of a node is 1/\u03bb. As a result, at most n \u00d7 (1 \u2212e\u2212\u03bbE{Trun(n)}) number of nodes fail on average. We assume that the expected number of crashed nodes during the execution of the algorithm is at most a small fraction of network size, i.e. \u03bbE{Trun(n)} < \u2212log(1 \u2212\u03b1) \u2248\u03b1 where \u03b1 << 1. 3) At the end, it should be identi\ufb01ed how much perturbation the algorithm can tolerate such that the performance metric remains in an acceptable region. For this purpose, we de\ufb01ne the following robustness metric. De\ufb01nition IV.1. The robustness metric, r(\u03f5), is de\ufb01ned by the following equation: r(\u03f5) \u225cmax \u03bb0 s.t. Pr{vi = f(v0 1, \u00b7 \u00b7 \u00b7 , v0 n), \u2200i \u2208{1, \u00b7 \u00b7 \u00b7, n}, node i has not failed|\u03bb = \u03bb0} \u22651 \u2212\u03f5, (10) Intuitively, the robustness metric shows maximum failure rate which an algorithm can tolerate such that the probability of successful computation is greater than a desired threshold, 1 \u2212\u03f5. In order to execute CRW and TCM algorithms in the presence of node failure, it is assumed that each token chooses a random neighbor node for the next clock tick, if the contacting node at the current moment has been failed. A. Robustness of CRW algorithm in complete graphs We \ufb01rst derive the probability that node i is active at time t, i.e. Pr{\u03b6i(t) = 1}. Lemma IV.1. In the non-failure scenario, node i is active at time t with probability Pr{\u03b6i(t) = 1} = 1/(t + 1). Proof. We use the mean \ufb01eld theorem to calculate the probability p(t) = Pr{\u03b6i(t) = 1} (for more on mean \ufb01eld theorem, see [27]). Due to symmetry property of the complete graphs, each node is active at time t with the same probability p(t). 
Thus, the portion of active nodes will decrease with rate \u2212p2(t). Therefore, we have: dp(t) dt = \u2212p2(t). By solving the differential equation and considering the fact that p(0) = 1, we have: p(t) = 1/(t + 1) and E{c(t)} = n/(t + 1) where c(t) = n X i=1 \u03b6i(t) is the the number of active nodes at time t. Lemma IV.2. In the CRW algorithm, the probability of successful computation is greater than n\u2212\u03bbn for the node failure rate \u03bb < \u03b1/E{Trun(n)}. \f14 Proof. The function computation is successful iff none of active nodes fail up to time Trun(n).6 Let F[t0,t1) be the event that none of active nodes fails in the time interval [t0, t1). Thus, the probability Psucc(t) \u225cPr{F[0,t)}, (t < Trun(n)), satis\ufb01es the following equation: Psucc(t + dt) = Psucc(t) \u00d7 Pr{F[t,t+dt)|F[0,t)}, = Psucc(t) \u00d7 Ec(t) \b Pr{F[t,t+dt)|c(t), F[0,t)} \t , =a Psucc(t) \u00d7 Ec(t){e\u2212\u03bbc(t)dt}, = Psucc(t) \u00d7 Ec(t){1 \u2212\u03bbc(t)dt} + O(dt2), =b Psucc(t) \u00d7 (1 \u2212\u03bbn t + 1dt). (a) From property of exponential distribution considered in modelling node failures. (b) We assume that E{c(t)} \u2248n/(t + 1) is not affected by missing a small fraction of nodes. Therefore, we have: dPsucc(t) dt = \u2212Psucc(t) \u03bbn t + 1. (11) By solving the above differential equation, we have: Psucc(t) = (t + 1)\u2212\u03bbn. Hence, we can obtain a lower bound on the probability of successful computation, Psucc, as follows: Psucc = ETrun(n) \b Psucc \u0000Trun(n) \u0001\t \u2265(E{Trun(n)} + 1)\u2212\u03bbn \u2265n\u2212\u03bbn. (12) The above inequality holds due to Jensen\u2019s inequality and considering the fact that function f(x) = (x + 1)\u2212n\u03bb, x > 0 is convex. After some manipulations, it can be easily veri\ufb01ed that: r(\u03f5) > log((1 \u2212\u03f5)\u22121)/(n log(n)). Hence, the single CRW can tolerate failure rates of order O(1/(n log(n))). 
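Lemma IV.1's mean-field prediction p(t) = 1/(t+1) can be checked with a small event-driven simulation of the complete-graph CRW dynamics. The sketch below is our own: coalescing events form a thinned Poisson process of rate c(c−1)/(n−1), and summing the stage times also gives a back-of-the-envelope full-coalescence time of (n−1)(1 − 1/n) ≈ n, consistent with the O(n) CRW analysis.

```python
import random

def crw_complete_graph(n, t_probe, seed=0):
    """Event-driven CRW on the complete graph: active nodes tick at unit
    Poisson rate; a tick sends the token to a uniform random other node and
    coalesces if that node is active (one node deactivates). Returns
    (active fraction at time t_probe, total coalescing time)."""
    rng = random.Random(seed)
    active = set(range(n))
    t, frac_at_probe = 0.0, None
    while len(active) > 1:
        t += rng.expovariate(len(active))   # next tick among |active| clocks
        if frac_at_probe is None and t > t_probe:
            frac_at_probe = len(active) / n
        i = rng.choice(tuple(active))
        j = rng.randrange(n - 1)
        if j >= i:                          # uniform over the other n-1 nodes
            j += 1
        if j in active:
            active.discard(i)               # tokens coalesce
    if frac_at_probe is None:
        frac_at_probe = len(active) / n
    return frac_at_probe, t
```

At t = 3 the mean-field model predicts an active fraction of 1/(3+1) = 0.25, and averaged runs track this closely for moderate n.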
But, how can we improve the performance of this algorithm such that it tolerates failure rates of order \u03b1/E{Trun(n)} = \u03b1/n? One effective solution is to run multiple CRWs in parallel. More speci\ufb01cally, we run R instances of CRW algorithm denoted by 1, . . . , R; As a result, if an active node fails in some instances of the CRW algorithm, it might be inactive in the other instances and those instances survive from that node failure. In order to run multiple instances of the algorithm, tokens carry the index of the corresponding instance in the execution of the algorithm and can only coalesce with token of the same index. At the end of the algorithm, nodes decide on the output of an instance which includes as many values as possible in computing the target function. To do so, we can assume that each node i has a count parameter size(i) which is equal to one at the beginning of the algorithm (see section II). The sum of these count parameters is obtained alongside computing the target function of initial values for each instance of the algorithm. Nodes decide on the output of instance with maximum count parameter. Lemma IV.3. To tolerate the failure rate of \u03b1/n and get the correct result with probability 1\u2212\u03f5, the number of instances of the CRW algorithm should be greater than: R > log(\u03f5\u22121)n\u03b1. (13) 6In controlled \ufb02ooding mechanism, the value of last active node is broadcasted to all nodes. Thus, node failures have negligible impact on the \ufb01nal result in this phase and we neglect it in our analysis. \f15 Proof. Assuming that the multiple instances are approximately independent and considering \u03bb = \u03b1/n and Lemma IV.2, the probability of successful computation of the target function with R instances of CRW algorithm is greater than: 1 \u2212(1\u2212n\u2212\u03b1)R \u22651 \u2212\u03f5, \u2192R \u2265 log(\u03f5) log(1 \u2212n\u2212\u03b1) \u2248log(\u03f5\u22121)n\u03b1. (14) Corollary IV.1. 
The CRW algorithm is robust against failing \u03b1 fraction of nodes by running O(n\u03b1) instances of CRW algorithm in parallel. Thus, the message complexity is of the order O(n1+\u03b1 log(n)). Since \u03b1 << 1, this solution imposes low message overhead. B. Robustness of TCM algorithm in complete graphs To study the robustness of TCM algorithm, we \ufb01rst need to obtain the average percentage of active nodes at time t. However, deriving E{c(t)}/n for TCM algorithm in complete graphs is not an easy task as the one for the CRW algorithm. Since it is required to compute the following sum: E{c(t)} = 1 n n X i=1 Pr{Tcoal(IDi) > t}, (15) where obtaining Pr{Tcoal(IDi) > t}, \u2200i \u2208{2, \u00b7 \u00b7 \u00b7 , n} (or even bounds on them) is quite challenging. In order to simplify the analysis, we consider a form of function E{c(t)}/n \u2248 log2(t + 2)/(at2 + bt + 1) where a = 0.23 and b = 1.8. The reason for choosing this form is that the average running time is of the order O( p n log(n)) and it can also be \ufb01tted properly to the simulation results7. According to this assumption, we can derive the probability of successful computation by the following lemma. Lemma IV.4. The probability of successful computation by TCM algorithm is greater than e\u2212\u03b3n\u03bb in complete graphs where \u03b3 \u22484.13. Proof. By the same arguments in the proof of Lemma IV.2, we have: Psucc(t) = exp \u0012 \u2212\u03bb Z t 0 E{c(\u03c4)}d\u03c4 \u0013 . (16) Since h(t) = e\u2212\u03bbt is convex and non-increasing and g(t) = R t 0 E{c(\u03c4)}d\u03c4 is concave ( d dtE{c(t)} < 0, t > 0), the Psucc(t) = h(g(t)) is convex. 
Hence, we have from Jensen's inequality:

Psucc = E_{Trun(n)}{ Psucc(Trun(n)) } ≥ exp( −nλ ∫_0^{E{Trun(n)}} log²(τ+2)/(aτ² + bτ + 1) dτ ) ≥ e^{−γnλ},  (17)

where ∫_0^{E{Trun(n)}} log²(τ+2)/(aτ² + bτ + 1) dτ ≤ ∫_0^{∞} log²(τ+2)/(aτ² + bτ + 1) dτ = γ.

Corollary IV.2. From Lemma IV.4, we can see that r(ε) is at least ε/(γn) for a single TCM algorithm. Similar to the CRW algorithm, we can run multiple instances of the TCM algorithm in parallel to improve its robustness. In order to tolerate the failure rate of α/n, the required number of instances running in parallel should be of the order O(1).

[Footnote 7: From simulation results, the root mean square error (RMSE) of the fitted function is less than 10⁻³ for all n ∈ [100, 2500].]

[Figure 5. Average time complexities of TCM and CRW algorithms in complete graphs: (a) the TCM algorithm, simulation vs. the curve 4.5√n; (b) the CRW algorithm, simulation vs. the analysis n.]

V. SIMULATION RESULTS

In this section, we evaluate the performance of the TCM and CRW algorithms through simulation. Simulation results are averaged over 10000 runs for both algorithms in complete graphs, torus networks, and the Erdős–Rényi model. In Fig. 5, the average time complexities of the TCM and CRW algorithms are given for complete graphs. In the TCM algorithm, psend is set to 1/2. As can be seen, the simulation results are close to our analysis. Furthermore, the TCM algorithm outperforms the CRW algorithm by a scale factor of √n. For instance, for n = 256, the average time complexities of the TCM and CRW algorithms are 67 and 255 time units, respectively. Hence, the improvement factor is 255/67 = 3.81 ≈ n/(4.5√n) = 3.56. In Fig.
6, the average message complexities of the TCM and CRW algorithms are depicted for complete graphs. As can be seen, the average message complexity of the TCM algorithm is always less than half of that of the CRW algorithm. In order to study the effect of the parameter psend on the running time of the TCM algorithm, the average time complexity is plotted versus psend for complete graphs in Fig. 7. Intuitively, the event horizon of token ID1 grows at a pace inversely proportional to psend. On the other hand, the relative velocity of two tokens is approximately proportional to 1 − psend. Thus, the average time complexity increases as psend goes to zero or one. Furthermore, the optimal psend gets close to 0.5 as the network size increases. In Fig. 8, we evaluate the average time and message complexities of the TCM and CRW algorithms in torus networks. We can see that the TCM algorithm has at least a gain of log(n) in time complexity and a scale factor of 2.85 in message complexity. In Fig. 9, the average time and message complexities of the TCM and CRW algorithms are depicted for the Erdős–Rényi model. According to Fig. 9(a), the TCM algorithm improves the time complexity by a factor of √n. Furthermore, the average message complexity of the TCM algorithm is approximately half that of the CRW algorithm.

[Figure 6. Average message complexities of TCM and CRW algorithms in complete graphs: (a) the TCM algorithm, simulation vs. the curve (1/2)n(log(n) + 0.58); (b) the CRW algorithm, simulation vs. the analysis n(log(n) + 0.58).]

[Figure 7. Average time complexity of the TCM algorithm versus psend, for n = 196, 400, 900.]

In Fig.
10, the probability of successful computation when running one instance of the TCM and CRW algorithms is depicted for complete graphs. The failure rate is set to 0.05/n. For the TCM algorithm, Psucc is approximately equal to 0.83 for different values of n in the range [100, 400], and the analytical lower bound is within an offset of 0.001 of it. In the case of the CRW algorithm, the results from the simulation and the analysis are also close to each other; for this algorithm, Psucc is greater than 0.74 for the various values of n in the range [100, 400]. In Fig. 11(a), the message complexities of the TCM and CRW algorithms are plotted versus the failure rate in a complete graph with n = 100 nodes. The number of parallel instances is determined such that the probability of successful computation equals 0.95. As can be seen, a few additional instances of the TCM and CRW algorithms are required to tolerate higher failure rates. Furthermore, the message complexity of the TCM algorithm is less than that of the CRW algorithm. In Fig. 11(b), the time complexities of both algorithms are given versus the failure rate. For higher failure rates, we need to run more instances of the TCM/CRW algorithm

[Figure 8. Average time and message complexities of TCM and CRW algorithms in torus networks: (a) average time complexity, simulation vs. the curves (1/3)n log(n) and (3/4)n; (b) average message complexity, simulation vs. the curves 2n log(n) and 0.7n log(n).]
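The rule used here for choosing the number of parallel instances (reach a target success probability from a single-instance success probability, assuming the instances are roughly independent) can be sketched as follows; the inputs are the simulated single-instance values reported above:

```python
import math

def required_instances(p_single, eps):
    """Smallest R with 1 - (1 - p_single)^R >= 1 - eps, assuming the R
    parallel instances succeed approximately independently."""
    return math.ceil(math.log(eps) / math.log(1.0 - p_single))
```

With the single-instance values above (Psucc ≈ 0.83 for TCM, ≈ 0.74 for CRW) and a target of 0.95, only two or three instances are needed, matching the "a few more instances" observation.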
[Figure 9. Average time and message complexities of TCM and CRW algorithms in the Erdős–Rényi model: (a) average time complexity, simulation vs. the curves n and 4.5√n; (b) average message complexity.]

to have Psucc = 0.95. On the other hand, executing multiple instances of the algorithms improves the time complexity, since the target function is computed as soon as any one of the instances terminates successfully. In Fig. 12, the probabilities of successful computation of the TCM and CRW algorithms are plotted versus the number of parallel instances in a complete graph with n = 400 nodes for the failure rates λ = 0.05/n and 0.1/n. It can be seen that the analytical lower bounds in (12) and (17) are close to the simulation results. Furthermore, Psucc goes to one in all cases when 6 instances are executed in parallel. Thus, the proposed solution makes both algorithms robust against node failures by running a small number of instances in parallel, as expected from Corollaries IV.1 and IV.2. Studying the impact of dynamic topologies on the performance of distributed algorithms is quite important. Here, we evaluate the performance of the TCM and CRW algorithms under node mobility. There exist various mobility models in the literature on mobile ad hoc networks [14].
In the simulations, we consider the Random Walk (RW) mobility model, which is frequently used in determining protocol performance and can mimic the movements of mobile nodes walking in an unpredictable way [14].

[Figure 10. The probabilities of successful computation in TCM and CRW algorithms for complete graphs, R = 1: (a) the TCM algorithm; (b) the CRW algorithm; simulation vs. analytical lower bound.]

[Figure 11. Average time and message complexities of TCM and CRW algorithms versus failure rate in complete graphs, n = 100: (a) message complexity; (b) time complexity. The dashed lines represent the linear regression between message complexity and failure rate.]

Initially, suppose that nodes are located randomly over a square of unit area. Let [xi(t), yi(t)] be the location of node i at time t. In the RW mobility model, the differences x(t + h) − x(t) and y(t + h) − y(t) are two independent normally distributed random variables with zero mean and variance 2Dh for all h > 0, where D is the diffusion coefficient [28]. Thus, the mean square displacement of a node is governed by the parameter D; in particular, the probability of a large displacement increases as the diffusion coefficient D grows. We assume that if a node reaches the boundary of the simulated area, it bounces off the boundary at the same angle. Furthermore, two nodes are neighbors if the distance between them is less than a fixed transmission range.
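The RW mobility update (independent N(0, 2Dh) increments per coordinate, reflected at the boundary of the unit square) can be sketched as below; for axis-aligned boundaries, the "bounce off at the same angle" rule reduces to coordinate-wise reflection, implemented here with a fold-then-reflect trick. This is our own sketch of the stated model, not the authors' code.

```python
import random

def step_rw_mobility(pos, D, h, rng):
    """One step of the Random Walk mobility model on the unit square:
    each coordinate gets an independent N(0, 2Dh) increment and is
    reflected back into [0, 1] at the boundary."""
    sigma = (2 * D * h) ** 0.5
    new = []
    for c in pos:
        c += rng.gauss(0.0, sigma)
        c %= 2.0                 # fold onto [0, 2): reflection has period 2
        if c > 1.0:
            c = 2.0 - c          # mirror back into [0, 1]
        new.append(c)
    return tuple(new)
```

Away from the boundary, the mean square displacement per step is 2Dh per coordinate, so larger D indeed produces larger displacements.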
The transmission range is set to a value such that the graph remains connected with high probability in the static case, i.e. D = 0 [29]. In the TCM algorithm, we assume that each node i registers the UID of the node that passed the token memory(i) to it. Whenever an active node should send a token to a node that is no longer within its transmission range, it passes its token to a random neighbor node instead.

[Figure 12. The probability of successful computation versus number of parallel instances in complete graphs, n = 400, for failure rates λ = 0.05/n and 0.1/n; simulation vs. analytical lower bounds for the TCM and CRW algorithms.]

[Figure 13. Average time and message complexities of TCM and CRW algorithms versus diffusion coefficient D in a network with n = 100 nodes deployed in a square of unit area. The transmission range is set to 0.18.]

In Fig. 13, the time and message complexities of the TCM and CRW algorithms are depicted versus the parameter D in a network with n = 100 nodes. It is noteworthy that both algorithms successfully compute the class of target functions defined in Lemma II.1 even in high-mobility networks. Furthermore, the time and message complexities of the TCM algorithm increase as the parameter D grows, while node mobility improves the performance of the CRW algorithm. In fact, higher mobility weakens the advantage of the chasing mechanism.
On the other hand, it gives an opportunity to a completely randomized solution, i.e. the CRW algorithm, to reduce the coalescing time of distant tokens. Nevertheless, simulation results show that the TCM algorithm outperforms the CRW algorithm in both time and message complexities. \f21 VI." + }, + { + "url": "http://arxiv.org/abs/1702.08575v3", + "title": "Learning Vector Autoregressive Models with Latent Processes", + "abstract": "We study the problem of learning the support of transition matrix between\nrandom processes in a Vector Autoregressive (VAR) model from samples when a\nsubset of the processes are latent. It is well known that ignoring the effect\nof the latent processes may lead to very different estimates of the influences\namong observed processes, and we are concerned with identifying the influences\namong the observed processes, those between the latent ones, and those from the\nlatent to the observed ones. We show that the support of transition matrix\namong the observed processes and lengths of all latent paths between any two\nobserved processes can be identified successfully under some conditions on the\nVAR model. From the lengths of latent paths, we reconstruct the latent subgraph\n(representing the influences among the latent processes) with a minimum number\nof variables uniquely if its topology is a directed tree. Furthermore, we\npropose an algorithm that finds all possible minimal latent graphs under some\nconditions on the lengths of latent paths. Our results apply to both\nnon-Gaussian and Gaussian cases, and experimental results on various synthetic\nand real-world datasets validate our theoretical results.", + "authors": "Saber Salehkaleybar, Jalal Etesami, Negar Kiyavash, Kun Zhang", + "published": "2017-02-27", + "updated": "2017-11-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Identifying causal in\ufb02uences among time series is a problem of interest in many \ufb01elds. 
In macroeconomics, for instance, researchers seek to understand what factors contribute to economic fluctuations and how they interact with each other (Lütkepohl and Krätzig 2004). In neuroscience, many researchers focus on learning the interactions between different regions of the brain by analyzing neural spike trains (Roebroeck, Formisano, and Goebel 2005; Besserve et al. 2010; Kim et al. 2011). Granger causality (Granger 1969), transfer entropy (Schreiber 2000), and directed information (Massey 1990; Marko 1973) are some of the most commonly used measures in the literature to calculate time-delayed dependence structures in time series. Measuring the reduction of uncertainty in one variable after observing another variable is the key concept behind such measures. Under certain assumptions, these measures may represent causal relations among the variables (Pearl 2009; Spirtes, Glymour, and Scheines 2000). In (Eichler 2012), an overview of various definitions of causation is given for time series. In this work, we study the causal identification problem in VAR models when only a subset of the time series is observed. More precisely, we assume that the available measurements are a set of random processes X(t) ∈ R^n which, together with another set of latent random processes Z(t) ∈ R^m, where m ≤ n, form a first-order VAR model as follows:

[X(t+1); Z(t+1)] = [A11, A12; A21, A22] [X(t); Z(t)] + [ωX(t+1); ωZ(t+1)].  (1)

Here we assume that observed data were measured at the right causal frequency of the VAR process; otherwise one may need to consider the effect of the sampling procedure, such as subsampling or temporal aggregation (Danks and Plis 2013; Gong et al. 2015; Gong et al. 2017).
Under certain assumptions (e.g., causal sufficiency), the support of the transition matrix corresponds to the causal structure between these processes (Granger 1969; Spirtes, Glymour, and Scheines 2000; Pearl 2009). If we ignore the influence of latent processes and simply regress \u20d7X(t+1) on \u20d7X(t), we may get a wrong estimate of the transition matrix between observed processes (see the example in (Geiger et al. 2015)). Hence, it is crucial to account for the presence of latent processes and their influences on the observed processes. Contributions: The contributions of this paper are as follows. We propose a learning approach that recovers the observed sub-network (the support of A11) by regressing the observed vector \u20d7X(t+1) on a set of its past observations (not just \u20d7X(t)), as long as the graph representation of the latent sub-network (the support of A22) is a directed acyclic graph (DAG). We also derive a set of sufficient conditions under which we can uniquely recover the influences from latent to observed processes (the support of A12) and the influences among the latent variables (the support of A22). Additionally, we propose a sufficient condition under which the support of the complete transition matrix can be recovered uniquely. More specifically, we show that under an assumption on the observed-to-latent noise power ratio, if neither of the submatrices A12 and A21 is zero, it is possible to determine the lengths of all directed latent paths. (Footnote 1: A directed path is a latent path if it connects two observed variables and all the intermediate variables on that path are latent.) We refer to this information as linear measurements. This information reveals important properties of the causal structure among the latent and observed processes, i.e., the support of [0, A12; A21, A22]. We call this sub-network of a VAR model the unobserved network. (arXiv:1702.08575v3 [cs.LG], 10 Nov 2017)
We show that in the case that the unobserved network is a directed tree and each latent variable has at least two parents and two children, a straightforward application of (Patrinos and Hakimi 1972) can recover the unobserved network uniquely. Furthermore, we propose Algorithm 1 that recovers the support of A22 and A12 given the linear measurements when only the latent sub-network is a directed tree plus some extra structural assumptions (see Assumption 2). Lastly, we study the causal structures of VAR models in a more general case in which there exists at most one directed latent path of length k \u22652 between any two observed processes (see Assumption 3). For such VAR models, we propose Algorithm 2 that can recover all possible unobserved networks with minimum number of latent processes. Our results apply to both non-Gaussian and Gaussian cases, and experimental results on various synthetic and real-world datasets validate our theoretical results. All proofs can be found in supplemental material. Related works: The problem of recovering latent causal structure for time series has been studied in the literature. Assuming that connections between observed variables are sparse and each latent variable interacts with many observed variables, it has been shown that the transition matrix between observed variables can be identi\ufb01ed in a VAR model (Jalali and Sanghavi 2012). However, their approach focuses on learning only the observed sub-network. (Boyen, Friedman, and Koller 1999) applied a method based on expectation maximization (EM) to infer properties of partially observed Markov processes, without providing theoretical analysis for identi\ufb01ability. (Geiger et al. 2015) showed that if the exogenous noises are independent non-Gaussian and additional so-called genericity assumptions hold, then the sub-networks A11 and a part of A12 are uniquely identi\ufb01able. 
However, these assumptions may not hold true in a real-world dataset even with three variables (Geiger et al. 2015). They also presented a result in which they allowed Gaussian noises in their VAR model and obtained a set of conditions under which they can recover up to (2n choose n) candidate matrices for A11. Their learning approach is also based on EM and approximately maximizes the likelihood of a parametric VAR model with a mixture of Gaussians as the noise distribution. Recently, (Etesami, Kiyavash, and Coleman 2016) studied a network of processes (not necessarily a VAR model) whose underlying structure is a polytree and introduced an algorithm that can learn the entire causal structure (observed and unobserved networks) using a particular discrepancy measure. Compared to related works, we improve the state of the art for latent recovery by showing the identifiability of a much larger class of structures. Unlike (Geiger et al. 2015), we do not assume a non-Gaussian distribution of the exogenous noises or those genericity assumptions. (Footnote 2: This is because it can be inferred from the observational data using linear regression.) Moreover, our results do not rely on the assumption that connections between observed variables are sparse or that each latent variable interacts with many observed variables, as in (Jalali and Sanghavi 2012). Furthermore, these works (Geiger et al. 2015; Jalali and Sanghavi 2012) can uniquely identify at most a part of the transition matrix (A11 or a part of A12). Problem Definition In this part, we review some basic definitions and our notation. Throughout this paper, we use an arrow over letters to denote vectors. We assume that the time series are stationary and denote the autocorrelation of \u20d7X by \u03b3X(k) := E[\u20d7X(t)\u20d7X(t \u2212 k)T]. We denote the support of a matrix A by Supp(A) and use Supp(A) \u2286 Supp(B) to indicate [A]ij = 0 whenever [B]ij = 0.
We also denote the Fourier transform of g by F(g), given by F(g) = \u2211_{h=\u2212\u221e}^{\u221e} g(h) e^{\u2212h\u2126 j}. In a directed graph G = (V, \u2212\u2192E) with node set V and edge set \u2212\u2192E, we denote the set of parents of a node v by Pv := {u : (u, v) \u2208 \u2212\u2192E} and the set of its children by Cv := {u : (v, u) \u2208 \u2212\u2192E}. The skeleton of a directed graph G is the undirected graph obtained by removing all the directions in G. System Model Consider the VAR model in (1). Let \u20d7\u03c9X(t) \u2208 R^n and \u20d7\u03c9Z(t) \u2208 R^m be i.i.d. random vectors with mean zero. For simplicity, we denote the matrix [A11, A12; A21, A22] by A. Our goal is to recover Supp(A) from observational data, i.e., {\u20d7X(t)}. Rewriting (1) gives \u20d7X(t+1) = \u2211_{k=0}^{t} A\u2217_k \u20d7X(t\u2212k) + A12 A22^t \u20d7Z(0) + \u2211_{k=0}^{t\u22121} \u02dcA_k \u20d7\u03c9Z(t\u2212k) + \u20d7\u03c9X(t+1), (2) where A\u2217_0 := A11, A\u2217_k := A12 A22^{k\u22121} A21 for k \u2265 1, and \u02dcA_k := A12 A22^k. Assumption 1. We assume that A22 is acyclic, i.e., \u2203 0 < l \u2264 m such that A22^l = 0. Based on the above assumption, for t \u2265 l, Equation (2) becomes \u20d7X(t+1) = \u2211_{k=0}^{l} A\u2217_k \u20d7X(t\u2212k) + \u2211_{k=0}^{l\u22121} \u02dcA_k \u20d7\u03c9Z(t\u2212k) + \u20d7\u03c9X(t+1). (3) (Footnote 3: Note that the limits of the summations in (3) are changed.) We are interested in recovering the set {Supp(A\u2217_k)}_{k=0}^{l} because it captures important information about the structure of the VAR model. Specifically, Supp(A\u2217_0) = Supp(A11), so it represents the direct causal influences between the observed variables, and Supp(A\u2217_k) for k \u2265 1 determines whether at least one directed path of length k + 1 exists between any two observed nodes that goes through the latent sub-network. (Footnote 4: Herein, we exclude degenerate cases where there is a directed path of length k from an observed node to another but the corresponding entry of Supp(A\u2217_k) is zero. Such special cases can be resolved by a small perturbation of the nonzero entries of A. In the causal discovery literature, this assumption is known as faithfulness (Spirtes, Glymour, and Scheines 2000).) We will make use of this information in our recovery algorithm. We call the set of matrices {Supp(A\u2217_k)}_{k\u22650} linear measurements. [Figure 1: Two unobserved networks with the same linear measurements. White circles denote latent nodes.] In Section 4, we present a set of sufficient conditions under which, given the linear measurements, we can recover the entire unobserved network, or most of it, uniquely. Note that in general, the linear measurements cannot uniquely specify the unobserved network. For example, Figure 1 illustrates two different unobserved networks that share the same set of linear measurements: A\u2217_k = 0 for k > 2, and the only nonzero entries of A\u2217_1 and A\u2217_2 are {(3, 2)} and {(4, 1), (4, 2)}, respectively. Identifiability of the Linear Measurements As we need the linear measurements for our structure learning, in this section we study a sufficient condition under which we can recover the linear measurements from the observed processes {\u20d7X(t)}. To do so, we start by rewriting Equation (3) as \u20d7X(t+1) = A \u20d7X_{t\u2212l:t} + \u2211_{k=0}^{l\u22121} \u02dcA_k \u20d7\u03c9Z(t\u2212k) + \u20d7\u03c9X(t+1), (4) where A := [A\u2217_0, ..., A\u2217_l] and \u20d7X_{t\u2212l:t} := [\u20d7X(t); \u00b7\u00b7\u00b7; \u20d7X(t\u2212l)].
By projecting \u02dcA_k \u20d7\u03c9Z(t\u2212k) onto the vector space spanned by the observed processes, i.e., {\u20d7X(t), ..., \u20d7X(t\u2212l)}, we obtain \u02dcA_k \u20d7\u03c9Z(t\u2212k) = \u2211_{r=0}^{l} C^k_r \u20d7X(t\u2212r) + \u20d7N_Z(t\u2212k), 0 \u2264 k \u2264 l\u22121, (5) where {\u20d7N_Z(t\u2212k)} denote the residual terms and {C^k_r} are the corresponding coefficient matrices. Substituting (5) into (4) implies \u20d7X(t+1) = B \u20d7X_{t\u2212l:t} + \u20d7\u03b8(t+1), (6) where B := [B\u2217_0, ..., B\u2217_l], B\u2217_k := A\u2217_k + \u2211_{s=0}^{l\u22121} C^s_k, and \u20d7\u03b8(t+1) := \u20d7\u03c9X(t+1) + \u2211_{k=0}^{l\u22121} \u20d7N_Z(t\u2212k). Note that by this representation, \u20d7\u03b8(t+1) is orthogonal to \u20d7X_{t\u2212l:t}. Hence, Equation (6) shows that the minimum mean square error (MMSE) estimator can learn the coefficient matrix B given the observed processes. More precisely, let \u0393X(l) := E{\u20d7X_{t\u2212l:t} \u20d7X_{t\u2212l:t}^T}; then we have B = [\u03b3X(1), ..., \u03b3X(l+1)] \u00d7 \u0393X(l)^{\u22121}. (7) Proposition 1. Under Assumption 1, for the stationary VAR model in (1), we have ||B\u2217_k \u2212 A\u2217_k||_1 \u2264 \u221a(n(l\u2212k\u22121)M/L) ||A12||_2 ||A22||_2^{k+1}, where M := \u03bbmax(\u0393\u03c9Z(0)) and L := \u03bbmin(\u0393X(0)). This result implies that we can asymptotically recover the support of {A\u2217_k}_{k=0}^{l} as long as the absolute values of the nonzero entries of A\u2217_k are bounded away from zero by 2\u221a(n(l\u2212k\u22121)M/L) ||A12||_2 ||A22||_2^{k+1}. Please note that A11 = A\u2217_0 = B\u2217_0 if ||A12||_2 = 0. In the Appendix (second section), we explain how these bounds can be estimated from observational data. Proposition 2. Let \u03a3X = \u03c32_X I_{n\u00d7n} and \u03a3Z = \u03c32_Z I_{m\u00d7m} be the autocovariance matrices of \u20d7\u03c9X(t) and \u20d7\u03c9Z(t), respectively. Then, the ratio M/L strictly increases by decreasing \u03c32_X/\u03c32_Z.
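Eq. (7) is the population least-squares solution, so with finite samples B can be estimated by an ordinary regression of X(t+1) on the stacked lags. A minimal sketch, under assumptions of this example only: a toy model with one latent process, A22 = 0 (so l = 1), observed noise scale 1 and latent noise scale 0.1, i.e., the high-OLNR regime in which Propositions 1-2 suggest the recovered B*_0 is close to A11:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, l, T = 3, 1, 1, 20000
# Toy ground truth (an assumption of this sketch): one latent process Z
# driven by X2 and driving X1 and X3, with A22 = 0 so that l = 1.
A11 = np.diag([0.4, 0.3, 0.2])
A12 = np.array([[0.5], [0.0], [0.5]])
A21 = np.array([[0.0, 0.5, 0.0]])
A = np.block([[A11, A12], [A21, np.zeros((m, m))]])

state = np.zeros(n + m)
X = np.empty((T, n))
noise_scale = np.r_[[1.0] * n, [0.1] * m]   # high observed-to-latent ratio
for t in range(T):
    state = A @ state + noise_scale * rng.standard_normal(n + m)
    X[t] = state[:n]

# Sample version of Eq. (7): regress X(t+1) on [X(t); X(t-1)].
Y = X[l + 1:]
lags = np.hstack([X[l - k: T - 1 - k] for k in range(l + 1)])
W, *_ = np.linalg.lstsq(lags, Y, rcond=None)
B = W.T                         # B = [B*_0, B*_1]
B0, B1 = B[:, :n], B[:, n:]
# With sigma_Z small, Supp(B*_0) matches Supp(A11), and Supp(B*_1)
# matches Supp(A12 A21): the latent paths of length 2.
```

Here A*_1 = A12 A21 marks the single latent path of length 2 through Z, and its support shows up in the estimated B*_1.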
Proposition 2 implies that when the \u03c32 X/\u03c32 Z increases, M/L will decrease, and based on the bound in Proposition 1, the estimation error will decrease (it goes to zero as \u03c32 X/\u03c32 Z tends to in\ufb01nity). This shows that recovering the linear measurements is much easier in high \u03c32 X/\u03c32 Z regime as illustrated in Figure 3b. Note that Proposition 1 stresses a suf\ufb01ecient condition for recovering the linear measurements. As shown in Figure 3b, in practice, the actual estimation error is much smaller than the bound in Proposition 1. In the next section, we will make use of {Supp(A\u2217 k)}k>0 to recover the unobserved network. We assume that the correct linear measurements can be obtained from matrix B. In order to estimate the support of matrix B from a \ufb01nite number of samples drawn from the observed processes, say { \u20d7 X(t)}T t=1, \ufb01rst we obtain the lag length l in (6) by AIC or FPE criterion (see Chapter 4 in (L\u00a8 utkepohl 2005)). Afterwards, we can estimate the coef\ufb01cient matrix B, using an empirical estimator for \u0393X(l), {\u03b3X(h)}l+1 h=1, and then applying (7). Denote the result of this estimation by BT . It can be shown that (L\u00a8 utkepohl 2005), \u221a Tvec(BT \u2212B) d \u2212 \u2212 \u2212 \u2212 \u2192 T \u2192\u221e N(0, \u0393\u22121 X (l) \u2297\u03a3), where d \u2212 \u2192denotes convergence in distribution, and \u03a3 is the autocovariance matrix of \u20d7 \u03b8(t). vec(.) transforms a matrix to a vector by stacking its columns and \u2297is the Kronecker product. Having the estimates of \u0393X(l) and \u03a3, we can test whether the entries of matrix B are greater than the bounds in Proposition 1 (see Chapter 3 in (L\u00a8 utkepohl 2005)). Learning the Unobserved Network Recall that we refer to Supp([0, A12; A21, A22]) as the unobserved network and Supp(A22) as the latent sub-network. 
We present three algorithms that take the linear measurements {Supp(A\u2217 k)}k\u22650 as their input. The \ufb01rst algorithm recovers the entire unobserved network uniquely as long as it is a directed tree and each latent node has at least two parents and two children. The output of the second algorithm is Supp([0, A12; b A21, A22]), where Supp(A21) \u2286Supp( b A21). This is guaranteed whenever the latent sub-network is a directed tree and some extra conditions are satis\ufb01ed on how the latent and observed nodes are connected. The third algorithm \ufb01nds the set of all possible networks with minimum number of latent nodes that are consistent with the measurements. This algorithm is able to do so when there exists at most one directed latent path of any arbitrarily length between two observed nodes. A directed path is latent if all the intermediate variables on that path are latent. \fUnobserved Network is a Directed Tree Authors in (Patrinos and Hakimi 1972) introduced a necessary and suf\ufb01cient condition for recovering a weighted directed tree uniquely from a valid distance matrix D de\ufb01ned on the observed nodes,5 and also proposed a recovery algorithm. The condition is as follows: every latent node must have at least two parents and two children. A matrix D, in (Patrinos and Hakimi 1972), is a valid distance matrix, when [D]ij equals the sum of all the weights of those edges that belong to the directed path from i to j, and [D]ij = 0, if there is no directed path. The algorithm in (Patrinos and Hakimi 1972) has two phases. In the \ufb01rst phase, it creates a directed graph among the observed nodes with the adjacency matrix Supp(D). In the second phase, it recursively \ufb01nds and removes the circuits by introducing latent nodes for each circuit.6 For more details, see (Patrinos and Hakimi 1972). 
In order to adopt (Patrinos and Hakimi 1972)\u2019s algorithm for learning the unobserved network, we introduce a valid distance matrix using our linear measurements as follows, Dij = k + 1 if [Supp(A\u2217 k)]ji \u0338= 0 and 0, otherwise. Recall that [Supp(A\u2217 k)]ji indicates whether there exists a directed latent path from i to j of length k + 1 in the unobserved network. From theorem 8 in (Patrinos and Hakimi 1972), it is easy to show that the unobserved network can be recovered uniquely from above distance matrix if its topology is a directed tree and every latent node has at least two parents and two children. Latent Sub-network Is a Directed Tree De\ufb01nition 1. We denote the subset of observed nodes that are parents of a latent node h by PO h and denote the subset of observed nodes for which h is a parent, by CO h . We further denote the set of all leaves in the latent sub-network by L. We consider learning an unobserved network G that satis\ufb01es the following assumptions. Assumption 2. Assume that the latent sub-network of G is a directed tree. Furthermore, for any latent node h in G, (i) PO h \u0338\u2286\u222ah\u0338=jPO j and, (ii) if h is a leaf of the latent subnetwork, then CO h \u0338\u2286\u222ai\u2208L,i\u0338=hCO i . This assumption states that the latent sub-network of G must be a directed tree such that each latent node in G has at least one unique parent in the set of observed nodes. That is, a parent who is not shared with any other latent node. Furthermore, each latent leaf has at least one unique child among the observed nodes. For instance, when Supp(A22) represents a directed tree and both Supp(A12) and Supp(A21) contain identity matrices, Assumption 2 holds. As we will see later in Experimental Results (Figure 3c), a large portion of randomly generated graphs satisfy Assumption 2. 
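The distance matrix used to adapt the (Patrinos and Hakimi 1972) algorithm is mechanical to build from the linear measurements; a sketch (the encoding of the supports as boolean arrays is an assumption of this example):

```python
import numpy as np

def distance_matrix(supports):
    # supports[k-1] holds Supp(A*_k) as a boolean n-by-n array, where
    # [Supp(A*_k)]_{j,i} marks a directed latent path i -> j of length k+1.
    n = supports[0].shape[0]
    D = np.zeros((n, n), dtype=int)
    for k, S in enumerate(supports, start=1):
        for i in range(n):
            for j in range(n):
                if S[j, i]:
                    # Under the directed-tree assumption, at most one
                    # path length fires for each ordered pair (i, j).
                    D[i, j] = k + 1
    return D

# Toy measurement: a single latent path 0 -> (latent) -> 1 of length 2.
S1 = np.zeros((3, 3), dtype=bool)
S1[1, 0] = True
D = distance_matrix([S1])
```

D[i, j] then equals the total edge count of the directed latent path from i to j, and 0 where no such path exists, matching the validity condition required by the tree-recovery algorithm.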
(Footnote 5: The skeleton of the recovered tree is the same as that of the original one, but not necessarily the weights.) (Footnote 6: In a directed graph, a circuit is a cycle after removing all the directions.) Figure 2e illustrates a simple network that satisfies Assumption 2, in which the unique parents of latent nodes a, b, c, and d are {1}, {3}, {2}, and {4}, respectively. The unique children of latent leaves c and d are {5} and {2, 4}, respectively. Algorithm 1 DTR Algorithm 1: Input: {Supp(A\u2217_k)}_{k\u22651} 2: Find {li} using (8) and set U := \u2205. 3: Find Ri, Mi from (9) for all 1 \u2264 i \u2264 n. 4: for i = 1, ..., n do 5: Yi := {j : j \u0338= i \u2227 lj = li} 6: if \u2200j \u2208 Yi, (Rj \u0338\u2286 Ri) \u2228 (Rj = Ri \u2227 Mi \u2286 Mj) then 7: if i = min{k : Rk = Ri \u2227 Mk = Mi} then 8: Create node hi and set Phi = {i}, U \u2190 {i} \u222a U 9: end if 10: end if 11: end for 12: for every latent node hs do 13: if \u2203hk, (lk = ls + 1) \u2227 (Rs \u2286 Rk) then 14: Phs \u2190 {hk} \u222a Phs 15: end if 16: Chs \u2190 {j : [A\u2217_1]js \u0338= 0} 17: end for 18: for i = 1, ..., n do 19: if \u2203j \u2208 U s.t. Mj \u2286 Mi then 20: Phj \u2190 {i} \u222a Phj 21: end if 22: end for Theorem 1. Among all unobserved networks that are consistent with the linear measurements induced from (1), any graph G that satisfies Assumption 2 has the minimum number of latent nodes. Note that if Assumption 2 is violated, one can find many unobserved networks that are consistent with the linear measurements but are not minimal (in terms of the number of latent nodes). For example, the network in Figure 2a satisfies Assumption 2 (ii) but not (i). Figure 2b depicts an alternative network with the same linear measurements as the network in Figure 2a but with fewer latent nodes. Similarly, the graph in Figure 2c satisfies Assumption 2 (i) but not (ii). Figure 2d shows an alternative graph with one less latent node. Theorem 2.
Consider an unobserved network G with adjacency matrix Supp([0, A12; A21, A22]). If G satisfies Assumption 2, then its corresponding linear measurements uniquely identify G up to Supp([0, A12; \u02c6A21, A22]), where Supp(A21) \u2286 Supp(\u02c6A21). Figure 2e gives an example of a network satisfying Assumption 2, and Figure 2f shows an alternative network with the same linear measurements that departs from Figure 2e only in the A21 component. [Figure 2: six example networks (a)-(f); latent nodes are indicated by white circles. Graph (a) satisfies (ii) but not (i) and can be reduced to (b). Graph (c) satisfies (i) but not (ii) and can be reduced to (d). Graphs (e) and (f) satisfy Assumption 2 and have the same induced linear measurements, but Supp(A21) of (f) is a strict subset of Supp(A21) of (e).] Next, we propose the directed tree recovery (DTR) algorithm, which takes the linear measurements of an unobserved network G satisfying Assumption 2 and recovers G up to the limitation in Theorem 2. This algorithm consists of three main loops. Recall that Assumption 2 implies that each latent node has at least one unique observed parent. The first loop finds all the unique observed parents of each latent node (lines: 4-11). The second loop reconstructs Supp(A22) and Supp(A12) (lines: 12-17). Finally, the third loop constructs Supp(\u02c6A21) such that Supp(A21) \u2286 Supp(\u02c6A21) (lines: 18-22). The following lemma shows that the first loop of Algorithm 1 can find all the unique observed parents of each latent node. To present the lemma, we need the following definitions. Definition 2. For an observed node i, we define li := max{k : [A\u2217_{k\u22121}]si \u0338= 0, for some s}, (8) Ri := {j : [A\u2217_{li\u22121}]ji \u0338= 0}, Mi := {(j, r) : [A\u2217_{r\u22121}]ji \u0338= 0}. (9) In the above equations, li denotes the length of the longest directed latent path that connects node i to an observed node, Ri is the set of all observed nodes that can be reached from i by a directed latent path of length li, and Mi consists of all pairs (j, r) such that there exists a directed latent path from i to j of length r. Lemma 1. Under Assumption 2, an observed node i is the unique parent of a latent node if and only if for any other observed node j with li = lj, we have (Rj \u0338\u2286 Ri) \u2228 (Rj = Ri \u2227 Mi \u2286 Mj). In the first loop, if there exist multiple unique parents of a latent node (for instance, nodes 2 and 3 in Figure 2b), we pick the one with minimum index (lines: 7-9). The second loop recovers Supp(A22) based on the following observation.
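The quantities l_i, R_i, and M_i of Definition 2 can be computed directly from the linear measurements; a sketch (here supports[k] stands for Supp(A*_k) as a boolean array, an encoding assumed for this example):

```python
import numpy as np

def node_statistics(supports):
    # supports: dict {k: Supp(A*_k)} for k >= 1, boolean arrays, where
    # [Supp(A*_k)]_{j,i} marks a directed latent path i -> j of length k+1.
    n = next(iter(supports.values())).shape[0]
    l, R, M = {}, {}, {}
    for i in range(n):
        # M_i: all pairs (j, r) with a directed latent path i -> j of length r.
        M[i] = {(j, k + 1)
                for k, S in supports.items()
                for j in range(n) if S[j, i]}
        # l_i: length of the longest directed latent path starting at i.
        l[i] = max((r for _, r in M[i]), default=0)
        # R_i: nodes reached from i by a latent path of the maximal length.
        R[i] = {j for j, r in M[i] if r == l[i]}
    return l, R, M

# Toy measurements: node 0 reaches node 1 via a length-2 latent path and
# node 2 via a length-3 latent path.
S1 = np.zeros((3, 3), dtype=bool); S1[1, 0] = True
S2 = np.zeros((3, 3), dtype=bool); S2[2, 0] = True
l, R, M = node_statistics({1: S1, 2: S2})
```

The test in Lemma 1 then reduces to set comparisons of these R_i and M_i across observed nodes with equal l_i.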
If a latent node hk is the parent of latent node hs, then hk can reach all the observed nodes in Rs, i.e., Rs \u2286 Rk and lk = ls + 1 (line: 13). Furthermore, Supp(A12) can be recovered using the fact that an observed node j is a child of a latent node hs if a unique parent of hs, e.g., s, can reach j by a directed latent path of length 2 (line: 16). Finally, the third loop reconstructs Supp(\u02c6A21) by adding an observed node i to the parent set of latent node hj if i can reach all the observed nodes that a unique parent of hj, e.g., j, reaches (lines: 18-22). Proposition 3. Suppose network G satisfies Assumption 2. Then, given its corresponding linear measurements, Algorithm 1 recovers G up to the limitation in Theorem 2. Learning More General Unobserved Networks with Minimum Number of Latent Nodes In general, the latent sub-network may not be a tree, or there may not be a unique minimal unobserved network consistent with the linear measurements (see Figure 1). Hence, we try to find an efficient approach to recovering all possible minimal unobserved networks under some conditions. In fact, without any extra conditions, finding a minimal unobserved network is NP-hard. Theorem 3. Finding an unobserved network that is both consistent with given linear measurements and has a minimum number of latent nodes is NP-hard. Algorithm 2 NM Algorithm 1: Initialization: Construct graph G0. 2: G0 := G0, Gs := \u2205, \u2200s > 0 3: k := 0 4: while Gk \u0338= \u2205 do 5: for G \u2208 Gk do 6: for i\u2032, j\u2032 \u2208 G do 7: if Check(G, i\u2032, j\u2032) then 8: Gk+1 := Gk+1 \u222a Merge(G, i\u2032, j\u2032). 9: end if 10: end for 11: end for 12: k := k + 1 13: end while 14: Output: Gout := Gk\u22121 Below, after some definitions, we propose the NodeMerging (NM) algorithm that returns all possible unobserved networks with the minimum number of latent nodes under the following assumption. Assumption 3.
Assume that there exists at most one directed latent path of each length between any two observed nodes. For example, the graph in Figure 2f satisfies this assumption, but the one in Figure 2e does not, because there are two directed latent paths of length 2 from node 5 to node 4. Definition 3. (Merging) We define merging two nodes i\u2032 and j\u2032 in graph G as follows: remove node j\u2032 and the edges between i\u2032 and j\u2032, and then give all the parents and children of j\u2032 to i\u2032. We denote the resulting graph after merging i\u2032 and j\u2032 by Merge(G, i\u2032, j\u2032). We say that two nodes i\u2032 and j\u2032 are mergeable if Merge(G, i\u2032, j\u2032) is consistent with the linear measurements of G. [Figure 3: Average error in computing linear measurements. (a) The average normalized error versus the number of observed nodes. (b) The average estimation error versus OLNR, together with the bound of Proposition 1. (c) The probability Psat. versus the parameter p, for m = 8, 10, 12.] Definition 4. (Connectedness) Consider an undirected graph \u00afG over the observed nodes which is constructed as follows: there is an edge between two nodes i and j in \u00afG if there exists k \u2265 1 s.t. Supp([A\u2217_k]ij) = 1 or Supp([A\u2217_k]ji) = 1. We say that two observed nodes i and j are \u201cconnected\u201d if there exists a path between them in \u00afG. It can be seen that if the pairs i, j and j, k are connected, then the pair i, k is also connected. We then define a connected class as a subset of observed nodes in which any two nodes are connected. Initialization: We first find the set of all connected classes, say S1, S2, ..., SC.
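Finding the connected classes of Definition 4 is a standard connected-components computation over the undirected graph \u00afG; a union-find sketch (the boolean-array encoding of the supports is assumed for illustration):

```python
import numpy as np

def connected_classes(supports):
    # Connected classes of Definition 4: components of the undirected
    # graph with an edge {i, j} whenever some Supp(A*_k) links i and j.
    n = supports[0].shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for S in supports:
        for i in range(n):
            for j in range(n):
                if S[i, j] or S[j, i]:
                    parent[find(i)] = find(j)   # union the two components
    classes = {}
    for v in range(n):
        classes.setdefault(find(v), set()).add(v)
    return list(classes.values())

# Toy: one latent path between nodes 0 and 1; nodes 2 and 3 are isolated.
S = np.zeros((4, 4), dtype=bool)
S[1, 0] = True
classes = connected_classes([S])
```

Each returned class can then be handled independently in the initialization phase of the NM algorithm.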
For each class Sc, we create a directed graph G0,c that is consistent with the linear measurements. To do so, for any two observed nodes i, j \u2208Sc, if [A\u2217 r]ji \u0338= 0, we construct a directed path with length r + 1 from node i to node j by adding r new latent nodes to G0,c. Merger: In this phase, for any G0,c from the initialization phase, we merge its latent nodes iteratively until no further latent pairs can be merged. Since the order of mergers leads to different networks with minimum number of latent nodes, the output of this phase will be the set of all such networks. Algorithm 2 summarizes the steps of NM algorithm. In this algorithm, subroutine Check(G, i\u2032, j\u2032) checks whether two nodes i\u2032 and j\u2032 are mergeable. Theorem 4. Under Assumptions 1 and 3, the NM algorithm returns the set of all networks that are consistent with the linear measurements and have minimum number of latent nodes. Experimental Results Synthetic Data: We considered a directed random graph, denoted by DRG(p, q), such that there exists a directed link between an observed and latent node with probability p, independently across all pairs, and there is a directed link between two latent nodes with probability q. If there is a link between two nodes, we set the weight of that link uniformly from [\u2212a, a]. We utilize the method described in Section 3 to estimate linear measurements with a signi\ufb01cance level of 0.05. In order to evaluate how well we can estimate the linear measurements, we generated 1000 instances of DRG(0.4, 0.4) with n + m = 100, \u03a3X = 0.1In\u00d7n, \u03a3Z = 0.1Im\u00d7m, and a = 0.1. The length of the time series was set to T = 1000. Let Supp( \u02c6 A11) be the estimate of support of A11. In Figure 3a, the expected estimation error, i.e. 
||Supp(\u02c6A11) \u2212 Supp(A11)||_F^2 / n^2, is computed, where ||.||_F is the Frobenius norm. [Figure 4: Recovering the minimal unobserved network for instances of DRG(1/(2n), 1/(2n)) where n \u2208 {10, ..., 100}, m = n/2. (a) The percentage of instances that can be reconstructed efficiently in time. (b) Average run time of the algorithm.] One can see that the estimation error decreases as the number of observed variables increases. We also studied the effect of the observed-to-latent noise power ratio (OLNR), \u03c32_X/\u03c32_Z, on ||B\u2217_0 \u2212 A\u2217_0||_1, and compared it with the bound given in Proposition 1. We generated 1000 instances of DRG(0.05, 0.05) with n = 5, m = 5, and a = 0.1. As can be seen in Figure 3b, the average estimation error decreases as OLNR increases, as expected from Proposition 2. We investigated what percentage of instances of the random graphs satisfy Assumption 2. We generated 1000 instances of DRG(p, 1/n) with n = 100 and p \u2208 [0.04, 0.2]. In Figure 3c, the probability of satisfying Assumption 2, Psat., is depicted versus p for different numbers of latent variables in the VAR model. For larger m, it is less likely to see a unique observed parent for each latent node, and thus Psat. decreases. For a fixed m, the same phenomenon occurs if we increase p when p is relatively large. Furthermore, for small p, there might exist some latent nodes that have no observed parent or no observed children. We also evaluated the performance of the NM algorithm on random graphs. We generated 1000 instances of DRG(1/(2n), 1/(2n)) with n = 10, ..., 100 and m = n/2, and computed the linear measurements.
To save time, if for a class of connected nodes the number of latent nodes generated in the initial phase exceeded 40, we assumed that the corresponding instance could not be recovered efficiently in time and did not proceed to the merging phase. Figures 4a and 4b depict the percentage of instances in which the algorithm can recover all possible minimal unobserved networks and the average run time (in seconds) of the algorithm, respectively (footnote 7). This plot shows that we can recover all possible minimal unobserved networks for a large portion of instances efficiently, even in relatively large networks. US Macroeconomic Data: We considered the following set of time series from the quarterly US macroeconomic data for the period from 31-Mar-1947 to 31-Mar-2009, collected from the St. Louis Federal Reserve Economic Database (FRED) (FRE ): GDP, GDPDEF, COE, HOANBS, TB3MS, PCEC, GPDI. Assuming that the underlying dynamics are linear (Eq. (1)), we considered the estimated VAR model over all variables as the ground truth. Then, we selected four arbitrary time series as observed processes and computed Supp(\u02c6A11). We divided the (7 choose 4) = 35 possible selections into two classes: 1) high power, where tr(E{\u03c9X(t)\u03c9X(t)T}) > \u03c4 for a fixed threshold \u03c4; 2) low power, where tr(E{\u03c9X(t)\u03c9X(t)T}) < \u03c4. In this experiment, we set \u03c4 = 0.02. In Figure 5, we plot the histograms of ||Supp(\u02c6A11) \u2212 Supp(A11)||_F^2 for these two classes. [Figure 5: Histogram of ||Supp(\u02c6A11) \u2212 Supp(A11)||_F^2: (a) high power; (b) low power.] As can be seen, in the high power regime, most of the possible selections have small estimation errors.
We also considered the following six time series of US macroeconomic data during 1-Jun-2009 to 31-Dec-2016 from the same database: GDP, GPDI, PCEC, TBSMS, FEDFUND, and GS10. We obtained the causal structure among these six time series by fitting a VAR model on all of them and considered the result as our ground truth (see Figure 6). Then, we removed GPDI from the dataset, considered the remaining five time series as observed processes, and checked whether the influences from the "latent" process (GPDI) can be correctly estimated. [Figure 6: US macroeconomic data.] We estimated the linear measurements and gave them as an input to Algorithm 1, which successfully recovered the ground truth (the estimated structure, in which the latent process is denoted by a circle, is identical to that in Figure 6). (Footnote 7: We performed the experiment on a Mac with a 2×2.4 GHz 6-Core Intel Xeon processor and 32 GB of RAM.) Dairy Prices: A collection of three US dairy prices has been observed monthly from January 1986 to December 2016 (Dai ): milk, butter, and cheese prices. [Figure 7: Dairy prices.] We estimated the VAR model on all the time series with lag length l = 1 and considered the resulting graph as our ground truth (see Figure 7). Next, we omitted the butter prices from the dataset and considered the milk and cheese prices as observed processes. The estimated linear measurements were: $\mathrm{Supp}(A_0^*) = \mathrm{Supp}(A_{11}) = [1, 1; 1, 0]$ and $\mathrm{Supp}(A_1^*) = [0, 0; 1, 0]$. Algorithm 1 correctly recovered the true causal graph using these linear measurements. Note that the genericity assumptions in (Geiger et al. 2015) do not hold true for this data set (see Experiments section).
West German Macroeconomic Data: We considered the quarterly West German consumption expenditures X1, fixed investment X2, and disposable income X3, during 1960-1982 (WG ). [Figure 8: West German macroeconomic data.] Similar to the previous experiment with dairy prices, we first obtained the entire transition matrix among all the processes. Figure 8 depicts the resulting graph. Next, we considered X3 to be latent and used {X1, X2} to estimate the linear measurements $\mathrm{Supp}(A_0^*) = \mathrm{Supp}(A_{11}) = [0, 0; 1, 1]$ and $\mathrm{Supp}(A_1^*) = [1, 0; 1, 0]$. Using these linear measurements, Algorithm 1 recovered the true network in Figure 8 correctly." + }, + { + "url": "http://arxiv.org/abs/1701.06605v1", + "title": "Identifying Nonlinear 1-Step Causal Influences in Presence of Latent Variables", + "abstract": "We propose an approach for learning the causal structure in stochastic\ndynamical systems with a $1$-step functional dependency in the presence of\nlatent variables. We propose an information-theoretic approach that allows us\nto recover the causal relations among the observed variables as long as the\nlatent variables evolve without exogenous noise. We further propose an\nefficient learning method based on linear regression for the special sub-case\nwhen the dynamics are restricted to be linear. We validate the performance of\nour approach via numerical simulations.", + "authors": "Saber Salehkaleybar, Jalal Etesami, Negar Kiyavash", + "published": "2017-01-23", + "updated": "2017-01-23", + "primary_cat": "cs.IT", + "cats": [ + "cs.IT", + "cs.LG", + "math.IT", + "stat.ME" + ], + "main_content": "Introduction Identifying causal influences in a network of time series is one of the fundamental problems in many different fields, including social sciences, economics, computer science, and biology.
In macroeconomics, for instance, researchers seek to understand which factors contribute to economic fluctuations and how these factors interact with each other [12]. In neuroscience, an extensive body of research focuses on learning the interactions between different regions of the brain by analyzing neural spike trains [16]. In the 1960s, Granger proposed a definition of causality between random processes [8]. The key idea of his definition is that if a process $X_2(t)$ causes another process $X_1(t)$, then knowing the past of $X_2(t)$ up to time t must aid in predicting $X_1(t)$. In particular, let $\Sigma_{X_1}(h \mid \Omega_t)$ be the mean square error (MSE) of the optimal h-step predictor of a random process $X_1(t)$ at time t given information $\Omega_t$. Process $X_2(t)$ is said to Granger cause process $X_1(t)$ if: $\exists h > 0 \text{ s.t. } \Sigma_{X_1}(h \mid \Omega_t) < \Sigma_{X_1}(h \mid \Omega_t \setminus \{X_2(s)\}_{s=0}^{t})$, (1) where the set $\Omega_t$ contains all information in the universe related to the past and the present of $X_1(t)$. We also say that the process $X_2(t)$ is a 1-step cause of $X_1(t)$ if the above inequality holds for h = 1. In other words, considering $X_2(t)$ in the set $\Omega_t$ improves the prediction of $X_1(t+1)$. Granger's definition of causality is consistent with the belief that a cause cannot come after the effect, but it is not practical in some settings because it requires knowledge of the entire set $\Omega_t$. To put it differently, it is hard to identify and account for all parts of the universe that are related to a specific process $X_1(t)$. Hence, only the available information related to $X_1(t)$ is considered in practice [11]. To see what may go wrong in such a situation, consider the following linear model with three state variables: $\begin{bmatrix} X_1(t) \\ X_2(t) \\ Z(t) \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0.5 \\ 0.5 & 0.1 & 0.9 \\ 0.9 & 0 & 0.5 \end{bmatrix} \begin{bmatrix} X_1(t-1) \\ X_2(t-1) \\ Z(t-1) \end{bmatrix} + \begin{bmatrix} \omega_1(t) \\ \omega_2(t) \\ 0 \end{bmatrix}$, (2) where $X_1(t)$ and $X_2(t)$ are observable but Z(t) is latent.
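The failure mode above can be reproduced numerically. The sketch below is our own illustration, not from the paper; the horizon T, the noise variance, and the seed are arbitrary choices. It simulates the system in Eq. (2) and fits a lag-1 linear model on the observed pair only:

```python
import numpy as np

# Transition matrix from Eq. (2); state order: X1, X2, Z (Z is latent).
A = np.array([[0.0, 0.0, 0.5],
              [0.5, 0.1, 0.9],
              [0.9, 0.0, 0.5]])

rng = np.random.default_rng(0)
T = 50_000
V = np.zeros((T, 3))
for t in range(1, T):
    noise = np.array([rng.normal(), rng.normal(), 0.0])  # no noise on Z
    V[t] = A @ V[t - 1] + noise

# Fit a lag-1 linear model on the observed pair (X1, X2) only.
X_past, X_next = V[:-1, :2], V[1:, :2]
coef, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
B = coef.T  # B[i, j] ~ estimated effect of X_j(t-1) on X_i(t)

print(B)
# The true X2 -> X1 entry A[0, 1] is 0, yet B[0, 1] comes out clearly nonzero.
```

Although the true transition matrix has a zero in the X2 → X1 position, the restricted regression attributes a clearly nonzero influence to X2, in line with the spurious 0.32 entry reported in the text.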
Let $\omega_1(t)$ and $\omega_2(t)$ be i.i.d. random variables with the same variance. If we fit a linear model only on $X_1(t)$ and $X_2(t)$, without considering Z(t), our estimate of the upper-left $2 \times 2$ submatrix would be [0.06, 0.32; 0.61, 0.69]. This result implies that $X_2(t)$ is a 1-step cause of $X_1(t)$ with strength 0.32, which is wrong. The concept of Granger causality can be generalized to the nonlinear setting using an information-theoretic quantity, "directed information" [13]. Still, the problem caused by latent processes persists in that setting as well. Identifying causal relations between random variables has been studied in the presence of latent variables to some extent. For instance, Elidan et al. proposed an algorithm based on expectation maximization (EM) to estimate the parameters of their model by fixing the number of latent variables and also the structural relationships between latent and observed variables [3]. Chandrasekaran et al. [1] presented a tractable convex program based on regularized maximum likelihood for recovering causal relations in a model where the latent and observed variables are jointly Gaussian and the conditional statistics of the observed variables given the latent variables form a sparse graph. A well-known approach for learning latent Markov models uses quartet-based distances to discover the structure [10, 4]. In most quartet-based solutions, a set of quartets is constructed for all subsets of four observable variables and then the quartets are merged to form a tree structure. In recent years, there has been an increasing interest in inferring causal relations in random processes. Jalali and Sanghavi showed that 1-step causal relations between observed variables can be identified in a Vector Auto-Regressive (VAR) model assuming that connections between observed variables are sparse and each latent variable interacts with many observed variables [9]. In [7], Geiger et al.
showed that identifying 1-step causes between observed variables is possible under some algebraic conditions on the transition matrix of the VAR model. Recently, Etesami et al. studied a network of processes with a polytree structure and introduced an algorithm that can learn latent polytrees using a discrepancy measure [6]. In this paper, we propose an information-theoretic criterion for identifying the causal relations in a general model of stochastic dynamical systems, without restricting the mapping functions (say, to linear mappings) or the underlying structure (e.g., being a tree) among the observed nodes, when there is no exogenous noise in the latent part. We propose an efficient method to identify functional dependencies for the special case of linear mappings. We further demonstrate the applicability of this criterion through simulation results for both linear and nonlinear cases. The paper is organized as follows. In Section 2, we provide the preliminary definitions and describe the system model. In Section 3, we present the main result and study its restriction to linear models. We provide our simulation results in Section 4. Finally, we conclude in Section 5. 2 Problem Definition In this section, after some notational conventions, the model of the stochastic dynamical system is presented. Afterwards, we present our definition of 1-step functional dependency between the processes for this model. 2.1 Notations Any $n \times 1$ vector with entries $[V_1(t); \cdots; V_n(t)]$ is denoted by $\vec{V}(t)$. We denote the t-th random variable in the i-th process by $V_i(t)$. We use underlined characters to represent a collection of processes; for example, $V_{K,0}^t$ denotes a set of random processes with index set K from time 0 up to time t. For $K = \{1, ..., n\}$, we denote $V_{K,0}^t$ by $V_0^t$. We also define $-j := \{1, \cdots, n\} \setminus \{j\}$. The identity matrix of size n is denoted by $I_{n \times n}$.
We denote the (i, j) entry of a matrix A by A(i, j). [Figure 1: Functional dependency graph of Example 1.] In a directed graph $\vec{G} = (V, \vec{E})$, characterized by a set V of vertices (or nodes) and a set $\vec{E} \subset V \times V$ of ordered pairs of vertices, called arrows (or edges), we denote the set of parents of a node v by $PA(v)$, defined as $PA(v) := \{u \in V : (u, v) \in \vec{E}\}$. 2.2 System model Consider a dynamical system described by n + m states $\vec{V}(t) = [V_1(t), \cdots, V_{n+m}(t)]$ in which the first n processes, denoted by $\vec{X} = [X_1, ..., X_n]$, are observable states and the rest, denoted by $\vec{Z} = [Z_1, ..., Z_m]$, are latent. More precisely, the joint dynamics of the system is given by: $X_i(t) = F_i(\vec{X}(t-1), \vec{Z}(t-1)) + \omega_i(t)$ for $1 \le i \le n$, and $\vec{Z}(t) = G(\vec{X}(t-1), \vec{Z}(t-1))$, (3) where the exogenous noises $\{\omega_i(t)\}$ are i.i.d. with mean zero, and $F_i : \mathbb{R}^{n+m} \to \mathbb{R}$ and $G : \mathbb{R}^{n+m} \to \mathbb{R}^m$ are mapping functions that belong to an appropriately constrained class of functions. Furthermore, we assume that $\vec{Z}(0)$ is a vector of unknown but fixed values. The goal of this work is to identify the causal structure among the observed processes $\vec{X}$ given their realizations. Next, we formally introduce what we mean by a causal structure of a dynamical system. 2.3 Causal Structural Graph In dynamical systems with functional dependencies, there is a natural notion of influence among the processes, in the sense that process $V_j$ causes process $V_i$ if $V_i$ is a function of $V_j$. Such dependencies have been studied in the literature [5]. Adopting the definition of functional dependency in [5], we define the causal structure of the system in (3) as follows.
Random process $V_i$ 1-step functionally depends on process $V_j$ over the time horizon [0, T] if changing the value of $V_j(t-1)$, while keeping all the other variables fixed, results in a change in $V_i(t)$ for some time $0 < t \le T$. Next, we present our formal definition of functional dependencies in systems whose joint dynamics is described by (3). Definition 1. We say $V_j$ 1-step functionally influences $V_i$ if and only if $\alpha_{i,j} := \frac{1}{T}\sum_{t=0}^{T} \alpha_{i,j}(t) > 0$, where $\alpha_{i,j}(t) := \sup_{v_j, v'_j, \vec{v}_{-j}} |F_i(v_j, \vec{v}_{-j}) - F_i(v'_j, \vec{v}_{-j})|$, (4) and $(v_j, \vec{v}_{-j})$ and $(v'_j, \vec{v}_{-j})$ are two realizations of $(V_j(t-1), \vec{V}_{-j}(t-1))$. In order to visualize the causal structure in (3), we introduce a directed graph whose nodes represent random processes, with an arrow from node j to node i if $V_j$ 1-step functionally influences $V_i$. Example 1. Consider a causal system with 3 processes whose joint dynamics is given by: $X_1(t) = e^{-|X_1(t-1)+X_2(t-1)|} - e^{-|Z(t-1)|}/5 + \omega_1(t)$, $X_2(t) = \sqrt{|X_2(t-1)|}/2 + \omega_2(t)$, $Z(t) = X_2(t-1)Z(t-1)$, where the $\omega_i$'s are independent exogenous noises. Figure 1 depicts the functional dependency graph of this system. Directed Information Graphs (DIGs) are another type of graphical model that encodes statistical dependencies in dynamical systems [2]. These graphs are defined using an information-theoretic measure, the "conditional directed information" [14, 18]. The relationship between the functional dependencies in a stochastic dynamical system and their corresponding DIG has been studied in [5]. For the sake of completeness, we present the definition of the DIG.
Consider two random processes $V_i$ and $V_j$ and a set of indices $K \subseteq \{1, ..., n\} \setminus \{i, j\}$; the conditional directed information from $V_j$ to $V_i$ given $V_K$ is defined as: $I(V_j \to V_i \| V_K) := \mathbb{E}_{P_{V_{K \cup \{i,j\}}}}\left[\log \frac{dP_{V_i \| V_j, V_K}}{dP_{V_i \| V_K}}\right]$, (5) where $\frac{dP_V}{dQ_V}$ is the Radon-Nikodym derivative [19] and $P_{V_i \| V_K}$ denotes the causal conditioning defined as $P_{V_i \| V_K} := \prod_{t \ge 1} P_{V_i(t) | V_i^{t-1}, V_K^{t-1}}$. Definition 2. [17] A directed information graph (DIG) is a directed graph, $\vec{G}_{DI}$, over a set of random processes V. Node i represents the random process $V_i$; there is an arrow from j to i for $i, j \in \{1, ..., n\}$ if and only if: $I(V_j \to V_i \| V_{-\{i,j\}}) > 0$. (6) Note that in the definition of the DIG, it is assumed that there are no latent processes. Thus, as demonstrated in the example below, when a subset of processes is not observable (as in our model), the corresponding DIG may not encode the 1-step causal relationships accurately. Example 2. Consider the following joint dynamics: $X_1(t) = X_1(t-1)/3 + W_1(t)$, $Z(t) = X_1(t-1)$, $X_2(t) = Z(t-1)/3 + W_2(t)$, where $\{W_1, W_2\}$ are independent exogenous noises. The corresponding DIG of this system when all processes are observed is $X_1 \to Z \to X_2$, and when Z is latent, it is $X_1 \to X_2$. But we know that there is no 1-step functional dependency between $X_1$ and $X_2$. Definition 3. A joint distribution $P_V$ is called positive if there exists a reference measure $\phi$ such that $P_V \ll \phi$ and $\frac{dP_V}{d\phi} > 0$. Remark 1. In addition to requiring no latent processes, DIGs recover the structure correctly when the underlying distribution is positive. This is to avoid degenerate cases that arise with deterministic relationships. For instance, suppose X and Y are two random processes such that $Y = \psi(X)$ for some deterministic function $\psi$. Then $P_{X,Y}$ is not positive, since the distribution of Y given X is a point mass.
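Returning to the coefficient of Definition 1, $\alpha_{i,j}(t)$ can be approximated numerically by maximizing over a grid of realizations. The sketch below is our own illustration (the grid range is an arbitrary choice); it applies the idea to the noise-free update maps of Example 1:

```python
import numpy as np

# Noise-free update maps of Example 1.
def F1(x1, x2, z):
    return np.exp(-abs(x1 + x2)) - np.exp(-abs(z)) / 5.0

def F2(x1, x2, z):
    return np.sqrt(abs(x2)) / 2.0

def alpha(F, arg, grid):
    """Grid approximation of sup |F(v_j, v_-j) - F(v'_j, v_-j)|,
    varying only the argument with index `arg` and holding the rest fixed."""
    best = 0.0
    for a in grid:            # first value of the varying argument
        for b in grid:        # second value of the varying argument
            for c in grid:    # the two fixed arguments
                for d in grid:
                    v = {0: (a, c, d), 1: (c, a, d), 2: (c, d, a)}[arg]
                    vp = {0: (b, c, d), 1: (c, b, d), 2: (c, d, b)}[arg]
                    best = max(best, abs(F(*v) - F(*vp)))
    return best

grid = np.linspace(-2.0, 2.0, 9)
a_12 = alpha(F1, 1, grid)  # dependence of X1's update on X2
a_21 = alpha(F2, 0, grid)  # dependence of X2's update on X1
print(a_12, a_21)
```

As expected from Figure 1, the update of $X_2$ shows no dependence on $X_1$ (the supremum is exactly zero), while $X_1$'s update clearly depends on $X_2$.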
Note that our model in (3) does not satisfy the non-degeneracy assumption. This is because, in this model, the hidden variables are deterministic functions of the other processes. Yet, as we will show next, the 1-step causal structure between the observed processes is unique and recoverable as long as the marginal distribution of the observed processes is positive. 3 Main Result Herein, we introduce our approach for learning the 1-step functional dependencies among the observed variables given their realizations. This approach does not require any prior knowledge about the number of latent processes or the functions $\{F_i\}$ and G. Theorem 1. Consider the dynamical system in (3) and assume that the marginal distribution of the observed variables $P_X$ is positive. Then $\alpha_{i,j}(t) = 0$ if and only if: $I(X_i(t); X_j(t-1) \mid X_0^{t-1} \setminus \{X_j(t-1)\}) = 0$. (7) Proof. First, we prove that if $\alpha_{i,j}(t) = 0$ then (7) holds. Suppose that $X_i(t)$ does not 1-step functionally depend on $X_j(t-1)$. According to (3), the latent vector $\vec{Z}(t)$ can be determined recursively as a function of $X_0^{t-1}$ and $\vec{Z}(0)$. We denote this by $\vec{Z}(t) = \Psi_t(X_0^{t-1}, \vec{Z}(0))$. Therefore, the entropy of $X_i(t)$ given $X_0^{t-1}$ is: $H(X_i(t) \mid X_0^{t-1}) = H(F_i(\vec{X}(t-1), \vec{Z}(t-1)) + \omega_i(t) \mid X_0^{t-1}) = H(F_i(\vec{X}(t-1), \Psi_{t-1}(X_0^{t-2}, \vec{Z}(0))) + \omega_i(t) \mid X_0^{t-1}) = H(\omega_i(t) \mid X_0^{t-1}) = H(\omega_i(t))$. (8) The last equality holds because $F_i(\vec{X}(t-1), \Psi_{t-1}(X_0^{t-2}, \vec{Z}(0)))$ is a deterministic function of $X_0^{t-1}$ and $\vec{Z}(0)$, and $\omega_i(t)$ is independent of $X_0^{t-1}$.
Furthermore, we have: $H(X_i(t) \mid X_0^{t-1} \setminus \{X_j(t-1)\}) = H(F'_i(X_j(t-1)) + \omega_i(t) \mid X_0^{t-1} \setminus \{X_j(t-1)\})$, (9) where $F'_i(X_j(t-1))$ is a univariate function obtained from $F_i(\vec{X}(t-1), \Psi_{t-1}(X_0^{t-2}, \vec{Z}(0)))$ by fixing the values of $X_0^{t-1} \setminus X_j(t-1)$. But $F'_i(X_j(t-1))$ does not change by varying $X_j(t-1)$, since we assumed $\alpha_{i,j}(t) = 0$. Hence, the above quantity is equal to $H(\omega_i(t))$, and by comparing with (8), we can deduce that (7) holds. For the converse, note that $X_i(t)$ and $X_j(t-1)$ are independent given $X_0^{t-1} \setminus \{X_j(t-1)\}$, according to (7). Consequently, we have: $E\{X_i(t) \mid X_0^{t-1}\} = E\{X_i(t) \mid X_0^{t-1} \setminus \{X_j(t-1)\}\}$. (10) For any realization $x_0^{t-1}$ of $X_0^{t-1}$, the left-hand side of the above equation is equal to: $E\{X_i(t) \mid X_0^{t-1} = x_0^{t-1}\} = F_i(\vec{x}(t-1), \vec{z}(t-1))$, (11) where $\vec{z}(t-1) = \Psi_{t-1}(x_0^{t-2}, \vec{Z}(0))$. Since the joint distribution of the observed processes $P_X$ is positive, we know that $X_j(t-1)$ cannot be written as a deterministic function of $X_0^{t-1} \setminus \{X_j(t-1)\}$. Thus, the right-hand side of (10) does not depend on $X_j(t-1)$. From this fact and (11), we can conclude that $F_i(\vec{X}(t-1), \vec{Z}(t-1))$ is not a function of $X_j(t-1)$ for any realization of $X_0^{t-1}$, and thus $\alpha_{i,j}(t) = 0$. This result can be used to recover the 1-step causal structure of the observed processes in (3) given $X_0^t$. To do so, one can estimate the conditional mutual information in (7) for all t. If (7) holds, then we declare that there is no 1-step dependency from $X_j$ to $X_i$. Next, we propose an efficient method to learn the 1-step causal structure of the observed processes in (3) when the $F_i$'s and G are linear functions.
3.1 The Linear Model Suppose the $F_i$'s and G are linear functions; then the equations in (3) can be rewritten as follows: $\begin{bmatrix} \vec{X}(t) \\ \vec{Z}(t) \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} \vec{X}(t-1) \\ \vec{Z}(t-1) \end{bmatrix} + \begin{bmatrix} \vec{\omega}(t) \\ \vec{0}_{m \times 1} \end{bmatrix}$, (12) where $[A_{11}]_{n \times n}$, $[A_{12}]_{n \times m}$, $[A_{21}]_{m \times n}$, and $[A_{22}]_{m \times m}$ denote the coefficient matrices. We also define $A = [A_{11}, A_{12}; A_{21}, A_{22}]$. The functional dependency of the state vector $\vec{X}(t)$ on its history $X_0^{t-1}$, $\vec{\omega}(t)$, and $\vec{Z}(0)$ for $t > 1$ can be written as follows: $\vec{X}(t) = \sum_{k=0}^{t-1} A_k^* \vec{X}(t-1-k) + A_{12}A_{22}^{t-1}\vec{Z}(0) + \vec{\omega}(t)$, (13) where $A_0^* = A_{11}$ and $A_k^* = A_{12}A_{22}^{k-1}A_{21}$ for $k \ge 1$. Now, suppose that the information-theoretic criterion in (7) is zero. By the same arguments as in the proof of Theorem 1, we can show that the following term is zero: $E\{X_i(t) \mid x_0^{t-1}\} - E\{X_i(t) \mid x_0^{t-1} \setminus \{x_j(t-1)\}\} = A_{11}(i,j)\left(x_j(t-1) - E\{X_j(t-1) \mid x_0^{t-1} \setminus \{x_j(t-1)\}\}\right)$ (14) for any realization $X_0^{t-1} = x_0^{t-1}$. Since $P_X$ is positive, we can deduce that $A_{11}(i,j) = 0$. Consequently, learning the 1-step causal structure among the observed processes reduces to determining the support of $A_{11}$. Assume that the support of $A_{22}$ corresponds to a directed acyclic graph, i.e., there exists an $l > 0$ such that $A_{22}^l = 0$. Under this condition, equation (13) simplifies to: $\vec{X}(t) = \sum_{k=0}^{l} A_k^* \vec{X}(t-1-k) + \vec{\omega}(t), \quad t \ge l+1$. (15) The above equation can be interpreted as a VAR model of order l + 1. Hence, all matrices $\{A_k^*\}$ can be obtained by multivariate least squares estimation [11]. Moreover, the coefficients of the VAR model can be checked for zero constraints by a Wald test [11].
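Since Eq. (15) is an ordinary VAR model over the observed process, the matrices $\{A_k^*\}$ can be recovered by stacked least squares. A minimal sketch (our own; the function name is illustrative, and the sanity check uses a pure VAR(1) with no latent part):

```python
import numpy as np

def estimate_var(X, p):
    """Least-squares fit of X(t) = sum_{k=1}^{p} C_k X(t-k) + noise.
    X has shape (T, n); returns the list [C_1, ..., C_p]."""
    T, n = X.shape
    # Design matrix: each row stacks [X(t-1), ..., X(t-p)].
    rows = [np.concatenate([X[t - k] for k in range(1, p + 1)])
            for t in range(p, T)]
    Z, Y = np.asarray(rows), X[p:]
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    C = coef.T  # shape (n, n*p)
    return [C[:, k * n:(k + 1) * n] for k in range(p)]

# Sanity check on a pure VAR(1) with known coefficients.
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
X = np.zeros((20_000, 2))
for t in range(1, len(X)):
    X[t] = A @ X[t - 1] + rng.normal(size=2)

C1 = estimate_var(X, p=1)[0]
print(np.round(C1, 2))  # close to A
```

With enough samples, thresholding the entries of the estimated $C_1$ (the analogue of $A_0^* = A_{11}$) recovers the support, which is what the Wald test formalizes.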
Thus, we can check the information-theoretic criterion merely by performing a Wald test. 4 Experimental Results In this section, we utilize the method described in the previous section for the network identification problem in consensus protocols [15]. In control systems, a well-known approach for network identification is based on running a series of "node-knockout" experiments in which variables are sequentially forced to be zero without being removed from the network [15, 20]. The main drawback of this approach is that we need to intervene in the system. Here, we show that the direct edges between observed nodes can be detected just by analyzing the time series of the observed processes. Consider the weighted consensus protocol within a system with n + m nodes: $V_i(t) = w_{ii}V_i(t-1) + \sum_{j \ne i} w_{ij}(V_j(t-1) - V_i(t-1)) + B_i\omega_i(t)$, (16) where $V_i(t)$ represents the state of node i at time t, such as its speed, heading, or position, and the weight $w_{ij}$ denotes the weight on the edge (i, j). The first n state variables correspond to states of observed nodes and the rest belong to hidden nodes. We try to find all directed edges (with nonzero weight) between observed nodes by injecting the white noise $\omega_i(t)$ into observed node i, i.e., $B_i = 1$ if $V_i$ is an observed node and $B_i = 0$ otherwise. In fact, this problem can be reformulated in the form of (12) such that: $A(i,k) = w_{ik}$ if $i \ne k$, and $A(i,i) = w_{ii} - \sum_{j \ne i} w_{ij}$, (17) where $[V_1(t), \cdots, V_n(t)] = [X_1(t), \cdots, X_n(t)]$ and $[V_{n+1}(t), \cdots, V_{n+m}(t)] = [Z_1(t), \cdots, Z_m(t)]$. Hence, identifying all directed edges with nonzero weight between observed nodes is equivalent to obtaining the support of the matrix $A_{11}$. We generated 1000 instances of the linear system with n = 10 observed nodes and m = 10 latent nodes.
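The reduction in Eq. (17) from consensus weights to the transition matrix of (12) is mechanical; a small helper (our own sketch, assuming a dense weight matrix W with W[i][j] = w_ij and the diagonal holding w_ii) makes it concrete:

```python
import numpy as np

def consensus_transition(W):
    """Build A from Eq. (17): A[i, k] = w_ik for i != k, and
    A[i, i] = w_ii - sum_{j != i} w_ij."""
    W = np.asarray(W, dtype=float)
    A = W.copy()
    off_diag_sums = W.sum(axis=1) - np.diag(W)
    np.fill_diagonal(A, np.diag(W) - off_diag_sums)
    return A

W = np.array([[0.3, 0.2, 0.0],
              [0.1, 0.4, 0.2],
              [0.0, 0.0, 0.5]])
A = consensus_transition(W)
print(A)
```

Off-diagonal entries of A are copied straight from W, while each diagonal entry becomes $w_{ii}$ minus the i-th row's off-diagonal sum, exactly as Eq. (17) prescribes.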
The weight $w_{ij}$ ($i \ne j$) was selected randomly from the set $\{-b, 0, b\}$ with probabilities $[q, 1-2q, q]$, where q = 0.1 and b = 0.7, if i and j were hidden. Otherwise, the weight $w_{ij}$ ($i \ne j$) was chosen randomly from the set $\{-a, 0, a\}$ with probabilities $[p, 1-2p, p]$, where a = 0.2. Moreover, we set $w_{ii}$ to $\sum_{(i,j) \in \vec{E}} w_{ij}$. In our simulations, we excluded the generated networks which had cycles in the latent part. Furthermore, the noise process $\vec{\omega}(t)$ was chosen as i.i.d. $N(\vec{0}_{n \times 1}, \sigma^2 I_{n \times n})$ with $\sigma^2 = 0.1$. It can be easily seen that the conditional mutual information in (7) is equal to: $I(X_i(t); X_j(t-1) \mid X_0^{t-1} \setminus X_j(t-1)) = \frac{1}{2}\log\left(1 + \frac{(A_{11}(i,j))^2 \sigma_{\omega_j}^2}{\sigma_{\omega_i}^2}\right), \quad t > l$, (18) where $\sigma_{\omega_i}^2$ and $\sigma_{\omega_j}^2$ are the variances of $\omega_i(t)$ and $\omega_j(t)$, respectively. Thus, learning 1-step functional dependencies corresponds to finding the support of the matrix $A_{11}$, denoted by $\mathrm{Supp}(A_{11})$. [Figure 2: Average error versus lag length for different values of parameter p (p = 0.2, 0.225, 0.25).] In order to obtain the nonzero entries of $A_{11}$, we performed a linear regression between $\vec{X}(t+1)$ and the data $X_{t-l+1}^t$, where l is the lag length. Let $\hat{A}_{11}$ be the output of the linear regression for a time series of length 10000. According to the Wald test [11], for a large number of samples, we can obtain $\mathrm{Supp}(\hat{A}_{11})$ by setting entry (i, j) to one if $|\hat{A}_{11}(i,j)| > a/2$. In Fig. 2, the error $\|\mathrm{Supp}(\hat{A}_{11}) - \mathrm{Supp}(A_{11})\|_F^2$ is averaged over the generated random matrices, where $\|\cdot\|_F$ is the Frobenius norm of a matrix. As can be seen, the support of $A_{11}$ can be recovered perfectly as the lag length increases. This trend is expected, since the lag length should be at least equal to the order l of the linear model in order to have perfect recovery. Moreover, as shown in Fig.
2, for a fixed lag length, the average error is higher for larger p. This is because the matrices $\{A_k^*\}$ become denser for larger p, which leads to a higher average error when the right lag length is not selected. We also examined our proposed criterion in a nonlinear system with three state variables with the following joint dynamics: $X_1(t) = 0.2X_1(t-1) + 0.4\sqrt{|Z(t-1)|} + \omega_1(t)$, $X_2(t) = 0.5(X_1(t-1))^2 + 0.9Z(t-1) + \omega_2(t)$, $Z(t) = 0.9(X_1(t-1))^3 + 0.4Z(t-1)$, (19) where $\omega_1(t)$ and $\omega_2(t)$ are i.i.d. $N(0, 0.1)$ and $X_1(0)$, $X_2(0)$ have normal distributions with zero mean and unit variance. The quantity in (7) can be written as a linear combination of some joint entropies. Hence, we can utilize the K-nearest neighbor method of [21] to obtain an estimate of the desired quantity. To do so, we generated 1000 samples of $X_1$, $X_2$, $Z$ for t = 0, 1. For K = 10, the numerical results were: $I(X_1(1); X_2(0) \mid X_1(0)) \approx 0.06$ and $I(X_2(1); X_1(0) \mid X_2(0)) \approx 1.68$. From these results, we can infer that $X_2(t)$ is 1-step functionally dependent on $X_1(t-1)$, which is consistent with the system model in (19)." + } + ], + "Sina Ghiassian": [ + { + "url": "http://arxiv.org/abs/2007.00611v4", + "title": "Gradient Temporal-Difference Learning with Regularized Corrections", + "abstract": "It is still common to use Q-learning and temporal difference (TD)\nlearning (even though they have divergence issues and sound Gradient TD\nalternatives exist) because divergence seems rare and they typically perform\nwell. However, recent work with large neural network learning systems reveals\nthat instability is more common than previously thought. Practitioners face a\ndifficult dilemma: choose an easy-to-use and performant TD method, or a more\ncomplex algorithm that is more sound but harder to tune and all but unexplored\nwith non-linear function approximation or control.
In this paper, we introduce\na new method called TD with Regularized Corrections (TDRC), which attempts to\nbalance ease of use, soundness, and performance. It behaves as well as TD when\nTD performs well, but is sound in cases where TD diverges. We empirically\ninvestigate TDRC across a range of problems, for both prediction and control,\nand for both linear and non-linear function approximation, and show,\npotentially for the first time, that gradient TD methods could be a better\nalternative to TD and Q-learning.", + "authors": "Sina Ghiassian, Andrew Patterson, Shivam Garg, Dhawal Gupta, Adam White, Martha White", + "published": "2020-07-01", + "updated": "2020-09-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction Off-policy learning\u2014the ability to learn the policy or value function for one policy while following another\u2014underlies many practical implementations of reinforcement learning. Many systems use experience replay, where the value function is updated using previous experiences under many different policies. A similar strategy is employed in asynchronous learning systems that use experience from several different policies to update multiple distributed learners (Espeholt et al., 2018). Off-policy updates can also be used to learn a policy from human demonstrations. In general, many algorithms attempt to estimate the optimal policy from samples generated from a different exploration policy. (*Equal contribution. 1Amii, Department of Computing Science, University of Alberta. 2DeepMind, Alberta. Correspondence to: Sina Ghiassian , Andrew Patterson . Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).)
One of the most widely used algorithms, Q-learning\u2014a temporal difference (TD) algorithm\u2014is off-policy by design: simply updating toward the maximum-value action in the current state, regardless of which action the agent selected. Both TD and Q-learning, however, have well-documented convergence issues, as highlighted in the seminal counterexample by Baird (1995). The fundamental issue is the combination of function approximation, off-policy updates, and bootstrapping: an algorithmic strategy common to sample-based TD learning and Dynamic Programming algorithms (Precup, Sutton & Dasgupta, 2001). This combination can cause the value estimates to grow without bound (Sutton & Barto, 2018). Baird's result motivated over a decade of research and several new off-policy algorithms. The most well-known of these approaches, the Gradient TD methods (Sutton et al., 2009), make use of a second set of weights and importance sampling. Although sound under function approximation, these Gradient TD methods are not commonly used in practice, likely due to the additional complexity of tuning two learning rate parameters. Many practitioners continue to use unsound approaches such as TD and Q-learning for good reasons. The evidence of divergence is based on highly contrived toy counter-examples. Often, many large-scale off-policy learning systems are designed to ensure that the target and behaviour policies are similar\u2014and therefore less off-policy\u2014by ensuring prioritization is mixed with random sampling (Schaul et al., 2016), or frequently syncing the actor policies in asynchronous architectures (Mnih et al., 2016). However, if agents could learn from a larger variety of data streams, our systems could be more flexible and potentially more data efficient. Unfortunately, it appears that current architectures are not as robust under these more aggressive off-policy settings (van Hasselt et al., 2018). This results in a dilemma: the easy-to-use and typically effective TD algorithm can sometimes fail, but the sound Gradient TD algorithms can be difficult to use. There are algorithms that come close to achieving convergence and lower-variance updates without the need to tune multiple stepsize parameters. Retrace (Munos et al., 2016)
This results in a dilemma: the easy-to-use and typically effective TD algorithm can sometimes fail, but the sound Gradient TD algorithms can be dif\ufb01cult to use. There are algorithms that come close to achieving convergence and lower variance updates without the need to tune multiple stepsize parameters. Retrace (Munos et al., 2016) arXiv:2007.00611v4 [cs.LG] 17 Sep 2020 \fGradient Temporal-Difference Learning with Regularized Corrections and its prediction variant Vtrace (Espeholt et al., 2018) reduce the variance of off-policy updating, by clipping importance sampling ratios. These methods, however, are built on off-policy TD and so still have divergence issues (Touati et al., 2018). The sound variants of these algorithms (Touati et al., 2018), and the related work on an algorithm called ABQ (Mahmood, Yu & Sutton, 2017), maintain some of the variance reduction, but rely on Gradient TD to obtain soundness and so inherit the issues therein\u2014the need to tune multiple stepsize parameters. Linear off-policy prediction can be reformulated as a saddlepoint problem, resulting in one time-scale, true gradient descent variant of the GTD2 algorithm (Mahadevan et al., 2014; Liu et al., 2015; Liu et al., 2016). The Emphatic TD algorithm achieves convergence with linear function approximation and off-policy updates using only a single set of weights and thus one stepsize parameter (Sutton et al., 2016). Unfortunately, high variance updates reduce the practicality of the method (White & White, 2016). Finally, Hybrid TD algorithms (Hackman, 2012, White & White, 2016) were introduced to automatically switch between TD updates when the data is on-policy, and gradient-style updates otherwise, thus ensuring convergence. In practice these hybrid methods are more complicated to implement and can have stability issues (White & White, 2016). In this paper we introduce a new Gradient TD method, called TD with Regularized Corrections (TDRC). 
With more regularization the algorithm acts like TD, and with no regularization it reduces to TD with Gradient Corrections (TDC). We find that for an intermediate level of regularization, TDRC obtains the best of both algorithms, and is not sensitive to this parameter: a regularization parameter of 1.0 was effective across all experiments. We show that our method (1) outperforms other Gradient TD methods overall across a variety of problems, and (2) matches TD when TD performs well, while maintaining convergence guarantees. We demonstrate that TDC frequently outperforms the saddlepoint variant of Gradient TD, motivating why we build on TDC and the utility of being able to shift between TD and TDC by setting the regularization parameter. We then highlight why TDRC improves so significantly on TDC, by examining TDC's sensitivity to its second stepsize. We conclude with a demonstration in control, with non-linear function approximation, showing that (1) TDC can perform very well in some settings and very poorly in others, and (2) TDRC is always comparable to Q-learning and, in some cases, is much better.

2. Background

In this paper we tackle the policy evaluation problem in Reinforcement Learning. We model the agent's interactions with its environment as a Markov Decision Process (MDP). The agent and environment interact continually. On each time step t = 0, 1, 2, ..., the agent selects an action At ∈ A in state St ∈ S. Based on the agent's action At and the transition dynamics P : S × A × S → [0, 1], the environment transitions into a new state St+1 and emits a scalar reward Rt+1. The agent selects actions according to its policy π : S × A → [0, 1].
The main objective in policy evaluation is to estimate the value of a state s, defined as the expected discounted sum of future rewards under π:

    vπ(s) := Eπ[Rt+1 + γRt+2 + γ²Rt+3 + ··· | St = s] = Eπ[Gt | St = s],    (1)

where γ ∈ [0, 1], Gt ∈ ℝ is called the return, and Eπ is the expectation taken with respect to future states, actions, and rewards generated by π and P. In many problems of interest, the agent cannot directly observe the state. Instead, on each step the agent observes a featurized representation of the state, xt := x(St) ∈ ℝⁿ, where n ≪ |S|. In this setting, the agent cannot estimate the value of each state individually, but must approximate the value with a parametric function. In this paper we focus on the case of linear function approximation, where the value estimate v̂ : S × ℝⁿ → ℝ is simply formed as an inner product between x(s) and a learned set of weights w ∈ ℝⁿ: v̂(s, w) := w⊤x(s). Our objective is to adjust wt on each time step to construct a good approximation of the true value: v̂ ≈ vπ. Perhaps the most well-known and successful algorithm for doing so is temporal difference (TD) learning:

    δt := Rt+1 + γ wt⊤xt+1 − wt⊤xt,
    wt+1 ← wt + αt δt xt,    (2)

for stepsize αt > 0. TD is guaranteed to be convergent under linear function approximation and on-policy sampling. The classical TD algorithm was designed for on-policy learning; however, it can be easily extended to the off-policy setting. In on-policy learning, the policy used to select actions is the same as the policy used to condition the expectation in the definition of the value function (Eq. 1). Alternatively, we might want to make off-policy updates, where the actions are chosen according to some behavior policy b, different from the target policy π used in Eq. 1.
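As a concrete illustration, the semi-gradient TD(0) update in Eq. 2 can be sketched in a few lines of NumPy. The feature vectors and stepsize below are made-up values for the example, not taken from the paper.

```python
import numpy as np

def td0_update(w, x_t, r_tp1, x_tp1, alpha, gamma):
    """One semi-gradient TD(0) update for linear value estimation (Eq. 2).

    w     : weight vector, shape (n,)
    x_t   : feature vector of the current state
    r_tp1 : reward observed on the transition
    x_tp1 : feature vector of the next state
    """
    delta = r_tp1 + gamma * np.dot(w, x_tp1) - np.dot(w, x_t)  # TD error
    return w + alpha * delta * x_t

# Tiny usage example on made-up one-hot features:
w = td0_update(np.zeros(3),
               np.array([1.0, 0.0, 0.0]), 1.0, np.array([0.0, 1.0, 0.0]),
               alpha=0.1, gamma=0.9)
# Here delta = 1, so only the current state's feature receives credit.
```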
If we view value estimation as estimating the expected return, this off-policy setting corresponds to estimating an expectation conditioned on one distribution with samples collected under another. TD can be extended to make off-policy updates by using importance sampling ratios ρt := π(At|St) / b(At|St) ≥ 0. The resulting algorithm is a minor modification of TD, wt+1 ← wt + αt ρt δt xt, where δt is defined in Eq. 2. Off-policy TD can diverge with function approximation, but fortunately there are several TD-based algorithms that are convergent. When TD learning converges, it converges to the TD fixed point: the weight vector where E[δt xt] = 0. Interestingly, TD does not perform gradient descent on any objective to reach the TD fixed point. So, one way to achieve convergence is to perform gradient descent on an objective whose minimum corresponds to the TD fixed point. Gradient TD methods do exactly this on the Mean Squared Projected Bellman Error (MSPBE) (see Eq. 7). There are several ways to approximate and simplify the gradient of the MSPBE, each resulting in a different algorithm. The two most well-known approaches are TD with Corrections (TDC) and Gradient TD (GTD2). Both require double the computation and storage of TD, and employ a second set of learned weights h ∈ ℝⁿ with a different stepsize parameter ηαt, where η is a tunable constant. The updates for the TDC algorithm are otherwise similar to TD:

    wt+1 ← wt + αt ρt δt xt − αt ρt γ (ht⊤xt) xt+1,
    ht+1 ← ht + ηαt [ρt δt − ht⊤xt] xt.    (3)

The GTD2 algorithm uses the same update for ht, but the update to the primary weights is different:

    wt+1 ← wt + αt ρt (xt − γxt+1)(ht⊤xt).    (4)
The Gradient TD algorithms are not widely used in practice and are considered difficult to use. In particular, for TDC, the second stepsize has a big impact on performance (White & White, 2016), and the theory suggests that η > 1 is necessary to guarantee convergence (Sutton et al., 2009). Attempts to improve Gradient TD methods have largely come from rederiving GTD2 using a saddlepoint formulation of the MSPBE (Mahadevan et al., 2014). This formulation enables us to view GTD2 as a one-time-scale algorithm with a single set of weights [w, h] using a single global stepsize parameter. In addition, saddlepoint GTD2 can be combined with acceleration techniques like Mirror Prox (Mahadevan et al., 2014) and stochastic variance reduction methods such as SAGA and SVRG (Du et al., 2017). Unfortunately, Mirror Prox has never been shown to improve performance over vanilla GTD2 (White & White, 2016; Ghiassian et al., 2018). Current variance reduction methods like SAGA are only applicable in the offline setting, and extension to the online setting would require new methods (Du et al., 2017). In Appendix B we include comparisons of off-policy prediction algorithms in the batch setting, including recent Kernel Residual Gradient methods (Feng et al., 2019). These experiments suggest that accelerations do not change the relative ranking of the algorithms in the batch setting. TD is widely considered more sample efficient than all the methods discussed above. A less well-known family of algorithms, called Hybrid methods (Maei, 2011; Hackman, 2012; White & White, 2016), was designed to exploit the sample efficiency of TD when data is generated on-policy (they reduce to TD in the on-policy setting) and to use gradient corrections, like TDC, when the data is off-policy.
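To make the contrast between the two updates concrete, here is a minimal NumPy sketch of one TDC step (Eq. 3) and one GTD2 step (Eq. 4). The function and variable names are ours, not the paper's.

```python
import numpy as np

def tdc_update(w, h, x, r, xp, rho, alpha, eta, gamma):
    """One TDC update (Eq. 3): TD term plus a gradient-correction term."""
    delta = r + gamma * np.dot(w, xp) - np.dot(w, x)
    w_new = w + alpha * rho * delta * x - alpha * rho * gamma * np.dot(h, x) * xp
    h_new = h + eta * alpha * (rho * delta - np.dot(h, x)) * x
    return w_new, h_new

def gtd2_update(w, h, x, r, xp, rho, alpha, eta, gamma):
    """One GTD2 update (Eq. 4): same h update, different w update."""
    delta = r + gamma * np.dot(w, xp) - np.dot(w, x)
    w_new = w + alpha * rho * np.dot(h, x) * (x - gamma * xp)
    h_new = h + eta * alpha * (rho * delta - np.dot(h, x)) * x
    return w_new, h_new
```

With h initialized to zero, a single TDC step already moves w (its first term is an importance-weighted TD update), while a single GTD2 step leaves w unchanged; this is the dependence on h that the paper uses to explain GTD2's slower learning.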
These methods provide some of the ease-of-use benefits of TD, but unfortunately do not enjoy the same level of stability as the Gradient TD methods: for instance, HTD can diverge on Baird's counterexample (White & White, 2016).

3. TD with Regularized Corrections

In this section we develop a new algorithm, called TD with Regularized Corrections (TDRC). The idea is very simple: regularize the update to the secondary parameters h. The inspiration for the algorithm comes from behavior observed in experiments (see Section 4). Consistently, we find that TDC outperforms, or is comparable to, GTD2 in terms of optimizing the MSPBE, as we reaffirm in our experiments. These results match previous experiments comparing these two algorithms (White & White, 2016; Ghiassian et al., 2018). Previous results suggested that TDC could match TD (White & White, 2016); but, as we highlight in Section 4, this is only when the second stepsize is set so small that TDC is effectively behaving like TD. This behavior is unsatisfactory because, to have guaranteed convergence (e.g., on Baird's counterexample), the second stepsize needs to be large. Further, it is somewhat surprising that attempting to obtain an estimate of the gradient of the MSPBE, as done by TDC, can perform so much more poorly than TD. Notice that the h update is simply a linear regression update for estimating the (changing) target δt conditioned on xt, for both GTD2 and TDC. As w converges, δt approaches zero, and consequently h goes to 0 as well. But a linear regression estimate of E[δt | St = s] is not necessarily the best choice. In fact, using ridge regression (ℓ2 regularization) can provide a better bias-variance trade-off: it can significantly reduce variance without incurring too much bias. This is particularly true for h, where asymptotically h = 0 and so the bias disappears.
This highlights a potential reason that TD frequently outperforms TDC and GTD2 in experiments: the variance of h. If TD already performs well, it is better to simply use the zero-variance but biased estimate ht = 0. Adding ℓ2 regularization with parameter β, i.e. β‖h‖₂², provides a way to move between TD and TDC. For a very large β, h will be pushed close to zero and the update to w will be lower variance and more similar to the TD update. On the other hand, for β = 0 the update reduces to TDC, and the estimator h will be an unbiased estimator with higher variance. The resulting update equations for TDRC are

    wt+1 ← wt + α ρt δt xt − α ρt γ (ht⊤xt) xt+1,    (5)
    ht+1 ← ht + α [ρt δt − ht⊤xt] xt − α β ht.    (6)

The update to w is the same as TDC, but the update to h now has the additional term −αβht, which corresponds to the gradient of the ℓ2 regularizer. The updates have only a single shared stepsize, α, rather than a separate stepsize for the secondary weights h. We make this choice precisely for the reason motivated up front: ease of use. Further, we find empirically that this choice is effective, and that TDC's sensitivity to the second stepsize is mainly due to the fact that a small second stepsize enables TDC to behave like TD (see Section 4.2). Because TDRC has this behavior by design, a shared stepsize is more effective. While there are many approaches to reduce the variance of the estimator h, we use an ℓ2 regularizer because (1) the ℓ2 regularizer ensures the set of solutions for TDRC matches TD; (2) the resulting update is asymptotically unbiased, because it biases towards the known asymptotic solution of h; and (3) the strongly convex ℓ2 regularizer improves the convergence rate.
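The TDRC update (Eqs. 5-6) differs from TDC only in the extra decay term on h; a minimal NumPy sketch makes this explicit. Setting beta=0 recovers the TDC update above; the variable names are ours.

```python
import numpy as np

def tdrc_update(w, h, x, r, xp, rho, alpha, gamma, beta=1.0):
    """One TDRC update (Eqs. 5-6): TDC with an l2-regularized h update.

    beta=0 recovers TDC; a large beta drives h toward 0, so the w update
    approaches off-policy TD. A single shared stepsize alpha is used.
    """
    delta = r + gamma * np.dot(w, xp) - np.dot(w, x)
    w_new = w + alpha * rho * delta * x - alpha * rho * gamma * np.dot(h, x) * xp
    h_new = h + alpha * (rho * delta - np.dot(h, x)) * x - alpha * beta * h
    return w_new, h_new
```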
TDC convergence proofs impose conditions on the size of the stepsize for h to ensure that it converges more quickly than the "slow learner" w, and so increasing the convergence rate for h should make it easier to satisfy this condition. Additionally, the ℓ2 regularizer biases the estimator h towards h = 0, the known optimum of the learning system as w converges. This means that the bias imposed on h disappears asymptotically, changing only the transient trajectory (we prove this in Theorem 3.1). As a final remark: we motivated that TDRC should not require a second stepsize, yet we have introduced a new parameter (β) to obtain this property. The idea, however, is that TDRC should be relatively insensitive to β. The choice of β sweeps between two reasonable algorithms, TD and TDC. If we are already comfortable using TD, then it should be acceptable to use TDRC with a larger β. A smaller β will still result in a sound algorithm, though its performance may suffer due to the variance of the updates in h. In our experiments, we in fact find that TDRC performs well for a wide range of β, and that our default choice of β = 1.0 works reasonably across all the problems that we tested.

3.1. Theoretically Characterizing the TDRC Update

The MSPBE (Sutton et al., 2009) is defined as

    MSPBE(wt) := E[δt xt]⊤ E[xt xt⊤]⁻¹ E[δt xt]    (7)
              = (−Awt + b)⊤ C⁻¹ (−Awt + b),

where E[δt xt] = b − Awt, for C := E[x x⊤], A := E[x (x − γx′)⊤], and b := E[R x]. The TD fixed point corresponds to E[δt xt] = 0 and so to the solution of the system Awt = b. The expectation is taken with respect to the target policy π, unless stated otherwise. The expected update for TD corresponds to E[δt xt] = b − Awt.
The expected update for w in TDC corresponds to the gradient of the MSPBE: −½∇MSPBE(wt) = A⊤C⁻¹(b − Awt). Both TDC and GTD2 estimate h := C⁻¹(b − Awt) = E[xt xt⊤]⁻¹ E[δt xt], to get the least-squares estimate h⊤xt ≈ E[δt | xt] for targets δt. TDC rearranges terms to sample this gradient differently than GTD2; for a given h, both have the same expected update for w: A⊤h. We can now consider the expected update for TDRC. Solving the ℓ2-regularized problem with target δt, we get (E[xt xt⊤] + βI) h = E[δt xt], which implies hβ = Cβ⁻¹(b − Awt) for Cβ := C + βI. To get a similar form to TDC, we consider the modified expected update Aβ⊤hβ for Aβ := A + βI. We can obtain the TDRC update by rearranging this expected update, similarly to how TDC is derived:

    Aβ⊤hβ = (E[(x − γx′) x⊤] + βI) hβ
          = (E[x x⊤] + βI − γ E[x′ x⊤]) Cβ⁻¹ E[δt xt]
          = (E[x x⊤] + βI) Cβ⁻¹ E[δt xt] − γ E[x′ x⊤] Cβ⁻¹ E[δt xt]
          = E[δt xt] − γ E[x′ x⊤] hβ.

This update equation for the primary weights looks precisely like the update in TDC, except that our h is estimated differently. Despite this difference, we show in Theorem I.1 (in Appendix I) that the set of TDRC solutions w to Aβ⊤hβ = 0 includes the TD fixed point, and this set is exactly equivalent if Aβ is full rank. In the following theorem (proof in Appendix H) we directly compare convergence of TDRC to TDC. Though the TDRC updates are no longer gradients, we maintain the convergence properties of TDC.
This theorem extends the TDC convergence result to allow for β > 0, where TDC corresponds to TDRC with β = 0.

Theorem 3.1 (Convergence of TDRC). Consider the TDRC update, with a TDC-like stepsize multiplier η ≥ 0:

    ht+1 = ht + ηαt [ρt δt − ht⊤xt] xt − ηαt β ht,    (8)
    wt+1 = wt + αt ρt δt xt − αt ρt γ (ht⊤xt) xt+1,    (9)

with stepsizes αt ∈ (0, 1] satisfying Σt αt = ∞ and Σt αt² < ∞. Assume that (xt, Rt, xt+1, ρt) is an i.i.d. sequence with uniformly bounded second moments for states and rewards, that A + βI and C are non-singular, and that the standard coverage assumption (Sutton & Barto, 2018) holds, i.e. b(A|S) > 0 for all S, A where π(A|S) > 0. Then wt converges with probability one to the TD fixed point if either of the following is satisfied:

(i) A is positive definite, or
(ii) β < −λmax(H⁻¹AA⊤) and η > −λmin(C⁻¹H), with H := (A + A⊤)/2.

Note that when A is not positive definite, −λmax(H⁻¹AA⊤) and −λmin(C⁻¹H) are guaranteed to be positive real numbers. We can extend this result to allow for singular C, which was not possible for TDC. The set of conditions on η and β, however, is more complex. We include this result in Appendix H.4, with conditions given in Eq. 22. Theorem 3.1 shows that TDRC maintains convergence when TD is convergent: the case when A is positive definite. Otherwise, TDRC converges under more general settings than TDC, because it has the same conditions on η as given by Maei (2011) but allows for β > 0. The upper bound on β makes sense, since as β → ∞, TDRC approaches TD. Examining the proof, it is likely that the conditions on η could actually be relaxed (see Eq. C3).
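The quantities A, b, and C in Eq. 7, and hence the MSPBE and the TD fixed point, can be computed exactly for a small MDP. The two-state chain below is our own made-up example, not from the paper; with tabular features the fixed point recovers the true values vπ = (I − γP)⁻¹ r̄ and the MSPBE there is zero.

```python
import numpy as np

# A made-up two-state on-policy chain with tabular features, illustrating
# Eq. 7. Expectations are taken under the stationary distribution d
# (uniform here), so E[x x^T] = X^T D X, and similarly for A and b.
gamma = 0.9
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])              # state-transition probabilities under pi
r_bar = np.array([0.0, 1.0])            # expected reward on leaving each state
d = np.array([0.5, 0.5])                # stationary state distribution
X = np.eye(2)                           # tabular features: x(s) = e_s
D = np.diag(d)

A = X.T @ D @ (np.eye(2) - gamma * P) @ X   # A = E[x (x - gamma x')^T]
b = X.T @ D @ r_bar                         # b = E[R x]
C = X.T @ D @ X                             # C = E[x x^T]

w_star = np.linalg.solve(A, b)          # TD fixed point: A w = b
err = b - A @ w_star
mspbe = err @ np.linalg.solve(C, err)   # Eq. 7, zero at the fixed point
```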
One advantage of TDRC is that the matrix Cβ = C + βI is non-singular by construction. This raises the question: could we have simply changed the MSPBE objective to use Cβ and derived the corresponding TDC-like algorithm? This is easier than TDRC, as the proof of convergence for the resulting algorithm trivially extends the proof from Maei (2011), since the change to the objective function is minimal. We derive the corresponding TDC-like update and demonstrate that it performs notably worse than TDRC in Appendix A.

4. Experiments in the Prediction Setting

We first establish the performance of TDRC across several small linear prediction tasks where we carefully sweep hyper-parameters, analyze sensitivity, and average over many runs. The goal is to understand whether TDRC has similar performance to TD, with similar parameter sensitivity, but avoids divergence. Before running TDRC, we set β = 1.0 across all the experiments to refrain from tuning this additional parameter.

4.1. Prediction Problems

In the prediction setting, we investigate three different problems with variations in feature representations and in target and behavior policies. We choose problems that have been used in prior work empirically investigating TD methods. The first problem, Boyan's chain (Boyan, 2002), is a 13-state Markov chain where each state is represented by a compact feature representation. This encoding causes inappropriate generalization during learning, but vπ can be represented perfectly with the given features. Code for all experiments is available at: https://github.com/rlai-lab/Regularized-GradientTD

The second problem is Baird's (1995) well-known star counterexample. In this MDP, the target and behavior policy are very different, resulting in large importance sampling corrections.
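The instability that makes Baird's counterexample a useful testbed can be reproduced in a few lines. The sketch below runs the synchronous, expected-update (semi-gradient dynamic-programming) form of the counterexample in the style of Sutton & Barto (2018); the constants are the standard ones, and the only point is that the weight norm grows without bound even though vπ = 0 is representable.

```python
import numpy as np

# Baird's (1995) "star" counterexample, expected-update form. Under the
# target policy every state transitions to the 7th state with reward 0.
gamma, alpha = 0.99, 0.01
X = np.zeros((7, 8))                 # 7 states, 8 features
for s in range(6):
    X[s, s], X[s, 7] = 2.0, 1.0      # upper states: v(s) = 2 w_s + w_8
X[6, 6], X[6, 7] = 1.0, 2.0          # centre state: v(7) = w_7 + 2 w_8

w = np.array([1., 1., 1., 1., 1., 1., 10., 1.])
norm0 = np.linalg.norm(w)
for _ in range(2000):                # synchronous sweeps over all 7 states
    v = X @ w
    delta = gamma * v[6] - v         # every transition leads to state 7
    w = w + (alpha / 7.0) * (X.T @ delta)
# The weight norm grows by orders of magnitude: function approximation,
# off-policy (DP-style) updates, and bootstrapping combine to diverge.
```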
Baird's counterexample has been used extensively to demonstrate the soundness of Gradient TD algorithms, and so provides a useful testbed to demonstrate that TDRC does not sacrifice soundness for ease-of-use. Finally, we include a five-state random walk MDP. We use three different feature representations: tabular (unit basis vectors), inverted, and dependent features. This last problem was chosen so that we could exactly mirror the experiments used in prior work benchmarking TDC, GTD2, and TD (Sutton et al., 2009). Like Hackman (2012), we used an off-policy variant of the problem. The behavior policy chooses the left and right action with equal probability, and the target policy chooses the right action 60% of the time. Figure 18 in the appendix summarizes all three problems. We report the total RMSPBE over 3000 steps, measured on each time step, averaged over 200 independent runs. The learning algorithms under study have tunable meta-parameters that can dramatically impact the efficiency of learning. We extensively sweep the values of these meta-parameters (as described in Appendix G), and report both summary performance and the sensitivity of each method to its meta-parameters. For all results reported in the prediction setting, we use the Adagrad (Duchi, Hazan & Singer, 2011) algorithm to adapt a vector of stepsizes for each algorithm. Additional results for constant scalar stepsizes and ADAM vector stepsizes can be found in Appendix B and Appendix E; the conclusions are similar.

4.2. Overall Performance

We first report performance for the best stepsize, and provide the parameter sensitivity plots, in Figure 1. In the bar plot, we compactly summarize relative performance against TDRC. TDRC performs well across problems, while every other method has at least one setting where it does noticeably worse than TDRC. GTD2 generally learns more slowly than other methods.
This result is unsurprising, as GTD2 relies so heavily on h for learning w: wt+1 ← wt + α(xt − γxt+1)(ht⊤xt). In the beginning, when h is inaccurate, the updates for w are poor. TDC generally learns much faster. In Boyan's chain, however, TDC seems to suffer from variance in h. The features in this environment cause bigger changes in h than in the other environments. TDRC, on the other hand, which regularizes h, significantly improves learning in Boyan's chain. TD and HTD perform very well across all problems except for Baird's. Finally, Vtrace, which uses a TD update with importance sampling ratios clipped at 1, performs slightly worse than TD due to the introduced bias, and does not mitigate the divergence issues due to off-policy learning in Baird's.

[Figure 1 plots omitted: per-problem bar charts and stepsize-sensitivity curves (over α = 2⁻ˣ) for the Tabular, Inverted, Dependent, Boyan, and Baird problems, comparing TD, TDC, GTD2, HTD, Vtrace, and TDRC.]

Figure 1. Top: The normalized average area under the RMSPBE learning curve for each method on each problem. Each bar is normalized by TDRC's performance so that each problem can be shown in the same range. All results are averaged over 200 independent runs with standard error bars shown at the top of each rectangle, though most are vanishingly small. TD and Vtrace both diverge on Baird's counterexample, which is represented by the bars going off the top of the plot. HTD's bar is also off the plot due to its oscillating behavior.
Bottom: Stepsize sensitivity measured using average area under the RMSPBE learning curve for each method on each problem. HTD and Vtrace are not shown in Boyan's chain because they reduce to TD for on-policy problems. Values for the bar graphs are given in Table 1.

[Figure 2 plots omitted: sensitivity curves over η = 2ˣ for the Tabular, Inverted, Dependent, Boyan, and Baird problems.]

Figure 2. Sensitivity to the second stepsize, for changing parameter η. All methods use Adagrad. All methods are free to choose any value of α for each η. Methods that do not have a second stepsize are shown as a flat line. Values swept are η ∈ {2⁻⁶, 2⁻⁵, ..., 2⁵, 2⁶}.

The results reported here for TDC do not match previous results, which indicate performance generally as good as TD (White & White, 2016). The reason for this discrepancy is that previous results carefully tuned the second stepsize ηα for TDC. The need to tune η is part of the difficulty in using TDC. To better understand the role it plays here, we include an additional result where we sweep η as well as α for TDC; for completeness, we also include this sweep for GTD2 and HTD. We sweep η ∈ {2⁻⁶, 2⁻⁵, ..., 2⁵, 2⁶}. This allows for ηα that is very near zero as well as ηα much larger than α. The theory for TDC suggests η should be larger than 1. The results in Figure 2, however, demonstrate that TDC almost always prefers the smallest η; but for very small η, TDC is effectively a TD update. By picking a small η, TDC essentially keeps h near zero (its initialization) and so removes the gradient correction term.
TDC was therefore able to match TD by simply tuning a parameter so that it effectively was TD. Unfortunately, this is not a general strategy; for instance, in Baird's, TDC picks η ≥ 1, and small η performs poorly.

4.3. Sensitivity to β

So far we have only used TDRC with a regularization parameter β = 1. This choice was made both to avoid over-tuning our method and to show that an intuitive default value could be effective across settings. Intuitively, TDRC should not be sensitive to β, as both TDC (β = 0) and TD (large β) generally perform reasonably. Picking a β > 0 should enable TDRC to learn faster like TD, by providing a lower-variance correction, as long as it is not too large, so that we avoid the divergence issues of TD. We investigate this intuition by looking at performance across a range of β ∈ 0.1 · {2⁰, 2¹, ..., 2⁵, 2⁶}. For β = 0, we have TDC. Ideally, performance should quickly improve for any non-negligible β, with a large flat region of good performance in the parameter sensitivity plots for a wide range of β. This is generally what we observe in Figure 3. For even very small β, TDRC noticeably improves performance over TDC, getting halfway between TDC and TD (Random Walk with Tabular or Dependent features) or in some cases immediately obtaining the good performance of TD (Random Walk with Inverted features, Boyan's chain, and Baird's). Further, in these three cases, it even performs better than or comparably to both TDC and TD for all tested β. Notably, these are the settings with more complex feature representations, suggesting that the regularization parameter helps TDRC learn an h that is less affected by harmful aliasing in the feature representation.
Finally, the results also show that β = 1.0 was in fact not optimal, and we could have obtained even better results in the previous section, typically with a larger β. These improvements, though, were relatively marginal over the choice of β = 1.0.

[Figure 3 plots omitted: sensitivity curves over β = 2ˣ for the Tabular, Inverted, Dependent, Boyan, and Baird problems, with TD and TDC shown as baselines.]

Figure 3. Sensitivity to the regularization parameter, β. TD and TDC are shown as dotted baselines, demonstrating extreme values of β: β = 0 is represented by TDC and β → ∞ by TD. This experiment demonstrates TDRC's notable insensitivity to β. Its similar range of good values across problems, including Baird's counterexample, suggests that β can be chosen easily and is not heavily problem dependent. Values swept are β ∈ 0.1 · {2⁰, 2¹, ..., 2⁵, 2⁶}.

Naturally, the scale of β should depend on the magnitude of the rewards, because in TDRC the gradient correction term is attempting to estimate the expected TD error. One answer is to simply employ adaptive target normalization, such as Pop-Art (van Hasselt et al., 2016), and keep β equal to one. We found that TDRC with β = 1 performed at least as well as TD in on-policy chain domains across a large range of reward scales (see Appendix C).

5. Experiments in the Control Setting

Like TD, TDRC was developed for prediction under linear function approximation. Again like TD, there are natural, though in some cases heuristic, extensions to the control setting and to non-linear function approximation.
In this section, we investigate whether TDRC can provide similar improvements in the control setting. We first investigate TDRC in control with linear function approximation, where the extension is more straightforward. We then provide a heuristic strategy to use TDRC, and TDC, with non-linear function approximation. We demonstrate, for the first time, that Gradient TD methods can outperform Q-learning when using neural networks, in two classic control domains and two visual games.

5.1. Extending TDRC to Control

Before presenting the control experiments, we describe how to extend TDRC to control and to non-linear function approximation. The extension to non-linear function approximation is also applicable in the prediction setting; we therefore begin there. We then discuss the extension to Q-learning, which involves estimating action-values for the greedy policy. Consider the setting where we estimate v̂(s) using a neural network. The secondary weights in TDRC are used to obtain an estimate of E[δt | St = s]. Under linear function approximation, this expected TD error is estimated using linear regression with ℓ2 regularization: h⊤xt ≈ E[δt | St = s]. With neural networks, this expected TD error can be estimated using an additional head on the network. The target for this second head is still δt, with a squared error and ℓ2 regularization. One might even expect this estimate of E[δt | St = s] to improve when using a neural network rather than a hand-designed basis. An important nuance is that gradients are not passed backward from the error in this second head. This choice is made for simplicity, and to avoid any issues when balancing these two losses. The correction is secondary, and we want to avoid degrading performance in the value estimates simply to improve estimates of E[δt | St = s].
It also makes the connection to TD clearer as β becomes larger, as the update to the network is only impacted by w. We have not extensively tested this choice; it remains to be seen whether using gradients from both heads might actually be a better choice. The next step is to extend the algorithm to action-values. For an input state s, the network produces an estimate q̂(s, a) and a prediction δ̂(s, a) of E[δt | St = s, At = a] for each action. The weights ht+1,At for the head corresponding to action At are updated using the features xt produced by the last layer, with δ̂(St, At) = ht,At⊤xt:

    ht+1,At ← ht,At + α [δt − ht,At⊤xt] xt − α β ht,At.    (10)

For the other actions, the secondary weights are not updated, since we did not get a target δt for them. The remaining weights wt, which include all the weights in the network excluding h, are updated using

    δt = Rt+1 + γ q̂(St+1, a′) − q̂(St, At),    (11)
    wt+1 ← wt + α δt ∇w q̂(St, At) − α γ δ̂(St, At) ∇w q̂(St+1, a′),

where a′ is the action that the policy we are evaluating would take in state St+1. For control, we often select the greedy policy, and so a′ = argmaxₐ q̂(St+1, a) and δt = Rt+1 + γ maxₐ q̂(St+1, a) − q̂(St, At), as in Q-learning. This action a′ may differ from the (exploratory) action At+1 that is actually executed, and so this estimation is off-policy. There are no importance sampling ratios because we are estimating action-values. We call this final algorithm QRC: Q-learning with Regularized Corrections. The secondary weights in QRC are initialized to 0, to maintain the similarity to TD. We can obtain, as a special case, a control algorithm based on TDC, which we call QC: setting β = 0 in Eq. 10 yields QC.
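To make the QRC update concrete, here is a linear sketch of Eqs. 10-11 with per-action weight vectors standing in for the network's output heads. The function and variable names are ours; in the paper's non-linear setting W would be a shared network updated by backpropagation, while the H heads receive no backpropagated gradient.

```python
import numpy as np

def qrc_update(W, H, x, a, r, xp, alpha, gamma, beta=1.0):
    """One QRC step (Eqs. 10-11) for linear action-values q(s, a) = W[a] @ x.

    W : (num_actions, n) primary weights, one row per action
    H : (num_actions, n) secondary weights estimating E[delta | s, a]
    """
    q_next = W @ xp
    a_star = int(np.argmax(q_next))               # greedy action a' at S_{t+1}
    delta = r + gamma * q_next[a_star] - W[a] @ x # Eq. 11 (Q-learning TD error)
    delta_hat = H[a] @ x                          # estimate of E[delta | s, a]
    # Secondary head, updated only for the taken action (Eq. 10):
    H[a] = H[a] + alpha * (delta - delta_hat) * x - alpha * beta * H[a]
    # Primary weights: TD term, plus the gradient correction through
    # q(S_{t+1}, a'), which only touches the greedy action's weights here:
    W[a] = W[a] + alpha * delta * x
    W[a_star] = W[a_star] - alpha * gamma * delta_hat * xp
    return W, H
```

Setting beta=0 gives the corresponding QC sketch, mirroring how QC is obtained from QRC in the text.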
We conclude this section by highlighting that there is an alternative route to use TDRC, as is, for control: by using TDRC as a critic within Actor-Critic. We provide the update equations in Appendix G.1.

5.2. Control Problems

We first test the algorithms in a well-understood setting, in which we know Q-learning is effective: Mountain Car with a tile-coding representation. We then use neural network function approximation in two classic control environments, Mountain Car and Cart Pole, and two visual environments from the MinAtar suite (Young & Tian, 2019). For all environments, we fix β = 1.0 for QRC and η = 1.0 for QC, and do not use target networks (for experiments with target networks, see Appendix F). In the two classic control environments, we use 200 runs, an ε-greedy policy with ε = 0.1, and a discount of γ = 0.99. In Mountain Car (Moore, 1990; Sutton, 1996), the goal is to reach the top of a hill with an underpowered car. The state consists of the agent's position and velocity, with a reward of -1 per step until termination, and actions to accelerate forward, accelerate backward, or do nothing. In Cart Pole (Barto, Sutton & Anderson, 1983), the goal is to keep a pole balanced as long as possible by moving a cart left or right. The state consists of the position and velocity of the cart, and the angle and angular velocity of the pole. The reward is +1 per step. An episode ends when the agent fails to balance the pole or balances the pole for more than 500 consecutive steps. For non-linear control experimental details on these environments, see Appendix G.3. For the two MinAtar environments, Breakout and Space Invaders, we use 30 runs, γ = 0.99, and a decayed ε-greedy policy with ε = 1 decaying linearly to ε = 0.1 over the first 100,000 steps. In Breakout, the agent moves a paddle left and right, to hit a ball into bricks.
A reward of +1 is given for every brick hit; new rows appear when all the rows are cleared. The episode ends when the agent misses the ball and it drops. In Space Invaders, the agent shoots alien ships coming towards it, and dodges their fire. A reward of +1 is given for every alien that is shot. The episode ends when the spaceship is hit by alien fire or reached by an alien ship. These environments are simplified versions from the Atari suite, designed to avoid the need for large networks and make it more feasible to complete more exhaustive comparisons, including using more runs. All methods use a network with one convolutional layer, followed by a fully connected layer. All experimental settings are identical to the original MinAtar paper (see Appendix G.4 for details).

5.3. Linear Control

We compare TD, TDC and TDRC for control, both within an Actor-Critic algorithm and with their extensions to Q-learning. In Figure 4, we can see two clear outcomes from both control experiments. In both cases, the control algorithm based on TDC fails to converge to a reasonable policy. The TDRC variants, on the other hand, match the performance of TD.

Figure 4. Number of steps to reach the goal, averaged over runs, versus number of environment steps, in Mountain Car with tile-coded features. Left: comparison of actor-critic control algorithms with various critics, using the ADAM optimizer (for actor-critic experimental details see Appendix G.1). Right: comparison of state-action value control algorithms with constant step-sizes. Step-sizes were swept over α ∈ {2^-8, 2^-7, ..., 2^-2, 2^-1} and then scaled by the number of active features. We used 16 tilings and 4 × 4 tiles. Results are averaged over 200 independent runs, with shaded error corresponding to standard error.
This result might be surprising, since the only difference between TDRC and TDC is regularizing h. This small addition, though, seems to play a big role in avoiding this surprisingly bad performance of TDC, and potentially explains why gradient methods have been dismissed as hard-to-use. When we looked more closely at TDC's behavior, we found that the TDC agent improved its behavior policy quickly. But the magnitude of the gradient corrections also grew rapidly. This high-magnitude gradient correction resulted in a higher-magnitude gradient for w, and pushed down the learning rate for TDC. The constraint on this correction term provided by TDRC seems to prevent this explosive growth, allowing TDRC to attain comparable performance to the TD-based control agent.

5.4. Non-linear Control

When moving to non-linear function approximation, with neural networks, we find a more nuanced outcome: QC still suffers compared to Q-learning and QRC in the classic control environments, though less than before, yet provides substantial improvements in the two MinAtar environments. In Figure 5, we find that QC learns more slowly than QRC and Q-learning. Again, QRC brings performance much closer to Q-learning, when QC is performing notably more poorly.

Figure 5. Performance of Q-learning, QC and QRC on two classic control environments, Mountain Car and Cart Pole. On top the learning curves are shown, and at the bottom the parameter sensitivity for various step-sizes. Lower is better for Mountain Car (fewer steps to goal) and higher is better for Cart Pole (more steps balancing the pole). Results are averaged over 200 runs, with shaded error corresponding to standard error.

In Mountain Car, we tested a more highly off-policy setting: 10 replay steps. By using more replay per step, more data from older policies is used, resulting in a more off-policy data distribution. Under such an off-policy setting, we expect Q-learning to suffer, and in fact, we find that QRC actually performs better than Q-learning. We provide additional experiments on Mountain Car in Appendix D. On the two MinAtar environments, in Figure 6, we obtain a surprising result: QC provides substantial performance improvements over Q-learning. QRC with β = 1 is not as performant as QC in this setting and instead obtains performance in-between QC and Q-learning. However, QRC with smaller values of the regularization parameter (shown as lighter blue lines) results in the best performance. This outcome highlights that Gradient TD methods are not only theoretically appealing, but could actually be a better alternative to Q-learning in standard (non-adversarially chosen) problems. It further shows that, though QRC with β = 1.0 generally provides a reasonable strategy, substantial improvements could be obtained with an adaptive method for selecting β. 6.
Unfortunately, the performance of deep reinforcement learning\nsystems is sensitive to hyper-parameter settings and architecture choices. Even\nwell tuned systems exhibit significant instability both within a trial and\nacross experiment replications. In practice, significant expertise and trial\nand error are usually required to achieve good performance. One potential\nsource of the problem is known as catastrophic interference: when later\ntraining decreases performance by overriding previous learning. Interestingly,\nthe powerful generalization that makes Neural Networks (NN) so effective in\nbatch supervised learning might explain the challenges when applying them in\nreinforcement learning tasks. In this paper, we explore how online NN training\nand interference interact in reinforcement learning. We find that simply\nre-mapping the input observations to a high-dimensional space improves learning\nspeed and parameter sensitivity. We also show this preprocessing reduces\ninterference in prediction tasks. More practically, we provide a simple\napproach to NN training that is easy to implement, and requires little\nadditional computation. We demonstrate that our approach improves performance\nin both prediction and control with an extensive batch of experiments in\nclassic control domains.", + "authors": "Sina Ghiassian, Banafsheh Rafiee, Yat Long Lo, Adam White", + "published": "2020-03-16", + "updated": "2020-03-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "main_content": "INTRODUCTION Reinforcement learning (RL) systems require good representations to work well. For decades practical success in RL was restricted to small domains\u2014with the occasional exception such as Tesauro\u2019s Proc. of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020), B. An, N. Yorke-Smith, A. El Fallah Seghrouchni, G. Sukthankar (eds.), May 9\u201313, 2020, Auckland, New Zealand. 
© 2020 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved. TD-Gammon (Tesauro, 1995). High-dimensional and continuous inputs require function approximation, where the features must either be designed by a domain expert, constructed from exhaustive partitioning schemes (e.g., Tile Coding), or learned from data. Expert features can work well (Sturtevant and White, 2007; Silver, 2009), but depending on prior knowledge in this way limits scalability. Exhaustive partition strategies can be extended beyond small toy tasks (Stone and Sutton, 2001; Modayil et al., 2014; Rafiee et al., 2019), but ultimately do not scale either. Neural Networks (NN), on the other hand, are both scalable and not dependent on domain specific prior knowledge. Unfortunately, training NNs is typically slow, finicky, and not well suited for RL tasks where the training data is temporally correlated, non-stationary, and presented as an infinite stream of experience rather than a batch. The practice of combining neural network function approximation and reinforcement learning has significantly improved. Deep reinforcement learning systems have been successfully deployed on visual tasks like Atari, 3D navigation, and video games (Mnih et al., 2015; Parisotto and Salakhutdinov, 2017; Vinyals et al., 2019). Deep RL systems can control high degree of freedom robots (Riedmiller et al., 2018), and learn in robot simulation domains directly from joint angles and velocities (Duan et al., 2016). All these systems rely on a combination of improved optimization algorithms (Kingma and Ba, 2014), Experience Replay (Lin, 1992), and other tricks such as Target Networks (Mnih et al., 2015). There are many challenges in designing and training NN-based RL systems.
Many systems exhibit extreme sensitivity to key hyperparameters (Henderson et al., 2018)\u2014choices of replay buffer size (Zhang and Sutton, 2017), optimizer hyper-parameters (Jacobsen et al., 2019), and other algorithm-dependent hyper-parameters have a large impact on performance. Many systems exhibit fast initial learning, followed by catastrophic collapse in performance, as the network unlearns its previously good policy (Goodrich, 2015). In some domains, simpler learning systems can match and surpass state-of-the-art NN-based alternatives (Rajeswaran et al., 2017). Perhaps many of these frailties can be largely explained by aggressive generalization and interference that can occur in neural network training. The concept of Catastrophic Interference is simple to explain: training on a sequence of tasks causes the network to override the weights trained for earlier tasks. This problem is particularly acute in RL, because the agent\u2019s decision making policy changes over time causing interference to occur during single task training (Kirkpatrick et al., 2017; Liu et al., 2019). There are three primary strategies for dealing with interference: (1) adapting the loss to account for interference (Javed and White, 2019), (2) utilising networks that are robust to interference (Liu et al., 2019), or (3) arXiv:2003.07417v1 [cs.LG] 16 Mar 2020 \fadapting the training\u2014classically by shuffling the training data in supervised learning (French, 1999), or via experience replay in RL. In this paper we propose a new approach to reducing interference and improving online learning performance of neural-network based RL. Our idea is based on a simple observation. Deep RL systems are more difficult to train in domains where there is significant and inappropriate generalization of the inputs. For example, in Mountain Car the observations are the position and velocity of the car. A NN will exploit the inherent generalization between states that are close in Euclidean space. 
However, the value function exhibits significant discontinuities and it is difficult for the network to overcome the harmful generalization to learn a good policy. This inappropriate generalization is prevalent in many classic RL control domains, and could compound the effects of interference, resulting in slow learning. Inappropriate generalization is less prevalent in visual tasks because standard architectures utilize convolutional layers which are designed to manage the input generalization. Our proposed solution is to simply map the observations to a higher-dimensional space. This approach significantly reduces the harmful generalization in the inputs, has low computational overhead, and is easily scaled to higher dimensions. In fact, this is an old idea: randomly projecting the inputs was a common preprocessing step in training perceptrons (Minsky and Papert, 2017) and can be competitive with networks learned via Backprop (Sutton and Whitehead, 1993; Mahmood and Sutton, 2013). We explore input preprocessing based on simple independent discretization, and Tile Coding (Sutton and Barto, 2018). We show that our input preprocessing improves the learning speed of several standard neural-network systems, and reduces interference. In fact, DQN (Mnih et al., 2015) achieves low interference and efficient learning, possibly because it uses experience replay and target networks which reduce interference, as our experiments suggest. Across the board, our results show our input preprocessing strategy never reduced performance, and in many cases dramatically improved learning speed and hyper-parameter sensitivity. Our results show that neural-network learners can be made more user friendly and exhibit reliable and efficient training. 2 BACKGROUND This paper studies both prediction\u2014value function approximation\u2014 and control\u2014maximizing reward. We employ the Markov Decision Process (MDP) framework (Puterman, 2014). 
In this framework, an agent and environment interact at discrete time steps t = 0, 1, 2, .... At each time step t, the agent is in a state S_t ∈ S and takes an action A_t ∈ A, where S and A are the state and action spaces respectively. The agent takes actions according to a policy π : A × S → [0, 1]. In response to the action, the environment emits a reward R_{t+1} ∈ R and takes the agent to the next state S_{t+1}. The environment makes this transition according to the transition function P(S_{t+1} | S_t, A_t). In prediction (policy evaluation), the policy π is fixed. The goal is to estimate the value function, defined as the expected return (G_t ∈ R) if the agent starts from a state s and follows policy π until termination:

v_π(s) ≐ E_π[G_t | S_t = s] ≐ E_π[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | S_t = s ],  for all s ∈ S,

where E_π[·|·] denotes the conditional expectation of a random variable under π, and γ ∈ [0, 1] is a scalar discount factor parameter. In the control setting, the policy is not fixed. The agent seeks to find a policy that maximizes the expected return. In control, state-action value functions replace the state value functions of the policy evaluation case. The state-action value is defined as:

q_π(s, a) ≐ E_π[G_t | S_t = s, A_t = a] ≐ E_π[ Σ_{k=0}^{∞} γ^k R_{t+k+1} | S_t = s, A_t = a ].

We consider the case in which the state space is large and we cannot estimate one value for each state or state-action pair. Instead, we seek to make use of a parametric function to approximate the value function. We denote the approximate value functions for states and state-action pairs by v̂(s, w) and q̂(s, a, w) respectively, where v̂(s, w) ≈ v_π(s) and q̂(s, a, w) ≈ q_π(s, a), and where the vector w ∈ R^d includes the learned parameters of the approximation.
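As a minimal numeric illustration (not from the paper) of the return G_t = Σ_k γ^k R_{t+k+1} that these value functions are expectations of, the per-state returns of a finished episode can be computed with a single backward pass over the rewards:

```python
def discounted_returns(rewards, gamma):
    """Return [G_0, G_1, ...] for one episode, computed backward:
    G_t = R_{t+1} + gamma * G_{t+1}, with G_T = 0 at termination."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

# Mountain-Car-style rewards: -1 per step, 0 on the terminating step.
returns = discounted_returns([-1.0, -1.0, -1.0, 0.0], gamma=1.0)
```

With γ = 1 (the undiscounted setting used in the tasks below), G_t is simply the negated number of remaining steps, so `returns` is [-3.0, -2.0, -1.0, 0.0].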
To estimate the state values, we use temporal-difference learning, more specifically TD(0) (Sutton, 1988), to update the neural network parameters at each time step, where the neural network is the function approximator for TD(0). Let w be the weights and α be a small step-size. The network parameters are updated according to:

w_{t+1} ← w_t + α δ_t ∇_w v̂(S_t, w_t),

where ∇_w v̂(S_t, w_t) denotes the gradient of the function v̂(S_t, w_t) with respect to the parameters w at w = w_t, and δ_t is the temporal-difference error:

δ_t ≐ R_{t+1} + γ v̂(S_{t+1}, w_t) − v̂(S_t, w_t).

To estimate the state-action pair values, we use TD(0)'s control variant, called Sarsa(0) (Rummery and Niranjan, 1994). Sarsa(0)'s update rules are the same as TD(0)'s, except that v̂(S_t, w_t) is replaced by q̂(S_t, A_t, w_t). Sarsa(0) typically uses an ε-greedy policy with respect to the state-action values to select actions. Neural networks have been widely used to approximate value functions in reinforcement learning tasks, but there are several additional ingredients that improve performance. Often, neural networks are combined with Experience Replay (ER) (Lin, 1992), Target Networks (TN), and step-size adaptation methods (called optimizers). ER is a way to reuse previous data stored in a replay buffer to increase sample efficiency. The idea is simple: the agent stores recent experience in a buffer of experience tuples (S_t, A_t, R_{t+1}, S_{t+1}) and replays it repeatedly, as if the agent were re-experiencing the data. The experience is typically sampled randomly from the buffer to break the temporal dependencies in the training data. The main idea in using target networks is to stabilize the update used in TD methods. TD updates towards a bootstrap target: the update target contains the network's current estimate of the value function.
Instead of changing the network in the target on every update, the Target Network is periodically set equal to the learned network. Optimizers such as Adam (Kingma and Ba, 2014) are used in place of a global learning rate in stochastic gradient descent (e.g., α above). Instead, we use a vector of step-sizes, one for each weight, that change with time. These three ingredients were used in DQN (Mnih et al., 2014), which is perhaps one of the most widely used and robust deep reinforcement learning systems available today.

Figure 1: A continuous 2D space with 1 tiling on top of it is on the left. Three overlapping tilings on the 2D continuous space are shown in the middle in blue, green and red. The generalization region for a sample point is on the right.

3 BREAKING GENERALIZATION IN NEURAL NETWORKS

Neural networks can forget what they learned in the past due to a phenomenon known as catastrophic interference. Interference happens when a neural network is trained on new data and overwrites what it has learned in the past. This phenomenon can be related to the neural network's global generalization. To alleviate the interference issue, we propose mapping the input to a higher dimensional space. Specifically, we propose discretizing or tile coding the input as a preprocessing step before feeding it to the neural network. This preprocessing step breaks the input generalization, and our hypothesis is that it helps reduce the overly global generalization of neural networks, which in turn reduces interference and improves performance. We propose two simple approaches for breaking the generalization in the input space, as discussed below, and test our hypothesis in later sections. The first approach is to simply use binning to discretize each dimension of the input separately.
In this case, each dimension of the input is covered with a one-dimensional grid, and a one-hot vector is created that has a one in the bin where the input lies and zeros everywhere else. The same is done for all of the input dimensions, and then the resulting vectors are concatenated to create a long one-dimensional vector, which is the final representation fed to the neural network. We will simply refer to this method as discretization-neural network, or the shorthand D-NN. The second approach that we use to break the generalization is Tile Coding (TC) (Albus 1975, 1981). We refer to this method with the shorthand TC-NN. Tile coding works by covering the state space with a number of overlapping grids called tilings. Each grid divides the state space into small squares, called tiles. In Figure 1, a continuous 2D space is covered by 3 tilings where each tiling has 4 tiles across each dimension (overall 16 tiles). Tile coding creates a representation for each point in space by concatenating the representations it has for each tiling. The representation for each tiling consists of a one-hot vector that has a one for the tile that the point falls within and zeros otherwise. For example, the representation for the point in Figure 1 will have three ones in a vector of size 48 (3 tilings × 4 × 4 tiles). See Sutton and Barto (2018) for a thorough explanation of tile coding. Breaking the generalization in the input space increases the ability of the neural network to respond to different parts of the state space locally. With ReLU gates, the activation region of a node is the open half-space {x | ⟨w, x⟩ + b > 0}, where w represents the weights and b is the bias associated with the node. If the NN receives raw observations as input, every node will respond to an entire half-space in the input space, which might cause undesirable generalization. However, when the generalization is broken in the input space using discretization, each small area in the input space is mapped to a vertex of a hypercube. These vertices are all extreme points of a convex set, and thus the ReLU activations will have the ability to respond to each of these sub-areas separately.

Figure 2: Response functions with raw inputs (top) and tile coding preprocessing (bottom) for Mountain Car control.

Figure 2 shows heat-maps for the case where the NN used raw inputs (top) and tile coding preprocessing (bottom). The feature maps were created using a neural network trained on the Mountain Car problem for 500 episodes. Each heat-map represents the magnitude of the output of a node from the first hidden layer. Heat-maps on the bottom row of Figure 2 show two rather global and two rather local node responses from the hidden layer. As shown in the figure, responses from the neural net that uses raw inputs are global. So far we have discussed the features of the proposed methods and have shown that the proposed method can have more local generalizations. However, what we have shown so far is more qualitative than quantitative. The next section uses quantitative measures of interference for comparison.

4 EXPERIMENTAL SETUP

The experiments described in the following sections are rather extensive and have many components. At the highest level, we investigate the impact of input preprocessing on a variety of base NN learning systems. Our experiments include both prediction and control, in two classic reinforcement learning control tasks. The following sections describe the simulation problems and the base NN learning systems we used.

4.1 Simulation problems

We investigate three different problems: one prediction problem and two control problems. The control problems that we used are Mountain Car and Acrobot. The prediction problem also uses the Mountain Car testbed, but with a fixed given policy.
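The two preprocessing schemes of Section 3 can be sketched as follows. This is a simplified illustration with assumed helper names, not the authors' implementation; the tile-coder demo reproduces the 3 tilings × 4 × 4 tiles = 48-dimensional example of Figure 1, using Mountain Car's observation bounds.

```python
import numpy as np

def discretize(obs, lows, highs, bins):
    """D-NN input: concatenated one-hot encoding of each dimension separately."""
    out = []
    for x, lo, hi in zip(obs, lows, highs):
        i = min(max(int((x - lo) / (hi - lo) * bins), 0), bins - 1)
        v = np.zeros(bins)
        v[i] = 1.0
        out.append(v)
    return np.concatenate(out)

def tile_code(obs, lows, highs, tiles=4, tilings=3):
    """TC-NN input: one one-hot block per tiling, tilings offset by a
    fraction of a tile width (a simple diagonal offset scheme)."""
    d = len(obs)
    out = np.zeros(tilings * tiles ** d)
    for t in range(tilings):
        idx = 0
        for x, lo, hi in zip(obs, lows, highs):
            width = (hi - lo) / tiles
            shifted = x - lo + t * width / tilings  # offset for tiling t
            i = min(max(int(shifted / width), 0), tiles - 1)
            idx = idx * tiles + i
        out[t * tiles ** d + idx] = 1.0
    return out

# Mountain Car observation bounds: position in [-1.2, 0.6], velocity in [-0.07, 0.07].
onehot = discretize([-0.5, 0.0], lows=[-1.2, -0.07], highs=[0.6, 0.07], bins=8)
phi = tile_code([-0.5, 0.0], lows=[-1.2, -0.07], highs=[0.6, 0.07], tiles=4, tilings=3)
```

The discretized vector has one active bin per dimension (2 ones in a length-16 vector), while the tile-coded vector has one active tile per tiling (3 ones in a length-48 vector), matching the counts described for Figure 1.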
Mountain Car simulates an underpowered car at the bottom of a hill that should pass the finish line on top of the hill (Moore, 1991). The problem has two dimensions: position and velocity. The position can vary between -1.2 and 0.6, and the velocity varies between -0.07 and 0.07. There are three actions in each state: throttle forward, throttle backward, and no throttle. The car starts near the bottom of the hill, at a point chosen uniformly at random between -0.4 and -0.6. The reward is -1 for each time step before the car passes the finish line at the top of the hill. When the position becomes larger than 0.5, the agent receives a reward of 0 and the episode terminates. The problem is episodic and not discounted (γ = 1). In the control version of the Mountain Car problem, the agent seeks to find a policy that ends the episode as fast as possible. In control, episodes were cut off after 1000 steps. In the prediction variant of the problem, we used a simple energy-pumping policy. This policy chooses the action in agreement with the current velocity: left if the velocity is negative, right otherwise. The Acrobot (Sutton, 1996) is similar to a gymnast. Its goal is to swing its feet above a bar it is hanging from. The problem has four dimensions: two angles and two angular velocities (all real-valued). There are three discrete actions: positive torque, negative torque, and no torque. The reward is -1 on each time step before the Acrobot swings its feet above the bar to terminate the episode. The task is undiscounted with γ = 1. Episodes are cut off after 500 steps. We used the OpenAI Gym implementation of the Acrobot problem (Brockman et al., 2016).

4.2 Methods

We are interested in studying the impact of input preprocessing on a variety of NN learning systems. We investigate five NN learning systems that represent a spectrum from naive learning systems that we expect to work poorly, to well-known and widely used architectures.
In particular we selected: (1) a simple feed-forward NN with stochastic gradient descent (SGD); (2) a simple feed-forward NN with ER and SGD; (3) a simple feed-forward NN with Adam (Kingma and Ba, 2014); (4) a simple feed-forward NN with ER and Adam; (5) a simple feed-forward NN with ER, Adam, and target networks. Given our five representative learning systems, we experiment with three different input preprocessing strategies. The first involves no preprocessing, feeding in the raw input observations. The second first discretizes the inputs before they are passed to the network. The third first tile codes the observations.

5 EXPERIMENTAL RESULTS

Our experiments primarily focus on how different input preprocessing strategies impact both the speed of learning and sensitivity to the step-size parameter values. In addition, we also investigate how preprocessing impacts interference. To do so, we use a recently proposed interference measure (Liu, 2019). Our main hypothesis is that input preprocessing improves performance in prediction and control because it reduces interference in the network updates. We begin first by reporting the improvements in prediction and control with discretization and tile coding.

5.1 Overall performance

We used different measures to compare the algorithms for control and prediction tasks. In the control tasks, we simply report the number of steps it takes for the agent to finish the episode. Note that the reward at each time step for all tasks is -1, and thus the agent's objective is to finish the task as fast as possible. For the Mountain Car prediction task, we used an easily interpretable error measure, the Root Mean Squared Value Error (RVE). We measure, at the end of each episode, how far the learned approximate value function is from the optimal value function.
The RVE(w_t) is defined as:

RVE(w_t) ≐ √( Σ_{s∈S} d_π(s) [v̂(s, w_t) − v_π(s)]² ) ≈ √( (1/|D|) Σ_{s∈D} [v̂(s, w_t) − v_π(s)]² )    (1)

where d_π(s) is the stationary distribution under π, v̂ is the agent's estimate of the value function, and v_π is the true value function. Since the state space is continuous, d_π(s) is estimated by sampling states while following π. D is a set of states formed by following π to termination, restarting the episode, and following π again. This was done for 10,000,000 steps, and we then randomly sampled 500 states from the 10,000,000 states. The true value v_π(s) was simply calculated for each s ∈ D by following π once to the end of the episode. To create learning curves¹, we ran each method with many different hyper-parameter combinations and plotted the one that minimized the Area Under the learning Curve (AUC): total steps to goal in control and total RVE in prediction. We ran each method with each specific parameter setting 30 times (30 independent runs). We then averaged the results over runs and computed the standard error. Figure 3, rows 1-3, compares each method using raw inputs (with its best performing hyper-parameters) with its counterpart that uses discretized inputs. In most cases, NNs with discretized inputs learned faster than NNs using raw inputs and converged to the same or a better final performance. Figure 3, rows 4-6, compares raw inputs with tile coding preprocessing. NNs with tile coding preprocessing outperform NNs with raw inputs. In most cases, the differences between the performance of NNs using remapped inputs and raw inputs were statistically significant (according to the standard error). The results suggest that preprocessing the inputs and projecting them into a higher dimensional space helps neural networks learn faster and more accurately.
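The empirical estimate in Eq. (1) can be sketched as follows (illustrative inputs; the arrays stand for v̂ and v_π evaluated on the sampled set D):

```python
import numpy as np

def rve(v_hat, v_true):
    """Root mean squared value error over a sampled set of states D,
    the right-hand side of Eq. (1)."""
    v_hat, v_true = np.asarray(v_hat, float), np.asarray(v_true, float)
    return float(np.sqrt(np.mean((v_hat - v_true) ** 2)))

# Toy example with |D| = 3 states: errors of 1, 1 and 0.
err = rve([-9.0, -5.0, -1.0], [-10.0, -4.0, -1.0])
```

Here `err` is √(2/3) ≈ 0.816; weighting by d_π is handled implicitly, because the states in D are sampled from the stationary distribution rather than enumerated.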
To further assess the significance of the pairwise comparisons, we performed two-sample t-tests. The pairs for each t-test were: 1) the method that used raw inputs and 2) the same method using either tile coding or discretization preprocessing. Each of the 30 plots in Figure 3 includes two learning curves, and each plotted learning curve is the average over 30 learning curves from independent runs. We first averaged the performance (the value error for Mountain Car prediction; the number of steps per episode for Mountain Car control and Acrobot) over episodes for each run. This produced 30 numbers (one per run) for each of the learning curves in each plot of Figure 3. We then used these 60 numbers (30 per group) in a two-sample t-test. Appendix A summarizes the p-values of the t-tests in a table. The tests revealed that in the cases where tile coding or discretization improved performance, the improvement was statistically significant.

5.2 Sensitivity to step-size

In the previous experiment, we reported results using the best-performing step-size, which is not always feasible in practice. Ideally, an algorithm will perform well with many different settings of the step-size parameter. In this section, we investigate the performance of each of our five learning systems over many different step-size values. Our objective is to understand how input preprocessing interacts with step-size sensitivity. We evaluated the performance using different step-sizes (see Table 1 for the list of step-sizes). Figure 4 summarizes the sensitivity to the step-size for the methods that used SGD, and to the initial step-size for the methods that used Adam. To create sensitivity curves, we ran each method with each specific step-size parameter value 30 times. (All learning curves were smoothed using a sliding window of size 10.)

[Figure 3: Learning curves for all tasks. Top three rows compare raw inputs with discretized inputs. Three bottom rows compare raw inputs with tile coding preprocessing. Discretizing/tile coding the input helped neural networks learn faster and converge to a better final performance. D-NN is short for Discretization+NN, and TC-NN is short for Tile Coding+NN.]

We then averaged each run's performance to get a single number representing the area under the learning curve for that run, method, and parameter setting. We then computed the average and standard error over the 30 numbers (one per run) and plotted this number for each specific step-size in Figure 4. Each learning system that we tested actually has several other hyper-parameters, and these can also have a significant impact on performance. For example, the algorithms that use the Adam optimizer have two extra parameters that specify the exponential decay rates for the first and second moment estimates, typically referred to as β1 and β2. To create a single curve for step-size sensitivity, we first searched over all the parameters to find the minimum AUC; we then fixed β1 and β2 to the values that achieved the minimum AUC and plotted the AUC over the step-sizes for those values. As we see in Figure 4, discretizing/tile coding reduced sensitivity to the step-size (compare the size of the bowl-shaped curves when using tile coding or discretization to raw inputs). The standard errors in Figure 4 (some of which are not visible due to being small) show that the differences in the errors are statistically significant.

5.3 Interference

The results in the previous section indicate that input preprocessing can improve learning speed, stability, and parameter sensitivity, but it is not clear how these improvements relate to interference in the network updates. One hypothesis is that preprocessing reduces interference. To test this hypothesis, we use a recently proposed measure of interference (Liu, 2019).
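The per-run statistical comparison described in Section 5.1 can be sketched with a plain two-sample t statistic; this minimal version uses Welch's unequal-variance form, in plain Python, with made-up AUC numbers rather than the paper's data.

```python
import math

def welch_t(xs, ys):
    """Welch's two-sample t statistic and degrees of freedom, computed over
    two groups of per-run scores (e.g., 30 AUC values per method)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)  # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    dof = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, dof

# Hypothetical per-run AUCs (lower is better) for raw vs. tile-coded inputs:
raw_auc = [312.0, 305.0, 331.0, 298.0, 320.0]
tc_auc = [251.0, 260.0, 244.0, 255.0, 248.0]
t, dof = welch_t(raw_auc, tc_auc)  # large positive t: raw inputs are worse
```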
The measure is only applicable to prediction tasks, so the results in this section are only suggestive of the possible relationship between the performance improvements due to input preprocessing and a reduction in interference.

[Figure 4: Step-size sensitivity curves over all tasks. Top three rows compare discretized and raw inputs. Three bottom rows compare tile coding preprocessing and raw inputs. Breaking the generalization in the input reduced sensitivity to the step-size.]

Liu's measure is fairly straightforward and is based on the Pairwise Interference (PI):
$$PI(S_i, S_j) = \frac{\nabla_w\big[\hat{v}(w, S_i)\big]^\top \nabla_w\big[\hat{v}(w, S_j)\big]}{\|\nabla_w \hat{v}(w, S_i)\|_2 \times \|\nabla_w \hat{v}(w, S_j)\|_2}, \quad (2)$$
which means that if the two vectors in the numerator have a zero inner product, they are orthogonal and an update made at $S_i$ does not affect the update from $S_j$. This is similar to the neural tangent kernel, scaled by the norms, which has previously been used for studying generalization in deep reinforcement learning (Achiam, Knight, and Abbeel, 2019). The measure is also similar to the one proposed by French (1999) for batch supervised learning. We measured the pairwise interference (Equation 2) for all pairs of samples in the dataset D, where D is the same set used to compute the RVE (see Equation 1). We averaged the pairwise interference over all samples and then averaged the results over 30 runs. We repeated this process as learning progressed: before learning started, right after initializing the neural network, after episodes 1, 5, 10, and 25, and every 25 episodes afterwards until the end of learning (500 episodes in this case). The interference for each method was measured for the parameter setting that produced the best learning curve, as explained in Section 5.1. To sanity-check Liu's measure, we can compare the interference of a learning system we expect to have high interference with one we expect to have low interference and see if the measure agrees. Consider the simple feed-forward NN with SGD shown in Figure 5 (first row, leftmost subplot). We can compare this with the measured interference of an NN with ER, Adam, and target networks (Figure 5, first row, rightmost subplot).
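Equation (2) is the cosine similarity between the value-gradient vectors at two states; a minimal sketch (the gradient vectors below are hypothetical placeholders for actual network gradients):

```python
import math

def pairwise_interference(grad_i, grad_j):
    """Normalized inner product of the value-gradient vectors at two states,
    as in Eq. (2): +1 means updates generalize identically, 0 means the
    updates do not interfere, and negative values mean an update at one
    state degrades the estimate at the other."""
    dot = sum(a * b for a, b in zip(grad_i, grad_j))
    norm_i = math.sqrt(sum(a * a for a in grad_i))
    norm_j = math.sqrt(sum(b * b for b in grad_j))
    return dot / (norm_i * norm_j)

pi_orth = pairwise_interference([1.0, 0.0], [0.0, 1.0])  # orthogonal: 0
pi_same = pairwise_interference([1.0, 2.0], [2.0, 4.0])  # parallel: 1
```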
We can see that the simple NN results in higher measured interference than the NN with target networks, ER, and Adam. Rows 1 and 4 of Figure 5 compare the interference of the different methods over episodes. The parameters for these interference measurements were the same as the ones used to produce the learning curves in Figure 3. Discretization and tile coding both reduced the interference compared to when raw inputs are used.

[Figure 5: Interference over time (first and fourth rows), interference over the number of hidden units (second and fifth rows), and interference over the number of hidden layers (third and sixth rows), for discretized vs. raw inputs and tile-coded vs. raw inputs. Breaking the generalization in the input space reduced the interference.]

We also compared the interference of the different methods with different numbers of hidden layers in the network, fixing the number of hidden units in each layer to 25.
To calculate a single number representing the interference of each setting, we measured the interference over time, as discussed before, for each run. We then averaged the results over time to get a single interference measurement per run, and computed the average and standard deviation of the resulting numbers as the measure of interference for each setting. Rows 3 and 6 of Figure 5 show the interference over different numbers of hidden layers, from 1 to 4. In almost all cases, the interference was reduced when using tile coding or discretization. The interference was rather insensitive to the number of hidden layers. We also measured the interference over different numbers of hidden units: we fixed the number of hidden layers to 1 and varied the number of hidden units over 5, 10, 25, 50, and 75. The process was the same as the one used to compute interference for increasing numbers of hidden layers, described above. Rows 2 and 5 of Figure 5 show that the interference generally decreased with an increasing number of hidden units. This is possibly because larger networks have more capacity to reduce the impact of input generalization." + }, + { + "url": "http://arxiv.org/abs/1805.07476v2", + "title": "Two geometric input transformation methods for fast online reinforcement learning with neural nets", + "abstract": "We apply neural nets with ReLU gates in online reinforcement learning. Our\ngoal is to train these networks in an incremental manner, without the\ncomputationally expensive experience replay. By studying how individual neural\nnodes behave in online training, we recognize that the global nature of ReLU\ngates can cause undesirable learning interference in each node's learning\nbehavior. We propose reducing such interferences with two efficient input\ntransformation methods that are geometric in nature and match well the\ngeometric property of ReLU gates. 
The first one is tile coding, a classic\nbinary encoding scheme originally designed for local generalization based on\nthe topological structure of the input space. The second one (EmECS) is a new\nmethod we introduce; it is based on geometric properties of convex sets and\ntopological embedding of the input space into the boundary of a convex set. We\ndiscuss the behavior of the network when it operates on the transformed inputs.\nWe also compare it experimentally with some neural nets that do not use the\nsame input transformations, and with the classic algorithm of tile coding plus\na linear function approximator, and on several online reinforcement learning\ntasks, we show that the neural net with tile coding or EmECS can achieve not\nonly faster learning but also more accurate approximations. Our results\nstrongly suggest that geometric input transformation of this type can be\neffective for interference reduction and takes us a step closer to fully\nincremental reinforcement learning with neural nets.", + "authors": "Sina Ghiassian, Huizhen Yu, Banafsheh Rafiee, Richard S. Sutton", + "published": "2018-05-18", + "updated": "2018-09-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction Reinforcement learning systems must use function approximation in order to solve complicated real-world problems. Neural nets provide an effective architecture for nonlinear function approximation (Cybenko 1989), and their ability to adapt through a data-driven training process makes them powerful general function approximators for self-learning systems. Neural nets have been used since the early days of reinforcement learning (Miller, Sutton, and Werbos 1990; Tesauro 1995); reinvigorated now by the advances in deep learning, they are the driving force behind most recent progresses toward large-scale reinforcement learning (Mnih et al. 2015; Silver et al. 2016; Wang et al. 2018). 
On the other hand, it was also known early on that as neural nets generalize globally in training, they have a weakness: when the network gains new experience, it tends to "forget" what it has learned in the past, a phenomenon known as "catastrophic interference" (McCloskey and Cohen 1989; French 1999). Indeed, since in a neural net a function is implicitly represented by the weights on the network connections, changes in a few weights during training could result in global changes of the function. In this paper we are interested in catastrophic interference as it arises in online reinforcement learning. Solution methods have been proposed to address the catastrophic interference issue in the supervised learning context (French 1999; Lee et al. 2017); however, these methods are not readily available for use in the reinforcement learning framework. Most, if not all, of these techniques focus on multi-task supervised learning, are specific to the transfer learning context, and are not applicable to the online reinforcement learning context with rapidly changing policies. The main reason behind this limitation is that interference reduction techniques proposed for supervised learning settings rely on the fact that the learning agent can successfully learn to make progress on a single task. As we will show later in the paper, due to severe interference, systems that use neural nets as function approximators can fail to make progress even on a single task in online reinforcement learning. Failing to solve a single task successfully, it is then out of the question to apply multi-task transfer-learning techniques to the reinforcement learning framework. An interesting research direction is to adapt these supervised learning techniques to the reinforcement learning context; however, this is not what we pursue in this work.
(Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.)
Instead, we propose novel methods that are suitable for online reinforcement learning. The main strategy to work around the catastrophic interference problem in reinforcement learning has been to train neural networks offline with batch data and experience replay buffers (Mnih et al. 2015; Lin 1993). Experience replay uses, at each training step, a rich sample of past and newly gathered data in order to update the neural network weights. This strategy seems to work in practice, but it requires a lot of memory and computation and slows down the training speed. Moreover, experience replay avoids interference at the cost of losing the advantages of online updating, which is one of the important characteristics of the reinforcement learning framework (Sutton and Barto 2018). In this work, we explore alternative fully incremental online approaches to mitigate the catastrophic interference problem instead of using experience replay. To begin with, we consider a two-layer network with one hidden layer and focus on the behavior of individual nodes that use the popular ReLU gates. We recognize that with the ReLU activation function, a neural node has to respond linearly to inputs from an entire half-space, and this global nature of ReLU gates can cause undesirable learning interference in each node's learning behavior. This observation led us to propose reducing such interferences with two input transformation methods. Both methods are geometric in nature, and as we will show, their geometric properties match well the geometric property of ReLU gates. As we will discuss in more detail later in the paper, both methods enable the neural nodes to respond to a local neighbourhood of their input space. This can help neural networks generalize more locally and prevent interference.
While input transformation is one major approach to address the interference problem in neural nets (French 1999), the two geometric methods we study in this paper have not, to our knowledge, been considered before. The first method is tile coding (Albus 1975), a classic binary encoding scheme that captures the topological structure of the input space in the codes and can help promote local generalization. We refer to the combination of tile coding with neural nets as TC-NN. We will show that compared to neural nets operating on raw inputs, TC-NN generalizes more locally, has less interference, and learns much faster. We will also show that TC-NN has advantages over the classic approach of combining tile coding with a linear function approximator (TC-Lin), especially for high-dimensional problems, in terms of function approximation capability. The second method (EmECS) is a new method we introduce. It is based on topological embedding of the input space and geometric properties of convex sets. The idea is to embed the input space in the set of extreme points of a closed convex set, so that, although with ReLU a neural node must always respond linearly to all points from an entire half-space of the transformed input space, with respect to the original input space it can respond only to inputs from a small local region, thus reducing its learning interference. As we will show, EmECS can be implemented easily and efficiently, and it differs from other high-dimensional representations in that (i) it does not increase the dimensionality of the inputs by much (indeed it can work with just one extra dimension), and (ii) it can be applied on top of any pre-extracted features that are suitable for a given task. As we will also show, EmECS shares some similarities with coarse coding (Hinton, McClelland, and Rumelhart 1986), of which tile coding is a special case, despite their being seemingly unrelated.
Our experimental results show that with EmECS, neural nets can perform as well as TC-NN, achieving both fast learning and accurate approximations. The rest of this paper is organized as follows. We first provide the background on nonlinear TD(λ) and Sarsa(λ). We then discuss the TC-NN and EmECS methods. We present experimental results before ending the paper with a discussion of future work. A few supporting results and detailed discussions are collected in the appendices.

2 Background: Nonlinear TD(λ) and nonlinear Sarsa(λ)

In this paper we use the TD(λ) and Sarsa(λ) methods (Sutton 1988; Rummery and Niranjan 1994) for solving prediction and control problems, respectively. The prediction problem is that of learning the value function of a given stationary policy in a standard Markov Decision Process (MDP) with discounted or total reward criteria (Puterman 1994). Specifically, an agent interacts with an environment at discrete time steps t = 0, 1, 2, .... If at time t the agent is in state $S_t$ and selects an action $A_t$, the environment emits a reward $R_{t+1}$ and takes the agent to the next state $S_{t+1}$ according to certain probabilities that depend only on the values of $(S_t, A_t)$. We consider problems where the action space A is finite, and the state space S is either finite or a bounded subset of a Euclidean space; the problems in our experiments have continuous state spaces. A stationary policy is represented by a function $\pi : A \times S \to [0, 1]$, which specifies the probabilities of taking each action at a state in S. The value function of the policy $\pi$ is defined as the expected sum of the discounted future rewards (or simply the expected return), $v_\pi(s) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \mid S_t = s\right]$ for all $s \in S$, where $\gamma \in [0, 1]$ is the discount factor and $\mathbb{E}_\pi[\cdot]$ denotes taking expectation under policy $\pi$.
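The return inside the definition of $v_\pi$ can be computed for a single sampled episode by a backward sweep over its rewards, which is a common way to form Monte Carlo value targets; a small sketch with illustrative numbers:

```python
def discounted_return(rewards, gamma):
    """Compute sum_k gamma**k * R_{t+k+1} for one episode's reward sequence,
    accumulating from the end of the episode backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

g = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25
```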
The prediction problem for the agent is to estimate $v_\pi$. In the control problem, policies are not fixed, and through interactions with the environment the agent needs to find an optimal policy that maximizes the expected return. For prediction problems, we apply TD(λ) with nonlinear function approximation to update the weights w of a neural network using a small step size α according to
$$w_{t+1} = w_t + \alpha\, \delta_t z_t. \quad (1)$$
Here $\delta_t$ is the temporal-difference error and $z_t$ the eligibility trace vector, calculated iteratively as
$$\delta_t = R_{t+1} + \gamma \hat{v}(S_{t+1}, w_t) - \hat{v}(S_t, w_t), \quad (2)$$
$$z_t = \gamma \lambda z_{t-1} + \nabla_w \hat{v}(S_t, w_t), \quad (3)$$
where $\hat{v}(s, w)$ represents the approximate value for state s produced by the neural net with weights w, and $\nabla_w \hat{v}(S_t, w_t)$ denotes the gradient of the function $\hat{v}(S_t, \cdot)$ at $w = w_t$. Sarsa(λ) is the control variant of TD(λ). When it was first proposed, it actually used neural networks as function approximators (Rummery and Niranjan 1994). Its update rules are similar to those of TD(λ) except that $\hat{v}(S_t, w_t)$ is replaced by $\hat{q}(S_t, A_t, w_t)$, the approximate value for the state-action pair $(S_t, A_t)$ produced by the neural net. The action at each time step is typically chosen in an ε-greedy way with respect to the current approximating function. (We use ε = 0.1.) The network structures we used for prediction and control are different. For prediction, the network receives, as input, the state (or a representation of it) and outputs the approximate value for that state. For control, the input stays the same, but the network outputs multiple values, one for each action, to approximate the state-action values at that state. All neural nets in this work have a single hidden layer that uses ReLU gates and a linear output layer that does not use any gate function.
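Equations (1)-(3) can be sketched concretely for the linear special case, where $\nabla_w \hat{v}(s, w)$ reduces to the feature vector of $s$; the feature vectors and constants below are illustrative only, not the paper's setup:

```python
def td_lambda_step(w, z, x_t, x_next, reward, alpha, gamma, lam):
    """One TD(lambda) update (Eqs. 1-3) for linear function approximation,
    where grad_w v(s, w) is just the feature vector x of state s."""
    v_t = sum(wi * xi for wi, xi in zip(w, x_t))
    v_next = sum(wi * xi for wi, xi in zip(w, x_next))
    delta = reward + gamma * v_next - v_t                  # Eq. (2)
    z = [gamma * lam * zi + xi for zi, xi in zip(z, x_t)]  # Eq. (3)
    w = [wi + alpha * delta * zi for wi, zi in zip(w, z)]  # Eq. (1)
    return w, z

# One step from zero weights with one-hot features for two states:
w, z = [0.0, 0.0], [0.0, 0.0]
w, z = td_lambda_step(w, z, [1.0, 0.0], [0.0, 1.0], reward=1.0,
                      alpha=0.1, gamma=0.9, lam=0.8)
```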
[Figure 1: A continuous 2D space with one tiling on top of it is shown on the left. Three overlapping tilings on the 2D continuous space are shown in the middle in blue, green, and red. The generalization region for a sample point is shown on the right.]

3 Tile coding plus neural networks: TC-NN

Tile coding is a form of coarse coding, in which we cover the state space S with overlapping sets, and encode a state s by a binary string, where the bits that are 1 indicate which sets contain s. These overlapping sets capture, in a coarse way, the topological structure of the state space (i.e., which points are close together and which regions are connected to each other), and the encoding carries this structural information. In tile coding the overlapping sets are hyper-rectangles; Figure 1 illustrates a simple encoding for a 2D space. The states are thus mapped to vertices of a unit cube in a higher-dimensional space. Tile coding is well suited when the physical states occupy only a small portion of the input space, and also when the state space is non-Euclidean and has a natural product structure, as in many robotics applications. For example, in Acrobot, two angular control parameters lie on a Cartesian product of two circles (a torus) and can be tile-coded efficiently. Tile coding was invented by Albus (1975; 1981). It is the key component of his CMAC computation architecture, which is, in fact, tile coding plus a linear function approximator (TC-Lin). The nonlinear input map provided by the encoding was to facilitate local generalization: the result of training at a particular state generalizes locally to the "neighborhood" of that state, as defined by the union of those sets that contain the state (cf. Figure 1).
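A minimal 2D tile coder in the spirit of Figure 1, with a few offset tilings over the unit square, might look like the following; the grid size and offsets are arbitrary illustrative choices, not the paper's configuration:

```python
def tile_code(x, y, low=0.0, high=1.0, tiles_per_dim=4, num_tilings=3):
    """Return the index of the active tile in each of several offset tilings
    over [low, high)^2; together the indices define a sparse binary code."""
    active = []
    width = (high - low) / tiles_per_dim
    cells = tiles_per_dim + 1  # one extra cell absorbs the offset overflow
    for t in range(num_tilings):
        offset = t * width / num_tilings  # shift each tiling slightly
        ix = min(int((x - low + offset) / width), tiles_per_dim)
        iy = min(int((y - low + offset) / width), tiles_per_dim)
        active.append(t * cells * cells + iy * cells + ix)
    return active

feat_a = tile_code(0.10, 0.10)
feat_b = tile_code(0.12, 0.12)  # nearby points share most active tiles
feat_far = tile_code(0.90, 0.90)  # distant points share none
```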
CMAC has been applied in control and robotics and is known, among brain-inspired computational models, as a different type of neural network, an alternative to the globally generalizing, backpropagation-type neural net (Miller, Glanz, and Kraft 1990; Balakrishnan and Weil 1996). In reinforcement learning, Lin and Kim (1991) proposed CMAC/TC-Lin for TD(λ). Tham (1994) used it with a variety of online reinforcement learning algorithms, including Q-learning and Sarsa(λ), to solve complex robotics problems. Other successful examples of using TC-Lin with Sarsa(λ) were also shown by Sutton (1996). (See the textbook (Sutton and Barto 2018, Chapter 9.5) for an excellent introduction to tile coding and its applications in reinforcement learning.) Given the rich history of TC-Lin, our proposal to combine tile coding with a neural net may seem unorthodox at first sight. Let us now explain the merits of this TC-NN combination, as well as its differences from TC-Lin, from several perspectives. It is true that a neural net tends to generalize globally, so in TC-NN each neural node tends to respond to a much larger area of the state space than an ideal local neighborhood as in TC-Lin.

[Figure 2: Learning curves for TC-Lin and TC-NN with the joint tile coding scheme (TCj-Lin and TCj-NN), and neural nets with raw inputs (NN), on Acrobot control for λ = 0.0, 0.5, and 0.95. Standard deviation and mean of 30 runs are shown for each curve. TCj-NN was fast and converged to a lower final performance when λ = 0. As λ got larger, the different methods performed more similarly.]

However, tile coding gives each node the ability to pick the size and shape of its activation region with respect to the original state space (see Appendix D for examples of activation regions of TC-NN from an experiment).
In contrast, if the neural net works on the state space directly, every node has to respond to an entire half-space linearly. This causes interference and can slow down learning considerably, as we observed in the Acrobot problem (Figure 2). Sometimes the interference can be so severe that it prevents the network from learning at all (see Appendix B for some failure examples of neural nets with raw state inputs and with RBF features). An advantage that TC-NN has over TC-Lin is in the function approximation capability. This becomes critical as the dimensionality of the state space S increases. To cope with the curse of dimensionality, when S has a natural Cartesian product structure, one can tile-code each component in the product separately. This encoding captures the same information as tile-coding all the dimensions of S jointly, but is much more efficient, since the resulting code length then scales linearly with the dimensionality of S. However, with a linear function approximator, the encoding is also tied to how TC-Lin generalizes during training and what functions TC-Lin can approximate. As a result of these strong ties, if we tile-code each dimension separately: (i) the generalization of TC-Lin becomes global, and (ii) the set of functions TC-Lin can approximate becomes limited, since it can represent only functions that are sums of functions of each component. In contrast, for TC-NN with the separate tile coding scheme: (i) the neural net still has the freedom to choose regions of generalization as before, and these regions need not be as global as those in TC-Lin, and (ii) the set of functions that the neural net can approximate remains the same.
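The efficiency claim above (code length linear rather than exponential in the dimensionality of S) can be made concrete with a back-of-the-envelope sketch; the granularity used here is hypothetical:

```python
def joint_code_length(bins_per_dim, dims, num_tilings=1):
    """Joint tile coding: each tiling partitions the whole n-D space, so
    the number of tiles (code length) grows as bins**n."""
    return num_tilings * bins_per_dim ** dims

def separate_code_length(bins_per_dim, dims, num_tilings=1):
    """Separate tile coding: each dimension is coded on its own, so the
    total code length grows linearly with n."""
    return num_tilings * dims * bins_per_dim

joint = joint_code_length(10, 6)       # a 6-D space at 10 bins per dimension
separate = separate_code_length(10, 6)
```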
The latter is because, with either the separate or joint tile coding scheme, the states are mapped to vertices of a hyper-unit-cube with the same granularity, and the neural net can separate each vertex from the rest of the vertices by using a single hidden node (with ReLU) and assign a specific value to that vertex. We will show experimental results that confirm this advantage of TC-NN in the experimental results section (cf. Figure 6) and in Appendix D (cf. Figure 16), where we will also discuss this subject in a more intuitive manner.

4 Embedding into Extreme points of a Convex Set (EmECS)

We now introduce a new input transformation method, EmECS, for reducing the learning interference of individual neural nodes with ReLU gates. This method is based on two geometric properties: (i) With ReLU, the activation region of a neural node is the open half-space
$$\{x \mid \langle w, x \rangle + b > 0\} \quad (4)$$
that corresponds to the hyperplane $\langle w, x \rangle + b = 0$, where w is the vector of weights and b the scalar bias term associated with the node. (ii) For a closed convex set C, consider a point $x \in C$ and the neighborhoods of x relative to C (i.e., the intersections of its neighborhoods with C). If x is an extreme point of C, there is a hyperplane whose open half-space (4) contains only an (arbitrarily) small neighborhood of x relative to C (see footnote 1). The left part of Figure 3 illustrates this property of an extreme point. If X is the original input space of the neural net (X can be the state space of the problem or the space of any pre-extracted features of states), our method is to embed X in the set of extreme points of a closed convex set in a higher-dimensional Euclidean space, and let the neural net work with the transformed inputs instead of the original inputs. Here, by embedding, we mean a one-to-one continuous map f whose inverse $f^{-1} : f(X) \to X$ is also continuous.
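Property (i) follows directly from the form of a ReLU unit: the node's output, and hence its gradient, is nonzero exactly on an open half-space. A short sketch with arbitrary illustrative weights:

```python
def relu(u):
    return max(0.0, u)

def in_activation_region(w, b, x):
    """A ReLU node is active (nonzero output and nonzero gradient) exactly
    on the open half-space {x : <w, x> + b > 0} of Eq. (4)."""
    pre_activation = sum(wi * xi for wi, xi in zip(w, x)) + b
    return relu(pre_activation) > 0.0

inside = in_activation_region([1.0, -1.0], -0.5, [2.0, 0.0])   # 1.5 > 0
outside = in_activation_region([1.0, -1.0], -0.5, [0.0, 2.0])  # -2.5 <= 0
```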
Such a map is called a topological (or homeomorphic) embedding because it preserves topological properties (Engelking 1989). For example, if the states lie on a manifold in X (say, a torus), their images under f lie on a topologically equivalent manifold (thus also a torus), and if the states form two disconnected sets in X, so do their images under f.

[Footnote 1: An extreme point of a convex set C is one that cannot be expressed as a convex combination of other points of C. For a closed convex set C, by Straszewicz's Theorem (Rockafellar 1970, Theorem 18.6), every extreme point is the limit of some sequence of exposed points, where an exposed point of C is a point through which there is a supporting hyperplane that contains no other points of C (Rockafellar 1970, Section 18, p. 163). This means that for an exposed point y, there is a linear function $f_y$ achieving its maximum over C uniquely at y. Consequently, for any extreme point x, we can choose an exposed point y sufficiently close to x so that, for some linear function $f_y$ with the property just mentioned and for some $\epsilon > 0$, the half-space $\{z \mid f_y(z) \ge f_y(y) - \epsilon\}$ contains only a small neighborhood (relative to C) of x. As this neighborhood of x consists of the $\epsilon$-optimal solutions of $\sup_{z \in C} f_y(z)$, it can be made arbitrarily small by the choices of y and $\epsilon$. The corresponding linear function $f_y$ then gives the hyperplane $f_y(z) = f_y(y) - \epsilon$ with the desired property (ii), thus proving our claim.]

[Figure 3: On the left is a cross-section view of a hyperplane cutting out a small part of a closed convex set C (in $\mathbb{R}^3$) around an extreme point e (black dot). This point e corresponds to the point $f^{-1}(e)$ in the original input space X shown on the right. The entire half-space above the hyperplane is the activation region of a neural node in the transformed input space. But only those boundary points of C above the hyperplane (indicated by the thick black line in this cross-section view) correspond to real inputs in X, which form a small neighborhood (shaded area) around the point $f^{-1}(e)$ (black dot). The neural node thus only responds to inputs from that neighborhood in X.]

By combining this topology-preserving property of an embedding with the geometric properties of convex sets discussed earlier, we obtain the following. If we choose a closed convex set C whose boundary points are all extreme points, and if we embed X into the boundary of C, then a neural node with a ReLU gate, when applied to the transformed inputs, becomes capable of responding to only a small neighborhood of any given point in the original input space X (cf. Figure 3). This explains the mechanism of our EmECS method: it enables each neural node to work locally, despite the global nature of ReLU. Of course, having the ability of localized generalization at each node does not mean that the network always allocates a small region to each node or different regions to different nodes; indeed, it is hard for such coordination between nodes to emerge automatically during training. Nonetheless, our experiments showed that EmECS can improve neural nets' learning performance considerably. We can implement EmECS efficiently. Below are a few simple examples of the embedding; in our experiments we have used (a) and (c) (which give the LPj-NN and LPs-NN algorithms in the experimental results section).

Example 1 (Some instances of maps for EmECS) Suppose $X \subset \mathbb{R}^n$. (a) Map X into an n-sphere of radius r in $\mathbb{R}^{n+1}$ by first "lifting" the set X along the (n+1)-th dimension and then projecting it onto the sphere. Specifically, for $x \in X$, let
$$f(x) = r \cdot (x, 1) / \|(x, 1)\|_2. \quad (5)$$
We shall refer to this type of map as lift-and-project (LP for short).
(b) Let g be a continuous, strictly convex function on R^n (e.g., g(·) = ‖·‖₂²). Map x ∈ X to f(x) = (x, g(x)). This embeds X into the graph of the function g, and the closed convex set C here is the epigraph of g: C = {(x, ν) | x ∈ R^n, ν ∈ R, ν ≥ g(x)}. (c) If X = X1 × X2 × ··· × Xm where each Xi ⊂ R^{ni}, we can separately embed each Xi in R^{ni+1}, with a map f_i of the form given in (a)-(b), for instance. The result is the embedding of X in R^{n+m} given by f(x) = (f_1(x), ..., f_m(x)). The range of f is a subset of extreme points of the closed convex set C = C1 × ··· × Cm, where Ci is the convex set associated with the embedding f_i. Sometimes, a component space Xi already contains the desired embedding of the state components (e.g., when the latter lie on a circle or sphere in Xi). Then we do not need to embed Xi any more and can simply take f_i above to be the identity map f_i(xi) = xi.

EmECS shares some similarities with coarse coding (Hinton, McClelland, and Rumelhart 1986) despite their being seemingly unrelated. With EmECS, the activation regions of individual nodes, viewed in the original input space, resemble the receptive fields (i.e., the overlapping sets) in coarse coding. Like the latter, each activation region is connected (thanks to the embedding property), if the hyperplane associated with the node has its normal vector point in the right direction in the transformed input space. For instance, for the lift-and-project map in Example 1(a), it suffices that the normal vector points "upwards" with respect to the extra (n+1)-th dimension (in our experiments we always initialize the network weights in this way). In coarse coding, receptive fields can have different sizes and shapes, but they are chosen before learning takes place.
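As a concrete sketch (in NumPy; the function and variable names are ours, not from the paper), the joint map of Example 1(a), the separate scheme of Example 1(c), and a quick numerical check of the localization mechanism of Figure 3 might look like:

```python
import numpy as np

def lift_and_project(x, r=1.0):
    # Example 1(a): f(x) = r * (x, 1) / ||(x, 1)||_2, embedding R^n into
    # the n-sphere of radius r in R^{n+1}.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    lifted = np.append(x, 1.0)
    return r * lifted / np.linalg.norm(lifted)

def separate_lift_and_project(components, r=1.0):
    # Example 1(c): embed each component space separately and concatenate,
    # taking an n-dimensional input with m components into R^{n+m}.
    return np.concatenate([lift_and_project(c, r) for c in components])

# A 2-D state embedded jointly (into R^3) and separately (into R^4).
s = np.array([0.4, -0.7])
joint = lift_and_project(s)
separate = separate_lift_and_project([s[:1], s[1:]])

# Localization check: a ReLU node whose hyperplane cuts a small cap of the
# circle around f(1) should activate only on a small interval around x = 1.
xs = np.linspace(-5.0, 5.0, 1001)                  # grid over the input space X
fx = np.stack([lift_and_project(x) for x in xs])   # images on the unit circle
w, b = lift_and_project(1.0), -0.98                # normal f(1), bias cuts a cap
active = fx @ w + b > 0                            # activation region, seen in X
idx = np.flatnonzero(active)
```

For this (arbitrary) choice of node, the activated inputs form one connected interval around x = 1 (roughly 0.66 ≤ x ≤ 1.51) and the node is silent elsewhere; note that w points "upwards" in the extra dimension, matching the weight initialization described above. The embedding is also invertible: x is recovered from f(x) by dividing the first n coordinates by the last one.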
With EmECS, the activation regions of neural nodes change their locations and sizes dynamically during learning. The shapes of these regions depend partly on the embedding, so by choosing the embedding, we can have some influence over them, as in coarse coding. For example, the separate embedding scheme in Example 1(c) gives the network more freedom to produce activation regions that are asymmetric, wider in some dimensions and narrower in others, whereas a joint embedding scheme like Example 1(a) can be used to enforce symmetric or specific asymmetric shapes. (See Appendix E for illustrations and a more detailed discussion.)

Let us now compare EmECS and TC-NN. They both map the original inputs to the extreme points of a convex set (in the case of tile coding, the convex set is a hypercube and the extreme points are its vertices), and they both use the topological structure of the original space to do so. A difference between them is that for EmECS the input transformation is an embedding, whereas for tile coding it is not. As a consequence, in TC-NN, the activation region of a neural node, viewed in the original input space, can (and, as observed in our experiments, usually does) contain multiple disconnected components. This suggests that one may be able to further improve the performance of TC-NN by initializing the neural net in a certain way or by monitoring and "pruning" the activation regions of its nodes during training. Another difference between the two methods is in computational complexity. For TC-NN, suppose each dimension of the inputs can be tile-coded separately; then the dimensionality of the transformed inputs will still depend on the size of the original input space along each dimension. In contrast, EmECS only increases the dimensionality of the inputs by the number of component spaces that are embedded separately (cf. Example 1).
So, given the same number of hidden-layer nodes, the neural net with EmECS has far fewer parameters than the TC-NN network.

5 Experimental results

In this section, we show experimentally that our proposed methods are fast and accurate. We compare our proposed methods with two existing online methods: tile coding plus linear function approximation (TC-Lin) and neural networks with raw inputs (NN). We do not compare our methods to experience replay as it is not a fully incremental online method. Our proposed methods are tile coding plus neural networks (TC-NN) and EmECS plus neural networks. From EmECS, we used the lift-and-project scheme plus neural networks (LP-NN). We add the letters j and s to TC and LP (e.g., TCj-NN or TCs-NN) to show whether the dimensions of the input are transformed in a joint or in a separate fashion. LPj-NN uses the lift-and-project map in Example 1(a), and LPs-NN uses the separate embedding scheme in Example 1(c) with each component map f_i being a lift-and-project map. We first use three small problems to compare TCj-NN, LPj-NN, NN, and TCj-Lin: Mountain Car prediction, Mountain Car control, and Acrobot control. All problems are on-policy, undiscounted, and episodic. We perform another set of experiments on the Mountain Car prediction problem to study the effects of transforming the dimensions of the input jointly or separately. Finally, we assess the practicality of our methods in higher dimensions by applying TCs-NN and LPs-NN to a real-world robot problem in an off-policy continuing setting. We also present (in the appendix on learning interferences in online training of neural nets) the results of using RBF kernels to transform the input space and show that not every input transformation method that uses the neighborhood information or creates sparse features can be effective. Implementation details of these experiments are given in Appendix A.
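The parameter comparison can be made concrete with the Mountain Car prediction networks used in this section, whose reported totals (405, 400, and 405 weights for NN, LPj-NN, and TCj-NN) are consistent with counting input-to-hidden plus hidden-to-output weights, i.e., h·(d + 1) for d inputs and h hidden units; reading the totals as bias-free counts is our interpretation of the numbers:

```python
def num_weights(d_inputs: int, n_hidden: int) -> int:
    # input-to-hidden weights plus hidden-to-output weights (biases excluded)
    return d_inputs * n_hidden + n_hidden

nn_total = num_weights(2, 135)    # NN: raw position and velocity inputs
lpj_total = num_weights(3, 100)   # LPj-NN: raw inputs plus one extra dimension
tcj_total = num_weights(80, 5)    # TCj-NN: 80 tile-coded binary features
```

With equal weight budgets, the tile-coded network affords only 5 hidden units because each node carries 80 input weights, whereas the EmECS-based network gets 100 nodes from its 3-dimensional inputs.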
Figure 4: Learning curve (left) and parameter study (right) using TD(0) on Mountain Car prediction, comparing TCj-Lin, NN, LPj-NN, and TCj-NN; the vertical axis is the root mean square value error estimate, and step sizes α are shown on a 10^{-x} scale. The top and bottom axes in the parameter study show the step size values for linear function approximation and neural nets respectively.
TCj-Lin converged to a higher asymptotic error than the other methods. Neural networks alone (without any input transformation) slowly converged to a good final performance. The proposed methods were fast and converged to a low final error.

The first testbed was Mountain Car in the prediction setting, involving the evaluation of a fixed policy. The policy was to push towards the direction of velocity.

Figure 5: Results on two control problems with λ = 0. Proposed methods were fast and converged to a good asymptotic performance on both problems. Neural nets alone (without any input transformation) were unable to solve the mountain car problem due to catastrophic interference.

Neural nets that were used with different input transformation methods had different numbers of inputs and hidden units. NN had 2 raw inputs: position and velocity. LPj-NN had 3 inputs: position, velocity, and its extra dimension. TCj-NN used a feature size of 80 (see Appendix A for details of tile coding and why the number of features is 80). TCj-Lin had the same number of features as TCj-NN. In this problem we carefully chose the number of hidden units to make sure all methods had almost the same number of weights in the neural net. We gave NN and LPj-NN 135 and 100 hidden units to create networks with a total of 405 and 400 weights respectively. We gave TCj-NN only 5 hidden units, which resulted in a network with 405 weights. We ran each method under each parameter setting 30 independent times (30 runs). Each run had 2000 episodes. We then averaged over runs to create learning curves. We also performed a parameter study over different step sizes: for each α and each run, we computed an average over the last 5% of episodes, which produced 30 numbers, one for each run. We then computed the mean and standard error over the resulting numbers. We used our parameter study results to choose the value of the step size for the learning curves we plotted. For all methods, we chose the largest step size (and thus fastest convergence) for which the final performance was close to the best final performance of that method. We used an estimate of the root mean square value error as the error measure:

RMSVE(w_t) = sqrt( (1/|D|) Σ_{s∈D} [v̂(s, w_t) − v_π(s)]² )

Here D is a set of states that is formed by following π to termination and restarting the episode and following π again.
This was done for 10,000,000 steps, and we then sampled 500 states randomly from the 10,000,000 states. The true value vπ(s) was simply calculated for each s ∈ D by following π once to the end of the episode. Results on Mountain Car prediction (Figure 4) show that NN had a good final approximation of the value function; however, it was slow. TCj-Lin was fast but could not approximate the value function as accurately as the other methods. TCj-NN and LPj-NN were both fast and approximated the value function accurately. LPj-NN made the most accurate approximation. The second testbed was Mountain Car in the control setting. We used ε-greedy Sarsa(λ) with ε = 0.1. The performance measure was the number of steps per episode, which is equal to the negative return. We did 30 runs. Each run had 500 episodes. NN, TCj-NN, and LPj-NN all had 800 hidden units. We used the same tile coding scheme as in the Mountain Car prediction problem. All methods except NN learned to successfully reach the goal. NN could not solve the task with raw inputs. TCj-NN was the fastest method to achieve its best final performance, LPj-NN came second, and TCj-Lin was the slowest (see Figure 5(a)). The third testbed was Acrobot in the control setting. We used ε-greedy Sarsa(λ) with ε = 0.1. The performance measure was similar to Mountain Car control. TCj-Lin and TCj-NN had a feature size of 256 (see Appendix A for tile coding details). TCj-NN used 4000 hidden units. NN and LPj-NN had 2000 hidden units. LPj-NN fed the neural net with 5 inputs (the original dimensions plus one extra dimension). We did 30 runs. Each run had 500 episodes. The best final performance was achieved by TCj-NN, followed by LPj-NN, then NN, and then TCj-Lin. Speed-wise, LPj-NN and TCj-NN were the fastest methods. Figure 5(b) summarizes the results.
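The root mean square value error estimate described above is straightforward to compute once the sampled states in D and their true values are available. A minimal sketch (the function and its inputs are illustrative stand-ins, not code from the paper):

```python
import math

def rmsve(v_hat, states, v_pi):
    """Root mean square value error estimate over a sampled state set D.

    v_hat:  callable returning the learned value estimate for a state
    states: the sampled set D of states
    v_pi:   dict mapping each state in D to its true value under the policy
    """
    squared_errors = [(v_hat(s) - v_pi[s]) ** 2 for s in states]
    return math.sqrt(sum(squared_errors) / len(states))
```

For example, a constant-zero estimator against true values {3, 4} over two states gives sqrt((9 + 16) / 2) ≈ 3.54.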
We also did a parameter study on the Mountain Car prediction problem to study the effects of transforming the dimensions of the input jointly and separately. The final performances of LPs-NN and LPj-NN were similar, as were the final performances of TCj-NN and TCs-NN. However, the final performance of TCs-Lin was worse than that of TCj-Lin. This confirms one of our assumptions from Section 3: tile coding the input dimensions separately (vs. jointly) does not pose generalization restrictions (and does not affect the final performance) when combined with neural nets. However, it does pose restrictions (and affects the final performance) if combined with linear function approximation. See Figure 6 for these results when λ = 0 and 0.85, and see Appendix D for results on other values of λ. As a starting point for working in higher dimensions, we applied our methods to a real-world robot task in the off-policy setting. In this problem, a Kobuki robot wanders in a pen, learning to predict when it will bump into something if it
(Figure 6 axes: root mean square value error estimate vs. step size α on a log scale, for λ = 0 and λ = 0.85, on Mountain Car prediction; methods: LPj-NN, LPs-NN, TCj-NN, TCs-NN, TCj-Lin, TCs-Lin.) Figure 6: Parameter studies comparing
separate and joint input transformation. Transforming the input dimensions jointly or separately did not affect the performance of the proposed methods; however, it affected the performance of tile coding plus linear function approximation. goes forward. More details about the reward and policies can be found in Appendix A. The sensory information available to the robot to learn this task consisted of 50 RGB pixels from its camera, represented as a vector of size 150. We did 30 runs of 12,000 time steps and used an estimation of the root mean square return error as the performance measure: $\widehat{\mathrm{RMSRE}}(w_t) = \sqrt{\frac{1}{|D|} \sum_{(s,G) \in D} \big[\hat{v}(s, w_t) - G\big]^2}$ Here D is a set of state and return pairs selected according to the following procedure. To sample each pair (s, G), the robot followed the behavior policy for some random number of steps, sampled state s, and followed the target policy from s to compute the true return G. After sampling each pair, the robot switched back to the behavior policy for a random number of time steps to get the next sample. We repeated this whole procedure 150 times to construct D. We tile coded each of the 150 inputs separately. Tile coding produced a feature vector of size 9600. LPs-NN had 300 features. Both TCs-NN and LPs-NN had 1000 hidden units. More experimental details can be found in Appendix A. Our methods worked well in this environment. The results are presented in Figure 7 (TCs-Lin and NN also performed well; their results are not shown here). We also studied the effect of using larger values of λ. Eligibility traces have been shown to be effective in the past (Rummery and Niranjan 1994; Sutton 1996). Our results (in Appendix C) confirm that larger values of λ help all methods (except RBFs) to learn faster and more accurately.
One of the reasons can be that eligibility traces carry the past information, and this can help prevent interference. (Figure 7 axes: root mean square return error estimate vs. time step for the learning curve, and vs. step size α on a log scale for the parameter study; methods TCs-NN and LPs-NN on collision task prediction.) Figure 7: Learning curve and parameter study for our proposed methods on the robot collision task. Results show that our methods can be effective in higher dimensional spaces.
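To make the role of the traces concrete, here is a minimal sketch of semi-gradient TD(λ) with accumulating traces for linear value estimation (the function name, signature, and defaults are illustrative, not the paper's exact implementation):

```python
import numpy as np

def td_lambda_episode(features, rewards, w, alpha=0.01, gamma=1.0, lam=0.85):
    """Run one episode of semi-gradient TD(lambda) with accumulating traces.

    features: list of feature vectors for visited states (terminal state excluded)
    rewards:  reward received after each step
    w:        weight vector of the linear value estimate, updated in place
    """
    z = np.zeros_like(w)  # eligibility trace: decaying memory of past features
    for t in range(len(features)):
        x = features[t]
        x_next = features[t + 1] if t + 1 < len(features) else np.zeros_like(x)
        delta = rewards[t] + gamma * (w @ x_next) - w @ x  # TD error
        z = gamma * lam * z + x                            # carry past information forward
        w += alpha * delta * z                             # assign credit to recent features
    return w
```

Unlike experience replay, the trace vector z is a single array updated online, which is why it adds almost no computational cost.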
Eligibility traces are not computationally expensive and can be an alternative to experience replay." + } + ], + "Kun Zhang": [ + { + "url": "http://arxiv.org/abs/2402.05052v2", + "title": "Causal Representation Learning from Multiple Distributions: A General Setting", + "abstract": "In many problems, the measured variables (e.g., image pixels) are just\nmathematical functions of the hidden causal variables (e.g., the underlying\nconcepts or objects). For the purpose of making predictions in changing\nenvironments or making proper changes to the system, it is helpful to recover\nthe hidden causal variables $Z_i$ and their causal relations represented by\ngraph $\\mathcal{G}_Z$. This problem has recently been known as causal\nrepresentation learning. This paper is concerned with a general, completely\nnonparametric setting of causal representation learning from multiple\ndistributions (arising from heterogeneous data or nonstationary time series),\nwithout assuming hard interventions behind distribution changes. We aim to\ndevelop general solutions in this fundamental case; as a by product, this helps\nsee the unique benefit offered by other assumptions such as parametric causal\nmodels or hard interventions. We show that under the sparsity constraint on the\nrecovered graph over the latent variables and suitable sufficient change\nconditions on the causal influences, interestingly, one can recover the\nmoralized graph of the underlying directed acyclic graph, and the recovered\nlatent variables and their relations are related to the underlying causal model\nin a specific, nontrivial way. In some cases, each latent variable can even be\nrecovered up to component-wise transformations. 
Experimental results verify our\ntheoretical claims.", + "authors": "Kun Zhang, Shaoan Xie, Ignavier Ng, Yujia Zheng", + "published": "2024-02-07", + "updated": "2024-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Causal representation learning holds paramount significance across numerous fields, offering insights into intricate relationships within datasets. Most traditional methodologies (e.g., causal discovery) assume the observation of causal variables. This assumption, however reasonable, falls short in complex scenarios involving indirect measurements, such as electronic signals, image pixels, and linguistic tokens. 1Carnegie Mellon University 2Mohamed bin Zayed University of Artificial Intelligence. Preprint. Moreover, there are usually changes in the causal mechanisms in the real world, as with heterogeneous or nonstationary data. Identifying the hidden causal variables and their structure, together with the changes in the causal mechanisms, is urgently needed to understand complicated real-world causal processes. This has recently been known as causal representation learning (Schölkopf et al., 2021). It is worth noting that identifying only the hidden causal variables, but not the structure among them, is already a considerable challenge. In the i.i.d. case, different latent representations can explain the same observations equally well, while not all of them are consistent with the true causal process. For instance, nonlinear independent component analysis (ICA), where a set of observed variables X is represented as a mixture of independent latent variables Z, i.e., X = g(Z), is known to be unidentifiable without additional assumptions (Comon, 1994). While being a strictly easier task since there are no relations among the hidden variables, the identifiability of nonlinear ICA often relies on distributional assumptions (non-i.i.d.
data) (Hyv\u00a8 arinen & Morioka, 2016; 2017; Hyv\u00a8 arinen et al., 2019; Khemakhem et al., 2020a; Sorrenson et al., 2020; Lachapelle et al., 2022; H\u00a8 alv\u00a8 a & Hyv\u00a8 arinen, 2020; H\u00a8 alv\u00a8 a et al., 2021; Yao et al., 2022) or specific functional constraints (Comon, 1994; Hyv\u00a8 arinen & Pajunen, 1999; Taleb & Jutten, 1999; Buchholz et al., 2022; Zheng et al., 2022; Zheng & Zhang, 2023). To generalize beyond the independent hidden variables and achieve causal representation learning (recovering the latent variables and their causal structure), recent advances either introduce additional experiments in the forms of interventional or counterfactual data, or place more restrictive parametric or graphical assumptions on the latent causal model. For observational data, various graphical conditions have been proposed together with parametric assumptions such as linearity (Silva et al., 2006; Cai et al., 2019; Xie et al., 2020; 2022; Adams et al., 2021; Huang et al., 2022) and discreteness (Kivva et al., 2021). For interventional data, single-node interventions have been considered together with parametric assumptions (e.g., linearity) on the mixing function (Varici et al., 2023; Ahuja et al., 2023; Buchholz et al., 2022) or also on the latent causal model (Squires et al., 2023). The nonparametric settings for both the mixing function and causal model have been explored by (Brehmer et al., 2022; von K\u00a8 ugelgen et al., 1 arXiv:2402.05052v2 [cs.LG] 10 Apr 2024 \fCausal Representation Learning from Multiple Distributions: A General Setting 2023; Jiang & Aragam, 2023) together with additional assumptions on counterfactual views (Brehmer et al., 2022), distinct paired interventions (von K\u00a8 ugelgen et al., 2023), and graphical conditions (Jiang & Aragam, 2023). 
Despite the exciting developments in the field, one fundamental question pertinent to causal representation learning from multiple distributions remains unanswered\u2013in the most general situation, without assuming parametric models on the data-generating process or the existence of hard interventions in the data, what information about the latent variables and the latent structure can be recovered? This paper attempts to provide an answer to it, which, surprisingly, shows that each latent variable can be recovered up to clearly defined indeterminacies. It suggests what we can achieve in the general case and, furthermore, what unique contribution the typical assumptions that are currently made in causal representation learning from multiple distributions make towards complete identifiability of the latent variables (up to component-wise transformations). This may make it possible to figure out what minimal assumptions are needed to achieve complete identifiability, given partial knowledge of the system. Contributions. Concretely, as our contributions, we show that under a sparsity constraint on the recovered graph over the latent variables and suitable sufficient change conditions on the causal influences, interestingly, one can recover the moralized graph of the underlying directed acyclic graph (Thm. 3.1), and the recovered latent variables and their relations are related to the underlying causal model in a specific, nontrivial way (Thm. 3.4)\u2013each latent variable is recovered as a function of itself and its so-called intimate neighbors in the Markov network implied by the true causal structure over the latent variables. Depending on the properties of the true causal structure over the latent variables, the set of intimate neighbors might even be empty, in which case each latent variable can be recovered up to an invertible transformation (Remark 1).
Lastly, we show how the recovered moralized graph relates to the underlying causal graph under new relaxations of the faithfulness assumption (Thm. 3.5). Simulation studies verify our theoretical findings. 2. Problem Setting Let X = (X1, . . . , Xd) be a d-dimensional random vector that represents the observations. We assume that they are generated by n hidden causal variables Z = (Z1, . . . , Zn) via a nonlinear injective mixing function g : Rn \u2192 Rd (d \u2265 n), which is also a C2 diffeomorphism. Furthermore, the variables Zi\u2019s are assumed to follow a structural equation model (SEM) (Pearl, 2000). Putting them together, the data generating process can be written as X = g(Z) (nonlinear mixing), Zi = fi(PA(Zi), \u03f5i; \u03b8i), i = 1, . . . , n (latent SEM), (1) where PA(Zi) denotes the parents of variable Zi, \u03f5i\u2019s are exogenous noise variables that are mutually independent, and \u03b8i denotes the latent (changing) factor (or effective parameters) associated with each model. (Figure 1: The generating process for each hidden causal variable Zi changes, governed by a latent factor \u03b8i. The observations X are generated by X = g(Z) with a nonlinear mixing function g.) Here, the data generating process of each hidden variable Zi may change, e.g., across domains or over time, governed by the corresponding latent factor \u03b8i; it is commonplace to encounter such changes in causal mechanisms in practice (arising from heterogeneous data or nonstationary time series). In addition, interventional data can be seen as a special type of change, which qualitatively restructures the causal relations. As their names suggest, we assume that the observations X are observed, while the hidden causal variables Z and latent factors \u03b8 = (\u03b81, . . . , \u03b8n) are unobserved.
Let PX and PZ be the distributions of X and Z, respectively, and their corresponding probability density functions be pX(X; \u03b8) and pZ(Z; \u03b8), respectively. To lighten the notation, we drop the subscript in the density when the context is clear. The latent SEM in Eq. (1) induces a causal graph GZ with vertices {Zi}_{i=1}^{n} and edges Zj \u2192 Zi if and only if Zj \u2208 PA(Zi). We assume that GZ is acyclic, i.e., a directed acyclic graph (DAG). This implies that the distribution of the variables Z satisfies the Markov property w.r.t. the DAG GZ (Pearl, 2000), i.e., p(Z; \u03b8) = \u220f_{i=1}^{n} p(Zi | PA(Zi); \u03b8i). We provide an example of the data generating process in Eq. (1) and its corresponding latent DAG GZ in Figure 1. In particular, given the observations X arising from multiple distributions (governed by the latent factors \u03b8), our goal is to recover the hidden causal variables Z = g\u22121(X) and their causal relations up to surprisingly minor indeterminacies. 3. Learning Causal Representations with Sparsity Constraints In this section, we provide theoretical results to show how one is able to recover the underlying hidden causal variables and their causal relations up to certain indeterminacies. Specifically, we show that under a sparsity constraint on the recovered graph over the latent variables and suitable sufficient change conditions on the causal influences, the recovered latent variables are related to the underlying hidden causal variables in a specific, nontrivial way. Such theoretical results serve as the foundation of our algorithm described in Section 4. To start with, we estimate a model (\u02c6 g, \u02c6 f, p \u02c6 Z) which assumes the same data generating process as in Eq. (1) and matches the true distribution of X in different domains: pX(X\u2032; \u03b8\u2032) = p \u02c6 X(X\u2032; \u03b8\u2032), \u2200X\u2032, \u03b8\u2032.
(2) where X and \u02c6 X are generated from the true model (g, f, pZ) and the estimated model (\u02c6 g, \u02c6 f, p \u02c6 Z), respectively. A key ingredient in our theoretical analysis is the Markov network, which represents conditional dependencies among random variables in a graphical manner via an undirected graph. Let MZ be the Markov network over the variables Z, specifically, with vertices {Zi}_{i=1}^{n} and edges (i, j) \u2208 E(MZ) if and only if Zi \u22a5\u0338\u22a5 Zj | Z[n]\\{i,j}.1 Also, we denote by |MZ| the number of undirected edges in the Markov network. In Section 3.1, apart from showing how to estimate the underlying hidden causal variables up to certain indeterminacies, we also show that such a latent Markov network MZ can be recovered up to trivial indeterminacies (i.e., relabeling of the hidden variables). To do so, we make use of the following property (assuming that pZ is twice differentiable): Zi \u22a5\u22a5 Zj | Z[n]\\{i,j} \u21d4 \u2202\u00b2 log p(Z) / (\u2202Zi \u2202Zj) = 0. Such a connection between pairwise conditional independence and cross derivatives of the density function has been noted by Lin (1997) and utilized in Markov network learning for observed variables (Zheng et al., 2023). With the recovered latent Markov network structure, we provide results in Section 3.2 to show how it relates to the true latent causal DAG GZ, by exploiting a specific type of faithfulness assumption that is considerably weaker than the standard faithfulness assumption used in the literature of causal discovery (Spirtes et al., 2001). 3.1. Recovering Hidden Causal Variables and Latent Markov Network We show how one benefits from multiple distributions to recover the hidden causal variables and the true Markov network structure among them up to minor indeterminacies, by making use of a sparsity constraint and sufficient change conditions on the causal mechanisms.
We start with the following result that provides information about the relationship between the Markov network MZ over the true hidden causal variables Z and the Markov network M \u02c6 Z over the estimated hidden variables \u02c6 Z. (We use [n] to denote {1, . . . , n} and Z[n]\\{i,j} to denote {Zi}_{i=1}^{n} \\ {Zi, Zj}.) This result serves as the backbone of our further analysis in this section. Denote by \u2295 the vector concatenation symbol. Theorem 3.1. Let the observations be sampled from the data generating process in Eq. (1), and MZ be the Markov network over Z. Suppose that the following assumptions hold: \u2022 A1 (Smooth and positive density): The probability density function of the latent causal variables is smooth and positive, i.e., pZ is smooth and pZ > 0 over Rn. \u2022 A2 (Sufficient changes): For any Z \u2208 Rn, there exist 2n + |MZ| + 1 values of \u03b8, i.e., \u03b8(u) with u = 0, . . . , 2n + |MZ|, such that the vectors w(Z, u) \u2212 w(Z, 0) with u = 1, . . . , 2n + |MZ| are linearly independent, where the vector w(Z, u) is defined as w(Z, u) = ( \u2202 log p(Z; \u03b8(u))/\u2202Z1, . . . , \u2202 log p(Z; \u03b8(u))/\u2202Zn, \u2202\u00b2 log p(Z; \u03b8(u))/\u2202Z1\u00b2, . . . , \u2202\u00b2 log p(Z; \u03b8(u))/\u2202Zn\u00b2 ) \u2295 ( \u2202\u00b2 log p(Z; \u03b8(u))/(\u2202Zi \u2202Zj) )_{(i,j) \u2208 E(MZ)}. Suppose that we learn (\u02c6 g, \u02c6 f, p \u02c6 Z) to achieve Eq. (2). Then, for every pair of estimated hidden variables \u02c6 Zk and \u02c6 Zl that are not adjacent in the Markov network M \u02c6 Z over \u02c6 Z, we have the following statements: (a) Each true hidden causal variable Zi is a function of at most one of \u02c6 Zk and \u02c6 Zl. (b) For each pair of true hidden causal variables Zi and Zj that are adjacent in the Markov network MZ over Z, at most one of them is a function of \u02c6 Zk or \u02c6 Zl. The proof is provided in Appx. A. Here is the basic idea.
Let h\u2032_{i,l} := \u2202Zi/\u2202\u02c6 Zl, and h\u2032\u2032_{i,kl} := \u2202\u00b2Zi/(\u2202\u02c6 Zk \u2202\u02c6 Zl). Under Assumptions A1 and A2, one can finally show that the following constraints hold: h\u2032_{i,l} h\u2032_{i,k} = 0, (3) h\u2032_{j,l} h\u2032_{i,k} = 0, (4) h\u2032\u2032_{i,kl} = 0. (5) Eq. (3) indicates that Zi is a function of at most one of \u02c6 Zk and \u02c6 Zl, while Eq. (4) implies that, given that Zi and Zj are adjacent in the Markov network MZ, at most one of them is a function of \u02c6 Zk or \u02c6 Zl. It is worth noting that the requirement of a sufficient number of environments has been commonly adopted in the literature (e.g., see (Hyv\u00e4rinen et al., 2023) for a recent survey), such as in visual disentanglement (Khemakhem et al., 2020b), domain adaptation (Kong et al., 2022), video analysis (Yao et al., 2021), and image-to-image translation (Xie et al., 2022). Also, we do not specify exactly how to learn (\u02c6 g, \u02c6 f, p \u02c6 Z) to achieve Eq. (2), and leave the door open for different approaches to be used, such as normalizing flow or variational approaches. For example, we adopt a variational approach in Section 4. The above result sheds light on how each pair of estimated latent variables \u02c6 Zk and \u02c6 Zl that are not adjacent in the Markov network M \u02c6 Z relates to the true hidden causal variables Z. Without any constraint on the estimation process, a trivial solution would be a complete graph over \u02c6 Z. To avoid it, we enforce sparsity of the Markov network over \u02c6 Z. In fact, the Markov network of the underlying DAG GZ can be recovered, as shown in the following theorem, with a proof provided in Appx. B. Theorem 3.2 (Identifiability of Latent Markov Network). Let the observations be sampled from the data generating process in Eq. (1), and MZ be the Markov network over Z. Suppose that Assumptions A1 and A2 from Theorem 1 hold.
Suppose also that we learn (\u02c6 g, \u02c6 f, p \u02c6 Z) to achieve Eq. (2) with the minimal number of edges of the Markov network M \u02c6 Z over \u02c6 Z. Then, the Markov network M \u02c6 Z over the estimated hidden variables \u02c6 Z is isomorphic to the true latent Markov network MZ. Since traditional nonlinear ICA always has a valid solution (to produce nonlinear independent components) (Hyv\u00e4rinen et al., 1999), one may wonder whether it is possible to find nonlinear components as functions of X that are independent in each domain, as produced by recent methods for nonlinear ICA with surrogates (Hyv\u00e4rinen et al., 2019). As a corollary of the above theorem, we show that the answer is no\u2013there do not exist nonlinear components that are independent across domains. Corollary 3.3 (Impossibility of Finding Independent Components). Let the observations be sampled from the data generating process in Eq. (1), and MZ be the Markov network over Z. Suppose that Assumptions A1 and A2 from Theorem 1 hold, and that MZ is not an empty graph. Suppose also that we learn (\u02c6 g, \u02c6 f, p \u02c6 Z) with the components of \u02c6 Z being independent in each domain. Then, (\u02c6 g, \u02c6 f, p \u02c6 Z) cannot achieve Eq. (2). Apart from recovering the true Markov network MZ, we show that the sparsity constraint on the Markov network structure over \u02c6 Z also allows us to recover the underlying hidden causal variables Z up to specific, relatively minor indeterminacies. In this result, the following variable set, termed the intimate neighbor set, plays an important role: \u03a8Zi := {Zj | j \u2260 i, but Zj is adjacent to Zi and all other neighbors of Zi in MZ}. For example, according to the Markov network implied by GZ in Figure 1, \u03a8Z1 = {Z2}, \u03a8Z2 = \u03a6, where \u03a6 denotes the empty set, \u03a8Z3 = {Z2, Z4}, \u03a8Z4 = \u03a6, and \u03a8Z5 = {Z4}.
As another example, according to the Markov network in Figure 2(b), which is implied by the DAG in Figure 2(a), we have \u03a8Zi = \u03a6 for i = 1, 2, 3, 5, 6 and \u03a8Z4 = {Z3, Z6}. Theorem 3.4 (Identifiability of Hidden Causal Variables). Let the observations be sampled from the data generating process in Eq. (1), and MZ be the Markov network over Z. Let NZi be the set of neighbors of variable Zi in MZ. Suppose that Assumptions A1 and A2 from Theorem 1 hold. Suppose also that we learn (\u02c6 g, \u02c6 f, p \u02c6 Z) to achieve Eq. (2) with the minimal number of edges of the Markov network M \u02c6 Z over \u02c6 Z. Then, there exists a permutation \u03c0 of the estimated hidden variables, denoted as \u02c6 Z\u03c0, such that each \u02c6 Z\u03c0(i) is a function of (a subset of) the variables in {Zi} \u222a \u03a8Zi. The proof is given in Appx. D. It is worth noting that in many cases, the above result already enables us to recover some of the hidden variables up to a component-wise transformation. Remark 1. No matter how many neighbors each hidden causal variable Zi has, as long as each of its neighbors is not adjacent to at least one other neighbor in the Markov network MZ, Zi can be recovered up to a component-wise transformation. Even if the above case does not hold, Theorem 3.4 still shows how the estimated hidden variables relate to the underlying causal variables in a specific, nontrivial way. Two examples are provided below. Example 1. First consider the Markov network MZ corresponding to the DAG GZ over Zi in Figure 1. By Theorem 3.4 and a suitable permutation of the estimated hidden variables \u02c6 Z, we have: (a) \u02c6 Z\u03c0(1) is a function of Z1 and possibly Z2, (b) \u02c6 Z\u03c0(2) is a function of Z2, (c) \u02c6 Z\u03c0(3) is a function of Z3 and possibly Z2, Z4, (d) \u02c6 Z\u03c0(4) is a function of Z4, and (e) \u02c6 Z\u03c0(5) is a function of Z5 and possibly Z4.
In this example, the hidden causal variables Z2 and Z4 can be recovered up to component-wise transformation, while the variables Z1, Z3, and Z5 can be identified up to mixtures with certain neighbors in the Markov network. Example 2. One may think that, generally speaking, the more complex GZ is, the more indeterminacies we have in the estimated latent variables (in the sense that each estimated latent variable receives contributions from more latent variables). In fact, this may not be the case. In Example 2, the underlying latent causal graph GZ is given in Figure 2(a), which involves more variables and more edges and whose Markov network is shown in Figure 2(b). (Figure 2: Illustrative example 2. (a) GZ, the DAG over the true latent variables Zi. (b) The corresponding Markov network MZ.) For every variable Zi that is not the sink node, we have \u03a8Zi = \u03a6, and thus Zi can be recovered up to a component-wise transformation. Permutation of estimated latent variables. Theorems 3.2 and 3.4 involve a certain permutation of the estimated hidden variables \u02c6 Z. Such an indeterminacy is common in the literature on causal discovery and representation learning tasks involving latent variables. In our case, since the function h := \u02c6 g\u22121 \u25e6 g, where \u02c6 Z = h(Z), is invertible, there exists a permutation of the latent variables such that the corresponding Jacobian matrix Jh has nonzero diagonal entries (see Lemma 2 in Appx. B); such a permutation is what Theorems 3.2 and 3.4 refer to.
However, in real-world scenarios, the faithfulness assumption could be violated for various reasons, including path cancellations (Zhang & Spirtes, 2008; Uhler et al., 2013). Since our goal is to generalize the identifiability theory as much as possible to fit practical applications, we introduce two relaxations of the faithfulness assumption: Assumption 1 (Single adjacency-faithfulness (SAF)). Given a DAG GZ and distribution PZ over the variable set Z, if two variables Zi and Zj are adjacent in GZ, then Zi \u22a5\u0338\u22a5 Zj | Z[n]\\{i,j}. Assumption 2 (Single unshielded-collider-faithfulness (SUCF) (Ng et al., 2021)). Given a latent causal graph GZ and distribution PZ over the variable set Z, let Zi \u2192 Zj \u2190 Zk be any unshielded collider in GZ; then Zi \u22a5\u0338\u22a5 Zk | Z[n]\\{i,k}. We propose SAF as a relaxation of Adjacency-faithfulness (Ramsey et al., 2012). The SUCF assumption was first introduced by Ng et al. (2021) and is strictly weaker than Orientation-faithfulness (Ramsey et al., 2012). Thus, both of them are strictly weaker than the faithfulness assumption, since the combination of Adjacency-faithfulness and Orientation-faithfulness is weaker than the faithfulness assumption (Zhang & Spirtes, 2008). Interestingly, not only are they weaker variants of faithfulness, but we also prove that they are actually necessary and sufficient conditions, thus the weakest possible ones, to bridge conditional independence relations and causal structures. Specifically, we show that the recovered Markov network is exactly the moralized graph of the true causal DAG if and only if the proposed variants of faithfulness hold. The proofs of Lemma 1 and Theorem 3.5 are shown in Appx. E. Lemma 1. Given a latent causal graph GZ and distribution PZ with its Markov network MZ, under the Markov assumption, the undirected graph defined by MZ is a subgraph of the moralized graph of the true causal DAG GZ. Theorem 3.5.
Given a causal DAG GZ and distribution PZ with its Markov network MZ, under the Markov assumption, the undirected graph defined by MZ is the moralized graph of the true causal DAG GZ if and only if the SAF and SUCF assumptions are satisfied. It is worth noting that the connection between conditional independence relations and causal structures has been developed by (Loh & B\u00fchlmann, 2014; Ng et al., 2021) in the linear case by leveraging the properties of the inverse covariance matrix; our results here focus on the nonparametric case and are thus able to serve the considered general settings for identifiability. Also note that the necessary and sufficient assumptions may be of independent interest for other causal discovery tasks exploring conditional independence relations in the nonparametric case. Discussion on additional assumptions. We investigated how the sparsity constraint on the recovered graph over latent variables and sufficient change conditions on causal influences can be used to recover the latent variables and causal graph up to certain indeterminacies. Our framework is connected with previous ones in a spectrum of related studies (Varici et al., 2023; Ahuja et al., 2023; Buchholz et al., 2022; Squires et al., 2023; Brehmer et al., 2022; von K\u00fcgelgen et al., 2023; Zheng & Zhang, 2023; Zhang et al., 2023). For instance, the connection between conditional independence and cross-derivatives of the log density in both linear and nonlinear cases means our theorems directly apply to linear SEMs. Furthermore, our results do not require the mixing function to be sufficiently nonlinear, allowing them to encompass linear mixing processes as well. At the same time, we may be able to leverage possible parametric constraints on the data generating process (or functions) or specific types of interventions.
For instance, if we know that the changes happen to linear causal mechanisms with Gaussian noises, this constraint can readily help reduce the search space and improve the identifiability. Moreover, since we only require the distribution to change, any type of intervention is covered, since any change to the conditional distribution is allowed. Given the additional information provided by experimental interventions (e.g., single-node interventions), alternative identifiability results that might be particularly useful in certain tasks can be established. We hope this work can provide a helpful, bigger picture of causal representation learning in the general setting and further illustrate the necessity and connections of the different assumptions formulated in this line of work. 4. Change Encoding Network for Representation Learning Thanks to the identifiability result, we now present two different practical implementations to recover the latent variables and their causal relations from observations from multiple domains. We build our method on the variational autoencoder (VAE) framework, and it can be easily extended to other models, such as normalizing flows. We learn a deep latent generative model (decoder) p(X|Z; \u03b8u) and a variational approximation (encoder) q(Z|X, u) of its true posterior p(Z|X; \u03b8u), since the true posterior is usually intractable. To learn the model, we maximize a lower bound of the log-likelihood (equivalently, minimize LELBO): log p(X; \u03b8u) = log \u222b p(X|Z; \u03b8u) p(Z; \u03b8u) dZ = log \u222b q(Z|X, u) [p(X|Z; \u03b8u) p(Z; \u03b8u) / q(Z|X, u)] dZ \u2265 \u2212KL(q(Z|X, u) || p(Z; \u03b8u)) + Eq[log p(X|Z; \u03b8u)] = \u2212LELBO. (6) For the posterior q(Z|X, u), we assume that it is a multivariate Gaussian or a Laplacian distribution, where the mean and variance are generated by the neural network encoder.
As for p(X|Z; \u03b8u), we assume that it is a multivariate Gaussian whose mean is the output of the decoder and whose variance is a pre-defined value. In practice, we can parameterize p(X|Z; \u03b8u) as the decoder, which takes as input the latent representation Z, and q(Z|X, u) as an encoder, which outputs the mean and scale of the posterior distribution. An essential difference from the VAE (Kingma & Welling, 2013) and iVAE (Khemakhem et al., 2020a) is that our method allows the components of Z to be causally dependent, and we are able to learn the components and causal relationships. The key is the prior distribution p(Z; \u03b8u). Now we present two different implementations to capture the changes with a properly defined prior distribution. 4.1. Nonparametric Implementation of the Prior Distribution To recover the relationships and latent variables Z, we build a normalizing flow to mimic the inverse of the latent SEM Zi = fi(PA(Zi), \u03f5i) in Eq. (1). We first assume a causal ordering \u02c6 Z1, . . . , \u02c6 Zn. Then, for each component \u02c6 Zi, we consider the previous components { \u02c6 Z1, . . . , \u02c6 Zi\u22121} as potential parents of \u02c6 Zi, and we can select the true parents with the adjacency matrix \u02c6 A, where \u02c6 Ai,j indicates that component \u02c6 Zj contributes to the generation of \u02c6 Zi. If \u02c6 Ai,j = 0, it means that \u02c6 Zj will not contribute to the generation of \u02c6 Zi. Since \u03b8u governs the changes across domains, we use the observed domain index u to discover the changes. Then, we use the selected parents { \u02c6 Ai,1 \u02c6 Z1, . . . , \u02c6 Ai,i\u22121 \u02c6 Zi\u22121} and the domain label u to generate the parameters of the normalizing flow and apply the flow transformation on \u02c6 Zi to turn it into \u02c6 \u03f5i. Specifically, we have (\u02c6 \u03f5i, log deti) = Flow(\u02c6 Zi; NN({\u02c6 Ai,j \u02c6 Zj}_{j=1}^{i\u22121}, u)), (7) where log deti is the log determinant of the conditional flow transformation on \u02c6 Zi.
To compute the prior distribution, we make an assumption on the noise term \u03f5 that it follows an independent prior distribution p(\u03f5), such as a standard isotropic Gaussian or a Laplacian. Then, according to the change-of-variables formula, the prior distribution of the dependent latents can be written as log p(\u02c6 Z; \u03b8u) = \u2211_{i=1}^{n} (log p(\u02c6 \u03f5i) + log deti). (8) Intuitively, to minimize the KL divergence loss between p(Z; \u03b8u) and q(Z|X, u), the network has to learn the correct structure and the underlying latent variables; otherwise, it can be difficult to transform the dependent latent variables \u02c6 Z to a factorized prior distribution, e.g., N(0, I). 4.2. Parametric Implementation of the Prior Distribution We can make a parametric assumption on the latent causal process to facilitate the learning of the true causal structure and components. Here, we consider the linear SEM; the approach can be generalized to more complex SEMs. Specifically, we assume that the true generation process of the latent Z is linear and only consists of scaling and shifting mechanisms: Z = A(C(u)Z) + S(u)\u03f5 + B(u), (9) where A \u2208 [0, 1]n\u00d7n is a causal adjacency matrix which can be permuted to be strictly lower-triangular, C(u) \u2208 Rn\u00d7n and S(u) \u2208 Rn\u00d71 are the underlying domain-specific scaling matrix and vector for domain u, respectively, B(u) \u2208 Rn\u00d71 is the underlying domain-specific shift vector, and \u03f5 is the independent noise. To estimate the latent variables Z and the causal structure A, and to capture the changes across domains, we introduce the learnable scaling parameters \u02c6 C \u2208 Rn\u00d7n, \u02c6 S \u2208 Rn\u00d71 and bias parameters \u02c6 B \u2208 Rn\u00d71, and pre-define a causal ordering \u02c6 Z1, \u02c6 Z2, . . . , \u02c6 Zn. Then we have the matrix form \u02c6 \u03f5 = (\u02c6 Z \u2212 \u02c6 B(u) \u2212 \u02c6 A \u02c6 C(u) \u02c6 Z) / \u02c6 S(u).
(10) Given a prior distribution of the noise term p(\u02c6 \u03f5), and according to the change-of-variables rule, we have the prior distribution for \u02c6 Z in the parametric case as log p(\u02c6 Z; \u03b8u) = \u2211_{i=1}^{n} (log p(\u02c6 \u03f5i) \u2212 log |\u02c6 S(u)_i|), (11) since \u02c6 A \u02c6 C(u) is strictly lower-triangular and therefore contributes nothing to the Jacobian determinant. 4.3. Full Objective After we have properly defined the needed distributions p(X|Z; \u03b8u), q(Z|X, u), and p(Z; \u03b8u), we can train our model to minimize the loss function LELBO. However, without any further constraint, the powerful network may choose to use the fully connected causal graph during training. In other words, all lower-triangular elements of the estimated graph \u02c6 A are non-zero, which implies that each component \u02c6 Zi is caused by all previous i \u2212 1 components. To exclude such unwanted solutions and encourage the model to learn the true causal structure among the components of Z, we apply \u21131 regularization on \u02c6 A, i.e., Lsparsity = \u2225\u02c6 A\u22251. (12) It is worth noting that the sparsity regularization term above is an approximation of the sparsity constraint on the edges of the estimated Markov network specified in Thms. 3.2 and 3.4, since it is not straightforward to impose the latter constraint in a differentiable end-to-end training process. Finally, the full training objective is Lfull = LELBO + Lsparsity. (13) After the model converges, the output of the encoder \u02c6 Z gives our recovered latents from the observations in multiple domains, and the revealed causal structure is encoded in \u02c6 A, which encapsulates the causal relationships across the components.
Consequently, we consider the following common causal structures: (i) a Y-structure with 4 variables, Z1 \u2192 Z3 \u2190 Z2, Z3 \u2192 Z4, and (ii) a chain structure Z1 \u2192 Z2 \u2192 Z3 \u2192 Z4. The noises are modulated with scalings randomly sampled from Unif[0.5, 2], and the shifts are sampled from Unif[\u22122, 2]. The scalings on Z are also randomly sampled from Unif[0.5, 2]. In other words, the changes are modular. After generating Z, we feed the latent variables into an MLP with orthogonal weights and LeakyReLU activations for invertibility. Specifically, we sample orthogonal matrices as the weights of the MLP layers. Since orthogonal matrices and LeakyReLU are invertible, the MLP function is also invertible. We present the results in Figs. 3 and 4. Each sub-figure consists of 4 \u00d7 4 panels, and the panel in the i-th row and j-th column depicts the relationship between the estimated component \u02c6 Zi and the true latent Zj. We can see that in most cases, our model learns a strong one-to-one correspondence between the estimated components and the true components. For instance, the first column in Fig. 3 shows that \u02c6 Z1 is strongly correlated with the true component Z1 while it is nearly independent of the true Z2. From the estimated \u02c6 A, we find that our method is able to recover the true causal structure. For instance, on the Y-structure with Z1 \u2192 Z3 \u2190 Z2 and Z3 \u2192 Z4, our estimated model keeps only the components \u02c6 A1,3, \u02c6 A2,3, \u02c6 A3,4 nonzero with the proposed sparsity regularization. The estimated causal graph is consistent with the true Y-structure causal graph. We can also see that the latent causal structure is recovered from Figs. 3 and 4. We observe that the learned \u02c6 Z1 is strongly correlated with the true Z2 and is independent of the true Z1, but correlated with \u02c6 Z3 and \u02c6 Z4. These results align well with the true causal graph, since Z2 is independent of Z1 while being a cause of Z3 and Z4.
The experiments support our theoretical result that the components and structure are identifiable up to certain indeterminacies. As for the results in Fig. 3, we observe that our non-parametric method is still able to recover the true latent variables with Laplace noise. (Figure 3: Recovered latent variables vs. the true latent variables with the Non-Parametric Approach. (a) Y-structure with Laplace noise. (b) Y-structure with Gaussian noise. (c) Chain structure with Laplace noise. (d) Chain structure with Gaussian noise. In each sub-figure, the panel in the i-th row and j-th column depicts the relationship between the estimated \u02c6 Zi and the true component Zj. Figure 4: Recovered latent variables vs. the true latent variables with the Linear Parameterization Approach. The X-axis denotes the components of the true latent variables Z and the Y-axis represents the components of the estimated latent variables \u02c6 Z. (a) Y-structure with Laplace noise. (b) Y-structure with Gaussian noise. (c) Chain structure with Laplace noise. (d) Chain structure with Gaussian noise.) 5. Related Work Causal representation learning aims to unearth causal latent variables and their relations from observed data. Despite its significance, the identifiability of the hidden generating process is known to be impossible without additional constraints, especially with only observational data. In the linear, non-Gaussian case, Silva et al. (2006) recover the Markov equivalence class, provided that each observed variable has a unique latent causal parent; Xie et al. (2020); Cai et al. (2019) estimate the latent variables and their relations assuming at least twice as many measured variables as latent ones, which has been further extended to learn the latent hierarchical structure (Xie et al., 2022). Moreover, Adams et al. (2021) provide theoretical results on the graphical conditions for identification. In the linear, Gaussian case, Huang et al.
(2022) leverage rank deficiency of the observed sub-covariance matrix to estimate the latent hierarchical structure, while Dong et al. (2023) further extend the rank constraint to accommodate flexibly related latent and observed variables. In the discrete case, Kivva et al. (2021) identify the hidden causal graph up to Markov equivalence by assuming a mixture model where the observed children sets of any pair of latent variables are different. Given the challenge of identifiability on purely observational data, a different line of research leverages experiments by assuming access to various types of interventional data. Based on single-node perfect interventions, Squires et al. (2023) leverage single-node interventions for the identifiability of a linear causal model and linear mixing function; Varici et al. (2023) accommodate a nonlinear causal model with a linear mixing function; Varici et al. (2023), Buchholz et al. (2023), and Jiang & Aragam (2023) provide identifiability for a nonparametric causal model and linear mixing function; Ahuja et al. (2023) further generalize the result to nonparametric causal models and polynomial mixing functions with additional constraints on the latent support; and Brehmer et al. (2022), von K\u00fcgelgen et al. (2023), and Jiang & Aragam (2023) explore nonparametric settings for both the causal model and the mixing function. In addition to single-node perfect interventions, Brehmer et al. (2022) introduce counterfactual pre- and post-intervention views; von K\u00fcgelgen et al. (2023) assume two distinct, paired interventions per node for multivariate causal models; Zhang et al. (2023) explore soft interventions on polynomial mixing functions; and Jiang & Aragam (2023) place specific structural restrictions on the latent causal graph.
Our study lies in the line of work leveraging only observational data, and provides identifiability results in the general nonparametric setting on both the latent causal model and the mixing function. Unlike prior works with observational data, we do not impose any parametric assumptions or graphical restrictions; compared to those relying on interventional data, our results naturally benefit from the heterogeneity of observational data (e.g., multi-domain data, nonstationary time series) and avoid additional experiments for interventions." + } ], + "Jalal Etesami": [ + { + "url": "http://arxiv.org/abs/2312.16707v1", + "title": "Modeling Systemic Risk: A Time-Varying Nonparametric Causal Inference Framework", + "abstract": "We propose a nonparametric and time-varying directed information graph\n(TV-DIG) framework to estimate the evolving causal structure in time series\nnetworks, thereby addressing the limitations of traditional econometric models\nin capturing high-dimensional, nonlinear, and time-varying interconnections\namong series. This framework employs an information-theoretic measure rooted in\na generalized version of Granger-causality, which is applicable to both linear\nand nonlinear dynamics. Our framework offers advancements in measuring systemic\nrisk and establishes meaningful connections with established econometric\nmodels, including vector autoregression and switching models. We evaluate the\nefficacy of our proposed model through simulation experiments and empirical\nanalysis, reporting promising results in recovering simulated time-varying\nnetworks with nonlinear and multivariate structures. We apply this framework to\nidentify and monitor the evolution of interconnectedness and systemic risk\namong major assets and industrial sectors within the financial network.
We\nfocus on cryptocurrencies' potential systemic risks to financial stability,\nincluding spillover effects on other sectors during crises like the COVID-19\npandemic and the Federal Reserve's 2020 emergency response. Our findings\nreveal significant, previously underrecognized pre-2020 influences of\ncryptocurrencies on certain financial sectors, highlighting their potential\nsystemic risks and offering a systematic approach to tracking evolving\ncross-sector interactions within financial networks.", + "authors": "Jalal Etesami, Ali Habibnia, Negar Kiyavash", + "published": "2023-12-27", + "updated": "2023-12-27", + "primary_cat": "econ.EM", + "cats": [ + "econ.EM", + "cs.AI", + "cs.IT", + "math.IT", + "stat.AP" + ], + "main_content": "Introduction Understanding the interconnection between financial institutions, especially in the context of systemic risk, is of great importance. In principle, there are two main approaches to measuring such interconnections between institutions in the literature, which are mainly visualized by a directed graph. One approach is based on a mathematical model of financial market participants, and the relations are derived from a combination of information extracted from financial statements, like the market value of liabilities of counterparties. The other approach, which is also adopted in this work, is based on statistical analysis of financial series related to the institutions of interest. Nevertheless, most existing methods in the literature rely on pairwise comparisons or impose additional assumptions on the underlying model, such as linearity, and often presume time-invariant interconnections. In this work, we introduce a novel framework designed to capture causal relationships within a time series network while relaxing these assumptions. To highlight a few relevant works, Billio et al.
(2012) propose systemic risk measures to capture the connections between the monthly returns of various financial institutions (hedge funds, banks, brokers, and insurance companies), utilizing pairwise linear Granger-causality tests. Alternatively, in their work, Diebold and Y\u0131lmaz (2014) propose a connectedness measure based on generalized variance decomposition (GVD). This measure, again limited to linear systems, also suffers from disregarding the entire network akin to the pairwise analysis commonly used in the literature, as we will discuss in Section 2.6.3. Barigozzi and Hallin (2016) focus on one particular network structure: the long-run variance decomposition network (LVDN). Similar to the work by Diebold and Y\u0131lmaz (2014), the LVDN defines a weighted directed graph where the weight associated with edge (i, j) represents the proportion of h-step-ahead forecast error variance of variable i which is accounted for by the innovations in variable j. LVDNs are also limited to linear systems. Building on these approaches, there are several attempts in the literature to go beyond linear models. For instance, Andersen et al. (2019) provide a unifying theory for parametric nonlinear factor models based on a panel of noisy observations. The nonlinear model is governed by stochastic differential equations, and the estimation procedure is carried out via penalized least squares. The inference theory for penalized least squares with a fixed time span has been studied by Andersen et al. (2015). Later, Andersen et al. (2019) develop an inference theory that applies in either the fixed or the large time span case. Expanding further on nonlinear methodologies, Bonaccolto et al. (2019) study quantile-based methods of Granger causality and multiplex networks. This work provides a possible solution to combine a multiplex network, i.e., a collection of networks existing between a set of subjects.
Ideally, the constituents of the multiplex network represent the outcomes of different approaches to the estimation or identification of links between the analyzed subjects. Another contribution of this work is developing a quantile-based method to interpret the risk of the estimated financial networks that identifies causality among quantiles of the modelled variables. It is important to realize that this method is consistent with the works by Hong et al. (2009) and Corsi et al. (2018) that focus on causality among tail events. The main idea is to capture causalities that lie not in the center of the distributions (i.e., in the mean) but in their tails. We discuss this method in detail in Section 2.6.5. Etesami et al. (2017) use a non-parametric information-theoretic measure to infer causal relations in a network; however, it relies on an assumption that the causal network is time invariant. Herein, we extend this result to time-varying networks. Connectedness measures based on correlation remain widespread. Such measures capture only pairwise association and are mainly studied for linear Gaussian models. This makes them of limited value in finance. Different approaches have been developed to relax these assumptions. For example, the equi-correlation approach of Engle and Kelly (2012) uses average correlations across all pairs. The CoVaR approach of Adrian and Brunnermeier (2008) measures the value-at-risk of financial institutions conditional on other institutions experiencing financial distress. The marginal expected shortfall approach of Acharya et al. (2017) measures the expected loss to each financial institution conditional on the entire set of institutions\u2019 poor performance. Although these measures rely less on linear Gaussian methods, a general framework that can capture non-pairwise connectedness in time-varying networks remains elusive. Introducing such a measure is the main purpose of this work.
Recently, there have been various attempts to infer time-varying causal relations among time series. These methods generally fall into three categories: rolling-window methods, change-point detection techniques, and model-based approaches. Rolling-window methods, like Lu et al. (2014), use rolling sub-samples for time-varying Granger causality in oil markets, building on Hong's (2001) Granger causality test. This method, and its adaptations by (Hong et al., 2009; Shi et al., 2018; Baum et al., 2021; Zhang et al., 2021), primarily reveal causal relationships in means and variances between time series. However, the absence of such causalities does not rule out all causality, and these tests assume an ARMA-GARCH model for the conditional variance. Shi et al. (2018) introduced a recursive evolving window test for detecting causal relationship changes, while Phillips et al. (2017) examined nonlinear cointegration models with time-varying structural coefficients, using non-parametric kernel methods for estimation. In statistical analysis, change-point detection is crucial for identifying instances when a process\u2019s probability distribution changes. This technique is particularly useful in pinpointing alterations in causal networks, such as changes in the parent sets of network nodes or in their conditional probabilities. Detecting these shifts in high-dimensional distributions remains a challenging task (Aminikhanghahi and Cook, 2017; Wang et al., 2021; Barigozzi et al., 2021; Pelger and Xiong, 2022). To overcome these challenges, several model-based methods have been developed. For example, Barigozzi et al. (2018, 2021) propose a framework for the estimation of time series factor models with multiple change-points in their second-order structure. As we will discuss in Section 2.8, our proposed method is also capable of detecting such changes in the conditional distributions.
Model-based approaches like Markov switching models describe dynamic interconnections via time-dependent or state-dependent model parameters. Bianchi et al. (2019) introduce a Markov switching graphical model for analyzing time-varying systemic risk using multi-factor asset pricing models. This model describes each time series as a dynamic multi-factor linear model with normally distributed residuals. They employ a Markov chain Monte Carlo (MCMC) scheme for parameter inference and use weighted eigenvector centrality to identify latent states. However, their time-varying network, defined as a Markov random field in the error terms, fails to adequately represent causal relationships as it forms an undirected graph. Recently, Billio et al. (2022) suggest a tensor-on-tensor regression for multilayer networks, combining zero-inflated logistic regression and Markov-switching coefficients for structural changes, using Bayesian inference with Gibbs sampling. Despite its innovations, this approach shares limitations in causal network inference similar to those of Bianchi et al. (2019). Most of the aforementioned methods are studied in small networks. The inference problem in high-dimensional settings requires estimating a large number of parameters, and thus most of the above methods may not be applicable. Billio et al. (2019) propose a Bayesian nonparametric Lasso prior for high-dimensional VAR models that can improve estimation efficiency and prediction accuracy. This approach clusters the VAR coefficients into groups and shrinks the coefficients of each group toward a common location to overcome over-parametrization and overfitting issues. Another related work is (Petrova, 2019), in which the authors propose a quasi-Bayesian local likelihood estimation methodology for multivariate models in a high-dimensional setting with time-varying parameters. However, these methods are limited to linear models.
To overcome this limitation, Kalli and Griffin (2018) propose a Bayesian non-parametric VAR model that allows for nonlinearity in the conditional mean, heteroscedasticity in the conditional variance, and non-Gaussian innovations. However, unlike the BNP-Lasso, it does not allow sparsity in the model. Iacopini and Rossini (2019) and Bernardi and Costola (2019) tackle the curse of dimensionality through a two-stage prior specification. In the first stage, a spike-and-slab prior distribution is used for each entry of the coefficient matrix. In the second stage, prior dependence is imposed on the coefficients by specifying a Markov process for their random distribution. The causal network is a by-product of this estimation procedure. Within the hierarchical conditional Bayesian prior methods, Korobilis and Pettenuzzo (2019) develop a new estimation algorithm for VARs that breaks the multivariate estimation problem into a series of independent tasks, consequently reducing the curse of dimensionality. Note that all the aforementioned methods are limited to VAR models. Hu\u00e9 et al. (2019) propose a new network measure of systemic risk that combines the pair-wise Granger causality approach with the leave-one-out concept. This methodology allows them to deal with the issue of indirect causalities without facing the inherent curse of dimensionality that arises in multivariate approaches. However, since this approach only leaves out one financial institution at a time, it fails to capture causalities with more than one indirect path. In this work, we propose a framework for estimating time-varying causal relations in a general and possibly high-dimensional network of time series. 1.1. Contributions In our research, we contribute both theoretically and empirically to the understanding of dynamic causal relationships within complex networks of time series, especially in the financial domain and the emerging influence of crypto assets.
Theoretically, we introduce the time-varying directed information (TV-DI) measure, an information-theoretic framework specifically designed to capture the evolving nature of causal relationships in a general network of time series. Additionally, we enhance the estimation process of directed information (DI) by proposing a non-parametric method. This method leverages rolling-window moving block bootstrapping in conjunction with non-parametric estimators for mutual information, such as k-nearest-neighbor and kernel-based methods. The efficacy of this estimator is rigorously validated through a series of simulated experiments, showcasing its robustness and accuracy. We also discuss different approaches to tackle the curse of dimensionality when estimating DI in large networks. Furthermore, we introduce a DI-based measure for inferring non-parametric and non-pairwise aggregated causal effects between two disjoint subsets of time series. This can be used to quantify the causal effect of, for instance, one financial sector on another. In the context of network analysis, this translates into estimating the influence of an entire sub-network, rather than individual nodes, on another sub-network. Moreover, we quantify the contributions of each individual time series to such aggregated causal effects. This is useful for detecting the most or least influential asset in a financial sector influencing another sector, providing deeper insights into the dynamics of systemic risk. We also establish the connection between our framework and several well-known econometric models such as VAR, GARCH, and switching models. We show how our framework improves the measurement of systemic risk. Empirically, to the best of our knowledge, this is the inaugural study that investigates the evolving dynamics of systemic risk posed by crypto assets to financial networks, marking a significant advancement in empirical financial network analysis. 2.
Methodology In order to investigate the dynamics of systemic risk, it is important to measure the causal relationships between financial institutions. In this section, we introduce a statistical approach to measure such causal interconnections using a generalized version of Granger causality. We begin by introducing some notation. Plain capital letters denote random variables or processes, while lowercase letters denote their realizations. Bold lowercase and capital letters are used for column vectors and matrices, respectively. Calligraphic letters are used for denoting sets. We use Xj,t to denote a time series Xj at time t and X_j^t to denote the time series Xj up to time t. For a set A = {a1, ..., an} and an index set I \u2286 {1, ..., n}, we define A\u2212I := A \\ {ai : i \u2208 I}. 2.1. Graphical Models and Granger Causality Researchers from different fields have developed various graphical models suitable for their application of interest to encode interconnections among variables or processes. For instance, Koller and Friedman (2009) describe Markov networks and Bayesian networks (BNs), and Murphy (2002) introduces dynamic Bayesian networks (DBNs). These are three examples of such graphical models that have been used extensively in the literature. Markov networks are undirected graphs that represent the conditional independence between the variables. On the other hand, BNs and DBNs are directed acyclic graphs (DAGs) that encode conditional dependencies in a reduced factorization of the joint distribution. In DBNs, the size of the graph depends on the time-homogeneity and the Markov order of the random processes. Therefore, in general, the size of the graphs can grow with time. As an example, the DBN graph of a vector autoregressive (VAR) model introduced by Dahlhaus and Eichler (2003) with m processes each of order L requires mL nodes. Hence, they are not suitable for succinct visualization of relationships between the time series. Similar to the works by Quinn et al.
(2015) and Massey (1990), we use directed information graphs (DIGs) to encode interconnections among the financial institutions, in which each node represents a time series. Below, we formally introduce this type of graphical model. We use an information-theoretical generalization of the notion of Granger causality to determine the interconnection between time series. The basic idea in this framework was originally introduced by Wiener (1956) and later formalized by Granger (1969). The idea is as follows: \u201cwe say that X is causing Y if we are better able to predict the future of Y using all available information than if the information apart from the past of X had been used.\u201d Despite the broad philosophical viewpoint of Granger (1963), his formulation for practical implementation was done using autoregressive models and linear regression. This version has been widely adopted in econometrics and other disciplines. More precisely, in order to identify the influence of Xt on Yt in a VAR comprising three time series {X, Y, Z}, Granger\u2019s idea is to compare the performance of two linear regressions: the first one predicts Yt using {X^{t\u22121}, Y^{t\u22121}, Z^{t\u22121}} and the second one predicts Yt given {Y^{t\u22121}, Z^{t\u22121}}. Clearly, the performance of the second predictor is bounded by the first one. If they have the same performance, then we say X does not Granger cause Y. Below, we introduce directed information (DI), an information-theoretical measure that generalizes Granger causality beyond linear models. DI has been used in many applications to infer causal relationships. For example, Quinn et al. (2011) and Kim et al. (2011) used it for analyzing neuroscience data, and Etesami et al. (2017) applied it to market data. 2.2. Directed Information Graphs (DIGs) Consider a causal dynamical system comprised of three time series {X, Y, Z}.
To answer whether X has influence on Y over the time horizon [1, T], we compare the average performance of two particular predictors over this time horizon. The first predictor is non-nested and uses the full history of all three time series, while the second one is nested and uses the history of all processes excluding process X. On average, the performance of the predictor with less information (the second one) is upper bounded by the performance of the predictor with more information (the first one). However, when the performances of the two predictors are close over the time horizon [1, T], we declare that X does not influence Y over this time horizon. In order to consider higher-order moments, our predictions lie in the space of probability measures. More precisely, the first prediction at time t is given by P(Yt|X^{t\u22121}, Y^{t\u22121}, Z^{t\u22121}), that is, the conditional distribution of Yt given the past of all processes, and the second predictor is given by P(Yt|Y^{t\u22121}, Z^{t\u22121}). To measure the performance of a predictor, we use a nonnegative loss function which defines the quality of the prediction. This loss function increases as the prediction deviates from the true outcome. Although there are many candidate loss functions, e.g., the squared error loss, absolute loss, etc., for the purpose of this work, we consider the logarithmic loss. More precisely, when the outcome yt is revealed for Yt, the two predictors incur losses \u21131,t := \u2212log P(Yt = yt|X^{t\u22121}, Y^{t\u22121}, Z^{t\u22121}) and \u21132,t := \u2212log P(Yt = yt|Y^{t\u22121}, Z^{t\u22121}), respectively. This loss function has meaningful information-theoretical interpretations, and it is related to the Shannon entropy. The log-loss is the Shannon code length, i.e., the number of bits required to efficiently represent a symbol yt. Thus, it may be thought of as the description length of yt. For more justifications on this loss function, see the work by Quinn et al. (2015).
The reduction in the loss at time t, known as the regret, is defined as r_t := \u21132,t \u2212 \u21131,t = log [ P(Yt = yt|X^{t\u22121}, Y^{t\u22121}, Z^{t\u22121}) / P(Yt = yt|Y^{t\u22121}, Z^{t\u22121}) ]. (1) Note that the expected regrets are non-negative. The average regret over the time horizon [1, T], i.e., (1/T) \u2211_{t=1}^{T} E[r_t], is called directed information (DI). This will be our measure of causation, and its value determines the strength of influence. If this quantity is close to zero, it indicates that the past values of time series X contain no information that would help in predicting the future of time series Y given the history of Y and Z. (Footnote 1: In causal systems, given the full past of the system, the present values of the processes become independent. In other words, there are no simultaneous relationships between the processes.) This definition can be generalized to more than three processes as follows. Definition 1. Consider a network of m time series R := {R1, ..., Rm}. We define the directed information from Ri to Rj as I(Ri \u2192 Rj || R\u2212{i,j}) := (1/T) \u2211_{t=1}^{T} E[ log ( P(Rj,t | R^{t\u22121}) / P(Rj,t | R^{t\u22121}_\u2212{i}) ) ], (2) where R\u2212{i,j} := R \\ {Ri, Rj} and R^{t\u22121}_\u2212{i} denotes {R_1^{t\u22121}, ..., R_m^{t\u22121}} \\ {R_i^{t\u22121}}. We declare that Ri causes Rj within the time horizon [1, T] if and only if I(Ri \u2192 Rj || R\u2212{i,j}) > 0. Definition 2. The directed information graph (DIG) of a set of m processes {R1, ..., Rm} is a weighted directed graph G = (V, E, W), where the nodes V represent processes and an arrow (Ri, Rj) \u2208 E denotes that Ri influences Rj with weight w_{j,i} := I(Ri \u2192 Rj || R\u2212{i,j}). Consequently, (Ri, Rj) \u2209 E if and only if w_{j,i} = 0. Pairwise comparison has been applied in the literature to identify the causal structure of time series. The works by Billio et al. (2012), Billio et al. (2010), and Allen et al. (2010) are such examples. Pairwise comparison is not correct in general and fails to capture the true causal relationships.
See Appendix A for an example. 2.3. Estimating the DIs Given the definition of DI in Equation (2), it is straightforward to see that DI can be written as a summation of conditional mutual information terms, I(Ri \u2192 Rj || R\u2212{i,j}) = (1/T) \u2211_{t=1}^{T} I(Rj,t; R_i^{t\u22121} | R^{t\u22121}_\u2212{i,j}, R_j^{t\u22121}), (3) where I(X; Y | Z) denotes the conditional mutual information between X and Y given Z. For more details, see the book by Cover and Thomas (2012). Therefore, parametric and non-parametric estimators for the conditional mutual information can be used to estimate DI. There are different methods that can be used to estimate the terms in Equation (3) given i.i.d. samples, such as the plug-in empirical estimator, the kernel estimator, and the k-nearest-neighbor estimator. For an overview of such estimators, including their asymptotic behavior, see the articles by Paninski (2003), Noshad et al. (2019), and Jiao et al. (2013). For our experimental results, we used the k-nearest method since it shows relatively better performance compared to the other non-parametric estimators. For more details, see Appendix B. Moving Block Bootstrap. It is noteworthy that the aforementioned non-parametric estimators require a large amount of i.i.d. data to output accurate DI estimates. However, often in practice (e.g., when analyzing financial markets), such an amount of i.i.d. data is not available within a short time window. To overcome this challenge, we adopt the moving block bootstrap (MBB) method. The MBB, introduced by Kunsch (1989), is a non-parametric bootstrap procedure that can be applied to time series for replicating data. It obtains replicated series of data by drawing with replacement from blocks of consecutive data. This method is well investigated in the case of strictly stationary strong mixing time series by Kunsch (1989); Lahiri (1999) and also in the non-stationary case by Fitzenberger (1998); Gon\u00e7alves and White (2002); Synowiecki (2007).
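Since (3) expresses DI as an average of conditional mutual information terms, any conditional mutual information estimator yields a DI estimate. As a minimal illustration, a plug-in empirical estimator for discrete samples could look like the following sketch; the function name is ours, and this is the simple plug-in variant mentioned in the text, not the k-nearest estimator the paper actually uses.

```python
import numpy as np
from collections import Counter

def plugin_cmi(x, y, z):
    """Plug-in empirical estimate of I(X; Y | Z) in nats from discrete
    samples; x, y, z are equal-length sequences of hashable symbols."""
    n = len(x)
    c_xyz = Counter(zip(x, y, z))
    c_xz = Counter(zip(x, z))
    c_yz = Counter(zip(y, z))
    c_z = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in c_xyz.items():
        # p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]; the 1/n factors cancel
        cmi += (c / n) * np.log(c * c_z[zi] / (c_xz[(xi, zi)] * c_yz[(yi, zi)]))
    return cmi
```

Averaging such terms over t, with Y = Rj,t and X the past of Ri, gives a plug-in DI estimate; for continuous financial data, the k-nearest-neighbor estimator referenced in the text is generally preferable.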
Figure 1: Rolling window MBB for constructing resamples. In this work, we combine the rolling window method and MBB to estimate the DIs in (37), as depicted in Figure 1. More specifically, a window of length T_rw is selected (this is the rolling window and will be moved throughout the entire horizon [1, T]), and then batches of bootstrap resamples from the selected window are generated. This is done by first dividing the window into overlapping blocks of length L_B, which leads to a total of T_rw \u2212 L_B + 1 blocks. By randomly choosing T_rw/L_B blocks (with replacement) and concatenating them, we create a bootstrap resample. Repeating this process several times constructs a batch of bootstrap resamples. The generated bootstrap resamples are treated as i.i.d. samples to estimate the DIs in Equation (37) via the k-nearest estimator.
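The resampling step above can be sketched as follows; this is a minimal illustration, with function and variable names of our choosing, drawing the T_rw/L_B blocks uniformly from the T_rw − L_B + 1 overlapping candidates.

```python
import numpy as np

def mbb_resample(window, block_len, rng):
    """Draw one moving-block-bootstrap resample from `window` (length T_rw):
    choose T_rw/block_len of the T_rw - block_len + 1 overlapping blocks
    uniformly with replacement and concatenate them."""
    t_rw = len(window)
    n_blocks = t_rw // block_len
    starts = rng.integers(0, t_rw - block_len + 1, size=n_blocks)
    return np.concatenate([window[s:s + block_len] for s in starts])

rng = np.random.default_rng(0)
series = np.sin(0.1 * np.arange(120)) + 0.1 * rng.standard_normal(120)
batch = [mbb_resample(series, block_len=12, rng=rng) for _ in range(50)]
```

Each resample preserves the short-range dependence inside blocks while breaking long-range dependence, which is what makes the batch usable as approximately i.i.d. input for the DI estimators.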
Note that in this particular example where the relationships are linear, the support of the coefficient matrix encodes the corresponding DIG of the network (Etesami and Kiyavash, 2014). In order to compare the strength of causal relationships X2 \u2192X1 and X3 \u2192X1 over a time horizon [1, T], we compare the performance of two linear predictors of X1,t. The first one predicts X1,t using {Xt\u22121 1 , Xt\u22121 3 } and the other one uses {Xt\u22121 1 , Xt\u22121 2 }. If the first predictor shows better performance compared to the second one, it implies that X3 contains more relevant information about X1 compared to X2. In other words, X3 has a stronger influence on X1 than X2. To measure the performance of these two predictors over the time horizon [1, T], we consider the mean squared errors. e1 := min (a,b)\u2208R2 1 T T X t=1 E||X1,t \u2212(aX1,t\u22121 + bX3,t\u22121)||2, (5) e2 := min (c,d)\u2208R2 1 T T X t=1 E||X1,t \u2212(cX1,t\u22121 + dX2,t\u22121)||2, (6) It is easy to show that e1 = 1 + 0.12 and e2 = 1 + 0.32. Since e1 < e2, we infer that X3 has stronger influence on X1 compared to X2. Similar to the directed information, we generalize the above framework to nonlinear systems. Consider a network of m time series {R1, ..., Rm} with corresponding DIG G = (V, E, W). Suppose that (Ri, Rj), (Rk, Rj) \u2208E, i.e., Ri and Rk both are parents of Rj. We say Ri has stronger influence on Rj compared to Rk within [1, T] if P(Rj,t|Rt\u22121 \u2212{k}) is a better predictor than P(Rj,t|Rt\u22121 \u2212{i}) over that time horizon, i.e., 1 T T X t=1 E h \u2212log P(Rj,t|Rt\u22121 \u2212{k}) i < 1 T T X t=1 E h \u2212log P(Rj,t|Rt\u22121 \u2212{i}) i . (7) The above inequality holds if and only if I(Ri \u2192Rj||R\u2212{i,j}) > I(Rk \u2192Rj||R\u2212{k,j}). Thus, the DI in Equation (2) quantifies the causal relationships in a network. For instance, in the system of (4), we have I(X2 \u2192X1||X3) = 1 2 log(1.01) < 1 2 log(1.09) = I(X3 \u2192X1||X2). 
This means that X3 has a stronger influence on X1 than X2, which is consistent with the result of the linear predictors. Figure 3: DIG of the system in (4) and a corresponding tree approximation. 2.5. DIG in High-dimensional Settings For large networks with thousands of nodes or millions of edges, DIGs become too complex to infer (i.e., inference requires a large amount of data) and also too complicated for direct human analysis. There are several approaches for inferring the DIG of large networks. To mention a few: approximate large networks with smaller and simpler networks, use side information (e.g., the model class) to develop alternative, simpler inference algorithms, and reduce the complexity of DI by reducing the size of the conditioning set. Below, we briefly discuss each of these approaches. A major approach to managing large high-dimensional problems is to keep a few edges of the causal network which together best approximate the dynamics of the system. In this case, instead of the true DIG, we obtain an approximation of it with a smaller number of edges (a sparser network). For example, Figure 3 depicts the DIG of the system in (4) and a tree approximation. Quinn et al. (2013) propose an efficient algorithm to identify the best directed tree approximation, where goodness of approximation is measured by the Kullback-Leibler (KL) divergence from the full joint distribution to the distribution induced by the directed tree, i.e., min_{P^T_R \u2208 Tree_R} D(P_R || P^T_R), (9) where D(\u00b7||\u00b7) denotes the KL divergence, P_R is the joint distribution of R, and Tree_R is the set of all distributions on R = {R1, ..., Rm} whose DIGs are directed trees. They showed that the above optimization is equivalent to the following problem: max_{P^T_R \u2208 Tree_R} \u2211_{i=1}^{m} I(R_{a_i} \u2192 R_i), (10) where a_i denotes the parent node of R_i in the DIG of P^T_R \u2208 Tree_R.
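For intuition, the optimization in (10) can be solved by brute force on a very small network: enumerate all directed trees (each non-root node picks one parent, with no cycles) and score each by the sum of its pairwise DIs. The sketch below uses a made-up 3-node pairwise DI matrix; Quinn et al. (2013) of course use an efficient algorithm rather than enumeration.

```python
import itertools
import math

def best_directed_tree(di):
    """Brute-force solver for eq. (10): di[j][i] holds the pairwise DI
    I(R_i -> R_j); returns the best score and a {child: parent} map."""
    m = len(di)
    best_score, best_parents = -math.inf, None
    for root in range(m):
        others = [j for j in range(m) if j != root]
        for choice in itertools.product(range(m), repeat=len(others)):
            parent = dict(zip(others, choice))
            if any(j == p for j, p in parent.items()):
                continue  # no self-loops
            def reaches_root(j):
                # every parent chain must reach the root without a cycle
                seen = set()
                while j != root:
                    if j in seen:
                        return False
                    seen.add(j)
                    j = parent[j]
                return True
            if not all(reaches_root(j) for j in others):
                continue
            score = sum(di[j][p] for j, p in parent.items())
            if score > best_score:
                best_score, best_parents = score, dict(parent)
    return best_score, best_parents

# hypothetical pairwise DI values for a 3-node network (node 2 drives 0 and 1)
di = [[0.0, 0.05, 0.5],
      [0.02, 0.0, 0.3],
      [0.01, 0.02, 0.0]]
score, parents = best_directed_tree(di)
```

On this toy matrix the best tree roots at node 2 and attaches both other nodes directly to it, since those two edges carry the largest pairwise DIs.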
This result shows that the best tree approximation of a DIG can be obtained by calculating the pairwise DIs, i.e., $I(R_j \to R_i)$ for all $i \neq j \in \{1, ..., m\}$, instead of (2). It is important to emphasize that the complexity of (2) increases with the network size. On the other hand, the complexity of the terms in (10) is independent of the network size; hence, in large networks, inferring the best directed tree approximation may be more suitable than inferring the complete DIG. A generalized version of this approximation is proposed by Quinn et al. (2017), who give an algorithm to identify the optimal bounded in-degree approximation. In this algorithm, there is a trade-off between the complexity and the accuracy of the approximation.

² It is called a tree because there is no cycle in the graph.

Using side information about the underlying dynamics (e.g., that they are linear) or the structure (e.g., that the true DIG is a directed tree) can help to develop alternative learning algorithms with lower complexity. As an example, Etesami and Kiyavash (2014) show that when the underlying dynamics are linear, the coefficient matrices of the model determine the corresponding DIG of the system. Therefore, given the side information that the system is linear, one can infer its DIG by estimating the coefficient matrices of the model instead of estimating the DIs in (2). Inferring the coefficients is a much simpler problem, in terms of both computational and sample complexity, than estimating the DIs.

Another approach to managing high-dimensional settings is to substitute the conditioning set $R_{-\{i,j\}}$ in (2) with a smaller set that contains the parents of node $R_j$. This reduces both the computational and sample complexity of the problem without introducing any approximation error. This is because, if $I(R_i \to R_j||R_{-\{i,j\}}) = 0$, then for any subset $S_j \subseteq R_{-\{i,j\}}$ that contains all the parents of $R_j$, we have $I(R_i \to R_j||S_j) = 0$.
However, forming such subsets may not always be possible. In that case, one approach is to form a subset $S'_j \subset R_{-\{i,j\}}$ that contains the parents of $R_j$ with high probability. This reduces the complexity but may introduce approximation error. We use this approach in our experiments to infer the causal network of 124 major financial institutions; more precisely, we form $S'_j$ by collecting the 10 institutions that have the highest correlation with institution $R_j$.

2.6. DIG of Econometric Models

In the remainder of this section, we study well-known econometric models and show how their parameters are related to their corresponding DIGs. These relationships can be used to infer the interconnections within these models without directly estimating the DIs. We also discuss the shortcomings of other approaches, compared with DI, when facing the network identification problem in these models.

2.6.1. VAR models:

Consider a set of $m$ stationary time series whose relationships are captured by the following model:
$$r_t = \sum_{k=1}^{p} A_k r_{t-k} + \epsilon_t, \quad (11)$$
where $r_t = (R_{1,t}, ..., R_{m,t})^T$, $\epsilon_t = (\epsilon_{1,t}, ..., \epsilon_{m,t})^T$, and $\{A_k\}$ is a set of $m \times m$ matrices. Moreover, we assume that the exogenous noises $\{\epsilon_{i,t}\}$ are mutually independent and also independent of $\{R_{j,t}\}$. For simplicity, we also assume that the $\{\epsilon_{i,t}\}$ have mean zero. For this model, we have shown in Etesami and Kiyavash (2014) that $I(R_i \to R_j||R_{-\{i,j\}}) > 0$ if and only if
$$\sum_{k=1}^{p} \big|[A_k]_{j,i}\big| > 0, \quad (12)$$
where $[A_k]_{j,i}$ denotes the $(j,i)$-th entry. Thus, to learn the corresponding causal network (DIG) of this model, instead of estimating the DIs in (2), which is a complex task, we can check whether the corresponding coefficients are zero. To do so, one approach is to use an information criterion for model selection to learn the parameter $p$, as described by Schwarz et al.
(1978), and then use F-tests, as described by Lomax and Hahs-Vaughn (2013), to check the null hypotheses on the coefficients. Materassi and Salapaka (2012) use Wiener filtering as an alternative approach to estimate the coefficients and consequently learn the DIG. The idea of this approach is to find the coefficients by solving the following optimization problem:
$$\{\hat{A}_1, ..., \hat{A}_p\} = \arg\min_{A_1,...,A_p} \mathbb{E}\Bigg[\frac{1}{T}\sum_{t=1}^{T}\Big\|r_t - \sum_{k=1}^{p} A_k r_{t-k}\Big\|^2\Bigg]. \quad (13)$$
This leads to a set of Yule-Walker equations that can be solved efficiently by the Levinson-Durbin algorithm introduced by Musicus (1988). Note that one can enforce sparsity by adding a proper regularizer.

2.6.2. GARCH models:

The relationship between the coefficients of the linear model and the corresponding DIG easily extends to financial data in which the variances of $\{\epsilon_{i,t}\}_{t=1}^{T}$ are no longer independent of $\{R_{i,t}\}$ but, due to heteroscedasticity, are $R^{t-1}_i$-measurable. More precisely, in financial data, the returns are modeled by GARCH, given by
$$R_{i,t}|R^{t-1} \sim \mathcal{N}(\mu_{i,t}, \sigma^2_{i,t}), \quad i = 1, ..., m,$$
$$\sigma^2_{i,t} = \alpha_0 + \sum_{k=1}^{q} \alpha_k (R_{i,t-k} - \mu_{i,t})^2 + \sum_{l=1}^{s} \beta_l \sigma^2_{i,t-l}, \quad (14)$$
where $\{\alpha_k\}$ and $\{\beta_l\}$ are nonnegative constants.

Proposition 1. Consider a network of time series whose dynamics are given by (14). In this case, $I(R_j \to R_i||R_{-\{i,j\}}) = 0$ if and only if
$$\mathbb{E}[R_{i,t}|R^{t-1}] = \mathbb{E}[R_{i,t}|R^{t-1}_{-\{j\}}], \quad \forall t. \quad (15)$$

All proofs appear in Appendix C. Multivariate GARCH models are a generalization of (14) in which the variance of $\epsilon_{i,t}$ is $R^{t-1}$-measurable. In this case, not only $\mu_{i,t}$ but also $\sigma^2_{i,t}$ encodes the interactions between the returns.
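The univariate recursion in (14) can be made concrete with a short simulation. This is a minimal sketch for a single series with $q = s = 1$, a constant mean $\mu = 0$, and hypothetical parameter values; it is not the estimation procedure used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GARCH(1,1) parameters (alpha_0, alpha_1, beta_1), mean mu = 0.
a0, a1, b1 = 0.1, 0.2, 0.7
T = 1000

R = np.zeros(T)
sigma2 = np.full(T, a0 / (1 - a1 - b1))  # start at the unconditional variance
for t in range(1, T):
    # sigma^2_t = alpha_0 + alpha_1 (R_{t-1} - mu)^2 + beta_1 sigma^2_{t-1}
    sigma2[t] = a0 + a1 * R[t - 1] ** 2 + b1 * sigma2[t - 1]
    # R_t | R^{t-1} ~ N(0, sigma^2_t)
    R[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
```

Since $\alpha_1 + \beta_1 < 1$ here, the unconditional variance $\alpha_0/(1 - \alpha_1 - \beta_1) = 1$ exists, and the simulated variance path stays positive and stable.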
More precisely, in multivariate GARCH, we have
$$r_t|R^{t-1} \sim \mathcal{N}(u_t, H_t),$$
$$\mathrm{vech}[H_t] = \Omega_0 + \sum_{k=1}^{q} \Omega_k \,\mathrm{vech}[\epsilon_{t-k}\epsilon^T_{t-k}] + \sum_{l=1}^{p} \Gamma_l \,\mathrm{vech}[H_{t-l}], \quad (16)$$
where $\epsilon_t = r_t - u_t$, $u_t \in \mathbb{R}^m$, and $H_t \in \mathbb{R}^{m \times m}$ is a symmetric positive definite matrix that is $R^{t-1}$-measurable. Note that vech denotes the vector-half operator, which stacks the lower-triangular elements of an $m \times m$ matrix as an $(m(m+1)/2) \times 1$ array.

Proposition 2. Consider a network of time series whose dynamics are captured by the multivariate GARCH model in (16). In this case, $I(R_j \to R_i||R_{-\{i,j\}}) = 0$ if and only if both the condition in Proposition 1 and the following condition hold:
$$\mathbb{E}[(R_{i,t} - u_{i,t})^2|R^{t-1}] = \mathbb{E}[(R_{i,t} - u_{i,t})^2|R^{t-1}_{-\{j\}}], \quad \forall t. \quad (17)$$

See Appendix A for an example demonstrating the above result. Recall that pairwise Granger-causality calculations, in general, fail to identify the true causal network. Billio et al. (2012) propose testing whether the returns of the $i$-th institution linearly depend on the past returns of the $j$-th institution by checking whether
$$\mathbb{E}[R_{i,t}|R^{t-1}] = \mathbb{E}\big[R_{i,t}\,\big|\,R_{j,t-1}, R_{i,t-1}, \{R_{j,\tau} - u_{j,\tau}\}_{\tau=-\infty}^{t-2}, \{R_{i,\tau} - u_{i,\tau}\}_{\tau=-\infty}^{t-2}\big]. \quad (18)$$
This test is based on pairwise Granger-causality calculations and does not consider nonlinear causation through the variances of $\{\epsilon_i\}$. For instance, if both institutions $R_j$ and $R_k$ affect institution $R_i$, then the above equality does not hold, because $R_k$ influences $R_i$ and is included in the conditioning on the left- but not on the right-hand side.
2.6.3. Moving-Average (MA) models:

Pesaran and Shin (1998) show that the model in (11) can be represented as an infinite MA as long as $r_t$ is covariance-stationary, i.e., all the roots of $|I - \sum_{k=1}^{p} A_k z^k|$ fall outside the unit circle:
$$r_t = \sum_{k=0}^{\infty} W_k \epsilon_{t-k}, \quad (19)$$
where $W_k = 0$ for $k < 0$, $W_0 = I$, and $W_k = \sum_{l=1}^{p} A_l W_{k-l}$. In this representation, the $\{\epsilon_i\}$ are called shocks, and when they are independent, they are also called orthogonal. Herein, we study the causal structure of an MA model of finite order $p$, i.e., the summation in (19) runs only up to $p$. In this case, Equation (19) can be written as $W_0^{-1} r_t = \Psi(L)\epsilon_t$, where $\Psi(L) := I + \sum_{k=1}^{p} W_0^{-1} W_k L^k$ and $L$ is the lag operator, i.e., $L^i \epsilon_t = \epsilon_{t-i}$. Subsequently, we have
$$\Psi(L)^{-1} W_0^{-1} r_t = W_0^{-1} r_t - \sum_{i=0}^{\infty} J_i L^i W_0^{-1} r_t = \epsilon_t, \quad (20)$$
$$J_i := W_0^{-1} W_i - \sum_{j=1}^{i-1} J_{i-j} W_0^{-1} W_j. \quad (21)$$
This representation is equivalent to an infinite VAR model.

Corollary 1. Consider an MA model described by (19) with orthogonal shocks such that $W_0$ is non-singular and diagonal. In this case, $I(R_j \to R_i||R_{-\{i,j\}}) = 0$ if and only if the corresponding coefficients of $\{R_{j,t-k}\}_{k>0}$ in $R_i$'s equation in (20) are zero.

In the interest of simplicity and space, we do not present the explicit form of the coefficients, but we illustrate the importance of this result via an example in Appendix A. We have studied the DIG of an MA model with orthogonal shocks; in practice, however, the shocks are rarely orthogonal. To identify the causal structure of such systems, we can apply a whitening transformation to turn the shocks into a set of uncorrelated variables. More precisely, suppose $\mathbb{E}[\epsilon_t \epsilon_t^T] = \Sigma$, where the Cholesky decomposition of $\Sigma$ is $VV^T$. Hence, $V^{-1}\epsilon_t$ is a vector of uncorrelated shocks.
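The whitening step can be sketched numerically as follows; the 2x2 covariance matrix, sample size, and seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical covariance of correlated shocks, Sigma = V V^T.
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
V = np.linalg.cholesky(Sigma)          # lower-triangular Cholesky factor
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=100_000)

# Whitened shocks eps_tilde = V^{-1} eps have (near-)identity covariance.
eps_tilde = np.linalg.solve(V, eps.T).T
print(np.cov(eps_tilde, rowvar=False))  # approximately the 2x2 identity
```

Using `np.linalg.solve(V, ...)` avoids forming $V^{-1}$ explicitly, which is both cheaper and numerically better conditioned.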
Using this fact, we can transform (19) with correlated shocks into $r_t = \sum_{k=0}^{p} \widetilde{W}_k \tilde{\epsilon}_{t-k}$ with uncorrelated shocks, where $\tilde{\epsilon}_t := V^{-1}\epsilon_t$ and $\widetilde{W}_k := W_k V$.

Remark 1. Diebold and Yılmaz (2014) applied the GVD method to identify the population connectedness, in other words the causal structure, of an MA model with correlated shocks. Using this method, they monitor and characterize the network of major U.S. financial institutions during the 2007-2008 financial crisis. In this method, the weight of $R_j$'s influence on $R_i$ in (19) is defined to be proportional to
$$d_{i,j} = \sum_{k=0}^{p} \big([W_k \Sigma]_{i,j}\big)^2. \quad (22)$$
Recall that $\mathbb{E}[\epsilon_t \epsilon_t^T] = \Sigma$. Applying the GVD method to Example 4, where $\Sigma = I$, we obtain $d_{1,2} = d_{3,1} = 0$; that is, $R_2$ does not influence $R_1$ and $R_1$ does not influence $R_3$. This result is not consistent with the Granger-causality concept, since the corresponding causal network (DIG) of this example is a complete graph, i.e., every node influences every other node. Thus, the GVD analysis also appears to suffer from disregarding the rest of the network, akin to the pairwise analysis commonly used in traditional applications of Granger causality.

2.6.4. Switching models:

Note that the DIG, as defined in Definition 2, does not require any assumptions on the underlying model. However, side information about the model class can simplify the computation of the DIs in (2). For instance, if $R = \{R_1, ..., R_m\}$ is a first-order Markov chain, then $I(R_i \to R_j||R_{-\{i,j\}}) = 0$ if and only if $P(R_{j,t}|R_{t-1}) = P(R_{j,t}|R_{-\{i\},t-1})$ for all $t$. Recall that $R_{-\{i\},t-1} = \{R_{1,t-1}, ..., R_{m,t-1}\} \setminus \{R_{i,t-1}\}$. Furthermore, suppose that the transition probabilities are represented through a logistic function, similar to the work of Billio et al. (2010).
More specifically, for any subset of processes $H := \{R_{i_1}, ..., R_{i_h}\} \subseteq R$, we have
$$P(R_{j,t}|H_{t-1}) = P(R_{j,t}|R_{i_1,t-1}, ..., R_{i_h,t-1}) := \frac{\exp(a_H^T u_H)}{1 + \exp(a_H^T u_H)}, \quad (23)$$
where $u_H^T := \bigotimes_{R_i \in H} (1, R_{i,t-1}) = (1, R_{i_1,t-1}) \otimes (1, R_{i_2,t-1}) \otimes \cdots \otimes (1, R_{i_h,t-1})$, $\otimes$ denotes the Kronecker product, and $a_H$ is a vector of dimension $2^h \times 1$. Under these assumptions, causal discovery in the network reduces to the following statement: $R_i$ does not influence $R_j$ if and only if all the terms of $u_R$ depending on $R_i$ have zero coefficients.

Proposition 3. Consider the model in (23) and let
$$u_R = u_{R_{-\{i\}}} \otimes (1, R_{i,t-1}) = \big(u_{R_{-\{i\}}},\; u_{R_{-\{i\}}} R_{i,t-1}\big), \quad (24)$$
$$a_R^T = (a_1^T, a_2^T),$$
where $a_1$ and $a_2$ are the vectors of coefficients corresponding to $u_{R_{-\{i\}}}$ and $u_{R_{-\{i\}}} R_{i,t-1}$, respectively. Then, $I(R_i \to R_j||R_{-\{i,j\}}) = 0$ if and only if $a_2 = 0$.

The multiple-chain Markov switching (MCMS)-VAR of Billio and Di Sanzo (2015) is a family of nonlinear models in which the relationship among a set of time series is given by
$$Y_{i,t} = \mu_i(S_{i,t}) + \sum_{k=1}^{p} \sum_{j=1}^{m} [B_k(S_{i,t})]_{i,j} Y_{j,t-k} + \epsilon_{i,t}, \quad i \in \{1, ..., m\}, \quad (25)$$
with $\epsilon_t := (\epsilon_{1,t}, ..., \epsilon_{m,t}) \sim \mathcal{N}(0, \Sigma(s_t))$, where the mean $\mu_i(S_{i,t})$, the lag matrices $B_k(S_{i,t})$, and the covariance matrix of the error terms $\Sigma(s_t)$ depend on a latent random vector $s_t$ known as the state of the system. The random variable $S_{i,t}$ represents the state variable associated with $Y_{i,t}$ and takes values in a finite set $\mathcal{S}$. The random vector $s_t = (S_{1,t}, ..., S_{m,t})$ is assumed to be a time-homogeneous first-order Markov process with transition probability $P(s_t|s_{t-1}, Y^{t-1}) = P(s_t|s_{t-1})$. Furthermore, given the past of the states, the present states are independent, i.e., $P(s_t|s_{t-1}) = \prod_{i=1}^{m} P(S_{i,t}|s_{t-1})$.
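The logistic specification in (23) can be evaluated directly; the Kronecker ordering determines which entries of $a_H$ multiply the terms involving each process, which is exactly the structure Proposition 3 exploits. This is a hypothetical helper with made-up coefficients.

```python
import numpy as np

def transition_prob(a_H, past_values):
    """P(R_{j,t} = 1 | H_{t-1}) under the logistic model (23).
    `past_values` lists R_{i,t-1} for the processes in H; u_H is the
    Kronecker product of the vectors (1, R_{i,t-1})."""
    u = np.array([1.0])
    for r in past_values:
        u = np.kron(u, np.array([1.0, r]))
    z = float(a_H @ u)
    return np.exp(z) / (1.0 + np.exp(z))

# With h = 2 processes, u_H = (1, R2, R1, R1*R2) for past_values = [R1, R2].
# Zeroing the coefficients of the terms containing R1 (the last two entries
# here) removes any dependence on R1, as in Proposition 3.
a_H = np.array([0.3, 0.5, 0.0, 0.0])
p1 = transition_prob(a_H, [0.0, 1.0])
p2 = transition_prob(a_H, [5.0, 1.0])   # same probability: R1 is irrelevant
```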
The next result states a set of sufficient conditions under which, by observing only the time series $Y_t$ and estimating the DIs, we are able to identify the causal relationships between the processes.

Proposition 4. Consider an MCMS-VAR in which $\Sigma(s_t)$ is diagonal for all $s_t$. In this case, $I(Y_j \to Y_i||Y_{-\{i,j\}}) = 0$ if
• $[B_k(s_{i,t})]_{i,j} = 0$ for all $k$ and all realizations $s_{i,t}$,
• $[\Sigma(s_t)]_{i,i} = [\Sigma(S_{i,t})]_{i,i}$,
• $P(S_{k,t}|s_{t-1}, S_{1,t}, ..., S_{k-1,t}, S_{k+1,t}, ..., S_{m,t}) = P(S_{k,t}|S_{k,t-1})$, for all $k$.

Note that the conditions introduced in this proposition differ from those in (Billio and Di Sanzo, 2015). More precisely, Billio and Di Sanzo (2015) study the causal relationships between the time series under the extra condition that the state variables are known. Such an assumption is not realistic, as the states are often not observable in real applications. Below, we present a simple example in which the above conditions do not hold and $Y_1$ does not functionally depend on $Y_2$; yet, observing the states leads to the deduction that $Y_2$ does not influence $Y_1$, whereas without observing the states the inference would be different.

Example 1. Consider a bivariate MCMS-VAR $\{Y_1, Y_2\}$ in which the states take only binary values and
$$Y_{1,t} = b_{1,1}(S_{1,t})\, Y_{1,t-1} + \epsilon_{1,t}, \quad (26)$$
$$Y_{2,t} = \mu_2(S_{2,t}) + 0.5\, Y_{1,t-1} + \epsilon_{2,t},$$
where $(\epsilon_{1,t}, \epsilon_{2,t}) \sim \mathcal{N}(0, I)$, $\mu_2(0) = 10$, $\mu_2(1) = -5$, $b_{1,1}(0) = 0.5$, and $b_{1,1}(1) = -0.5$. Moreover, the transition probabilities of the states are $P(S_{1,t}|S_{1,t-1}, S_{2,t-1}) = P(S_{1,t}|S_{1,t-1}) = 0.8$ whenever $S_{1,t} = S_{1,t-1}$, and $P(S_{2,t} = S_{1,t-1}) = 0.9$. Based on Billio and Di Sanzo (2015), in this setup $Y_{2,t-1}$ does not Granger-cause $Y_{1,t}$ given $Y_{1,t-1}, S_{1,t-1}$, i.e.,
$$P(Y_{1,t}|Y_{2,t-1}, Y_{1,t-1}, S_{1,t-1}) = P(Y_{1,t}|Y_{1,t-1}, S_{1,t-1}). \quad (27)$$
However, without observing the states, we have $P(Y_{1,t}|Y_{2,t-1}, Y_{1,t-1}) \neq P(Y_{1,t}|Y_{1,t-1})$.
This is because $Y_{2,t-1}$ contains information about $S_{2,t-1}$, which in turn contains information about $S_{1,t-2}$.

Another attempt to capture time-varying networks using Markov switching models is the work of Bianchi et al. (2019). In this work, the stock return of the $i$-th firm in excess of the risk-free rate at time $t$, denoted by $Y_{i,t}$, is given by
$$Y_{i,t} = Z_{i,t}^T b_i(s_t) + \varepsilon_{i,t}, \quad t = 1, ..., T, \quad i = 1, ..., n, \quad (28)$$
where $Z_{i,t} := (1, X_{i,t}^T)^T$ and $X_{i,t}$ denotes the $m_i$-dimensional vector of systematic risk factors. Here, $b_i(s_t)$ is an $(m_i + 1)$-vector of time-varying regression coefficients, and $\{\varepsilon_{i,t}\}$ are error terms that can be identified with a firm-specific idiosyncratic risk factor when $\mathrm{Cov}(Z_{i,t}, \varepsilon_{i,t}) = 0$. Furthermore, it is assumed that the risk factors are common across stocks and that the error terms have a full time-varying variance-covariance matrix and are normally distributed conditionally on the latent state $s_t$, i.e., $(\varepsilon_{1,t}, ..., \varepsilon_{n,t}) \sim \mathcal{N}(0, \Sigma(s_t))$. This model is a reduced-form approximation of a linear pricing kernel (Cochrane, 2009; Vassalou, 2003) and can be seen as a special variation of (25). Bianchi et al. (2019) define the interconnection network of this model using the inverse covariance matrix of the error terms, which yields an undirected graph: there is no interconnection between two firms $i$ and $j$ at time $t$ when $\varepsilon_{i,t}$ and $\varepsilon_{j,t}$ are independent given $\{\varepsilon_{l,t} : l \neq i, j\}$. Such a network cannot capture the causal relationships among the firms. The next example illustrates a scenario in which the model of Bianchi et al. (2019) fails to capture the true influences.

Example 2. Consider a setting with three firms $\{i, j, k\}$ in which both $i$ and $j$ influence $k$, but there is no influence between $i$ and $j$. The DIG of this system is $i \to k \leftarrow j$. This happens when $\varepsilon_{i,t}$ and $\varepsilon_{j,t}$ are independent but become statistically dependent upon conditioning on $\varepsilon_{k,t}$.
In this case, all the off-diagonal entries of the inverse covariance matrix are non-zero. Hence, the method of Bianchi et al. (2019) inaccurately infers the network of this system to be a complete graph.

2.6.5. Quantiles:

Quantile-on-quantile causality, proposed by Sim and Zhou (2015) and later discussed by Bonaccolto et al. (2019), aims to check whether the $\theta$-th quantile of variable $X$ causes the $\tau$-th quantile of $Y$, and vice versa. More precisely, these works consider the following linear model on the conditional quantiles:
$$Q_{Y_t}(\tau, \theta) = \beta_{0,1}(\tau, \theta) + \beta_{1,1}(\tau, \theta) Y_{t-1} + \beta_{2,1}(\tau, \theta) X_{t-1}, \quad (29)$$
$$Q_{X_t}(\tau, \theta) = \beta_{0,2}(\tau, \theta) + \beta_{1,2}(\tau, \theta) Y_{t-1} + \beta_{2,2}(\tau, \theta) X_{t-1}, \quad (30)$$
where $Q_{Y_t}(\tau, \theta)$ and $Q_{X_t}(\tau, \theta)$ denote the conditional quantiles of $Y_t$ and $X_t$, respectively. The coefficients are obtained using the quantile regression method of Sim and Zhou (2015). Note that this is a parametric approach that assumes a linear model for the conditional quantiles. Jeong et al. (2012) propose a non-parametric approach, based on hypothesis testing, for detecting causality in quantiles. This approach identifies $X$ as a cause of $Y$ in its $\tau$-th quantile when $Q^Y_\tau(Y^{t-1}_{t-p}, X^{t-1}_{t-q}) \neq Q^Y_\tau(Y^{t-1}_{t-p})$, where $Q^Y_\tau(Z)$ denotes the $\tau$-th quantile of $Y_t$ conditional on the set $Z$. This definition of causality is analogous to the definition of DI, in which $X$ is declared a cause of $Y$ when there exists a time $t$ such that $P(Y_t|Y^{t-1}, X^{t-1}) \neq P(Y_t|Y^{t-1})$. The non-parametric test developed by Jeong et al.
(2012) is based on a measure of distance defined as
$$J_\tau := \mathbb{E}\bigg[\Big(F\big(Q^Y_\tau(Y^{t-1}_{t-p})\,\big|\,Y^{t-1}_{t-p}, X^{t-1}_{t-q}\big) - \tau\Big)^2 f\big(Y^{t-1}_{t-p}, X^{t-1}_{t-q}\big)\bigg], \quad (31)$$
where $F(u|v)$ denotes the conditional cumulative distribution function (CDF) of $u$ given $v$, and $f(v)$ is the marginal density function of $v$. Jeong et al. (2012) show that this measure can be estimated using a feasible kernel-based estimator. The causal network recovered by this method may vary depending on the value of $\tau$, since it is possible that $X$ causes $Y$ in its quantile only for certain values of $\tau$; the quantile method then detects a causal relationship from $X$ to $Y$ only for such values of $\tau$. In contrast, the next result shows that DI robustly detects the causation.

Proposition 5. Consider a network of two time series $X$ and $Y$. If there exists $\tau \in (0, 1)$ such that $X$ causes $Y$ in quantile $\tau$, i.e., $J_\tau > 0$, then $I(X \to Y) > 0$.

2.7. Causal Effects Between Subsets of Processes

Often in practice, it is important to infer the causal effect between two disjoint subsets of processes. For instance, in market analysis, it is important to understand the effect of the Cryptocurrency sector either on a specific company, such as Apple, or on a different sector, such as Real Estate. Herein, we show how DI can be used to quantify such effects. First, we quantify the causal effect of a subset of $l$ processes $R_A$, with index set $A := \{A_1, ..., A_l\} \subseteq -\{i\} = \{1, ..., m\} \setminus \{i\}$, on a time series $R_i$, denoted by $I(R_A \to R_i||R_{-A\cup\{i\}})$. To do so, we compare the effects of adding and removing the history of $R_A$ on predicting $R_i$ during a time horizon $T$, while controlling for the remaining processes $R_{-A\cup\{i\}}$.
This leads to
$$I(R_A \to R_i||R_{-A\cup\{i\}}) := \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\Bigg[\log \frac{P(R_{i,t}|R^{t-1}_i, R^{t-1}_{-A\cup\{i\}}, R^{t-1}_A)}{P(R_{i,t}|R^{t-1}_i, R^{t-1}_{-A\cup\{i\}})}\Bigg]. \quad (32)$$
Equation (32) can be written in terms of DIs as
$$I(R_{A_l} \to R_i||R_{-\{i,A_l\}}) + \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\Bigg[\log \frac{P(R_{i,t}|R^{t-1}_i, R^{t-1}_{A\setminus\{A_l\}}, R^{t-1}_{-A\cup\{i\}})}{P(R_{i,t}|R^{t-1}_i, R^{t-1}_{-A\cup\{i\}})}\Bigg].$$
By induction, the above expression becomes
$$I(R_A \to R_i||R_{-A\cup\{i\}}) = \sum_{j=0}^{l-1} I(R_{A_{l-j}} \to R_i||R_{-\{i,A_l,...,A_{l-j}\}}). \quad (33)$$
It is noteworthy that the above equation resembles the chain rule. Given the above expression for the effect of a subset $R_A$ on a time series $R_i$, we can naturally define the causal effect of $R_A$ on another disjoint subset $R_B$ in a network of $m$ processes. We denote this effect by $I(R_A \to R_B||R_{-A\cup B})$, where $-A\cup B := \{1, ..., m\} \setminus (A \cup B)$, and define it as the average of the individual effects of $R_A$ on the processes in $R_B$, i.e.,
$$I(R_A \to R_B||R_{-A\cup B}) := \frac{1}{|B|}\sum_{i\in B} I(R_A \to R_i||R_{-A\cup\{i\}}) = \frac{1}{|B|}\sum_{i\in B}\sum_{j=0}^{l-1} I(R_{A_{l-j}} \to R_i||R_{-\{i,A_l,...,A_{l-j}\}}). \quad (34)$$

Figure 4: Measuring the causal effect of $Y_2$ on $Y_1$ within a time window of length $S$ around time $t$.

The last term in the above expression is the conditional DI between two processes and can be estimated similarly to (3). The next important task is to quantify how much of the aggregate effect from $A$ to $B$ is due to a specific time series $R_{A_j}$, $A_j \in A$. Answering this question can clarify, for instance, which asset contributes the most (or the least) to the influence of one financial sector (e.g., Cryptocurrency) on another (e.g., Real Estate).
More precisely, to quantify the contribution of $R_{A_j}$, $A_j \in A$, we compare the influences of the subsets $R_A$ and $R_{A\setminus\{A_j\}}$ on $R_B$, i.e.,
$$C_{A\to B}(R_{A_j}) := I\big(R_A \to R_B||R_{-A\cup B}\big) - I\big(R_{A\setminus\{A_j\}} \to R_B||R_{-(A\setminus\{A_j\})\cup B}\big). \quad (35)$$

2.8. Time-varying Causal Networks

All the aforementioned formulations of causal influence assume that the underlying causal network is time-invariant over the horizon during which the causal effects are measured. However, it is quite possible that the causal relationships and information spillovers among financial assets change over time. Changes in causal relationships among financial assets usually indicate varying patterns, so it is important to detect and assess such dynamic relationships. Below, we introduce a framework using DI to capture such time-varying networks and compare it with a major approach in the literature for detecting time-varying causal relations.

Lu et al. (2014) propose a time-varying Granger causality test for the influence of $Y_2$ on $Y_1$ at time $t$ with rolling sample size $S$. If the test statistic is larger than the critical value at a given level of significance, then there is significant Granger causality at time $t$. The test statistic is
$$H_t(S) := \frac{S\sum_{j=1}^{S-1} k^2(j/M)\, r^2_{1,2,t}(j, S) - L_S(k)}{\sqrt{2 D_S(k)}}, \quad (36)$$
where $r_{1,2,t}(j, S)$ denotes the lag-$j$ sample cross-correlation between the standardized residuals of $Y_1$ and $Y_2$ in the sub-sample $[t-S+1, t]$. Furthermore, $k(x)$ is a kernel function (e.g., the Bartlett kernel), $M$ is a positive integer, and
$$L_S(k) := \sum_{j=1}^{S-1} (1 - j/S)\, k^2(j/M), \qquad D_S(k) := \sum_{j=1}^{S-1} (1 - j/S)(1 - (j+1)/S)\, k^4(j/M).$$
It is important to emphasize that the above time-varying Granger test is limited to ARMA-GARCH models, cannot capture causality in higher moments, and is pairwise. We overcome these shortcomings by proposing the time-varying DI (TV-DI).
More precisely, we propose the following quantity to detect the causal influence from $Y_2$ to $Y_1$ at time $t$ with rolling sample size $S$ (for simplicity, we assume $S$ is even) in a network of $m$ processes:
$$I_{t,S}(Y_2 \to Y_1||Y_{-\{1,2\}}) := \frac{1}{S}\sum_{j=-S/2}^{S/2} \mathbb{E}\Bigg[\log \frac{P(Y_{1,t-j}|Y^{t-S/2-1}, Y^{t-j-1}_{t-S/2})}{P(Y_{1,t-j}|Y^{t-S/2-1}, Y^{t-j-1}_{-\{2\},\,t-S/2})}\Bigg]. \quad (37)$$
This quantity is developed by comparing the performance of two predictors of $Y_1$ at time $t-j$ (as $j$ ranges over $[-S/2, S/2]$, so $t-j$ ranges over $[t-S/2, t+S/2]$), both of which use the information of all processes up to time $t-S/2$. One of them additionally uses the information of all processes from $t-S/2$ to $t-j-1$, while the other additionally uses the information of all processes excluding $Y_2$ over the same period. If these two predictors perform equally well, i.e., the expression in (37) is zero, then $Y_2$ has no influence on $Y_1$ during the time window $[t-S/2, t+S/2]$. Note that by including the information of all variables up to time $t-S/2$, as shown in Figure 4, we ensure that the expression in (37) only detects impacts on $Y_1$ that occur after $t-S/2$. When this quantity is greater than the critical value at a given level of significance, $Y_2$ influences $Y_1$ within this time window. It is straightforward to see that when the time-varying Granger test of Lu et al. (2014) or a change-point detection method detects a change in the causal network, our TV-DI test will pick up that change as well, i.e., (37) will be positive. Moreover, when the change occurs in higher moments, unlike the method of Lu et al. (2014), our TV-DI test can still detect it. We will use the TV-DI test in Section 3 to detect time-varying causal relations in both synthetic and real-world experiments.
2.9. Sample-Path Causal Influences

The definition of DI involves taking an expectation over the histories of the time series and is thus well suited to questions such as "Does the past of time series $Y$ up to time $t-1$, i.e., $Y^{t-1}$, cause $X_t$?" A natural next question is "Did the past realization of $Y$ up to time $t-1$, i.e., $y^{t-1}$, cause $X_t$?" In other words, the goal of sample-path causal influence is to identify the causal effect that particular realized values of $Y$ have on the distribution of the subsequent sample of $X$. An example of such a question is "When does institution A have the greatest effect on institution B?" Schamberg and Coleman (2019) develop a framework using sequential prediction for estimating sample-path causal relationships. In this framework, the influence from $Y$ to $X$ in the presence of other processes $Z$ at time $t$ is defined by
$$S_{Y\to X}(H_{t-1}) := \mathbb{E}\bigg[\log \frac{P(x_t|H_{t-1})}{P(x_t|H^{-y}_{t-1})}\bigg], \quad (38)$$
where $H_{t-1}$ is the $\sigma$-algebra generated by $\{x^{t-1}, y^{t-1}, z^{t-1}\}$ and $H^{-y}_{t-1}$ is the $\sigma$-algebra generated by $\{x^{t-1}, z^{t-1}\}$. The relationship between the sample-path causal measure and the DI is given by $\sum_t \mathbb{E}[S_{Y\to X}(H_{t-1})] = I(Y \to X||Z)$. Furthermore, Schamberg and Coleman (2019) discuss how sequential prediction theory may be leveraged to estimate the proposed causal measure and introduce a notion of regret for assessing the performance of such an estimator.

3. Simulation Studies

This section presents the results of our simulation experiments, which assess the efficacy of the TV-DIG framework presented in Section 2.8 in capturing complex causal relationships within multivariate time series networks. These experiments focus on scenarios involving nonlinear dynamics and time-varying effects, highlighting the method's ability to infer network structures under diverse conditions.
We particularly address scenarios where traditional models, constrained by linearity, normality, and time-invariance assumptions, are inadequate. This underscores the importance of nonparametric approaches in deciphering the dynamic interconnections between nodes in complex networks.

3.1. Setting 1: Time-Invariant Nonlinear Network

In the first setting, our simulation targets a time-invariant graph with 10 nodes. We employ Monte Carlo simulation to generate data from a nonlinear data generating process (DGP) of 10 time series, aiming to demonstrate the generality of the DI method in capturing causal relationships within these series; we then infer the corresponding network using DI. This nonlinear DGP is deliberately designed not to align with any of the econometric model classes studied above, in order to test our method's ability to handle diverse and complex scenarios. We simulate the following system:
$$X_{i,t} = \mathrm{cut}_\nu\big(x^T_{t-1} A_i x_{t-1}\big) + \epsilon_{i,t}, \quad 1 \le i \le 10, \quad (39)$$
where $x_t = (X_{1,t}, ..., X_{10,t})^T \in \mathbb{R}^{10\times 1}$, $A_i \in \mathbb{R}^{10\times 10}$, $\epsilon_{i,t}$ is standard Normal, and the nonlinearity is introduced through the function
$$\mathrm{cut}_\nu(x) := \begin{cases} -\nu & x \le -\nu \\ x & -\nu < x < \nu \\ \nu & x \ge \nu \end{cases}. \quad (40)$$

Figure 5: The function $\mathrm{cut}_\nu(x)$.

This system is a composition of a quadratic system with the nonlinear function $\mathrm{cut}_\nu(x)$ (depicted in Figure 5). The main reason for using this function is to prevent the system from diverging, so that the system remains stable.³ Clearly, the above system is neither a VAR nor a switching model, and thus, for example, the methods proposed in (Billio et al., 2012; Diebold and Yılmaz, 2014; Billio et al., 2021; Kalli and Griffin, 2018; Bianchi et al., 2019) will fail to capture its corresponding interconnection network.
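A minimal simulation of the DGP in (39)-(40) can be sketched as follows; the horizon and seed are hypothetical, while $\nu = 15$ and the $\{-0.5, 0, 0.5\}$ coefficient draws follow the experimental setup described here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, T, nu = 10, 5000, 15.0

# Coefficient matrices A_1, ..., A_10 with entries drawn uniformly at random
# from {-0.5, 0, 0.5}, as in the experiment of Section 3.1.
A = rng.choice([-0.5, 0.0, 0.5], size=(m, m, m))

def cut(x, nu):
    # cut_nu of (40): the identity on (-nu, nu), saturated outside.
    return np.clip(x, -nu, nu)

x = np.zeros((T, m))
for t in range(1, T):
    # Quadratic form x_{t-1}^T A_i x_{t-1} for each coordinate i, then saturate.
    quad = np.array([x[t - 1] @ A[i] @ x[t - 1] for i in range(m)])
    x[t] = cut(quad, nu) + rng.standard_normal(m)
```

The saturation keeps the quadratic term bounded by $\nu$, so the simulated trajectories remain stable regardless of the randomly drawn $A_i$.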
On the other hand, as discussed in Section 2.2, DI is capable of inferring the causal network of a dynamical system without any prior assumption on its underlying model; hence, it can detect the interconnections within this system. To demonstrate this capability, we first generated $\{A_1, ..., A_{10}\}$ by selecting their entries from $\{-0.5, 0, 0.5\}$ uniformly at random. Then, using Monte Carlo simulation, we generated $\{x_1, ..., x_T\}$ from (39). Applying the kernel estimator to the generated dataset, we estimated the DIs and consequently inferred the DIG. For a given threshold $\tau$, the corresponding DIG$_\tau$ is obtained by drawing an arrow from $X_j$ to $X_i$ whenever $I(X_j \to X_i||X_{-\{i,j\}}) > \tau$. On the other hand, because the functional dependencies among the time series are given in (39), it is possible to obtain the true causal network of this system: $X_j$ is a parent of $X_i$ in the true network if and only if $X_{i,t+1}$ functionally depends on $X_{j,t}$ in (39). To measure the performance of our DIG method, we compared the inferred DIG with the true network and report the precision and recall in Figure 7; the figure is obtained by averaging over 50 trials. Here,
$$\mathrm{Precision} := \frac{TP}{TP + FP}, \qquad \mathrm{Recall} := \frac{TP}{TP + FN},$$
where $TP$, $FP$, and $FN$ denote true positives, false positives, and false negatives, respectively: $TP$ is the number of edges common to the estimated DIG and the true network; $FP$ is the number of edges in the estimated DIG that do not exist in the true network; and $FN$ is the number of edges in the true network that are missing from the estimated DIG. The results, as depicted in Figures 6 and 7, demonstrate the robustness of the DI method in accurately inferring the network structure.
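The precision and recall computation above can be sketched with a small helper operating on adjacency matrices; the edge convention used here, entry $(i, j)$ meaning $X_j \to X_i$, is our assumption for illustration.

```python
import numpy as np

def precision_recall(estimated, true):
    """Precision and recall of an estimated DIG against the true network.
    Both arguments are adjacency matrices; entry (i, j) is truthy when an
    edge X_j -> X_i is present."""
    est = np.array(estimated, dtype=bool)
    tru = np.array(true, dtype=bool)
    np.fill_diagonal(est, False)   # self-loops are not DIG edges
    np.fill_diagonal(tru, False)
    tp = int(np.sum(est & tru))    # edges found in both networks
    fp = int(np.sum(est & ~tru))   # spurious edges
    fn = int(np.sum(~est & tru))   # missed edges
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```

Sweeping the threshold $\tau$ of DIG$_\tau$ and calling this helper at each value traces out the precision and recall curves of Figure 7.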
In Figure 6, the comparison between the inferred DIG and the true network illustrates the method's precision in capturing most of the existing connections, with only a single missed edge between nodes 2 and 3. This high level of accuracy in network reconstruction is further substantiated by the precision and recall curves in Figure 7, which indicate a good balance between precision and recall across threshold values, underscoring the method's effectiveness in identifying true relationships without overfitting. These results validate the DI method's capability to detect complex causal interconnections in a time-invariant nonlinear network, highlighting its potential application in similar complex settings.

³ In our experiment, we used $\nu = 15$.

Figure 6: The estimated DIG of the synthetic data with threshold $\tau = 0.05$ and the corresponding true network. The estimated DIG misses only one edge, between nodes 2 and 3.

Figure 7: Precision and recall curves in the nonlinear experiment for varying thresholds.

3.2. Setting 2: Time-Varying Network

In the second setting, we increase the complexity by incorporating both nonlinearity and time-varying influences among the nodes, simulating a dynamic scenario with evolving network relationships. The TV-DIG's performance here is assessed by its accuracy in tracking these time-varying causal influences, reflected in changes to the network structure. We compare these results with those of the time-varying Granger test method proposed by Hong (2001). We simulate a nonlinear, time-varying system using the following DGP:
$$x_{t+1} = \sqrt{|A_t x_t|} + n_t, \quad (41)$$
where $x_t = (X_{0,t}, ..., X_{4,t})^T \in \mathbb{R}^{5\times 1}$, $\sqrt{|\cdot|}$ acts element-wise, and $n_t \in \mathbb{R}^{5\times 1}$ is a random vector distributed as Normal with mean zero and identity covariance matrix.
The matrix A_t, a time-varying coefficient matrix, introduces dynamic interactions among the time series, as shown below:
A_t :=
[ 10/(1+t/200)    \u221220/(1+t/150)    \u22120.8    0    0 ]
[ 0    0    8 sin(1.4\u03c0t/T)    0    8 sin(2.4\u03c0t/T + \u03c0) ]
[ 0    0    0    4u(cos(1.6\u03c0t/T))    \u22125u(sin(\u22121.6\u03c0t/T)) ]
[ 2u(7.9 \u2212 ln(t))    0    0    0.7    0 ]
[ 0    0.8    0    0.9    \u22120.5 ]
where u(t) denotes the step function, i.e., u(t) = 1 for t \u2265 0 and zero otherwise. The matrix A_t, with its time-dependent elements, introduces a unique dynamic in the network\u2019s causal relationships. The function u(t) selectively activates or deactivates these influences over time; it also helps keep the system stable and prevents it from diverging. Clearly, the above system is neither a VAR nor a switching system, and thus, for example, the methods proposed in Billio et al. (2012); Diebold and Yilmaz (2014); Billio et al. (2018); Kalli and Griffin (2018); Bianchi et al. (2019) will fail to capture its corresponding interconnection network.
(a) |A_t|_{4,1} over time. (b) |A_t|_{1,2} over time. (c) |A_t|_{2,3} over time.
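A minimal simulation of the DGP in (41) might look as follows. For brevity this sketch uses a reduced 3x3 coefficient matrix built from a few representative entries of A_t above (the full 5x5 matrix could be coded the same way); the horizon T, the random seed, and the function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500  # illustrative horizon

def u(t):
    """Step function: 1 for t >= 0, else 0."""
    return 1.0 if t >= 0 else 0.0

def A(t, T):
    """A reduced 3x3 time-varying coefficient matrix in the spirit of A_t."""
    return np.array([
        [10.0 / (1.0 + t / 200.0), -20.0 / (1.0 + t / 150.0), 0.0],
        [0.0, 0.0, 8.0 * np.sin(1.4 * np.pi * t / T)],
        [2.0 * u(7.9 - np.log(max(t, 1))), 0.0, 0.7],
    ])

# DGP (41): x_{t+1} = sqrt(|A_t x_t|) + n_t, with the sqrt applied element-wise
# and Gaussian noise with identity covariance.
x = np.zeros((T + 1, 3))
for t in range(T):
    n_t = rng.standard_normal(3)
    x[t + 1] = np.sqrt(np.abs(A(t, T) @ x[t])) + n_t
```

The element-wise square root of the absolute value keeps the trajectories from diverging, which is consistent with the stability remark in the text.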
(d) X0 \u2192X3 over time.
(e) X1 \u2192X0 over time.
(f) X2 \u2192X1 over time. (g) Ht from X0 to X3. (h) Ht from X1 to X0. (i) Ht from X2 to X1.
Figure 8: A selection of the resulting plots of the time-varying network experiment. The first row illustrates the functional dependencies between the selected time series; the second and third rows present the resulting TV-DI in (37) and the time-varying Granger test in (36), respectively.
This setup enables us to observe and analyze how causal influences evolve over time, thus providing insights into the adaptability and effectiveness of our method in adjusting to changing network dynamics and capturing evolving causal relationships.
In order to capture the time-varying causal effects among these five time series, we used Monte Carlo simulation and generated data using the nonlinear DGP in (41) for a period of T time steps. Afterwards, using the method introduced in Section 2.3, we estimated (37) between all pairs of time series for t \u2208 {30\u03c4 : \u03c4 \u2265 0} and S = 800. Some of the resulting plots are depicted in Figure 8 (the x-axis presents time with the scaling 30\u03c4, i.e., 60 represents 1800); the remaining plots are presented in Appendix D.3. For instance, the influences X0 \u2192X3 and X1 \u2192X0 are encoded in [A_t]_{4,1} = 2u(7.9 \u2212 ln(t)) and [A_t]_{1,2} = \u221220/(1+t/150), respectively. Therefore, the influence X1 \u2192X0 decays over time, and the influence X0 \u2192X3 vanishes after a certain time, once ln(t) > 7.9. These changes of influence can be seen in Figures 8(d) and (e). Similarly, the influence X2 \u2192X1, captured by [A_t]_{2,3} = 8 sin(1.4\u03c0t/T), showcases a different type of dynamic interaction. This term, reliant on a sinusoidal function of time, implies a periodic influence between these nodes, with the strength of the influence oscillating over time. This influence is captured by our TV-DI in Figure 8(f). The model\u2019s ability to track these periodic changes, closely mirroring the sinusoidal pattern defined in the DGP, further demonstrates the TV-DI\u2019s precision in capturing time-variant and nonlinear relationships. The consistency between the estimated influences and the theoretical expectations from the sinusoidal term in A_t emphasizes the model\u2019s capability to adapt to and accurately represent cyclic or oscillating causal effects in complex networks. It is noteworthy that the time-varying Granger test of Hong (2001), defined by Ht in (36), fails to accurately capture the time-varying causal influences, as shown in Figures 8(g)-(i).
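The three coefficient trajectories discussed in this paragraph can be reproduced directly from the entries of A_t; the horizon T = 4000 below is an assumption chosen for illustration only.

```python
import numpy as np

T = 4000  # assumed horizon for illustration
t = np.arange(1, T + 1)

# [A_t]_{4,1} = 2u(7.9 - ln t): constant at 2, then switches off once ln(t) > 7.9.
a_x0_to_x3 = 2.0 * (np.log(t) <= 7.9)
# [A_t]_{1,2} = -20/(1 + t/150): decays toward zero over time.
a_x1_to_x0 = -20.0 / (1.0 + t / 150.0)
# [A_t]_{2,3} = 8 sin(1.4 pi t / T): periodic, oscillating influence.
a_x2_to_x1 = 8.0 * np.sin(1.4 * np.pi * t / T)

cutoff = int(np.exp(7.9))  # time step after which the X0 -> X3 influence vanishes
```

These are exactly the decay, switch-off, and oscillation patterns that the TV-DI estimates in Figures 8(d)-(f) are expected to track.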
4. Empirical Studies
In this section, we apply the TV-DIG framework to identify and monitor the evolution of interconnectedness and systemic risk among major assets and industrial sectors within the financial network. Our primary objective is to support policymakers and regulators in managing systemic risk, thus maintaining financial market stability and integrity through macroprudential approaches. While there is a rich and growing body of literature on static or time-invariant network estimation and systemic risk, research on estimating dynamic networks from time series data remains limited. This research focuses on understanding how cryptocurrencies might affect financial stability. We explore the changing interactions between different sectors in the financial industry, especially how cryptocurrencies influence other sectors. The study also delves into the impact of the COVID-19 crisis and the Federal Reserve\u2019s emergency actions during the coronavirus outbreak in March 2020 on these interactions. To verify any possible non-linear relations in the data, we applied a non-linearity test based on principal component analysis, as introduced by Kruger et al. (2008) and detailed in Appendix D.2. This test is particularly effective in identifying non-linear dependencies in multivariate time series. Kruger et al. (2008)\u2019s method involves using principal component analysis to transform the data, followed by applying specific statistical tests to these transformed components to detect non-linearity. The test\u2019s strength lies in its ability to uncover complex, non-linear relationships that traditional linear tests might miss. Our application of this test to our data set revealed significant non-linear interactions, thereby rejecting the null hypothesis of linear dependency between the series.
This finding is crucial as it indicates that the underlying structure of the relationships between the series is inherently non-linear, and any model attempting to estimate the influences within the network must be capable of capturing this non-linearity.
4.1. Data
In support of this research question, we obtained the daily logarithmic returns of 124 assets, 113 of which are stocks from Global Industry Classification Standard (GICS) sectors: 17 stocks from Financials/Banks (GICS 4010), 25 stocks from Financials/Diversified Financials (GICS 4020), 22 stocks from Financials/Insurance (GICS 4030), 31 stocks from Real Estate/Equity Real Estate Investment Trusts, REITs (GICS 6010), and 18 stocks from Fintech. Notably, Fintech is not a GICS-defined sector, and the selected companies are based on the KBW Nasdaq Financial Technology Index (KFTX). We also included a representative sample of the cryptocurrency market to determine the impact this new sector has on systemic risk within the entire financial industry. Our sample has 11 major cryptocurrencies with the largest market capitalization and trading volume, which together account for approximately 75% of total crypto market capitalization. The complete list of assets, including Banks, Diversified Financial Services, Insurance, REITs, FinTech, and Crypto, is presented in Appendix D. The period of study spans five years, from January 1, 2018, to January 1, 2023, which covers the recent global pandemic and the Fed\u2019s response to the pandemic. The Fed\u2019s aggressive interest rate cuts, aimed at stimulating the economy, increased investors\u2019 propensity to seek higher returns through riskier investments. This phenomenon, known as \u201creaching for yield,\u201d has been well-documented in existing literature, with significant findings in both institutional (e.g., Di Maggio and Kacperczyk (2017); Andonov et al. (2017)) and individual investor behaviors (Lian et al., 2019).
4.2. Related Literature
Over the past decade, the cryptocurrency sector has evolved from an obscure asset class to a wildly popular investment. The market capitalization of cryptocurrencies experienced a meteoric rise from approximately US$7 billion at the end of 2015 to nearly US$3 trillion in November 2021, as per data from CoinMarketCap.com. This was followed by a significant contraction, in which the market capitalization plummeted to around US$1 trillion during the recent cryptocurrency winter. The market value of cryptocurrencies has been characterized by substantial volatility. In early 2023, the cumulative market capitalization witnessed a resurgence after the decline in 2022, which was precipitated by the Terra/Luna collapse and the FTX crisis. The TerraUSD collapse eradicated over $50 billion in value, as shown in Uhlig (2022), and the subsequent failure of FTX, the second-largest cryptocurrency exchange, intensified skepticism towards the industry, highlighting issues related to regulatory oversight. The dynamics of market capitalization in the cryptocurrency sector are indicative of the capital flow within this market. Although the cryptocurrency market\u2019s size is relatively small in comparison to the nearly $300 trillion global financial system, its impact cannot be underestimated. Historical precedents, such as the financial crisis of 2008, demonstrate that even a small segment of the financial sector can catalyze significant stability concerns. For instance, the subprime mortgage market, valued at approximately $1.2 trillion in 2008, was a key factor in the crisis (Pinto (2010)). This underscores the importance of understanding the implications of the cryptocurrency market\u2019s evolution, not just in terms of its size but also in the context of its potential impact on broader financial stability. In response to the growing prominence of cryptocurrencies, governments worldwide, including the United States, are examining how to regulate them.
A notable example is the executive order by the U.S. President on March 9, 2022, which mandates a comprehensive review of digital assets, including cryptocurrencies. This review, conducted by federal agencies, focuses on evaluating the impact of digital currencies on overall financial stability and other relevant factors. Building on the governmental efforts to regulate the burgeoning cryptocurrency sector, it is important to recognize that modeling and analyzing the cryptocurrency market, along with its interactions with other financial markets, presents a significant challenge. This complexity arises from the market\u2019s inherent volatility, its multifaceted nature, and the rapid pace of its evolution. Many cryptocurrency assets are characterized by a lack of clear fundamental value or cash flows, as noted by Makarov and Schoar (2020). These assets are prone to fragmentation, offering opportunities for arbitrage and being vulnerable to market manipulation, as indicated by studies like Griffin and Shams (2020) and Gandal et al. (2018). The crypto market\u2019s periods of rapid growth have drawn the attention of both retail and institutional investors, as well as politicians and regulators, as Auer and Tercero-Lucas (2022) points out. Additionally, fluctuations in the cryptocurrency market may increasingly align with other asset classes. Research by Chuen et al. (2017); Borri (2019); Petukhina et al. (2021) highlights a generally low correlation between cryptocurrencies and other asset classes. However, Iyer (2022) and Dong et al. (2023) present evidence of growing links between U.S. equity markets and the prices of Bitcoin and Ethereum, suggesting a deeper integration of crypto markets with the equity cycle. Specifically, Iyer (2022) and Dong et al. (2023) demonstrate that the correlation between Bitcoin and the S&P 500 was initially low but has significantly increased since 2020.
Most of these studies compare market indices, typically representing the stock market, with Bitcoin as the crypto market proxy. The methodologies in the aforementioned work are often basic, relying on visual inspection and correlation-based analyses. Such methods can be misleading due to their sensitivity to the chosen time frame and the distribution of the sample data. They generally assume linearity and a time-invariant distribution of model parameters, which may limit their accuracy. Our understanding of the primary factors driving crypto asset prices in this immature industry, as well as of the elements influencing the correlation between cryptocurrency and other markets, remains limited. The existing literature offers a variety of potential and compatible explanations for these dynamics. To elucidate why crypto assets may have become more correlated with other asset classes post-COVID, studies indicate an increase in retail trading during the pandemic lockdowns, encompassing both crypto and stock markets. This is highlighted in the research by Divakaruni and Zimmerman (2023) regarding crypto trading and Ozik et al. (2021) in the context of stock trading. Furthermore, Toczynski (2022) estimates that approximately US$15 billion of the federal stimulus checks were invested in trading crypto assets. The landscape for crypto asset trading evolved to accommodate increasing demand. Notably, well-known mobile payment applications like Revolut and PayPal, along with trading platforms such as Robinhood, began offering crypto trading services to their clients. April 2021 saw Coinbase, a prominent centralized crypto exchange, make its debut on the Nasdaq. Additionally, new investment vehicles like the Grayscale Bitcoin Trust emerged, providing investors with exposure to crypto assets without the need to hold the tokens directly. Amid these developments, institutional investor engagement in crypto assets also saw a rise. Research by Liu et al.
(2022) indicates a notable decrease in the volatility of crypto returns from 2011 to 2018. This trend suggests a diminishing risk factor, potentially making the crypto markets more appealing to institutional investors over time. Utilizing a supervisory database, Auer et al. (2023) highlights the increasing role of traditional financial intermediaries in the crypto sphere. They observe that banks\u2019 involvement in crypto assets, although still a small fraction of their total balance sheets, has grown and holds significant implications for a market that was once dominated largely by retail investors. However, the dynamics between crypto and stock markets may not be predominantly influenced by retail investors, as their trading patterns for these assets differ (Kogan et al. (2023)). It is posited that the post-COVID monetary policies of the Federal Reserve and the resulting response from institutional investors are more crucial factors. These investors, now more active in the crypto markets, tend to trade cryptocurrencies and other high-risk assets in a correlated manner. In a low-interest environment, it is logical for institutions to pursue higher returns by opting for riskier assets (Di Maggio and Kacperczyk (2017); Andonov et al. (2017)). This trend is underscored by findings from DeVault and Wang (2021), who note that investors in crypto assets often outperform their peers. Thus, the increasing institutional participation in crypto markets raises the possibility of heightened risk for spillovers into the broader economy. This is particularly relevant as these institutions\u2019 portfolios often include a mix of crypto and traditional assets, and they are less likely to rebalance their portfolios as frequently as retail investors.
Figure 9: The causal networks among the industries in different time periods. (a) Network of 3/6/2018 to 6/5/2018. (b) Network of 11/4/2019 to 2/5/2020. (c) Network of 9/3/2020 to 12/3/2020. (d) Network of 5/5/2022 to 8/5/2022.
4.3. DIG of the Industry Groups
We use this literature to explain the time-varying and dynamic nature of the interconnection between crypto and other assets in the financial network, and to investigate potential cross-sector spillovers. This approach enables us to indirectly track the flow of funds across different investment avenues, such as real estate, capital markets, the cryptocurrency market, and bank accounts. The insights gained from this model are valuable for policymakers and regulators. They can use this information to devise effective strategies and take actions that reduce systemic risk within financial networks. To detect the cross-sector causal relationships, we obtained the DIG between sectors by estimating the DIs in Equation (34) in a rolling-window moving block bootstrapping scheme (see Section 2.3) between 01/2018 and 12/2022. More precisely, we used rolling windows of size Trw = 63 (almost three months), LB = 3, and constructed batches of size 20. The windows are shifted r = 21 steps (about one month) into the future, which leads to 57 point estimates of the DIs. Figure 9 illustrates four of the learned networks. In these networks, nodes with labels [0, 1, 2, 3, 4, 5] denote the industries [Banks, Diversified Financial, Insurance, Real Estate Investment, Financial Technology, Crypto], respectively. More precisely, the causal influence from Banks to Real Estate Investment is I(0 \u21923||1, 2, 4, 5), which is computed at 57 different time periods between 2018 and 2022. For instance, as depicted in Figure 9(a), this influence between 3/6/2018 and 6/5/2018 is 1.15, which is shown by a weighted edge from node 0 to node 3.
Figure 10: A selection of the resulting plots of the time-varying influences between the industries.
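The rolling-window arithmetic above can be checked with a few lines. The total number of trading days below is an assumption, chosen to be consistent with the reported 57 point estimates; the scheme itself (window length 63, shift 21) is from the text.

```python
# Rolling-window scheme: windows of Trw = 63 trading days (~3 months),
# shifted r = 21 days (~1 month) into the future.
Trw, r = 63, 21
T_total = 1239  # assumed number of trading days between 01/2018 and 12/2022

# Start index of each window; every window must fit inside the sample.
starts = list(range(0, T_total - Trw + 1, r))
n_windows = len(starts)
```

With these values the last window ends exactly at the final observation, and the scheme yields 57 windows, matching the 57 point estimates of the DIs.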
In particular, node 5, which represents Crypto, has several outgoing edges, which is an indication that the cryptocurrencies have non-negligible impacts on other industries. Please note that the thickness of the edges is proportional to the estimated DIs, i.e., the larger the causal influence from one node to another is, the thicker the edge between the two nodes will be. Furthermore, the size of the nodes is proportional to their centrality, that is, the ratio of their outgoing influences to the overall influences. Table 1 reports the top three influential assets in each industry group for four time windows, that is, arg max_{Aj \u2208 A} C_{A\u2192B}(R_{Aj}), where C_{A\u2192B}(R_{Aj}) is defined in (35).
Table 1: Top three influential assets in each industry group for four time windows (3/6/2018 to 6/5/2018; 11/4/2019 to 2/5/2020; 9/3/2020 to 12/3/2020; 5/5/2022 to 8/5/2022).
Banks: JPMorgan, PNC, M&T Bank; M&T Bank, US Bancorp, Comerica Inc.; PNC, Zions, JPMorgan; PNC, JPMorgan, M&T Bank.
Div. Fin.: Morgan St., American Exp., BlackRock; BlackRock, Morgan St., Ameriprise Fin.; MSCI, American Exp., Discover Fin.; Goldman Sachs, Nasdaq, Morgan St.
Insurance: Allstate, Chubb Lim., Marsh & McLennan; Travelers Co., Allstate, Marsh & McLennan; Allstate, American Int., Progressive Co.; Prudential Fin., Employers Holdings, Chubb Lim.
Real Est.: Prologis Inc., Equity Residential, Federal Realty Inv.; Prologis Inc., Equinix Inc., Simon Property; Equity Residential, Extra Space Storage, SL Green; American Tower, Welltower, AvalonBay.
Fin. Tech.: Mastercard, Verisk Analytics, Visa; Fair Isaac, Global Pay., Mastercard; Fair Isaac, CoStar, Verisk Analytics; Thomson Reuters, Visa, Euronet.
Crypto: BTC-USD, ETH-USD, DOGE-USD; ETH-USD, BTC-USD, BNB-USD; ETH-USD, LTC-USD, DOGE-USD; BTC-USD, ETH-USD, BNB-USD.
For a better visualization of the time-varying causal influences, we present a selection of estimated DIs between 01/2018 and 12/2022 in Figure 10. The x-axis of this figure presents the beginning of the 57 time periods in which the DIs defined in (34) are estimated. Note that each time period is about three months, and the y-axis presents the estimated values of the DIs. In the top right of Fig. 10, we observe that our method detects a heightened level of interaction between cryptocurrencies and assets in other industry groups during the first half of 2020. This increase coincides with the COVID-19 crisis and the Federal Reserve\u2019s response to it. Additionally, notable periods in 2022 are aligned with significant events, such as the collapse of Terra in May 2022, which erased $50 billion in valuation, and the FTX crisis in November 2022. The plot illustrates the influence of assets from other industry groups on cryptocurrencies, where all groups exhibit almost similar patterns. To quantify this influence, we applied a threshold to include only significant influences. Generally, as observed in the plot, the influence from assets in other industry groups on cryptocurrencies was comparatively lower before 2020. For the complete list of the results, please see Appendix D.4. In the top left of Fig. 10, we observe a surprising deviation from existing studies, which predominantly suggest that the influence of cryptocurrencies on the stock market was insignificant before 2020. These studies often focus on Bitcoin and Ethereum, representing the crypto market, and their interactions with stock market indices using relatively simplistic models. However, our results reveal a different scenario when we concentrate on a specific segment of the market, including only assets in financial sectors, and incorporate a broader range of crypto assets. We found that, even before 2020, there were significant influences from cryptocurrencies to these industry groups, particularly Banks and Diversified Finance, indicative of the capital market. In 2020, an increase in the influence of cryptocurrencies on Banks and other sectors was observed.
Although our study did not focus on other sectors and industries, similar analyses could be applied to them. Given these findings, we can assert that cryptocurrencies, as a new sector, have the potential to pose systemic risks within financial networks. These changes in influence can be tracked using our time-varying approach.
4.4. DIG of the Assets
An alternative measure of the aggregated influences between different sectors and industry groups is the number of interconnections between the assets of those industries, for instance, the number of edges from assets of Financial/Banks to assets of Financial/Insurance. Such a measure has been used in various works, such as Billio et al. (2012). This requires the complete causal network between all 124 assets. Hence, we inferred the DIGs of the assets by estimating the time-varying causal network among all 124 assets, i.e., the DIs in (37), in a rolling-window moving block bootstrapping scheme at 57 different time instances within the time horizon between 01/2018 and 12/2022.
Figure 11: Number of connections over time between or within industry groups.
Estimating the complete network presents the challenge of high dimensionality. As discussed in Section 2.2, to determine the influence of node i on node j at time t, it is necessary to estimate It,S(Ri \u2192Rj || R\u2212{i,j}). In this study, this required estimating a joint distribution with a dimension of 124. Generally, such non-parametric estimation, without any knowledge or assumption about the underlying distribution, demands a substantial volume of independent samples, which were limited in our empirical study. To address this, we approximated the DIs by dimension reduction: instead of conditioning on the entire set R\u2212{i,j} with 122 elements, we conditioned on a smaller subset S\u2032_j \u2282 R\u2212{i,j} with only 10 elements, comprising the assets most correlated with Rj.
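The correlation-based screening just described might be sketched as follows; `conditioning_set` is a hypothetical helper name, and the toy return matrix is fabricated for illustration (with one asset deliberately tracking another).

```python
import numpy as np

def conditioning_set(returns, i, j, k=10):
    """Approximate conditioning set for estimating the DI from R_i to R_j:
    the k series (excluding i and j) most correlated, in absolute value,
    with R_j. `returns` is a T x N matrix of asset returns."""
    corr = np.corrcoef(returns, rowvar=False)
    candidates = [m for m in range(returns.shape[1]) if m not in (i, j)]
    candidates.sort(key=lambda m: abs(corr[m, j]), reverse=True)
    return candidates[:k]

# Toy check with 6 hypothetical assets, where asset 3 closely tracks asset 1.
rng = np.random.default_rng(0)
R = rng.standard_normal((200, 6))
R[:, 3] = R[:, 1] + 0.05 * rng.standard_normal(200)

S_prime = conditioning_set(R, i=0, j=1, k=2)
```

In this toy example the screened set places asset 3 first, since it is by construction the series most correlated with asset 1; in the paper's setting the same ranking is applied over the 122 remaining assets with k = 10.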
Specifically, we ranked institutions in R_{−{i,j}} by their correlation with R_j and selected the top 10 for the conditioning set. This approximation is based on the fact that the conditioning set R_{−{i,j}} can be reduced to only the set of all direct causes of R_j within R_{−{i,j}}. By postulating that these causes are among the assets most highly correlated with R_j, we obtained the approximation. Figure 11 presents the resulting number of connections within and between different industry groups and Crypto. For instance, the curve labeled 'into Banks' shows the number of edges from assets that are not categorized as Financial/Banks to assets that are categorized as Financial/Banks. The curve labeled 'within industries' shows the number of edges whose end nodes both belong to the same industry, e.g., edges between assets of Insurances or edges between assets of Cryptos. The curve 'between industries' is the total number of edges from one industry to another industry. Comparing the results in Figure 11 and Figure 10, we observe similarities. For instance, as illustrated in Figure 10 (top right), there is a higher amount of influence from other industries on Crypto during 11/4/2019 to 11/3/2020; similarly, in this time window, the number of edges coming into Crypto assets from other assets is higher according to the 'into crypto' curve in Figure 11. 5." + }, + { + "url": "http://arxiv.org/abs/2005.12360v1", + "title": "Non-cooperative Multi-agent Systems with Exploring Agents", + "abstract": "Multi-agent learning is a challenging problem in machine learning that has\napplications in different domains such as distributed control, robotics, and\neconomics. We develop a prescriptive model of multi-agent behavior using Markov\ngames. 
Since in many multi-agent systems, agents do not necessarily select their\noptimum strategies against other agents (e.g., multi-pedestrian interaction),\nwe focus on models in which the agents play \"exploration but near optimum\nstrategies\". We model such policies using the Boltzmann-Gibbs distribution.\nThis leads to a set of coupled Bellman equations that describes the behavior of\nthe agents. We introduce a set of conditions under which the set of equations\nadmits a unique solution and propose two algorithms that provably provide the\nsolution in finite and infinite time horizon scenarios. We also study a\npractical setting in which the interactions can be described using the\noccupancy measures and propose a simplified Markov game with less complexity.\nFurthermore, we establish the connection between the Markov games with\nexploration strategies and the principle of maximum causal entropy for\nmulti-agent systems. Finally, we evaluate the performance of our algorithms via\nseveral well-known games from the literature and some games that are designed\nbased on real world applications.", + "authors": "Jalal Etesami, Christoph-Nikolas Straehle", + "published": "2020-05-25", + "updated": "2020-05-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction A multi-agent system can be defined as a group of autonomous agents that are interacting in a common environment. Due to their rich ability to model complex dynamics, multi-agent systems are rapidly finding applications in different fields such as autonomous robotics, telecommunications, distributed control, and economics. Although the behavior of agents in a multi-agent system can be predefined in advance, it is often necessary that they explore new behaviors to gradually improve their performance. Another reason that makes the a priori design of a good strategy even more difficult is that many multi-agent systems contain humans as agents. 
In such systems, modeling agents' preferences for selecting their strategies is often complex. Stochastic games (SGs) have been used for modeling multi-agent systems. However, most of the existing works consider fully cooperative scenarios Wei and Luke [2016] or settings in which particular communication between the agents is possible. Considering systems such as autonomous cars and their interactions with pedestrians, it is clear that some multi-agent systems are partially cooperative or even competitive, and in many situations, no communication links can be established between the agents. More importantly, in these systems, agents select their strategies knowing that the other agents also select their strategies with the same level of awareness. Another important property of such systems is that the agents do not always select their best-response strategies. This is related to Quantal response equilibrium (QRE), which is a smoothed-out best response, in the sense that agents are more likely to select better strategies than worse strategies McKelvey and Palfrey [1995]. This idea also has its origins in statistical limited dependent variable models, such as in economics McKelvey and Palfrey [1998], Goeree et al. [2002], in psychology, and in biology Palfrey [2016]. Figure 1: A multi-agent system in which the goals of players A and B are to reach a and b, respectively. As an example, consider the grid game between two players {A, B} in Figure 1. Each player can choose to stay at its current position or move to one of its adjacent neighbors. Players A and B want to reach their destinations at a and b, respectively, and avoid collision. For simplicity, assume the players are aware of each other's goals, i.e., A knows that B wants to go to b and vice versa, but they cannot communicate and they only get to play this game once. 
In the real world, pedestrians (players) may face similar situations, and they can reach their goals with a small chance of collision by selecting a set of near-optimal strategies that consider the behavior of the other pedestrians. This work aims to mimic the logical abilities of humans by building a method that reasons about the anticipated learning of the other agents and selects an exploratory but near-optimum mixed strategy. Overall, the proposed model in this work can be categorized as a prescriptive, non-cooperative agenda according to Shoham et al. [2007]. In our modeling, we take advantage of game theory and reinforcement learning. Previous works have also attempted to model similar systems using different models such as social forces Helbing and Molnar [1995], potential fields Alahi et al. [2014], flow fields Ali and Shah [2008], fictitious games Ma et al. [2017], and others Kretzschmar et al. [2014], Huang et al. [2015], Park et al. [2016], Jain et al. [2015], Carmel and Markovitch [1999]. Yet these works operate either in static environments or in dynamic environments with short-term prediction, and they do not address the complex interactions among the agents. Contributions: We propose a game-theoretic model in which all agents select mixed strategies that are distributed according to the Boltzmann distribution. In this case, the strategies of the agents depend on their Q-functions and, simultaneously, the Q-functions depend on the strategies. • We introduce a set of coupled Bellman-type equations describing the Q-functions of the agents and show that, under some conditions, this set of equations has a unique solution. In other words, we introduce a set of assumptions under which there exists a unique QRE for our Markov game. • Assuming that the agents are aware of each other's goals (Footnote 1), we propose two algorithms to obtain the solutions in finite and infinite time horizon settings. 
• We study a practical setting in which the occupancy measures of the agents can capture their interactions, and propose a forward-backward algorithm to obtain their behavior with less complexity compared to the general setting. • We establish the connection between our model and the principle of maximum causal entropy. This result can be used to develop an algorithm in which each agent can simultaneously infer the goals of the others and its own strategy. Footnote 1: Goals can be interpreted differently depending on the problem. In this context, we mean the reward (utility) functions. 2 Related Works In this section, we review some related multi-agent reinforcement learning (MARL) algorithms. For a more comprehensive review, please see Busoniu et al. [2008], Albrecht and Stone [2018]. Depending on the overall goal of the agents, MARL methods can be divided into cooperative or non-cooperative. Cooperative SGs are the games in which agents have the same reward function and the learning goal is to maximize the common discounted return. The learning algorithms in such SGs are typically derived from a model-free algorithm known as Q-learning Lauer and Riedmiller [2000], Greenwald et al. [2003], Bowling [2003], Hu and Wellman [2003]. A combination of cooperative and competitive Q-learning, called friend-or-foe (FOF), was developed in Littman [2001a]. The convergence of these algorithms is based on several strong assumptions, which may not be realistic in real problems. For instance, either every stage game during learning has a Nash equilibrium, or every stage game has a Nash equilibrium that is beneficial for the other agents while the learner does not benefit from deviating from this equilibrium Bowling [2000]. Such a requirement is satisfied in a small class of problems. However, our model is not necessarily cooperative, and we show that if the norms of the reward functions are bounded, our model has a unique equilibrium. The work in Guestrin et al. 
[2002] develops an approach that simplifies the coordination among the agents when the Q-function can be decomposed additively into local functions such that each local function only depends on the actions of a subset of agents. Works in Kok et al. [2005], Kok and Vlassis [2005] study conditions under which such a decomposition of an optimal joint action can be guaranteed. The Team Q-learning algorithm is another type of learning algorithm for cooperative SGs, and it is based on the assumption that the agents have unique optimal joint actions. Therefore, they are able to learn the common Q-function in parallel Littman [2001b]. However, this is rarely the case in real-world problems. Agent-tracking algorithms estimate models of the other agents' strategies and respond to them accordingly. For example, the joint action learners in Claus and Boutilier [1998] use empirical models of the other agents' strategies, i.e., agent i's model of j's strategy is defined as π_j^i(a_j) := C_j^i(a_j) / \sum_a C_j^i(a), where π_j^i(a_j) is agent i's empirical model of agent j's strategy and C_j^i(a_j) denotes the number of times agent i observed agent j taking action a_j. On the other hand, the FMQ algorithm in Kapetanakis and Kudenko [2002] keeps track of only those actions that yielded good rewards in the past. Similar to our work, agents in FMQ use Boltzmann action selection. However, it only works for deterministic dynamics. Moreover, FMQ increases the Q-values of only those actions that produced good rewards in the past. This pushes the agent towards coordination. Algorithms based on fictitious play Ma et al. [2017], Conitzer and Sandholm [2007], the MetaStrategy algorithm Powers and Shoham [2005], and Hyper-Q learning Tesauro [2004] are other related examples. The AWESOME algorithm in Conitzer and Sandholm [2007] uses fictitious play and switches from the best response in fictitious play to a pre-computed Nash equilibrium. 
The work in Hernandez-Leal and Kaisers [2017] directly models the distribution over opponents. However, addressing the dynamic behavior of the opponent is missing in this work. The heuristic algorithm in Ma et al. [2017] also uses the fictitious play approach. Analogous to our work, it models the policies of the agents with a Boltzmann distribution. However, unlike our algorithm, during the forecast of the agents' policies, the learning algorithm in Ma et al. [2017] is agnostic to the fact that each agent also selects its policy by forecasting its opponents' policies, and it lacks theoretical analysis. The authors in Wang and Sandholm [2003] propose an optimal adaptive learning algorithm for team Markov games in which each agent solves a virtual game that is constructed on top of each stage game. Hence, each agent requires empirically estimating a model of the SG, a model of the other agents, and an optimal value function for the SG. Under some conditions, it converges to a coordinated optimal joint action Wunder et al. [2010]. We use a similar approach in the sense that each agent solves a virtual game to find its policy, but in a non-stationary and non-cooperative setting and without empirical estimation of the SG. Works that use policy search methods in the multi-agent setting are the alternative to Q-learning-based algorithms. Generalized IGA Zinkevich [2003] and GIGA-WoLF Bowling [2005] are two such algorithms. However, unlike the setting in this work, both are designed for two-agent, two-action games. Deep neural networks have also been used in MARL problems Vinyals et al. [2019]. Most deep-MARL algorithms are also developed for fully cooperative settings Omidshafiei et al. [2017], Foerster et al. [2017a] and emergent communication Foerster et al. [2016], Sukhbaatar et al. [2016]. The work in Leibo et al. [2017] considers general-sum settings with independent learners, and Lowe et al. 
[2017] proposes a centralized actor-critic architecture for efficient training in mixed environments. However, they do not reason about the other agents' behaviors. Heinrich and Silver [2016], Lanctot et al. [2017] use the principle of best-response algorithms. However, such best responses may not be desired or executed by the agents. Analogous to the LOLA algorithm Foerster et al. [2017b] and the algorithm based on generative adversarial networks Metz et al. [2016], our algorithm makes no assumptions about cooperation among the agents and simply assumes that each agent is maximizing its own return. However, LOLA is developed by approximating the value function of a two-player game in which the policy of each player is parameterized with a recurrent neural network. The algorithm in Metz et al. [2016] relies on an end-to-end differentiable loss function, and thus does not work in the general RL setting. 3 Preliminaries Markov Decision Process: A Markov decision process (MDP) is specified by a tuple (X, A, P_0, P, R). The set of states X can be continuous or discrete, but in this work we consider a discrete state space and a finite set of actions A. The initial distribution P_0 describes the initial state x(0). The transition probabilities are denoted by P(x(t+1)|x(t), a(t)), the probability of transitioning to state x(t+1) after selecting action a(t) at state x(t). The agent gets R(x, a) ∈ R as a reward for selecting action a at state x. A policy π(a|x) is a conditional distribution that specifies how an agent selects its actions at state x. Stationary policies do not depend on the time step. The agent's goal in an infinite-time horizon setting is to maximize, at each time step k, the expected discounted return E[ \sum_{τ≥0} γ^τ R(x(τ+k), a(τ+k)) ], where γ ∈ [0, 1) is the discount factor and the expectation is taken over the probabilistic state transitions. 
The Q-function Q^π : X × A → R for a given policy π is defined as Q^π(x, a) := E[ \sum_{τ≥0} γ^τ R(x(τ), a(τ)) | x(0) = x, a(0) = a, π ]. The optimal Q-function is defined as Q*(x, a) = max_π Q^π(x, a), and it is characterized by the Bellman optimality equation: Q*(x, a) = R(x, a) + γ \sum_{x'} P(x'|x, a) V*(x'), (1) where V*(x') = max_a Q*(x', a). A broad spectrum of single-agent RL algorithms exists, e.g., model-based methods based on dynamic programming Bertsekas [1995] and model-free methods based on online estimation of value functions. Markov Game: A natural extension of an MDP to multi-agent environments is Markov games Thuijsman [1992]. Markov games are a special case of stochastic games (SGs), which are defined by a tuple (X_1, ..., X_M, A_1, ..., A_M, P_0, P, R_1, ..., R_M), where M is the number of agents, and X_i, A_i, and R_i are the state space (Footnote 2), action space, and reward function of agent (player) i, respectively. In this work, we assume that all agents share the same action space A. We denote the state of agent i at time τ by x_i(τ), and the state of its opponents by \vec{x}_{-i}(τ). We also denote the policy of agent i at time τ by π_i^τ(a|\vec{x}) : X_1 × ... × X_M × A → R. Stationary policies are denoted analogously but without the time superscript. When R_i = R for all i ∈ [M] := {1, ..., M}, the agents have the same goal and the SG is cooperative. 4 Markov Game with Exploration Strategies 4.1 Infinite-time Horizon As mentioned, most of the existing works consider scenarios in which the agents play their best strategies against their opponents' best or random strategies. However, in different settings, such as human movements in a crowd, the behaviors (strategies) are not necessarily optimal but rather close to optimal. 
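The Bellman optimality recursion (1) from the preliminaries can be illustrated with a toy tabular value-iteration sketch. The array layout (P[x, a, x'] for transitions, R[x, a] for rewards) and the function name are hypothetical, not from the paper:

```python
import numpy as np

def q_value_iteration(P, R, gamma=0.9, iters=500):
    """Iterate Q(x,a) <- R(x,a) + gamma * sum_x' P(x'|x,a) * max_a' Q(x',a').

    P: transition tensor of shape (S, A, S); R: reward matrix of shape (S, A).
    Returns an approximation of the optimal Q-function Q*.
    """
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)        # V*(x') = max_a Q*(x', a)
        Q = R + gamma * (P @ V)  # Bellman optimality backup, batched over (x, a)
    return Q
```

The contraction property of this backup (for γ < 1) is the single-agent analogue of the contraction arguments used later for the coupled multi-agent operators.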
In this work, we assume that all agents select their mixed strategies such that actions with higher Q-values (Footnote 3) are selected with higher probabilities (a Boltzmann distribution), i.e., π_i(a_i|\vec{x}) ∝ exp(β Q_i(\vec{x}, a_i)), for i ∈ [M], (2) where β > 0 is known as the inverse temperature. This particular form of policy allows us to model the behavior of exploring agents (e.g., humans) in a common environment. The interactions between the agents are encoded in their Q-functions. We define the Q-function of agent i ∈ [M] in the infinite-time horizon setting as follows: Q_i(\vec{x}, a_i) = R_i(\vec{x}, a_i) + γ E_{π_{-i}}[V_i(\vec{x}')|\vec{x}, a_i]. (3) In the above equation, \vec{x} denotes (x_i, \vec{x}_{-i}), and the expectation is taken over the probabilistic state transitions and the strategies of i's opponents (−i := [M] \ {i}). The first term in (3) is the individual reward of agent i, representing the immediate effect of action a_i at state \vec{x}. The second term encodes the future effect of the selected action a_i, considering the behavior of i's opponents. In this equation, V_i denotes the value-function, V_i(\vec{x}') := E_{π_i}[Q_i(\vec{x}', a')] = \sum_{a'} π_i(a'|\vec{x}') Q_i(\vec{x}', a'). (4) Since in (2) the agents select their actions based on the state of the game, the joint policy of the agents can be factorized as π_{[M]}(\vec{a}|\vec{x}) = \prod_{j∈[M]} π_j(a_j|\vec{x}). (5) It is worth noting that Lauer and Riedmiller [2000] define the Q-function by assuming that the opponents of agent i select their best actions (taking a maximum over the actions instead of the expectation in (3)). But in this work, analogous to Ma et al. [2017], we assume that all the agents select their actions according to the Boltzmann distribution. Remark 1. 
An alternative definition of the value-function is to use softmax (Footnote 4) instead of the expectation in (4) Ziebart [2010], Zhou et al. [2018]. The idea of using softmax is to approximate the max function in the definition of the Bellman equation (1). However, when the Q-function does not vary too much across actions, softmax(Q) will have a bias term of order log |A|, where |A| denotes the number of actions (Footnote 5). On the other hand, the value-function in (4), which is also an approximation of max_{a'} Q(x', a'), does not have this issue. Footnote 2: It is common to represent the state of the game with a variable x, but without loss of generality, herein we describe the state of the game using \vec{x}. Footnote 3: It represents the quality of an action in a given state. Footnote 4: It is given by softmax_x f(x) := log \sum_x exp(f(x)). Footnote 5: Consider the following vector of length n, \vec{v} = [z, ..., z]; then softmax(\vec{v}) = z + log n, but max(\vec{v}) = z. Equations (2)-(5) imply a set of M coupled equations describing the relationships between the Q-functions/policies of the agents. More precisely, Equation (3) can be written as Q_i = R_i + γ E_{Exp{Q_{-i}}}[E_{Exp{Q_i}}[Q_i]] := T_i(Q_{-i}, Q_i), for i ∈ [M]. (6) In the above equations, we removed all the arguments only for a simpler representation, and Exp{Q_i} denotes the policy of the ith player given in (2). This model describes the behavior (policy selection) of a set of interacting agents that do not always select their best responses but rather near-optimal responses. This raises two main questions that we address in this section: does the system of equations in (6) admit a unique solution? If so, how can agent i obtain its policy? The next result shows that if the reward functions are bounded, the T_i are contraction mappings. Therefore, the equations in (6) admit a unique solution. A sketch of the proof is provided in Section 9. Theorem 1. Assume that max_i ||R_i||_∞ ≤ (1−γ)² / (2γMβ). 
Then, for every i ∈ [M], T_i is a contraction mapping. The assumption of Theorem 1 is not restrictive, as one can ensure it is satisfied by simply scaling the reward functions. However, this may not be the case in some scenarios. Later, in Section 7, we introduce an alternative that relaxes this assumption to incorporate rewards with higher norms. Since the {T_i} are contraction mappings, one can find the solution of (6) using a value iteration algorithm. Here, we present MGE-I in Algorithm 1, which can ε-approximate the solution of (6).
Algorithm 1 (MGE-I). Input: {R_j}, ε
Initialize: \vec{Q}^0, \vec{Q}^1, s = 0
while max_j ||Q_j^{s+1} − Q_j^s|| ≥ ε do
  for j ∈ −i do
    Q_j^{s+1} ← T_j(Q_{-j}^s, Q_j^s)
  end for
  Q_i^{s+1} ← T_i(Q_{-i}^{s+1}, Q_i^s)
  s ← s + 1
end while
4.2 Finite-time Horizon In the finite-horizon setting, the problem is slightly different. This is because of non-stationary policies and the absence of a discount factor γ in the definition of returns. More precisely, in our model, the Q-function of the ith agent at time τ ∈ [0, T] is defined by Q_i^τ(\vec{x}, a_i) := R_i(\vec{x}, a_i) + E_{π_{-i}^τ}[V_i^{τ+1}(\vec{x}')|\vec{x}, a_i], (7) V_i^{τ+1}(\vec{x}') = E_{π_i^{τ+1}}[Q_i^{τ+1}(\vec{x}', a')], with the boundary condition V_i^T(\vec{x}) = R_{i,F}(\vec{x}), the final reward of agent i. Analogous to the infinite-time horizon setup, the expectation in (7) depends on the policies of the other agents, which we assume are all distributed according to the Boltzmann distribution. Thus, Equation (7) can be written as Q_i^τ = U_i(Q_{-i}^τ, V_i^{τ+1}), for i ∈ [M], (8) where U_i denotes the right-hand side of Equation (7). Similar to the result of the infinite-time horizon section, we have the following. Theorem 2. 
If max_i {||R_i||_∞, ||R_{i,F}||_∞} ≤ 1 / (2β(M−1)(1+T)), then U_i is a contraction mapping with respect to its first argument. This result shows that the system of equations in (8) admits a unique set of solutions {\vec{Q}^0, ..., \vec{Q}^T}, and Algorithm 2 is able to ε-approximate the solutions.
Algorithm 2 (MGE-F).
1: Input: {R_j, R_{j,F}}, T, ε
2: for j = 1, ..., M do
3:   V_j^T ← R_{j,F}
4: end for
5: Initialize: \vec{Q}^0, \vec{Q}^1
6: for κ = T−1, ..., 0 do
7:   s = 0
8:   while max_j ||Q_j^{s+1} − Q_j^s|| ≥ ε do
9:     for i = 1, ..., M do
10:      Q_i^{s+1} ← U_i(Q_{-i}^s, V_i^{κ+1})
11:    end for
12:    s ← s + 1
13:  end while
14:  \vec{Q}^0 ← \vec{Q}^s
15:  for j = 1, ..., M do
16:    \hat{Q}_j^κ ← U_j(Q_{-j}^0, V_j^{κ+1})
17:    V_j^κ ← E_{π_j^κ}[\hat{Q}_j^κ]
18:  end for
19: end for
5 Special Setting In many applications, the interactions among the agents can be summarized as colliding or not colliding (merging or not merging into the same state) in a common environment. An example of a non-colliding scenario is a set of self-driving cars on a highway, and an example of a colliding scenario is two robots exchanging their loads in a warehouse. The need for such a model is to reduce the complexity of the inference algorithms of the previous section. In this section, we introduce a simplified Markov game that can model such scenarios. This game has lower complexity and requires less memory compared to the games in Section 4. For the rest of this section, we assume that all agents have the same state space X. The main idea of this model is to assume that the reward function of agent i in (7) can be factorized into two terms: the individual goal of the agent, which unlike R_i(\vec{x}, a_i) in (7) depends only on (x_i, a_i), and another term that encodes the interactions between i and its opponents. The latter term should depend only on the existence probability of i's opponents at state x_i. 
Therefore, we define the occupancy measure O_j^τ(x_i) : X → [0, 1] to denote the likelihood of agent j being at state x_i at time τ. With this assumption, we redefine the Q-function of agent i at time τ, which now depends only on i's state and action. More precisely, we define \tilde{Q}_i^τ(x_i, a_i) := R_i(x_i, a_i) + Ψ(O_{-i}^τ(x_i)) + E[\tilde{V}_i^{τ+1}(x)|x_i, a_i], (9) where O_{-i}^τ(x_i) := {O_1^τ(x_i), ..., O_M^τ(x_i)} \ {O_i^τ(x_i)}, and Ψ(·) is a functional (Footnote 6). The expectation is taken over the probabilistic state transitions. The value-function is defined similarly to (7): \tilde{V}_i^{τ+1}(x) := E_{\tilde{π}_i^{τ+1}}[\tilde{Q}_i^{τ+1}(x, a)], (10) and \tilde{V}_i^T(x) = R_{i,F}(x), where R_{i,F}(x) denotes the final reward of agent i at state x. Footnote 6: As an example, in the self-driving car scenario, a possible choice for Ψ(O_{-i}^τ(x_i)) is −\sum_{j∈−i} µ_j O_j^τ(x_i), where µ_j > 0. Figure 2: Left: Graphical model demonstrating the dependencies between variables in Algorithm 3 with two agents {i, j} and T = 2. Here, k denotes the kth iteration of the algorithm. Right: State space and possible actions of the pursuit game. Red and black arrows, including a self-loop for every node, indicate the possible actions of the hunter and the prey, respectively. The policy \tilde{π}_i^τ = Exp{\tilde{Q}_i^τ} is defined similarly to (2). The above equations lead to \tilde{Q}_i^τ = R_i + Ψ(O_{-i}^τ) + E_{Exp{\tilde{Q}_i^{τ+1}}}[\tilde{Q}_i^{τ+1}] := B_i(O_{-i}^τ, \tilde{Q}_i^{τ+1}), for i ∈ [M]. (11) This equation describes the dependency between the Q-functions and the occupancy measures. 
On the other hand, the occupancy measure can be written recursively as O_i^{τ+1}(x) = \sum_{x',a} \tilde{π}_i^τ(a|x') P(x|x', a) O_i^τ(x') = E_{Exp{\tilde{Q}_i^τ}}[O_i^τ] := G_i(O_i^τ, \tilde{Q}_i^τ), for i ∈ [M], (12) with the boundary condition O_i^0 = δ_i, where δ_i denotes the delta function that is zero everywhere except at the current state of agent i. Assuming that agents are aware of each other's goals (the reward functions {R_j, R_{j,F}}_{j∈[M]} are known to all the agents), we propose a forward-backward algorithm, Algorithm 3, that outputs the Q-functions of the agents. In the forward pass of Algorithm 3, the occupancy measures {O_i^τ} are updated using (12); afterward, they are used in the backward pass to update the Q-functions via (11). Figure 2 demonstrates the dependencies between the variables of a two-player system with T = 2.
Algorithm 3 (MGE-FB).
1: Input: {R_j, R_{j,F}}, T, K
2: Initialize: {\tilde{Q}_j^0}, ..., {\tilde{Q}_j^T}
3: for j = 1, ..., M do
4:   O_j^0 ← δ_j, \tilde{Q}_j^{T+1} ← R_{j,F}
5: end for
6: for k = 1, ..., K do
7:   Forward pass:
8:   for τ = 1, ..., T and j = 1, ..., M do
9:     O_j^τ ← G_j(O_j^{τ−1}, \tilde{Q}_j^{τ−1})
10:  end for
11:  Backward pass:
12:  for τ = T−1, ..., 0 and i = 1, ..., M do
13:    \tilde{Q}_i^τ ← B_i(O_{-i}^τ, \tilde{Q}_i^{τ+1})
14:  end for
15: end for
The next result introduces a condition under which Algorithm 3 converges to a set of unique Q-functions as the number of iterations K tends to infinity. To establish this result, we assume that there is a constant L (Footnote 7) such that for any pair of occupancy measures O, \bar{O} : X → [0, 1] and for all i ∈ [M], ||Ψ(O_{-i}) − Ψ(\bar{O}_{-i})||_∞ ≤ L max_{j∈−i} ||O_j − \bar{O}_j||_1. Furthermore, we assume that there exist constants ω and φ such that max_i {||R_i||_∞, ||R_{i,F}||_∞} ≤ ω and ||Ψ||_∞ ≤ φ, and let ξ := (T + 1)(ω + φ). 
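A minimal sketch of the Boltzmann policy (2) together with one forward step of the occupancy recursion (12), for a single agent with tabular Q-values; the function names and array layouts (Q[x, a], P[x, a, x']) are illustrative assumptions, not the authors' code:

```python
import numpy as np

def boltzmann_policy(Q, beta=1.0):
    """pi(a|x) proportional to exp(beta * Q(x, a)); Q has shape (S, A).
    The max is subtracted before exponentiating for numerical stability."""
    z = np.exp(beta * (Q - Q.max(axis=1, keepdims=True)))
    return z / z.sum(axis=1, keepdims=True)

def propagate_occupancy(O, policy, P):
    """One step of the recursion: O'(x) = sum_{x',a} pi(a|x') P(x|x',a) O(x').

    O: occupancy vector of shape (S,); policy: (S, A); P: (S, A, S).
    """
    S, A, _ = P.shape
    O_next = np.zeros(S)
    for xp in range(S):
        for a in range(A):
            O_next += policy[xp, a] * O[xp] * P[xp, a]
    return O_next
```

Iterating `propagate_occupancy` from the delta initialization O^0 = δ_i yields the sequence of occupancy measures consumed by the forward pass of a forward-backward scheme like Algorithm 3.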
Theorem 3. Under the above assumptions, Algorithm 3 converges as K tends to infinity when 2LT ≤ ξ exp(−β(T + 1)ξ). 6 Multi-agent Maximum Causal Entropy In the above formulation of the Markov games, it is assumed that the agents are aware of all the reward functions. However, this may not be the case in some problems. In order to relax this assumption, we establish the connection between Markov games with exploration strategies and the principle of maximum causal entropy (MCE). Then, we show how this connection can be used to relax the aforementioned assumption. The principle of MCE prescribes a policy by maximizing the entropy of a sequence of actions causally conditioned on sequentially revealed side information Ziebart [2010]. The original formulation of MCE is for the single-agent setting, but a multi-agent extension of MCE was introduced in Ziebart et al. [2011]. The problem of MCE in the multi-agent setting is as follows:
max_{π_i^τ} H(\vec{a}||\vec{x}) := −E_{\vec{a},\vec{x}}[ \sum_{τ≤T} log π^τ(\vec{a}(τ)|\vec{x}(τ)) ],
s.t. E_{a_i,\vec{x}}[F_i(\vec{x}, a_i)] = \hat{E}_{a_i,\vec{x}}[F_i(\vec{x}, a_i)], for all i,
\sum_{a_i(τ)∈A} π_i^τ(a_i(τ)|\vec{x}(τ)) = 1, for all τ, i, \vec{x}(τ),
π_i^τ(a_i(τ)|\vec{x}(τ)) ≥ 0, for all τ, i, a_i(τ), \vec{x}(τ). (13)
The first constraint ensures that for any agent i, the expectation of the feature function F_i = (F_{i,1}, ..., F_{i,N_i}) matches its empirical mean \hat{E}_{a_i,\vec{x}}[F_i(\vec{x}, a_i)]. The feature functions are revealed as side information to all agents. The next results describe the solution of (13). Theorem 4. 
The solution of (13) is recursively given by
π_i^τ = (1/Z_i(τ)) exp( W_i^τ(\vec{x}(τ), a_i(τ)) ),
W_i^τ(\vec{x}(τ), a_i(τ)) = <θ_i, F_i(\vec{x}(τ), a_i(τ))> + E_{π_{-i}^τ}[log Z_i(τ+1)],
log Z_i(τ) = softmax_{a'∈A} W_i^τ(\vec{x}(τ), a'),
log Z_i(T) = softmax_{a∈A} <θ_i, F_i(\vec{x}(T), a)>,
where π_{-i}^τ = \prod_{j∈−i} π_j^τ. The boundary condition is Z_i(T + 1) = 1 for all the agents. This result resembles the setting of our Markov games with exploration strategies: W_i^τ(\vec{x}(τ), a_i(τ)) plays the role of the Q-function in (7), but with the difference that softmax is used to define the value-function instead of the expectation in (7); <θ_i, F_i(\vec{x}(τ), a_i(τ))> is the reward function; and π_i^τ is the policy, distributed according to the Boltzmann distribution with β = 1. An important consequence of this result is the ability to develop a gradient-based algorithm similar to MCE-IRL Ziebart et al. [2008], Bloem and Bambos [2014], or an online inverse reinforcement learning (IRL) algorithm similar to Rhinehart and Kitani [2017], that can simultaneously infer the rewards and the policies. Such an algorithm requires the gradient, which is given below. Footnote 7: For the choice of Ψ(O_{-i}^τ(x_i)) = −\sum_{j∈−i} µ_j O_j^τ(x_i), and when the state space X is discrete, we have L = \sum_j µ_j. Theorem 5. The gradient of the dual problem with respect to θ_i is given by \hat{E}[F_i(\vec{x}, a_i)] − E[F_i(\vec{x}, a_i)]. 
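The per-step softmax (log-sum-exp) rule of Theorem 4 and a projected-gradient update in the spirit of Theorem 5 can be sketched as follows, for β = 1. The array layouts (W[x, a] for the W-values, feature vectors for θ) and function names are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def mce_policy_step(W):
    """log Z(x) = softmax_a W(x, a) (log-sum-exp over actions) and
    pi(a|x) = exp(W(x, a) - log Z(x)); W has shape (S, A)."""
    m = W.max(axis=1, keepdims=True)                       # stabilize the exp
    logZ = m + np.log(np.exp(W - m).sum(axis=1, keepdims=True))
    return np.exp(W - logZ), logZ[:, 0]

def irl_gradient_step(theta, F_emp, F_model, step, B):
    """One feature-matching update: move theta against (F_emp - F_model),
    then project back onto the ball ||theta|| <= B."""
    theta = theta - step * (F_emp - F_model)
    norm = np.linalg.norm(theta)
    if norm > B:
        theta = theta * (B / norm)  # projection onto the norm ball
    return theta
```

Note that for a flat row of W-values all equal to z over |A| actions, log Z comes out as z + log |A|, which is exactly the bias discussed in Remark 1.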
Algorithm 4 Online MMCE-IRL for agent $i$
1: initialize: $\{\theta_j\}$
2: for $j \in -i$ do
3:   $\tilde{F}^{(\tau)}_j \leftarrow \sum_{(\vec{x}, a_j) \in H_t} F_j(\vec{x}, a_j)$
4:   $R^{(\tau)}_j \leftarrow \langle \theta_j, F_j \rangle$
5: end for
6: $\vec{\pi} \leftarrow \mathrm{MGE}(\{R^{(\tau)}_j : j \in -i\}, R_i)$
7: for $j \in [M]$ do
8:   $\bar{F}_j \leftarrow \mathbb{E}_{\vec{\pi}}[F_j]$
9:   $\theta_j \leftarrow \mathrm{Project}_{\|\theta_j\| \le B}\big(\theta_j - \varrho(\tilde{F}^{(\tau)}_j - \bar{F}_j)\big)$
10: end for
Algorithm 4 summarizes the steps of an online inverse reinforcement learning (IRL) approach for agent $i$, in which the agent infers the individual rewards of its opponents by observing their behavior while knowing its own reward function. Similar to Rhinehart and Kitani [2017], we include a projection step to ensure that the reward functions remain bounded. In Algorithm 4, $H_t$ denotes the trajectories up to time $t$, i.e., pairs of states and actions, and $\varrho$ is the step size of the gradient descent. 7 Experimental Results This section summarizes the different settings in which we evaluated the performance of our algorithms. Pursuit Game: The pursuit problem has been widely examined in multi-agent systems Weinberg and Rosenschein [2004]. We used different variants of this problem in our experiments. First, we consider two agents: one hunter ($h$) and one prey ($p$). Figure 2 illustrates the state space (the nodes of the graph) and the possible actions of the agents. The red and black arrows indicate the possible actions for the hunter and the prey, respectively. Each agent can choose to stay at its current position or follow one of its corresponding arrows. This game is deterministic, i.e., given the states and the actions, the next state of the agents is known with probability one. The agents can observe the current position of their opponent, and they move simultaneously. The goal of the predator is to hunt the prey during a time horizon of length $T = 22$.
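The parameter update in line 9 of Algorithm 4 is a projected gradient step driven by the dual gradient of Theorem 5, $\hat{\mathbb{E}}[F_i] - \mathbb{E}[F_i]$. A minimal sketch, with illustrative function names and values (the projection onto $\{\|\theta\| \le B\}$ keeps the inferred rewards bounded):

```python
import math

def project_ball(theta, B):
    # Projection step of Algorithm 4 (line 9): keep ||theta||_2 <= B so the
    # inferred rewards stay bounded.
    norm = math.sqrt(sum(t * t for t in theta))
    if norm <= B:
        return list(theta)
    return [t * B / norm for t in theta]

def irl_gradient_step(theta, f_empirical, f_model, step, B):
    # One online MMCE-IRL update: Theorem 5 gives the dual gradient
    # hat{E}[F] - E[F]; descend along it and project (names are illustrative).
    grad = [fe - fm for fe, fm in zip(f_empirical, f_model)]
    theta = [t - step * g for t, g in zip(theta, grad)]
    return project_ball(theta, B)

theta = irl_gradient_step([0.0, 0.0], f_empirical=[1.0, 2.0],
                          f_model=[0.5, 0.5], step=0.1, B=0.1)
print(theta)  # the raw step lands outside the ball, so the result is projected
```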
This is a Markov game in which $-R_p(x_h, x_p, a_p) = R_h(x_h, x_p, a_h) = 0.4$ when $x_h = x_p \in \{0, 1, \dots, 8\}$, and zero otherwise. A similar but simpler variant of this game was studied by Weinberg and Rosenschein [2004], in which both agents play on the same grid and the prey moves left or up at random. The output of MGE-F is a set of time-dependent mixed strategies, but we assumed that the agents only executed the actions with the maximum probabilities. In this case, the average score of the hunter and its average number of hunts using MGE-F during 10 games with random initial states were 1.16 and 2.9, respectively. We compared the performance of our algorithm with NSCP in Weinberg and Rosenschein [2004], for which both players used NSCP to learn their policies. The corresponding results for NSCP were 0.84 and 2.1. In the second variant, we consider three agents: two hunters $\{h_1, h_2\}$ and one prey $\{p\}$. They move on the same graph as in Figure 2, but unlike the previous setting, they can either stay at their current position or move according to the red arrows. The hunters have only $T = 3$ steps to catch the prey without colliding with each other. We used the following reward for $h_1$:
$$R_{h_1}(x_{h_1}, x_{h_2}, x_p, a_{h_1}) = \begin{cases} 0 & x_{h_1} \notin \{x_{h_2}, x_p\} \\ -15/4 & x_{h_1} = x_{h_2} \ne x_p \\ -10/4 & x_{h_1} = x_{h_2} = x_p \\ +5/4 & x_{h_1} = x_p \ne x_{h_2}. \end{cases}$$
Figure 3: Effect of $\alpha$ on the convergence speed of the modified updating rule in (14) for the second variant of the pursuit game.
Figure 4: The learned mixed policies for the first move of $h_1$ and $p$ in the pursuit games with three players.
The reward of the second hunter was defined similarly, and the prey's reward was given by $R_p(x_{h_1}, x_{h_2}, x_p, a_{h_1}) = 0$ when $x_p \notin \{x_{h_1}, x_{h_2}\}$, and $-1/8$ otherwise.
To relax the assumption of Theorem 7, we modified the update rule of MGE-F. More precisely, we reduced the exploration property of the agents by adding a portion of the previously estimated Q-function to the new Q-function; i.e., instead of line 10 of MGE-F, we used
$$Q^{s+1}_i \leftarrow \alpha\, U_i(Q^s_{-i}, V^{t+\kappa+1}_i) + (1-\alpha) Q^s_i, \tag{14}$$
where $\alpha \in [0, 1]$. Note that $\alpha = 1$ recovers the update rule of MGE-F. This modification does not disturb the convergence of the algorithm as long as $\gamma_{\alpha b} + (1 - \alpha) < 1$, where $\gamma_{\alpha b}$ denotes the contraction coefficient of $U_i$ when $\alpha \max_j \{\|R_j\|_\infty\} \le b$. Thus, by selecting a proper $\alpha$, convergence can be guaranteed for reward functions whose norms exceed the bound in Theorem 7. The cost of this relaxation is slower convergence. Figure 3 shows the convergence error for the second variant of the pursuit game when $\alpha \in \{0.05, 0.2, 0.4, 0.6\}$. All values of $\alpha$ led to the same set of policies. In all the experiments in this section, we used $\beta = 1$. The table in Figure 4 presents the learned policies for the first moves of $h_1$ and $p$ when the initial states are $I_1 := \{h_1 \in 0, h_2 \in 8, p \in 4\}$ (footnote 8) and $I_2 := \{h_1 \in 0, h_2 \in 8, p \in 2\}$. Another interesting variant of this game arises when the initial states of the players are $\{h_1 \in 1, h_2 \in 5, p \in 2\}$ and $T = 1$, i.e., each player can execute only one action. As expected, the output of MGE-F was that the best action (footnote 9) for each player is not to move. This behavior can be explained by the predictive ability of the players. 8: $h_1$ is at node 0, $h_2$ is at node 8, and the prey is at node 4. 9: The action with the maximum probability. Figure 5: Upper left: Initial states of the grid game (II). G denotes the goal of both agents. They can successfully pass the barriers with probability 0.5. Upper right: The state space of the rabbit-hole game.
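The effect of the damping parameter $\alpha$ in (14) can be illustrated on a toy scalar contraction standing in for $U_i$ (the map and constants below are illustrative assumptions): convergence is preserved, but, as in Figure 3, smaller $\alpha$ is slower.

```python
def damped_fixed_point(U, q0, alpha, iters):
    # Damped update in the spirit of Eq. (14): q <- alpha * U(q) + (1 - alpha) * q.
    # For a gamma-contraction U, the damped map has contraction factor
    # alpha * gamma + (1 - alpha) < 1, so convergence is kept but slowed.
    q = q0
    for _ in range(iters):
        q = alpha * U(q) + (1.0 - alpha) * q
    return q

U = lambda q: 0.5 * q + 1.0          # toy contraction with fixed point q* = 2
for alpha in (0.05, 0.2, 0.4, 0.6):  # the values compared in Figure 3
    err = abs(damped_fixed_point(U, 0.0, alpha, 120) - 2.0)
    print(alpha, err)                # smaller alpha -> larger residual error
```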
Table: Average scores of the players in grid games. Rabbit-Hole Game: This is also a pursuit game, in which a fox chases a rabbit over a grid (Figure 5). There is a prize of +0.3 for the rabbit in a small hole. Both the rabbit and the fox can enter the hole. The fox gains +2 points when it catches the rabbit, and the rabbit loses 2 points. Each agent has only 12 moves. We modeled this game using a finite-time horizon SG with $T = 12$ and learned the policies using MGE-F. Interestingly, under the learned policies, the rabbit entered the hole with high probability when it knew that the fox was far enough from the entrance, and the fox moved such that its distance to the rabbit and to the entrance was minimized. The average scores of the rabbit and the fox after playing 10 rounds with random initial states were 0.06 and 0, respectively. Grid Games: We also studied the behavior of our learning algorithm in two grid games, both of which are two-player games: (I) a stochastic game (Figure 1), and (II) a cooperative game that is the stochastic version of the Battle of the Sexes (Figure 5). In (I), the agents are rewarded +30 when they reach their goals and punished -1 when they collide. In (II), both agents wish to reach G without collision, but if they try to go over the barrier (indicated by curves in Figure 5), they may fail with probability 0.5. The reward at G is +2, and the collision cost is the loss of one point. We compared our algorithm with three algorithms in Greenwald et al. [2003]. The selected algorithms (Q-learning, uCE-Q, lCE-Q) were trained by repeatedly playing the games. As discussed in Greenwald et al. [2003], they all converge to a symmetric Nash equilibrium in (I) and an asymmetric Nash equilibrium in (II). It is important to mention that the main difference between MGE and the algorithms in Greenwald et al. [2003] is that they learn best-response policies for the agents, whereas MGE learns exploration policies.
The table in Figure 5 shows the average scores of the different learning algorithms after playing the games 1000 times. Driving Scene: In this experiment, we simulated a driving scene in which 4 agents (three vehicles and a pedestrian) interact at a road junction. The initial positions and the corresponding goals of the agents are illustrated in Figure 6. In this experiment, the states and the actions are discrete, i.e., agents can choose to stay or to move to one of their neighboring cells. This complex multi-agent scenario is quite interesting for self-driving car applications. We used Algorithm 3 (MGE-FB) with $\Psi(O^\tau_{-i}(x_i)) = -\sum_{j \in -i} \mu_j O^\tau_j(x_i)$. The actions with the maximum probabilities are: all cars stop before the zebra crossing while the pedestrian walks toward its destination; then car 2 drives towards its goal; cars 1 and 3 are the next agents that drive to their goals, respectively. All the agents select their shortest routes. These behaviors match human driving. 8 Discussion We developed several multi-agent algorithms in which the agents use exploration policies while considering their opponents' strategies. We showed the convergence of our algorithms and also evaluated their learning behavior using several game-theoretic examples. A possible future work is extending the presented results to continuous state-action spaces. This is of great importance for several applications such as robotics and self-driving cars. Figure 6: Initial state of the driving scene. Goals are indicated by circles; e.g., car 1 wants to go to circle 1. Supplementary Materials Theorem 6. Assume that $\max_i \|R_i\|_\infty \le \frac{(1-\gamma)^2}{2M\gamma\beta}$. Then, for every $i \in [M]$, $T_i$ is a contraction mapping. Theorem 7. If for every $i$, $\max\{\|R_i\|_\infty, \|R_{i,F}\|_\infty\} \le 1/(2\beta(M-1)(1+T))$, then the system of equations in (6) admits a unique solution. Theorem 8. Suppose that $2LT \le \xi \exp(-\beta(T+1)\xi)$.
Then, Algorithm 3 converges to a set of unique Q-functions as the number of iterations $K$ tends to infinity. Theorem 9. The solution of the multi-agent MCE problem (13) is recursively given by
$$\pi^\tau_i = \frac{1}{Z_i(\tau)} \exp\big(W^\tau_i(\vec{x}(\tau), a_i(\tau))\big),$$
$$W^\tau_i(\vec{x}(\tau), a_i(\tau)) = \theta_i^T F_i(\vec{x}(\tau), a_i(\tau)) + \mathbb{E}_{\pi^\tau_{-i}}[\log Z_i(\tau+1)],$$
$$\log Z_i(\tau) = \operatorname*{softmax}_{a'} W^\tau_i(\vec{x}(\tau), a'), \qquad \log Z_i(T) = \operatorname*{softmax}_{a \in A} \theta_i^T F_i(\vec{x}(T), a),$$
where $\pi^t_{-i} = \prod_{j \in -i} \pi^t_j$. The boundary condition is $Z_i(T+1) = 1$ for all the agents. Theorem 10. The gradient of the dual with respect to $\theta_i$ is given by $\tilde{\mathbb{E}}[F_i(\vec{x}, a_i)] - \mathbb{E}[F_i(\vec{x}, a_i)]$. In order to establish the above results, we need the following lemmas. For simplicity, in the remainder, we use $\|\cdot\|$ to denote $\|\cdot\|_\infty$. Lemma 1. Let $X = [x_1, \dots, x_n] \in \mathbb{R}^n$. There exists $0 < \alpha < 1$ such that
$$\frac{1}{2} \sum_{j,i} \lambda_i \lambda_j |e^{x_i} - e^{x_j}| \le \alpha \|X\| \Big(\sum_k \lambda_k e^{x_k}\Big), \tag{15}$$
where $0 < \lambda_i < 1$ and $\sum_k \lambda_k = 1$. Proof. We prove this by induction on $n$. The first step is $n = 2$. In this case, the left-hand side of (15) becomes $\lambda(1-\lambda)|e^{x_2} - e^{x_1}|$. Without loss of generality, assume $x_2 \ge x_1$. We will show that there exists an $\alpha$ such that the function below is positive for $x_2 - x_1 = x \ge 0$:
$$g_\alpha(x) := 2\alpha \max\{|x_1 + x|, |x_1|\}\,(\lambda e^x + (1-\lambda)) - 2\lambda(1-\lambda)(e^x - 1).$$
Since $2\max\{|x_1 + x|, |x_1|\} \ge |x|$, it suffices to show that $h_\alpha(x)$ is positive, where
$$h_\alpha(x) := \alpha x\,(\lambda e^x + (1-\lambda)) - 2\lambda(1-\lambda)(e^x - 1).$$
This is straightforward, because for $1 > \alpha > (1-\lambda)\max\{1, 2\lambda\}$, we have $h_\alpha(0) = 0$, $\partial h_\alpha(0)/\partial x > 0$, and $\partial^2 h_\alpha(x)/\partial x^2 \ge 0$.
Induction hypothesis; for any vector Z \u2208Rn, there exists an \u03b1 such that (15) holds. Let X \u2208Rn+1 such that |xn+1| = ||X||, and de\ufb01ne xn+1 := x\u2217. By induction hypothesis, we have X j,i\u2264n \u03bbi\u03bbj|eyi \u2212eyj| \u22642(1 \u2212\u03bb)\u03b1\u2217||Y || X k\u2264n \u03bbkeyk, where Y := [xn, ..., x1] \u2208Rn, in which the zero entry is removed. Therefore, to prove the n + 1st step, we need to show that 2(1 \u2212\u03bb)\u03b1\u2217||Y || X k\u2264n \u03bbkeyk + 2\u03bb X i\u2264n \u03bbi|eyi \u2212ex\u2217| \u22642\u03b1||X||( X k\u2264n \u03bbkeyk + \u03bbe\u2217). (16) Let 1 > \u03b1 \u2265\u03b1\u2217, and note that ||Y || \u2264||X|| = |x\u2217|. Therefore, we can prove (16) by showing 0 \u2264\u03b1|x\u2217|( X k\u2264n e \u03bbkeyk + ex\u2217 1 \u2212\u03bb) \u2212 X i\u2264n e \u03bbi|eyi \u2212ex\u2217|. (17) where e \u03bbi := \u03bbk 1\u2212\u03bb. Note that P k e \u03bbk = 1. Without loss of generality, let y1 \u2264... \u2264yn. Since |x\u2217| \u2265||Y ||, there are only two possible scenarios: I) x\u2217\u22650, then \u2212x\u2217\u2264y1 \u2264... \u2264yn \u2264x\u2217or II) x\u2217< 0 in which, we have x\u2217\u2264y1 \u2264... \u2264yn \u2264\u2212x\u2217. I) In this case, the right hand side of (17) can be bounded by (\u03b1x\u2217+ 1) X k\u2264n e \u03bbkeyk + \u03b1x\u2217ex\u2217 1 \u2212\u03bb \u2212ex\u2217\u2265(\u03b1x\u2217+ 1)e\u2212x\u2217+ \u03b1x\u2217ex\u2217 1 \u2212\u03bb \u2212ex\u2217:= J\u03b1(x\u2217). We show that there exists 0 < \u03b1 < 1 such that for all x\u2217> 0, ex\u2217J\u03b1(x\u2217) \u22650. 
Because J\u03b1(0) = 0, \u2202ex\u2217J\u03b1(x\u2217)/\u2202x\u2217is \u03b1 + \u03b1/(1 \u2212\u03bb) \u22122 for x\u2217= 0, and \u22022ex\u2217J\u03b1(x\u2217) \u2202(x\u2217)2 \f \f \f x\u2217=0 = \u0012 4 \u03b1 1 \u2212\u03bb \u22124 + 4\u03b1x\u2217 1 \u2212\u03bb \u0013 e2x\u2217, selecting 1 > \u03b1 \u2265max{1 \u2212\u03bb, 2\u22122\u03bb 2\u2212\u03bb }, we have J\u03b1(x\u2217) \u22650. II) Suppose x\u2217= \u2212x < 0. In this case, the right hand side of (17) becomes (\u03b1x \u22121) X k\u2264n e \u03bbkeyk + \u03b1x\u2217e\u2212x 1 \u2212\u03bb + e\u2212x. Clearly, the above equation is positive if \u03b1x \u22121 > 0, otherwise, it is bounded by the following function h\u03b1(x) := (\u03b1x \u22121)ex + \u03b1x 1 \u2212\u03bbe\u2212x + e\u2212x. We show there exists an \u03b1 < 1 such that r\u03b1(x) := h\u03b1(x)ex is positive for all \u03b1x < 1. This is true because of the following facts r\u03b1(0) = 0, \u2202r\u03b1(x) \u2202x \f \f x=0 = \u03b1 + \u03b1 1 \u2212\u03bb \u22122, min x \u2202r\u03b1(x) \u2202x = \u2212\u03b1e 2(1\u2212\u03b1) \u03b1 + \u03b1 1 \u2212\u03bb. Since minx \u2202r\u03b1(x) \u2202x is a monotone increasing function of \u03b1 and it is positive for \u03b1 = 1, there exists \u00af \u03b1 < 1, such that minx \u2202r\u00af \u03b1(x)/\u2202x \u22650. Hence, r\u00af \u03b1(x) \u22650 for \u03b1x < 1. This concludes the results. Lemma 2. Let Q, e Q \u2208Rn, then there exists an 0 < \u03b1 < 1, such that ||Exp\u03b2(Q) \u2212Exp\u03b2( e Q)||1 \u22642\u03b2\u03b1||Q \u2212e Q||\u221e, (18) where Exp\u03b2([q1, ..., qn]) := [e\u03b2q1, ..., e\u03b2qn]/(P i e\u03b2qi). 14 \fProof. 
Suppose that e Q = Q + X, ||Exp\u03b2(Q) \u2212Exp\u03b2( e Q)||1 = X i \f \f \f \f e\u03b2qi P k e\u03b2qk \u2212 e\u03b2qi+\u03b2xi P k e\u03b2qk+\u03b2xk \f \f \f \f = X i e\u03b2qi P k e\u03b2qk \f \f \f \f \f \f 1 \u2212 e\u03b2xi P k e\u03b2qk P j e\u03b2qj e\u03b2xk \f \f \f \f \f \f = X i \u03bbi \f \f \f \f1 \u2212 e\u03b2xi P k \u03bbke\u03b2xk \f \f \f \f \u2264 1 P k \u03bbje\u03b2xj X i,k \u03bbi\u03bbk \f \fe\u03b2xi \u2212e\u03b2xk\f \f , where \u03bbi := e\u03b2qi P k e\u03b2qk . Applying Lemma 1 will imply the result. Lemma 3. For two probability measures u and v de\ufb01ned over a countable space, we have sup ||f||\u2264c \f \f \fEu[f] \u2212Ev[f] \f \f \f \u2264c||u \u2212v||1. 8.1 Proof of Theorem 6 To conclude the result, \ufb01rst, we show that for any \ufb01xed Qi there exists 0 \u2264\u03b1 < 1 independent of Qi, such that |Ti(Q\u2212i, Qi) \u2212Ti( e Q\u2212i, Qi)| \u2264Bi\u03b3 1 \u2212\u03b3 2\u03b1\u03b2(M \u22121) max j\u2208\u2212i ||Qj \u2212e Qj||. (19) Second, we show that for any \ufb01xed Q\u2212i, |Ti(Q\u2212i, e Qi) \u2212Ti(Q\u2212i, e Qi)| \u2264(\u03b3 + Bi\u03b3 1 \u2212\u03b3 2\u03b1\u03b2)||Qi \u2212e Qi||. (20) To show (19), we start with the de\ufb01nition of Ti that is Ti(Q\u2212i, Qi) := Ri+\u03b3EExp\u03b2{Q\u2212i} \u0002 Vi \u0003 , (21) where Vi = EExp\u03b2{Qi}[Qi] and the expectation in (21) is taken with respect to the policies of players \u2212i. The left hand side of (19) can be written as follows, |Ti(Q\u2212i, Qi) \u2212Ti( e Q\u2212i, Qi)| = \u03b3 \f \f \fEExp\u03b2{Q\u2212i}[Vi] \u2212EExp\u03b2{ e Q\u2212i}[Vi] \f \f \f. Notice that if ||Ri|| \u2264Bi for some Bi, then ||Vi|| \u2264Bi(1 + \u03b3 + \u00b7 \u00b7 \u00b7 ) = Bi/(1 \u2212\u03b3). 
Using the result of Lemma 3, the right hand side of the above equation can be bounded as follows \u03b3 \f \f \fEExp\u03b2{Q\u2212i}[Vi] \u2212EExp\u03b2{ e Q\u2212i}[Vi] \f \f \f \u2264Bi\u03b3 1 \u2212\u03b3 ||Exp\u03b2{Q\u2212i} \u2212Exp\u03b2{ e Q\u2212i}||1, (22) Given the result of Lemma 2, there exists an 0 < \u03b1 < 1 such that the right hand side of the above inequality can be bounded by Bi\u03b3 1 \u2212\u03b3 2\u03b1\u03b2||Q\u2212i \u2212e Q\u2212i||. Note that Q\u2212i = L j\u2208\u2212i Qj, where \u20d7 u L\u20d7 v = [un + vm] for 1 \u2264n \u2264|\u20d7 u| and 1 \u2264m \u2264|\u20d7 v|. Therefore, ||Q\u2212i \u2212e Q\u2212i|| \u2264(M \u22121) max j\u2208\u2212i ||Qj \u2212e Qj||. This concludes the inequality in (19). In order to prove (20), we have |Ti(Q\u2212i, Qi) \u2212Ti(Q\u2212i, e Qi)| = \u03b3 \f \f \fEExp\u03b2(Qi)EExp\u03b2{Q\u2212i}[Qi] \u2212EExp\u03b2( e Qi)EExp\u03b2{Q\u2212i}[ e Qi] \f \f \f \u2264\u03b3 \f \f \fEExp\u03b2(Qi)EExp\u03b2{Q\u2212i}[Qi] \u2212EExp\u03b2(Qi)EExp\u03b2{Q\u2212i}[ e Qi] \f \f \f + \u03b3 \f \f \fEExp\u03b2(Qi)[\u03c6] \u2212EExp\u03b2( e Qi)[\u03c6] \f \f \f \u2264\u03b3||Qi \u2212e Qi|| + Bi\u03b3 1 \u2212\u03b3 2\u03b1\u03b2||Qi \u2212e Qi||, 15 \fwhere \u03c6 := EExp\u03b2{Q\u2212i}[ e Qi]. The last inequality is obtained similar to the proof of (19) and the fact that E[g] \u2264||g||. Final step is to combine (19) and (20). |Ti(Q\u2212i, Qi) \u2212Ti( e Q\u2212i, e Qi)| \u2264|Ti(Q\u2212i, Qi) \u2212Ti(Q\u2212i, e Qi)| + |Ti(Q\u2212i, e Qi) \u2212Ti( e Q\u2212i, e Qi)| \u2264(\u03b3 + Bi\u03b3 1 \u2212\u03b3 2\u03b1\u03b2M)||Q \u2212e Q||. In the above equation ||Q\u2212e Q|| = maxk ||Qk\u2212e Qk||. This shows that Ti is contraction if Bi < (1\u2212\u03b3)2 2M\u03b3\u03b1\u03b2 in the in\ufb01nite-time horizon problem. 
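The key quantitative ingredient in the contraction proof above is Lemma 2's bound $\|\mathrm{Exp}_\beta(Q) - \mathrm{Exp}_\beta(\tilde{Q})\|_1 \le 2\alpha\beta\|Q - \tilde{Q}\|_\infty$ with $\alpha < 1$. The weaker version with $\alpha = 1$ can be sanity-checked numerically on random draws (a numerical check, not a proof):

```python
import math
import random

def softmax_dist(Q, beta):
    # Exp_beta(Q): the Boltzmann distribution exp(beta*q_i) / sum_k exp(beta*q_k),
    # computed with a max-shift for numerical stability.
    m = max(Q)
    w = [math.exp(beta * (q - m)) for q in Q]
    s = sum(w)
    return [x / s for x in w]

random.seed(0)
beta = 1.0
ok = True
for _ in range(1000):
    Q = [random.uniform(-2, 2) for _ in range(5)]
    Qt = [q + random.uniform(-0.5, 0.5) for q in Q]
    l1 = sum(abs(p - pt) for p, pt in zip(softmax_dist(Q, beta), softmax_dist(Qt, beta)))
    linf = max(abs(q - qt) for q, qt in zip(Q, Qt))
    ok = ok and (l1 <= 2 * beta * linf + 1e-12)
print(ok)  # the Lemma 2 bound (with alpha = 1) holds on every draw
```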
8.2 Proof of Theorem 7 We prove the result by showing that, if ||Ri,F ||, ||Ri|| are bounded, there exists 0 \u2264\u03b1 < 1, such that |Ui(Qt \u2212i, V t+1 i ) \u2212Ui( e Qt \u2212i, V t+1 i )| \u2264Ci(T + 1)2\u03b1\u03b2(M \u22121) max j\u2208\u2212i ||Qj \u2212e Qj||\u221e. This inequality is analogous to (19) and can be shown similarly by applying Lemma 2 and Lemma 3 and keeping in mind that if ||Ri|| \u2264Ci for some Ci, then ||Qi|| \u2264Ci(T + 1). 8.3 Proof of Theorem 3 We prove this statement by showing that there exists an \u03b1 \u2208[0, 1) such that for any agent i and any t, ||Qt i,k+1 \u2212Qt i,k||\u221e\u2264\u03b1||Qt i,k \u2212Qt i,k\u22121||\u221e. (23) where ||Q \u2212e Q||\u221e:= supx supa |Q(x, a) \u2212e Q(x, a)|. As a reminder, we have Qt i,k = Bi(Ot \u2212i,k\u22121, Qt+1 i,k ) = Ri + \u03a8(Ot \u2212i,k\u22121)+ EExp(Qt+1 i,k )[Qt+1 i,k ], (24) Ot i,k = G(Qt\u22121 i,k , Ot\u22121 i,k ) = EExp(Qt\u22121 j,k )[Ot\u22121 j,k ], (25) Inequality (23) and the result of Lemma 6 conclude the result. To show (23), let \u2206t k := ||Qt k \u2212Qt k\u22121||\u221e, \u039bt k := ||Ot k \u2212Ot k\u22121||1, in which for simplicity, the index corresponding to the agent is dropped. Using the update equations in Algorithm 3, i.e., Equation 11 of the main paper, and Lemma 7, we obtain \u2206t k \u2264L\u039bt k\u22121 + S\u2206t+1 k , (26) \u2206T k \u2264L\u039bT k\u22121, (27) where S := (1 + \u03be\u03b2) is given in Lemma 7. On the other hand, using Equation 12 of the main paper, we have \u039bt k = ||G(Qt\u22121 k , Ot\u22121 k ) \u2212G(Qt\u22121 k\u22121, Ot\u22121 k\u22121)||1 \u2264||G(Qt\u22121 k , Ot\u22121 k ) \u2212G(Qt\u22121 k\u22121, Ot\u22121 k )||1 + ||G(Qt\u22121 k\u22121, Ot\u22121 k ) \u2212G(Qt\u22121 k\u22121, Ot\u22121 k\u22121)||1. Given the results of Lemma 4, we have \u039bt k \u22642\u03b2\u2206t\u22121 k + \u039bt\u22121 k , (28) \u039b0 k = 0. 
(29) From the inequalities in (26)-(29), we obtain \u2206t k \u2264L T \u2212t X j=0 Sj\u039bt+j k\u22121 ! , \u039bt k\u22121 \u22642\u03b2 t\u22121 X i=0 \u2206t\u2212i\u22121 k\u22121 ! . 16 \fCombining the above inequalities implies \u2206t k \u22642L\u03b2 T \u2212t X j=0 Sj t+j\u22121 X i=0 \u2206t+j\u2212i\u22121 k\u22121 \u22642L\u03b2\u2206k\u22121 T \u2212t X j=0 t+j\u22121 X i=0 Sj (a) \u22642L\u03b2\u2206k\u22121 T X j=1 j\u22121 X i=0 Sj = 2L\u03b2\u2206k\u22121S TST +1 \u2212(T + 1)ST + 1 (S \u22121)2 \u22642L\u03b2\u2206k\u22121S ST T (S \u22121) \u2264\u2206k\u22121 2LT \u03be e\u03b2(T +1)\u03be. where \u2206k\u22121 := max0\u2264\u03c4\u2264T \u2206\u03c4 k\u22121. Inequality (a) is due to the fact that S > 1. To obtain convergence, we require the coef\ufb01cient of \u2206k\u22121 to be less than one. Therefore, we obtain 2LT \u2264\u03bee\u2212\u03b2(T +1)\u03be. Lemma 4. We have ||G(Qt\u22121 k , Ot\u22121 k ) \u2212G(Qt\u22121 k\u22121, Ot\u22121 k )||1 \u22642\u03b2\u2206t\u22121 k , ||G(Qt\u22121 k\u22121, Ot\u22121 k ) \u2212G(Qt\u22121 k\u22121, Ot\u22121 k\u22121)||1 \u2264\u039bt\u22121 k . Proof. Using the de\ufb01nition of transformation U, the left hand side can be written as follows Z x \f \f \f \f \f Z x\u2032 X a \u0000\u03c0t\u22121 k (a|x\u2032) \u2212\u03c0t\u22121 k\u22121(a|x\u2032) \u0001 P(x|x\u2032, a)Ot\u22121 k (x\u2032)dx\u2032 \f \f \f \f \f dx (30) \u2264 Z x Z x\u2032 X a \f \f\u03c0t\u22121 k (a|x\u2032) \u2212\u03c0t\u22121 k\u22121(a|x\u2032) \f \f P(x|x\u2032, a)Ot\u22121 k (x\u2032)dx\u2032dx = Z x\u2032 X a \f \f\u03c0t\u22121 k (a|x\u2032) \u2212\u03c0t\u22121 k\u22121(a|x\u2032) \f \f Ot\u22121 k (x\u2032)dx\u2032 where \u03c0t\u22121 k (a|x\u2032) \u221dexp(\u03b2Qt\u22121 k (x\u2032, a)). 
Based on the result of Lemma 2, we can bound the above difference as follows, Z x\u2032 2\u03b2||Qt\u22121 k (x\u2032, \u00b7) \u2212Qt\u22121 k\u22121(x\u2032, \u00b7)||\u221eOt\u22121 k (x\u2032)dx\u2032 \u22642\u03b2||Qt\u22121 k \u2212Qt\u22121 k\u22121||\u221e= 2\u03b2\u2206t\u22121 k . The left hand side of the second inequality is Z x \f \f \f \f \f Z x\u2032 X a \u03c0t\u22121 k\u22121(a|x\u2032)P(x|x\u2032, a) \u0000Ot\u22121 k (x\u2032) \u2212Ot\u22121 k\u22121(x\u2032) \u0001 dx\u2032 \f \f \f \f \f dx \u2264 Z x Z x\u2032 X a \u03c0t\u22121 k\u22121(a|x\u2032)P(x|x\u2032, a) \f \fOt\u22121 k (x\u2032) \u2212Ot\u22121 k\u22121(x\u2032) \f \f dx\u2032dx = Z x\u2032 \f \fOt\u22121 k (x\u2032) \u2212Ot\u22121 k\u22121(x\u2032) \f \f dx\u2032 = \u039bt\u22121 k . Lemma 5. For a given vectors \u03b1 = (\u03b11, ..., \u03b1n), let f\u03b1(Q) := X j exp(\u03b2qj)\u03b1j P i exp(\u03b2qi), where Q = (q1, ..., qn). Then, for any two arbitrary vectors Q and e Q, we have |f\u03b1(Q) \u2212f\u03b1( e Q)| \u2264\u03b2||\u03b1||d 2 ||Q \u2212e Q||\u221e. (31) where ||\u03b1||d := maxi,j |\u03b1i \u2212\u03b1j|. Proof. The norm of the gradient of f\u03b1(\u00b7) is given by ||\u2207f\u03b1||1 = X j | P i \u03b2(\u03b1j \u2212\u03b1i) exp(\u03b2(qi + qj))| (P i exp(\u03b2qi))2 . 17 \fThe right hand side of the above inequality can be written as X j bj P i \u03b2(\u03b1j \u2212\u03b1i) exp(\u03b2(qi + qj)) (P i exp(\u03b2qi))2 , where bj \u2208{\u22121, 1} speci\ufb01es the sign of the term within the absolute-value. Without loss of generality, we assume that \u03b11 \u2264... \u2264\u03b1n. In this case, bn = 1 and b1 = \u22121 but the rest of bjs cannot be speci\ufb01ed only based on \u03b1js. 
We de\ufb01ne matrix A \u2208Rn\u00d7n, such that Ai,j = (\u03b1i \u2212\u03b1j), then ||\u2207f\u03b1||1 = \u03b2 (b \u25e6exp(\u03b2Q))T A exp(\u03b2Q) || exp(\u03b2Q)||2 1 \u2264\u03b2 max b\u2032\u2208{0,1}n (b\u2032 \u25e6exp(\u03b2Q))T A exp(\u03b2Q) || exp(\u03b2Q)||2 1 , (32) where b \u25e6exp(\u03b2Q) = (b1 exp(\u03b2q1), ..., bn exp(\u03b2qn)). Let us de\ufb01ne \u03f1b\u2032 := (b\u2032 \u25e6exp(\u03b2Q))T A exp(\u03b2Q) = exp(\u03b2Q)T Ab\u2032 exp(\u03b2Q), where Ab\u2032 is matrix A in which ith row is multiplied by b\u2032 i. In this case, 2\u03f1b\u2032 = exp(\u03b2Q)T (Ab\u2032 + AT b\u2032) exp(\u03b2Q). Note that A + AT = 0, and hence the absolute value of the (i, j)th entry of Ab\u2032 + AT b\u2032 is zero iff b\u2032 ib\u2032 j = 1, otherwise it is at most 2|\u03b1i \u2212\u03b1j|. Therefore, Ab\u2032 + AT b\u2032 has most non-zero entries iff exactly half of b\u2032 is +1 and the other half is -1. Let I1 and I2 denote the indices that b\u2032 i are +1 and -1, respectively and |I1| = \u230an/2\u230b, |I2| = \u2308n/2\u2309. Therefore, we obtain 2\u03f1b\u2032 \u22642 X i\u2208I1 X j\u2208I2 2|\u03b1i \u2212\u03b1j| exp(\u03b2(qi + qj)) \u22644||\u03b1||d X i\u2208I1 X j\u2208I2 exp(\u03b2(qi + qj)) = 4||\u03b1||d X i\u2208I1 exp(\u03b2qi) ! X j\u2208I2 exp(\u03b2qj) ! . (33) Using the de\ufb01nition of \u03f1b\u2032, Equations (32) and (33), we have ||\u2207f\u03b1||1 \u2264\u03b2 max b\u2032 \u03f1b\u2032 \u2264\u03b2||\u03b1||d max I1,I2 2 \u0010P i\u2208I1 exp(\u03b2qi) \u0011 \u0010P j\u2208I2 exp(\u03b2qj) \u0011 \u0010P i\u2208I1 exp(\u03b2qi) + P j\u2208I2 exp(\u03b2qj) \u00112 \u2264\u03b2 ||\u03b1||d 2 . The rest follows by the mean-value theorem, the fact that f\u03b1 is a continuous function of Q, and H\u00a8 older inequality. Lemma 6. For a given sequence {an}\u221e n=1, if there exists \u03b1 \u2208[0, 1) such that |an \u2212an\u22121| \u2264\u03b1|an\u22121 \u2212an\u22122|, \u2200n \u22651, (34) then an converges. Lemma 7. 
Function g(Q) := EExp(Q)[Q] is Lipschitz with the constant S := (1 + \u03be\u03b2), where ||Q||\u221e\u2264\u03be, and \u03b2 is the inverse temperature. Proof. This is a consequence of Lemma 5 and the fact that maxi,j |\u03b1i \u2212\u03b1j| \u22642||\u03b1||\u221e. Note that |g(Q) \u2212g( e Q)| \u2264 \f \f \fEExp(Q)[Q] \u2212EExp( e Q)[Q] \f \f \f + \f \f \fEExp( e Q)[Q] \u2212EExp( e Q)[ e Q] \f \f \f \u2264(\u03b2||Q||\u221e+ 1)||Q \u2212e Q||\u221e, where ||Q||\u221e:= supx supa |Q(x, a)|. 8.4 Proof of Theorem 9 Differentiating the Lagrangian of the problem in 13 of the main paper, \u0393 = H(\u20d7 a||\u20d7 x) + X i,k X \u03c4 \u03b8k,i \u0010 E[Fi,k(\u20d7 x(\u03c4), ai(\u03c4))] \u2212e E[Fi,k(\u20d7 x(\u03c4), ai(\u03c4))] \u0011 + X i,\u03c4 X C(i) ai(\u03c4),\u20d7 a\u03c4\u22121,\u20d7 x\u03c4 \u03c0(ai(\u03c4)|\u20d7 a\u03c4\u22121, \u20d7 x\u03c4)+ X i,\u03c4 X D(i) \u20d7 a\u03c4\u22121,\u20d7 x\u03c4 \uf8eb \uf8edX ai(\u03c4) \u03c0(ai(\u03c4)|\u20d7 a\u03c4\u22121, \u20d7 x\u03c4)\u22121 \uf8f6 \uf8f8, 18 \fimplies \u2202\u0393 \u2202\u03c0(ai(t)|\u20d7 at\u22121, \u20d7 xt) = C(i) ai(t),\u20d7 at\u22121,\u20d7 xt + D(i) \u20d7 at\u22121,\u20d7 xt \u2212 X \u20d7 a\u2212i(t) P(\u20d7 at, \u20d7 xt) \u03c0(ai(t)|\u20d7 at\u22121, \u20d7 xt)\u2212 X \u03c4\u2265t X \u2202P(\u20d7 a\u03c4, \u20d7 x\u03c4) \u2202\u03c0(ai(t)|\u20d7 at\u22121, \u20d7 xt) log \u03c0(\u20d7 a(\u03c4)|\u20d7 a\u03c4\u22121, \u20d7 x\u03c4) + X j,k \u03b8k,j X \u03c4\u2265t X \u2202P(\u20d7 a\u03c4, \u20d7 x\u03c4) \u2202\u03c0(ai(t)|\u20d7 at\u22121, \u20d7 xt)Fj,k(\u20d7 x(\u03c4), aj(\u03c4)), where for \u03c4 \u2265t, \u2202P(\u20d7 a\u03c4, \u20d7 x\u03c4) \u2202\u03c0(ai(t)|\u20d7 at\u22121, \u20d7 xt) = \u03c0(\u20d7 a(t)|\u20d7 at\u22121, \u20d7 xt) \u03c0(ai(t)|\u20d7 at\u22121, \u20d7 xt)P(\u20d7 at\u22121, \u20d7 xt)P(\u20d7 a\u03c4 t+1, \u20d7 x\u03c4 t+1|\u20d7 at, \u20d7 xt). 
For simplicity, we denote \u03c0(aj(\u03c4)|\u20d7 a\u03c4\u22121, \u20d7 x\u03c4) and \u03c0(\u20d7 a(\u03c4)|\u20d7 a\u03c4\u22121, \u20d7 x\u03c4) by \u03c0j(\u03c4) and \u03c0(\u03c4), respectively. By equating the derivative of the Lagrangian to zero, we obtain C(i) ai(t),\u20d7 at\u22121,\u20d7 xt + D(i) \u20d7 at\u22121,\u20d7 xt = P(\u20d7 at\u22121, \u20d7 xt) 1 + X \u20d7 a\u2212i(t) \u03c0\u2212i(t) log \u03c0\u2212i(t) + log \u03c0i(t)+ X \u03c4>t X \u20d7 a\u2212i(t) \u03c0\u2212i(t) E [log \u03c0(\u03c4) \f \f \u20d7 at, \u20d7 xt] \u2212 X j,k \u03b8k,j X \u03c4\u2265t X \u20d7 a\u2212i(t) \u03c0\u2212i(t) E [Fj,k(\u20d7 x(\u03c4), aj(\u03c4)) \f \f \u20d7 at, \u20d7 xt] ! , where \u03c0\u2212i(t) = Q j\u2208\u2212i \u03c0j(t). The above equation suggests the following form for agent\u2019s i policy at time T, log \u03c0i(T) = exp \u0010 \u03b8T i Fi(\u20d7 x(T), ai(T)) \u2212softmax a\u2032 \u03b8T i Fi(\u20d7 x(T), a\u2032) \u0011 \u2200i, log \u03c0i(T \u22121) \u221dexp \u0010 \u03b8T i Fi(\u20d7 x(T \u22121), ai(T \u22121)) + X \u20d7 a\u2212i(T \u22121) \u03c0\u2212i(T \u22121) P(\u20d7 x\u2212i(T)|\u20d7 x\u2212i(T \u22121),\u20d7 a\u2212i(T \u22121)) \u0002 softmax a\u2032 \u03b8T i fi(\u20d7 x(T), a\u2032 i(T)) \u0003\u0011 \u2200i, . . . where \u03b8i = [\u03b81,i, ..., \u03b8K,i] and Fi = [Fi,1, ..., Fi,K]T . 8.5 Proof of Theorem 10 We have H(\u20d7 a||\u20d7 x) = X t\u2264T E\u20d7 xt,\u20d7 at[\u2212log \u03c0t(a(t)|\u20d7 x(t))] = \u2212 X t\u2264T X i E\u20d7 xt,\u20d7 at[\u03b8T i Fi(\u20d7 x(t), ai(t))] \u2212 T \u22121 X t=1 X i E\u20d7 xt+1,\u20d7 at[softmax W t+1 i (Ht+1, a)] + T X t=1 X i E\u20d7 xt,\u20d7 at\u22121[softmax W t i (\u20d7 x(t), a)] = \u2212 X t\u2264T X i E\u20d7 xt,\u20d7 at[\u03b8T i Fi(\u20d7 x(t), ai(t))] + X i E\u20d7 x1[softmax W 1 i (H1, a)]. 
(35) On the other hand, we have E\u20d7 x(t)[\u2207\u03b8j softmax W t i (\u20d7 x(t), a)] = E[ X a \u03c0t i(a|\u20d7 x(t))\u2207\u03b8jW t i (\u20d7 x(t), a)] = E\u20d7 x(t)\u222aai(t)[\u2207\u03b8jW t i (\u20d7 x(t), a)]. Using the above equality, for any deterministic function \u03a6, we have E\u20d7 x(t) h X a \u03a6(\u20d7 x(t))\u2207\u03b8j\u03c0k(a|\u20d7 x(t)) i = (36) E\u20d7 x(t) h X a \u03a6(\u20d7 x(t))\u03c0k(a|\u20d7 x(t)) \u0000\u2207\u03b8jW t k(\u20d7 x(t), a) \u2212\u2207\u03b8j softmax b W t k(\u20d7 x(t), b) \u0001i = 0. (37) 19 \fTaking the derivative of the last term of (35) implies \u2207\u03b8jE\u20d7 x1[softmax W 1 j (H1, a)] = E\u20d7 x1,a[Fj(\u20d7 x(1), a)] + E\u20d7 x2,\u20d7 a2[\u2207\u03b8jW 2 j (H2, aj(2))]+ E\u20d7 x1,a h \u03c0\u2212j(\u20d7 a\u2212j(1)|\u20d7 x(t)) softmax c W 2 j (H2, c) X k\u2208\u2212j \u2207\u03b8j \u0000W 1 k (H1, ak(1)) \u2212softmax d W 1 k (H1, d) \u0001i . Due to (36), the above last term is zero, and by pushing the recursion to the \ufb01nal step, we obtain \u2207\u03b8jE\u20d7 x1[softmax W 1 j (H1, a)] = X t\u2264T E\u20d7 xt,\u20d7 at[Fj(\u20d7 x(t), aj(t))]. Similarly, when i \u0338= j, one can obtain \u2207\u03b8jE\u20d7 x1[softmax W 1 i (H1, a)] = E\u20d7 x2,\u20d7 a2[\u2207\u03b8jW 2 i (H2, aj(2))] = \u00b7 \u00b7 \u00b7 = E\u20d7 xT ,\u20d7 aT [\u2207\u03b8jW T i (\u20d7 x(t), aj(T))] = 0. Going back to the derivative of the dual function, we will have two terms: H(\u20d7 a||\u20d7 x) and \u0398(E[F] \u2212 e E[F]). Due to the above calculation, the derivative of the \ufb01rst term is zero, with respect to \u03b8i and the derivative of the second term concludes the result." + }, + { + "url": "http://arxiv.org/abs/2003.00806v1", + "title": "Causal Transfer for Imitation Learning and Decision Making under Sensor-shift", + "abstract": "Learning from demonstrations (LfD) is an efficient paradigm to train AI\nagents. 
But major issues arise when there are differences between (a) the demonstrator's own sensory input, (b) our sensors that observe the demonstrator, and (c) the sensory input of the agent we train. In this paper, we propose a causal model-based framework for transfer learning under such \"sensor-shifts\", for two common LfD tasks: (1) inferring the effect of the demonstrator's actions and (2) imitation learning. First, we rigorously analyze, on the population level, to what extent the relevant underlying mechanisms (the action effects and the demonstrator policy) can be identified and transferred from the available observations together with prior knowledge of sensor characteristics. And we devise an algorithm to infer these mechanisms. Then we introduce several proxy methods which are easier to calculate, estimate from finite data, and interpret than the exact solutions, alongside theoretical bounds on their closeness to the exact ones. We validate our two main methods on simulated and semi-real world data.", "authors": "Jalal Etesami, Philipp Geiger", "published": "2020-03-02", "updated": "2020-03-02", "primary_cat": "cs.AI", "cats": [ "cs.AI" ], "main_content": "Introduction Motivation. Learning from demonstrations is an important paradigm to train AI agents (Argall et al. 2009; Schaal 1999; Ho and Ermon 2016; Jeon, Seo, and Kim 2018). Ideally, one would like to harness as much cheaply available (and relevant) demonstrator data as possible. But major issues arise when there are differences between the sensors of the demonstrator, us, and the agent we train. When such issues are ignored, or addressed in a naive way, wrong and potentially harmful conclusions can result about the demonstrator's behavior and the effects of the demonstrator's actions on the environment. Example 1 (Highway drone data).
In the development of self-driving cars, recently drones have been deployed to \ufb02y over highways and record the behavior of human-driven cars (Krajewski et al. 2018; Zhan et al. 2019). Clearly, in such drone recordings, some crucial variables are either more noisy than observed from within the car, or completely missing, such as indicator lights. Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Figure 1: In highway drone data, the indicator light of the lead car would be missing, introducing a hidden common cause between acceleration of demonstrator car and lane changing behavior of the lead car. Assume we want to use such data to learn, say, how an acceleration action A of a \u201cdemonstrator car\u201d affects the lane changing behavior Z of a \u201clead car\u201d in front of it on the slower lane, as depicted in Figure 1. Slightly simplifying reality, assume the indicator light of the lead car serves as a perfect coordination device: it is on if and only if, subsequently, (1) the demonstrator car decelerates and (2) the lead car changes lane to the fast lane. Now assume we just use the variables recorded in the drone data, where the indicator light is not contained, estimate P(Z|A) from it, and naively consider it as the causal effect of A on Z. This leads us to the conclusion that an agent in the place of the demonstrator can arbitrarily chose any acceleration or deceleration action as A, and the lead car will perfectly adapt Z and only change lane when agent decelerates \u2013 which in practice can lead to crashes. In the language of causal models (Pearl 2009; Spirtes et al. 2000), the indicator light is a hidden common cause (confounder). 
Main tasks, approach and contributions: In this paper, we address learning from demonstrations (LfD) under sensorshift, i.e., when there are differences between (a) the demonstrator\u2019s own sensory input, (b) our sensors that observe the demonstrator and (c) the sensory input of the agent we train. Speci\ufb01cally, we consider two closely related \u201csubtasks\u201d of LfD: (1) inferring the effect of the demonstrator\u2019s decisions (as in Example 1) and (2) imitating the demonstrator. Our approach is based on causal models (Pearl 2009; Spirtes et al. 2000; Peters, Janzing, and Sch\u00a8 olkopf 2017), arXiv:2003.00806v1 [cs.AI] 2 Mar 2020 \fwhich allow us to generalize from data beyond i.i.d. settings. The idea is that, while some modular causal mechanisms that govern the data vary (the sensors), other mechanisms are invariant (e.g., the action-effect). Our main contributions are: \u2022 We rigorously analyze, on the population-level, to what extent the relevant underlying mechanisms (the actioneffect and the demonstrator policy) can be identi\ufb01ed and transferred from the available observations together with prior knowledge of sensor characteristics (Sections 5, 6.1, 6.2 and 7.1). And we propose algorithms to calculate them (Algorithms 1 and 2). \u2022 We introduce several proxy methods (Sections 6.3 and 7.2) which are easier to calculate, estimate from \ufb01nite data and interpret than the exact solutions, alongside theoretical bounds on their closeness to the exact ones (Propositions 2, 4 and 5). (Proofs are in the supplement1 of this paper.) \u2022 We conduct experiments to validate our two main methods on simulated and semi-real world highway drone data used for autonomous driving (Section 8). 2 Related work Learning from demonstrations (LfD) (Argall et al. 
2009) is a broad area, with two concrete tasks being the ones we also consider in this paper: (1) inferring the effect of action on outcome given observation (we call it \u201caction-effect\u201d in our a-temporal framework, while in the language of (Argall et al. 2009) this is called the \u201csystem model\u201d or \u201cworld dynamics\u201d), and (2) imitation learning (see next paragraph). Generally in LfD, the problem that sensors differ between demonstrator, observer and target AI agent has been considered (Argall et al. 2009; Ude, Atkeson, and Riley 2004; Atkeson and Schaal 1997). In the language of (Argall et al. 2009), this is described as the \u201crecording mapping\u201d or \u201cembodiment mapping\u201d not being the identity. However, we are not aware of any treatment of this problem which is as systematic and general as ours in terms of guarantees on exact and approximate identi\ufb01ability. Instead, approaches are practically-focused, tailored to speci\ufb01c, say, robot tasks (Ude, Atkeson, and Riley 2004; Atkeson and Schaal 1997). Within LfD, imitation learning means learning to perform a task from expert demonstrations (Ho and Ermon 2016; Muller et al. 2006). There are two main approaches to address this problem: behavioral cloning (Pomerleau 1991), which we are focusing on, and inverse reinforcement learning (IRL) (Ng, Russell, and others 2000; Ziebart et al. 2008). The problem of bounding as well as transferring and integrating causal relations across different domains has been studied by (Balke and Pearl 1994; Bareinboim 2014; Magliacane et al. 2017). But all this work does not consider the training of AI agents. 
Within causal modelling, maybe closest related to our paper are (Bareinboim, Forney, and Pearl 2015; Forney, Pearl, and Bareinboim 2017; Zhang and Bareinboim 2017; Geiger, Hofmann, and Sch\u00a8 olkopf 2016), who also study the integration of data from heterogeneous settings 1The supplement can be found at \u201chttps://doi.org/10.5281/ zenodo.3549981\u201d. for training agents (often with latent confounders and from a multi-armed bandit perspective). For example, (Zhang and Bareinboim 2017) tackle the problem of transferring knowledge across bandit agents in settings where causal effects cannot be identi\ufb01ed by standard learning techniques. Their approach consists of two steps: (1) deriving bounds over the effects of selecting arms and (2) incorporating these bounds to search for more promising actions. However, when bounding the causal effect, they focus on binary variables, while we consider arbitrary \ufb01nite as well as continuous ranges (which are highly relevant in practice) and they do not focus on general sensor-shifts. The authors of (P. Haan 2018) study \u201ccausal confusion\u201d in causal-model-free imitation learning. There, additional observations can lead to worse performance due to the mechanism (policy) that generates them differing between demonstrations and target environment. However, in their model they assume that both the demonstrator and the imitator have (at least) the same observations. This is not always the case, and therefore our treatment allows the observations to differ. 3 Background Conventions: We use D(\u00b7||\u00b7), H(\u00b7), and I(\u00b7; \u00b7|\u00b7) to denote the Kullback-Leibler (KL) divergence, entropy, and mutual information, respectively (Cover and Thomas 2012). We consider both, discrete and continuous random variables; P R stands for the sum or integral, accordingly; P(W) for the distribution of a variable W, and p(w) for the density at value W = w. 
If not stated otherwise, we assume that distributions have full support [2] and densities. Causal models: According to Pearl\u2019s definition (Pearl 2009), a causal model is an ordered triple (U, V, E), where U denotes a set of exogenous variables whose values are determined by factors outside the model (not observable); V is a set of endogenous variables whose values are determined within the model; and E is a set of structural equations that express, for each endogenous variable W \u2208 V, the mechanism of how W is generated by certain other endogenous and exogenous variables. Namely, for all W \u2208 V, we have W = f_W(PA_W, U_W), where f_W(\u00b7, \u00b7) is a function and PA_W denotes the parent set of variable W. W is called a child of PA_W. This induces a joint distribution over the endogenous variables, which can be factorized as follows: P(V) = \u220f_{W \u2208 V} P(W | PA_W). This factorization is usually expressed using a directed acyclic graph (DAG), in which nodes represent the endogenous variables and arrows point from parents to their children. It is also possible that a subset of V is hidden; in this case, we denote the hidden variables with circles in the DAG. [Footnote 2: Full support is a commonly made (Pearl 2009) but non-trivial assumption, important for identifiability.] Figure 2: Causal DAGs. Left: source domain. Right: target domain. Circle means hidden to us. The post-interventional distribution is defined by replacing a subset of the structural equations without generating cycles in the DAG (Pearl 2009). More specifically, the post-intervention distribution after an (atomic) intervention on variable W is defined by replacing f_W(PA_W, U_W) with the value w, and it is denoted by P(V | do(W = w)). 4 Setting and problem formulation 4.1 General model of our setting Causal models of source and target domain. 
There are two domains, the source domain where the demonstrator (agent) observes and acts, and the target domain where the target agent, which we design, observes and acts. (By domain we mean the complete causal model of environment, sensors, and agent.) The two domains, including what is hidden and what is observed by us, are depicted by the two causal DAGs in Figure 2 over the following variables: X is the state of the system, A is the action of the agent, Z stands for the outcome (an abstract variable that could be, as in Example 1, the state of cars in the the next time instance). Regarding observations, we assume that in the source domain we have YD, the demonstrator\u2019s input, generated by the demonstrator\u2019s sensors, YS, the spectator\u2019s \u2013 i.e., our \u2013 observation of the state of the source system, and in the target domain we have YT , the input to the target agent from the target agent\u2019s sensors. We often denote distributions over variables (e.g. P(Z)) in the source and target domain by subscript S and T, respectively (e.g., PS(Z) and PT (Z)). Let \u03c0D(A|YD) denote the policy of the demonstrator, and \u03c0T (A|YT ) denote the policy of the target agent. Relationship between source and target domain, and what is known to us. We assume that the two domains are related by sharing the same invariant mechanism for outcome given state and action, i.e., PT (Z|A, X) = PS(Z|A, X), so that we can drop the subscript and just write P(Z|A, X). We assume we are given PS(Z, A, YS) (or a sample of it), as well as the sensor characteristics3 PS(YS|X) and PT (YT |X). 3This may be based on performing an experimental system identi\ufb01cation of the sensors or using physical knowledge. 4.2 Problem formulation The overarching goal is to design the target agent that observes and successfully acts in the target domain, based on what we know from the source domain and its relation to the target domain. 
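The two-domain model of Section 4.1 (Figure 2) fixes how the observed source distribution arises from the hidden pieces: p_S(z, a, y_S) is obtained by composing the sensor p_S(y_S|x), the shared mechanism p(z|a, x), the demonstrator policy \u03c0_D(a|y_D), and the joint p_S(y_D, x). A toy discrete instantiation (all names and sizes are our illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nyd, nys, na, nz = 3, 2, 2, 2, 2      # |X|, |Y_D|, |Y_S|, |A|, |Z| (toy sizes)

def cond(shape):                           # random conditional distribution over axis 0
    M = rng.random(shape)
    return M / M.sum(axis=0, keepdims=True)

P_YDX = rng.random((nyd, nx)); P_YDX /= P_YDX.sum()   # joint p_S(y_D, x)
pi_D  = cond((na, nyd))                                # policy pi_D(a | y_D)
P_ZAX = cond((nz, na, nx))                             # shared mechanism p(z | a, x)
P_YSX = cond((nys, nx))                                # spectator sensor p_S(y_S | x)

# The distribution we actually observe in the source domain, p_S(z, a, y_S):
P_ZAYS = np.einsum('sx,zax,ad,dx->zas', P_YSX, P_ZAX, pi_D, P_YDX)
assert abs(P_ZAYS.sum() - 1.0) < 1e-12    # a proper joint distribution
```

Everything except `P_ZAYS` is hidden from us in the source domain; the identification questions below ask what can be recovered from `P_ZAYS` plus the known sensor characteristics.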
We consider two speci\ufb01c tasks that serve this overarching goal: Task 1 (Action-effect transfer learning task). Infer PT (Z|do(A), YT ), the effect of action A on outcome Z conditional on observation YT in the target domain.4 Task 2 (Imitation transfer learning task). Learn a policy \u03c0T (A|YT ) for the target agent (also called imitator in this task) such that it behaves as similarly as possible to the demonstrator (details follow). 5 Basic step addressing both tasks: equations and algorithm In this section, we make general derivations about our model (Section 4.1), which serve as steps towards both, the imitation and the action-effect transfer learning tasks. Basic equation: Our model (Section 4.1) implies the following equations, for all z, a, y: pS(z, a, yS) = P Z x pS(yS|x)pS(z, a, x) (1) = P Z x,yD pS(yS|x)p(z|a, x)\u03c0D(a|yD)pS(yD, x). (2) These are the basic equations that relates what is known \u2013 pS(z, a, yS) (l.h.s. of (1)) \u2013 to what we would like to know (r.h.s. of (2)): \u03c0D(a|yD) for Task 2 and p(z|a, x) for Task 1. More speci\ufb01cally, these equations constrain the unknown quantities to a set of possibilities. This is exactly the set up to which we can identify (Pearl 2009) them. Finite linear equation system in discrete case: Solving (1) for pS(z, a, x) is an important intermediate step to addresses Task 1 and 2 simultaneously, since pS(z, a, x) contains all the information that pS(z, a, yS) contains about \u03c0D(a|yD) and p(z|a, yT ). (In particular, in the classical case 4Once the effect PT (Z|YT , do(A)) is inferred, what remains to be done for designing the target agent is to \ufb01x a utility function u(Z) on the outcome, and then pick the optimal a by, say, maximizing ET (u(Z)|do(a), yT ) w.r.t. a. \fof YS = YT = YD = X, pS(z, a, x) uniquely determines the latter two quantities via marginalization/conditioning.) So let us for a moment focus on (1). 
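Since (1) is linear in the unknown vector p_S(z, a, X) for each fixed z, a in the discrete case, its solution set is a polytope, and it suffices to find the corner vectors. A brute-force sketch via basic feasible solutions (a simplified stand-in for the SVD-based routine of Algorithm 1; the numbers are made up):

```python
import itertools
import numpy as np

def solution_set_corners(A, b, tol=1e-9):
    """Corner vectors of {p >= 0 : A p = b} (basic feasible solutions).
    Their convex hull is the whole solution set.  Brute force over
    column subsets, so only sensible for small dimensions."""
    m, l = A.shape
    corners = []
    for cols in itertools.combinations(range(l), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                       # singular basis, skip
        x = np.linalg.solve(B, b)
        if (x >= -tol).all():              # nonnegative, hence feasible
            p = np.zeros(l)
            p[list(cols)] = x
            if not any(np.allclose(p, q) for q in corners):
                corners.append(p)
    return corners

# Toy sensor matrix [P(y_i|x_j)] with m = 2 < l = 3 (hypothetical numbers),
# and an observed vector P(z, a, Y_S) for one fixed (z, a):
A = np.array([[0.8, 0.2, 0.5],
              [0.2, 0.8, 0.5]])
b = np.array([0.3, 0.2])
corners = solution_set_corners(A, b)   # all P(z, a, X) consistent with the data
```

Any convex combination of the returned corners reproduces the observed left-hand side exactly, which is precisely the non-uniqueness the m < l case creates.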
In the discrete case, it can be rewritten as the following collection of matrix equations. Let {x_1, . . . , x_\u2113} and {y_1, . . . , y_m} be the ranges of X and Y_S, respectively. Then, for all z, a: [P(z, a, y_1); . . . ; P(z, a, y_m)] = [P(y_1|x_1) \u00b7\u00b7\u00b7 P(y_1|x_\u2113); . . . ; P(y_m|x_1) \u00b7\u00b7\u00b7 P(y_m|x_\u2113)] \u00b7 [P(z, a, x_1); . . . ; P(z, a, x_\u2113)], (3) where the left-hand side is the column vector P(z, a, Y_S) \u2208 R^m, the matrix is [P(y_i|x_j)]_{i,j=1}^{m,\u2113} \u2208 R^{m\u00d7\u2113}, and the right factor is the column vector P(z, a, X) \u2208 R^\u2113. Algorithm for solution set in discrete case: Algorithm 1 yields a parametrization of the set of all possible solutions P(z, a, X) \u2208 R^\u2113 to (3), for any z, a. Specifically, it outputs the finite set of corner vectors whose convex combinations parametrize the solution set. It uses singular-value decomposition (SVD) to cope with non-invertibility, and then a routine inspired by the simplex algorithm to account for the constraint that the output has to be a proper probability distribution. [5] For the algorithm, w.l.o.g., we assume m \u2264 \u2113 and that [P(y_i|x_j)]_{i,j=1}^{m,\u2113} has full rank (otherwise one removes linearly dependent rows). Note that if m = \u2113 and [P(y_i|x_j)]_{i,j=1}^{m,\u2113} is non-singular, then (3) determines P(z, a, X) uniquely, via a simple matrix inversion. Therefore, for this algorithm, the interesting scenario is m < \u2113. This is the case, e.g., in Example 1 \u2013 the highway drone data where indicator lights are not recorded. 6 Approach to the action-effect transfer learning task Let us now address Task 1 \u2013 inferring the target domain\u2019s action-effect P_T(Z|do(A), Y_T). Example 2. To illustrate what can go wrong when naively addressing this task, let us get back to the highway drone data (Example 1). 
There, in the source domain, the indicator light is not observed by us, and for simplicity we assumed that there are no other variables, i.e., Y_S is empty/constant. Our informal argument in that example can now be stated formally based on causal models (Section 3): observe that in the causal DAG (Figure 2), X and Y_D are hidden confounders that introduce \u201cspurious correlations\u201d between A and Z. Therefore, in the generic case, the naive guess P_S(Z|a) does not coincide with the actual action-effect P_S(Z|do(A)) (= P_T(Z|do(A))). Assumption 1. In this section, we assume the target agent observes the full state, i.e., Y_T = X. [6] [Footnote 5: Since the left-hand side of (3) is a probability vector, it is not necessary to bound P(z, a, x_i) by one.] [Footnote 6: Observability of X, as in Markov decision processes (MDPs), seems to be a good approximation to many real-world situations while at the same time keeping the analysis instructive. We make no assumption w.r.t. Y_D.]\n\nAlgorithm 1: Finding the solution set of (3)\nInput: P(z, a, Y_S) (the l.h.s. of (3)) and [P(y_i|x_j)]_{i,j=1}^{m,\u2113}\nOutput: \u03b6_1, . . . , \u03b6_k \u2208 R^\u2113 such that their convex hull is the solution set of (3)\n1: Rearrange the columns of [P(y_i|x_j)]_{i,j=1}^{m,\u2113} such that [P(y_i|x_j)]_{i,j=1}^{m,\u2113} = [D E] and D \u2208 R^{m\u00d7m} is non-singular;\n2: U\u03a3V^T \u2190 SVD of [P(y_i|x_j)]_{i,j=1}^{m,\u2113};\n3: for i = 1 to \u2113 \u2212 m do\n4:    e_i \u2190 the zero vector of length \u2113 \u2212 m whose i-th entry is one;\n5: M \u2190 V \u00b7 [0 \u00b7\u00b7\u00b7 0; e_1 \u00b7\u00b7\u00b7 e_{\u2113\u2212m}], b \u2190 [D^{\u22121} P(z, a, Y_S); 0];\n6: i \u2190 1;\n7: for every sub-matrix R of M of dimension (\u2113\u2212m) \u00d7 (\u2113\u2212m) do\n8:    \u02c6b \u2190 the sub-vector of b of length \u2113 \u2212 m that corresponds to the selected rows of M;\n9:    if R^{\u22121} exists and \u2212M R^{\u22121} \u02c6b + b \u2265 0 then\n10:       \u03b6_i \u2190 \u2212M R^{\u22121} \u02c6b + b;\n11:       i \u2190 i + 1;\n\nUnder Assumption 1, we have P_T(Z|do(A), Y_T) = P_T(Z|do(A), X) = P(Z|A, X). So Task 1 means inferring P(Z|A, X) (which could also be referred to as the (target domain\u2019s) \u201cdynamics\u201d). We now propose three methods, which differ w.r.t. the setting in which they are applicable and/or w.r.t. yielding exact or approximate solutions. 6.1 Exact solution set in the discrete case In the case of all variables being discrete, we can build on our basic step in Section 5 to analytically find the set of possible action-effects P(Z|X, A) as follows: first we deploy Algorithm 1 to get all possible P(Z, X, A), and then from this (simply by dividing by the marginals), we get P(Z|X, A). 6.2 Exact solution in the linear invertible continuous case In the continuous case, the general identification analysis \u2013 the analysis of the solution set of (2) \u2013 is very difficult because the vector space is infinite-dimensional. Therefore let us here consider the special case of linear relationships. Assumption 2. 
In this Section 6.2, assume all relationships are linear; in particular, for matrices D, E, F: Y_S = F X + N, (4) and Z = [D E] \u00b7 (A; X) + O, (5) with (A; X) the stacked column vector and N, O the usual noise terms that are independent of all other (non-descendant) variables.\n\nAlgorithm 2: Exact linear action-effect transfer method (sample-level)\nInput: sample (z_1, a_1, y_1), . . . , (z_\u2113, a_\u2113, y_\u2113) from P(Z, A, Y_S); prior knowledge F, \u03a3_NN (see (4)); regularization parameter \u03bb\nOutput: estimates \u02c6D, \u02c6E for the regression matrices D, E (see (5))\n1: Calculate the empirical covariance matrices \u02c6\u03a3_ZA, \u02c6\u03a3_ZY_S, \u02c6\u03a3_AY_S, \u02c6\u03a3_Y_SY_S (and \u02c6\u03a3_AA) from the sample;\n2: Add a regularization term \u03bbI to \u02c6\u03a3_AA and \u02c6\u03a3_Y_SY_S;\n3: Calculate the Schur complements S_1 := \u02c6\u03a3_AA \u2212 \u02c6\u03a3_AY_S (\u02c6\u03a3_Y_SY_S \u2212 \u03a3_NN)^{\u22121} \u02c6\u03a3_Y_SA and S_2 := \u02c6\u03a3_Y_SY_S \u2212 \u03a3_NN \u2212 \u02c6\u03a3_Y_SA \u02c6\u03a3_AA^{\u22121} \u02c6\u03a3_AY_S;\n4: Calculate the estimates \u02c6D := \u02c6\u03a3_ZA S_1^{\u22121} \u2212 \u02c6\u03a3_ZY_S (\u02c6\u03a3_Y_SY_S \u2212 \u03a3_NN)^{\u22121} \u02c6\u03a3_Y_SA S_1^{\u22121} and \u02c6E := (\u2212\u02c6\u03a3_ZA S_1^{\u22121} \u02c6\u03a3_AY_S (\u02c6\u03a3_Y_SY_S \u2212 \u03a3_NN)^{\u22121} + \u02c6\u03a3_ZY_S S_2^{\u22121}) F.\n\nWe propose Algorithm 2 as a (sample-level) method in this setting. Proposition 1. Assume all variables have mean zero (otherwise center them). Furthermore, assume that X and Y_S have the same dimension, and that F (in (4)) is invertible. Then Algorithm 2 is sound in the following sense: when replacing the empirical covariance matrices \u02c6\u03a3_ZA, \u02c6\u03a3_ZY_S, \u02c6\u03a3_AY_S, \u02c6\u03a3_Y_SY_S in Line 1 by their population-level counterparts, and setting the regularization term \u03bb = 0, the output will be the true D, E (in (5)). 
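Proposition 1 can be checked numerically at the population level: build exact covariances from a linear model (our hypothetical choice: \u03a3_XX = I and a linear demonstrator policy A = MX + 0.5\u03b5), plug them into Lines 3 and 4 of Algorithm 2 with \u03bb = 0, and the true D, E come back:

```python
import numpy as np

# Ground-truth linear mechanisms (hypothetical, well-conditioned numbers):
F = np.array([[1.0, 0.3], [-0.2, 0.8]])   # sensor:  Y_S = F X + N
D = np.array([[0.9, 0.1], [-0.3, 0.6]])   # action part of Z = D A + E X + O
E = np.array([[0.2, -0.5], [0.4, 0.3]])
M = np.array([[0.7, -0.4], [0.5, 1.1]])   # demonstrator policy: A = M X + 0.5*eps
Sigma_NN = 0.09 * np.eye(2)               # known sensor-noise covariance

# Exact population covariances implied by the model (taking Sigma_XX = I):
S_AA = M @ M.T + 0.25 * np.eye(2)
S_AY = M @ F.T                            # Sigma_{A Y_S}
S_YY = F @ F.T + Sigma_NN
S_ZA = D @ S_AA + E @ M.T
S_ZY = (D @ M + E) @ F.T

# Lines 3-4 of Algorithm 2 with lambda = 0:
inv = np.linalg.inv
G = inv(S_YY - Sigma_NN)
S1 = S_AA - S_AY @ G @ S_AY.T
S2 = S_YY - Sigma_NN - S_AY.T @ inv(S_AA) @ S_AY
D_hat = (S_ZA - S_ZY @ G @ S_AY.T) @ inv(S1)
E_hat = (-S_ZA @ inv(S1) @ S_AY @ G + S_ZY @ inv(S2)) @ F

assert np.allclose(D_hat, D) and np.allclose(E_hat, E)  # Proposition 1
```

With finite samples one would replace the exact covariances by empirical ones (Line 1) and add the \u03bbI regularizer of Line 2 before inverting.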
6.3 Average-based action-effect proxy in the general case The exact general solution can be difficult to handle in terms of computation, estimation and analysis, and the linear case (Section 6.2) is of course restrictive. Let us define the following average-based action-effect proxy of the density p(z|x, a), for all z, x, a, based only on quantities we do know (from the source domain): ~p(z|x, a) := \u03a3\u222b_{y_S} p_S(z|y_S, a) p(y_S|x), (6) and let ~P(Z|X, A) be the corresponding distribution. The deviation between the average-based proxy and the ground truth it approximates can be bounded as follows: Proposition 2. We have [7] D(P_S(Z|X, A) \u2225 ~P(Z|X, A)) \u2264 I_S(X; Z | A, Y_S). In particular, if Y_S = f_{Y_S}(X) with f_{Y_S} injective, then ~P(Z|X, A) = P(Z|X, A). [Footnote 7: In fact we bound the KL divergence between the proxy and p(Z|X, A), but the expectation over X, A is w.r.t. the source domain, and therefore we have to write p_S(Z|X, A) on the l.h.s. of D(\u00b7\u2225\u00b7). See also the proof.] Note that in the discrete case, the r.h.s. in turn can be bounded by an expression that is based solely on quantities we assumed to know: max_{P\u2032(X)} H_{X\u223cP\u2032(X)}(X|Y_S). 7 Approach to the imitation learning task In this section, we address Task 2. To do so, we propose an imitator (the target agent) that selects a policy \u03c0_T(A|Y_T) such that its behavior [8] is as close as possible to that of the demonstrator. Recall that, for the design of the imitator, what is available about the demonstrator is (a sample from) P_S(Y_S, Z, A). However, the challenge is that the observation sets of the demonstrator and the imitator may not be the same. Therefore, we propose an imitator that behaves as closely as possible to the demonstrator in the case of perfect observation, i.e., arg min_{\u03c0_T} D(P_T(A, Z|X) || P_S(A, Z|X)). (7) It is worth noting that the imitator can also introduce additional constraints to this optimization problem according to its environment. 
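In the discrete case, the average-based proxy (6) above is a single tensor contraction; a minimal sketch (array-shape conventions are ours):

```python
import numpy as np

def average_based_proxy(P_Z_YSA, P_YS_X):
    """~p(z|x,a) = sum_{y_S} p_S(z|y_S,a) p(y_S|x)  --  Eq. (6), discrete case.
    P_Z_YSA[z, s, a] = p_S(z | y_S=s, a);  P_YS_X[s, x] = p(y_S=s | x)."""
    return np.einsum('zsa,sx->zxa', P_Z_YSA, P_YS_X)

rng = np.random.default_rng(1)
P_Z_YSA = rng.random((3, 4, 2))
P_Z_YSA /= P_Z_YSA.sum(axis=0, keepdims=True)

# Injective (here: identity) sensor => the proxy is exact, as in Proposition 2:
exact = average_based_proxy(P_Z_YSA, np.eye(4))
assert np.allclose(exact, P_Z_YSA)

# A noisy sensor still yields proper conditional distributions over z:
P_YS_X = rng.random((4, 5))
P_YS_X /= P_YS_X.sum(axis=0, keepdims=True)
proxy = average_based_proxy(P_Z_YSA, P_YS_X)
assert np.allclose(proxy.sum(axis=0), 1.0)
```

How far the noisy-sensor proxy can drift from the truth is exactly what the conditional-information bound of Proposition 2 controls.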
Next, we give a simple example to illustrate what can go wrong when naively addressing the imitation task under sensor-shift. Then we propose methods for the problem in (7) for several settings. Example 3. Let us come back to Example 1 and Figure 1, where the indicator light perfectly correlates deceleration and lane changing. Let us add some modi\ufb01cations: Assume we have the same sensors to observe the demonstrator as we have on board of the imitator\u2019s car, i.e., spectator\u2019s and imitator\u2019s sensors coincide, P(YT |X) = P(YS|X). And assume these sensors (similar to the drone) are missing the indicator light of the lead car (unlike the demonstrator\u2019s observation YD). Now, for the imitation task at hands, assume we naively take \u03c0T (a|yT ) := pS(a|YS = yT ) as the imitator\u2019s policy. This means that the imitator will accelerate and decelerate randomly, instead of, as the demonstrator, perfectly adapting these actions to the indicator light of the lead car (the indicator light is the actual source of variation in A given YD, but the imitator just takes PS(A|YS) for a randomized policy). This will necessarily lead to crashes in the target domain \u2013 whenever the lead car indicates and the imitator randomly decides to accelerate. This issue can also be seen formally, based on the causal DAG (Figure 2): there is a back-door path (Pearl 2009) between action A and outcome Z that is not blocked by YS, and therefore, in the generic case, PS(Z|do(A), YS) \u0338= PS(Z|A, YS). 7.1 Exact solution set in the discrete case Assumption 3. Here we assume that both the demonstrator and the imitator have the same sensors9, i.e., PS(YD|X) = PT (YT |X). 8Our notion of behavior is the conditional distribution of the action-outcome pair given the observation. 9However, we relax this assumption in the next section. \fProposition 3. Given Assumption 3, the solution of (7) is \u03c0T (a|YT = y) := \u03c0D(a|YD = y). 
Although this result introduces the optimal policy for the imitator, it is practical only if the imitator can infer \u03c0D(a|YD = y) using its observation from the source domain. In case of all variables being discrete, the imitator is able to do so using a set of \ufb01nite linear equations similar to Section 5. More precisely, (2) leads to PS(a, yS)= X y PS(yS|YD = y)PS(a, YD = y). (8) Assumption 4. For the rest of Section 7, we assume that PS(A, YS), PS(YS|YD) are known to the imitator. This forms a set of equation similar to (3). Algorithm 1 (with input P(a, YS), [P(yi S|yj D)]m,\u2113\u2032 i,j=1, with \u2113\u2032 denoting the size of the range of YD) obtains the set of possible PS(a, YD) and consequently \u03c0D(a|YD) = PS(a, YD) P a\u2032 PS(a\u2032, YD). Remark 1. Generally, it is important to mention that such assumptions can be weakened. But it will signi\ufb01cantly increase the complexity of the problem by essentially adding another layer of non-unique-identi\ufb01ability of the joint from the conditional, e.g., PS(X, YS) from PS(YS|X). 7.2 Average-based proxy in the general case Here, we propose proxy methods, which have the advantage that they can also be applied to the continuous case and may be easier to estimate/compute. We do so for three different cases of sensor-shift. First case: In this case, the imitator and the demonstrator have the same sensors in their domains, but the other sensors can be different, i.e., PT (YT |X) = PS(YD|X). Based on Proposition 3, the optimal policy for the imitator is indeed \u03c0D. Thus, we propose the following policy: \u02dc \u03c0(1) T (a|YT = y) := \u02dc \u03c0D(a|YD = y), where the latter is de\ufb01ned by P Z y\u2032 pS(a|YS = y\u2032)pS(YS = y\u2032|YD = y). (9) Proposition 4. We have D(\u03c0D||\u02dc \u03c0(1) T ) \u2264IS(A; YD|YS). In the discrete case, additionally, the r.h.s. can be bounded by IS(A; YD|YS) \u2264H(YD|YS). 
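In the discrete case, the proxy (9) is one matrix product. A small sketch, including the deterministic sensor relation under which Proposition 4 guarantees exact recovery of \u03c0_D (toy numbers are ours):

```python
import numpy as np

def proxy_policy(P_A_YS, P_YS_YD):
    """~pi_D(a|y) = sum_{y'} p_S(a|Y_S=y') p_S(Y_S=y'|Y_D=y)  --  Eq. (9).
    P_A_YS[a, s] = p_S(a | y_S=s);  P_YS_YD[s, d] = p_S(y_S=s | y_D=d)."""
    return P_A_YS @ P_YS_YD

# Demonstrator policy over 3 actions and 2 possible observations y_D:
pi_D = np.array([[0.7, 0.1],
                 [0.2, 0.3],
                 [0.1, 0.6]])

# If Y_S = Y_D (a deterministic, invertible relation), then p_S(a|Y_S) = pi_D
# and P(Y_S|Y_D) is the identity, so the proxy recovers pi_D exactly:
assert np.allclose(proxy_policy(pi_D, np.eye(2)), pi_D)

# With a noisy observation channel the proxy is only an approximation,
# but its columns remain proper distributions over actions:
P_YS_YD = np.array([[0.9, 0.2],
                    [0.1, 0.8]])
tilde = proxy_policy(pi_D, P_YS_YD)   # pi_D standing in for p_S(a|y_S)
assert np.allclose(tilde.sum(axis=0), 1.0)
```

In the noisy case, the gap between the proxy and \u03c0_D is what the bound I_S(A; Y_D|Y_S) of Proposition 4 quantifies.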
The above result implies that the proposed proxy and the demonstrator\u2019s policy are the same, when there exist deterministic relationship between the observation sets. Next result goes beyond the policies and looks at the overall behavior of the system induced by this policy. Proposition 5. The proposed proxy in (9) implies that the KL-divergence in (7) is bounded by D(\u02dc \u03c0(1) T ||\u03c0D). Second case: In this case, the spectator and the demonstrator have the same set of sensors in the source domain, i.e., PS(YS|X) = PS(YD|X) but the imitator can have different sensors in the target domain. Optimizing an upper bound of (7) that is described in the Supplement gives the following policy to the imitator, \u02dc \u03c0(2) T (a|yT ) \u221dexp \uf8eb \uf8edP Z yS p(yS|yT ) log pS(a|yS) \uf8f6 \uf8f8. Proposition 6. The proposed policy in this case will lead to the following upper bound for (7), X a,yT ,yS p(yS|yT )pT (yT )\u02dc \u03c0(2) T (a|yT ) log \u02dc \u03c0(2) T (a|yT ) pS(a|yS) . Note that in an extreme setting when YS is determined uniquely from YT , it is straightforward to show that the upper bound in Proposition 6 becomes zero. Thus, the proposed proxy leads to the demonstrator\u2019s behavior. Third case: This is the general case where all sensors can be different. Note that Example 3 belongs to this case. Here, we propose the following policy for the imitator \u02dc \u03c0(3) T (a|yT ) \u221dexp \uf8eb \uf8edP Z x p(x|yT ) log \u02dc p(a|x) \uf8f6 \uf8f8, where \u02dc p(a|x) := P Z y pS(a|YS = y)pS(YS = y|x). We introduced the other two cases since they occur frequently in different applications and we can derive theoretical bounds for them. 8 Experiments In this section we perform experiments for some of the methods proposed in Sections 5, 6 and 7. 
8.1 Action-effect learning task Setup: In this experiment, we test two of our methods for the action-effect transfer learning task: Algorithm 2 and the proxy in (6) (more specifically: a sample-level version of it for the linear case). We use the real-world data set \u201chighD\u201d (Krajewski et al. 2018) that consists of recordings by drones that flew over several highway sections in Germany (mentioned in Example 1). From this data set, we selected all situations where there is a lead car \u2013 the demonstrator (this is a different setup than Example 1 [10]) \u2013 and a following car on the same lane (which are less than 50m from each other, and have speed at least 80km/h). Here X is the distance, velocities, and acceleration of the follower; A is the acceleration of the demonstrator; and Z is the acceleration of the follower, 1.5 seconds later. [Footnote 10: While this is the data set mentioned in Example 1, here we do not consider the indicator lights, since for them we would not have the ground truth.] Figure 3: Left: Outcome for the action-effect learning experiment. Our exact linear transfer method (Algorithm 2) has higher variance, but outperforms the average-based proxy method (sample-level version of (6) for the linear case), which can be seen as a baseline, for long enough samples. We also plot what could be achieved if X were fully observed in the source domain, as a lower bound. Right: Learned policies for the imitation learning experiment: the true policy \u03c0_D, the policy from the method in Section 7.1, \u02c6\u03c0_T, and the corresponding proxy ~\u03c0^(1)_T. The three policies are evaluated at three different points (a|v_o, b_o) \u2208 {(1|50, 0), (1|50, 1), (\u22121|50, 1)}. 
Furthermore, the source domain\u2019s Y_S is generated by a randomly drawn matrix F applied to X, plus Gaussian noise (as in (4)). This semi-real approach allows us to have ground-truth samples from P(Z, A, X) = P_T(Z, A, Y_T), i.e., the target domain (recall our Assumption 1). We apply the two methods on training samples from the source domain P_S(Z, A, Y_S) up to length 20000, and calculate the mean (over 20 different data and synthetic noise samples) squared error on separate test samples of length 1000 from P(Z, A, X). Outcome: The outcome for this experiment is depicted and discussed in Figure 3. 8.2 Imitation learning task Setup: In this experiment we simulated the driving scene illustrated in Figure 1. The observation set of the demonstrator Y_D contains the speed v_o \u2208 {40, 45, ..., 60} km/h and the indicator light b_o \u2208 {0, 1} of the lead vehicle. The imitator only gets to see a noisy observation of the demonstrator\u2019s speed, i.e., Y_S = v_d + N, where N \u223c N(0, 1/4). Actions are \u22121, +1, 0, denoting reducing the speed by 5km/h, increasing it by 5km/h, and keeping the same speed, respectively. In this experiment, we assumed Y_D = Y_T. We defined the demonstrator\u2019s policy to reduce the speed when the indicator of the other vehicle is on (b_o = 1) and to increase its speed or keep the same speed when b_o = 0. Note that the classical imitation learning approach will fail in this setting since Y_T \u2260 Y_S. We applied Algorithm 1 plus a criterion to obtain the policy ~\u03c0^(1)_T for the imitator. This criterion (described in the supplement) ensures that the imitator neither increases its speed when b_o = 1 nor decreases its speed with the same probability when b_o = 0. We formulated this as a linear program. Outcome: Figure 3 compares the true policy \u03c0_D, the policy from the method in Section 7.1, \u02c6\u03c0_T, and the corresponding proxy ~\u03c0^(1)_T for different sample sizes. 
9" + }, + { + "url": "http://arxiv.org/abs/1704.00607v2", + "title": "A New Measure of Conditional Dependence", + "abstract": "Measuring conditional dependencies among the variables of a network is of\ngreat interest to many disciplines. This paper studies some shortcomings of the\nexisting dependency measures in detecting direct causal influences or their\nlack of ability for group selection to capture strong dependencies and\naccordingly introduces a new statistical dependency measure to overcome them.\nThis measure is inspired by Dobrushin's coefficients and based on the fact that\nthere is no dependency between $X$ and $Y$ given another variable $Z$, if and\nonly if the conditional distribution of $Y$ given $X=x$ and $Z=z$ does not\nchange when $X$ takes another realization $x'$ while $Z$ takes the same\nrealization $z$. We show the advantages of this measure over the related\nmeasures in the literature. Moreover, we establish the connection between our\nmeasure and the integral probability metric (IPM) that helps to develop\nestimators of the measure with lower complexity compared to other relevant\ninformation theoretic based measures. Finally, we show the performance of this\nmeasure through numerical simulations.", + "authors": "Jalal Etesami, Kun Zhang, Negar Kiyavash", + "published": "2017-03-31", + "updated": "2017-06-02", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "Introduction Identifying the conditional independencies (CIs) among the variables or processes in a systems is a fundamental problem in scienti\ufb01c investigations in different \ufb01elds such as biology, econometric, social sciences, and many others. In probability theory, two events X and Y are conditionally independent given a third event Z, if the occurrence or nonoccurrence of X and Y are \u201cindependent\u201d events in their conditional probability distribution given Z (Gorodetskii, 1978). 
There are several CI measures in the literature, developed for different applications to capture such independency. The most commonly used one is conditional mutual information (CMI) (Gorodetskii, 1978), an information-theoretic quantity. This measure has been used in fields such as communication engineering, channel coding (Cover and Thomas, 2012), and causal discovery (Spirtes et al., 2000b). CMI between X and Y given Z is defined by comparing two conditional distributions, P(X|Y, Z) and P(X|Z), using the KL-divergence and then averaging over the conditioning variable Z. Hence, it is limited to those realizations with positive probability (see Section 4.1). One shortcoming of such a measure is that it cannot capture CIs that occur rarely or even over zero-measure sets. Another shortcoming is that it is symmetric and thus fails to encode asymmetric dependencies such as causal directions in a network. Most conditional dependency/independency measures are defined similarly to CMI in the sense that they take an average over the conditioning variables; the kernel-based method in (Zhang et al., 2011) is another example. Consequently, such measures may fail to distinguish the range of the conditioning variable Z in which the dependency between the variables of interest X and Y is clearest. For example, consider a treatment that affects a disease differently for different genders. There are scenarios in which the previous CI measures (e.g., CMI) fail to identify for which gender the effect of the treatment on the disease is maximized (see Section 4.3). Discovering the causal relationships in a network is one of the main applications of CI measures (Spirtes et al., 2000b).
In this area, it is important to capture the direct causal influence between two variables in a network independently of the other, indirect causal influences between them. As we will show in Section 4.2, previous CI measures (e.g., CMI) cannot capture the direct causal influence between two variables (cause and effect) in a network when some variables on the indirect causal path depend on the cause almost deterministically. The main contribution of this paper is the introduction of a statistical metric, inspired by Dobrushin\u2019s coefficient (Dobrushin, 1970), to measure the dependency/independency between X and Y given Z in a network from their realizations. Our metric is based on the paradigm that if Y has no dependency on X given Z, then the conditional distribution of Y given X = x and Z = z will not change if x varies while Z takes the same realization z. We will show that this dependency measure overcomes the aforementioned limitations. Moreover, we will establish the connection between our measure and the IPM to develop estimators for our metric with lower complexity compared to other relevant information-theoretic measures such as CMI. This is because the proposed estimators depend on the sample points only through the metric of the space, and thus their complexity is independent of the dimension of the samples. Perhaps the best-known paradigm for visualizing the CIs among the variables of a network is Bayesian networks (Pearl, 2003). They are directed acyclic graphs (DAGs) in which nodes represent random variables and directed edges denote the direction of causal influences. Analogously, using the dependency measure in this work, we can represent the causal structure of a network via a DAG that possesses the same properties as Bayesian networks.
It is also worth mentioning that there exist several measures to capture CIs and the causal in\ufb02uences among time series, for instance, transfer entropy (Schreiber, 2000) and directed information (Massey, 1990). Measuring the reduction of uncertainty in one variable after knowing another variable is the key idea in such measures. Because these measure are de\ufb01ned based on CMI, they also suffer the aforementioned limitations. Note that the proposed measure can easily be modi\ufb01ed to capture such in\ufb02uences in time series as well. 2. De\ufb01nitions In this Section, we review some basic de\ufb01nitions and our notation. Throughout this paper we use capital letters to represent random variables, lowercase letters to denote a realization of a random variable, and bold capital letters to denote matrices. We denote a subset of random variables with index set K \u2286[m], where [m] := {1, ..., m} by XK and [m] \\ {j} by \u2212{j}. In a directed graph \u2212 \u2192 G = (V, \u2212 \u2192 E ), we denote the parent set of a node i \u2208V by Pai := {j : (j, i) \u2208\u2212 \u2192 E }, and denote the set of its non-descendant1 by Ndi. We use X \u22a5 \u22a5Y |Z to denote X and Y are independent given Z. Bayesian Network: A Bayesian network is a graphical model that represents the conditional independencies among a set of random variables via a directed acyclic graph (DAG) (Spirtes et al., 2000b). A set of random variables X is Bayesian with respect to a DAG \u2212 \u2192 G, if P(X) = m Y i=1 P(Xi|XP ai). (1) Up to some technical conditions (Lauritzen, 1996), this factorization is equivalent to the causal Markov condition. Causal Markov condition states that a DAG is only acceptable as a possible causal hypothesis if every node is conditionally independent of its non-descendant given its parents. 
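The factorization in (1) and the causal Markov condition can be checked numerically on a toy discrete network; a minimal sketch (the three-node chain X1 → X2 → X3 and its conditional tables below are illustrative assumptions, not from the paper):

```python
from itertools import product

# Toy chain DAG: X1 -> X2 -> X3, all variables binary.
# Conditional tables (illustrative values).
p1 = {0: 0.6, 1: 0.4}                      # P(X1)
p2 = {(0, 0): 0.9, (0, 1): 0.1,            # P(X2 | X1)
      (1, 0): 0.2, (1, 1): 0.8}
p3 = {(0, 0): 0.7, (0, 1): 0.3,            # P(X3 | X2)
      (1, 0): 0.1, (1, 1): 0.9}

# Joint distribution built via the factorization (1):
# P(x1, x2, x3) = P(x1) P(x2 | x1) P(x3 | x2).
joint = {(a, b, c): p1[a] * p2[(a, b)] * p3[(b, c)]
         for a, b, c in product((0, 1), repeat=3)}

def cond_indep(joint, tol=1e-12):
    """Check the causal Markov condition X3 _||_ X1 | X2:
    P(x3 | x1, x2) must not depend on x1."""
    for b, c in product((0, 1), repeat=2):
        vals = []
        for a in (0, 1):
            p_ab = sum(joint[(a, b, cc)] for cc in (0, 1))  # P(x1=a, x2=b)
            vals.append(joint[(a, b, c)] / p_ab)
        if abs(vals[0] - vals[1]) > tol:
            return False
    return True

print(abs(sum(joint.values()) - 1.0) < 1e-9)  # True: a proper distribution
print(cond_indep(joint))  # True: X3 is independent of the non-descendant X1 given its parent X2
```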
The DAG corresponding to a joint distribution possesses the global Markov condition if, for any disjoint sets of nodes A, B, and C for which A and B are d-separated by C, we have XA \u22a5 \u22a5XB|XC. (A path is d-separated by C if it contains a collider \u2192\u00b7\u2190 whose descendants are not in C or a non-collider in C; a node v is a non-descendant of a node u if there is no directed path from u to v.) It is shown in (Lauritzen, 1996) that the causal Markov condition and the global Markov condition are equivalent. Faithfulness: A joint distribution is called faithful with respect to a DAG if all the conditional independence (CI) relationships implied by the distribution can also be found from its corresponding DAG using d-separation, and vice versa (Pearl, 2014); the set of distributions that do not satisfy this assumption has measure zero (Meek, 1995). It is possible that several DAGs encode the same set of CI relationships; in this case, they are called Markov equivalent. 3. New Dependency Measure As we mentioned earlier, we use the following paradigm to define our measure of independency: if Y has no dependency on X given Z, then the conditional distribution of Y given X = x and Z = z should not change when X takes a different realization x\u2032 while Z takes the same realization z. This paradigm is similar in nature to Pearl\u2019s paradigm of causal influence (Pearl, 2003). He proposed that the influence of a variable (potential cause) on another variable (effect) in a network is assessed by assigning different values to the potential cause, while other variables\u2019 effects are removed, and observing the behavior of the effect variable. Below, we formally introduce our dependency measure. Consider a collection X of m random variables.
In order to identify the dependency of Xi on Xj, we select a set of indices K, where K \u2286\u2212{i, j} and consider the following two probability measures: \u00b5i(xK\u222a{j}):=P \u0010 Xi \f \f \fXK\u222a{j} = xK\u222a{j} \u0011 , \u00b5i(yK\u222a{j}):=P \u0010 Xi \f \f \fXK\u222a{j} = yK\u222a{j} \u0011 , (2) where xK\u222a{j} and yK\u222a{j} \u2208E|K|+1 are two realizations for XK\u222a{j} that are the same every where except at Xj. Further, assume xK\u222a{j} at position Xj equals x and yK\u222a{j} equals y (y \u0338= x) at this position. If there exists a subset K \u2286\u2212{i, j} such that for all such realizations, \u00b5i(xK\u222a{j}) and \u00b5i(yK\u222a{j}) are the same, then we say Xi has no dependency on Xj. This is analogous to the conditional independence that states if Xj and Xi are independent given some XK, then there is no causal in\ufb02uence between them. Note that using mere observational data, comparing the two conditional probabilities in (2) reveals the dependency between Xi and Xj. However, when interventional data is available, we can identify whether Xj causes Xi, i.e., the direction of in\ufb02uence. In order to compare the two probability measure in (2), a metric on the space of probability measures is required. There are several metrics that can be used such as KLdivergence, total variation, etc (Gibbs and Su, 2002). For instance, using the KL-divergence will lead to develop CI test-based approaches (Singh and Valtorta, 1995). In this work, we use Wasserstein distance and discuss the advantage of using such metric in Section 5.1. De\ufb01nition 1. Let (E, d) be a metrical complete and separable space equipped with the Borel \ufb01eld B, and let M be the space of all probability measures on (E, B). 
Given \u03bd1, \u03bd2 \u2208M, the Wasserstein metric between \u03bd1 and \u03bd2 is given by Wd(\u03bd1, \u03bd2) := inf\u03c0 E\u03c0[d(x, y)], where the infimum is taken over all probability measures \u03c0 on E \u00d7 E whose marginal distributions are \u03bd1 and \u03bd2, respectively. Using the above distance, we define the dependency of Xi on Xj given K \u2286\u2212{i, j} as follows: cK i,j := sup Wd(\u00b5i(xK\u222a{j}), \u00b5i(yK\u222a{j})) / d(x, y), (3) where the supremum is over all realizations xK\u222a{j} and yK\u222a{j} that differ only at the jth variable. Moreover, we assume xK\u222a{j} at the jth position equals x and yK\u222a{j} equals y (y \u0338= x) at this position. When K = \u2212{i, j}, cK i,j is called Dobrushin\u2019s coefficient (Dobrushin, 1970). Similarly, we define the dependency of a set of nodes B on a disjoint set A given K, where K \u2229(A \u222aB) = \u2205, as follows: cK B,A := sup Wd(\u00b5B(xK\u222aA), \u00b5B(yK\u222aA)) / d(xA, yA), (4) where the supremum is over realizations xK\u222aA and yK\u222aA that agree off A. Remark 1. An alternative way of interpreting the above measure is via an equivalent network in which all the nodes in the set K \u222a{j} are injected with independent inputs whose distributions equal their marginals, i.e., node k is injected with an independent random variable with distribution P(Xk). In this equivalent network, the dependency of i on j given K can be expressed by \u222bE \u220fk\u2208K P(Xk = xk) P(Xj = y) P(Xj = x) Wd(\u00b5i(xK\u222a{j}), \u00b5i(yK\u222a{j})) / d(x, y) dxk dx dy. Clearly, this expression is bounded above by (3). 3.1.
Maximum Mean Discrepancy Using a special case of the duality theorem of Kantorovich and Rubinstein (Villani, 2003), we obtain an alternative approach for computing the Wasserstein metric as follows: Wd(\u03bd1, \u03bd2) = sup f\u2208FL \f \f \f \f Z E fd\u03bd1 \u2212 Z E fd\u03bd2 \f \f \f \f , (5) where FL is the set of all continuous functions satisfying the Lipschitz condition: ||f||Lip := supx\u0338=y |f(x) \u2212 f(y)|/d(x, y) \u22641. This representation of the Wasserstein metric is a special form of integral probability metric (IPM) (M\u00a8 uller, 1997) that has been studied extensively in probability theory (Dudley, 2002) with applications in empirical process theory (Van Der Vaart and Wellner, 1996), transportation problem (Villani, 2003), etc. IPM is de\ufb01ned similar to (5) but instead of FL, the suprimum is taken over a class of real-valued bounded measurable functions on E. One particular instance of IPM is maximum mean discrepancy (MMD) in which the suprimum is taken over FH := {f : ||f||H \u22641}. More precisely, MMD is de\ufb01ned as MMD(\u03bd1, \u03bd2) := sup f\u2208FH \f \f \f \f Z E fd\u03bd1 \u2212 Z E fd\u03bd2 \f \f \f \f , (6) Here, H represents a reproducing kernel Hilbert space (RKHS) (Aronszajn, 1950) with reproducing kernel k(\u00b7, \u00b7). MMD has been used in statistical applications such as independence testing and testing for conditional independence (Gretton et al., 2007; Fukumizu et al., 2007; Sun et al., 2007). It is shown in (Gretton et al., 2006) that when H is a universal RKHS (Micchelli et al., 2006), de\ufb01ned on the compact metric space E, then MMD(\u03bd1, \u03bd2) = 0 if and only if \fA New Measure of Conditional Dependence \u03bd1 = \u03bd2. In this case, MMD can also be used to compare the two conditional distributions in (2). This is because, MMD(\u00b5i(xK\u222a{j}), \u00b5i(yK\u222a{j})) = 0 implies that the two conditional distributions are the same. 
This allows us to define a new dependency measure, which we denote by \u02dc cK i,j, defined similarly to (3) but using MMD instead of the Wasserstein distance. It is straightforward to show that this measure has properties similar to the one in (3). The main difference between these two measures is their estimation method, which we discuss in Section 5.1. 4. Advantages of the Dependency Measure Herein, we discuss the advantages of our measure over other dependency measures in the literature. 4.1. Mutual Information and Information Flow Conditional mutual information is an information-theoretic measure that has been used in the literature to identify the conditional independence structure of a network. This measure compares two probability measures, P(Xi|Xj, XK) and P(Xi|XK), using the KL-divergence as follows: I(Xi; Xj|XK) := \u2211xi,xj,xK P(xi, xj, xK) log [P(xi|xj, xK) / P(xi|xK)]. (7) This measure is symmetric and hence cannot capture the direction of influence. Moreover, it only compares the probability measures over pairs (Xi, Xj) that have positive probability. Note that any other measure in the literature that is based on conditional independence tests, such as the kernel-based methods in (Sun et al., 2007; Zhang et al., 2011), has a similar limitation. Example 1. Consider a network of two variables X and Y , in which X \u223cN(0, 1) is a zero-mean Gaussian variable and Y is N(0, 1) whenever X is a rational number and N(1, 2) otherwise. In this network, Y is dependent on X, but this cannot be captured using mutual information, because I(X; Y ) = 0. On the other hand, we have cy,x > 0 and cx,y = 0. Another quantity that has been introduced in the literature to quantify causal influences in a network is information flow (Ay and Polani, 2008). This quantity is defined using Pearl\u2019s do-calculus (Pearl, 2003). Intuitively, operating do(xi) removes the dependencies of Xi on its parents and replaces P(Xi|XPai) with the delta function.
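The conditional mutual information in (7) can be evaluated directly when the joint distribution is a finite table; a small sketch (the example joint, in which X and Y are independent given Z = 0 and equal given Z = 1, is an illustrative assumption):

```python
from itertools import product
from math import log

def cmi(joint):
    """I(X;Y|Z) as in (7): sum, over triples with positive probability,
    of P(x,y,z) * log( P(x|y,z) / P(x|z) ), in nats."""
    pyz, pz, pxz = {}, {}, {}
    for (x, y, z), p in joint.items():
        pyz[(y, z)] = pyz.get((y, z), 0.0) + p
        pz[z] = pz.get(z, 0.0) + p
        pxz[(x, z)] = pxz.get((x, z), 0.0) + p
    total = 0.0
    for (x, y, z), p in joint.items():
        if p > 0:  # CMI is restricted to realizations with positive probability
            p_x_given_yz = p / pyz[(y, z)]
            p_x_given_z = pxz[(x, z)] / pz[z]
            total += p * log(p_x_given_yz / p_x_given_z)
    return total

# Z = 0: X, Y i.i.d. fair coins (independent); Z = 1: Y = X exactly.
joint = {}
for x, y in product((0, 1), repeat=2):
    joint[(x, y, 0)] = 0.5 * 0.25
joint[(0, 0, 1)] = 0.25
joint[(1, 1, 1)] = 0.25
joint[(0, 1, 1)] = 0.0
joint[(1, 0, 1)] = 0.0

print(round(cmi(joint), 6))  # 0.346574, i.e. 0.5 * log(2) nats from the Z = 1 branch
```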
Herein, to give an interpretation on how (3) can be used to identify causal relationships that are de\ufb01ned in terms of intervention, we compare our measure with information \ufb02ow. Below, we introduce the formal de\ufb01nition of information \ufb02ow from XA to XB imposing XK, I(XA \u2192 XB|do(XK)), where A, B, and K are three disjoint subsets of V . X xA\u222aB\u222aK P(xK)P(xA|do(xK))P(xB|do(xA\u222aK)) (8) log P(xB|do(xA\u222aK)) P x\u2032 A P(x\u2032 A|do(xK))P(xB|do(x\u2032 A, xK)). This is de\ufb01ned analogous to the conditional mutual information in (7). But unlike the conditional mutual information, the information \ufb02ow is de\ufb01ned for all pairs (xA; xC) rather than being limited to those with positive probability (similar to our measure). Similar measures are introduced in (Janzing et al., 2013; Ay and Krakauer, 2007) which are also based on do-calculation. Analogously, we can de\ufb01ne our measure based on do-operation in order to capture the direction of causal in\ufb02uences in a network by substituting the conditional distributions in (2) with their do versions. Because the Wasserstein metric can be estimated using a linear programming (see Section 5.1), our measure has computational advantages over the information \ufb02ow or other similar measures that uses KL-divergence. Another advantage of (3) over the information \ufb02ow is that it requires less number of interventions in case of using interventional data. More precisely, calculating (8) requires at least two do-operations (do(xA\u222aK) and do(xK)) but (3) requires only one (do(xK\u222a{j})). Moreover, as the next example shows, unlike our measure, the information \ufb02ow depends on the underlying DAG. Example 2. Consider a network of three binary random variables {X, Y, Z} with Z = X \u2295Y an XOR. Suppose the underlying DAG of this network is given by Figure 1(b), in which X takes zero with probability b. 
In this case, I(X \u2192Z|do(Y )) = H(b), where H denotes the entropy, H(b) = \u2212b log b \u2212(1 \u2212b) log(1 \u2212b). However, if the underlying DAG is given by Figure 1(a), we have I(X \u2192Z|do(Y )) = H(\u01eb). Now, consider a scenario in which \u01eb tends to zero. In this scenario, both DAGs describe a system in which X = Y and Z = 0. However, in (b), we have I(X \u2192Z|do(Y )) = H(b) > 0, while in (a), I(X \u2192Z|do(Y )) \u21920. But cy z,x in both DAGs is independent of \u01eb and it is positive. 4.2. A Better Measure for Direct Causal Influences Consider a network comprising three random variables {X, Y, Z}, in which Y = f(X, W1) and Z = g(X, Y, W2), such that the transformations from (X, W1) to (X, Y ) and from (X, Y, W1) to (X, Y, Z) are invertible and W1 and W2 are independent exogenous noises. In other words, there exist functions \u03c6 and \u03d5 such that W1 = \u03c6(X, Y ) and W2 = \u03d5(X, Y, Z). Furthermore, f is an injective function in its first argument, i.e., if f(x1, w) = f(x2, w) for some w, then x1 = x2. [Figure 1. DAGs for which information flow fails to capture the influence: in (a), X = Y w.p. 1 \u2212\u01eb and X = 1 \u2212Y w.p. \u01eb; in (b), Y = X w.p. 1 \u2212\u01eb and Y = 1 \u2212X w.p. \u01eb.] In order to measure the direct influence from X to Z, one may compute the conditional mutual information between X and Z given Y , i.e., I(X; Z|Y ). However, this is not a good measure because as the dependency of Y on X grows, i.e., H(Y |X) \u21920, then I(X; Z|Y ) \u21920. This can be explained by the fact that as H(Y |X) goes to zero, in other words, as PW1 tends to \u03b4w0(W1) for some fixed value w0, then by specifying the value of X, the ambiguity about the value of Y will go to zero.
Thus, using the injective property of f, it is straightforward to see that I(X; Z|Y ) \u21920. This analysis shows that I(X; Z|Y ) fails to capture the direct influence between X and Z when Y depends on X almost in a deterministic manner. However, looking at cy z,x, we have cy z,x = sup y,x,x\u2032 Wd(Px,y(Z), Px\u2032,y(Z)) / d(x, x\u2032), where Px,y(Z) := PW2(\u03d5(x, y, Z)) |\u2202g/\u2202W2(x, y, \u03d5(x, y, Z))|\u22121. This distribution depends only on realizations of (X, Y ) and is independent of PX,Y . Hence, changing the dependency between X and Y will not affect cy z,x, which makes it a better candidate for measuring the direct influences between variables of a network. As an illustration, we present a simple example, but first we need the following result. Theorem 1. Consider X = AX + W, where A has zero diagonals and its support represents a DAG, and W is a vector of zero-mean independent random variables. Then cPai\\{j} i,j = |Ai,j|. Example 3. Consider a network of three variables {X, Y, Z} in which Y = aX + W1 and Z = bX + cY + W2 for some non-zero coefficients {a, b, c} and exogenous noises {W1, W2}. Hence, I(X; Z|Y ) = H(bX + W2|aX + W1) \u2212H(W2). (9) As we mentioned earlier, by reducing the variance of W1, the first term in (9) tends to H(bX + W2|X) = H(W2). Hence, (9) goes to zero. But, using the result of Theorem 1, we have cy z,x = |b|, which is independent of the variance of W1. 4.3. Group Selection for Effective Intervention Consider a network of three variables {X, Y, C} in which C is a common cause for X and Y , and X influences Y . In this network, to measure the influence of X on Y , one may consider P(Y |do(X)), which is given by \u2211c P(Y |X, c)P(c) = Ec[P(Y |X, c)]. See, e.g., the back-door criterion in (Pearl, 2003). This conditional distribution is an average over all possible realizations of the common cause C.
Consider an experiment being conducted on a group of people of different ages C, in which the goal is to identify the effect of a treatment X on a disease Y . Suppose that this treatment has a clearer effect on the disease for elderly people and a less obvious effect for younger ones. In this case, averaging the effect of the treatment on the disease over all people of different ages, i.e., P(Y |do(X)), might not reveal the true effect of the treatment. Hence, it is important to identify a regime (in this example, an age range) of C in which the influence of X on Y is maximized. As a consequence, we can identify the group of subjects on which the intervention is effective. Note that this problem cannot be formalized using the do-operation or other measures that average over all possible realizations of C. However, using the measure in (3), we can formulate this problem as follows: given X = x and two different realizations of C, say c and c\u2032, we obtain two conditional probabilities, P(Y |x, c) and P(Y |x, c\u2032). Then, we say that in group C = c the causal influence between X and Y is more obvious compared to the group C = c\u2032, if, given C = c, changing the assignments of X leads to larger variation of the conditional probabilities compared to changing the assignment of X given C = c\u2032. More precisely, if cC=c y,x \u2265cC=c\u2032 y,x , where cC=c y,x := sup x\u0338=x\u2032 Wd(P(Y |x, c), P(Y |x\u2032, c)) / d(x, x\u2032). (10) Note that cc y,x = supc cC=c y,x , where cc y,x is given in (3). Using this new formulation, we define the range of C in which the influence from X to Y is maximized as arg maxc cC=c y,x . Example 4. Suppose that Y = CX + W2 and X = W1/C, where C takes values in {1, ..., M} w.p. {p1, ..., pM} and Wi \u223cN(0, 1). In this case, we have cC=c y,x = |c|. Thus, C = M shows the influence of X on Y most clearly. On the other hand, such a property cannot be detected using the other measures.
For example, we have I(X; Y |C = c) = 0.5 log(2), for all c. 5. Properties of the Measure Lemma 1. The measure defined in (3) possesses the following properties: (1) Asymmetry: In general cK i,j \u0338= cK j,i. (2) cK i,j \u22650, and when it is zero, we have Xi \u22a5 \u22a5Xj|XK. (3) Decomposition: cK i,{j,k} = 0 implies cK i,j = cK i,k = 0. (4) Weak union: If cK i,{j,k} = 0, then cK\u222a{k} i,j = cK\u222a{j} i,k = 0. (5) Contraction: If cK i,j = ci,K = 0, then ci,K\u222a{j} = 0. (6) Intersection: If cK\u222a{k} i,j = cK\u222a{j} i,k = 0, then cK i,{j,k} = 0. Note that unlike the intersection property of conditional independence, which does not always hold, the intersection property of the dependency measure in (3) always holds. This is due to the fact that (3) is defined for all realizations (xj, xK), not only those with positive measure. See Example 1 for the asymmetry property of cK i,j. We say a DAG possesses the global Markov property with respect to (3) if for any node i and disjoint sets B and C for which i is d-separated from B by C, we have cC i,B = cC B,i = 0. Using the above Lemma and the results of Theorem 3.27 in (Lauritzen, 1996), it is straightforward to show that a faithful network of m random variables whose causal structure is a DAG possesses the global Markov property (see the Appendix for more details). This property can be used to develop reconstruction algorithms (e.g., the PC algorithm (Spirtes et al., 2000b)) for the causal structure of a network. 5.1. Estimation The measure introduced in (3) can be computed explicitly for special probability measures. For instance, if the joint distribution of X is Gaussian with mean \u20d7 \u00b5 and covariance matrix \u03a3, then using the results of (Givens et al., 1984), we obtain cK i,j = |\u03a3i,{j,K} (\u03a3{j,K},{j,K})\u22121 e1|, where \u03a3i,{j,K} denotes the sub-matrix of \u03a3 comprising row i and columns {j, K}, and e1 = (1, 0, ..., 0)T.
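The Gaussian closed form above is just the magnitude of the first regression coefficient of Xi on (Xj, XK); a sketch on the linear model of Example 3, where Theorem 1 predicts the answer |b| (the coefficient values are illustrative assumptions):

```python
# Example 3: Y = a X + W1, Z = b X + c Y + W2 with unit-variance
# independent noises; the measure c^{Y}_{Z,X} should equal |b|.
a, b, c = 0.7, 1.3, -0.5

# Covariance entries of (X, Y, Z), derived from the model.
sxx = 1.0
sxy = a
syy = a * a + 1.0
sxz = b + c * a
syz = a * (b + c * a) + c

# Sigma_{Z,{X,Y}} and Sigma_{{X,Y},{X,Y}}, with j = X listed first.
row = (sxz, syz)
m = ((sxx, sxy), (sxy, syy))

# 2x2 inverse, then | Sigma_{Z,{X,Y}} m^{-1} e1 |.
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
inv = ((m[1][1] / det, -m[0][1] / det),
       (-m[1][0] / det, m[0][0] / det))
coef_x = row[0] * inv[0][0] + row[1] * inv[1][0]
print(abs(coef_x))  # equals |b| = 1.3 up to floating-point error
```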
Hence, in such systems, one can estimate the dependency measure by estimating the covariance matrix. However, this is not the case in general. Therefore, we introduce a non-parametric method for estimating our dependency measure using a kernel method. Given {x(1), ..., x(N1)} and {x(N1+1), ..., x(N1+N2)}, i.i.d. samples drawn randomly from \u03bd1 and \u03bd2, respectively, the estimator of (5) is given by (Sriperumbudur et al., 2010): \u02c6Wd(\u02c6\u03bd1, \u02c6\u03bd2) := max{\u03b1i} (1/N1) \u2211i=1..N1 \u03b1i \u2212(1/N2) \u2211j=1..N2 \u03b1j+N1, (11) such that |\u03b1i \u2212\u03b1j| \u2264d(x(i), x(j)), \u2200i, j. In this equation, \u02c6\u03bd1 and \u02c6\u03bd2 are the empirical estimators of \u03bd1 and \u03bd2, respectively. The estimator of MMD is given by (\u02c6MMD(\u02c6\u03bd1, \u02c6\u03bd2))2 := \u2211i,j=1..N1+N2 yi yj k(x(i), x(j)), (12) where yi := 1/N1 for i \u2264N1 and yi := \u22121/N2 elsewhere, and k(\u00b7, \u00b7) represents the kernel of H. It is shown in (Sriperumbudur et al., 2010) that (11) converges to (5) almost surely as N1, N2 \u2192\u221e, as long as the underlying metric space is totally bounded. It is important to mention that the estimator in (11) depends on the samples x(j) only through the metric d(\u00b7, \u00b7), and thus its complexity is independent of the dimension of x(i), unlike the KL-divergence estimator (Wang et al., 2005). The estimator in (12) also converges to (6) almost surely at a rate of order O(1/\u221aN1 + 1/\u221aN2), when k(\u00b7, \u00b7) is measurable and bounded. Consider a network of m random variables X. Given N i.i.d. realizations of X, {z(1), ..., z(N)}, where z(l) \u2208Em, we use (11) and define \u02c6cK i,j := max 1\u2264l,k\u2264N \u02c6Wd(\u02c6\u00b5i(z(l) K\u222a{j}), \u02c6\u00b5i(z(k) K\u222a{j})) / d(z(l) j , z(k) j), (13) such that z(l) K\u222a{j} = z(k) K\u222a{j} off j. Similarly, one can introduce an estimator for \u02dc cK i,j using (12).
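The plug-in estimator (12) is only a few lines of code; a sketch with a Gaussian kernel (the kernel choice, bandwidth, and sample sets are illustrative assumptions):

```python
from math import exp

def gaussian_kernel(u, v, sigma=1.0):
    return exp(-((u - v) ** 2) / (2.0 * sigma ** 2))

def mmd2(xs, ys, kernel=gaussian_kernel):
    """Biased estimator (12): sum_{i,j} y_i y_j k(x(i), x(j)) with
    weights +1/N1 for the first sample and -1/N2 for the second."""
    pts = list(xs) + list(ys)
    w = [1.0 / len(xs)] * len(xs) + [-1.0 / len(ys)] * len(ys)
    return sum(w[i] * w[j] * kernel(pts[i], pts[j])
               for i in range(len(pts)) for j in range(len(pts)))

same = [0.1 * k for k in range(20)]
shifted = [0.1 * k + 3.0 for k in range(20)]
print(mmd2(same, same))           # identical samples: 0 up to rounding
print(mmd2(same, shifted) > 0.1)  # True: well-separated samples give a large value
```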
By applying the result of Corollary 5 in (Spirtes et al., 2000a), we obtain the following result. Corollary 1. Let (E, d) be a totally bounded metric space and consider a network of random variables with positive probabilities; then \u02c6cK i,j converges to cK i,j almost surely as N goes to infinity. 6. Experimental Results Herein, we present two simulations in order to verify the theoretical results. In particular, the first experiment verifies the group-selection advantages and the second one shows an application of the measure for capturing rare dependencies. Group selection for effective intervention: In this simulation, we considered a group of individuals (C \u2208{male, female}) to study the effect of a special treatment X on their health condition Y . For instance, X can denote sleep aids and Y can represent the individual\u2019s awareness level the next morning. Most psychotropic drugs are metabolized in the liver. Because the male body breaks down Ambien and other sleep aids faster, women typically have more of the drug in their system the next morning. For this simulation, we considered a mathematical model between X, Y , and C as follows: X = N(1.5, 1) and Y = 2X + N(0, 1) when C = female, and X = N(1, 4) and Y = 3X + N(0, 9) otherwise. Accordingly, we generated different sample sizes N \u2208{40, ..., 1200} and estimated I(X; Y |c) and \u02c6cc y,x. Figure 3 depicts the results. Since, for a given c, (X, Y ) is jointly Gaussian, we estimated I(X; Y |c) by estimating the covariance matrix (Cover and Thomas, 2012), and estimated our measure using (12) with Gaussian kernels. As Figure 3 shows, although the treatment has different effects on different genders, I(X; Y |C) cannot capture that. Capturing rare dependencies: We simulated the following non-linear system with Wi \u223cU[\u22121, 1] and learned its corresponding structure: X1 = W1, X2 = X1^2 + 2X4 \u2212|X5| + W2, X3 = W3, X4 = X3 \u2212X5 + W4, and X5 = W5 if X3 is natural, X5 = 2\u221a|X1| + W5 otherwise. (14)
We used the estimator of MMD given in (12) with Gaussian kernels and estimated the dependency measures. We obtained the corresponding DAG of this network given a set of observations of size N \u2208{900, 2500}. Using the results on the convergence rate of the MMD estimator, we used a threshold of order O(1/\u221aN) to distinguish positive and zero measures. Figure 2 depicts the resulting DAGs. [Figure 2. Recovered DAGs of the system given in (14) for different sample sizes: (a)-(b) use the measure in (3) and pure observation; (c)-(d) use the kernel-based method and pure observation; (e)-(f) use the measure in (3) and interventional data; (f) shows the true structure.] [Figure 3. Estimated measures for different N: I(X; Y |C = female), I(X; Y |C = male), our measure (C = female), and our measure (C = male).] We also compared the performance of our measure with the kernel-based method proposed in (Zhang et al., 2011).
Note that in this example, since the in\ufb02uence of X3 on X5 is not detectable by mere observation, the best we can learn from mere observation is the DAG presented in Figure 2(b). However, with the same number of observations, the kernel-based method identi\ufb01es an extra edge, Figure 2(d). Next, we \ufb01xed the value of X3 to be natural number and irrational, separately and observed the outcome of the other variables for different sample sizes. Figures 2(e)-(f) depict the outcomes of the learning algorithm that uses our measure. In this case, X3 \u2192X5 was identi\ufb01ed and then the Meek rules helped to detect all the directions even the direction of X1 \u2212X5 as it is shown in Figure 2(f)." + }, + { + "url": "http://arxiv.org/abs/1603.04319v1", + "title": "Learning Network of Multivariate Hawkes Processes: A Time Series Approach", + "abstract": "Learning the influence structure of multiple time series data is of great\ninterest to many disciplines. This paper studies the problem of recovering the\ncausal structure in network of multivariate linear Hawkes processes. In such\nprocesses, the occurrence of an event in one process affects the probability of\noccurrence of new events in some other processes. Thus, a natural notion of\ncausality exists between such processes captured by the support of the\nexcitation matrix. We show that the resulting causal influence network is\nequivalent to the Directed Information graph (DIG) of the processes, which\nencodes the causal factorization of the joint distribution of the processes.\nFurthermore, we present an algorithm for learning the support of excitation\nmatrix (or equivalently the DIG). 
The performance of the algorithm is evaluated\non synthesized multivariate Hawkes networks as well as a stock market and\nMemeTracker real-world dataset.", + "authors": "Jalal Etesami, Negar Kiyavash, Kun Zhang, Kushagra Singhal", + "published": "2016-03-14", + "updated": "2016-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction In many disciplines, including biology, economics, social sciences, and computer science, it is important to learn the structure of interacting networks of stochastic processes. In particular, succinct representation of the causal interactions in the network is of interest. A lot of studies in the causality \ufb01elds focus on causal discovery from time series. To \ufb01nd causal relations from time series, one may \ufb01t vector autoregressive models on the time series, or more generally, evaluate the causal in\ufb02uences with transfer entropy [22] or directed information [19]. This paper considers learning causal structure for a speci\ufb01c type of time series, multivariate linear Hawkes process [8]. Hawkes processes were originally motivated by the quest for good statistical models for earthquake occurrences. Since then, they have been successfully applied to seismology [15], biology [21], criminology [13], computational \ufb01nance [5, 12, 14], etc. It is desirable to develop speci\ufb01c causal discovery methods for such processes and study the properties of existing methods in this particular scenario. In multivariate or mutually exciting point processes, occurrence of an event (arrival) in one process affects the conditional probability of new occurrences, i.e., the intensity function of other processes 1 arXiv:1603.04319v1 [cs.LG] 14 Mar 2016 \fin the network. 
Such interdependencies between the intensity functions of a linear Hawkes process are modeled as follows: the intensity function of process j is assumed to be a linear combination of different terms, such that each term captures only the effects of one other process (see Section 2.1). Therefore, a natural notion of functional dependence (causality) exists among the processes, in the sense that in linear mutually exciting processes, if the coefficient pertaining to the effects of process i is non-zero in the intensity function of process j, we know that process i is influencing process j. This dependency is captured by the support of the excitation matrix of the network. As a result, estimation of the excitation (kernel) matrix of multivariate processes is crucial both for learning the structure of their causal network and for other inference tasks, and has been the focus of research. For instance, maximum likelihood estimators were proposed for estimating the parameters of excitation matrices with exponential and Laguerre decay in [16, 25]. These estimators depend on the existence of i.i.d. samples. However, we often do not have access to i.i.d. samples when analyzing time series. Second-order statistics of multivariate Hawkes processes were used to estimate the kernel matrix of a subclass of multivariate Hawkes processes called symmetric Hawkes processes [1]. Utilizing the branching property of Hawkes processes, an expectation-maximization algorithm was proposed to estimate the excitation matrix in [10]. We aim to investigate efficient approaches to the estimation of the excitation matrix of Hawkes processes from time series that do not require i.i.d. samples, and to investigate how the concept of causality in such processes is related to other established approaches to analyzing causal effects in time series. 1.1 Summary of Results and Organization Our contribution in this paper is twofold.
First, we prove that for linear multivariate Hawkes processes, the causal relationships implied by the excitation matrix are equivalent to a specific factorization of the joint distribution of the system called the minimal generative model. Minimal generative models encode causal dependencies based on a generalized notion of Granger causality, measured by causally conditioned directed information [20]. One significance of this result is that it provides a surrogate for the directed information measure for capturing causal influences in Hawkes processes. Thus, instead of estimating the directed information, which often requires estimating a high-dimensional joint distribution, it suffices to learn the support of the excitation matrix. Our second contribution is an estimation method for learning the support of excitation matrices of exponential form using second-order statistics of the Hawkes processes. Our proposed learning approach, in contrast with the previous work [1, 24], is not limited to symmetric Hawkes processes. In a symmetric Hawkes process, it is assumed that the Laplace transform of the excitation matrix can be factored into the product of a diagonal matrix and a constant unitary matrix. Moreover, it is assumed that the expected values of all intensities are the same. A numerical method to approximate the excitation matrix from a set of coupled integral equations was recently proposed in [3]. Our approach is based on an exact analytical solution for the excitation matrix. Interestingly, the exact approach turns out to be both more robust and less expensive in terms of complexity than the numerical method of [3]. The rest of this paper is organized as follows. Background material, some definitions, and the notation are presented in Section 2. Specifically, therein, we formally introduce multivariate Hawkes processes and directed information graphs.
In Section 3, we establish the connection between the excitation matrix and the corresponding DIG. In Section 4, we propose an algorithm for learning the excitation matrix, or equivalently the DIG, of a class of stationary multivariate linear Hawkes processes. Section 5 illustrates the performance of the proposed algorithm in inferring the causal structure in a network of synthesized mutually exciting linear Hawkes processes and in the stock market. Finally, we conclude our work in Section 6. 2 Preliminary Definitions In this section we review some basic definitions and our notation. We denote random processes by capital letters and a collection of m random processes by $X_{[m]} = \{X_1, ..., X_m\}$, where $[m] := \{1, ..., m\}$. We denote the ith random process at time t by $X_i(t)$, the random process $X_i$ from time s up to time t by $X_{i,s}^t$, and a subset $K \subseteq [m]$ of random processes up to time t by $X_K^t$. The Laplace transform and Fourier transform of $X_i$ are denoted, respectively, by $\mathcal{L}[X_i](s) = \int_0^{\infty} X_i(t) e^{-st}\, dt$ (1) and $\mathcal{F}[X_i](\omega) = \int_{-\infty}^{\infty} X_i(t) e^{-j\omega t}\, dt$, where $j = \sqrt{-1}$. The convolution of two functions f and g is defined as $f * g(t) := \int f(x) g(t-x)\, dx$. The joint distribution of processes $\{X_1^n, ..., X_m^n\}$ is denoted by $P_{X(n)}$. 2.1 Multivariate Hawkes Processes Fix a complete probability space $(\Omega, \mathcal{F}, P)$. Let $N(t)$ denote the counting process representing the cumulative number of events up to time t, and let $\{\mathcal{F}_t\}_{t \ge 0}$ be a set of increasing $\sigma$-algebras such that $\mathcal{F}_t = \sigma\{N^t\}$. The non-negative, $\mathcal{F}_t$-measurable process $\lambda(t)$ is called the intensity of $N(t)$ if $P(N(t+dt) - N(t) = 1 \mid \mathcal{F}_t) = \lambda(t) dt + o(dt)$. A classical example of mutually exciting processes, the multivariate Hawkes process [8], is a multidimensional process $N(t) = \{N_1, ..., N_m\}$ such that for each $i \in [m]$, $P(dN_i(t) = 1 \mid \mathcal{F}_t) = \lambda_i(t) dt + o(dt)$ and $P(dN_i(t) > 1 \mid \mathcal{F}_t) = o(dt)$, (2) where $\mathcal{F}_t = \sigma\{N^t\}$.
The above equations imply that $E[dN_i(t)/dt \mid \mathcal{F}_t] = \lambda_i(t)$. Furthermore, the intensities are all positive and are given by $\lambda_i(t) = v_i + \sum_{k=1}^{m} \int_0^t \gamma_{i,k}(t - t')\, dN_k(t')$. (3) The exciting functions $\gamma_{i,k}(\cdot)$ are in $\ell_1$ and such that $\lambda_i(t) \ge 0$ for all $t > 0$. Equivalently, in matrix representation, $\Lambda(t) = v + \int_0^t \Gamma(t - t')\, dN(t')$, (4) where $\Gamma(\cdot)$ denotes an $m \times m$ matrix with entries $\gamma_{i,j}(\cdot)$, and $dN$, $\Lambda(\cdot)$, and $v$ are $m \times 1$ arrays with entries $dN_i$, $\lambda_i(\cdot)$, and $v_i$, respectively. The matrix $\Gamma(\cdot)$ is called the excitation (kernel) matrix. Figure 1 illustrates the intensities of a multivariate Hawkes process comprised of two processes (m = 2) with the following parameters: $v = \begin{pmatrix} 0.5 \\ 0.4 \end{pmatrix}$, $\Gamma(t) = \begin{pmatrix} 0.1 e^{-t} & 0.3 e^{-1.1t} \\ 0.5 e^{-0.9t} & 0.3 e^{-t} \end{pmatrix} u(t)$, where $u(t)$ is the unit step function. Assumption 1. A joint distribution is called positive (non-degenerate) if there exists a reference measure $\varphi$ such that $P_X \ll \varphi$ and $dP_X/d\varphi > 0$, where $P_X \ll \varphi$ denotes that $P_X$ is absolutely continuous with respect to $\varphi$. (Footnote 1: A measure $P_X$ on Borel subsets of the real line is absolutely continuous with respect to a measure $\varphi$ if for every measurable set B, $\varphi(B) = 0$ implies $P_X(B) = 0$.) Note that Assumption 1 states that none of the processes is fully determined by the other processes. 2.2 Causal Structure A causal model allows the factorization of the joint distribution in some specific ways. Generative model graphs are a type of graphical model that, similar to Bayesian networks [17], represent a causal factorization of the joint distribution [19]. More precisely, it was shown in [19] that under Assumption 1, the
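The dynamics in (3)-(4) can be simulated directly. Below is a minimal sketch of Ogata's thinning algorithm for exponential kernels $\gamma_{i,k}(t) = a_{i,k} e^{-\beta t} u(t)$, assuming for simplicity a single common decay rate $\beta$; the function name and this simplification are ours, not from the paper:

```python
import numpy as np

def simulate_hawkes_exp(v, a, beta, T, seed=0):
    """Simulate a multivariate Hawkes process with intensities
    lambda_i(t) = v_i + sum_k a[i,k] * sum_{t' < t} exp(-beta (t - t')),
    via Ogata's thinning. Returns one sorted array of event times per process."""
    rng = np.random.default_rng(seed)
    m = len(v)
    events = [[] for _ in range(m)]
    s = np.zeros(m)  # s[i] = current excitation of process i from all past events
    t = 0.0
    while True:
        lam_bar = (v + s).sum()   # valid bound: intensities only decay between events
        w = rng.exponential(1.0 / lam_bar)
        t += w
        if t >= T:
            break
        s *= np.exp(-beta * w)    # decay the excitation over the waiting time
        lam = v + s
        if rng.uniform() * lam_bar <= lam.sum():  # thinning: accept w.p. sum(lam)/lam_bar
            i = rng.choice(m, p=lam / lam.sum())  # attribute the event to a process
            events[i].append(t)
            s += a[:, i]          # an event in process i excites process j via a[j, i]
    return [np.array(e) for e in events]
```

For the two-process example above, taking v = (0.5, 0.4), a = [[0.1, 0.3], [0.5, 0.3]] and beta = 1 gives a stationary process, since the spectral radius of a/beta is about 0.6 < 1.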
(Figure 1: Intensities of the multivariate Hawkes process.) joint distribution of a causal discrete-time dynamical system with m processes can be factorized as follows: $P_X = \prod_{i=1}^{m} P_{X_i \| X_{B_i}}$, (5) where $B(i) \subseteq -\{i\}$ is the minimal set of processes that causes process $X_i$, i.e., the parent set of node i in the corresponding minimal generative model graph. (Footnote 2: In causal systems, given the full past of the system, the present values of the processes become independent. Footnote 3: Minimal in terms of its cardinality.) Such a factorization of the joint distribution is called a minimal generative model. In Equation (5), $P_{X_i \| X_{B_i}} := \prod_{t=1}^{n} P_{X_i(t) \mid \mathcal{F}^{t-1}_{B \cup \{i\}}}$, and $\mathcal{F}^{t-1}_{B \cup \{i\}} = \sigma\{X^{t-1}_{B \cup \{i\}}\}$. Extending the definition of generative model graphs to continuous-time systems requires some technicalities which are not necessary for the purpose of this paper. Hence we illustrate the general idea through an example. The following example demonstrates the minimal generative model graph of a simple continuous-time system. Example 1. Consider a dynamical system in which the processes evolve over a time horizon [0, T] through the following coupled differential equations: $dX_1 = f(X_1, X_2) dt + dW$, $dX_2 = g(X_2) dt + dU$, $dX_3 = h(X_1, X_2, X_3) dt + dV$, where W, U, and V are independent exogenous noises. For a small time step dt, this becomes $dX_1(t+dt) \approx \Delta f(X_1(t), X_2(t)) + dW(t)$, $dX_2(t+dt) \approx \Delta g(X_2(t)) + dU(t)$, $dX_3(t+dt) \approx \Delta h(X_1(t), X_2(t), X_3(t)) + dV(t)$. (6) In this example, since the system is causal, the corresponding joint distribution can be factorized as follows: $P_X = \prod_{j=1}^{3} \prod_{k \ge 0} P_{X_j(T - k\,dt) \mid \mathcal{F}^{T-(k+1)dt}}$, (7) (Figure 2: Minimal generative model graph of Example 1.)
where $\mathcal{F}^{T-(k+1)dt} = \sigma\{X^{T-(k+1)dt}_{\{1,2,3\}}\}$. Due to (6), we can rewrite (7) as $P_X = P_{X_1 \| X_2}\, P_{X_2}\, P_{X_3 \| X_1, X_2}$. (8) Figure 2 demonstrates the corresponding generative model graph of the factorization in (8). In general, the joint distribution of a causal dynamical system can be factorized as $P_X = \prod_{i=1}^{m} P_{X_i \| X_{B_i}}$, where $B(i) \subseteq -\{i\}$ is the parent set of node i in the corresponding minimal generative model graph, and $P_{X_i \| X_{B_i}} = \prod_{k \ge 0} P_{X_i(T - k\,dt) \mid \mathcal{F}^{T-(k+1)dt}_{B_i}}$. 3 Two Equivalent Notions of Causality for Multivariate Hawkes Processes In linear multivariate Hawkes processes, a natural notion of causation exists in the following sense: if $\gamma_{i,j} \neq 0$, then the occurrence of an event in the jth process will affect the likelihood of arrivals in the ith process. Next, we establish the relationship between the excitation matrix of multivariate Hawkes processes and their generative model graph. To do so, we first discuss the equivalence of directed information graphs and generative model graphs, which was established in [20]. 3.1 Directed Information Graphs (DIGs) An alternative graphical model for encoding statistical interdependencies in stochastic causal dynamical systems is the directed information graph (DIG) [19]. Such graphs are defined based on an information-theoretic quantity, directed information (DI), and it was shown in [20] that under some mild assumptions, they are equivalent to the minimal generative model graphs. Hence, DIGs also represent a minimal factorization of the joint distribution. In a DIG, to determine whether $X_j$ causes $X_i$ over a time horizon [0, T] in a network of m random processes, two conditional probabilities are compared in the KL-divergence sense: one is the conditional probability of $X_i(t+dt)$ given the full past, i.e., $\mathcal{F}_t := \sigma\{X^t\}$, and the other is the conditional probability of $X_i(t+dt)$ given the full past except the past of $X_j$, i.e., $\mathcal{F}^{-\{j\}}_t := \sigma\{X^t_{-\{j\}}\}$.
It is declared that there is no influence from $X_j$ on $X_i$ if the two conditional probabilities are the same. More precisely, there is an influence from $X_j$ on $X_i$ if and only if the following directed information measure is positive [19]: $I_T(X_j \to X_i \| X_{-\{i,j\}}) := \inf_{\mathbf{t} \in \mathcal{T}(0,T)} \tilde{I}_{\mathbf{t}}(X_j \to X_i \| X_{-\{i,j\}})$, (9) where $-\{i,j\} := [m] \setminus \{i,j\}$, $\mathcal{T}$ denotes the set of all finite partitions of the time interval [0, T] [23], and $\tilde{I}_{\mathbf{t}}(X_j \to X_i \| X_{-\{i,j\}}) := \sum_{k=0}^{n} I\big(X^{t_k}_{i,t_{k-1}};\, X^{t_k}_{j,0} \mid \mathcal{F}^{t_{k-1}}_{-\{j\}}\big)$, where $\mathbf{t} := (0 = t_0, t_1, ..., t_n = T)$. Finally, $I(X; Y \mid Z)$ represents the conditional mutual information between X and Y given Z, and is given by $I(X; Y \mid Z) := E_{P_{X,Y,Z}}\big[\log \frac{dP_{X \mid Y,Z}}{dP_{X \mid Z}}\big]$. 3.2 Equivalence between the Generative Model Graph and the Support of the Excitation Matrix As mentioned earlier, the corresponding minimal generative model graph and the DIG of a causal dynamical system are equivalent. Thus, to characterize the corresponding minimal generative model graph of a multivariate Hawkes system, we study the properties of its corresponding DIG. Proposition 1. Consider a set of mutually exciting processes N with excitation matrix $\Gamma(t)$. Under Assumption 1, $I_T(N_j \to N_i \| N_{-\{i,j\}}) = 0$ if and only if $\gamma_{i,j} \equiv 0$ over the time interval [0, T]. Proof: See Section 7.1. □ Proposition 1 signifies that the support of the excitation matrix $\Gamma(\cdot)$ determines the adjacency matrix of the DIG and vice versa. Therefore, learning the DIG of mutually exciting Hawkes processes satisfying Assumption 1 is equivalent to learning the excitation matrix given samples from each of the processes. In other words, given the side information that the processes are Hawkes, it is more efficient to learn the causal structure through the excitation matrix rather than through the directed information needed for learning the DIG in general.
4 Learning the Excitation Matrix In this section, we present an approach for learning the causal structure of a stationary Hawkes network with exponential exciting functions through learning the excitation matrix. This method is based on second-order statistics of the Hawkes processes and is suitable for the case when no i.i.d. samples are available. Note that when i.i.d. samples are available, non-parametric methods for learning the excitation matrix, such as the MMEL algorithm [25], exist. In that approach, the exciting functions are expressed as a linear combination of a set of base kernels, and a penalized likelihood is used to estimate the parameters of the model. As mentioned earlier, we focus on learning the excitation matrix of multivariate Hawkes processes with exponential exciting functions. This class of Hawkes processes has been widely applied in many areas such as seismology, criminology, and finance [15, 21, 13, 5]. Definition 1. The class of excitation matrices of multivariate Hawkes processes with exponential exciting functions is defined as $\mathrm{Exp}(m) := \big\{ \sum_{d=1}^{D} A_d e^{-\beta_d t} u(t) : A_d \in \mathbb{R}^{m \times m},\ \big(\sum_{d=1}^{D} A_d e^{-\beta_d t}\big)_{i,j} \ge 0,\ \rho\big(\sum_{d=1}^{D} \frac{A_d}{\beta_d}\big) < 1,\ D \in \mathbb{N} \big\}$, (10) where $\{\beta_d\} > 0$ is called the set of exciting modes. Example 2. Consider a set of m = 5 mutually exciting processes with the following exponential excitation matrix: $\Gamma(t) = \frac{e^{-t}}{20}\begin{pmatrix} 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 \\ 0 & 1.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1.3 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} + \frac{e^{-1.4t}}{20}\begin{pmatrix} 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 \\ 0 & 1 & 0 & 2.5 & 0 \\ 0.1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix} + \frac{e^{-2t}}{20}\begin{pmatrix} 1 & 1.5 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 2 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$. (11) In this example, D = 3 and the exciting modes are {1, 1.4, 2}. By Proposition 1, the adjacency matrix of the corresponding DIG of this network is given by the support of its excitation matrix. Figure 3 depicts the corresponding DIG.
Before describing our algorithm, we need to derive some useful properties of the moments of the process. A multivariate Hawkes process with excitation matrix $\Gamma$ has stationary increments, i.e., the intensity process is stationary, if and only if the following assumption holds [8, 6]. (Figure 3: Corresponding DIG of the network in Example 2 with the excitation matrix given by (11).) Assumption 2. The spectral radius (the supremum of the absolute values of the eigenvalues) of the matrix $\bar{\Gamma}$, where $[\bar{\Gamma}]_{i,j} = \|\gamma_{i,j}\|_1$, is strictly less than one, i.e., $\rho(\bar{\Gamma}) < 1$. In this case, from (4) and Equation (2), $\bar{\Lambda} = E[\Lambda(t)] = v + \int_0^t \Gamma(t - t')\, E[dN(t')] = v + \int_0^t \Gamma(t - t')\, \bar{\Lambda}\, dt' = v + \bar{\Gamma} \bar{\Lambda}$. (12) By Assumption 2, $\sum_{i \ge 0} \bar{\Gamma}^i$ converges to $(I - \bar{\Gamma})^{-1}$, and thus $\bar{\Lambda} = (I - \bar{\Gamma})^{-1} v$. The normalized covariance matrix of a stationary multivariate Hawkes process with lag $\tau$ and window size z > 0 is defined by $\Sigma^z(\tau) := \frac{1}{z}\, E\big[\int_t^{t+z} dN(x) \int_{t+\tau}^{t+\tau+z} (dN(y))^T\big] - \bar{\Lambda}\bar{\Lambda}^T z$, (13) where $\int_t^{t+t'} dN(x)$ denotes the number of events in the time interval (t, t+t']. Theorem 1. [1] The Fourier transform of the normalized covariance matrix of a stationary multivariate Hawkes process with lag $\tau$ and window size z > 0 is given by $\mathcal{F}[\Sigma^z](-\omega) = \frac{4 \sin^2(z\omega/2)}{\omega^2 z}\, (I - \mathcal{F}[\Gamma](\omega))^{-1}\, \mathrm{diag}(\bar{\Lambda})\, (I - \mathcal{F}[\Gamma](\omega))^{-\dagger}$, (14) where $A^{\dagger}$ denotes the Hermitian conjugate of matrix A, and $\mathrm{diag}(\bar{\Lambda})$ is a diagonal matrix with the vector $\bar{\Lambda}$ on the main diagonal.
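As a quick numerical illustration of Assumption 2 and the identity $\bar{\Lambda} = (I - \bar{\Gamma})^{-1} v$, consider the two-process example of Figure 1, where $\gamma_{i,j}(t) = a_{i,j} e^{-b_{i,j} t} u(t)$ gives $\|\gamma_{i,j}\|_1 = a_{i,j}/b_{i,j}$ (a sketch; the variable names are ours):

```python
import numpy as np

# Kernel amplitudes a and decay rates b for the two-process example of Figure 1:
# gamma_{i,j}(t) = a[i,j] * exp(-b[i,j] * t) for t >= 0.
a = np.array([[0.1, 0.3], [0.5, 0.3]])
b = np.array([[1.0, 1.1], [0.9, 1.0]])
v = np.array([0.5, 0.4])

Gbar = a / b                               # [Gbar]_{ij} = ||gamma_{ij}||_1 = a_{ij}/b_{ij}
rho = max(abs(np.linalg.eigvals(Gbar)))    # spectral radius
Lam = np.linalg.solve(np.eye(2) - Gbar, v) # stationary mean intensities (Eq. 12)
print(rho, Lam)
```

Here rho is comfortably below 1, so Assumption 2 holds and the mean intensities exceed the baselines v, as self-excitation would suggest.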
In order to learn an excitation matrix with exponential exciting functions, we need to learn the exciting modes $\{\beta_d\}$, the number of components D, and the coefficient matrices $\{A_d\}$. The next result establishes the relationship of the exciting modes and the number of components D with the normalized covariance matrix of the process. Corollary 1. Consider a network of stationary multivariate Hawkes processes with excitation matrix $\Gamma(t)$ belonging to $\mathrm{Exp}(m)$. Then the exciting modes of $\Gamma(t)$ are the absolute values of the zeros of $1/\mathrm{Tr}\, \mathcal{F}[\Sigma^z]^{-1}(\omega)$. Proof: See Section 7.2. □ Next, we need to find the coefficient matrices $\{A_d\}$. To do so, we use the covariance density of the processes. The covariance density of a stationary multivariate Hawkes process for $\tau > 0$ is defined as [8] $\Omega(\tau) := E\big[(dN(t+\tau)/dt - \bar{\Lambda})(dN(t)/dt - \bar{\Lambda})^T\big]$. (15) Since the processes have stationary increments, we have $\Omega(-\tau) = \Omega^T(\tau)$. Lemma 1. [8] $\Omega(\tau) = \Gamma(\tau)\,\mathrm{diag}(\bar{\Lambda}) + \Gamma * \Omega(\tau)$, for $\tau > 0$. (16) It has been shown in [3] that the above equation admits a unique solution for $\Gamma(\tau)$. The next proposition provides a system of linear equations that allows us to learn the coefficient matrices. Proposition 2. Consider a network of stationary multivariate Hawkes processes with excitation matrix $\Gamma(t) \in \mathrm{Exp}(m)$ and exciting modes $\{\beta_1, ..., \beta_D\}$. Then $\{A_d\}$ is a solution of the linear system of equations S = AH, where H is a block matrix whose (i, j)th block is given by $H_{i,j} = \frac{\mathrm{diag}(\bar{\Lambda}) + \mathcal{L}[\Omega](\beta_j) + \mathcal{L}[\Omega]^T(\beta_i)}{\beta_j + \beta_i}$, and $A = [A_1, ..., A_D]$ and $S = [\mathcal{L}[\Omega](\beta_1), ..., \mathcal{L}[\Omega](\beta_D)]$.
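Once estimates of $\mathcal{L}[\Omega](\beta_d)$ and $\bar{\Lambda}$ are available, the system S = AH of Proposition 2 can be assembled and solved directly. A minimal numpy sketch follows; the function name and the concrete block layout of H are our assumptions based on the statement above:

```python
import numpy as np

def solve_coefficients(LOm, Lam, betas):
    """Solve the linear system S = A H of Proposition 2 for the
    coefficient matrices {A_d}.

    LOm   : list of D (m x m) arrays, LOm[d] ~ Laplace transform L[Omega](beta_d)
    Lam   : length-m array of mean intensities
    betas : length-D sequence of exciting modes

    Returns a list of D (m x m) coefficient matrices."""
    D, m = len(betas), len(Lam)
    H = np.zeros((m * D, m * D))
    for i in range(D):
        for j in range(D):
            # (i, j)th block: (diag(Lam) + L[Omega](beta_j) + L[Omega]^T(beta_i)) / (beta_j + beta_i)
            H[i * m:(i + 1) * m, j * m:(j + 1) * m] = (
                np.diag(Lam) + LOm[j] + LOm[i].T) / (betas[j] + betas[i])
    S = np.hstack(LOm)            # S = [L[Omega](beta_1), ..., L[Omega](beta_D)]
    A = S @ np.linalg.pinv(H)     # A = [A_1, ..., A_D]
    return [A[:, d * m:(d + 1) * m] for d in range(D)]
```

In practice LOm and Lam would come from the estimators of Section 4.1; the pseudo-inverse keeps the sketch robust when the estimated H is ill-conditioned.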
Proof: See Section 7.3. □ Combining the results of Corollary 1 and Proposition 2 allows us to learn the excitation matrix of exponential multivariate Hawkes processes from the second-order moments. Consequently, applying Proposition 1, the causal structure of the network can be learned by drawing an arrow from node i to node j when $\sum_{d=1}^{D} |(A_d)_{j,i}| > 0$. 4.1 Estimation and Algorithm This section discusses estimators for the second-order moments, namely the normalized covariance matrix and the covariance density of a stationary multivariate Hawkes process, from data. Once such estimators are available, the approach of the previous section may be used to learn the network. The most intuitive estimator for $\bar{\Lambda}$, defined by Equation (12), is N(T)/T. It turns out that this estimator converges almost surely to $\bar{\Lambda}$ as T goes to infinity [2]. Furthermore, [2] proposes an empirical estimator for the normalized covariance matrix as follows: $\hat{\Sigma}^{z,T}(\tau) := \frac{1}{T} \sum_{i=1}^{\lfloor T/z \rfloor} (X_{iz} - X_{(i-1)z})(X_{iz+\tau} - X_{(i-1)z+\tau})^T$, (17) where $X_t := N(t) - \bar{\Lambda} t$. In the same paper, it has been shown that under Assumption 2, the above estimator converges in $\ell_2$ to the normalized covariance matrix (13), i.e., $\hat{\Sigma}^{z,T}(\tau) \to \Sigma^z(\tau)$ as $T \to \infty$. Notice that the normalized covariance matrix and the covariance density are related by $\Sigma^{dt}(\tau)/dt = \Omega^T(\tau)$. Therefore, we can estimate the covariance density matrix using Equation (17) by choosing a small enough window size $z = \Delta$; namely, $\hat{\Omega}^T_{\Delta}(\tau) = \hat{\Sigma}^{\Delta}(\tau)/\Delta$. Algorithm 1: 1: Input: $N^T$. 2: Output: DIG. 3: $\hat{\Lambda} \leftarrow N(T)/T$. 4: Choose $\sigma > 0$, $z > 0$, and small $\Delta > 0$. 5: Compute $\hat{\Sigma}^{z,T}(\tau)$ and $\hat{\Omega}_{\Delta}(\tau)$ using (17). 6: $\{\hat{\beta}_d\}_{d=1}^{\hat{D}} \leftarrow$ zeros of $1/\mathrm{Tr}\, \mathcal{F}[\Sigma^z]^{-1}(\omega)$. 7: Compute $\mathcal{L}[\hat{\Omega}_{\Delta}](\hat{\beta}_d)$ for $d = 1, ..., \hat{D}$.
8: Solve the set of equations arising from (20) for $\hat{A}_d$. 9: Draw (j, i) if $\sum_{d=1}^{\hat{D}} |(\hat{A}_d)_{i,j}| \ge \sigma$. Algorithm 1 summarizes the steps of our proposed approach for learning the excitation matrix, and consequently the causal structure, of an exponential multivariate Hawkes process. 5 Experimental Results In this section, we present our experimental results for both synthetic and real data. (Figure 4: Recovered DIG of the network in Example 2 with the excitation matrix given by (11): (a), (b) Algorithm 1 with $\Delta = 0.2$, z = 2, and T ∈ {1000, 2100}; (c) the numerical method of [3] with Q = 70 and T = 2100; and (d) MMEL with 35 i.i.d. samples, each of length 60. Our approach learns the graph with T = 2100, while the other approaches fail at the same sample size.)
(Figure 5: True causal structure of the synthesized example.) 5.1 Synthetic Data We apply the proposed algorithms to learn the causal structure of the multivariate Hawkes network of Example 2 with $v = (0.5, 0.4, 0.5, 1, 0.3)^T$. This network satisfies Assumption 2, since $\rho(\bar{\Gamma}) \approx 0.16$. The exciting modes are {1, 1.4, 2}. We observed the arrivals of all processes during a time period T.
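Assumption 2 for this synthetic network can be verified numerically. The snippet below transcribes the coefficient matrices from (11) as we read them, including the 1/20 scaling, which is our reading of the printed kernel; for these kernels $[\bar{\Gamma}]_{i,j} = \|\gamma_{i,j}\|_1 = \sum_d (A_d)_{i,j}/\beta_d$ since each entry stays non-negative:

```python
import numpy as np

# Coefficient matrices A_1, A_2, A_3 and decay modes transcribed from (11);
# the 1/20 scaling is our reading of the printed excitation matrix.
A1 = np.array([[2, 0, 0, 0, 0], [0, 0, .5, 0, 0], [0, 1.5, 0, 0, 0],
               [0, 0, 0, 1.3, 0], [0, 0, 0, 0, 1]]) / 20
A2 = np.array([[0, 0, .5, 0, 0], [0, 0, 0, 0, 2], [0, 1, 0, 2.5, 0],
               [.1, 0, 0, 0, 0], [0, 0, 0, 1, 0]]) / 20
A3 = np.array([[1, 1.5, 1, 0, 0], [0, 0, 0, 0, -1], [0, 0, 2, 0, 0],
               [2, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) / 20
betas = [1.0, 1.4, 2.0]

# [Gbar]_{ij} = integral of gamma_{ij}, valid here since each entry is >= 0 for all t
Gbar = sum(A / b for A, b in zip([A1, A2, A3], betas))
rho = max(abs(np.linalg.eigvals(Gbar)))
print(rho)  # well below 1, consistent with the reported value of about 0.16

# The support of the excitation matrix gives the DIG adjacency (Proposition 1)
adj = (abs(A1) + abs(A2) + abs(A3)) > 0
print(adj.sum(), "edges")
```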
Figure 4 depicts the output of Algorithm 1 for $\Delta = 0.2$, z = 2, and observation lengths T ∈ {1000, 2100}. As illustrated in Figure 4, by increasing the length of observation T, the output graph converges to the true DIG shown in Figure 3. As a comparison, we applied the MMEL algorithm proposed in [25] to learn the excitation matrix for this example, as well as the numerical method, based on the Nystrom method, proposed in [3], with T = 2100 and Q = 70 quadrature points. Since MMEL requires i.i.d. samples, we generated 35 i.i.d. samples, each of length 60, to obtain Figure 4(d). Our proposed algorithm outperforms both MMEL and the numerical method of [3]. Furthermore, we conducted another experiment for a network of 15 processes with 102 edges, illustrated in Figure 5. For a sample of length T = 2500, our algorithm was able to recover 70 edges correctly but identified 34 false arrows. MMEL could only recover 58 arrows correctly while detecting another 41 false arrows. The input for MMEL was 25 sequences, each of length 100. 5.2 Stock Market Data As an example of how our approach may discover causal structure in real-world data, we analyzed the causal relationship between stock prices of 12 technology companies of the New York Stock Exchange, sourced from Google Finance. (Figure 6: Causal structures for the S&P (a) using Algorithm 1, (b) by estimating the directed information DIG, and (c) using the MMEL algorithm.) The prices were sampled every 2 minutes for twenty market days (03/03/2008 to 03/28/2008). Every time a stock price changed by ±1% of its current price, an event was logged on the stock's process.
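The event construction described above, logging an event whenever the price moves ±1% of its current price, can be sketched as follows; the exact reference-price convention (reset at each logged event) is our interpretation:

```python
import numpy as np

def price_jumps_to_events(times, prices, pct=0.01):
    """Convert a sampled price series into point-process event times:
    an event is logged whenever the price has moved by at least `pct`
    (e.g. 1%) relative to the price at the previous event."""
    events = []
    ref = prices[0]
    for t, p in zip(times[1:], prices[1:]):
        if abs(p - ref) >= pct * ref:
            events.append(t)
            ref = p       # reset the reference price at each logged event
    return np.array(events)
```

Running this per stock yields the arrival times fed to Algorithm 1 (and, day by day, the i.i.d.-style samples used for MMEL).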
To avoid the substantial price changes caused by the opening and closing of the market, we ignored the samples at the beginning and at the end of each working day. For this part, we have assumed that the jumps occurring in stock prices are correlated through a multivariate Hawkes process. This model class was advocated in [11, 2]. Figure 6(a) illustrates the causal graph resulting from Algorithm 1, with z = 30 and \u2206 = 2 minutes. To compare our learning approach with other approaches, we applied the MMEL algorithm to learn the corresponding causal graph. For this scenario, we assumed that the data collected from each day is generated i.i.d. Hence, a total of 20 i.i.d. samples were used. Figure 6(c) illustrates the resulting graph. As one can see, Figures 6(a) and 6(c) convey broadly similar causal interactions in the dataset. For instance, both of these graphs suggest that one of the most in\ufb02uential companies in that period of time was Hewlett-Packard (HP). Looking into the global PC market share during 2008, we \ufb01nd that this was indeed the case.4 To use another modality, we derived the corresponding DIG of this network by applying Equation (9). For this part, we used the market model based on the Black-Scholes model [4], in which stock prices are modeled via a set of coupled stochastic differential equations. We assumed that the logarithms of the stock prices are jointly Gaussian, and therefore the corresponding DIs were estimated using Equation (24) in [7]. The resulting DIG is shown in Figure 6(b). Note that this DIG is derived from the logarithm of prices and not from the jump processes we used earlier. Still, it shares many similarities with the two other graphs. For instance, it also identi\ufb01es HP as one of the most in\ufb02uential companies and Microsoft as one of the most in\ufb02uenced companies in that time period.

         Alg. 1   DIG   MMEL
Alg. 1     33     25     26
DIG        25     30     24
MMEL       26     24     34

This table shows the number of edges that each of the above approaches recovers and the number of edges that they jointly recover. This demonstrates the power of exponential kernels even when the data does not come from such a model class.

[Footnote 4: Gartner, http://www.gartner.com/newsroom/id/856712]

5.3 MemeTracker Data

We also studied causal in\ufb02uences in the blogosphere. The causal \ufb02ow of information between media sites may be captured by studying the hyperlinks that one media site provides to others. Speci\ufb01cally, the time of such linking can be modeled using a linear multivariate Hawkes process with exponential exciting functions [25, 18]. This model is also intuitive in the sense that, after a new hot topic emerges, blogs and websites are more likely to feature that topic in the \ufb01rst several days, and the topic is also more likely to trigger further discussions and create more hyperlinks. Thus, exponential exciting functions are well suited to capture such phenomena, as the exciting functions have relatively large values at \ufb01rst and decay fast as time elapses. For this experiment, we used the MemeTracker dataset. The data contains time-stamped phrase and hyperlink information for news media articles and blog posts from over a million different websites. We extracted the times at which hyperlinks to the 10 well-known websites listed in Table 1 were created between August 2008 and April 2009. When a hyperlink to a website is created at a certain time, an arrival event is recorded at that time. More precisely, in this experiment, we picked 30 different phrases that appeared on different websites at different times.

Table 1: List of websites studied in the MemeTracker experiment.
Cr: craigslist.org   Ye: yelp.com        Am: amazon.com       Sp: spiegel.de        Wi: wikipedia.org
Yo: youtube.com      Cn: cnn.com         Gu: guardian.co.uk   Hu: humanevents.com   Bb: bbc.co.uk
If a website that published one of the phrases at time t also contained a hyperlink to one of the 10 listed websites, an arrival event was recorded at time t for that website in our list. Figure 7(a) illustrates the resulting causal structure learned by Algorithm 1 for z = 12 hours and \u2206 = 1 hour. In this graph, an arrow from one node to another, say from Ye to Yo, means that creating a hyperlink to yelp.com triggers the creation of further hyperlinks to youtube.com. We also applied the MMEL algorithm with one exponential kernel function to learn the excitation matrix. For this experiment, the data corresponding to each phrase was treated as an i.i.d. realization of the system. The resulting causal structure is depicted in Figure 7(b). As Figure 7(a) illustrates, the nodes can be clustered into two main groups: {Cr, Ye, Am, Yo} and {Bb, Cn, Gu, Hu, Sp, Wi}. The \ufb01rst group consists mainly of merchandise and review websites, and the second group contains the broadcasting websites. However, this is not as clear in Figure 7(b). This is because MMEL requires more i.i.d. samples (phrases) to be able to identify the correct arrows. Note that as we increase the number of phrases (to 110; see Figure 7(c)), both graphs become similar, with two clearly visible main clusters." + } + ], + "Richard S. Sutton": [ + { + "url": "http://arxiv.org/abs/2202.03466v4", + "title": "Reward-Respecting Subtasks for Model-Based Reinforcement Learning", + "abstract": "To achieve the ambitious goals of artificial intelligence, reinforcement\nlearning must include planning with a model of the world that is abstract in\nstate and time. Deep learning has made progress with state abstraction, but\ntemporal abstraction has rarely been used, despite extensively developed theory\nbased on the options framework. 
One reason for this is that the space of\npossible options is immense, and the methods previously proposed for option\ndiscovery do not take into account how the option models will be used in\nplanning. Options are typically discovered by posing subsidiary tasks, such as\nreaching a bottleneck state or maximizing the cumulative sum of a sensory\nsignal other than reward. Each subtask is solved to produce an option, and then\na model of the option is learned and made available to the planning process. In\nmost previous work, the subtasks ignore the reward on the original problem,\nwhereas we propose subtasks that use the original reward plus a bonus based on\na feature of the state at the time the option terminates. We show that option\nmodels obtained from such reward-respecting subtasks are much more likely to be\nuseful in planning than eigenoptions, shortest path options based on bottleneck\nstates, or reward-respecting options generated by the option-critic. Reward\nrespecting subtasks strongly constrain the space of options and thereby also\nprovide a partial solution to the problem of option discovery. Finally, we show\nhow values, policies, options, and models can all be learned online and\noff-policy using standard algorithms and general value functions.", + "authors": "Richard S. Sutton, Marlos C. Machado, G. Zacharias Holland, David Szepesvari, Finbarr Timbers, Brian Tanner, Adam White", + "published": "2022-02-07", + "updated": "2023-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction (2nd ed.). MIT Press. Sutton, R. S., Bowling, M., Pilarski, P. M. (2022). The Alberta plan for AI research. ArXiv:2208.11173. Sutton, R. S., McAllester, D. A., Singh, S. P., Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pp. 1057\u20131063. MIT Press, Cambridge, MA. Sutton, R. 
S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems. Sutton, R. S., Precup, D., Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112(1\u20132):181\u2013211. van Hasselt, H. (2010). Double Q-learning. In Advances in Neural Information Processing Systems 23, pp. 2613\u20132621. Curran Associates, Inc. van Hasselt, H. (2011). Insights in Reinforcement Learning: Formal Analysis and Empirical Evaluation of Temporal-difference Learning. SIKS dissertation series number 2011-04. Veeriah, V. (2022). Discovery in Reinforcement Learning (Doctoral dissertation). Department of Computer Science and Engineering, University of Michigan. See Chapter 7. Wan, Y., Zaheer, M., White, A., White, M., Sutton, R. S. (2019). Planning with expectation models. In Proceedings of the International Joint Conference on Artificial Intelligence. Appendix A. Additional results in the four-room gridworld In Section 7 we presented results with the STOMP progression using four reward-respecting feature-attainment subtasks. In this appendix we present additional results pertaining to model learning (Figure A.1) and to planning with the imperfectly learned models (Figure A.2). [Figure A.1 graphics omitted; axes: steps of off-policy experience vs. RMS error in the reward and transition parts of the option models, for options H1\u2013H4.] Figure A.1: Model learning in the four-room gridworld. The time course of learning of the transition parts (blue, right scale) and reward parts (red, left scale) of the models of each of the four options.
In all cases the error becomes dramatically smaller, but here the error will never converge to zero because of the stochasticity of the environmental dynamics. All lines were averaged over 30 runs and the shading represents the standard error. [Figure A.2 graphics omitted; axes: number of planning look-ahead operations vs. planning performance \u02c6 v(s0, w), with v*(s0) marked; curves averaged over 30 runs.] Figure A.2: Performance of planning for learned, approximate models with different amounts of training, corresponding to the vertical lines in Figure A.1. Appendix B. Monte Carlo estimates of planning performance Throughout the paper we have presented planning results in which the estimated value of the start state, \u02c6 v(s0, w), was used as a proxy for the value of the policy \u03c0w induced by the state values. However, in some cases \u02c6 v(s0, w) is a poor proxy for v\u03c0w(s0). An example of the discrepancy is that in the early stages of planning the value estimates will always be near zero, because w starts near zero and \u02c6 v is linear in w, but v\u03c0w(s0) could be positive or negative depending on the environment. Specifically, in our initial illustrative example, the two-room gridworld with the field of \u22121s, the initial returns will have \u22121s in them and overall will probably be negative, whereas the estimated values in the initial stages of planning (see Figure 1) are slightly positive. In this appendix we redo all the paper\u2019s planning results with a more direct Monte Carlo estimate of v\u03c0w(s0). First we need a clear specification of \u03c0w. Let g(s) denote the greedy option in state s given the current state-value weight vector for the main task, w, and the model, \u02c6 r and \u02c6 n: g(s) \doteq \arg\max_{o \in O} \left[ \hat{r}(x(s), o) + \hat{v}(\hat{n}(x(s), o), w) \right], \quad \forall s \in S. (B.1) For states s in which g(s) is an action, \u03c0w was defined to take that action deterministically. For states in which g(s) was an option, \u03c0w was defined as a stochastic selection from the actions with probabilities given by the soft-max policy for that option (11). During planning, after each update of w by (19), one trajectory from start state to termination was generated by following \u03c0w with the real environmental dynamics. The return on that trajectory was recorded as a noisy Monte Carlo estimate of v\u03c0w(s0). If the trajectory did not terminate after 1000 steps, then the partial return was used as the estimate of the return (\u03b3^1000 \u2248 0.00004). To reduce the noise, these estimates were averaged over 30 runs and then averaged over a bin of updates to produce the plots that follow. Figures B.1 and B.2 used a bin size of 10, and the others used a bin size of 50. The values for the first few hundred updates in all cases were negative and are not shown in the plots (they are clipped at zero).
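The two numerical pieces of this protocol (truncated discounted returns, then bin averaging of the noisy estimates) can be sketched as follows. This is our own minimal illustration, not the authors' code: the greedy-option selection of (B.1) and the environment rollout are assumed to happen elsewhere, and \u03b3 = 0.99 is inferred from the quoted \u03b3^1000 \u2248 0.00004.

```python
GAMMA = 0.99      # gamma**1000 ~= 0.00004, matching the value quoted above
MAX_STEPS = 1000  # trajectories are cut off here; the partial return is used

def monte_carlo_return(rewards):
    """Discounted return of one trajectory's reward sequence,
    truncated at MAX_STEPS steps."""
    g = 0.0
    for k, r in enumerate(rewards[:MAX_STEPS]):
        g += (GAMMA ** k) * r
    return g

def bin_average(estimates, bin_size):
    """Average the noisy per-update return estimates over
    consecutive bins of updates, one value per full bin."""
    return [sum(estimates[i:i + bin_size]) / bin_size
            for i in range(0, len(estimates) - bin_size + 1, bin_size)]
```

For example, `bin_average(list(range(20)), 10)` gives `[4.5, 14.5]`, one point per bin of ten updates, which is how the plotted curves below are produced.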
[Figure B.1 graphics omitted; axes: number of planning look-ahead operations vs. Monte Carlo estimate of v\u03c0w(s0) during planning; curves: reward respecting, shortest path, actions only.] Figure B.1: Same experiment as Figure 1, but with Monte Carlo estimate of value.
[Figure B.2 graphics omitted; axes: number of planning look-ahead operations vs. Monte Carlo estimate of v\u03c0w(s0) during planning, with v*(s0) marked; curves: A: 2,000-step model, B: 4,000-step model, C: 8,000-step model, D: 16,000-step model.] Figure B.2: Same experiment as inset in Figure 3, but with Monte Carlo estimate of value.
during planning v\u21e1w(s0) AF/XicfZRb9MwFMc9WGM2wa8ZIRIQ1pmpuCB6Q2LR1Yk+UsZ s0l8pxnDac6njrOusjK/Cw8gxCufg0e+BR+BkzaOu4uI1Mbn5/O3znxsZvwIJX1+p+pG zena7duz9yZvXv/oOHc/OP9tM4E5Tt0ZjH4tAlKeNBxPZkIDk7TAQjocvZgXu8UcwfnDCR BnG0K4cJa4ekGwV+QIkE1Jl7ctJROAngz/WtQZ4vp36i86cXV+ujx7r6sApB/bv2j0tD rz07+xF9MsZJGknKTpkVNPZFsRIQPKWT6Ls5QlhB6TLlMkTEMie0vw9uNIprn1vLDTy07pM HSX3HDJhR/LKOHXOR5l0n/dVkGUZJFxgzs+4JWOryNfyAsGo5EMYECoCiMeiPSIlVCV C1t2BUl6AT2FaD3mY6Iw4UmPlKan4MWlNrsKd0kYapMrzKHonrZlDwoawotVimSMYIsSnC qFi3yg8Kd5yZhTLMzw840Gxg20CwzLNPMN8zXbGjYUDMp1Ccs40TnTkofqLq1rp3SdIJ+1 FSICbpTJcImaFPTOJ6g7zXdnYC71bJQfxF0e5IEQ9KuG1C39aO64ZVkbqGufn1H9DzjI+n fQ4NO9Rsy7AtzTYN28w70D96UeazPuwTS9YHFLEBjeGQRLB3s5kfNdqlzFXNvKNsBy/ghR xz5sjuzFOt31Jt1EIV9rFAiDIlbIb+TkOA8/jDENny3Nlr+Q6sI2m7diNkUrhpXykAE5C 5OjdcoZrWmNS1RAQrG0FLgAJyFaHVEjpUXLiNYsXVvjusyqw6Ak6hLa4VZ5SrxtBmkFZE XK4/hR+IVI5j8KEx4T4q7g04DEX0xqiOoFO6hFpnZR2v7T7/20+BdWi5thCHwIhwvTjoD j0lS6HS9C5fOVdHew3lp3V5ZcfVu21N+PbEM2gp+gZWkQOeoXW0DvUQnuIojP0BX1D32ufa 19rP2o/x643pkrNY3Thqf36B+mKE4A= Number of planning lookahead operations v\u21e4(s0) AF73icfZTdbtMwFMe9scIYXxtcpMRIQ0TU03BdIbNo6sSvK2FiluVSO47TRnI86zr Oyh6CGy5AiFseh0vegkfgpI3jbkNYiuLz8/nb59g+dhMepLJe/z0ze2OudvPW/O2FO3fv3X+wuPTwYxpn grJDGvNYtF2SMh5E7FAGkrN2IhgJXc6O3JPtYvzolIk0iKMDOUpYJyS9KPADSiSg9mn3+UrarT/rLtr1tf q4Wdc7Ttmx3/xB49bqLs39wl5Ms5BFknKSpsdOPZEdRYQMKGf5As5SlhB6QnpMkTANieyvwt+PI5nm1tP CTq86paPQXDVRc+lHC/+V4nEn/VUcFUZJFlFwgTE/45aMrSJDywsEo5KPoEOoCAei/aJIFTCPlxas idI0g/oGUTrMR8ThQlP+qQ0PQU/LrXZU7hHwlCbXGEO2+xpW/YVdkP4sUqRTBAsUYIzpXCRj+tbZ3nJmG FMs3PDzjUbGjbULDMs08w3zNdsZNhIMynUJyzjROdOSh/YdWtLO6XpFP2gqRBTdL9KhE3RpqZxPEXfaXow BQ+qaWH/RdDrSyJEPCzhngl9TztuGVZF6hrm5v8+QM8zPp72aRvW1mzXsF3Ndgzbybs40UcKHxvAOrFkA 0ARG9IYLkEazeb+XGjU8pc1cy7ynbwMl7OMWe+PLYbk3Q7V3TbhXC9U0wAglwpu5Ff4DwPM4w1LK8UPZ 6rgPbtqO3RirF7NxwoQlKMwOJ6nHNGa1kTEkVEMLMRtAQoIFcRWi2hQ8WF2zhWXK27z6rMqivgFoM W+SMc9UYygzSiojL9VH4gUjlJAYfClPhpHg34DIU0RujiuoUKqlPpHVa2oPSHvy3+BTsFjXFuoQCBGmHo 
fFpa90OTyCztUn73rnY2PN2Vh78X7D3nw9eQ3RPHqMnqAV5KCXaBO9RS10iCji6DP6ir7VBrUvte+1HxP X2ZlS8whdarWfwG8wQ3C .1 1 10 wh =100 AGMHicfZTPb9s2FMfZru6ydu uS7bgLU2HADoEhuUmcS7AEibP1VK9L0gCha1AUZQmlfoSi4jiE8h/tsv9k6KUDWhS9rQ/YU+2KLpBNwGC9D58X/K9Rz76uYgL5bp/37n7xb3O/S9Xvnrw8OtvHn27uvbdaZGVkvETlolMnvm04CJO+YmKle BnueQ08QV/4b86qMdfXHJZxFl6rGY5HyV0ksZhzKgCNF79hQ8JP7U8TJwLGeR0+rl5pERU4Z1253K1eV41XVg7nribreBeTdbD9KY4w2cWe645XHbe72e9vbW9jt+v1Xbfvws+2t+X1d7DXdeP8/M/a P4Mx2v3/iJBxsqEp4oJWhTnpurkaZSxUxwmL0sOETwik64pkmRUBVtwDfMUlVU+MfaLm47FbPE3/CTDR9eXjIqPud4XqpwZ6TjNC8VTxm4wFhYCqwyXJcIB7HkTIkZ/FAmY4gHs4hKyhQU8pMlJ5LmUcyu mtpQTajI9qYgYaPUMacaDKhSWJMoYmAfQqMrSJN/AQ+vFXkCwRLNOBKa1Ln4f4ymwIt4wbdm3ZtWFTy6aGlZaVhoWhYbNLJsZpqR+SVSWm9xp4wNVx/vGqSiW6O+GSrlEn7eJ8CU6MDTLlugzQ4+X4HE7 LdRfxpNIUSmzaQOf2tCfGsd9y9pIfcv86vMbGATWJzA+Z5adGXZk2ZFh5YdVmOSmy2Fl1/AOpniF4BSPmUZHJIU1h4MqvPeqJH5elCNteORdWg9Iniozp3eIt3RLd1BLXwyqicAQaW106tuSBIHgeAEelz daOdJZQI7GDie05urNmo5goQNKOD+mKwI0YzXGiGso4IZraCoQF5CoTPJQmVFK7zWMl7brPeZtZewS8WkugRN48V4OhzSCtlPrCbEUYy0ItYgihMTXJ63sDkMdvTXaqC6hkyKq8GVjXzT2xf82n4ZqMX tsoQ+BUGn7cVof+lZXwSVobjr83z+nva632d36bdPZ21nchmgF/YAeo5+Qh/poD/2KhugEMfQHeo3eonedPztvOu87Hxaud+80mu/RJ0/n479d3SZT Figure B.3: Same experiment as Figure 5, but with Monte Carlo estimate of value. 
31 \fShortest-path options Primitive actions only Option-critic Eigenoptions Reward-respecting options Number of planning look-ahead operations Monte-carlo Return v\u21e4(s0) AF73icfZT dbtMwFMe9scIYXxtcpMRIQ0TU03Pu7YtHViV5SxsUpzqRzHaM5H3WcdZ2VPQ3XIAQtzwOl7wFj8BJG8fdhrAUxefn87fPsX3sJjxIZb3+e2b2xlzt5q352wt 37t67/2Bx6eHNM4EZYc05rFouyRlPIjYoQwkZ+1EMBK6nB25J9vF+NEpE2kQRwdylLBOSHpR4AeUSEDt0+7zlbRbf9ZdtOtr9XGzrnecsmO/+YPGrdVdmvuFvZh mIYsk5SRNj516IjuKCBlQzvIFnKUsIfSE9JgiYRoS2V+Fvx9HMs2tp4WdXnVKR6G76oarLnwso4T/y/E4k/7rjgqiJMsouACY37GLRlbRYaWFwhGJR9Bh1ARQDw W7RNBqIR9uLRkT5CkH9AziNZjPiYKE570SWl6Cn5carOncI+EoTa5why2dO27CvshvBjlSKZIFiBGdK4SIf17fO8pIxw5hm54adazY0bKhZlimW+Yr9nIsJF mUqhPWMaJzp2UPrDr1pZ2StMp+kFTIabofpUIm6JNTeN4ir7T9GAKHlTwv6LoNeXRIh4WMI9E/qedtwyrIrUNczN/32Anmd8PO3TNqyt2a5hu5rtGLaTd3GijxQ +NoB1YskGgCI2pDFckgjWbjbz40anlLmqmXeV7eBlvJxjznx5bDcm6Xau6LYL4XqnmAEuVJ2I7/AYeB5nGoZXmh7PVcB7bdtB27MVYpvJqPFSAoR2FwPE85ojW tiaYliohgZiNoCVBAriK0WkKHigu3cay4WnefVZlV8AptBi2yBnqjGUGaQVEZfro/ADkcpJD4UpsJ8W7AZSiN0YV1SlUp9I67S0B6U9+G/xKdgtaq4t1CEQ Ikw9DotLX+lyeASdq0/e9c7Hxpqzsfbi/Ya9+XLyGqJ59Bg9QSvIQa/QJnqLWugQUcTRZ/QVfasNal9q32s/Jq6zM6XmEbrUaj/ArzDbw= Monte Carlo estimate of . 
during planning v\u21e1w(s0) AF/XicfZRb9MwFMc9WGM2wa8ZIRIQ1pmpuCB 6Q2LR1Yk+UsZs0l8pxnDac6njrOusjK/Cw8gxCufg0e+BR+BkzaOu4uI1Mbn 5/O3znxsZvwIJX1+p+pGzena7duz9yZvXv/oOHc/OP9tM4E5Tt0ZjH4tAlKe NBxPZkIDk7TAQjocvZgXu8UcwfnDCRBnG0K4cJa4ekGwV+QIkE1Jl7ctJROAng z/WtQZ4vp36i86cXV+ujx7r6sApB/bv2j0tDrz07+xF9MsZJGknKTpkVNPZF sRIQPKWT6Ls5QlhB6TLlMkTEMie0vw9uNIprn1vLDTy07pMHSX3HDJhR/LKOHX OR5l0n/dVkGUZJFxgzs+4JWOryNfyAsGo5EMYECoCiMeiPSIlVCVC1t2BU l6AT2FaD3mY6Iw4UmPlKan4MWlNrsKd0kYapMrzKHonrZlDwoawotVimSMYIsS nCqFi3yg8Kd5yZhTLMzw840Gxg20CwzLNPMN8zXbGjYUDMp1Ccs40TnTkofqL q1rp3SdIJ+1FSICbpTJcImaFPTOJ6g7zXdnYC71bJQfxF0e5IEQ9KuG1C39aO 64ZVkbqGufn1H9DzjI+nfQ4NO9Rsy7AtzTYN28w70D96UeazPuwTS9YHFLEBje GQRLB3s5kfNdqlzFXNvKNsBy/ghRxz5sjuzFOt31Jt1EIV9rFAiDIlbIb+TkO A8/jDENny3Nlr+Q6sI2m7diNkUrhpXykAE5C5OjdcoZrWmNS1RAQrG0FLgA JyFaHVEjpUXLiNYsXVvjusyqw6Ak6hLa4VZ5SrxtBmkFZEXK4/hR+IVI5j8KEx 4T4q7g04DEX0xqiOoFO6hFpnZR2v7T7/20+BdWi5thCHwIhwvTjoDj0lS6HS9C 5fOVdHew3lp3V5ZcfVu21N+PbEM2gp+gZWkQOeoXW0DvUQnuIojP0BX1D32ufa 19rP2o/x643pkrNY3Thqf36B+mKE4A= Number of planning lookahead operations Figure B.4: Same experiment as Figure 8, but with Monte Carlo estimate of value. 
Number of planning look-ahead operations Monte-carlo Return A: 5,000-step model B: 10,000-step model C: 20,000-step model D: 40,000-step model v\u21e4(s0) AF73icfZT dbtMwFMe9scIYXxtcpMRIQ0TU03Pu7YtHViV5SxsUpzqRzHaM5H3WcdZ2VPQ3XIAQtzwOl7wFj8BJG8fdhrAUxefn87fPsX3sJjxIZb3+e2b2xlzt5q352wt 37t67/2Bx6eHNM4EZYc05rFouyRlPIjYoQwkZ+1EMBK6nB25J9vF+NEpE2kQRwdylLBOSHpR4AeUSEDt0+7zlbRbf9ZdtOtr9XGzrnecsmO/+YPGrdVdmvuFvZhm IYsk5SRNj516IjuKCBlQzvIFnKUsIfSE9JgiYRoS2V+Fvx9HMs2tp4WdXnVKR6G76oarLnwso4T/y/E4k/7rjgqiJMsouACY37GLRlbRYaWFwhGJR9Bh1ARQDwW 7RNBqIR9uLRkT5CkH9AziNZjPiYKE570SWl6Cn5carOncI+EoTa5why2dO27CvshvBjlSKZIFiBGdK4SIf17fO8pIxw5hm54adazY0bKhZlimW+Yr9nIsJFm UqhPWMaJzp2UPrDr1pZ2StMp+kFTIabofpUIm6JNTeN4ir7T9GAKHlTwv6LoNeXRIh4WMI9E/qedtwyrIrUNczN/32Anmd8PO3TNqyt2a5hu5rtGLaTd3GijxQ+N oB1YskGgCI2pDFckgjWbjbz40anlLmqmXeV7eBlvJxjznx5bDcm6Xau6LYL4XqnmAEuVJ2I7/AYeB5nGoZXmh7PVcB7bdtB27MVYpvJqPFSAoR2FwPE85ojWti aYliohgZiNoCVBAriK0WkKHigu3cay4WnefVZlV8AptBi2yBnqjGUGaQVEZfro/ADkcpJD4UpsJ8W7AZSiN0YV1SlUp9I67S0B6U9+G/xKdgtaq4t1CEQIk w9DotLX+lyeASdq0/e9c7Hxpqzsfbi/Ya9+XLyGqJ59Bg9QSvIQa/QJnqLWugQUcTRZ/QVfasNal9q32s/Jq6zM6XmEbrUaj/ArzDbw= Monte Carlo estimate of . 
during planning v\u21e1w(s0) AF/XicfZRb9MwFMc9WGM2wa8ZIRIQ1pmpuCB 6Q2LR1Yk+UsZs0l8pxnDac6njrOusjK/Cw8gxCufg0e+BR+BkzaOu4uI1Mbn 5/O3znxsZvwIJX1+p+pGzena7duz9yZvXv/oOHc/OP9tM4E5Tt0ZjH4tAlKe NBxPZkIDk7TAQjocvZgXu8UcwfnDCRBnG0K4cJa4ekGwV+QIkE1Jl7ctJROAng z/WtQZ4vp36i86cXV+ujx7r6sApB/bv2j0tDrz07+xF9MsZJGknKTpkVNPZFs RIQPKWT6Ls5QlhB6TLlMkTEMie0vw9uNIprn1vLDTy07pMHSX3HDJhR/LKOHXO R5l0n/dVkGUZJFxgzs+4JWOryNfyAsGo5EMYECoCiMeiPSIlVCVC1t2BUl 6AT2FaD3mY6Iw4UmPlKan4MWlNrsKd0kYapMrzKHonrZlDwoawotVimSMYIsSn CqFi3yg8Kd5yZhTLMzw840Gxg20CwzLNPMN8zXbGjYUDMp1Ccs40TnTkofqLq 1rp3SdIJ+1FSICbpTJcImaFPTOJ6g7zXdnYC71bJQfxF0e5IEQ9KuG1C39aO64 ZVkbqGufn1H9DzjI+nfQ4NO9Rsy7AtzTYN28w70D96UeazPuwTS9YHFLEBjeGQ RLB3s5kfNdqlzFXNvKNsBy/ghRxz5sjuzFOt31Jt1EIV9rFAiDIlbIb+TkOA8 /jDENny3Nlr+Q6sI2m7diNkUrhpXykAE5C5OjdcoZrWmNS1RAQrG0FLgAJy FaHVEjpUXLiNYsXVvjusyqw6Ak6hLa4VZ5SrxtBmkFZEXK4/hR+IVI5j8KEx4T 4q7g04DEX0xqiOoFO6hFpnZR2v7T7/20+BdWi5thCHwIhwvTjoDj0lS6HS9C5f OVdHew3lp3V5ZcfVu21N+PbEM2gp+gZWkQOeoXW0DvUQnuIojP0BX1D32ufa19 rP2o/x643pkrNY3Thqf36B+mKE4A= Number of planning lookahead operations Figure B.5: Same experiment as Figure A.2, but with Monte Carlo estimate of value. 32 \fAppendix C. Understanding the option-critic\u2019s performance As discussed in the main text, the poor performance of planning with optioncritic options (Figures 8 and B.4) may be surprising at first because these are reward-respecting options. Further inspection highlights the importance of appropriately choosing the state distribution we learn options from, and the distribution we use to sample state-feature vectors when performing AVI. In the four-room gridworld, as expected, the option-critic learns near-optimal policies that consistently reach the goal state. 
However, because the option-critic is an on-policy method, it does not learn accurate estimates of the option values across the whole state-space, but only across the set of states the options are likely to visit\u2014see representative learned policies in Figure C.1 and stopping probabilities in Figure C.2. Because we are selecting state-feature vectors x in a random sequence when performing AVI, there are several state-feature vectors x with inaccurate values that hinder planning performance, even though the models we learn are very accurate (see Figure C.3). More effective techniques for selecting the states which will be updated, a problem known as search-control, could make options learned by the option-critic more effective, but this is currently an open problem. The options learned by the option-critic, shown in Figures C.1 and C.2, highlight the importance of defining stopping values that are different from the estimated values of the state the option stops in. Because the option-critic does not do that, it rarely learns four distinct, useful options. Some of the options learned by the option-critic often look like random policies in most of the state space. Notice that different stopping values, such as deliberation cost (Harb et al., 2018), would not completely address this issue because of the on-policy nature of the underlying method. This discussion highlights the importance of taking into account how the option models will be used in planning. Figure C.1: Option policies learned by the option-critic method in a representative run. The arrows indicate the greedy action in each state, and the numbers indicate its probability of being taken (the actual policies were stochastic). Darker green indicates states in which the policy is more deterministic. Figure C.2: Stopping probabilities for the option policies depicted above in Figure C.1. Darker green indicates states in which the option stops more frequently.
In this case, starting from the start state S, Option 1 is unlikely to stop until reaching the hallway state, and then Option 3 is unlikely to stop until reaching the goal state. [Figure C.3 graphics omitted; axes: steps of off-policy experience vs. error in the transition and reward parts of the model.] Figure C.3: Model learning in the four-room gridworld with an option-critic option. The time course of learning of the transition part (blue, right scale) and reward part (red, left scale) of the models of one of the options learned by the option-critic. All lines were averaged over 30 runs." + } + ], + "Adam White": [ + { + "url": "http://arxiv.org/abs/2109.09809v1", + "title": "Counterfactual Instances Explain Little", + "abstract": "In many applications, it is important to be able to explain the decisions of\nmachine learning systems. An increasingly popular approach has been to seek to\nprovide \\emph{counterfactual instance explanations}. These specify close\npossible worlds in which, contrary to the facts, a person receives their\ndesired decision from the machine learning system. This paper will draw on\nliterature from the philosophy of science to argue that a satisfactory\nexplanation must consist of both counterfactual instances and a causal equation\n(or system of equations) that support the counterfactual instances. We will\nshow that counterfactual instances by themselves explain little. We will\nfurther illustrate how explainable AI methods that provide both causal\nequations and counterfactual instances can successfully explain machine\nlearning predictions.", + "authors": "Adam White, Artur d'Avila Garcez", + "published": "2021-09-20", + "updated": "2021-09-20", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction Machine learning systems are increasingly being used for automated decision making. It is important that these systems\u2019 decisions can be trusted. 
This is particularly the case in mission critical situations such as medical diagnosis, airport security or high-value \ufb01nancial trading. But a machine learning system cannot be trusted simply on the basis of its accuracy with a test data set. The user also needs to be able to understand why the system is making its speci\ufb01c predictions. One possible solution is to treat machine learning systems as \u2018black-boxes\u2019 and to explain their input-output behaviour. Such approaches can be divided into two broad types: those providing global explanations of the entire system and those providing local explanations of single predictions. Local explanations are needed when a machine learning system\u2019s decision boundary is too complex to allow for global explanations. Miller (2018) carried out a review of over 250 papers on explanation taken from the disciplines of philosophy, psychology and cognitive science. He states that perhaps his most important \ufb01nding was that explanations are contrastive counterfactuals, seeking to answer the question \u2018Why event E rather than event F?\u2019 F is referred to as E\u2019s foil and comes from a contrast class of events that were alternatives to E, but which did not happen. When a person asks for an explanation, the relevant contrast class is often not explicitly conveyed but instead is implicit in the explanatory question. Multiple explainable AI (XAI) methods have been proposed that claim to provide \u2018counterfactual explanations\u2019 of single machine learning predictions. The explanation consists of either a single or multiple counterfactual instances. It is the argument of this paper that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that supports the counterfactual instances. Counterfactual instances by themselves explain little. 
The remainder of this paper is organised as follows: Section 2 provides an overview of counterfactual instances and highlights Karimi, Sch\u00f6lkopf, et al.\u2019s (2021) argument that existing XAI methods often fail to successfully generate actionable counterfactuals. Section 3 specifies the requirements for a satisfactory scientific explanation and argues that these are not satisfied by statements of counterfactual instances. Section 4 illustrates how XAI methods that provide both causal equations and counterfactual instances can satisfy these requirements. Section 5 discusses aspects of understandability, summarises the paper and discusses future directions of work. 2 Counterfactual Instances A counterfactual instance specifies a close possible world in which, contrary to the facts, a person gets their desired outcome. For example, suppose that a banking machine learning system declined Mr Jones\u2019 loan application and that the feature vector for Mr Jones was {income: $32,000, age: 45, education: graduate}; then a counterfactual instance for Mr Jones might be {income: $35,000, age: 45, education: graduate} where the $3000 increase in salary is sufficient to flip Mr Jones to the desired side of the banking system\u2019s decision boundary. Wachter et al. (2017) first proposed using counterfactual instances as explanations of single machine learning predictions. This type of explanation was intended primarily to be for the benefit of the data subject, defined as \u2018the natural person whose data is being collected or evaluated\u2019. It has two main objectives: (i) understanding: to help a person understand why a machine learning system made a particular prediction affecting them; (ii) recourse: to specify what the person would need to change for the machine learning system to then produce the person\u2019s desired outcome. Wachter et al.
define a counterfactual explanation as having the form: \u201cScore p was returned because variables V had values (v1, v2, . . . ) associated with them. If V instead had values (v\u20321, v\u20322, . . . ), and all other variables remained constant, score p\u2032 would have been returned\u201d (Wachter et al., 2017, p. 4). Wachter et al. state that their approach does not rely on \u2018any knowledge of the causal structure of the world\u2019. Nor is it intended to provide approximate explanations of a machine learning system\u2019s algorithm or to provide feature scores. They note that their proposed type of explanation is different to those found in the previous \u2018machine learning legal and ethics literature\u2019. As we will illustrate, it is also substantially different to those proposed in the philosophy of science. They state that a similar type of explanation has been proposed by Nozick (1983, p.172:174) within the philosophy of knowledge. Nozick is providing an analysis of the necessary conditions for propositional knowledge. He argues that satisfying the following subjunctive conditional may be necessary for a person s to know proposition p: If p weren\u2019t true, s wouldn\u2019t believe that p. As Wachter et al. note, Nozick\u2019s conditional can be understood as a counterfactual (given p is true). Wachter et al.\u2019s paper then goes directly from considering Nozick\u2019s proposal to providing their definition of a counterfactual explanation. No philosophical argument is given to explain why Nozick\u2019s conditional provides a justification for a form of explanation applicable to XAI. There are now many XAI methods that attempt to generate \u2018optimal\u2019 counterfactual instances; for example, Karimi, Barthe, et al. (2020) review sixty counterfactual instance methods. The algorithms differ in the constraints they place and the attributes referenced in their loss functions (Verma et al. (2020)).
Desiderata often include that a counterfactual instance is: (1) actionable \u2013 e.g. does not recommend that a person reduces their age; (2) near to the original observation \u2013 common measures include Manhattan distance, L1 norm and L2 norm; (3) sparse \u2013 only changing the values of a small number of features; (4) plausible \u2013 e.g. the counterfactual instance must correspond to a high density part of the training data; (5) efficient to compute. These XAI methods can be broadly viewed as having the following objective function for a counterfactual $x + \delta$ (this is adapted from Karimi, Sch\u00f6lkopf, et al. (2021) and Ustun et al. (2019)): $\delta^* \in \arg\min_{\delta} \text{cost}(\delta; x)$ s.t. $\text{class}(m(x^{CFE})) \neq \text{class}(m(x))$, $x^{CFE} = x + \delta$, $x^{CFE} \in \mathcal{P}$, $\delta \in \mathcal{F}$, where $m$ is the machine learning system, $x$ is the observation, $\text{class}(m(x))$ is the label for prediction $m(x)$, $x^{CFE}$ is a nearest counterfactual, and $\mathcal{F}$ and $\mathcal{P}$ are optional feasibility and plausibility constraints. Karimi, Sch\u00f6lkopf, et al. (2021) show that counterfactual instances satisfying this function will often be suboptimal or infeasible, because they do not take account of the causal structure that determines the consequences of the person\u2019s actions. The underlying problem is that unless all of the person\u2019s features are causally independent of each other, then when the person acts to change the value of one feature, other downstream dependent features may also change. Karimi, Sch\u00f6lkopf, et al.\u2019s proposed solution uses Pearl\u2019s interventionist calculus to reformulate the recourse problem. The objective function is now to find the minimum cost set of actions that will achieve the desired outcome, where these actions are modelled as interventions on the structural causal model that subsumes the relevant portion of the person\u2019s world.
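The objective above can be made concrete with a small sketch. This is not any specific published method: it searches only sparse, single-feature perturbations, and the black-box interface (`model.predict`) is a hypothetical assumption, not from the paper.

```python
import numpy as np

def nearest_counterfactual(model, x, target_class, step=0.1, max_radius=5.0):
    # Brute-force sketch of the objective: grow single-feature perturbations
    # delta in magnitude and keep the class flip with the smallest L1 cost.
    # `model.predict` is a hypothetical black-box interface.
    best = None
    for i in range(len(x)):
        for direction in (1.0, -1.0):
            delta = np.zeros_like(x, dtype=float)
            for k in range(1, int(max_radius / step) + 1):
                delta[i] = direction * k * step
                if model.predict(x + delta) == target_class:
                    cost = np.abs(delta).sum()  # L1 cost(delta; x)
                    if best is None or cost < best[0]:
                        best = (cost, (x + delta).copy())
                    break  # first flip along this direction is the nearest one
    return best  # (cost, counterfactual instance) or None
```

Because the search is single-feature, it can miss cheaper multi-feature counterfactuals, which mirrors the sparsity desideratum discussed above.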
In order to reliably compute a set of recourse actions, the structural causal model will need to be at least approximately true. Unfortunately, as Karimi, Sch\u00f6lkopf, et al. note, the true causal model of the world is rarely known, and therefore their proposed solution currently has limited applicability. Nevertheless, the key point for this paper is that the counterfactual instances being produced by current XAI methods are often likely to fail the second of Watcher et al.\u2019s objectives: identifying how a person should act in order to achieve their desired outcome. 3 Theories of Scienti\ufb01c Explanation The structure of a satisfactory scienti\ufb01c explanation has been extensively studied throughout the history of philosophy. The deductive-nomological (henceforth: D-N) account of scienti\ufb01c explanation once dominated the philosophy of science. According to this, to explain a phenomenon is to subsume it under general laws of nature. For systems with deterministic laws, successful scienti\ufb01c explanations were taken to have the form of the D-N model , as speci\ufb01ed by Hempel and Oppenheim (1948): Figure 1: The schema for a DN-Explanation The explanans must include statements of general laws of nature that are essential to the derivation of the explanandum. The laws are general, in the sense that statements of the laws make no reference to particulars and are true without exception. The explanandum is explained by showing that it is an instantiation of these laws. A similar account of explanation was held to apply for systems with probabilistic laws, but it is suf\ufb01cient for our purposes just to focus on D-N explanations. The D-N Model has been subject to some well-known counterexamples (Salmon, 1984, p.46-50). 
Consider, for example, the following deductive argument: Every man who regularly takes birth control pills avoids pregnancy. John Jones regularly takes birth control pills. John Jones avoids becoming pregnant. This satisfies the criteria for being a D-N explanation but clearly fails to be explanatory. Such counterexamples have highlighted serious shortcomings with the D-N Model, including that: (i) irrelevant premises can be used to deduce and hence \u2018explain\u2019 an explanandum; (ii) no temporal priority is required between the explanans and the explanandum; (iii) perhaps most importantly, D-N explanations fail because a successful explanation of an event requires stating its causes. The dominant accounts of scientific explanation are now the counterfactual theories of causal explanation found in Woodward (2003) and Halpern and Pearl (2005). This paper will primarily focus on the account found in Woodward\u2019s book \u2018Making Things Happen: A Theory of Causal Explanation\u2019, which won the Lakatos Award for outstanding contribution to philosophy of science. Woodward and Pearl (2000) share similar analyses, with Woodward focussing more on the philosophy rather than the calculus of causality (see James Woodward (2003) for an analysis of the differences between Woodward and Pearl\u2019s specification of an intervention). However, this paper\u2019s arguments for XAI also apply, mutatis mutandis, to Pearl\u2019s work. Before providing Woodward\u2019s definition of causal explanation, it is first necessary to briefly outline his notions of ideal intervention and invariance. For Woodward, causal relationships relate variables. Variables are properties or magnitudes that can have more than one value; and the values of variables are possessed by particular entities (Woodward, 2003, p.39).
An \u2018ideal intervention\u2019 on X with respect to Y exogenously changes the value of X, such that any change that occurs to the value of Y occurs only because of the change in the value of X. Woodward\u2019s speci\ufb01cation of an ideal intervention involves an \u2018intervention variable\u2019 I which acts like a \u2018switch\u2019. When I is \u2018switched on\u2019: 1. I causes X. X\u2019s value is solely a function of I. 2. This means that all connections between X and its pre-intervention causes are \u2018broken\u2019. 3. I changes the value of Y , if at all, only by changing X. 4. I does not alter the relationship between Y and any of its causes Z that are not on a directed path from X to Y . (Woodward, 2008, p.202-203) An ideal intervention on X with respect to Y consists in I being \u2018switched on\u2019 (see Figure 2). The relationship between X and Y is \u2018invariant\u2019 if it holds for at least one \u2018testing intervention\u2019. Let the relationship between X and Y be represented by the generalisation: Y = G(X). A testing intervention is an ideal intervention that changes the value of X from, say, x0 to x1 and establishes that: G(x0) = y0 \u0338= G(x1) = y1. A necessary and suf\ufb01cient condition for a generalisation between variables X and Y to represent a causal relationship is that it is invariant (Woodward, 2003, p.250). X is a direct cause of Y with respect to a variable set V if there is a possible change to the value of X that will change the value of Y when all other variables in V (besides X) are held \ufb01xed. Figure 2: Example of an ideal intervention. X and Z are direct causes of Y . I is an intervention variable for X with respect to Y . When I is switched on X ceases to depend on W and depends only on I. 
Woodward provides the following definition of a causal explanation for a simple system without multiple connections: \u201cSuppose that M is an explanandum consisting in the statement that some variable Y takes the particular value y. Then an explanans E for M will consist of (a) a generalization G relating changes in the value(s) of a variable X (where X may itself be a vector or n-tuple of variables Xi) with changes in Y, and (b) a statement (of initial or boundary conditions) that the variable X takes the particular value x. A necessary and sufficient condition for E to be (minimally) explanatory with respect to M is that (i) E and M be true or approximately so; (ii) according to G, Y takes the value y under an intervention in which X takes the value x; (iii) there is some intervention that changes the value of X from x to x\u2032, where x \u0338= x\u2032, with G correctly describing the value y\u2032 that Y would assume under this intervention, where y \u0338= y\u2032.\" (Woodward, 2003, p.203) Central to Woodward\u2019s account is the requirement for an invariant generalisation. This is a causal equation in which the dependent variable is the effect and the independent variables are a complete set of its direct causes. This equation (or set of equations) supports counterfactuals. An explanation reveals patterns of counterfactual dependence; and the purpose of counterfactual instances is to provide answers to a set of \u2018what-if-things-had-been-done-differently\u2019 questions. The generalisation(s) might only be true for a particular domain, and in such cases provide a local explanation. For Woodward, all causal claims are counterfactual and contrastive: \u2018to causally explain an outcome is always to explain why it, rather than some alternative, occurred\u2019 (Woodward, 2003, p.146). This paper endorses Woodward\u2019s theory as specifying the structure of a satisfactory XAI explanation.
A machine learning system\u2019s prediction y is caused by the values of the features of observation x. A satisfactory explanation therefore needs to specify (i) an approximately true invariant generalisation relating the prediction to the input features and (ii) a set of counterfactual instances that are supported by the invariant generalisation. The resulting explanation reveals the (approximate) direct causal relationships that the machine learning system has learnt between the features of x and prediction y. It can be argued that counterfactual instances are a type of contrastive causal explanation (e.g. see Karimi, Barthe, et al. (2020)). They identify a subset of the causes of a machine learning system m predicting class l for observation x, and specify counterfactual changes in the values of those causes that would result in m predicting class l\u2032. But these are, at best, shallow causal explanations. They have multiple failings. For example, they do not identify: the relative importance of different causes, each cause\u2019s functional form, or any interactions between the causes. Furthermore, they might not state all the key causes of an event. Consider again Mr Jones\u2019 failed loan application. It may have been the case that the machine learning system was completely insensitive to any changes in Mr Jones\u2019 salary until it exactly reached $35,000, or perhaps Mr Jones only needs to increase his salary to $34,000 if he also enrols on an MBA. Providing only counterfactual instances is clearly insufficient for understanding the machine learning system\u2019s local behaviour. Imagine another science, say physics, treating a statement of counterfactual instances as an explanation, rather than seeking to discover the governing equation(s). 4 Current XAI Methods Two XAI methods will now be outlined, in order to illustrate how satisfactory causal explanations can be produced.2
The \ufb01rst is LIME: Local Interpretable Model-agnostic Explanations (Ribeiro et al., 2016), which generates a causal equation, however LIME would have to be enhanced to produce counterfactual instances. The second is CLEAR: Counterfactual Local Explanations viA Regression (White and Garcez, 2020; White, Ngan, et al., 2021) which was explicitly developed to satisfy Woodward\u2019s de\ufb01nition. Both explain single predictions by performing local weighted regressions in which the dependent variable is the prediction y and the independent variables are the input features x, a subset of which are the direct causes of y. The regressions are carried out on a synthetic dataset, with the values of each synthetic observation x\u2032 being generated by replacing subsets of the observation x with random samples from parametric distributions and passing x\u2032 through the machine learning system to determine the corresponding y\u2032. Hence both methods are assuming that the values of each direct cause can be independently changed (a requirement of Woodward\u2019s theory). CLEAR was developed after LIME, with key differences including that: CLEAR discovers actual counterfactual instances by sampling m, includes the counterfactual instances in its regression dataset, and uses GAN generated images for explaining image classi\ufb01cations. CLEAR also measures its \ufb01delity i.e. how faithfully it mimics the input-out behaviour of the machine learning system it seeks to explain. Figure 3 shows extracts from a CLEAR explanation for the classi\ufb01cation probability of an observation taken from the Adult dataset. This illustrates how an XAI method can satisfy Woodward\u2019s requirements, with CLEAR providing a causal equation, counterfactual instances and reporting \ufb01delity statistics (see White and Garcez (2020) for a detailed comparison of LIME and CLEAR; see White, Ngan, et al. 
(2021) for how CLEAR uses a GAN to provide contrastive explanations of image classifications). 2 It is unclear as to the extent to which other prominent XAI methods could be adapted to satisfy Woodward\u2019s specification, e.g. LORE (Guidotti et al., 2018) generates local decision trees rather than equations; Kernel SHAP\u2019s (Lundberg et al., 2017) regression coefficients are SHAP values and therefore do not directly calculate the effect of changing a feature\u2019s value on the machine learning system\u2019s prediction. Figure 3: Extracts from a CLEAR Image report. This explains a prediction y made by a neural network for an observation x from the Adult dataset. CLEAR states the causal equation that applies to the neighbourhood around x. Three counterfactuals are identified; for example, intervening to change x to being \u2018married\u2019 changes y to 0.57. CLEAR\u2019s causal equation estimates that y will change to 0.61, hence there is a fidelity error of 0.04. 5 Discussion To explain an event is \u201cto provide information about the factors on which it depends and exhibit how it depends on those factors\u201d (Woodward, 2003, p.204). This requires counterfactual explanations that reveal the causal structure producing the event. This paper has argued that counterfactual instances by themselves do not do this. They need to be supported by either a causal equation or system of equations. This paper\u2019s focus is in the opposite direction to Karimi, Sch\u00f6lkopf, et al. Their focus is on recourse, whilst this paper\u2019s focus is on providing an understanding of why the machine learning system made its prediction. Karimi, Sch\u00f6lkopf, et al.\u2019s criticism is that XAI methods do not provide actionable counterfactual instances, as they do not take account of the causal structure that generates the machine learning system\u2019s input data.
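The mechanism shared by LIME and CLEAR, a weighted linear regression fit to the black box in a neighbourhood of x, can be sketched as follows. The Gaussian sampling and proximity kernel, and the assumption that `model` is a callable returning prediction scores, are illustrative simplifications, not the exact procedure of either method.

```python
import numpy as np

def local_surrogate(model, x, n_samples=500, scale=0.5, seed=0):
    # Fit a weighted linear regression to the black box around observation x.
    # `model` maps a batch of inputs to prediction scores (illustrative API).
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, len(x)))    # synthetic neighbourhood
    y = model(X)
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale**2))  # proximity weights
    Xb = np.hstack([np.ones((n_samples, 1)), X])                # intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw.ravel(), rcond=None)
    fidelity = np.mean(np.abs(Xb @ coef - y))  # how closely the surrogate mimics the model
    return coef, fidelity
```

The returned coefficients play the role of the local causal equation, and the fidelity statistic measures how faithfully it mimics the system's input-output behaviour near x.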
This paper\u2019s criticism concerns counterfactual instances being insuf\ufb01cient to explain the direct causal structure linking the values of an observation x\u2019s features with the machine learning system\u2019s prediction. Existing XAI counterfactual instance methods can be relevant here, as they help to illustrate the direct causal relationships that the machine learning system has learnt; but they need a supporting causal equation. A concern with providing Woodward\u2019s form of explanation is understandability. In order to satisfy the requirement that the causal equation(s) is at least approximately true, the equation(s) may need to be too complex for non-technical users to understand. For example, CLEAR\u2019s equations may include logistic functions, interaction terms, exponents and so forth. The solution to this is to recognise that different types of users require different levels of explanation. A data scientist may want to see the causal equation that most faithfully mimics the local behaviour of the machine learning system. Sometimes these equations may include terms with functions that contradicts their understanding of the domain, suggesting that the machine learning system\u2019s prediction is untrustworthy. By contrast a bank customer may only want to know which features most impacted the machine learning system\u2019s prediction. The issue then becomes one of deciding how much of the full explanation to provide to a particular user, in what format, and when to drill down into progressively more complex explanations. For example, for the bank customer, perhaps only a bar chart showing the relative weightings of the most important features plus some counterfactual instances might be suf\ufb01cient. Since Wachter et al.\u2019s paper, considerable resources have been devoted to developing counterfactual instance methods. 
Greater focus is now needed on developing methods for discovering the direct causal relations that generate these counterfactual instances, and for making them useful in practice as an explanatory tool that can help increase trust in machine learning systems." + } ], "Banafsheh Rafiee": [ { "url": "http://arxiv.org/abs/2210.14361v1", "title": "Auxiliary task discovery through generate-and-test", "abstract": "In this paper, we explore an approach to auxiliary task discovery in\nreinforcement learning based on ideas from representation learning. Auxiliary\ntasks tend to improve data efficiency by forcing the agent to learn auxiliary\nprediction and control objectives in addition to the main task of maximizing\nreward, and thus producing better representations. Typically these tasks are\ndesigned by people. Meta-learning offers a promising avenue for automatic task\ndiscovery; however, these methods are computationally expensive and challenging\nto tune in practice. In this paper, we explore a complementary approach to the\nauxiliary task discovery: continually generating new auxiliary tasks and\npreserving only those with high utility. We also introduce a new measure of\nauxiliary tasks usefulness based on how useful the features induced by them are\nfor the main task. Our discovery algorithm significantly outperforms random\ntasks, hand-designed tasks, and learning without auxiliary tasks across a suite\nof environments.", "authors": "Banafsheh Rafiee, Sina Ghiassian, Jun Jin, Richard Sutton, Jun Luo, Adam White", "published": "2022-10-25", "updated": "2022-10-25", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "main_content": "INTRODUCTION The discovery question\u2014what should an agent learn about\u2014remains an open challenge for AI research. In the context of reinforcement learning, multiple components define the scope of what the agent is learning about.
The agent\u2019s behavior defines its focus and attention in terms of data collection. Related exploration methods based on intrinsic rewards define what the agent chooses to do outside of reward maximization. Most directly, the auxiliary learning objectives we build in, including macro actions or options, models, and representation learning objectives, force the agent to learn about other things beyond a reward maximizing policy. The primary question is where do these auxiliary learning objectives come from? Classically, there are two approaches to defining auxiliary objectives that are the extremes of a spectrum of possibilities. The most common approach is for people to build the auxiliary objectives in, pre-defining option policies, intrinsic rewards, and model learning objectives. Although most empirically successful, this approach has obvious limitations like feature engineering of old. At the other extreme is end-to-end learning. The idea is to build in as little inductive bias as possible, including the inductive biases introduced by auxiliary learning objectives. Instead, we let the agent\u2019s neural network discover and adapt internal representations and algorithmic components (e.g., discovering objectives (Xu et al., 2020), update rules (Oh et al., 2020), and models (Silver et al., 2017)) just through trial and error interaction with the world. This approach remains challenging due to data efficiency concerns and in some cases shifts the difficulty from auxiliary objective design to loss function and curriculum design. An alternative approach that exists somewhere between human design and end-to-end learning is to hand-design many tasks in the form of additional output heads on the network that must be optimized in addition to the primary learning signal.
These tasks, called auxiliary tasks, exert pressure on the lower layers of the neural network during training, yielding agents that can learn faster (Mirowski et al., 2016; Shelhamer et al., 2016), produce better final performance (Jaderberg et al., 2016), and at times transfer to other related problems (Wang et al., 2022). This positive influence on neural network training is called the auxiliary task effect and is related to the emergence of good internal representations we seek in end-to-end learning. The major weakness of auxiliary task learning is its dependence on people. Relying on people for designing auxiliary tasks is not ideal because it is challenging to know what auxiliary tasks will be useful in advance and, as we will show later, poorly specified auxiliary tasks can significantly slow learning. There has been relatively little work on autonomously discovering auxiliary tasks. One approach is to use meta-learning. Meta-learning methods are higher-level learning methods that adapt the parameters of the base learning system, such as step-sizes, through gradient descent (Xu et al., 2018). This approach can be applied to learning auxiliary tasks defined via General Value Functions or GVFs (Sutton et al., 2011) by adapting the parameters that define the goal (cumulant) and termination functions via gradient descent (Veeriah et al., 2019). Generally speaking, these meta-learning approaches require large amounts of training data and are notoriously difficult to tune (Antoniou et al., 2018). An exciting alternative is to augment these meta-learning approaches with generate-and-test mechanisms that can discover new auxiliary tasks, which can later be refined via meta-learning. This approach has produced promising results in representation learning, where simple generate-and-test significantly improves classification and regression performance when combined with back-prop (Dohare et al., 2021).
Before we can combine meta-learning and generate-and-test, we must first develop the generate-and-test approach to auxiliary task discovery so that their combination has the best chance for success. Such an effort is worthy of an entire study on its own, so in this paper we leave combining the two to future work and focus on the generate-and-test approach. Despite significant interest, it remains unclear what makes a good or bad auxiliary task. The meta-learning approaches do not generate human-interpretable tasks. Updating toward multiple previous policies, called the value improvement path (Dabney et al., 2020), can improve performance but is limited to historical tasks. The gradient alignment between auxiliary tasks and the main task has been proposed as a measure of auxiliary task usefulness (Lin et al., 2019; Du et al., 2018). However, the efficacy of this measure has not been thoroughly studied. Randomly generated auxiliary tasks can help avoid representation collapse (Lyle et al., 2021) and improve performance (Zheng et al., 2021), but can also generate significant interference which degrades performance (Wang et al., 2022). In this paper we take a step toward understanding what makes useful auxiliary tasks by introducing a new generate-and-test method for autonomously generating new auxiliary tasks and a new measure of task usefulness to prune away bad ones. The proposed measure of task usefulness evaluates the auxiliary tasks based on how useful the features induced by them are for the main task. Our experimental results show that our measure of task usefulness successfully distinguishes between the good and bad auxiliary tasks. Moreover, our proposed generate-and-test method outperforms random tasks, hand-designed tasks, and learning without auxiliary tasks. 2 BACKGROUND In this paper, we consider the interaction of an agent with its environment at discrete time steps t = 1, 2, .... The current state is denoted by $S_t \in \mathcal{S}$.
The agent\u2019s action $A_t \in \mathcal{A}$ is selected according to a policy $\pi : \mathcal{A} \times \mathcal{S} \to [0, 1]$, causing the environment to transition to the next state $S_{t+1}$, emitting a reward of $R_{t+1} \in \mathbb{R}$. The goal of the agent is to find the policy $\pi$ with the highest state-action value function, defined as $q_\pi(s, a) \doteq \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$, where $G_t \doteq \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$ is called the return, with $\gamma \in [0, 1)$ being the discount factor. To estimate the state-action value function, we use temporal-difference learning (Sutton, 1988). Specifically, we use Q-learning (Watkins & Dayan, 1992) to learn a parametric approximation $\hat{q}(s, a; \mathbf{w})$ by updating a vector of parameters $\mathbf{w} \in \mathbb{R}^d$. The update is as follows: $\mathbf{w}_{t+1} \leftarrow \mathbf{w}_t + \alpha \delta_t \nabla_{\mathbf{w}} \hat{q}(S_t, A_t; \mathbf{w})$, where $\delta_t \doteq R_{t+1} + \gamma \max_a \hat{q}(S_{t+1}, a; \mathbf{w}_t) - \hat{q}(S_t, A_t; \mathbf{w}_t)$ is the TD error, $\nabla_{\mathbf{w}} \hat{q}(S_t, A_t; \mathbf{w})$ is the gradient of the value function with respect to the parameters $\mathbf{w}_t$, and the scalar $\alpha$ denotes the step-size parameter. For action selection, Q-learning is commonly combined with an epsilon-greedy policy. We use neural networks for function approximation. We integrate a replay buffer, a target network, and the RMSProp optimizer with Q-learning as is commonly done to improve performance (Mnih et al., 2013). To formulate auxiliary tasks, a common approach is to use general value functions or GVFs (Sutton et al., 2011). GVFs are value functions with a generalized notion of target and termination. More specifically, a GVF can be written as the expectation of the discounted sum of any signal of interest: $v_{\pi,\gamma,c}(s) \doteq \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty}\left(\prod_{j=1}^{k} \gamma(S_{t+j})\right) c(S_{t+k+1}) \,\middle|\, S_t = s, A_{t:\infty} \sim \pi\right]$, where $\pi$ is the policy, $\gamma$ is the continuation function, and $c$ is a signal of interest and is referred to as the cumulant.
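As a minimal sketch of this update, here is one Q-learning step with linear function approximation standing in for the neural network (so the gradient of $\hat{q}$ with respect to $\mathbf{w}$ is just the feature vector); the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def q_learning_step(w, phi_sa, r, phi_next_all, gamma, alpha):
    # Linear approximation: q(s, a; w) = w . phi(s, a).
    # phi_sa: features of (S_t, A_t); phi_next_all: one row per action in S_{t+1}.
    delta = r + gamma * (phi_next_all @ w).max() - w @ phi_sa  # TD error delta_t
    return w + alpha * delta * phi_sa                          # gradient of linear q is phi
```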
Similarly, a generalized state-action value function $q_{\pi,\gamma,c}(s, a)$ can be defined, where the expectation is conditioned on $A_t = a$ as well as $S_t = s$. A control auxiliary task is one where the agent attempts to learn a $\pi$ to maximize the expected discounted sum of the future signal of interest (called a control demon or control GVF in prior work). To learn these auxiliary tasks, multi-headed neural networks are commonly used, where the last hidden layer acts as the representation shared between the main task and the auxiliary tasks (Jaderberg et al., 2016). In this setting, each head corresponds to either the main task or one of the auxiliary tasks, and the auxiliary tasks make changes to the representation alongside the main task via backpropagation. 3 AUXILIARY TASK DISCOVERY THROUGH GENERATE-AND-TEST We propose a new method for auxiliary task discovery based on a class of algorithms called generate-and-test. Generate-and-test was originally proposed as an approach to representation learning or feature finding. We can think of backprop with a large neural network as performing a massive parallel search in feature space (Frankle & Carbin, 2018). Backprop greatly depends on the randomness in the weight initialization to find good features. The idea of generate-and-test is to continually inject randomness into the feature search by continually proposing new features using a generator, measuring feature usefulness using a tester, and discarding useless features. This idea has a long history in supervised learning (Sutton et al., 2014; Mahmood & Sutton, 2013), and can even be combined with backprop (Dohare et al., 2021). The same basic structure can be applied to auxiliary task discovery, which we explain next. We use generate-and-test for discovering and retaining auxiliary tasks that induce a representation useful for learning the main task. That is, the goal is to find auxiliary tasks that induce a positive auxiliary task effect.
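For intuition, the generalized return inside the GVF expectation above can be computed directly for a finite trajectory. This is a sketch: the trajectory is assumed to be stored as two aligned lists of cumulant and continuation values.

```python
def gvf_return(cumulants, continuations):
    # G_t = sum_k (prod_{j=1..k} gamma(S_{t+j})) * c(S_{t+k+1}),
    # where cumulants[k] = c(S_{t+k+1}) and continuations[k] = gamma(S_{t+k+1}).
    total, discount = 0.0, 1.0
    for c, gamma in zip(cumulants, continuations):
        total += discount * c  # the k-th term; discount is the product so far
        discount *= gamma      # extend the running product for the next term
    return total
```

With a constant continuation function this reduces to the ordinary discounted return, and a continuation of zero terminates the accumulation, matching the generalized notion of termination.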
It is challenging to recognize which auxiliary tasks induce useful representations. To do so, we first evaluate how good each feature is based on how much it contributes to the approximation of the main task action-value function. Here we define the features to be the output of the neural network's last hidden layer after applying the activation function. We then identify which auxiliary task was responsible for shaping which features. Our proposed generate-and-test method for discovering auxiliary tasks consists of a generator and a tester. The generator generates new auxiliary tasks and the tester evaluates them. The auxiliary tasks that are assessed as useful are retained, while the auxiliary tasks that are assessed as useless are replaced by newly generated auxiliary tasks. Newly generated auxiliary tasks will most likely have low utility. To prevent the immediate replacement of newly generated auxiliary tasks, we track the number of steps since their generation, referred to as their age. An auxiliary task can only be replaced if its age exceeds an age threshold. Every T time steps, some ratio of the auxiliary tasks is replaced. We refer to T as the replacement cycle and denote the replacement ratio by ρ. The pseudo-code for the proposed generate-and-test method is shown in Algorithm 1. Algorithm 1 Generate-and-test for auxiliary task discovery 1: Input: number of auxiliary tasks n, age threshold µ, replacement cycle T, replacement ratio ρ 2: Initialization: 3: generate n auxiliary tasks using the generator 4: randomly initialize the base learning network 5: set age a_i for each auxiliary task to zero 6: for every time step do 7: do a DQN step to update the base learning network 8: increase a_i by one for i = 1, ..., n 9: update the utility u_aux(i) of each auxiliary task for i = 1, ..., n using the tester 10: for every T time steps do 11: find the nρ auxiliary tasks with the lowest utilities such that a_i > µ 12: replace those nρ auxiliary tasks with new auxiliary tasks generated by the generator 13: reinitialize the input and output weights of the features induced by the nρ auxiliary tasks 14: reset a_i to zero for the nρ auxiliary tasks 15: end for 16: end for Note that the proposed method does not generate-and-test on features but on auxiliary tasks. It does, however, assess the utility of features and derives the utility of the auxiliary tasks from the utility of the features that they induced. Figure 1: The Master-User strategy for learning auxiliary tasks alongside the main task, where all features are used by all tasks but each is modified by only one task. The dotted arrows show stop-gradient connections through which the gradient does not get backpropagated. We propose a tester that evaluates the auxiliary tasks based on how useful the features induced by them are for the main task. When following the standard practice of jointly learning the main task and the auxiliary tasks, recognizing which feature was influenced the most by which auxiliary task is challenging, because all features are jointly shaped by all the tasks, both auxiliary and main. To address this issue, we use a strategy for learning the representation in which all features are used by all tasks in the forward pass, but each feature is only modified through the gradient backpropagated from one task. See Figure 1.
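The replacement step of Algorithm 1 (steps 11-14) can be sketched as follows; the function name and data layout are illustrative assumptions, not the authors' implementation:

```python
def select_tasks_to_replace(utilities, ages, age_threshold, replace_ratio):
    """Return the indices of the lowest-utility auxiliary tasks that are
    old enough (age > age_threshold) to be replaced, as in steps 11-12
    of Algorithm 1. The selected tasks would then be regenerated and
    their feature weights reinitialized."""
    n_replace = int(len(utilities) * replace_ratio)
    # Only tasks past the age threshold are eligible for replacement.
    eligible = [i for i, age in enumerate(ages) if age > age_threshold]
    eligible.sort(key=lambda i: utilities[i])  # lowest utility first
    return eligible[:n_replace]
```

Protecting young tasks via the age threshold matters because a freshly generated task has had no time to shape features, so its measured utility is almost always near the bottom.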
This learning strategy is similar to the Master-User algorithm proposed for continual recurrent learning (Javed et al., 2021); therefore, we refer to it as the Master-User strategy. When using the Master-User strategy, it is clear which auxiliary task was responsible for inducing which feature. As mentioned above, the proposed tester assesses the utility of an auxiliary task based on how useful the features induced by it are for the main task. To assess each feature, the proposed tester looks at the magnitude of the outgoing weights from the feature to the main task action-value function for all actions. The magnitude of the weights represents how much the feature contributes to the approximation of the main task action-value function: the greater the magnitude, the more important the feature. The tester also considers the magnitude of each feature itself: the greater the magnitude of the feature, the more it contributes to the approximation of the main task action-value function. Therefore, the instantaneous utility of a feature is defined as: u(f_k^i) = Σ_a |w_ka^main × f_k^i| (1), where f_k^i is the kth feature shaped by auxiliary task i and u(f_k^i) is its instantaneous utility. Instead of only looking at the instantaneous utility u(f_k^i), the proposed tester maintains a trace of past utilities using an exponential moving average: ū(f_k^i) ← (1 - τ) ū(f_k^i) + τ u(f_k^i) (2), where ū(f_k^i) is the trace of u(f_k^i) with trace parameter τ. This assessment method is similar to what has been used in generate-and-test on features (Mahmood & Sutton, 2013). After assessing the utility of the features, the utility of each auxiliary task is set to the sum of the utilities of the features shaped by it: u_aux(i) = Σ_k ū(f_k^i). We combined the proposed tester with a simple generator that randomly generates auxiliary tasks.
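The two utility computations, Eq. (1) and Eq. (2), can be sketched directly; the array shapes are assumptions for illustration:

```python
import numpy as np

def feature_utilities(w_main, features):
    """Instantaneous utility of each feature (Eq. 1): the sum over actions
    of |outgoing weight x feature activation|.
    w_main: (num_features, num_actions) weights into the main-task head;
    features: (num_features,) last-hidden-layer activations."""
    return np.abs(w_main * features[:, None]).sum(axis=1)

def update_utility_trace(trace, instantaneous, tau):
    """Exponential moving average of feature utilities (Eq. 2)."""
    return (1.0 - tau) * trace + tau * instantaneous
```

A task's utility is then the sum of the traced utilities of the features it owns under the Master-User strategy.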
The auxiliary tasks are formulated as subgoal-reaching GVFs, where the continuation function returns 0 at the subgoal and 1 elsewhere (similar to γ in an episodic MDP). The cumulant is -1 everywhere and the policy is greedy. In plain English, the subgoals are randomly selected from the observation space, meaning the agent learns many policies for reaching different parts of the observation space in addition to solving the main task. 4 EXPERIMENTAL RESULTS In this section, we provide empirical results supporting the efficacy of the proposed generate-and-test method for auxiliary task discovery. We include results on two gridworld environments: four-rooms and maze. We also include results on the pinball environment (Konidaris & Barto, 2009), which is widely used in skill chaining, option discovery, and recently model-based planning (Lo et al., 2022). We chose these environments so that we could easily visualize the discovered auxiliary tasks and easily design good and bad auxiliary tasks as baselines. All environments are episodic. In the gridworld environments, the goal is to learn the shortest path from the start state to the goal. The start and goal states are denoted by S and G respectively in Figures 2 and 3. At each cell, four actions are available: up, down, left, and right, which move the agent one cell in the respective direction. The observation is a one-hot representation with the index corresponding to the agent's position set to 1. The reward is -1 on each time step. There is an episode cutoff of 500 steps. In the pinball environment, a small ball must be navigated to the goal in a maze-like environment with simplified ball physics. In Figure 3, the pinball environment is shown with the ball and goal depicted as a grey and a yellow circle respectively. Collision with the obstacles causes the ball to bounce. The observation space is continuous and is described by x, y, ẋ, ẏ.
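Under the subgoal-reaching formulation introduced above (cumulant -1 everywhere, continuation 0 at the subgoal), the GVF return along a trajectory that reaches the subgoal is minus the number of steps taken to reach it. A minimal sketch of that return computation (the function name is illustrative):

```python
def subgoal_gvf_return(states, subgoal):
    """Return of a subgoal-reaching GVF along one trajectory of visited
    states: a cumulant of -1 is accumulated on every transition, and the
    continuation function is 0 at the subgoal, so the return equals minus
    the number of steps until the subgoal is first reached."""
    g = 0
    for state in states[1:]:  # the cumulant is emitted on arrival in each state
        g -= 1
        if state == subgoal:  # continuation is 0 here: the GVF terminates
            break
    return g
```

Maximizing this return therefore corresponds to learning the shortest route to the subgoal, which is why the greedy policy makes each auxiliary task a small navigation problem.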
The start location and the goal location are at (0.8, 0.5) and (0.1, 0.1) respectively. The action space includes 5 actions: increasing or decreasing ẋ or ẏ, and leaving ẋ and ẏ unchanged. The reward is -5 at each time step. There is no episode cutoff. Figure 2: Left: The four-rooms environment with the good and bad hand-designed auxiliary tasks shown in red and blue respectively. Middle: Hallway and corner auxiliary tasks improved and hurt the performance respectively. Right: The proposed tester evaluated the hand-designed auxiliary tasks well, giving higher utility to the hallway auxiliary tasks. Note that in the original pinball environment, the agent receives a special reward of 10,000 upon arrival at the goal. Instead, we gave a reward of -5 (like every other step) so that the scales of the action-value functions for the main task and the auxiliary tasks would not be too different. When learning multiple tasks in parallel, the contribution of each task is determined by the scale of the corresponding value function (Hessel et al., 2019). Therefore, when the scales of the value functions are very different, we would need to scale the reward of the main task and the cumulants of the auxiliary tasks appropriately. This would require an additional hyper-parameter that would give our method an advantage if tuned. For this paper, we decided to focus on the case where the scales of the value functions for the main task and the auxiliary tasks are similar. We used DQN with the RMSProp optimizer as the base learning system. We used a neural network with one hidden layer and the tanh activation function.
(We used the tanh activation function so that the induced features would all be in the same range of (-1, 1); however, our proposed tester should work well with other activation functions too. This can be investigated in future work.) For the gridworld environments, the one-hot observation vector was fed to the neural network. The hidden layer sizes for four-rooms and maze were 50 and 500 respectively. The replay buffer sizes for four-rooms and maze were 500 and 1000 respectively. For both four-rooms and maze, we used a batch size of 16 and a target network update frequency of 100. For the pinball environment, the 4-dimensional observation was normalized and fed to the neural network. The hidden layer size was 128. We used a replay buffer of size 10,000, a batch size of 16, and a target network update frequency of 200. 4.1 THE PROPOSED TESTER REASONABLY EVALUATES THE AUXILIARY TASKS To see how well the proposed tester evaluates the auxiliary tasks, we designed good and bad auxiliary tasks in the four-rooms environment. The hand-designed auxiliary tasks were formulated as subgoal-reaching GVFs, with the good and bad auxiliary tasks having hallway and corner subgoals respectively. See Figure 2. Note that when learning the auxiliary tasks alongside the main task using the Master-User strategy, the gradient backpropagated from the main task only modifies a fraction 1/(#auxiliary tasks + 1) of the features. For example, in the case of learning the hallway auxiliary tasks, there are 3 auxiliary tasks; therefore, the gradient backpropagated from the main task only modifies 25% of the features. The hallway and corner auxiliary tasks improved and hurt learning speed respectively, as expected (Figure 2, middle graph). The proposed tester evaluated the hallway and corner auxiliary tasks well, assigning higher utility to the hallway auxiliary tasks and clearly indicating that the corner tasks are bad.
4.2 THE GENERATE-AND-TEST METHOD IMPROVES OVER THE BASELINE OF NO AUXILIARY TASKS Next, we studied the performance of the base learning system when combined with the proposed generate-and-test method. The generate-and-test method uses the combination of the random generator and our proposed tester. The random generator produces subgoal-reaching auxiliary tasks with the subgoals randomly picked from the observation space. More specifically, in the gridworld environments, the subgoals are cells in the grid. In the pinball environment, the subgoals are determined by (x, y), and once the ball is within radius 0.035 of a subgoal, the agent is considered to have reached it. We included four baselines for comparison: the base learning system with 1) no auxiliary tasks, 2) hand-designed good auxiliary tasks, 3) hand-designed bad auxiliary tasks, and 4) fixed random auxiliary tasks. All auxiliary tasks were in the form of subgoal-reaching tasks. The subgoals corresponding to the hand-designed good and bad auxiliary tasks for all three environments are shown in red and blue respectively in Figure 3. For the fixed random auxiliary tasks, the subgoals were randomly picked from the observation space and kept fixed throughout learning. We systematically swept the step-size parameter and report the performance of the best to ensure a fair comparison. To do so, we ran the baseline with no auxiliary tasks with different values of the step-size for 10 runs. We used the step-size that resulted in the lowest area under the curve and reran the baseline with the best step-size for 30 runs to get the final results. We repeated this process for the baselines with hand-designed auxiliary tasks. For the generate-and-test method, we used the same step-size as the baseline with good hand-designed auxiliary tasks.
For four-rooms, maze, and pinball, the sweeps over the step-sizes included {0.000625, 0.0025, 0.01, 0.04}, {0.00025, 0.001, 0.004}, and {0.0025, 0.005, 0.01} respectively. The generate-and-test method has hyper-parameters of its own: 1) the number of auxiliary tasks, 2) the age threshold, 3) the replacement cycle, and 4) the replacement ratio. For the gridworld environments, we used 8 auxiliary tasks, an age threshold of 0, a replacement cycle of 1000 steps, and a replacement ratio of 0.25. For the pinball environment, we used 5 auxiliary tasks, an age threshold of 5000, a replacement cycle of 5000 steps, and a replacement ratio of 0.2. The proposed generate-and-test method outperformed the baseline with no auxiliary tasks in all three environments (Figure 3). The generate-and-test method also outperformed the baseline with fixed random auxiliary tasks. This suggests that the subgoals discovered by generate-and-test are actually better than random subgoals. Interestingly, the fixed random auxiliary tasks resulted in a performance gain over the baseline with no auxiliary tasks in all three environments (Figure 3). This is in line with findings from the literature suggesting that random GVFs can form good auxiliary tasks for reinforcement learning (Zheng et al., 2021). 4.3 THE AUXILIARY TASKS DISCOVERED BY THE GENERATE-AND-TEST METHOD ARE REASONABLY GOOD We conducted additional experiments to analyze how good the auxiliary tasks discovered by the generate-and-test method are. In the previous subsection, we compared the performance of the base learning system combined with generate-and-test and combined with fixed random auxiliary tasks. Generate-and-test outperformed fixed random auxiliary tasks (Figure 3). This suggests that the choice of the auxiliary tasks was important and that generate-and-test discovered and retained useful auxiliary tasks. The auxiliary tasks discovered and retained by generate-and-test are shown in Figure 4.
To plot the discovered auxiliary tasks, we ran the generate-and-test method for 30 runs and stored the auxiliary tasks that were retained. The green squares correspond to the discovered auxiliary tasks in the gridworld environments. Darker green indicates that the state was chosen as a subgoal in many runs. For the pinball environment, the discovered auxiliary tasks are shown by green circles. In the gridworld environments, the subgoals corresponding to the discovered auxiliary tasks were close to the goal states. In the pinball environment, the discovered auxiliary tasks were more concentrated in the central areas, which are reasonable way-points on the path to the goal. Figure 3: The learning curves for the proposed generate-and-test method (green), the baseline with no auxiliary tasks (orange), the baseline with fixed random auxiliary tasks (black), and the baselines with good and bad hand-designed auxiliary tasks (red and blue). The results are averaged over 30 runs. The proposed generate-and-test method improved over the baseline with no auxiliary tasks. The hand-designed good and bad auxiliary tasks respectively improved and hurt the performance. Generate-and-test also outperformed the baseline with fixed random auxiliary tasks. Fixed random auxiliary tasks also resulted in a performance gain over the baseline. Figure 4: Example discovered auxiliary tasks in the three environments. Generate-and-test discovered reasonably good auxiliary tasks: in the gridworld environments, the subgoals corresponding to the discovered auxiliary tasks were close to the goal states.
In the pinball environment, the discovered auxiliary tasks were more concentrated in the central areas. To confirm that the auxiliary tasks discovered by generate-and-test were useful, we stored the discovered and retained auxiliary tasks in a pool. We then randomly selected a number of auxiliary tasks from the pool and ran the base learning system, learning the main value function from scratch. We kept the auxiliary tasks fixed throughout learning. We repeated this for 30 runs. The discovered auxiliary tasks were useful and substantially improved over the baseline of no auxiliary tasks (Figure 5). 4.4 THE REPRESENTATION INDUCED BY THE GENERATE-AND-TEST METHOD HAS LOWER REDUNDANCY COMPARED TO THE BASELINE Next, we studied the representation that emerged under the generate-and-test method. There is a multitude of metrics for evaluating emergent representations (Wang et al., 2022; Javed & White, 2019). We used the stable rank of the weight matrix between the input layer and the hidden layer (Arora et al., 2018). Figure 5: The learning curves corresponding to the discovered auxiliary tasks. The discovered auxiliary tasks were kept fixed throughout learning. The discovered auxiliary tasks improved over the baseline of no auxiliary tasks. The stable rank of a matrix A is defined as Σ_i σ_i^2 / max_i σ_i^2, where the σ_i are the singular values of A. The stable rank provides an approximation of the rank of the matrix, but is unaffected by the smaller singular values. The stable rank of the weight matrix between the input layer and the hidden layer characterizes the amount of generalization/redundancy of the network. The larger the stable rank, the lower the redundancy in the representation.
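The stable rank defined above can be computed directly from the singular values; a small sketch:

```python
import numpy as np

def stable_rank(A):
    """Stable rank of a matrix: the sum of squared singular values divided
    by the largest squared singular value (i.e., ||A||_F^2 / ||A||_2^2).
    It is at most the rank of A and is insensitive to tiny singular values."""
    s = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    return float((s ** 2).sum() / (s ** 2).max())
```

For an identity matrix the stable rank equals the true rank, while near-duplicate rows or columns pull it down, which is why it serves here as a redundancy measure.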
The stable rank of the representation learned by generate-and-test is larger than the stable rank of the representation learned by the baseline with no auxiliary tasks (Figure 6). This suggests that the auxiliary tasks discovered by generate-and-test resulted in a representation with lower redundancy. The lower stable rank of the learned representation together with the better performance is encouraging and suggests that more aggressive pruning by the tester could achieve a lower stable rank while maintaining good performance. Figure 6: The stable rank of the weight matrix from the input layer to the hidden layer over episodes. The results are averaged over 10 runs. The combination of the base learning system with the generate-and-test method resulted in a representation with a higher stable rank compared to the case of only having the base learning system. This suggests that the auxiliary tasks discovered by generate-and-test resulted in a representation with lower redundancy." + }, + { + "url": "http://arxiv.org/abs/2204.00565v1", + "title": "What makes useful auxiliary tasks in reinforcement learning: investigating the effect of the target policy", + "abstract": "Auxiliary tasks have been argued to be useful for representation learning in\nreinforcement learning. Although many auxiliary tasks have been empirically\nshown to be effective for accelerating learning on the main task, it is not yet\nclear what makes useful auxiliary tasks. Some of the most promising results are\non the pixel control, reward prediction, and the next state prediction\nauxiliary tasks; however, the empirical results are mixed, showing substantial\nimprovements in some cases and marginal improvements in others. Careful\ninvestigations of how auxiliary tasks help the learning of the main task is\nnecessary.
In this paper, we take a step studying the effect of the target\npolicies on the usefulness of the auxiliary tasks formulated as general value\nfunctions. General value functions consist of three core elements: 1) policy 2)\ncumulant 3) continuation function. Our focus on the role of the target policy\nof the auxiliary tasks is motivated by the fact that the target policy\ndetermines the behavior about which the agent wants to make a prediction and\nthe state-action distribution that the agent is trained on, which further\naffects the main task learning. Our study provides insights about questions\nsuch as: Does a greedy policy result in bigger improvement gains compared to\nother policies? Is it best to set the auxiliary task policy to be the same as\nthe main task policy? Does the choice of the target policy have a substantial\neffect on the achieved performance gain or simple strategies for setting the\npolicy, such as using a uniformly random policy, work as well? Our empirical\nresults suggest that: 1) Auxiliary tasks with the greedy policy tend to be\nuseful. 2) Most policies, including a uniformly random policy, tend to improve\nover the baseline. 3) Surprisingly, the main task policy tends to be less\nuseful compared to other policies.", + "authors": "Banafsheh Rafiee, Jun Jin, Jun Luo, Adam White", + "published": "2022-04-01", + "updated": "2022-04-01", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction Learning about many aspects of the environment in addition to the main task of maximizing the discounted sum of rewards has been argued to be bene\ufb01cial in reinforcement learning [Sutton et al., 2011]. A common view is that these additional tasks, also known as auxiliary tasks, can improve the data ef\ufb01ciency by shaping the representation [Jaderberg et al., 2016, Shelhamer et al., 2016, Mirowski et al., 2016]. 
In environments with sparse reward structures, auxiliary tasks provide instantaneous targets for shaping the representation in the absence of reward. It has also been argued that auxiliary tasks can function as regularizers, improving generalization and avoiding representation overfitting in RL [Dabney et al., 2020]. Finally, what has been learned about an auxiliary task can be transferred to the main task, improving data efficiency. Many auxiliary tasks have been proposed and shown to accelerate learning on the main task. However, much of the work on auxiliary tasks has demonstrated their efficacy empirically, and in many cases the empirical results are mixed: in some cases auxiliary tasks result in substantial performance gains over the baselines, whereas in other cases they achieve marginal improvements or even harm the performance [Jaderberg et al., 2016, Shelhamer et al., 2016]. Recent works have explored in a systematic way how auxiliary tasks benefit representation learning [Lyle et al., 2021, Dabney et al., 2020]. More systematic studies are required to answer the question of what makes useful auxiliary tasks. In this paper, we take a step toward answering the question of what makes useful auxiliary tasks. We specifically consider auxiliary tasks formulated as general value functions (GVFs) [Sutton et al., 2011]. A core element of GVFs is the target policy, as it determines both the target of prediction and the state-action distribution that the agent is trained on. We empirically study the effect of the target policies associated with auxiliary tasks on their usefulness. 2 Background In this paper, we consider the interaction of an agent with its environment at discrete time steps. At each time step t, the agent is in a state S_t ∈ S, performs an action A_t ∈ A according to a policy π : A × S → [0, 1], receives a reward R_{t+1} ∈ R, and transitions to the next state S_{t+1}.
We consider the problems of prediction and control. For the prediction problem, the goal of the agent is to approximate the value function for a given policy π, where the value function is defined by v_π(s) ≐ E_π[G_t | S_t = s], with G_t = Σ_{k=0}^∞ γ^k R_{t+k+1} being the return and γ ∈ [0, 1) the discount factor. In the control setting, the goal of the agent is to maximize the expected return. In this setting, it is common to use state-action value functions: q_π(s, a) ≐ E_π[G_t | S_t = s, A_t = a]. To estimate the value function, we use semi-gradient temporal-difference learning [Sutton, 1988]. More specifically, we use TD(0) to learn a parametric approximation v̂(s; w) by updating a vector of parameters w ∈ R^d as follows: w_{t+1} ← w_t + α δ_t ∇_w v̂(S_t; w), where α denotes the step-size parameter and δ_t denotes the TD error: R_{t+1} + γ v̂(S_{t+1}; w_t) - v̂(S_t; w_t). ∇_w v̂(S_t; w) is the gradient of the value function with respect to the parameters w_t. For the control setting, to estimate the state-action value functions, we use the control variant of TD(0), Q-learning [Watkins and Dayan, 1992]. The update rule for Q-learning is similar to that of TD(0); however, q̂(S_t, A_t; w) is used instead of v̂(S_t; w), and the TD error is defined as R_{t+1} + γ max_a q̂(S_{t+1}, a; w_t) - q̂(S_t, A_t; w_t). For action selection, Q-learning is commonly integrated with the epsilon-greedy policy. As the function approximator, we use neural networks. We integrate a replay buffer, a target network, and the RMSProp optimizer with TD(0) and Q-learning, as is commonly done to improve performance when using neural networks as the function approximator [Mnih et al., 2013].
More specifically, a GVF can be written as the expectation of the discounted sum of any signal of interest: v_{π,γ,c}(s) ≐ E[Σ_{k=0}^∞ (Π_{j=1}^k γ(S_{t+j})) c(S_{t+k+1}) | S_t = s, A_{t:∞} ~ π], where π is the target policy, γ is the continuation function, and c is a signal of interest. A general state-action value function q_{π,γ,c}(s, a) can be defined similarly, with the difference that the expectation is conditioned on A_t = a as well as S_t = s. 3 Investigating the effect of the auxiliary task's policy on its usefulness As we discussed in Section 1, we study the effect of the target policy of the auxiliary tasks on their usefulness. Our experimental setup includes two phases: 1) pre-training on the auxiliary tasks, and 2) learning the main task. For the pre-training phase, a neural network is used as the function approximator for learning the auxiliary tasks. The behavior policy is set to the target policy of the auxiliary tasks. Figure 1: The two phases of pre-training on the auxiliary tasks and learning the main task. In the second phase, the representation learned during the pre-training phase is kept fixed and fed to another network to learn the main task. The behavior policy in this phase is set to the policy learned for the main task. See Figure 1. Using this procedure, the representation learned for the auxiliary tasks in Phase 1 is transferred to be used in Phase 2 when learning the main task. For the auxiliary tasks, we considered 4 policies: 1) uniform random, 2) greedy, 3) sticky actions, and 4) the main task policy. The sticky actions policy is to follow the previous action with probability 0.9 and to select an action randomly with probability 0.1.
Learning the auxiliary task corresponding to the greedy policy is a control problem, whereas learning the auxiliary tasks corresponding to the other policies is a prediction problem. We experimented with pixel-based and non-pixel-based environments. For the non-pixel-based environments, we considered two types of cumulants: 1) the observation O_{t+1}, and 2) the observation difference O_{t+1} - O_t. We also used 6 different values of γ: {0, 0.5, 0.75, 0.8, 0.9, 0.99}, to cover a wide range of temporal horizons for the auxiliary tasks. Therefore, for each policy and each type of cumulant, 6 × |O| auxiliary tasks were learned in parallel, where |O| denotes the observation size. To learn the auxiliary tasks in parallel during the pre-training phase, we used a multi-headed network with the representation-learning layer shared between the auxiliary tasks and each head corresponding to one auxiliary task. (In the case of control auxiliary tasks, multiple heads are assigned to each auxiliary task, each corresponding to one state-action value.) In the case of prediction auxiliary tasks, the behavior policy during the pre-training phase was set to the target policy of the auxiliary tasks; therefore, learning was on-policy. In the case of control auxiliary tasks, 6 × |O| target policies were being learned, each maximizing the return corresponding to one cumulant and γ. The behavior policy, in that case, was round-robin over the 6 × |O| target policies, and learning was off-policy. For the pixel-based environments, to specify the auxiliary tasks, we cropped the 40 × 40 observation space into a 24 × 24 region and subdivided it into a 6 × 6 grid of non-overlapping cells of size 4 × 4. We considered two types of cumulants: 1) the sum of the pixels in each cell, and 2) the absolute difference of the sum of the pixels in each cell from t + 1 to t. We used one value of γ, equal to 0.9.
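The per-cell pixel cumulants described above can be computed with a single reshape. The crop location below (top-left corner) is an illustrative assumption, since the text does not specify where the 24 × 24 region sits:

```python
import numpy as np

def cell_cumulants(obs, crop=24, cell=4):
    """Crop a (40, 40) observation to a crop x crop region and sum the
    pixel intensities within each non-overlapping cell x cell block,
    giving one cumulant per cell (a 6 x 6 grid for crop=24, cell=4)."""
    region = obs[:crop, :crop]
    # Row-major reshape splits the region into (cell x cell) blocks.
    blocks = region.reshape(crop // cell, cell, crop // cell, cell)
    return blocks.sum(axis=(1, 3))
```

The difference-based cumulant is then simply the elementwise absolute difference of this quantity between consecutive time steps.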
Therefore, for each policy, 36 auxiliary tasks were learned in parallel for the pixel-based environments. Similar to the non-pixel-based environments, for the prediction auxiliary tasks, the behavior policy during pre-training was set to the target policy of the 36 auxiliary tasks. For the control auxiliary tasks, on the other hand, the behavior policy was round-robin over the 36 target policies, each corresponding to one of the auxiliary tasks. 4 Experimental setup and results We experimented with classic control environments and pixel-based environments. For the classic control environments, we used the Mountain Car [Moore, 1991] and Acrobot [Sutton, 1995] problems. In the Mountain Car problem, an underpowered car should reach the top of a hill starting from the bottom of the hill. The observation space includes the position and velocity of the car, with the position in [-1.2, 0.6] and the velocity in [-0.07, 0.07]. The action space includes three actions: 1) throttle forward, 2) throttle backward, and 3) no throttle. The reward is -1 on all time steps and γ = 1. Acrobot simulates a two-link underactuated robot with the goal of swinging the endpoint above the bar. The observation space includes two angles: 1) the angle between the first link and the vertical line, and 2) the angle between the two links. Note that in the original Acrobot environment, the observation space also includes the angular velocities corresponding to the two angles. In our experiments, however, to make the problem more challenging, we omitted the angular velocities. This makes the problem partially observable. The action space includes three actions: positive torque, negative torque, and no torque applied to the bottom joint. The reward is -1 at all steps and γ = 1. For the pixel-based environments, we used two instances of the Minimalistic Gridworld Environment [Chevalier-Boisvert et al., 2018]: 1) Empty Minigrid and 2) Door & Key Minigrid.
The objective is to reach the goal cell. The observation space is a 40 \u00d7 40 \u00d7 3 pixel input. The action space includes turning left, turning right, moving forward, picking up an object, dropping an object, and opening the door. The reward is 0 except when the goal is reached, in which case it is 1. \u03b3 is 0.99. In the pre-training phase, for the classic control environments, we used a neural network with two feedforward hidden layers of size 100 with ReLU nonlinearity. For Mountain Car, the position and velocity were fed into the network. For Acrobot, to deal with partial observability, we passed the angles and the action at the previous time step, in addition to the angles at the current time step, to the network as input. For Empty Minigrid, we used a neural network with one convolutional hidden layer and one feedforward hidden layer of size 64 with ReLU nonlinearity. For Door & Key Minigrid, we used a neural network with three convolutional hidden layers and one feedforward hidden layer of size 32 with ReLU nonlinearity. For the phase of learning the main task, we fed the representation learned during pre-training to a network with two feedforward hidden layers of size 100. We used learning curves to study the effect of different auxiliary tasks on performance. We first trained the representation using each of the auxiliary tasks for 500 episodes for Mountain Car, Acrobot, and Empty Minigrid, and 2000 episodes for Door & Key Minigrid. To create learning curves, we ran Q-learning with each of the learned representations 30 times (30 independent runs). To get the learning curves for Mountain Car, Acrobot, Empty Minigrid, and Door & Key Minigrid, we used 400, 200, 200, and 5000 episodes, respectively. In the case where the cumulant was equal to the observation, the representation learned for the auxiliary tasks resulted in a performance gain over the baseline of no pre-training in almost all cases (Figure 2).
In Mountain Car, the sticky actions policy and greedy policy resulted in the largest improvement over the baseline. In Acrobot and Empty Minigrid, the choice of the target policy did not have a substantial effect. In Door & Key Minigrid, the random policy resulted in the largest improvement over the baseline. [Figure 2 plots learning curves (number of steps to finish vs. episode, 30 runs) for Mountain Car, Acrobot, Empty Minigrid, and Door & Key Minigrid, comparing no pre-training against the random, greedy, sticky actions, and main task policies.] Figure 2: Learning curves for the case of cumulant equal to the observation. In the pre-training phase, the neural network was trained using the auxiliary tasks. The last hidden layer of the neural network was then fixed and used as the representation to learn the main task for 30 independent runs. In almost all cases, the representation learned for the auxiliary tasks improved over the baseline of no pre-training. In Mountain Car, the sticky actions policy and greedy policy resulted in the largest performance gain over the baseline. In Acrobot and Empty Minigrid, all auxiliary tasks resulted in similar performance. In Door & Key Minigrid, the random policy resulted in the largest performance gain. Figure 3: Learning curves for the case of cumulant equal to the observation difference. The results are for 30 independent runs. In almost all cases, the representation learned for the auxiliary tasks accelerated learning on the main task, with the greedy policy resulting in the largest improvement.
In the case where the cumulant was equal to the observation difference, in almost all cases, the representation learned for the auxiliary tasks resulted in a performance gain over the baseline of no pre-training, except for the case where the policy was the main task policy in Acrobot (Figure 3). The auxiliary task corresponding to the greedy policy resulted in the highest improvement across tasks. The results on the other policies were mixed: in the classic control environments, the sticky actions policy resulted in a good performance gain whereas the main task policy was less useful or even harmful. On the other hand, in the pixel-based environments, the main task policy resulted in good performance whereas the sticky actions policy resulted in worse performance compared to the other auxiliary tasks. The results from both classic control and pixel-based environments suggest that the auxiliary tasks based on the greedy policy tend to be useful. Note that in the case of pixel-based environments where the cumulant is the observation difference, these auxiliary tasks are similar to the well-known pixel-control auxiliary task. Another conclusion is that most policies, even the random policy, result in a performance gain over the baseline. A surprising observation is that in most cases, the auxiliary tasks corresponding to the main task policy tend to fall into the group of less useful auxiliary tasks and in some cases even result in worse performance compared to the baseline.
5" + } + ], + "Andrew Patterson": [ + { + "url": "http://arxiv.org/abs/2104.13844v2", + "title": "A Generalized Projected Bellman Error for Off-policy Value Estimation in Reinforcement Learning", + "abstract": "Many reinforcement learning algorithms rely on value estimation, however, the\nmost widely used algorithms -- namely temporal difference algorithms -- can\ndiverge under both off-policy sampling and nonlinear function approximation.\nMany algorithms have been developed for off-policy value estimation based on\nthe linear mean squared projected Bellman error (MSPBE) and are sound under\nlinear function approximation. Extending these methods to the nonlinear case\nhas been largely unsuccessful. Recently, several methods have been introduced\nthat approximate a different objective -- the mean-squared Bellman error (MSBE)\n-- which naturally facilitate nonlinear approximation. In this work, we build\non these insights and introduce a new generalized MSPBE that extends the linear\nMSPBE to the nonlinear setting. We show how this generalized objective unifies\nprevious work and obtain new bounds for the value error of the solutions of the\ngeneralized objective. 
We derive an easy-to-use, but sound, algorithm to\nminimize the generalized objective, and show that it is more stable across\nruns, is less sensitive to hyperparameters, and performs favorably across four\ncontrol domains with neural network function approximation.", "authors": "Andrew Patterson, Adam White, Martha White", "published": "2021-04-28", "updated": "2022-03-28", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "main_content": "Introduction Value functions play a central role in reinforcement learning: value-based methods act greedily towards a learned action value function; policy gradient methods often employ a critic to estimate the value of the policy in order to reduce variance; model-based methods often employ learned values for tree search, model-based value prediction, or simulation-based planning to update value estimates from hypothetical experience. Fundamental improvements in algorithms for learning value functions can have a significant impact in a variety of problem settings; as such, many algorithms have been developed to improve learning value functions. This includes a variety of variance reduction improvements for off-policy temporal difference algorithms (Precup et al., 2000; Munos et al., 2016; Mahmood et al., 2017); gradient TD methods with linear function approximation (Sutton et al., 2009; Mahadevan et al., 2014; Liu et al., 2016; Ghiassian et al., 2020) and nonlinear function approximation (Maei et al., 2009); and algorithms using approximations to the mean squared Bellman error (BE) (Dai et al., 2017, 2018; Feng et al., 2019).
TD, however, does not follow the gradient of any known objective function (Baird, 1995; Antos et al., 2008), and without a clear objective for TD, it was di\ufb03cult to extend the algorithm. Related TD-like Residual gradient algorithms directly optimize the BE, but su\ufb00ers from the double sampling problem (Baird, 1995; Scherrer, 2010). Without a strategy to optimize the BE, in the absence of a simulator, it was di\ufb03cult to pursue the BE as an alternative. Value function learning algorithms that converge under general conditions fall into two camps de\ufb01ned by the objectives they optimize. The \ufb01rst is the formalization of the objective underlying TD\u2014the mean squared projected Bellman error (PBE) (Antos et al., 2008; Sutton et al., 2009)\u2014which projects the Bellman error into the space spanned by the function approximator. Several algorithms were introduced to minimize the PBE, most of which built on the originally introduced variants: GTD2 and TD with gradient corrections (TDC) (Sutton et al., 2009). Most of these algorithm, however, are limited to linear function approximation because the PBE is de\ufb01ned only for the linear case with the exception of nonlinear GTD (Maei et al., 2009), which used a locally linear approximation for the projection. The gradient computation, however, requires a Hessian-vector product and has not been widely adopted. As such, the extension of the linear PBE for nonlinear function approximation remains open. The second camp of methods pursue the BE, which is naturally de\ufb01ned for nonlinear function approximation. The idea is to use a conjugate form for the BE (Dai et al., 2017), to reformulate it into a saddlepoint problem with an auxiliary variable, removing the double sampling problem because the resulting saddlepoint uses an estimator in place of one of the samples. 
The SBEED algorithm (Dai et al., 2018) later extended the conjugate BE to the control case by using a smoothed Bellman optimality operator and parameterizing both the policy and value function estimates. In this work, we bring together these threads of research and provide a novel objective for nonlinear value function estimation in both prediction and control. In particular, we introduce a generalized PBE, by using the conjugate reformulation of the BE. To understand the role of objectives in off-policy value estimation, it is useful to first understand the algorithmic development in off-policy learning that led to several of these objectives. We first lay out this development, and then describe the contributions in this work. 1.1 A Short History of Off-policy Temporal Difference Learning The story of off-policy learning begins with Q-learning (Watkins, 1989). Off-policy learning, in some sense, allows side-stepping the exploration-exploitation tradeoff: the agent makes use of an independent exploration policy to select actions while learning the value function for the optimal policy. The exploration policy need not maximize reward, but can instead select actions in order to generate data that improves the optimal policy through learning. Ultimately, the full potential of Q-learning\u2014and this ability to learn about one policy from data generated by a totally different exploration policy\u2014proved limited. Baird\u2019s well-known counterexample (Baird, 1995) provided a clear illustration of how, under function approximation, the weights learned by Q-learning can become unstable. Baird\u2019s counterexample highlights that divergence can occur when updating off-policy with function approximation and with bootstrapping (as in temporal difference (TD) learning), even when learning the value function of a fixed target policy from a fixed data-generating policy.
The instability of TD methods is caused by how we correct the updates to the value function to account for the potential mismatch between the target and exploration policies. Off-policy training involves estimating the total expected future reward (the value function) that would be observed while selecting actions according to the target policy, using training data (states, actions, and rewards) generated while selecting actions according to an exploration policy. One approach to account for the differences between the data produced by these two policies is based on using importance sampling corrections: scaling the update to the value function based on the agreement between the target and exploration policy at the current state. If the target and exploration policies always select the same action in a state, then they completely agree. Alternatively, if they never take the same action in a state, they completely disagree. More generally, there can be degrees of agreement. We call this approach posterior corrections because the corrections account for the mismatch between policies ignoring the history of interaction up to the current time step\u2014it does not matter what the exploration policy has done in the past. Another approach, called prior corrections, uses the history of agreement between the exploration and target policy in the update. The likelihood that the trajectory could have occurred under the target policy is used to scale the update. The most extreme version of prior corrections uses the trajectory of experience from the beginning of time, corresponding to what has sometimes been referred to as the alternative life framework. Prior and posterior corrections can be combined to achieve stable off-policy TD updates (Precup et al., 2000), though finite variance of the updates cannot be guaranteed (Precup et al., 2001).
Learning about many different policies in parallel has long been a primary motivation for off-policy learning, and this usage suggested that perhaps prior corrections are not essential. Several approaches require learning many value functions or policies in parallel, including approaches based on option models (Sutton et al., 1999b), predictive representations of state (Littman and Sutton, 2002; Tanner and Sutton, 2005; Sutton et al., 2011), and auxiliary tasks (Jaderberg et al., 2016). In a parallel learning setting, it is natural to estimate the future reward achieved by following each target policy until termination from the states encountered during training\u2014the value of taking excursions from the behavior policy. This excursion view of off-policy learning led to the development of a new objective function for off-policy TD learning called the mean squared projected Bellman error (PBE). The resultant family of Gradient-TD methods that optimize the PBE use posterior corrections via importance sampling and are guaranteed to be stable under function approximation (Sutton et al., 2009; Maei, 2011). This new excursion objective has the same fixed point as TD, thus Gradient-TD methods converge to the same solution in the cases for which TD converges. The major critiques of these methods are (1) the additional complexity due to a second set of learned parameters, and (2) the variance due to importance sampling corrections. (Footnote: The action-value star MDP can be found in the errata of Baird\u2019s paper (Baird, 1995).) A wide array of algorithms arose to address these limitations and improve sample efficiency.
The strategies include (1) using a saddlepoint reformulation that facilitates the use of optimization accelerations (Liu et al., 2016; Du et al., 2017; Liu et al., 2015) and a finite sample analysis (Touati et al., 2018), (2) hybrid TD methods that behave like TD when sampling is on-policy (Hackman, 2013; White and White, 2016), and (3) variance reduction methods for posterior corrections, using different eligibility trace parameters (Precup et al., 2000; Munos et al., 2016; Wang et al., 2016; Mahmood et al., 2017). The second major family of off-policy methods revisits the idea of using prior corrections. The idea is to incorporate prior corrections only from the beginning of the excursion. In this way, the values of states that are more often visited under the target policy are emphasized, but the high variance of full prior corrections\u2014to the beginning of the episode\u2014is avoided. The Emphatic TD(\u03bb) algorithm is based on this idea (Sutton et al., 2016) and was later extended to reduce the variance of the emphasis weights (Hallak et al., 2016). These Emphatic TD methods are guaranteed stable under both on-policy and off-policy sampling with linear function approximation (Sutton et al., 2016; Yu, 2015; Hallak et al., 2016). Practitioners often bemoan both the complexity and poor performance of these sound off-policy algorithms, favouring instead to combine off-policy TD with heuristics to reduce variance. Many of the algorithms discussed above have not been developed for the case of nonlinear function approximation. On the other hand, off-policy updating is used pervasively in deep learning architectures for learning from a replay buffer, demonstrations, and trajectories generated by other agents (i.e., asynchronous architectures).
In order to get the benefits of off-policy updates and mitigate variance, we can truncate the corrections: the Retrace algorithm does exactly this (Munos et al., 2016; Wang et al., 2016). Retrace and the much older Tree Backup algorithm can be viewed as adapting the eligibility trace to reduce variance, similar to the ABQ algorithm (Mahmood et al., 2017). These three methods can be extended to utilize gradient corrections to achieve stability, but this would only yield another batch of theoretically sound but practically ignored methods. In this paper, we explore an alternative path: extending gradient TD methods to both control and nonlinear function approximation through the development of a new objective function. 1.2 Revisiting the Objective for Value Estimation Central to all this development has been the definition of a precise objective: the linear PBE. The proliferation of algorithms, however, has resulted in a focus on comparing and discussing specific algorithms (Dann et al., 2014; White and White, 2016; Touati et al., 2018). This breeds confusion in exactly how the algorithms are related and what objective they are attempting to optimize. By separating the objective and the optimization strategies, we can more systematically understand the differences in solutions under objectives and how effective the optimization strategies are at reaching those solutions. Two key questions emerge about the specification of the objective: the form of the Bellman error and the weighting across states. The first question is really a revisitation of the long-standing question about whether we should use the BE or the PBE. In terms of solution quality, the evidence is mixed: the BE avoids some counterexamples that exist for the PBE (Scherrer, 2010); however, the PBE often produces a better solution (Dann et al., 2014; Scherrer, 2010; Sutton and Barto, 2018).
Further, the BE has been shown to have an identifiability problem (Sutton and Barto, 2018). Though the evidence comparing the BE and PBE is inconclusive, the PBE has been the default choice because many algorithms optimize it. The BE, on the other hand, is typically avoided due to the double sampling problem, where it is unclear how to obtain an unbiased sample of the gradient without a simulator. Recently, however, this technical challenge has been overcome with the introduction of a saddlepoint form for the BE (Dai et al., 2017, 2018; Feng et al., 2019). The resulting algorithms are similar to the saddlepoint algorithms for the linear PBE: a second estimator is used to estimate a part of the gradient. The BE is particularly alluring, as it equally applies to the linear and nonlinear value estimation settings. The PBE, on the other hand, was defined for the linear setting, due to the difficulty in computing the projection operator for the nonlinear setting. These potential advantages, as well as a viable strategy for optimizing the BE, motivate the utility of answering which of these objectives might be preferable. The second question is about the weighting on states in the objective, which must be specified for either the BE or PBE. The importance of the weighting on states has been well-recognized for many years, and is in fact the reason TD diverges on Baird\u2019s counterexample: using the stationary distribution of the behavior policy, rather than the target policy, results in an iterative update that is no longer a contraction. The emphatic algorithms were introduced to adjust this weighting to ensure convergence. This reweighting, however, is not only about convergence; it also changes the fixed point and the quality of the solution. There has been some work investigating the impact of the state weighting on the optimal solution, not just on the behavior of the updates themselves.
The most stark result is a simple example where using the solution to the PBE can result in an arbitrarily poor mean squared value error (VE) (Kolter, 2011). Several later results extended on-policy bounds on the VE to the off-policy setting, showing that the VE could be bounded in the off-policy setting using careful choices of the state weighting\u2014namely using state weightings given by Emphatic TD (Hallak et al., 2016; White, 2017). Despite these insights, the role of the weighting on the quality of the solution in practice remains open. A natural question is: how do we choose the state weighting in the objective, and how much does it matter? 1.3 Contributions In this work, we focus on the question: what objective should we use for off-policy value estimation? We first summarize many existing off-policy algorithms as optimizing the linear PBE in different ways and in some cases with different state weightings. This summary separates the optimization strategy from the definition of the objective, allowing us to move away from specific algorithms to understanding the differences in fixed points obtained under the different objectives. We then propose a generalized PBE that uses a generalized projection operator that both extends the PBE to the nonlinear setting and unifies the BE and linear PBE under one objective. Using these insights, we provide the following specific contributions: 1. We show how the generalized PBE helps resolve the non-identifiability of the BE, where a particular projection in the generalized PBE provides an Identifiable BE. 2. We highlight the role of the state weighting in this generalized objective, both extending theoretical results bounding the VE and empirically showing that the emphatic weighting can significantly improve the quality of the solution. 3.
We show that these insights also extend to control by defining an objective for learning action values with (soft) maximal operators. We use this objective to derive a sound gradient variant of Q-learning. 4. We exploit the connection to the linear PBE to develop a more effective algorithm for the generalized PBE using gradient corrections rather than the saddlepoint update. 5. Finally, we demonstrate the utility of these prediction and control algorithms in several small benchmark domains. 2. Problem Definition and Background We consider the problem of learning the value function for a given policy under the Markov Decision Process (MDP) formalism. The agent interacts with the environment over a sequence of discrete time steps, t = 1, 2, 3, . . .. On each time step, the agent observes a partial summary of the state St \u2208S and selects an action At \u2208A. In response, the environment transitions to a new state St+1 according to transition function P(St+1|St, At) and emits a scalar reward Rt+1 \u2208R. The agent selects actions according to a stochastic, stationary target policy \u03c0 : S \u00d7 A \u2192[0, 1]. We study the problem of value estimation: the computation or estimation of the expected discounted sum of future rewards for policy \u03c0 from every state. The return at time t, denoted Gt \u2208R, is defined as the discounted sum of future rewards. The discount factor can be variable, dependent on the entire transition: \u03b3 : S \u00d7 A \u00d7 S \u2192[0, 1], with \u03b3t+1 def = \u03b3(St, At, St+1). The return is defined as Gt def = Rt+1 + \u03b3t+1Rt+2 + \u03b3t+1\u03b3t+2Rt+3 + \u03b3t+1\u03b3t+2\u03b3t+3Rt+4 + . . . = Rt+1 + \u03b3t+1Gt+1. When \u03b3t is constant, \u03b3c, we get the familiar return Gt = Rt+1 + \u03b3cRt+2 + \u03b3c^2Rt+3 + . . .. Otherwise, variable \u03b3t can discount per transition, including encoding termination when it is set to zero.
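The recursive form of the return above, with a transition-dependent discount, can be computed with a single backward pass over a trajectory; a minimal sketch (names are ours, not the paper's):

```python
def compute_returns(rewards, discounts):
    """G_t = R_{t+1} + gamma_{t+1} * G_{t+1}, computed backwards.

    rewards[t] holds R_{t+1} and discounts[t] holds gamma_{t+1};
    a discount of 0 encodes termination, as described above."""
    G = 0.0
    out = []
    for r, g in zip(reversed(rewards), reversed(discounts)):
        G = r + g * G
        out.append(G)
    return out[::-1]

# Three steps with a constant gamma of 0.5 and termination on the last
# transition: the constant-discount case recovers the geometric return.
print(compute_returns([1.0, 1.0, 1.0], [0.5, 0.5, 0.0]))  # [1.75, 1.5, 1.0]
```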
This generalization ensures we can discuss episodic problems without introducing absorbing states (White, 2017). It also enables the derivations and theory to apply to both the continuing and episodic settings. The value function v : S \u2192R maps each state to the expected return under policy \u03c0 starting from that state: v\u03c0(s) def = E\u03c0[Gt | St = s], for all s \u2208S (1) where the expectation operator E\u03c0[\u00b7] reflects that the distribution over future actions is given by \u03c0, to distinguish from a potentially different behavior policy. In this paper, we are interested in problems where the value of each state cannot be stored in a table; instead we must approximate the value with a parameterized function. The approximate value function \u02c6vw(st) can have arbitrary form as long as it is everywhere differentiable with respect to the weights w \u2208Rd. Typically, the number of components in w is much fewer than the number of possible states (d \u226a|S|), thus \u02c6v will generalize values across many states in S. An important special case is when the approximate value function is linear in the parameters and in features of the state. In particular, the current state St is converted into feature vector xt \u2208Rd by some fixed mapping x : S \u2192Rd. The value of the state can then be approximated with an inner product: \u02c6vw(st) = w\u22a4xt \u2248v\u03c0(st). Another typical parameterization for \u02c6vw(st) is a neural network where w consists of all the weights in the network. We refer to w exclusively as the weights, or weight vector, and reserve the word parameter for variables like the discount-rate and stepsize parameters. We first describe how to learn this value function for the on-policy setting, where the behavior policy equals the target policy.
Temporal difference learning (Sutton, 1988) is perhaps the best known and most successful approach for estimating \u02c6v directly from samples generated while interacting with the environment. Instead of waiting until the end of a trajectory to update the value of each state, the TD(\u03bb) algorithm adjusts its current estimate of the weights toward the difference between the discounted estimate of the value in the next state and the estimated value of the current state plus the reward along the way: \u03b4t def = \u03b4(St, At, St+1) def = Rt+1 + \u03b3t+1\u02c6vw(St+1) \u2212\u02c6vw(St). (2) We use the value function\u2019s own estimate of future reward as a placeholder for the future rewards defining Gt that are not available on time-step t + 1. In addition, the TD(\u03bb) algorithm also maintains an eligibility trace vector zt \u2208Rd that stores a fading trace of recent feature activations. The components of wt are updated on each step proportional to the magnitude of the trace vector. This simple scheme allows update information to propagate back, impacting the value estimates for previously encountered states. The update equations for TD(\u03bb) are straightforward: wt+1 \u2190wt + \u03b1\u03b4tzt, zt \u2190\u03b3t\u03bbzt\u22121 + \u2207\u02c6vw(St), where \u03b1 \u2208R is the scalar stepsize parameter that controls the speed of learning and \u03bb \u2208R controls the length of the eligibility trace. Under linear function approximation, intermediate values of \u03bb between zero and one often perform best. TD(\u03bb) is only sound for the linear function approximation setting, though TD(0) is often used outside the linear setting and often obtains good performance. These updates need to be modified for the off-policy case, where the agent selects actions according to a behavior policy b : S \u00d7 A \u2192[0, 1] that is different from the target policy.
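As a minimal sketch of the on-policy TD(\u03bb) update above, under linear function approximation where the gradient of the value estimate is just the feature vector (function and argument names are ours):

```python
import numpy as np

def td_lambda_step(w, z, x, x_next, r, gamma, lam, alpha):
    """One on-policy linear TD(lambda) step with v_hat(s) = w.T x."""
    delta = r + gamma * np.dot(w, x_next) - np.dot(w, x)  # TD error, Eq. (2)
    z = gamma * lam * z + x                               # eligibility trace
    w = w + alpha * delta * z                             # weight update
    return w, z

# Single step from zero weights: delta = r, so the weights move toward
# the reward along the trace direction.
w, z = td_lambda_step(np.zeros(2), np.zeros(2),
                      np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                      r=1.0, gamma=0.9, lam=0.5, alpha=0.1)
```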
The value function for target policy \u03c0 is updated using experience generated from a behavior policy that is off, away, or distant from the target policy. For example, consider the most well-known off-policy algorithm, Q-learning. The target policy might be the one that maximizes future discounted reward, while the behavior is nearly identical to the target policy, but instead selects an exploratory action on each time step with some small probability. More generally, the target and behavior policies need not be so closely coupled. The main requirement linking these two policies is that the behavior policy covers the actions selected by the target policy in each state visited by b, that is: b(a|s) > 0 for all states and actions in which \u03c0(a|s) > 0. 3. Off-policy Corrections and the Connection to State Weightings The key problem in off-policy learning is to estimate the value function for the target policy conditioned on samples produced by actions selected according to the behavior policy. This is an instance of the problem of estimating an expected value under some target distribution from samples generated by some other behavior distribution. In statistics, this problem can be addressed with importance sampling, and most methods of off-policy reinforcement learning use such corrections. There are two distributions that we could consider correcting: the distribution over actions, given the state, and the distribution over states. When observing a transition (S, A, S\u2032, R) generated by taking the action according to b(\u00b7|S), we can correct the update for that transition so that, in expectation, we update as if actions were taken according to \u03c0(\u00b7|S). However, these updates are still different than if we evaluated \u03c0 on-policy, because the frequency of visiting state S under b will be different than under \u03c0.
All methods correct for the distribution over actions (posterior corrections), given the state, but several methods also correct for the distribution over states (prior corrections). In this section, we discuss the difference in the underlying objective for these updates. We give a brief introduction to prior and posterior corrections, and provide a more in-depth example of the differences in their updates in Appendix A for readers interested in a more intuitive explanation. We show that the underlying objective differs in terms of the state weighting and discuss how emphatic weightings further modify this state weighting in the objective. 3.1 Posterior Corrections The most common approach to developing sound off-policy TD algorithms makes use of posterior corrections based on importance sampling. One of the simplest examples of this approach is Off-policy TD(\u03bb). The procedure is easy to implement and requires constant computation per time step, given knowledge of both the target and behavior policies. On the transition from St to St+1 via action At, we compute the ratio between \u03c0 and b: \u03c1t def = \u03c1(At|St) def = \u03c0(At|St)/b(At|St). (3) These importance sampling corrections simply weight the eligibility trace update: wt+1 \u2190wt + \u03b1\u03b4tz\u03c1t, z\u03c1t \u2190\u03c1t(\u03b3t\u03bbz\u03c1t\u22121 + xt), (4) where \u03b4t is defined in Equation (2). This way of correcting the sample updates ensures that the approximate value function \u02c6v estimates the expected value of the return as if the actions were selected according to \u03c0. Posterior correction methods use the target policy probabilities for the selected action to correct the update to the value of state St using only the data from time step t onward. Values of \u03c0 from time steps prior to t have no impact on the correction.
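A small sketch of the posterior-corrected update in Equations (3)-(4), again under linear function approximation (variable names are illustrative, not from the paper):

```python
import numpy as np

def off_policy_td_lambda_step(w, z, x, x_next, r, gamma, lam, alpha, pi_a, b_a):
    """One Off-policy TD(lambda) step: the importance sampling ratio
    rho = pi(A|S) / b(A|S) weights the eligibility trace update."""
    rho = pi_a / b_a                                      # Eq. (3)
    delta = r + gamma * np.dot(w, x_next) - np.dot(w, x)  # TD error, Eq. (2)
    z = rho * (gamma * lam * z + x)                       # corrected trace, Eq. (4)
    w = w + alpha * delta * z
    return w, z

# If the target policy is twice as likely as the behavior policy to take the
# sampled action, the trace (and hence this step's update) is doubled.
w, z = off_policy_td_lambda_step(np.zeros(2), np.zeros(2),
                                 np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                                 r=1.0, gamma=0.9, lam=0.5, alpha=0.1,
                                 pi_a=0.6, b_a=0.3)
```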
Many modern off-policy prediction methods use some form of posterior corrections, including the Gradient-TD methods, Tree Backup(λ), V-trace(λ), and Emphatic TD(λ). In fact, all off-policy prediction methods with stability guarantees make use of posterior corrections via importance sampling. Only correcting the action distribution, however, does not necessarily provide stable updates, and Off-policy TD(λ) is not guaranteed to converge (Baird, 1995). To obtain stable Off-policy TD(λ) updates, we need to consider corrections to the state distribution as well, as we discuss next.

3.2 Prior Corrections

We can also consider correcting for the differences between the target and behavior policy by using the agreement between the two over a trajectory of experience. Prior correction methods keep track of the product of either ∏_{k=1}^t π(A_k|S_k) or ∏_{k=1}^t π(A_k|S_k)/b(A_k|S_k), and correct the update to the value of S_t using the current value of the product. Therefore, the value of S_t is only updated if the product is not zero, meaning that the behavior policy never selected an action for which π(A_k|S_k) was zero: the behavior never completely deviated from the target policy. To appreciate the consequences of incorporating these prior corrections into the TD update, consider a state-value variant of the Off-policy TD(λ) algorithm of Precup et al. (2000):

    z^ρ_t ← ρ_t (γ_t λ z_{t−1} + (∏_{k=1}^{t−1} ρ_k) x_t)
    w_{t+1} ← w_t + α δ_t z^ρ_t,    (5)

where z^ρ_0 = 0. We refer to the above algorithm as Alternative-life TD(λ), as the product of importance sampling ratios in the trace simulates an agent experiencing an entirely different trajectory from the beginning of the episode: an alternative life. Note that the trace is always reinitialized at the start of the episode.
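The Alternative-life trace in Equation 5 can be sketched by carrying the running product of ratios alongside the trace; the names and constants below are illustrative:

```python
import numpy as np

def alternative_life_trace(z, x, rho_t, rho_prod_prev, gamma=0.9, lam=0.8):
    """Equation 5 trace update, where rho_prod_prev = prod_{k=1}^{t-1} rho_k."""
    z = rho_t * (gamma * lam * z + rho_prod_prev * x)
    return z, rho_prod_prev * rho_t   # new trace and updated product

z, rho_prod = np.zeros(2), 1.0
z, rho_prod = alternative_life_trace(z, np.array([1.0, 0.0]),
                                     rho_t=0.5, rho_prod_prev=rho_prod)
print(z, rho_prod)
```

Once any ρ_k is zero, the product stays zero for the rest of the episode, which is one way to see why this style of correction discards data and suffers high variance.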
The Alternative-life TD(λ) algorithm has been shown to converge under linear function approximation, but in practice exhibits unacceptable variance (Precup et al., 2001). The Emphatic TD(λ) algorithm, on the other hand, provides an alternative form for the prior corrections that is lower variance but still guarantees convergence. To more clearly explain why, next we will discuss how different prior corrections account for different weightings in optimizing the mean squared Projected Bellman Error (PBE).

3.3 The Linear PBE under Posterior and Prior Corrections

In this section, we describe how different forms of prior corrections correspond to optimizing similar objectives, but with different weightings over the states. This section introduces the notation required to explain the many algorithms that optimize the linear PBE and clarifies convergence properties of algorithms, including which algorithms converge and to which fixed point. We start with the linear PBE, as most algorithms have been designed to optimize it. In the next section, we extend beyond the linear setting to discuss the generalized PBE. We begin by considering a simplified setting with λ = 0 and a simplified variant of the linear PBE, called the NEU (norm of the expected TD update; Sutton et al., 2009):

    NEU(w) = ‖ Σ_{s∈S} d(s) E_π[δ(S, A, S′) x(S) | S = s] ‖²₂,    (6)

where d : S → [0, ∞) is a positive weighting on the states. Note the transition (S, A, S′) is random, as is the TD error. Equation 6 does not commit to a particular sampling strategy. If the data is sampled on-policy, then d = d_π, where d_π : S → [0, 1] is the stationary distribution for π, which represents the state visitation frequency when following policy π in the MDP. If the data is sampled off-policy, then the objective might instead be weighted by the state visitation frequency under b, i.e., d = d_b.
We first consider how to sample the NEU for a given state. The behavior selects actions in each state s, so the update δ_t x_t must be corrected for the action selection probabilities of π in state s, namely a posterior correction:

    E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = s]
      = Σ_{a∈A} π(a|s) Σ_{s′∈S} P(s′|s, a) δ(s, a, s′) x(s)
      = Σ_{a∈A} b(a|s) Σ_{s′∈S} P(s′|s, a) (π(a|s)/b(a|s)) δ(s, a, s′) x(s)
      = E_b[ρ(A_t|S_t) δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = s].    (7)

Therefore, the update ρ_t δ_t x_t provides an unbiased sample of the desired expected update E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = s]. Most off-policy methods (except Q-learning and Tree Backup) use these posterior corrections.

We can also adjust the state probabilities from d_b to d_π, using prior corrections. Consider the expected update using prior corrections, starting in s_0 and taking two steps following b:

    E_b[ρ_0 ρ_1 E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = S_2] | S_0 = s_0]
      = E_b[ρ_0 Σ_{a_1∈A} b(a_1|S_1) ρ(a_1|S_1) Σ_{s_2∈S} P(s_2|S_1, a_1) E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = s_2] | S_0 = s_0]
      = E_b[ρ_0 Σ_{a_1∈A} π(a_1|S_1) Σ_{s_2∈S} P(s_2|S_1, a_1) E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = s_2] | S_0 = s_0]
      = E_b[ρ_0 E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_{t−1} = S_1] | S_0 = s_0]
      = Σ_{a_0∈A} π(a_0|s_0) Σ_{s_1∈S} P(s_1|s_0, a_0) E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_{t−1} = s_1]
      = E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_0 = s_0].

More generally, we get

    E_b[ρ_1 ⋯ ρ_{t−1} E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_t = s] | S_0 = s_0] = E_π[δ(S_t, A_t, S_{t+1}) x(S_t) | S_0 = s_0].

These corrections adjust the probability of the sequence from the beginning of the episode as if policy π had taken actions A_1, . . .
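Equation 7 can be checked numerically on a hypothetical one-state example with two actions; the probabilities and TD-error values below are made up:

```python
import numpy as np

pi = np.array([0.8, 0.2])        # target policy in state s
b = np.array([0.5, 0.5])         # behavior policy in state s
delta_x = np.array([1.0, -2.0])  # delta(s, a, s') * x(s) for each action

expected_pi = pi @ delta_x               # E_pi[delta x | S = s]
expected_b = b @ ((pi / b) * delta_x)    # E_b[rho delta x | S = s]
print(expected_pi, expected_b)           # identical, as Equation 7 shows
```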
, A_{t−1} to get to S_t, from which we do the TD(λ) update. A natural question is which objective should be preferred: the alternative-life (d ∝ d_π) or the excursions objective (d ∝ d_b). As with all choices for objectives, there is no crisp answer. The alternative-life objective is difficult to optimize because prior corrections can become very large or zero, causing data to be discarded, and is thus high variance. There is some work that directly estimates d_π(s)/d_b(s) and pre-multiplies to correct the update (Hallak and Mannor, 2017; Liu et al., 2020); this approach, however, requires estimating the visitation distributions. On the other hand, the fixed-point solution to the excursion objective can be arbitrarily poor compared with the best value function in the function approximation class if there is a significant mismatch between the behavior and target policy (Kolter, 2011). Better solution accuracy can be achieved using an excursion's weighting that includes d_b, but additionally reweights to make the state distribution closer to d_π, as is done with Emphatic TD(λ). We discuss this alternative weighting in the next section.

The above discussion focused on a simplified variant of the PBE with λ = 0, but the intuition is the same for the PBE with λ > 0. To simplify notation we introduce a conditional expectation operator:

    E_d[Y] ≝ Σ_{s∈S} d(s) E_π[Y | S = s].

We can now define

    C ≝ E_d[x(S) x(S)ᵀ]
    A ≝ −E_d[z(S) (γ(S, A, S′) x(S′) − x(S))ᵀ]
    b ≝ E_d[R(S, A, S′) z(S)],

where the expected eligibility trace z(S) ∈ ℝ^k is defined recursively:

    z(s) ≝ x(s) + λ E_π[γ(S_{t−1}, A_{t−1}, S_t) z(S_{t−1}) | S_t = s].

We can write the TD(λ) fixed-point residual as

    E_d[δ(S, A, S′) z(S)] = −Aw + b,    (8)

where E_d[δ(S, A, S′) z(S)] = 0 at the fixed-point solution for on-policy TD(λ).
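To make Equation 8 concrete: at the fixed point the residual −Aw + b vanishes, so the TD(λ) solution solves a linear system. A sketch with made-up A and b, which would in practice be the expectations defined above:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.3, 1.5]])       # stand-in for the expectation E_d[...]
b = np.array([1.0, 0.5])

w_fixed = np.linalg.solve(A, b)  # makes the residual -A w + b zero
residual = -A @ w_fixed + b
print(residual)
```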
The linear PBE can then be defined given the definitions above:

    linear PBE(w) ≝ (−Aw + b)ᵀ C⁻¹ (−Aw + b) = ‖−Aw + b‖²_{C⁻¹}.    (9)

The only difference compared with the NEU is the weighted ℓ₂ norm, weighted by C⁻¹, instead of simply ‖−Aw + b‖²₂.

Notation Remark: For the rest of the paper, we will not explicitly write the random variables as functions of (S, A, S′). For example, we will use E_π[δ | S = s] to mean E_π[δ(S, A, S′) | S = s] and E_π[γ | S = s] to mean E_π[γ(S, A, S′) | S = s].

3.4 Emphatic Weightings as Prior Corrections

Emphatic Temporal Difference learning, ETD(λ), provides an alternative strategy for obtaining stability under off-policy sampling without computing the gradient of the linear PBE. The key idea is to incorporate some prior corrections so that the weighting d results in a positive definite matrix A. Given such an A, the TD(λ) update (a semi-gradient algorithm) can be shown to converge. Importantly, this allows for a stable off-policy algorithm with only a single set of learned weights. Gradient-TD methods, on the other hand, use two stepsize parameters and two weight vectors to achieve stability.

Emphatic TD(λ), or ETD(λ), minimizes a variant of the linear PBE defined in Equation 9, where the weighting d is defined based on the followon weighting. The followon reflects (discounted) state visitation under the target policy when doing excursions from the behavior: starting from states sampled according to d_b. The followon is defined as

    f(s_t) ≝ d_b(s_t) + Σ_{s_{t−1}, a_{t−1}} d_b(s_{t−1}) π(a_{t−1}|s_{t−1}) P(s_t|s_{t−1}, a_{t−1}) γ(s_{t−1}, a_{t−1}, s_t) + … .    (10)

The emphatic weighting is m(s_t) = d_b(s_t)λ + (1 − λ) f(s_t). This is the state weighting ETD(λ) uses in the linear PBE in Equation 9, setting d(s) = m(s).
ETD(λ) uses the updates

    F_t ← ρ_{t−1} γ_t F_{t−1} + 1,    with M_t = λ_t + (1 − λ_t) F_t
    z^ρ_t ← ρ_t (γ_t λ z^ρ_{t−1} + M_t x_t)
    w_{t+1} ← w_t + α δ_t z^ρ_t,

with F_0 = 1 and z^ρ_0 = 0. To gain some intuition for this weighting, consider the trace updates for Alternative-life TD(0) and ETD(0) with constant γ. For ETD(0), M_t = F_t = Σ_{j=0}^t γ^j ∏_{i=1}^j ρ_{t−i}, giving

    z^ρ_t ← ρ_t (γλ z^ρ_{t−1} + (Σ_{j=0}^t γ^j ∏_{i=1}^j ρ_{t−i}) x_t).

The trace for Alternative-life TD(0) (see Equation 5) is

    z^ρ_t ← ρ_t (γ_t λ z^ρ_{t−1} + (∏_{i=1}^t ρ_i) x_t).

Both adjust the weighting on x_t to correct the state distributions. Alternative-life TD more aggressively downweights states that would not have been visited under the target policy, because it only has a product, whereas ETD uses a sum over all t up to that point.

Emphatic TD(λ) has strong convergence guarantees in the case of linear function approximation. ETD(λ) under off-policy training has been shown to converge in expectation, using the same expected-update analysis used to show that TD(λ) converges under on-policy training. Later, Yu (2015) extended this result to show that ETD(λ) converges with probability one. This weighting also resolves the issues raised by Kolter's example (Kolter, 2011). Kolter's example demonstrated that for a particular choice of π and b, the solution to the linear PBE could result in arbitrarily bad value error compared with the best possible approximation in the function class. In other words, even if the true value function can be well approximated by the function class, the off-policy fixed point from the linear PBE with weighting d = d_b can result in an arbitrarily poor approximation to the values.
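One ETD(λ) step, following the recursions above; all numeric values are illustrative assumptions:

```python
import numpy as np

def etd_step(w, z, F, x, x_next, r, rho, rho_prev,
             alpha=0.1, gamma=0.9, lam=0.5):
    """One ETD(lambda) update: followon F, emphasis M, trace z, weights w."""
    F = rho_prev * gamma * F + 1.0           # followon trace
    M = lam + (1.0 - lam) * F                # emphasis
    delta = r + gamma * w @ x_next - w @ x   # TD error
    z = rho * (gamma * lam * z + M * x)      # emphasized, corrected trace
    w = w + alpha * delta * z
    return w, z, F

w, z, F = np.zeros(2), np.zeros(2), 1.0
w, z, F = etd_step(w, z, F, np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                   r=1.0, rho=1.0, rho_prev=1.0)
print(w, z, F)
```

Note that only F carries history between steps, a single scalar, which is why the emphasis corrections are so much cheaper in variance than the full product in Equation 5.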
In Section 6, we explain why the fixed points of the linear PBE with the emphatic weighting do not suffer from this problem, expanding on (Hallak et al., 2016, Corollary 1) and (White, 2017, Theorem 1).

3.5 Broadening the Scope of Weightings

To determine which weightings to consider, we need to understand the role of the weighting. There are actually two possible roles. The first is to specify states of interest: determining the relative importance of a state for the accuracy of our value estimates compared to the true values. This provides the weighting in the value error:

    VE(w) ≝ Σ_{s∈S} d(s) (v̂_w(s) − v_π(s))².

The second is the choice of weighting in our objective, such as the linear PBE, which is a surrogate for the value error.

For the first question, we need to determine the relative importance of states. The choice depends on the purpose of the value estimation. For example, if the policy is being evaluated for deployment in an episodic problem, a common choice is to put all weight on the set of start states (Sutton et al., 1999a; Bottou et al., 2013), because this reflects the expected return in each episode. On the other hand, if many value functions are learned in parallel, as in predictive representations (Sutton et al., 2011; White, 2015) or auxiliary tasks (Jaderberg et al., 2016), it may be better to ensure the predictions are accurate in many states. It is also possible that values from some states might be queried much more often, or that some states might correspond to important catastrophic events for which accurate predictions are needed for accurate decision-making. Overall, the choice of d is subjective. Once we make this choice for our evaluation objective, we can ask the second question: which optimization objectives and which weightings are most effective for minimizing our evaluation objective?
It is not obvious that the minimum of the linear PBE with weighting d_b provides the best solution to the VE with weighting d_b, for example. In fact, we know that the linear PBE with d = d_b suffers from a counterexample (Kolter, 2011), whereas using the emphatic weighting in the linear PBE provides an upper bound on the VE under weighting d_b. We discuss the potential utility of using a different weighting for the objective than the desired weighting in the VE in Section 6.

4. Broadening the Scope of Objectives

In this section, we discuss how to generalize the linear PBE to obtain the generalized PBE. This objective allows for nonlinear value estimation and unifies the BE and linear PBE under one objective.

4.1 An Overview of Existing Objectives

Let us start by discussing the standard evaluation objective used for policy evaluation: the mean squared value error (VE),

    VE(w) ≝ Σ_{s∈S} d(s) (v̂_w(s) − v_π(s))².    (11)

The approximation v̂_w(s) is penalized more heavily for inaccurate value estimates in highly weighted states s. This objective cannot be directly optimized because it requires access to v_π(s). One way to indirectly optimize the VE is to use the mean squared return error (RE):

    RE(w) ≝ Σ_{s∈S} d(s) E_π[(v̂_w(s) − G_t)² | S_t = s].    (12)

The minima of RE(w) and VE(w) are the same because their gradients are equal:

    ∇RE(w) = 2 Σ_{s∈S} d(s) E_π[(v̂_w(s) − G_t) ∇v̂_w(s) | S_t = s]
           = 2 Σ_{s∈S} d(s) (v̂_w(s) − E_π[G_t | S_t = s]) ∇v̂_w(s)
           = 2 Σ_{s∈S} d(s) (v̂_w(s) − v_π(s)) ∇v̂_w(s)
           = ∇VE(w).
Figure 1: The visualization above characterizes the true v_π, the PBE solution, and how projections operate on successive approximations. Assume the estimate of v_π starts from v in red. The Bellman operator pushes the value estimate out of the space of representable functions, represented by the plane (note this corresponds to F = H introduced in Section 4.2). The projection brings the approximation back down to the nearest representable function on the plane. This process is repeated over and over until the value estimates converge to the blue dot at the base of the black line. Subsequent updates push the approximation to v_π out of the space of representable functions, and the projection brings it back onto the plane. The true value in this case is outside of F, with the VE being the distance between the v at PBE = 0 and v_π. Note the projection of v_π onto F need not be equal to the PBE solution.
In practice, the RE is rarely used because it requires obtaining samples of entire returns (only updating at the end of episodes). Instead, bootstrapping is used, and so forms of the Bellman error are used, as in the BE and PBE. The BE reflects the goal of approximating the fixed-point formula given by the Bellman operator T, defined as

    T v̂_w(s) ≝ E_π[R + γ v̂_w(S′) | S = s]    for all s.    (13)

When equality is not possible, the difference is minimized, as in the BE:

    BE(w) ≝ Σ_{s∈S} d(s) (T v̂_w(s) − v̂_w(s))² = Σ_{s∈S} d(s) E_π[δ(w) | S = s]²,    (14)

where we write δ(w) to be explicit that this is the TD error for the parameters w.

There has been much discussion, formal and informal, about using the BE versus the PBE. The BE can be decomposed into the PBE and a projection penalty term (Scherrer, 2010). To understand why, recall the definition of a projection operator. For a vector space F, the projection of a vector v onto F is the closest point under a given (weighted) norm ‖·‖_d: min_{u∈F} ‖u − v‖_d. This definition also applies to function spaces. Let Π_{F,d} be the weighted projection onto the space of value functions, defined as

    Π_{F,d} v ≝ arg min_{u∈F} ‖u − v‖_d.

For a given vector v ∈ F ⊆ ℝ^{|S|}, composed of value estimates for each state, we get

    ‖v − T v‖²_d = ‖v − Π_{F,d} T v‖²_d + ‖T v − Π_{F,d} T v‖²_d,    (15)

where the first term on the right is the PBE and the second is the projection penalty. This penalty causes the BE to prefer value estimates for which the projection does not have a large impact near the solution.
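The decomposition in Equation 15 can be verified numerically for a weighted projection onto a linear feature subspace. The three-state example below is made up, and v is chosen inside F, which the decomposition requires:

```python
import numpy as np

d = np.array([0.5, 0.3, 0.2])        # state weighting
D = np.diag(d)
X = np.array([[1.0], [1.0], [0.0]])  # one feature, three states; F = span(X)

def project(u):
    """d-weighted projection of u onto F = span(X)."""
    return X @ np.linalg.solve(X.T @ D @ X, X.T @ D @ u)

def sq_norm(u):
    """Squared d-weighted norm ||u||^2_d."""
    return u @ D @ u

v = np.array([1.0, 1.0, 0.0])        # value estimates, chosen inside F
Tv = np.array([0.8, 0.9, 0.1])       # stand-in for the Bellman operator

be = sq_norm(v - Tv)
pbe = sq_norm(v - project(Tv))
penalty = sq_norm(Tv - project(Tv))
print(be, pbe + penalty)             # the two sides of Equation 15 agree
```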
Figure 2: A comparison of the BE and PBE solutions when the true value function is not representable. As before, we visualize how the approximation that minimizes the PBE at convergence can be far from v_π, with a large projection penalty. The approximate value function that minimizes the BE, on the other hand, is closer to v_π and typically has a smaller projection penalty (note the Bellman operator would indeed push v_BE outside F).

The PBE can find a fixed point where applying the
Bellman operator T v moves far outside the space of representable functions, as long as the projection back into the space stays at v. The projection penalty is sensible, and in fact prevents some of the counterexamples on the solution quality for the PBE discussed in Section 6. We visualize the projection penalty and a potential solution under the PBE in Figure 1, and contrast this with a potential solution under the BE in Figure 2.

Despite the potential utility of the BE, it has not been widely used due to difficulties in optimizing this objective without a model. The BE is difficult to optimize because of the well-known double sampling problem for the gradient. To see why, consider the gradient

    ∇_w BE(w) = Σ_{s∈S} d(s) ∇_w E_π[δ(w) | S = s]²
              = 2 Σ_{s∈S} d(s) E_π[δ(w) | S = s] E_π[∇_w δ(w) | S = s]
              = 2 Σ_{s∈S} d(s) E_π[δ(w) | S = s] E_π[γ ∇_w v̂_w(S′) − ∇_w v̂_w(s) | S = s].

To estimate this gradient for a given S = s, we need two independent samples of the next state and reward. We use the first to get a sample of δ(w) and the second to get a sample of γ ∇_w v̂_w(S′) − ∇_w v̂_w(s). The product of these two samples gives an unbiased sample of the product of the expectations. If we instead only used one sample, we would erroneously obtain a sample of E_π[δ(w)(γ ∇_w v̂_w(S′) − ∇_w v̂_w(s)) | S = s].

One promising attempt to approximate the BE used a non-parametric approach (Feng et al., 2019). The objective, called the KBE, takes pairs of samples from a buffer to overcome the double sampling problem. Unfortunately, this cannot overcome the issue of non-identifiability in the BE. There is a simple example where the same data is generated by two different MDPs, with different optima for the corresponding BE (Sutton and Barto, 2018, Chapter 11.6).
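A small numeric illustration of the double sampling problem, with made-up values for the TD error and the gradient term over two equally likely next states:

```python
import numpy as np

rng = np.random.default_rng(0)
deltas = np.array([1.0, -1.0])   # delta(w) for each next state
grads = np.array([2.0, 0.0])     # gamma*grad v(S') - grad v(s) for each

product_of_expectations = deltas.mean() * grads.mean()  # the quantity we want
one_sample = np.mean(deltas * grads)                    # biased: E[delta * g]

s1 = rng.integers(0, 2, size=100_000)  # first independent next-state sample
s2 = rng.integers(0, 2, size=100_000)  # second independent sample
two_sample = np.mean(deltas[s1] * grads[s2])            # unbiased estimate
print(product_of_expectations, one_sample, two_sample)
```

Here the single-sample estimator converges to 1.0 even though the true product of expectations is 0.0, while the two-sample estimator is correct on average.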
The agent cannot hope to use the data to identify which of the two parameters is the optimal solution; that work used the term that the BE is not learnable. The linear PBE, on the other hand, is practical to optimize under linear function approximation, as discussed above: the whole family of (gradient) TD algorithms is designed to optimize the linear PBE. Unfortunately, the PBE is hard to optimize in the general nonlinear setting, because the projection is hard to compute. Prior attempts to extend GTD to the nonlinear PBE (Maei et al., 2009) resulted in an algorithm that requires computing Hessian-vector products. In the next section, we discuss how to overcome these issues for the BE and PBE with a unified objective that is a generalization of the PBE.

Finally, for completeness, we conclude with a description of the mean squared TD error (TDE), even though it is rarely used. The TDE was introduced to characterize the TD solution as a semi-gradient method. For the objective

    TDE(w) ≝ Σ_{s∈S} d(s) E_π[(R + γ v̂_w(S′) − v̂_w(s))² | S = s],    (16)

the gradient includes the gradient of v̂_w(S′). TD omits this term, and so is called a semi-gradient method. It is not typical to do gradient descent on the TDE, due to commonly held views of its poor quality and a counterexample for the residual gradient algorithm, which uses the TDE (Sutton and Barto, 2018). We highlight the significant bias when using the TDE in Appendix B, providing further evidence that it is likely not a useful direction.

4.2 An Identifiable BE

Before discussing the generalized PBE, we start by showing a conjugate form for the BE. This reformulation uses the strategy introduced by Dai et al. (2017), which more generally introduces this conjugate form for several objectives that use conditional expectations.
They show how to use it for the BE as an example, but defined it slightly differently because they condition on states and actions. For this reason, and because we will build further on it, we provide the explicit steps to derive the conjugate form for the BE. Let F be the space of parameterized value functions and F_all the space of all functions. The reformulation uses the fact that the biconjugate of the square function is y² = max_{h∈ℝ} (2yh − h²) and the fact that the maximum can be brought outside the sum (interchangeability), as long as a different scalar h can be chosen for each state s, as it can be for F_all, the space of all functions:

    BE(w) = Σ_{s∈S} d(s) E_π[δ(w) | S = s]²
          = Σ_{s∈S} d(s) max_{h∈ℝ} (2 E_π[δ(w) | S = s] h − h²)    ▷ using the biconjugate function
          = max_{h∈F_all} Σ_{s∈S} d(s) (2 E_π[δ(w) | S = s] h(s) − h(s)²)    ▷ using interchangeability.

The optimal h*(s) = E_π[δ(w) | S = s], because

    arg max_{h∈F_all} Σ_{s∈S} d(s) (2 E_π[δ(w) | S = s] h(s) − h(s)²)
      = arg max_{h∈F_all} Σ_{s∈S} d(s) (2 E_π[δ(w) | S = s] h(s) − h(s)² − E_π[δ(w) | S = s]²)
      = arg max_{h∈F_all} − Σ_{s∈S} d(s) (E_π[δ(w) | S = s] − h(s))²
      = arg min_{h∈F_all} Σ_{s∈S} d(s) (E_π[δ(w) | S = s] − h(s))².

The function h*(s) = E_π[δ(w) | S = s] provides the minimal error of zero. This optimal solution also makes it clear why the above is simply a rewriting of the BE, because

    2 E_π[δ(w) | S = s] h*(s) − h*(s)² = 2 E_π[δ(w) | S = s]² − E_π[δ(w) | S = s]² = E_π[δ(w) | S = s]².

More generally, for the continuous-state case, interchangeability also holds, as long as the function h(s) = E_π[δ(w) | S = s] satisfies h ∈ F_all.
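The biconjugate identity y² = max_h (2yh − h²) used above is easy to check numerically; the grid of candidate h values below is just for illustration, and the maximizer is h* = y:

```python
import numpy as np

def biconjugate_max(y, hs):
    """Maximize 2*y*h - h^2 over a grid of candidate h values."""
    return np.max(2.0 * y * hs - hs ** 2)

hs = np.linspace(-5.0, 5.0, 100001)
for y in (-2.0, 0.0, 1.5):
    print(y ** 2, biconjugate_max(y, hs))  # the two columns match
```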
We can express the BE more generically using expectations over states: $\mathrm{BE}(w) = \mathbb{E}\big[\mathbb{E}_\pi[\delta(w) \mid S]^2\big]$, where the outer expectation is over $S$ with distribution $d$. For $g(h, s) \stackrel{\text{def}}{=} 2\,\mathbb{E}_\pi[\delta(w) \mid S = s]\,h - h^2$, the BE is

$$\mathbb{E}\Big[\max_{h\in\mathbb{R}} g(h, S)\Big] = \int_{\mathcal{S}} d(s) \max_{h\in\mathbb{R}} g(h, s)\,ds = \max_{h\in\mathcal{F}_{\text{all}}} \int_{\mathcal{S}} d(s)\,g(h(s), s)\,ds. \qquad (17)$$

Because $h(s) = \mathbb{E}_\pi[\delta \mid S = s]$ satisfies $h \in \mathcal{F}_{\text{all}}$, we know that a maximizer exists, namely $h^* = h \in \mathcal{F}_{\text{all}}$. Then we can show that $\mathbb{E}[\max_{h\in\mathbb{R}} g(h, S)] = \mathbb{E}[g(h^*(S), S)] = \max_{h\in\mathcal{F}_{\text{all}}} \mathbb{E}[g(h(S), S)]$.²

As highlighted in (Sutton and Barto, 2018, Chapter 11.6), the BE is not identifiable. In that example, however, the inputs given to the value function learner are partially observable. In terms of the above formulation, this would mean the agent can only observe a part of the state for learning w, but the whole state to learn h. Naturally, however, the input space for h should be similarly restricted to only observable information. This leads us to a new set for h, which includes all functions on the same inputs φ(s) as given to v, rather than on the state:

$$\mathcal{H}_{\text{all}} \stackrel{\text{def}}{=} \{h = f \circ \phi \mid f \text{ is any function on the space produced by } \phi\}.$$

The resulting h is restricted to functions of the form h(s) = f(φ(s)). We call the resulting BE an Identifiable BE, written as

$$\text{Identifiable BE}(w) \stackrel{\text{def}}{=} \max_{h\in\mathcal{H}_{\text{all}}} \mathbb{E}\big[2\,\mathbb{E}_\pi[\delta(w) \mid S]\,h(S) - h(S)^2\big].$$

Notice that $\mathcal{H}_{\text{all}} \subseteq \mathcal{F}_{\text{all}}$, and so the solution to the Identifiable BE may be different from the solution to the BE. In particular, we know that Identifiable BE(w) ≤ BE(w), because the inner maximization is more constrained. In fact, in many cases restricting h can be seen as a projection on the errors in the objective, as we discuss next, making the Identifiable BE an instance of the generalized PBE.
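A small numeric sketch of this gap, assuming a hypothetical four-state example in which two states share the same features φ(s) (everything here is illustrative): restricting h to functions of φ amounts to averaging the expected TD errors within each aliased group, so the Identifiable BE can only be smaller than the BE.

```python
import numpy as np

rng = np.random.default_rng(4)
d = np.full(4, 0.25)                     # uniform state weighting
delta_bar = rng.normal(size=4)           # E_pi[delta | S=s] per state
phi = np.array([0, 0, 1, 2])             # states 0 and 1 share phi(s): aliased

# BE: h free per state, so h*(s) = delta_bar[s] and BE = sum d(s) delta_bar[s]^2.
be = np.sum(d * delta_bar**2)

# Identifiable BE: h must be a function of phi(s), so aliased states share h.
# The maximizing h is the d-weighted average of delta_bar within each phi group.
h = np.zeros(4)
for g in np.unique(phi):
    mask = phi == g
    h[mask] = np.average(delta_bar[mask], weights=d[mask])
identifiable_be = np.sum(d * (2 * delta_bar * h - h**2))

assert identifiable_be < be   # strict here, since the aliased errors differ
```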
4.3 From the Identifiable Bellman Error back to a Projected Bellman Error

The previous section discussed a conjugate form for the BE, which led to an identifiable BE. Even this Identifiable BE, however, can be difficult to optimize, as we will not be able to perfectly represent any h in $\mathcal{H}_{\text{all}}$. In this section, we discuss further approximations, with $h \in \mathcal{H} \subseteq \mathcal{H}_{\text{all}}$, leading to a new set of objectives based on projecting the Bellman error.

To practically use the minimax formulation for the BE, we need to approximate h as an auxiliary estimator. This means h must also be a parameterized function, and we will instead only obtain an approximation to the Identifiable BE. Let $\mathcal{H}$ be a convex space of parameterized functions for this auxiliary function h. As we show below, this $\mathcal{H}$ defines the projection in the generalized PBE. In order to define the PBE, we first define a projection operator $\Pi_{\mathcal{H},d}$, which projects any vector $u \in \mathbb{R}^{|\mathcal{S}|}$ onto the convex subspace $\mathcal{H} \subseteq \mathbb{R}^{|\mathcal{S}|}$ under state weighting d:

$$\Pi_{\mathcal{H},d}\,u \stackrel{\text{def}}{=} \arg\min_{h\in\mathcal{H}} \|u - h\|_d. \qquad (18)$$

We define the generalized PBE as

$$\mathrm{PBE}(w) \stackrel{\text{def}}{=} \|\Pi_{\mathcal{H},d}(T\hat{v}_w - \hat{v}_w)\|^2_d \qquad (19)$$

where each choice of $\mathcal{H}$ results in a different projection operator. This view provides some intuition about the role of approximating h. Depending on how errors are projected, the value function approximation will focus more or less on the Bellman errors in particular states. If the Bellman error is high in a state, but those errors are projected to zero, then no further approximation resources will be used for that state.

Footnote 2: This argument is similar to (Dai et al., 2017, Lemma 1), but we do not need to assume g is upper semi-continuous and concave. They use this condition to ensure the existence of a maximum for g(·, s). We know the form of our g and can directly determine the existence of a maximizer (the expected TD error).
Under no projection (the set for h being the set of all functions), no errors are projected and the values are learned to minimize the Bellman error. If $\mathcal{H} = \mathcal{F}$, so that the same space is used to represent h and v, then we obtain the projection originally used for the PBE.

We now show the connection between this PBE and the BE. In the finite state setting, we have a vector $u \in \mathbb{R}^{|\mathcal{S}|}$ composed of entries $\mathbb{E}_\pi[\delta(w) \mid S = s]$: $u = T\hat{v}_w - \hat{v}_w$. The projection onto a convex set, with a diagonal weighting matrix $D \stackrel{\text{def}}{=} \mathrm{diag}(d)$, is guaranteed to be an orthogonal projection. This property means $u = \Pi_{\mathcal{H},d}\,u + \tilde{u} = h + \tilde{u}$, where $h = \Pi_{\mathcal{H},d}\,u$ and $\tilde{u}$ is the component of u that is orthogonal in the weighted space: $h^\top D \tilde{u} = 0$. Then we can write the conjugate form for the BE, now with the restricted set $\mathcal{H} \subset \mathcal{H}_{\text{all}}$:

$$\begin{aligned}
\max_{h\in\mathcal{H}} \sum_{s\in\mathcal{S}} d(s)\left(2\,\mathbb{E}_\pi[\delta(w)\mid S=s]\,h(s) - h(s)^2\right)
&= \max_{h\in\mathcal{H}} \sum_{s\in\mathcal{S}} d(s)\left(2u(s)h(s) - h(s)^2\right) && \triangleright\ \text{rewriting } u(s) = \mathbb{E}_\pi[\delta(w)\mid S=s] \\
&= \sum_{s\in\mathcal{S}} d(s)\left(2u(s)h(s) - h(s)^2\right) && \triangleright\ \text{where } h = \Pi_{\mathcal{H},d}\,u \\
&= \sum_{s\in\mathcal{S}} d(s)\left(2[h(s)+\tilde{u}(s)]h(s) - h(s)^2\right) && \triangleright\ \text{because } u(s) = h(s) + \tilde{u}(s) \\
&= \sum_{s\in\mathcal{S}} d(s)\left(2h(s)^2 - h(s)^2\right) + 2\sum_{s\in\mathcal{S}} d(s)\tilde{u}(s)h(s) \\
&= \sum_{s\in\mathcal{S}} d(s)h(s)^2 && \triangleright\ \sum_{s} d(s)\tilde{u}(s)h(s) = 0 \text{ because } h \perp \tilde{u} \text{ under } d \\
&= \|\Pi_{\mathcal{H},d}(T\hat{v}_w - \hat{v}_w)\|^2_d = \mathrm{PBE}(w).
\end{aligned}$$

The key assumption above is that $\mathcal{H}$ is a convex set. Convexity is required to ensure that the projection operator $\Pi_{\mathcal{H},d}$ is an orthogonal projection (Dudek and Holly, 1994).
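The orthogonality argument above is easy to verify numerically. A minimal sketch, assuming a linear set H spanned by random features (all quantities here are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
Phi = rng.normal(size=(n, k))          # features defining a linear set H
d = rng.dirichlet(np.ones(n))          # state weighting d(s)
D = np.diag(d)
u = rng.normal(size=n)                 # stand-in for the Bellman error vector Tv - v

# Weighted projection of u onto span(Phi): h = Phi (Phi^T D Phi)^{-1} Phi^T D u
h = Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D @ u)
u_tilde = u - h

# Orthogonality under the d-weighted inner product: h^T D u_tilde = 0
assert np.allclose(h @ D @ u_tilde, 0.0)

# The conjugate form evaluated at the maximizing h in H equals ||h||_d^2, the PBE.
conjugate_value = np.sum(d * (2 * u * h - h**2))
pbe = np.sum(d * h**2)
assert np.allclose(conjugate_value, pbe)
```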
Figure 3: A visual interpretation of how the Bellman operator can push the value estimates outside the space of representable functions, and the role of the projection operator. The set F corresponds to the (parameterized) space of value functions and H is the set of functions that approximate (project) the Bellman error Tv − v. Potential settings include F = H (visualized in Figure 1), F ⊂ H, visualized in (a), and F ≠ H, visualized in (b). In (a), we highlight two cases: Tv is not representable by any function in F or H, or Tv is representable by functions in H but not F. In (b) we see examples of projections when F intersects H.

This assumption is easily satisfied by linear functions with a fixed basis φ(s), with bounded weights.
For any two functions with weights w1 and w2, the function defined by weights (1 − c)w1 + cw2, for c ∈ [0, 1], is also in the set. More generally, typical convex nonlinear function approximation sets used in machine learning are reproducing kernel Hilbert spaces. Many classes of neural networks have been shown to be expressible as RKHSs (see Bietti and Mairal (2019) for a nice overview), including neural networks with ReLU activations as are commonly used in RL. Therefore, this is not an overly restrictive assumption.

4.4 The Connection to Previous PBE Objectives

In this section we show how the generalized PBE lets us express the linear PBE, and even the nonlinear PBE, by selecting different sets H. First let us consider the linear PBE. The easiest way to see this is to use the saddlepoint formulations developed for the linear PBE (Mahadevan et al., 2014; Liu et al., 2016; Touati et al., 2018). The goal there was to re-express the existing linear PBE using a saddlepoint form, rather than to re-express the BE or find connections between them. The linear PBE $= \|b - Aw\|^2_{C^{-1}}$ can be rewritten using the conjugate of the two-norm: $\tfrac{1}{2}\|y\|^2_{C^{-1}} = \max_{h} y^\top h - \tfrac{1}{2}\|h\|^2_C$, with optimal $h = C^{-1}y$. Correspondingly, we get

$$\tfrac{1}{2}\|b - Aw\|^2_{C^{-1}} = \max_{h\in\mathbb{R}^d} (b - Aw)^\top h - \tfrac{1}{2}\|h\|^2_C$$

where the solution is $h = C^{-1}(b - Aw)$. This solution makes the first term equal to $\|b - Aw\|^2_{C^{-1}}$ and the second term equal to $-\tfrac{1}{2}\|b - Aw\|^2_{C^{-1}}$; adding them gives $\tfrac{1}{2}\|b - Aw\|^2_{C^{-1}}$. We can obtain the same formulation under the PBE by restricting F and H to be the same set of linear functions. Let $\mathcal{L} = \{f : \mathcal{S} \to \mathbb{R} : f(s) = x(s)^\top w,\ w \in \mathbb{R}^d\}$. For $\mathcal{F} = \mathcal{H} = \mathcal{L}$, we have that $h^* = \arg\min_{h\in\mathcal{L}} \sum_{s\in\mathcal{S}} d(s)\left(\mathbb{E}_\pi[\delta(w) \mid S=s] - h(s)\right)^2$ satisfies $h^*(s) = x(s)^\top h^*$ (overloading $h^*$ for both the function and its weight vector).
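The saddlepoint identity above can be verified numerically. A minimal sketch with randomly generated A, b, w and a positive-definite C (the matrices are illustrative stand-ins, with no RL semantics attached):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4
A = rng.normal(size=(k, k))
b = rng.normal(size=k)
w = rng.normal(size=k)
M = rng.normal(size=(k, k))
C = M @ M.T + np.eye(k)                  # a positive-definite weighting matrix

y = b - A @ w
Cinv = np.linalg.inv(C)
half_pbe = 0.5 * y @ Cinv @ y            # 1/2 ||b - Aw||^2_{C^{-1}}

h_star = Cinv @ y                        # maximizer of the conjugate form
conjugate_value = y @ h_star - 0.5 * h_star @ C @ h_star
assert np.allclose(half_pbe, conjugate_value)

# Any other h gives a strictly smaller value, since the quadratic is strictly concave.
h_other = h_star + rng.normal(size=k)
assert y @ h_other - 0.5 * h_other @ C @ h_other < conjugate_value
```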
This $h^*$ is the linear regression solution for targets $\delta(w)$, so $h^* = \mathbb{E}[xx^\top]^{-1}\mathbb{E}[x\,\delta(w)]$, which equals $C^{-1}(b - Aw)$. We can further verify that the resulting PBE matches the linear PBE (see Appendix C). This result is alluded to in the connection between the NEU and the KBE in (Feng et al., 2019, Corollary 3.5), but not explicitly shown.

This connection also exists with the nonlinear PBE, but with a surprising choice for the parameterization of h: using the gradient of the value estimate as the features. The nonlinear PBE is defined as (Maei et al., 2009)

$$\text{nonlinear PBE}(w) = \mathbb{E}[\delta(w)\nabla_w\hat{v}_w(s)]^\top\, \mathbb{E}\!\left[\nabla_w\hat{v}_w(s)\nabla_w\hat{v}_w(s)^\top\right]^{-1} \mathbb{E}[\delta(w)\nabla_w\hat{v}_w(s)].$$

This corresponds to the linear PBE when $\mathcal{F} = \mathcal{L}$, because $\nabla_w\hat{v}_w(s) = x(s)$. Define the set $\mathcal{G}_w = \{f : \mathcal{S}\to\mathbb{R} : f(s) = y(s)^\top h,\ h\in\mathbb{R}^d \text{ and } y(s) = \nabla_w\hat{v}_w(s)\}$. Notice that this function set for h changes as w changes. Then we get that

$$h^*_{\text{nl}} = \arg\min_{h\in\mathcal{G}_w} \sum_{s\in\mathcal{S}} d(s)\left(\mathbb{E}_\pi[\delta(w)\mid S=s] - h(s)\right)^2$$

satisfies $h^*_{\text{nl}}(s) = \nabla_w\hat{v}_w(s)^\top h^*_{\text{nl}}$, where $h^*_{\text{nl}} = \mathbb{E}\!\left[\nabla_w\hat{v}_w(s)\nabla_w\hat{v}_w(s)^\top\right]^{-1}\mathbb{E}[\delta(w)\nabla_w\hat{v}_w(s)]$.
Plugging this optimal h back into the formula, we get

$$\begin{aligned}
\max_{h\in\mathcal{G}_w} \sum_{s\in\mathcal{S}} d(s)\left(2\,\mathbb{E}_\pi[\delta(w)\mid S=s]\,h(s) - h(s)^2\right)
&= \sum_{s\in\mathcal{S}} d(s)\left(2\,\mathbb{E}_\pi[\delta(w)\mid S=s]\,h^*_{\text{nl}}(s) - h^*_{\text{nl}}(s)^2\right) \\
&= \Big(\sum_{s\in\mathcal{S}} d(s)\,2\,\mathbb{E}_\pi[\delta(w)\mid S=s]\,\nabla_w\hat{v}_w(s)^\top\Big) h^*_{\text{nl}} - \sum_{s\in\mathcal{S}} d(s)\,(h^*_{\text{nl}})^\top \nabla_w\hat{v}_w(s)\nabla_w\hat{v}_w(s)^\top h^*_{\text{nl}} \\
&= 2\,\mathbb{E}[\delta(w)\nabla_w\hat{v}_w(s)]^\top h^*_{\text{nl}} - (h^*_{\text{nl}})^\top\mathbb{E}[\nabla_w\hat{v}_w(s)\nabla_w\hat{v}_w(s)^\top]\,h^*_{\text{nl}} \\
&= 2\,\text{nonlinear PBE}(w) - \text{nonlinear PBE}(w) \\
&= \text{nonlinear PBE}(w).
\end{aligned}$$

This nonlinear PBE is not an instance of the generalized PBE, as we have currently defined it, because the H changes with w. It is possible that such a generalization is worthwhile, as using the gradient of the values as features is intuitively useful. Further, interchangeability should still hold, as the exchange of the maximum was done for a fixed w. Therefore, it is appropriate to explore an H that changes with w, and in our experiments we test $\mathcal{H} = \mathcal{G}_w$.

In summary, in this section we introduced the generalized PBE and highlighted connections to the linear PBE and BE. The generalized PBE provides a clear path to develop value estimation under nonlinear function approximation, providing a strict generalization of the linear PBE. Two secondary benefits are that the generalized PBE provides a clear connection between the BE and PBE, based on a difference in the choice of projection (H), and resolves the identifiability issue in the BE.

5. Understanding the Impact of Choices in the Generalized PBE

The two key choices in the generalized PBE are the state weighting and the (projection) set H. There are at least three clear criteria for selecting H and the state weighting: (1) the quality of the solution, (2) the feasibility of implementation and (3) the estimation error during learning and the impact on learning the primary weights.
In this section, we provide some conceptual and empirical insight into how to choose H, and empirically show that the choice of weighting can significantly change the quality of the solution. After first showing that these choices clearly matter, and giving some intuition for why, we then provide theory characterizing the quality of the solution in the following section.

5.1 The Projection Set and the Quality of the Solution

The first criterion parallels the long-standing question about the quality of the solution under the linear PBE versus the BE. The examples developed for that comparison provide insights on H. In this section, we revisit these examples, now in the context of the generalized PBE. Objectives based on Bellman errors perform backwards bootstrapping, where the value estimates in a state s are adjusted both toward the value of the next state and the value of the previous state. In the case of the BE, backwards bootstrapping can become an issue when two or more states are heavily aliased and these aliased states lead to successor states with highly different values. Because the aliased states look no different to the function approximator, they must be assigned the same estimated value. For each of these aliased states, backwards bootstrapping forces the function approximator to balance between accurately predicting the successor values for all aliased states, as well as adjusting the successor values to be similar to those of the aliased states. The PBE, on the other hand, projects the error for the aliased states, ignoring the portion of the Bellman error that forces the function approximator to balance the similarity between the aliased state value and the successor state value. This allows the function approximator the freedom to accurately estimate the values of the successor states without trading off error in states which it cannot distinguish. To make this concrete, consider the following 4-state MDP from Sutton et al. (2009).
States A1 and A2 are aliased under the features for F. For the linear PBE, H = F, and so the states are also aliased when approximating h. For the BE, they are not aliased for h. A1 transitions to B and then terminates with reward 1. A2 transitions to C and then terminates with reward 0. The linear PBE results in the correct values for B and C, namely 1 and 0 respectively, because it does not suffer from backwards bootstrapping. The BE, on the other hand, assigns them values 3/4 and 1/4, to reduce the Bellman errors at A1 and A2. A generalized PBE with other H ≠ F would suffer the same issue as the BE in this example, unless the projection Π_H mapped errors in the aliased states to zero.

On the other hand, the linear PBE can find solutions where the Bellman error is very high, even though the projected Bellman error is zero. Consider the plane of value functions that can be represented with a linear function approximator. The Bellman operator can take the values far off of this surface, only to be projected back to this surface through the projection operator. At the fixed-point, this projection brings the value estimate back to the original values and the distance that the value estimate moved on the plane is zero, thus the PBE is zero. The PBE can be zero even when the BE is large. Kolter (2011) provides an example where the solution under the PBE can be made arbitrarily far from the true value function. We expand on this example in Figure 4, and show that the solution under the linear PBE can be arbitrarily poor, even though the features allow for an ε-accurate value estimate and the solution under the BE is very good.

Figure 4: The visualization above shows how the PBE solution can result in arbitrarily bad value error under some behaviours. The blue line above is the same as the visualization used in prior work to demonstrate issues with minimizing the PBE (see Kolter (2011) for a description of the counterexample).
The vertical axis measures the VE and the horizontal axis different behavior policies. This figure differs from Kolter (2011); we show that the BE solution exhibits low error and highlight the impact of changing H. The size of the set H increases from the left subplot to the right. As the set H increases, more behavior policies result in a low generalized PBE. If we use an emphasis weighting for the states, rather than behavior visitation, then the solution under the PBE becomes reasonable. Further, even just a small change to H so that H ≠ F resolves this counterexample.

Ideally, we would use an H similar to F, to avoid backwards bootstrapping. Conceptually, a potentially reasonable choice for H is therefore either (1) H = F, with some consideration on adjusting the state weighting, or (2) an H that is only slightly bigger than F, potentially with the inclusion of an additional feature.

A Simple Experiment for Solution Quality Under Different Weightings and H

We empirically investigate the quality of the solution under the PBE and BE with three different weightings: db, dπ and m. The solution quality is measured by the VE under db and dπ. We compute the fixed-point of each objective on a 19-state random walk with randomly chosen target and behavior policies. To isolate the impact of representation on the fixed-points, we investigate several forms of state representation where vπ is outside the representable function class. We include the Dependent features from Sutton et al. (2009), randomly initialized sparse ReLU networks, tile-coded features, and state aggregation. The random walk has 19 states, with the left-most and right-most states being terminal. The reward function is zero everywhere except on transitioning into the right-most terminal state, where the agent receives +1 reward, and into the left-most terminal state, where the agent receives -1 reward. The discount factor is set to γ = 0.99.
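As a rough sketch of this construction, here is a shorter, 5-state version of the walk with uniform state weighting and a crude hand-picked state aggregation (these simplifications are ours, not the paper's setup). The least-squares fixed point of the linear PBE can then be computed in closed form:

```python
import numpy as np

# A 5-state random walk; episodes terminate off both ends with rewards
# -1 (left) and +1 (right), discounted with gamma.
n, gamma = 5, 0.99
P = np.zeros((n, n))
r = np.zeros(n)
for s in range(n):
    if s > 0:
        P[s, s - 1] = 0.5
    else:
        r[s] += 0.5 * (-1.0)          # left termination
    if s < n - 1:
        P[s, s + 1] = 0.5
    else:
        r[s] += 0.5 * (+1.0)          # right termination
# r[s] is the expected reward; terminal transitions contribute no bootstrap term.

Phi = np.array([[1., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 1.]])  # crude aggregation
d = np.ones(n) / n                                                  # uniform weighting
D = np.diag(d)

# Least-squares fixed point of the linear PBE: solve A w = b
A = Phi.T @ D @ (np.eye(n) - gamma * P) @ Phi
b = Phi.T @ D @ r
w = np.linalg.solve(A, b)

# At the fixed point, the Bellman error is orthogonal to the features under d:
delta = r + gamma * P @ (Phi @ w) - Phi @ w
assert np.allclose(Phi.T @ D @ delta, 0.0)
```

By the walk's symmetry, the left-feature weight comes out negative and the right-feature weight positive.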
We run each experimental setting one million times with a different randomly initialized neural network, random offset between tilings in the tile-coder, and randomly sampled target and behavior policy. The policies are chosen uniformly randomly on the standard simplex. The neural network is initialized with a Xavier initialization (Glorot and Bengio, 2010), using 76 nodes in the first hidden layer and 9 nodes in the final feature layer. Then 25% of the neural network weights are randomly set to zero to encourage sparsity between connections and to increase variance between different randomly generated representations. The tile-coder uses 4 tilings, each offset randomly and each containing 4 tiles. The state aggregator aggressively groups the left-most states into one bin and the right-most states into another, creating only two features.

Figure 5: Investigating the VE of the fixed-points of the PBE and BE under db, dπ, and m on a 19-state random walk. All errors are computed in closed form given access to the reward and transition dynamics. The fixed-point of the PBE with emphatic weighting consistently has the lowest error across several different state representations (light color), while the fixed-point of the PBE under db has the highest error (dark blue). Results are averaged over one million randomly generated policies and state representations.

Figure 5 shows the normalized log-error of the fixed-points of the PBE and BE under each weighting. A normalized error in [0, 1] is obtained, for each representation, by (1) computing the best value function representable by those features, min_{v∈F} VE(v) under db or dπ, and (2) subtracting this minimal VE and normalizing by the maximum VE for each column (across objectives and weightings for a fixed representation). The fixed-points are computed using their least-squares closed-form solutions given knowledge of the MDP dynamics.
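For the small aliased MDP of Section 5.1, these closed-form fixed-point computations are simple enough to spell out. A minimal sketch, assuming uniform state weighting, γ = 1, and state-aggregation features that alias A1 and A2 (our encoding of that example, not the paper's code):

```python
import numpy as np

# States A1, A2 (aliased, shared value a), B (value v_b), C (value v_c); gamma = 1.
# A1 -> B -> terminate with reward 1; A2 -> C -> terminate with reward 0.
# Per-state expected TD errors: (v_b - a), (v_c - a), (1 - v_b), (0 - v_c).

# BE minimization: least squares over all four per-state errors.
M_be = np.array([[-1.0, 1.0, 0.0],
                 [-1.0, 0.0, 1.0],
                 [ 0.0, 1.0, 0.0],
                 [ 0.0, 0.0, 1.0]])
t_be = np.array([0.0, 0.0, 1.0, 0.0])
sol_be = np.linalg.lstsq(M_be, t_be, rcond=None)[0]
print(sol_be)   # approximately [0.5, 0.75, 0.25]: B and C pulled off their true values

# PBE with H = F: the errors at the aliased states are first averaged (projected),
# leaving residuals sqrt(2) * ((v_b + v_c)/2 - a), (1 - v_b), and (0 - v_c).
s2 = np.sqrt(2.0)
M_pbe = np.array([[-s2, s2 / 2, s2 / 2],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
t_pbe = np.array([0.0, 1.0, 0.0])
sol_pbe = np.linalg.lstsq(M_pbe, t_pbe, rcond=None)[0]
print(sol_pbe)  # approximately [0.5, 1.0, 0.0]: B and C recover their true values
```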
Plotted is the mean error across the one million randomly initialized experimental settings. The standard error between settings is negligibly small. Interestingly, the fixed-points corresponding to weighting db consistently have the highest error across feature representations, even on the excursion VE error metric with weighting db. The PBE under the emphatic weighting, m, consistently has the lowest error across all feature representations, though it is slightly outperformed by the PBE with weighting dπ for the VE with weighting dπ. In these experiments, the BE appears to have no advantages over the PBE, meaning that the more restricted H for the PBE produces sufficiently high quality solutions.

5.2 Feasibility of the Implementation

There are many feasible choices for estimating h. Likely the simplest is to use the same approximator for h as for the values. For example, this might mean that h and v use the same features, or that we have two heads on a shared neural network. However, we could feasibly consider a much bigger class for h, because h is only used during training, not prediction. For example, we might want v to be efficient to query, and so use a compact parametric function approximator. But h could use a more computationally costly function approximator, updated with replay between agent-environment interaction steps. To expand the space H, one feasible approach is to use a separate set of features for H or to learn a separate neural network. The separate neural network can implicitly learn a different set of features, and so allows h to use different features than v. If we allow this second neural network to be much bigger, then we expand the space H and make the generalized PBE closer to the BE. We can take this expansion further by using nonparametric function approximators for h.
For example, a reservoir of transitions can be stored, where E\u03c0[\u03b4t | St = s] is approximated using a weighted average over \u03b4t in the bu\ufb00er, where the weighting is proportional to similarity between that state and s. This is the strategy taken by the Kernel BE (Feng et al., 2019), precisely to reduce bias in h and so better approximate the BE. When learning online, this non-parametric approach is less practical. Either a large bu\ufb00er needs to be maintained, or a su\ufb03cient set of representative transitions identi\ufb01ed and stored. Further, it is not clear that estimating the BE more closely is actually desirable, as discussed in the previous section. In this work, where we learn online, we advocate for the simplest approach: a shared network, with two heads (see Section 8.2). We also show that learning two separate neural networks performs comparably, in Section 9.5. 5.3 Estimation Error and the Impact on Primary Weight Updates The generalized PBE presents an additional trade-o\ufb00between approximation error and estimation error in h. A rich H may reduce the approximation error (and projection penalty) at the expense of higher error in estimating \u03b4t via h. A more restricted H may yield lower estimation error, because less data is needed to estimate h. Note that this trade-o\ufb00is for approximating/estimating the objective itself. It is di\ufb00erent from\u2014and secondary to\u2014the approximation-estimation trade-o\ufb00for the value function with set F. One strategy to restrict H is to add regularization on h. For example, an \u21132 regularizer constrains h to be closer to zero\u2014reducing variance\u2014and improves convergence rates. This strategy was introduced in an algorithm called TD with Regularized Corrections (TDRC) and the control variant, Q-learning with Regularized Corrections (QRC) (Ghiassian et al., 2020). 
Empirically, these algorithms performed comparably to their TD counterparts, in some cases performing significantly better. This particular constraint on H was especially appropriate, because the bias from regularization asymptotically disappears: at the TD fixed-point, the true parameters θ are zero, and regularization biases θ towards zero. More generally, the criterion for selecting H is about improving the primary update, rather than necessarily reducing approximation error or estimation error for H. Characterizing how H improves updates for the primary weights remains an open question. One could imagine algorithmic strategies to identify such an H using meta-learning, with the objective of optimizing features for H to make the primary weights learn more quickly. This question is particularly difficult to answer, as h can be used in two ways: within the standard saddlepoint update or for gradient corrections, as we discuss further in Section 7. With gradient corrections, the interim bias in h is less problematic than in the saddlepoint update. In this work, we again advocate for a simple choice: reducing estimation error for H using ℓ2 regularization, within a gradient-correction update as in TDRC (Ghiassian et al., 2020). We find this choice to be generally effective and easy to use. Nonetheless, there is clearly much more investigation that can be done to better understand the choice of H.

6. Bounding Value Error & the Impact of Weighting on Solution Quality

The desired objective to minimize is the value error with weighting deval. We, however, optimize a surrogate objective, like the PBE, with a potentially different weighting d. In fact, in the last section in Figure 5, we saw that it can be better to pick d ≠ deval: optimizing the PBE with d = m produced better solutions in terms of the VE with deval = db than the PBE with d = db.
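The ℓ2-regularized estimate of h described above is, in the linear case, just ridge regression on observed TD errors. A minimal sketch (the features and TD errors below are hypothetical data, not from the paper's experiments):

```python
import numpy as np

def ridge_h(X, deltas, beta):
    """Fit h(s) ~= E[delta | S=s] as a linear function of features X, with an
    l2 penalty beta that shrinks the secondary parameters toward zero."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + beta * np.eye(n_features), X.T @ deltas)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))             # hypothetical state features
theta_true = np.array([1.0, -2.0, 0.5])
deltas = X @ theta_true                  # noiseless TD errors, for illustration

theta_0 = ridge_h(X, deltas, beta=0.0)      # unregularized: exact recovery
theta_big = ridge_h(X, deltas, beta=1e6)    # heavy shrinkage toward zero
assert np.allclose(theta_0, theta_true)
assert np.linalg.norm(theta_big) < np.linalg.norm(theta_0)
```

The shrinkage is exactly the bias-variance trade-off discussed above: as β grows, the estimate of h is pulled toward zero, which is harmless asymptotically at the TD fixed-point where the true parameters are zero.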
In this section, we characterize the solution quality under the generalized PBE, which depends both on H and d. Let vw be the vector consisting of value function estimates v(s, w). Further, let vwH,d be the solution to the generalized PBE. Similarly to prior theoretical work (Yu and Bertsekas, 2010, Equation 5), our goal is to find bounds of the form

∥vwH,d − vπ∥deval (the value error) ≤ C(deval, d, H) ∥ΠF,dvπ − vπ∥d (the approximation error) (20)

where the constant C(deval, d, H) in the bound depends on the two weightings and the projection set H. The term ∥ΠF,dvπ − vπ∥d = min_{v∈F} ∥v − vπ∥d represents the approximation error: the lowest error under function class F if we could directly minimize the VE under our weighting d. Compared to prior theoretical work (Yu and Bertsekas, 2010, Equation 5), here we generalize to the nonlinear setting and to the case where d may not equal deval. We start with the case where H = F, and then generalize to H ⊃ F—where H is a superset of F—in the following subsection. Many of these results build on existing work, which we reference throughout; we also provide a summary table of existing results in the appendix, in Table 1.

6.1 Upper Bound on VE when H = F

Throughout this section we will assume that H = F, so that the projection operator for both the objective and the value function space is the same. This matches the setting analyzed for the linear PBE, though here we allow for nonlinear functions. Our goal is to characterize the solution to the PBE, the fixed point vwF,d = ΠF,dT vwF,d. The typical approach is to understand the properties of T under norm ∥·∥d, as in Bertsekas and Tsitsiklis (1996, Lemma 6.9) or White (2017, Theorem 1). However, we can actually obtain more general results, by directly characterizing ΠF,dT and making assumptions about the norm only for a subset of value functions.
This approach builds on the strategy taken by Kolter (2011, Theorem 2), where the choice of d was constrained to ensure a contraction, and on the strategy taken by Ghosh and Bellemare (2020, Theorem 4.3), where the set of value functions is constrained to ensure a contraction. We combine the two ideas and get a more general condition, as well as an extension to nonlinear function approximation. Assumption 1 (Convex Function Space) The set F is convex. This convexity assumption is needed to ensure the projection operator has the typical properties, particularly that \u2225\u03a0F,d(v1 \u2212v2)\u2225d \u2264\u2225v1 \u2212v2\u2225d for all v1, v2 \u2208F. 25 \fTo characterize the Bellman operator, it will be useful to directly de\ufb01ne the discounted transition operator (matrix) under \u03c0, P\u03c0,\u03b3 \u2208R|S|\u00d7|S|, where P\u03c0,\u03b3(s, s\u2032) def = X a \u03c0(a|s)P(s\u2032|s, a)\u03b3(s, a, s\u2032) (21) For a constant discount of \u03b3c < 1 in the continuing setting, this simpli\ufb01es to P\u03c0,\u03b3(s, s\u2032) = \u03b3cP\u03c0 for P\u03c0(s, s\u2032) def = P a \u03c0(a|s)P(s\u2032|s, a). We can characterize when the projected Bellman operator is a contraction, by using either the norm of this discounted transition operator or the norm of the projected discounted transition operator for a restricted set of value functions. De\ufb01nition 2 (Discounted Transition Constant) De\ufb01ne the discounted transition constant cd def = \u2225P\u03c0,\u03b3\u2225d, the weighted spectral norm of the discounted transition operator P\u03c0,\u03b3. De\ufb01nition 3 (Operator Constant) De\ufb01ne the projected Bellman operator constant cF,d > 0 for the set of value functions Fsub \u2286F as the constant that satis\ufb01es \u2225\u03a0F,dP\u03c0,\u03b3(v1 \u2212v2)\u2225d \u2264cF,d\u2225v1 \u2212v2\u2225d for any v1, v2 \u2208Fsub. (22) Notice that cF,d \u2264cd because \u2225\u03a0F,dP\u03c0,\u03b3v\u2225d \u2264\u2225P\u03c0,\u03b3v\u2225d for any v. 
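Under the weighted norm ∥v∥d² = Σ_s d(s)v(s)², the induced operator norm in Definition 2 is the spectral norm of D^{1/2}Pπ,γD^{−1/2}. A small numeric check, on an arbitrary ergodic chain, of the claim (stated later in Proposition 7) that cd = γc when d = dπ with a constant discount:

```python
import numpy as np

def weighted_spectral_norm(A, d):
    """||A||_d for the norm ||v||_d^2 = sum_s d(s) v(s)^2, i.e. the ordinary
    spectral norm of D^{1/2} A D^{-1/2}."""
    sqrt_d = np.sqrt(d)
    return np.linalg.norm((sqrt_d[:, None] * A) / sqrt_d[None, :], 2)

# arbitrary ergodic 3-state chain with constant discount 0.9
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
gamma = 0.9

# stationary distribution d_pi: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
d_pi = np.real(evecs[:, np.argmax(np.real(evals))])
d_pi /= d_pi.sum()

# under d = d_pi with constant discount, the discounted transition
# constant equals the discount exactly
c_d = weighted_spectral_norm(gamma * P, d_pi)
assert np.isclose(c_d, gamma)
```

For other weightings d, the same function computes cd directly, and values above 1 flag exactly the off-policy settings where the projected Bellman operator may fail to be a contraction.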
We show next that the VE of the solution vwF,d to the generalized PBE is upper bounded by the approximation error times a constant. This constant depends only on the discounted transition constant, if it is less than 1, and otherwise depends also on the operator constant. Theorem 4 Assume cF,d < 1. Let C(d, F) def = ( 1+cd 1\u2212cF,d if cd \u22651; 1 1\u2212cd if cd < 1. (23) Then \u2225vwF,d \u2212v\u03c0\u2225d \u2264C(d, F)\u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d. (24) Proof Case 1: cd < 1. This follows using the standard strategy in Bertsekas and Tsitsiklis (1996, Lemma 6.9) or White (2017, Theorem 1). Notice \ufb01rst that vwF,d = \u03a0F,dT vwF,d, since it is a solution to the PBE. Also note that v\u03c0 = T v\u03c0. \u2225vwF,d \u2212v\u03c0\u2225d \u2264\u2225vwF,d \u2212\u03a0F,dv\u03c0\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d = \u2225\u03a0F,dT vwF,d \u2212\u03a0F,dT v\u03c0\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d = \u2225\u03a0F,dT (vwF,d \u2212v\u03c0)\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d \u2264\u2225T (vwF,d \u2212v\u03c0)\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d = \u2225P\u03c0,\u03b3(vwF,d \u2212v\u03c0)\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d \u2264\u2225P\u03c0,\u03b3\u2225d\u2225vwF,d \u2212v\u03c0\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d \u2264cd\u2225vwF,d \u2212v\u03c0\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d = \u21d2(1 \u2212cd)\u2225vwF,d \u2212v\u03c0\u2225d \u2264\u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d. Note that in the above T (vwF,d \u2212v\u03c0) = T vwF,d \u2212T v\u03c0 = P\u03c0,\u03b3vwF,d \u2212P\u03c0,\u03b3v\u03c0 because the reward term in the Bellman operator cancels in the subtraction. 26 \fCase 2: cd \u22651. For this case, we use the approach in Kolter (2011, Theorem 2). We cannot use the above approach, since (1 \u2212cd) is negative. 
We start again by adding and subtracting \u03a0F,dv\u03c0, but bound the \ufb01rst term di\ufb00erently. \u2225vwF,d \u2212\u03a0F,dv\u03c0\u2225d = \u2225\u03a0F,dT vwF,d \u2212\u03a0F,dT v\u03c0\u2225d = \u2225\u03a0F,dP\u03c0,\u03b3vwF,d \u2212\u03a0F,dP\u03c0,\u03b3v\u03c0\u2225d \u2264\u2225\u03a0F,dP\u03c0,\u03b3vwF,d \u2212\u03a0F,dP\u03c0,\u03b3\u03a0F,dv\u03c0\u2225d + \u2225\u03a0F,dP\u03c0,\u03b3\u03a0F,dv\u03c0 \u2212\u03a0F,dP\u03c0,\u03b3v\u03c0\u2225d. By assumption cF,d < 1 and both vwF,d \u2208F and \u03a0F,dv\u03c0 \u2208F. Therefore, for the \ufb01rst term \u2225\u03a0F,dP\u03c0,\u03b3vwF,d \u2212\u03a0F,dP\u03c0,\u03b3\u03a0F,dv\u03c0\u2225d \u2264cF,d\u2225vwF,d \u2212\u03a0F,dv\u03c0\u2225d \u2264cF,d\u2225vwF,d \u2212v\u03c0\u2225d. For the second term, we have that \u2225\u03a0F,dP\u03c0,\u03b3\u03a0F,dv\u03c0 \u2212\u03a0F,dP\u03c0,\u03b3v\u03c0\u2225d \u2264\u2225P\u03c0,\u03b3\u03a0F,dv\u03c0 \u2212P\u03c0,\u03b3v\u03c0\u2225d \u2264cd\u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d. Putting this all together, we have \u2225vwF,d \u2212v\u03c0\u2225d \u2264\u2225vwF,d \u2212\u03a0F,dv\u03c0\u2225d + \u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d \u2264cF,d\u2225vwF,d \u2212v\u03c0\u2225d + (1 + cd)\u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d = \u21d2(1 \u2212cF,d)\u2225vwF,d \u2212v\u03c0\u2225d \u2264(1 + cd)\u2225\u03a0F,dv\u03c0 \u2212v\u03c0\u2225d. Finally, we can get the desired result in Equation (20), by considering a di\ufb00erent state weighting deval for the evaluation versus the state weighting d we use in the PBE. For example, in the o\ufb00-policy setting, deval could correspond to d\u03c0 but we learn under d = db or the emphatic weighting d = m. It is appropriate to separate the evaluation and solution weightings, as it may not be feasible to use d = deval and further it is even possible we can improve our solution by carefully selecting d di\ufb00erent from deval. 
Some insights about when this choice can be beneficial have been provided in seminal work on policy gradient methods (Kakade and Langford, 2002). We do not explore when we can obtain such improvements in this work. Our bounds assume worst-case differences, as per the next definition, and so the bound on solution quality is bigger when d ≠ deval.

Definition 5 (State Weighting Mismatch) Define the state weighting mismatch between the desired weighting deval and the weighting used in the solution d as κ(deval, d) def= max_{s∈S} deval(s)/d(s). (25)

Corollary 6 Again assuming cF,d < 1, we can further bound the error under a different weighting deval: ∥vwF,d − vπ∥deval ≤ √κ(deval, d) C(d, F) ∥ΠF,dvπ − vπ∥d.

Proof We use the state mismatch κ(deval, d) = max_{s∈S} deval(s)/d(s) and get

∥vwF,d − vπ∥²deval = Σ_{s∈S} deval(s)(vwF,d(s) − vπ(s))² = Σ_{s∈S} (deval(s)/d(s)) d(s)(vwF,d(s) − vπ(s))² ≤ κ(deval, d) Σ_{s∈S} d(s)(vwF,d(s) − vπ(s))² = κ(deval, d) ∥vwF,d − vπ∥²d.

Combined with the inequality in Equation (24), we obtain ∥vwF,d − vπ∥deval ≤ √κ(deval, d) C(d, F) ∥ΠF,dvπ − vπ∥d.

This corollary gives us the desired result, specifically with C(deval, d, F) = √κ(deval, d) C(d, F). The next question is when we can expect cF,d < 1 or cd < 1. More is known about when cd < 1, because this is the condition used to prove that on-policy TD and emphatic TD are convergent: under d = dπ and d = m, we know that cd < 1 (White, 2017, Theorem 1). It is clear that other weights d will give this result, but we also know certain off-policy settings where cd > 1, such as in Baird's counterexample.
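The norm-mismatch step in the proof of Corollary 6 is easy to verify numerically. A quick check with arbitrary weightings and random error vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def wnorm(v, d):
    """Weighted norm ||v||_d = sqrt(sum_s d(s) v(s)^2)."""
    return np.sqrt(np.sum(d * v ** 2))

# arbitrary weightings over 3 states
d_eval = np.array([0.5, 0.3, 0.2])
d = np.array([0.2, 0.3, 0.5])
kappa = np.max(d_eval / d)   # state weighting mismatch, Definition 5

# ||v||_{d_eval} <= sqrt(kappa) ||v||_d for every error vector v
for _ in range(100):
    v = rng.normal(size=3)
    assert wnorm(v, d_eval) <= np.sqrt(kappa) * wnorm(v, d) + 1e-12
```

Here κ = 0.5/0.2 = 2.5, so evaluating under d_eval can inflate the bound by at most √2.5 relative to the weighting used for the solution.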
Even if cd ≥ 1, we can still ensure the projected Bellman operator is a contraction by limiting either the set of weightings d or F so that cF,d < 1. Of course, if d = dπ or m, then automatically cF,d < 1 because, for all d and F, cF,d ≤ cd. We provide some characterization of these constants in the next proposition, including both known results and a few novel ones.

Proposition 7 The following statements are true about cd.
1. If d = dπ or d = m, then cd < 1.
2. If d = dπ and the discount is a constant γc < 1 (a continuing problem), then cd = γc.
3. If d = dπ and for some constant γc, γ(s, a, s′) ≤ γc for all (s, a, s′), then cd ≤ γc.
4. If dπ(s) > 0 iff d(s) > 0, then cd ≤ cdπ √(κ(d, dπ)κ(dπ, d)).

The proof for this result, as well as the remaining proofs, is in Appendix D. The previous proofs were included in the main body, as they provide useful intuition. There is an important relationship between the quality of the solution and the convergence of TD, given by cF,d. If cF,d < 1, then the projected Bellman operator is a contraction. Correspondingly, the update underlying TD converges. Off-policy TD, therefore, can converge, as long as d or F are chosen such that cF,d < 1. Additionally, notice that we only have bounds on the solution quality under this same criterion. This suggests the following pessimistic conclusion: gradient TD methods, even if they ensure convergence, may not ensure convergence to a quality solution. Therefore, we should eschew them altogether and instead focus on controlling the weighting d or the function space F. However, this conclusion only arises due to the looseness of the bound. The theorem statement requires cF,d < 1 for all of F. The proof itself only requires this contraction property for vwF,d and ΠF,dvπ.
The projected operator could be an expansion during the optimization and the final quality of the solution under gradient descent will still be good, as long as cFopt,d < 1 for Fopt ⊂ F a small subset of F that contains vwF,d and ΠF,dvπ.

Corollary 8 Assume there is a convex subset Fopt ⊂ F where vwF,d, ΠF,dvπ ∈ Fopt and cFopt,d < 1. Then ∥vwF,d − vπ∥d ≤ C(d, Fopt)∥ΠF,dvπ − vπ∥d.

An example of a problem that satisfies cFopt,d < 1 but not cF,d < 1 is Baird's counterexample (Baird, 1995). In fact, vπ ∈ F, and ∥vwF,d − vπ∥d = 0. Rather, the initialization starts in a part of the space where the projected Bellman operator is an expansion, causing off-policy TD to diverge. On the other hand, Kolter's counterexample is one where no such convex subset exists. The distribution d can be chosen to make A more and more singular, correspondingly causing the TD solution w = A−1b to grow. The distance between the solution of the PBE and the optimal value function can be arbitrarily large, and no such convex set Fopt can exist. Overall, these results suggest that we should combine all of our techniques to improve convergence and solution quality. The choice of F and d should be considered, particularly to improve solution quality. If our function class contains the true value function vπ, then the approximation error ∥Πdvπ − vπ∥d = 0 and the VE is zero regardless of the operator constant. Even in this ideal scenario, TD methods can diverge. Gradient methods should be used to avoid convergence issues, because the projected Bellman operator may not be a contraction in all parts of the space. Gradient methods can help us reach a part of the space Fopt where cFopt,d < 1, and so provide an upper bound on the quality of the solution.
An exciting open question remains as how to select F and d, to obtain cFopt,d < 1. 6.2 Upper Bound on VE when H \u2287F We next consider more general projections, i.e., for any function space H \u2287F, that includes the BE and PBE as special cases. We show two di\ufb00erent approaches to upper bounding the VE: re-expressing the objective using oblique projections and characterizing the di\ufb00erence to the BE, for which there is a straightforward bound on the VE. 6.2.1 The PBE as an Oblique Projection The goal of this section is to rewrite the PBE as an oblique projection, which then gives a generic bound on the value error. The bound relies on the norm of the oblique projection, which is not easily computable. But, it provides a potential direction for special cases where the norm of this projection might be able to be simpli\ufb01ed. We start by re-expressing the generalized PBE as a weighted VE, using the same approach as Schoknecht (2003) and Scherrer (2010). Notice \ufb01rst that v\u03c0 = (I \u2212P\u03c0,\u03b3)\u22121r\u03c0. Then the generalized PBE for any vw \u2208F, written in projection form, is \u2225\u03a0H,d(Tvw \u2212vw)\u22252 d = \u2225\u03a0H,d(r\u03c0 + P\u03c0,\u03b3vw \u2212vw)\u22252 d = \u2225\u03a0H,d(r\u03c0 \u2212(I \u2212P\u03c0,\u03b3)vw)\u22252 d = \u2225\u03a0H,d[(I \u2212P\u03c0,\u03b3)v\u03c0 \u2212(I \u2212P\u03c0,\u03b3)vw]\u22252 d \u25b7r\u03c0 = (I \u2212P\u03c0,\u03b3)v\u03c0 = \u2225\u03a0H,d(I \u2212P\u03c0,\u03b3)(v\u03c0 \u2212vw)\u22252 d = \u2225v\u03c0 \u2212vw\u22252 H \u25b7H def = (I \u2212P\u03c0,\u03b3)\u22a4\u03a0\u22a4 H,dD\u03a0H,d(I \u2212P\u03c0,\u03b3) Minimizing the generalized PBE therefore corresponds to minimizing the VE with a reweighting over states that may no longer be diagonal, as H is not a diagonal matrix. In fact, we can see that the solution to the generalized PBE is a projection of v\u03c0 onto set F under weighting H, namely v = \u03a0F,Hv\u03c0. 
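The identity derived above (that the generalized PBE equals a VE under the non-diagonal weighting H) can be checked numerically in the linear case, where the projection matrix is ΠH,d = Φ(Φ⊤DΦ)⁻¹Φ⊤D for a feature matrix Φ. The random MDP below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 4, 0.9
P = rng.dirichlet(np.ones(n), size=n)    # random transition matrix
r = rng.normal(size=n)
d = np.full(n, 1.0 / n)
D = np.diag(d)

Phi = rng.normal(size=(n, 2))            # features defining linear H
Proj = Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D)   # Pi_{H,d}

P_gamma = gamma * P
v_pi = np.linalg.solve(np.eye(n) - P_gamma, r)
v_w = rng.normal(size=n)                 # any candidate value function

# left side: ||Pi_{H,d}(T v_w - v_w)||_d^2
delta_bar = r + P_gamma @ v_w - v_w
pbe = (Proj @ delta_bar) @ D @ (Proj @ delta_bar)

# right side: ||v_pi - v_w||_H^2 with H = (I-P)^T Pi^T D Pi (I-P)
A = np.eye(n) - P_gamma
H = A.T @ Proj.T @ D @ Proj @ A
ve_H = (v_pi - v_w) @ H @ (v_pi - v_w)
assert np.isclose(pbe, ve_H)
```

The two sides agree for any candidate v_w, which is exactly the statement that minimizing the generalized PBE is minimizing a VE under the oblique weighting H.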
A projection under such a non-diagonal weighting is called an oblique projection. Using this form, we can obtain an upper bound using a similar approach to (Scherrer, 2010, Proposition 3), with proof in Appendix D. 29 \fTheorem 9 If H \u2287F, then the solution vwH,d to the generalized PBE satis\ufb01es \u2225v\u03c0 \u2212vwH,d\u2225d \u2264\u2225\u03a0F,H\u2225d\u2225v\u03c0 \u2212\u03a0F,dv\u03c0\u2225d. (26) We can next extend a previous result, which expressed this projection under linear function approximation for both the TD \ufb01xed point and the solution to the BE. We can now more generally express the oblique projection for interim H, with proof in Appendix D. Corollary 10 Assume F is the space of linear functions with features x and H the space of linear functions with features \u03c6, with F \u2286H. Then the solution to the PBE is vwH,d = XwH,d for wH,d = (M\u22a4(I \u2212P\u03c0,\u03b3)X)\u22121M\u22a4r\u03c0 (27) for M def = \u03a0\u22a4 H,dD(I \u2212P\u03c0,\u03b3)X. Further, vwH,d = \u03a0F,Hv\u03c0 for \u03a0F,H = X(M\u22a4(I \u2212P\u03c0,\u03b3)X)\u22121M\u22a4(I \u2212P\u03c0,\u03b3). (28) 6.2.2 The Distance between the PBE and the BE There is a well known upper bound on the VE, in terms of the BE. If we can characterize the distance of the PBE to the BE, we could potentially exploit this known result to upper bound the VE in terms of the PBE. Such a characterization should be possible because, for a su\ufb03ciently large H, the PBE is equivalent to the BE. For smaller H, we can view the PBE as an approximation to the BE, and so can characterize that approximation error. For a given v \u2208F, let h\u2217 v(s) = E[\u03b4(v)|S = s] and de\ufb01ne ApproxError(H, v) def = min h\u2208H \u2225h\u2217 v \u2212h\u2225d with worst-case approximation error ApproxError(H) def = max v\u2208F ApproxError(H, v) (29) where we overload notation because the distinction is clear from the arguments given. 
Then we can obtain the following result, with proof in Appendix D.

Theorem 11 For any v ∈ F,

∥vπ − v∥d ≤ ∥(I − Pπ,γ)−1∥d ∥T v − v∥d (the BE)
≤ ∥(I − Pπ,γ)−1∥d (∥ΠH,dT v − v∥d (the PBE) + ApproxError(H, v))
≤ ∥(I − Pπ,γ)−1∥d (∥ΠH,dT v − v∥d + ApproxError(H)).

Note that if cd < 1, then ∥(I − Pπ,γ)−1∥d ≤ (1 − cd)−1. The upper bound depends on the value of the PBE and the approximation error of H. For H = F, the PBE is zero but the approximation error is likely higher. As H gets bigger, the approximation error gets smaller, but the PBE is also larger. Once H is big enough to include h∗ for a given v, then the approximation error is zero but the PBE is at its maximum, which is to say it is equal to the BE. This upper bound is likely minimized for an interim value of H that balances between the error from the PBE and the approximation error. It is important to recognize that the approximation error for F and H can be quite different. For example, if vπ ∈ F, then ApproxError(H, vπ) = 0 if 0 ∈ H. The function that returns zero for every state should be in H to ensure zero approximation error, but need not be in F. For many convex function spaces we consider, all convex combinations of functions are in F, and it is likely that we have 0 in F. But nonetheless, there are likely instances where F contains near-optimal value functions, but that same set produces high approximation error for H. An important future direction is to better understand the differences in function spaces needed for F and H.

7. Algorithms for the Generalized PBE

The linear PBE is often optimized using gradient correction algorithms as opposed to saddlepoint methods.
The canonical methods are TDC (gradient corrections) and GTD2 (saddlepoint), where TDC has been consistently shown to perform better than GTD2 (White and White, 2016; Ghiassian et al., 2020). We show that similar gradient correction algorithms arise for the generalized PBE, and discuss such an algorithm called TDRC.

7.1 Estimating the Gradient of the Generalized PBE

To see why (at least) two classes of algorithms arise, consider the gradient of the generalized PBE for a given h(s) ≈ Eπ[δ(w) | S = s], with a stochastic sample δ(w) from S = s: −∇wδ(w)h(s) = h(s)[∇wv̂w(s) − γ∇wv̂w(S′)]. This is the standard saddlepoint update. The key issue with this form is that any inaccuracy in h has a big impact on the update of w, and h can be highly inaccurate during learning. Typically, it is initialized to zero, and so it multiplies the update to the primary weights by a number near zero, making learning slow. The gradient correction update is preferable because it relies less on the accuracy of h; its first term uses only the sampled TD error: ∆w ← δ(w)∇wv̂w(s) − hθ(s)γ∇wv̂w(S′), where ∆θ ← (δ(w) − hθ(s))∇θhθ(s), just like the saddlepoint update. But the update is biased, because it assumes it has the optimal h∗ ∈ H for part of the gradient. To see why, we extend the derivation for the linear setting.
−∇wδ(w)h(s) = h(s)[∇wv̂w(s) − γ∇wv̂w(S′)]
= h(s)∇wv̂w(s) − h(s)γ∇wv̂w(S′)
= (h(s) − δ(w) + δ(w))∇wv̂w(s) − h(s)γ∇wv̂w(S′)
= δ(w)∇wv̂w(s) + (h(s) − δ(w))∇wv̂w(s) − h(s)γ∇wv̂w(S′)

This resembles the gradient correction update, except that it has the extra term (h(s) − δ(w))∇wv̂w(s). In the linear setting, if we have the linear regression solution for h∗ with parameters θ, then this second term is zero in expectation. This is because ∇wv̂w(s) = x(s), giving Eπ[(h(s) − δ(w))∇wv(s, w) | S = s] = x(s)x(s)⊤θ − x(s)Eπ[δ(w) | S = s], and so in expectation across all states, because θ = E[x(S)x(S)⊤]−1E[x(S)δ(w)], we get that E[(h(S) − δ(w))∇wv(S, w)] = E[x(S)x(S)⊤]E[x(S)x(S)⊤]−1E[x(S)δ(w)] − E[x(S)δ(w)] = E[x(S)δ(w)] − E[x(S)δ(w)] = 0. Therefore, given the optimal h ∈ H for H the set of linear functions, this term can be omitted in the stochastic gradient and the result is still an unbiased estimate of the full gradient. More generally, the same reasoning applies if h(s) can be re-expressed as a linear function of ∇wv̂w(S). This provides further motivation for using features produced by the gradient of the values, as in the nonlinear PBE, to estimate h. Another choice is to use the features in the last layer of the neural network used for v̂w(S). Because the output is a linear weighting of features from the last layer, ∇wv̂w(S) includes this last layer as one part of the larger vector. A head for h can be added to the neural network, where h is learned as a linear function of this layer.
Its updates do not influence the neural network itself—gradients are not passed backwards through the network—to ensure it is a linear function of the last layer. Unlike the saddlepoint update, however, the gradient correction update is no longer a straightforward gradient update, complicating analysis. It is possible, however, to analyze the dynamical system underlying these updates. The asymptotic solution does not require the omitted term, under certain conditions on h, as discussed above. If the dynamical system moves towards this stable asymptotic solution, then convergence can be shown. The TDC update relies on just such a strategy: the joint update is rewritten as a linear system that is then shown to be a contraction that iterates towards a stable solution (Maei, 2011). The extension to the nonlinear setting is an important open problem. Theoretical work for these updates under nonlinear function approximation has been completed, for an algorithm called SBEED (Dai et al., 2018). SBEED uses a gradient correction update, but the theory is for the saddlepoint version. This work investigates a slightly different saddlepoint update, learning h(s) ≈ E[R + γv̂w(S′) | S = s] instead of estimating the entire TD error. A TD-error estimate can be obtained using h(s) − v̂w(s). Even under this alternate route, it was reconfirmed that the gradient correction update was preferable—since the final proposed algorithm used this form—but the theory remains non-trivial.

Remark: Another way to interpret gradient correction algorithms is as approximations of the gradient of the BE.
We can consider two forms for the negative gradient of the BE:

Eπ[δ(w) | S = s] Eπ[∇wv̂w(s) − γ∇wv̂w(S′) | S = s]    or    Eπ[δ(w) | S = s] ∇wv̂w(s) − Eπ[δ(w) | S = s] Eπ[γ∇wv̂w(S′) | S = s]

because v̂w(s) is not random. We can estimate the first form of the gradient using an estimate h(s) ≈ Eπ[δ(w) | S = s]: h(s)(∇wv̂w(s) − γ∇wv̂w(S′)). This corresponds to a saddlepoint update. To estimate the second form of the gradient, notice that we do not have a double sampling problem for the first term. This means we can use δ to compute an unbiased sample for the first term: δ∇wv̂w(s) − h(s)γ∇wv̂w(S′). This strategy corresponds to the gradient correction update.

7.2 TDRC: A Practical Gradient-Based Prediction Algorithm

In this section we incorporate the above insights into an easy-to-use gradient-based algorithm called TD with Regularized Corrections (TDRC). Empirical work with TDC suggests that it performs well because the secondary stepsize is set small, and thus it is effectively using conventional off-policy TD (Ghiassian et al., 2020). TDRC builds on this insight, providing a method that performs similarly to TD when TD performs well, while maintaining the theoretical soundness of TDC. The key idea is to regularize h to be near zero, using an ℓ2 regularizer, where if h(s) = 0 then the update corresponds to the TD update. The motivation for this approach is that it can help learn h more quickly, without incurring much bias. In the linear setting with F = H, the parameters θ = 0 for h∗ at the TD fixed point. Applying ℓ2 regularization to the secondary weights, therefore, incurs little bias asymptotically.
But adding a strongly convex regularizer can improve the convergence rate of the secondary variable, and reduce the variance of the estimator. In fact, this perspective on h(s) helps to explain the unreasonably good performance of TD in many practical settings. TD is equivalent to TDC with h(s) = 0, which is a zero-variance but highly biased estimate of the expected TD error. Asymptotically, as our estimates for the primary variable improve, the bias of this heuristic choice of h decreases until we converge to the same fixed-point as TDC (in the cases where TD converges). The TDRC update equations with importance sampling ratios ρ(s, a) = π(a|s)/b(a|s) are

∆w ← ρ(s, a)δ(w)∇wv̂w(s) − ρ(s, a)hθ(s)γ∇wv̂w(S′)
∆θ ← ρ(s, a)(δ(w) − hθ(s))∇θhθ(s) − βθ

where w and θ are the parameters of v : S → R and h : S → R respectively. Notice that the only difference between TDRC and TDC is the inclusion of the −βθ term in the update for θ. It can be shown in the linear setting with F = H that the fixed-points of the TDRC dynamical system are the same as those of the TDC system and that, under some conditions on the learning rates, TDRC converges to its fixed-point on any MDP (Ghiassian et al., 2020). By adding an ℓ2 regularizer with regularization parameter β to TDC, we can interpolate between TD and TDC. With β very large, the solution set of h(s) becomes diminishingly small until it contains only the point h(s) = 0 as β → ∞. On the other hand, as β approaches zero, the solution set of h(s) becomes that of the PBE and TDRC approaches TDC. Because β scales between two known good algorithms, this suggests that the impetus to highly tune β is small: many choices of β should yield a reasonably performing algorithm.
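In the linear case the TDRC update above can be sketched in a few lines; the two-state chain, step-sizes, and β value below are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

def tdrc_step(w, theta, x, xp, r, rho, gamma, alpha, eta, beta):
    """One linear TDRC update: a TDC-style gradient-correction step for w,
    plus a secondary step for theta with the extra -beta*theta regularizer."""
    delta = r + gamma * (w @ xp) - (w @ x)
    h = theta @ x
    w = w + alpha * rho * (delta * x - gamma * h * xp)
    theta = theta + eta * (rho * (delta - h) * x - beta * theta)
    return w, theta

# on-policy (rho = 1) two-state deterministic cycle with reward 1 and
# gamma = 0.5, so the true values are v = 2 in both states
X = np.eye(2)                      # tabular features
w, theta = np.zeros(2), np.zeros(2)
s = 0
for _ in range(5000):
    sp = 1 - s
    w, theta = tdrc_step(w, theta, X[s], X[sp], 1.0, 1.0, 0.5, 0.1, 0.1, 1.0)
    s = sp
assert np.allclose(w, [2.0, 2.0], atol=0.05)
```

Setting beta=0 recovers TDC, and forcing theta to stay at zero recovers plain TD, matching the interpolation described above.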
There are many options to approximate both v(s) and h(s) using neural networks. In our experiments, we chose to use a single network with two sets of outputs, one to predict v(s) and the other to predict h(s). To avoid di\ufb03culties with balancing between two loss functions, we only use the gradients from the primary update to adjust the weights of the network and use the gradients from the secondary update only to adjust the weights for that head. The resulting architecture for estimating action values is shown in Figure 6. 33 \f8. Extending to Control The previous development was strictly for policy evaluation. The formulation of a sensible generalized PBE for control, however, can be obtained using a similar route. The conjugate form has already been used to develop a novel control algorithm for nonlinear function approximation, called SBEED (Dai et al., 2018). The SBEED algorithm explicitly maintains a value function and policy, to incorporate entropy regularization, and uses the gradient correction update. We develop an alternative control algorithm that learns only action-values and uses the gradient correction update. 8.1 The Control Objective Our goal is to approximate q\u2217: S \u00d7 A \u2192R, the action-values for the optimal (soft) policy. Instead of the Bellman operator, we use the Bellman optimality operator or generalizations that use other forms of the max but are still guaranteed to be contractions, like the mellow-max operator (Asadi and Littman, 2017). Let m be the given max operator, that takes action-values and returns a (soft) greedy value. In Q-learning, we use a hard-max m(q(s, \u00b7)) = maxa q(s, a) and in mellow-max, m(q(s, \u00b7)) = \u03c4 \u22121 log \u0010 1 |A| P a\u2208A exp(\u03c4q(s, a)) \u0011 . As \u03c4 \u2192\u221e, the mellow-max operator approaches the hard-max operator. 
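The mellow-max operator defined above can be computed in a numerically stable way via the log-sum-exp trick. A minimal sketch (the function name is ours), also checking that large \u03c4 approaches the hard max:

```python
import numpy as np

def mellow_max(q, tau):
    """m(q) = (1/tau) * log( (1/|A|) * sum_a exp(tau * q_a) ),
    computed stably by shifting by the max before exponentiating."""
    q = np.asarray(q, dtype=float)
    m = q.max()
    return m + np.log(np.mean(np.exp(tau * (q - m)))) / tau

q = [1.0, 2.0, 3.0]
assert abs(mellow_max(q, 1e4) - 3.0) < 1e-3   # approaches the hard max
assert mellow_max(q, 1e-6) < 3.0              # soft for small tau
```

For equal action values, mellow-max returns that common value for any \u03c4, since the mean inside the log is exactly one.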
The Bellman optimality operator Tm corresponds to (Tmq)(s, a) def = E \u0002 R + \u03b3(S, A, S\u2032)m(q(S\u2032, \u00b7)) | S = s, A = a \u0003 (30) where the expectation is over next state and reward. The \ufb01xed point of this operator is q\u2217. To learn \u02c6 qw(s, a) approximating q\u2217(s, a), we de\ufb01ne the BE for control BE(w) def = X s,a d(s, a)E[\u03b4(w) | S = s, A = a]2 (31) where \u03b4(w) def = R + \u03b3(S, A, S\u2032)m(\u02c6 qw(S\u2032, \u00b7)) \u2212\u02c6 qw(S, A) and d : S \u00d7 A \u2192[0, \u221e) is some weighting. We overload the notation for the weighting d, to make the connection to the previous objectives clear. We can rewrite this objective using conjugates, to get BE(w) = X s\u2208S,a\u2208A d(s, a) max h\u2208R \u00002E[\u03b4(w) | S = s, A = a] h \u2212h2\u0001 = max h\u2208Fall X s\u2208S,a\u2208A d(s, a) \u00002E[\u03b4(w) | S = s, A = a] h(s, a) \u2212h(s, a)2\u0001 . As before, this maximization can be rewritten as a minimization, where the optimal h\u2217(s, a) = E[\u03b4(w) | S = s, A = a]. This equivalence is true for the hard-max operator or the mellow-max, even though the operator is no longer smooth. Finally, in practice, we will learn an approximate h, from the set H, resulting in a PBE for control: PBE(w) def = max h\u2208H X s\u2208S,a\u2208A d(s, a) \u00002E[\u03b4(w) | S = s, A = a] h(s, a) \u2212h(s, a)2\u0001 . This objective is the \ufb01rst generalized PBE for learning action-values for control. 34 \fFigure 6: Visualization of the neural network architecture used to approximate \u02c6 q(s, a) and h(s, a) for the QRC algorithm. TDRC uses the same network architecture, except with only one output for each head of the network predicting \u02c6 v(s) and h(s) respectively. The green shaded regions represent gradient information from the update to \u02c6 q(s, a) and the red shaded region represents the gradient information from the update h(s, a). 
Notice only \u02c6 q(s, a) modi\ufb01es the weights of the earlier layers of the network. The algorithm is a simple modi\ufb01cation of the policy evaluation algorithms above, described brie\ufb02y here and expanded upon more in the next section. The update to h(s, a) is still a gradient of a squared error to the TD error. The saddlepoint gradient update, for a given h(s, a) with a stochastic sample \u03b4(w) from S = s, A = a: \u2212\u2207w\u03b4(w)h(s, a) = h(s, a)[\u2207w\u02c6 qw(s, a) \u2212\u03b3\u2207wm(\u02c6 qw \u0000S\u2032, \u00b7 \u0001 )] with gradient correction form \u03b4(w)\u2207w\u02c6 qw(s, a) \u2212\u03b3h(s, a)\u2207wm(\u02c6 qw \u0000S\u2032, \u00b7 \u0001 ). Both involve taking a gradient through the max operator m. For the hard-max operator, this results in a subgradient. The mellow-max operator, on the other hand, is di\ufb00erentiable with derivative \u2202 \u2202wi m(\u02c6 qw(s, \u00b7)) = 1 P a\u2208A exp(\u03c4 \u02c6 qw(s, a)) X a\u2208A exp(\u03c4 \u02c6 qw(s, a)) \u2202 \u2202wi \u02c6 qw(s, a) We can reason similarly about the validity of the gradient correction update. \u2212\u2207w\u03b4(w)h(s, a) = (h(s, a) \u2212\u03b4(w) + \u03b4(w))\u2207w\u02c6 qw(s, a) \u2212h(s, a)\u03b3\u2207wm(\u02c6 qw \u0000S\u2032, \u00b7 \u0001 ) = \u03b4(w)\u2207w\u02c6 qw(s, a) + (h(s, a) \u2212\u03b4(w))\u2207w\u02c6 qw(s, a) \u2212h(s, a)\u03b3\u2207wm(\u02c6 qw \u0000S\u2032, \u00b7 \u0001 ). As before, we can conclude that we can drop this second term, as long as the optimal h \u2208H is representable as a linear function of \u2207w\u02c6 qw(s, a). The \ufb01xed point for the gradient correction updates that drop the term (h(s, a) \u2212\u03b4(w))\u2207w\u02c6 qw(s, a) will still converge to the same \ufb01xed point, if they converge. The key question that remains is, if the dynamical system produced by these equations does in fact converge. 
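The mellow-max derivative given above is a softmax weighting of the individual action-value gradients. A small sketch (names are ours) that verifies the analytic form against a finite-difference check, taking the action values themselves as the parameters so the inner Jacobian is the identity:

```python
import numpy as np

def mellow_max(q, tau):
    m = q.max()
    return m + np.log(np.mean(np.exp(tau * (q - m)))) / tau

def mellow_max_grad(q, tau):
    # d m / d q_i = exp(tau * q_i) / sum_a exp(tau * q_a), i.e. softmax(tau*q),
    # matching the derivative formula in the text with dq_a/dw_i = delta_{ai}.
    z = np.exp(tau * (q - q.max()))
    return z / z.sum()

q = np.array([0.3, -1.2, 0.8])
tau = 2.0
g = mellow_max_grad(q, tau)
eps = 1e-6
for i in range(len(q)):
    e = np.zeros_like(q); e[i] = eps
    num = (mellow_max(q + e, tau) - mellow_max(q - e, tau)) / (2 * eps)
    assert abs(num - g[i]) < 1e-5  # analytic gradient matches finite differences
```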
8.2 QRC: A Practical Gradient-Based Control Algorithm In this section, we modify the TDRC algorithm speci\ufb01ed in Section 7.2 for control. Let \u03b8t,At be the weights for only the secondary head predicting h(St, At), xt the last layer of the 35 \fnetwork\u2014giving the features for h\u2014and wt all of the remaining weights in the network. We use the mellowmax operator, to get the following updates for the QRC algorithm \u03b4t = Rt+1 + \u03b3m(\u02c6 qw(St+1, \u00b7)) \u2212\u02c6 qw(St, At), and ht = \u03b8\u22a4 t,Atxt wt+1 \u2190wt + \u03b1\u03b4t\u2207w\u02c6 qw(S, A) \u2212\u03b1\u03b3ht\u2207wm(\u02c6 qw \u0000S\u2032, \u00b7 \u0001 ) \u03b8t+1,At \u2190\u03b8t,At + \u03b1 [\u03b4t \u2212ht] xt \u2212\u03b1\u03b2\u03b8t,At QRC is similar to SBEED (Dai et al., 2018), but has two key di\ufb00erences. The \ufb01rst is that SBEED learns a state-value function and an explicit policy. We learn action-values and use a direct mellowmax on the action-values to compute the policy. The other key di\ufb00erence is that the SBEED update uses a di\ufb00erent form of gradient correction, which estimates only the Bellman step instead of the TD error in order to correct the gradient. Recall that QRC interpolates between a gradient correction update at one extreme of its hyperparameter (\u03b2 = 0) and Q-learning at the other extreme (large \u03b2). SBEED, on the other hand, interpolates between a gradient correction update and a residual-gradient method minimizing the mean squared TD error. When QRC converges, it converges to the same \ufb01xed-point for all values of its hyperparameter. SBEED, however, interpolates between the \ufb01xed-point for the PBE and the mean squared TD error. 9. Empirical Investigation of the Generalized PBE for Control In this section, we empirically investigate QRC. We \ufb01rst provide a comparison of QRC with Q-learning and SBEED across four benchmark domains. Then we delve more deeply into the design choices in QRC. 
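The QRC updates above can be sketched in the linear case, with q\u02c6w(s, b) = w[b]\u00b7x(s) and a per-action secondary head h(s, b) = \u03b8[b]\u00b7x(s). This is our simplification of the paper's two-headed network, with our own names, not the released code:

```python
import numpy as np

def qrc_step(w, theta, x, a, x_next, r, gamma, tau, alpha, beta):
    """One QRC update: gradient correction through mellow-max, plus an
    l2-regularized secondary update for the head of the taken action."""
    q_next = w @ x_next                      # action values at S'
    m = q_next.max()
    mellow = m + np.log(np.mean(np.exp(tau * (q_next - m)))) / tau
    delta = r + gamma * mellow - w[a] @ x    # TD error with mellow-max target
    h = theta[a] @ x
    soft = np.exp(tau * (q_next - m)); soft /= soft.sum()
    w, theta = w.copy(), theta.copy()
    # grad of mellow-max w.r.t. w[b] is the softmax-weighted x_next.
    w -= alpha * gamma * h * np.outer(soft, x_next)
    w[a] += alpha * delta * x
    theta[a] += alpha * (delta - h) * x - alpha * beta * theta[a]
    return w, theta
```

As in the text, beta = 0 recovers the pure gradient-correction update, while a large beta pushes h toward zero and the update toward Q-learning.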
We compare using gradient corrections within QRC to the saddlepoint form in Section 9.4. We then investigate how QRC performs with separate bases for h, including a more powerful basis for h, in Section 9.5. 9.1 Benchmark Environments and Experimental Design We used four simulation domains with neural network function approximation. We chose simulation domains with a sufficiently small state dimension to efficiently compare algorithms across many different random initializations of the neural network, while still running the number of experiment repetitions required for statistical significance. On the other hand, to tease out differences between algorithms, we require domains with sufficiently complex learning dynamics. We chose three classic control domains known to be challenging when approximation resources and agent-environment interactions are limited: Acrobot (Sutton, 1996), Cart Pole (Barto et al., 1983), and Mountain Car (Moore, 1990). We also used Lunar Lander (Brockman et al., 2016) to investigate performance in a domain with a dense reward function and moderately higher-dimensional state. The network architectures were as follows. For Acrobot and Mountain Car, we used two-layer fully-connected neural networks with 32 units in each layer and a ReLU transfer. The output layer has an output for each action-value and uses a linear transfer. For the Cart Pole and Lunar Lander domains, we used the same architecture except with 64 units in each hidden layer. We use a shared network with multiple heads for all algorithms unless otherwise specified. Code for all experiments is available at https://github.com/rlai-lab/Generalized-Projected-Bellman-Errors In experiments with policy-gradient-based methods the parameterized policy uses an independent neural network. We do not use target networks. We swept consistent values of the hyperparameters for every experiment.
We swept the stepsize parameter over a wide range \u03b1 \u2208{2\u221212, 2\u221211, . . . , 2\u22127} for every algorithm. For algorithms which chose a stepsize on the boundary of this range\u2014for instance, GQ often chose the smallest stepsize\u2014we performed a one-o\ufb00test to ensure that the range was still representative of the algorithm\u2019s performance. All algorithms used mellowmax, with \u03c4 swept in the range \u03c4 \u2208{0, 10\u22124, 10\u22123, . . . , 100}, including 0 to allow algorithms to choose to use a hard-max. Algorithms based on the SBEED update have an additional hyperparameter \u03b7 which interpolates between the gradient correction update and a residual gradient update. For all experiments we swept values of \u03b7 \u2208{10\u22123, 10\u22122, 10\u22121, 100} and the ratio between the actor and critic stepsizes \u03bd \u2208{2\u22124, 2\u22123, . . . , 21}, often giving SBEED algorithms twenty-four times as many parameter permutations to optimize over compared to other algorithms. Likewise, we allowed saddlepoint methods (GQ) to optimize over the regularization parameter \u03b2 \u2208{0, 0.5, 1, 1.5}, to give them an opportunity to perform well. The remaining hyper-parameters were not swept, but instead set to reasonable defaults. We used a replay bu\ufb00er to store the last 4000 transitions, then sampled 32 independent transitions without replacement to compute mini-batch averaged updates. We used the ADAM optimizer (Kingma and Ba, 2015) for all experiments with the default hyperparameters, a momentum term of \u03b21 = 0.9 and a squared-gradient term of \u03b22 = 0.999. We additionally tested Stochastic Gradient Descent and RMSProp and found that most conclusions remain the same, so choose not to include these results to focus the presentation of results. 
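The "twenty-four times as many parameter permutations" claim follows directly from the sweep sizes stated above. A quick sketch, with the swept sets reconstructed from the text:

```python
from itertools import product

alphas = [2.0**-k for k in range(7, 13)]             # 6 stepsizes, 2^-12..2^-7
taus = [0.0] + [10.0**-k for k in range(4, -1, -1)]  # 6 mellow-max temperatures
etas = [10.0**-k for k in range(3, -1, -1)]          # 4 values, SBEED only
nus = [2.0**-k for k in range(4, -2, -1)]            # 6 stepsize ratios, SBEED only

base = len(list(product(alphas, taus)))              # shared sweep: 36 settings
sbeed = len(list(product(alphas, taus, etas, nus)))  # SBEED sweep: 864 settings
assert sbeed == 24 * base  # the extra eta x nu grid gives the 24x factor
```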
For each of the four domains we use a discount factor of \u03b3 = 0.99 and cut off long-running episodes at 500 steps for Acrobot and Cart Pole and 1000 steps for Mountain Car. On episode cutoff events, we do not make an update to the algorithm weights to avoid bootstrapping over this imaginary transition, and on true episode termination steps we update with \u03b3 = 0. We use a non-conventional performance measure to more fairly report algorithm performance. A common performance metric is to report the cumulative reward at the end of each episode, running each algorithm for a consistent number of episodes. This choice causes algorithms to have different amounts of experience and updates. Some algorithms use more learning steps in the first several episodes and achieve higher asymptotic performance because they effectively learned for more steps. We instead report the cumulative reward from the current episode on each step of the current episode. For example, in Mountain Car, if the kth episode takes 120 steps, then we would record -120 for each step of the episode. We then run each algorithm for a fixed number of steps instead of a fixed number of episodes, so that each algorithm gets the same number of learning steps and a consistent amount of data from the environment. We record performance over 100,000 steps, recorded every 500 steps (rather than every step) to reduce storage costs. To avoid tuning the hyperparameters for each algorithm for every problem, we start by investigating a single set of hyperparameters for each algorithm across all four benchmark domains. We evaluate the hyperparameters according to mean performance over runs, for each domain. We then use a Condorcet voting procedure to find the single hyperparameter setting that performs best across all domains. Figure 7: Learning curves using the best performing hyperparameters across four domains.
The learning curves above are averaged over 100 independent runs and shaded regions correspond to one standard error. Figure 8: Distribution of average returns over hyperparameter settings for each benchmark domain. The vertical axis represents the average performance of each hyperparameter setting (higher is better) and the width of each curve represents the proportion of hyperparameters which achieve that performance level, using a \ufb01tted kernel density estimator. The solid horizontal bars show the maximum, mean, and minimum performance respectively and the dashed horizontal bar represents the median performance over hyperparameters. QRC in blue generally performs best and exhibits less variability across hyperparameter settings. 9.2 Overall Results in the Benchmark Environments Figure 7 shows the learning curves for each algorithm with the single best performing hyperparameter setting across domains. QRC was the only algorithm to consistently be among the best performing algorithms on every domain and was the only algorithm with a single hyperparameter setting that could solve all four domains. Although SBEED was given twenty-four times as many hyperparameter combinations to optimize over, its performance was consistently worse than all other benchmark algorithms. This suggests that the voting procedure was unable to identify a single hyperparameter setting that was consistently good across domains. We additionally include the nonlinear control variant of TDC with a locally linear projection which we call QC-LL (Maei et al., 2009). QC-LL performed well on the two simpler domains, Acrobot and Cart Pole, but exhibited poor performance in the two more challenging domains. We report other voting procedures as well as the performance of the best hyperparameters tuned for each domain independently in Appendix H. 
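The per-step performance measure described in Section 9.1 can be sketched as a small post-hoc transformation of episode lengths into a fixed-length trace. This is our own illustration (names are ours) of the cost-to-goal style recording used for Mountain Car:

```python
def per_step_returns(episode_lengths, max_steps):
    """Record, at every environment step, the negative length of the
    episode that step belongs to, truncated to a fixed step budget so
    every algorithm is scored on the same amount of experience."""
    trace, t = [], 0
    for length in episode_lengths:
        for _ in range(length):
            if t >= max_steps:
                return trace
            trace.append(-length)
            t += 1
    return trace

# A 120-step episode contributes -120 at each of its 120 steps.
trace = per_step_returns([120, 80], max_steps=150)
assert trace[:120] == [-120] * 120 and trace[120] == -80
```

Because the trace is indexed by environment steps rather than episodes, fast and slow learners are compared on equal data.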
38 \fTo understand how hyperparameter selection impacts the performance of each algorithm, we visualize the performance distribution over hyperparameters in Figure 8. Ideally, we prefer the variability in performance across hyperparameters to be small and the distribution to be concentrated around a single performance level near the top of the plot. Plots with an hourglass shape represent bimodal distributions where several hyperparameter values perform well and several perform poorly. Plots where the mean and median horizontal markers are quite separated indicate highly skewed distributions. All algorithms exhibit di\ufb00erent hyperparameter performance distributions across the four domains. The SBEED algorithm often has one (or a few) hyperparameter setting(s) which perform well, especially in the Cart Pole domain. This suggests that SBEED is highly sensitive to its hyperparameters. and even small deviations from the ideal hyperparameters can lead to very low performance. Q-Learning generally exhibits wide spread of performance over hyperparameters, with a high skew in two of the problem settings. QRC has much lower spread of performance over hyperparameters often with the bulk of the distribution located near the highest performing hyperparameter setting. 9.3 Investigating Variability Across Runs In the previous section we investigated performance averaged across runs; in this section, we investigate how much the algorithms vary across runs. Two algorithms could have similar average returns across runs, even if one has many poor runs and many high performing runs and the other has most runs that have reasonable performance. To observe these di\ufb00erences, we visualize the distribution of returns across 100 runs, as in Figure 9. The horizontal axis shows the return achieved for a given run, averaged over the last 25% of steps. 
The vertical axis is the proportion of independent runs that achieve that return, using both a histogram with 30 bins and a \ufb01tted Gaussian kernel density estimator. In this example plot, we can see that QRC and Q-learning have similar distributions over returns across runs, QC-LL is peaked at a lower return and SBEED exhibits some bimodality, with several reasonable runs and a few runs with low return. Figure 9: The performance distribution over runs for the best performing hyperparameter settings for each algorithm on Lunar Lander. The horizontal axis represents the average episodic return over the last 25% of steps. The vertical axis for each subplot represents the proportion of trials that obtained a given level of performance. The plot shows the empirical histogram and kernel density estimator for the performance distribution over 100 independent trials. Mass concentrated to the right indicates better performance. 39 \fFigure 10: Sensitivity to stepsize parameter. Distribution of the return per episode for the last 25% of episodes across choice of stepsize. Each row of this \ufb01gure correponds to the performance on each algorithm across domains for one value of the stepsize parameter. Each subplot is exactly like Figure 9: the distribution of performance for all four algorithms using a particular stepsize parameter value on a single domain. The highlighted plots in each column represent the best performing stepsize parameter value. QRC consistently exhibits a narrow distribution of performance where the bulk of the distribution is on the upper end of the performance metric (towards the right is better). Q-learning and Nonlinear QC both have wide performance distributions on all domains and exhibit bimodal distributions on Mountain Car. SBEED tends to exhibit bimodal performance often, with a non-trivial proportion of runs which fail to learn beyond random performance. 
Because each algorithm prefers di\ufb00erent stepsizes, we provide these distribution plots across stepsizes, optimizing the remaining hyperparameters for each stepsize. Figure 10 shows the distribution for every algorithm, across every domain and swept stepsize value. For every algorithm and domain, the distribution with the highest mean is highlighted with a bold color; distributions for all other stepsizes are shown with a faded color. QRC consistently has narrow performance distributions over runs. The worse performing hyperparameter settings for QRC often exhibit narrow distributions as well. In contrast, the other three algorithms demonstrate skewed performance distributions for their best hyperparameters, pulling the mean towards lower performance levels. SBEED often exhibits bimodality in its performance distributions, especially on the Mountain Car domain.3 9.4 Gradient Correction Updates versus Saddlepoint Updates In this section we compare the saddlepoint and gradient correction forms of the gradient. In Section 7.1, we discussed these two strategies for estimating the gradient of the generalize PBE, and advocated for using the gradient correction form in QRC. To ablate this choice of 3. Our results with SBEED may appear pessimistic compared to results in the literature. It is possible previously published results with SBEED were achieved through domain speci\ufb01c hyper-parameter tuning. Further, our results were averaged over 100 runs, whereas prior work used \ufb01ve. We do not have access to the code used in prior work, and so we can only speculate. 40 \fFigure 11: Comparing gradient correction-based updates (QRC) and saddlepoint methods (GQ, GQ-Grad). GQ-Grad utilizes the gradient of v as features for the secondary variable, h. Allowing the saddlepoint methods to estimate h(s) by using a linear function of the gradients of the primary variable yields slightly higher performance. 
Nonetheless, saddlepoint methods su\ufb00er from wide performance distributions with the bulk of the distribution being further left than the gradient correction-based updates. update strategy, we \ufb01x all other design decisions. Figure 11 shows the saddlepoint method compared to the gradient correction method across our four domains. The distribution plots suggest that the saddlepoint method can learn reasonable policies in some runs, however, the performance distributions tend to be skewed right with a bulk of the distribution around poor performance. QRC is restricted to regularization parameter \u03b2 = 1; to give the saddlepoint method a greater chance of success, we additionally sweep \u03b2 for it, giving it four times as many hyperparameter permutations. Even with the increased hyperparameter search space, it is clear that the saddlepoint method under-performs the gradient correction-based methods. 9.5 Using a Shared or Separate Basis for h In this section we investigate the impact of using a shared network in QRC. By sharing a network, we e\ufb00ectively restrict H = F; by using a di\ufb00erent network, H is less restricted. A shared network is simpler and, in section 7.1, we show that de\ufb01ning h to be a function of the gradients for \u02c6 vw reduces the bias of gradient correction methods, such as QRC. However, in Section 6.2 we show a less restricted H can improve performance. So, which is better? We compare QRC with a shared network, QRC with a separate network for h (QRC-Sep), and QRC with gradient features for h (QRC-Grad) to understand the impact of these choices. To help remove the confounding variable of number of learnable parameters, we swept over network architectures for the separate network version of QRC. 
We kept the same general structure of the neural network by restricting the number of hidden units in both hidden layers to be the same, then swept over the number of units per layer: n \u2208{8, 16, 32, 64} for the network for h in Acrobot and Mountain Car and n \u2208{16, 32, 64, 128} for Cart Pole and Lunar Lander. At the smallest end of the range, QRC with separate networks has fewer 41 \fFigure 12: How to represent h: ablating the choice of basis function for the secondary variable, by comparing a shared network with two heads (QRC), two separate networks (QRC-Sep), and one network for the primary variable and a linear function of the primary variable\u2019s gradients for the secondary (QRC-Grad). learnable parameters than when using shared heads, and at the upper end of the range, it has considerably more learnable parameters. Figure 12 summarizes the results using separate networks for the dual. Performance is comparable for each of the three method, with QRC-Sep performing marginally worse overall and QRC-Grad performing marginally better. Notably the performance ordering appears to follow the bias of each method, with QRC-Sep having more bias due to needing to learn two separate networks, and QRC-Grad having slightly less bias due to h being a function of the gradients of \u02c6 vw. In general, the simpler choice of using a shared network appears to be as e\ufb00ective in these benchmark environments. 9.6 Using Parameterized Policies In this section we attempt to better understand the poor performance of SBEED by investigating both action-value and actor-critic variants of the algorithms. Parameterized policies add a signi\ufb01cant source of complexity both to the agent design and to the learning dynamics of the problem, making it challenging to directly compare SBEED to QRC. 
In this section we complete the square by introducing a parameterized policy version of TDRC, which can be viewed as an Actor-Critic method with a TDRC critic, and introduce an actionvalue version of SBEED. We show that the action-value version of SBEED signi\ufb01cantly outperforms its parameterized policy counterpart, but still under-performs QRC in all domains and TDRC-PG in most domains. The actor-critic version of TDRC uses the following update wt+1 \u2190wt + \u03b1\u03b4t\u2207wv(St, wt) \u2212\u03b1\u03b3\u03b8\u22a4 t xt\u2207wv(St+1, wt) \u03b8t+1 \u2190\u03b8t + \u03b1 h \u03b4t \u2212\u03b8\u22a4 t xt i xt \u2212\u03b1\u03b2\u03b8t \u03b8\u03c0,t+1 \u2190\u03b8\u03c0,t + \u03b1\u03b4t\u2207\u03b8\u03c0 ln \u03c0\u03b8\u03c0(At|St), 42 \fFigure 13: Impact of policy parameterization: action values verses parameterized policies. The plots above compare the QRC and SBEED update functions both for learning state-value functions with parameterized policies and action-value functions with mellowmax. which can be viewed as a standard actor-critic update with TDRC as the critic. Our design choices for TDRC-PG mirror those of SBEED, for instance using a two-headed neural network for the value function estimator and a separate neural network for the policy gradient estimator to avoid the need to balance between loss functions. In Figure 13 we investigate the distribution of performance for each choice of update cross-producted with each choice of action-value or parameterized policy. The width of each violin plot represents the proportion of parameter settings whose area under the learning curve achieved the given performance level indicated on the vertical axis. In all cases, QRC had the most narrow performance distribution centered around the highest or near-highest reward. 
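The actor-critic TDRC (TDRC-PG) update above can be sketched for the linear case. This is our illustration of the stated equations, with grad_log_pi standing in for \u2207\u03b8\u03c0 ln \u03c0\u03b8\u03c0(At|St); all names are ours:

```python
import numpy as np

def tdrc_pg_step(w, theta, theta_pi, x, x_next, r, gamma, alpha, beta,
                 grad_log_pi):
    """One TDRC-PG step: a standard actor-critic update whose critic is
    TDRC. w, theta parameterize the critic and its secondary head;
    theta_pi parameterizes the policy."""
    delta = r + gamma * (w @ x_next) - (w @ x)
    h = theta @ x
    w = w + alpha * (delta * x - gamma * h * x_next)          # TDRC critic
    theta = theta + alpha * (delta - h) * x - alpha * beta * theta
    theta_pi = theta_pi + alpha * delta * grad_log_pi          # actor update
    return w, theta, theta_pi
```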
The parameterized-policy variants generally exhibit wider distributions and lower average AUC than their action-value counterparts, but occasionally achieve higher maximum performance for a single hyperparameter setting. The SBEED update generally performed worse than the QRC update across all four domains. The empirical superiority of QRC over SBEED in our experiments is partially, but not fully, explained by learning parameterized policies. Because SBEED-Q outperforms SBEED overall, some of SBEED\u2019s poor performance is due to the parameterized policy. SBEEDQ and QRC still perform di\ufb00erently in every domain, with SBEED-Q generally having much higher sensitivity to hyperparameters and occasionally achieving lower maximum performance than QRC, even after tuning over many more hyperparameter settings. 10." + } + ], + "Martha White": [ + { + "url": "http://arxiv.org/abs/1609.01995v4", + "title": "Unifying task specification in reinforcement learning", + "abstract": "Reinforcement learning tasks are typically specified as Markov decision\nprocesses. This formalism has been highly successful, though specifications\noften couple the dynamics of the environment and the learning objective. This\nlack of modularity can complicate generalization of the task specification, as\nwell as obfuscate connections between different task settings, such as episodic\nand continuing. In this work, we introduce the RL task formalism, that provides\na unification through simple constructs including a generalization to\ntransition-based discounting. Through a series of examples, we demonstrate the\ngenerality and utility of this formalism. Finally, we extend standard learning\nconstructs, including Bellman operators, and extend some seminal theoretical\nresults, including approximation errors bounds. 
Overall, we provide a\nwell-understood and sound formalism on which to build theoretical results and\nsimplify algorithm use and development.", + "authors": "Martha White", + "published": "2016-09-07", + "updated": "2021-09-17", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction Reinforcement learning is a formalism for trial-and-error interaction between an agent and an unknown environment. This interaction is typically speci\ufb01ed by a Markov decision process (MDP), which contains a transition model, reward model, and potentially discount parameters \u03b3 specifying a discount on the sum of future values in the return. Domains are typically separated into two cases: episodic problems (\ufb01nite horizon) and continuing problems (in\ufb01nite horizon). In episodic problems, the agent reaches some terminal state, and is teleported back to a start state. In continuing problems, the agent interaction is continual, with a discount to ensure a \ufb01nite total reward (e.g., constant \u03b3 < 1). This formalism has a long and successful tradition, but is limited in the problems that can be speci\ufb01ed. Progressively there have been additions to specify a broader range of ob1Department of Computer Science, Indiana University. Correspondence to: Martha White . Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). jectives, including options (Sutton et al., 1999), state-based discounting (Sutton, 1995; Sutton et al., 2011) and interest functions (Reza and Sutton, 2010; Sutton et al., 2016). These generalizations have particularly been driven by offpolicy learning and the introduction of general value functions for Horde (Sutton et al., 2011; White, 2015), where predictive knowledge can be encoded as more complex prediction and control tasks. 
Generalizations to problem speci\ufb01cations provide exciting learning opportunities, but can also reduce clarity and complicate algorithm development and theory. For example, options and general value functions have signi\ufb01cant overlap, but because of different terminology and formalization, the connections are not transparent. Another example is the classic divide between episodic and continuing problems, which typically require different convergence proofs (Bertsekas and Tsitsiklis, 1996; Tsitsiklis and Van Roy, 1997; Sutton et al., 2009) and different algorithm speci\ufb01cations. In this work, we propose a formalism for reinforcement learning task speci\ufb01cation that uni\ufb01es many of these generalizations. The focus of the formalism is to separate the speci\ufb01cation of the dynamics of the environment and the speci\ufb01cation of the objective within that environment. Though natural, this represents a signi\ufb01cant change in the way tasks are currently speci\ufb01ed in reinforcement learning and has important rami\ufb01cations for simplifying implementation, algorithm development and theory. The paper consists of two main contributions. First, we demonstrate the utility of this formalism by showing uni\ufb01cation of previous tasks speci\ufb01ed in reinforcement learning, including options, general value functions and episodic and continuing, and further providing case studies of utility. We demonstrate how to specify episodic and continuing tasks with only modi\ufb01cations to the discount function, without the addition of states and modi\ufb01cations to the underlying Markov decision process. This enables a uni\ufb01cation that signi\ufb01cantly simpli\ufb01es implementation and easily generalizes theory to cover both settings. Second, we prove novel contraction bounds on the Bellman operator for these generalized RL tasks, and show that previous bounds for both episodic and continuing tasks are subsumed by this more general result. 
Overall, our goal is to provide an RL task formalism that requires minimal modifications to previous task specification, with significant gains in simplicity and unification across common settings. arXiv:1609.01995v4 [cs.AI] 17 Sep 2021 Unifying Task Specification in Reinforcement Learning 2. Generalized problem formulation We assume the agent interacts with an environment formalized by a Markov decision process (MDP): (S, A, Pr), where S is the set of states, n = |S|; A is the set of actions; and Pr : S × A × S → [0, 1] is the transition probability function, where Pr(s, a, s') is the probability of transitioning from state s into state s' when taking action a. A reinforcement learning task (RL task) is specified on top of these transition dynamics, as the tuple (P, r, γ, i) where 1. P is a set of policies π : S × A → [0, 1]; 2. the reward function r : S × A × S → R specifies the reward received from (s, a, s'); 3. γ : S × A × S → [0, 1] is a transition-based discount function [1]; 4. i : S → [0, ∞) is an interest function that specifies the user-defined interest in a state. Each task could have different reward functions within the same environment. For example, in a navigation task within an office, one agent could have the goal to navigate to the kitchen and the other the conference room. For a reinforcement learning task, whether prediction or control, a set or class of policies is typically considered. For prediction (policy evaluation), we often select one policy and evaluate its long-term discounted reward. For control, where a policy is learned, the set of policies may consist of all policies parameterized by weights that specify the action-value from states, with the goal to find the weights that yield the optimal policy.
For either prediction or control in an RL task, we often evaluate the return of a policy: the cumulative discounted reward obtained from following that policy, $$G_t = \sum_{i=0}^{\infty} \left( \prod_{j=0}^{i-1} \gamma(s_{t+j}, a_{t+j}, s_{t+1+j}) \right) R_{t+1+i},$$ where $\prod_{j=0}^{-1} \gamma(s_{t+j}, a_{t+j}, s_{t+1+j}) := 1$. Note that this subsumes the setting with a constant discount $\gamma_c \in [0, 1)$, by using $\gamma(s, a, s') = \gamma_c$ for every $(s, a, s')$, giving $\prod_{j=0}^{i-1} \gamma(s_{t+j}, a_{t+j}, s_{t+1+j}) = \gamma_c^i$ for $i > 0$ and $\gamma_c^0 = 1$ for $i = 0$. As another example, at the end of an episode, $\gamma(s, a, s') = 0$, making the product of these discounts zero and so terminating the recursion. We further explain how the transition-based discount enables specification of episodic tasks, and discuss the utility of the generalization to transition-based discounting throughout this paper. Finally, the interest function i specifies the degree of importance of each state for the task. For example, if an agent is only interested in learning an optimal policy for a subset of the environment, the interest function could be set to one for those states and to zero otherwise. [1] We describe a further probabilistic generalization in Appendix A; much of the treatment remains the same, but the notation becomes cumbersome and the utility obfuscated. We first explain the specification and use of such tasks, and then define a generalized Bellman operator and resulting algorithmic extensions and approximation bounds. 2.1. Unifying episodic and continuing specification The RL task specification enables episodic and continuing problems to be easily encoded with only modification to the transition-based discount.
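The generalized return and the episodic-as-continuing encoding can be sketched in a few lines. This is a minimal illustration, not code from the paper; it uses the three-state chain world of Section 2.1 (reward -1 per step, γ equal to 1 except γ(s3, right, s1) = 0).

```python
# A minimal sketch (not the paper's code) of the generalized return
#   G_t = sum_i ( prod_{j=0}^{i-1} gamma(s_{t+j}, a_{t+j}, s_{t+1+j}) ) R_{t+1+i},
# applied to the three-state chain world, where gamma is 1 everywhere except
# gamma(s3, right, s1) = 0 and the reward is -1 per step.

def generalized_return(trajectory, gamma_fn):
    """trajectory: iterable of (s, a, r, s_next); gamma_fn(s, a, s_next) in [0, 1]."""
    g, discount = 0.0, 1.0
    for s, a, r, s_next in trajectory:
        g += discount * r                    # R weighted by the product of past discounts
        discount *= gamma_fn(s, a, s_next)   # accumulate the transition-based discount
    return g

gamma = lambda s, a, sp: 0.0 if (s, a, sp) == ('s3', 'right', 's1') else 1.0
# A continuing stream containing two "episodes"; the zero discount on the
# terminal transition cuts the return at the episode boundary, so only the
# first episode (three steps of reward -1) contributes.
stream = [('s1', 'right', -1.0, 's2'), ('s2', 'right', -1.0, 's3'),
          ('s3', 'right', -1.0, 's1')] * 2
print(generalized_return(stream, gamma))           # -3.0
print(generalized_return(stream, lambda *t: 0.9))  # constant-discount special case
```

Note that no absorbing or hypothetical states are needed; the same trajectory serves both the episodic and the constant-discount reading.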
Previous approaches, including the absorbing state formulation (Sutton and Barto, 1998) and state-based discounting (Sutton, 1995; Reza and Sutton, 2010; Sutton et al., 2011) (van Hasselt, 2011, Section 2.1.1), require special cases or modifications to the set of states and underlying MDP, coupling task specification and the dynamics of the environment. We demonstrate how transition-based discounting seamlessly enables episodic or continuing tasks to be specified in an MDP via a simple chain world. Consider the chain world with three states s1, s2 and s3 in Figure 1. The start state is s1 and the two actions are right and left. The reward is -1 per step, with termination occurring when taking action right from state s3, which causes a transition back to state s1. The discount is 1 for each step, unless specified otherwise. The interest is set to 1 in all states, which is the typical case, meaning performance from each state is equally important. Figure 1a depicts the classical approach to specifying episodic problems using an absorbing state, drawn as a square. The agent reaches the goal (transitioning right from state s3) and then forever stays in the absorbing state, receiving a reward of zero. This encapsulates the definition of the return, but does not allow the agent to start another episode. In practice, when this absorbing state is reached, the agent is \"teleported\" to a start state to begin another episode. This episodic interaction can instead be represented the same way as a continuing problem, by specifying a transition-based discount γ(s3, right, s1) = 0. This defines the same return, but now the agent simply transitions normally to a start state, and no hypothetical states are added. To further understand the equivalence, consider the updates made by TD (see equation (3)). Assume linear function approximation with feature function x : S → R^d, with weights w ∈ R^d.
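The TD update with transition-based discounting (equation (3) in Section 4) can be sketched on this chain. This is an illustrative sketch with tabular features, not the paper's implementation; the terminal transition (s3, right, s1) supplies γ_{t+1} = 0, which removes the bootstrap term and clears the eligibility trace on the following step.

```python
import numpy as np

# Sketch (illustrative) of the generalized linear TD(lambda) update of Eq. (3):
#   e_t     = gamma_t * lambda_c * e_{t-1} + x(s_t)
#   delta_t = r_{t+1} + gamma_{t+1} x(s_{t+1})^T w - x(s_t)^T w
#   w_{t+1} = w_t + alpha * delta_t * e_t
# gamma_t and gamma_{t+1} come from the transition-based discount function.

def td_step(w, e_prev, x_s, x_sp, r, gamma_t, gamma_t1, lam=0.9, alpha=0.1):
    e = gamma_t * lam * e_prev + x_s
    delta = r + gamma_t1 * (x_sp @ w) - x_s @ w
    return w + alpha * delta * e, e

x = {s: v for s, v in zip(('s1', 's2', 's3'), np.eye(3))}  # tabular features
w, e = np.zeros(3), np.zeros(3)
# Terminal transition (s3, right, s1): gamma_{t+1} = 0, so delta bootstraps
# nothing from s1, exactly as in the episodic formulation.
w, e = td_step(w, e, x['s3'], x['s1'], r=-1.0, gamma_t=1.0, gamma_t1=0.0)
print(w)  # only the s3 component moves: approximately [0, 0, -0.1]
# Next step: gamma_t = 0 clears the old trace, leaving e = x(s1).
w, e = td_step(w, e, x['s1'], x['s2'], r=-1.0, gamma_t=0.0, gamma_t1=1.0)
print(e)  # [1. 0. 0.]
```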
When the agent takes action right from s3, the agent transitions from s3 to s1 with probability one and so γ_{t+1} = γ(s3, right, s1) = 0. This correctly gives $\delta_t = r_{t+1} + \gamma_{t+1} x(s_1)^\top w - x(s_3)^\top w = r_{t+1} - x(s_3)^\top w$ and correctly clears the eligibility trace for the next step: $e_{t+1} = \lambda_{t+1} \gamma_{t+1} e_t + x(s_1) = x(s_1)$. The stationary distribution is also clearly equal to that of the original episodic task, since the absorbing state is not used in the computation of the stationary distribution. Another strategy is to still introduce hypothetical states, but use a state-based γ, as discussed in Figure 1c. Unlike absorbing states, the agent does not stay indefinitely in the hypothetical state. When the agent goes right from s3, it transitions to hypothetical state s4, and then transitions deterministically to the start state s1, with γ_s(s4) = 0. As before, we get the correct update, because γ_{t+1} = γ_s(s4) = 0. Because the stationary distribution has some non-zero probability in the hypothetical state s4, we must set x(s4) = x(s1) (or x(s4) = 0). Otherwise, the value of the hypothetical state will be learned, wasting function approximation resources and potentially modifying the approximation quality of the value in other states. We could have tried state-based discounting without adding an additional state s4. However, this leads to incorrect value estimates, as depicted in Figure 1d; the relationship between transition-based and state-based is further discussed in Appendix B.1. Overall, to keep the specification of the RL task and the MDP separate, transition-based discounting is necessary to enable the unified specification of episodic and continuing tasks. 2.2.
Options as RL tasks The options framework (Sutton et al., 1999) generically covers a wide range of settings, with discussion about macro-actions, option models, interrupting options and intra-option value learning. These concepts at the time merited their own language, but with recent generalizations can be more conveniently cast as RL subtasks. Proposition 1. An option, defined as the tuple (π, β, I) (Sutton et al., 1999, Section 2) with policy π : S × A → [0, 1], termination function β : S → [0, 1] and an initiation set I ⊂ S from which the option can be run, can be equivalently cast as an RL task. This proof is mainly definitional, but we state it as an explicit proposition for clarity. The discount function γ(s, a, s') = 1 - β(s') for all s, a, s' specifies termination. The interest function, i(s) = 1 if s ∈ I and i(s) = 0 otherwise, focuses learning resources on the states of interest. If a value function for the policy is queried, it would only make sense to query it from these states of interest. If the policy for this option is optimized for this interest function, the policy should only be run starting from s ∈ I, as elsewhere it will be poorly learned. The rewards for the RL task correspond to the rewards associated with the MDP. RL tasks generalize options, by generalizing termination conditions to transition-based discounting and by providing degrees of interest rather than binary interest. Further, the policies associated with RL subtasks can be used as macro-actions, to specify a semi-Markov decision process (Sutton et al., 1999, Theorem 1). Figure 1: Three different ways to represent episodic problems as continuing problems: (a) the absorbing state formulation; (b) transition-based termination, γ(s3, right, s1) = 0; (c) state-based termination with an added state s4 and γ_s(s4) = 0; (d) incorrect state-based termination with γ_s(s1) = 0 or γ_s(s3) = 0. For (c), the state-based discount cannot represent the episodic chain problem without adding states. To see why, consider the two cases for representing termination: γ_s(s1) = 0 or γ_s(s3) = 0. For simplicity, assume that π(s, right) = 0.75 for all states s ∈ {s1, s2, s3} and transitions are deterministic. If γ_s(s3) = 0, then the value for taking action right from s2 is r(s2, right, s3) + γ_s(s3)v_π(s3) = -1 and the value for taking action right from s3 is r(s3, right, s1) + γ_s(s1)v_π(s1) ≠ -1, which are both incorrect. If γ_s(s1) = 0, then the value of taking action right from s3 is -1 + γ_s(s1)v_π(s1) = -1, which is correct. However, the value of taking action left from s2 is -1 + γ_s(s1)v_π(s1) = -1, which is incorrect. 2.3. General value functions In a similar spirit of abstraction as options, general value functions were introduced for single predictive or goal-oriented questions about the world (Sutton et al., 2011). The idea is to encode predictive knowledge in the form of value function predictions: with a collection or horde of prediction demons, this constitutes knowledge (Sutton et al., 2011; Modayil et al., 2014; White, 2015). The work on Horde (Sutton et al., 2011) and nexting (Modayil et al., 2014) provides numerous examples of the utility of the types of questions that can be specified by general value functions, and so by RL tasks, because general value functions can naturally be specified as an RL task. The generalization to RL tasks provides additional benefits for predictive knowledge. The separation into underlying MDP dynamics and task specification is particularly useful in off-policy learning, with the Horde formalism, where many demons (value functions) are learned off-policy.
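Returning to Proposition 1, the option-to-task mapping is mechanical; a small sketch makes it concrete (names such as `option_to_task` and `init_set` are hypothetical, not from the paper):

```python
# Illustrative sketch of Proposition 1: casting an option (pi, beta, I) as an
# RL task by setting gamma(s, a, s') = 1 - beta(s') and interest i(s) = 1 if
# s is in the initiation set I, else 0. All names are hypothetical.

def option_to_task(beta, init_set):
    gamma = lambda s, a, s_next: 1.0 - beta(s_next)      # termination -> discount
    interest = lambda s: 1.0 if s in init_set else 0.0   # binary interest
    return gamma, interest

beta = lambda s: 1.0 if s == 'goal' else 0.0             # terminate at the goal
gamma, interest = option_to_task(beta, init_set={'s1', 's2'})
print(gamma('s2', 'right', 'goal'), interest('s1'), interest('goal'))  # 0.0 1.0 0.0
```

As the text notes, the RL task is strictly more general: β could instead feed a transition-based γ, and the interest could take any non-negative value rather than only 0 or 1.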
These demons share the underlying dynamics, and even feature representation, but have separate prediction and control tasks; keeping these separate from the MDP is key for avoiding complications (see Appendix B.2). Transition-based discounts, over state-based discounts, additionally enable the prediction of a change caused by transitioning between states. Consider the taxi domain, described more fully in Section 3, where the agent's goal is to pick up and drop off passengers in a grid world with walls. The taxi agent may wish to predict the probability of hitting a wall when following a given policy. This can be encoded by setting γ(s, a, s) = 0 if a movement action causes the agent to remain in the same state, which occurs when trying to move through a wall. In addition to episodic problems and hard termination, transition-based questions also enable soft termination for transitions. Hard termination uses γ(s, a, s') = 0 and soft termination γ(s, a, s') = ε for some small positive value ε. Soft terminations can be useful for incorporating some of the value of a policy right after the soft termination. If two policies are equivalent up to a transition, but have very different returns after the transition, a soft termination will reflect that difference. We empirically demonstrate the utility of soft termination in the next section. 3. Demonstration in the taxi domain To better ground this generalized formalism and provide some intuition, we provide a demonstration of RL task specification. We explore different transition-based discounts in the taxi domain (Dietterich, 2000; Diuk et al., 2008). The goal of the agent is to take a passenger from a source platform to a destination platform, depicted in Figure 2. The agent receives a reward of -1 on each step, except for successful pickup and drop-off, giving reward 0.
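The effect of soft versus hard termination described above can be made concrete with a toy calculation (hypothetical numbers, not the taxi domain):

```python
# Sketch of hard vs. soft termination on a two-state toy problem: state A
# reaches B on a "terminating" transition, after which two policies earn very
# different returns. With hard termination (gamma = 0) the value at A ignores
# everything after termination; with soft termination (gamma = eps) a fraction
# of the post-termination return leaks back into A's value. Numbers illustrative.

def value_at_A(gamma_term, v_after):
    # v(A) = r(A -> B) + gamma_term * v_after, with reward -1 on the transition
    return -1.0 + gamma_term * v_after

v_good, v_bad = -2.0, -10.0       # post-termination values of two policies
for eps in (0.0, 0.1):
    print(eps, value_at_A(eps, v_good), value_at_A(eps, v_bad))
# With eps = 0 both policies look identical at A; with eps = 0.1 the soft
# termination distinguishes them (-1.2 vs. -2.0).
```

This is exactly the mechanism behind the soft-termination taxi policy below: the discount 0.1 on pickup lets the post-pickup orientation costs influence the pre-pickup route.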
We modify the domain to include the orientation of the taxi, with additional cost for not continuing in the current orientation. This encodes that turning right, turning left or going backwards are more costly than going forwards, with additional negative rewards of -0.05, -0.1 and -0.2 respectively. This additional cost is further multiplied by a factor of 2 when there is a passenger in the vehicle. For grid size g and number of pickup/drop-off locations l, the full state information is a 5-tuple: (x position of taxi ∈ {1, . . . , g}, y position of taxi ∈ {1, . . . , g}, location of passenger ∈ {1, . . . , l + 1}, location of destination ∈ {1, . . . , l}, orientation of car ∈ {N, E, S, W}). The location of the passenger can be one of the pickup/drop-off locations, or in the taxi. Optimal policies and value functions are computed iteratively, with an extensive number of iterations. Figure 2 illustrates three policies for one part of the taxi domain, obtained with three different discount functions. The optimal policy is learned using a soft termination, which takes into consideration the importance of approaching the passenger location with the right orientation to minimize turns after picking up the passenger. A suboptimal policy is in fact learned with hard termination, as the policy prefers to greedily minimize turns to get to the passenger. For further details, refer to the caption of Figure 2. We also compare to a constant γ, which corresponds to an average reward goal, as demonstrated in Equation (8). The table in Figure 2(e) summarizes the results. Though in theory the constant-discount policy should recognize the relative values of orientation before and after picking up a passenger, and obtain the same solution as the soft-termination policy, in practice we find that numerical imprecision actually causes a suboptimal policy to be learned.
Because most of the rewards are negative per step, small differences in orientation can be more difficult to distinguish within an infinite discounted sum. This result suggests that having multiple subgoals, as one might have with RL subtasks, could enable better chaining of decisions and local evaluation of the optimal action. The utility of learning with a smaller γ_c has been described previously (Jiang et al., 2015); here, we further advocate that enabling a γ that provides subtasks is another strategy to improve learning. 4. Objectives and algorithms With an intuition for the specification of problems as RL tasks, we now turn to generalizing some key algorithmic concepts to enable learning for RL tasks. We first generalize the definition of the Bellman operator for the value function. For a policy π : S × A → [0, 1], define P_π, P_{π,γ} ∈ R^{n×n} and r_π, v_π ∈ R^n, indexed by states s, s' ∈ S: $$P_\pi(s, s') := \sum_{a \in \mathcal{A}} \pi(s, a) \Pr(s, a, s')$$ $$P_{\pi,\gamma}(s, s') := \sum_{a \in \mathcal{A}} \pi(s, a) \Pr(s, a, s') \gamma(s, a, s')$$ $$r_\pi(s) := \sum_{a \in \mathcal{A}} \pi(s, a) \sum_{s' \in \mathcal{S}} \Pr(s, a, s') r(s, a, s')$$ $$v_\pi(s) := r_\pi(s) + \sum_{s' \in \mathcal{S}} P_{\pi,\gamma}(s, s') v_\pi(s'),$$ where v_π(s) is the expected return starting from state s ∈ S. To compute a value function that satisfies this recursion, we define a Bellman operator. The Bellman operator has been generalized to include state-based discounting and a state-based trace parameter [2] (Sutton et al., 2016, Eq. 29).
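The matrix definitions above are easy to instantiate numerically. A sketch for the deterministic chain world of Section 2.1 (assumed dynamics, tabular case), solving the value recursion directly:

```python
import numpy as np

# Sketch: instantiating P_pi, P_{pi,gamma}, r_pi for the three-state chain
# under the always-right policy, and solving the recursion
#   v_pi = r_pi + P_{pi,gamma} v_pi  <=>  v_pi = (I - P_{pi,gamma})^{-1} r_pi.
# Numbers match the chain example of Section 2.1 (illustrative, not the paper's code).

n = 3
P_pi = np.array([[0., 1., 0.],   # s1 -> s2
                 [0., 0., 1.],   # s2 -> s3
                 [1., 0., 0.]])  # s3 -> s1 (episode boundary)
Gamma = np.ones((n, n))
Gamma[2, 0] = 0.0                # gamma(s3, right, s1) = 0
P_pi_gamma = P_pi * Gamma        # elementwise product over next states
r_pi = -np.ones(n)               # reward of -1 per step
v_pi = np.linalg.solve(np.eye(n) - P_pi_gamma, r_pi)
print(v_pi)  # [-3. -2. -1.]: the negative number of steps to termination
```

The zeroed entry of P_{π,γ} is the entire episodic specification; P_π itself is untouched, keeping the task separate from the dynamics.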
[2] A generalization to state-based trace parameters has been considered (Sutton, 1995; Sutton and Barto, 1998; Reza and Sutton, 2010; Sutton et al., 2014; Yu, 2012). Figure 2: (a) The taxi domain, where the pickup/drop-off platforms are at (0,0), (0,4), (3,0) and (4,4). The passenger P is at the source platform (4,4), outlined in black. The car starts in (2,3), with orientation E as indicated by the arrow, and needs to bring the passenger to the destination platform D at (3,0), outlined in blue. In (b)-(d), there are simulated trajectories for policies learned using hard and soft termination. (b) The optimal strategy, with γ(Car in source, Pickup, P in Car) = 0.1 and a discount of 0.99 elsewhere. The sequence of taxi locations is (3,3), (3,4), (4,4), (4,4) with Pickup action, (4,3), (4,2), (4,1), (4,0), (3,0). Successful pickup and drop-off with total reward -7.7. (c) For γ(Car in source, Pickup, P in Car) = 0, the agent does not learn the optimal strategy. The agent minimizes orientation cost to the subgoal, not accounting for orientation after picking up the passenger. Consequently, it takes more left turns after pickup, resulting in more total negative reward. The sequence of locations is (3,3), (4,3), (4,4), (4,4) with Pickup action, (3,4), (3,3), (3,2), (3,1), (3,0). Successful pickup and drop-off with total reward -8. (d) For state-based γ(Car in source and P in Car) = 0, the agent remains around the source and does not complete a successful drop-off. The sequence of locations is (3,3), (4,3), (4,4), (4,4) with Pickup action, (4,3), (4,4), (4,3), .... The agent enters the source and picks up the passenger. When it leaves to location (4,3), its value function indicates better value going to (4,4) because the negative return will again be cut off by γ(Car in source and P in Car) = 0, even without actually performing a pickup. Since the cost to get to the destination is higher than the -2.6 return received from going back to (4,4), the agent stays around (4,4) indefinitely. (e) Number of successful passenger pickups and drop-offs, as well as additional cost incurred from turns, over 100 steps, with 5000 runs, reported for a range of constant γ_c and the policies in Figure 2. Results for Figure 2(e), as (total pickup and drop-off; added cost for turns): TRANS-SOFT: 7.74 ± 0.03; 5.54 ± 0.01. TRANS-HARD: 7.73 ± 0.03; 5.83 ± 0.01. STATE-BASED: 0.00 ± 0.00; 18.8 ± 0.02. γ_c = 0.1: 0.00 ± 0.00; 2.48 ± 0.01. γ_c = 0.3: 0.02 ± 0.01; 2.49 ± 0.01. γ_c = 0.5: 0.04 ± 0.01; 2.51 ± 0.01. γ_c = 0.6: 0.03 ± 0.01; 2.49 ± 0.01. γ_c = 0.7: 7.12 ± 0.03; 4.52 ± 0.01. γ_c = 0.8: 7.34 ± 0.03; 4.62 ± 0.01. γ_c = 0.9: 3.52 ± 0.06; 4.57 ± 0.02. γ_c = 0.99: 0.01 ± 0.01; 2.45 ± 0.01. Due to numerical imprecision, several constant discounts do not get close enough to the passenger to pick up or drop off. The state-based approach, which does not add additional states for termination, oscillates after picking up the passenger, and so constantly receives negative reward. We further generalize the definition to the transition-based setting. The trace parameter λ : S × A × S → [0, 1] influences the fixed point and provides a modified (biased) return, called the λ-return; this parameter is typically motivated as a bias-variance trade-off parameter (Kearns and Singh, 2000). Because the focus of this work is on generalizing the discount, we opt for a simple constant λ_c in the main body of the text; we provide generalizations to transition-based trace parameters in the appendix. The generalized Bellman operator T^{(λ)} : R^n → R^n is $$T^{(\lambda)} v := r_\pi^\lambda + P_\pi^\lambda v, \quad \forall v \in \mathbb{R}^n \quad (1)$$ where $$P_\pi^\lambda := (I - \lambda_c P_{\pi,\gamma})^{-1} P_{\pi,\gamma} (1 - \lambda_c) \quad (2)$$ $$r_\pi^\lambda := (I - \lambda_c P_{\pi,\gamma})^{-1} r_\pi.$$ To see why this is the definition of the Bellman operator, we define the expected λ-return, v_{π,λ} ∈ R^n, for a given approximate value function given by a vector v ∈ R^n: $$v_{\pi,\lambda}(s) := r_\pi(s) + \sum_{s' \in \mathcal{S}} P_{\pi,\gamma}(s, s') \left[ (1 - \lambda_c) v(s') + \lambda_c v_{\pi,\lambda}(s') \right] = r_\pi(s) + (1 - \lambda_c) P_{\pi,\gamma}(s, :) v + \lambda_c P_{\pi,\gamma}(s, :) v_{\pi,\lambda}.$$ Continuing the recursion, we obtain [3] $$v_{\pi,\lambda} = \left[ \sum_{i=0}^{\infty} (\lambda_c P_{\pi,\gamma})^i \right] \big( r_\pi + (1 - \lambda_c) P_{\pi,\gamma} v \big) = (I - \lambda_c P_{\pi,\gamma})^{-1} \big( r_\pi + (1 - \lambda_c) P_{\pi,\gamma} v \big) = T^{(\lambda)} v.$$ The fixed point for this formula satisfies T^{(λ)}v = v for the Bellman operator defined in Equation (1). To see how this generalized Bellman operator modifies the algorithms, we consider the extension to temporal difference algorithms. Many algorithms can be easily generalized by replacing γ_c or γ_s(s_{t+1}) with the transition-based γ(s_t, a_t, s_{t+1}). For example, the TD algorithm is generalized by setting the discount on each step to γ_{t+1} = γ(s_t, a_t, s_{t+1}): $$w_{t+1} = w_t + \alpha_t \delta_t e_t \quad \text{for some step-size } \alpha_t,$$ $$\delta_t := r_{t+1} + \gamma_{t+1} x(s_{t+1})^\top w - x(s_t)^\top w, \quad (3)$$ $$e_t = \gamma_t \lambda_c e_{t-1} + x(s_t).$$ [3] For a matrix M with maximum eigenvalue less than 1, $\sum_{i=0}^{\infty} M^i = (I - M)^{-1}$. We show in Lemma 3 that P_{π,γ} satisfies this condition, implying that λ_c P_{π,γ} satisfies this condition and so this infinite sum is well-defined.
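Equations (1)-(2) can be checked numerically; a sketch (on the assumed toy chain, not the paper's code) verifying that v_π is the fixed point of T^{(λ)} for an arbitrary λ_c:

```python
import numpy as np

# Sketch verifying that v_pi is a fixed point of the generalized Bellman
# operator T^(lambda) of Equations (1)-(2) on a toy chain (assumed dynamics):
#   P_pi^lam = (I - lam_c P_{pi,gamma})^{-1} P_{pi,gamma} (1 - lam_c)
#   r_pi^lam = (I - lam_c P_{pi,gamma})^{-1} r_pi

n, lam_c = 3, 0.7
P_pi_gamma = np.array([[0., 1., 0.],
                       [0., 0., 1.],
                       [0., 0., 0.]])  # termination zeroes the last row
r_pi = -np.ones(n)
v_pi = np.linalg.solve(np.eye(n) - P_pi_gamma, r_pi)

inv = np.linalg.inv(np.eye(n) - lam_c * P_pi_gamma)
P_lam = inv @ P_pi_gamma * (1.0 - lam_c)
r_lam = inv @ r_pi
print(np.allclose(r_lam + P_lam @ v_pi, v_pi))  # True: T^(lambda) v_pi = v_pi
```

The same check passes for any λ_c in [0, 1), since λ only changes the operator, not its fixed point.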
The generalized TD fixed point, under linear function approximation, can be expressed as a linear system A w = b, where $$A = X^\top D (I - \lambda_c P_{\pi,\gamma})^{-1} (I - P_{\pi,\gamma}) X$$ $$b = X^\top D (I - \lambda_c P_{\pi,\gamma})^{-1} r_\pi,$$ where each row in X ∈ R^{n×d} corresponds to the features for a state, and D ∈ R^{n×n} is a diagonal weighting matrix. Typically, D = diag(d_μ), where d_μ ∈ R^n is the stationary distribution for the behavior policy μ : S × A → [0, 1] generating the stream of interaction. In on-policy learning, d_μ = d_π. With the addition of the interest function, this weighting changes to D = diag(d_μ ∘ i), where ∘ denotes the element-wise (Hadamard) product. More recently, a new algorithm, emphatic TD (ETD) (Mahmood et al., 2015; Sutton et al., 2016), specified yet another weighting, D = M, where M = diag(m) with $m^\top = (d_\mu \circ i)^\top (I - P_\pi^\lambda)^{-1}$. Importantly, even for off-policy sampling, with this weighting A is guaranteed to be positive definite. We show in the next section that the generalized Bellman operator for both the on-policy and emphasis weightings is a contraction, and further in the appendix that the emphasis weighting with a transition-based trace function is also a contraction. 5. Generalized theoretical properties In this section, we provide a general approach to incorporating transition-based discounting into approximation bounds. Most previous bounds have assumed a constant discount. For example, ETD was introduced with state-based γ_s; however, Hallak et al. (2015) analyzed approximation error bounds of ETD using a constant discount γ_c. By using matrix norms on P_{π,γ}, we generalize previous approximation bounds to both the episodic and continuing case.
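The fixed-point system and the emphasis weighting can be sketched numerically (an illustrative on-policy toy chain with tabular features; all numbers assumed, not from the paper):

```python
import numpy as np

# Sketch of the TD fixed-point system A w = b with the emphatic weighting
# D = M = diag(m), where m solves m^T (I - P_pi^lam) = (d_mu o i)^T.
# On-policy toy chain, tabular features; numbers illustrative.

n, lam_c = 3, 0.5
P_pi = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])  # ergodic cycle
P_pi_gamma = P_pi.copy(); P_pi_gamma[2, 0] = 0.0             # termination
d_mu = np.full(n, 1.0 / n)   # stationary distribution of the cyclic chain
i = np.ones(n)               # interest in every state
X = np.eye(n)                # tabular features
r_pi = -np.ones(n)

inv = np.linalg.inv(np.eye(n) - lam_c * P_pi_gamma)
P_lam = inv @ P_pi_gamma * (1.0 - lam_c)
m = np.linalg.solve(np.eye(n) - P_lam.T, d_mu * i)           # emphasis weights
D = np.diag(m)
A = X.T @ D @ inv @ (np.eye(n) - P_pi_gamma) @ X
b = X.T @ D @ inv @ r_pi
w = np.linalg.solve(A, b)
print(np.linalg.eigvalsh((A + A.T) / 2).min() > 0)  # True: A is positive definite
print(w)  # tabular features, so w recovers v_pi exactly: [-3. -2. -1.]
```

With tabular features the weighting is immaterial to the solution; the positive definiteness of A is the property that matters for iterative (and off-policy) convergence.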
Define the set of bounded vectors for the general space of value functions V = {v ∈ R^n : ‖v‖_{D_μ} < ∞}. Let F_v ⊂ V be a subspace of possible solutions, e.g., F_v = {Xw | w ∈ R^d, ‖w‖_2 < ∞}. A1. The action space A and state space S are finite. A2. For policies μ, π : S × A → [0, 1], there exist unique invariant distributions d_μ, d_π such that d_π P_π = d_π and d_μ P_μ = d_μ. This assumption is typically satisfied by assuming an ergodic Markov chain for the policy. A3. There exists a transition (s, a, s') such that γ(s, a, s') < 1 and π(s, a)Pr(s, a, s') > 0. This assumption states that the policy reaches some part of the space where the discount is less than 1. A4. Assume for any v ∈ F_v, if v(s) = 0 for all s ∈ S where i(s) > 0, then v(s) = 0 for all s ∈ S such that i(s) = 0. For linear function approximation, this requires F = span{x(s) : s ∈ S, i(s) ≠ 0}. For the weighted norm $\|v\|_D = \sqrt{v^\top D v}$, if we can take the square root of D, the induced matrix norm is $\|P_\pi^\lambda\|_D = \|D^{1/2} P_\pi^\lambda D^{-1/2}\|_{sp}$, where the spectral norm ‖·‖_sp is the largest singular value of the matrix. For simplicity of notation below, define $s_D := \|P_\pi^\lambda\|_D$. For any diagonalizable, nonnegative matrix D, the projection Π_D : V → F_v onto F_v exists and is defined $\Pi_D z = \mathrm{argmin}_{v \in F_v} \|z - v\|_D$. 5.1. Approximation bound We first prove that the generalized Bellman operator in Equation (1) is a contraction. We extend the bound from (Tsitsiklis and Van Roy, 1997; Hallak et al., 2015) for constant discount and constant trace parameter to the general transition-based setting. The normed difference to the true value function could be defined by multiple weightings.
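The induced norm s_D just defined can be computed directly. A randomized sanity-check sketch (assumed random ergodic dynamics, unit interest; not a proof) computing s_D for both D = D_π and the emphasis weighting D = M, consistent with the contraction results of this section:

```python
import numpy as np

# Randomized sanity check: on a random ergodic chain with transition-based
# discounts, the induced norm s_D = ||P_pi^lam||_D = ||D^{1/2} P_pi^lam D^{-1/2}||_sp
# is below 1 both for D = D_pi and for the emphasis weighting D = M.
# All dynamics here are assumed for illustration.

rng = np.random.default_rng(0)
n, lam_c = 5, 0.6
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # ergodic P_pi
G = rng.random((n, n)) * 0.5 + 0.5                          # gamma(s, a, s') in [0.5, 1)
G[0, :] = 0.0                                               # termination out of state 0
P_gamma = P * G

vals, vecs = np.linalg.eig(P.T)                             # stationary distribution d_pi
d = np.real(vecs[:, np.argmax(np.real(vals))]); d /= d.sum()

inv = np.linalg.inv(np.eye(n) - lam_c * P_gamma)
P_lam = inv @ P_gamma * (1.0 - lam_c)
m = np.linalg.solve(np.eye(n) - P_lam.T, d)                 # emphasis, interest = 1

def induced_norm(weights):
    W, Wi = np.diag(np.sqrt(weights)), np.diag(1.0 / np.sqrt(weights))
    return np.linalg.norm(W @ P_lam @ Wi, 2)                # largest singular value

print(induced_norm(d) < 1.0, induced_norm(m) < 1.0)
```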
A well-known result is that for D = D_π the Bellman operator is a contraction for constant γ_c and λ_c (Tsitsiklis and Van Roy, 1997); recently, this has been generalized for a variant of ETD to M, still with constant parameters (Hallak et al., 2015). We extend this result to transition-based γ, for both D_π and the transition-based emphasis matrix M. Lemma 1. For D = D_π or D = M, $s_D = \|P_\pi^\lambda\|_D < 1$. Proof. For D = M: let ξ ∈ R^n be the vector of row sums of $P_\pi^\lambda$: $P_\pi^\lambda \mathbf{1} = \xi$. Then for any v ∈ V with v ≠ 0, $$\|P_\pi^\lambda v\|_M^2 = \sum_{s \in \mathcal{S}} m(s) \Big( \sum_{s' \in \mathcal{S}} P_\pi^\lambda(s, s') v(s') \Big)^2 = \sum_{s \in \mathcal{S}} m(s) \xi(s)^2 \Big( \sum_{s' \in \mathcal{S}} \frac{P_\pi^\lambda(s, s')}{\xi(s)} v(s') \Big)^2 \le \sum_{s \in \mathcal{S}} m(s) \xi(s)^2 \sum_{s' \in \mathcal{S}} \frac{P_\pi^\lambda(s, s')}{\xi(s)} v(s')^2 = \sum_{s' \in \mathcal{S}} v(s')^2 \sum_{s \in \mathcal{S}} m(s) \xi(s) P_\pi^\lambda(s, s') = v^\top \mathrm{diag}\big((m \circ \xi)^\top P_\pi^\lambda\big) v,$$ where the inequality follows from Jensen's inequality, because each row $P_\pi^\lambda(s, :)/\xi(s)$ is normalized. Now, because ξ has entries that are less than 1 (the row sums of $P_\pi^\lambda$ are less than 1, as shown in Lemma 4), and because each of the values in the above product is nonnegative, $$v^\top \mathrm{diag}\big((m \circ \xi)^\top P_\pi^\lambda\big) v \le v^\top \mathrm{diag}\big(m^\top P_\pi^\lambda\big) v = v^\top \mathrm{diag}\big(m^\top (P_\pi^\lambda - I) + m^\top\big) v = v^\top \mathrm{diag}\big({-(d_\mu \circ i)^\top} + m^\top\big) v = v^\top \mathrm{diag}(m^\top) v - v^\top \mathrm{diag}\big((d_\mu \circ i)^\top\big) v < \|v\|_M^2.$$ The last inequality is strict because d_μ ∘ i has at least one positive entry where v has a positive entry.
Otherwise, if v(s) = 0 everywhere that i(s) > 0, then v = 0, which we assumed was not the case. Therefore, $\|P_\pi^\lambda v\|_M < \|v\|_M$ for any v ≠ 0, giving $\|P_\pi^\lambda\|_M := \max_{v \in \mathbb{R}^n, v \neq 0} \|P_\pi^\lambda v\|_M / \|v\|_M < 1$. This exact same proof follows through verbatim for the generalization of $P_\pi^\lambda$ to a transition-based trace λ. For D = D_π: again we use Jensen's inequality, but now rely on the property d_π P_π = d_π. Because of Assumption A3, there is some $\bar{s} < 1$ such that, for any non-negative v₊, $$d_\pi^\top P_{\pi,\gamma} v_+ = \sum_s \sum_a d_\pi(s)\, \pi(s, a) \Pr(s, a, :)\, \gamma(s, a, :)\, v_+ \le \bar{s} \sum_s \sum_a d_\pi(s)\, \pi(s, a) \Pr(s, a, :)\, v_+ = \bar{s}\, d_\pi^\top v_+.$$ Therefore, because the vectors $P_{\pi,\gamma} v_+$ are also non-negative, $$d_\pi^\top P_\pi^\lambda v_+ = d_\pi^\top \left( \sum_{k=0}^{\infty} (\lambda_c P_{\pi,\gamma})^k P_{\pi,\gamma} (1 - \lambda_c) \right) v_+ \le (1 - \lambda_c) \sum_{k=0}^{\infty} (\bar{s} \lambda_c)^k\, d_\pi^\top P_{\pi,\gamma} v_+ \le (1 - \lambda_c)(1 - \bar{s}\lambda_c)^{-1} \bar{s}\, d_\pi^\top v_+,$$ and so $$\|P_\pi^\lambda v\|_{D_\pi}^2 \le \sum_{s} d_\pi(s) \xi(s)^2 \sum_{s'} \frac{P_\pi^\lambda(s, s')}{\xi(s)} v(s')^2 = \sum_{s'} v(s')^2 \sum_{s} d_\pi(s) \xi(s) P_\pi^\lambda(s, s') \le \sum_{s'} v(s')^2 \sum_{s} d_\pi(s) P_\pi^\lambda(s, s') \le \frac{\bar{s}(1 - \lambda_c)}{1 - \lambda_c \bar{s}} \sum_{s'} d_\pi(s') v(s')^2 = \frac{\bar{s} - \bar{s}\lambda_c}{1 - \lambda_c \bar{s}} \|v\|_{D_\pi}^2,$$ where $\frac{\bar{s} - \bar{s}\lambda_c}{1 - \lambda_c \bar{s}} < 1$ since $\bar{s} < 1$. Lemma 2. Under Assumptions A1-A3, the Bellman operator T^{(λ)} in Equation (1) is a contraction under the norm weighted by D = D_π or D = M, i.e., for v₁, v₂ ∈ V, $\|T^{(\lambda)} v_1 - T^{(\lambda)} v_2\|_D < \|v_1 - v_2\|_D$. Further, because the projection Π_D is a nonexpansion, Π_D T^{(λ)} is also a contraction and has a unique fixed point Π_D T^{(λ)} v = v for v ∈ F_v. Proof.
For any two vectors v₁, v₂, $$\|T^{(\lambda)} v_1 - T^{(\lambda)} v_2\|_D = \|P_\pi^\lambda (v_1 - v_2)\|_D \le \|P_\pi^\lambda\|_D \|v_1 - v_2\|_D < \|v_1 - v_2\|_D,$$ where the last inequality follows from Lemma 1. By the Banach fixed-point theorem, because the Bellman operator is a contraction under D, it has a unique fixed point. Theorem 1. If D satisfies s_D < 1, then there exists v ∈ F_v such that Π_D T^{(λ)} v = v, and the error to the true value function v* is bounded as $$\|v - v^*\|_D \le (1 - s_D)^{-1} \|\Pi_D v^* - v^*\|_D. \quad (4)$$ For constant discount γ_c ∈ [0, 1) and constant trace parameter λ_c ∈ [0, 1] with D = D_π, this bound reduces to the original bound (Tsitsiklis and Van Roy, 1997, Lemma 6): $$(1 - s_D)^{-1} \le \frac{1 - \gamma_c \lambda_c}{1 - \gamma_c}.$$ Proof. Let v be the unique fixed point of Π_D T^{(λ)}, which exists by Lemma 2. Then $$\|v - v^*\|_D \le \|v - \Pi_D v^*\|_D + \|\Pi_D v^* - v^*\|_D = \|\Pi_D T^{(\lambda)} v - \Pi_D v^*\|_D + \|\Pi_D v^* - v^*\|_D \le \|T^{(\lambda)} v - v^*\|_D + \|\Pi_D v^* - v^*\|_D = \|P_\pi^\lambda (v - v^*)\|_D + \|\Pi_D v^* - v^*\|_D \le s_D \|v - v^*\|_D + \|\Pi_D v^* - v^*\|_D,$$ where the second inequality holds because the projection Π_D is a nonexpansion, and the second equality holds because T^{(λ)}v* = v* and $T^{(\lambda)} v - T^{(\lambda)} v^* = P_\pi^\lambda (v - v^*)$ (the $r_\pi^\lambda$ terms cancel).
By rearranging terms, we get $(1 - s_D)\|v - v^*\|_D \le \|\Pi_D v^* - v^*\|_D$, and since $s_D < 1$, we get the final result.

For constant $\gamma_c < 1$ and $\lambda_c$, we know that $P_{\pi,\gamma} = \gamma_c P_\pi$. Further, if $D = D_\pi$, we know $\|P^{i+1}_\pi\|_D = 1$. Therefore,

$$s_D = \|P^\lambda_\pi\|_D = \left\| D^{1/2} \left( \sum_{i=0}^\infty \gamma_c^i \lambda_c^i P_\pi^i \right) \gamma_c(1-\lambda_c) P_\pi\, D^{-1/2} \right\|_2 \le \gamma_c(1-\lambda_c) \sum_{i=0}^\infty \gamma_c^i \lambda_c^i\, \|D^{1/2} P_\pi^{i+1} D^{-1/2}\|_2 = \gamma_c(1-\lambda_c) \sum_{i=0}^\infty \gamma_c^i \lambda_c^i\, \|P_\pi^{i+1}\|_D \le \gamma_c(1-\lambda_c) \sum_{i=0}^\infty \gamma_c^i \lambda_c^i = \frac{\gamma_c(1-\lambda_c)}{1-\gamma_c\lambda_c}.$$

Unifying Task Specification in Reinforcement Learning

We provide generalizations to transition-based trace parameters in the appendix for the emphasis weighting, and also discuss issues with generalizing to state-based termination for a standard weighting with $d_\pi$. We show that for any transition-based discounting function $\lambda : \mathcal S \times \mathcal A \times \mathcal S \to [0, 1]$, the above contraction results hold under emphasis weighting. We then provide a general form for an upper bound on $\|P^\lambda_\pi\|_{D_\pi}$ for transition-based discounting, based on the contraction properties of two matrices within $P^\lambda_\pi$. We further provide an example where the Bellman operator is not a contraction even under the simpler generalization to state-based discounting, and discuss the requirements for the transition-based generalizations to ensure a contraction with weighting $d_\pi$. This further motivates the emphasis weighting as a more flexible scheme for convergence in general settings, both off-policy and under the transition-based generalization. 5.2.
Properties of TD algorithms

Using this characterization of $P^\lambda_\pi$, we can re-examine previous results for temporal difference algorithms that used either state-based or constant discounts.

Convergence of Emphatic TD for RL tasks. We can extend previous convergence results for ETD, for learning value functions and action-value functions, to the RL task formalism. For policy evaluation, ETD and ELSTD, the least-squares version of ETD that uses the above-defined $A$ and $b$ with $D = M$, have both been shown to converge with probability one (Yu, 2015). An important component of this proof is convergence in expectation, which relies on $A$ being positive definite. In particular, for appropriate step-sizes $\alpha_t$ (see Yu, 2015), if $A$ is positive definite, the iterative update $w_{t+1} = w_t + \alpha_t(b - A w_t)$ is convergent. For the generalization to transition-based discounting, convergence in expectation extends to the emphatic algorithms. We provide these details in the appendix for completeness, with theorem statement and proof in Appendix F and pseudocode in Appendix D.

Convergence rate of LSTD($\lambda$). Tagorti and Scherrer (2015) recently provided convergence rates for LSTD($\lambda$) for continuing tasks, for some $\gamma_c < 1$. These results can be extended to the episodic setting with the generic treatment of $P^\lambda_\pi$. For example, in (Tagorti and Scherrer, 2015, Lemma 1), which describes the sensitivity of LSTD, the proof extends by replacing the matrix $(1-\lambda_c)\gamma_c P_\pi (I - \lambda_c\gamma_c P_\pi)^{-1}$ (which they call $M$ in their proof) with the generalization $P^\lambda_\pi$, resulting in the constant $\frac{1}{1-s_D}$ in the bound rather than $\frac{1-\lambda_c\gamma_c}{1-\gamma_c}$. Further, this generalizes convergence rate results to emphatic LSTD, since $M$ satisfies the required convergence properties, with rates dictated by $s_M$ rather than $s_{D_\mu}$ for standard LSTD.

Insights into $s_D$.
Though the generalized form enables unified episodic and continuing results, the resulting bound parameter $s_D$ is more difficult to interpret than for constant $\gamma_c, \lambda_c$. As $\lambda_c$ increases to one, the constant $\frac{1-\gamma_c\lambda_c}{1-\gamma_c}$ in the upper bound decreases to one. As $\gamma_c$ decreases to zero, the bound also decreases to one. These trends are intuitive, as the problem should be simpler when $\gamma_c$ is small, and bias should be smaller when $\lambda_c$ is close to one. More generally, however, the discount can be small or large for different transitions, making the trend more difficult to intuit. To gain some intuition for $s_D$, consider a random policy in the taxi domain, with $s_D$ summarized in Table 1. As $\lambda_c$ goes to one, $s_D$ goes to zero and so $(1 - s_D)^{-1}$ goes to one. Some outcomes of note are that 1) hard or soft termination for the pickup results in exactly the same $s_D$; 2) for a constant discount of $\gamma_c = 0.99$, the episodic discount had a slightly smaller $s_D$; and 3) increasing $\lambda_c$ has a much stronger effect than including more terminations: when we added random terminations, so that termination occurred on at least one path within 5 steps from 1% or 10% of the states, or even more aggressively on every path within 5 steps, the values of $s_D$ were similar.

  lambda_c          0.0    0.5    0.9    0.99   0.999
  Episodic taxi     0.989  0.979  0.903  0.483  0.086
  gamma_c = 0.99    0.990  0.980  0.908  0.497  0.090
  1% single path    0.989  0.978  0.898  0.467  0.086
  10% single path   0.987  0.975  0.887  0.439  0.086
  1% all paths      0.978  0.956  0.813  0.304  0.042
  10% all paths     0.898  0.815  0.468  0.081  0.009

Table 1: The $s_D$ values for increasing $\lambda_c$, with discount settings described in the text.

6. Discussion and conclusion

The goal of this paper is to provide intuition and examples of how to use the RL task formalism.
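As a numerical sanity check on these quantities (our own sketch, not from the paper; the random chain and constants are made up), one can verify for constant $\gamma_c, \lambda_c$ and $D = D_\pi$ that $s_D < 1$, that it respects the bound $\gamma_c(1-\lambda_c)/(1-\gamma_c\lambda_c)$, and that the amplification $(1-s_D)^{-1}$ is at most $(1-\gamma_c\lambda_c)/(1-\gamma_c)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma_c, lam_c = 6, 0.9, 0.5

# Random row-stochastic P_pi and its stationary distribution d_pi
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P.T)
d = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
d /= d.sum()

# P^lambda_pi = (I - gamma*lam*P)^{-1} gamma*(1-lam)*P for constant gamma, lambda
P_lam = np.linalg.solve(np.eye(n) - gamma_c * lam_c * P, gamma_c * (1 - lam_c) * P)

# Weighted spectral norm ||A||_D = ||D^{1/2} A D^{-1/2}||_2 with D = diag(d_pi)
Dh, Dhi = np.diag(np.sqrt(d)), np.diag(1.0 / np.sqrt(d))
s_D = np.linalg.norm(Dh @ P_lam @ Dhi, 2)

bound = gamma_c * (1 - lam_c) / (1 - gamma_c * lam_c)
amp_bound = (1 - gamma_c * lam_c) / (1 - gamma_c)
assert s_D < 1 and s_D <= bound + 1e-10
assert 1 / (1 - s_D) <= amp_bound + 1e-10
print(s_D, 1 / (1 - s_D), amp_bound)
```

The bound holds here because $\|P^k_\pi\|_{D_\pi} \le 1$ when $d_\pi$ is the stationary distribution, matching the derivation above.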
Consequently, to avoid jarring the explanation, technical contributions were not emphasized, and in some cases included only in the appendix. For this reason, we would like to highlight and summarize the technical contributions, which include 1) the introduction of the RL task formalism, and of transition-based discounts; 2) an explicit characterization of the relationship between state-based and transition-based discounting; and 3) generalized approximation bounds, applying to both episodic and continuing tasks; and 4) insights into\u2014and issues with\u2014extending contraction results for both statebased and transition-based discounting. Through intuition from simple examples and fundamental theoretical extensions, this work provides a relatively complete characterization of the RL task formalism, as a foundation for use in practice and theory. \fUnifying Task Speci\ufb01cation in Reinforcement Learning Acknowledgements Thanks to Hado van Hasselt for helpful discussions about transition-based discounting, and probabilistic discounts." + }, + { + "url": "http://arxiv.org/abs/1607.00446v2", + "title": "A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning", + "abstract": "One of the main obstacles to broad application of reinforcement learning\nmethods is the parameter sensitivity of our core learning algorithms. In many\nlarge-scale applications, online computation and function approximation\nrepresent key strategies in scaling up reinforcement learning algorithms. In\nthis setting, we have effective and reasonably well understood algorithms for\nadapting the learning-rate parameter, online during learning. Such\nmeta-learning approaches can improve robustness of learning and enable\nspecialization to current task, improving learning speed. For\ntemporal-difference learning algorithms which we study here, there is yet\nanother parameter, $\\lambda$, that similarly impacts learning speed and\nstability in practice. 
Unfortunately, unlike the learning-rate parameter,\n$\\lambda$ parametrizes the objective function that temporal-difference methods\noptimize. Different choices of $\\lambda$ produce different fixed-point\nsolutions, and thus adapting $\\lambda$ online and characterizing the\noptimization is substantially more complex than adapting the learning-rate\nparameter. There are no meta-learning methods for $\\lambda$ that can achieve (1)\nincremental updating, (2) compatibility with function approximation, and (3)\nmaintain stability of learning under both on and off-policy sampling. In this\npaper we contribute a novel objective function for optimizing $\\lambda$ as a\nfunction of state rather than time. We derive a new incremental, linear\ncomplexity $\\lambda$-adaptation algorithm that does not require offline batch\nupdating or access to a model of the world, and present a suite of experiments\nillustrating the practicality of our new algorithm in three different settings.\nTaken together, our contributions represent a concrete step towards black-box\napplication of temporal-difference learning methods in real world problems.", + "authors": "Martha White, Adam White", + "published": "2016-07-02", + "updated": "2016-10-24", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "stat.ML" + ], + "main_content": "INTRODUCTION [Appears in: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), J. Thangarajah, K. Tuyls, C. Jonker, S. Marsella (eds.), May 9-13, 2016, Singapore. Copyright © 2016, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.] In reinforcement learning, the training data is produced by an adaptive learning agent's interaction with its environment, which makes tuning the parameters of the learning
In the online setting we study here, the agent-environment interaction produces an unending stream of temporally correlated data. In this setting there is no testing-training split, and thus the agent\u2019s learning process must be robust and adapt to new situations not considered by the human designer. Robustness is often critically related to the values of a small set parameters that control the learning process (e.g., the step-size parameter). In real-world applications, however, we cannot expect to test a large range of theses parameter values, in all the situations the agent may face, to ensure good performance\u2014common practice in empirical studies. Unfortunately, safe values of these parameters are usually problem dependent. For example, in o\ufb00-policy learning (e.g., learning from demonstrations), large importance sampling ratios can destabilize provably convergent gradient temporal di\ufb00erence learning methods, when the parameters are not set in a very particular way (\u03bb = 0) [33]. In such situations, we turn to meta-learning algorithms that can adapt the parameters of the agent continuously, based on the stream of experience and some notion of the agent\u2019s own learning progress. These meta-learning approaches can potentially improve robustness, and also help the agent specialize to the current task, and thus improve learning speed. Temporal di\ufb00erence learning methods make use of two important parameters: the step-size parameter and the tracedecay parameter. The step-size parameter is the same as those used in stochastic gradient descent, and there are algorithms available for adjusting this parameter online, in reinforcement learning [2]. For the trace decay parameter, on the other hand, we have no generally applicable meta-learning algorithms that are compatible with function approximation, incremental processing, and o\ufb00-policy sampling. 
The difficulty in adapting the trace-decay parameter, $\lambda$, mainly arises from the fact that it plays seemingly multiple roles and also influences the fixed-point solution. This parameter was introduced in Samuel's checker player [16], and later described by Sutton [21] as an interpolation parameter between offline TD(0) and Monte Carlo sampling (TD($\lambda = 1$)). It has been empirically demonstrated that values of $\lambda$ between zero and one often perform best in practice [21, 23, 32]. This trace parameter can also be viewed as a bias-variance trade-off parameter: $\lambda$ closer to one is less biased but likely to have higher variance, whereas $\lambda$ closer to zero is more biased but likely has lower variance. However, it has also been described as a credit-assignment parameter [19], as a method to encode the probability of transitions [25], as a way to incorporate the agent's confidence in its value function estimates [23, 29], and as an averaging of n-step returns [23]. Selecting $\lambda$ is further complicated by the fact that $\lambda$ is part of the problem definition: the solution to the Bellman fixed-point equation depends on the choice of $\lambda$ (unlike the step-size parameter). There are few approaches for setting $\lambda$, and most existing work is limited to special cases. For instance, several approaches have analyzed setting $\lambda$ for variants of TD that were introduced to simplify the analysis, including phased TD [5] and TD*($\lambda$) [17]. Though both provide valuable insights into the role of $\lambda$, the analysis does not easily extend to conventional TD algorithms. Sutton and Singh [25] investigated tuning both the learning-rate parameter and $\lambda$, and proposed two meta-learning algorithms. The first assumes the problem can be modeled by an acyclic MDP, and the other requires access to the transition model of the MDP.
Singh and Dayan [18] and Kearns and Singh [5] contributed extensive simulation studies of the interaction between $\lambda$ and other agent parameters on a chain MDP, but again relied on access to the model and offline computation. The most recent study [3] explores a Bayesian variant of TD learning, but requires a batch of samples and can only be used offline. Finally, Konidaris et al. [6] introduce TD$\gamma$ as a method to remove the $\lambda$ parameter altogether. Their approach, however, has not been extended to the off-policy setting, and their full algorithm is too computationally expensive for incremental estimation, while their incremental variant introduces a sensitive meta-parameter. Although this long history of prior work has helped develop our intuitions about $\lambda$, the available solutions are still far from the use cases outlined above. This paper introduces a new objective based on locally optimizing a bias-variance trade-off, which we use to develop an efficient, incremental algorithm for learning state-based $\lambda$. We use a forward-backward analysis [23] to derive an incremental algorithm to estimate the variance of the return. Using this estimate, we obtain a closed-form estimate of $\lambda$ on each time step. Finally, we empirically demonstrate the generality of the approach with a suite of on-policy and off-policy experiments. Our results show that our new algorithm, $\lambda$-greedy, is consistently among the best performing and adapts as the problem changes, whereas any fixed approach works well in some settings and poorly in others. 2. BACKGROUND We model the agent's interaction with an unknown environment as a discrete-time Markov Decision Process (MDP).
An MDP is characterized by a finite set of states $\mathcal S$, a set of actions $\mathcal A$, a reward function $r : \mathcal S \times \mathcal S \to \mathbb R$, and a generalized state-based discount $\gamma : \mathcal S \to [0, 1]$, which encodes the level of discounting per state (e.g., a common setting is a constant discount for all states). On each of a discrete number of time steps, $t = 1, 2, 3, \ldots$, the agent observes the current state $S_t$, selects an action $A_t$ according to its target policy $\pi : \mathcal S \times \mathcal A \to [0, 1]$, and the environment transitions to a new state $S_{t+1}$ and emits a reward $R_{t+1}$. The state transitions are governed by the transition function $P : \mathcal S \times \mathcal A \times \mathcal S \to [0, 1]$, where $P(S_t, A_t, S_{t+1})$ denotes the probability of transitioning from $S_t$ to $S_{t+1}$ due to action $A_t$. At time step $t$, the future rewards are summarized by the Monte Carlo (MC) return $G_t \in \mathbb R$, defined by the infinite discounted sum

$$G_t \stackrel{\text{def}}{=} R_{t+1} + \gamma_{t+1} R_{t+2} + \gamma_{t+1}\gamma_{t+2} R_{t+3} + \ldots = R_{t+1} + \gamma_{t+1} G_{t+1}, \qquad \text{where } \gamma_t = \gamma(S_t).$$

The agent's objective is to estimate the expected return or value function, $v_\pi : \mathcal S \to \mathbb R$, defined as $v_\pi(s) \stackrel{\text{def}}{=} \mathbb E[G_t \mid S_t = s, A_t \sim \pi]$. We estimate the value function using the standard framework of linear function approximation. We assume the state of the environment at time $t$ can be characterized by a fixed-length feature vector $x_t \in \mathbb R^n$, where $n \ll |\mathcal S|$; implicitly, $x_t$ is a function of the random variable $S_t$. The agent uses a linear estimate of the value of $S_t$: the inner product of $x_t$ and a modifiable set of weights $w \in \mathbb R^n$, $\hat v(S_t, w) \stackrel{\text{def}}{=} x_t^\top w$, with mean-squared error $\text{MSE} = \sum_{s\in\mathcal S} d(s)(v(s) - \hat v(s, w))^2$, where $d : \mathcal S \to [0, 1]$ encodes the distribution over states induced by the agent's behavior in the MDP.
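As a small illustration (with made-up numbers), the recursion $G_t = R_{t+1} + \gamma_{t+1} G_{t+1}$ with a state-based discount subsumes episodic termination: setting $\gamma = 0$ at a terminal state cuts the return off exactly there:

```python
def mc_return(rewards, discounts):
    """G_t = R_{t+1} + gamma_{t+1} * G_{t+1}, computed backward.
    discounts[t] is gamma(S_{t+1}); a terminal state has gamma = 0."""
    G = 0.0
    for r, g in zip(reversed(rewards), reversed(discounts)):
        G = r + g * G
    return G

# constant discount 0.9, with a terminal state after the third reward (gamma = 0)
assert mc_return([1.0, 2.0, 3.0], [0.9, 0.9, 0.0]) == 1.0 + 0.9 * (2.0 + 0.9 * 3.0)
```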
Instead of estimating the expected value of $G_t$, we can estimate a $\lambda$-return that is expected to have lower variance:

$$G^\lambda_t \stackrel{\text{def}}{=} R_{t+1} + \gamma_{t+1}\left[(1 - \lambda_{t+1})\, x_{t+1}^\top w + \lambda_{t+1}\, G^\lambda_{t+1}\right],$$

where the trace-decay function $\lambda : \mathcal S \to [0, 1]$ specifies the trace parameter as a function of state. The trace parameter $\lambda_{t+1} = \lambda(S_{t+1})$ averages the estimate of the return, $x_{t+1}^\top w$, and the $\lambda$-return starting on the next step, $G^\lambda_{t+1}$. When $\lambda = 1$, $G^\lambda_t$ becomes the MC return $G_t$, and the value function can be estimated by averaging rollouts from each state. When $\lambda = 0$, $G^\lambda_t$ becomes equal to the one-step return, $R_{t+1} + \gamma_{t+1} x_{t+1}^\top w$, and the value function can be estimated by the linear TD(0) algorithm. The $\lambda$-return with $\lambda \in (0, 1)$ is often easier to estimate than the MC return, and yields more accurate predictions than using the one-step return. The intuition is that for large $\lambda$ the estimate has high variance, due to averaging possibly long trajectories of noisy rewards, but less bias, because the initial biased estimates of the value function participate less in the computation of the return. For small $\lambda$, the estimate has lower variance, because fewer potentially noisy rewards participate in $G^\lambda_t$, but more bias, due to the increased role of the initial value function estimates. We further discuss the intuition for this parameter in the next section. The generalizations to state-based $\gamma$ and $\lambda$ have not yet been widely considered, though the concept was introduced more than a decade ago [22, 27] and the generalization shown to be useful [22, 10, 14, 26]. The Bellman operator can be generalized to include state-based $\gamma$ and $\lambda$ (see [26, Equation 29]), where the choice of $\lambda$ per state influences the fixed point. Time-based $\lambda$, on the other hand, would not result in a well-defined fixed point.
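A quick check of the $\lambda$-return endpoints (made-up rewards and value estimates): the backward recursion reduces to the MC return at $\lambda = 1$ and to the one-step target at $\lambda = 0$:

```python
def lambda_return(rewards, values_next, gamma, lam):
    """Backward recursion G_t = r_{t+1} + gamma*((1-lam)*v(S_{t+1}) + lam*G_{t+1}).
    values_next[t] is the estimate of v(S_{t+1}); episode ends after the last reward."""
    G = 0.0  # return after termination
    for r, v in zip(reversed(rewards), reversed(values_next)):
        G = r + gamma * ((1 - lam) * v + lam * G)
    return G

rewards = [1.0, 0.0, 2.0]
values = [0.5, 1.5, 0.0]            # hypothetical value estimates, 0 at termination
mc = 1.0 + 0.9 * (0.0 + 0.9 * 2.0)  # Monte Carlo return from t = 0

assert abs(lambda_return(rewards, values, 0.9, 1.0) - mc) < 1e-12          # lam=1 -> MC
assert abs(lambda_return(rewards, values, 0.9, 0.0) - (1.0 + 0.9 * 0.5)) < 1e-12  # lam=0 -> one-step
```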
Therefore, to ensure a well-defined fixed point, we will design an objective and algorithm to learn a state-based $\lambda$. This paper considers both on- and off-policy policy evaluation. In the more conventional on-policy learning setting, we estimate $v_\pi(s)$ based on samples generated while selecting actions according to the target policy $\pi$. In the off-policy case, we estimate $v_\pi(s)$ based on samples generated while selecting actions according to a behavior policy $\mu : \mathcal S \times \mathcal A \to [0, 1]$, with $\pi \neq \mu$. In order to learn $\hat v$ in both these settings, we use the GTD($\lambda$) algorithm [9], specified by the following update equations:

$$\rho_t \stackrel{\text{def}}{=} \frac{\pi(S_t, A_t)}{\mu(S_t, A_t)} \qquad \text{(importance sampling ratio)}$$
$$e_t \stackrel{\text{def}}{=} \rho_t(\gamma_t\lambda_t e_{t-1} + x_t) \qquad \text{(eligibility trace)}$$
$$\delta_t \stackrel{\text{def}}{=} R_{t+1} + \gamma_{t+1} x_{t+1}^\top w_t - x_t^\top w_t \qquad \text{(TD error)}$$
$$w_{t+1} \leftarrow w_t + \alpha\left(\delta_t e_t - \gamma_{t+1}(1 - \lambda_{t+1})(e_t^\top h_t)\, x_{t+1}\right)$$
$$h_{t+1} \leftarrow h_t + \alpha_h\left(\delta_t e_t - (x_t^\top h_t)\, x_t\right) \qquad \text{(auxiliary weights)}$$

with step-sizes $\alpha, \alpha_h \in \mathbb R^+$ and arbitrary initial $w_0, h_0$ (e.g., the zero vector). The importance sampling ratio $\rho_t \in \mathbb R^+$ facilitates learning about rewards as if they were generated by following $\pi$ instead of $\mu$. This ratio can be very large if $\mu(S_t, A_t)$ is small, which can compound and destabilize learning. 3. OBJECTIVE FOR TRACE ADAPTATION To obtain an objective for selecting $\lambda$, we need to clarify its role. Although $\lambda$ was not introduced with the goal of trading off bias and variance [21], several algorithms and significant theory have developed its role as such [5, 17]. Other roles have been suggested; however, as we discuss below, each of them can still be thought of as a bias-variance trade-off. The $\lambda$ parameter has been described as a credit assignment parameter, which allows TD($\lambda$) to perform multi-step updates on each time step.
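The GTD($\lambda$) updates above can be transcribed compactly (an illustrative sketch, not the authors' code; the function name and the random-feature usage below are made up):

```python
import numpy as np

def gtd_lambda_step(w, h, e, x, x_next, r, gamma, gamma_next, lam, lam_next,
                    rho, alpha, alpha_h):
    """One GTD(lambda) update with state-based gamma and lambda;
    rho = pi(s,a)/mu(s,a) is the importance sampling ratio."""
    e = rho * (gamma * lam * e + x)                      # eligibility trace
    delta = r + gamma_next * (w @ x_next) - w @ x        # TD error
    w = w + alpha * (delta * e - gamma_next * (1 - lam_next) * (e @ h) * x_next)
    h = h + alpha_h * (delta * e - (x @ h) * x)          # auxiliary weights
    return w, h, e

# tiny usage on random features (illustrative only)
rng = np.random.default_rng(1)
w, h, e = np.zeros(4), np.zeros(4), np.zeros(4)
for _ in range(10):
    x, x_next = rng.random(4), rng.random(4)
    w, h, e = gtd_lambda_step(w, h, e, x, x_next, r=1.0, gamma=0.9,
                              gamma_next=0.9, lam=0.8, lam_next=0.8,
                              rho=1.0, alpha=0.1, alpha_h=0.05)
print(w)
```

Returning the trace `e` alongside `w` and `h` keeps the update a pure transition of the learner's state, which is convenient for testing.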
On each update, \u03bbt controls the amount of credit assigned to previous transitions, using the eligibility trace e. For \u03bbt close to 1, TD(\u03bb) assigns more credit for the current reward to previous transitions, resulting in updates to many states along the current trajectory. Conversely, for \u03bbt = 0, the eligibility trace is cleared and no credit is assigned back in time, performing a single-step TD(0) update. In fact, this intuition can still be thought of as a bias-variance trade-o\ufb00. In terms of credit assignment, we ideally always want to send maximal credit \u03bb = 1, but decayed by \u03b3, for the current reward, which is also unbiased. In practice, however, this often leads to high variance, and thus we mitigate the variance by choosing \u03bb less than one and speed learning overall, but introduce bias. Another interpretation is that \u03bb should be set to re\ufb02ect con\ufb01dence in value function estimates [29, 23]. If your con\ufb01dence in the value estimate of state s is high, then \u03bb(s) should be close to 0, meaning we trust the estimates provided by \u02c6 v. If your con\ufb01dence is low, suspecting that \u02c6 v may be inaccurate, then \u03bb(s) should be close to 1, meaning we trust observed rewards more. For example in states that are indistinguishable with function approximation (i.e., aliased states), we should not trust the \u02c6 v as much. This intuition similarly translates to bias-variance. If \u02c6 v is accurate, then decreasing \u03bb(s) does not incur (much) bias, but can signi\ufb01cantly decrease the variance since \u02c6 v gives the correct value. If \u02c6 v is inaccurate, then the increased bias is not worth the reduced variance, so \u03bb(s) should be closer to 1 to use actual (potentially high-variance) samples. Finally, a less commonly discussed interpretation is that \u03bb acts as parameter that simulates a form of experience replay (or model-based simulation of trajectories). 
One can imagine that sending back information in eligibility traces is like simulating experience from a model, where the model could be a set of trajectories, as in experience replay [8]. If $\lambda = 1$, the traces are longer and each update gets more trajectory information, or experience replay. If a trajectory from a point, however, was unlikely (e.g., a rare transition), we may not want to use that information. Such an approach was taken by Sutton and Singh [25], where $\lambda$ was set to the transition probabilities. Even in this model-based interpretation, the goal in setting $\lambda$ becomes one of mitigating variance without incurring too much bias. Optimizing this bias-variance trade-off, however, is difficult because $\lambda$ affects the return we are approximating. Jointly optimizing for $\lambda$ across all time steps is generally not feasible. One strategy is to take a batch approach, where the optimal $\lambda$ is determined after seeing all the data [3]. Our goal, however, is to develop approaches for the online setting, where future states, actions, rewards and the influence of $\lambda$ have yet to be observed. We propose to take a greedy approach: on each time step, select $\lambda_{t+1}$ to optimize the bias-variance trade-off for only this step. This greedy objective corresponds to minimizing the mean-squared error between the unbiased $\lambda = 1$ return $G_t$ and the estimate $\hat G_t$, with $\lambda_{t+1} \in [0, 1]$ and $\lambda = 1$ into the future after $t + 1$:

$$\hat G_t \stackrel{\text{def}}{=} \rho_t\left(R_{t+1} + \gamma_{t+1}\left[(1-\lambda_{t+1})\, x_{t+1}^\top w_t + \lambda_{t+1}\, G_{t+1}\right]\right),$$

where $G_{t+1}$ is the Monte Carlo return. Notice that $\hat G_t$ interpolates between the current value estimate and the unbiased $\lambda = 1$ MC return, and so is not recursive. Picking $\lambda_{t+1} = 1$ gives an unbiased estimate, since then we would be estimating $\hat G_t = G_t$. We greedily decide how $\lambda_{t+1}$ should be set on this step to locally optimize the mean-squared error (i.e., bias-variance).
This greedy decision is made given both $x_t$ and $x_{t+1}$, which are both available when choosing $\lambda_{t+1}$. To simplify notation in this section, we assume that $x_t$ and $x_{t+1}$ are given in the expectations below. To minimize the mean-squared error in terms of $\lambda_{t+1}$,

$$\text{MSE}(\lambda_{t+1}) \stackrel{\text{def}}{=} \mathbb E\big[(\hat G_t - \mathbb E[G_t])^2\big],$$

we will consider the two terms that compose the mean-squared error, the squared bias term and the variance term:

$$\text{Bias}(\lambda_{t+1}) \stackrel{\text{def}}{=} \mathbb E[\hat G_t] - \mathbb E[G_t], \qquad \text{Variance}(\lambda_{t+1}) \stackrel{\text{def}}{=} \text{Var}[\hat G_t], \qquad \text{MSE}(\lambda_{t+1}) = \text{Bias}(\lambda_{t+1})^2 + \text{Variance}(\lambda_{t+1}).$$

Let us begin by rewriting the bias. Since we are given $x_t, \rho_t, x_{t+1}$ and $\gamma_{t+1}$ when choosing $\lambda_{t+1}$,

$$\mathbb E[\hat G_t] = \rho_t\, \mathbb E\big[R_{t+1} + \gamma_{t+1}(1-\lambda_{t+1}) x_{t+1}^\top w_t + \gamma_{t+1}\lambda_{t+1} G_{t+1}\big] = \rho_t\, \mathbb E[R_{t+1}] + \rho_t\gamma_{t+1}\big((1-\lambda_{t+1})\, x_{t+1}^\top w_t + \lambda_{t+1}\, \mathbb E[G_{t+1}]\big).$$

For convenience, define

$$\text{err}(w, x_{t+1}) \stackrel{\text{def}}{=} \mathbb E[G_{t+1}] - x_{t+1}^\top w \qquad (1)$$

as the difference between the $\lambda = 1$ return and the current approximate value from state $x_{t+1}$ using weights $w_t$. Using this definition, we can rewrite

$$(1-\lambda_{t+1})\, x_{t+1}^\top w_t + \lambda_{t+1}\, \mathbb E[G_{t+1}] = (1-\lambda_{t+1})\big(\mathbb E[G_{t+1}] - \text{err}(w_t, x_{t+1})\big) + \lambda_{t+1}\, \mathbb E[G_{t+1}] = \mathbb E[G_{t+1}] - (1-\lambda_{t+1})\,\text{err}(w_t, x_{t+1}),$$

giving

$$\mathbb E[\hat G_t] = \rho_t\big(\mathbb E[R_{t+1}] + \gamma_{t+1}\mathbb E[G_{t+1}]\big) - \rho_t\gamma_{t+1}(1-\lambda_{t+1})\,\text{err}(w_t, x_{t+1}) = \mathbb E[G_t] - \rho_t\gamma_{t+1}(1-\lambda_{t+1})\,\text{err}(w_t, x_{t+1}),$$

and therefore

$$\text{Bias}^2(\lambda_{t+1}) = \big(\mathbb E[G_t] - \mathbb E[\hat G_t]\big)^2 = \rho_t^2\gamma_{t+1}^2(1-\lambda_{t+1})^2\,\text{err}^2(w_t, x_{t+1}).$$

For the variance term, we will assume that the noise in the reward $R_{t+1}$ given $x_t$ and $x_{t+1}$ is independent of the other dynamics [12], with variance $\sigma_r(x_t, x_{t+1})$.
Again, since we are given $x_t, \rho_t, x_{t+1}$ and $\gamma_{t+1}$, the term $R_{t+1} + \gamma_{t+1}(1-\lambda_{t+1}) x_{t+1}^\top w_t$ has only reward noise, and $G_{t+1}$ is independent of $R_{t+1}$ given $x_t, x_{t+1}$:

$$\text{Var}[\hat G_t] = \rho_t^2\, \text{Var}\big[R_{t+1} + \gamma_{t+1}(1-\lambda_{t+1}) x_{t+1}^\top w_t + \gamma_{t+1}\lambda_{t+1} G_{t+1}\big] = \rho_t^2\, \text{Var}[R_{t+1}] + \rho_t^2\gamma_{t+1}^2\lambda_{t+1}^2\, \text{Var}[G_{t+1}] = \rho_t^2\, \sigma_r(x_t, x_{t+1}) + \rho_t^2\gamma_{t+1}^2\lambda_{t+1}^2\, \text{Var}[G_{t+1}].$$

Finally, we can drop the constant $\rho_t^2\sigma_r(x_t, x_{t+1})$ in the objective, and drop the factor $\rho_t^2\gamma_{t+1}^2$ common to the bias and variance terms, as it only scales the objective, giving the optimization

$$\min_{\lambda_{t+1}\in[0,1]} \text{Bias}^2(\lambda_{t+1}) + \text{Variance}(\lambda_{t+1}) \;\equiv\; \min_{\lambda_{t+1}\in[0,1]} (1-\lambda_{t+1})^2\,\text{err}^2_{t+1}(w_t) + \lambda_{t+1}^2\,\text{Var}[G_{t+1}].$$

We can set the gradient of this objective to zero to find a closed-form solution:

$$-2(1-\lambda_{t+1})\,\text{err}^2_{t+1}(w_t) + 2\lambda_{t+1}\,\text{Var}[G_{t+1}] = 0 \;\Rightarrow\; \lambda_{t+1} = \frac{\text{err}^2_{t+1}(w_t)}{\text{Var}[G_{t+1}] + \text{err}^2_{t+1}(w_t)}, \qquad (2)$$

which is always feasible, unless both the variance and the error are zero (in which case any choice of $\lambda_{t+1}$ is equivalent). Though the importance sampling ratio $\rho_t$ does not affect the choice of $\lambda$ on the current time step, it can have a dramatic effect on $\text{Var}[G_{t+1}]$ into the future via the eligibility trace. For example, when the target and behavior policy are strongly mismatched, $\rho_t$ can be large, which multiplies into the eligibility trace $e_t$. If several steps have large $\rho_t$, then $e_t$ can get very large. In this case, equation (2) would select a small $\lambda_{t+1}$, significantly decreasing variance. 4. TRACE ADAPTATION ALGORITHM To approximate the solution to our proposed optimization, we need a way to approximate the error and the variance terms in equation (2).
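The closed-form choice in equation (2) is a one-line computation; a minimal sketch (the function name and the guard for the degenerate zero-over-zero case are ours):

```python
def greedy_lambda(err_sq, var_g):
    """Minimizer of (1-lam)^2 * err_sq + lam^2 * var_g over lam in [0, 1]."""
    denom = var_g + err_sq
    # when both terms are zero, any lambda is optimal; return 1 arbitrarily
    return 1.0 if denom == 0.0 else err_sq / denom

# large value-estimate error, no return variance -> lam = 1 (trust samples)
assert greedy_lambda(err_sq=4.0, var_g=0.0) == 1.0
# accurate values, noisy returns -> lam near 0 (bootstrap)
assert greedy_lambda(err_sq=0.01, var_g=1.0) < 0.01
```

This matches the bias-variance intuition in Section 3: $\lambda$ moves toward 1 exactly when the value estimates cannot be trusted relative to the noise in the return.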
To estimate the error, we need an estimate of the expected return from each state, $\mathbb E[G_t]$. To estimate the variance, we need to obtain an estimate of $\mathbb E[G_t^2]$, and then can use $\text{Var}[G_t] = \mathbb E[G_t^2] - \mathbb E[G_t]^2$. The estimation of the expected return is in fact the problem tackled by this paper, and one could use a TD algorithm, learning a weight vector $w_{err}$ to obtain the approximation $x_t^\top w_{err}$ to $\mathbb E[G_t]$. This approach may seem problematic, as this sub-step appears to be solving the same problem we originally aimed to solve. However, as in many meta-parameter optimization approaches, this approximation can be inaccurate and still adequately guide the selection of $\lambda$. We discuss this further in the experimental results section. Similarly, we would like to estimate $\mathbb E[G_t^2]$ with $x_t^\top w_{sq}$ by learning $w_{sq}$; estimating the variance or the second moment of the return, however, has not been extensively studied. Sobel [20] provides a Bellman equation for the variance of the $\lambda$-return when $\lambda = 0$. There is also an extensive literature on risk-averse MDP learning, where the variance of the return is often used as a measure [15, 13, 1, 28]; however, an explicit way to estimate the variance of the return for $\lambda > 0$ is not given. There has also been some work on estimating the variance of the value function [12, 35], for general $\lambda$; though related, this is different from estimating the variance of the $\lambda$-return.

Algorithm 1: Policy evaluation with $\lambda$-greedy
  w_t <- 0                                  // main weights
  h_t <- 0                                  // auxiliary weights for off-policy learning
  e_t <- 0                                  // main eligibility trace
  x_t <- the initial observation
  w_err <- (R_max / (1 - gamma)) * 1, w_sq <- 0 * 1   // auxiliary weights for lambda
  e_bar_t <- 0, z_bar_t <- 0                // auxiliary traces
  repeat
    Take action according to pi, observe x_{t+1}, reward r_{t+1}
    rho_t = pi(s_t, a_t) / mu(s_t, a_t)     // in the on-policy setting, rho_t = 1
    lambda_{t+1} <- lambda-greedy(w_err, w_sq, w_t, x_t, x_{t+1}, r_{t+1}, rho_t)
    // now can use any algorithm, e.g., GTD
    delta_t = r_{t+1} + gamma_{t+1} w_t^T x_{t+1} - w_t^T x_t
    e_t = rho_t (gamma_t lambda_t e_{t-1} + x_t)
    w_{t+1} = w_t + alpha (delta_t e_t - gamma_{t+1} (1 - lambda_{t+1}) (e_t^T h_t) x_{t+1})
    h_{t+1} = h_t + alpha_h (delta_t e_t - (h_t^T x_t) x_t)
  until agent is done interacting with the environment

Algorithm 2: lambda-greedy(w_err, w_sq, w_t, x_t, x_{t+1}, r_{t+1}, rho_t)
  // use GTD to update w_err
  g_bar_{t+1} <- x_{t+1}^T w_err
  delta_t <- r_{t+1} + gamma_{t+1} g_bar_{t+1} - x_t^T w_err
  e_bar_t = rho_t (gamma_t e_bar_{t-1} + x_t)
  w_err = w_err + alpha delta_t e_bar_t
  // use VTD to update w_sq
  r_bar_{t+1} <- rho_t^2 r_{t+1}^2 + 2 rho_t^2 gamma_{t+1} r_{t+1} g_bar_{t+1}
  gamma_bar_{t+1} <- rho_t^2 gamma_{t+1}^2
  delta_bar_t <- r_bar_{t+1} + gamma_bar_{t+1} x_{t+1}^T w_sq - x_t^T w_sq
  z_bar_t = gamma_bar_t z_bar_{t-1} + x_t
  w_sq = w_sq + alpha delta_bar_t z_bar_t
  // compute the lambda estimate
  err_sq = (g_bar_{t+1} - x_{t+1}^T w_t)^2
  var_g = max(0, x_{t+1}^T w_sq - (g_bar_{t+1})^2)
  lambda_{t+1} = err_sq / (var_g + err_sq)
  return lambda_{t+1}
In the next section, we provide a derivation for a new algorithm, called variance temporal difference learning (VTD), to approximate the second moment of the return for any state-based $\lambda$. The general VTD updates are given at the end of Section 5.2. For $\lambda$-greedy, we use VTD to estimate the variance, with the complete algorithm summarized in Algorithm 1. We opt for simple meta-parameter settings, so that no additional parameters are introduced. We use the same step-size $\alpha$ that is used for the main weights to update $w_{err}$ and $w_{sq}$. In addition, we set the weights $w_{err}$ and $w_{sq}$ to reflect a priori estimates of the error and variance. As a reasonable rule of thumb, $w_{err}$ should be set larger than $w_{sq}$, to reflect that initial value estimates are inaccurate. This results in an estimate of the variance, $\text{Var}[G_{t+1}] \approx x_{t+1}^\top w_{sq} - (x_{t+1}^\top w_{err})^2$, that is capped at zero until $x_{t+1}^\top w_{sq}$ becomes larger than $(x_{t+1}^\top w_{err})^2$. 5. APPROXIMATING THE SECOND MOMENT OF THE RETURN In this section, we derive the general VTD algorithm to approximate the second moment of the $\lambda$-return. Though we will set $\bar\lambda_{t+1} = 1$ in our algorithm, we nonetheless provide the more general algorithm, as the only model-free variance estimation approach for general $\lambda$-returns. The key novelty is in determining a Bellman operator for the squared return, which then defines a fixed-point objective, called the Var-MSPBE. With this Bellman operator and a recursive form for the squared return, we derive a gradient TD algorithm, called VTD, for estimating the second moment. To avoid confusion with the parameters of the main algorithm, as a general rule throughout the document, the additional parameters used to estimate the second moment have a bar. For example, $\gamma_{t+1}$ is the discount for the main problem, and $\bar\gamma_{t+1}$ is the discount for the second moment.
5.1 Bellman operator for squared return

The recursive form for the squared return is

(G^λ_t)² = ρ_t² (Ḡ_t² + 2 γ_{t+1} λ_{t+1} Ḡ_t G^λ_{t+1} + γ_{t+1}² λ_{t+1}² (G^λ_{t+1})²)
         = r̄_{t+1} + γ̄_{t+1} (G^λ_{t+1})²

where, for a given λ : S → [0, 1] and w,

Ḡ_t := R_{t+1} + γ_{t+1} (1 − λ_{t+1}) x_{t+1}^⊤ w
r̄_{t+1} := ρ_t² Ḡ_t² + 2 ρ_t² γ_{t+1} λ_{t+1} Ḡ_t G^λ_{t+1}
γ̄_{t+1} := ρ_t² γ_{t+1}² λ_{t+1}².

Here w are the weights for the λ-return, and not the weights w_sq we will learn for approximating the second moment. For further generality, we introduce a meta-parameter λ̄_t:

V̄^λ̄_t := r̄_{t+1} + γ̄_{t+1} ((1 − λ̄_{t+1}) x_{t+1}^⊤ w_sq + λ̄_{t+1} V̄^λ̄_{t+1})

to get a λ̄-squared-return, where for λ̄_{t+1} = 1, V̄^λ̄_t = (G^λ_t)². This meta-parameter λ̄ plays the same role for estimating (G^λ_t)² as λ plays for estimating G_t. We can define a generalized Bellman operator T̄ for the squared return, using the above recursive form. The goal is to obtain the fixed point T̄ v̄ = v̄, where a fixed point exists if the operator is a contraction. For the first moment, the Bellman operator is known to be a contraction [30]. This result, however, does not immediately extend here because, though r̄_{t+1} is a valid finite reward, γ̄_{t+1} does not satisfy γ̄_{t+1} ≤ 1, because ρ_t² can be large. We can nonetheless define such a Bellman operator for the λ̄-squared-return and determine if a fixed point exists. Interestingly, γ̄_{t+1} can in fact be larger than 1, and we can still obtain a contraction.
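The squared-return recursion above can be checked numerically on arbitrary scalar values. In this sketch (all numbers illustrative, not from the paper), we use the off-policy λ-return recursion G^λ_t = ρ_t (Ḡ_t + γ_{t+1} λ_{t+1} G^λ_{t+1}) and verify that squaring it reproduces r̄_{t+1} + γ̄_{t+1} (G^λ_{t+1})²:

```python
# Arbitrary values for rho_t, gamma_{t+1}, lambda_{t+1}, G_bar_t, G^lambda_{t+1}
rho, gamma, lam = 0.8, 0.9, 0.7
G_bar, G_next = 1.3, 2.1

# Off-policy lambda-return recursion and the paper's bar-quantities
G_t = rho * (G_bar + gamma * lam * G_next)
r_bar = rho**2 * G_bar**2 + 2 * rho**2 * gamma * lam * G_bar * G_next
gamma_bar = rho**2 * gamma**2 * lam**2

# (G_t^lambda)^2 = r_bar_{t+1} + gamma_bar_{t+1} (G_{t+1}^lambda)^2
assert abs(G_t**2 - (r_bar + gamma_bar * G_next**2)) < 1e-12
```

The identity holds term-by-term because squaring the recursion produces exactly the Ḡ_t² term, the cross term (absorbed into r̄_{t+1}), and the discounted squared tail.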
To define the Bellman operator, we use a recent generalization that enables the discount to be defined as a function of (s, a, s′) [34], rather than just as a function of s′. We first define v̄, the expected λ̄-squared-return:

v̄ := Σ_{t=0}^∞ (P̄_{π,γ})^t r̄,  where

P̄_{π,γ}(s, s′) := Σ_a P(s, a, s′) μ(s, a) ρ(s, a)² γ(s′)² λ(s′)²   (3)
r̄(s) := Σ_{a,s′} P(s, a, s′) μ(s, a) r̄(s, s′).

Using similar equations to the generalized Bellman operator [34], we can define

T̄ v̄ := Σ_{t=0}^∞ (P̄_{π,γ} Λ̄)^t (r̄ + P̄_{π,γ} (I − Λ̄) v̄)

where Λ̄ ∈ R^{|S|×|S|} is a diagonal matrix with λ̄(s) on the diagonal, for all s ∈ S. The infinite sum is convergent if the maximum singular value of P̄_{π,γ} Λ̄ is less than 1, giving the solution Σ_{t=0}^∞ (P̄_{π,γ} Λ̄)^t = (I − P̄_{π,γ} Λ̄)^{−1}. Otherwise, the value is infinite, and one can see that in fact the variance of the return is infinite! We can naturally investigate when the second moment of the return is guaranteed to be finite. This condition on P̄_{π,γ} Λ̄ should facilitate identifying theoretical conditions on the target and behavior policies that enable finite variance of the return. This theoretical characterization is outside the scope of this work, but we can reason about different settings that provide a well-defined, finite fixed point. First, clearly, setting λ_{t+1} = 0 for every state ensures a finite second moment, given a finite r̄, regardless of policy mismatch.
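The convergence condition above can be checked on a concrete chain. This is a sketch with a hypothetical 3-state transition matrix and on-policy values for ρ, γ, λ (all numbers are ours, not from the paper); it confirms that when the largest singular value of P̄Λ̄ is below 1, the truncated series matches the closed-form inverse:

```python
import numpy as np

# Hypothetical 3-state transition matrix under the behavior policy
P = np.array([[0.0, 0.9, 0.1],
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])
# Fold in rho(s,a)^2 gamma(s')^2 lambda(s')^2 with rho = 1 (on-policy),
# constant gamma = 0.95 and lambda = 0.9 for illustration
P_bar = P * (0.95 * 0.9) ** 2
Lam = np.eye(3)  # lambda_bar(s) = 1 for all states

# Convergent iff the maximum singular value of P_bar @ Lam is below 1
assert np.linalg.norm(P_bar @ Lam, 2) < 1.0

series = sum(np.linalg.matrix_power(P_bar @ Lam, t) for t in range(200))
closed = np.linalg.inv(np.eye(3) - P_bar @ Lam)
assert np.allclose(series, closed)
```

With large off-policy importance ratios, `P_bar` scales up and the singular-value test fails, which is exactly the infinite-variance regime discussed in the text.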
For the on-policy setting, where ρ_t = 1, we have γ̄_{t+1} ≤ γ_{t+1}, and so a well-defined fixed point exists under standard assumptions (see [34]). For the off-policy setting, if γ̄_{t+1} = λ_{t+1}² γ_{t+1}² ρ_t² < 1, this is similarly the case. Otherwise, a solution may still exist, by ensuring that the maximum singular value of P̄_{π,γ} is less than one; we hypothesize that this property is unlikely to hold if there is a large mismatch between the target and behavior policies, causing many large ρ_t. An important future avenue is to understand the required similarity between π and μ to enable finite variance of the return, for any given λ. Interestingly, the λ-greedy algorithm should adapt to such infinite-variance settings, where (2) will set λ_{t+1} = 0.

5.2 VTD derivation

In this section, we propose the Var-MSPBE, the mean-squared projected Bellman error (MSPBE) objective for the λ̄-squared-return, and derive VTD to optimize this objective. Given the definition of the generalized Bellman operator T̄, the derivation parallels GTD(λ) for the first moment [9]. The main difference is in obtaining unbiased estimates of parts of the objective; we will therefore focus the results on this novel aspect, summarized in the two theorems and corollary below. Define the error of the estimate x_t^⊤ w_sq relative to the future λ̄-squared-return,

δ̄^λ̄_t := V̄^λ̄_t − x_t^⊤ w_sq

and, as in previous work [24, 9], define the MSPBE that corresponds to T̄:

Var-MSPBE(w_sq) := E[δ̄^λ̄_t x_t]^⊤ E[x_t x_t^⊤]^{−1} E[δ̄^λ̄_t x_t].

To obtain the gradient of the objective, we prove that we can obtain an unbiased sample of δ̄^λ̄_t x_t (a forward view) using a trace of the past (a backward view).
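The Var-MSPBE is a standard projected-error quadratic form, so it can be estimated directly from samples. A minimal sketch, using arbitrary random features and errors purely for illustration (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))      # sampled feature vectors x_t
delta = rng.normal(size=50)       # sampled errors delta_bar_t (illustrative)

b = (X * delta[:, None]).mean(axis=0)   # estimate of E[delta_bar x]
C = (X.T @ X) / len(X)                  # estimate of E[x x^T]
var_mspbe = b @ np.linalg.solve(C, b)   # b^T C^{-1} b, the Var-MSPBE form

# A quadratic form in a positive-definite matrix is non-negative
assert var_mspbe >= 0.0
```

This is the same b^⊤ C^{−1} b structure as the ordinary MSPBE, which is why the gradient-TD machinery carries over with barred quantities.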
The equivalence is simpler if we assume that we have access to an estimate of the first moment of the λ-return. For our setting, we do in fact have such an estimate, because we simultaneously learn w_err. We include the more general expectation equivalence in Theorem 2, with all proofs in the appendix.

Theorem 1. For a given unbiased estimate ḡ_{t+1} of E[G^λ_{t+1} | S_{t+1}], define

δ̄_t := (ρ_t² Ḡ_t² + 2 ρ_t² γ_{t+1} λ_{t+1} Ḡ_t ḡ_{t+1}) + γ̄_{t+1} x_{t+1}^⊤ w^sq_t − x_t^⊤ w^sq_t
z̄_t := x_t + γ̄_t λ̄_t z̄_{t−1}.

Then E[δ̄^λ̄_t x_t] = E[δ̄_t z̄_t].

Theorem 2.

E[r̄_{t+1} z̄_t] = E[ρ_t² Ḡ_t² z̄_t] + 2 E[ρ_t² Ḡ_t (ā_t + B̄_t w_t)]

where

ā_t := ρ_t γ_t λ_t (R_t z̄_{t−1} + ā_{t−1})
B̄_t := ρ_t γ_t λ_t (γ_t (1 − λ_t) z̄_{t−1} x_t^⊤ + B̄_{t−1}).

Corollary 1. For λ_t = 1 for all t,

E[r̄_{t+1} z̄_t] = E[ρ_t² R_{t+1}² z̄_t] + 2 E[R_{t+1} ā_t]

where

z̄_t = x_t + ρ_{t−1}² γ_t² z̄_{t−1}
ā_t = ρ_t γ_t (R_t z̄_{t−1} + ā_{t−1}).

To derive the VTD algorithm, we take the gradient of the Var-MSPBE. As this again parallels GTD(λ), we include the derivation in the appendix for completeness and provide only the final result here:

−(1/2) ∇Var-MSPBE(w_sq) = E[δ̄^λ̄_t x_t] − E[γ̄_{t+1} (1 − λ̄_{t+1}) x_{t+1} z̄_t^⊤] E[x_t x_t^⊤]^{−1} E[δ̄^λ̄_t x_t].

As with previous gradient TD algorithms, we will learn an auxiliary set of weights h_sq to estimate a part of this objective: E[x_t x_t^⊤]^{−1} E[δ̄^λ̄_t x_t].
To obtain such an estimate, notice that h_sq = E[x_t x_t^⊤]^{−1} E[δ̄^λ̄_t x_t] corresponds to an LMS solution, where the goal is for x_t^⊤ h_sq to estimate E[δ̄^λ̄_t | x_t]. Therefore, we can use an LMS update for h_sq, giving the final set of update equations for VTD:

ḡ_{t+1} ← x_{t+1}^⊤ w_err
r̄_{t+1} ← ρ_t² Ḡ_t² + 2 ρ_t² γ_{t+1} λ_{t+1} Ḡ_t ḡ_{t+1}
γ̄_{t+1} ← ρ_t² γ_{t+1}² λ_{t+1}²
δ̄_t ← r̄_{t+1} + γ̄_{t+1} x_{t+1}^⊤ w^sq_t − x_t^⊤ w^sq_t
w^sq_{t+1} ← w^sq_t + ᾱ δ̄_t z̄_t − ᾱ γ̄_{t+1} (1 − λ̄_{t+1}) x_{t+1} (z̄_t^⊤ h^sq_t)
h^sq_{t+1} ← h^sq_t + ᾱ_h (δ̄_t z̄_t − (x_t^⊤ h^sq_t) x_t).

For λ-greedy, we set λ̄_{t+1} = 1, so the term with the auxiliary weights is multiplied by 1 − λ̄_{t+1} = 0, removing the need to approximate h_sq.

6. RELATED WORK

There has been a significant effort to empirically investigate λ, typically using batch off-line computation and model-based techniques. Sutton and Singh [25] investigated tuning both α and λ. They proposed three algorithms; the first two assume the underlying MDP has no cycles, and the third makes use of an estimate of the transition probabilities and is thus of most interest in tabular domains. Singh and Dayan [18] provided analytical expressions for bias and variance, given the model. They suggest that there is a largest feasible step-size α, below which bias converges to zero and variance converges to a non-zero value, and above which bias and/or variance may diverge. Downey and Sanner [3] used a Bayesian variant of TD learning, requiring a batch of samples and off-line computation, but did provide an empirical demonstration of optimally setting λ after obtaining all the samples.
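The VTD updates above, specialized to λ = λ̄ = 1 as used in Algorithm 2 (so Ḡ_t = R_{t+1} and the h_sq correction vanishes), can be collected into a single step function. A sketch with a hypothetical helper name:

```python
import numpy as np

def vtd_step(w_sq, z_bar, x, x_next, r_next, g_next, rho, gamma_next, alpha):
    """One VTD update in the lambda = lambda_bar = 1 case (sketch).

    g_next is the first-moment estimate g_bar_{t+1} = x_{t+1}^T w_err.
    Returns the updated second-moment weights and trace.
    """
    r_bar = rho**2 * r_next**2 + 2 * rho**2 * gamma_next * r_next * g_next
    gamma_bar = rho**2 * gamma_next**2
    delta_bar = r_bar + gamma_bar * (x_next @ w_sq) - x @ w_sq
    z_bar = gamma_bar * z_bar + x          # trace for the second moment
    return w_sq + alpha * delta_bar * z_bar, z_bar
```

Then x_{t+1}^⊤ w_sq estimates the second moment of the return from the next state, which combines with the first-moment estimate to give the capped variance used by λ-greedy.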
Kearns and Singh [5] compute a bias-variance error bound for a modification of TD called phased TD. In each discrete phase, the algorithm is given n trajectories from each state. Because there are n trajectories in each state, the effective learning rate is 1/n, removing the complexities of sample averaging in the conventional online TD update. The error bounds are useful for, among other things, computing a new λ value for each phase that outperforms any fixed λ value, empirically demonstrating the utility of changing λ. There has also been a significant effort toward theoretically characterizing λ. Most notably, the work of Schapire and Warmuth [17] contributed a finite-sample analysis of incremental TD-style algorithms. They analyze a variant of TD called TD⋆(λ), which, although still linear and incremental, computes value estimates quite differently. The resulting finite-sample bound is particularly interesting, as it does not rely on model assumptions, using only access to a sequence of feature vectors, rewards, and returns. Unfortunately, the bound cannot be analytically minimized to produce an optimal λ value. They did simulate their bound, further verifying the intuition that λ should be large if the best linear predictor is inaccurate, small if it is accurate, and an intermediate value otherwise. Li [7] later derived similar bounds for another gradient-descent algorithm, called residual gradient. This algorithm, however, does not utilize eligibility traces and converges to a different solution than TD methods when function approximation is used [24]. Another approach involves removing the λ parameter altogether, in an effort to improve robustness. Konidaris et al. [6] introduced a new TD method called TDγ. Their work defines a plausible set of assumptions implicitly made when constructing the λ-returns, and then relaxes one of those assumptions.
They derive an exact (but computationally expensive) algorithm, TDγ, that no longer depends on a choice of λ and performs well empirically in a variety of policy-learning benchmarks. The incremental approximation to TDγ also performs reasonably well, but appears to be somewhat sensitive to the choice of the meta-parameter C, and often requires large C values to obtain good performance. This can be problematic, as the complexity grows as O(Cn), where n is the length of the trajectories, rather than linearly in the feature-vector size. Nonetheless, TDγ constitutes a reasonable way to reduce parameter sensitivity in the on-policy setting. Garcia and Serre [4] proposed a variant of Q-learning for which the optimal value of λ can be computed online. Their analysis, however, was restricted to the tabular case. Finally, Mahmood et al. [11] introduced weighted importance sampling for off-policy learning; though indirect, this is a strategy for enabling larger λ to be selected without destabilizing off-policy learning. This related work has helped shape our intuition on the role of λ and, in special cases, provided effective strategies for adapting λ. In the next section, we add to existing work with an empirical demonstration of λ-greedy, the first λ-adaptation algorithm for off-policy, incremental learning, developed from a well-defined, greedy objective.

7. EXPERIMENTS

We investigate λ-greedy in a ring-world, under both on- and off-policy sampling. This ring-world was previously introduced as a suitable domain for investigating λ [5]. We varied the length of the ring-world over N = 10, 25, 50. The reward is zero in every state, except for two adjoining states that have +1 and −1 reward and are terminal states. The agent is teleported back to the middle of the ring-world upon termination.
The target policy is to take action “right” with 95% probability and action “left” with 5% probability. The feature representation is a tabular encoding of states with binary identity vectors, but we also examine the effects of aliasing state values to simulate poor generalization: a common case where the true value function cannot be represented. The length of the experiment is a function of the problem size, N × 100, proportionally scaling the number of samples for longer problem instances. We compared to fixed values of λ = 0, 0.1, . . . , 0.9, 1.0 and to two time-decay schedules, λ_t = 10/(10 + t) and λ_t = 100/(100 + t), which worked well compared to several other tested settings. The discount factor is γ = 0.99 for the on-policy chains and 0.95 for the off-policy chains. We include the optimal λ-greedy, which computes w_err and w_sq using closed-form solutions defined by their respective Bellman operators. For λ-greedy, the initialization was w_sq = 0.0 and w_err = (R_max/(1 − γ)) × 1 for on-policy, to match the rule-of-thumb of initializing with high lambdas, and the opposite for off-policy, to match the rule-of-thumb of more caution in off-policy domains. This max-return value is a common choice for optimistic initialization, and prevented inadvertent evaluator bias from overly tuning this parameter. We fixed the learning-rate parameter for λ-greedy to be equal to the α used in learning the value function (w), again to demonstrate performance in less-than-ideal settings. Sweeping the step-size for λ-greedy would improve performance. The performance results in the on- and off-policy settings are summarized in Figures 1 and 2. We report the absolute value error compared to the true value function, which is computable in this domain for each of the settings.
We average over 100 runs, and report the results for the best parameter settings for each of the algorithms, over 12 values of α ∈ {0.1 × 2^j | j = −6, −5, . . . , 5, 6} and 11 values of η ∈ {2^j | j = −16, −8, −4, . . . , 4, 8, 16} (with α_h = α η). In general, we find that λ-greedy works well across settings. The optimal λ-greedy consistently performs the best, indicating the merit of the objective. Estimating w_err and w_sq typically causes λ-greedy to perform more poorly, indicating an opportunity to improve these algorithms to match the performance of the optimal, idealistic version. In particular, we did not optimize the meta-parameters in λ-greedy. For the fixed values of λ and the decay schedules, we find that they can be effective for specific instances, but do not perform well across problem settings. In particular, the fixed decay schedules appear not to be robust to an increasing chain length, and the fixed λ are not robust to the change from on-policy to off-policy. We also examined the λ value selected by our λ-greedy algorithm plotted against time. For tabular features, λ should converge to zero over time, since the value function can be represented exactly and so, at some point, no bias is introduced by using λ_t = 0, while variance is reduced. The algorithm does converge to a state-based λ (λ(s) ≈ 0 for all states), which was one of our goals to ensure we have a well-defined fixed point. Second, for aliased features, we expect that the final per-state λ should be larger for the states that have been aliased. The intuition is that one should not bootstrap on the values of these states, as that introduces bias. We demonstrate that this does indeed occur, in Figure 3. As expected, λ-greedy is more robust to state aliasing, compared to the fixed strategies.
State aliasing provides a concrete example of when we have less confidence in v̂ in specific states, and an effective strategy to mitigate this situation is to set λ high in those states. 8." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file