diff --git "a/abs_29K_G/test_abstract_long_2405.01035v1.json" "b/abs_29K_G/test_abstract_long_2405.01035v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01035v1.json" @@ -0,0 +1,449 @@ +{ + "url": "http://arxiv.org/abs/2405.01035v1", + "title": "LOQA: Learning with Opponent Q-Learning Awareness", + "abstract": "In various real-world scenarios, interactions among agents often resemble the\ndynamics of general-sum games, where each agent strives to optimize its own\nutility. Despite the ubiquitous relevance of such settings, decentralized\nmachine learning algorithms have struggled to find equilibria that maximize\nindividual utility while preserving social welfare. In this paper we introduce\nLearning with Opponent Q-Learning Awareness (LOQA), a novel, decentralized\nreinforcement learning algorithm tailored to optimizing an agent's individual\nutility while fostering cooperation among adversaries in partially competitive\nenvironments. LOQA assumes the opponent samples actions proportionally to their\naction-value function Q. Experimental results demonstrate the effectiveness of\nLOQA at achieving state-of-the-art performance in benchmark scenarios such as\nthe Iterated Prisoner's Dilemma and the Coin Game. LOQA achieves these outcomes\nwith a significantly reduced computational footprint, making it a promising\napproach for practical multi-agent applications.", + "authors": "Milad Aghajohari, Juan Agustin Duque, Tim Cooijmans, Aaron Courville", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Agent AND Reinforcement AND Learning", + "gt": "In various real-world scenarios, interactions among agents often resemble the\ndynamics of general-sum games, where each agent strives to optimize its own\nutility. Despite the ubiquitous relevance of such settings, decentralized\nmachine learning algorithms have struggled to find equilibria that maximize\nindividual utility while preserving social welfare. In this paper we introduce\nLearning with Opponent Q-Learning Awareness (LOQA), a novel, decentralized\nreinforcement learning algorithm tailored to optimizing an agent's individual\nutility while fostering cooperation among adversaries in partially competitive\nenvironments. LOQA assumes the opponent samples actions proportionally to their\naction-value function Q. Experimental results demonstrate the effectiveness of\nLOQA at achieving state-of-the-art performance in benchmark scenarios such as\nthe Iterated Prisoner's Dilemma and the Coin Game. LOQA achieves these outcomes\nwith a significantly reduced computational footprint, making it a promising\napproach for practical multi-agent applications.", + "main_content": "INTRODUCTION A major difficulty in reinforcement learning (RL) and multi-agent reinforcement learning (MARL) is the non-stationary nature of the environment, where the outcome of each agent is determined not only by their own actions but also those of other players von Neumann (1928). This difficulty often results in the failure of traditional algorithms converging to desirable solutions. In the context of general-sum games, independent RL agents often converge to sub-optimal solutions in the Pareto sense, when each of them seeks to optimize their own utility Foerster et al. (2018b). 
This situation draws parallels with many real-world scenarios, in which individuals pursuing their own selfish interests leads them to a worse outcome than cooperating with others. Thus one of the objectives of MARL research must be to develop decentralized agents that are able to cooperate while avoiding being exploited in partially competitive settings. We call this reciprocity-based cooperation. Previous work has resulted in algorithms that train reciprocity-based cooperative agents by differentiating through the opponent\u2019s learning step (Foerster et al., 2018b; Letcher et al., 2021; Zhao et al., 2022; Willi et al., 2022) or by modeling opponent shaping as a meta-game in the space of agent policies (Al-Shedivat et al., 2018; Kim et al., 2021; Lu et al., 2022; Cooijmans et al., 2023). However, both of these approaches have important drawbacks with respect to computational efficiency. On one hand, differentiating through even just a few of the opponent\u2019s learning steps, can only be done sequentially and requires building large computation graphs. This is computationally costly when dealing with complex opponent policies. On the other hand, meta-learning defines the problem as a meta-state over the product space of policies of the agent and opponent, and learns a meta-policy that maps from the meta-state to the agent\u2019s updated policy. The complexity of the problem then scales with the policy parameterization which is usually a neural network with many parameters. In this paper we introduce Learning with Opponent Q-Learning Awareness (LOQA), which stands because it avoids computing gradients w.r.t. optimization steps or learning the dynamics of a metagame, resulting in significantly improved computational efficiency. LOQA performs opponent shaping by assuming that the opponent\u2019s behavior is guided by an internal action-value function Q. This assumption allows LOQA agents to build a model of the opponent policy that can be shaped by influencing its returns for different actions. Controlling the return by differentiating through stochastic 1 arXiv:2405.01035v1 [cs.GT] 2 May 2024 \fPublished as a conference paper at ICLR 2024 objectives is a key idea in RL and can be done using the REINFORCE estimator Williams (1992). LOQA is strongly inspired by Best Response Shaping Aghajohari et al. (2024). In BRS, the detective approximates the optimal response to an agent by conditioning on the returns from simulated game trajectories between the agent and a random opponent. The agent then differentiates through the detective through these differentiable returns. 2 BACKGROUND We consider general-sum, n-player, Markov games, also referred as stochastic games Shapley (1953). Markov games are defined by a tuple M = (N, S, A, P, R, \u03b3) where S denotes the state space, A := A1 \u00d7 . . . \u00d7 An, is the joint action space of all players, P : S \u00d7 A \u2192\u2206(S), defines a mapping from every state and joint action to a probability distribution over states, R = {r1, . . . , rn} is the set of reward functions where each ri : S \u00d7 A \u2192R maps every state and joint action to a scalar return and \u03b3 \u2208[0, 1] is the discount factor. We use the notation and definitions for standard RL algorithms of Agarwal et al. (2021). Consider two agents, 1 (agent) and 2 (opponent) that interact in an environment with neural network policies \u03c01 := \u03c0(\u00b7|\u00b7; \u03b81), \u03c02 := \u03c0(\u00b7|\u00b7; \u03b82) parameterized by \u03b81 and \u03b82 respectively. 
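To make the stated assumption concrete, the sketch below (Python/NumPy, not taken from the paper) builds a differentiable model of the opponent's policy directly from estimated opponent action values, with probabilities proportional to Q as LOQA assumes; the state encoding and the example numbers in q_opponent are hypothetical.

    import numpy as np

    def opponent_policy_from_q(q_values, eps=1e-8):
        # LOQA's modelling assumption: the opponent samples actions with
        # probability proportional to its action values Q2(s, .).
        q = np.maximum(q_values, 0.0) + eps   # clip so the probabilities stay valid
        return q / q.sum()

    # Hypothetical opponent action values in one state of the Iterated Prisoner's Dilemma.
    q_opponent = np.array([1.7, 0.4])          # [cooperate, defect]
    print(opponent_policy_from_q(q_opponent))  # roughly [0.81, 0.19]

Because the modelled probabilities depend on returns that the agent itself influences, shaping the opponent amounts to differentiating those returns, e.g. with the REINFORCE estimator discussed next.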
We denote \u03c4 to be a trajectory with initial state distribution \u00b5 and probability measure Pr\u03c01,\u03c02 \u00b5 given by Pr\u03c01,\u03c02 \u00b5 (\u03c4) = \u00b5(s0)\u03c01(a0|s0)\u03c02(b0|s0)P(s1|s0, a0, b0) . . . here b \u2208A2 denotes the action of the opponent. In multi-agent reinforcement learning, each agent seeks to optimize their expected discounted return R, for the agent this is given by: V 1(\u00b5) := E\u03c4\u223cPr\u03c01,\u03c02 \u00b5 \u0002 R1(\u03c4) \u0003 = E\u03c4\u223cPr\u03c01,\u03c02 \u00b5 \" \u221e X t=0 \u03b3tr1(st, at, bt) # The key observation is that under the definitions above, V 1 is dependent on the policy of the opponent through the reward function r1(st, at, bt). V 1 is thus differentiable with respect to the parameters of the opponent via the REINFORCE estimator (Williams, 1992) \u2207\u03b82V 1(\u00b5) = E\u03c4\u223cPr\u03c01,\u03c02 \u00b5 \" R1(\u03c4) \u221e X t=0 \u2207\u03b82log \u03c02(bt|st) # = E\u03c4\u223cPr\u03c01,\u03c02 \u00b5 \" \u221e X t=0 \u03b3tr1(st, at, bt) X kt)\u223c\u03c0 \u221e X \u03c4=t \u03b3\u03c4\u2212tfi(x(\u03c4)) = fi(x(t)) + \u03b3 E x(t+1)\u223c\u03c0 V \u03c0 i (x(t+1)). (6) In this context, gradient methods like naive learning and LOLA can be seen as deterministic metapolicies, although they become stochastic when policy gradients are involved, and non-Markov when path-dependent information like momentum is used. 3 \fMeta-Value Learning Cooijmans, Aghajohari & Courville Meta-PG (Al-Shedivat et al., 2018) was the first to consider such a meta-game, applying policy gradients to find initializations xi that maximize V \u03c0 i , with \u03c0 assumed to be naive learning on f. Meta-MAPG (Kim et al., 2021) tailor Meta-PG to multi-agent learning, taking the learning process of other agents into account. However, Meta-MAPG (like Meta-PG) assumes all agents use naive learning, and hence is inconsistent like LOLA. M-FOS (Lu et al., 2022) considers a partially observable meta-game, thus allowing for the scenario in which opponent policies are not directly observable. M-FOS trains parametric meta-policies \u03c0i(\u00b7 \u00b7 \u00b7 ; \u03b8i) to maximize V \u03c0 i using policy gradients. However, the move to arbitrary meta-policies, away from gradient methods, discards the gradual dynamics that are characteristic of learning. As such, M-FOS does not learn to learn with learning awareness so much as learn to act with learning awareness. Indeed, M-FOS uses arbitrarily fast policy changes to derail naive and LOLA learners. 3 META-VALUE LEARNING We now describe our method. First, we propose the use of the meta-value function as a surrogate game. Next, we demonstrate how to estimate the gradient of the meta-value using standard reinforcement learning techniques. Finally, we discuss how the proposed meta-policy relates to one that is (locally) greedy in the Q-values. 3.1 THE META-VALUE FUNCTION In order to fully account for the downstream effects of our meta-actions, we propose to follow the gradient of the meta-value function of (6): x(t+1) i = x(t) i + \u03b1\u2207xiVi(x(t)). (7) Like the popular surrogate (3), it looks ahead in optimization time, but it does so in a way that is consistent with the true optimization process x(t) and naturally covers multiple steps. 
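As a numerical sanity check of the REINFORCE estimator for the gradient of V^1 with respect to theta_2 written above, the following NumPy sketch compares the Monte-Carlo estimate against the exact gradient in a one-step game where the opponent plays b ~ softmax(theta_2) and the agent receives r^1(b); the payoffs and logits are made-up illustration values, not quantities from either paper.

    import numpy as np

    rng = np.random.default_rng(0)

    r1 = np.array([3.0, -1.0])       # hypothetical agent payoffs for opponent actions {0, 1}
    theta2 = np.array([0.2, -0.4])   # opponent logits

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Exact gradient of V^1 = E_b[r1(b)] with respect to theta2 (softmax policy).
    p = softmax(theta2)
    exact = p * (r1 - p @ r1)

    # REINFORCE estimate: E[ r1(b) * grad_theta2 log pi2(b) ].
    n = 200_000
    b = rng.choice(2, size=n, p=p)
    grad_logp = np.eye(2)[b] - p                 # rows are one_hot(b) - p
    estimate = (r1[b][:, None] * grad_logp).mean(axis=0)

    print(exact, estimate)                       # should agree to roughly 1e-2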
When all players follow this meta-policy, the transition is deterministic and we can write V (x) = f(x) + \u03b3V (x\u2032) with x\u2032 = x + \u03b1 \u00af \u2207xV (x), (8) which is consistent like (4), and provides a natural way to look further into the future through the discount rate \u03b3. It is however implicit, so we cannot directly access the gradients \u2207xiVi used in (7). 3.2 LEARNING META-VALUES We will take a similar approach as Willi et al. (2022) and approximate our implicit surrogate (8) with a model. Specifically, we propose to learn a model \u02c6 Vi(xi; \u03b8i) parameterized by \u03b8i that approximates the values \u02c6 Vi \u2248Vi, and use its gradients \u2207xi \u02c6 Vi \u2248\u2207xiVi instead. Thus rather than emitting entire gradients (as COLA does) or emitting entire policies (as M-FOS does), we model scalars, and estimate the gradient of the scalar by the gradient of the estimated scalar. The resulting algorithm is related to Value-Gradient Learning (Fairbank & Alonso, 2012), but we do not directly enforce a Bellman equation on the gradients \u2207xi \u02c6 Vi. In essence, the learning process follows a nested loop (see Algorithms 1 & 2). In the inner loop, we collect a (finite) policy optimization trajectory according to x(t+1) i = x(t) i + \u03b1\u2207xi \u02c6 Vi(x(t); \u03b8) and x(t+1) \u2212i \u223c\u03c0\u2212i( \u00b7 | x(t)). (9) Then in the outer loop, we train \u02c6 V by minimizing the total TD error P t(\u03b4(t) i )2 where \u03b4(t) i = \u02c6 fi(x(t)) + \u03b3 \u02c6 Vi(x(t+1); \u00af \u03b8i) \u2212\u02c6 Vi(x(t); \u03b8i). (10) Here \u02c6 f(x(t)) is in general an empirical estimate of the expected return f(x(t)) based on a batch of Monte-Carlo rollouts, although in our experiments we use the exact expected return. The target involves \u00af \u03b8, typically a target network that lags behind \u03b8 (Mnih et al., 2015). As training progresses, (9) approaches the proposed update (7). The algorithm can be viewed as a consistent version of Meta-MAPG (Kim et al., 2021), and the value-learning counterpart to policy gradient-based M-FOS (Lu et al., 2022) with local meta-policy \u03c0i. 4 \fMeta-Value Learning Cooijmans, Aghajohari & Courville Algorithm 1 Basic Meta-Value Learning. Require: Learning rates \u03b7, \u03b1, rollout length T, fixed discount rate \u03b3. Initialize models \u03b8i for all players i. while \u03b8 has not converged do Initialize policies x(0). for t = 0, . . . , T do x(t+1) = x(t) + \u03b1 \u00af \u2207x \u02c6 V (x(t); \u03b8) end for for players i \u2208{1, . . . , P} do \u03b8i \u2190\u03b8i \u2212\u03b7\u2207\u03b8i 1 T P t(\u03b4(t) i )2 end for end while Algorithm 2 Basic Meta-Value Learning versus unknown meta-policies. Require: Learning rates \u03b7, \u03b1, rollout length T, fixed discount rate \u03b3. Initialize model \u03b8i for player i. while \u03b8i has not converged do Initialize policies x(0). for t = 0, . . . 
, T do x(t+1) i = x(t) i + \u03b1\u2207xi \u02c6 Vi(x(t); \u03b8i) x(t+1) \u2212i \u223c\u03c0\u2212i( \u00b7 | x(t)) end for \u03b8i \u2190\u03b8i \u2212\u03b7\u2207\u03b8i 1 T P t(\u03b4(t) i )2 end while 3.3 Q-LEARNING INTERPRETATION In this section we establish a theoretical link between the gradient \u2207xiVi and the action that greedily (if locally) maximizes the state-action value Qi given by Qi(x, x\u2032 i) = Ex\u2032 \u2212i Vi(x\u2032 i, x\u2032 \u2212i), or equivalently Qi(x, xi + \u2206i) = E\u2206\u2212i Vi(x + \u2206), where we have defined \u2206i = x\u2032 i \u2212xi so as to write the Q-function in terms of policy changes. Next, we construct a first-order Taylor approximation \u02dc Qi of Qi around x: \u02dc Qi(x, xi + \u2206i) = Vi(x) + E\u2206\u2212i \u2206\u22a4\u2207xVi(x) (11) = Vi(x) + \u2206\u22a4 i \u2207xiVi(x) + E\u2206\u2212i \u2206\u22a4 \u2212i\u2207x\u2212iVi(x). (12) This approximation is not justified in general as there is no reason to expect \u2206to be small, particularly the opponent updates \u2206\u2212i which we do not control. It is however justified when all players are learners, i.e. they make local updates with small \u2206. We now proceed to maximize (11) to find the argmax of \u02dc Qi. In doing so, we must include a locality constraint to bound the problem. If we use a soft norm penalty, we arrive at exactly our update: argmax\u2206i \u02dc Qi(x, xi + \u2206i) \u2212 1 2\u03b1\u2225\u2206i\u22252 = \u03b1\u2207xiVi(x). However, we may wish instead to apply a hard norm constraint, which would admit a treatment from the perspective of a game with a well-defined local action space. Either way, the argmax update will be proportional to \u2207xiVi(x), which lends an interpretation to our use of the gradient in (7). In conclusion, our proposed update locally maximizes a local linearization \u02dc Q of the Q-function. Our method is thus related to independent Q-learning (Watkins & Dayan, 1992; Busoniu et al., 2008), which we must point out is not known to converge in general-sum games. It nevertheless does appear to converge reliably in practice, and we conjecture that applying it on the level of optimization effectively simplifies the interaction between the agents\u2019 learning processes. 4 PRACTICAL CONSIDERATIONS We use a number of established general techniques to improve the dynamics of value function approximation (Hessel et al., 2018). The prediction targets in (10) are computed with a target network (Mnih et al., 2015) that is an exponential moving average of the parameters \u03b8i. We use distributional reinforcement learning with quantile regression (Dabney et al., 2018). Instead of the fully-bootstrapped TD(0) error, we use \u03bb-returns (Sutton & Barto, 2018) as the targets, computed individually for each quantile. The rest of this section describes some additional techniques designed to mitigate the downsides of bootstrapping and to encourage generalization. Algorithm 3 in Appendix B lays out the complete learning process with these techniques included. 5 \fMeta-Value Learning Cooijmans, Aghajohari & Courville 4.1 REFORMULATION AS A CORRECTION The use of a model \u02c6 V introduces a bias, particularly early on in training when its gradients \u00af \u2207\u02c6 V are meaningless. We provide a variant of the method that provides a correction to the original game f rather than replacing it entirely. Instead of modeling V (x) = f(x) + \u03b3 Ex\u2032 V (x\u2032), we may model U(x) = Ex\u2032 V (x\u2032). 
We can derive a Bellman equation for U: U(x) = Ex\u2032 V (x\u2032) = Ex\u2032 f(x\u2032) + \u03b3 Ex\u2032\u2032 V (x\u2032\u2032) = Ex\u2032 f(x\u2032) + \u03b3U(x\u2032). Now agents follow the gradient field \u2207xifi(x) + \u03b3\u2207xi \u02c6 Ui(x), and we minimize P t(\u03b4(t) i )2 where \u03b4(t) i = \u02c6 fi(x(t+1)) + \u03b3 \u02c6 Ui(x(t+1); \u00af \u03b8i) \u2212\u02c6 Ui(x(t); \u03b8i) with respect to the parameters \u03b8i of our model \u02c6 Ui(x; \u03b8i). This loss is the same as that for \u02c6 V but with a time shift on the f term. This variant is more strongly grounded in the game f, as the use of the original gradient \u2207xifi(x) guards against poor initialization and spurious drift in the model. A drawback of this approach is that now the naive gradient term \u2207xifi(x) will in general have to be estimated by REINFORCE (Williams, 1992). In all of our experiments we are able to compute \u2207xifi exactly. 4.2 VARIABLE DISCOUNT RATES We set up the model (be it \u02c6 U or \u02c6 V ) to condition on discount rates \u03b3i, so that we can train it for different rates and even rates that differ between the players. This is helpful because it forces the model to better understand the given policies, in order to distinguish policies that would behave the same under some fixed discount rate but differently under another. During training, we draw \u03b3i \u223cBeta (1/2, 1/2) from the standard arcsine distribution to emphasize extreme values. Varying \u03b3 affects the scale of U, V and hence the scale of our approximations to them. This in turn changes the effective learning rate when we take gradients. To account for this, we could normalize the outputs and gradients of \u02c6 U, \u02c6 V by scaling by 1 \u2212\u03b3 before use. However, we instead choose to multiply the meta-reward term f(x) in the Bellman equations by 1 \u2212\u03b3: Vi(x; \u03b3) = (1 \u2212\u03b3i)fi(x) + \u03b3i Ex\u2032 Vi(x\u2032; \u03b3) Ui(x; \u03b3) = Ex\u2032(1 \u2212\u03b3i)fi(x\u2032) + \u03b3iVi(x\u2032; \u03b3) This ensures our models learn the normalized values instead, which fall in the same range as f(x). Appendix A has a derivation. 4.3 EXPLORATION Our model \u02c6 V provides a deterministic (meta)policy for changing the inner policies x. Effective value learning, however, requires exploration as well as exploitation. A straightforward way to introduce exploration into the system is to perturb the greedy transition in (9) with some additive Gaussian noise (Heess et al., 2015). However, this leads to a random walk that fails to systematically explore the joint policy space. Instead of perturbing the (meta)actions, we perturb the (meta)policy by applying noise to the parameters \u03b8i, and hold the perturbed policy fixed over the course of an entire exploration trajectory \u02dc x(0), . . . , \u02dc x(T ). Specifically, we randomly flip signs on the final hidden units of \u02c6 V ; this results in a perturbed value function that incentivizes different high-level characteristics of the inner policies x. The trajectories so collected are entirely off-policy and serve only to provide a diversity of states. In order to train our model on a given state x, we collect a short on-policy rollout with the unperturbed parameters \u03b8 and minimize TD error there. 6 \fMeta-Value Learning Cooijmans, Aghajohari & Courville Figure 1: The Logistic Game. The left panel displays the contours of player 1\u2019s objective f1(x), the right panel similarly for player 2. 
Player 1\u2019s policy x1 is a horizontal position, player 2\u2019s policy x2 is a vertical position. Both players prefer solution B over solution A, but cannot unilaterally go there. Naive learning converges to whichever solution is closest upon initialization. 5 EXPERIMENTS The method is evaluated on four environments. First, we demonstrate the advantage of looking farther ahead on a two-dimensional game that is easy to visualize. Next, we evaluate opponent shaping on the IPD, IMP and Chicken games by pitting MeVa head-to-head against Naive and LOLA agents, and M-MAML (a special exact case of Meta-MAPG due to Lu et al. (2022)). 5.1 LOGISTIC GAME We analyze the behavior of several algorithms on the Logistic Game (Letcher, 2018), a two-player game where each player\u2019s policy is a single scalar value. Thus the entire joint policy space is a two-dimensional plane, which we can easily visualize. The game is given by the function1 f(x) = \u2212 \u0010 4\u03c3(x1)(1\u22122\u03c3(x2)) 4\u03c3(x2)(1\u22122\u03c3(x1)) \u0011 \u2212x2 1x2 2+(x1\u2212x2)2(x1+x2)2 10000 . (13) Figure 1 shows the structure of the game. There are two stable fixed points \u2013 one in the lower left (A) and one in the upper right (B). Both players prefer B to A, however to get from A to B requires coordination: the horizontal player prefers left if the vertical player plays low and vice versa. We look at this game in terms of basins of attraction, and how different algorithms affect them (Figure 2b). Following naive gradients (LOLA/HOLA2 with \u03b1 = 0), players converge to whichever solution is nearest; the basins of attraction meet at a diagonal line through the origin. LOLA grows the basin of the preferred solution B, but only slightly and increasing the extrapolation step size \u03b1 does not help much. HOLA2 grows the basin of B around the edges, but suffers from instabilities around the origin (a saddlepoint). We found HOLA3 to be significantly worse than HOLA2 and did not pursue that direction further. COLA (our implementation) makes significant improvements around the edges and around the origin, but overshoots B as \u03b1 increases. Finally, MeVa is able to make the basin of B arbitrarily large. When \u03b3 > 0.9, it converges to the preferred solution B from anywhere in the surveyed area. We also show some actual optimization trajectories in Figure 2a. Experiment details can be found in Appendix C. 5.2 MATRIX GAMES We evaluate our method on several repeated matrix games by going head-to-head with naive learners, LOLA and M-MAML. M-MAML is a variant of MetaMAPG due to Lu et al. (2022) that uses exact 1Letcher (2018) use the divisor 1000 in Eqn (13), however it does not match their plots. 7 \fMeta-Value Learning Cooijmans, Aghajohari & Courville (a) Optimization trajectories. We took a random set of policy pairs and, for each panel, optimized them according to the algorithm under consideration. Each curve shows an optimization trajectory, typically finishing in or close to either A or B. (b) Basins of attraction. For each panel, we took a grid of policy space points and optimized them according to the algorithm under consideration. White cells indicate that the corresponding point ended up in the positive quadrant x1, x2 > 0, black cells ended up in other quadrants (typically the negative quadrant). Figure 2: Logistic Game behaviors of different algorithms (rows) with different settings (columns). For LOLA/HOLA/COLA, the parameter is the learning rate \u03b1. Increasing it leads to instability. 
For MeVa, the parameter is the meta-discount rate \u03b3, which gradually smooths out the landscape until it leads towards B from anywhere. Table 1: Payoffs for the matrix games considered. (a) Iterated Prisoner\u2019s Dilemma A B A (\u22121, \u22121) (\u22123, 0) B ( 0, \u22123) (\u22122, \u22122) (b) Iterated Matching Pennies A B A (+1, \u22121) (\u22121, +1) B (\u22121, +1) (+1, \u22121) (c) Chicken Game A B A ( 0, 0) ( \u22121, + 1) B (+1, \u22121) (\u2212100, \u2212100) gradients. We do not provide direct comparison with M-FOS as doing so requires training M-FOS and MeVa jointly; it is unclear how to do this fairly. Instead, we compare our behaviors versus Naive, LOLA and M-MAML with those of M-FOS. We use the same setup as Lu et al. (2022): policies xi \u2208R5 consist of five binary logits, corresponding to the probability of playing action A or B in each of five possible states (the initial state \u2205and the previous joint action AA, AB, BA, BB). We use the exact value function given by Foerster et al. (2018a) with discount rate 0.96 (not to be confused with our meta-discount rate \u03b3). The games differ only in their payoff matrices (Table 1), however IMP stands out as being not symmetric but antisymmetric. Lu et al. (2022) performed comparisons as though it were symmetric, that is, as though (meta)policies trained to (meta)play player 1 could be used to (meta)play player 2. Particularly, the numbers they report for M-MAML are meaningless. The numbers we report do not have this problem, except M-MAML vs M-FOS wich was measured using their code, and which we omit from the table for this reason. Please see Appendix D for more discussion of the issue. 8 \fMeta-Value Learning Cooijmans, Aghajohari & Courville Table 2: Head-to-head comparison of meta-policies on repeated matrix games. For each pairing, we report the return of the row player, averaged across a batch of trials. Appendix E reports standard errors on these numbers. (a) Iterated Prisoner\u2019s Dilemma. MeVa extorts the naive learner and MMAML, and is slightly extorted by LOLA. Naive LOLA MMAML MFOS MeVa Naive -1.99 -1.38 -1.52 -2.02 -2.00 LOLA -1.36 -1.04 -0.97 -1.02 -1.03 MMAML -1.40 -1.29 -1.22 -1.99 MFOS -0.56 -1.02 -1.01 MeVa -0.55 -1.15 -0.53 -1.05 (b) Iterated Matching Pennies. MeVa is able to exploit both the naive learner and LOLA, but not MMAML. This exploitation must be dynamical in nature, as ZDextortion cannot occur in zero-sum games. Naive LOLA MMAML MFOS MeVa Naive 0.01 0.03 -0.10 -0.20 -0.24 LOLA -0.03 0.03 -0.05 -0.19 -0.30 MMAML 0.10 0.05 -0.00 -0.01 MFOS 0.20 0.19 0.00 MeVa 0.24 0.30 0.01 -0.00 (c) Chicken Game. MeVa exploits the naive learner and MMAML, while avoiding disasters against itself. LOLA exploits every opponent except MMAML, but does poorly against itself. Naive LOLA MMAML MFOS MeVa Naive -0.05 -0.40 -0.99 -1.03 -0.98 LOLA 0.38 -1.64 -0.80 0.79 0.14 MMAML 0.98 0.78 -0.45 -0.91 MFOS 0.97 -1.16 -0.01 MeVa 0.96 -0.23 0.17 -0.08 On the Iterated Prisoner\u2019s Dilemma (Table 2a), MeVa extorts the naive learner (details in Appendix F), and LOLA to a small extent. The behavior is similar to that of M-FOS, although M-FOS leads the naive agent to accept returns below -2, indicating dynamical exploitation. On Iterated Matching Pennies (Table 2b), MeVa exploits naive and LOLA learners, moreso than M-FOS. ZD-extortion is impossible in zero-sum games, so MeVa exhibits dynamical exploitation. 
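For reference, expected returns of memory-one IPD policies like those behind Table 2a can be computed in closed form from the initial joint-action distribution, the 4x4 transition matrix, and the discounted resolvent. The sketch below follows this standard formulation; the state ordering, the role swap for player 2, and the (1 - gamma) normalization are conventions of this sketch rather than details quoted from the papers.

    import numpy as np

    def exact_ipd_values(theta1, theta2, gamma=0.96):
        # Each policy is 5 logits: [initial, after CC, after CD, after DC, after DD],
        # where the first letter of each state is that player's own previous action.
        p1, p2 = 1 / (1 + np.exp(-theta1)), 1 / (1 + np.exp(-theta2))
        # Initial distribution over joint actions (CC, CD, DC, DD).
        p0 = np.array([p1[0] * p2[0], p1[0] * (1 - p2[0]),
                       (1 - p1[0]) * p2[0], (1 - p1[0]) * (1 - p2[0])])
        # Cooperation probabilities after each joint action; player 2 sees the
        # joint action with the roles swapped (CD <-> DC).
        a, b = p1[1:], p2[[1, 3, 2, 4]]
        M = np.stack([a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)], axis=1)
        r1 = np.array([-1.0, -3.0, 0.0, -2.0])   # row-player payoffs from Table 1a
        r2 = np.array([-1.0, 0.0, -3.0, -2.0])
        v = p0 @ np.linalg.solve(np.eye(4) - gamma * M, np.stack([r1, r2], axis=1))
        return (1 - gamma) * v                    # normalize to a per-step return

    print(exact_ipd_values(np.full(5, 5.0), np.full(5, 5.0)))    # ~[-1, -1]: mutual cooperation
    print(exact_ipd_values(np.full(5, -5.0), np.full(5, -5.0)))  # ~[-2, -2]: mutual defection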
On the Chicken Game (Table 2c), LOLA exploits every opponent except M-MAML, but does poorly against itself (also observed by Lu et al. (2022)). M-MAML similarly exploits its opponents by taking on an aggresive initial policy. MeVa exploits the naive learner and M-MAML, while avoiding disasters against itself. Overall, we find that MeVa is competitive with M-FOS on these games. Despite the restriction to local meta-actions, MeVa finds ZD-extortion and even dynamical exploitation on games that permit it. Further detail, including standard errors on these results, can be found in Appendix E. 6 LIMITATIONS The meta-value function is a scalar function over (joint) policies. In practice, policies will often take the form of neural networks, and so will our meta-value function approximation. Conditioning neural nets on other neural nets is a major challenge (Harb et al., 2020). In addition, the large parameter vectors associated with neural nets quickly prohibit handling batched optimization trajectories. During training and opponent shaping, we assume opponent parameters to be visible to our agent. This is not necessarily unrealistic \u2013 we learn and use the meta-value only as a means to the end of finding good policies x for the game f that can then be deployed in the wild without further training. Nevertheless, the algorithm could be extended to work with opponent models, or more directly, the model could observe policy behaviors instead of parameters. The meta-discount rate \u03b3, like LOLA\u2019s step size \u03b1, is hard to interpret. Its meaning changes significantly with the learning rate \u03b1 and the parameterization of both the model \u02c6 V and the policies x. Future work could explore the use of proximal updates, like POLA (Zhao et al., 2022) did for LOLA. Finally, it is well known that LOLA fails to preserve the Nash equilibria of the original game f. The method presented here shares this property. 9 \fMeta-Value Learning Cooijmans, Aghajohari & Courville 7" + }, + { + "url": "http://arxiv.org/abs/1902.02405v1", + "title": "On the Variance of Unbiased Online Recurrent Optimization", + "abstract": "The recently proposed Unbiased Online Recurrent Optimization algorithm (UORO,\narXiv:1702.05043) uses an unbiased approximation of RTRL to achieve fully\nonline gradient-based learning in RNNs. In this work we analyze the variance of\nthe gradient estimate computed by UORO, and propose several possible changes to\nthe method which reduce this variance both in theory and practice. We also\ncontribute significantly to the theoretical and intuitive understanding of UORO\n(and its existing variance reduction technique), and demonstrate a fundamental\nconnection between its gradient estimate and the one that would be computed by\nREINFORCE if small amounts of noise were added to the RNN's hidden units.", + "authors": "Tim Cooijmans, James Martens", + "published": "2019-02-06", + "updated": "2019-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction All learning algorithms are driven by some form of credit assignment\u2014identi\ufb01cation of the causal e\ufb00ect of past actions on a learning signal (Minsky, 1961; Sutton, 1984). This enables agents to learn from experience by amplifying behaviors that lead to success, and attenuating behaviors that lead to failure. The problem of performing e\ufb03cient and precise credit assignment, especially in temporal agents, is a central one in arti\ufb01cial intelligence. 
Knowledge of the inner workings of the agent can simplify the problem considerably, as we can trace responsibility for the agent\u2019s decisions back to its parameters. In this work, we consider credit assignment in recurrent neural networks (rnns; Elman, 1990; Hochreiter and Schmidhuber, 1997), where the di\ufb00erentiability of the learning signal with respect to past hidden units allows us to assign credit using derivatives. But even with this structure, online credit assignment across long or inde\ufb01nite stretches of time remains a largely unsolved problem. Typically, di\ufb00erentiation occurs by Backpropagation Through Time (bptt; Rumelhart et al., 1986; Werbos, 1990), which requires a \u201cforward pass\u201d in which the network is evaluated for a length of time, followed by a \u201cbackwards pass\u201d in which gradient with respect to the model\u2019s parameters is computed. This is impractical for very long sequences, and a common trick is to \u201ctruncate\u201d the backwards pass after some \ufb01xed number of iterations (Williams and Peng, 1990). As a consequence, parameter updates are infrequent, expensive, and limited in the range of temporal dependencies they re\ufb02ect. \u2217. Work partially carried out at DeepMind 1 arXiv:1902.02405v1 [cs.LG] 6 Feb 2019 \fbptt\u2019s more natural dual, Real-Time Recurrent Learning (rtrl; Williams and Zipser, 1989), carries gradient information forward rather than backward. It runs alongside the model and provides parameters updates at every time step. To do so, however, it must retain a large matrix relating the model\u2019s internal state to its parameters. Even when this matrix can be stored at all, updating it is prohibitively expensive. Various approximations to rtrl have been proposed (e.g. Mak et al., 1999) in order to obtain cheaper gradient estimates at the cost of reducing their accuracy. In this paper we consider Unbiased Online Recurrent Optimization (uoro; Ollivier et al., 2015; Tallec and Ollivier, 2018), an unbiased stochastic approximation to rtrl that compresses the gradient information through random projections. We analyze the variance of the uoro gradient estimator, relate it to other gradient estimators, and propose various modi\ufb01cations to it that reduce its variance both in theory and practice. 2. Outline of the Paper We begin with a detailed discussion of the relationship and tradeo\ufb00s between rtrl and bptt in Section 3. Before narrowing our focus to approximations to rtrl, we brie\ufb02y review other approaches to online credit assignment in Section 4. We then contribute a novel and arguably more intuitive derivation of the uoro algorithm in Section 5. In Sections 6 and 7 we give our main contribution in the form of a thorough analysis of uoro and the variance it incurs., and derive a new variance reduction method based on this analysis. Sections 6.1 and 6.2 discuss limitations of the variance reduction scheme of Tallec and Ollivier (2018), and in Section 6.3 propose to augment its scalar coe\ufb03cients with matrix-valued transformations. We develop a framework for analysis of uoro-style estimators in Sections 6.4 and 6.5, which allows us to determine the total variance incurred when accumulating consecutive gradient estimates over time. Working within this framework, we derive a formula for matrices that gives the optimal variance reduction subject to certain structural constraints (Section 7.1). 
We evaluate our theory in a tractable empirical setting in Section 7.2, and explore avenues toward a practical algorithm in Section 7.1.3. Section 8 introduces a variant of uoro that avoids one of its two levels of approximation. It exploits the fact that gradients with respect to weight matrices are naturally rank-one. We show this reduces the variance by a factor on the order of the number of hidden units, at the cost of increasing computation time by the same factor. Finally, we study the relationship between uoro and reinforce (Williams, 1992) in Section 9. The analysis uncovers a close connection when reinforce is used to train rnns with perturbed hidden states. We show that when this noise is annealed, the reinforce estimator converges to the uoro estimator plus an additional term that has expectation zero but unbounded variance. 3. Automatic Di\ufb00erentiation in Recurrent Neural Networks Recurrent Neural Networks (rnns; Elman, 1990; Hochreiter and Schmidhuber, 1997) are a general class of nonlinear sequence models endowed with memory. Given a sequence of input vectors xt, and initial state vector h0, an rnn\u2019s state evolves according to ht = F(ht\u22121, xt; \u03b8t) 2 \fwhere F is an arbitrary continuously di\ufb00erentiable transition function parameterized by \u03b8t that produces the next state ht given the previous state ht\u22121 and the current observation xt. Typically, F will take the form of an a\ufb03ne map followed by a nonlinear function: at = ( h\u22a4 t\u22121 x\u22a4 t 1 )\u22a4 ht = f(Wtat). (1) Here f(\u00b7) is the \u201cactivation function\u201d, which is assumed to be continuously di\ufb00erentiable (and is typically nonlinear and coordinate-wise), and Wt is a square matrix parameter whose vectorization is \u03b8t. The de\ufb01ning feature of recurrent neural networks as compared to feed-forward neural networks is the fact that their weights are tied over time. That is, we have \u03b8t = \u03b8. However, we will continue to distinguish the di\ufb00erent \u03b8t\u2019s in the recurrence, as this allows us to refer to individual \u201capplications\u201d of \u03b8 in the analysis (which will be useful later). Although we will treat the sequence as \ufb01nite, i.e. 1 \u2a7dt \u2a7dT for some sequence length T, we are interested mainly in streaming tasks for which T may as well be in\ufb01nite. At each time step t, we incur a loss Lt which is some di\ufb00erentiable function of ht. In order to minimize the aggregate loss L = PT t=1 Lt with respect to \u03b8, we require an estimate of its gradient with respect to \u03b8. We will write J y x (or occasionally Jx(y)) for the Jacobian of y with respect to x. We can express the gradient as a double sum over time that factorizes in two interesting ways: J L \u03b8 = T X t=1 T X s=1 J Lt \u03b8s = T X s=1 T X t=s J Lt hs ! J hs \u03b8s | {z } reverse accumulation = T X t=1 J Lt ht t X s=1 J ht \u03b8s ! | {z } forward accumulation (2) Each of the terms J Lt \u03b8s indicates how the use of the parameter \u03b8 at time s a\ufb00ected the loss at time t. This is a double sum over time with O(T 2) terms, but since future parameter applications do not a\ufb00ect past losses, we have J Lt \u03b8s = 0 for s > t. Both factorizations exploit this triangular structure and allow the gradient to be computed in O(T) by recursive accumulation. By far the most popular strategy for breaking down this computation goes by the name of Back-Propagation Through Time (bptt; Werbos, 1990). 
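The two factorizations in (2) are easy to check on a tiny tanh RNN. The sketch below accumulates the gradient of the total loss (with the illustrative choice L_t = 0.5 ||h_t||^2) both forward, RTRL-style, carrying the Jacobian of h_t with respect to vec(W), and backward, BPTT-style, carrying the derivative of future loss with respect to h_t, and verifies that the two agree; the sizes, cell, and loss are choices of this sketch, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    H, X, T = 4, 3, 6                        # hidden size, input size, sequence length
    A = H + X + 1                            # augmented input: [h_prev, x_t, 1]
    W = rng.normal(scale=0.5, size=(H, A))   # single recurrent weight matrix

    xs = rng.normal(size=(T, X))
    hs, a_list = [np.zeros(H)], []
    for t in range(T):                       # forward pass; loss L_t = 0.5 * ||h_t||^2
        a = np.concatenate([hs[-1], xs[t], [1.0]])
        a_list.append(a)
        hs.append(np.tanh(W @ a))

    # Forward accumulation (RTRL-style): carry J_t = d h_t / d vec(W).
    J = np.zeros((H, H * A))
    grad_fwd = np.zeros(H * A)
    for t in range(T):
        h, a = hs[t + 1], a_list[t]
        D = 1.0 - h ** 2
        J = (D[:, None] * W[:, :H]) @ J + D[:, None] * np.kron(np.eye(H), a)
        grad_fwd += h @ J                    # dL_t/dh_t = h_t

    # Reverse accumulation (BPTT-style): carry the derivative of future loss w.r.t. h_t.
    grad_rev = np.zeros((H, A))
    gbar = np.zeros(H)
    for t in reversed(range(T)):
        h, a = hs[t + 1], a_list[t]
        gbar = gbar + h                                  # add immediate dL_t/dh_t
        grad_rev += np.outer((1.0 - h ** 2) * gbar, a)   # contribution of theta_t
        gbar = ((1.0 - h ** 2) * gbar) @ W[:, :H]        # push back to h_{t-1}

    print(np.allclose(grad_fwd, grad_rev.ravel()))       # True: the factorizations agree

Note how the forward pass carries an H x H(H+X+1) matrix, illustrating rtrl's cubic memory cost in the hidden size, whereas the backward pass only stores the T hidden states.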
It is an instance of what is known as reverse-mode accumulation in the autodi\ufb00erentiation community, and relies on the reverse factorization in Equation 2. bptt computes gradients of total future loss J L ht with respect to states ht in reverse chronological order by the recursion J L ht = J L ht+1J ht+1 ht + J Lt ht . (3) At each step, a term J L \u03b8t = J L htJ ht \u03b8t of the gradient is accumulated. Since the quantities J ht+1 ht , J Lt ht and J ht \u03b8t generally depend on ht and Lt, the use of bptt in practice implies running the model forward for T steps to obtain the sequence of hidden states ht and losses Lt, and subsequently running backward to compute the gradient. Its converse, Real-Time Recurrent Learning (rtrl; Williams and Zipser, 1989), is an instance of forward-mode accumulation. It exploits the forward factorization of the gradient 3 \fin Equation 2, computing Jacobians J ht \u03b8 of hidden states ht with respect to past applications of the parameter \u03b8 recursively according to J ht \u03b8 = J ht ht\u22121J ht\u22121 \u03b8 + J ht \u03b8t . (4) What rtrl provides over bptt is that we can run it forward alongside our model, and at each time-step t update the model parameters \u03b8 immediately (using J Lt \u03b8 = J L htJ ht \u03b8 ), thus performing fully online learning. This is to be contrasted with bptt, where we must run the model forward for T time-steps before we can make a parameter update, thus introducing a long delay between the reception of a learning signal Lt and the parameter update that takes it into account. There is a caveat to the above, which is that as soon as we update our parameter \u03b8, the Jacobian J ht \u03b8 accumulated by rtrl is no longer quite correct, as it is based on previous values of \u03b8. However, as argued by Williams and Zipser (1995) and Ollivier et al. (2015) this problem can be mostly ignored as long as the learning rate is small enough in relation to the rate of the natural decay of the Jacobian (which occurs due to the vanishing gradient phenomenon). The main drawback of rtrl is that the accumulated quantity J ht \u03b8 is a large matrix. If the size of the parameters \u03b8 is O(H2) where H is the hidden state size, then this matrix requires O(H3) space to store. This is typically much larger than bptt\u2019s O(TH) space. Moreover, the rtrl recursions involve propagating a matrix forward by the matrix-matrix product J ht ht\u22121J ht\u22121 \u03b8 , which takes O(H4) time. bptt on the other hand only propagates a vector through time at a cost of O(H2). Although rtrl frees us to grow T and capture arbitrarily-long-term dependencies, the algorithm is grossly impractical for models of even modest size. 4. Other Approaches to Credit Assignment A number of techniques have been proposed to reduce the memory requirements of bptt. Storage of past hidden states may be traded for time by recomputing the states on demand, in the extreme case resulting in a quadratic-time algorithm. Better choices for this tradeo\ufb00are explored by Chen et al. (2016); Gruslys et al. (2016). Reversible Recurrent Neural Networks (MacKay et al., 2018; Gomez et al., 2017) allow the on-demand computation of past states to occur in reverse order, restoring the linear time complexity while limiting the model class. Stochastic Attentive Backtracking (Ke et al., 2018) sidesteps the storage requirements of backprop through long periods of time by retaining only a sparse subset of states in the distant past. 
This subset is selected based on an attention mechanism that is part of the model being trained. Gradient from future loss is propagated backwards to these states only through the attention connections. Synthetic gradients (Jaderberg et al., 2017) approximates bptt by use of a predictive model of the total future gradient J L hs, which is trained online based on bptt. Instead of transporting derivatives through time, we may assign credit by transporting value over time. For example, actor-critic architectures (Konda and Tsitsiklis, 2000; Barto et al., 1983) employ Temporal Di\ufb00erence Learning (Sutton, 1988) to obtain a predictive model of the total future loss. By di\ufb00erentiation, the estimated total future loss may be used to estimate the total future gradient. More commonly, such estimates are used directly as a 4 \fproxy for the total future loss, or as a reinforce baseline. Along similar lines as our analysis of reinforce in Section 9, we may interpret these methods as e\ufb00ectively di\ufb00erentiating the estimate in expectation. rudder (Arjona-Medina et al., 2018) redistributes the total loss L over time, replacing the immediate losses Ls by surrogates L\u2032 s determined by a process similar to backpropagation through a critic. These surrogates preserve the total loss but in an RL setting may better re\ufb02ect the long-term impact of the action taken at time s. Temporal Value Transport (Hung et al., 2018) relies on attention weights to determine which past time steps were relevant to which future time steps, and injects the estimated total future loss from the future time steps into the immediate loss for the associated past time steps. 5. Unbiased Online Recurrent Optimization The recently proposed Unbiased Online Recurrent Optimization algorithm (uoro; Tallec and Ollivier, 2018) and its predecessor NoBackTrack (Ollivier et al., 2015) approximate rtrl by maintaining a rank-one estimate \u02dc ht \u02dc w\u22a4 t of the Jacobian J ht \u03b8 . We now brie\ufb02y derive the basic algorithm. 5.1 Derivation First, we note that J ht \u03b8 can be written as J ht \u03b8 = P s\u2a7dt J ht hs J hs \u03b8s . We then perform a rank-one projection of each term in this sum using a random vector \u03bds (which is chosen to satisfy E[\u03bds\u03bd\u22a4 s ] = I). This gives us the estimator J ht \u03b8 \u2248 X s\u2a7dt J ht hs \u03bds\u03bd\u22a4 s J hs \u03b8s . Unbiasedness follows from a simple application of linearity of expectation: E hX s\u2a7dt J ht hs \u03bds\u03bd\u22a4 s J hs \u03b8s i = X s\u2a7dt J ht hs E[\u03bds\u03bd\u22a4 s ]J hs \u03b8s = X s\u2a7dt J ht hs J hs \u03b8s . We will refer to this projection as the spatial projection to distinguish it from the temporal projection that is to follow. It is interesting to note that J ht hs \u03bds can be interpreted as a \u201cdirectional Jacobian\u201d, which measures the instantaneous change in ht as a function of hs\u2019s movement along the direction \u03bds. Similarly \u03bd\u22a4 s J hs \u03b8s is essentially the gradient of \u03bd\u22a4 s hs with respect to \u03b8s, and thus measures the instantaneous change of hs along the direction of \u03bds, as a function of the change in \u03b8s. Thus the intuition behind this \ufb01rst approximation is that we are guessing the relevant direction of change in hs and performing the gradient computations only along that direction. 
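The unbiasedness argument above is straightforward to verify numerically: for any fixed matrices A and B (standing in for the Jacobian of h_t with respect to h_s and of h_s with respect to theta_s) and noise nu with E[nu nu^T] = I, the rank-one sample A nu nu^T B averages to A B. A minimal NumPy check with arbitrary, made-up shapes:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 5))    # stands in for J^{h_t}_{h_s}
    B = rng.normal(size=(5, 7))    # stands in for J^{h_s}_{theta_s}

    est = np.zeros((3, 7))
    n = 100_000
    for _ in range(n):
        nu = rng.normal(size=5)                 # E[nu nu^T] = I
        est += np.outer(A @ nu, nu @ B) / n     # rank-one sample A nu nu^T B

    print(np.abs(est - A @ B).max())            # small: the projection is unbiased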
We can generalize the spatial projection from the standard uoro method by projecting in the space of any cut vertex zs on the computational path from \u03b8s to hs. For uoro, zs \u2261hs; other choices include zs \u2261\u03b8s for projection in parameter space, and zs \u2261Wsas for projection in preactivation space. We will make extensive use of this choice in later Sections. This gives the generalized estimator J ht \u03b8 \u2248 X s\u2a7dt J ht hs J hs zs \u03bds\u03bd\u22a4 s J zs \u03b8s , 5 \fwhich is unbiased following a similar argument as before. The random projections serve to reduce the large J hs \u03b8s matrix into the more manageable vector quantities J hs zs \u03bds and \u03bd\u22a4 s J zs \u03b8s . But because the sum of rank-one matrices is not itself rank one, the resultant estimator will still be too expensive to maintain and update online. In order to obtain a practical algorithm we make a second rank-one approximation, now across time instead of z-space. To this end we introduce random scalar coe\ufb03cients \u03c4s satisfying E[\u03c4s\u03c4r] = \u03b4sr (where \u03b4sr is the Kronecker delta which is 1 if s = r and 0 otherwise) and de\ufb01ne the following rank-one estimator: J ht \u03b8 \u2248\u02dc ht \u02dc w\u22a4 t \u225c \u0010X s\u2a7dt \u03c4sJ ht hs J hs zs \u03bds \u0011\u0010X s\u2a7dt \u03c4s\u03bd\u22a4 s J zs \u03b8s \u0011 = X r\u2a7dt X s\u2a7dt \u03c4s\u03c4rJ ht hs J hs zs \u03bds\u03bd\u22a4 r J zr \u03b8r . By linearity of expectation this is an unbiased estimate of the previous spatially projected estimator P r\u2a7dt J ht zs \u03bds\u03bd\u22a4 s J zs \u03b8s , and is thus also an unbiased estimator of J ht \u03b8 , although with potentially much higher variance. Going forward we will assume that \u03c4s \u223cU{\u22121, +1} are iid random signs and \u03bds \u223cN(0, I) are iid standard normal vectors, so that we may treat the product \u03c4s\u03bds as a single Gaussiandistributed random vector us \u223cN(0, I), which will simplify our analysis. The two factors \u02dc ht and \u02dc wt of the rank-one approximation are maintained by the following pair of recursions: \u02dc ht = \u03b3tJ ht ht\u22121\u02dc ht\u22121 + \u03b2tJ ht zt ut \u02dc w\u22a4 t = \u03b3\u22121 t \u02dc w\u22a4 t\u22121 + \u03b2\u22121 t u\u22a4 t J zt \u03b8t , (5) with \u02dc h0, \u02dc w0 initialized to zero vectors. Notably these recursions are similar in structure to that used by rtrl to compute the exact Jacobian J ht \u03b8 (c.f. Equation 4). As with the rtrl equations, their validity follows from the fact that J ht hs = J ht ht\u22121J ht\u22121 ht\u22122 \u00b7 \u00b7 \u00b7 J hs+1 hs . In these recursions we have introduced coe\ufb03cients \u03b3t and \u03b2t to implement the variance reduction technique from Tallec and Ollivier (2018); Ollivier et al. (2015), which we will refer to as greedy iterative rescaling (gir). We will discuss gir in detail in the next subsection. Finally, at each step we estimate J Lt \u03b8 = J Lt ht J ht \u03b8 using the estimator J Lt ht \u02dc ht \u02dc w\u22a4 t . This is a small deviation from the one given by Tallec and Ollivier (2018), which uses backpropagation to compute J Lt \u03b8t exactly, and the remaining part of the gradient, P s 0 in Equation 5. Whereas our above derivation of the algorithm introduced a temporal projection, Ollivier et al. (2015); Tallec and Ollivier (2018) interpret the algorithm given by Equation 5 as 6 \fimplementing a series of projections. 
Under this view, \u02dc ht \u02dc w\u22a4 t is a rank-one estimate of the rank-two matrix that is the sum of the forwarded previous Jacobian estimate J ht ht\u22121\u02dc ht\u22121 \u02dc w\u22a4 t\u22121 and the approximate contribution J ht zt utu\u22a4 t J zt \u03b8t : \u02dc ht \u02dc w\u22a4 t = (\u03b3tJ ht ht\u22121\u02dc ht\u22121 + \u03b2tJ ht zt ut)(\u03b3\u22121 t \u02dc w\u22a4 t\u22121 + \u03b2\u22121 t u\u22a4 t J zt \u03b8t ) = J ht ht\u22121\u02dc ht\u22121 \u02dc w\u22a4 t\u22121 + J ht zt utu\u22a4 t J zt \u03b8t + \u03c4t\u03b3t\u03b2\u22121 t J ht ht\u22121\u02dc ht\u22121\u03bd\u22a4 t J zt \u03b8t + \u03c4t\u03b2t\u03b3\u22121 t J ht zt \u03bdt \u02dc w\u22a4 t\u22121 . The temporal \u201ccross-terms\u201d \u03c4t\u03b3t\u03b2\u22121 t J ht ht\u22121\u02dc ht\u22121\u03bd\u22a4 t J zt \u03b8t and \u03c4t\u03b2t\u03b3\u22121 t J ht zt \u03bdt \u02dc w\u22a4 t\u22121 , which are zero in expectation (but contribute variance), constitute the error introduced in the transition from time t \u22121 to t. The coe\ufb03cients \u03b3t and \u03b2t provide an extra degree of freedom with which we can minimize this error. As shown by Ollivier et al. (2015), the minimizers ensure the terms \u03b3tJ ht ht\u22121\u02dc ht\u22121, \u03b2tJ ht zt ut and their \u02dc wt counterparts have small norm, so that their contribution to the variance is small as well. The total (trace) variance of \u02dc ht \u02dc w\u22a4 t with respect to \u03c4t is given by the expected squared Frobenius norm \u2225\u00b7\u22252 F of the error: E\u03c4t h\r \r \r\u02dc ht \u02dc w\u22a4 t \u2212E\u03c4t[\u02dc ht \u02dc w\u22a4 t ] \r \r \r 2 F i = E\u03c4t h\r \r \r\u03c4t\u03b3t\u03b2\u22121 t J ht ht\u22121\u02dc ht\u22121\u03bd\u22a4 t J zt \u03b8t + \u03c4t\u03b2t\u03b3\u22121 t J ht zt \u03bdt \u02dc w\u22a4 t\u22121 \r \r \r 2 F i . As the common sign \u03c4t does not a\ufb00ect the norm, this is simply \u03b32 t \u03b2\u22122 t \r \r \r J ht ht\u22121\u02dc ht\u22121\u03bd\u22a4 t J zt \u03b8t \r \r \r 2 F+ \u03b22 t \u03b3\u22122 t \r \r \r J ht zt \u03bdt \u02dc w\u22a4 t\u22121 \r \r \r 2 F+ 2 D J ht ht\u22121\u02dc ht\u22121\u03bd\u22a4 t J zt \u03b8t , J ht zt \u03bdt \u02dc w\u22a4 t\u22121 E F , where \u27e8\u00b7, \u00b7\u27e9F denotes the Frobenius inner product. The coe\ufb03cients \u03b3t and \u03b2t a\ufb00ect the error through the single degree of freedom \u03b32 t \u03b2\u22122 t . By di\ufb00erentiation and use of the identity \u2225xy\u22a4\u22252 F = \u2225x\u22252\u2225y\u22252 we \ufb01nd that the optimal choices satisfy \u03b32 t \u03b2\u22122 t \u2225J ht ht\u22121\u02dc ht\u22121\u22252\u2225\u03bd\u22a4 t J zt \u03b8t \u22252 = \u03b22 t \u03b3\u22122 t \u2225J ht zt \u03bdt\u22252\u2225\u02dc wt\u22121\u22252. This includes the solution \u03b32 t = \u2225\u02dc wt\u22121\u2225/\u2225J ht ht\u22121 \u02dc ht\u22121\u2225, \u03b22 t = \u2225\u03bd\u22a4 t J zt \u03b8t \u2225/\u2225J ht zt \u03bdt\u2225from Ollivier et al. (2015). Examining their use in Equation 5 we can see that for this particular solution \u03b3t plays the important role of contracting \u02dc wt, which would otherwise grow inde\ufb01nitely (being a sum of independent random quantities). While division by \u03b3t in the recursion for \u02dc ht causes an expansive e\ufb00ect, this is more than counteracted by the natural contractive property of the Jacobian J ht ht\u22121 (which is due to gradient vanishing in well-behaved rnns). 
Thus we can interpret the role of \u03b3t as distributing this contraction evenly between \u02dc ht and \u02dc wt, which limits the growth of both quantities and thus keeps the variance of their product under control. A formal treatment of the growth of variance over time is given by Mass\u00e9 (2017). 6. Variance Analysis In this section we analyze the variance behavior of uoro-style algorithms. We \ufb01rst discuss limitations of the gir variance reduction scheme discussed in Section 5.2, namely that it is greedy (Section 6.1) and derives from a somewhat inappropriate objective (Section 6.2). We then generalize the algorithm and develop a more holistic theoretical framework for its analysis (Sections 6.3 through 6.5). 7 \f6.1 Greedy Iterative Rescaling is Greedy In Section 5.2 we discussed how gir can be interpreted as minimizing the variance of a rank-one estimate \u02dc ht \u02dc w\u22a4 t of a rank-two matrix J ht ht\u22121\u02dc ht\u22121 \u02dc w\u22a4 t\u22121 + J ht zt \u03bdt\u03bd\u22a4 t J zt \u03b8t (which is a stochastic approximation that occurs at each step in uoro). Here we unify this sequence of approximations into a single temporal rank-one estimation (as introduced in Section 5.1), which helps us reveal the inherent limitations of gir. Recall that the uoro recursions (Equation 5) maintain past contributions in the form of sums \u02dc ht and \u02dc wt, and at each step gir applies respective scaling factors \u03b3t+1 and \u03b3\u22121 t+1 (resp.) to these sums. This gives rise to an overall scaling \u03b1(t) s = \u03b2s\u03b3s+1\u03b3s+2 . . . \u03b3t (and similarly (\u03b1(t) s )\u22121) of contributions made at time step s and propagated forward through time step t. We can write the estimates \u02dc ht \u02dc w\u22a4 t produced by uoro in terms of \u03b1(t) s as follows: J ht \u03b8 \u2248\u02dc ht \u02dc w\u22a4 t = \u0010X s\u2a7dt \u03b1(t) s J ht zs us \u0011\u0010X r\u2a7dt 1 \u03b1(t) r u\u22a4 r J zr \u03b8r \u0011 = X r\u2a7dt X s\u2a7dt \u03b1(t) s \u03b1(t) r \u03c4s\u03c4rJ ht hs J hs zs \u03bds\u03bd\u22a4 r J zr \u03b8r . Note that each such estimate is but one element in a sequence of estimates. In the next section, we will establish a notion of the variance for this sequence, so that we may speak meaningfully about its minimization. For now, we will consider the minimization of the variance of \u02dc ht \u02dc w\u22a4 t at each time step t as an independent problem, with independent decision variables \u03b1(t) s . The optimal coe\ufb03cients given by (\u03b1(t) s )2 = \u2225\u03bd\u22a4 s J zs \u03b8s \u2225/\u2225J ht zs \u03bds\u2225(derived in Appendix B) minimize the variance of \u02dc ht \u02dc w\u22a4 t with respect to \u03c4s. This solution is generally di\ufb00erent from that of gir, which is constrained to have the form \u03b1(t+1) s = \u03b3t+1\u03b1(t) s for s \u2a7dt (where \u03b3t+1 is independent of s). This relationship between \u03b1(t+1) s and \u03b1(t) s breaks the independence of consecutive variance minimization problems, and therefore the resulting coe\ufb03cients cannot in general be optimal for all t. 
We can see this by writing the optimal coe\ufb03cients \u03b1(t+1) s for s \u2a7dt that minimize the variance of \u02dc ht+1 \u02dc w\u22a4 t+1 in terms of the coe\ufb03cients \u03b1(t) s that minimize the variance of \u02dc ht \u02dc w\u22a4 t : (\u03b1(t+1) s )2 = \u2225\u03bd\u22a4 s J zs \u03b8s \u2225 \u2225J ht+1 zs \u03bds\u2225= \u2225\u03bd\u22a4 s J zs \u03b8s \u2225 \u2225J ht zs \u03bds\u2225 \u2225J ht zs \u03bds\u2225 \u2225J ht+1 zs \u03bds\u2225 = (\u03b1(t) s )2 \u2225J ht zs \u03bds\u2225 \u2225J ht+1 zs \u03bds\u2225= (\u03b1(t) s )2 \r \r \r \rJ ht+1 ht J ht zs \u03bds \u2225J ht zs \u03bds\u2225 \r \r \r \r \u22121 . We see that in order to minimize the variance of \u02dc ht+1 \u02dc w\u22a4 t+1 given coe\ufb03cients \u03b1(t) s that minimize the variance of \u02dc ht \u02dc w\u22a4 t , we should divide each contribution \u03b1(t) s J ht zs \u03bds by the square root of its contraction due to forward-propagation through J ht+1 ht , and multiply each (\u03b1(t) s )\u22121\u03bd\u22a4 s J zs \u03b8s by the same factor. Crucially, this factor depends on s and therefore cannot be expressed by gir, which is constrained to rescale all past contributions by a constant factor yt+1 independent of s. This is true of any algorithm that maintains past contributions in a reduced form such as \u02dc ht, \u02dc wt. 6.2 Greedy Iterative Rescaling Optimizes an Inappropriate Objective In the previous subsection, we saw a sense in which gir is greedy: its ability to minimize the variance of \u02dc ht \u02dc w\u22a4 t is hampered by its own past decisions. To see this, we took a holistic 8 \fview of the sequence of variance minimization problems solved by gir, and showed that the choice of coe\ufb03cients \u03b3s, \u03b2s at time s constrains the choice of future coe\ufb03cients. Here we take a further step back, and argue that the variance of \u02dc ht \u02dc w\u22a4 t is not the right objective in light of the downstream application of these estimates. The Jacobian estimates \u02dc ht \u02dc w\u22a4 t \u2248J ht \u03b8 are used to determine a sequence of gradient estimates J Lt ht \u02dc ht \u02dc w\u22a4 t \u2248J Lt \u03b8 , which are accumulated by a gradient descent process. We argue that the quantity of interest is the variance of the total gradient estimate P t\u2a7dT J Lt ht \u02dc ht \u02dc w\u22a4 t \u2248J L \u03b8 incurred during T steps of optimization (which estimates the total gradient J L \u03b8 ). Since consecutive gradient contributions depend largely on the same stochastic quantities, the variance of this sum is not simply the sum of the individual variances. Hence even if we could independently minimize the variances of the Jacobian estimates, doing so is not equivalent to minimizing the variance of the total gradient estimate. 6.3 Generalized Recursions Before proceeding with the variance computation we will generalize the uoro recursions by replacing the \u03b3t and \u03b2t coe\ufb03cients by an invertible matrix Qt as follows: \u02dc ht = J ht ht\u22121\u02dc ht\u22121 + J ht zt Qtut \u02dc w\u22a4 t = \u02dc w\u22a4 t\u22121 + u\u22a4 t Q\u22121 t J zt \u03b8t (6) Qt can be interpreted as modifying the covariance of the noise vector ut (although di\ufb00erently for either recursion). Analogously to the standard uoro recursions, our generalized recursions compute the following sums: \u02dc ht = X s\u2a7dt J ht hs J hs zs Qsus and \u02dc wt = X s\u2a7dt u\u22a4 s Q\u22121 s J zs \u03b8s . 
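The generalized recursions in (6) can also be checked directly: when the Q_t are invertible and independent of the noise, averaging the outer product of the two accumulated factors over many noise draws recovers the true accumulated Jacobian. The sketch below uses small made-up dimensions and random per-step Jacobians; it is an illustration of Equation 6, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    H, Z, P, T, N = 2, 2, 3, 4, 200_000      # state, projection, param sizes; steps; samples

    F   = rng.normal(scale=0.5, size=(T, H, H))        # J^{h_t}_{h_{t-1}}
    Jhz = rng.normal(size=(T, H, Z))                   # J^{h_t}_{z_t}
    G   = rng.normal(size=(T, Z, P))                   # J^{z_t}_{theta_t}
    Q   = rng.normal(scale=0.3, size=(T, Z, Z)) + 2 * np.eye(Z)   # invertible, noise-independent

    # True accumulated Jacobian J^{h_T}_{theta}, via the rtrl-style recursion.
    J_true = np.zeros((H, P))
    for t in range(T):
        J_true = F[t] @ J_true + Jhz[t] @ G[t]

    # Generalized uoro recursions (Equation 6), run for N independent noise draws.
    u = rng.normal(size=(N, T, Z))
    h_tld = np.zeros((N, H))
    w_tld = np.zeros((N, P))
    for t in range(T):
        h_tld = h_tld @ F[t].T + (u[:, t] @ Q[t].T) @ Jhz[t].T
        w_tld = w_tld + (u[:, t] @ np.linalg.inv(Q[t])) @ G[t]

    J_est = np.einsum('ni,nj->ij', h_tld, w_tld) / N
    print(np.abs(J_est - J_true).max())      # small Monte-Carlo error, shrinking like 1/sqrt(N)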
We can view Qs as a matrix-valued generalization of the gir coe\ufb03cients, with equivalence when Qs = \u03b2s\u03b3s+1\u03b3s+2 . . . \u03b3T I. The extra degrees of freedom allow more \ufb01ne-grained control over the norms of cross-terms,1 as can be seen when we expand both the temporal and the spatial projections in the estimator \u02dc ht \u02dc w\u22a4 t : J ht \u03b8 \u2248\u02dc ht \u02dc w\u22a4 t = X r\u2a7dt X s\u2a7dt X ijkl J ht zri(Qr)ikurkusl(Q\u22121 s )ljJ zsj \u03b8s Each term\u2019s scaling depends not just on temporal indices r, s but now also on the indices i, j of units. As we shall see, in expectation, terms where both the temporal indices r = s and units i = j correspond remain una\ufb00ected, and it is only the undesired cross-terms for which r \u0338= s or i \u0338= j that are a\ufb00ected. Tallec and Ollivier (2018) hint at a related approach which would correspond to choosing Qs = \u03b1s diag(qs) to be diagonal matrices. However, they derive their choice q2 si \u221d \u2225J zsi \u03b8s \u2225/\u2225J hs zsi\u2225by optimizing the norms of only temporally corresponding terms for which r = s, and ignoring temporal cross terms r \u0338= s which make up the bulk of the error. We 1. By \u201ccross-term\u201d we mean a term that appears in the expanded sum which is zero in expectation but contributes variance. 9 \finstead consider a class of Qs matrices that is not constrained to be diagonal, and whose value minimizes a measure of variance that is more relevant to the optimization process. Thus our recursion in Equation 6 is a strict generalization of the uoro recursion in Equation 5. The Qs matrices can express a broad class of variance reduction mechanisms, including gir. That said, our analysis of this system will be limited to cases where the Qs are independent of the noise vectors ut for all s, t. Notably, this precludes gir because of its complex nonlinear interaction with the noise. 6.4 A Simple Expression for the Gradient Estimate In this subsection we will derive a simple expression for the gradient estimate J Lt ht \u02dc ht \u02dc w\u22a4 t which will prove useful in our subsequent computations. To reduce visual clutter we de\ufb01ne the following notational aliases, which we will make heavy use of throughout the rest of the manuscript: b(t) s = J Lt zs and Js = J zs \u03b8s . First, we observe that that 1s\u2a7dtb(t) s = b(t) s , as derivatives of past losses with respect to future activations are zero. Next we observe that J Lt ht \u02dc ht = X s\u2a7dt J Lt ht J ht hs J hs zs Qsus = X s\u2a7dt b(t)\u22a4 s Qsus. Given these observations we may express the estimate of each gradient contribution J Lt \u03b8 as J Lt ht \u02dc ht \u02dc w\u22a4 t = \u0010X s\u2a7dt b(t)\u22a4 s Qsus \u0011\u0010X s\u2a7dt u\u22a4 s Q\u22121 s Js \u0011 = \u0010X s\u2a7dT 1s\u2a7dtb(t)\u22a4 s Qsus \u0011\u0010X s\u2a7dT 1s\u2a7dtu\u22a4 s Q\u22121 s Js \u0011 = \u0010X s\u2a7dT b(t)\u22a4 s Qsus \u0011\u0010X s\u2a7dT 1s\u2a7dtu\u22a4 s Q\u22121 s Js \u0011 = b(t)\u22a4Quu\u22a4Q\u22121S(t)J, where in the last step we have: \u2013 consolidated the temporal and spatial projections by concatenating the b(t) s into a single vector b(t), and the noise vectors us into a single vector u, \u2013 stacked the Js\u2019s into the matrix J, \u2013 de\ufb01ned Q to be the block-diagonal matrix diag(Q1, Q2, . . . , QT ), and \u2013 introduced the \u201ctruncated identity matrix\u201d S(t) with diagonal blocks S(t) s = 1s\u2a7dtI. 
Finally, the total gradient estimate is given by X t\u2a7dT J Lt ht \u02dc ht \u02dc w\u22a4 t = X t\u2a7dT b(t)\u22a4Quu\u22a4Q\u22121S(t)J. (7) 10 \fThe S(t) matrix accounts for the fact that at time t of the algorithm, contributions J zs \u03b8s from future steps s > t are not included in \u02dc w\u22a4 t . Omitting this matrix would introduce terms that are zero in expectation and hence would not bias the total gradient estimate, but they would still contribute to the variance of the estimator (to a degree which would adversely a\ufb00ect the usefulness of our subsequent analysis). It is easy to see that this estimator is unbiased as long as E \u0002 Quu\u22a4Q\u22121\u0003 = I. This can happen, for example, when Q and u are independent with E[uu\u22a4] = I. We will focus our analysis on this case. 6.5 Computing the Variance of the Total Gradient Estimate In this section we derive the variance of the total gradient estimate. We assume that Q is independent of u, so that we may use the general results from Appendix A. By bilinearity, the covariance matrix of the total gradient estimate is Var hX t\u2a7dT J Lt ht \u02dc ht \u02dc w\u22a4 t i = X t\u2a7dT X s\u2a7dT Cov h J Lt ht \u02dc ht \u02dc w\u22a4 t , J Ls hs \u02dc hs \u02dc w\u22a4 s i . Combining this with the identity J Lt ht \u02dc ht \u02dc w\u22a4 t = b(t)\u22a4Quu\u22a4Q\u22121S(t)J from the previous subsection and applying Corollary 3 (with \u03ba = 0) yields the following expression for the same quantity: X s\u2a7dT X t\u2a7dT tr \u0010 b(s)b(t)\u22a4QQ\u22a4\u0011 J\u22a4S(s)(QQ\u22a4)\u22121S(t)J + J\u22a4b(s)b(t)\u22a4J. Corollary 3 also yields the following expression for the total variance2 of the total gradient estimate: X s\u2a7dT X t\u2a7dT tr \u0010 b(s)b(t)\u22a4QQ\u22a4\u0011 tr \u0010 J\u22a4S(s)(QQ\u22a4)\u22121S(t)J \u0011 + tr \u0010 J\u22a4b(s)b(t)\u22a4J \u0011 . 7. Variance Reduction We now turn to the problem of reducing the variance given in Equation 6.5. In Sections 7.1 through 7.1.3 we develop an improved (though as yet impractical) variance reduction scheme. Finally, we evaluate our theory in Section 7.2. 7.1 Optimizing Q subject to restrictions on its form Denote by V (Q) the part of the total variance (Equation 6.5) that depends on Q. Making use of the cyclic property of the trace, and the fact that Q is block-diagonal, we can write this as V (Q) = X s\u2a7dT X t\u2a7dT tr \u0010X r\u2a7dT b(s) r b(t)\u22a4 r QrQ\u22a4 r \u0011 tr \u0010X r\u2a7dT S(t) r JrJ\u22a4 r S(s) r (QrQ\u22a4 r )\u22121\u0011 . (8) We wish to optimize V (Q) with respect to Q in a way that leads to a practical online algorithm. To this end, we require that Qs be of the form Qs = \u03b1sQ0, with \u03b1s a scalar 2. We de\ufb01ne the \u201ctotal variance\u201d to be the trace of the covariance matrix. 11 \fand Q0 a constant matrix. This restriction makes sense from a practical standpoint; we envision an algorithm that maintains a statistical estimate of the optimal value of Q0. The stationarity assumption enables us to amortize over time both the sample complexity of obtaining this estimate, and the computational cost associated with inverting it. We furthermore assume projection occurs in preactivation space, that is, zr \u2261Wrar. This assumption gives Jr = J hr zr = I \u2297a\u22a4 r , which is a convenient algebraic structure to work with. Even given this restricted form we cannot \ufb01nd the jointly optimal solution for Q0 and \u03b1. 
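The effect of the truncation matrices S^(t) can be checked numerically on small stand-in quantities (our illustration): dropping S^(t) leaves the mean of the total gradient estimate unchanged, because the extra terms are zero in expectation, but it typically inflates the total variance.

```python
import numpy as np

T, N, P = 5, 3, 4
rng = np.random.default_rng(2)
J = [rng.normal(size=(N, P)) for _ in range(T)]     # stand-ins for J^{z_s}_{theta_s}
# b^(t)_s = dL_t/dz_s vanishes for s > t: past losses do not depend on future activations.
b = [[rng.normal(size=N) if s <= t else np.zeros(N) for s in range(T)] for t in range(T)]
true_total = sum(b[t][s] @ J[s] for t in range(T) for s in range(t + 1))

def total_estimate(rng, use_S):
    u = rng.choice([-1.0, 1.0], size=(T, N))        # sign noise with E[u u^T] = I (here Q = I)
    out = np.zeros(P)
    for t in range(T):
        fwd = sum(b[t][s] @ u[s] for s in range(T))           # the scalar b^(t)T u
        upto = t + 1 if use_S else T                           # S^(t) truncates this sum
        out += fwd * sum(u[s] @ J[s] for s in range(upto))
    return out

for use_S in (True, False):
    samples = np.array([total_estimate(np.random.default_rng(i), use_S) for i in range(10000)])
    # mean error is small in both cases; the summed variance is typically larger without S^(t)
    print(use_S, np.max(np.abs(samples.mean(0) - true_total)), samples.var(0).sum())
```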
Instead, we will consider optimizing Q0 while holding the \u03b1s\u2019s \ufb01xed, and vice versa. 7.1.1 Optimizing \u03b1s coefficients given Q0 Let us \ufb01rst simplify the expression for V (Q). Given the restricted form Qs = \u03b1sQ0 we may write V (Q) = X r\u2a7dT X q\u2a7dT \u03b12 r \u03b12 q Cqr, (9) where we have collected the factors that do not depend on \u03b1 into the matrix C with elements Cqr = X s\u2a7dT X t\u2a7dT tr \u0010 b(s) r b(t)\u22a4 r Q0Q\u22a4 0 \u0011 tr \u0010 S(t) q JqJ\u22a4 q S(s) q (Q0Q\u22a4 0 )\u22121\u0011 = tr \u0010 T X s=q T X t=q b(s) r b(t)\u22a4 r Q0Q\u22a4 0 \u0011 tr \u0010 JqJ\u22a4 q (Q0Q\u22a4 0 )\u22121\u0011 = \r \r \r T X t=q b(t)\u22a4 r Q0 \r \r \r 2\r \r \rQ\u22121 0 Jq \r \r \r 2 F . (10) Now we wish to solve \u03b1\u22c6= argmin \u03b1>0 X r\u2a7dT X q\u2a7dT \u03b12 r \u03b12 q Cqr. (11) The optimization problem considered here di\ufb00ers from that given in Section 6.1. Although the objective considered there can similarly be written in terms of a matrix like C, that matrix would have rank one (see Appendix B). This di\ufb00erence is a consequence of V (Q) being the variance of the total gradient estimate rather than that of a single contribution J Lt ht \u02dc ht \u02dc w\u22a4 t . In particular, the rank-one property is lost due to our inclusion of the S(t) matrix that discards noncausal terms (see Section 6.4). We analyze the problem in Appendix C, and \ufb01nd that it is an instance of matrix equilibration (see e.g. Idel, 2016, for a review), for which no closed-form solution is known. Instead, we give a second-order steepest-descent update rule that solves for \u03b1 numerically, which we use in our experiments. (Empirically, \ufb01rst-order updates routinely get stuck in cycles on this problem.) However, solving Equation 11 directly does not lead to a practical algorithm. Along the lines of the discussion in Section 6.1, any algorithm that maintains past contributions as a single sum must take \u03b1s to be \u03b2s\u03b3s+1\u03b3s+2 . . . \u03b3T for some coe\ufb03cient sequences {\u03b2s} and {\u03b3s}. In principle, if C were known upfront, one could choose \u03b2s = \u03b1\u22c6 s with \u03b3s = 1, and 12 \fhence this parameterization appears to be degenerate. However, C is not known; it depends on gradients b(t) r = J Lt zr and Jacobians Jt = J zt \u03b8t from future time steps t > s. In light of this, we can view \u03b2s as merely an estimate of \u03b1\u22c6 s, to be corrected by future \u03b3t\u2019s as more information becomes available. One way of formalizing this idea of \u201cincomplete information\u201d is as follows. Suppose C were the \ufb01nal element C(T) of a sequence of matrices C(1) . . . C(T), where each C(s) incorporates all \u201cinformation\u201d available up to time s. Then a natural way to choose \u03b2s and \u03b3s at time s would be solve the following optimization problem based on C(s): \u03b2\u22c6 s, \u03b3\u22c6 s = argmin \u03b2s,\u03b3s min \u03b2>s,\u03b3>s X r\u2a7dT X q\u2a7dT \u03b12 r \u03b12 q C(s) qr . (12) Past coe\ufb03cients \u03b2s, \u03b3>s are estimated by the inner minimization. In Appendix D we explore a natural choice for C(s) where future gradients/Jacobians are treated as though they were 0, which leads to formulas for the coe\ufb03cients that are similar to gir\u2019s, although not identical. This approach can be improved by incorporating statistical predictions or estimates of unknown future information in C(s). We leave further exploration of such schemes to future work. 
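As a rough stand-in for the dedicated second-order update used in the experiments (ours, with a random nonnegative matrix in place of C), one can note that substituting alpha_r = exp(x_r / 2) turns the objective of Equation 11 into a convex function of x, which a generic quasi-Newton routine can then minimize; only the ratios of the resulting alphas are determined, since the objective is invariant to a common scale.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T = 6
C = rng.uniform(0.1, 2.0, size=(T, T))              # nonnegative stand-in for the matrix C

def objective(x):                                   # sum_{q,r} C[q, r] * exp(x_r - x_q)
    return float(np.sum(C * np.exp(np.subtract.outer(x, x).T)))

res = minimize(objective, np.zeros(T), method="BFGS")
alpha = np.exp(res.x / 2)
alpha /= alpha[0]                                   # fix the irrelevant common scale
print(np.round(alpha, 3), res.fun)
```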
7.1.2 Optimizing Q0 given the \u03b1s\u2019s Given our assumption that zr \u2261Wrar we have Jr = I \u2297a\u22a4 r and JrJ\u22a4 r = (I \u2297a\u22a4 r )(I \u2297ar) = (I \u2297a\u22a4 r ar) = \u2225ar\u22252I. Thus, S(t) r JrJ\u22a4 r S(s) r (QrQ\u22a4 r )\u22121 = 1r\u2a7dt1r\u2a7ds\u2225ar\u22252\u03b1\u22122 r (Q0Q\u22a4 0 )\u22121, and V (Q) becomes X s\u2a7dT X t\u2a7dT tr \u0010X r\u2a7dT \u03b12 rb(s) r b(t)\u22a4 r Q0Q\u22a4 0 \u0011 tr \u0010min(s,t) X r=1 \u2225ar\u22252\u03b1\u22122 r (Q0Q\u22a4 0 )\u22121\u0011 . Now we can move the scalar Pmin(s,t) r=1 \u2225ar\u22252\u03b1\u22122 r leftward and group the terms that depend on s and t, giving V (Q) = tr(BQ0Q\u22a4 0 ) tr((Q0Q\u22a4 0 )\u22121), (13) where B = X s\u2a7dT X t\u2a7dT \u0010min(s,t) X q=1 \u03b1\u22122 q \u2225aq\u22252\u0011\u0010X r\u2a7dT \u03b12 rb(s) r b(t)\u22a4 r \u0011 = X q\u2a7dT X r\u2a7dT \u03b12 r \u03b12 q \u2225aq\u22252\u0010 T X s=q b(s) r \u0011\u0010 T X t=q b(t) r \u0011\u22a4 . (14) The matrix B is PSD (it is a sum of PSD matrices), and we will further assume it is invertible. By Theorem 5 (which is stated and proved in Appendix E) any choice of Q0 satisfying \u03b7BQ0Q\u22a4 0 = (Q0Q\u22a4 0 )\u22121 for some constant \u03b7 > 0 will be a global minimizer of V (Q). One such choice is Q0 = B\u22121/4. 13 \fThis solution, or any other globally optimal one, gives us V (Q) = tr(B 1/2)2, where \u03bb is the vector of eigenvalues of B1/2. We can compare this to the variance attained by temporal scaling only (Q0 = I): V (Q) = tr(B) tr(I). Writing tr(B1/2)2 = (\u20d7 1\u22a4\u03bb)2 and tr(B) tr(I) = \u2225\u20d7 1\u22252\u2225\u03bb\u22252, where \u20d7 1 is the vector of ones and \u03bb is the vector of eigenvalues of B1/2, we have by the Cauchy-Schwarz inequality that tr(B 1/2)2 = (\u20d7 1\u22a4\u03bb)2 \u2a7d\u2225\u20d7 1\u22252\u2225\u03bb\u22252 = tr(B) tr(I). This approaches equality as \u03bb approaches a multiple of \u20d7 1, or in other words, as the spectrum of B 1/2 becomes \ufb02at. Conversely, the inequality will be more extreme when the spectrum is lopsided, indicating improved variance reduction when using Q0 = B\u22121/4 over the default choice Q0 = I. 7.1.3 Practical Considerations In practice, the proposed choice of Q0 requires computing the B matrix and its eigendecomposition. Computing B involves four levels of summations over time and seemingly cannot be computed online. However, we can estimate it using quantities similar to the ones we use to estimate the gradient. Appendix F derives the following unbiased estimator of B: B \u22481 2( \u02dc mT \u02dc n\u22a4 T + \u02dc nT \u02dc m\u22a4 T ) where \u02dc mt is given by X s\u2a7dt \u0010X q\u2a7ds \u03c3q\u03b1\u22121 q \u2225aq\u2225 \u0011\u0010X r\u2a7ds \u03c4r\u03b1rb(s)\u22a4 r \u03bdr \u0011\u0010X r\u2a7ds \u03bdr \u0011 and \u02dc nt is like \u02dc mt except with spatial noise \u00b5r instead of and independent of \u03bdr. In these expressions, \u03c3, \u00b5 are temporal and spatial noise vectors distributed identically to \u03c4, \u03bd. This extra layer of stochastic approximation severely degrades the quality of the estimates. Additionally, the estimator depends on unknown future quantities, such as the total future gradient with respect to all time steps. As detailed in Appendix F, we may compute intermediate estimates based on \u02dc mt, \u02dc nt for t < T. To the extent that B is stationary, a moving average of these intermediate estimates can serve as a good approximation to B. 
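Both the optimizer Q0 = B^{-1/4} and the Cauchy-Schwarz comparison above are easy to verify numerically (our sketch, on a random PSD stand-in for B).

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
M = rng.normal(size=(N, N))
B = M @ M.T + 1e-3 * np.eye(N)                      # PSD, invertible stand-in for B

evals, evecs = np.linalg.eigh(B)
Q0 = evecs @ np.diag(evals ** -0.25) @ evecs.T      # B^{-1/4}

def V(Q):                                           # Equation 13
    G = Q @ Q.T
    return np.trace(B @ G) * np.trace(np.linalg.inv(G))

print(V(Q0), np.sum(np.sqrt(evals)) ** 2)           # optimal value equals tr(B^{1/2})^2
print(V(np.eye(N)), np.trace(B) * N)                # Q0 = I gives tr(B) tr(I), which is larger
```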
Empirically however, computing Q0 based on this kind of estimator does not seem to improve optimization performance, due to its high variance. We leave a broader exploration of approximation algorithms for B to future work, while noting that an estimator for B need not be unbiased in order for us to obtain an unbiased estimate of the gradient. Indeed, any invertible choice of Q0 will result in an unbiased estimate of the gradient, as was shown in Section 6.4. Unbiasedness may not even be a particularly desirable property for the B estimator to have, compared to other reasonable-sounding properties such as positive-semide\ufb01niteness. 14 \f0 1000 2000 3000 4000 5000 training step 1.2 1.5 1.8 2.1 loss alpha=gir,Q0=identity alpha=gir,Q0=ours alpha=ours,Q0=identity alpha=ours,Q0=ours Figure 1: Training curves on the row-wise sequential mnist task. For each setting we have run 10 trials and plotted the mean of the classi\ufb01cation loss and the 95% con\ufb01dence interval of the mean. For clarity of presentation, these curves have been aggressively smoothed by a median \ufb01lter prior to the computation of their statistics. Once we have our estimate \u02c6 B of B and wish to compute its fourth root, the O(H3) cost of factorization could be amortized by only performing it every so often or maintaining the estimate in factored form. It is often advisable to \u201cdampen\u201d or \u201cregularize\u201d the estimate by adding a multiple of the identity, i.e. Q0 = ( \u02c6 B + \u03bbI) 1/4 where the hyperparameter \u03bb serves to control the amount of trust placed in the estimate by biasing it towards a \ufb02at eigenvalue spectrum (i.e. towards Q0 \u221dI). 7.2 Variance Reduction Experiments We empirically evaluate four settings for Qs = \u03b1sQ0 in a controlled setting based on the sequential mnist task (Le et al., 2015). We choose this task because it is episodic; it gives us access to gradients b(t) s and Jacobians Js for all s, t by bptt. Thus we can compute the matrices B and C from Sections 7.1.2 and 7.1.1 exactly. In order to curb the cost of these computations, we simplify the task to be row-by-row instead of pixel-by-pixel (i.e. T = 28 as opposed to T = 784). Moreover, the model is tasked with classifying the digit at every step rather than only at the end, as otherwise Lt = 0 and therefore b(t) s = 0 for t < T, trivializing the total gradient estimate (Equation 7). For \u03b1s, we compare the gir-style coe\ufb03cients \u03b32 s = \u2225\u02dc ws\u22121\u2225 \u2225J hs hs\u22121\u02dc hs\u22121\u2225 and \u03b22 s = \r \ru\u22a4 s Q\u22121 0 J zs \u03b8s \r \r \u2225J hs zs Q0us\u2225 15 \fagainst the ones prescribed by our analysis. In the latter case, we use the algorithm described in Appendix C to solve Equation 11 for \u03b1. Given \u03b1, we derive a sequence of \u03b3, \u03b2 coe\ufb03cients by setting \u03b3s equal to the geometric average ratio of consecutive \u03b1s\u2019s, and solving for \u03b2 such that \u03b1s = \u03b2s\u03b3s+1 . . . \u03b3T for all s.3 For Q0, we consider the naive choice Q0 = I as well as the solution Q0 = B\u22121/4 from Section 7.1.2. Recall that the optimal Q0 depends on the choice of \u03b1 and both choices of \u03b1 depend on the choice of Q0. We break this circularity by maintaining an exponential moving average \u00af B of B across episodes, which we use to compute Q0 according to Q0 = \u0010 \u00af B + \u03bbtr( \u00af B) tr(I) I \u00111/4 , where the amount of damping/regularization is controlled by the hyperparameter \u03bb. 
Given Q0, we compute \u03b1 exactly, process the episode and update the parameters by the total gradient estimate (Equation 7). At the end of the episode, we compute B exactly based on the \u03b1 used in the episode, average it across the minibatch, and use the result to update \u00af B. The model consists of an lstm (Hochreiter and Schmidhuber, 1997) with 50 hidden units. At each step, the digit is classi\ufb01ed by softmax regression based on the hidden state ht. As the classi\ufb01er parameters do not a\ufb00ect ht, their gradient is obtained by backprop. The gradients are averaged across a minibatch of 50 examples and across the duration of each episode, before being passed to the Adam (Kingma and Ba, 2014) optimizer. The settings of the learning rate, momentum and \u00af B decay and dampening hyperparameters are detailed in Appendix G. Figure 1 shows the training curves for each of the four con\ufb01gurations. While there is a clear advantage to using both our proposed \u03b1 and Q0 choices, that advantage appears to be lost when only one of the two is used. In order to test our variance analysis, we show in Figure 2 predictions and measurements of several quantities that contribute to the variance, recorded during optimization. Recall from Section 6.5 that the variance of the total gradient estimate takes the form Var hX t\u2a7dT J Lt ht \u02dc ht \u02dc w\u22a4 t i = V (Q) + \r \rJ L \u03b8 \r \r2. The actual variance in Figure 2 measures V (Q) empirically by computing E h\r \rX t\u2a7dT J Lt ht \u02dc ht \u02dc w\u22a4 t\u22121 \u2212J L \u03b8 \r \r2 \u2212 \r \rJ L \u03b8 \r \r2i , where the expectation is estimated by averaging across the minibatch. The intrinsic variance is similarly computed as E \u0002 \u2225J L \u03b8 \u22252\u0003 . The expected variance measures the theoretical prediction of V (Q) by plugging the corresponding choice of Q0 into Equation 13. We see that the theoretical predictions of V (Q) are correct when alpha=ours, but that they overestimate V (Q) when alpha=GIR. When we derived V (Q) in Section 6.5, we started with the assumption that Q and u be independent; this assumption is violated by the gir 3. The simpler choice \u03b3s = 1, \u03b2s = \u03b1s may run into numerical issues but is otherwise equivalent, as the distribution of the total scaling \u03b1 across \u03b3, \u03b2 does not a\ufb00ect the variance. 16 \f101 103 105 variance alpha=GIR intrinsic actual expected alpha=ours Q0=identity 0 1000 2000 3000 4000 5000 101 103 105 0 1000 2000 3000 4000 5000 training step Q0=ours Figure 2: Theoretical predictions and empirical measurements of quantities contributing to total gradient variance. The \u201cintrinsic\u201d variance measures the expected norm of the total gradient J L \u03b8 , estimated by averaging across the minibatch. The \u201cexpected\u201d variance is a theoretical prediction of V (Q) according to Equation 13. The \u201cactual\u201d variance measures V (Q) empirically by the expected norm of the total gradient estimate. coe\ufb03cients, which depend on the noise u. Finally, we see that our proposals indeed reduce the actual variance; signi\ufb01cantly so when both Q0=ours, alpha=ours. We furthermore highlight in Figure 3 the di\ufb00erence in behavior of the \u03b1 coe\ufb03cients under the four con\ufb01gurations. The gir coe\ufb03cients appear to take on more extreme values, especially early on in training. 
Presumably, poor initialization causes increased levels of gradient vanishing, which subsequently causes \u03b3s to be large in order to compensate. However, when we combine the gir coe\ufb03cients with our choice of Q0, the e\ufb00ect is exacerbated. This may be because the gir coe\ufb03cients and our Q0 optimize for con\ufb02icting objectives. Curiously, when both Q0=ours, alpha=ours, the relative ordering of the coe\ufb03cients is reversed, so that \u03b1s < \u03b1t for s < t. 8. Projection in the Space of Preactivations Recall from Section 5 how the spatial rank-one approximation breaks down the Jacobian J ht \u03b8t = J ht zt \u03bdt\u03bd\u22a4 t J zt \u03b8t into more manageable quantities J ht zt \u03bdt and \u03bd\u22a4 t J zt \u03b8t by projecting in the space of some cut vertex zt. Assuming the transition function F takes the form given in Equation 1, we observe that the Jacobian can be factored as J ht \u03b8t = J ht Wtat(I \u2297a\u22a4 t ) where \u2297denotes the Kronecker product, i.e. it is already rank-one. By choosing zt to be the 17 \f0 0.25 0.50 0.75 1 log alpha alpha=GIR RNN step 0 4 8 12 16 20 24 alpha=ours Q0=identity 0 1000 2000 3000 4000 5000 0 0.25 0.50 0.75 1 0 1000 2000 3000 4000 5000 training step Q0=ours Figure 3: Evolution of log \u03b1s for some time steps s as training proceeds. At each training step, the log \u03b1s are centered so that mins log \u03b1s = 0; this eliminates irrelevant constant factors. preactivations Wtat, we can avoid the projection, and we obtain the following recursion: \u02dc ht = \u03b3tJ ht ht\u22121\u02dc ht\u22121 + \u03b2t\u03c4tJ ht Wtat (15) \u02dc wt = \u03b3\u22121 t \u02dc wt\u22121 + \u03b2\u22121 t \u03c4tat The vector-valued \u02dc ht has been replaced by a matrix \u02dc ht, and the contributions J ht Wtat and at are multiplied by scalar noise \u03c4s \u223cN(0, 1) rather than projected down. At each step, the gradient contribution J Lt \u03b8 is computed as vec((J Lt ht \u02dc ht)\u22a4\u02dc w\u22a4 t ). The gir coe\ufb03cients \u03b32 t = \u2225\u02dc wt\u22121\u2225/\u2225J ht ht\u22121 \u02dc ht\u22121\u2225F, \u03b22 t = \u2225at\u2225/\u2225J ht Wtat\u2225F can be derived like in Section 5. We will refer to this variant of uoro as \u201cpreuoro \u201d. This algorithm has also been discovered by Mujika et al. (2018). De\ufb01ne b(t) s = J Lt zs , the gradient of the loss at time t with respect to the projection variable at time s. Then the total gradient J L \u03b8 can be expressed as J L \u03b8 = X t\u2a7dT J Lt \u03b8 = X t\u2a7dT X s\u2a7dt b(t)\u22a4 s (I \u2297a\u22a4 s ) = X t\u2a7dT X s\u2a7dt vec(b(t) s a\u22a4 s ), 18 \fwhere vec is the vectorization operator that serializes its matrix argument into a row vector in row-major order. We can express the total gradient estimate as vec \u0010X t\u2a7dT (J Lt ht \u02dc ht)\u22a4\u02dc w\u22a4 t \u0011 = vec \u0010X t\u2a7dT \u0000X s\u2a7dt \u03c4s\u03b1sb(t) s \u0001\u0000X r\u2a7dt \u03c4r\u03b1\u22121 r a\u22a4 r \u0001\u0011 = vec \u0010X t\u2a7dT \u00af B(t)\u22a4\u00af Q\u03c4\u03c4 \u22a4\u00af Q\u22121 \u00af S(t) \u00af J \u0011 , (16) where we have de\ufb01ned the matrices \u00af B(t)\u22a4= \u0000b(t) 1 \u00b7 \u00b7 \u00b7 b(t) T \u0001 , \u00af Q = diag(\u03b1), \u00af S(t) ij = \u03b4ij1i\u2a7et, \u00af J = \u0000a1 \u00b7 \u00b7 \u00b7 aT \u0001\u22a4. that mirror similarly-named quantities from Section 6.4. 
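The following is a small numpy sketch (ours, with arbitrary sizes) of the preuoro recursion of Equation 15 for an RNN with preactivations z_t = W a_t and a_t = [h_{t-1}; x_t], checked against the exact rtrl Jacobian; only the scalar temporal noise tau_t appears, and there is no spatial projection.

```python
import numpy as np

rng = np.random.default_rng(5)
N, D, T = 3, 2, 6
A_dim = N + D
W = rng.normal(scale=0.5, size=(N, A_dim))
xs = rng.normal(size=(T, D))
h0 = rng.normal(scale=0.1, size=N)

def step(h_prev, x):
    a = np.concatenate([h_prev, x])
    h = np.tanh(W @ a)
    J_hz = np.diag(1.0 - h ** 2)             # dh_t / d(W_t a_t); kept as a full matrix
    J_hh = J_hz @ W[:, :N]                   # dh_t / dh_{t-1}
    return h, a, J_hz, J_hh

def exact_rtrl():
    h, J = h0, np.zeros((N, N * A_dim))
    for t in range(T):
        h, a, J_hz, J_hh = step(h, xs[t])
        J = J_hh @ J + J_hz @ np.kron(np.eye(N), a)
    return J

def preuoro(seed):
    rng = np.random.default_rng(seed)
    h, H, w = h0, np.zeros((N, N)), np.zeros(A_dim)   # H is the matrix-valued forward sum
    for t in range(T):
        h, a, J_hz, J_hh = step(h, xs[t])
        tau = rng.choice([-1.0, 1.0])
        fwd_old = J_hh @ H
        gamma = np.sqrt(np.linalg.norm(w) / np.linalg.norm(fwd_old)) \
            if np.linalg.norm(fwd_old) > 1e-12 else 1.0
        beta = np.sqrt(np.linalg.norm(a) / np.linalg.norm(J_hz))
        H = gamma * fwd_old + beta * tau * J_hz       # Equation 15, forward recursion
        w = w / gamma + (tau / beta) * a              # Equation 15, backward recursion
    return H @ np.kron(np.eye(N), w)                  # estimate of dh_T / dvec(W)

estimate = np.mean([preuoro(s) for s in range(5000)], axis=0)
print(np.max(np.abs(estimate - exact_rtrl())))        # shrinks as the sample count grows
```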
The expression in Equation 16 is analogous to that in Equation 7, but with the crucial di\ufb00erence that no summation across space is involved. Hence the noise vector \u03c4 has much smaller dimension T rather than TN (with N being the dimension of the projection space). We show in Appendix H that the variance contribution V (Q) of preuoro can be written X s\u2a7dT X t\u2a7dT tr \u0000 \u00af B(s) \u00af B(t)\u22a4\u00af Q \u00af Q\u22a4\u0001 tr \u0000 \u00af S(t) \u00af J \u00af J\u22a4\u00af S(s)( \u00af Q \u00af Q\u22a4)\u22121\u0001 and the variance contribution V (Q) of uoro \u2019s total gradient estimate from Section 6.4 (Equation 7) can be written: X s\u2a7dT X t\u2a7dT tr \u0000 \u00af B(t)Q0Q\u22a4 0 \u00af B(s)\u22a4\u00af Q \u00af Q\u22a4\u0001 tr \u0000 \u00af S(t) \u00af J \u00af J\u22a4\u00af S(s)( \u00af Q \u00af Q\u22a4)\u22121\u0001 tr \u0000(Q0Q\u22a4 0 )\u22121\u0001 The latter has an extra factor tr((Q0Q\u22a4 0 )\u22121). If Q0 = I, then this factor is equal to tr(I). Spatial projection thus causes the dominant term of the variance to be multiplied by the dimension of the preactivations, which typically ranges in the thousands. Avoiding the spatial projection avoids this multiplication and hence achieves drastically lower variance. Figure 4 con\ufb01rms the corresponding improvement in optimization performance. This \ufb01gure shows training curves of four variations on rtrl: rtrl, rtrl plus spatial projection (uoro minus temporal projection), preuoro (uoro minus spatial projection), and uoro which performs both spatial and temporal projection. The task under consideration is the queue task, in which the model is trained to emit its input stream with a delay. E\ufb00ectively, the model learns to implement a queue. The model is similar to that described in 7.2, except with 50 hidden units. The model observes a random binary input stream and has to predict a binary output stream that is equal to the input stream but with a delay of 4 time steps. The J Lt \u03b8 estimates are averaged across a minibatch of 100 examples, and applied to the parameters by Adam (Kingma and Ba, 2014) with momentum 0.5 and learning rate set to 0.008 for \u201cneither\u201d, 0.008 for \u201cspatial\u201d, 0.0008 for \u201ctemporal\u201d, 0.002 for \u201cboth\u201d (found by grid search). The main drawback of this method is its computational complexity: the algorithm involves propagating multiple vectors forward, which increases the computation time by the same factor N that we removed from the variance. The dominant operation is the matrix-matrix multiplication J ht ht\u22121\u02dc ht\u22121, which has computational cost O(N3) (recall N is the dimension of the projection space). This is better than rtrl\u2019s J ht ht\u22121J ht\u22121 \u03b8 which costs O(N4), but worse than uoro and bptt which propagate vectors at a cost of O(N2). The space complexity is O(N2), which matches that of uoro. 19 \f0 2500 5000 7500 10000 training step 0 0.2 0.4 0.6 loss neither (RTRL) spatial temporal (preUORO) both (UORO) Figure 4: Training curves on the queue task showing interpolation between rtrl and uoro by ablation of the spatial and temporal approximations. \u201cneither\u201d denotes exact computation of the gradient using rtrl, \u201cspatial\u201d denotes rtrl with J ht zt \u03bdt\u03bd\u22a4 t J zt \u03b8t standing in for J ht \u03b8t , \u201ctemporal\u201d denotes preuoro computed by Equation 15, \u201cboth\u201d denotes uoro . Where applicable, the cut vertex zt \u2261Wtat is the preactivations. 9. 
reinforce as Approximate Real-Time Recurrent Learning In this section we show a fundamental connection between reinforce (Williams, 1992) and uoro. The reinforce algorithm provides gradient estimates for systems with stochastic transitions. It can also be used to train recurrent neural networks if we arti\ufb01cially induce stochasticity by adding Gaussian noise to the hidden states. We will show that in this setting, the reinforce estimator is closely related to the uoro estimator. reinforce aims to estimate the gradient of the expected loss E\u03c7\u223cp(\u03c7;\u03b8)[L(\u03c7)] which depends on the parameter \u03b8 through some distribution p(\u03c7; \u03b8) over stochastic context \u03c7 that determines the loss L(\u03c7). Conceptually, \u03c7 = (\u03c7t) is the trajectory of the state of an agent and its external environment, and \u03b8 parameterizes a stochastic policy over actions, which induces a distribution p(\u03c7; \u03b8) on \u03c7. The gradient of the expected loss can be rewritten as an expected gradient as follows: \u2207\u03b8E\u03c7\u223cp(\u03c7;\u03b8)[L(\u03c7)] = \u2207\u03b8 Z L(\u03c7)p(\u03c7; \u03b8)d\u03c7 = Z L(\u03c7)\u2207\u03b8p(\u03c7; \u03b8)d\u03c7 = Z L(\u03c7)\u2207\u03b8(log p(\u03c7; \u03b8))p(\u03c7; \u03b8)d\u03c7, where we have used the fact that \u2207\u03b8 log p(\u03c7; \u03b8) = \u2207\u03b8p(\u03c7;\u03b8)/p(\u03c7;\u03b8). With this modi\ufb01ed expression, we can estimate \u2207\u03b8E\u03c7\u223cp(\u03c7;\u03b8)[L(\u03c7)] by sampling from p(\u03c7; \u03b8). 20 \fIn our case, \u03c7 will be the trajectory of the stochastic hidden states of the rnn, and sampling from p(\u03c7; \u03b8) will correspond to the following recursions: ht = F(\u00af ht\u22121, xt; \u03b8t) \u00af ht = ht + \u03c3ut, (17) with additive Gaussian noise ut \u223cN(0, I). The stochastic hidden state \u00af ht is e\ufb00ectively sampled from a state transition policy p(\u00af ht|\u00af ht\u22121, \u03b8t) \u221dexp \u0000\u22121 2\u03c32 \u2225\u00af ht \u2212ht\u22252\u0001 . For each state \u00af ht so visited, we compute the score \u2207\u03b8 log p(\u00af h\u2a7dt; \u03b8) of the trajectory \u00af h\u2a7dt = (\u00af h0, \u00af h1, . . . , \u00af ht) that brought us there, and multiply it by an immediate loss Lt so obtained. Intuitively, higher rewards \u201creinforce\u201d directions in parameter space that bring them about. We will assume Lt is a di\ufb00erentiable function of \u00af ht. By the chain rule of probability, the score \u2207\u03b8 log p(\u00af h\u2a7dt; \u03b8) of the trajectory is simply the sum \u2207\u03b8 Pt s=1 log p(\u00af hs|\u00af hs\u22121, \u03b8s), which we can recursively maintain according to \u00af w\u22a4 t = \u00af w\u22a4 t\u22121 + \u2207\u03b8 log p(\u00af ht|\u00af ht\u22121, \u03b8t) = \u00af w\u22a4 t\u22121 \u2212 1 2\u03c32 J \u2225\u00af ht\u2212ht\u22252 ht J ht \u03b8t = \u00af w\u22a4 t\u22121 + 1 \u03c32 (\u00af ht \u2212ht)\u22a4J ht \u03b8t = \u00af w\u22a4 t\u22121 + 1 \u03c3u\u22a4 t J ht \u03b8t . Note that in the above computations, \u201c\u00af ht\u201d and \u201c\u00af ht\u22121\u201d are not the variables themselves but particular values. (This is a consequence of our adoption of the standard abuse of notation for random variables.) Thus they are treated as constants with respect to di\ufb00erentiation. The only quantity that depends on \u03b8 is ht, which when we condition on the value of \u00af ht\u22121, only depends on \u03b8 via \u03b8t. 
This recursion is very similar to uoro\u2019s recursion for \u02dc w\u22a4 t , and it computes a similar type of sum: \u00af w\u22a4 t = 1 \u03c3 X s\u2a7dt u\u22a4 s J hs \u03b8s . (18) Once we have \u2207\u03b8 log p(\u00af h\u2a7dt; \u03b8), we need to multiply it by the loss Lt to obtain a reinforce gradient estimate of J Lt \u03b8 . We can express the loss by its Taylor series around the point u = 0 where the noise is zero, as follows: Lt = Lt|u=0 + \u0010X s\u2a7dt J Lt us |u=0us \u0011 + 1 2 \u0010X r\u2a7dt X s\u2a7dt u\u22a4 r HLt ur,us \f \f u=0us \u0011 + \u00b7 \u00b7 \u00b7 = Lt|u=0 + \u03c3 \u0010X s\u2a7dt J Lt hs |u=0us \u0011 + O(\u03c32), where HLt ur,us denotes the Hessian of Lt with respect to ur and us. The last step uses the fact that \u03c3us a\ufb00ects Lt in exactly the same way that hs does, so that J Lt us = \u03c3J Lt hs and HLt ur,us = \u03c32HLt hr,hs. Plugging the Taylor series for Lt into the reinforce gradient estimate and using Equation 18, we get: Lt \u00af w\u22a4 t = Lt|u=0 \u00af w\u22a4 t + \u0010X s\u2a7dt J Lt hs |u=0us \u0011\u0010X s\u2a7dt u\u22a4 s J hs \u03b8s \u0011 + O(\u03c3). 21 \fHere we see the uoro gradient estimator appear in the second term, but with an important di\ufb00erence: the J hs \u03b8s \u2019s are evaluated in the noisy system, whereas the J Lt hs |u=0 are evaluated with zero noise. Thus this term doesn\u2019t estimate J Lt \u03b8 for any value of u. However, the equivalence becomes exact when we let the noise go to zero by taking the limit \u03c3 \u21920. To see this we \ufb01rst observe that letting \u03c3 go to 0 is equivalent to letting u go to 0 in the recursions for hs (Equation 17). Furthermore, since F is continuously di\ufb00erentiable, so is hs (w.r.t. all of its dependencies). Therefore J hs \u03b8s is a continuous function of u, and it follows that lim \u03c3\u21920 J hs \u03b8s = lim u\u21920 J hs \u03b8s = J hs \u03b8s |u=0. And therefore we have lim \u03c3\u21920 h\u0010X s\u2a7dt J Lt hs |u=0us \u0011\u0010X s\u2a7dt u\u22a4 s J hs \u03b8s \u0011 + O(\u03c3) i = \u0010X s\u2a7dt J Lt hs |u=0us \u0011\u0010X s\u2a7dt u\u22a4 s J hs \u03b8s |u=0 \u0011 , which is identical to the standard uoro estimate J Lt ht \u02dc ht \u02dc w\u22a4 t of J Lt \u03b8 (without any variance reduction). Thus we can see that in the limit as \u03c3 \u21920, reinforce becomes equivalent to uoro (sans variance reduction), except that it includes the additional term: Lt|u=0 \u00af w\u22a4 t = 1 \u03c3 Lt|u=0 X s\u2a7dt u\u22a4 s J hs \u03b8s . From the RHS expression we see that this term has mean zero, and thus the limiting behavior of reinforce is to give an unbiased estimate of the gradient of the noise-free model. However, the variance of the additional term goes to in\ufb01nity as \u03c3 \u21920. For models where the noise is bounded away from zero this term represents the main source of variance for reinforce estimators. It can however be addressed by subtracting an estimate of Lt|u=0 from Lt before multiplying by the score function. This is known as a \u201cbaseline\u201d in the reinforce literature (Williams, 1992). The appearance of the uoro estimator as part of the reinforce estimator suggests an additional opportunity for variance reduction in reinforce. 
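A small numerical check (ours; toy RNN, a single quadratic loss at the final step, arbitrary sizes) of the correspondence just described: with the noise-free loss used as a baseline and a small sigma, the averaged reinforce estimate approaches the exact gradient of the noise-free model. In practice the score would be obtained by automatic differentiation of the log-densities; the explicit Jacobians below simply mirror the derivation.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, sigma = 3, 5, 0.01
W = rng.normal(scale=0.5, size=(N, N))
xs = rng.normal(size=(T, N))
y = rng.normal(size=N)
loss = lambda h: 0.5 * np.sum((h - y) ** 2)          # a single loss, at the final step only

def true_gradient():
    """Exact gradient of the noise-free loss w.r.t. vec(W), by rtrl."""
    h, J = np.zeros(N), np.zeros((N, N * N))
    for t in range(T):
        z = W @ h + xs[t]
        J = np.diag(1.0 - np.tanh(z) ** 2) @ (W @ J + np.kron(np.eye(N), h))
        h = np.tanh(z)
    return (h - y) @ J

def noise_free_loss():
    h = np.zeros(N)
    for t in range(T):
        h = np.tanh(W @ h + xs[t])
    return loss(h)

def reinforce(seed, baseline):
    rng = np.random.default_rng(seed)
    h_bar, score = np.zeros(N), np.zeros(N * N)
    for t in range(T):
        h = np.tanh(W @ h_bar + xs[t])
        # dh_t/dvec(W) with the previous *noisy* state held fixed (it is observed data here)
        J_local = np.diag(1.0 - h ** 2) @ np.kron(np.eye(N), h_bar)
        u = rng.standard_normal(N)
        score += (u / sigma) @ J_local               # grad_theta log p(hbar_t | hbar_{t-1})
        h_bar = h + sigma * u                        # the stochastic transition of Equation 17
    return (loss(h_bar) - baseline) * score          # reinforce estimate with a baseline

c = noise_free_loss()
estimate = np.mean([reinforce(s, c) for s in range(20000)], axis=0)
print(np.max(np.abs(estimate - true_gradient())))    # small for small sigma and many samples
```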
If in Equation 17 we had instead de\ufb01ned \u00af ht = ht + \u03c3Qtut, that is, the noise added to ht has covariance \u03c32Q\u22a4 t Qt, then we would have found \u00af w\u22a4 t = 1 \u03c3 X s\u2a7dt u\u22a4 s Q\u22121J hs \u03b8s and X s\u2a7dt J Lt us |u=0 = \u03c3 X s\u2a7dt J Lt hs |u=0Qsus. Putting these two together as in Equation 19 and passing to the limit \u03c3 \u21920 as before, we get lim \u03c3\u21920 Lt \u00af w\u22a4 t = Lt|u=0 \u00af w\u22a4 t + \u0010X s\u2a7dt J Lt hs |u=0Qsus \u0011\u0010X s\u2a7dt u\u22a4 s Q\u22121 s J hs \u03b8s |u=0 \u0011 , where now the second term is identical to uoro with the generalized variance reduction described in Section 6.3. Thus the Qs matrices that enable variance reduction in uoro correspond directly to a choice of covariance on the exploration noise in reinforce. 22 \f10." + }, + { + "url": "http://arxiv.org/abs/1603.09025v5", + "title": "Recurrent Batch Normalization", + "abstract": "We propose a reparameterization of LSTM that brings the benefits of batch\nnormalization to recurrent neural networks. Whereas previous works only apply\nbatch normalization to the input-to-hidden transformation of RNNs, we\ndemonstrate that it is both possible and beneficial to batch-normalize the\nhidden-to-hidden transition, thereby reducing internal covariate shift between\ntime steps. We evaluate our proposal on various sequential problems such as\nsequence classification, language modeling and question answering. Our\nempirical results show that our batch-normalized LSTM consistently leads to\nfaster convergence and improved generalization.", + "authors": "Tim Cooijmans, Nicolas Ballas, C\u00e9sar Laurent, \u00c7a\u011flar G\u00fcl\u00e7ehre, Aaron Courville", + "published": "2016-03-30", + "updated": "2017-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION Recurrent neural network architectures such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014) have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition Amodei et al. (2015), machine translation (Bahdanau et al., 2015) and image and video captioning (Xu et al., 2015; Yao et al., 2015). Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study (Pascanu et al., 2012; Martens & Sutskever, 2011; Ollivier, 2013). It is well-known that for deep feed-forward neural networks, covariate shift (Shimodaira, 2000; Ioffe & Szegedy, 2015) degrades the ef\ufb01ciency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This internal covariate shift (Ioffe & Szegedy, 2015) may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks. Batch normalization (Ioffe & Szegedy, 2015) is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. 
It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer\u2019s parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge signi\ufb01cantly faster and generalize better. Although batch normalization has demonstrated signi\ufb01cant training speed-ups and generalization bene\ufb01ts in feed-forward networks, it is proven to be dif\ufb01cult to apply in recurrent architectures (Laurent et al., 2016; Amodei et al., 2015). It has found limited use in stacked RNNs, where the normalization is applied \u201cvertically\u201d, i.e. to the input of each RNN, but not \u201chorizontally\u201d between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most bene\ufb01cial when applied horizontally. However, Laurent et al. (2016) hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling. Our \ufb01ndings run counter to this hypothesis. We show that it is both possible and highly bene\ufb01cial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section 3) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the 1 arXiv:1603.09025v5 [cs.LG] 28 Feb 2017 \fPublished as a conference paper at ICLR 2017 gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section 4). We evaluate our proposal on several sequential problems and show (Section 5) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance. Liao & Poggio (2016) simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). Ba et al. (2016) independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method. 2 PREREQUISITES 2.1 LSTM Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review brie\ufb02y in this paper. Given an input sequence X = (x1, x2, . . . , xT ), an RNN de\ufb01nes a sequence of hidden states ht according to ht = \u03c6(Whht\u22121 + Wxxt + b), (1) where Wh \u2208Rdh\u00d7dh, Wx \u2208Rdx\u00d7dh, b \u2208Rdh and the initial state h0 \u2208Rdh are model parameters. A popular choice for the activation function \u03c6( \u00b7 ) is tanh. RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using \ufb01rst-order stochastic gradient descent (SGD) is notoriously dif\ufb01cult due to the well-known problem of exploding/vanishing gradients (Bengio et al., 1994; Hochreiter, 1991; Pascanu et al., 2012). Gradient vanishing occurs when states ht are not in\ufb02uenced by small changes in much earlier states h\u03c4, t \u226a\u03c4, preventing learning of long-term dependencies in the input data. 
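As a concrete reference (ours), the transition of Equation 1 and the vanishing-gradient effect just described can be sketched in a few lines of numpy; the weight scales below are arbitrary and deliberately chosen with a small recurrent gain so that the decay of the accumulated Jacobian norm is visible.

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_x, T = 20, 5, 50
Wh = rng.normal(size=(d_h, d_h)) * 0.3 / np.sqrt(d_h)   # deliberately small recurrent gain
Wx = rng.normal(size=(d_x, d_h)) * 0.1
b = np.zeros(d_h)

h = rng.normal(scale=0.1, size=d_h)
grad = np.eye(d_h)                                       # accumulates d h_t / d h_0
for t, x in enumerate(rng.normal(size=(T, d_x))):
    h = np.tanh(h @ Wh.T + x @ Wx + b)                   # Equation 1 with phi = tanh
    grad = (np.diag(1.0 - h ** 2) @ Wh) @ grad           # chain rule through one more step
    if (t + 1) % 10 == 0:
        print(t + 1, np.linalg.norm(grad))               # the norm shrinks rapidly with t
```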
Although learning long-term dependencies is fundamentally dif\ufb01cult (Bengio et al., 1994), its effects can be mitigated through architectural variations such as LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014) and iRNN/uRNN (Le et al., 2015; Arjovsky et al., 2015). In what follows, we focus on the LSTM architecture (Hochreiter & Schmidhuber, 1997) with recurrent transition given by \uf8eb \uf8ec \uf8ec \uf8ed \u02dc ft \u02dc it \u02dc ot \u02dc gt \uf8f6 \uf8f7 \uf8f7 \uf8f8 = Whht\u22121 + Wxxt + b (2) ct = \u03c3(\u02dc ft) \u2299ct\u22121 + \u03c3(\u02dc it) \u2299tanh( \u02dc gt) (3) ht = \u03c3(\u02dc ot) \u2299tanh(ct), (4) where Wh \u2208Rdh\u00d74dh, WxRdx\u00d74dh, b \u2208R4dh and the initial states h0 \u2208Rdh, c0 \u2208Rdh are model parameters. \u03c3 is the logistic sigmoid function, and the \u2299operator denotes the Hadamard product. The LSTM differs from simple RNNs in that it has an additional memory cell ct whose update is nearly linear which allows the gradient to \ufb02ow back through time more easily. In addition, unlike the RNN which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate ft determines the extent to which information is carried over from the previous timestep, and the input gate it controls the \ufb02ow of information from the current input xt. The output gate ot allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time. 2.2 BATCH NORMALIZATION Covariate shift (Shimodaira, 2000) is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model\u2019s parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as 2 \fPublished as a conference paper at ICLR 2017 internal covariate shift (Ioffe & Szegedy, 2015), where changing the parameters of a layer affects the distribution of the inputs to all layers above it. Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations due to the computationally costly matrix inversion. The batch normalizing transform is as follows: BN(h; \u03b3, \u03b2) = \u03b2 + \u03b3 \u2299 h \u2212b E[h] q d Var[h] + \u03f5 (5) where h \u2208Rd is the vector of (pre)activations to be normalized, \u03b3 \u2208Rd, \u03b2 \u2208Rd are model parameters that determine the mean and standard deviation of the normalized activation, and \u03f5 \u2208R is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics E[h] and Var[h] are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction. 3 BATCH-NORMALIZED LSTM This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to Laurent et al. 
(2016); Amodei et al. (2015), we leverage batch normalization in both the input-to-hidden and the hidden-to-hidden transformations. We introduce the batch-normalizing transform BN( \u00b7 ; \u03b3, \u03b2) into the LSTM as follows: \uf8eb \uf8ec \uf8ec \uf8ed \u02dc ft \u02dc it \u02dc ot \u02dc gt \uf8f6 \uf8f7 \uf8f7 \uf8f8 = BN(Whht\u22121; \u03b3h, \u03b2h) + BN(Wxxt; \u03b3x, \u03b2x) + b (6) ct = \u03c3(\u02dc ft) \u2299ct\u22121 + \u03c3(\u02dc it) \u2299tanh( \u02dc gt) (7) ht = \u03c3(\u02dc ot) \u2299tanh(BN(ct; \u03b3c, \u03b2c)) (8) In our formulation, we normalize the recurrent term Whht\u22121 and the input term Wxxt separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the \u03b3h and \u03b3x parameters. We set \u03b2h = \u03b2x = 0 to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector b to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient \ufb02ow through ct, we do not apply batch normalization in the cell update. The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we \ufb01nd that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ signi\ufb01cantly (see Figure 5 in Appendix A). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.1 Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure 5). For our experiments we estimate the population statistics separately for each timestep 1, . . . , Tmax where 1 Note that we separate only the statistics over time and not the \u03b3 and \u03b2 parameters. 3 \fPublished as a conference paper at ICLR 2017 Tmax is the length of the longest training sequence. When at test time we need to generalize beyond Tmax, we use the population statistic of time Tmax for all time steps beyond it. During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set. 4 INITIALIZING \u03b3 FOR GRADIENT FLOW Although batch normalization allows for easy control of the pre-activation variance through the \u03b3 parameters, common practice is to normalize to unit variance. We suspect that the previous dif\ufb01culties with recurrent batch normalization reported in Laurent et al. (2016); Amodei et al. (2015) are largely due to improper initialization of the batch normalization parameters, and \u03b3 in particular. In this section we demonstrate the impact of \u03b3 on gradient \ufb02ow. 0 100 200 300 400 500 600 700 800 t 10-26 10-24 10-22 10-20 10-18 10-16 10-14 10-12 10-10 10-8 10-6 10-4 10-2 100 ||dloss/dh_t||_2 RNN gradient propagation gamma=0.10 gamma=0.20 gamma=0.30 gamma=0.40 gamma=0.50 gamma=0.60 gamma=0.70 gamma=0.80 gamma=0.90 gamma=1.00 (a) We visualize the gradient \ufb02ow through a batchnormalized tanh RNN as a function of \u03b3. High variance causes vanishing gradient. 
0.0 0.2 0.4 0.6 0.8 1.0 input standard deviation 0.0 0.2 0.4 0.6 0.8 1.0 expected derivative (and IQR range) derivative through tanh (b) We show the empirical expected derivative and interquartile range of tanh nonlinearity as a function of input variance. High variance causes saturation, which decreases the expected derivative. Figure 1: In\ufb02uence of pre-activation variance on gradient propagation. In Figure 1(a), we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section 5.1. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of \u03b3, the norm quickly goes to zero as gradient is propagated back in time. For small values of \u03b3 the norm is nearly constant. To demonstrate what we think is the cause of this vanishing, we drew samples x from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative tanh\u2032(x) = 1 \u2212tanh2(x) \u2208[0, 1] for each. Figure 1(b) shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1. We conjecture that this is what causes the gradient to vanish, and recommend initializing \u03b3 to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks. 5 EXPERIMENTS This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters \u03b3 and \u03b2 to 0.1 and 0 respectively. 4 \fPublished as a conference paper at ICLR 2017 0 20000 40000 60000 80000 100000 Training Iteration 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Pixel-by-Pixel MNIST (Validation Set) lstm bn_lstm 0 20000 40000 60000 80000 100000 Training Iteration 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy Pixel-by-Pixel Permuted-MNIST (Validation Set) lstm bn_lstm Figure 2: Accuracy on the validation set for the pixel by pixel MNIST classi\ufb01cation tasks. The batch-normalized LSTM is able to converge faster relatively to a baseline LSTM. Batch-normalized LSTM also shows some improve generalization on the permuted sequential MNIST that require to preserve long-term memory information. 5.1 SEQUENTIAL MNIST We evaluate our batch-normalized LSTM on a sequential version of the MNIST classi\ufb01cation task (Le et al., 2015). The model processes each image one pixel at a time and \ufb01nally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST (pMNIST). In MNIST, the pixels are processed in scanline order. In pMNIST the pixels are processed in a \ufb01xed random order. Our baseline consists of an LSTM with 100 hidden units, with a softmax classi\ufb01er to produce a prediction from the \ufb01nal hidden state. 
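For reference, here is a minimal numpy sketch (ours) of the BN-LSTM transition of Equations 6-8 used in these experiments, with the transform of Equation 5, gamma initialized to 0.1 and beta to 0 as stated above; statistics are taken from the current minibatch independently at each time step (at test time the per-timestep population statistics would replace them). Sizes, initializations, and the noisy initial state are our own arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_h, batch = 5, 8, 32
Wh = rng.normal(scale=0.1, size=(d_h, 4 * d_h))
Wx = rng.normal(scale=0.1, size=(d_x, 4 * d_h))
b = np.zeros(4 * d_h)
gamma_h = gamma_x = np.full(4 * d_h, 0.1)            # beta_h = beta_x = 0 (redundant with b)
gamma_c, beta_c = np.full(d_h, 0.1), np.zeros(d_h)

def bn(a, gamma, beta, eps=1e-3):                    # Equation 5, batch statistics per feature
    return beta + gamma * (a - a.mean(0)) / np.sqrt(a.var(0) + eps)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def bn_lstm_step(h_prev, c_prev, x):
    pre = bn(h_prev @ Wh, gamma_h, 0.0) + bn(x @ Wx, gamma_x, 0.0) + b   # Equation 6
    f, i, o, g = np.split(pre, 4, axis=1)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)                    # Equation 7 (no BN here)
    h = sigmoid(o) * np.tanh(bn(c, gamma_c, beta_c))                     # Equation 8
    return h, c

h = rng.normal(scale=0.1, size=(batch, d_h))   # noisy initial state so batch variance is nonzero
c = np.zeros((batch, d_h))
for x in rng.normal(size=(10, batch, d_x)):    # run over a random input sequence
    h, c = bn_lstm_step(h, c, x)
print(h.shape, c.shape)
```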
We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp (Tieleman & Hinton, 2012) with learning rate of 10\u22123 and 0.9 momentum. We apply gradient clipping at 1 to avoid exploding gradients. The in-order MNIST task poses a unique problem for our model: the input for the \ufb01rst hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. Normalizing these zerovariance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization ampli\ufb01es the noise to signal level, we \ufb01nd that it does not hurt performance compared to datadependent ways of initializing the hidden states. Model MNIST pMNIST TANH-RNN (Le et al., 2015) 35.0 35.0 iRNN (Le et al., 2015) 97.0 82.0 uRNN (Arjovsky et al., 2015) 95.1 91.4 sTANH-RNN (Zhang et al., 2016) 98.1 94.0 LSTM (ours) 98.9 90.2 BN-LSTM (ours) 99.0 95.4 Table 1: Accuracy obtained on the test set for the pixel by pixel MNIST classi\ufb01cation tasks In Figure 2 we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes signi\ufb01cantly better on pMNIST. It has been highlighted in Arjovsky et al. (2015) that pMNIST contains many longer term dependencies across pixels than in the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to 5 \fPublished as a conference paper at ICLR 2017 Model Penn Treebank LSTM (Graves, 2013) 1.262 HF-MRNN (Mikolov et al., 2012) 1.41 Norm-stabilized LSTM (Krueger & Memisevic, 2016) 1.39 ME n-gram (Mikolov et al., 2012) 1.37 LSTM (ours) 1.38 BN-LSTM (ours) 1.32 Zoneout (Krueger et al., 2016) 1.27 HM-LSTM (Chung et al., 2016) 1.24 HyperNetworks (Ha et al., 2016) 1.22 Table 2: Bits-per-character on the Penn Treebank test sequence. characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies. Table 1 reports the test set accuracy of the early stop model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for pMNIST where models have to leverage long-term temporal depencies. In addition, Table 1 shows that our batch-normalized LSTM achieves state of the art on both MNIST and pMNIST. 5.2 CHARACTER-LEVEL PENN TREEBANK We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus (Marcus et al., 1993) according to the train/valid/test partition of Mikolov et al. (2012). For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead. Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classi\ufb01er on the hidden state ht. 
We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in 3. We show the learning curves in Figure 3(a). BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure 3(b) shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which con\ufb01rms that repeating the last population statistic (cf. Section 3) is a viable strategy. In table 2 we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow up works havd since improved the state of the art (Krueger et al., 2016; Chung et al., 2016; Ha et al., 2016). 5.3 TEXT8 We evaluate our model on a second character-level language modeling task on the much larger text8 dataset (Mahoney, 2009). This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. We follow Mikolov et al. (2012); Zhang et al. (2016) and use the \ufb01rst 90M characters for training, the next 5M for validation and the \ufb01nal 5M characters for testing. We train on nonoverlapping sequences of length 180. Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classi\ufb01er on the hidden state ht. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.001. All weight matrices were initialized to be orthogonal. 6 \fPublished as a conference paper at ICLR 2017 We early-stop on validation performance and report the test performance of the resulting model in table 3. We observe that BN-LSTM obtains a signi\ufb01cant performance improvement over the LSTM baseline. Chung et al. (2016) has since improved on our performance. Model text8 td-LSTM (Zhang et al., 2016) 1.63 HF-MRNN (Mikolov et al., 2012) 1.54 skipping RNN (Pachitariu & Sahani, 2013) 1.48 LSTM (ours) 1.43 BN-LSTM (ours) 1.36 HM-LSTM (Chung et al., 2016) 1.29 Table 3: Bits-per-character on the text8 test sequence. 5.4 TEACHING MACHINES TO READ AND COMPREHEND Recently, Hermann et al. (2015) introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular. To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training. We evaluate several variants. The \ufb01rst variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. 
The second variant, termed BN-everywhere, is exactly like the \ufb01rst, except that we also introduce batch normalization into the attention computations, normalizing each term going into the tanh nonlinearities. Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variablelength sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of xt toward zero. We address this effect using sequencewise normalization of the inputs as proposed by Laurent et al. (2016); Amodei et al. (2015). That is, we share statistics over time for normalization 0 2000 4000 6000 8000 10000 12000 14000 16000 training steps 1.4 1.6 1.8 2.0 2.2 2.4 bits per character LSTM BN-LSTM (a) Performance in bits-per-character on length100 subsequences of the Penn Treebank validation sequence during training. 100 200 300 400 500 600 700 800 900 1000 sequence length 1.32 1.34 1.36 1.38 1.40 1.42 1.44 1.46 mean bits per character LSTM BN-LSTM, population statistics BN-LSTM, batch statistics (b) Generalization to longer subsequences of Penn Treebank using population statistics. The subsequences are taken from the test sequence. Figure 3: Penn Treebank evaluation 7 \fPublished as a conference paper at ICLR 2017 0 100 200 300 400 500 600 700 800 training steps (thousands) 0.0 0.2 0.4 0.6 0.8 1.0 error rate LSTM train BN-LSTM train BN-everywhere train BN-e* train BN-e** train LSTM valid BN-LSTM valid BN-everywhere valid BN-e* valid BN-e** valid (a) Error rate on the validation set for the Attentive Reader models on a variant of the CNN QA task (Hermann et al., 2015). As detailed in Appendix C, the theoretical lower bound on the error rate on this task is 43%. 0 50 100 150 200 250 300 350 400 training steps (thousands) 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 error rate LSTM train BN-e** train LSTM valid BN-e** valid (b) Error rate on the validation set on the full CNN QA task from Hermann et al. (2015). Figure 4: Training curves on the CNN question-answering tasks. of the input terms Wxxt, but not for the recurrent terms Whht or the cell output ct. Doing so avoids many issues involving degenerate statistics due to input sequence padding. Our fourth and \ufb01nal variant BN-e** is like BN-e* but bidirectional. The main dif\ufb01culty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section 5.1): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place. See Appendix C for hyperparameters and task details. Figure 4(a) shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a signi\ufb01cant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization bene\ufb01t over the baseline. The validation curves have minima of 50.3%, 49.5% and 50.0% for the baseline, BN-LSTM and BN-everywhere respectively. 
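The padding-aware reversal described above for the bidirectional variant (BN-e**) is easy to get wrong, so a small sketch may help; the function below is ours, not taken from the paper, and assumes a zero-padded batch with known sequence lengths.

```python
import torch

def reverse_unpadded(batch, lengths):
    """Reverse each sequence only over its unpadded prefix, leaving the padding
    at the end in place. batch: [B, T, ...], lengths: iterable of ints."""
    out = batch.clone()
    for i, n in enumerate(lengths):
        out[i, :n] = batch[i, :n].flip(0)
    return out

# Three sequences of lengths 3, 2 and 4 in a zero-padded batch:
x = torch.tensor([[1, 2, 3, 0], [4, 5, 0, 0], [6, 7, 8, 9]])
print(reverse_unpadded(x, [3, 2, 4]))
# tensor([[3, 2, 1, 0],
#         [5, 4, 0, 0],
#         [9, 8, 7, 6]])
```

With this reversal the initial timesteps of the reverse pass see real data rather than padding, so the hidden-state variance is not underestimated.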
We emphasize that these results were obtained without any tweaking \u2013 all we did was to introduce batch normalization. BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1% and 43.9% respectively. Model CNN valid CNN test Attentive Reader (Hermann et al., 2015) 38.4 37.0 LSTM (ours) 45.5 45.0 BN-e** (ours) 37.9 36.3 Table 4: Error rates on the CNN question-answering task Hermann et al. (2015). We train and evaluate our best model, BN-e**, on the full task from (Hermann et al., 2015). On this dataset we had to reduce the number of hidden units to 120 to avoid severe over\ufb01tting. Training curves for BN-e** and a vanilla LSTM are shown in Figure 4(b). Table 4 reports performances of the early-stopped models. 8 \fPublished as a conference paper at ICLR 2017 6" + } + ], + "Guy Avni": [ + { + "url": "http://arxiv.org/abs/2305.04096v2", + "title": "A Game of Pawns", + "abstract": "We introduce and study pawn games, a class of two-player zero-sum turn-based\ngraph games. A turn-based graph game proceeds by placing a token on an initial\nvertex, and whoever controls the vertex on which the token is located, chooses\nits next location. This leads to a path in the graph, which determines the\nwinner. Traditionally, the control of vertices is predetermined and fixed. The\nnovelty of pawn games is that control of vertices changes dynamically\nthroughout the game as follows. Each vertex of a pawn game is owned by a pawn.\nIn each turn, the pawns are partitioned between the two players, and the player\nwho controls the pawn that owns the vertex on which the token is located,\nchooses the next location of the token. Control of pawns changes dynamically\nthroughout the game according to a fixed mechanism. Specifically, we define\nseveral grabbing-based mechanisms in which control of at most one pawn\ntransfers at the end of each turn. We study the complexity of solving pawn\ngames, where we focus on reachability objectives and parameterize the problem\nby the mechanism that is being used and by restrictions on pawn ownership of\nvertices. On the positive side, even though pawn games are\nexponentially-succinct turn-based games, we identify several natural classes\nthat can be solved in PTIME. On the negative side, we identify several\nEXPTIME-complete classes, where our hardness proofs are based on a new class of\ngames called Lock & Key games, which may be of independent interest.", + "authors": "Guy Avni, Pranav Ghorpade, Shibashis Guha", + "published": "2023-05-06", + "updated": "2023-11-29", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.MA" + ], + "main_content": "Introduction Two-player zero-sum graph games constitute a fundamental class of games [5] with applications, e.g., in reactive synthesis [26], multi-agent systems [4], a deep connections to foundations of logic [27], and more. A graph game is played on a directed graph \u27e8V, E\u27e9, where V = V1 \u222aV2 is a fixed partition of the vertices. The game proceeds as follows. A token is initially placed on some vertex. When the token is placed on v \u2208Vi, for i \u2208{1, 2}, Player i chooses u with \u27e8v, u\u27e9\u2208E to move the token to. The outcome of the game is an infinite path, called a play. We focus on reachability games: Player 1 wins a play iff it visits a set of target vertices T \u2286V . In this paper, we introduce pawn games, which are graph games in which the control of vertices changes dynamically throughout the game as follows. The arena consists of d pawns. 
For 1 \u2264j \u2264d, Pawn j owns a set of vertices Vj. Throughout the game, the pawns are distributed between the two players, and in each turn, the control of pawns determines which player moves the token. Pawn control may be updated after moving the token by running a predetermined mechanism. Formally, a configuration of a pawn game is a pair \u27e8v, P\u27e9, where v denotes the position of the token and P the set of pawns that Player 1 controls. The player who moves the token is determined according to P: if Player 1 controls a pawn that arXiv:2305.04096v2 [cs.GT] 29 Nov 2023 \f2 A Game of Pawns v0 v1 s t v0 v1 t v2 v3 Figure 1 Left: The pawn game G1; a non-monotonic game under optional-grabbing. Right: The pawn game G2 in which Player 1 wins from \u27e8v0, {v0, v1}\u27e9, but must visit v1 twice. owns v, then Player 1 moves. Specifically, when each vertex is owned by a unique pawn, i.e., V1, . . . , Vd partitions V , then Player 1 moves iff he controls the pawn that owns v. We consider the following mechanisms for exchanging control of pawns. For i \u2208{1, 2}, we denote by \u2212i = 3 \u2212i the \u201cother player\u201d. Optional grabbing. For i \u2208{1, 2}, following a Player i move, Player \u2212i has the option to grab one of Player i\u2019s pawns; namely, transfer one of the pawns that Player \u2212i to his control. Always grabbing. For i \u2208{1, 2}, following every Player i move, Player \u2212i grabs one of Player i\u2019s pawns. Always grabbing or giving. Following a Player i move, Player \u2212i either grabs one of Player i\u2019s pawns or gives her one of his pawns. k-grabbing. For k \u2208N, Player 1 can grab at most k pawns from Player 2 throughout the game. In each round, after moving the token, Player 1 has the option of grabbing one of the pawns that is controlled by Player 2. A grabbed pawn stays in the control of Player 1 for the remainder of the game. Note the asymmetry: only Player 1 grabs pawns. Note that players in pawn games have two types of actions: moving the token and transferring control of pawns. We illustrate the model and some interesting properties of it. \u25b6Example 1. Consider the game G1 in Fig. 1(left). We consider optional-grabbing and the same reasoning applies for always-grabbing. Each vertex is owned by a unique pawn, and Player 1\u2019s target is t. Note that Player 2 wins if the game reaches s. We claim that G1 is non-monotonic: increasing the set of pawns that Player 1 initially controls is \u201charmful\u201d for him. Formally, Player 1 wins from configuration \u27e8v0, \u2205\u27e9, i.e., when he initially does not control any pawns, but loses from \u27e8v0, {v0}\u27e9, i.e., when controlling v0. Indeed, from \u27e8v0, \u2205\u27e9, Player 2 initially moves the token from v0 to v1, Player 1 then uses his option to grab v1, and wins by proceeding to t. Second, from \u27e8v0, {v0}\u27e9, Player 1 makes the first move and thus cannot grab v1. Since Player 2 controls v1, she wins by proceeding to s. In Thm. 5 and 26, we generalize this observation and show, somewhat surprisingly, that if a player wins from the current vertex v, then he wins from v with fewer pawns as long as if he controlled v previously, then he maintains control of v. Consider the game G2 in Fig. 1 (right). We consider optional-grabbing, each vertex is owned by a unique pawn, and Player 1\u2019s target is t. We claim that Player 1 wins from configuration \u27e8v0, {v0, v2}\u27e9and Player 2 can force the game to visit v1 twice. 
This differs from turn-based games in which if Player 1 wins, he can force winning while visiting each vertex at most once. To illustrate, consider the following outcome. Player 1 makes the first move, so he cannot grab v1. Player 2 avoids losing by moving to v2. Player 1 will not grab, move to v3, Player 2 moves to v1, then Player 1 grabs v1 and proceeds to t. We point out that no loop is closed in the explicit configuration graph that corresponds to G2. \fG. Avni, P. Ghorpade and S. Guha 3 Applications Pawn games model multi-agent settings in which the agent who acts in each turn is not predetermined. We argue that such settings arise naturally. Quantitative shield synthesis. It is common practice to model an environment as a Kripke structure (e.g. [28]), which for sake of simplicity, we will think of as a graph in which vertices model environment states and edges model actions. A policy chooses an outgoing edge from each vertex. A popular technique to obtain policies is reinforcement learning (RL) [30] whose main drawback is lack of worst-case guarantees [12]. In order to regain safety at runtime, a shield [18, 6, 12] is placed as a proxy: in each point in time, it can alter the action of a policy. The goal in shield synthesis is to synthesize a shield offline that ensures safety at runtime while minimizing interventions. We suggest a procedure to synthesize shields based on k-grabbing pawn games. Player 2 models an unknown policy. We set his goal to reaching an unsafe state. Player 1 (the shield) ensures safety by grabbing at most k times. Grabbing is associated with a shield intervention. Note that once the shield intervenes in a vertex v, it will choose the action at v in subsequent turns. An optimal shield is obtained by finding the minimal k for which Player 1 has a winning strategy. We describe other examples that can be captured by a k-grabbing pawn game in which, as in the shield application, Player 1 models an \u201cauthority\u201d that has the \u201cupper hand\u201d, and aims to maximize freedom of action for Player 2 while using grabs to ensure safety. Consider a concurrent system in which Player 2 models a scheduler and Player 1 can force synchronization, e.g., by means of \u201clocks\u201d or \u201cfences\u201d in order to maintain correctness (see [13]). Synchronization is minimized in order to maximize parallelism and speed. As another example, Player 1 might model an operating system that allows freedom to an application and blocks only unsafe actions. As a final example, in [2], synthesis for a safety specification was enriched with \u201cadvice\u201d given by an external policy for optimizing a soft quantitative objective. Again, the challenge is how to maximize accepting advice while maintaining safety. Modelling crashes. A sabotage game [32] is a two-player game which is played on a graph. Player 1 (the Runner) moves a token throughout the graph with the goal of reaching a target set. In each round, Player 2 (the Saboteur) crashes an edge from the graph with the goal of preventing Player 1 from reaching his target. Crashes are a simple type of fault that restrict Player 1\u2019s actions. A malicious fault (called byzantine faults [21]) actively tries to harm the network, e.g., by moving away from the target. Pawn games can model sabotage games with byzantine faults: each vertex (router) is owned by a unique pawn, all pawns are initially owned by Player 1, and a Player 2 grab corresponds to a byzantine fault. 
Several grabbing mechanisms are appealing in this context: k-grabbing restricts the number of faults and optionaland always-grabbing accommodate repairs of routers. Our results We distinguish between three types of ownership of vertices. Let V = V1 \u222a. . . \u222aVd be a set of vertices, where for j \u2208{1, . . . , d}, Pawn j owns the vertices in Vj. In one vertex per pawn (OVPP) games, each pawn owns exactly one vertex, thus Vj is a singleton, for all j \u2208{1, . . . , d}. In multiple vertices per pawn (MVPP) games, V1, . . . , Vd consists of a partition of V , where the sets might contain more than one vertex. In overlapping multiple vertices per pawn (OMVPP) games, the sets might overlap. For example, in the shield synthesis application above, the type of ownership translates to dependencies between interventions: OVPP models no dependencies, MVPP models cases in which interventions come in \u201cbatches\u201d, \f4 A Game of Pawns e.g., grabbing control in all states labeled by some predicate, and OMVPP models the case when the batches overlap. We define that Player 1 moves the token from a vertex v iff he controls at least one of the pawns that owns v. Clearly, OMVPP generalizes MVPP, which in turn generalizes OVPP. We consider the problem of deciding whether Player 1 wins a reachability pawn game from an initial configuration of the game. Our results are summarized below. Mechanisms OVPP MVPP OMVPP k-grabbing PTIME (Thm. 31) NP-hard (Thm. 32) PSPACE-C (Thm. 36) Optional-grabbing PTIME (Thm. 7) EXPTIME-C (Thm. 14) EXPTIME-C (Thm. 14) Always PTIME (grab or give; Thm. 30) PTIME (grab or give; Thm. 30) EXPTIME-C (grab; Thm. 25) EXPTIME-C (grab; Thm. 25) Pawn games are succinctly-represented turn-based games. A naive algorithm to solve a pawn game constructs and solves an explicit turn-based game on its configuration graph leading to membership in EXPTIME. We thus find the positive results to be pleasantly surprising; we identify classes of succinctly-represented games that can be solved in PTIME. Each of these algorithms is obtained by a careful and tailored modification to the attractorcomputation algorithm for turn-based reachability games. For OMVPP k-grabbing, the PSPACE upper bound is obtained by observing that grabs in a winning strategy must be spaced by at most |V | turns, implying that a game ends within polynomial-many rounds (Lem. 34). Our EXPTIME-hardness proofs are based on a new class of games called Lock & Key games and may be of independent interest. A Lock & Key game is a turn-based game that is enriched with a set of locks, where each lock is associated with a key. Each edge is labeled by a subset of locks and keys. A lock can either be closed or open. An edge that is labeled with a closed lock cannot be crossed. A lock changes state once an edge labeled by its key is traversed. We show two reductions. The first shows that deciding the winner in Lock & Key games is EXPTIME-hardness. Second, we reduce Lock & Key games to MVPP optional-grabbing pawn games. The core of the reduction consists of gadgets that simulate the operation of locks and keys using pawns. Then, we carefully analyze the pawn games that result from applying both reductions one after the other, and show that the guarantees are maintained when using always grabbing instead of optional grabbing. 
The main difficulty in constructing a winning Player i strategy under always-grabbing from a winning Player i strategy under optional-grabbing is to ensure that throughout the game, both players have sufficient and the correct pawns to grab (Lem. 23). Related work The semantics of pawn games is inspired by the seminal paper [4]. There, the goal is, given a game, an objective O, and a set C of pawns (called \u201cplayers\u201d there), to decide whether Player 1 (called a \u201ccoalition\u201d there) can ensure O when he controls the pawns in C. A key distinction from pawn games is that the set C that is controlled by Player 1 is fixed. The paper introduced a logic called alternating time temporal logic, which was later significantly extended and generalized to strategy logic [14, 23, 24]. Multi-player games with rational players have been widely studied; e.g., finding Nash equilibrium [31] or subgame perfect equilibrium [11], and rational synthesis [16, 19, 33, 10]. A key distinction from pawn games is that, in pawn games, as the name suggests, the owners of the resources (pawns) have no individual goals and act as pawns in the control of the players. Changes to multi-player \fG. Avni, P. Ghorpade and S. Guha 5 graph games in order to guarantee existence or improve the quality of an equilibrium have been studied [3, 25, 9, 20]. The key difference from our approach is that there, changes occur offline, before the game starts, whereas in pawn games, the transfer of vertex ownership occurs online. In bidding games [22, 7] (see in particular, discrete-bidding games [15, 1, 8]) control of vertices changes online: players have budgets, and in each turn, a bidding determines which player moves the token. Bidding games are technically very different from pawn games. While pawn games allow varied and fine-grained mechanisms for transfer of control, bidding games only consider strict auction-based mechanisms, which lead to specialized proof techniques that cannot be applied to pawn games. For example, bidding games are monotonic \u2013 more budget cannot harm a player \u2013 whereas pawn games are not (see Ex. 1). 2 Preliminaries For k \u2208N, we use [k] to denote the set {1, . . . , k}. For i \u2208{1, 2}, we use \u2212i = 3 \u2212i to refer to the \u201cother player\u201d. Turn-based games Throughout this paper we consider reachability objectives. For general graph games, see for example [5]. A turn-based game is G = \u27e8V, E, T\u27e9, where V = V1 \u222aV2 is a set of vertices that is partitioned among the players, E \u2286V \u00d7 V is a set of directed edges, and T \u2286V is a set of target vertices for Player 1. Player 1\u2019s goal is to reach T and Player 2\u2019s goal is to avoid T. For v \u2208V , we denote the neighbors of v by N(v) = {u \u2208V : E(v, u)}. Intuitively, a strategy is a recipe for playing a game: in each vertex it prescribes a neighbor to move the token to. Formally, for i \u2208{1, 2}, a strategy for Player i is a function f : Vi \u2192V such that for every v \u2208Vi, we have f(v) \u2208N(v).1 An initial vertex v0 \u2208V together with two strategies f1 and f2 for the players, give rise to a unique play, denoted \u03c0(v0, f1, f2), which is a finite or infinite path in G and is defined inductively as follows. The first vertex is v0. For j \u22650, assuming v0, . . . , vj has been defined, then vj+1 = fi(vj), where vj \u2208Vi, for i \u2208{1, 2}. A Player 1 strategy f1 is winning from v0 \u2208V if for every Player 2 strategy f2, the play \u03c0(v0, f1, f2) ends in T. 
Dually, a Player 2 strategy f2 is winning from v0 \u2208V if for every Player 1 strategy f1, the play \u03c0(v0, f1, f2) does not visit T. \u25b6Theorem 2. [17] Turn based games are determined: from each vertex, one of the players has a (memoryless) winning strategy. Deciding the winner of a game is in PTIME. Proof sketch. For completeness, we briefly describe the classic attractor-computation algorithm. Consider a game \u27e8V, E, T\u27e9. Let W0 = T. For i \u22651, let Wi = Wi\u22121 \u222a{v \u2208V1 : N(v) \u2229Wi \u0338= \u2205} \u222a{v \u2208V2 : N(v) \u2286Wi}. One can prove by induction that Wi consists of the vertices from which Player 1 can force reaching T within i turns. The sequence necessarily reaches a fixed point W 1 = S i\u22651 Wi, which can be computed in linear time. Finally, one can show that Player 2 has a winning strategy from each v / \u2208W 1. Pawn games A pawn game with d \u2208N pawns is P = \u27e8V, E, T, M\u27e9, where V = V1 \u222a. . . \u222aVd and for j \u2208[d], Vj denotes the vertices that Pawn j owns, E and T are as in turn-based games, and M is a 1 We restrict to memoryless strategies since these suffice for reachability objectives. \f6 A Game of Pawns mechanism for exchanging pawns as we elaborate later. Player 1 wins a play if it reaches T. We stress that the set of pawns that he controls when reaching T is irrelevant. We omit M when it is clear from the context. We distinguish between classes of pawn games based on the type of ownership of vertices: One Vertex Per Pawn (OVPP). There is a one-to-one correspondence between pawns and vertices; namely, |V | = d and each Vj is singleton, for j \u2208[d]. For j \u2208[d] and {vj} = Vj, we sometimes abuse notation by referring to Pawn j as vj. Multiple Vertices Per Pawn (MVPP). Each vertex is owned by a unique pawn but a pawn can own multiple vertices, thus V1, . . . , Vd is a partition of V . Overlapping Multiple Vertices Per Pawn (OMVPP). Each pawn can own multiple vertices and a vertex can be owned by multiple pawns, i.e., we allow Vi \u2229Vj \u0338= \u2205, for i \u0338= j. Clearly OMVPP generalizes MVPP, which generalizes OVPP. In MVPP too, we sometimes abuse notation and refer to a pawn by a vertex that it owns. A configuration of a pawn game is \u27e8v, P\u27e9, meaning that the token is placed on a vertex v \u2208V and P \u2286[d] is the set of pawns that Player 1 controls. Implicitly, Player 2 controls the complement set P = [d] \\ P. Player 1 moves the token from \u27e8v, P\u27e9iff he controls at least one pawn that owns v. Note that in OVPP and MVPP, let j \u2208[d] with v \u2208Vj, then Player 1 moves iff i \u2208P. Once the token moves, we update the control of the pawns by applying M. From pawn games to turn-based games We describe the formal semantics of pawn games together with the pawn-exchanging mechanisms by describing the explicit turn-based game that corresponds to a pawn game. For a pawn game G = \u27e8V, E, T, M\u27e9, we construct the turn-based game G\u2032 = \u27e8V \u2032, E\u2032, T \u2032\u27e9. For i \u2208{1, 2}, denote by V \u2032 i Player i\u2019s vertices in G\u2032. The vertices of G\u2032 consist of two types of vertices: configuration vertices C = V \u00d7 2[d], and intermediate vertices V \u00d7 C. When M is k-grabbing, configuration vertices include the remaining number of pawns that Player 1 can grab, as we elaborate below. The target vertices are T \u2032 = {\u27e8v, P\u27e9: v \u2208T}. We describe E\u2032 next. 
For a configuration vertex c = \u27e8v, P\u27e9, we define c \u2208V \u2032 1 iff there exists j \u2208P such that v \u2208Vj. That is, Player 1 moves from c in G\u2032 iff he moves from c in G. We define the neighbors of c to be the intermediate vertices {\u27e8v\u2032, c\u27e9: v\u2032 \u2208N(v)}. That is, moving the token in G\u2032 from c to \u27e8v\u2032, c\u27e9corresponds to moving the token from v to v\u2032 in G. Moves from intermediate vertices represent an application of M. We consider the following mechanisms. Optional grabbing. For i \u2208{1, 2}, following a Player i move, Player \u2212i has the option to grab one of Player i\u2019s pawns. Formally, for a configuration vertex c = \u27e8v, P\u27e9\u2208V \u2032 1, we have N(c) \u2286V \u2032 2. From \u27e8v\u2032, c\u27e9\u2208N(c), Player 2 has two options: (1) do not grab and proceed to \u27e8v\u2032, P\u27e9, or (2) grab j \u2208P, and proceed to \u27e8v\u2032, P \\ {j}\u27e9. The definition for Player 2 is dual. Always grabbing. For i \u2208{1, 2}, following a Player i move, Player \u2212i always has to grab one of Player i\u2019s pawns. The formal definition is similar to optional grabbing with the difference that option (1) of not grabbing is not available to the players. We point out that Player \u2212i grabs only after Player i has moved, which in particular implies that Player i controls at least one pawn that Player \u2212i can grab. Always grabbing or giving. Following a Player i move, Player \u2212i must either grab one of Player i\u2019s pawns or give her a pawn. The formal definition is similar to always grabbing with the difference that, for an intermediate vertex \u27e8v\u2032, \u27e8v, P\u27e9\u27e9, there are both neighbors of the form \u27e8v\u2032, P \\ {j}\u27e9, for j \u2208P, and neighbors of the form \u27e8v\u2032, P \u222a{j}\u27e9, for j / \u2208P. k-grabbing. After each round, Player 1 has the option of grabbing a pawn from Player 2, and at most k grabs are allowed in a play. A configuration vertex in k-grabbing is c = \u27e8v, P, r\u27e9, \fG. Avni, P. Ghorpade and S. Guha 7 where r \u2208[k]\u222a{0} denotes the number of grabs remaining. Intermediate vertices are Player 1 vertices. Let \u27e8v\u2032, c\u27e9\u2208V \u2032 1. Player 1 has two options: (1) do not grab and proceed to the configuration vertex \u27e8v\u2032, P, r\u27e9, or (2) grab j / \u2208P, and proceed to \u27e8v\u2032, P \u222a{j}, r \u22121\u27e9when r > 0. Note that grabs are not allowed when r = 0 and that Pawn j stays at the control of Player 1 for the remainder of the game. Since pawn games are succinctly-represented turn-based games, Thm. 2 implies determinacy; namely, one of the players wins from each initial configuration. We study the problem of determining the winner of a pawn game, formally defined as follows. \u25b6Definition 3. Let \u03b1 \u2208{OVPP, MVPP, OMVPP} and \u03b2 be a pawn-grabbing mechanism. The problem \u03b1 \u03b2 PAWN-GAMES takes as input an \u03b1 \u03b2 pawn game G and an initial configuration c, and the goal is to decide whether Player 1 wins from c in G. A naive algorithm to solve a pawn game applies attractor computation on the explicit turn-based game, which implies the following theorem. \u25b6Theorem 4. \u03b1 \u03b2 PAWN-GAMES is in EXPTIME, for all values of \u03b1 and \u03b2. 3 Optional-Grabbing Pawn Games Before describing our complexity results, we identify a somewhat unexpected property of MVPP optional-grabbing games. 
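As an illustration of Thm. 4 and of the explicit construction described above, the following unoptimized Python sketch (ours, not part of the paper) builds the configuration graph of an OVPP/MVPP pawn game under the optional-grabbing mechanism and solves it with the attractor computation of Thm. 2. It assumes every vertex has at least one outgoing edge; all function and variable names are ours.

```python
from itertools import combinations

def attractor(vertices, succ, owner, targets):
    """Attractor computation of Thm. 2 (unoptimized fixed point): the set of
    vertices from which Player 1 forces reaching targets; owner[v] is 1 or 2."""
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            s = succ[v]
            if (owner[v] == 1 and any(u in win for u in s)) or \
               (owner[v] == 2 and s and all(u in win for u in s)):
                win.add(v)
                changed = True
    return win

def solve_optional_grabbing(V, E, T, owns, pawns, v0, P0):
    """Naive procedure behind Thm. 4 for optional grabbing: expand the pawn game
    into configuration vertices ('c', v, P) and intermediate vertices
    ('i', u, v, P), then solve the explicit turn-based game.  E[v] lists the
    successors of v; owns[v] is the pawn owning v (OVPP/MVPP ownership)."""
    subsets = [frozenset(s) for r in range(len(pawns) + 1)
               for s in combinations(pawns, r)]
    mover = lambda v, P: 1 if owns[v] in P else 2
    verts, succ, owner = [], {}, {}
    for v in V:
        for P in subsets:
            c = ('c', v, P)
            verts.append(c)
            owner[c] = mover(v, P)
            succ[c] = [('i', u, v, P) for u in E[v]]
            for u in E[v]:
                m = ('i', u, v, P)
                verts.append(m)
                owner[m] = 3 - mover(v, P)   # the other player decides whether to grab
                if mover(v, P) == 1:         # Player 2 may grab one of Player 1's pawns
                    succ[m] = [('c', u, P)] + [('c', u, P - {p}) for p in P]
                else:                        # Player 1 may grab one of Player 2's pawns
                    succ[m] = [('c', u, P)] + [('c', u, P | {p}) for p in pawns if p not in P]
    targets = {('c', v, P) for v in T for P in subsets}
    return ('c', v0, frozenset(P0)) in attractor(verts, succ, owner, targets)

# Game G1 of Example 1 (Fig. 1, left), each vertex owned by its own pawn:
E = {'v0': ['v1'], 'v1': ['s', 't'], 's': ['s'], 't': ['t']}
owns = {v: v for v in E}
print(solve_optional_grabbing(E, E, {'t'}, owns, list(E), 'v0', set()))    # True
print(solve_optional_grabbing(E, E, {'t'}, owns, list(E), 'v0', {'v0'}))   # False
```

On G1 the sketch reproduces the non-monotonicity of Example 1: Player 1 wins from the configuration with no pawns but loses when he initially controls v0.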
Consider a vertex v and two sets of pawns P and P \u2032 having P \u2032 \u2286P. Intuitively, it is tempting to believe that Player 1 \u201cprefers\u201d configuration c = \u27e8v, P\u27e9 over c\u2032 = \u27e8v, P \u2032\u27e9since he controls more pawns in c. Somewhat surprisingly, the following theorem shows that the reverse holds (see also Ex. 1). More formally, the theorem states that if Player 1 wins from c, then he also wins from c\u2032, under the restriction that if he makes the first move at c (i.e., he controls v in c), then he also makes the first move in c\u2032 (i.e., he controls v in c\u2032). \u25b6Theorem 5. Consider a configuration \u27e8v, P\u27e9of an MVPP optional-grabbing pawn game G. Let j \u2208[d] such that v \u2208Vj and P \u2032 \u2286P. Assuming that j \u2208P implies j \u2208P \u2032, if Player 1 wins from \u27e8v, P\u27e9, he wins from \u27e8v, P \u2032\u27e9. Assuming that j \u2208P \u2032 implies j \u2208P, if Player 2 wins from \u27e8v, P \u2032\u27e9, she wins from \u27e8v, P\u27e9. Proof. We prove for Player 1 and the proof for Player 2 follows from determinacy. Let G, P, P \u2032, c = \u27e8v, P\u27e9, and c\u2032 = \u27e8v, P \u2032\u27e9be as in the theorem statement. Let G\u2032 be the turn-based game corresponding to G. For i \u22650, let Wi be the set of vertices in G\u2032 from which Player 1 can win in at most i rounds (see Thm. 2). The following claim clearly implies the theorem. Its proof proceeds by a careful induction. Claim: Configuration vertices: for i \u22650, if \u27e8v, P\u27e9\u2208Wi, then \u27e8v, P \u2032\u27e9\u2208Wi. Intermediate vertices: for i \u22651 and every vertex u \u2208N(v), if \u27e8u, c\u27e9\u2208Wi\u22121, then \u27e8u, c\u2032\u27e9\u2208Wi\u22121. For the base case, we prove for both i = 0 and i = 1. Recall that the target set of G\u2032, which coincides with W0, consists of configuration vertices whose V component is in T. Thus, for the first part of the claim, since c \u2208W0, then v \u2208T, and thus \u27e8v, P \u2032\u27e9\u2208W0. Recall that an intermediate vertex is of the form \u27e8u, b\u27e9, where u denotes the \u201cnext\u201d location of the token and b denotes the \u201cprevious\u201d configuration. In the case of W1, since Player 1 wins in one turn, each vertex in W1 is of the form \u27e8u, b\u27e9, where u \u2208T. Thus, for \u27e8u, c\u27e9\u2208W1, we clearly have \u27e8u, c\u2032\u27e9\u2208W1. For the induction hypothesis, assume that the claim holds for i = n, and we prove for i = n + 1. We start with configuration vertices. Assume c = \u27e8v, P\u27e9\u2208Wn+1. We will show that c\u2032 = \u27e8v, P \u2032\u27e9\u2208Wn+1. Recall that N(c) consist of intermediate vertices of the form \u27e8u, c\u27e9, \f8 A Game of Pawns where u \u2208N(v), thus \u27e8u, c\u27e9\u2208N(c) iff \u27e8u, c\u2032\u27e9\u2208N(c\u2032). Note that if \u27e8u, c\u27e9\u2208N(c) is winning for Player 1, i.e., \u27e8u, c\u27e9\u2208Wn, then by the induction hypothesis, \u27e8u, c\u2032\u27e9\u2208Wn. We distinguish between two cases. In the first case, Player 1 controls c, i.e., j \u2208P and c is a Player 1 vertex. In this case, our assumption is that c\u2032 is also a Player 1 vertex. Since c \u2208Wn+1, there must exist a neighbor \u27e8u, c\u27e9of c having \u27e8u, c\u27e9\u2208Wn. By the above, \u27e8u, c\u2032\u27e9\u2208(N(c\u2032) \u2229Wn), thus c\u2032 \u2208Wn+1. In the second case, Player 2 controls c, thus j / \u2208P. Recall that P \u2032 \u2286P, thus Player 2 controls c\u2032 as well. 
Note that c \u2208Wn+1 implies that Player 1 wins from all of its neighbors of the form \u27e8u, c\u27e9. Now consider the contrapositive of the assumption that states that j / \u2208P \u2032 implies that j / \u2208P which holds here and since \u27e8u, c\u27e9\u2208Wn, by induction hypothesis, it follows that \u27e8u, c\u2032\u27e9\u2208Wn. Thus, Player 1 wins from all the neighbors of c\u2032, thus c\u2032 \u2208Wn+1. We turn to the second part of the claim that addresses intermediate vertices. We again distinguish between two cases: j \u2208P and j / \u2208P. Consider an intermediate vertex \u27e8u, c\u27e9\u2208Wn+1. We will show that \u27e8u, c\u2032\u27e9\u2208Wn+1. We denote by \u2113, the pawn that owns u. First consider the case j \u2208P. Recall that in optional grabbing, Player 2 has the option of grabbing after Player 1 moves, thus \u27e8u, c\u27e9is a Player 2 vertex. Since the claim requires that if j \u2208P, then j \u2208P \u2032, we conclude that \u27e8u, c\u2032\u27e9is also a Player 2 vertex. Consider a neighbor of \u27e8u, c\u2032\u27e9, a configuration vertex \u27e8u, Q\u2032\u27e9. Note that either Player 2 does not use the option to grab, then Q\u2032 = P \u2032, or she grabs a pawn r from Player 1, then Q\u2032 = P \u2032 \\ {r}. Note that in order to apply the induction hypothesis on \u27e8u, Q\u2032\u27e9, we need to find a neighbor \u27e8u, Q\u27e9 of \u27e8u, c\u27e9such that if \u2113/ \u2208Q\u2032, then \u2113/ \u2208Q. Note that at \u27e8u, c\u27e9, Player 2 has the option of not grabbing as well as of grabbing \u2113, thus both \u27e8u, P\u27e9and \u27e8u, P \\ {\u2113}\u27e9are neighbors of \u27e8u, c\u27e9. Note that P = P \\ {\u2113} when \u2113/ \u2208P. That is, Player 2 cannot grab \u2113when she already owns it. Since \u27e8u, c\u27e9\u2208Wn+1 and it is a Player 2 vertex, both \u27e8u, P\u27e9\u2208Wn and \u27e8u, P \\ {\u2113}\u27e9\u2208Wn. Finally, if \u2113\u2208Q\u2032, define Q := P, and if \u2113/ \u2208Q\u2032, define Q := P \\ {\u2113}. In both cases, since Q\u2032 \u2286P \u2032 and P \u2032 \u2286P, we have Q\u2032 \u2286Q and meets the requirement in the claim on \u2113. Thus, since \u27e8u, Q\u27e9\u2208Wn, by the induction hypothesis, \u27e8u, Q\u2032\u27e9\u2208Wn. Since \u27e8u, Q\u2032\u27e9is any arbitrary neighbour of the Player 2 vertex \u27e8u, c\u2032\u27e9, we have that \u27e8u, c\u2032\u27e9\u2208Wn+1. In the second case j / \u2208P. That is, Player 2 makes the move in \u27e8v, P\u27e9, and thus \u27e8u, c\u27e9 is a Player 1 vertex. Since P \u2032 \u2286P, we have j / \u2208P \u2032, thus \u27e8u, c\u2032\u27e9is also a Player 1 vertex. Since \u27e8u, c\u27e9\u2208Wn+1, it has a neighbor \u27e8u, Q\u27e9\u2208Wn. Note that since Player 1 either grabs or does not grab in \u27e8u, c\u27e9, we have P \u2286Q. We find a neighbor \u27e8u, Q\u2032\u27e9of \u27e8u, c\u2032\u27e9to apply the induction hypothesis on. Note that Player 1 has the option to grab at \u27e8u, c\u2032\u27e9, and, among his choices, he can choose not to grab or to grab \u2113. If \u2113\u2208Q, then we choose Q\u2032 := P \u2032 \u222a{\u2113}, and if \u2113/ \u2208Q, we choose Q\u2032 := P \u2032. In both cases, Q\u2032 \u2286Q and meets the requirement in the claim. Thus, since \u27e8u, Q\u27e9\u2208Wn, by the induction hypothesis, we have \u27e8u, Q\u2032\u27e9\u2208Wn, and hence \u27e8u, c\u2032\u27e9\u2208Wn+1. The following corollary of Thm. 
5 shows that we can restrict attention to \u201clocally-grabbing\u201d strategies that only grab the pawn that owns the vertex on which the token is placed. In other words, a locally-grabbing strategy will not grab a pawn if the token is not placed on the vertex that it owns. \u25b6Corollary 6. Consider an MVPP optional-grabbing game. Suppose that Player 1 controls P \u2286[d], and that Player 2 moves the token to a vertex v owned by Pawn j, i.e., v \u2208Vj. Player 1 has the option to grab. If Player 1 can win by grabbing a pawn j\u2032 \u0338= j, i.e., a pawn that does not own the next vertex, he can win by not grabbing at all. Formally, if Player 1 wins from \u27e8v, P \u222a{j\u2032}\u27e9, he also wins from \u27e8v, P\u27e9. And dually for Player 2. We point out that Thm. 5 and Cor. 6 do not hold for OMVPP optional-grabbing games. \fG. Avni, P. Ghorpade and S. Guha 9 3.1 OVPP: A PTIME algorithm We turn to study complexity results, and start with the following positive result. \u25b6Theorem 7. OVPP optional-grabbing PAWN-GAMES is in PTIME. Proof. We describe the intuition of the algorithm, the pseudo-code can be found in Alg. 1. Recall that in turn-based games (see Thm. 2), the attractor computation iteratively \u201cgrows\u201d the set of states from which Player 1 wins: initially W0 = T, and in each iteration, a vertex u is added to Wi if (1) u belongs to Player 2 and all its neighbors belong to Wi or (2) u belongs to Player 1 and it has a neighbor in Wi. In optional-grabbing games, applying attractor computation is intricate since vertex ownership is dynamic. Note that the reasoning behind (1) above holds; namely, if N(u) \u2286Wi, no matter who controls u, necessarily Wi is reached in the next turn. However, the reasoning behind (2) fails. Consider a Player 1 vertex u that has two neighbors v1 \u2208Wi and v2 / \u2208Wi. While u would be in Wi+1 according to (2), under optional-grabbing, when Player 1 makes the move into u, Player 2 can avoid Wi by grabbing u and proceeding to v2. In order to overcome this, our algorithm operates as follows. Vertices that satisfy (1) are added independent of their owner (Line 4). The counterpart of (2) can be seen as two careful steps of attractor computation. First, let B denote the border of Wi, namely the vertices who have a neighbor in Wi (Line 6). Second, a vertex u is in Wi+1 in one of two cases. (i) u \u2208B and all of its neighbors are in B \u222aWi (Line 10). Indeed, if Player 1 controls u he wins by proceeding to Wi and if Player 2 owns u, she can avoid Wi by moving to B, then Player 1 grabs and proceeds to Wi. (ii) Player 2 controls u in the initial configuration and all of its neighbors are in B (Line 12). Indeed, Player 2 cannot avoid proceeding into B, and following Player 2\u2019s move, Player 1 grabs and proceeds to Wi. More formally, consider an input OVPP optional-grabbing game G = \u27e8V, E, T\u27e9and an initial configuration \u27e8v0, P0\u27e9. Correctness of the algorithm follows from the two following claims. First, we show soundness; namely, whenever the algorithm returns that Player 1 wins, he does indeed have a winning strategy from \u27e8v0, P0\u27e9. \u25b7Claim 8. For i \u22650, Player 1 wins from every vertex v \u2208Wi no matter who makes the last step into v. Proof. The proof is by induction on i. The base case is trivial since W0 = T. For the inductive step, suppose that the claim holds for Wi and we show that it holds in all ways that it can be extended. 
Case 1 \u2013 Line 4: for a vertex u having N(u) \u2286Wi, Player 1 wins no matter who controls u since in the next turn, Wi is necessarily reached. Case 2 \u2013 Line 10: We claim that Player 1 wins from u \u2208B\u2032 no matter who controls u. Indeed, if Player 1 controls u, he wins by proceeding to Wi. If Player 2 controls u and avoids Wi, then the next vertex is necessarily in B. Player 1 grabs and proceeds to Wi. Case 3 \u2013 Line 12: Let v \u2208R \\ P0. Player 1 can always ensure that Player 2 controls v by never grabbing v. Also in OVPP optional-grabbing, Player 1 does not need to grab v in order to reach v. Thus if Player 1 has a strategy such that the token reaches v, then he can ensure that the token reaches v and is controlled by Player 2. Hence, once v is reached, since it is controlled by Player 2, she is forced to move from v to a vertex v\u2032 in B. Player 1 then grabs v\u2032 unless he already controls it and moves the token to Wi. We turn to prove completeness. \f10 A Game of Pawns Algorithm 1 Given an OVPP optional-grabbing pawn game G = \u27e8V, E, T\u27e9and an initial configuration c = \u27e8v, P0\u27e9, determines which player wins G from c. 1: W0 = T, i = 0 2: while True do 3: if v0 \u2208Wi then return Player 1 4: Wi+1 = Wi \u222a{u : N(u) \u2286Wi} 5: if Wi \u0338= Wi+1 then i := i + 1; Continue 6: B := {u : N(u) \u2229Wi \u0338= \u2205} 7: if B = \u2205then return Player 2 8: if v0 \u2208B and v0 \u2208P0 then return Player 1 9: B\u2032 := {u \u2208B : N(u) \u2286B \u222aWi} 10: if B\u2032 \u0338= \u2205then Wi+1 := Wi \u222aB\u2032; i := i + 1; Continue 11: R = {u : N(u) \u2286B} 12: if R \\ P0 \u0338= \u2205then Wi+1 = Wi \u222a(R \\ P0); i := i + 1 13: else return Player 2 \u25b7Claim 9. Suppose that the algorithm terminates at iteration i in Lines 7 or 13. Then, Player 2 has a winning strategy from \u27e8v0, P0\u27e9. Proof. We first consider the case of terminating in Line 7. Thus, every vertex not in Wi, including v0, does not have a path to Wi. Clearly Player 2 wins. Second, suppose that the algorithm terminates in Line 13. We describe a winning Player 2 strategy from \u27e8v0, P0\u27e9that avoids Wi. Whenever Player 2 moves the token, she moves it to an arbitrary vertex not in Wi \u222aB. When the token reaches u \u2208B, it is following a Player 1 move, and Player 2 grabs u, and moves to Wi \u222aB. We claim that Player 2 wins when the game starts from \u27e8v0, P0\u27e9. Assume towards contradiction that Wi is reached. Since v0 / \u2208Wi, a visit to Wi must first visit a vertex u \u2208B. We claim that (i) u is in the control of Player 2 and that (ii) she can move away from Wi. Combining (i) and (ii) we reach a contradicting to the assumption that Wi is reached. We conclude the proof by showing that (i) and (ii) hold. For (ii), note that u has a neighbor not in Wi \u222aB otherwise u would have been added to Wi+1 in Line 4 or Line 12. We turn to prove (i). Suppose first that u = v0. If Player 1 controls v0, then the algorithm terminates in Line 8. Since it does not terminate there, Player 2 controls u. Next we consider the case that u \u0338= v0, thus u has a predecessor u\u2032 in the play. We distinguish between two cases. If u\u2032 \u2208R, then u\u2032 \u2208P0 otherwise it would have been added to Wi+1 in Line 12. Thus, Player 1 makes the move from u\u2032 to u, and Player 2 grabs u following Player 1\u2019s move (if she does not control it already). The second case is when u\u2032 / \u2208R. 
Recall that a vertex is in R if all of its neighbors lead to B, thus u\u2032 / \u2208R implies that it has at least one neighbor not in B. Note that if u\u2032 was in the control of Player 2, her strategy would have dictated to proceed to a vertex not in B. Thus, if Player 1 makes the move from u\u2032 to u, and Player 2 grabs u following Player 1\u2019s move. Finally, note that the algorithm terminates once a fixed point is reached, thus it runs for at most |V | iterations. This concludes the proof of the theorem. \fG. Avni, P. Ghorpade and S. Guha 11 3.2 MVPP: EXPTIME-hardness via Lock & Key games We prove hardness of MVPP optional-grabbing pawn games by reduction through a class of games that we introduce and call Lock & Key games, and may be of independent interest. A Lock & Key game is G = \u27e8V, E, T, L, K, \u03bb, \u03ba\u27e9, where \u27e8V, E, T\u27e9is a turn-based game, L = {\u21131, . . . , \u2113n} is a set of locks K = {k1, . . . , kn} is a set of keys, each \u2113j is associated to key kj \u2208K for j \u2208[n], and each edge is labeled by a set of locks and keys respectively given by \u03bb : E \u21922L and \u03ba : E \u21922K. Note that a lock and a key can appear on multiple edges. Intuitively, a Lock & Key game is a turn-based game, only that the locks impose restrictions on the edges that a player is allowed to cross. Formally, a configuration of a Lock & Key game is c = \u27e8v, A\u27e9\u2208V \u00d7 2L, meaning that the token is placed on v and each lock in A is closed (all other locks are open). When v \u2208Vi, for i \u2208{1, 2}, then Player i moves the token as in turn-based games with the restriction that he cannot choose an edge that is labeled by a closed lock, thus e = \u27e8v, u\u27e9\u2208E is a legal move at c when \u03bb(e) \u2286(L \\ A). Crossing e updates the configuration of the locks by \u201cturning\u201d all keys that e is labeled with. Formally, let \u27e8u, A\u2032\u27e9be the configuration after crossing e. For kj \u2208\u03ba(e) (\u201ckey kj is turned\u201d), we have \u2113j \u2208A iff \u2113j / \u2208A\u2032. For kj / \u2208\u03ba(e) (\u201ckey kj is unchanged\u201d), we have \u2113j \u2208A iff \u2113j \u2208A\u2032. Note that, similar to pawn games, each Lock & Key game corresponds to an exponentially sized two-player turn-based game. Thus, membership in EXPTIME is immediate. For the lower bound, we show a reduction for the problem of deciding whether an alternating polynomial-space Turing machine (ATM) accepts a given word. \u25b6Theorem 10. Given a Lock & Key game G and an initial configuration c, deciding whether Player 1 wins from c in G is EXPTIME-complete. Proof. We briefly describe the syntax and semantics of ATMs, see for example [29], for more details. An ATM is A = \u27e8Q, \u0393, \u03b4, q0, qacc, qrej\u27e9, where Q is a collection of states that is partitioned into Q = Q1 \u222aQ2 owned by Player 1 and Player 2 respectively, \u0393 is a tape alphabet, \u03b4 : Q \u00d7 \u0393 \u21922Q\u00d7\u0393\u00d7{L,R} is a transition function, q0, qacc, qrej \u2208Q are respectively an initial, accepting, and rejecting states. A configuration of A is c = \u27e8q, i, \u27e8\u03b31, . . . , \u03b3m\u27e9\u27e9, meaning that the control state is q, the head position is i, and \u27e8\u03b31, . . . , \u03b3m\u27e9is the tape content, where m is polynomial in the length of the input word w. 
In order to determine whether A accepts w we construct a (succinctly-represented) turn-based game over the possible configurations of A, the neighbors of a configuration are determined according to \u03b4, and, for i \u2208{1, 2}, Player i moves from states in Qi. We say that A accepts w iff Player 1 has a winning strategy from the initial configuration for the target state qacc. Given A and w, we construct a Lock & Key game G = \u27e8V, E, T, L, K, \u03bb, \u03ba\u27e9and an initial configuration \u27e8v0, A\u27e9such that Player 1 wins from \u27e8v, A\u27e9in G iff w is accepted by A. The vertices of G consist of main and intermediate vertices. Consider a configuration c = \u27e8q, i, \u27e8\u03b31, . . . , \u03b3m\u27e9\u27e9of A. We simulate c in G using c\u2032 = \u27e8v, A\u27e9as follows. First, the main vertices are Q \u00d7 {1, . . . , m} \u00d7 \u0393 and keep track of the control state and position on the tape. The main vertex that simulates c = \u27e8q, i, \u27e8\u03b31, . . . , \u03b3m\u27e9\u27e9is v = \u27e8q, i, \u03b3i\u27e9. We define v \u2208Vi iff q \u2208Qi. Second, we use locks to keep track of the tape contents. For each 1 \u2264i \u2264m and \u03b3 \u2208\u0393, we introduce a lock \u2113i,\u03b3. Then, in the configuration c\u2032 = \u27e8v, A\u27e9that simulates c, the only locks that are open are \u2113i,\u03b3i, for i \u2208{1, . . . , m}. Next, we describe the transitions, where intermediate vertices are used for book-keeping. The neighbors of a main vertex v are the intermediate vertices {\u27e8v, t\u27e9: t \u2208\u03b4(q, \u03b3)}, where a transition of A is t = \u27e8q\u2032, \u03b3\u2032, B\u27e9, meaning that the next control state is q\u2032, the tape head moves to i + 1 if B = R and to i \u22121 if B = L, and the i-th tape content changes from \u03b3 to \u03b3\u2032. We update the state of the locks so that they reflect the tape contents: for the edge \u27e8v, \u27e8v, t\u27e9\u27e9, we have \u03ba(\u27e8v, \u27e8v, t\u27e9\u27e9) = {ki,\u03b3, ki,\u03b3\u2032}. \f12 A Game of Pawns v\u2032 1 v1 v2 v1 v\u2032 2 v2 s t Figure 2 From turn-based to optional-grabbing games. That is, traversing the edge turn the keys to close \u2113i,\u03b3 and open \u2113i,\u03b3\u2032. The neighbors of \u27e8v, t\u27e9are main vertices having control state q\u2032 and head position i\u2032. Recall that the third component of a main vertex is the tape content at the current position. We use the locks\u2019 state to prevent moving to main vertices with incorrect tape content: outgoing edges from \u27e8v, t\u27e9are of the form \u27e8\u27e8v, t\u27e9, \u27e8q\u2032, i\u2032, \u03b3\u2032\u2032\u27e9\u27e9and is labeled by the lock \u2113i\u2032,\u03b3\u2032\u2032. That is, the edge can only be traversed when the i\u2032-th tape position is \u03b3\u2032\u2032. It is not hard to verify that there is a one-to-one correspondence between runs of A and plays of G. Thus, Player 1 forces A to reach a configuration with control state qacc iff Player 1 forces to reach a main vertex with control state qacc. Note that the construction is clearly polynomial since G has |Q| \u00b7 m \u00b7 |\u0393| main vertices. 3.2.1 From Lock & Key games to optional-grabbing pawn games Throughout this section, fix a Lock & Key game G and an initial configuration c. We construct an optional-grabbing pawn game G\u2032 over a set of pawns [d], and identify an initial configuration c\u2032 such that Player 1 wins in G from c iff Player 1 wins from c\u2032 in G\u2032. 
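Before turning to the details of this reduction, note that the EXPTIME upper bound mentioned earlier (every Lock & Key game corresponds to an exponentially sized turn-based game) can also be prototyped directly. The sketch below is ours: it reuses the attractor function from the earlier pawn-game sketch, all names are assumptions, and a configuration in which every outgoing edge is blocked by a closed lock is a dead end, hence losing for reaching T.

```python
from itertools import combinations

def solve_lock_and_key(V, E, owner, T, locks, lam, kap, v0, closed0):
    """Unfold a Lock & Key game into an explicit turn-based game over
    configurations (v, A), where A is the set of currently closed locks.
    lam[(u, v)] and kap[(u, v)] are the locks and keys labelling edge (u, v)."""
    subsets = [frozenset(s) for r in range(len(locks) + 1)
               for s in combinations(locks, r)]
    confs, succ, own = [], {}, {}
    for v in V:
        for A in subsets:
            c = (v, A)
            confs.append(c)
            own[c] = owner[v]
            succ[c] = [(u, A ^ frozenset(kap[(v, u)]))   # crossing the edge turns its keys
                       for u in E[v]
                       if not (lam[(v, u)] & A)]          # a closed lock blocks the edge
    targets = {(v, A) for v in T for A in subsets}
    return (v0, frozenset(closed0)) in attractor(confs, succ, own, targets)
```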
From turn-based games to optional-grabbing games In this section, we consider the case in which G has no keys or locks, thus G is a turn-based game. The reduction is depicted in Fig. 2. Denote the turn-based game G = \u27e8V, E, T\u27e9with V = V1 \u222aV2 and initial vertex v0. We construct an OVPP optional-grabbing G\u2032 = \u27e8V \u2032, E\u2032, T \u2032\u27e9, where V \u2032 = V \u222a{v\u2032 : v \u2208V } \u222a{s, t}. We add edges to ensure that the player who owns a vertex v \u2208V is the player who moves from v in G\u2032: we have \u27e8v\u2032, v\u27e9\u2208E\u2032, and if v \u2208V1, then \u27e8v, s\u27e9\u2208E\u2032, and if v \u2208V2, then \u27e8v, t\u27e9\u2208E\u2032. We redirect each edge \u27e8u, v\u27e9in G to \u27e8u, v\u2032\u27e9in G\u2032. Intuitively, for v \u2208V1, a Player 1 winning strategy will guarantee that v\u2032 is always in the control of Player 2, and following her move at v\u2032, Player 1 must grab v otherwise Player 2 wins and choose the next location. And dually for v \u2208V2. Let V \u2032 1 = V1 \u222a{v\u2032 : v \u2208V2}, the initial configuration of G\u2032 is \u27e8v0, V \u2032 1\u27e9, that is Player 2 controls V2 \u222a{v\u2032 : v \u2208V1}. We thus have the following: \u25b6Lemma 11. For a turn-based game G, Player 1 wins G from a vertex v0 \u2208V iff Player 1 wins the optional-grabbing game G\u2032 from configuration \u27e8v0, V \u2032 1\u27e9. Proof. Let f be a Player 1 winning strategy in G. We describe a Player 1 winning strategy f \u2032 in G\u2032. The proof is dual when Player 2 wins in G. Player 1\u2019s strategy f \u2032 simulates the execution of f on G so that when the token is placed on v \u2208V in G\u2032 it is also placed on v in G. Moreover, if v \u2208V1 in G, then Player 1 controls v in G\u2032. Initially, the invariant clearly holds. Suppose that Player 2 is playing G\u2032 according to some strategy g\u2032. We show that either f \u2032 wins by reaching t, or that the invariant is maintained. Suppose that the token is placed on v \u2208V . We distinguish between three cases. (1) If v \u2208V2 and Player 1 owns v in G\u2032, then he moves to t to win the game. (2) Suppose that v \u2208V2, that Player 2 owns v, and that she moves to u\u2032. The simulation in G proceeds to vertex u. In order to maintain the second part of the invariant, Player 1 does not grab u\u2032 but grabs u when u \u2208V1, which is possible because in such a case we define u\u2032 \u2208V \u2032 2. (3) When v \u2208V1, \fG. Avni, P. Ghorpade and S. Guha 13 the invariant implies that Player 1 controls v in G\u2032. Player 1 then copies the move of f in G; namely, let u = f(v), then u\u2032 = f \u2032(v). The move from u\u2032 is as in case (2). Let v0, v1, . . . be the play in G. Clearly, if case (1) occurs at some point, Player 1 wins G\u2032. Assume that it does not occur. Then, the invariant implies that the play in G\u2032 is v0, v\u2032 1, v1, v\u2032 2, v2, . . .. Since f is winning in G, there is an i \u22650 such that vi = t, thus the play is winning in G\u2032 as well. Gadgets for simulating locks and keys The core of the reduction is to construct gadgets that simulate locks and keys. Let G\u2032 denote the optional-grabbing pawn game that we construct, and let [d] denote its set of pawns. For each lock \u2113\u2208L and its corresponding key k \u2208K, we construct gadgets G\u2113and Gk that simulate the operations of \u2113and k in G\u2032. The gadgets in two states are depicted in Fig. 3. 
We highlight three pawns colored blue, green, and red, respectively owning, {v\u2113 1, vk 1}, {v\u2113 2, vk 2, vk 7, vk 8}, and {vk in, vk 4, vk 5, vk 6}. Each of the other vertices (colored white) is owned by a fresh pawn. Intuitively, for each lock \u2113, we identify two sets P\u2113 O, P\u2113 C \u22862[d], respectively representing an open and closed state of \u2113. We will ensure that when entering and exiting a gadget, the configuration is in one of these sets. When the set of pawns that Player 1 controls is in P\u2113 O and P\u2113 C, we respectively say that G\u2113is in open and closed state, and similarly for Gk as stated below. We define P\u2113 O = {P \u22082[d] : v\u2113 1 / \u2208P \u2227v\u2113 2 \u2208P} and P\u2113 C = {P \u22082[d] : v\u2113 1 \u2208P \u2227v\u2113 2 / \u2208P}. Formally, we define P\u2113 O = {P \u22082[d] : v\u2113 1 / \u2208P \u2227v\u2113 2 \u2208P} and P\u2113 C = {P \u22082[d] : v\u2113 1 \u2208P \u2227v\u2113 2 / \u2208P}. \u25b6Lemma 12. Let i \u2208{1, 2}. An open lock stays open: If Player i enters G\u2113in P\u2113 O, then he has a strategy that guarantees that either he wins G\u2032 or G\u2113is exited in P\u2113 O. A closed lock cannot be crossed: If Player i enters G\u2113in P\u2113 C, then Player \u2212i has a strategy that guarantees that Player i loses G\u2032. Proof. We prove for Player 1 and the proof is dual for Player 2. First, suppose Player 1 enters G\u2113in P\u2113 O. Player 2 may or may not grab v\u2113 in, and the game can proceed to either v\u2113 1 or v\u2113 2. We argue that if the game proceeds to v\u2113 1, then Player 1 will not grab v\u2113 1. We can also similarly show that if the game proceeds to v\u2113 2, then Player 2 will not grab v\u2113 2. Player 2 controls v\u2113 1. We claim that if Player 1 grabs v\u2113 1, he will lose the game. Indeed, following Player 1\u2019s move in v\u2113 1, Player 2 will grab v\u2113 3 and move the token to the sink vertex s to win the game. Thus, Player 1 does not grab v\u2113 1 and keeps it in the control of Player 2. Following Player 2\u2019s move in v\u2113 1, Player 1 grabs v\u2113 3 and proceeds to exit G\u2113. Note that when G\u2113is exited, Player 1 maintains control of v\u2113 2 and Player 2 maintains control of v\u2113 1, thus the configuration is in P\u2113 O. Second, suppose that Player 1 enters G\u2113in P\u2113 C. Then, Player 2 grabs v\u2113 in and moves the token to v\u2113 1. Since Player 1 controls v\u2113 1 he must make the next move. Player 2 then grabs v\u2113 3 and moves the token to s to win the game. Next, we present the gadget Gk for simulating the operation of a key k (see Fig. 3). Intuitively, we maintain that Gk is in open state iff G\u2113is in open state, and traversing Gk swaps the state of both. We define sets of configurations Pk O = {P \u22082[d] : {vk in, vk 1, vk 4, vk 5, vk 6} \u2229P = \u2205\u2227{vk 2, vk 7, vk 8} \u2286P} and Pk C = {P \u22082[d] : {vk in, vk 1, vk 4, vk 5, vk 6} \u2286P \u2227{vk 2, vk 7, vk 8} \u2229P = \u2205} (see Fig. 3). Note that Pk O \u2286P\u2113 O and Pk C \u2286P\u2113 C since vk i and v\u2113 i are owned by the same pawn for i \u2208[2]. \u25b6Lemma 13. Turning k closes an open \u2113: Let i \u2208{1, 2}. If Player i enters Gk in Pk O, then he has a strategy that ensures that either Player i wins G\u2032 or Gk is exited in Pk C. 
Turning k \f14 A Game of Pawns v\u2113 in v\u2113 1 v\u2113 2 v\u2113 3 v\u2113 4 v\u2113 out s t v\u2113 in v\u2113 1 v\u2113 3 v\u2113 out v\u2113 4 s t v\u2113 2 vk in vk 1 vk 2 vk 3 s t vk 4 vk 5 vk 6 vk 7 vk 8 vk out t s s t vk in vk 1 vk 2 vk 3 s t vk 4 vk 5 vk 6 vk 7 vk 8 vk out t s s t Figure 3 From left to right: G\u2113in open and closed state and Gk in open and closed state. opens a closed \u2113: when Player i enters Gk in Pk C, Player i ensures that either he wins G\u2032 or Gk is exited in Pk O. Proof. We depict Gk in two configurations in Fig. 3. The vertices vk 3 and vk out are controlled by one-vertex pawns. The rest of the vertices in Gk, not including targets, are controlled by three pawns and are depicted using colors. Vertex vk 1 is controlled by a \u201cblue\u201d pawn, who also owns v\u2113 1. Vertices vk 2, vk 7, and vk 8 are all owned by a \u201cgreen\u201d pawn who also controls v\u2113 2. The other vertices are owned by a \u201cred\u201d pawn. We simulate the properties of a key using the configurations, where we prove the claim for an entry configuration in Pk C and the proof for Pk O is dual. We claim that for i \u2208{1, 2}, if the token lands on vk 2 when Player i controls that vertex, then Player i loses. Indeed, following Player i\u2019s move at vk 2, Player \u2212i grabs vk 3 and directs the token to the winning vertex. It follows that Player 1 does not grab vk in upon entry to Gk and allows Player 2 to move. Following Player 2\u2019s move at vk in, Player 1 grabs vk 1, and proceeds to vk 4. Since Player 1 moves and vk 4 is in the control of Player 2, no change of pawns occurs. We claim that Player 2 now proceeds to vk 5. Indeed, if she proceeds to vk 6, Player 1 will grab vk 6 and win by moving to t. Observe that Player 1 grabs the red pawn since it controls vk 5 following Player 2\u2019s move from vk 4 to vk 5. Indeed, otherwise Player 2 proceeds to s to win the game. Finally, following Player 1\u2019s move from vk 5, Player 2 must grab the green pawn since it grabs vk 7, otherwise Player 1 moves from vk 7 to t to win the game. To conclude, if the token exits Gk, Player 1 has grabbed the blue and red pawns and Player 2 has grabbed the green pawn. The configuration upon exiting Gk is thus in Pk O, and we are done. Putting it all together We describe the construction of a pawn game G\u2032 from a Lock & Key game G. We assume w.l.o.g. that each edge \u27e8u, v\u27e9in G is labeled by at most one lock or key since an edge that is labeled by multiple locks or keys can be split into a chain of edges, each labeled by a single lock or a key. We describe the construction of G\u2032. We first apply the construction for turn-based games on G while \u201cignoring\u201d the locks and keys. Recall that the construction introduces fresh vertices so that an edge e = \u27e8u, v\u27e9in G is mapped to an edge e\u2032 = \u27e8u\u2032, v\u27e9in G\u2032. We re-introduce the locks and keys so that the labeling of e\u2032 coincides with the labeling of e. Next, we replace an edge e\u2032 that is labeled by a lock \u2113, by a copy of G\u2113, and if e is labeled by a key k, we replace e\u2032 by a copy of Gk. Note that multiple edges could be labeled by the same lock \u2113. In such a case we use fresh vertices in each copy of G\u2113, but crucially, all gadgets share the same pawns so that they share the same state. And similarly for keys. For an illustration of this construction, see Fig. 
4, which applies the construction on a Lock & Key game that is output from the reduction in Thm. 10. Finally, given an initial configuration c = \u27e8v, A\u27e9 of G we define an initial configuration c\u2032 = \u27e8v, P\u27e9 of G\u2032. Note that the initial vertex is the entry point of the gadget that simulates v in G\u2032. For each lock \u2113 and corresponding key k, if \u2113 is open according to A, then P \u2208P\u2113 O, i.e., both G\u2113 and Gk are initially in open state. And similarly when \u2113 is closed according to A. Combining the properties in Lemmas 11, 12, and 13 implies that Player 1 wins G from c iff Player 1 wins G\u2032 from c\u2032. Thus, by Thm. 10, we have the following. \u25b6Theorem 14. MVPP optional-grabbing PAWN-GAMES is EXPTIME-complete. 4 Always-Grabbing Pawn Games In this section, we study always-grabbing pawn games and show that MVPP always-grabbing pawn games are EXPTIME-complete. The main challenge is proving the lower bound. We proceed as follows. Let M be an ATM. Apply the reduction in Thm. 10 and the one in Thm. 14 to obtain pairs \u27e8G, c\u27e9 and \u27e8G\u2032, c\u2032\u27e9, where G and G\u2032 are respectively a Lock & Key game and an optional-grabbing game with initial configurations c and c\u2032. We devise a construction that takes \u27e8G\u2032, c\u2032\u27e9 and produces an always-grabbing game G\u2032\u2032 and a configuration c\u2032\u2032 such that Player 1 wins from c\u2032\u2032 in G\u2032\u2032 iff he wins from c in G. Our analysis heavily depends on the special structure of G\u2032. The construction in Thm. 10 outputs a game G with main vertices of the form \u27e8q, i, \u03b3\u27e9 (q is a state, i is a tape position, and \u03b3 is a letter in the tape alphabet). A play of G can be partitioned into paths between main vertices. Each such path corresponds to one transition of the Turing machine and traverses two keys and a lock before again reaching a main vertex. Recall that when constructing G\u2032 from G, we replace locks and keys with their respective gadgets, and for every vertex v that belongs to G, we add a new primed vertex v\u2032 such that if v is controlled by Player i then v\u2032 is controlled by Player \u2212i. We call a path in G\u2032 that corresponds to a path in G between two successive main vertices, say v and v\u2032, a \u03b4-path. Fig. 4 depicts a \u03b4-path. Figure 4 A \u03b4-path is a path between two primed main vertices in an optional- or always-grabbing game, and it crosses two key gadgets and one lock gadget. The \u03b4-path shown there is along a closed key ki, an open key kj and an open lock \u2113r such that r \u0338= i. An open key represents that the corresponding lock is currently open, and going through this key closes the lock; moreover, the state of the key changes from open to closed. Similarly, a closed key represents that the corresponding lock is currently closed, and going through this key opens the lock; the state of the key changes from closed to open. Recall from the proof of Thm. 10 that we go through the open key for lock \u2113i,\u03b3\u2032 and the closed key for lock \u2113i,\u03b3 and, finally, the open lock \u2113i\u2032,\u03b3\u2032\u2032.
For simplicity of notation, here, we refer to the keys corresponding to the locks \u2113i,\u03b3\u2032 and \u2113i,\u03b3 as ki and kj respectively, while we denote by \u2113r the lock \u2113i\u2032,\u03b3\u2032\u2032. Recall from Section 3.2 that the gadget for Key km and Lock \u2113m mainly have vertices owned by three pawns pRedm, pBluem, pGreenm. pRedm owns vertices {vkm in , vkm 4 , vkm 5 , vkm 6 }, pawn pBluem owns {vkm 1 , vlm 1 }, and pawn pGreenm owns {vkm 2 , vkm 7 , vkm 8 , vlm 2 }. In this \u03b4-path we have pawns pRedm, pBluem, pGreenm for m = i, j, r (The pawns pBluei, pBluej, pBluer are different pawns even though the vertices owned by them have the same colour in a \u03b4-path and the same holds for vertices belonging to pawns pRed and \f16 A Game of Pawns pGreen.). The ownership of the diamond vertices is not fixed; they can be either controlled by Player 1 or Player 2. An important property of the specific optional-grabbing game G\u2032 that is constructed in Thm 10 from an ATM is that every play of G\u2032 consists of a sequence of \u03b4-paths. The following observation can easily be verified: \u25b6Observation 15. A \u03b4-path from v\u2032 to v\u2032 2 consists of 20 turns. The following lemma is crucial for the construction of G\u2032\u2032. The proof of this lemma is obtained by proving several claims made below. \u25b6Lemma 16. For i \u2208{1, 2}, if Player i has a strategy in the optional-grabbing game G\u2032 to cross a \u03b4-path from v\u2032 to v\u2032 2, then Player i has a strategy that moves the token in at least 10 rounds and Player \u2212i moves the token in at most 10 rounds in the \u03b4-path. Proof. We will prove this lemma for i = 1 that is for Player 1. The proof for Player 2 is similar. This follows from the fact that in Figure 4, the vertices controlled initially by Player 1 in Key ki have their corresponding vertices in Key kj that are initially controlled by Player 2. Further, the gadget for the Lock \u2113r is symmetric for both players. In order to prove the lemma we first prove some claims. Note that in all these claims, we talk about a particular player controlling a vertex, for example, in Claim 17, we say that Player 2 controls vlr 1 , because Player 1 can ensure that the game reaches such configuration from vertex v\u2032. We can indeed show that Player 1 has a strategy such that in the \u03b4-path, Player 2 controls vertex vlr 1 when the token reaches this vertex. Refer to Figure 4 in the following claims. \u25b7Claim 17. If Player 2 controls vertex vlr 1 then Player 1 has a strategy from vlr 1 to reach v\u2032 2 which ensures that out of the 4 rounds that moves the token to v\u2032 2, he moves the token in at least 2 rounds. Proof. Suppose the token is at vertex vlr 1 and Player 2 controls vlr 1 . Then Player 2 moves the token from vlr 1 to vlr 3 . Player 1 can now grab a pawn. If Player 2 controls vertex vlr 3 , Player 1 grabs the pawn owning this vertex else he chooses to not grab a pawn. Note that under both the cases by the end of this round Player 1 controls vertex vlr 3 . He now moves the token to vlr out. Note that till now Player 1 has moved the token in one round. Now by the end of this round there are two possible cases: 1. Player 1 controls vlr out. In this case Player 1 moves the token to u\u2032, and thus it Player 1 has moved the token in two rounds, and hence regardless of what happens in next round the claim holds. 2. Player 2 controls vlr out. 
In this case Player 2 moves the token to u\u2032 and now Player 1 can control u\u2032 and move the token to v\u2032 2, which will be his second instance of moving the token and thus the claim holds. Hence showed. \u25b7Claim 18. If Player 1 controls vertex vlr 2 then he has a strategy from vlr 2 to reach v\u2032 2 which ensures that, out of the 4 rounds that moves the token from vlr 2 to v\u2032 2, he moves the token in at least 2 rounds. Proof. Suppose the token is at vertex vlr 2 and Player 1 controls vlr 2 . Then Player 1 moves the token from vlr 2 to vlr 4 . Now if Player 1 has the pawn owning vertex vlr 4 , then Player 2 is forced to grab this pawn in this round. Thus by the end of this round Player 2 controls vlr 4 . \fG. Avni, P. Ghorpade and S. Guha 17 Now Player 2 moves the token from vlr 4 to vlr out. Player 1 now can grab a pawn. If Player 2 controls vertex vlr out, Player 1 grabs the pawn owning this vertex else he chooses to not grab a pawn. Note under both the cases by the end of this round Player 1 controls vertex vlr out. He now moves the token to u\u2032, which will be his second instance of moving the token and thus the claim holds. \u25b7Claim 19. If Player 1 controls vertex vkj out then he has a strategy from vkj out to reach v\u2032 2 which ensures that, out of the 7 rounds that moves the token from vkj out to v\u2032 2 he moves the token in at least 4 rounds. Proof. Suppose the token is at vertex vkj out and Player 1 controls vkj out. Then Player 1 moves the token from vkj out to u. Now by the end of this round there are two possible cases: 1. Player 1 controls u. In this case Player 1 moves the token to vlr in. Note that till now Player 1 has moved the token in two rounds. Now again by the end of this round there are two possible cases: a. Player 1 controls vlr in. In this case Player 1 moves the token to vlr 1 . Now note that the token is at vertex vlr 1 and Player 2 controls vlr 1 . Thus by Claim 17 we know that from here Player 1 moves the token in at least 2 rounds before reaching v\u2032 2. As till now when the token is at vlr 1 Player 1 has moved the token in three rounds, thus overall under this case Player 1 moves the token in at least 5 rounds. b. Player 2 controls vlr in. In this case Player 2 either moves the token to the vertex vlr 1 in which case Player 1 chooses to not grab. Now note that the token is at vertex vlr 1 and Player 2 controls vlr 1 . Thus by Claim 17 we know that from here Player 1 moves the token in at least 2 rounds before reaching v\u2032 2. Otherwise suppose from vlr in Player 2 moves the token to vertex vlr 2 , in which case Player 1 chooses to not grab. Now note that here the token is at vertex vlr 2 and Player 1 controls vlr 2 . Thus by Claim 18 we know that from here Player 1 moves the token in at least 2 rounds before reaching v\u2032 2. Thus in this case from vlr in Player 1 moves the token in at least two rounds. As till reaching vlr in Player 1 has moved the token in two rounds, thus overall under this case Player 1 moves the token in at least 4 rounds as the token reaches v\u2032 2. 2. Player 2 controls u. In this case Player 2 moves the token to vlr in. Player 1 now can grab a pawn. If Player 2 controls the vertex vlr in, Player 1 grabs the pawn owning this vertex else he choose to not grab a pawn. Note under both the cases by the end of this round Player 1 controls vertex vlr in. He now moves the token to vlr 1 . 
Now note that the token is at vertex vlr 1 and Player 2 controls vlr 1 thus by Claim 17 we know that from here Player 1 moves the token in at least two rounds before reaching v\u2032 2. As till reaching vlr 1 Player 1 has moved the token in two rounds, thus overall under this case Player 1 moves the token in at least 4 rounds. Hence showed. \u25b7Claim 20. If Player 2 controls vertex vkj in, then Player 1 has a strategy from vkj in to reach v\u2032 2 which ensures that, out of the 12 rounds that moves the token from vkj in to v\u2032 2 he moves the token in at least 6 rounds. Proof. Suppose the token is at vertex vkj in and Player 2 controls vkj in. Then Player 2 moves the token from vkj in to vkj 1 . In this round Player 1 grabs pawn pBluej inorder to control vkj 1 . Now Player 1 controls vkj 1 , he moves the token to vertex vkj 4 . In the next round Player 2 moves the token from vkj 4 to vkj 5 . Player 1 in this round grabs pRedj inorder to control vkj 5 . \f18 A Game of Pawns Now Player 1 moves the token from vkj 5 to vkj 7 . Here Player 2 grabs pGreenj inorder to control vkj 7 . Now Player 2 moves the token from vkj 7 to vkj out. Player 1 now can grab a pawn. If Player 2 controls vertex vkj out, Player 1 grabs the pawn owning this vertex else he choose to not grab a pawn. Note under both the cases by the end of this round Player 1 controls vertex vkj out. Now note that the token is at vertex vkj out and Player 1 controls vkj out. Thus by Claim 19 we know that from here Player 1 moves the token in at least 4 rounds before reaching v\u2032 2. As till reaching vkj out Player 1 has moved the token in two rounds, thus overall under this case Player 1 moves the token in at least 6 rounds. \u25b7Claim 21. If Player 1 controls vertex vki in, then he has a strategy from vki in to reach v\u2032 2 which ensures that, out of the 18 rounds that moves the token to v\u2032 2 he moves the token in at least 9 rounds. Proof. Player 1 controls vertex vki in, he moves the token to vki 1 , now Player 2 grabs pBluei in order to control vki 1 . In the next round Player 2 moves the token to vki 4 . In this round Player 1 does not grab a pawn. Now since pRedi belongs to Player 1 and pRedi owns vki 4 , Player 1 moves the token to vki 6 . In this round Player 2 is forced to grab pRedi in order to control vki 6 . Now since here Player 2 controls vki 6 , she moves the token to vki 8 . Now Player 1 grabs pGreeni and controls vki 8 . In the next round Player 1 moves the token to vki out. Now observe that till now Player 1 has moved the token in three rounds. Now by the end of this round there two possible cases: 1. Player 2 controls vki out. In this case Player 2 moves the token to vkj in, Player 1 here chooses to not grab a pawn. Now observe that the token is at vertex vkj in and Player 2 controls vertex vkj in thus by claim 20 we know that from here Player 1 moves the token in at least 6 rounds before reaching v\u2032 2. As until reaching vkj in Player 1 has moved the token in three rounds, thus overall under this case Player 1 moves the token in at least 9 rounds. 2. Player 1 controls vki out. In this case Player 1 moves the token to vkj in. Now note that Player 2 controls vkj in. Thus the token is at the vertex vkj in and Player 2 controls vertex vkj in. Now by Claim 20, we know that from here Player 1 moves the token in at least 6 rounds before reaching v\u2032 2. As until reaching vkj in Player 1 has moved the token in four rounds, thus overall under this case Player 1 moves the token in at least 10 rounds. 
Hence showed. Now let us prove the lemma. So the token is at vertex v\u2032. There are two possible cases: 1. Player 1 controls v\u2032. In this case Player 1 moves the token to v. Now regardless of who controls, v the token reaches vki in with Player 1 controlling vertex vki in. Note that Player 2 will not control vki in as he would like to control vki 1 . As till reaching vki in Player 1 moved the token in atleast one round and by Claim 21 we know that from here Player 1 moves the token in at least 9 rounds before reaching v\u2032 2, thus overall Player 1 moves in at least 10 rounds before reaching v\u2032 2. 2. Player 2 control v\u2032. In this case Player 2 moves the token to v. Player 1 now can grab a pawn. If Player 2 controls vertex v, Player 1 grabs the pawn owning vertex v else he chooses to not grab a pawn. Note under both the cases by the end of this round Player 1 controls the vertex v\u2032. He now moves the token to vki in. Again note that Player 2 will not control vki in as he would like to control vki 1 . As until reaching vki in Player 1 moved the token in one round and by Claim 21 we know that from here Player 1 moves the token in at least 9 rounds before reaching v\u2032 2. Thus overall Player 1 moves in at least 10 rounds before reaching v\u2032 2. \fG. Avni, P. Ghorpade and S. Guha 19 This concludes the proof of the lemma. Let G\u2032 = \u27e8V \u2032, E\u2032, T \u2032\u27e9with d pawns. The game G\u2032\u2032 is constructed from G\u2032 by adding 2(d+10) fresh isolated vertices each owned by a fresh unique pawn. Formally, G\u2032\u2032 = \u27e8V \u2032\u2032, E\u2032, T \u2032\u27e9, where V \u2032\u2032 = V \u2032 \u222a{v1, v2, . . . , v2(d+10)} such that vj / \u2208V \u2032, for j \u2208[2(d + 10)]. Consider a configuration c\u2032 = \u27e8v, P\u27e9in G\u2032. Let c\u2032\u2032 = \u27e8v, P \u222a{1, 2, . . . , d + 10}\u27e9be a configuration in G\u2032\u2032. Note that Lemma 16 also applies to the always-grabbing game G\u2032\u2032, and we get the following. \u25b6Corollary 22. For i \u2208{1, 2}, if Player i has a strategy in the always-grabbing game G\u2032\u2032 to cross a \u03b4-path from v\u2032 to v\u2032 2, then Player i has a strategy such that out of the 20 rounds in the \u03b4-path, the following hold. 1. Player \u2212i grabs a pawn in at least 10 rounds, and 2. Player i grabs a pawn in at most 10 rounds. Corollary 22 follows directly from Lemma 16 since in an always-grabbing game, the number of times Player \u2212i grabs equals the number of times Player i moves. In the remaining part of this section, we show that Player 1 wins G\u2032 from c\u2032 iff Player 1 wins G\u2032\u2032 from the configuration c\u2032\u2032 described above. \u25b6Lemma 23. For i \u2208{1, 2}, Player i wins from c\u2032 in the optional-grabbing game G\u2032 iff he wins from c\u2032\u2032 in the always-grabbing game G\u2032\u2032. Proof. We first give an outline of the proof before proceeding to proving the lemma formally. We prove that if Player 1 has a winning strategy f \u2032 in G\u2032 from c\u2032, then he has a winning strategy f \u2032\u2032 from c\u2032\u2032 in G\u2032\u2032. The case for Player 2 is analogous and the other direction follows from determinacy (Thm. 2. We construct f \u2032\u2032 to mimic f \u2032 with the following difference. Whenever f \u2032 chooses not to grab, in order to follow the rules of the always-grabbing mechanism, f \u2032\u2032 grabs a pawn owning an isolated vertex. 
This is possible since we show that we maintain the invariant that along a play in G\u2032\u2032 that consists of sequences of \u03b4-paths, at the beginning of each \u03b4-path, Player 2 has at least 10 isolated pawns. Note that the invariant holds initially due to the definition of c\u2032\u2032. We show that it is maintained. Recall from the proof of Theorem 10 that crossing a \u03b4-path simulates a transition in the Turing machine. Since Player 1 has a winning strategy in G\u2032, in a winning play, the strategy enables her to cross the \u03b4-path. Thus, by Lem. 16, Player 1 moves in at least 10 rounds. Thus, Player 2 moves in at most 10 rounds, and during each such round, Player 1 grabs a pawn. Hence, Player 1 grabs at most 10 times which thus maintains the invariant. We show that f \u2032\u2032 is a winning Player 1 strategy. We now formally prove each of the claims stated above. Now since for Player 1, grabbing an isolated pawn serves no extra purpose than grabbing nothing, the action of not grabbing in the optional-grabbing game can be replaced with grabbing a pawn owning an isolated vertex in the always-grabbing game G\u2032\u2032. Now assuming that Player 1 has a winning strategy f \u2032 in the optional-grabbing game G\u2032, consider a winning play \u03c0\u2032 for Player 1 in G\u2032. Now assume that in every round in which Player 1 does not grab a pawn in G\u2032 can be replaced with Player 1 grabbing a pawn owning an isolated vertex in the always-grabbing game G\u2032\u2032. Thus consider a strategy f \u2032\u2032 of Player 1 and a strategy of Player 2 such that in the resulting play Player 1 grabs a pawn owning an isolated vertex from Player 2 if in the corresponding round in the optional-grabbing game he does not grab anything, otherwise, f \u2032\u2032 follows the moves of f \u2032. Now consider that Player 1 chooses strategy f \u2032\u2032. Recall that f \u2032 is winning for Player 1 in G\u2032. We argue that Player 2 cannot win in G\u2032\u2032 against the strategy f \u2032\u2032 of Player 1. Suppose Player 2 uses a strategy and grabs some pawns owning non-isolated vertices while \f20 A Game of Pawns playing against strategy f \u2032\u2032 of Player 1 in rounds other than those rounds in G\u2032 i which he grabs the pawns owning non-isolated vertices while playing against strategy f \u2032 of Player 1. Then the resulting play in the always-grabbing game will still be losing for Player 2, since otherwise, Player 2 could have followed the same order of grabbing the non-isolated pawns in the optional-grabbing game G\u2032 and would have won the optional-grabbing game. This contradicts that f \u2032 is winning for Player 1 in the optional-grabbing game. This implies that if Player 1 has a winning strategy in the optional-grabbing game G\u2032 from the configuration \u27e8v, P\u27e9, then he has a winning strategy in the always-grabbing game G\u2032\u2032 from the configuration \u27e8v, P \u222a{1, . . . , d + 10}\u27e9. We now only need to show that in every round in G\u2032\u2032 where Player 2 moves the token, Player 2 has an isolated pawn for Player 1 to grab. We consider the following claim. \u25b7Claim 24. In the always-grabbing game G\u2032\u2032, for strategy f \u2032\u2032 of Player 1 and some strategy of Player 2, every time the token is at a primed main vertex, that is, at the beginning of a \u03b4-path, Player 2 owns at least 10 pawns owning the isolated vertices. Proof. 
Consider the case in which at a primed main vertex v, Player 2 has r pawns and the token is at v. Now suppose that the token is moved along a \u03b4-path to the next primed main vertex v\u2032. Note that at this next primed vertex v\u2032, Player 2 has at least r pawns. This clearly holds since by Corollary 22, along the \u03b4-path, Player 1 grabs at most 10 pawns and Player 2 grabs at least 10 pawns. Thus, after traversing a \u03b4-path, the number of pawns that Player 2 controls does not decrease. Now, in a play, every time the token is at a primed main vertex in G\u2032\u2032, it goes along a \u03b4-path to reach the next primed main vertex again. Now the initial vertex is a primed main vertex and Player 2 has (d \u2212|P|) + (d + 10) pawns at the beginning. Here P is the initial set of pawns owned by Player 1 in optional-grabbing game G\u2032. Thus, every time the token reaches a primed main vertex afterwards, Player 2 has at least (d \u2212|P|) + (d + 10) pawns. Now since there are exactly d non-isolated pawns, and (d \u2212|P|) + (d + 10) \u2265(d + 10), we have that out of the pawns that Player 2 has at a primed main vertex, at least 10 are isolated pawns. Hence proved. Now to prove Lemma 23, since by Corollary 22, we know that Player 1 has a strategy in the always-grabbing game such that from a primed main vertex to reach the next primed main vertex, he needs to grab in at most 10 rounds, and since by Claim 24, Player 2 has at least 10 pawns owning isolated vertices, whenever the token is at primed main vertex, in every round between primed main vertices where Player 1 needs to grab, Player 2 has an isolated pawn that Player 1 can grab. Hence by the argument above, if Player 1 has a winning strategy in the optional-grabbing game G\u2032, we have that Player 1 also has a winning strategy in the always-grabbing game G\u2032\u2032. Note that the proof of this direction is also analogous for Player 2. We show that if Player 2 has a winning strategy in G\u2032, then she also has a winning strategy in G\u2032\u2032. In particular, from Corollary 22, along a \u03b4-path, Player 1 grabs in at least 10 rounds and Player 2 grabs in at most 10 rounds. Also, similar to Claim 24, we can show that every time the token is at a primed main vertex, Player 1 owns at least 10 pawns owning isolated vertices. This is because initially, Player 1 controls |P| + d + 10 vertices and this invariant is maintained throughout the play whenever the token reaches a primed main vertex. Now |P| + d + 10 \u2265d + 10, and hence, at the beginning of a \u03b4-path, Player 1 controls at least 10 pawns owning isolated vertices. For the converse direction in Lemma 23, we use the determinacy argument as follows. Again, we show the proof for Player 1 that if Player 1 wins in the always-grabbing game G\u2032\u2032, then he also wins the optional-grabbing game G\u2032. The proof for Player 2 is analogous. \fG. Avni, P. Ghorpade and S. Guha 21 Let Player 1 has a winning strategy in G\u2032\u2032. Then, by determinacy, Player 2 loses in G\u2032\u2032. Now from Lemma 23, by taking the contrapositive for Player 2, we have that Player 2 also loses G\u2032. Hence, again by determinacy of pawn games, we have that Player 1 wins G\u2032. We now state the following theorem. While the lower bound follows from Thm. 14 and Lem. 23, the upper bound follows from Thm. 4. \u25b6Theorem 25. MVPP always-grabbing PAWN-GAMES is EXPTIME-complete. We conclude this section by adapting Thm. 5 to always-grabbing. 
Namely, we show that adding pawns to a player is never beneficial in MVPP always-grabbing games (with the exception of the pawn that owns the current vertex). \u25b6Theorem 26. Consider a configuration \u27e8v, P\u27e9of an MVPP always-grabbing pawn game G. Let j \u2208[d] such that v \u2208Vj and P \u2032 \u2286P. Assuming that j \u2208P implies j \u2208P \u2032, if Player 1 wins from \u27e8v, P\u27e9, he wins from \u27e8v, P \u2032\u27e9. Assuming that j \u2208P \u2032 implies j \u2208P, if Player 2 wins from \u27e8v, P \u2032\u27e9, she wins from \u27e8v, P\u27e9. Proof. We show the case when Player 1 has a winning strategy. The case for Player 2 having a winning strategy is analogous. Recall from the proof of Thm. 5 that the cases were argued for both configuration vertices and intermediate vertices. For configuration vertices, The proof of this theorem is exactly the same as in the proof of Thm. 5. We detail below the case for intermediate vertices in the case of always-grabbing games. We use the same notations as in the proof of Thm. 5. As in the proof of Thm. 5, we again distinguish between two cases: j \u2208P and j / \u2208P. Consider an intermediate vertex \u27e8u, c\u27e9\u2208Wn+1. We will show that \u27e8u, c\u2032\u27e9\u2208Wn+1. We denote by \u2113, the pawn that owns u. First consider j \u2208P, thus Player 1 controls c. We have as in the proof of Thm. 5 that both \u27e8u, c\u27e9and \u27e8u, c\u2032\u27e9are Player 2 vertices. Consider a neighbor of \u27e8u, c\u2032\u27e9, a configuration vertex \u27e8u, Q\u2032\u27e9. Note that in always-grabbing Player 2 has to grab a pawn r from Player 1, thus Q\u2032 = P \u2032 \\ {r}. There are two cases, either \u2113\u2208Q\u2032 or \u2113/ \u2208Q\u2032. Recall that in order to apply the induction hypothesis on \u27e8u, Q\u2032\u27e9, we need to find a neighbor \u27e8u, Q\u27e9of \u27e8u, c\u27e9such that if \u2113/ \u2208Q\u2032, then \u2113/ \u2208Q. If \u2113/ \u2208Q\u2032, then we set Q = P \\{\u2113} if \u2113\u2208P and we set Q = P \\{r} when \u2113/ \u2208P. If \u2113\u2208Q\u2032, we set Q = P \\ {r}. The rest of the argument is as in the proof of Theorem 5. Next consider the case when j / \u2208P. In this case, both \u27e8u, c\u27e9and \u27e8u, c\u2032\u27e9are Player 1 vertices. In always-grabbing, Player 1 grabs a pawn r in \u27e8u, c\u27e9, thus we have Q = P \u222a{r} \u228bP. We find a neighbor \u27e8u, Q\u2032\u27e9of \u27e8u, c\u2032\u27e9to apply the induction hypothesis on. If \u2113\u2208P and \u2113\u2208P \u2032, then we set Q\u2032 = P \u2032 \u222a{r}. If \u2113\u2208P and \u2113/ \u2208P \u2032, then we set Q\u2032 = P \u2032 \u222a{\u2113}. If \u2113/ \u2208P and \u2113/ \u2208P \u2032, then we set Q\u2032 = P \u2032 \u222a{r}. Again, the remaining argument is as in the proof of Theorem 5 and we are done. 5 Always Grabbing-or-Giving Pawn Games In this section, we show that MVPP always grabbing or giving games are in PTIME. We find it intriguing that a seemingly small change in the mechanism \u2013 allowing a choice between grabbing and giving instead of only grabbing \u2013 reduces the complexity to PTIME from EXPTIME-complete. We make the following simple observation. \u25b6Observation 27. In an always grabbing or giving game, every time Player i makes a move from a vertex v to a vertex u, Player \u2212i can decide which player controls u. \f22 A Game of Pawns If Player \u2212i does not control pu that owns u and he wants to control u, he can grab pu from Player i. 
If he does not want to control u and if he has pu, he can give it to Player i. Consider an always-grabbing-or-giving game G = \u27e8V, E, T\u27e9and an initial configuration c. We construct a turn-based game G\u2032 and an initial vertex v0 so that Player 1 wins in G from c iff he wins in G\u2032 from v0. Let G\u2032 = \u27e8V \u2032, E\u2032, T \u2032\u27e9, where V \u2032 = {\u27e8v, i\u27e9, \u27e8b v, i\u27e9| v \u2208V, i \u2208{1, 2}} with V \u2032 1 = {\u27e8v, 1\u27e9, \u27e8b v, 1\u27e9| v \u2208V } and V \u2032 2 = {\u27e8v, 2\u27e9, \u27e8b v, 2\u27e9| v \u2208V }, T \u2032 = T \u00d7 {1, 2}, and E\u2032 = {(\u27e8v, i\u27e9, \u27e8b u, 3 \u2212i\u27e9), (\u27e8b u, 3 \u2212i\u27e9, \u27e8u, i\u27e9), (\u27e8b u, 3 \u2212i\u27e9, \u27e8u, 3 \u2212i\u27e9) | (v, u) \u2208E, i \u2208{1, 2}}. We call each vertex \u27e8v, i\u27e9a main vertex and each \u27e8b v, i\u27e9an intermediate vertex. Suppose that Player i moves the token from v to u in G. If Player \u2212i decides to control u, then in G\u2032, the token moves from the main vertex \u27e8v, i\u27e9to the main vertex \u27e8u, 3 \u2212i\u27e9, else from \u27e8v, i\u27e9to the main vertex \u27e8u, i\u27e9, and in each case, through the intermediate vertex (b u, 3 \u2212i) that models the decision of Player \u2212i on the control of u. The target vertices T \u2032 are main vertices. We can prove the following lemma. \u25b6Lemma 28. Suppose Player 1 wins from configuration \u27e8v, P\u27e9in G. If he controls v, he wins from \u27e8v, 1\u27e9in G\u2032, and if Player 2 controls v, Player 1 wins from \u27e8v, 2\u27e9in G\u2032. Dually, suppose that Player 2 wins from \u27e8v, P\u27e9in G. If she controls v, then she wins from \u27e8v, 2\u27e9in G\u2032, and if Player 1 controls v, Player 2 wins from \u27e8v, 1\u27e9in G\u2032. Proof. Suppose Player i has a winning strategy in the game G from configuration \u27e8v, P\u27e9. We show that Player i has a winning strategy in the game G\u2032 from vertex \u27e8v, i\u27e9. The other direction of the lemma follows from determinacy of two-player reachability games. We prove the above for Player 1. The proof for Player 2 is analogous. Let Wj be the set of configurations in G such that Player 1 has a strategy to reach T in at most j rounds. Let Aj be the set of vertices in G\u2032 such that Player 1 has a strategy to reach T \u2032 in at most j rounds. We prove the following claim. \u25b7Claim 29. If \u27e8v, P\u27e9\u2208Wj, with Player 1 (Player 2) controlling v then \u27e8v, 1\u27e9(\u27e8v, 2\u27e9) belongs to A2j. Proof. We prove this by induction on j. For the base case with j = 0. this clearly holds. Suppose the claim holds for j = h, and we now show that the claim holds for j = h + 1. Consider a configuration \u27e8v, P\u27e9\u2208Wh+1. We first look at the case when Player 1 controls v. We show that \u27e8v, 1\u27e9\u2208A2(h+1). Note that, by definition of G and G\u2032, both \u27e8v, P\u27e9and \u27e8v, 1\u27e9are controlled by Player 1 in G and G\u2032 respectively. Since \u27e8v, P\u27e9\u2208Wh+1 and Player 1 controls vertex v, there is a strategy of Player 1 that takes the token starting from v with pawns P to a target vertex in T in at most h + 1 steps. Let under this strategy, Player 1 moves the token from v to u. Note that \u27e8u, P \u2032\u27e9\u2208Wh for all configurations \u27e8u, P \u2032\u27e9that Player 2 can reach after grabbing from or giving Player 1 a pawn after Player 1 moves the token from v to u. Let Pu be the pawn owning vertex u. 
Now from Observation 27, we know that once Player 1 moves the token from vertex v to a vertex u, then Player 2 can reach a configuration \u27e8u, P \u2032\u27e9with pu \u2208P \u2032 as well \u27e8u, P \u2032\u2032\u27e9with pu / \u2208P \u2032\u2032. Thus there exist configurations \u27e8u, P \u2032\u27e9, \u27e8u, P \u2032\u2032\u27e9in Wh such that pu \u2208P \u2032 and pu / \u2208P \u2032\u2032. Now suppose the token is at vertex \u27e8v, 1\u27e9in G\u2032. Since u is a neighbour of v, by the definition of G\u2032, we have that vertex \u27e8b u, 2\u27e9is a neighbour of \u27e8u, 1\u27e9in G\u2032. We show that \u27e8b u, 2\u27e9\u2208A2h+1. Recall that vertex \u27e8b u, 2\u27e9is controlled by Player 2 and the only neighbours of this vertex are \u27e8u, 1\u27e9 and \u27e8u, 2\u27e9. Now if we show that both \u27e8u, 1\u27e9and \u27e8u, 2\u27e9are in A2h, then we are done. Since we know that there exists \u27e8u, P \u2032\u27e9in Wh such that pu \u2208P \u2032, by the induction hypothesis, we have that \u27e8u, 1\u27e9\u2208A2h, and similarly since there exists \u27e8u, P \u2032\u2032\u27e9in Wh such that pu / \u2208P \u2032\u2032, we have that \u27e8u, 2\u27e9\u2208A2h. The case for which Player 1 does not control v can also be proved similarly. \fG. Avni, P. Ghorpade and S. Guha 23 Algorithm 2 Given an OVPP pawn game G = \u27e8V, E, T\u27e9and a set of pawns P0 \u2286V that Player 1 controls, the algorithm returns a minimum-grabbing function \u03b7 : V \u2192N. 1: \u2113= 0 2: while \u2203u \u2208V that is not labeled by \u03b7 do 3: W 1 \u2113= Solve-Turn-Based-Game(V1 = P0, V2 = V \\ P0, E, T) 4: Define \u03b7(u) = \u2113, for all u \u2208W 1 \u2113that is not yet labeled. 5: B\u2113= {u \u2208V \\ W 1 \u2113: N(u) \u2229W 1 \u2113\u0338= \u2205} 6: T = B\u2113and \u2113= \u2113+ 1. We show that Player i wins in G from \u27e8v, P\u27e9when Player i (Player \u2212i) controls v implies that Player i wins from \u27e8v, i\u27e9(\u27e8v, 3 \u2212i\u27e9) in G\u2032. By taking contrapositive, we have that Player i loses from \u27e8v, i\u27e9(\u27e8v, 3 \u2212i\u27e9) in G\u2032 implies that Player i loses in G from \u27e8v, P\u27e9when Player i (Player \u2212i) controls v. By determinacy, we have that Player \u2212i wins from \u27e8v, i\u27e9(\u27e8v, 3 \u2212i\u27e9) in G\u2032 implies that Player \u2212i wins in G from \u27e8v, P\u27e9when Player i (Player \u2212i) controls v. Hence showed. Since the size of G\u2032 is polynomial in the size of G, Thm. 2 implies the following. \u25b6Theorem 30. MVPP always-grab-or-give PAWN-GAMES is in PTIME. 6 k-Grabbing Pawn Games In this section, we consider pawn games under k-grabbing in increasing level of generality of the mechanisms. We start with positive news. \u25b6Theorem 31. OVPP k-grabbing PAWN-GAMES is in PTIME. Proof. Let k \u2208N, an OVPP k-grabbing game G = \u27e8V, E, T\u27e9, and an initial configuration c = \u27e8v0, P0\u27e9, where we refer to P0 as a set of vertices rather than pawns. Our algorithm solves a harder problem. It computes a minimum-grabbing function \u03b7 : V \u2192N that labels each vertex u \u2208V with the necessary and sufficient number grabs needed from u to win. Formally, \u03b7(u) is such that Player 1 wins an \u03b7(u)-grabbing game played on G from configuration \u27e8u, P0\u27e9 but loses an \u0000\u03b7(u) \u22121 \u0001 -grabbing game from configuration \u27e8u, P0\u27e9. The algorithm is depicted in Alg. 2. 
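To make Alg. 2 concrete, the following is a minimal Python sketch of the procedure; it assumes that Solve-Turn-Based-Game is realized by the standard attractor computation for turn-based reachability games (as in Thm. 2), and the function and variable names are ours. The refinement to "non-trivial" winning that the correctness argument below relies on is omitted here, and the early exit for vertices that cannot reach T at all is our own safeguard.

def attractor(vertices, p1_vertices, edges, targets):
    # Solve-Turn-Based-Game: Player 1 wins from v iff v is a target, or v is a
    # Player 1 vertex with a successor in the winning set, or v is a Player 2
    # vertex all of whose successors are in the winning set.
    succ = {v: set() for v in vertices}
    for (u, w) in edges:
        succ[u].add(w)
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            if v in p1_vertices:
                ok = bool(succ[v] & win)
            else:
                ok = bool(succ[v]) and succ[v] <= win
            if ok:
                win.add(v)
                changed = True
    return win

def minimum_grabbing(vertices, edges, targets, p0):
    # Alg. 2 for OVPP k-grabbing games: eta[v] is the number of grabs that
    # Player 1 needs in order to win from the configuration <v, P0>.
    succ = {v: set() for v in vertices}
    for (u, w) in edges:
        succ[u].add(w)
    eta, t, level = {}, set(targets), 0
    while len(eta) < len(vertices):                       # line 2
        w1 = attractor(vertices, set(p0), edges, t)       # line 3
        for v in w1:
            eta.setdefault(v, level)                      # line 4
        border = {v for v in vertices
                  if v not in w1 and succ[v] & w1}        # line 5
        if not border:
            break   # safeguard: the remaining vertices cannot reach T
        t, level = border, level + 1                      # line 6
    return eta

The sketch mirrors the pseudocode of Alg. 2 line by line.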
It calls the Solve-Turn-Based-Game(), which returns the set of vertices that are winning for Player 1 in a turn-based reachability game, e.g., using attractor-based computation as in Thm. 2. For the base case, consider the turn-based game G0 = \u27e8V, E, T\u27e9with V1 = P0. Let W 1 0 \u2286V denote Player 1\u2019s winning region in G0. Clearly, for every u \u2208W 1 0 , we have \u03b7(v0) = 0, and for every u / \u2208W 1 0 , we have \u03b7(v0) \u22651. For the inductive step, suppose that for \u2113\u22650, the set W 1 \u2113= {u \u2208V : \u03b7(u) \u2264\u2113} has been found. That is, for every u / \u2208W 1 \u2113, Player 2 has a strategy that wins the \u2113-grabbing pawn game G from configuration \u27e8u, P0\u27e9. We show how to find W 1 \u2113+1 in linear time. Let the border of W 1 \u2113, denoted B\u2113, be the set of vertices from which W 1 \u2113can be reached in one step, thus B\u2113= {v \u2208V : v / \u2208W 1 \u2113: N(v) \u2229W 1 \u2113\u0338= \u2205}. Note that the vertices in B\u2113are all controlled by Player 2 since otherwise, such vertices will be in the set W 1 \u2113. Now we show that a vertex u / \u2208W 1 \u2113has \u03b7(u) = \u2113+ 1 iff Player 1 can force the game from configuration \u27e8u, P0\u27e9to a vertex in B\u2113in one or more rounds without making any grab. Player 1 wins from such a vertex u by forcing the game into B\u2113, grabbing the pawn in B\u2113, and proceeding to W\u2113, where by the induction hypothesis, he wins with the remaining grabs. Computing W 1 \u2113+1 roughly entails a solution to a turn-based game with target set B\u2113\u222aW 1 \u2113. \f24 A Game of Pawns 1 S1 1 S1 2 2 S2 3 S2 2 3 S3 3 t s s Figure 5 Consider the input to SET-COVER U = [3] and S = {{1}, {1, 2}, {2, 3}}. The figure depicts the output on this input of the reduction in Thm. 32 Intuitively, we show that a vertex u / \u2208W 1 \u2113has \u03b7(u) = \u2113+ 1 iff Player 1 can force the game from configuration \u27e8u, P0\u27e9to a vertex in B\u2113without making any grabs and in a non-trivial manner. We compute W 1 \u2113+1 as follows. Consider the turn-based game G\u2113= \u27e8V, E, B\u2113\u222aW 1 \u2113\u27e9 with V1 = P0. We say that Player 1 wins non-trivially from u \u2208V if he has a strategy f that guarantees the following against any Player 2 strategy: the resulting play is of the form u = v0, v1, v2, . . . , vm with vm \u2208(B\u2113\u222aW 1 \u2113) and it is non-trivial, meaning that m \u22651. Note the difference from standard turn-based games: a target vertex u \u2208B\u2113\u222aW 1 \u2113might be losing for Player 1 when considering winning non trivially, e.g., when u has an outgoing edge to a sink, no non-trivial play that starts in u ends in a target vertex. Let U 1 \u2113denote the set of vertices from which Player 1 wins in G\u2113non-trivially. We describe an algorithm for finding U 1 \u2113. We construct a game G\u2032 \u2113in which we make a copy u\u2032 of each vertex u \u2208B\u2113, and set the neighbors of u\u2032 to be {v \u2208N(u) : v / \u2208W 1 \u2113}. Intuitively, if Player 1 wins from u\u2032 in G\u2032 \u2113, he can win in G\u2113from u in a non-trivial manner: since u\u2032 is not winning in G\u2032 \u2113, Player 1 must make at least one step before entering B\u2113. Let U \u2032 be the winning set for Player 1 in G\u2032 \u2113. It is not hard to see that U 1 \u2113= {u / \u2208B\u2113: u \u2208U \u2032} \u222a{u \u2208B\u2113: u\u2032 \u2208U \u2032}. Let u \u2208U 1 \u2113. 
We claim that if u / \u2208W 1 \u2113then \u03b7(u) = \u2113+ 1. First, since u / \u2208W 1 \u2113, by the induction hypothesis, \u03b7(u) \u2265\u2113+ 1. Next, we show that \u03b7(u) = \u2113+ 1 by showing that Player 1 wins the (\u2113+ 1)-grabbing pawn game G from configuration \u27e8u, P0\u27e9. Player 1 initially plays according to a strategy that forces the game to B\u2113non-trivially without grabbing. Suppose that Player 2 is playing according to some strategy and the game visits u\u2032 before visiting u\u2032\u2032 \u2208B\u2113. Then, Player 1 grabs u\u2032\u2032, proceeds to W 1 \u2113, and uses his winning \u2113-grabbing strategy from there. Let u / \u2208U 1 \u2113\u222aW 1 \u2113. We claim that \u03b7(u) > \u2113+ 1. Note that in order to reach W 1 \u2113and in particular reach T, the game must first visit B\u2113. Suppose that Player 2 is following a winning strategy in G\u2113. Thus, in order to reach B\u2113, Player 1 must grab u\u2032 / \u2208B\u2113\u222aW 1 \u2113. Suppose that the game reaches a configuration \u27e8u\u2032\u2032, P0 \u222a{u\u2032}\u27e9, where u\u2032\u2032 \u2208B\u2113. By the above, \u03b7(u\u2032\u2032) \u2265\u2113+1. That is, any Player 1 strategy that wins from u\u2032\u2032 must make at least \u2113+ 1 grabs. Hence, in order to win from u, Player 1 makes at least \u2113+ 2 grabs. Let n = |V |. Note that W 1 n = V . Computing W 1 \u2113+1 from W 1 \u2113requires a call to an algorithm that solves a reachability game, which can be done in linear time. Hence, the total running time of the algorithm is polynomial in the size of G. The proof of the following theorem, which is obtained by a reduction from SET-COVER. \u25b6Theorem 32. MVPP k-grabbing game PAWN-GAMES is NP-hard. Proof. Given an input \u27e8U, S, k\u27e9to SET-COVER, we construct an MVPP k-grabbing pawn game G = \u27e8V, E, T\u27e9in which Player 1 grabs (see Fig. 5). Intuitively, G has a \u201cchain-like\u201d structure, and in order to win, Player 1 must cross the chain. Certain positions in G corresponds to i \u2208U. Each neighbor of i correspond to a set S \u2208S with i \u2208S. Thus, a choice of Player 1 at i can be thought of as assigning a set in S that covers i. We construct \fG. Avni, P. Ghorpade and S. Guha 25 G so that before moving to a neighbor S of i, Player 1 must grab the pawn that owns S. All vertices that correspond to S are controlled by the same pawn, thus if Player 1 moves from i\u2032 > i to a neighbor that corresponds to S, there is no need to grab again. Since Player 1 is allowed only k grabs, there is a one-to-one correspondence between winning Player 1 strategies and set covers of size k. We describe the construction of G formally. Let V = U \u222a(S \u00d7U)\u222a{s, t}. Player 1\u2019s target is t and s is a vertex with no path to t, thus it can be thought of as a target for Player 2. There are m + 1 pawns. For j \u2208[m], Pawn j owns all vertices in {\u27e8Sj, i\u27e9: i \u2208U}. Pawn 0 owns the vertices in U. We describe the edges. For i \u2208U, we define N(i) = {\u27e8Sj, i\u27e9: i \u2208Sj}. For 1 \u2264i \u2264n \u22121 and j \u2208[m], we define N(\u27e8Sj, i\u27e9) = {i + 1, s} and N(\u27e8Sj, n\u27e9) = {t, s}. That is, if Player 1 moves from i to its neighbor Sj without controlling Pawn j, then Player 2 will proceed to s and win the game. We claim that a set cover S\u2032 of size k gives rise to a winning k-grabbing Player 1 strategy. 
Indeed, by choosing, for each i \u2208U, to move to S \u2208S\u2032 and grabbing it if it has not been grabbed previously, Player 1 guarantees that t is reached using at most k grabs. On the other hand, observe a play of a winning k-grabbing Player 1 strategy against a reasonable Player 2 strategy; namely, a strategy that always moves to s from a neighboring vertex u when Player 2 controls u. Suppose that the set of pawns S\u2032 is grabbed. It is not hard to see that S\u2032 is a set cover of size at most k, and we are done. We conclude this section by studying OMVPP games. \u25b6Lemma 33. OMVPP k-grabbing PAWN-GAMES is PSPACE-hard. Proof. Consider an input \u03d5 = Q1x1 . . . QnxnC1 \u2227. . . \u2227Cm to TQBF, where Qi \u2208{\u2203, \u2200}, for 1 \u2264i \u2264n, each Cj, for 1 \u2264j \u2264m, is a clause over the variables x1, . . . , xn. We construct an OMVPP n-grabbing pawn game G = \u27e8V, E, T\u27e9such that Player 1 wins iff \u03d5 is true. The intuition is similar to Thm. 32. We construct G to have a chain-like structure that Player 1 must cross in order to win. The chain consists of two parts. In the first part, each position corresponds to a variable xi, for i \u2208[n]. We construct G so that if xi is existentially quantified, then Player 1 controls xi, and if xi is universally quantified, then Player 2 controls xi and Player 1 cannot win by grabbing it. We associate a move in xi with an assignment to xi. Technically, xi has two neighbors vi and \u00acvi, initially at the control of Player 2. If the token reaches a neighbor of xi without Player 1 grabbing it, then Player 1 loses the game. Player 1 is allowed n grabs, thus in order to win, it is necessary for him to grab, for each i, one of the neighbors of xi. It follows that a play that crosses the first part of the chain gives rise to an assignment f to x1, . . . , xn. In the second part of G, we verify that f is valid. Each position of G corresponds to a clause Cj, for j \u2208[m], all of which are at the control of Player 2 in the initial configuration of G. Note that once the first part of G is crossed, Player 1 is not allowed any more grabs. The idea is that when arriving at Cj, for j \u2208[m], Player 1 must control Cj, otherwise he loses. Recall that in OMVPP, it suffices to control one of the pawns that owns Cj in order to control it. We define the pawns that own Cj as follows. For i \u2208[n], call pi the pawn that owns vi and \u00acpi the pawn that owns \u00acvi. Thus, grabbing pi and \u00acpi respectively corresponds to an assignment that sets xi to true and false. If xi appears in Cj, then pi is an owner of Cj, and if \u00acxi appears in Cj, then \u00acpi is an owner of Cj. Thus, Player 1 controls Cj iff f(Cj) = true. It follows that Player 1 wins iff f is valid. Formally, the vertices of G are V = {s, t} \u222a{xi, vi, \u00acvi : 1 \u2264i \u2264n} \u222a{Cj : 1 \u2264j \u2264m}. The target vertex is t. The vertex s is a sink vertex that is winning for Player 2, i.e., there is no path from s to t. The game consists of 2n + 2 pawns. For each 1 \u2264i \u2264n, we associate \f26 A Game of Pawns with variable xi two pawns, which we denote by pi and \u00acpi. Intuitively, we will construct the game such that in order to win, Player 1 must grab either pi or \u00acpi, and we associate grabbing pi and \u00acpi with an assignment that sets xi to true and false, respectively. Formally, if xi appears in Cj, then pi is an owner of Cj, and if \u00acxi appears in Cj, then \u00acpi is an owner of Cj. 
We use 1 and 2 to refer to the final two pawns. Pawn 1 owns every vertex xi such that variable xi is existentially quantified in \u03d5 and Pawn 2 owns every vertex xi such that xi is universally quantified in \u03d5. In the initial configuration, Player 1 controls only Pawn 1. We describe the edges in the game, which has a chain-like structure. The last vertex on the chain is the target t, thus Player 1 must cross the chain in order to win. We assume that Player 2 follows a reasonable strategy; namely, a strategy that proceeds to win at s given the option, i.e., when the game reaches a vertex u at her control and that neighbors s. The token is initially placed on x0. For 1 \u2264i \u2264n, the neighbors of xi are vi and \u00acvi. We associate with the choice at xi, an assignment to xi in the expected manner. Suppose that the token is placed on u \u2208{vi, \u00acvi}. We require Player 1 to grab the pawn that owns u by adding an edge from u to s. That is, since initially Player 2 controls the pawn that owns u, if Player 1 does not grab it, Player 2 will win from u. For 1 \u2264i < n, the neighbor of vi and \u00acvi is vi+1. The neighbor of vn and \u00acvn is C1. Suppose that Player 1 follows a strategy that leads the token to C1. We observe that since Pawn 1 is controlled by Player 1, Player 1 chooses the assignment of the existentiallyquantified variables. We claim that Player 1\u2019s strategy does not grab Pawn 2, thus Player 2 chooses the assignment of the universally-quantified variables. Indeed, since Player 1 is restricted to grab at most n pawns and must grab either pi or \u00acpi, for each 1 \u2264i \u2264n. If he grabs Pawn 2, then there exists 1 \u2264i \u2264n such that he cannot grab any of pi or \u00acpi, and the game will necessarily reach either vi or \u00acvi at the control of Player 2, from which she wins. We describe the outgoing edges from clause vertices. For 1 \u2264j < m, the vertex Cj has two neighbors, s and Cj+1, and the neighbors of Cm are s and t. That is, in order to draw the game to t, Player 1 must cross all clause vertices. Recall that the definition of OMVPP dictates that Player 1 chooses how to move the token at a vertex u if he controls at least one of the pawns that owns u. Thus, a winning Player 1 strategy must grab, for each 1 \u2264j \u2264m, at least one pawn that owns Cj. In turn, grabbing a pawn that owns Cj means that the assignment to x1, . . . , xn that arises from the players\u2019 strategies satisfies Cj. It follows that Player 1 wins the game iff \u03d5 is true. We turn to study the upper bound. The following lemma bounds the provides a polynomial bound on the length of a winning play for Player 1. The core of the proof intuitively shows that we can restrict attention to Player 1 strategies that grab at least once in a sequence of |V | rounds. Otherwise, the game enters a cycle that is winning for Player 2. We thus obtain a polynomial bound on the length of a winning play for Player 1. \u25b6Lemma 34. Consider an OMVPP k-grabbing PAWN-GAME G = \u27e8V, E, T\u27e9, and an initial configuration c that is winning for Player 1. Then, Player 1 has a strategy such that, for every Player 2 strategy, a target in T is reached within |V | \u00b7 (k + 1) rounds. Proof. Consider the turn-based game G\u2032 that corresponds to G. Recall that c is a vertex in G\u2032. Fix a Player 1 memoryless winning Player 1 strategy in G\u2032, which exists since G\u2032 is a turn-based reachability game. 
Consider some Player 2 strategy and let \u03c0 be the resulting play. We make three observations. (1) Each vertex in G\u2032 is visited at most once by \u03c0. Otherwise, \u03c0 enters a loop, which is losing for Player 1. (2) The configurations can be partially ordered: a configuration (v, P) can only be visited before a configuration (u, P \u2032), for P \u2286P \u2032. Indeed, the only manner in which control of pawns changes is by Player 1 grabbing, which only adds \fG. Avni, P. Ghorpade and S. Guha 27 pawns to the set that he already controls. (3) If the game starts from (v, P), then it ends in a configuration (u, P \u2032) with |P \u2032| \u2264|P| + k. Indeed, Player 1 can only grab k times. Combining (1)-(3) implies that \u03c0 visits T within |V | \u00b7 (k + 1) rounds. For the upper bound, we describe an algorithm performing a depth-first traversal of the configuration graph of a game while storing, at a time, only a branch in PSPACE. \u25b6Lemma 35. OMVPP k-grabbing PAWN-GAMES is in PSPACE. Proof. We describe a PSPACE algorithm for OMVPP k-grabbing PAWN-GAMES as follows. The algorithm explores an unwinding of the configuration graph of the k-grabbing pawn game in a depth-first manner. We call this unwinding of the configuration graph a game tree. Recall that after each move by a player in the game, Player 1 chooses to grab a pawn and if Player 1 is controlling the current vertex, that is, where the token lies, then Player 1 chooses a successor, otherwise Player 2 chooses a successor. A vertex v that is controlled by Player 1 is an OR-vertex in the game tree in the sense that there should exist a successor of v which accepts. On the other hand, a vertex v that is controlled by Player 2 is an AND-vertex in the game tree in the sense that all the successors of v should accept. Since by Lemma 34, if Player 1 has a winning strategy then he has one such that for all strategies of Player 2, he wins in n \u00b7 (k + 1) steps, it is sufficient to unwind the configuration graph such that the length of a path in the game tree starting from the initial vertex does not exceed n \u00b7 (k + 1). At any time during exploring the game tree in a depth-first manner, the algorithm stores the path that is being currently traversed from the initial vertex, and the length of the path is thus at most n \u00b7 (k + 1). In a depth-first traversal, from each vertex, its successors are visited in a particular order. At each level of the path that has been currently traversed, the algorithm also keeps count of how many successors of the vertex at that level have been visited, and also the depth of the level from the root of the tree. The latter ensures that the algorithm does not unwind the configuration graph to an extent so that the length of a path in the game tree exceeds n \u00b7 (k + 1). Since in the configuration graph, there are at most exponentially many vertices of the form \u27e8v, P\u27e9where v is a vertex of the k-grabbing pawn game and P is a set of pawns, at each level, the count of number of successors that have been visited so far can be stored in PSPACE. Since the number of levels of the current path is bounded by n \u00b7 (k + 1) which is polynomial and each level uses polynomial space, the algorithm is in PSPACE. By Lem. 34, each branch of such a traversal has polynomial length, leading to the PSPACE upper bound. We thus have the following. \u25b6Theorem 36. OMVPP k-grabbing PAWN-GAMES is PSPACE-complete. \u25b6Remark 37. 
We also note that in the case of k-grabbing mechanism, unlike optionalgrabbing or always-grabbing mechanisms, it is always the case that for Player 1, at every vertex, controlling more pawns is at least as good as controlling fewer pawns. This is because, if Player 1 needs to control a particular vertex v in any round of the game in order to win, he can grab the pawn controlling the vertex v if the pawn does not belong to him regardless of which player moves the token provided he has not already grabbed k pawns. 7 Discussion We introduce pawn games, a class of two-player turn-based games in which control of vertices changes dynamically throughout the game. Pawn games constitute a class of \f28 A Game of Pawns succinctly-represented turn-based games. We identify natural classes that are in PTIME. Our EXPTIME-hardness results are based on Lock & Key games, which we hope will serve as a framework for proving lower bounds. We mention directions for future research. First, we leave several open problems; e.g., for MVPP k-grabbing pawn games, we only show NP-hardness and membership in PSPACE. Second, we focused on reachability games. It is interesting to study pawn games with richer objectives such as parity or quantitative objectives. Quantitative objectives are especially appealing since one can quantify the benefit of controlling a pawn. Third, it is interesting to consider other pawn-transferring mechanisms and to identify properties of mechanisms that admit low-complexity results. Finally, grabbing pawns is a general concept and can be applied to more involved games like stochastic or concurrent games." + }, + { + "url": "http://arxiv.org/abs/2211.13626v1", + "title": "Bidding Graph Games with Partially-Observable Budgets", + "abstract": "Two-player zero-sum \"graph games\" are a central model, which proceeds as\nfollows. A token is placed on a vertex of a graph, and the two players move it\nto produce an infinite \"play\", which determines the winner or payoff of the\ngame. Traditionally, the players alternate turns in moving the token. In\n\"bidding games\", however, the players have budgets and in each turn, an auction\n(bidding) determines which player moves the token. So far, bidding games have\nonly been studied as full-information games. In this work we initiate the study\nof partial-information bidding games: we study bidding games in which a\nplayer's initial budget is drawn from a known probability distribution. We show\nthat while for some bidding mechanisms and objectives, it is straightforward to\nadapt the results from the full-information setting to the partial-information\nsetting, for others, the analysis is significantly more challenging, requires\nnew techniques, and gives rise to interesting results. Specifically, we study\ngames with \"mean-payoff\" objectives in combination with \"poorman\" bidding. We\nconstruct optimal strategies for a partially-informed player who plays against\na fully-informed adversary. We show that, somewhat surprisingly, the \"value\"\nunder pure strategies does not necessarily exist in such games.", + "authors": "Guy Avni, Ismael Jecker, Djordje Zikelic", + "published": "2022-11-24", + "updated": "2022-11-24", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.FL" + ], + "main_content": "Introduction We consider two-player zero-sum graph games; a fundamental model with applications, e.g., in multi-agent systems [2]. A graph game is played on a \ufb01nite directed graph as follows. 
A token is placed on a vertex and the players move it throughout the graph to produce an in\ufb01nite path, which determines the payoff of the game. Traditional graph games are turn-based: the players alternate turns in moving the token. Bidding games [17, 16] are graph games in which an \u201cauction\u201d (bidding) determines which player moves the token in each turn. The concrete bidding mechanisms that we consider proceed as follows. In each turn, both players simultaneously submit bids, where a bid is legal if it does not exceed the available budget. The higher bidder \u201cwins\u201d the bidding and moves the token. The mechanisms differ in their payment schemes, which are classi\ufb01ed according to two orthogonal properties. Who pays: in \ufb01rst-price bidding only the higher bidder pays the bid and in all-pay bidding both players pay their bids. Who is the recipient: in Richman bidding (named after David Richman) payments are made to the other player and in poorman bidding payments are made to the \u201cbank\u201d, i.e., the bid is lost. As a rule of thumb, bidding games under all-pay and poorman bidding are respectively technically more challenging than \ufb01rst-price and Richman bidding. More on this later. In terms of applications, however, we argue below that poorman bidding is often the more appropriate bidding mechanism. *This research was supported in part by ISF grant no. 1679/21, by the ERC CoG 863818 (ForM-SMArt), and the European Union\u2019s Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie Grant Agreement No. 665385. \u2020University of Haifa \u2021University of Warsaw \u00a7Institute of Science and Technology Austria (ISTA) 1 arXiv:2211.13626v1 [cs.GT] 24 Nov 2022 \fApplications. A central application of graph games is reactive synthesis [22]: given a speci\ufb01cation, the goal is to construct a controller that ensures correct behavior in an adversarial environment. Synthesis is solved by constructing a turn-based graph game in which Player 1 is associated with the controller and Player 2 with the environment, and searching for a winning Player 1 strategy. Bidding games extend the modeling capabilities of graph games. For example, they model ongoing and stateful auctions in which budgets do not contribute to the players\u2019 utilities. Advertising campaigns are one such setting: the goal is to maximize visibility using a pre-allocated advertising budget. By modeling this setting as a bidding game and solving for Player 1, we obtain a bidding strategy with guarantees against any opponent1. Maximizing visibility can be expressed as a mean-payoff objective (de\ufb01ned below). All-pay poorman bidding is particularly appealing since it constitutes a dynamic version of the wellknown Colonel Blotto games [11]. Rather than thinking of the budgets as money, we think of them as resources at the disposal of the players, like time or energy. Then, deciding how much to bid represents the effort that a player invests in a competition, e.g., investing time to prepare for a job interview, where the player that invests more wins the competition. Prior work \u2013 full-information bidding games. The central quantity in bidding games is the initial ratio between the players\u2019 budgets. Formally, for i \u2208{1, 2}, let Bi be Player i\u2019s initial budget. Then, Player 1\u2019s initial ratio is B1/(B1 + B2). 
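Before recalling prior results, it may help to make the four bidding mechanisms and the initial ratio concrete. The following minimal Python sketch resolves a single bidding round and updates the budgets accordingly; the function names, and the tie-breaking rule of letting Player 1 win ties, are our own assumptions, since the definitions above do not fix how ties are resolved.

def resolve_bidding(budget1, budget2, bid1, bid2, first_price=True, richman=True):
    # One bidding round: the higher bidder wins and moves the token.
    # first_price=True: only the winner pays; False (all-pay): both pay.
    # richman=True: payments go to the other player; False (poorman): to the bank.
    assert 0 <= bid1 <= budget1 and 0 <= bid2 <= budget2, "bids must be legal"
    winner = 1 if bid1 >= bid2 else 2   # ties broken in favor of Player 1 (assumption)
    pay1 = bid1 if (winner == 1 or not first_price) else 0.0
    pay2 = bid2 if (winner == 2 or not first_price) else 0.0
    budget1, budget2 = budget1 - pay1, budget2 - pay2
    if richman:
        budget1, budget2 = budget1 + pay2, budget2 + pay1
    return winner, budget1, budget2

def initial_ratio(b1, b2):
    # Player 1's initial ratio B1 / (B1 + B2)
    return b1 / (b1 + b2)

# e.g., resolve_bidding(3.0, 2.0, 1.5, 1.0) returns (1, 1.5, 3.5) under first-price Richman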
A random-turn game [21] with parameter p \u2208[0, 1] is similar to a bidding game only that instead of bidding, in each turn, we toss a coin with probability p that determines which player moves the token. Formally, a random-turn game is a special case of a stochastic game [14]. Qualitative objectives. In reachability games, each player is associated with a target vertex, the game ends once a target is reached, and the winner is the player whose target is reached. Reachability bidding games were studied in [17, 16]. It was shown that, for \ufb01rst-price reachability games, a threshold ratio exists, which, informally, is a necessary and suf\ufb01cient initial ratio for winning the game. Moreover, it was shown that \ufb01rst-price Richman-bidding games are equivalent to uniform random-turn games (and only Richman bidding); namely, the threshold ratio in a bidding game corresponds to the value of a uniform random-turn game. All-pay reachability games are technically more challenging. Optimal strategies might be mixed and may require sampling from in\ufb01nite-support probability distributions even in extremely simple games [8]. Mean-payoff games. Mean-payoff games are in\ufb01nite-duration quantitative games. Technically, each vertex of the graph is assigned a weight, and the payoff of an in\ufb01nite play is the long-run average sum of weights along the path. The payoff is Player 1\u2019s reward and Player 2\u2019s cost, thus we refer to them respectively as Max and Min. For example, consider the \u201cbowtie\u201d game G\u25b7 \u25c1, depicted in Fig. 1. The payoff in G\u25b7 \u25c1corresponds to the ratio of bidding that Max wins. Informally G\u25b7 \u25c1models the setting in which in each day a publisher sells an ad slot, and Max\u2019s objective is to maximize visibility: the number of days that his ad is displayed throughout the year. Unlike reachability games, intricate equivalences between mean-payoff bidding games and random-turn games are known for all the mechanisms described above [5, 6, 7, 9]. Example 1. We illustrate the equivalences between full-information bidding games and random-turn games. Consider the \u201cbowtie\u201d game G\u25b7 \u25c1(see Fig. 1). For p \u2208[0, 1], the random-turn game RT(G\u25b7 \u25c1, p) that uses a coin with bias p is depicted in Fig. 2. Its expected payoff is p. Suppose that the initial ratio is r \u2208(0, 1). Under \ufb01rst-price Richman-bidding, the optimal payoff in G\u25b7 \u25c1does not depend on the initial ratio: no matter what r is, the optimal payoff that Max can guarantee is arbitrarily close to 0.5, hence the equivalance with RT(G\u25b7 \u25c1, 0.5). Under \ufb01rst-price poorman bidding, the optimal payoff does depend on the initial ratio: roughly, the optimal payoff that Max can guarantee is r, hence the equivalence with RT(G\u25b7 \u25c1, r). For all-pay bidding, pure strategies are only \u201cuseful\u201d in all-pay poorman bidding and only when r > 0.5, where Max can guarantee an optimal payoff of 2r\u22121 r . The results extend to general strongly-connected games (see Thm. 11). \u25b3 1A worst-case modelling assumes that the other bidders cooperate against Player 1. 2 \f1 0 vMax vMin Figure 1: The mean-payoff game G\u25b7 \u25c1with the weights in the vertices. 1 0 vMax vMin p 1 \u2212p 1 \u2212p p Figure 2: The simpli\ufb01ed random-turn game RT(G\u25b7 \u25c1, p), for p \u2208[0, 1]. Our contributions \u2013 partial-information bidding games. 
In most auction domains, bidders are not precisely informed of their opponent\u2019s budget. Bidding games, however, have only been studied as fullinformation games. We initiate the study of bidding games in which the players are partially informed of the opponent\u2019s budget. Speci\ufb01cally, we study bidding games in which the two players\u2019 budgets are drawn from a known probability distribution, and the players\u2019 goal is to maximize their expected utility. We \ufb01rst show that the results on qualitative objectives as well as \ufb01rst-price Richman bidding transfer to the partial-information setting. We turn to study mean-payoff poorman-bidding games, which are signi\ufb01cantly more challenging. We focus on one-sided partial-information games in which only Player 2\u2019s budget is drawn from a probability distribution. Thus, Player 1 is partially informed and Player 2 is fully informed of the opponent\u2019s budget. We argue that one-sided partial-information games are practically well-motivated. Indeed, one-sided partial information is a worst-case modelling: the utility that an optimal strategy for Player 1 guarantees in the game, is a lower bound on the utility that it will guarantee when deployed against the concrete environment. We illustrate our results in the following example. Example 2. Consider the bowtie game G\u25b7 \u25c1(Fig. 1), where Max (the partially-informed player) starts with a budget of B and Min (the fully-informed player) starts with a budget that is drawn uniformly at random from supp(\u03b3) = {C1, C2}. We describe an optimal strategy for Max under \ufb01rst-price poorman bidding. Max carefully chooses an x \u2208[B \u00b7 C1 C2 , B] and divides his budget into two \u201cwallets\u201d; the \ufb01rst with budget x and the second with budget B \u2212x. He initially uses his \ufb01rst wallet to play an optimal full-information strategy assuming the initial budgets are x and C1, which guarantees a payoff of at least p1 = x C1+x. If Player 2 spends more than C1, i.e., her initial budget was in fact C2, then Player 1 proceeds to use his second wallet against Player 2\u2019s remaining budget, which guarantees a payoff of at least p2 = B\u2212x B\u2212x+C2\u2212C1 . Thus, the expected payoff is at least 0.5 \u00b7 (p1 + p2), and Max simply chooses an x that maximizes this expression. Note that the constraint that x \u2265B \u00b7 C1 C2 implies that p1 \u2265p2, thus Min has an incentive to play so that Max proceeds to use his second wallet. We show that this strategy is optimal, and extend the technique to obtain optimal strategies in general strongly-connected games for \ufb01rst-price and all-pay poorman bidding. Finally, we show that the optimal payoff that Min can guarantee in G\u25b7 \u25c1, is obtained by a surprisingly simple strategy. We show that the following Min strategy is optimal: when her initial budget is Ci, for i \u2208{1, 2}, Min follows an optimal full-information strategy for ratio B/(B + Ci). That is, she \u201creveals\u201d her true budget in the \ufb01rst round and cannot gain utility by hiding this information. The technical challenge is to show that this strategy is optimal. \u25b3 Our results show that contrary to turn-based, stochastic games, and full-information bidding games, there is a gap between the optimal payoffs that the players can guarantee with pure strategies. Thus, the value does not necessary exist in partial-information mean-payoff bidding games under pure strategies. Related work. 
The seminar book [3] studies the mean-payoff game G\u25b7 \u25c1under one-sided partial-information with a different semantic to the one we study. Let L or R denote the two vertices of G\u25b7 \u25c1. Min has partial information of the weights of L and R, which, before the game begins, are drawn from a known probability distribution. Max, the fully-informed player, knows the weights. In each turn, Max chooses L or R, followed by Min who either \u201caccepts\u201d or \u201crejects\u201d Max\u2019s choice, thus both players can affect the movement 3 \fof the token. The value in the game is shown to exist. Interestingly and similar in spirit to our results, there are cases in which Max cannot use his knowledge advantage and his optimal strategy reveals which of the two vertices he prefers. One-sided partial information have also been considered in turn-based graph games, e.g., [25, 24, 27]. Discrete bidding games were studied in [15]; namely, budgets are given in coins, and the minimal positive bid a player can make is a single coin. Tie-breaking is a signi\ufb01cant factor in such games [1]. Non-zero-sum bidding games were studied in [18]. See also the survey [4]. 2 Preliminaries Strategies in bidding games. A bidding game is played on a directed graph \u27e8V, E\u27e9. A strategy in any graph game is a function from histories to actions. In bidding games, a history consists of the sequence of vertices that were visited and bids made by the two players. We stress that the history does not contain the current state of the budgets. Rather, a player can compute his opponent\u2019s current budget based on the history of bids, if he knows her initial budget. We formalize the available budget following a history. For i \u2208{1, 2}, suppose the initial budget of Player i is Bi. For a history h, we de\ufb01ne the investments of Player i throughout h, denoted Invi(h). In all-pay bidding, Invi(h) is the sum bids made by Player i throughout h, and in \ufb01rst-price bidding, it is the sum only over the winning bids. We denote by Bi(h) Player i\u2019s available budget following h. Under Richman bidding, winning bids are paid to the opponent, thus Bi(h) = Bi \u2212Invi(h) + Inv3\u2212i(h). Under poorman bidding, winning bids are paid to the bank, thus Bi(h) = Bi \u2212Invi(h). Given a history, a strategy prescribes an action, which in a bidding game, is a pair \u27e8b, u\u27e9\u2208R\u00d7V , where b is a bid and u is the vertex to move to upon winning. We restrict the actions of the players following a history h so that (1) the bid does not exceed the available budget, thus following a history h, a legal bid for Player i is a bid in [0, Bi(h)], and (2) a player must choose a neighbor of the vertex that the token is placed on. We restrict attention to strategies that choose legal actions for all histories. Note that we consider only pure strategies and disallow mixed strategies (strategies that allow a random choice of action). De\ufb01nition 3. For i \u2208{1, 2}, we denote by Si(Bi) the set of legal strategies for Player i with an initial budget of Bi. Note that with a higher initial budget, there are more strategies to choose from, i.e., for B\u2032 i > Bi, we have Si(Bi) \u2286Si(B\u2032 i). The central quantity in bidding games is the initial ratio, de\ufb01ned as follows. De\ufb01nition 4. Budget ratio. When Player i\u2019s budget is Bi, for i \u2208{1, 2}, we say that Player i\u2019s ratio is Bi B1+B2 . Plays. 
Consider initial budgets B1 and B2 for the two players, two strategies f \u2208S1(B1) and g \u2208S2(B2), and an initial vertex v. The triple f, g, and v gives rise to a unique play, denoted play(v, f, g). The construction of play(v, f, g) is inductive and is intuitively obtained by allowing the players to play according to f and g. Initially, we place the token on v, thus the \ufb01rst history of the game is h = v. Suppose a history h has been played. Then, the next action that the players choose is respectively \u27e8u1, b1\u27e9= f(h) and \u27e8u2, b2\u27e9= g(h). If b1 > b2, then Player 1 wins the bidding and the token moves to u1, and otherwise Player 2 wins the bidding and the token moves to u2. Note that we resolve ties arbitrarily in favor of Player 2. The play continues inde\ufb01nitely. Since the players always choose neighboring vertices, each play corresponds to an in\ufb01nite path in \u27e8V, E\u27e9. For n \u2208N, we use playn(v, f, g) to denote its \ufb01nite pre\ufb01x of length n. We sometimes omit the initial vertex from the play when it is clear from the context. 4 \fObjectives. We consider zero-sum games. An objective assigns a payoff to a play, which can be thought of as Player 1\u2019s reward and Player 2\u2019s penalty. We thus sometimes refer to Player 1 as Max and Player 2 as Min. We denote by payoff(f, g, v) the payoff of the play play(f, g, v). Qualitative objectives. The payoff in games with qualitative objectives is in {\u22121, 1}. We say that Player 1 wins the play when the payoff is 1. We consider two qualitative objectives. (1) Reachability. There is a distinguished target vertex t and a play is winning for Player 1 iff it visits t. (2) Parity. Each vertex is labeled by an index in {1, . . . , d} and a play is winning for Player 1 iff the highest index that is Parity objectives are important in practice, e.g., reactive synthesis [22] is reducted to the problem of solving a (turn-based) parity games. Mean-payoff games. The quantitative objective that we consider is mean-payoff. Every vertex v in a mean-payoff game has a weight w(v) and the payoff of an in\ufb01nite play is the long-run average weight that it traverses. Formally, the payoff of an in\ufb01nite path v1, v2, . . . is lim infn\u2192\u221e1 n P 1\u2264i 0}. We restrict attention to \ufb01nite-support probability distributions. For i \u2208{1, 2}, the probability that Player i\u2019s initial budget is Bi \u2208supp(\u03b3i) is \u03b3i(Bi). De\ufb01nition 5. One-sided partial information. We say that a game has one-sided partial information when |supp(\u03b31)| = 1 and |supp(\u03b32)| > 1. We then call Player 1 the partially-informed player and Player 2 the fully-informed player. We turn to de\ufb01ne values in partial-information games. The intuition is similar to the full-information case only that each player selects a collection of strategies, one for each possible initial budget, and we take the expectation over the payoffs that each pair of strategies achieves. The \u03b4 in the following de\ufb01nition allows us to avoid corner cases due to ties in biddings and the \u03b5 is crucial to obtain the results on full-information mean-payoff bidding games. De\ufb01nition 6. (Values in partial-information bidding games). Consider a partial-information bidding game G = \u27e8V, E, \u03b1, \u03b2, \u03b3\u27e9. Suppose supp(\u03b2) = {B1, . . . , Bn} and supp(\u03b3) = {C1, . . . , Cn}. 
We de\ufb01ne Player 1\u2019s value, denoted val\u2193(G, \u03b2, \u03b3), and Player 2\u2019s value, denoted val\u2191(G, \u03b2, \u03b3), is de\ufb01ned symmetrically. We de\ufb01ne that val\u2193(G, \u03b2, \u03b3) = c \u2208R if for every \u03b4, \u03b5 > 0, \u2022 There is a collection \u0000fB \u2208S1(B + \u03b4) \u0001 B\u2208supp(\u03b2) of Player 1 strategies, such that for every collection \u0000gC \u2208S2(C) \u0001 C\u2208supp(\u03b3) of Player 2 strategies, we have P B,C \u03b2(B) \u00b7 \u03b3(C) \u00b7 payoff(fB, gC) \u2265c \u2212\u03b5. 5 \f\u2022 For every collection \u0000fB \u2208S1(B) \u0001 B\u2208supp(\u03b2) of Player 1 strategies, there is a collection \u0000gC \u2208S2(C+ \u03b4) \u0001 C\u2208supp(\u03b3) of Player 2 strategies such that P B,C \u03b2(B) \u00b7 \u03b3(C) \u00b7 payoff(fB, gC) \u2264c + \u03b5. Note that val\u2193(G, \u03b2, \u03b3) \u2264val\u2191(G, \u03b2, \u03b3) and when there is equality, we say that the value exists, and denote it by val(G, \u03b2, \u03b3). The value in mean-payoff games is often called the mean-payoff value. In mean-payoff games we use MP\u2193, MP\u2191, and MP instead of val\u2193, val\u2191, and val, respectively. When G is full-information and the budget ratio is r, we use MP(G, r) instead of writing the two budgets. 3 Partial-Information Qualitative First-Price Bidding Games In this section, we focus on \ufb01rst-price bidding and show that the value exists in partial-information bidding games with qualitative objectives. The proof adapts results from the full-information setting, which we survey \ufb01rst. De\ufb01nition 7. (Threshold ratios in full-information games). Consider a full-information \ufb01rst-price bidding game with a qualitative objective. Suppose that the sum of initial budgets is 1 and that the game starts at v. The threshold ratio in v, denoted Th(v), is a value t such that for every \u03b5 > 0: \u2022 Player 1 wins when his ratio is greater than Th(v); namely, when the initial budgets are t + \u03b5 and 1 \u2212t \u2212\u03b5. \u2022 Player 2 wins when Player 1\u2019s ratio is less than Th(v); namely, when the initial budgets are t \u2212\u03b5 and 1 \u2212t + \u03b5. Existence of threshold ratios for full-information reachability games was shown in [17, 16] and later extended to full-information parity games in [5, 6]. Theorem 8. [17, 16, 5, 6] Threshold ratios exist in every vertex of a parity game. The following theorem, extends these results to the partial-information setting. Theorem 9. Consider a partial-information parity \ufb01rst-price bidding game G = \u27e8V, E, \u03b1, \u03b2, \u03b3\u27e9and a vertex v \u2208V . Let W = {\u27e8B, C\u27e9: B \u2208supp(\u03b2), C \u2208supp(\u03b3), and Th(v) < B B+C }. Then, the value of G in v is P \u27e8B,C\u27e9\u2208W \u03b2(B) \u00b7 \u03b3(C). Proof. Consider the following collection of strategies for Player 1. For every B \u2208supp(\u03b2), let C \u2208supp(\u03b3) be the maximal initial budget such that Player 1 wins with initial budgets B and C from v. That is, C is the maximal element such that B B+C > Th(v). We \ufb01x Player 1\u2019s strategy for initial budget B to be a winning strategy f against C. It is not hard to show that f wins against any Player 2 strategy g \u2208S2(C\u2032), for C\u2032 < C. 
To show that Max cannot guarantee a higher payoff, we consider the dual collection of strategies for Min: for every C \u2208supp(\u03b3), Min selects the maximal B \u2208supp(\u03b2) such that B B+C \u2264Th(v), and plays according to a winning strategy for these budgets. Recall that we let Min win bidding ties, thus she wins the game when B B+C = Th(v). Similar to the above, Min wins for initial budgets C and B\u2032 < B. To conclude, for each pair \u27e8B, C\u27e9\u2208supp(\u03b2) \u00d7 supp(\u03b3), if B B+C > Th(v), Player 1 wins, and if B B+C \u2264Th(v), Player 2 wins. Both players play irrespective of the opponent\u2019s strategy, hence the theorem follows. 6 \f4 Partial-Information Mean-Payoff Bidding Games In this section we study mean-payoff bidding games. Throughout this section we focus on games played on strongly-connected graphs. We start by surveying results on full-information games. The most technicallychallenging results concern one-sided partial-information poorman-bidding games. We \ufb01rst develop optimal strategies for the partially-informed player, and then show that the value does not necessary exist under pure strategies. 4.1 Full-information mean-payoff bidding games We show equivalences between bidding games and a class of stochastic games [14] called random-turn games, which are de\ufb01ne formally as follows. De\ufb01nition 10. (Random-turn games). Consider a strongly-connected mean-payoff bidding game G. For p \u2208[0, 1], the random-turn game that corresponds to G w.r.t. p, denoted RT(G, p), is a game in which instead of bidding, in each turn, we toss a (biased) coin to determine which player moves the token: Player 1 and Player 2 are respectively chosen with probability p and 1 \u2212p. Formally, RT(G, p) is constructed as follows. Every vertex v in G, is replaced by three vertices vN, v1, and v2. The vertex vN simulates the coin toss: it has an outgoing edge with probability p to v1 and an edge with probability 1\u2212p to v2. For i \u2208{1, 2}, vertex vi simulates Player i winning the coin toss: it is controlled by Player i and has an outgoing edge to uN, for every neighbor u of v. The weights of vN, v1, and v2 coincide with the weight of v. The mean-payoff value of RT(G, p), denoted MP \u0000RT(G, p) \u0001 , is the optimal expected payoff that the two players can guarantee, and it is known to exist [23]. Since G is strongly-connected, MP \u0000RT(G, p) \u0001 does not depend on the initial vertex. For a full-information game G and a ratio r \u2208(0, 1), recall that MP(G, r) denotes the optimal payoff that Max can guarantee with initial ratio r. We state the equivalences between the two models. Theorem 11. Let G be a strongly-connected full-information mean-payoff bidding game. \u2022 First-price Richman bidding [5]. The optimal payoff that Max can guarantee with a pure strategy does not depend on the initial ratio: for every initial ratio r, we have MP(G, r) = MP \u0000RT(G, 0.5) \u0001 . \u2022 First-price poorman bidding [6]. The optimal payoff that Max can guarantee with pure strategy and ratio r coincides with the value of a random-turn game with bias r: for every initial ratio r, we have MP(G, r) = MP \u0000RT(G, r) \u0001 . \u2022 All-pay poorman bidding [9]. The optimal payoff that Max can guarantee with a pure strategy and ratio r > 0.5 coincides with the value of a random-turn game with bias (2r \u22121)/r: for every initial ratio r > 0.5, we have MP(G, r) = MP \u0000RT(G, (2r \u22121)/r) \u0001 . 
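As a small illustration of Theorems 9 and 11, the sketch below computes the partial-information parity value from a threshold ratio, and the full-information mean-payoff values for the three mechanisms. The helper mp_rt is an assumption of ours: it stands for a routine returning MP(RT(G, p)), which we treat as given.

def parity_value(threshold, beta, gamma):
    # Theorem 9: the value at v is the probability mass of budget pairs (B, C)
    # with B / (B + C) strictly above the threshold ratio Th(v).
    return sum(pB * pC
               for B, pB in beta.items()
               for C, pC in gamma.items()
               if B / (B + C) > threshold)

def full_information_mean_payoff(mp_rt, r, mechanism):
    # Theorem 11, for a strongly-connected game with initial ratio r.
    if mechanism == "first-price Richman":
        return mp_rt(0.5)                    # independent of r
    if mechanism == "first-price poorman":
        return mp_rt(r)
    if mechanism == "all-pay poorman":
        assert r > 0.5, "Theorem 11 states this case only for r > 0.5"
        return mp_rt((2 * r - 1) / r)
    raise ValueError(mechanism)

In the bowtie game the random-turn value is simply the bias, i.e., MP(RT(G, p)) = p, so these expressions specialize to the payoffs 0.5, r, and (2r - 1)/r quoted in Example 1.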
Since the optimal payoff under \ufb01rst-price Richman bidding depends only on the structure of the game and not on the initial ratios, the result easily generalizes to partial-information games. Consider two budget distributions \u03b2 and \u03b3 for Min and Max, respectively. Indeed, when Min\u2019s initial budget is B \u2208supp(\u03b2), playing optimally against any C \u2208supp(\u03b3) results in the same payoff, and similarly for Max. We thus conclude the following. Theorem 12. Consider a strongly-connected \ufb01rst-price Richman mean-payoff bidding game G. For any two budget distributions \u03b2 and \u03b3 for the two players, we have MP\u2193(G, \u03b2, \u03b3) = MP\u2191(G, \u03b2, \u03b3) = MP \u0000RT(G, 0.5) \u0001 . Remark 13. (All-pay Richman bidding). It was shown in [9] that in all-pay Richman bidding games, pure strategies are \u201cuseless\u201d: no matter what the initial ratio is, Max cannot guarantee a positive payoff with a pure strategy. The study of mean-payoff all-pay Richman-bidding games is thus trivial in the partial-information setting as well. 7 \f4.2 The value of the partially-informed player We turn to study partial-information mean-payoff bidding games under poorman bidding, where we focus on one-sided partial information. We arbitrarily set Max to be partially-informed and Min to be fully-informed. 4.2.1 First-price bidding. Fix a strongly-connected mean-payoff game G. Suppose that Max\u2019s budget is B and Min\u2019s budget is chosen from a \ufb01nite probability distribution \u03b3 with supp(\u03b3) = {C1, . . . , Cn} and Ci < Ci+1, for 1 \u2264i < n. We generalize the technique that is illustrated in Example 2. Max carefully chooses increasing x1, . . . , xn, where xn = B. He maintains two \u201caccounts\u201d: a spending account from which he bids and a savings account. Initially, the spending account has a budget of x1 and the savings account, a budget of B \u2212x1. Max plays \u201coptimistically\u201d. He \ufb01rst plays in hope that Min\u2019s budget is C1 with a budget of x1. If Min does not spend C1, the payoff is as in full-information games, namely at least p1 = MP \u0000RT(G, x1 x1+C1 ) \u0001 . Otherwise, Min spends at least C1 and Max transfers budget from his savings account to his spending account so that the saving account has B \u2212x2 and the spending account has at least x2 \u2212x1. Note that if Min\u2019s initial budget was indeed C2, at this point she is left with a budget of at most C2 \u2212C1. If Min does not spend C2 \u2212C1, by following a full-information optimal strategy, Max can guarantee a payoff of at least p2 = MP \u0000RT(G, x2\u2212x1 x2\u2212x1+C2\u2212C1 ) \u0001 . The de\ufb01nition of p3, . . . , pn is similar. Max chooses x1, . . . , xn so that p1 \u2265. . . \u2265pn. Thus, when Min\u2019s initial budget is Ci, she has an incentive to play so that Max\u2019s spending account will reach xi and the payoff will be at least pi. We call such a choice of x1, . . . , xn admissible and formally de\ufb01ne it as follows. De\ufb01nition 14. Admissible sequences. Let G be a poorman mean-payoff bidding game. Let B be a budget of Max and \u03b3 be a \ufb01nite budget distribution of Min with supp(\u03b3) = {C1, . . . , Cn}. A sequence (xi)1\u2264i\u2264n of budgets is called admissible with respect to B and \u03b3 if 0 \u2264x1 \u2264x2 \u2264\u00b7 \u00b7 \u00b7 \u2264xn = B and p1 \u2265p2 \u2265 . . . 
\u2265pn, where pi = MP \u0010 RT \u0010 G, xi \u2212xi\u22121 xi \u2212xi\u22121 + Ci \u2212Ci\u22121 \u0011\u0011 (4.1) for each 1 \u2264i \u2264n, with x0 = 0 and C0 = 0. We denote by ADM(B, \u03b3) the set of all admissible sequences with respect to B and \u03b3. The main result of this section is stated in the following theorem. The upper bound is proven in Lemma 18 and the lower bound in Lemma 19. Theorem 15 (Mean-payoff value of the partially-informed player). Consider a strongly-connected \ufb01rstprice poorman mean-payoff bidding game G. Let B be the initial budget of Max and \u03b3 be a \ufb01nite budget distribution of Min with supp(\u03b3) = {C1, . . . , Cn}. Then MP\u2193(G, \u03b2, \u03b3) = max (xi)1\u2264i\u2264n\u2208ADM(B,\u03b3) Val(x1, . . . , xn), (4.2) where Val(x1, . . . , xn) = n X i=1 \u03b3(Ci) \u00b7 MP \u0010 RT \u0010 G, xi \u2212xi\u22121 xi \u2212xi\u22121 + Ci \u2212Ci\u22121 \u0011\u0011 (4.3) with x0 = 0 and C0 = 0. We point to some interesting properties of Max\u2019s value: Remark 16. Consider the bowtie game (Fig. 1) and assume Max\u2019s budget is \ufb01xed to B = 1 and Min\u2019s budget is drawn uniformly at random from {C1, C2}. 8 \f\u2022 When C1 = 1 and C2 = 2, the maximum is obtained at x = 0.5, thus Max\u2019s optimal expected payoff is 1 3 = B B+C2 . We note that Max has a very simple optimal strategy in this case: \u201cassume the worst\u201d on Min\u2019s initial budget. That is, play according to an optimal strategy for initial budgets B and C2. \u2022 When C1 = 1, and C2 = 5, the maximum is obtained at x = 0. This is the dual of the case above. Max can \u201cassume the best\u201d on Min\u2019s initial budget and play according to an optimal strategy for budgets B and C1. When Min\u2019s budget is C1, this strategy guarantees a payoff of B B+C1 . But when Min\u2019s budget is C2, the strategy cannot guarantee a payoff above 0. Thus, the strategy guarantees an expected payoff of 1 4 = 1 2 \u00b7 B B+C1 . \u2022 There are cases in which Max\u2019s optimal strategy is not one of the trivial cases above. When C1 = 1 and C2 = 3, Max\u2019s optimal payoff is 1 8(5 \u22122 \u00b7 \u221a 2) \u22480.271, which is strictly larger than both 1 4 = 1 2 \u00b7 B B+C1 and 1 4 = B B+C2 . \u25b3 De\ufb01nition 17. We denote the right-hand-side of eq. (4.2) by Val. Lemma 18 (Upper bound). Consider a strongly-connected \ufb01rst-price poorman mean-payoff bidding game G. Let B be the initial budget of Max and \u03b3 be a \ufb01nite budget distribution of Min with supp(\u03b3) = {C1, . . . , Ck}. Then, for every \u03b5 > 0, Max has a strategy that guarantees an expected mean-payoff of at least Val \u2212\u03b5. Proof. Fix \u03b5 > 0. For each (xi)1\u2264i\u2264n \u2208ADM(B, \u03b3), we construct a Max strategy fx1,...,xn that guarantees a payoff of at least Val(x1, . . . , xn) \u2212\u03b5 as follows: \u2022 Max uses portion x1 of his budget to play an \u03b5-optimal strategy against Min with budget C1. This is continued as long as Min spends at most C1. \u2022 For each 1 \u2264i \u2264n \u22121, once Min\u2019s investments exceed Ci, Max starts using portion xi+1 \u2212xi of his budget and plays according to an \u03b5-optimal strategy against budget Ci+1 \u2212Ci of Min. This is continued as long as Min\u2019s investments do not exceed Ci+1. Lemma 18 follows from Claim 1 below, which generalizes the analysis of Example 2 to any Min \ufb01nite budget distribution. 
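For intuition, the following sketch evaluates Val(x1, . . . , xn) from eq. (4.3) and, for a uniform two-point distribution as in Example 2 and Remark 16, searches numerically for the best admissible split. As before, mp_rt(p) is an assumed helper standing for MP(RT(G, p)); for the bowtie game under first-price poorman bidding one can take mp_rt(p) = p.

def seq_value(xs, Cs, probs, mp_rt):
    # Val(x_1, ..., x_n) of eq. (4.3), with x_0 = 0 and C_0 = 0;
    # assumes C_1 < C_2 < ... < C_n, so every denominator below is positive.
    val, x_prev, c_prev = 0.0, 0.0, 0.0
    for x, C, pr in zip(xs, Cs, probs):
        dx, dC = x - x_prev, C - c_prev
        val += pr * mp_rt(dx / (dx + dC))
        x_prev, c_prev = x, C
    return val

def best_two_point_split(B, C1, C2, mp_rt, steps=10_000):
    # Brute-force search over x = x_1 (with x_2 = B) for Min's budget uniform on {C1, C2}.
    best = 0.0
    for i in range(steps + 1):
        x = B * i / steps
        p1 = mp_rt(x / (x + C1))
        p2 = mp_rt((B - x) / (B - x + C2 - C1))
        if p1 >= p2:                       # admissibility: p_1 >= p_2
            best = max(best, 0.5 * (p1 + p2))
    return best

With mp_rt(p) = p, B = 1, C1 = 1, and C2 = 3, the search returns roughly 0.271, matching the third item of Remark 16.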
As in the example, it is crucial to select xi such that Min has an incentive to \u201creveal\u201d (when she can), that her budget is larger than Ci. Formally, recall that pi = (xi \u2212xi\u22121)/(xi \u2212xi\u22121 + Ci \u2212 Ci\u22121). Intuitively, pi can be thought of as the payoff when Max plays according to the strategy above and Min\u2019s budget is Ci. Then, we require that p1 \u2265\u00b7 \u00b7 \u00b7 \u2265pn. Claim 1. For each (xi)1\u2264i\u2264n \u2208ADM(B, \u03b3), Max ensures a payoff of at least Val(x1, . . . , xn)\u2212\u03b5 by playing according to the strategy fx1,...,xn. To prove Claim 1, \ufb01x a strategy g of Min and consider play(fx1,...,xn, g). Denote by c the highest value of the budget lost by Min during the course of the play, and let 1 \u2264i \u2264n be such that Ci\u22121 < c \u2264Ci. Then, by the construction of fx1,...,xn, the payoff of the play is at least pi \u2212\u03b5. Since we assumed that p1 \u2265p2 \u2265\u00b7 \u00b7 \u00b7 \u2265pn and since MP \u0000RT(G, p) \u0001 is a monotonically decreasing function in p, it follows that the payoff of play(fx1,...,xn, g) is at least pi \u2212\u03b5 if Min\u2019s initial budget is Ci. Therefore, as the probability of Min\u2019s initial budget being Ci is \u03b3(Ci), we conclude that the expected payoff of play(fx1,...,xn, g) is at least Val(x1, . . . , xn) \u2212\u03b5. Since the strategy g of Min was arbitrary, Claim 1 follows. Recall that Max guarantees an expected payoff of c \u2208R if, intuitively, he can reveal the strategy that he plays according to and no matter how Min responds, the expected payoff is at least c. Thus, in order to show a lower bound on Max\u2019s value, we show that no matter which strategy Max chooses, Min can respond in a way that guarantees a payoff of at most Val + \u03b5. Formally we have the following. 9 \fLemma 19 (Lower bound). Given \u03b5 > 0 and a strategy f of Max, there exist strategies g1 \u2208S(C1), . . . , gn \u2208 S(Cn) of Min such that Pn i=1 \u03b3(Ci) \u00b7 payoff(f, gi) \u2264Val + \u03b5. Proof. Let \u03b5 > 0 and suppose that Max plays according to a strategy f. As a response, for each 1 \u2264i \u2264n, when Min\u2019s initial budget is Ci, she selects an \u03b5-optimal response strategy gi \u2208S2(Ci) against f. We show that the choice of g1, . . . , gn satis\ufb01es the claim. Intuitively, we \ufb01nd an admissible sequence x1, . . . , xn and a corresponding \u201cwallet-based\u201d strategy fx1,...,xn as constructed in the proof of Lemma 18, and show that fx1,...,xn achieves a payoff no worse than f against g1, . . . , gn. The proof follows since n X i=1 \u03b3(Ci) \u00b7 payoff(f, gi) = Val(x1, . . . , xn) + \u03b5 \u2264Val + \u03b5. To construct the admissible sequence (xi)1\u2264i\u2264n \u2208ADM(B, \u03b3), we set xn = B and de\ufb01ne the remaining xi\u2019s as follows. Let pi = payoff(f, gi) for each 1 \u2264i \u2264n. Since C1 < \u00b7 \u00b7 \u00b7 < Cn, we have p1 \u2265 \u00b7 \u00b7 \u00b7 \u2265pn. By Theorem 11, we have MP(RT(G, 0)) \u2264pn \u2264MP(RT(G, B B+Cn )). On the other hand, it is known [12, 26] that the value MP(RT(G, p)) is a continuous function in p. Hence, there exists B \u00b7 Cn\u22121 Cn \u2264 x \u2264B such that pn = MP(RT(G, B\u2212x B\u2212x+Cn\u2212Cn\u22121 )). We set xn\u22121 to be the largest such x. We claim that, when the initial budget of Min is Cn\u22121, Max does not spend more than xn\u22121 in the play(f, gn\u22121). 
Indeed, suppose towards contradiction that Max spends x\u2032 > xn\u22121 while playing against gn\u22121. Then, if the initial budget of Min was Cn and Min used portion Cn\u22121 of her budget to play according to gn\u22121, Max would eventually be left with a budget of B \u2212x\u2032 < B \u2212xn\u22121 and Min would be left with at least Cn \u2212Cn\u22121. Thus, Min could play optimally for an initial budget of at least Cn \u2212Cn\u22121 against a Max budget smaller than B \u2212x\u2032, to ensure a payoff of at most MP(RT(G, B\u2212x\u2032 B\u2212x\u2032+Cn\u2212Cn\u22121 )) \u2264 MP(RT(G, B\u2212xn\u22121 B\u2212xn\u22121+Cn\u2212Cn\u22121 )) = pn. This would contradict either the optimality of gn in the case of strict inequality, or the maximality of xn\u22121 in the case of equality. We thus conclude that Max spends at most xn\u22121 in play(f, gn\u22121). Next, we de\ufb01ne xn\u22122. Note that the fact that Max spends at most xn\u22121 in play(f, gn\u22121) also implies that MP(RT(G, 0)) \u2264pn\u22121 \u2264MP(RT(G, xn\u22121/(xn\u22121 + Cn\u22121)). Thus, as MP(RT(G, p)) is continuous in p, there exists xn\u22121 \u00b7 Cn\u22122/Cn\u22121 \u2264x \u2264xn\u22121 with pn\u22121 = MP(RT(G, xn\u22121\u2212x xn\u22121\u2212x+Cn\u22121\u2212Cn\u22122 )). Set xn\u22122 to be the largest such x. Then, the same argument as above shows that Max does not lose more than xn\u22122 in the play(f, gn\u22122). We may then inductively repeat this procedure in order to de\ufb01ne xn\u22123, . . . , x1. Note that this results in a sequence 0 \u2264x1 \u2264x2 \u2264\u00b7 \u00b7 \u00b7 \u2264xn = B which by construction satis\ufb01es eq. (4.1) for each 1 \u2264i \u2264n. Since we already showed that p1 \u2265\u00b7 \u00b7 \u00b7 \u2265pn, it follows that (xi)1\u2264i\u2264n \u2208ADM(B, \u03b3). 4.2.2 All-pay poorman bidding We extend the technique in the previous section to all-pay poorman bidding. In order to state our results formally, we need to rede\ufb01ne the notion of admissible sequences since the optimal payoff that Max can guarantee under all-pay bidding differs from the payoff that he can guarantee under \ufb01rst-price bidding. Analogously to Def. 14 but now under all-pay bidding, we say that a sequence (xi)1\u2264i\u2264n of budgets is called admissible with respect to a budget B of Max and a budget distribution \u03b3 of Min if 0 \u2264x1 \u2264x2 \u2264 \u00b7 \u00b7 \u00b7 \u2264xn = B and p1 \u2265p2 \u2265. . . \u2265pn, where now pi = MP \u0010 RT \u0010 G, \u0010 1 \u2212Ci \u2212Ci\u22121 xi \u2212xi\u22121 \u0011 \u00b7 I \u0010 xi \u2212xi\u22121 > Ci \u2212Ci\u22121 \u0011\u0011\u0011 for each 1 \u2264i \u2264n, with x0 = 0 and C0 = 0. Here, I is an indicator function that evaluates to 1 if the input logical formula is true, and to 0 if it is false. We are now ready to state our result on all-pay poorman mean-payoff bidding games. 10 \fTheorem 20 (Mean-payoff value of the partially-informed player). Consider a strongly-connected all-pay poorman mean-payoff bidding game G. Let B be the initial budget of Max and \u03b3 be a \ufb01nite budget distribution of Min with supp(\u03b3) = {C1, . . . , Cn}. Then MP\u2193(G, \u03b2, \u03b3) = max (xi)1\u2264i\u2264n\u2208ADM(B,\u03b3) Val(x1, . . . , xn), (4.4) where Val(x1, . . . , xn) = Pn i=1 \u03b3(Ci) \u00b7 MP \u0010 RT \u0010 G, \u0010 1 \u2212Ci\u2212Ci\u22121 xi\u2212xi\u22121 \u0011 \u00b7I \u0010 xi \u2212xi\u22121 > Ci \u2212Ci\u22121 \u0011\u0011\u0011 with x0 = 0 and C0 = 0 and I an indicator function. 
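The only change relative to the first-price case is the bias that is fed to the random-turn game. A one-line sketch of the all-pay poorman version of p_i, using the same assumed helper mp_rt for MP(RT(G, p)):

def all_pay_poorman_pi(dx, dC, mp_rt):
    # dx = x_i - x_{i-1}, dC = C_i - C_{i-1}; the indicator makes the bias 0 unless dx > dC.
    return mp_rt(1.0 - dC / dx) if dx > dC else mp_rt(0.0)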
We introduce the following notation: De\ufb01nition 21. We denote the right-hand-side of eq. (4.4) by Val. The proof of the upper bound is similar to the proof for \ufb01rst-price poorman bidding and we include it for completeness. Lemma 22 (Upper bound). Consider a strongly-connected all-pay poorman mean-payoff bidding game G. Let B be the initial budget of Max and \u03b3 be a \ufb01nite budget distribution of Min with supp(\u03b3) = {C1, . . . , Ck}. Then, for every \u03b5 > 0, Max has a strategy that guarantees an expected mean-payoff of at least Val \u2212\u03b5. Proof. Fix \u03b5 > 0. For each (xi)1\u2264i\u2264n \u2208ADM(B, \u03b3), we construct a Max strategy fx1,...,xn that guarantees a payoff of at least Val(x1, . . . , xn) \u2212\u03b5 as follows: \u2022 Max uses portion x1 of his budget to play an \u03b5-optimal strategy against Min with budget C1. This is continued as long as Min spends at most C1. \u2022 For each 1 \u2264i \u2264n \u22121, once Min\u2019s investments exceed Ci, Max starts using portion xi+1 \u2212xi of his budget and plays according to an \u03b5-optimal strategy against budget Ci+1 \u2212Ci of Min. This is continued as long as Min\u2019s investments do not exceed Ci+1. Lemma 22 follows immediately from Claim 1 below. Recall that, for all-pay poorman mean-payoff bidding games, we de\ufb01ned pi = MP(RT(G, (1 \u2212Ci\u2212Ci\u22121 xi\u2212xi\u22121 ) \u00b7 I(xi \u2212xi\u22121 > Ci \u2212Ci\u22121))). Claim 1. For each (xi)1\u2264i\u2264n \u2208ADM(B, \u03b3), Max ensures a payoff of at least Val(x1, . . . , xn) \u2212\u03b5 by playing according to the strategy fx1,...,xn. To prove Claim 1, \ufb01x a strategy g of Min and consider play(fx1,...,xn, g). Denote by c the highest value of the budget lost by Min during the course of the play, and let 1 \u2264i \u2264n be such that Ci\u22121 < c \u2264Ci. Then, by the construction of fx1,...,xn, the payoff of the play is at least pi \u2212\u03b5. Since we assumed that p1 \u2265p2 \u2265\u00b7 \u00b7 \u00b7 \u2265pn and since MP \u0000RT(G, p) \u0001 is a monotonically decreasing function in p, it follows that the payoff of play(fx1,...,xn, g) is at least pi \u2212\u03b5 if Min\u2019s initial budget is Ci. Therefore, as the probability of Min\u2019s initial budget being Ci is \u03b3(Ci), we conclude that the expected payoff of play(fx1,...,xn, g) is at least Val(x1, . . . , xn) \u2212\u03b5. Since the strategy g of Min was arbitrary, Claim 1 follows. The proof of the lower bound is also similar to the proof for \ufb01rst-price poorman bidding but it requires care. In particular, if the initial budget of Min is Ci > B, then Min can guarantee an arbitrarily small payoff against any strategy of Max according to Theorem 11. We need to take this into account when constructing an admissible sequence x1, . . . , xn and a corresponding \u201cwallet-based\u201d strategy fx1,...,xn. Lemma 23 (Lower bound). Given \u03b5 > 0 and a strategy f of Max, there exist strategies g1 \u2208S(C1), . . . , gn \u2208 S(Cn) of Min such that Pn i=1 \u03b3(Ci) \u00b7 payoff(f, gi) \u2264Val + \u03b5. 11 \fProof. Let \u03b5 > 0 and suppose that Max plays according to a strategy f. As a response, for each 1 \u2264i \u2264n, when her initial budget is Ci, Min selects a strategy gi \u2208S2(Ci) that is \u03b5-optimal against f. We show that the choice of g1, . . . , gn satis\ufb01es the claim. First, if B < Ci for each 1 \u2264i \u2264n, then from eq. (4.2) and eq. (4.3) we see that Val = 0. 
On the other hand, Max cannot guarantee any payoff better than 0 against any possible budget of Min, thus for each i and for each strategy of Max there exists a response strategy of Min that ensures payoff of at most \u03b5. Therefore, by our choice of g1, . . . , gn we deduce that Pn i=1 \u03b3(Ci) \u00b7 payoff(f, gi) \u2264Pn i=1 \u03b3(Ci) \u00b7 \u03b5 = \u03b5 = Val + \u03b5, as desired. Now, assume that there exists some Ci < B and let i\u2217be the largest such index. To prove that our choice of g1, . . . , gn satis\ufb01es the claim, we \ufb01nd an admissible sequence x1, . . . , xn and a corresponding \u201cwallet-based\u201d strategy fx1,...,xn as constructed in the proof of Lemma 22, and show that fx1,...,xn achieves a payoff no worse than f against g1, . . . , gn. The proof follows since n X i=1 \u03b3(Ci) \u00b7 payoff(f, gi) \u2264Val(x1, . . . , xn) + \u03b5 \u2264Val + \u03b5. 4.3 The mean-payoff value of the fully-informed player under \ufb01rst-price poorman bidding In this section we identify the optimal expected payoff that the fully-informed player can guarantee in the bowtie game (Fig. 1) under \ufb01rst-price bidding. Suppose that Max\u2019s initial budget is B and Min\u2019s initial budget is drawn from a distribution \u03b3. Consider the following collection of naive strategies for Min: when her initial budget is C \u2208supp(\u03b3), Min plays according to an optimal full-information strategy for the ratio B B+C . We \ufb01nd it surprising that this collection of strategies is optimal for Min in the bowtie game. The technical challenge in this section is the lower bound. This result complements Thm. 15: we characterize both Min and Max\u2019s values in the bowtie game when the players are restricted to use pure strategies. We show, somewhat unexpectedly, that the two values do not necessarily coincide. In order to state the result formally, we need the following de\ufb01nition. Intuitively, the potential of \u27e8B, \u03b3\u27e9 is the optimal expected payoff when Min plays according to the collection of naive strategies described above. De\ufb01nition 24. (Potential). Given a budget B \u2208R of Max and a budget distribution \u03b3 with support supp(\u03b3) = {C1, C2, . . . , Ck} of Min, we de\ufb01ne Pot(B, \u03b3) = Pk j=1 \u03b3(Cj) \u00b7 B B+Cj . The main result in this section is given in the following theorem, whose proof follows from Lemmas 27 and 28. Theorem 25 (Mean-payoff value of the fully-informed player). Consider the bowtie game G\u25b7 \u25c1. Let B be the initial budget of Max and \u03b3 be a \ufb01nite budget distribution of Min with supp(\u03b3) = {C1, C2, . . . , Ck}. Then, MP\u2191(G, B, \u03b3) = Pot(B, \u03b3) = k X j=1 \u03b3(Cj) \u00b7 B B + Cj . (4.5) Before proving the theorem, we note the following. Remark 26. (Inexistence of a value). Our result implies that the value in partial-information mean-payoff \ufb01rst-price poorman bidding games under pure strategies is not guaranteed to exist. Indeed, consider G\u25b7 \u25c1 12 \fwith B = 1 and \u03b3 that draws Min\u2019s budget uniformly at random from {1, 2}. By Thm. 15, one can verify that the optimal choice of x is 1, thus MP\u2193(G\u25b7 \u25c1, B, \u03b3) = 1 3. On the other hand, by Thm. 25, we have MP\u2191(G\u25b7 \u25c1, B, \u03b3) = 5 12. \u25b3 The upper bound is obtained when Min reveals her true budget immediately and plays according to the strategies described above. The following lemma follows from results on full-information games (Thm. 11). Lemma 27 (Upper bound). 
For every \u03b5 > 0, Min has a collection of strategies ensuring an expected payoff smaller than Pot(B, \u03b3) + \u03b5. We proceed to the more challenging lower bound and show that there are no Min strategies that perform better than the naive strategy above. Lemma 28 (Lower bound). For every \u03b5 > 0 and for every collection (gj \u2208SMin(Cj))1\u2264j\u2264k of Min strategies, Max has a strategy ensuring an expected payoff greater than Pot(B, \u03b3) \u2212\u03b5. Proof. Let \u03b5 > 0, and let (gj \u2208SMin(Cj))1\u2264j\u2264k be a collection of Min strategies. We construct a counter strategy f of Max ensuring an expected payoff greater than Pot(B, \u03b3) \u2212\u03b5. The proof is by induction over the size k of the support of \u03b3. Obviously, if k = 1, Max has perfect information and can follow a fullinformation optimal strategy to guarantee a payoff of Pot(B, \u03b3) = B B+C1 (Thm. 11). So suppose that k > 1, and that the statement holds for every budget distribution of Min with a support strictly smaller than k. Max carefully chooses a small part x \u2264B of his budget and a part y \u2264C1 of Min\u2019s budget. He plays according to a full-information strategy f for initial budgets x and y. This can result in three possible outcomes: (O1) Min never uses more than y: the payoff is x x+y as in full-information games; (O2) Min reveals her true initial budget, thus Max can distinguish between the case that Min\u2019s budget is Ci and Cj, and by the induction hypothesis he can ensure an expected payoff of Pot(B \u2212x, \u03b3) using his remaining budget; (O3) Min does not reveal her true initial budget and spends more than y: Max\u2019s leftover budget is greater than B \u2212x and, for 1 \u2264j \u2264k, when Min\u2019s budget is Cj, she has Cj \u2212y, and Max re-starts the loop by selecting a new x. We show that Max can choose x and y in a way that guarantees that the payoffs obtained in the \ufb01rst two outcomes are greater than the desired payoff Pot(B, \u03b3) \u2212\u03b5. Also, outcome O3 can occur only \ufb01nitely many times and the potential there does not decrease. Thus, O1 or O2 occur, ensuring a payoff of at least Pot(B, \u03b3) \u2212\u03b5. Formally, we describe a sequence (\u03c0i, Bi, \u03b3i)0\u2264i\u2264m of con\ufb01gurations comprising of a history \u03c0i consistent with every strategy (gj)1\u2264j\u2264k, the budget Bi of Max after \u03c0i, and the budget distribution \u03b3i of Min with supp(\u03b3i) = {Ci 1, Ci 2, . . . , Ci k} following \u03c0i. Tuple i represents the budget and budget distribution of the players following i \u22121 choices of outcome O3. Let \u03bb = 1 \u2212\u03b5 2 and \u03c1 = 1 Pot(B,\u03b3) \u22121. We start with (\u03c00, B0, \u03b30) = (v, B, \u03b3) with v an initial vertex, and we show recursively how Max can update this tuple while ensuring that the following four properties are satis\ufb01ed: P1: The history \u03c0i is consistent with every (gj)1\u2264j\u2264k; P2: Max spends his budget suf\ufb01ciently slowly: Bi \u2265\u03bbiB; P3: Min spends her budget suf\ufb01ciently fast: Ci j \u2264Cj \u2212\u03c1 \u00b7 (1 \u2212\u03bbi)B for every 1 \u2264j \u2264k; P4: The potential never decreases: Pot(Bi, \u03b3i) \u2265Pot(B, \u03b3). Note that for the initial tuple (\u03c00, B0, \u03b30) = (v, B, \u03b3), these are trivially satis\ufb01ed. 
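The potential that enters the parameter ρ and Property P4 is simple to compute; the sketch below, with helper names of our own, also reproduces the numbers behind Remark 26.

def potential(B, gamma):
    # Pot(B, gamma) of Definition 24: sum over C in supp(gamma) of gamma(C) * B / (B + C).
    return sum(p * B / (B + C) for C, p in gamma.items())

# With B = 1 and Min's budget uniform on {1, 2}, the partially-informed player's value
# is 1/3 (Theorem 15 with the optimal choice x = 1), while potential(1, {1: 0.5, 2: 0.5})
# evaluates to 5/12, the fully-informed player's value by Theorem 25; this is the gap of
# Remark 26, showing that the pure-strategy value does not exist.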
Moreover, Property P3 implies an upper bound on i, that is, outcome O3 can happen only \ufb01nitely many times: limi\u2192\u221eCi 1 \u2264 limi\u2192\u221eC1 \u2212\u03c1 \u00b7 (1 \u2212\u03bbi)B = C1 \u2212\u03c1 \u00b7 B = B + C1 \u2212 B Pot(B,\u03b3) = 1 1 B+C1 \u2212 1 Pk j=1 \u03b3(Cj)\u00b7 1 B+Cj which is negative since C1 < C2 < . . . < Ck, yet a negative Ci 1 means that Min illegally bids higher than her available budget. 13 \fWe now de\ufb01ne the choices xi and yi for each i \u2208N, and show that they satisfy the properties described above. Let xi = \u03b5 2 \u00b7 \u03bbiB and yi = \u03c1 \u00b7 xi. For initial budgets xi and yi, let fi be a full-information Max strategy whose payoff is greater than xi xi+yi \u2212\u03b5. Max follows fi as long as Min spends at most yi. Let (\u03c8j)1\u2264j\u2264k be plays such that for each 1 \u2264j \u2264k: \u2022 the play \u03c0i\u03c8j is consistent with the strategy gj; \u2022 Max plays according to fi along \u03c8j; \u2022 \u03c8j stops when Min uses more than yi, and is in\ufb01nite if she never does. We consider three possible cases, depending on whether the paths \u03c8j are \ufb01nite or in\ufb01nite, and whether they are distinct or identical. If they are all in\ufb01nite, or there are at least two distinct ones, we show that Max immediately has a way to obtain the desired payoff. If they are all identical and \ufb01nite, we show that, while Max cannot immediately get the desired payoff, he can go to the next step by setting \u03d5i+1 = \u03d5i\u03c81, and restarting. 1. The play \u03c8j is in\ufb01nite for every 1 \u2264j \u2264k. This situation happens if Min does not spend more than yi. Since Max follows the strategy fi along each \u03c8j, the resulting payoff is greater than xi xi+yi \u2212\u03b5. Moreover, the de\ufb01nition of yi implies: xi xi + yi = xi xi + \u03c1 \u00b7 xi = xi xi + ( 1 Pot(B,\u03b3) \u22121)xi = Pot(B, \u03b3). 2. The plays (\u03c8j)1\u2264j\u2264k are not all identical. Let P1, P2, . . . , Pm be the partition of {1, 2, . . . , k} such that for every pair 1 \u2264j, j\u2032 \u2264k, the plays \u03c8j and \u03c8j\u2032 are equal if and only if j and j\u2032 belong to the same P\u2113. Remark that m \u22652 since by supposition the plays (\u03c8j)1\u2264j\u2264k are not all identical. We show that Max can follow some \u03c8j until he identi\ufb01es precisely which P\u2113corresponds to the initial budget of Min, which allows us to apply the induction hypothesis, and to show that Max can guarantee the desired payoff. For each 1 \u2264\u2113\u2264m, the plays (\u03c8j)j\u2208P\u2113are equal by de\ufb01nition, and we denote this play by \u03c7\u2113. We start by trimming the in\ufb01nite plays into \ufb01nite plays that still allow Max to determine the adequate P\u2113: for every 1 \u2264\u2113\u2264m, let \u03c7\u2032 \u2113be a \ufb01nite pre\ufb01x of \u03c7\u2113that is only consistent with the strategies gj of Min satisfying j \u2208P\u2113(note that if the play \u03c7\u2113is already \ufb01nite, we can set \u03c7\u2032 \u2113= \u03c7\u2113). Remark that the play \u03c7\u2032 \u2113occurs with probability exactly P j\u2208P\u2113\u03b3i(Ci j), which we denote by \u03b3(P\u2113). After the play \u03d5i\u03c7\u2032 \u2113, the remaining budget of Max is bigger that Bi \u2212xi. 
Moreover, since this play is only consistent with the strategies gj of Min satisfying j \u2208P\u2113, Max knows that the current distribution of budgets of Min is the function \u03b3i.\u2113de\ufb01ned by \u03b3i.\u2113(Ci j \u2212y) = \u03b3i(Ci j) \u03b3(P\u2113) , where j \u2208P\u2113and y denotes the budget spent by Min along \u03c7\u2032 \u2113. Since |P\u2113| < k, the induction hypothesis implies that from this point Max can guarantee an expected payoff greater than Pot(Bi \u2212xi, \u03b3i.\u2113) \u2212\u03b5 2. This holds for every 1 \u2264\u2113\u2264m, therefore Max can globally 14 \fguarantee an expected payoff greater than m X \u2113=1 \u03b3(P\u2113) \u00b7 Pot(Bi \u2212xi, \u03b3i.\u2113) \u2212\u03b5 2 = m X \u2113=1 \u03b3(P\u2113) \u00b7 X j\u2208P\u2113 \u03b3i.\u2113(Ci j \u2212y) Bi \u2212xi Bi \u2212xi + Ci j \u2212y \u2212\u03b5 2 \u2265 m X \u2113=1 \u03b3(P\u2113) \u00b7 X j\u2208P\u2113 \u03b3i(Ci j) \u03b3(P\u2113) Bi \u2212xi Bi \u2212xi + Ci j \u2212\u03b5 2 = m X \u2113=1 X j\u2208P\u2113 \u03b3i(Bi j) Bi \u2212xi Bi \u2212xi + Ci j \u2212\u03b5 2 = Pot(Bi \u2212xi, \u03b3i) \u2212\u03b5 2 To conclude, we show that Pot(Bi \u2212xi, \u03b3i) \u2265Pot(B, \u03b3) \u2212\u03b5 2. For all 1 \u2264j \u2264k, the de\ufb01nition of xi and Property P2 imply Bi Bi + Ci j \u2212 Bi \u2212xi Bi \u2212xi + Ci j = Ci jxi (Bi + Ci j)(Bi + Ci j \u2212xi) \u2264xi Bi \u2264\u03b5 2. Therefore Pot(Bi \u2212xi, \u03b3i) \u2265Pot(Bi, \u03b3i) \u2212\u03b5 2, which translates to Pot(Bi \u2212xi, \u03b3i) \u2265Pot(B, \u03b3) \u2212\u03b5 2 by Property P4. 3. The plays (\u03c8j)1\u2264j\u2264k are identical and \ufb01nite. If the \u03c8j are all equal to a \ufb01nite play \u03c8, then we de\ufb01ne \u03c0i+1 as the concatenation of \u03c0i and \u03c8. The budget Bi+1 is obtained by subtracting from Bi the budget spent by Max along \u03c8. Moreover, for every 1 \u2264j \u2264k, the distribution \u03b3i+1 maps the budget Ci+1 j obtained by subtracting from Ci j the budget spent by Min along \u03c8 to the probability \u03b3i(Ci j) = \u03b3(Cj) \u2208[0, 1]. We show that the con\ufb01guration (\u03c0i+1, Bi+1, \u03b3i+1) satis\ufb01es properties P1-P4. First, Property P1 holds as \u03d5i\u03c8 is consistent with every gj. Second, since Max follows the strategy fi along \u03c8, he does not spend more than xi. Therefore, since his budget Bi after \u03d5i satis\ufb01es Property P2, so does his budget Bi+1 after \u03d5i\u03c8: Bi+1 \u2265Bi \u2212xi \u2265\u03bbiB \u2212\u03b5 2\u03bbiB = (1 \u2212\u03b5 2)\u03bbiB = \u03bbi+1B. Moreover, since Min needs to use more than yi in order for \u03c8 to stop, we can also conclude P3: Ci+1 j \u2264Ci j \u2212yi \u2264Cj \u2212\u03c1(1 \u2212\u03bbi)B \u2212\u03c1\u03b5 2\u03bbiB = Cj \u2212\u03c1(1 \u2212\u03bbi+1)B. Finally, we obtain Property P4 as a consequence of Properties P2 and P3. Let x denote the overapproximation (1\u2212\u03bbi+1)B of the budget spent by Max since the start of the game, and let y denote the underapproximation \u03c1 \u00b7 (1 \u2212\u03bbi+1)B of the budget spent by Min since the start of the game. Then Pot(Bi+1, \u03b3i+1) = k X j=1 \u03b3(Cj) Bi+1 Bi+1 + Ci+1 j \u2265 k X j=1 \u03b3(Cj) B \u2212x B \u2212x + Cj \u2212y = k X j=1 \u03b3(Cj) B B + Cj \u00b7 B \u2212x B \u2212 B B+Cj (x + y) = k X j=1 \u03b3(Cj)f \u0010 B B + Cj \u0011 , 15 \fwhere f is the function mapping \u03bb \u2208R to \u03bb \u00b7 B\u2212x B\u2212\u03bb(x+y). 
As f is convex, we may apply Jensen\u2019s inequality, and use the fact that Pot(B, \u03b3) \u00b7 (x + y) = x to conclude that Pot(Bi+1, \u03b3i+1) \u2265f \u0010 k X j=1 \u03b3(Cj) \u00b7 B B + Cj \u0011 = f(Pot(B, \u03b3)) = Pot(B, \u03b3) \u00b7 (B \u2212x) B \u2212Pot(B, \u03b3) \u00b7 (x + y) = Pot(B, \u03b3). 5 Discussion and Future Work We initiate the study of partial-information bidding games, and speci\ufb01cally bidding games with partiallyobserved budgets. Our most technically challenging results are for one-sided partial-information meanpayoff poorman-bidding games. We show a complete picture in strongly-connected games for the partiallyinformed player, which is the more important case in practice. By identifying the value for the fullyinformed player in the bowtie game, we show that the value in mean-payoff bidding games does not necessarily exist when restricting to pure strategies. We discuss open problems in this model. First, we focus on games played on strongly-connected graphs. Reasoning about such games is the crux of the solution to general full-information bidding games. We thus expect that our results will be key in the solution of partial-information bidding games on general graphs. This extension, however, is not straightforward as in the full-information setting, and we leave it as an open question. Second, we identify the value of the fully-informed player in the bowtie game G\u25b7 \u25c1. Reasoning about G\u25b7 \u25c1was the crux of the solution to general strongly-connected full-information bidding games. In fact, the same technique was used to lift a solution for G\u25b7 \u25c1to general strongly-connected games under all the previously-studied bidding mechanisms. In partial-information games, however, this technique breaks the intricate analysis in the proof of Thm. 25. Again, we expect a solution to the bowtie game to be a key ingredient in the solution to general strongly-connected games, and we leave the problem open. Finally, we showed that the value does not necessarily exist under pure strategies. We leave open the problem of developing optimal mixed strategies for the players. This work is part of a research that combines formal methods and AI including multi-agent graph games [2], logics to reason about strategies [13, 20] and in particular, their application in auctions [19], enhancing network-formation games with concepts from formal methods (e.g., [10]), and many more." + }, + { + "url": "http://arxiv.org/abs/2210.02773v2", + "title": "Computing Threshold Budgets in Discrete-Bidding Games", + "abstract": "In a two-player zero-sum graph game, the players move a token throughout a\ngraph to produce an infinite play, which determines the winner of the game.\n\\emph{Bidding games} are graph games in which in each turn, an auction\n(bidding) determines which player moves the token: the players have budgets,\nand in each turn, both players simultaneously submit bids that do not exceed\ntheir available budgets, the higher bidder moves the token, and pays the bid to\nthe lower bidder (called {\\em Richman} bidding). We focus on {\\em\ndiscrete}-bidding games, in which, motivated by practical applications, the\ngranularity of the players' bids is restricted, e.g., bids must be given in\ncents.\n A central quantity in bidding games is are {\\em threshold budgets}: a\nnecessary and sufficient initial budget for winning the game. Previously,\nthresholds were shown to exist in parity games, but their structure was only\nunderstood for reachability games. 
Moreover, the previously-known algorithms\nhave a worst-case exponential running time for both reachability and parity\nobjectives, and output strategies that use exponential memory. We describe two\nalgorithms for finding threshold budgets in parity discrete-bidding games. The\nfirst is a fixed-point algorithm. It reveals, for the first time, the structure\nof threshold budgets in parity discrete-bidding games. Based on this structure,\nwe develop a second algorithm that shows that the problem of finding threshold\nbudgets is in \\NP and co\\NP for both reachability and parity objectives.\nMoreover, our algorithm constructs strategies that use only linear memory.", + "authors": "Guy Avni, Suman Sadhukhan", + "published": "2022-10-06", + "updated": "2024-01-02", + "primary_cat": "cs.FL", + "cats": [ + "cs.FL", + "cs.GT" + ], + "main_content": "Introduction Two-player zero-sum graph games are a central class of games. A graph game proceeds as follows. A token is placed on a vertex and the players move it throughout the graph to produce an infinite play, which determines the winner of the game. The central algorithmic problem in graph games is to identify the winner and to construct winning strategies. One key application of graph games is reactive synthesis [23], in which the goal is to synthesize a reactive system that satisfies a given specification no matter how the environment behaves. Two orthogonal classifications of graphs games are according to the mode of moving the token and according to the players\u2019 objectives. For the latter, we focus on two canonical qualitative objectives. In reachability games, there is a set of target vertices and Player 1 wins if a target vertex is reached. In parity games, each vertex is labeled with a parity index and an infinite path is winning for Player 1 iff the highest parity index that is visited infinitely often is odd. The simplest and most studied mode of moving is turn-based: the players alternate turns in moving the token. We note that reactive synthesis reduces to solving a turn-based parity game. Other modes include concurrent and probabilistic moves (see [4]). We study bidding graph games [19, 18], which use the following mode of moving: both players have budgets, and in each turn, an auction (bidding) determines which player moves arXiv:2210.02773v2 [cs.FL] 2 Jan 2024 \f2 Computing Threshold Budgets in Discrete-Bidding Games the token. Concretely, we focus on Richman bidding (named after David Richman): in each turn, both players simultaneously submit bids that do not exceed their available budget, the higher bidder moves the token, and pays his bid to the lower bidder. Note that the sum of budgets stays constant throughout the game. We distinguish between continuousand discrete-bidding, where in the latter, the granularity of the players\u2019 bids is restricted. The central questions in bidding games revolve around the threshold budgets, which is a necessary and sufficient initial budget for winning the game. Continuous-bidding games. This paper focuses on discrete-bidding. We briefly survey the relevant literature on continuous-bidding games, which have been more extensively studied that their discrete-bidding couterparts. Bidding games were introduced in [19, 18]. 
The objective that was considered is a variant of reachability, which we call double reachability: each player has a target and a player wins if his target is reached (unlike reachability games in which Player 2\u2019s goal is to prevent Player 1 from reaching his target). It was shown that in continuous-bidding games, a target is necessarily reached, thus double-reachability games essentially coincide with reachability games under continuous-bidding. Threshold budgets were shown to exist; namely, each vertex v has a value Th(v) such that if Player 1\u2019s budget is strictly greater than Th(v), he wins the game from v, and if his budget is strictly less than Th(v), Player 2 wins the game. Moreover, it was shown that the threshold function Th is a unique function that satisfies the following property, which we call the average property. Suppose that the sum of budgets is 1, and ti is Player i\u2019s target, for i \u2208{1, 2}. Then, Th assigns a value in [0, 1] to each vertex such that at the \u201cend points\u201d, we have Th(t1) = 0 and Th(t2) = 1, and the threshold at every other vertex is the average of two of its neighbors. Uniqueness implies that the problem of finding threshold budgets1 is in NP and coNP. Moreover, an intriguing equivalence was observed between reachability continuous-bidding games and a class of stochastic game [15] called random-turn games [22]. Intricate equivalences between mean-payoff continuous-bidding games and random-turn games have been shown in [7, 8, 9, 10] (see also [6]). Parity continuous-bidding games were studied in [7]. The following key property was identified. Consider a strongly-connected parity continuous-bidding game G. If the maximal parity index in G is odd, then Player 1 wins with any positive initial budget, i.e., the thresholds in G are all 0. Dually, if the maximal parity index in G is even, then the thresholds are all 1. This property gives rise to a simple reduction from parity bidding games to doublereachability bidding games: roughly, a player\u2019s goal is to reach a bottom strongly-connected component in which he can win with any positive initial budget. Discrete-bidding games. Discrete-bidding games are similar to continuous-bidding games only that we fix the sum of the budgets to be k \u2208N and bids are restricted to be integers. Ties in biddings need to be handled explicitly. We focus on the tie-breaking mechanism that was defined in [16]: one of the players has the advantage and when a tie occurs, the player with the advantage chooses between (1) use the advantage to win the bidding and pass it to the other player, or (2) keep the advantage and let the other player win. Other tie-breaking mechanisms and the properties that they lead to were considered in [1]. The motivation to study discrete-bidding games is practical: in most applications, the assumption that bids can have arbitrary granularity is unrealistic. We point out that the results in continuous-bidding games, particularly those on infinite-duration games, do in fact develop strategies that bid arbitrarily small bids. It is highly questionable whether such strategies would be useful in practice. 1 Stated as a decision problem: given a game and a vertex v, decide whether Th(v) \u22650.5. \fG. Avni and S. Sadhukhan 3 Bidding games model ongoing and stateful auctions. Such auctions arise in various domains. An immediate example is auctions for online advertisements [21]. 
In [11], a framework based on bidding games was proposed to synthesize plans for objectives of the form \u03c81 \u2227\u03c82. The idea is to synthesize two independent policies for \u03c81 and \u03c82, enrich each policy with a bidding strategy, and compose the two at runtime by letting the policies bid for which policy takes an action at each turn. An advantage of the framework is modularity: a policy can be altered without affecting the other policy. As another example, in blockchain technology, miners accept transaction fees, which can be thought of as bids, and prioritize transactions based on them. Verification against attacks is a well-studied problem [14, 5]. Attacks based on manipulations of these fees are hard to detect, can cause significant loses, and thus call for verification of the protocols [14, 5]. Bidding games have been applied as a mechanism for fair allocation of resources [20]. In addition, researchers have studied training of agents that accept \u201cadvice\u201d from a \u201cteacher\u201d, where the advice is equipped with a \u201cbid\u201d that represents its importance [3]. Finally, recreation bidding games have been studied, e.g., bidding chess [12], as well as combinatorial games that apply bidding instead of alternating turns [24]. In all of these applications, the granularity of the bids is restricted. Previous results. For reachability objectives, the theory of continuous bidding games was largely adapted to discrete-bidding in [16]: threshold budgets were shown to exist and satisfy a discrete version of the average property and winning strategies are derived from the threshold budgets. However, the only known algorithm to compute thresholds is a value-iteration algorithm whose worst-case running time is exponential. For parity discrete-bidding games there were large gaps in our understanding. On the one hand, thresholds are known to exist [1], but they were not known to satisfy the average property. The known algorithm to find thresholds is naive: construct and solve the explicit concurrent game that corresponds to a bidding game. Beyond the high exponential-time complexity of the algorithm, the strategies that it produces use exponential memory whose bids are not connected to the thresholds. To make things even more challenging, while the properties of thresholds in reachability discrete-bidding games are conceptually similar to those in continuous-bidding, threshold in parity discreteand continuous-bidding games differ in the following key property. We describe the difference between the models on B\u00fcchi objectives. Under continuous-bidding, Player 1, the B\u00fcchi player, wins with any positive initial budget, a strongly-connected game with an accepting vertex, i.e., the thresholds are 0 in all vertices. On the other hand, an example is shown in [1] of a strongly-connected B\u00fcchi game with an accepting state in which Player 1 loses with any budget under discrete bidding. Our results. We develop two complementary algorithms for computing threshold budgets in parity discrete-bidding games. Our first algorithm is a fixed-point algorithm. It is recursive solutions of reachability discrete-bidding games, similar in spirit to Zielonka [25] and Kupferman and Vardi\u2019s [17] algorithms to solve turn-based parity games. While the algorithm runs in exponential time, it shows, for the first time, that threshold budgets in parity discrete-bidding games satisfy the average property. 
It also produces bidding strategies whose bids are derived from the thresholds. Second, we show that the problem of finding threshold budgets in parity discrete-bidding games2 is in NP and coNP. The bound follows to reachability discrete-bidding games for which only an exponential-time algorithm was known. We briefly describe the idea of our proof. We first show that, interestingly and for the first time, unlike continuous-bidding games, functions that satisfy the discrete average property are not unique, thus guessing a 2 Formally, given a discrete-bidding game G, a vertex v, and a threshold \u2113, decide whether Th(v) \u2265\u2113. \f4 Computing Threshold Budgets in Discrete-Bidding Games function and verifying that it satisfies the average property does not suffice. We overcome this challenge by designing a verification algorithm for a function that satisfies the average property based on a reduction to turn-based parity games. The algorithm outputs a strategy that can be implemented using linear memory, whereas previously, the output strategies were exponential. 2 Preliminaries 2.1 Concurrent games We define the formal semantics of bidding games via two-player concurrent games [2]. Intuitively, a concurrent game proceeds as follows. A token is placed on a vertex of a graph. In each turn, both players concurrently select actions, and their joint actions determine the next position of the token. The outcome of a game is an infinite path. A game is accompanied by an objective, which specifies which plays are winning for Player 1. We focus on reachability and parity objectives, which we define later in this section. Formally, a concurrent game is played on an arena \u27e8A, Q, \u03bb, \u03b4\u27e9, where A is a finite nonempty set of actions, Q is a finite non-empty set of states (in order to differentiate, we use \u201cstates\u201d or \u201cconfigurations\u201d in concurrent games and \u201cvertices\u201d in bidding games), the function \u03bb : Q \u00d7 {1, 2} \u21922A \\ {\u2205} specifies the allowed actions for Player i in vertex v, and the transition function is \u03b4 : Q \u00d7 A \u00d7 A \u2192Q. Suppose that the token is placed on a state q \u2208Q and, for i \u2208{1, 2}, Player i chooses action ai \u2208\u03bb(q, i). Then, the token moves to \u03b4(q, a1, a2). For q, q\u2032 \u2208Q, we call q\u2032 a neighbor of q if there is a pair of actions \u27e8a1, a2\u27e9\u2208\u03bb(q, 1) \u00d7 \u03bb(q, 2) with q\u2032 = \u03b4(q, a1, a2). We denote the neighbors of q by N(q) \u2286Q. A (finite) history is a sequence \u27e8q0, a1 0, a2 0\u27e9, . . . , \u27e8qn\u22121, a1 n\u22121, a2 n\u22121\u27e9, qn \u2208(Q \u00d7 A \u00d7 A)\u2217\u00b7 Q such that, for each 0 \u2264i < n, we have qi+1 = \u03b4(qi, a1 i , a2 i ). A strategy is a \u201crecipe\u201d for playing the game. Formally it is a function \u03c3 : (Q \u00d7 A \u00d7 A)\u2217\u00b7 Q \u2192A. We restrict attention to legal strategies; namely, strategies that for each history \u03c0 \u2208(Q \u00d7 A \u00d7 A)\u2217\u00b7 Q that ends in q \u2208Q, choose an action in \u03bb(q, i), for i \u2208{1, 2}. A memoryless strategy is a strategy that, for every state q \u2208Q, assigns the same action to every history that ends in q. Two strategies \u03c31 and \u03c32 for the two players and an initial state q0, give rise to a unique play, denoted play(q0, \u03c31, \u03c32), which is a sequence in (Q \u00d7 A \u00d7 A)\u03c9 and is defined inductively as follows. The first element of play(q0, \u03c31, \u03c32) is q0. 
Suppose that the prefix of length j \u22651 of play(q0, \u03c31, \u03c32) is defined to be \u03c0j \u00b7 qj, where \u03c0j \u2208(Q \u00d7 A \u00d7 A)\u2217. Then, at turn j, for i \u2208{1, 2}, Player i takes action aj i = \u03c3i(\u03c0j \u00b7 qj), the next state is qj+1 = \u03b4(qj, aj 1, aj 2), and we define \u03c0j+1 = \u03c0j \u00b7 \u27e8vj, aj 1, aj 2\u27e9\u00b7 qj+1. The path that corresponds to play(q0, \u03c31, \u03c32) is q0, q1, . . .. For i \u2208{1, 2}, we say that Player i controls a state q \u2208Q if, intuitively, the next state is determined solely according to their chosen action. Formally, q is controlled by Player 1 if for every action a1 \u2208A, there is a state q\u2032 such that no matter which action a2 \u2208A Player 2 takes, we have q\u2032 = \u03b4(q, a1, a2), and the definition is dual for Player 2. Turn-based games are a special case of concurrent games in which all states are controlled by one of the players. Note that a concurrent game that is not turn based might still contain some vertices that are controlled by one of the players. 2.2 Bidding games A discrete-bidding game is played on an arena G = \u27e8V, E, k\u27e9, where V is a set of vertices, E \u2286V \u00d7 V is a set of directed edges, and k \u2208N is the sum of the players\u2019 budgets. For a vertex v \u2208V , we slightly abuse notation and use N(v) to denote the neighbors of v in \fG. Avni and S. Sadhukhan 5 G, namely N(v) = {u : E(v, u)}. We will consider decision problems in which G is given as input. We then assume that k is encoded in binary, thus the size of G is O(|V |+|E|+log(k)). Intuitively, in each turn, both players simultaneously choose a bid that does not exceed their available budgets. The higher bidder moves the token and pays the other player. Note that the sum of budgets is constant throughout the game. Tie-breaking needs to be handled explicitly in discrete-bidding games as it can affect the properties of the game [1]. In this paper, we focus on advantage-based tie-breaking mechanism [16]: exactly one of the players holds the advantage at a turn, and when a tie occurs, the player with the advantage chooses between (1) win the bidding and pass the advantage to the other player, or (2) let the other player win the bidding and keep the advantage. We describe the semantics of bidding games formally below. We will describe the formal semantics of a bidding game based on the explicit concurrent game it corresponds to. We introduce required notation. Following [16], we denote the advantage with \u2217. Let N denote the non-negative integers, N\u2217the set {0, 0\u2217, 1, 1\u2217, 2, 2\u2217, . . .}, and [k] the set {0, 0\u2217, . . . , k, k\u2217}. We define an order < on N\u2217by 0 < 0\u2217< 1 < 1\u2217< . . .. Let m \u2208N\u2217. When saying that Player 1 has a budget of m\u2217\u2208[k], we mean that Player 1 has the advantage, and implicitly, we mean that Player 2\u2019s budget is k \u2212m and she does not have the advantage. We use |m| to denote the integer part of m, i.e., |m\u2217| = m. We define operators \u2295and \u2296over N\u2217. Intuitively, we use \u2295as follows: suppose that Player 1\u2019s budget is m\u2217and Player 2 wins a bidding with a bid of b2, then Player 1\u2019s budget is updated to m\u2217\u2295b2. Similarly, for \u2113\u2264m, a bid of b1 = \u2113\u2217means that Player 1 will use the advantage if a tie occurs and b1 = \u2113means that he will not use it. Upon winning the bidding, his budget is updated to m\u2217\u2296b1. 
Formally, for x, y \u2208N, define x\u2217\u2295y = x \u2295y\u2217= (x + y)\u2217, x \u2295y = x + y. For x, y \u2208N, define x \u2296y = x \u2212y, x\u2217\u2296y = (x \u2212y)\u2217, and in particular x\u2217\u2296y\u2217= x \u2212y. Note that there is no need to define x \u2296y\u2217in general; indeed, the player without the advantage cannot propose a bid that includes the advantage. However, it is convenient to define the following special case. \u25b6Definition 1. (Successor and predecessor). For B \u2208N\u2217, we denote by B \u22950\u2217and B \u22960\u2217respectively the successor and predecessor of B in N\u2217according to <, defined as B \u22950\u2217= min{x > B} and B \u22960\u2217= max{x < B}. 2.2.1 Bidding games as concurrent games Consider an arena \u27e8V, E, k\u27e9of a bidding game. The corresponding configurations are C = {\u27e8v, B\u27e9\u2208V \u00d7 [k]}, where a configuration c = \u27e8v, B\u27e9\u2208C means that the token is placed on vertex v \u2208V and Player 1\u2019s budget is B. Implicitly, Player 2\u2019s budget is k\u2217\u2296B. The arena of the explicit concurrent game is \u27e8A, C, \u03bb, \u03b4\u27e9, where A = V \u00d7 N\u2217and we define the allowed actions in each configuration and transitions next. An action \u27e8b, v\u27e9\u2208A means that the player bids b and proceeds to v upon winning the bidding. We require the player with the advantage to decide prior to the bidding whether he will use the advantage or not. Thus, when Player 1\u2019s budget is B\u2217, Player 1\u2019s legal bids are [B] and Player 2\u2019s legal bids are {0, . . . , k \u2212|B|}, and when Player 1\u2019s budget is B, Player 1\u2019s legal bids are {0, 1, . . . , B} and Player 2\u2019s legal bids are [k\u2217\u2296B]. Next, we describe the transitions. Suppose that the token is placed on a configuration c = \u27e8v, B\u27e9and Player i chooses action \u27e8bi, ui\u27e9, for i \u2208{1, 2}. If b1 > b2, Player 1 wins the bidding and the game proceeds to \u27e8u1, B1 \u2296b1\u27e9. The definition for b2 > b1 is dual. The remaining case is a tie, i.e., b1 = b2. Since only one of the players has the advantage, a tie can occur only when the player who has the advantage does not use it. Suppose that c = \u27e8v, B\u2217\u27e9, i.e., Player 1 has the advantage, and the definition when Player 2 \f6 Computing Threshold Budgets in Discrete-Bidding Games has the advantage is dual. Player 2 wins the bidding, Player 1 keeps the advantage, and we proceed to \u27e8u2, B\u2217\u2295b2\u27e9. Note that the size of the arena is O(|V | \u00d7 k), which is exponential in the size of G since k is given in binary. Consider two strategies f and g and an initial configuration c = \u27e8v, B\u27e9We slightly abuse notation and refer to \u03c4 = play(v, f, g) as the infinite path in the bidding game that is obtained by projecting to first component in the configurations that the play traverses. 2.3 Objectives and threshold budgets A bidding game is G = \u27e8V, E, k, O\u27e9, where \u27e8V, E, k\u27e9is an arena and O \u2286V \u03c9 is an objective, which specifies the infinite paths that are winning for Player 1. We introduce notations on paths before defining the objectives that we consider. Consider a path \u03c0 = v0, v1, . . . and consider a subset of vertices A \u2286V . We say that \u03c0 visits A if there is j \u22650 such that vj \u2208A. We denote by inf(\u03c0) \u2286V , the set of vertices that \u03c0 visits infinitely often. 
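Before continuing, the bidding mechanics of Sec. 2.2.1 can be made concrete with a small sketch. The following Python fragment (all names, such as Budget and bidding_step, are hypothetical and not from the paper) models budgets over N∗ with the ⊕ and ⊖ arithmetic and resolves a single bidding step, including the advantage-based tie-breaking described above; it is an illustration under these assumptions, not the paper's formal semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Budget:
    value: int          # integer part |B|
    star: bool = False  # True iff this amount carries the advantage *

    def key(self) -> int:
        # encodes the order 0 < 0* < 1 < 1* < ... on N*
        return 2 * self.value + (1 if self.star else 0)

    def plus(self, other: "Budget") -> "Budget":
        # x* ⊕ y = x ⊕ y* = (x+y)*,  x ⊕ y = x+y  (at most one side carries *)
        assert not (self.star and other.star)
        return Budget(self.value + other.value, self.star or other.star)

    def minus(self, other: "Budget") -> "Budget":
        # x ⊖ y = x−y,  x* ⊖ y = (x−y)*,  x* ⊖ y* = x−y
        assert self.star or not other.star
        return Budget(self.value - other.value, self.star and not other.star)

def bidding_step(B1: Budget, act1, act2):
    """One bidding step from a configuration in which Player 1 holds B1
    (Player 2 implicitly holds the rest of the fixed total). Each act_i is a
    pair (bid_i, target_i); a bid that stakes the advantage has star=True."""
    (b1, u1), (b2, u2) = act1, act2
    if b1.key() > b2.key():          # Player 1 wins the bidding and pays b1
        return u1, B1.minus(b1)
    if b2.key() > b1.key():          # Player 2 wins; Player 1 receives b2
        return u2, B1.plus(b2)
    # Numeric tie: only possible when the advantage holder did not stake it.
    # The other player wins the bidding; the holder keeps the advantage.
    if B1.star:
        return u2, B1.plus(b2)
    return u1, B1.minus(b1)
```

For instance, with a total of k = 5, from a Player 1 budget of 3∗ a tie at bid 2 (advantage not staked) lets Player 2 win the bidding, while Player 1's budget becomes 5∗ and he keeps the advantage. We now return to the notation for paths and objectives.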
For a play \u03c4 that traverses \u03c0, we abuse notation and use inf(\u03c4) to refer to inf(\u03c0). We say that \u03c0 enters A at time j \u22651 if vj \u2208A and vj\u22121 / \u2208A, and it is exited at time j if vj / \u2208A and vj\u22121 \u2208A. We consider the following two canonical objectives: Reachability: A reachability bidding game is \u27e8V, E, k, S\u27e9, where S \u2286V is a set of sinks. Player 1, the reachability player, wins an infinite play \u03c0 iff it visits S, and we then say that \u03c0 ends in S. Parity: A parity bidding game is \u27e8V, E, k, p\u27e9, where p : V \u2192{1, . . . , d} assigns to each vertex a parity index, for d \u2208N. A play \u03c4 is winning for Player 1 iff maxv\u2208inf(\u03c4) p(v) is odd. The special case in which p assigns parities in {2, 3} is called B\u00fcchi objective; Player 1 wins a play iff it visits the set {v \u2208V : p(v) = 3} infinitely often. We study, for the first time, bidding games with frugal objectives in which, roughly, Player 1 can win by reaching a target with a sufficient budget. \u25b6Definition 2. Frugal objectives. A frugal-reachability bidding game is \u27e8V, E, k, S, fr\u27e9, where V , E, and k are as in bidding games, S \u2286V is a set of target vertices, and fr : S \u2192[k] assigns a frugal-target budget to each target. Consider a play \u03c0 that ends in a configuration \u27e8s, B\u27e9with s \u2208S. Player 1 wins \u03c0 iff B \u2265fr(s). Note that a reachability bidding game is a special case of a frugal-reachability bidding game in which fr \u22610. The frugal-safety objective is dual to frugal-reachability. We describe the winning condition explicitly. A frugal-safety bidding game is \u27e8V, E, k, S, fr\u27e9, where V , E, and k are as in bidding games, S \u2286V is a set of sinks, and fr : S \u2192[k] assigns a frugal-target budget to each sink. Player 1, the safety player, wins a play \u03c0 if: (1) \u03c0 never reaches S, or (2) \u03c0 reaches a configuration \u27e8s, B\u27e9with s \u2208S and B \u2265fr(s). Note that a safety bidding game is a special case of a frugal-safety bidding game in which fr \u2261k + 1. A frugal-parity bidding game is \u27e8V, E, k, p, S, fr\u27e9, where p : (V \\ S) \u2192{0, . . . , d} and the other components are as in the above. Player 1 wins a play \u03c0 if (1) \u03c0 does not reach S and satisfies the parity objective, or (2) \u03c0 satisfies a frugal-reachability objective: it ends in a configuration \u27e8s, B\u27e9with s \u2208S and B \u2265fr(s). Next, we define winning strategies. \u25b6Definition 3. (Winning strategies). Consider a configuration c = \u27e8v, B\u27e9and an objective O. A Player 1 strategy f is winning from c if for every strategy g, play(c, f, g) satisfies O. \fG. Avni and S. Sadhukhan 7 A Player 2 strategy g is winning from c if for every strategy f, play(c, f, g) does not satisfy O. A player wins from c if they have a winning strategy from c. The central quantity in bidding games is the threshold budget at a vertex, which is the necessary and sufficient initial budget at that vertex for Player 1 to guarantee winning the game. It is formally defined as follows. \u25b6Definition 4. (Threshold budgets). Consider a bidding game G. The threshold budget at a vertex v in G, denoted ThG(v), is such that Player 1 wins from every configuration \u27e8v, B\u27e9with B \u2265ThG(v), and Player 2 wins from every configuration \u27e8v, B\u27e9with B < ThG(v). We refer to the function ThG as the threshold budgets. \u25b6Remark 5. 
We point out that existence of threshold budgets is not trivial. Indeed, existence of threshold budgets implies determinacy: from each configuration there is a player who has a winning strategy. Recall that bidding games are succinctly-represented concurrent games, and concurrent games are often not determined, for example, the simple concurrent game \u201cmatching pennies\u201d is not determined since neither players can win the game. Typically, the vertices of a concurrent game can be partitioned into surely winning vertices from which Player 1 has a winning strategy, surely losing from which Player 2 has a winning strategy, and in the rest, neither player has a winning strategy. An optimal strategy from a vertex in the last set is mixed; it assigns a probability distribution over actions. Interestingly in bidding games, all vertices are either surely winning or surely losing. Determined sub-classes of concurrent games where also studied in [13]. 3 Frugal-Reachability Discrete-Bidding Games The study in [16] focuses primarily on reachability discrete-bidding games played on DAGs. We revisit their results, provide explicit and elaborate proofs for game played on general graphs, and extend the results to frugal-reachability games. Specifically Thm. 11 points to an issue in bidding games played on general graphs that was not explicitly addressed in [16]. 3.1 Background: reachability continuous-bidding games Many of the techniques used in reachability discrete-bidding games are adaptations of techniques developed for reachability continuous-bidding games [19, 18]. In order to develop intuition and ease presentation of discrete-bidding games, in this section, we illustrate the ideas and techniques of continuous-bidding games. Recall that in continuous-bidding games there is no restriction on the granularity of bids, i.e., bids can be arbitrarily small. Throughout this section we assume that the sum of the players\u2019 budgets is 1. Note that since winning bids are paid to the opponent, the sum of budgets stays constant throughout the game. \u25b6Definition 6. (Continuous threshold budgets). The continuous threshold budget at a vertex v is a budget Th(v) \u2208[0, 1] such that for every \u03f5 > 0: if Player 1\u2019s budget is Th(v) + \u03f5, he wins the game from v, and if Player 1\u2019s budget is Th(v) \u2212\u03f5, Player 2 wins the game from v. \u25b6Remark 7. We point out that the issue of tie breaking is avoided in continuous-bidding games by considering initial budgets that differ from the threshold. That is, the requirement \f8 Computing Threshold Budgets in Discrete-Bidding Games is that when a player wins, it is for any tie-breaking mechanism, e.g., he should win even when the opponent wins all bidding ties. A double-reachability continuous-bidding game is \u27e8V, E, t1, t2\u27e9, where for i \u2208{1, 2}, the vertex ti is the target of Player i and every vertex v \u0338= t1, t2 has a path to both. The game ends once one of the targets is reached, and the player whose target is reached is the winner. The careful reader might notice that the definition does not define a winner when no target is reached. We will show below that this case is avoided. \u25b6Definition 8. (Continuous average property). Consider a double-reachability continuousbidding game G = \u27e8V, E, t1, t2\u27e9and a function T : V \u2192[0, 1]. For v \u2208V , denote v+ := arg maxu\u2208N(v) T(u) and v\u2212:= arg minu\u2208N(v) T(v). 
We say that T has the continuous average property if for every vertex v \u2208V : T(v) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 1 if v = t2 0 if v = t1 T (v\u2212)+T (v+) 2 otherwise The next theorem presents the main results on reachability continuous-bidding games: a function that satisfies the continuous average property is unique and, more importantly, it coincides with the continuous threshold budgets. We illustrate the proof techniques, in particular how to construct a winning bidding strategy given the thresholds in the game. \u25b6Theorem 9. [19, 18] Consider a double-reachability continuous-bidding game \u27e8V, E, t1, t2\u27e9. Continuous threshold budgets exist, and the threshold budgets Th : V \u2192[0, 1] is the unique function that has the continuous average property. Moreover, the problem of deciding whether Th(v) \u22640.5 is in NP and coNP. Proof. (SKETCH) Let T be a function that satisfies the continuous average property, where we omit the proof of existence of such a function. We prove that for every vertex v, the continuous threshold budget at v is T(v). Uniqueness follows immediately. The complexity bound is obtained by guessing, for every vertex v, two neighbors v\u2212and v+, and constructing and solving a linear program based on the constrains in Def. 8 (for more details, see [7] in which a reduction to stochastic games [15] is shown). From a function with the continuous-average property to a strategy. Suppose that Player 1\u2019s budget at v is T(v) + \u03b5, for \u03b5 > 0. We describe a winning Player 1 strategy. Recall that v+, v\u2212\u2208N(v) are respectively the neighbors of v that attain the maximal and minimal values according to T. Let b(v) := T(v+) \u2212T(v\u2212) 2 . The key observation is that T(v) + b(v) = T(v+) and T(v) \u2212b(v) = T(v\u2212). Consider the following Player 1 strategy. At vertex v \u2208V , bid b(v) and proceed to v\u2212 upon winning. We show that the strategy maintains the following invariant: Invariant: When the game reaches a configuration \u27e8u, B\u27e9, then B > T(u). First, the invariant implies that the strategy is legal, namely Player 1\u2019s budget at v suffices to bid b(v). Second, the invariant implies that Player 1 does not lose, namely no matter how Player 2 plays, the game will not reach t2. Indeed, assume towards contradiction that t2 is reached. Then, the invariant implies that Player 1\u2019s budget is strictly greater than 1, which violates the assumption that the sum of budgets is 1. The details on how Player 1 guarantees that t1 is reached are not relevant for this paper. We describe the rough idea for \fG. Avni and S. Sadhukhan 9 completeness. Suppose that the game reaches configuration \u27e8u, B\u27e9. The invariant implies B > T(u). We call B \u2212T(u) Player 1\u2019s spare change. The idea is to choose Player 1\u2019s bids carefully in a way that ensures that as the game continues, his spare change strictly increases so that eventually his budget suffices to win |V | times in a row. We prove that Player 1\u2019s strategy maintains the invariant against any Player 2 strategy. Note that the invariant holds initially. Suppose that the game reaches configuration \u27e8u, B\u27e9 with B > T(u). We claim that the invariant is maintained in the next turn. Indeed, if Player 1 wins the bidding, the next configuration is \u27e8v\u2212, B \u2212b(v)\u27e9, and the claim follows from T(v)\u2212b(v) = T(v\u2212). 
If Player 2 wins the bidding, she bids at least b(v), thus Player 1's updated budget is at least B + b(v), and the worst that Player 2 can do for Player 1 is to move to v+. The claim follows from T(v) + b(v) = T(v+). Reasoning about the flipped game. Finally, we show that Player 2 wins when Player 1's budget is T(v) − ε. We intuitively "flip" the game and associate Player 1 with Player 2. More formally, let G′ be the same as G only that Player 1's goal is to reach t2 and Player 2's goal is to reach t1. For every u ∈ V, define T′ by T′(u) = 1 − T(u). A key observation is that T′ satisfies the continuous average property in G′. In particular, note that T′(t1) = 1 and T′(t2) = 0. Now, in order to win from v in G when Player 1's budget is T(v) − ε, Player 2 follows a winning Player 1 strategy in G′ with an initial budget of 1 − T(v) + ε. ◀ 3.2 Frugal-Reachability discrete-bidding games We turn to study discrete-bidding games. 3.2.1 The discrete average property In this section, we adapt the definition of the continuous average property (Def. 8) to the discrete setting and analyze its properties. ▶Definition 10. (Average property). Consider a frugal-reachability discrete-bidding game G = ⟨V, E, k, S, fr⟩. We say that a function T : V → [k] ∪ {k + 1} has the average property if for every s ∈ S, we have T(s) = fr(s), and for every v ∈ V \ S, T(v) = ⌊(|T(v+)| + |T(v−)|)/2⌋ + ε, where ε = 0 if |T(v+)| + |T(v−)| is even and T(v−) ∈ N, ε = 1 if |T(v+)| + |T(v−)| is odd and T(v−) ∈ N∗ \ N, and ε = ∗ otherwise; here v+ := arg max_{u∈N(v)} T(u) and v− := arg min_{u∈N(v)} T(u). Note that T can assign k + 1; when T(v) = k + 1, Player 2 can win from v even with a budget of 0. The following theorem shows, somewhat surprisingly and for the first time, that functions that satisfy the discrete average property are not unique. That is, there are functions that satisfy the discrete average property but do not coincide with the threshold budgets. This is in stark contrast to continuous-bidding games, in which there is a unique function that satisfies the average property. ▶Theorem 11. The reachability discrete-bidding game G1 that is depicted in Fig. 1 with target t for Player 1 has more than one function that satisfies the average property. Proof. Assume a total budget of k = 5. We represent a function T : V → [k] as a vector ⟨T(v0), T(v1), T(v2), T(t)⟩. It is not hard to verify that both ⟨4, 3∗, 3, 2⟩ and ⟨5, 4∗, 3∗, 2⟩ satisfy the average property. (The latter represents the threshold budgets.) ◀ Figure 1 A discrete-bidding reachability game with two functions that satisfy the average property. The following lemma intuitively shows that the "complement" of T, denoted below T′, satisfies the average property. For a vertex v, the value T′(v) should be thought of as Player 2's budget when Player 1's budget is strictly less than T(v). We will later show that since T′ satisfies the average property, Player 2 can win with a budget of T′(v).
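As an aside, the case analysis in Def. 10 can be written compactly as code. The following is a minimal hypothetical sketch that reuses the Budget representation from the earlier sketch; it is an illustration of the averaging rule, not part of the paper's development.

```python
def discrete_average(t_plus: Budget, t_minus: Budget) -> Budget:
    """The averaging rule of Def. 10: T(v) = ⌊(|T(v+)| + |T(v−)|)/2⌋ + ε,
    given the maximal and minimal neighbor values T(v+) and T(v−)."""
    s = t_plus.value + t_minus.value
    base = s // 2
    if s % 2 == 0 and not t_minus.star:
        return Budget(base)                  # ε = 0
    if s % 2 == 1 and t_minus.star:
        return Budget(base + 1)              # ε = 1
    return Budget(base, star=True)           # ε = *
```

For example, with T(v+) = 4 and T(v−) = 3∗ the sum is odd and the minimum carries the advantage, so the value is 4; with T(v+) = 4 and T(v−) = 3 the sum is odd and the minimum is plain, so the value is 3∗. Returning to the complement T′ introduced above.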
It follows that a function T that satisfies the average property satisfies T \u2264ThG; indeed, Player 2 wins if Player 1\u2019s budget is less than T(v), but it is not necessarily the case that Player 1 wins with a budget of T(v). A similar idea is used in continuous-bidding in the last point in the proof of Thm. 9. \u25b6Lemma 12. Let G = \u27e8V, E, k, S, fr\u27e9be a discrete-bidding game with a frugal objective. Let T : V \u2192[k] \u222a{k + 1} be a function that satisfies the average property. We define T \u2032 : V \u2192[k] \u222a{k + 1} as follows. T \u2032(v) = ( k\u2217\u2296(T(v) \u22960\u2217) If T(v) > 0 k + 1 otherwise Then, T \u2032 satisfies the average property. Proof. We need to show that T \u2032(v) = \u230a|T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u230b+ \u03b5\u2032, where \u03b5\u2032 is from the definition of the average property. We have T(v) = \u230a|T (v+)|+|T (v\u2212)| 2 \u230b+ \u03b5 already. We first note some crucial observations that we will use in the subsequent case analysis. Observation I. For every v \u2208V , we have that T(v) and T \u2032(v) agree on which player has the advantage, formally T(v) \u2208N iff T \u2032(v) \u2208N. Indeed, if T(v) \u2208N, i.e., T(v) does not contain the advantage, then its predecessor T(v) \u22960\u2217does contain it. This implies T \u2032(v) = k\u2217\u2296(T(v) \u22960\u2217) \u2208N. The other direction is dual. Observation II. For v \u2208V , let v\u2032 \u2208N(v) be a maximum neighbor with respect to T, then it is a minimum neighbor with respect to T \u2032, and vice-versa. Indeed, this follows from T \u2032(v+) = k\u2217\u2296(T(v\u2212) \u22960\u2217) and T \u2032(v\u2212) = k\u2217\u2296(T(v+) \u22960\u2217). The following observation is technical and relates the numerical values (i.e., disregarding the advantage status) of T(v) and T \u2032(v): Observation III. For every vertex v, we have |T \u2032(v)| = (k + 1) \u2212|T(v)|. We prove the last observation by distinguishing two cases: (a) T(v) \u2208N, and (b) T(v) \u2208N\u2217\\ N. First, T(v) \u2208N implies T(v) \u22960\u2217= (|T(v)| \u22121) \u22950\u2217. Therefore, k\u2217\u2296(T(v) \u22960\u2217) = k\u2217\u2296((|T(v)| \u22121) \u22950\u2217), which implies |T \u2032(v)| = k + 1 \u2212|T(v)|. On the other hand, T(v) \u2208N\u2217\\ N implies T(v) \u22960\u2217= |T(v)|. Therefore, k\u2217\u2296(T(v) \u22960\u2217) = k\u2217\u2296|T(v)| = (k \u2212|T(v)|) \u22950\u2217, which implies |T \u2032(v)| = (k + 1) \u2212|T(v)|. We proceed to prove the lemma. First, if T(v) = 0, then both T(v+) and T(v\u2212) are 0, which implies \u03b5 = \u03b5\u2032 = 0 and both T(v\u2212) and T(v+) are k + 1, therefore \u230a|T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u230b+ \u03b5\u2032 = k + 1 = T \u2032(v). Assume T(v) > 0. We distinguish between four cases: T(v+), T(v\u2212) \u2208N. Note that |T(v+)| + |T(v\u2212)| is even iff |T \u2032(v\u2212)| + |T \u2032(v+)| is even. It follows that \u03b5\u2032 = \u03b5 both when the sum is even and when it is odd. \fG. Avni and S. Sadhukhan 11 When \u03b5 = \u03b5\u2032 = 0 (i.e., we are in the case that the sum is even and the minimum is in N), we have |T \u2032(v+)|+|T \u2032(v\u2212)| 2 = (k+1\u2212|T (v\u2212)|)+(k+1\u2212|T (v+)|) 2 = k + 1 \u2212|T (v+)|+|T (v\u2212)| 2 = k + 1 \u2212|T(v)| = T(v\u2032). 
When \u03b5 = \u03b5\u2032 = 0\u2217(i.e., we are in the case that the sum is odd and the minimum is in N), then \u230a|T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u230b\u22950\u2217= (k+1\u2212\u230a|T (v+)|+|T (v\u2212)| 2 \u230b\u22121)\u22950\u2217= (k\u2212(T(v)\u22960\u2217))\u22950\u2217= k\u2217\u2296(T(v) \u22960\u2217) = T \u2032(v). T(v+), T(v\u2212) \u2208N\u2217\\ N. In this case too, |T(v+)| + |T(v\u2212)| is even iff |T \u2032(v\u2212)| + |T \u2032(v+)| is even. Therefore, we have \u03b5\u2032 = \u03b5. When \u03b5 = \u03b5\u2032 = 0\u2217(i.e., the sum is even and the minimum is in N\u2217\\ N), we have |T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u22950\u2217= (k\u2212|T (v\u2212)|)+(k\u2212|T (v+)|) 2 \u22950\u2217= k \u2296(T(v) \u22960\u2217) \u22950\u2217= k\u2217\u2296(T(v) \u2296 0\u2217) = T \u2032(v). When \u03b5 = \u03b5\u2032 = 1 (i.e., the sum is odd and the minimum is in N\u2217\\ N), we have \u230a|T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u230b\u22951 = (k \u2212\u230a|T (v\u2212)|+|T (v+)| 2 \u230b\u22121) \u22951 = k \u2296(T(v) \u22961) = (k \u22950\u2217) \u2296 (T(v) \u22961 \u22950\u2217) = k\u2217\u2296(T(v) \u22960\u2217) = T \u2032(v) T(v+) \u2208N and T(v\u2212) \u2208N\u2217 In this case, |T(v+)| + |T(v\u2212)| is even iff |T \u2032(v\u2212)| + |T \u2032(v+)| is odd. When \u03b5 = \u03b5\u2032 = 0\u2217(i.e., the sum is even and the minimum according to T is in N\u2217\\N), we have \u230a|T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u230b\u22950\u2217= \u230a2k+1\u2212(|T (v\u2212)|+|T (v+)|) 2 \u230b\u22950\u2217= k \u2296|T (v+)|+|T (v\u2212)| 2 \u22950\u2217= k \u2296(T(v) \u22960\u2217) \u22950\u2217= k\u2217\u2296(T(v) \u22960\u2217) = T(v). When \u03b5 = 1, \u03b5\u2032 = 0 (i.e., the sum is odd and the minimum according to T is in N\u2217\\ N), |T \u2032(v+)|+|T \u2032(v\u2212)| 2 = 2k+1\u2212(|T (v\u2212)|+|T (v+)|) 2 = k\u2212|T (v+)|+|T (v\u2212)|\u22121 2 = k\u2212\u230a|T (v+)|+|T (v\u2212)| 2 \u230b= k \u2296(T(v) \u22961) = k\u2217\u2296(T(v) \u22960\u2217) = T \u2032(v). T(v+) \u2208N\u2217and T(v\u2212) \u2208N In this case, |T(v+)| + |T(v\u2212)| is even iff |T \u2032(v\u2212)| + |T \u2032(v+)| is odd. When \u03b5 = 0, \u03b5\u2032 = 1 (i.e., the sum is even and the minimum according to T is in N), we have \u230a|T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u230b\u22951 = \u230a2k+1\u2212(|T (v\u2212)|+|T (v+)|) 2 \u230b\u22951 = (k \u2212|T (v+)|+|T (v\u2212)| 2 ) \u22951 = (k \u2296T(v)) \u22951 = k\u2217\u2296(T(v) \u22960\u2217) = T \u2032(v). When \u03b5 = \u03b5\u2032 = 0\u2217(i.e., the sum is odd and the minimum according to T is in N), we have |T \u2032(v+)|+|T \u2032(v\u2212)| 2 \u22950\u2217= 2k+1\u2212(|T (v\u2212)|+|T (v+)|) 2 \u22950\u2217= k \u2212|T (v+)|+|T (v\u2212)|\u22121 2 \u22950\u2217= k \u2296\u230a|T (v+)|+|T (v\u2212)| 2 \u230b\u22950\u2217= k\u2217\u2296(T(v) \u22960\u2217) = T \u2032(v) \u25c0 3.2.2 From a function with the average to a bidding strategy Consider a function T : V \u2192[k] \u222a{k + 1} that satisfies the average property. A partial strategy based on T is a function fT : C \u2192[k] \u00d7 2V that chooses, at each configuration, a bid and a set of allowed vertices. Note that fT is not a strategy since it does not assign a unique vertex to proceed to upon winning the bidding. Consider a strategy f \u2032 and a partial strategy fT . We say that f \u2032 agrees with fT if in all configurations, f \u2032 chooses the same bid as fT and proceeds to one of the vertices that fT allows. \u25b6Definition 13. 
(f \u2032 agrees with fT ). Consider a configuration c. Let \u27e8b, A\u27e9= fT (c) and \u27e8b\u2032, u\u2032\u27e9= f \u2032(c). We say that f \u2032 agrees with fT at c if b = b\u2032 and u\u2032 \u2208A. We say that f \u2032 agrees with fT if f \u2032 agrees with fT in all configurations. The idea behind the construction of fT is that any strategy f \u2032 that agrees with fT maintains the following invariant. Suppose that the game starts from configuration \u27e8v, B\u27e9 \f12 Computing Threshold Budgets in Discrete-Bidding Games with B \u2265T(v). Then, against any opponent strategy, when the game reaches a configuration \u27e8u, B\u2032\u27e9, we have B\u2032 \u2265T(u). Since for every sink s \u2208S, we have T(s) \u2265fr(s), the invariant implies that f \u2032 guarantees that the frugal objective is not violated. To get an intuition for the construction of fT , recall the case of continuous-bidding (Thm. 9). There, a nonlosing strategy maintains the invariant that Player 1\u2019s budget exceeds Th(v) by bidding b = 0.5 \u00b7 (Th(v+) \u2212Th(v\u2212)) at a vertex v. The invariant is maintained by proceeding to v\u2212 upon winning the bidding since Th(v) \u2212b = Th(v\u2212) and Th(v) + b = Th(v+). We construct fT . Consider v \u2208V and a budget B \u2208[k] with B \u2265T(v). Let v+ = arg maxu\u2208N(v) T(u) and v\u2212= arg minu\u2208N(v) T(u). We define fT (\u27e8v, B\u27e9) = \u27e8bT (v, B), A(v)\u27e9 as follows. First, we define the allowed vertices A(v) = ( {u \u2208N(v) : T(u) = T(v\u2212)} if T(v\u2212) \u2208N {u \u2208N(v) : T(u) \u2264T(v\u2212) \u22950\u2217} if T(v\u2212) \u2208N\u2217\\ N (1) Second, the definition of the bid bT (v, B) is based on bT v , defined as follows: bT v = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 |T (v+)|\u2212|T (v\u2212)| 2 When |T(v+)| + |T(v\u2212)| is even and T(v\u2212) \u2208N \u230a|T (v+)|\u2212|T (v\u2212)| 2 \u230b When |T(v+)| + |T(v\u2212)| is odd and T(v\u2212) \u2208N\u2217\\ N |T (v+)|\u2212|T (v\u2212)| 2 \u22960\u2217 When |T(v+)| + |T(v\u2212)| is even and T(v\u2212) \u2208N\u2217\\ N \u230a|T (v+)|\u2212|T (v\u2212)| 2 \u230b\u22950\u2217 When |T(v+)| + |T(v\u2212)| is odd and T(v\u2212) \u2208N (2) We define the bid chosen by fT at a configuration c = \u27e8v, B\u27e9. Intuitively, Player 1 \u201cattempts\u201d to bid bT v . This is not possible when bT v requires the advantage but Player 1 does not have it in c, i.e., bT v \u2208N\u2217\\ N and B \u2208N. In such a case, Player 1 bids bT v \u22950\u2217\u2208N. Formally, we define bT (v, B) = bT v when both bT v and B belong to either N or N\u2217\\ N, and bT v \u22950\u2217otherwise. 3.2.3 Strategies that agree with fT are not losing In order to formalize the guarantees of fT , we need the following lemma. \u25b6Lemma 14. Let T be a function that satisfies the average property and a vertex v \u2208V . Then: 1. if T(v\u2212) \u2208N, then T(v) \u2296bT v = T(v\u2212), 2. if T(v\u2212) \u2208N\u2217\\ N, then T(v) \u2296bT v = T(v\u2212) \u22950\u2217, 3. T(v) \u2295bT v \u22950\u2217= |T(v+)|\u2217, 4. (T(v) \u22950\u2217) \u2296(bT v \u22950\u2217) = T(v) \u2296bT v , and 5. (T(v) \u22950\u2217) \u2295(bT v \u22951) = |T(v+)|\u2217+ 1. Proof. For ease of presentation, we refer to |T(v+)|+|T(v\u2212)| as simply the sum and to T(v\u2212) as the min. 
Note that, when the sum is even, we have \u230a|T (v+)|+|T (v\u2212)| 2 \u230b= |T (v+)|+|T (v\u2212)| 2 and \u230a|T (v+)|\u2212|T (v\u2212)| 2 \u230b= |T (v+)|\u2212|T (v\u2212)| 2 . On the other hand, when the sum is odd, we have \u230a|T (v+)|+|T (v\u2212)| 2 \u230b= |T (v+)|+|T (v\u2212)| 2 \u22120.5 and \u230a|T (v+)|\u2212|T (v\u2212)| 2 \u230b= |T (v+)|\u2212|T (v\u2212)| 2 \u22120.5. Note that, the definitions of the quantities T(v) and bT v vary with respect to the neighbours of v depending on whether the sum is odd or even, and whether the min is in N or N\u2217\\ N. But, our claim here shows that some of the facts hold as it is, despite of those variations. For example, irrespective of whether the sum is odd or even, we claim in item 1 that T(v) \u2296bT v = T(v\u2212). Similarly, in item 3 we claim that T(v) \u2295bT v \u22950\u2217= |T(v+)|, irrespective of both the parity of the sum and the advantage status of the min. \fG. Avni and S. Sadhukhan 13 Thus, our argument follows in the following manner: we analyze the four cases (classified by whether the sum is odd/even, and the min is is in N or N\u2217\\ N), and establish the stated claims about the quantities mentioned in each of the cases. 1. When the sum is even, and the min is in N, T(v) \u2296bT v = |T(v+)| + |T(v\u2212)| 2 \u2212|T(v+)| \u2212|T(v\u2212)| 2 = |T(v\u2212)| = T(v\u2212), and T(v) \u2295bT v \u22950\u2217= |T(v+)| \u22950\u2217= |T(v+)|\u2217. 2. When the sum is odd, and the min is in N, we have T(v)\u2296bT v = \u230a|T(v+)| + |T(v\u2212)| 2 \u230b\u22950\u2217\u2296(\u230a|T(v+)| \u2212|T(v\u2212)| 2 \u230b\u22950\u2217) = |T(v\u2212)| = T(v\u2212). Thus, we establish item 1. Also, T(v)\u2295(bT v \u22950\u2217) = \u230a|T(v+)| + |T(v\u2212)| 2 \u230b\u22950\u2217\u2295\u230a|T(v+)| \u2212|T(v\u2212)| 2 \u230b\u22950\u2217\u22950\u2217= |T(v+)|\u22950\u2217= |T(v+)|\u2217. 3. When the sum is odd, and the min is in N\u2217\\ N, we have T(v) \u2296bT v = |T(v\u2212)| \u22951 = T(v\u2212) \u22950\u2217, and T(v)\u2295(bT v \u22950\u2217) = \u230a|T(v+)| + |T(v\u2212)| 2 \u230b\u22951\u2295\u230a|T(v+)| \u2212|T(v\u2212)| 2 \u230b\u22950\u2217= |T(v+)|\u22950\u2217= |T(v+)|\u2217. 4. When the sum is even, and the min is in N\u2217\\ N, we have T(v)\u2296bT v = |T(v+)| + |T(v\u2212)| 2 \u22950\u2217\u2296(|T(v+)| + |T(v\u2212)| 2 \u22960\u2217) = |T(v\u2212)|\u22951 = T(v\u2212)\u22950\u2217. Thus, we establish item 2. Also, T(v)\u2295(bT v \u22950\u2217) = (|T(v+)| + |T(v\u2212)| 2 \u22950\u2217)\u2295|T(v+)| \u2212|T(v\u2212)| 2 = |T(v+)|\u22950\u2217= |T(v+)|\u2217. Thus, we establish item 3 for all the four cases. Finally, item 4 and item 5 are obtained by plain algebra based on item 1 to item 3. \u25c0 We proceed to prove that strategies that agree with fT maintain an invariant on Player 1\u2019s budget. \u25b6Lemma 15. Suppose that Player 1 plays according to a strategy f \u2032 that agrees with fT starting from configuration \u27e8v, B\u27e9satisfying B \u2265T(v). Then, against any Player 2 strategy, when the game reaches u \u2208V , Player 1\u2019s budget is at least T(u). Proof. The invariant holds initially by the assumption. Consider a history that ends in a configuration \u27e8v, B\u27e9. Assume that B \u2265T(v). We claim that the invariant is maintained no matter the outcome of the bidding, namely B \u2296bT (v, B) \u2265T(v\u2212) and B \u2295(bT (v, B) \u22950\u2217) \u2265 T(v+). We distinguish between two cases. 
First, when both B and bT v are in either N or N\u2217\\ N, then Player 1 bids bT (v, B) = bT v . It follows from the first three items of Lem. 14 that B \u2296bT (v, B) = B \u2296bT v \u2265T(v) \u2296bT v \u2265T(v\u2212), and \f14 Computing Threshold Budgets in Discrete-Bidding Games B \u2295bT (v, B) \u22950\u2217= B \u2295bT v \u22950\u2217\u2265T(v) \u2295bT v \u22950\u2217= |T(v+)|\u2217. In the second case, Player 1 bids bT (v, B) = bT v \u22950\u2217. Note that B \u2265T(v) \u22950\u2217because T(v) and bT v have the same advantage status. It follows from last two items of Lem. 14 that B \u2296bT (v, B) = B \u2296(bT v \u22950\u2217) \u2265(T(v) \u22950\u2217) \u2296(bT v \u22950\u2217) = T(v) \u2296bT v \u2265T(v\u2212), and B \u2295(bT (v, B) \u22950\u2217) \u2265(T(v) \u22950\u2217) \u2295(bT v \u22951) = |T(v+)|\u2217+ 1 > |T(v+)|. This concludes the proof. \u25c0 \u25b6Corollary 16. Suppose that Player 1 plays according to a strategy f \u2032 that agrees with fT starting from configuration \u27e8v, B\u27e9satisfying B \u2265T(v), then: f \u2032 is a legal strategy: the bid b prescribed by fT does not exceed the available budget. f \u2032 is not losing: if s \u2208S is reached, Player 1\u2019s budget is at least fr(s). 3.2.4 Existence of Thresholds in Frugal-reachability Discrete-bidding Games We close this section by showing existence of threshold budgets in frugal-reachability discretebidding games. Recall that Thm. 11 shows that functions that satisfy the discrete average property are not unique. Let T be such a function. The following lemma shows that T \u2264ThG. That is, if in a vertex v, Player 1 has a budget less than T(v), then Player 2 has a winning strategy. This proves that the threshold budgets for Player 1 cannot be less than T(v), when T is a average property satisfying function. \u25b6Lemma 17. Consider a frugal-reachability discrete-bidding game G = \u27e8V, E, k, S, fr\u27e9. If T : V \u2192[k] \u222a{k + 1} is a function that satisfies the average property, then T(v) \u2264ThG(v) for every v \u2208V . Proof. Given T that satisfies the average property, we construct T \u2032 as in Lem. 12. Let \u27e8v, B1\u27e9be a configuration, where v \u2208V , Player 1\u2019s budget is B1, and implicitly, Player 2\u2019s budget is B2 = k\u2217\u2296B1. Note that B1 < T(v) iff B2 \u2265T \u2032(v). Moreover, for every s \u2208S, we have T \u2032(s) = k\u2217\u2296(fr(s) \u22960\u2217). We \u201cflip\u201d the game; namely, we associate Player 2 with Player 1, and construct a partial strategy fT \u2032 for Player 2. We construct a Player 2 strategy f \u2032 that agrees with fT \u2032: for each v \u2208V , we arbitrarily choose a neighbor u from the allowed vertices. By Lem. 15, no matter how Player 1 responds, whenever the game reaches \u27e8u, B1\u27e9, we have B2 \u2265T \u2032(u). The invariant implies that f \u2032 is a winning strategy. Indeed, if the game does not reach a sink, Player 2 wins, and if it does, Player 1\u2019s frugal objective is not satisfied. \u25c0 The following lemma shows the existence of a function that satisfies the average property and that coincides with threshold budgets. \u25b6Lemma 18. Consider a frugal-reachability discrete-bidding game G = \u27e8V, E, k, S, fr\u27e9. There is a function T that satisfies the average property with T(v) \u2265ThG(v), for every v \u2208V . Proof. The proof is similar to the one in [16]. We illustrate the main ideas. 
For n \u2208N, we consider the truncated game G[n], which is the same as G only that Player 1 wins iff he wins in at most n steps. We find a sufficient budget for Player 1 to win in the vertices in G[n] in a backwards-inductive manner. For the base case, for every vertex u \u2208V , since Player 1 cannot win from u in 0 steps, we have T0(u) = k + 1. For s \u2208S, we have T0(s) = fr(s). \fG. Avni and S. Sadhukhan 15 Clearly, T0 \u2261ThG[0]. For the inductive step, suppose that Tn\u22121 is computed. For each vertex v, we define Tn(v) = \u230a|Tn\u22121(v+)|+|Tn\u22121(v\u2212,k)| 2 \u230b+ \u03b5 as in Def. 10. Following a similar argument to Thm. 9, it can be shown that if Player 1\u2019s budget is Tn(v), he can bid b so that if he wins the bidding, his budget is at least Tn\u22121(v\u2212) and if he loses the bidding, his budget is at least Tn\u22121(v+). By induction we get ThG[n](v) = Tn(v), for every v \u2208V . For every vertex v, let T(v) = limn\u2192\u221eTn(v). It is not hard to show that T satisfies the average property and that T(v) \u2265ThG(v), for every v \u2208V . \u25c0 Let T be a function that results from the fixed-point computation from the proof of Lem. 18. Since it satisfied the average property, we apply Lem. 17 to show that Player 2 wins from v when Player 1\u2019s budget is T(v) \u22960\u2217. Since the values observed in a vertex during an execution of the fixed-point algorithm are monotonically decreasing and since the number of values that a vertex can obtain is 2k + 1, the running time is O(|V | \u00b7 k). We thus conclude the following. \u25b6Theorem 19. Consider a frugal-reachability discrete-bidding game G = \u27e8V, E, k, S, fr\u27e9. Threshold budgets exist and satisfy the average property. Namely, there exists a function Th : V \u2192[k] \u222a{k + 1} such that for every vertex v \u2208V if Player 1\u2019s budget is B \u2265Th(v), then Player 1 wins the game, and if Player 1\u2019s budget is B < Th(v), then Player 2 wins the game Moreover, there is an algorithm to compute Th that runs in time O(|V |\u00b7k), which is exponential in the size of the input when k is given in binary. 4 A Fixed-Point Algorithm for Finding Threshold Budgets In this section, we develop a fixed-point algorithm for finding threshold budgets in frugalparity discrete-bidding games. While its worst-case running time is exponential in the input, the algorithm shows, for the first time, that threshold budgets in parity discrete-bidding games satisfy the average property. This property is key in the development of the NP and coNP algorithm (Sec. 5). For the remainder of this section, fix a frugal-parity game G = \u27e8V, E, k, p, S, fr\u27e9. 4.1 Warm up: a fixed-point algorithm for B\u00fcchi bidding games In this section, we consider the special case of B\u00fcchi games. The transition from B\u00fcchi to parity games involves an induction on the parity indices. We find that the core ideas of the algorithm are easier to present on the simpler case of B\u00fcchi objectives. Recall that in B\u00fcchi games, each vertex is assigned a parity index in {0, 1} and Player 1\u2019s goal is to visit the set of vertices with parity index 1 infinitely often, which we call accepting vertices and denote by F = {v : p(v) = 1}. We describe the algorithm from the perspective of the co-B\u00fcchi player, which we assume is Player 1. We use coB\u00fc-Th to denote Player 1\u2019s threshold. 
Namely, for an initial configuration \u27e8v, B\u27e9, we have: if B \u2265coB\u00fc-Th(v), Player 1 can guarantee that F is visited only finitely often, and if B < coB\u00fc-Th(v), Player 2 can guarantee that F is visited infinitely often. We describe a sequence of objectives each of which is intuitively stronger than co-B\u00fcchi but weaker than safety. The formal definition of when a path enters F can be found in Sec. 2.3. \u25b6Definition 20. (Bounded-eventual safety objectives). For i \u22650, the objective Safei contains infinite paths that \f16 Computing Threshold Budgets in Discrete-Bidding Games start in V \\ F and enter F at most i times, or start in F, exit F, and enter F at most i \u22121 more times. Note that, in particular, every path that starts in F violates Safe0. For i \u22650, we denote by Thi : V \u2192[k + 1] the threshold for the objective Safei. Clearly, entering F at most i times is harder (thus requires more budget) than entering F at most i + 1 times. In particular, entering F only finitely often is easier than entering F at most i times, for any i \u22650. We thus have the following. \u25b6Observation 21. For i \u22650 and v \u2208V \\ F, we have Thi(v) \u2265Thi+1(v) and Thi(v) \u2265 coB\u00fc-Th(v). It follows that the sequence Th0, Th1, . . . of thresholds reaches a fixed point. We will show that at the fixed-point, the threshold coincides with coB\u00fc-Th. Below, we characterize Thi and describe a recursive algorithm to compute it. The algorithm constructs and solves a sequence of frugal-reachability games R0, R1, . . . and a sequence of frugal-safety games S0, S1, . . .. For i \u22650, recall that we denote by ThRi and ThSi respectively the thresholds in Ri and Si. We will show that the threshold Thi for Safei can be characterized by these two thresholds: for v \u2208V \\ F, we have Thi(v) = ThSi(v) and for u \u2208F, we have Thi(u) = ThRi(u). Throughout this section, we will use v to denote a vertex in V \\ F and u to denote a vertex in F. We characterize Th0 in the following lemma. Its proof is immediate: it is not possible to satisfy Safe0 from any vertex in F, and in order to satisfy Safe0 from V \\ F, one must not visit F at all, thus Safe0 is a safety objective. \u25b6Lemma 22. (Base case). Let S0 = \u27e8V, E, k, F\u27e9be a safety bidding game (Player 1\u2019s goal is to avoid F) on the same arena as G. Then, for u \u2208F, we have Th0(u) = k + 1, and for v \u2208V \\ F, we have Th0(v) = ThS0(v). \u25b6Corollary 23. Computing Th0 can be done by calling a sub-routine that finds the thresholds in a safety bidding game with the game S0 as input. Moreover, by Thm. 19, Th0 satisfies the average property. Recusive step For i > 0, we characterize Thi as the thresholds in two bidding games Si and Ri, whose definition is based on Thi\u22121. Fix i \u22650. Intuitively, the first game starts from F and we compute the threshold budget for the frugal objective of reaching V \\ F with a budget that suffices to guarantee that F is entered at most i more times. Formally, suppose that the thresholds ThSi in the game Si (defined below) have been computed. We construct a frugal-reachability bidding game Ri = \u27e8V, E\u2032, k, V \\F, fri\u27e9on the same arena as G only that Player 1\u2019s targets are the vertices in V \\ F, which are sinks, i.e., E\u2032 = {\u27e8u, u\u2032\u27e9\u2208E : u \u2208F}, with a frugal target budget of fri(v) = ThSi(v), for v \u2208V \\ F. Let ThRi : F \u2192[k + 1] denote the thresholds in the game. 
Thus, starting from an initial configuration \u27e8u, B\u27e9with u \u2208F, we have: If B \u2265ThRi(u), Player 1 can ensure the frugal-reachability objective; namely, a configuration \u27e8v, B\u2032\u27e9is reached with v \u2208V \\ F and B\u2032 \u2265ThR0(v). If B < ThRi(u), Player 2 can violate the frugal-reachability objective; namely, she can force that (1) F is not exited or (2) upon reaching a configuration \u27e8v, B\u2032\u27e9with v \u2208V \\ F, then B\u2032 < Thi(v). By Thm. 19, ThRi satisfies the average property. \fG. Avni and S. Sadhukhan 17 v0 v1 v2 t k + 1 = 6 2\u2217 3\u2217 4\u2217 3\u2217 S0 R0 0 0\u2217 1\u2217 0\u2217 S1 R1 0 0 0 0 S2 R2 0 0 0 S3 Figure 2 Top: A co-B\u00fcchi game G where Player 1\u2019s objective is to visit t only finitely often. Bottom: Alg. 1 applied to G with a total budget of k = 5. The thresholds in the frugal-safety game Si and frugal-reachability game Ri, for i = 0, . . . , 3 are depicted in orange. The algorithm terminates once a fixed-point is reached and finds that the thresholds in G are 0 in all vertices. \f18 Computing Threshold Budgets in Discrete-Bidding Games \u25b6Lemma 24. Let i \u22650. Suppose that for every v \u2208V \\ F, we have ThSi(v) = Thi(v). Then, for every u \u2208F, a budget of ThRi(u) is the threshold budget for the objective Safei, i.e., Thi(u) = ThRi(u). Proof. Suppose that G starts from \u27e8u, B\u27e9with u \u2208F. We first show that when B \u2265ThRi(u), Player 1 can ensure the objective Safei. Indeed, by following a winning strategy in Ri, Player 1 guarantees reaching a configuration \u27e8v, B\u2032\u27e9with v \u2208V \\ F and B\u2032 \u2265ThSi(v) = Thi(v), from which he can proceed with a winning strategy for Safei. On the other hand, when B < ThRi(u), Player 2 violates Safei as follows. She first follows a winning strategy in Ri. By the above, no matter how Player 1 plays, the resulting play either stays in F, in which case Safei is clearly violated, or it reaches a configuration \u27e8v, B\u2032\u27e9with v \u2208V \\ F and B\u2032 < ThSi(v) = Thi(v), from which Player 2 can continue with a strategy that violates Safei. \u25c0 We proceed to describe the second bidding game. Let i > 1. Suppose that ThRi\u22121 has been computed. We construct a frugal-safety game Si = \u27e8V, E, k, F, ThRi\u22121\u27e9. Let ThSi denote the thresholds in Si. Thus, from an initial configuration \u27e8v, B\u27e9with v \u2208V \\ F, we have: If B \u2265ThSi(v), the safety player, Player 1, can force the game to either (1) stay in V \\ F, or (2) upon reaching a configuration \u27e8u, B\u2032\u27e9with u \u2208F, we have B\u2032 \u2265ThRi\u22121(u). If B < ThSi(v), the reachability player, Player 2, can guarantee the frugal-reachability objective; namely, she can force the game to reach F and upon reaching \u27e8u, B\u2032\u27e9with u \u2208F, we have B\u2032 < ThRi\u22121(u). By Thm. 19 and Lem. 12, Thi satisfies the average property. The following lemma follows from Lem. 24 together with the two observations above. \u25b6Lemma 25. Let i > 0. Suppose that for every u \u2208F, we have ThRi\u22121(v) = Thi\u22121(v). Then, for v \u2208V \\ F, we have Thi(v) = ThSi(v). \u25b6Corollary 26. For i > 0, given Thi\u22121, computing Thi can be done as follows. First, find the thresholds in frugal-reachability game Ri. Second, based on ThRi, construct the frugal safety bidding game Si and find the thresholds in it. 
Thus, in order to compute Thi from Thi−1, two calls are needed to a sub-routine that solves a frugal-reachability game. Moreover, it follows that if Thi−1 satisfies the average property, so does Thi. As mentioned above, the sequence Th0, Th1, . . . of thresholds reaches a fixed point. We show that the fixed-point threshold coincides with the co-Büchi thresholds. ▶Theorem 27. Consider a Büchi bidding game G. For i ≥ 0, let Thi be the threshold for satisfying the objective Safei, and let n ∈ N be such that Thn = Thn+1. Then, Thn coincides with the thresholds for the co-Büchi player, namely Thn = coBü-Th. Moreover, coBü-Th satisfies the average property and computing it can be done in time O((|V| · k)^2), which is exponential in the size of the input when k is given in binary. Proof. Cor. 26 shows that for all i ≥ 0, the thresholds Thi satisfy the average property. Hence, it suffices to prove that Thn = coBü-Th. The direction Thn ≥ coBü-Th has been established in Obs. 21. Regarding running time, note that the thresholds observed in a vertex are monotonically decreasing (Obs. 21), thus the number of iterations until a fixed point is reached is O(|V| · k). By Cor. 26, each iteration includes two solutions of frugal-reachability games, each of running time O(|V| · k) (Thm. 19). It is left to show that Player 2, the Büchi player, wins from a configuration ⟨v, B⟩ with B < Thn(v) = Thn+1(v). Recall that for i ≥ 0, the budget ThSi(v) is the threshold for the safety player, at a vertex v ∈ V \ F, in the frugal-safety game Si in which the frugal target budget at u ∈ F is ThRi−1(u) (this holds also for the base case: indeed, Th0(u) = k + 1 for u ∈ F, thus the safety game S0 is in fact a frugal-safety game). Further recall that ThRi(u), for u ∈ F, is the threshold in the frugal-reachability game Ri in which the targets are V \ F with frugal target budget ThSi. Suppose that v ∈ V \ F; the case of an initial vertex in F is handled analogously. From configuration ⟨v, B⟩, since B < ThSn+1(v), Player 2 can follow a winning strategy of the reachability player in the frugal-safety game Sn+1 that ensures that a configuration ⟨u, B′⟩ is reached with u ∈ F and B′ < ThRn(u). She then follows a winning strategy of the safety player in the frugal-reachability game Rn. Thus, no matter how Player 1 plays, Player 2's strategy guarantees that either (1) F is never exited, clearly satisfying the Büchi objective, or (2) a configuration ⟨v′, B′′⟩ is reached with v′ ∈ V \ F and B′′ < ThSn(v′). Player 2 can restart her strategy from v′ since ThSn(v′) = ThSn+1(v′). Thus, Player 2 guarantees infinitely many visits to F, and we are done. ◀ Pseudo-code We conclude this section with a pseudo-code of the fixed-point algorithm; see Fig. 2 for a depiction of its operation.
Algorithm 1 co-Büchi-Thresholds(G)
  i := 0
  Define the frugal-safety game S0 = ⟨V, E, k, fr0, F⟩, with fr0 ≡ k + 1
  ThS0 := Frugal-Safety(S0)
  Define Th0(v) = ThS0(v), for v ∈ V \ F, and Th0(u) = k + 1, for u ∈ F
  do
    i := i + 1
    Define Ri = ⟨V, E, k, ThSi−1, V \ F⟩
    ThRi := Frugal-Reachability(Ri)    ▷ Thresholds for vertices in F
    Define Si = ⟨V, E, k, ThRi, F⟩
    ThSi := Frugal-Safety(Si)          ▷ Thresholds for vertices in V \ F
    Define Thi(v) = ThSi(v), for v ∈ V \ F, and Thi(u) = ThRi(u), for u ∈ F
  while Thi−1 ≠ Thi
  return Thi
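To make the control flow of Alg. 1 concrete, here is a minimal Python rendering of the same fixed-point loop. The two solvers frugal_safety and frugal_reachability are assumed as black boxes (the paper obtains them from the fixed-point algorithm of Lem. 18), and the game encoding is an illustrative assumption, not the paper's implementation.

def cobuchi_thresholds(V, E, F, k, frugal_safety, frugal_reachability):
    # Each assumed solver returns a dict: vertex -> threshold in {0, ..., k + 1}.
    not_F = [v for v in V if v not in F]
    # Base case: S0 is a safety game, modelled as a frugal-safety game with target budget k + 1 on F.
    ThS = frugal_safety(V, E, k, targets=F, fr={u: k + 1 for u in F})
    Th = {**{v: ThS[v] for v in not_F}, **{u: k + 1 for u in F}}
    while True:
        # Frugal-reachability game R_i: reach V \ F with budget at least ThS(.)
        ThR = frugal_reachability(V, E, k, targets=not_F, fr={v: ThS[v] for v in not_F})
        # Frugal-safety game S_i: avoid F, or enter it with budget at least ThR(.)
        ThS = frugal_safety(V, E, k, targets=F, fr={u: ThR[u] for u in F})
        new_Th = {**{v: ThS[v] for v in not_F}, **{u: ThR[u] for u in F}}
        if new_Th == Th:      # fixed point reached: these are the co-Buchi thresholds
            return Th
        Th = new_Th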
The algorithm calls two sub-routines Frugal-Reachability() and Frugal-Safety(), which return the thresholds for all vertices that are not targets, respectively, in a frugal-reachability and frugal-safety game, e.g., by running the fixed-point algorithm described in Lem. 18. 4.2 A fixed-point algorithm for frugal-parity bidding games In this section, we extend the fixed-point algorithm developed in Sec. 4.1 to parity bidding games. The algorithm involves a recursion over the parity indices, which we carry out by strengthening the induction hypothesis and developing an algorithm for frugal-parity objectives instead of the special case of parity objectives. For the remainder of this section, fix a frugal-parity game G = ⟨V, E, k, p, S, fr⟩. Denote the maximal parity index by d ∈ N. Recall that S is a set of sinks that is disjoint from V and fr(s) denotes Player 1's frugal target budget at s ∈ S. Thus, Player 1 wins a play π if π is infinite and satisfies the parity condition, or π is finite and ends in a configuration ⟨s, B⟩ with s ∈ S and B ≥ fr(s). We characterize the thresholds in G by reasoning about games with a lower parity index. This characterization gives rise to a recursive algorithm to compute the thresholds. ▶Lemma 28. (Base case). Let G = ⟨V, E, p, S, fr⟩ with only one parity index, i.e., p(v) = p(v′) = d, for all v, v′ ∈ V. Assume that d is odd. Let S = ⟨V, E, S, fr⟩ be a frugal-safety game. Then, ThG ≡ ThS. Assume that d is even. Let R = ⟨V, E, S, fr⟩ be a frugal-reachability game. Then, ThG ≡ ThR. Proof. Clearly, in both cases, a finite play that ends in a sink is winning in G iff it is winning in S, and similarly for R. When d is odd, an infinite play in G is winning for Player 1, thus G is a frugal-safety game. On the other hand, when d is even, an infinite play in G is losing for Player 1, and the only way to win is by satisfying the frugal objective in a sink, thus G is a frugal-reachability game. ◀ ▶Corollary 29. When G contains only one parity index, computing ThG can be done by calling a sub-routine that finds the thresholds in a frugal-reachability (or a frugal-safety) bidding game. Moreover, by Thm. 19, ThG satisfies the average property in this case. Recursive step Suppose that more than one parity index is used. Let d ∈ N denote the maximal parity index in G. We assume access to a sub-routine that computes thresholds in frugal-parity games with a maximal parity index of d − 1, and we describe how to use it in order to compute thresholds in G. We assume that d is even and we describe the algorithm from Player 1's perspective. The definition for an odd d is dual from Player 2's perspective. Let Fd = {v : p(v) = d}. Since d is even, a play that visits Fd infinitely often is losing for Player 1. Thus, a necessary (but not sufficient) requirement to win is to ensure that Fd is visited only finitely often. For example, a Büchi game can be modeled as follows: Player 1 is the co-Büchi player, the parity indices are 1 or 2, and the set F2 denotes the accepting vertices, which Player 1 needs to visit only finitely often.
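As a small sanity check of the winning condition just described, the following Python sketch decides the winner of a single play that is either finite (ending in a sink) or eventually periodic (a lasso), under the max-parity convention used here, in which Player 1 wins an infinite play iff the maximal index seen infinitely often is odd. The play encoding is an assumption made only for illustration.

def player1_wins_play(p, fr, play, cycle=None, final_budget=None):
    # p: vertex -> parity index; fr: sink -> frugal target budget.
    # A finite play is a list `play` ending in a sink, with Player 1's budget `final_budget` there.
    # An infinite play is a lasso: prefix `play` followed by `cycle`, repeated forever.
    if cycle is None:                    # finite play ending in a sink s
        s = play[-1]
        return final_budget >= fr[s]     # frugal objective: enough budget at the sink
    # Infinite play: only the indices visited infinitely often (those on the cycle) matter.
    return max(p[v] for v in cycle) % 2 == 1

For instance, with p = {"a": 2, "b": 1}, a lasso that eventually loops on b alone is winning for Player 1, while one that keeps revisiting a is not.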
We define a bounded variant of the frugal-parity objective, similar to the definition of Safei in Sec. 4.1: \u25b6Definition 30. For i \u22650, a play \u03c0 is in Fr-Parityi if: \u03c0 is finite and satisfies the frugal objective: ends in \u27e8s, B\u27e9with s \u2208S and B \u2265fr(s), or \u03c0 is infinite, satisfies the parity objective, and starts from V \\ Fd and enters Fd at most i times, or starts from Fd, exits Fd, and enters Fd at most i \u22121 more times. In particular, every path that starts from Fd violates Fr-Parity0. For i \u22650, we denote by Thi the thresholds for objective Fr-Parityi. As in Obs. 21, since the restriction monotonically decreases as i grows, the thresholds are monotonically non-increasing and they all lower-bound the thresholds in G. \u25b6Observation 31. For i \u22650, we have Thi+1 \u2264Thi and ThG \u2264Thi. It follows that the sequence of thresholds reaches a fixed-point, and we will show that the thresholds at the fixed coincide with ThG. We iteratively define and solve two sequences games: a sequence of frugal-parity games G0, G1, . . . each with maximal parity index d \u22121 and a sequence of frugal-reachability games \fG. Avni and S. Sadhukhan 21 R0, R1, . . .. For i \u22650, recall that ThGi and ThRi respectively denote the thresholds in Gi and Ri. We will show that Thi can be characterized by ThGi and ThRi: we will show that for v \u2208Fd we have Thi(v) = ThGi(v) and for u \u2208V \\ Fd, we have Thi(u) = ThRi(u). We start with the frugal-parity games. The games share the same arena, which is obtained from G by setting the vertices in Fd to be sinks. The games differ in the frugal target budgets. Formally, for i \u22650, we define Gi = \u27e8V, E\u2032, p\u2032, S\u2032, frGi\u27e9, where the sinks are S\u2032 = S \u222aFd, the edges are restricted accordingly E\u2032 = {\u27e8v, v\u2032\u27e9\u2208E : v \u2208V \\ Fd}, the parity function p\u2032 coincides with p but is not defined over Fd, i.e., p\u2032(v) = p(v) for all v \u2208V \\ Fd, and frGi is the only component that changes as i changes and it is defined below based on a solution to Ri. Note that p\u2032 assigns at most d \u22121 parity indices. We construct the frugal-reachability games. Let i \u22650. Intuitively, the game Ri starts from Fd and Player 1\u2019s goal is to either satisfy the frugal objective in S or reach V \\Fd with a budget that suffices to ensure that Fd is entered at most i more times. Formally, we construct the frugal-reachability game Ri = \u27e8V, E\u2032\u2032, V \\Fd\u222aS, frRi\u27e9, where E\u2032\u2032 = {\u27e8u, u\u2032\u27e9\u2208E : u \u2208Fd} and frRi(v) = ( fr(v) if v \u2208S ThGi(v) if v \u2208V \\ Fd. \u25b6Lemma 32. Let i \u22650. Assume that for every v \u2208V \\ F, we have ThGi(v) = Thi(v). Then, for every u \u2208Fd, we have Thi(u) = ThRi(u). Proof. Suppose that G starts from \u27e8u, B\u27e9with u \u2208Fd. We first show that when B \u2265ThRi(u), Player 1 can ensure the objective Fr-Parityi. Indeed, by following a winning strategy in Ri, Player 1 guarantees that either (1) the frugal objective is satisfied in S, in which case the play is clearly winning in G, or (2) the game reaches a configuration \u27e8v, B\u2032\u27e9with v \u2208V \\ Fd and B\u2032 \u2265ThGi(v), from which, by the assumption that ThGi(v) = Thi(v), he can proceed with a winning strategy for Fr-Parityi. On the other hand, when B < ThRi(u), Player 2 violates Fr-Parityi as follows. 
She first follows a winning strategy in Ri, which ensures that no matter how Player 1 plays, the resulting play either (1) violates the frugal objective in S, (2) stays in Fd, or (3) it reaches a configuration \u27e8v, B\u2032\u27e9with v \u2208V \\ Fd and B\u2032 < ThGi(v) = Thi(v). In Cases (1) and (2), the play is clearly winning for Player 2 in G, and in Case (3), the assumption on Thi(v) implies that Player 2 can continue with a strategy that violates Fr-Parityi. \u25c0 We define the frugal target budgets frGi of the frugal-parity game Gi. Recall that we obtain Gi from G by setting Fd to be sinks. Thus, the sinks in Gi consist of \u201cold\u201d sinks S and \u201cnew\u201d sinks Fd. The frugal target budgets of G and Gi agree on S, thus for s \u2208S and i \u22650, we have fr(s) = frGi(s). For u \u2208Fd, we define frG0(u) = k + 1 and for i > 0, we define frGi(u) = ThRi\u22121(u). \u25b6Lemma 33. For i \u22650 and u \u2208Fd, assume that a budget of fri(u) is the threshold to satisfy Fr-Parityi from u. Then, for v \u2208V \\ Fd, we have Thi(v) = ThGi(v). Proof. Recall that each Gi agrees with G on the parity indices in V \\ Fd, thus an infinite path that satisfies the parity condition in Gi satisfies it in G, and that Gi and G agree on the frugal target budgets in S. Under the assumption in the statement, we prove that Thi(v) = ThGi(v), for v \u2208V \\ Fd. Suppose that G starts from \u27e8v, B\u27e9with v \u2208V \\Fd. First, when B \u2265ThGi(v), Player 1 ensures Fr-Parityi by following a winning strategy in Gi. Let \u03c0 be the play that is obtained when Player 2 follows some strategy. Note that \u03c0 is winning for Player 1 in Gi, thus it satisfies one of the following: \f22 Computing Threshold Budgets in Discrete-Bidding Games 1. \u03c0 is finite and ends in \u27e8s, B\u27e9with s \u2208S and B \u2265fri(s), 2. \u03c0 is infinite (i.e., a sink is never reached) and satisfies the parity condition, or 3. \u03c0 is finite and ends in \u27e8u, B\u27e9with u \u2208Fd and B \u2265fri(u). Case (1) clearly satisfies the frugal objective of Fr-Parityi, in Case (2) the parity condition is satisfied without visiting Fd once, thus again, Fr-Parityi is satisfied, and in Case (3), once the game reaches \u27e8u, B\u27e9, the assumption on fri implies that Player 1 can follow a strategy that ensures Fr-Parityi. Second, if B < ThGi(v), Player 2 violates Fr-Parityi by following a winning strategy in Gi. The argument is dual to the above. \u25c0 Note that since every path that starts from Fd violates Fr-Parity0, the threshold budget at every u \u2208Fd is k + 1. This constitutes the proof of the base case of the following lemma, and the inductive step is obtained by combining Lem. 32 with Lem. 33. \u25b6Lemma 34. For i \u22650, for v \u2208V we have Thi(v) = ThGi(v) and for u \u2208V \\ Fd, we have Thi(u) = ThRi(u). It follows from Obs. 31 that the sequence Th0, Th1, . . . reaches a fixed point. We show that at the fixed point, the threshold coincides with ThG. \u25b6Lemma 35. Let n \u2208N such that Thn = Thn+1. Then, ThG = Thn. Proof. Lem. 34 and Obs. 31 show that ThG \u2264Thn. To show equality, we show that Player 2 wins G starting from a configuration \u27e8v, B\u27e9with v \u2208V \\ Fd and B < Thn(v). Player 2 proceeds by following a winning strategy in Gn+1. Let \u03c0 be a play that results from some Player 1 strategy. Since \u03c0 is winning for Player 2 in Gn+1, there are three cases: 1. 
π is finite and ends in ⟨s, B′⟩ with s ∈ S and B′ < fr(s), thus it is winning for Player 2 also in G, 2. π is infinite and violates the parity objective, thus, since G and Gn+1 agree on the parity indices, π is winning for Player 2 in G, or 3. π ends in ⟨u, B′⟩ with B′ < frn+1(u). In Case (3), since frn+1(u) = ThRn(u), Player 2 continues by following a winning strategy for the safety player in Rn. This guarantees that no matter how Player 1 plays, the play either stays within Fd, thus it necessarily violates the parity objective of G, or it reaches ⟨v, B′′⟩ with v ∈ V \ Fd and B′′ < frRn(v). In the latter case, since frRn(v) = Thn(v) = Thn+1(v), Player 2 can restart her strategy. Note that Player 2's strategy guarantees that either Fd is eventually never reached, in which case she wins, or it is reached infinitely often, in which case she also wins since the play visits parity index d infinitely often. ◀ Pseudo code The algorithm is described in Alg. 2 for an even d and from Player 1's perspective. Note that since in a frugal-reachability game both the thresholds for the reachability and the safety player satisfy the average property (Thm. 19), and the algorithm boils down to repeated calls to a solution of a frugal-reachability game, it outputs a function that satisfies the average property.
Algorithm 2 Frugal-Parity-Threshold(G = ⟨V, E, p, S, fr⟩)
  if G uses one parity index d then
    if d is odd then return Frugal-Safety(S = ⟨V, E, S, fr⟩)
    else return Frugal-Reachability(R = ⟨V, E, S, fr⟩)
  Define E′ = {⟨v, v′⟩ ∈ E : v ∈ V \ Fd} and E′′ = {⟨u, u′⟩ ∈ E : u ∈ Fd}
  Define G0 = ⟨V, E′, p|V\Fd, S ∪ Fd, frG0⟩ with frG0(u) = fr(u) if u ∈ S, and frG0(u) = k + 1 if u ∈ Fd
  ThG0 := Frugal-Parity-Threshold(G0)
  Define R0 = ⟨V, E′′, (V \ Fd) ∪ S, frR0⟩
  ThR0 := Frugal-Reachability-Threshold(R0)
  Define fr1(v) = ThR0(v) for v ∈ Fd
  for i = 1, . . . do
    ThGi := Frugal-Parity-Threshold(Gi = ⟨V, E′, p′, S ∪ Fd, frGi = fr ∪ ThRi−1⟩)
    ThRi := Frugal-Reachability-Threshold(Ri = ⟨V, E′′, V \ Fd, ThGi⟩)
    For each v ∈ Fd, define fri+1(v) = ThRi(v)
    if fri(v) = fri+1(v) for all v ∈ Fd then
      Define ThG(v) = fri(v) for v ∈ Fd and ThG(u) = ThGi(u) for u ∈ V \ Fd
      return ThG
▶Theorem 36. Given a frugal-parity bidding game G with maximal index d, Alg. 2 outputs the thresholds ThG. Moreover, ThG satisfies the average property and Alg. 2 runs in time O((|V| · k)^d). ▶Remark 37. We point out that while we develop Alg. 2 for discrete-bidding games, it can be seen as a general "recipe" for extending a solution for frugal-reachability games to parity bidding games. While the algorithm that arises from this recipe might not be optimal complexity-wise, it does provide a first upper bound and, importantly, it extends a proof that thresholds in frugal-reachability games have the average property to parity bidding games. 5 Finding threshold budgets is in NP and coNP We formalize the problem of finding threshold budgets as a decision problem: ▶Problem 1. (Finding Threshold Budgets). Given a frugal-parity bidding game G = ⟨V, E, k, p, S, fr⟩, a vertex v ∈ V, and ℓ ∈ [k], decide whether ThG(v) ≥ ℓ. We will show that Prob. 1 is in NP and coNP.
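Before moving on, the recursion of Alg. 2 above can be rendered in Python as follows. This is a hedged sketch that mirrors the case d even; the solvers frugal_safety and frugal_reachability are assumed black boxes with the same illustrative interface as before, and the plumbing (how sinks, restricted arenas, and frugal budgets are passed around) is a modelling choice, not the paper's code.

def frugal_parity_thresholds(V, E, k, p, S, fr, frugal_safety, frugal_reachability):
    # p: vertex -> parity index (on V); S: sinks; fr: sink -> frugal target budget.
    d = max(p[v] for v in V)
    if all(p[v] == d for v in V):                    # base case: a single parity index
        solve = frugal_safety if d % 2 == 1 else frugal_reachability
        return solve(V, E, k, targets=S, fr=fr)
    Fd = {v for v in V if p[v] == d}                 # assume d is even (the dual case is symmetric)
    E_in = [(u, w) for (u, w) in E if u not in Fd]   # arena of the games G_i (Fd become sinks)
    E_out = [(u, w) for (u, w) in E if u in Fd]      # arena of the games R_i
    rest = [v for v in V if v not in Fd]
    p_rest = {v: p[v] for v in rest}
    fr_Fd = {u: k + 1 for u in Fd}                   # frugal budgets on the new sinks, fr_{G_0}
    while True:
        ThG = frugal_parity_thresholds(rest, E_in, k, p_rest, list(S) + list(Fd),
                                       {**fr, **fr_Fd}, frugal_safety, frugal_reachability)
        ThR = frugal_reachability(V, E_out, k, targets=rest + list(S),
                                  fr={**fr, **{v: ThG[v] for v in rest}})
        new_fr_Fd = {u: ThR[u] for u in Fd}
        if new_fr_Fd == fr_Fd:                       # fixed point on Fd: assemble the thresholds
            return {**{u: fr_Fd[u] for u in Fd}, **{v: ThG[v] for v in rest}}
        fr_Fd = new_fr_Fd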
Note that a function T : V \u2192[k] \u222a{k + 1} can be represented using O(|V | \u00b7 log(k)) bits, thus it is polynomial in the size of the input to Prob. 1. We describe a first attempt to show membership in NP and coNP. Guess T, verify that it satisfies the average property, and accept \u27e8G, v, \u2113\u27e9iff T(v) \u2265\u2113. Unfortunately, such an attempt fails. Even though by Thm. 36, the thresholds satisfy the average property, Thm. 11 shows that there can be other functions that satisfy it. That is, it could also be the case that T satisfies the average property and T \u0338\u2261ThG. We point out that in continuous-bidding games, if guessing such T would be possible, this scheme would have succeeded since there is a unique function that satisfies the continuous average property (Thm. 9). In the remainder of this section, we describe a polynomial-time algorithm to solve the following problem. \u25b6Problem 2. (Verifying a guess of T). Given a frugal-parity discrete-bidding game G with vertices V and a function T : V \u2192[k] \u222a{k + 1} that satisfies the average property, decide whether T \u2261ThG. We describe the high-level idea. We find it instrumental to first recall an NP algorithm to decide whether Player 1 wins a turn-based parity game from an initial vertex v0. The \f24 Computing Threshold Budgets in Discrete-Bidding Games algorithm first guesses a memoryless strategy f, which is a function that maps each vertex v that is controlled by Player 1 to an outgoing edge from v. The algorithm then verifies that f is winning for Player 1. The idea is to solve the following problem: given a Player 1 strategy f, check whether Player 2 has a counter strategy g such that play(v0, f, g) violates Player 1\u2019s objective. Clearly, f is winning iff Player 2 cannot counter f. Deciding whether Player 2 can counter f is done as follows. We trim every edge in the game that does not comply with f. This leaves a graph with only Player 2 choices, and we search in it for a \u201classo\u201d path that satisfies Player 2\u2019s objective. Such a path exists iff Player 2 can counter f iff f is not winning.4 Our algorithm for frugal-parity games follows conceptually similar steps. Let T : V \u2192 [k] \u222a{k + 1} that satisfies the average property. We verify whether T \u2261ThG as follows. We construct a partial strategy fT based on T. Recall that a partial strategy proposes a bid and a set of allowed vertices in each configuration. That is, guessing T is not quite like guessing a strategy as in the algorithm above, rather T gives rise to a partial strategy. We seek a Player 1 strategy that agrees with fT and wins when the game starts from every configuration \u27e8v, T(v)\u27e9. Note that if T \u2261ThG, then such a strategy exists. Given fT , we describe an algorithm that decides whether Player 2 can counter every Player 1 strategy that agrees with fT . Our algorithm constructs and solves a turn-based parity game. 5.1 From bidding games to turn-based games Let T be a function that satisfies the average property. Recall that the partial strategy fT that is constructed in Sec. 3.2.2 is a function that, given a configuration \u27e8v, B\u27e9, outputs \u27e8b, A\u27e9, where b \u2264B is a bid and A \u2286V is a subset of neighbors of v that are called allowed vertices. A strategy f \u2032 agrees with fT if from each configuration, it bids the same as fT and chooses an allowed vertex upon winning the bidding. 
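The verification step recalled above for turn-based parity games (trim the edges that do not comply with the guessed memoryless strategy and search for a lasso that satisfies Player 2's objective) is easy to make concrete. The sketch below uses networkx and assumes the max-parity convention in which Player 1 wins iff the maximal priority seen infinitely often is odd; it is an illustration, not code from the paper.

import networkx as nx

def player2_can_counter(G, priority, owner, f, v0):
    # G: nx.DiGraph; priority: vertex -> int; owner: vertex -> 1 or 2;
    # f: memoryless Player 1 strategy, vertex -> chosen successor (for Player 1 vertices).
    H = nx.DiGraph()
    H.add_nodes_from(G.nodes)
    for u, w in G.edges:
        if owner[u] == 2 or f[u] == w:       # Player 1 vertices keep only the edge chosen by f
            H.add_edge(u, w)
    reach = nx.descendants(H, v0) | {v0}
    # Player 2 counters f iff some reachable cycle has an even maximal priority.
    for q in sorted({priority[v] for v in reach if priority[v] % 2 == 0}):
        sub = H.subgraph([v for v in reach if priority[v] <= q])
        for scc in nx.strongly_connected_components(sub):
            has_cycle = len(scc) > 1 or any(sub.has_edge(v, v) for v in scc)
            if has_cycle and any(priority[v] == q for v in scc):
                return True
    return False

# f is winning for Player 1 from v0 iff not player2_can_counter(G, priority, owner, f, v0).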
We construct a parity turn-based game GT,G such that if Player 1 wins in every vertex in GT,G, then Player 1 has a strategy f \u2032 that agrees with fT and wins from every configuration \u27e8v, T(v)\u27e9in G, thus T \u2265ThG. We describe the intuition behind the construction of GT,G. Consider the following first attempt to construct GT,G. Recall the construction in Sec. 2.2.1 of the explicit concurrent game that corresponds to G, and denote it by G\u2032. The vertices of G\u2032 are the configurations C of G. We construct a game G\u2032\u2032 on the configuration graph C. Recall that our goal is to check whether Player 2 can counter Player 1\u2019s strategy, which can be thought of as Player 2 responds to Player 1\u2019s actions in each turn. Thus, G\u2032\u2032 is turn-based: when the game is in configuration c, Player 1 first chooses \u27e8b1, v1\u27e9, and only then, Player 2 responds by choosing an action \u27e8b2, v2\u27e9. The next configuration is determined by these two actions in the same manner as the concurrent game. Next, we trim Player 1 actions in G\u2032 that do not comply with fT : in a configuration c = \u27e8v, B\u27e9in G\u2032\u2032 with \u27e8b, A\u27e9= fT (c), Player 1 must bid b and choose a vertex in A. That is, an action \u27e8b\u2032, v\u2032\u27e9is not allowed if b\u2032 \u0338= b or if v\u2032 / \u2208A. Finally, we omit Player 2 actions that are dominated: observing a Player 1 bid of b, she chooses between bidding 0 and letting Player 1 win the bidding or bidding b \u22950\u2217and winning the bidding. It is not hard to see that Player 1 wins G\u2032\u2032 from configuration \u27e8v, B\u27e9iff there is a strategy f \u2032 that agrees with fT and wins G from \u27e8v, B\u27e9. 4 An alternative description of the verification algorithm is the following. View the trimmed graph as an automaton with a singleton alphabet whose acceptance condition is Player 2\u2019s objective, and check whether the language of the automaton is empty. The language is empty iff Player 2 cannot counter f. \fG. Avni and S. Sadhukhan 25 The first attempt fails since the size of G\u2032\u2032 is proportional to the number of configuration, which is exponential in G. We overcome this key challenge as follows. Lem. 15 shows that when G starts from configuration \u27e8v, B\u27e9with B \u2265T(v) a strategy f \u2032 that agrees with fT maintains an invariant on Player 1\u2019s budget: the game only reaches configurations of the form \u27e8v\u2032, B\u2032\u27e9with B\u2032 \u2265T(v\u2032). We shrink the size of the game by grouping all configurations in which Player 1\u2019s budget is greater than T(v) \u22950\u2217into a vertex denoted \u27e8v, \u22a4\u27e9. We describe the idea that allows keeping only three copies of each vertex. We will show in Lem. 39 that if Player 1 wins in all vertices of GT,G, then he wins in G. We call the distance from the invariant B \u2212T(v) as spare change. Recall from Sec. 3.2.2 that fT chooses one of two bids in a vertex v \u2208V , and the choice depends on the advantage status and does not depend on the spare change. Thus, our winning strategy in G emulates a winning strategy in GT,G: both bid according to fT and the latter prescribes a vertex to move to upon winning a bidding. For a Player 2 strategy, the resulting play in G is simulated by a play in GT,G. There can be three outcomes: (1) the play in GT,G is infinite, (2) it ends in a sink S, or (3) it ends in a sink \u27e8v, \u22a4\u27e9. The first two cases are winning in G. 
When Case (3) occurs and GT,G reaches \u27e8v, \u22a4\u27e9, then G reaches \u27e8v, B\u27e9with B > T(v) \u22950\u2217, and we restart GT,G from either \u27e8v, T(v)\u27e9or \u27e8v, T(v) \u22950\u2217\u27e9depending on the advantage status. That is, after restarting the game, Player 1 plays the same only that his spare change increased. A key idea is that whenever Case (3) occurs, Player 1\u2019s spare change strictly increases, thus this can happen only finitely often since the space change cannot exceed the total budget k. Formally, we define the turn-based parity game GT,G = \u27e8V1, V2, E, p\u27e9as follows. The vertices that are controlled by Player i are Vi, for i \u2208{1, 2}, where V1 = {\u27e8v, T(v)\u27e9, \u27e8v, T(v) \u22950\u2217\u27e9, \u27e8v, \u22a4\u27e9: v \u2208(V \u222aS)} and V2 = {\u27e8v, c\u27e9: v \u2208V }. We define the edges. A vertex \u27e8v, B\u27e9is a sink if v \u2208S or if B = \u22a4. Consider c = \u27e8v, B\u27e9\u2208V1 and let \u27e8b, A\u27e9= fT (c). The neighbors of c are {\u27e8v\u2032, c\u27e9: v\u2032 \u2208A}. Intuitively, \u27e8v\u2032, c\u27e9means that Player 1 chooses the action \u27e8b, v\u2032\u27e9at configuration c; the bid b is determined by fT and v\u2032 is an allowed vertex. A vertex \u27e8v\u2032, c\u27e9is a Player 2 vertex. Intuitively, Player 2 makes two choices: who wins the bidding and where the token moves upon winning. Thus, a vertex \u27e8v\u2032, c\u27e9has two types of neighbors, depending on who wins the bidding at c: First, \u27e8v\u2032, B \u2296b\u27e9is a neighbor of \u27e8v\u2032, c\u27e9, meaning Player 2 lets Player 1 win the bidding by bidding 0. Second, suppose that k\u2217\u2296B \u2265b\u22950\u2217, i.e., Player 2 has sufficient budget to win the bidding. Let B\u2032 = B \u2295(b \u22950\u2217) be Player 1\u2019s updated budget and w \u2208N(v). If c\u2032 = \u27e8w, B\u2032\u27e9\u2208V1, then c\u2032 is a neighbor of \u27e8v\u2032, c\u27e9. We note that c\u2032 / \u2208V1 when B\u2032 exceeds T(w) \u22950\u2217, then we trim the budget and set \u27e8w, \u22a4\u27e9as a neighbor of \u27e8v\u2032, c\u27e9. For ease of presentation, we define parity indices only in Player 1 vertices. A non-sink vertex in GT,G \u201cinherits\u201d its parity index from the vertex in G; namely, for c = \u27e8v, B\u27e9\u2208V1, we define p\u2032(c) = p(v). The parity index of a sink is odd so that Player 1 wins in sinks. \u25b6Example 38. Fig. 1 depicts a frugal-reachability bidding game G1 with two functions that satisfy the average property: T1 = \u27e84, 3\u2217, 3, 2\u27e9and T2 = \u27e85, 4\u2217, 3\u2217, 2\u27e9. In Fig. 3, we depict the games GT1,G1 and GT2,G1. We omit Player 1 vertices since both fT1 and fT2 allow only singleton sets of allowed vertices from all configurations, thus Player 1 has no choices. Player 1\u2019s goal in both games is to reach a sink. An outgoing edge from vertex c labeled by \u27e8b1, b2\u27e9represents the outcome of a bidding at configuration c in which Player i bids bi, for i \u2208{1, 2}. Thus, each vertex c has two outgoing edges labeled by \u27e8b1, 0\u27e9and \u27e8b1, b1 \u22950\u2217\u27e9, where b1 is the bid that fT1 or fT2 prescribes at c. Note that some edges are disallowed. 
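The construction above, and the example and proof that continue below, manipulate discrete budgets with the tie-breaking advantage ∗ (expressions such as T(v) ⊕ 0∗ and k∗ ⊖ B). One convenient encoding, sketched here as an assumption rather than quoted from the paper, treats the advantage as half a unit, so that a budget m or m∗ becomes the integer 2m or 2m + 1 and ⊕/⊖ become ordinary addition and subtraction.

# Budgets and bids with the advantage encoded in half-units: m -> 2m, m* -> 2m + 1.
def enc(m, adv=False):
    return 2 * m + (1 if adv else 0)

def dec(u):
    return (u // 2, u % 2 == 1)              # (integer part, holds the advantage?)

def plus(a, b):
    return a + b

def minus(a, b):
    return a - b

assert plus(enc(5, True), enc(0, True)) == enc(6)       # 5* (+) 0* = 6
assert minus(enc(5, True), enc(5)) == enc(0, True)      # 5* (-) 5 = 0*

Both assertions match values computed in the text below (the vertex ⟨v, 6⟩ obtained from T(v) = 5∗, and Player 2's budget k∗ ⊖ 5 = 0∗ for k = 5).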
For example, in the configuration ⟨v1, 5⟩ in GT2,G1, the bid prescribed by fT2 is b1 = 1, and Player 2 cannot bid b1 ⊕ 0∗ = 1∗ since it exceeds her available budget (indeed, k = 5, thus Player 2's budget in c is k∗ ⊖ 5 = 0∗). [Figure 3. Turn-based reachability games that correspond to G1 from Fig. 1 for two functions that satisfy the average property: (a) the turn-based game GT1,G1, in which Player 1 loses from some vertices, thus T1 ̸≡ ThG1; (b) the turn-based game GT2,G1, in which Player 1 wins from all vertices, thus T2 ≡ ThG1. Player 1 vertices are omitted; that is, all depicted vertices are Player 2 vertices. The edge labelings depict bidding outcomes and are meant to ease presentation.] Note that GT1,G1 has a cycle. Thus, Player 1 does not win from every vertex and T1 does not coincide with the threshold budgets. On the other hand, GT2,G1 is a DAG. Thus, no matter how Player 2 plays, Player 1 wins from all vertices, and indeed T2 ≡ ThG1. ◀ 5.2 Correctness In this section, we prove soundness and completeness of the approach. We start with soundness. ▶Lemma 39. If Player 1 wins from every vertex in GT,G, then T ≥ ThG. Proof. Suppose that Player 1 wins from every vertex of GT,G and let f∗ be a Player 1 memoryless winning strategy. We construct a strategy f in G based on f∗ and show that it is winning from every configuration ⟨v, B⟩ where B ≥ T(v). This implies that T ≥ ThG, since f witnesses that Player 1 can win with a budget of T(v) from v. Note that we do not yet rule out that a different strategy wins with a lower budget; this will come later. We introduce notation. Consider a configuration c = ⟨v, B⟩ in G with B ≥ T(v). The vertex in GT,G that agrees with c, denoted by c∗, is the vertex in {⟨v, T(v)⟩, ⟨v, T(v) ⊕ 0∗⟩} that matches with c on the status of the advantage (and, of course, on the vertex of G). Note the convention of calling c a configuration in G and a vertex in GT,G. For example, if T(v) = 5∗ for some vertex v and c = ⟨v, 9⟩ is a configuration in G, then the vertex of GT,G that agrees with c, denoted by c∗, is ⟨v, 6⟩. Recall that even though the budget in c may be higher than that of c∗, the partial strategy fT acts the same in both, i.e., fT(c) = fT(c∗). The spare change that is associated with c, denoted by Spare(c), is |B| − |T(v)|. In the following, we construct f based on fT and f∗. We define f to agree with fT on the bid and to choose the successor vertex according to f∗. Let ⟨b, A⟩ = fT(c). Recall that c∗ is a Player 1 vertex in GT,G and its neighbours are of the form ⟨v′, c⟩ such that v′ is an allowed vertex, i.e., v′ ∈ A. Intuitively, proceeding to vertex ⟨v′, c∗⟩ in GT,G is associated with moving to v′ upon winning the bidding at c∗. Let ⟨v′, c∗⟩ = f∗(c∗). Then, we define f(c) = ⟨b, v′⟩.
\fG. Avni and S. Sadhukhan 27 We claim that f is winning from an initial configuration c0 = \u27e8v, B\u27e9in G where B \u2265T(v). Let g be a Player 2 strategy in G. The initial vertex c\u2217 0 in GT,G is the vertex that agrees with c. We construct a Player 2 strategy g\u2217in GT,G so that play(c\u2217 0, f \u2217, g\u2217) in GT,G simulates play(c0, f, g) in G: when play(c0, f, g) is in a configuration c, play(c\u2217 0, f \u2217, g\u2217) is in a vertex c\u2217that agrees with c. We define g\u2217inductively. Initially the invariant holds due to our choice of c\u2217 0 in GT,G. Suppose that G is at configuration c = \u27e8v, B\u27e9, then the play(c\u2217 0, f \u2217, g\u2217) in GT,G is at the vertex c\u2217that agrees with c. Denote by \u02c6 T(v) \u2208{T(v), T(v) \u22950\u2217} such that c\u2217= \u27e8v, \u02c6 T(v)\u27e9. Let \u27e8b1, v1\u27e9= f(c) as defined above, let \u27e8b2, v2\u27e9= g(c) be Player 2\u2019s choice, and let d be the next configuration in G. We extend the play in GT,G as follows. We first register Player 1\u2019s move in GT,G by proceeding to the Player 2 vertex \u27e8v1, c\u2217\u27e9. We distinguish between two cases. First, Player 1 wins the bidding in G. We define g\u2217to choose \u27e8v1, \u02c6 T(v) \u2296b1\u27e9as the successor vertex from \u27e8v1, c\u2217\u27e9. Note that, in this case, the configuration d is d\u2217= \u27e8v1, B \u2296b1\u27e9. Since c\u2217agrees with c, then d\u2217agrees with d. Second, Player 2 wins the bidding in G. We define g\u2217to proceed to d\u2217= \u27e8v2, \u02c6 T(v) \u2295b2\u27e9, if it exists in GT,G. If d\u2217is in the graph, then again, d\u2217agrees with d. On the other hand, if d\u2217is not a vertex in GT,G, it intuitively means \u02c6 T(v) \u2295b2 > T(v2) \u22950\u2217, and we define g\u2217to proceed to \u27e8v2, \u22a4\u27e9. Let us denote \u02c6 d\u2217be the vertex in GT,G that agrees with d. We apply the same definition above starting from vertex \u02c6 d\u2217in GT,G. That is, in the next turn, assuming that Player 1 proceeds in GT,G to \u27e8v2, \u02c6 d\u2217\u27e9, we define g\u2217according to g(\u27e8v2, B \u2295b2\u27e9), as discussed above. We call this a restart in the simulation. Note that if the simulation is not a restart, then Spare(c) = Spare(d), and if it is a restart, then Spare(c) < Spare(d). We claim that play(c0, f, g) = c0, c1, . . . is winning for Player 1 in G. We slightly abuse notation and denote by play(c\u2217 0, f \u2217, g\u2217) = c\u2217 0, c\u2217 1, . . . the sequence of Player 1 vertices that are traversed in GT,G, that is we skip Player 2 vertices. Since f \u2217is winning in GT,G, then play(c\u2217 0, f \u2217, g\u2217) is winning for Player 1. We distinguish between three cases. First, play(c\u2217 0, f \u2217, g\u2217) is infinite. Since for every i \u22650, the vertex c\u2217 i agrees with ci, the two plays agree on the parity indices that are visited, thus play(c0, f, g) satisfies the parity objective. Second, play(c\u2217 0, f \u2217, g\u2217) is finite and ends in a sink configuration ck = \u27e8s, B\u27e9. Note that B \u2265T(s), and the definition of T requires that T(s) = fr(s). Since c\u2217 k agrees with ck, it follows that ck satisfies the frugal objective. Third, play(c\u2217 0, f \u2217, g\u2217) is finite and ends in c\u2217 k = \u27e8v, \u22a4\u27e9. Let \u02c6 c\u2217 k denote the vertex that agrees with ck. We apply the reasoning above to play( \u02c6 c\u2217 k, f \u2217, g\u2217). 
Note that the third case can occur only finitely many times, since a restart causes the spare change to strictly increase, and the spare change is bounded by k. Thus, eventually, the play in GT,G falls into one of the first two cases, which implies that play(c0, f, g) is winning for Player 1. \u25c0 \u25b6Corollary 40. The proof of Lem. 39 constructs a Player 1 winning strategy f in G. Note that in order to implement f, we only need to keep track of a vertex in GT,G. Thus, its memory size equals the size of GT,G, which is linear in the size of G. This is significantly smaller than previously known constructions in parity and reachability discrete-bidding games, where the strategy size is polynomial in k, and is thus exponential when k is given in binary. The following lemma shows completeness; namely, that a correct guess of T implies that Player 1 wins from every vertex in GT,G. \u25b6Lemma 41. If T \u2261ThG, then Player 1 wins from every vertex in GT,G. Proof. Assume towards contradiction that T \u2261ThG and there is a Player 1 vertex c\u2217 0 = \u27e8v, B\u27e9 in GT,G that is losing for Player 1. Let g\u2217be a Player 2 memoryless strategy that wins \f28 Computing Threshold Budgets in Discrete-Bidding Games from vertex c\u2217 0 in GT,G. Recall that B \u2208{T(v), T(v) \u22950\u2217}. Note that B \u2265T(v). Since we assume T \u2261ThG, Player 1 wins from configuration c0 = \u27e8v, B\u27e9in G. Let f be a Player 1 winning strategy from c0 in G. Note that we follow the convention of referring to c in G as a configuration and c\u2217in GT,G as a vertex, even though both are \u27e8v, B\u27e9. We will reach a contradiction by constructing a Player 2 strategy g in G based on g\u2217that counters f, thus showing that f is not winning. Recall that a winning Player 1 strategy can be thought of as a strategy that, in each turn, reveals Player 1\u2019s action first, and allows Player 2 to respond to Player 1\u2019s action. We construct a Player 1 strategy f \u2217in GT,G based on f as long as f agrees with fT and a Player 2 strategy g in G based on g\u2217. Both constructions are straightforward. First, for f \u2217, consider a Player 1 vertex c\u2217in GT,G. Recall that c\u2217is a configuration in G, which we denote by c to avoid confusion. Suppose that f(c) agrees with fT (c), that is denoting \u27e8b, A\u27e9= fT (c), we have \u27e8b, v\u27e9= f(c) with v \u2208A. Then, in GT,G, from vertex c\u2217, the strategy f \u2217proceeds to \u27e8v, c\u2217\u27e9. We stress that f \u2217is only defined when f agrees with fT . Second, for g, recall that Player 2 vertices in GT,G are of the form \u27e8v, c\u27e9, and Player 2 chooses between letting Player 1 win the bidding or bidding b \u22950\u2217, winning the bidding, and choosing the next vertex. Assume \u27e8b, v\u27e9= f(c) that agrees with fT , then g responds by following g\u2217: if g\u2217lets Player 1 win from \u27e8v, c\u2217\u27e9, then g bids 0 in c and lets Player 1 win the bidding, and if it wins the bidding by proceeding to vertex \u27e8v\u2032, B\u2032\u27e9, then g chooses \u27e8b \u22950\u2217, v\u2032\u27e9, i.e., it too wins the bidding in c and proceeds to vertex v\u2032. Let \u03c0\u2217and \u03c0 respectively denote the longest histories of GT,G and G that start from c\u2217 0 and c0 that arise from applying f \u2217against g\u2217in GT,G and f against g in G, as long as f agrees with fT . Note that, skipping Player 2 vertices in GT,G, the plays \u03c0\u2217and \u03c0 traverse the same sequence of configurations. 
We claim that the two plays cannot be infinite. Indeed, assume otherwise, then since we assume f is winning, \u03c0 is satisfies Player 1\u2019s objective, and since we assume g\u2217is winning, \u03c0\u2217violates Player 1\u2019s objective, but both cannot hold at the same time. Also, \u03c0\u2217cannot end in a sink since sinks are winning for Player 1 and \u03c0\u2217 results from a Player 2 winning strategy. We conclude that \u03c0 and \u03c0\u2217are finite and end in a configuration in which f does not agree with fT . Let c = \u27e8v, B\u27e9be the last configuration in \u03c0. That is, c is the first configuration in which f chooses an action that does not agree with fT . Let \u27e8b, A\u27e9= fT (c) and \u27e8b1, v1\u27e9= f(\u03c0). In the remainder of the proof, we consider the three ways in which f can disagree with fT , choose a Player 2 response, and show that she can win from the resulting configuration. We need the following observation, which intuitively states that when Player 1 has the advantage and the bids of f and fT agree, then Player 1 uses the advantage. The proof is obtained by a careful case-by-case analysis of Def. 10 and Eq. (2): Observation: For every vertex v, we have T(v) is in N iff bT v is in N. We proceed to analyze the three ways in which f disagrees with fT : Case 1: f underbids; b1 < b. Player 2 responds by bidding b1 \u22960\u2217. We show below that she wins the bidding. She then proceeds to a neighbour v\u2032 with T-value T(v+). Let c\u2032 = \u27e8v\u2032, B\u2032\u27e9denote the resulting configuration. Intuitively, Player 2 pays less than she should for winning the bidding. Formally, we will show that B\u2032 < T(v\u2032). This will conclude the proof. Indeed, since we assume T \u2261ThG, Player 2 has a winning strategy from c\u2032, which she uses to counter f. We distinguish between two cases depending on whether Player 1 holds the advantage: (a) Player 1 holds the advantage, i.e, B \u2208N\u2217\\ N. By the observation above, he uses it, thus b \u2208N\u2217\\ N. Player 2 bids b2 = b \u22960\u2217. First, note that the bid is legal. Indeed, since b contains the advantage b2 does not. Second, note that Player 2 wins the bidding. \fG. Avni and S. Sadhukhan 29 Indeed, if Player 1 bids less b2, clearly Player 2 wins, and if he bids b2, then a tie occurs, and since he has the advantage and does not use it, Player 2 wins the bidding. As a result, Player 1\u2019s budget is updated to B \u2295(b \u22960\u2217) < B \u2295b < B \u2295(b \u22950\u2217) = |T(v+)|\u2217, in particular, B \u2295(b \u22960\u2217) < T(v+). (b) Player 1 does not hold the advantage, i.e, B \u2208N. By the observation, b does not include the advantage and bidding b1 < b necessarily implies b1 < b \u22960\u2217, simply because he does not hold the advantage. Player 2 bids b \u22960\u2217\u2208N\u2217\\ N. It is not hard to see that she wins the bidding and showing that B\u2032 < T(v\u2032) is done as in the previous case. Case 2: f oversbids; b1 > b. We assume B = T(v) and the case of B = T(v) \u22950\u2217is similar. Note that the observation implies that fT proposes a bid of b \u22950\u2217when Player 1\u2019s budget is T(v) \u22950\u2217. Intuitively, if Player 1 wins the bidding with his bid of b1, he will pay \u201ctoo much\u201d, and Player 2 indeed lets him win by bidding 0 (except for one case that we will explain later). The resulting configuration is c\u2032 = \u27e8v1, B \u2296b1\u27e9, and we will show that B \u2296b1 < T(v1) (barring one case). 
As in the underbidding case, this concludes the proof: since we assume T \u2261ThG, Player 2 wins from c\u2032. We first consider the easier case that b1 > b\u22950\u2217. Then, B \u2296b1 < B \u2296(b\u22950\u2217) \u2264T(v\u2212) \u2264 T(v1), thus B \u2296b1 < T(v1), as required. We proceed to the harder case of b1 = b \u22950\u2217. Note that Player 1 has the advantage. Indeed, if he does not have the advantage, then b does not use the advantage and he cannot bid b \u22950\u2217since it uses the advantage. Recall from Definition 10 that there are two remaining possibilities: (i) |T(v+)| + |T(v\u2212)| is odd and T(v\u2212) \u2208N, and (ii) |T(v+)| + |T(v\u2212)| is even and T(v\u2212) \u2208N\u2217\\ N. In Case (i), T(v) \u2212b = T(v\u2212), hence when Player 1 bids b \u22950\u2217, Player 2\u2019s response is 0, and Player 1\u2019s budget in the next configuration is strictly lower than the threshold. We conclude with Case (ii). Recall that in this case b = \u230a|T (v+)|\u2212|T (v\u2212)| 2 \u230b\u22960\u2217. This case requires a different approach since Player 1 can bid b\u22950\u2217and even if he wins the bidding, his budget in the next configuration does not fall below the threshold. We define g to follow g\u2217. Consider the move of g\u2217from \u27e8v1, c\u27e9. If it lets Player 1 win by proceeding to \u27e8v1, B \u2296b\u27e9, then g responds to f in G by bidding 0. Recall that Player 2\u2019s other action in GT,G corresponds to a bid of b \u22950\u2217, and is represented by proceeding to vertex \u27e8v\u2032, B \u2296b \u22950\u2217\u27e9. Then, in G, we define g to bid b \u22950\u2217, thus both players bid b \u22950\u2217and Player 2 wins the tie since Player 1 has the advantage and does not use it. Player 2 proceeds to v\u2032 following g\u2217. The key idea is that in both cases, we reach the same configuration in G and GT,G. That is, even though f disagrees with fT , we extend the two plays \u03c0 and \u03c0\u2217and restart the proof. As discussed in the beginning of the proof, the plays cannot be infinite, thus eventually f disagrees with fT in one of the other manners. Case 3: f does not choose an allowed vertex; b1 = b and v1 / \u2208A. Recall that, by definition, the set A of allowed vertices consists of all vertices v\u2032 that satisfy T(v) \u2296b \u2265T(v\u2032). Therefore, Player 2 responses to f by letting Player 1 win by bidding 0. In the resulting configuration, Player 1\u2019s budget is strictly less than T, which coincides with the threshold budget, and, as in the above, Player 2 proceeds with a winning strategy. \u25c0 Finally, we verify that T \u2264ThG. We define a function T \u2032 : V \u2192[k] \u222a{k + 1} as follows. For v \u2208V , when T(v) > 0 we define T(v) = k\u2217\u2296(T(v) \u22960\u2217), and T \u2032(v) = k + 1 otherwise. Lem. 12 shows that T \u2032 satisfies the average property. We proceed as in the previous construction only from Player 2\u2019s perspective. We construct a partial strategy fT \u2032 for Player 2 from T \u2032 just as fT is constructed from T, and construct a turn-based parity game GT \u2032,G. Let Th2 G denote Player 2\u2019s threshold function in G. That is, at a vertex v \u2208V , \f30 Computing Threshold Budgets in Discrete-Bidding Games Player 2 wins when her budget is at least Th2 G(v) and she loses when her budget is at most Th2 G(v) \u22960\u2217. Applying Lemmas 39 and 41 to Player 2, we obtain the following. \u25b6Lemma 42. If Player 2 wins from every vertex in GT \u2032,G, then T \u2032 \u2265Th2 G. 
If T \u2032 \u2261Th2 G, then Player 2 wins from every vertex of GT \u2032,G. Given a frugal-parity discrete-bidding game G = \u27e8V, E, k, p, S, fr\u27e9, a vertex v \u2208V , and \u2113\u2208[k], we guess T : V \u2192[k] \u222a{k + 1} and verify that it satisfies the average property. Note that the size of T is polynomial in G since it consists of |V | numbers each of size O(log k). We construct GT,G and GT \u2032,G, guess memoryless strategies for Player 1 and Player 2, respectively, and verify in polynomial time that they are indeed winning. Finally, we check whether T(v) \u2265\u2113, and answer accordingly. Correctness follows from Lemmas 39, 41, and 42. We thus obtain our main result. \u25b6Theorem 43. The problem of finding threshold budgets in frugal-parity discrete-bidding games is in NP and coNP. 6 Discussion We develop two algorithms to find threshold budgets in discrete-bidding games. Our first algorithm shows, for the first time, that thresholds in parity discrete-bidding games satisfy the average property. Previously, only thresholds in reachability discrete-bidding games were known to have this property. We study, for the first time, the problem of computing threshold budgets in discrete-bidding games in which the budgets are given in binary, and establish membership in NP and coNP for reachability and parity objectives. Previous algorithms for reachability and parity discrete-bidding games have exponential running time in this setting. We develop novel building blocks as part of our algorithms, which can be of independent interest. First, we define and study, for the first frugal objectives, which are reachability objectives accompanied by an enforcement on a player\u2019s budget when reaching the target. Second, our fixed-point algorithm provides a recipe for extending a proof on the structure of thresholds in reachability bidding games to parity bidding games. Third, we develop, for the first time, strategies that can be implemented with linear memory in reachability and parity discrete-bidding games, whereas previous constructions used exponential memory. We point to the intriguing state of affairs in parity discrete-bidding games. Deciding the winner in a turn-based parity game is a long-standing open problem, which is known to be in NP and coNP but not known to be in P. A very simple reduction from turn-based parity games reduce to parity discrete-bidding games was shown in [1]. Moreover, the reduction outputs a bidding game with a total budget of 0; that is, a discrete bidding game with constant sum of budgets. Our results show that parity discrete-bidding games are in NP and coNP even when the sum of budgets is given in binary. One might expect that such games would be at least exponentially harder than bidding games with constant sum of budgets. But all of these classes of games actually lie in NP and coNP." + }, + { + "url": "http://arxiv.org/abs/1911.08360v1", + "title": "All-Pay Bidding Games on Graphs", + "abstract": "In this paper we introduce and study {\\em all-pay bidding games}, a class of\ntwo player, zero-sum games on graphs. The game proceeds as follows. We place a\ntoken on some vertex in the graph and assign budgets to the two players. Each\nturn, each player submits a sealed legal bid (non-negative and below their\nremaining budget), which is deducted from their budget and the highest bidder\nmoves the token onto an adjacent vertex. The game ends once a sink is reached,\nand \\PO pays \\PT the outcome that is associated with the sink. 
The players\nattempt to maximize their expected outcome. Our games model settings where\neffort (of no inherent value) needs to be invested in an ongoing and stateful\nmanner. On the negative side, we show that even in simple games on DAGs,\noptimal strategies may require a distribution over bids with infinite support.\nA central quantity in bidding games is the {\\em ratio} of the players budgets.\nOn the positive side, we show a simple FPTAS for DAGs, that, for each budget\nratio, outputs an approximation for the optimal strategy for that ratio. We\nalso implement it, show that it performs well, and suggests interesting\nproperties of these games. Then, given an outcome $c$, we show an algorithm for\nfinding the necessary and sufficient initial ratio for guaranteeing outcome $c$\nwith probability~$1$ and a strategy ensuring such. Finally, while the general\ncase has not previously been studied, solving the specific game in which \\PO\nwins iff he wins the first two auctions, has been long stated as an open\nquestion, which we solve.", + "authors": "Guy Avni, Rasmus Ibsen-Jensen, Josef Tkadlec", + "published": "2019-11-19", + "updated": "2019-11-19", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI" + ], + "main_content": "Introduction Two-player graph games naturally model settings in which decision making is carried out dynamically. Vertices model the possible con\ufb01gurations and edges model actions. The game proceeds by placing a token on one of the vertices and allowing the players to repeatedly move it. One player models the protagonist for which we are interested in \ufb01nding an optimal decision-making strategy, and the other player, the antagonist, models, in an adversarial manner, the other elements of the systems on which we have no control. We focus on quantitative reachability games (Everett 1955) in which the graph has a collection of sink vertices, which we call the leaves, each of which is associated with a weight. The game is a zero-sum game; it ends once a leaf is reached and the weight of the leaf is Player 1\u2019s reward and Player 2\u2019s cost, thus Player 1 aims at maximizing the weight while Player 2 aims at minimizing it. A special case is qualitative reachability games in which each Player i has a target ti and Player i wins iff the game ends in ti. A graph game is equipped with a mechanism that determines how the token is moved; e.g., in turn-based games the players alternate turns in moving the token. Bidding is a mode of moving in which in each turn, we hold an auction to determine which player moves the token. Bidding qualitative-reachability games where studied in (Lazarus et al. 1996; Lazarus et al. 1999) largely with variants of \ufb01rstprice auctions: in each turn both players simultaneously submit bids, where a bid is legal if it does not exceed the available budget, the higher bidder moves the token, and pays his bid to the lower bidder in Richman bidding (named after David Richman), and to the bank in poorman bidding. The central quantity in these games is the ratio between the players\u2019 budgets. Each vertex is shown to have a threshold ratio, which is a necessary and suf\ufb01cient initial ratio that guarantees winning the game. Moreover, optimal strategies are deterministic. We study, for the \ufb01rst time, quantitative-reachability allpay bidding games, which are similar to the bidding rules above except that both players pay their bid to the bank. 
Formally, for i \u2208{1, 2}, suppose that Player i\u2019s budget is Bi \u2208Q>0 and that his bid is bi \u2208[0, Bi], then the higher bidder moves the token and Player i\u2019s budget is updated to Bi \u2212bi. Note that in variants of \ufb01rst-price auctions, assuming the winner bids b, the loser\u2019s budget is the same for any bid in [0, b). In an all-pay auction, however, the higher the losing bid, the lower the loser\u2019s available budget is in the next round. Thus, intuitively, the loser would prefer to bid as close as possible to 0. Example 1.1. Consider the qualitative reachability game that is depicted in Fig. 1, which we call win twice in a row or WnR(2), for short. For convenience, \ufb01x Player 2\u2019s initial budget to be 1. The solution to the game using a \ufb01rst-price auction is trivial: for example, with poorman bidding (in which the winner pays the bank), Player 1 wins iff his budget exceeds 2. Indeed, if his budget is 2 + \u03f5, he bids 1 + \u03f5/2 in the \ufb01rst bidding, moves the token to v1, and, since in \ufb01rstarXiv:1911.08360v1 [cs.GT] 19 Nov 2019 \fv0 v1 t1 t2 B1 = 1 + x 1 x 0 x + \u03f5 B2 = 1 \u27e81,1\u27e9 \u27e8x,1\u27e9 \u27e8x,1 \u2212x\u27e9 1 wins 2 wins 1 wins 2 wins Figure 1: On the left, the bidding game in which Player 1 wins iff he wins two biddings in a row. Assume initial budgets of B1 = 1 + x, for x \u2208(0.5, 1), and B2 = 1. A Player 1 strategy that guarantees value 0.5 uniformly bids {x, 1} in the \ufb01rst bidding. Player 2 has two optimal counter strategies, and the table on the right depicts the budget in the second bidding in all cases. price auctions, the loser does not pay his bid, the budgets in the next round are 1 + \u03f5/2 and 1, respectively. Player 1 again bids 1 + \u03f5/2, wins the bidding, moves the token to t1, and wins the game. On the other hand, if Player 1\u2019s budget is 2 \u2212\u03f5, Player 2 can guarantee that the token reaches t2 by bidding 1 in both rounds. The solution to this game with all-pay bidding is much more complicated and was posed as an open question in (Lazarus et al. 1999), which we completely solve. Assuming Player 2\u2019s budget is 1, it is easy to show that when Player 1\u2019s initial budget is either greater than 2 or smaller than 1, there exist a deterministic winning strategy for one of the players. Also, for budgets in between, i.e, in (1, 2), it is not hard to see that optimal strategies require probabilistic choices, which is the source of the dif\ufb01culty and an immediate difference from \ufb01rst-price bidding rules. We characterize the value of the game, which is the optimal probability that Player 1 can guarantee winning, as a function of Player 1\u2019s initial budget. In Theorem 5.1, we show that for n \u2208I N and x \u2208(1/(n+1), 1/n], the value is 1/(n+1) when Player 1\u2019s initial budget is 1+x and Player 2\u2019s initial budget is 1. Fig. 1 gives a \ufb02avor of the the solution in the simplest interesting case of x \u2208(0.5, 1), where a Player 1 strategy that bids x and 1 each with probability 0.5 guarantees winning with probability 0.5. Apart from the theoretical interest in all-pay bidding games, we argue that they are useful in practice, and that they address limitations of previously studied models. Allpay auctions are one of the most well-studied auction mechanisms (Michael R. Baye and de Vries 1996). 
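The claim of Example 1.1 and Fig. 1 can also be checked numerically. The sketch below sweeps over Player 2's first-round bids against the mixed strategy that bids x or 1 with probability 1/2 each, for x in (0.5, 1); ties are assumed to be resolved in Player 1's favour, an assumption made here only to keep the check simple, and the grid step is arbitrary.

def p1_win_probability(x, b2, eps=1e-9):
    # One play of WnR(2): Player 1 must win both biddings; the budgets are 1 + x and 1.
    prob = 0.0
    for b1 in (x, 1.0):                     # Player 1 bids x or 1, each with probability 1/2
        if b1 + eps < b2:                   # Player 2 wins the first bidding: Player 1 loses
            continue
        rem1, rem2 = (1 + x) - b1, 1 - b2   # all-pay: both players pay their bids
        if rem1 + eps >= rem2:              # Player 1 can outbid Player 2's whole remaining budget
            prob += 0.5
    return prob

x = 0.7
worst = min(p1_win_probability(x, b2 / 1000) for b2 in range(0, 1001))
print(worst)                                # 0.5, for every x in (0.5, 1)

Against any fixed first-round bid of Player 2, the mixed strategy wins with probability at least 1/2, matching the value 1/2 stated above for this range of budgets.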
Even though they are described in economic terms, they are often used to model settings in which the agents\u2019 bids represent the amount of effort they invest in a task such as in rent seeking (Tullock 1980), patent races, e.g., (Baye and Hoppe 2003), or biological auctions (Chatterjee, Reiter, and Nowak 2012). As argued in (Konrad and Kovenock 2009), however, many decision-making settings, including the examples above, are not one-shot in nature, rather they develop dynamically. Dynamic all-pay auctions have been used to analyze, for example, political campaigning in the USA (Klumppa and K.Polborn 2006), patent races (Harris and Vickers 1985), and (Klumppa and K.Polborn 2006) argue that they appropriately model sport competitions between two teams such as \u201cbest of 7\u201d in the NBA playoffs. An inherent difference between all-pay repeated auctions and all-pay bidding games is that our model assumes that the players\u2019 effort has no or negligible inherent value and that it is bounded. The payoff is obtained only from the reward in the leaves. For example, a \u201cbest of k\u201d sport competition between two sport teams can be modelled as an all-pay bidding game as follows. A team\u2019s budget models the sum of players\u2019 strengths. A larger budget represents a fresher team with a deeper bench. Each bidding represents a match between the teams, and the team that bids higher wins the match. The teams only care about winning the tournament and the players\u2019 strengths have no value after the tournament is over. The closest model in spirit is called Colonel Blotto games, which dates back to (Borel 1921), and has been extensively studied since. In these games, two colonels own armies and compete in n battle\ufb01elds. Colonel Blotto is a one-shot game: on the day of the battle, the two colonels need to decide how to distribute their armies between the n battle\ufb01elds, where each battle\ufb01eld entails a reward to its winner, and in each battle\ufb01eld, the outnumbering army wins. To the best of our knowledge, all-pay bidding games are the \ufb01rst to incorporate a modelling of bounded resource with no value for the players, as in Colonel Blotto games, with a dynamic behavior, as in ongoing auctions. Graph games have been extensively used to model and reason about systems (Clarke et al. 2018) and multi-agent systems (Wooldridge et al. 2016). Bidding games naturally model systems in which the scheduler accepts payment in exchange for priority. Blockchain technology is one such system, where the miners accept payment and have freedom to decide the order of blocks they process based on the proposed transaction fees. Transaction fees are not refundable, thus all-pay bidding is the most appropriate modelling. Manipulating transaction fees is possible and can have dramatic consequences: such a manipulation was used to win a popular game on Ethereum called FOMO3d1. There is thus ample motivation for reasoning and verifying blockchain technology (Chatterjee, Goharshady, and Velner 2018). We show that all-pay bidding games exhibit an involved and interesting mathematical structure. As discussed in Example 1.1, while we show a complete characterization of the value function for the game WnR(2), it is signi\ufb01cantly harder than the characterization with \ufb01rst-price bidding. The situation becomes worse when we slightly complicate the game and require that Player 1 wins three times in a row, called WnR(3), for short. 
We show that there are initial budgets for which an optimal strategy in WnR(3) requires infinite support. We turn to describe our positive results on general games. First, we study surely winning in all-pay bidding games, i.e., winning with probability 1. We show a threshold behavior that is similar to first-price bidding games: each vertex v in a qualitative all-pay bidding game has a surely-winning threshold ratio, denoted STHR(v), such that if Player 1's ratio exceeds STHR(v), he can guarantee winning the game with probability 1, and if Player 1's ratio is less than STHR(v), Player 2 can guarantee winning with positive probability. Moreover, we show that surely-winning threshold ratios have the following structure: for every vertex v, we have STHR(v) = STHR(v−) + (1 − STHR(v−)/STHR(v+)), where v− and v+ are the neighbors of v that, respectively, minimize and maximize STHR. This result has computation-complexity implications; namely, we show that in general, the decision-problem counterpart of finding surely-winning threshold ratios is in PSPACE using the existential theory of the reals (ETR, for short) (Canny 1988), and it is in linear time for DAGs. We show that surely-winning threshold ratios can be irrational, thus we conjecture that the decision problem is sum-of-square-roots hard.

Example 1.2. Tic-tac-toe is a canonical graph game that is played on a DAG. First-price bidding Tic-tac-toe was discussed in a blog post (https://bit.ly/2KUong4), where threshold budgets are shown to be surprisingly low: with Richman bidding, Player 1 can guarantee winning when his ratio exceeds 133/123 ≈ 1.0813 (Develin and Payne 2010), and with poorman bidding, when it exceeds roughly 1.0184. We implement our algorithm and find surely-winning threshold ratios in all-pay bidding tic-tac-toe: Player 1 surely wins when his initial ratio is greater than 51/31 ≈ 1.65 (see Fig. 2). We point to several interesting phenomena. First, one explanation for the significant gap between the thresholds in all-pay and first-price bidding is that, unlike Richman and poorman bidding, in the range (31/51, 51/31) neither player surely wins. Second, the threshold ratio for the relaxed Player 1 goal of surely not-losing equals the surely-winning threshold ratio. This is not always the case; e.g., from the configuration with a single O in the middle left position, Player 1 requires a budget of 31/16 for surely winning and 25/13 for surely not-losing. Third, we find it surprising that when Player 2 wins the first bidding, it is preferable to set an O in one of the corners (as in configuration c2) rather than in the center.

Figure 2: Surely-winning thresholds in tic-tac-toe. For example, the surely-winning threshold budget in c4 is 3 since in order to win the game, Player 1 must win three biddings in a row.

Finally, we devise an FPTAS for the problem of finding values in DAGs; namely, given a bidding game G that is played on a DAG, an initial vertex, and an ϵ > 0, we find, in time polynomial in the size of G and 1/ϵ, an upper- and lower-bound on the expected payoff that Player 1 can guarantee with every initial ratio of the form ϵ · k. The idea is to discretize the budgets and bids and, using a bottom-up approach, repeatedly solve finite-action zero-sum games.
The algorithm gives theoretical bounds on the approximation. It is a simple algorithm that we have implemented and experimented with. Our experiments show that the difference between the upper- and lower-bounds is small, thus we conclude that the algorithm supplies a good approximation to the value function. The experiments verify our theoretical findings. In addition, they hint at interesting behavior of all-pay bidding games, which we do not yet understand and believe encourages a further study of this model.

Related work. Colonel Blotto games (Borel 1921) have been extensively studied; a handful of papers include (Bellman 1969; Blackett 1954; Hart 2008; Shubik and Weber 1981). They have mostly been studied in the discrete case, i.e., the armies are given as individual soldiers, but, closer to our model, also in the continuous case (see (Roberson 2006) and references therein). The most well-studied objective is maximizing the expected payoff, though recently (Behnezhad et al. 2019) study the objective of maximizing the probability of winning at least k battlefields, which is closer to our model. All-pay bidding games were mentioned briefly in (Lazarus et al. 1999), where it was observed that optimal strategies require probabilistic choices. To the best of our knowledge, all-pay bidding games (Menz, Wang, and Xie 2015) were studied only with discrete bidding, which significantly simplifies the model, and in the Richman setting; namely, both players pay their bids to the other player. First-price bidding games have been well studied and exhibit interesting mathematical properties. Threshold ratios in reachability Richman-bidding games, and only with these bidding rules, equal values in random-turn-based games (Peres et al. 2009), which are stochastic games in which, in each round, the player who moves is chosen according to a coin toss. This probabilistic connection was extended and generalized to infinite-duration bidding games with Richman- (Avni, Henzinger, and Chonev 2019), poorman- (Avni, Henzinger, and Ibsen-Jensen 2018), and even taxman-bidding (Avni, Henzinger, and Žikelić 2019), which generalizes both bidding rules. Other orthogonal extensions of the basic model include non-zero-sum bidding games (Meir, Kalai, and Tennenholtz 2018) and discrete-bidding games that restrict the granularity of the bids (Develin and Payne 2010; Aghajohari, Avni, and Henzinger 2019). There are a number of shallow similarities between all-pay games and concurrent stochastic games (Shapley 1953). A key difference between the models is that in all-pay bidding games, the (upper and lower) value depends on the initial ratio, whereas a stochastic game has one value. We list examples of similarities. In both models the players pick actions simultaneously in each turn, and strategies require randomness and infinite memory. However, in Everett recursive games (Everett 1955), which are closest to our model, only finite memory is required, and in more general stochastic games the infinite-memory requirement comes from remembering the history, e.g., (Mertens and Neyman 1981), whereas in all-pay bidding games infinite support is already required in the game "win 3 times in a row" (which has histories of length at most 3). Also, computing the value in stochastic games is in PSPACE using ETR (Etessami et al. 2008), it is in P for DAGs, and there are better results for solving the guaranteed-winning case (Chatterjee and Ibsen-Jensen 2015).
2 Preliminaries

A reachability all-pay bidding game is ⟨V, E, L, w⟩, where V is a finite set of vertices, E ⊆ V × V is a set of directed edges, L ⊆ V is a set of leaves with no outgoing edges, and w : L → Q assigns unique weights to leaves, i.e., for l, l′ ∈ L, we have w(l) ≠ w(l′). We require that every vertex in V \ L has a path to at least two different leaves. A special case is qualitative games, where there are exactly two leaves with weights in {0, 1}. We say that a game is played on a DAG when there are no cycles in the graph. For v ∈ V, we denote the neighbors of v by N(v) = {u ∈ V : ⟨v, u⟩ ∈ E}.

A strategy is a recipe for how to play a game. It is a function that, given a finite history of the game, prescribes to a player which action to take, where we define these two notions below. A history in a bidding game is π = ⟨v1, b^1_1, b^2_1⟩, ..., ⟨vk, b^1_k, b^2_k⟩, v_{k+1} ∈ (V × ℝ × ℝ)* · V, where for 1 ≤ j ≤ k + 1, the token is placed on vertex vj at round j and Player i's bid is b^i_j, for i ∈ {1, 2}. Let B^I_i be the initial budget of Player i. Player i's budget following π is Bi(π) = B^I_i − Σ_{1≤j≤k} b^i_j. A play π that ends in a leaf l ∈ L is associated with the payoff w(l). Consider a history π that ends in v ∈ V \ L. The set of legal actions following π, denoted A(π), consists of pairs ⟨b, u⟩, where b ≤ Bi(π) is a bid that does not exceed the available budget and u ∈ N(v) is a vertex to move to upon winning. A mixed strategy is a function that takes π and assigns a probability distribution over A(π). The support of a strategy f is supp(f, π) = {⟨b, u⟩ : f(π)(⟨b, u⟩) > 0}. We assume WLog. that each bid is associated with one vertex to proceed to upon winning, thus if ⟨b, v⟩, ⟨b, v′⟩ ∈ supp(f, π), then v = v′. We say that f is pure if, intuitively, it does not make probabilistic choices, thus for every history π, we have |supp(f, π)| = 1.

Definition 2.1 (Budget Ratio). Let B1, B2 ∈ ℝ be initial budgets for the two players. Player 1's ratio is B1/B2 (we find this definition more convenient than B1/(B1 + B2), which is used in previous papers on bidding games).

Let G be an all-pay bidding game. An initial vertex v0, an initial ratio r ∈ ℝ, and two strategies f1 and f2 for the two players give rise to a probability distribution D(v0, r, f1, f2) over plays, which is defined inductively as follows. The probability of the play of length 0 is 1. Let π = π′ · ⟨v, b1, b2⟩, u, where π′ ends in a vertex v. Then, we define Pr[π] = Pr[π′] · f1(π′)(b1) · f2(π′)(b2). Moreover, for i ∈ {1, 2}, assuming Player i chooses the successor vertex vi, i.e., ⟨bi, vi⟩ ∈ supp(fi, π′), then u = v1 when b1 ≥ b2 and otherwise u = v2. That is, we resolve ties by giving Player 1 the advantage. This choice is arbitrary and does not affect most of our results, and the only effect is discussed in Remark 5.1.

Definition 2.2 (Game Values). The lower value in an all-pay game G w.r.t. an initial vertex v0 and an initial ratio r is val↓(G, v0, r) = sup_f inf_g ∫_{π ∈ D(v0,r,f,g)} Pr[π] · pay(π).
The upper value, denoted val↑(G, v0, r), is defined dually. It is always the case that val↓(G, v0, r) ≤ val↑(G, v0, r), and when val↑(G, v0, r) = val↓(G, v0, r), we say that the value exists and we denote it by val(G, v0, r).

3 Surely-Winning Thresholds

In this section we study the existence and computation of a necessary and sufficient initial budget for Player 1 that guarantees surely winning, namely winning with probability 1. We focus on qualitative games in which each player has a target leaf. Note that the corresponding question in quantitative games is the existence of a budget that suffices to surely guarantee a payoff of some c ∈ ℝ, which reduces to the surely-winning question on qualitative games by setting Player 1's target to be the leaves with weight at least c. We define surely-winning threshold budgets formally as follows.

Definition 3.1 (Surely-Winning Thresholds). Consider a qualitative game G and a vertex v in G. The surely-winning threshold at v, denoted STHR(v), is a budget ratio such that:
• If Player 1's ratio exceeds STHR(v), he has a strategy that guarantees winning with probability 1.
• If Player 1's ratio is less than STHR(v), Player 2 has a strategy that guarantees winning with positive probability.

To show existence of surely-winning threshold ratios, we define threshold functions, show their existence, and show that they coincide with surely-winning threshold ratios.

Definition 3.2 (Threshold functions). Consider a qualitative game G = ⟨V, E, t1, t2⟩. Let T : V → ℝ≥0 be a function. For v ∈ V \ {t1, t2}, let v−, v+ ∈ N(v) be such that T(v−) ≤ T(u) ≤ T(v+), for every u ∈ N(v). We call T a threshold function if T(v) = 0 for v = t1, T(v) = ∞ for v = t2, and T(v) = T(v−) + 1 − T(v−)/T(v+) otherwise (where we define 1/∞ = 0 and c < ∞, for every c ∈ ℝ).

We start by making observations on surely-winning threshold ratios and threshold functions.

Lemma 3.1. Consider a qualitative game G = ⟨V, E, t1, t2⟩, and let T be a threshold function.
• For v ∈ V, if Player 1's initial ratio exceeds STHR(v), then he has a pure strategy that guarantees winning.
• For v ∈ V \ {t1, t2}, we have STHR(v) ≥ 1.
• We have T(v−) ≤ T(v) ≤ T(v+), and the inequalities are strict when T(v−) ≠ T(v+).

Proof. For the first claim, suppose Player 1 has a strategy f that guarantees winning against any Player 2 strategy. Suppose f is mixed. Then, we construct a pure strategy f′ by arbitrarily choosing, in each round, a bid b ∈ supp(f). If Player 2 has a strategy g that guarantees winning with positive probability against f′, then, since every choice of f′ is made with positive probability by f, by playing g against f he wins with positive probability, contradicting the assumption that f guarantees surely winning. For the second claim, suppose towards contradiction that STHR(v) = 1 − ϵ, for ϵ > 0. We think of Player 1 as revealing his pure winning strategy before Player 2. Assuming Player 1 bids b in a round, Player 2 reacts by bidding b + ϵ/|V|. Thus, Player 2 wins |V| biddings in a row and draws the game to t2. A simple calculation verifies the last claim.

We first show that threshold functions coincide with surely-winning threshold ratios, and then show their existence.

Lemma 3.2.
Consider a qualitative game G = ⟨V, E, t1, t2⟩ and let T be a threshold function for G. Then, for every vertex v ∈ V, we have T(v) ≤ STHR(v); namely, Player 1 surely wins from v when his budget exceeds T(v).

Proof. We claim that if Player 1's ratio is T(v) + ϵ, for ϵ > 0, he can surely win the game. For t1 and t2, the claim is trivial and vacuous, respectively. We provide a strategy f1 for Player 1 that guarantees that in at most n = |V| steps, either Player 1 has won or he is at some node u ∈ V with relative budget T(u) + ϵ + dϵ, where dϵ > 0 is a small fixed positive number. By repeatedly applying this strategy, either Player 1 at some point wins directly, or he accumulates relative budget n and then he can force a win in n steps by simply bidding 1 in each round.

Suppose the token is placed on vertex v ∈ V \ {t1, t2} following 0 ≤ i ≤ n biddings, and let v−, v+ ∈ N(v) be neighbors of v that achieve the minimal and maximal value of T, respectively. For convenience, set x = T(v−), y = T(v+), and δ = ϵ². Player 1 bids 1 − x/y + δ^{k+1−i}. We first disregard the supplementary δ^{k+1−i}. When Player 1 wins, we consider the worst case of Player 2 bidding 0, thus Player 1's normalized budget is greater than (T(v) − (1 − x/y)) / (1 − (1 − x/y)) = T(v−). On the other hand, when Player 1 loses, Player 2 bids at least as much as Player 1, and Player 1's normalized budget is greater than (T(v) − (1 − x/y)) / (1 − 2(1 − x/y)) = T(v+). Upon winning a bidding, Player 1 moves to a neighbor u ∈ N(v) with T(u) = T(v−), and when losing, the worst case is when Player 2 moves to a vertex u with T(u) = T(v+). The claim above shows that in both cases Player 1's budget exceeds T(u). Since T(t2) = ∞, we have established that the strategy guarantees not losing.

We define Player 1's moves precisely, and show how Player 1 uses the surplus δ to guarantee winning. We define Player 1's moves upon winning a bidding at v ∈ V \ {t1, t2}. He moves to u ∈ N(v) such that T(u) = T(v−), where if there are several vertices that achieve the minimal value, he chooses a vertex that is closest to t1. By Lemma 3.1, for every vertex v, we have T(v−) ≤ T(v) ≤ T(v+), and equality occurs only when T(v−) = T(v+). Thus, Player 1's move guarantees that T(u) ≤ T(v), and if u is farther from t1 than v, then the inequality is strict. The sum of decreases in T and in distance is at most |V|, thus t1 is reached after winning n biddings.

We show how to use the surplus δ to guarantee winning. First note that T(v) ≥ 1 − x/y and the extra terms δ^{k+1−i} add up to at most Σ_{i=1}^{k} δ^i < Σ_{i=1}^{∞} δ^i = δ/(1 − δ) = ϵ²/(1 − ϵ²) < ϵ, so the bids are legal (we assume, WLog., that ϵ ≤ 0.1). Finally, if Player 2 wins the i-th bidding, Player 1's ratio is at least T(u) + (ϵ − δ^{k+1−i})/(1 − δ) whereas Player 2's ratio is at most x/y − δ^{k+1−i}. Player 1's new ratio is then at least (x + (ϵ − δ^{k+1−i})/(1 − δ)) / (x/y − δ^{k+1−i}), and it is straightforward to check that this is at least as much as the desired T(u) + ϵ + dϵ, where dϵ = ϵ^{2n}(1 + ϵ − 1/(1 − ϵ²)).
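For concreteness, the bidding rule used in this proof can be written as a small helper. The following sketch is purely illustrative (it is ours and not part of the paper's algorithms); it assumes a threshold function T and successor lists succ as inputs, and it omits the tie-break that prefers minimizing neighbors closest to t1.

# Illustrative only: Player 1's bid and move from the proof of Lemma 3.2.
# At vertex v he bids 1 - T(v-)/T(v+) plus the surplus delta**(k+1-i) and,
# upon winning, moves to a neighbor minimizing T.
from math import inf

def lemma_3_2_bid(T, succ, v, delta, k, i):
    x = min(T[u] for u in succ[v])             # T(v-)
    y = max(T[u] for u in succ[v])             # T(v+)
    frac = 0.0 if y == inf else x / y          # convention 1/inf = 0
    bid = 1.0 - frac + delta ** (k + 1 - i)    # base bid plus a tiny surplus
    target = min(succ[v], key=lambda u: T[u])  # move toward smaller T on a win
    return bid, target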
To obtain the converse, we show a strategy for Player 2 that wins with positive probability. We identify an upper bound β in each vertex such that a Player 1 bid that exceeds β exhausts too much of his budget. Then, Player 2 bids 0 with probability 0.5 and the rest of the probability mass is distributed uniformly in [0, β]. This definition intuitively allows us to reverse the quantification and assume Player 1 reveals his strategy before Player 2; that is, when Player 1 bids at least β, we consider the case that Player 2 bids 0, and when Player 1 bids less than β, we consider the case where Player 2 slightly overbids Player 1. Both occur with positive probability.

Lemma 3.3. Consider a qualitative game G = ⟨V, E, t1, t2⟩ and let T be a threshold function for G. Then, for every vertex v ∈ V, we have T(v) ≥ STHR(v); namely, Player 2 wins with positive probability from v when Player 1's budget is less than T(v).

Proof. We distinguish between two types of vertices in V \ {t1, t2}. The first type is V1 = {v : T(v−) < T(v) < T(v+)} and the second type is V2 = {v : T(v−) = T(v) = T(v+)}. Assume Player 2's budget is at least 1 + ϵ and Player 1's budget is at most T(v) − ϵ. We use β to denote an upper bound on Player 2's bids. For v ∈ V1, we set β = 1 − T(v−)/T(v+), and for v ∈ V2, we set β = p · ϵ · (2n)^{−d}, where p is a small constant, n = |V|, and d is the length of the shortest path from v to a vertex of the first type. Player 2 bids 0 with probability 1/2 and the rest of the probability mass is distributed uniformly in [0, 1]. Upon winning, in the first case, Player 2 moves to a vertex v+ that is closest to t2, and in the second case, to a vertex that is closest to a vertex in V1. Intuitively, a bid above β is too high for Player 1 since it exhausts too much of his budget. Thus, when Player 1 bids at least β, we consider the case where Player 2 bids 0, and when Player 1 bids below β, we consider the case where Player 2 bids slightly above him; both occur with positive probability. Note that in both cases, we maintain the invariant that Player 2's budget is at least 1 + ϵ and Player 1's budget is at most T(v) − ϵ, thus Player 1 does not win the game. We show that Player 2 wins with positive probability. Consider a finite play π that is obtained from some choice of strategy of Player 1. Note that a cycle in π necessarily results from a Player 1 win (against a Player 2 bid of 0), and such a win occurs when he bids at least β. In both cases, Player 1's budget decreases by a constant. The tricky case is when the cycle is contained in V2. Then, the sum of Player 2's bids is at most Σ_{i=k}^{d} p · ϵ · (2n)^{−i} ≤ (p · ϵ · (2n)^{−d})/n, whereas Player 1 bids at least p · ϵ · (2n)^{−d}. Thus, Player 1's decrease in budget is larger than Player 2's by a factor of n. Thus, Player 2's budget will eventually be larger than Player 1's, at which point Player 2 wins in at most n steps since he always bids above Player 1's budget with positive probability.

Finally, we show existence of threshold functions. We first show existence in DAGs, which is a simple backwards-induction argument that we have implemented (see Example 1.2).
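The following is a minimal sketch of that backwards-induction computation (an illustration only, not the implementation behind our experiments); the graph encoding and the helper name threshold_function are ad hoc, and we assume every internal vertex has a neighbor with a positive threshold so that the recurrence never divides by zero.

# Backwards induction for threshold functions on a DAG (Definition 3.2):
# T(t1) = 0, T(t2) = inf, and T(v) = T(v-) + 1 - T(v-)/T(v+), with 1/inf = 0.
from math import inf

def threshold_function(succ, t1, t2):
    """succ maps every internal vertex to its list of neighbors."""
    T = {t1: 0.0, t2: inf}

    def solve(v):
        if v not in T:
            vals = [solve(u) for u in succ[v]]
            t_min, t_max = min(vals), max(vals)            # T(v-) and T(v+)
            frac = 0.0 if t_max == inf else t_min / t_max  # 1/inf = 0
            T[v] = t_min + 1.0 - frac
        return T[v]

    for v in succ:
        solve(v)
    return T

# Win twice in a row (Fig. 1): v0 -> {v1, t2} and v1 -> {t1, t2}.
# T(v1) = 0 + 1 - 0 = 1 and T(v0) = 1 + 1 - 0 = 2, matching Example 1.1.
print(threshold_function({"v0": ["v1", "t2"], "v1": ["t1", "t2"]}, "t1", "t2"))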
To obtain existence in a general game G, we find threshold functions in games on DAGs of the form Gn, for n ∈ ℕ, in which Player 1 is restricted to win in at most n steps. We let n tend to infinity and show that the limit of the threshold functions of these games is a threshold function in G.

Lemma 3.4. Every qualitative reachability bidding game has a threshold function.

Proof. Note that in a game that is played on a DAG, the existence of a threshold function is shown easily in a backwards-induction manner from the leaves. Consider a game G = ⟨V, E, t1, t2⟩. For n ∈ ℕ, let Gn be the reachability game in which Player 1 wins iff the game reaches t1 in at most n rounds. Since Gn is a DAG, by the above, a threshold function Tn exists in Gn. By Lemma 3.2, we have Tn(v) ≥ Tn+1(v), and by Lemma 3.1, we have Tn(v) ≥ 1, thus the sequence {Tn(v)}n≥1 converges. We define T(v) = lim_{n→∞} Tn(v) and claim that T is a threshold function. We show that T(v) converges to T(v−) + 1 − T(v−)/T(v+). Note that since Gn is a DAG, we have Tn(v) = Tn−1(v−) + 1 − Tn−1(v−)/Tn−1(v+). Moreover, for D = |V|, note that TD(v) ≤ D, since a budget of D allows Player 1 to win D times in a row and draw the game to t1. Since {Tn(v)}n≥1 is a monotonically decreasing sequence, for every ϵ > 0, there is N ∈ ℕ such that for every n > N, we have T(v) ≤ Tn(v) ≤ T(v) + ϵ. Given ϵ, we choose n such that
T(v) ≤ Tn(v) ≤ T(v−) + ϵ + 1 − T(v−)/(T(v+) + ϵ)
≤ T(v−) + ϵ + 1 − T(v−)/(T(v+)(1 + ϵ))
≤ T(v−) + ϵ + 1 − (T(v−)/T(v+))(1 − ϵ)
≤ T(v−) + 1 − T(v−)/T(v+) + ϵ · (D + 1).

Figure 3: A qualitative-reachability game with target Pi for Player i in which surely-winning threshold ratios are irrational.

We use the characterization of surely-winning threshold ratios by means of threshold functions (Lemmas 3.2 and 3.3) to find the threshold ratios in the game that is depicted in Fig. 3, and show that they can be irrational. In addition, the characterization has computation-complexity consequences for the problem of finding surely-winning threshold ratios. In DAGs, finding threshold functions is done in linear time. In general graphs, the characterization implies a reduction to the existential theory of the reals (ETR, for short) by phrasing the constraints in Definition 3.2 as an input to ETR. It is known that ETR is in PSPACE (Canny 1988). Combining the lemmas above, we have the following.

Theorem 3.5. Surely-winning threshold ratios exist in every qualitative reachability game and coincide with threshold functions. Surely-winning threshold ratios can be irrational. Given a vertex v and a value c ∈ Q, deciding whether STHR(v) ≥ c is in PSPACE for general graphs and can be solved in linear time for games on DAGs.

4 Finding Approximated Values

In this section, we focus on games on DAGs. The algorithm that we construct is based on a discretization of the budgets; namely, we restrict the granularity of the bids and require Player 1 to bid multiples of some ϵ > 0, similar in spirit to discrete-bidding games (Develin and Payne 2010). We first relate the approximated value with the value in G with no restriction on the bids.
Then, in Section 6, we experiment with an implementation of the algorithm and show interesting behavior of all-pay bidding games. We define the approximate value as follows.

Definition 4.1 (Approximate Value Function). Let G be a game on a DAG and ϵ > 0. Let val_ϵ be the approximate-value function in G when Player 1 is restricted to choose bids in {ϵ · k : k ∈ ℕ} and Player 2 wins ties.

Our algorithm is based on the following theorem.

Theorem 4.1. Consider a game on a DAG G where each leaf is labeled with a reward. Let v be a vertex in G, B1 ∈ ℝ be an initial budget for Player 1, d(G) be the longest path from a vertex to a leaf in G, and ϵ > 0. Then, we have val(v, B1) ≤ val_ϵ(v, B1 + d(G) · ϵ).

Theorem 4.1 gives rise to Algorithm 4, which finds val_ϵ(B1) for every B1 that is a multiple of ϵ. Note that assuming Player 1 bids only multiples of ϵ, we can assume that Player 2 also bids multiples of ϵ.

Approx-Values(v, ϵ):
  if N(v) = ∅ then return weight(v)
  for all u ∈ N(v): call Approx-Values(u, ϵ)
  for B1 = k · ϵ s.t. 0 ≤ B1 ≤ STHR(v) do
    // Construct a two-player finite-action matrix game.
    A1 = {b1 = i · ϵ s.t. 0 ≤ b1 ≤ B1}
    A2 = {b2 = j · ϵ s.t. 0 ≤ b2 ≤ 1}
    for b1 ∈ A1, b2 ∈ A2 do
      // Player 1's normalized new budget.
      B′1 = ⌈(B1 − b1)/(1 − b2)⌉_ϵ
      if b1 > b2 then pay(b1, b2) = max_{u ∈ N(v)} val_ϵ(u, B′1)
      else pay(b1, b2) = min_{u ∈ N(v)} val_ϵ(u, B′1)
    val_ϵ(v, B1) = SOLVE(A1, A2, pay)
  return val_ϵ

Figure 4: An FPTAS for finding upper- and lower-bounds on the values for every initial budget ratio.

The algorithm constructs a two-player zero-sum matrix game, which is ⟨A1, A2, pay⟩, where, for i ∈ {1, 2}, Ai is a finite set of actions for Player i, and, given ai ∈ Ai, the function pay(a1, a2) is the payoff of the game. A solution to the game is the optimal payoff that Player 1 can guarantee with a mixed strategy, and it is found using linear programming. Let STHR(v) denote the threshold budget with which Player 1 can guarantee the highest reward; then val_ϵ(B1) equals the highest reward for B1 > STHR(v). To compute STHR(v), we use the linear-time algorithm in the previous section.
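The SOLVE step is the standard linear program for finite zero-sum matrix games. As an illustration only (this is not our implementation; it assumes numpy and scipy are available, and the helper name solve_matrix_game is ours), the value and an optimal mixed strategy for the row player can be computed as follows.

# Illustrative sketch of SOLVE: value of a finite zero-sum matrix game via an LP.
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(payoff):
    """payoff[i][j] is Player 1's payoff when he plays action i and Player 2
    plays action j; returns the game value and an optimal row strategy."""
    P = np.asarray(payoff, dtype=float)
    m, n = P.shape
    # Variables: x_1, ..., x_m (Player 1's mixed strategy) and v (the value);
    # maximize v, i.e., minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-P.T, np.ones((n, 1))])  # column j: v <= sum_i P[i][j] * x_i
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)  # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x_i >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1], res.x[:-1]

# Matching pennies has value 0.5 with the uniform strategy (0.5, 0.5).
print(solve_matrix_game([[1.0, 0.0], [0.0, 1.0]]))

In the algorithm of Fig. 4, the matrix handed to such a routine is pay restricted to A1 × A2, and the returned value is stored as val_ϵ(v, B1).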
5 "Win n in a Row" Games

In this section we study a simple fragment of qualitative games.

Definition 5.1 (Win n in a Row Games). For n ∈ ℕ, let WnR(n) denote the qualitative game in which Player 1 needs to win n biddings in a row and otherwise Player 2 wins.

For example, see a depiction of WnR(2) in Fig. 1. We start with a positive result and completely solve WnR(2). Then, we show that optimal strategies require infinite support already in WnR(3).

A solution to "win twice in a row". We start by solving an open question that was posed in (Lazarus et al. 1999) and characterize the value as a function of the budget ratio in the win twice in a row game WnR(2) (see a depiction of the game in Fig. 1).

Theorem 5.1. Consider the all-pay bidding game WnR(2) in which Player 1 needs to win twice in a row and Player 2 needs to win once. The value exists for every pair of initial budgets. Moreover, suppose the initial budgets are B1 for Player 1 and 1 for Player 2. Then, if B1 > 2, the value is 1, if B1 < 1, the value is 0, and if B1 ∈ (1 + 1/(n+1), 1 + 1/n], for n ∈ ℕ, then the value is 1/(n+1).

Proof. The cases when B1 > 2 and B1 < 1 are easy. Let 2 ≤ n ∈ ℕ be such that B1 = 1 + 1/n + ϵ, where ϵ is such that B1 < 1 + 1/(n − 1). We claim that the value of WnR(2) with initial budgets B1 and B2 = 1 is 1/n. Consider the Player 1 strategy that chooses a bid in {k/n : 1 ≤ k ≤ n} uniformly at random. We claim that no matter how Player 2 bids, one of the choices wins, thus the strategy guarantees winning with probability at least 1/n. Let b2 be a Player 2 bid and let k ∈ ℕ be such that b2 ∈ [k/n, (k + 1)/n]. Consider the case where Player 1 bids b1 = (k + 1)/n and wins the first bidding. Player 1's normalized budget in the second bidding is B′1 = (B1 − b1)/(1 − b2) ≥ (1 + 1/n + ϵ − (k + 1)/n)/(1 − k/n) > 1, thus Player 1 wins the second bidding as well. Next, we show that Player 1's strategy is optimal by showing a Player 2 strategy that guarantees winning with probability at least (n − 1)/n. Let ϵ′ > ϵ be such that (n − 1) · ϵ′ < 1/n, which exists since B1 < 1 + 1/(n − 1). Player 2 chooses a bid uniformly at random in {k · (1/n + ϵ′) : 0 ≤ k ≤ n − 1}. Suppose Player 1 bids b1, and we claim that the bid wins against at most one choice of Player 2. Let b2 be a Player 2 bid. When b2 > b1, Player 2 wins immediately. A simple calculation reveals that when b1 − b2 > 1/n + ϵ, then Player 1's normalized budget in the second bidding is less than 1, thus he loses. It is not hard to see that there are n − 1 choices for Player 2 that guarantee winning.

Remark 5.1. Note that the tie-breaking mechanism affects the winner at the end-points of the intervals. For example, had we let Player 2 win ties, the intervals would have changed to [1 + 1/(n+1), 1 + 1/n).

Infinite support is required in WnR(3). We continue to show a negative result already in WnR(3): there is an initial Player 1 budget for which his optimal strategy requires infinite support, which is in contrast to the optimal strategies we develop for WnR(2), which all have finite support.

A sweeping algorithm. In order to develop intuition for the proof, we present a sketch of an algorithm that decides, given n ∈ ℕ and B1, v ∈ Q>0, whether Player 1 can guarantee winning with probability v in WnR(n) when his initial budget is B1 and Player 2's budget is 1. We assume access to a function valn−1 that, given a budget B1 for Player 1, returns val↓(B1) in WnR(n − 1). For example, Theorem 5.1 shows that val2(1 + 1/m + ϵ) = 1/m, for m ∈ ℕ. The algorithm constructs a sequence of strategies f1, f2, ... for Player 1 for the first bidding, each with finite support. We define f1(1) = v and f1(0) = 1 − v. That is, according to f1, in the first bidding, Player 1 bids 1 with probability v and 0 with probability 1 − v. Note that out(f1, 1) = v. For i ≥ 1, suppose fi is defined, that its support is 1 = b1 > ... > bi > 0, and that out(fi, b) ≥ v, for any Player 2 bid b ≥ bi. Intuitively, as Player 2 lowers his bid, his remaining budget for the subsequent biddings increases and the winning probability for Player 1 decreases. We "sweep down" from bi and find the maximal bid bi+1 of Player 2 such that out(fi, bi+1) < v. We define fi+1 by, intuitively, shifting some of the probability mass that fi assigns to 0 to bi+1, thus supp(fi+1) = supp(fi) ∪ {bi+1} and fi(bj) = fi+1(bj), for 1 ≤ j ≤ i. We terminate in two cases.
If there is no positive b such that out(fi, b) < v, then fi guarantees a value of v, and we return it. On the other hand, if there is no fi(b) ∈ [0, 1] that guarantees out(fi, b) = v, then val(B1) < v. For example, in the game WnR(2) with B1 = 4/3 and v = 1/3, we define f1(1) = 1/3. We find b2 by solving (B1 − b1)/(1 − b2) = 1 to obtain b2 = 2/3, and define f2(b2) = 1/3. Similarly, we find b3 by solving (B1 − 2/3)/(1 − b3) = 1, and terminate.

Infinite support is required. The following lemma shows a condition for the sub-optimality of a strategy. Intuitively, consider a Player 1 strategy f that is obtained as in the sweeping algorithm, except that in the (i + 1)-th iteration, instead of adding x < bi to the support, it adds bi+1 < x. The value that f guarantees needs to be at least v for any Player 2 bid, and specifically for bi and x. Note that it is clearly beneficial for Player 1 to bid x rather than bi against the bid x of Player 2. Since x is not in the support of f, the probability f(bi) is unnecessarily high to "compensate". We construct f′ by shifting some of this probability to x; we are left with a "surplus", which we re-distribute to the rest of the support of f, thus f′ guarantees a better value than f. Formally, we have the following.

Lemma 5.2. Consider a Player 1 strategy f in WnR(n), for some n ∈ ℕ, that has finite support b1 > ... > bm in the first bidding. If there are 1 ≤ i < m, 1 ≤ k ≤ i, and bi > x > bi+1 with out(bk, bi) > out(bk, x) and out(x, x) = out(bi, bi), then f is not optimal.

Proof. Suppose that the strategy f guarantees a value of v and let x ∈ ℝ be as in the statement. The value is at least v against any Player 2 strategy, and in particular against the bids bi and x, thus out(f, x) ≥ v. We claim that out(f, x) < out(f, bi), thus out(f, bi) > v. Recall that the definition of the outcome is out(f, bi) = Σ_{1≤j≤i} f(bj) · out(bj, bi), and similarly for out(f, x). Fixing a Player 1 bid, the function out is monotonically non-decreasing in Player 2's bid, since the lower Player 2 bids, the more budget he has left for subsequent biddings. Thus, for 1 ≤ j ≤ i, we have out(bj, bi) ≥ out(bj, x), and the assumptions of the lemma imply that at least one of the inequalities is strict. Thus, the claim follows and we have out(f, bi) > v.

Next, we construct a "partial strategy" f′ by adding x to the support. Formally, we define f′(bj) = f(bj), for 1 ≤ j < i, and let f′(bi) be such that Σ_{1≤j≤i} f′(bj) · out(bj, bi) = v. Let bi+1 = x, and let f′(bi+1) be such that Σ_{1≤j≤i+1} f′(bj) · out(bj, bi) = v. We claim that, intuitively, we are left with a "surplus", and formally, we have f′(bi) + f′(bi+1) < f(bi). As in the above, we have out(f, bi) = Σ_{1≤j≤i} f(bj) · out(bj, bi) > v. Suppose we chose the maximal bi > x > bi+1 for which the statement holds, thus for every j ≠ k, we have out(bj, x) = out(bj, bi), and out(bk, x) = out(bk, bi) − c, for some c > 0. We subtract the equality Σ_{1≤j≤i+1} f′(bj) · out(bj, bi) = v from out(f, bi) > v and plug in the equalities to obtain f′(bi) · out(bi, bi) + f′(bi+1) · out(x, x) < f(bi) · out(bi, bi) − c · f(bk). Since we assume out(x, x) = out(bi, bi), the claim follows.
Let Δ = f(bi) − (f′(bi) + f′(bi+1)) denote our "surplus", which, by the above, is positive. We define a new Player 1 strategy f″ with support supp(f) ∪ {x}. The probability of bj, for j ≠ i, is f(bj) + Δ/|supp(f″)|. The probability of bi is f′(bi) + Δ/|supp(f″)|, and the probability of x is f′(bi+1) + Δ/|supp(f″)|. It is not hard to show that f″ guarantees a value that is greater than v, thus f is not optimal.

Next, we use Lemma 5.2 to show that any strategy with finite support is not optimal.

Theorem 5.3. Consider the game WnR(3) in which Player 1 needs to win three biddings and Player 2 needs to win once. Suppose the initial budgets are 1.25 for Player 1 and 1 for Player 2. Then, an optimal strategy for Player 1 requires infinite support in the first bidding.

Proof. Suppose towards contradiction that f is an optimal strategy with finite support in the first bidding. Consider the infinite sequence xn = 7/8 − 1/8 · Σ_{1≤j≤n−1} 2^{−j}, for n ≥ 1. It is not hard to verify that (5/4 − xn)/(1 − xn+1) = 2, for every n ≥ 1. That is, when Player 1 bids xn and Player 2 bids xn+1, by Theorem 5.1, the value is 0.5. Let supp(f) = {b1, ..., bm} be the support of f in the first bidding, where b1 > b2 > ... > bm. Since the support is finite, there are 1 ≤ i ≤ m and k ≥ 1 such that bi ≥ xk > xk+1 > bi+1. We claim that xk+1 satisfies the assumptions of Lemma 5.2, thus f is not optimal. First, it is not hard to verify that for 0.75 < y < 1, we have (5/4 − y)/(1 − y) > 2. Since the sequence {xn}n≥1 tends to 0.75, we have out(bi, bi) = out(xk+1, xk+1) = 1. Second, when Player 2's bid is fixed, out is monotonically decreasing with Player 1's bid, thus out(bi, xk+1) ≤ out(xk, xk+1) = 0.5. Thus, out(bi, bi) > out(bi, xk+1), and we are done.

6 Experiments

We have implemented Algorithm 4 and experiment by running it on qualitative games that are called a race in (Harris and Vickers 1985).

Definition 6.1 (Race Games). For n, m ∈ ℕ, let G(n, m) be the qualitative game that consists of at most n + m − 1 biddings, in which Player 1 wins if he wins n biddings and Player 2 wins if he wins m biddings.

Specifically, we have WnR(n) = G(n, 1). The algorithm is implemented in Python and it is run on a personal computer. In our experiments, we choose ϵ = 0.01. In terms of scalability, the running time for solving the game G(5, 5) is at most 10 minutes. Solutions to smaller games are in the order of seconds to several minutes. Figure 5 depicts some values for five games as output by the implementation. We make several observations. First, close plots represent the upper- and lower-bounds of a game.
Figure 5: Upper- and lower-bounds on the values of the games G(1, 3), G(1, 2), G(2, 2), G(2, 1), and G(3, 1) (game value as a function of the budget ratio, log-scale).

We find it surprising that the difference between the two approximations is very small, and conclude that the output
[Figure 6: Upper- and lower-bounds on the values of the games G(3, 2), G(3, 3), and G(2, 3). x-axis: budget ratio (log-scale); y-axis: game value.]

of the algorithm is a good approximation of the real values of the games.
Second, the plot of G(2, 1) = WnR(2) (depicted in red) is an experimental affirmation of Theorem 5.1; namely, the step-wise behavior and the values that are observed in theory are clearly visible in the plot. Third, while a step-wise behavior can be seen for high initial budgets in G(3, 1) (depicted in purple), for lower initial budgets the behavior appears more involved and possibly continuous. In Theorem 5.3, we show that optimal strategies for initial budgets in this region require infinite support, and the plot affirms the more involved behavior of the value function that is predicted by the theorem. Continuity is more evident in the more elaborate games whose values are depicted in Figure 6. Both plots give rise to several interesting open questions, which we elaborate on in the next section.

7 Discussion

We study, for the first time, all-pay bidding games on graphs. Unlike bidding games that use variants of first-price auctions, all-pay bidding games appropriately model decision-making settings in which bounded effort with no inherent
[Figure 7: Upper- and lower-bounds on the values of the games G(5, i) and G(i, 5), for 1 ≤ i ≤ 5; x-axis: budget ratio (log-scale), y-axis: game value.] value needs to be invested dynamically. While our negative results show that all-pay bidding games are significantly harder than first-price bidding games, our results are mostly positive. We are able to regain the threshold-ratio phenomena from first-price bidding games by considering surely-winning threshold ratios, and our implementation for games on DAGs solves non-trivial games such as tic-tac-toe. We show a simple FPTAS that finds upper- and lower-bounds on the values for every initial budget ratio, which we have implemented and shown to perform very well. We leave several open questions. The basic question on games, which we leave open, is showing the existence of a value with respect to every initial ratio in every game. We were able to show existence in WnR(2), and Fig. 6 hints that the value exists in more complicated games. Also, while we identify the value function in WnR(2), we leave open the problem of a better understanding of this function in general. For example, while for WnR(2) we show that it is a stepwise function, observing Fig. 6 it seems safe to guess that the function can be continuous. Finally, characterizing the function completely, similar to our solution of WnR(2), in more involved games, is an interesting and challenging open problem. 8 Acknowledgments This research was supported by the Austrian Science Fund (FWF) under grants S11402-N23 (RiSE/SHiNE), Z211-N23 (Wittgenstein Award), and M 2369-N33 (Meitner fellowship)."
+ }, + { + "url": "http://arxiv.org/abs/1905.03835v1", + "title": "Bidding Mechanisms in Graph Games", + "abstract": "In two-player games on graphs, the players move a token through a graph to\nproduce an infinite path, which determines the winner or payoff of the game. We\nstudy {\\em bidding games} in which the players bid for the right to move the\ntoken. Several bidding rules were studied previously. In {\\em Richman} bidding,\nin each round, the players simultaneously submit bids, and the higher bidder\nmoves the token and pays the other player. {\\em Poorman} bidding is similar\nexcept that the winner of the bidding pays the \"bank\" rather than the other\nplayer. {\\em Taxman} bidding spans the spectrum between Richman and poorman\nbidding. They are parameterized by a constant $\\tau \\in [0,1]$: portion $\\tau$\nof the winning bid is paid to the other player, and portion $1-\\tau$ to the\nbank. We present, for the first time, results on {\\em infinite-duration} taxman\ngames. Our most interesting results concern quantitative taxman games, namely\n{\\em mean-payoff} games, where poorman and Richman bidding differ. A central\nquantity in these games is the {\\em ratio} between the two players' initial\nbudgets. While in poorman mean-payoff games, the optimal payoff a player can\nguarantee depends on the initial ratio, in Richman bidding, the payoff depends\nonly on the structure of the game. In both games the optimal payoffs can be\nfound using (different) probabilistic connections with {\\em random-turn based}\ngames in which in each turn, a coin is tossed to determine which player moves.\nThe payoff with Richman bidding equals the payoff of a random-turn based game\nwith an un-biased coin, and with poorman bidding, the coin is biased according\nto the initial budget ratio. We give a complete classification of mean-payoff\ntaxman games using a probabilistic connection. Our results show that Richman\nbidding is the exception; namely, for every $\\tau <1$, the value of the game\ndepends on the initial ratio.", + "authors": "Guy Avni, Thomas A. Henzinger, \u0110or\u0111e \u017dikeli\u0107", + "published": "2019-05-09", + "updated": "2019-05-09", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.LO" + ], + "main_content": "Introduction Two-player in\ufb01nite-duration games on graphs are a central class of games in formal veri\ufb01cation [2], where they are used, for example, to solve synthesis [18], and they have deep connections to foundations of logic [20]. A graph game proceeds by placing a token on a vertex in the graph, which the players move throughout the graph to produce an in\ufb01nite path (\u201cplay\u201d) \u03c0. The game is zero-sum and \u03c0 determines the winner or payo\ufb00. Graph games can be classi\ufb01ed according to the players\u2019 objectives. For example, the simplest objective is reachability, where Player 1 wins i\ufb00an in\ufb01nite path visits a designated target vertex. Another classi\ufb01cation of graph games is the mode of moving the token. The most studied mode of moving is turn based, where the players alternate turns in moving the token. In bidding games, in each turn, an \u201cauction\u201d is held between the two players in order to determine which player moves the token. The bidding mode of moving was introduced in [13, 14] for reachability games, where the following bidding rules where de\ufb01ned. 
In Richman bidding (named after David Richman), each player has a budget, and before each turn, the players submit bids simultaneously, where a bid is legal if it does not exceed the available budget. The player who bids higher wins the bidding, pays the bid to the other player, and moves the token. A second bidding rule, called poorman bidding in [13], is similar except that the winner of the bidding pays the “bank” rather than the other player. Thus, the bid is deducted from his budget and the money is lost. A third bidding rule on which we focus in this paper, called taxman in [13], spans the spectrum between poorman and Richman bidding. Taxman bidding is parameterized by τ ∈ [0, 1]: the winner of a bidding pays portion τ of his bid to the other player and portion 1 − τ to the bank. Taxman bidding with τ = 1 coincides with Richman bidding and taxman bidding with τ = 0 coincides with poorman bidding. Bidding games are relevant for several communities in Computer Science. In formal methods, graph games are used to reason about systems. Poorman bidding games naturally model concurrent systems where processes pay the scheduler for moving. Blockchain technology like Ethereum is an example of such a system, which is challenging to formally verify [9, 3]. In Algorithmic Game Theory, auction design is a central research topic that is motivated by the abundance of auctions for online advertisements [16]. Infinite-duration bidding games can model ongoing auctions and can be used to devise bidding strategies for objectives like: “In the long run, an advertiser's ad should show at least half of the time”. In Artificial Intelligence, bidding games with Richman bidding have been used to reason about combinatorial negotiations [15]. Finally, discrete-bidding games [11], in which the granularity of the bids is restricted by assuming that the budgets are given using coins, have been studied mostly for recreational games, like bidding chess [6]. Both Richman and poorman infinite-duration games have a surprising, elegant, though different, mathematical structure as we elaborate below. Our study of taxman bidding aims at a better understanding of this structure and at shedding light on the differences between the seemingly similar bidding rules. A central quantity in bidding games is the initial ratio of the players' budgets. Formally, assuming that, for i ∈ {1, 2}, Player i's initial budget is Bi, we say that Player 1's initial ratio is B1/(B1 + B2). The central question that was studied in [13] regards the existence of a necessary and sufficient initial ratio to guarantee winning the game. Formally, the threshold ratio in a vertex v, denoted Th(v), is such that if Player 1's initial ratio exceeds Th(v), he can guarantee winning the game, and if his initial ratio is less than Th(v), Player 2 can guarantee winning the game1. Existence of threshold ratios in reachability games for all three bidding mechanisms was shown in [13]. Richman reachability games have an interesting probabilistic connection [14]. To state the connection, we first need to introduce random-turn based games. Let p ∈ [0, 1].
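As a brief aside before defining the random-turn based game parameterized by this p, the following minimal sketch (our illustration in Python, not taken from the paper) shows how a single bidding round updates the budgets under taxman bidding with parameter tau; tau = 1 recovers Richman bidding and tau = 0 recovers poorman bidding.

def settle_bid(winner_budget, loser_budget, bid, tau):
    # One bidding round under taxman bidding with parameter tau in [0, 1]:
    # tau = 1 is Richman (winner pays the loser), tau = 0 is poorman (winner pays the bank).
    assert 0.0 <= bid <= winner_budget and 0.0 <= tau <= 1.0
    winner_budget -= bid               # the winner always pays the full bid
    loser_budget += tau * bid          # portion tau goes to the opponent
    # the remaining (1 - tau) * bid goes to the bank and leaves the game
    return winner_budget, loser_budget

# Both players start with budget 1 and the winner bids 0.4:
print(settle_bid(1.0, 1.0, 0.4, tau=1.0))   # Richman: (0.6, 1.4), total budget preserved
print(settle_bid(1.0, 1.0, 0.4, tau=0.0))   # poorman: (0.6, 1.0), total budget shrinks
print(settle_bid(1.0, 1.0, 0.4, tau=0.5))   # taxman:  (0.6, 1.2)

We now return to the random-turn based construction.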
In a random-turn based game that is parameterized by p, in each turn, rather than bidding, the player who moves is chosen by throwing a (possibly) biased coin: with probability p, Player 1 chooses how to move the token, and with probability 1 \u2212p, Player 2 chooses. Formally, a random-turn based game is a special case of a stochastic game [10]. Consider a Richman game G. We construct a \u201cuniform\u201d random-turn based game on top of G, denoted RTB0.5(G), in which we throw an unbiased coin in each turn. The objective of Player 1 remains reaching his target vertex. It is well known that each vertex in RTB0.5(G) has a value, which is, informally, the probability of reaching the target when both players play optimally, and which we denote by val(RTB0.5(G), v). We are ready to state the probabilistic connection: For every vertex v in the Richman game G, the threshold ratio in v equals 1 \u2212val(RTB(G), v). We note that such a connection is not known and is unlikely to exist in reachability games with neither poorman nor taxman bidding. Random-turn based games have been extensively studied in their own right, mostly with unbiased coin tosses, since the seminal paper [17]. In\ufb01nite-duration bidding games have been recently studied with Richman [4] and poorman [5] bidding. For qualitative objectives, namely games in which one player wins and the other player loses, both bidding rules have similar properties. By reducing general qualitative games to reachability games, it is shown that threshold ratios exist for both types of bidding rules. We show a similar result for qualitative games with taxman bidding. Things get interesting in mean-payo\ufb00games, which are quantitative games: an in\ufb01nite play has a payo\ufb00, which is Player 1\u2019s reward and Player 2\u2019s cost (see an example of a mean-payo\ufb00game in Figure 1). We thus call the players in a mean-payo\ufb00game Max and Min, respectively. We focus on games that are played on strongly-connected graphs. With Richman bidding [4], the initial budget of the players does not matter: A mean-payo\ufb00game G has a value c \u2208I R that depends only on the structure of the game such that Min can guarantee a cost of at most c with any positive budget, and with any positive budget, Max can guarantee a payo\ufb00of at least c \u2212\u03f5, for every \u03f5 > 0. Moreover, the value c of G equals the value of a random-turn based game RTB0.5(G) that is constructed on top of G. Since G is a mean-payo\ufb00game, RTB0.5(G) is a mean-payo\ufb00stochastic game, and its value, which again, is a well-known concept, is the expected payo\ufb00when both players play optimally. Poorman mean-payo\ufb00games have di\ufb00erent properties. Unlike with Richman bidding, the value of the game depends on the initial ratio. That is, with a higher initial ratio, Max can guarantee a better payo\ufb00. More surprisingly, poorman mean-payo\ufb00games have a probabilistic connection, which is in fact richer than for Richman bidding. This is surprising since poorman reachability games do not have a probabilistic connection and reachability games tend to be simpler than mean-payo\ufb00games. The connection for poorman games is the following: Suppose Max\u2019s initial ratio is r \u2208[0, 1] in a game G. Then, the value in G with respect to r is the value of the random-turn based game RTBr(G) in which in each turn, we toss a biased coin that chooses Max with probability r and Min with probability 1 \u2212r. 
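As a toy illustration of the probabilistic connection just stated (our own Python sketch; the small line graph is a made-up example, not from the paper), the value of RTB0.5(G) for a reachability objective can be computed by standard value iteration, and one minus this value gives the Richman threshold ratio.

def rtb_value(neighbors, target, sink, r, iters=2000):
    # Reachability value of RTB_r(G): with probability r Player 1 (who wants to reach
    # `target`) chooses the successor, with probability 1 - r Player 2 chooses.
    val = {v: 0.0 for v in neighbors}
    val[target] = 1.0
    for _ in range(iters):
        for v in neighbors:
            if v not in (target, sink):
                succ = [val[u] for u in neighbors[v]]
                val[v] = r * max(succ) + (1.0 - r) * min(succ)
    return val

# Line graph 0 - 1 - 2 - 3; Player 1 wants to reach vertex 3, vertex 0 is a losing sink.
neighbors = {0: [0], 1: [0, 2], 2: [1, 3], 3: [3]}
v = rtb_value(neighbors, target=3, sink=0, r=0.5)
print(v[1], v[2])            # ~1/3 and ~2/3 (unbiased random walk on the line)
print(1 - v[1], 1 - v[2])    # Richman threshold ratios: ~2/3 at vertex 1, ~1/3 at vertex 2

These thresholds also satisfy the averaging rule for Richman bidding stated in Theorem 4 below: Th(1) = (Th(0) + Th(2))/2 = (1 + 1/3)/2 = 2/3 and Th(2) = (Th(1) + Th(3))/2 = (2/3 + 0)/2 = 1/3.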
Given this di\ufb00erence between the two bidding rules, one may wonder how do mean-payo\ufb00 taxman games behave, since these bidding rules span the spectrum between Richman and 1 When the initial ratio is exactly Th(v), the winner depends on the mechanism with which ties are broken. Our results do not depend on a speci\ufb01c tie-breaking mechanism.Tie-breaking mechanisms are particularly important in discrete-bidding games [1]. CVIT 2016 \f23:4 Bidding Mechanisms in Graph Games 2 \u22121 \u22121 \u22122 v1 v2 v3 v4 Figure 1 On the left, a mean-payo\ufb00game G. On the right, the mean-payo\ufb00value of G, where the initial ratio is \ufb01xed to 0.75 and the taxman parameter \u03c4 varies. The value of G with Richman bidding is \u22120.5, with poorman bidding, it is 1, and, for example, with \u03c4 = 0.2, it is 0.533. poorman bidding. Our main contribution is a complete solution to this question: we identify a probabilistic connection for a taxman game G that depends on the parameter \u03c4 of the bidding and the initial ratio r. That is, we show that the value of the game equals the value of the random-turn based game RTBF (\u03c4,r)(G), where F(\u03c4, r) = r+\u03c4\u00b7(1\u2212r) 1+\u03c4 . The construction gives rise to optimal strategies w.r.t. \u03c4 and the initial ratio. As a sanity check, note that for \u03c4 = 1, we have F(\u03c4, r) = 0.5, which agrees with the result on Richman bidding, and for \u03c4 = 0, we have F(\u03c4, r) = r, which agrees with the result on poorman bidding. In Figure 1, we depict some mean-payo\ufb00values for a \ufb01xed initial ratio and varying taxman parameter. Previous results only give the two endpoints in the plot, and the mid points in the plot are obtained using the results in this paper. The main technical challenge is constructing an optimal strategy for Max. The construction involves two components. First, we assign an \u201cimportance\u201d to each vertex v, which we call strength and denote St(v). Intuitively, if St(v) > St(u), then it is more important for Max to move in v than in u. Second, when the game reaches a vertex v, Max\u2019s bid is a careful normalization of St(v) so that changes in Max\u2019s ratio are matched with the accumulated weights in the game. Finding the right normalization is intricate and it consists of the main technical contribution of this paper. Previous such normalizations were constructed for Richman and poorman mean-payo\ufb00games [4, 5]. The construction for Richman bidding is much more complicated than the one we present here. The construction for poorman bidding is ad-hoc and does not generalize. Our construction for taxman bidding thus uni\ufb01es these constructions and simpli\ufb01es them. It uses techniques that can generalize beyond taxman bidding. Finally, we study, for the \ufb01rst time, complexity problems for taxman games. Due to lack of space, some proofs appear in the appendix. 2 Preliminaries A graph game is played on a directed graph G = \u27e8V, E\u27e9, where V is a \ufb01nite set of vertices and E \u2286V \u00d7 V is a set of edges. The neighbors of a vertex v \u2208V , denoted N(v), is the set of vertices {u \u2208V : \u27e8v, u\u27e9\u2208E}. A path in G is a \ufb01nite or in\ufb01nite sequence of vertices v1, v2, . . . such that for every i \u22651, we have \u27e8vi, vi+1\u27e9\u2208E. Bidding games Each Player i has a budget Bi \u2208I R\u22650. In each turn a bidding determines which player moves the token. 
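As a quick numerical aside (our own Python sketch, not part of the paper), the bias function F(τ, r) = (r + τ(1 − r))/(1 + τ) identified above can be checked at its endpoints and evaluated at the Figure 1 setting r = 0.75; note that these numbers are the coin biases of the corresponding random-turn based games, not the plotted mean-payoff values themselves, which additionally depend on the particular game G.

def coin_bias(tau, r):
    # Bias F(tau, r) of the random-turn based game RTB_{F(tau, r)}(G).
    return (r + tau * (1.0 - r)) / (1.0 + tau)

assert abs(coin_bias(1.0, 0.3) - 0.5) < 1e-12   # tau = 1 (Richman): bias 0.5, independent of r
assert abs(coin_bias(0.0, 0.3) - 0.3) < 1e-12   # tau = 0 (poorman): bias equals the initial ratio r
for tau in (0.0, 0.2, 0.5, 1.0):
    print(tau, coin_bias(tau, 0.75))            # 0.75, 0.666..., 0.583..., 0.5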
Both players simultaneously submit bids, where a bid bi for Player i is legal if bi ≤ Bi. The player who bids higher wins the bidding, where we assume some mechanism to break ties, e.g., always giving Player 1 the advantage, and our results are not affected by the specific tie-breaking mechanism in use. The winner moves the token and pays his bid, where we consider three bidding mechanisms that differ in where the winning bid is paid. Suppose Player 1 wins a bidding with his bid of b. In Richman bidding, the winner pays to the loser, thus the new budgets are B1 − b and B2 + b. In poorman bidding, the winner pays to the bank, thus the new budgets are B1 − b and B2. In taxman bidding with parameter τ ∈ [0, 1], the winner pays portion τ to the other player and (1 − τ) to the bank, thus the new budgets are B1 − b and B2 + τ · b. A central quantity in bidding games is the ratio of a player's budget from the total budget. ▶ Definition 1. (Ratio) Suppose the budget of Player i is Bi, for i ∈ {1, 2}, at some point in the game. Then, Player i's ratio is Bi/(B1 + B2). The initial ratio refers to the ratio of the initial budgets, namely the budgets before the game begins. We restrict attention to games in which both players start with positive initial budgets, thus the initial ratio is in (0, 1). Strategies and plays A strategy is a recipe for how to play a game. It is a function that, given a finite history of the game, prescribes to a player which action to take, where we define these two notions below. For example, in turn-based games, a strategy takes as input the sequence of vertices that were visited so far, and it outputs the next vertex to move to. In bidding games, histories and strategies are more involved as they maintain the information about the bids and winners of the bids. Formally, a history in a bidding game is π = ⟨v1, b1, i1⟩, . . . , ⟨vk, bk, ik⟩, vk+1 ∈ (V × ℝ × {1, 2})∗ · V, where for 1 ≤ j ≤ k + 1, the token is placed on vertex vj at round j, and for 1 ≤ j ≤ k, the winning bid is bj and the winner is Player ij. Consider a finite history π. For i ∈ {1, 2}, let Wi(π) ⊆ {1, . . . , k} denote the indices in which Player i is the winner of the bidding in π. Let B^I_i be the initial budget of Player i. Player i's budget following π, denoted Bi(π), depends on the bidding mechanism. For example, in Richman bidding, B1(π) = B^I_1 − Σ_{j∈W1(π)} bj + Σ_{j∈W2(π)} bj, B2 is defined dually, and the definition is similar for taxman and poorman bidding. Given a history π that ends in a vertex v, a strategy for Player i prescribes an action ⟨b, u⟩, where b ≤ Bi(π) is a bid that does not exceed the available budget and u is a vertex to move to upon winning, where we require that u is a neighbor of v. An initial vertex, initial budgets, and two strategies for the players determine a unique infinite play π for the game. The vertices that π visits form an infinite path path(π). Objectives An objective O is a set of infinite paths. Player 1 wins an infinite play π iff path(π) ∈ O. We call a strategy f winning for Player 1 w.r.t. an objective O if for every strategy g of Player 2 the play that f and g determine is winning for Player 1.
Winning strategies for Player 2 are de\ufb01ned dually. We consider the following qualitative objectives: 1. In reachability games, Player 1 has a target vertex t and an in\ufb01nite play is winning i\ufb00it visits t. 2. In parity games, each vertex is labeled with an index in {1, . . . , d}. An in\ufb01nite path is winning for Player 1 i\ufb00the parity of maximal index visited in\ufb01nitely often is odd. 3. Mean-payo\ufb00games are played on weighted directed graphs, with weights given by a function w : V \u2192Q. Consider an in\ufb01nite path \u03b7 = v1, v2, \u00b7 \u00b7 \u00b7 \u2208V \u03c9. For n \u2208I N, the pre\ufb01x of length n of \u03b7 is \u03b7n, and we de\ufb01ne its energy to be E(\u03b7n) = Pn i=1 w(vi). The payo\ufb00of \u03b7 is MP(\u03b7) = lim infn\u2192\u221eE(\u03b7n)/n. Player 1 wins \u03b7 i\ufb00MP(\u03b7) \u22650. Mean-payo\ufb00games are quantitative games. We think of the payo\ufb00as Player 1\u2019s reward and Player 2\u2019s cost, thus in mean-payo\ufb00games, we refer to Player 1 as Max and to Player 2 as Min. CVIT 2016 \f23:6 Bidding Mechanisms in Graph Games Threshold ratios The \ufb01rst question that arrises in the context of bidding games asks what is the necessary and su\ufb03cient initial ratio to guarantee an objective. \u25b6De\ufb01nition 2. (Threshold ratios) Consider a bidding game G, a vertex v, an initial ratio r, and an objective O for Player 1. The threshold ratio in v, denoted Th(v), is a ratio in [0, 1] such that if r > Th(v), then Player 1 has a winning strategy that guarantees that O is satis\ufb01ed, and if r < Th(v), then Player 2 has a winning strategy that violates O. Random turn-based games A stochastic game [10] is a graph game in which the vertices are partitioned between two players and a nature player. As in turn-based games, whenever the game reaches a vertex that is controlled by Player i, for i = 1, 2, he choses how the game proceeds, and whenever the game reaches a vertex v that is controlled by nature, the next vertex is chosen according to a probability distribution that depends only on v. Consider a bidding game G that is played on a graph \u27e8V, E\u27e9. The random-turn based game with ratio r \u2208[0, 1] that is associated with G is a stochastic game that intuitively simulates the following process. In each turn we throw a biased coin that turns heads with probability r and tails with probability 1 \u2212r. If the coin turns heads, then Player 1 moves the token, and otherwise Player 2 moves the token. Formally, we de\ufb01ne RTBr(G) = \u27e8V1, V2, VN, E, Pr\u27e9, where each vertex in V is split into three vertices, each controlled by a di\ufb00erent player, thus for \u03b1 \u2208{1, 2, N}, we have V\u03b1 = {v\u03b1 : v \u2208V }, nature vertices simulate the fact that Player 1 chooses the next move with probability r, thus Pr[vN, v1] = r = 1 \u2212Pr[vN, v2], and reaching a vertex that is controlled by one of the two players means that he chooses the next move, thus E = {\u27e8v\u03b1, uN\u27e9: \u27e8v, u\u27e9\u2208E and \u03b1 \u2208{1, 2}}. When G is a mean-payo\ufb00game, the vertices are weighted and we de\ufb01ne the weights of v1, v2, and vN to be equal to the weight of v. The following de\ufb01nitions are standard, and we refer the reader to [19] for more details. A strategy in a stochastic game is similar to a turn-based game; namely, given the history of vertices visited so far, the strategy chooses the next vertex. 
Fixing two such strategies f and g for both players gives rise to a distribution D(f, g) on in\ufb01nite paths. Intuitively, Player 1\u2019s goal is to maximize the probability that his objective is met. An optimal strategy for Player 1 guarantees that the objective is met with probability at least c and, intuitively, he cannot do better, thus Player 2 has a strategy that guarantees that the objective is violated with probability at least (1 \u2212c). It is well known that optimal positional strategies exist for the objectives that we consider. \u25b6De\ufb01nition 3. (Values in stochastic games) Consider a bidding game G, let r \u2208 [0, 1], and consider two optimal strategies f and g for the two players in RTBr(G). When G is a qualitative game with objective O, the value of RTBr(G), denoted val(RTBr(G)), is Pr\u03b7\u223cD(f,g) Pr[\u03b7 \u2208O]. When G is a mean-payo\ufb00game, the mean-payo\ufb00value of RTBr(G), denoted MP(RTBr(G)), is E\u03b7\u2208D(f,g)MP(\u03b7). 3 Qualitative Taxman Games We start by describing the results on reachability bidding games. \u25b6Theorem 4. [13] Consider a reachability bidding game G and a vertex v. The threshold ratio exists in v with Richman, poorman, and taxman bidding. Moreover, threshold ratios have the following properties. For the target vertex t of Player 1, we have Th(t) = 0. For every vertex v from which there is no path to t, we have Th(v) = 1. Consider some other vertex v and denote v+, v\u2212\u2208N(v) the vertices for which Th(v\u2212) \u2264Th(u) \u2264Th(v+), for every u \u2208N(v). \fG. Avni, T. Henzinger, and \u00d0. \u017dikeli\u0107 23:7 In Richman bidding, we have Th(v) = 1 2 \u0000Th(v+) + Th(v\u2212) \u0001 . Moreover, Th(v) is a rational number and satis\ufb01es Th(v) = 1 \u2212val(RTB(G), v). In poorman bidding, we have Th(v) = Th(v+)/(1 + Th(v+) \u2212Th(v\u2212)). In taxman bidding with parameter \u03c4, we have Th(v) = \u0000Th(v\u2212)+Th(v+)\u2212\u03c4 \u00b7Th(v\u2212) \u0001 / \u00002\u2212 \u03c4 \u00b7 (1 + Th(v\u2212) \u2212Th(v+)) \u0001 . It is shown in [4] and [5] that parity games with Richman and poorman bidding reduce to reachability games. We show a similar result for taxman games. The crucial step is the following lemma whose proof can be found in Appendix A. \u25b6Lemma 5. Consider a taxman reachability game G that is played on the graph \u27e8V, E\u27e9. Suppose that every vertex in G has a path to the target of Player 1. Then, for any taxman parameter \u03c4 and every v \u2208V , we have Th(v) = 0. That is, Player 1 wins from v with any positive initial budget. Proof. Let n = |V | \u22121 and t \u2208V be Player 1\u2019s target. Suppose the game starts from a vertex v, and let \u03f5 > 0 be the initial budget of Player 1. Since there is a path from v to Player 1\u2019s target, there is a path of length at most n. Thus, if Player 1 wins n consecutive biddings, he wins the game. Intuitively, Player 1 carefully chooses n increasing bids such that if Player 2 wins one of these bids, Player 1\u2019s ratio increases by a constant over his initial budget. By repeatedly playing according to such a strategy, Player 1 guarantees that his ratio increases and will eventually allow him to win n biddings in a row. Formally, if \u03c4 = 0, then G is a Richman game and the proof of the lemma can be found in [4]. Otherwise, pick a su\ufb03ciently large r \u2208I N such that \u03c4 > 2 r\u22121 and r \u22653. Fix 0 < m < \u03f5 rn . 
Player 1 proceeds as follows: after winning i times, for 0 \u2264i, he bids m \u00b7 ri and, upon winning the bidding, he moves towards t along any shortest path. Since m + mr + \u00b7 \u00b7 \u00b7 + mrn\u22121 < mrn < \u03f5, Player 1 has su\ufb03cient budget to win n consecutive biddings. If Player 2 does not win any of the \ufb01rst n biddings, Player 1 wins the game. On the other hand, if Player 2 wins the k-th bidding with 1 \u2264k \u2264n, we show in Appendix A that his ratio increases by a \ufb01xed amount b = mr (1\u2212\u03f5)(r\u22121) > 0. \u25c0 Lemma 5 gives rise to simple reduction from parity taxman games to taxman reachability games. \u25b6Theorem 6. Parity taxman games are linearly reducible to taxman reachability games. Speci\ufb01cally, threshold ratios exist in parity taxman games. Proof. A bottom strongly-connected component (BSCC, for short) in G is a maximal subset of vertices such that every two vertices have a path between them and no edges leave the set. Lemma 5 ensures that when the game is in a BSCC, with any positive initial budget, a player can force the game to reach any other vertex. A strategy that ensures in\ufb01nitely many visits to a vertex t splits a player\u2019s budget into in\ufb01nitely many positive parts and uses the i-th part to force the game to visit t for the i-th time. Thus, a BSCC in which the highest parity index is odd is \u201cwinning\u201d for Player 1 and these in which the highest parity index is odd are \u201closing\u201d for Player 1. We then construct a reachability game by removing the BSCCs of the game and playing a reachability game on the rest of the game, where Player 1\u2019s targets are his winning BSCCs. \u25c0 4 Mean-Payo\ufb00Taxman Games This section consists of our main technical contribution. We start by showing a complete classi\ufb01cation of the value in strongly-connected mean-payo\ufb00taxman games depending on CVIT 2016 \f23:8 Bidding Mechanisms in Graph Games the taxman parameter \u03c4 and the initial ratio. We then extend the solution to general games, where the solution to strongly-connected games constitutes the main ingredient in the solution of the general case. 4.1 Strongly-Connected Mean-Payo\ufb00Taxman Games We start by formally de\ufb01ning the value of a strongly-connected mean-payo\ufb00game. Lemma 5 implies that in a strongly-connected game, a player can draw the game from every vertex to any other vertex with any positive initial budget. Since mean-payo\ufb00objectives are pre\ufb01x independent, it follows that the vertex from which the game starts does not matter. Indeed, if the game starts at a vertex v with Max having initial ratio r + \u03f5, then Max can use \u03f5/2 of his budget to draw the game to a vertex u and continue as if he starts the game with initial ratio r + \u03f5/2. \u25b6De\ufb01nition 7. (Mean-payo\ufb00value) Consider a strongly-connected mean-payo\ufb00game G = \u27e8V, E, w\u27e9and a ratio r \u2208(0, 1) and a taxman parameter \u03c4 \u2208[0, 1]. The mean-payo\ufb00 value of G w.r.t. r and \u03c4, is a value c \u2208I R such that for every \u03f5 > 0 if Min\u2019s initial ratio is greater than (1 \u2212r), then he has a strategy that guarantees that the payo\ufb00is at most c + \u03f5, and if Max\u2019s initial ratio is greater than r, then he has a strategy that guarantees that the payo\ufb00is greater than c \u2212\u03f5. The following theorem, which we prove in the next two sections, summarizes the properties of mean-payo\ufb00taxman games. \u25b6Theorem 8. 
Consider a strongly-connected mean-payo\ufb00taxman game G with taxman parameter \u03c4 \u2208[0, 1] and an initial ratio r \u2208(0, 1). The value of G w.r.t. \u03c4 and r equals the value of the random-turn based game RTBF (\u03c4,r)(G) in which Max is chosen to move with probability F(\u03c4, r) and Min with probability 1 \u2212F(\u03c4, r), where F(\u03c4, r) = r+\u03c4(1\u2212r) 1+\u03c4 . We show that in order to prove Theorem 8, it su\ufb03ces to prove the following intermediate lemma. \u25b6Lemma 9. Consider a strongly-connected mean-payo\ufb00taxman game G, a taxman parameter \u03c4, and an initial ratio r \u2208(0, 1) such that MP(RTBF (\u03c4,r)) = 0 for F(\u03c4, r) = r+\u03c4(1\u2212r) 1+\u03c4 . Then, for every \u03f5 > 0 Max has a strategy that guarantees that no matter how Min plays, the payo\ufb00 is greater than \u2212\u03f5. Proof that Lemma 9 implies Theorem 8. First, we may assume that MP(RTBF (\u03c4,r)) = 0 since we can decrease all weights by MP(RTBF (\u03c4,r)). Recall that the de\ufb01nition of the payo\ufb00of an in\ufb01nite play \u03c0 = v1, v2, . . . is lim infn\u2192\u221e1 n Pn i=1 w(vi). Note that since the de\ufb01nition uses lim inf, it gives Min an advantage. Constructing a strategy for Max is thus more challenging and it implies a strategy for Min as follows. Let G\u2032 be a mean-payo\ufb00game that is obtained from G by multiplying all the weights by \u22121, and associate Min in G with Max in G\u2032 and vice-versa. Observe that MP(RTB1\u2212r+\u03c4(1\u2212r) 1+\u03c4 (G\u2032)) = \u2212MP(RTB r+\u03c4(1\u2212r) 1+\u03c4 (G)) = 0. Thus, using a strategy for Max in G\u2032 that guarantees a payo\ufb00that is greater than \u2212\u03f5 can be used by Min to guarantee a payo\ufb00in G that is smaller than \u03f5. \u25c0 4.2 The importance of moving The \ufb01rst part of the construction of an optimal strategy for Max as in Lemma 9 is to assign, to each vertex v \u2208V , a strength, denoted St(v), where St(v) \u2208Q\u22650. Intuitively, if \fG. Avni, T. Henzinger, and \u00d0. \u017dikeli\u0107 23:9 St(v) > St(u), for u, v \u2208V , it is more important for Max to move in v than it is in u. We follow the construction in [5], which uses the concept of potentials, which is a well-known concept in stochastic game (see [19]) and was originally de\ufb01ned in the context of the strategy iteration algorithm [12]. For completeness, we present the de\ufb01nitions below. Consider a strongly-connected mean-payo\ufb00game G, and let p \u2208[0, 1]. Let f and g be two optimal positional strategies in RTBp(G), for Min and Max, respectively. For a vertex v \u2208V , let v\u2212, v+ \u2208V be such that Max proceeds from v to v+ according to g and Min proceeds from v to v\u2212according to f. It is not hard to see that the mean-payo\ufb00value in all vertices in RTBp(G) is the same and we denote it by MP(RTBp(G)). We denote the potential of v by Potp(v) and the strength of v by Stp(v), and we de\ufb01ne them as follows. Potp(v) = p \u00b7 Potp(v+) + (1 \u2212p) \u00b7 Potp(v\u2212) + w(v) \u2212MP(RTBp(G)) and Stp(v) = p \u00b7 (1 \u2212p) \u00b7 \u0000Potp(v+) \u2212Potp(v\u2212) \u0001 There are optimal strategies for which Potp(v\u2212) \u2264Potp(v\u2032) \u2264Potp(v+), for every v\u2032 \u2208N(v), which can be found, for example, using the strategy iteration algorithm. Note that St(v) \u22650, for every v \u2208V . Consider a \ufb01nite path \u03c0 = v1, . . . , vn in G. 
We intuitively think of \u03c0 as a play, where for every 1 \u2264i < n, the bid of Max in vi is St(vi) and he moves to v+ i upon winning. Thus, if vi+1 = v+ i , we say that Max won in vi, and if vi+1 \u0338= v+ i , we say that Max lost in vi. Let W(\u03c0) and L(\u03c0) respectively be the indices in which Max wins and loses in \u03c0. We call Max wins investments and Max loses gains, where intuitively he invests in increasing the energy and gains a higher ratio of the budget whenever the energy decreases. Let G(\u03c0) and I(\u03c0) be the sum of gains and investments in \u03c0, respectively, thus G(\u03c0) = P i\u2208L(\u03c0) St(vi) and I(\u03c0) = P i\u2208W (\u03c0) St(vi). Recall that the energy of \u03c0 is E(\u03c0) = P 1\u2264i 0, there is a constant P = minv Potp(v) \u2212maxv Potp(v) such that \u03bd\u00b7\u00b5 \u03bd+\u00b5 \u00b7 \u0000E(\u03c0) \u2212P \u2212(n \u22121) \u00b7 MP(RTB \u03bd \u00b5+\u03bd (G)) \u0001 \u2265 \u00b5 \u00b7 I(\u03c0) \u2212\u03bd \u00b7 G(\u03c0). Proof. The proof is by induction on the length of \u03c0. For n = 1, the claim is trivial since both sides of the equation are 0. Suppose the claim is true for all paths of length n \u22121 and we prove it for a path \u03c0 = v1, . . . , vn+1 of length2 n. We consider the case when Max wins in v1 thus v2 = v+ 1 . The case when Min wins in v1 is proved similarly. Let \u03c0\u2032 be the part of path \u03c0 starting in v2. Since Max wins the \ufb01rst bidding, we have G(\u03c0\u2032) = G(\u03c0), I(\u03c0\u2032) = I(\u03c0) + Stp(v). Hence, by induction hypothesis we have E(\u03c0) + G(\u03c0)/(1 \u2212p) \u2212I(\u03c0)/p \u2265E(\u03c0\u2032) + G(\u03c0\u2032)/(1 \u2212p) \u2212I(\u03c0\u2032)/p + w(v1) \u2212Stp(v1)/p \u2265Potp(v+ 1 ) \u2212Potp(vn+1) + (n \u22121) \u00b7 MP(RTBp(G)) + w(v1) \u2212Stp(v1)/p = Potp(v+ 1 ) \u2212Potp(vn+1) + (n \u22121) \u00b7 MP(RTBp(G)) + w(v1) \u2212(1 \u2212p) \u00b7 (Potp(v+ 1 ) \u2212Potp(v\u2212 1 )) = p \u00b7 Potp(v+ 1 ) + (1 \u2212p) \u00b7 Potp(v\u2212 1 ) + w(v1) \u2212Potp(vn+1) + (n \u22121) \u00b7 MP(RTBp(G)) = Potp(v1) \u2212Potp(vn+1) + n \u00b7 MP(RTBp(G)). \u25c0 2 The weight of the last vertex does not participate in the energy calculation, thus the length of a path that traverses n + 1 vertices has length n. CVIT 2016 \f23:10 Bidding Mechanisms in Graph Games 4.3 Normalizing the bids Once we have \ufb01gured out how important each vertex is, the second challenge in the construction of Max\u2019s strategy is to wisely use his budget such that the changes in the ratios between the players\u2019 budgets coincides with the changes in the accumulated energy. Intuitively, Lemma 11 below gives us a recipe to normalize the bids: whenever we reach a vertex v, Max bids r \u00b7 (1 \u2212r) \u00b7 St(v) \u00b7 \u03b2x, where \u03b2x is the normalization factor and x \u2208I R\u22651 ties between changes in energy and changes in Max\u2019s ratio, as elaborated after the lemma. \u25b6Lemma 11. Consider a game G, a \ufb01nite set of non-negative strengths S \u2286I R\u22650, a ratio r \u2208(0, 1), and a taxman parameter \u03c4 \u2208[0, 1]. For every K > \u03c4r2+r(1\u2212r) \u03c4(1\u2212r)2+r(1\u2212r) there exist sequences (rx)x\u22651 and (\u03b2x)x\u22651 with the following properties. 1. Max\u2019s bid does not exceed his budget, thus, for each position x \u2208I R\u22651 and strength s \u2208S, we have \u03b2x \u00b7 s \u00b7 r \u00b7 (r \u22121) < rx. 2. 
Min cannot force the game beyond position 1, thus for every s \u2208S\\{0} and 1 \u2264x < 1+rs, we have \u03b2x \u00b7 s \u00b7 r \u00b7 (r \u22121) > 1 \u2212rx. 3. The ratios tend to r from above, thus for every x \u2208I R\u22651, we have rx \u2265r, and limx\u2192\u221erx = r. 4. No matter who wins a bidding, Max\u2019s ratio can only improve. Thus, in case of winning and in case of losing, we respectively have rx \u2212\u03b2x \u00b7 s \u00b7 r \u00b7 (r \u22121) 1 \u2212(1 \u2212\u03c4) \u00b7 \u03b2x \u00b7 s \u00b7 r \u00b7 (r \u22121) \u2265rx+(1\u2212r)\u00b7K\u00b7s and rx + \u03c4 \u00b7 \u03b2x \u00b7 s \u00b7 r \u00b7 (r \u22121) 1 \u2212(1 \u2212\u03c4) \u00b7 \u03b2x \u00b7 s \u00b7 r \u00b7 (r \u22121) \u2265rx\u2212s\u00b7r We \ufb01rst show how Lemma 11 implies Theorem 8. Proof that Lemma 11 implies Lemma 9. Fix \u03f5 > 0, we construct strategy for Max guaranteeing a payo\ufb00greater than \u2212\u03f5, as wanted. Observe that r r + (1 \u2212r) \u03c4r2+r(1\u2212r) \u03c4(1\u2212r)2+r(1\u2212r) = r(\u03c4(1 \u2212r) + r) \u03c4r(1 \u2212r) + r2 + \u03c4r2 + r(1 \u2212r) = r + \u03c4(1 \u2212r) 1 + \u03c4 = F(\u03c4, r). Thus, since by assumption MP(RTBF (\u03c4,r)(G)) = 0 and MP(RTBp(G)) is a continuous function in p \u2208[0, 1] [8, 21], we can pick K > F(\u03c4, r) such that MP(RTB r r+(1\u2212r)K (G)) > \u2212\u03f5. We now describe Max\u2019s strategy. We think of the change in Max\u2019s ratio as a walk on I R\u22651. Each position x \u2208I R\u22651 is associated with a ratio rx. The walk starts in a position x0 such that Max\u2019s initial ratio is at least rx0. Let \u03bd = r and \u00b5 = K(1 \u2212r). Suppose the token is placed on a vertex v \u2208V . Then, Max\u2019s bid is r \u00b7 (1 \u2212r) \u00b7 \u03b2x \u00b7 St(v) (when ratios of Max and Min are normalized to sum up to 1) and he proceeds to v+ upon winning. If Max wins, the walk proceeds up \u00b5 \u00b7 St(v) steps to x + \u00b5St(v), and if he loses, the walk proceeds down to x \u2212\u03bdSt(v). Suppose Min \ufb01xes some strategy and let \u03c0 = v1, . . . , vn be a \ufb01nite pre\ufb01x of the play that is generated by the two strategies. Suppose the walk following \u03c0 reaches x \u2208I R. Then, using the terminology of the previous section, we have x = x0 \u2212G(\u03c0) \u00b7 \u03bd + I(\u03c0) \u00b7 \u00b5. Lemma 11 shows that the walk always stays above 1, thus x \u22651. Combining with Lemma 10, we get \u03bd+\u00b5 \u03bd\u00b7\u00b5 (1 \u2212x0) + P + (n \u22121) \u00b7 MP(RTB \u03bd \u03bd+\u00b5 (G)) \u2264E(\u03c0). Thus, dividing both sides by n and letting n \u2192\u221e, since x0 and P are constants depending only on K we conclude that this strategy guarantees payo\ufb00at least MP(RTB \u03bd \u03bd+\u00b5 (G)) > \u2212\u03f5. \u25c0 We continue to prove Lemma 11. \fG. Avni, T. Henzinger, and \u00d0. \u017dikeli\u0107 23:11 Proof of Lemma 11. Note that \u03c4r2+r(1\u2212r) \u03c4(1\u2212r)2+r(1\u2212r) is well-de\ufb01ned for r \u2208(0, 1). Fix \u03c4 \u2208[0, 1] and r \u2208(0, 1). Let K > \u03c4r2+r(1\u2212r) \u03c4(1\u2212r)2+r(1\u2212r). Observe that the two inequalities in Point 4 are equivalent to: rx\u2212rs \u2212rx \u2264\u03c4r(1 \u2212r)\u03b2xs + (1 \u2212\u03c4)r(1 \u2212r)\u03b2xsrx\u2212rs, rx \u2212rx+K(1\u2212r)s \u2265r(1 \u2212r)\u03b2xs \u2212(1 \u2212\u03c4)r(1 \u2212r)\u03b2xsrx+K(1\u2212r)s. 
Point 3 combined with monotonicity in the above expressions, implies that we can replace the last term in each of them by r in order to obtain stronger inequalities. Therefore, it su\ufb03ces for (rx)x\u22651 and (\u03b2x)x\u22651 to satisfy rx\u2212rs \u2212rx \u2264\u03c4r(1 \u2212r)\u03b2xs + (1 \u2212\u03c4)r(1 \u2212r)\u03b2xsr, rx \u2212rx+K(1\u2212r)s \u2265r(1 \u2212r)\u03b2xs \u2212(1 \u2212\u03c4)r(1 \u2212r)\u03b2xsr, which is equivalent to rx\u2212rs \u2212rx \u2264r(1 \u2212r)\u03b2xs[\u03c4 + (1 \u2212\u03c4)r], rx \u2212rx+K(1\u2212r)s \u2265r(1 \u2212r)\u03b2xs[1 \u2212(1 \u2212\u03c4)r]. (1) We seek (rx)x\u22651 and (\u03b2x)x\u22651 in the form rx = \u03b3x\u22121 + (1 \u2212\u03b3x\u22121)r and \u03b2x = \u03b2\u03b3x\u22121 for some \u03b3, \u03b2 \u2208(0, 1). Note that this choice ensures Points 1 and 3. Therefore, we just need to show that we can \ufb01nd \u03b3, \u03b2 \u2208[0, 1] for which the inequalities in (1) hold for any s \u2208S. Substituting rx and \u03b2x in terms of \u03b3 and \u03b2, the inequalities in (1) reduce to rx\u2212rs \u2212rx = \u03b3x\u22121(\u03b3\u2212rs \u22121)(1 \u2212r) ? \u2264\u03b2\u03b3x\u22121r(1 \u2212r)s[\u03c4 + (1 \u2212\u03c4)r], rx \u2212rx+K(1\u2212r)s = \u03b3x\u22121(1 \u2212\u03b3K(1\u2212r)s)(1 \u2212r) ? \u2265\u03b2\u03b3x\u22121r(1 \u2212r)s[1 \u2212(1 \u2212\u03c4)r]. First, when s = 0, both sides of both inequalities are equal to 0 so both inequalities clearly hold. Recall that S is a \ufb01nite set of non-negative strengths. Thus, when s > 0, it takes values in 0 < s1 \u2264. . . \u2264sn, and the above inequalities are equivalent to \u03b3 \u2265 \u00001 + \u03b2rs[\u03c4 + (1 \u2212\u03c4)r] \u0001\u22121 rs , \u03b3 \u2264 \u00001 \u2212\u03b2rs[1 \u2212(1 \u2212\u03c4)r] \u0001 1 K(1\u2212r)s . (2) Since both of these expressions are in (0, 1), to conclude that \u03b3, \u03b2 \u2208(0, 1) exist, it su\ufb03ces to show that there is some \u03b2 \u2208(0, 1) such that max s\u2208{s1,...,sn} \u00001 + \u03b2rs[\u03c4 + (1 \u2212\u03c4)r] \u0001\u22121 rs \u2264 min s\u2208{s1,...,sn} \u00001 \u2212\u03b2rs[1 \u2212(1 \u2212\u03c4)r] \u0001 1 K(1\u2212r)s . (3) Note that the LHS of (3) is monotonically increasing in s > 0 whereas the RHS is monotonically decreasing in s > 0, therefore it su\ufb03ces to \ufb01nd \u03b2 \u2208(0, 1) for which \u00001 + \u03b2rsn[\u03c4 + (1 \u2212\u03c4)r] \u0001\u2212 1 rsn \u2264 \u00001 \u2212\u03b2rs1[1 \u2212(1 \u2212\u03c4)r] \u0001 1 K(1\u2212r)s1 . (4) By Taylor\u2019s theorem (1 + y)\u03b1 = 1 + \u03b1y + O(y2), so Taylor expanding both sides of (4) in \u03b2 > 0 we get \u00001 + \u03b2rsn[\u03c4 + (1 \u2212\u03c4)r] \u0001\u2212 1 rsn = 1 \u2212\u03b2[\u03c4 + (1 \u2212\u03c4)r] + O(\u03b22), \u00001 \u2212\u03b2rs1[1 \u2212(1 \u2212\u03c4)r] \u0001 1 K(1\u2212r)s1 = 1 \u2212\u03b2 r K(1 \u2212r)[1 \u2212(1 \u2212\u03c4)r] + O(\u03b22). CVIT 2016 \f23:12 Bidding Mechanisms in Graph Games Therefore, if we show that [\u03c4 + (1 \u2212\u03c4)r] > r K(1\u2212r)[1 \u2212(1 \u2212\u03c4)r], the linear coe\ufb03cient of \u03b2 on the LHS of (4) will be strictly smaller than the linear coe\ufb03cient of \u03b2 on the RHS. Thus, for su\ufb03ciently small \u03b2 > 0, (4) will hold, which concludes the proof of the lemma. This condition is equivalent to K > r[1 \u2212(1 \u2212\u03c4)r] (1 \u2212r)[\u03c4 + (1 \u2212\u03c4)r] = r[\u03c4r + (1 \u2212r)] (1 \u2212r)[\u03c4(1 \u2212r) + r] = \u03c4r2 + r(1 \u2212r) \u03c4(1 \u2212r)2 + r(1 \u2212r), which is true by assumption. 
Thus, Points 1, 3, and 4 hold. In Appendix B, we show that Point 2 holds. \u25c0 4.4 General Mean-Payo\ufb00Taxman Games We extend the solution to general games. Recall that the threshold ratio in mean-payo\ufb00 games is a necessary and su\ufb03cient initial ratio with which Max can guarantee a payo\ufb00of at least 0. \u25b6Theorem 12. Threshold ratios exist in mean-payo\ufb00taxman games. Proof. Consider a mean-payo\ufb00taxman game G = \u27e8V, E, w\u27e9with taxman parameter \u03c4. If G is strongly-connected, then by Theorem 8, the threshold ratio in all vertices in G is the same and is r \u2208(0, 1) for r such that MP(RTBF (\u03c4,r)(G)) = 0. If no such r exists, then either MP(RTBF (\u03c4,1)(G)) < 0, in which case the threshold ratios are 1, or MP(RTBF (\u03c4,0)(G)) > 0, in which case the threshold ratios are 0. The proof for general games follows along the same lines as the proof for reachability games. For each bottom strongly-connected component Si of G we \ufb01nd the threshold ratio ri \u2208(0, 1) as in the above. We play a \u201cgeneralized\u201d reachability game on G as follows. The game ends once the token reaches one of the BSCCs in G. Max wins the game i\ufb00the \ufb01rst time the game enters a BSCC Si, Max\u2019s ratio is greater than ri. Showing existence of threshold ratios in the generalized game follows the same argument as for reachability games [13]. \u25c0 5 Computational Complexity We show, for the \ufb01rst time, computational complexity results for taxman games. We study the following problem, which we call THRESH: given a taxman game G with taxman parameter \u03c4 and a vertex v0 in G, decide whether Th(v0) \u22650.5. The correspondence in Theorem 8 gives the second part of the following theorem, and for the \ufb01rst part, in Appendix C, we show a reduction from THRESH to the existential theory of the reals [7]. \u25b6Theorem 13. For taxman reachability, parity, and mean-payo\ufb00games THRESH is in PSPACE. For strongly-connected mean-payo\ufb00games, THRESH is in NP \u2229coNP. 6 Discussion We study, for the \ufb01rst time, in\ufb01nite-duration taxman bidding games, which span the spectrum between Richman and poorman bidding. For qualitative objectives, we show that the properties of taxman coincide with these of Richman and poorman bidding. For mean-payo\ufb00games, where Richman and poorman bidding have an elegant though surprisingly di\ufb00erent mathematical structure, we show a complete understanding of taxman games. Our study of mean-payo\ufb00taxman games sheds light on these di\ufb00erences and similarities between the two bidding rules. Unlike previous proof techniques, which were ad-hoc, we expect our technique to be easier to generalize beyond taxman games, where they can be used to introduce concepts like multi-players or partial information into bidding games. \fG. Avni, T. Henzinger, and \u00d0. \u017dikeli\u0107 23:13" + }, + { + "url": "http://arxiv.org/abs/1808.04882v1", + "title": "Timed Network Games with Clocks", + "abstract": "Network games are widely used as a model for selfish resource-allocation\nproblems. In the classical model, each player selects a path connecting her\nsource and target vertices. The cost of traversing an edge depends on the {\\em\nload}; namely, number of players that traverse it. Thus, it abstracts the fact\nthat different users may use a resource at different times and for different\ndurations, which plays an important role in determining the costs of the users\nin reality. 
For example, when transmitting packets in a communication network,\nrouting traffic in a road network, or processing a task in a production system,\nactual sharing and congestion of resources crucially depends on time.\n In \\cite{AGK17}, we introduced {\\em timed network games}, which add a time\ncomponent to network games. Each vertex $v$ in the network is associated with a\ncost function, mapping the load on $v$ to the price that a player pays for\nstaying in $v$ for one time unit with this load. Each edge in the network is\nguarded by the time intervals in which it can be traversed, which forces the\nplayers to spend time in the vertices. In this work we significantly extend the\nway time can be referred to in timed network games. In the model we study, the\nnetwork is equipped with {\\em clocks}, and, as in timed automata, edges are\nguarded by constraints on the values of the clocks, and their traversal may\ninvolve a reset of some clocks. We argue that the stronger model captures many\nrealistic networks. The addition of clocks breaks the techniques we developed\nin \\cite{AGK17} and we develop new techniques in order to show that positive\nresults on classic network games carry over to the stronger timed setting.", + "authors": "Guy Avni, Shibashis Guha, Orna Kupferman", + "published": "2018-08-14", + "updated": "2018-08-14", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.LO" + ], + "main_content": "Introduction Network games (NGs, for short) [11, 48, 49] constitute a well studied model of non-cooperative games. The game is played among sel\ufb01sh players on a network, which is a directed graph. Each 1 Supported by the Austrian Science Fund (FWF) under grants S11402-N23 (RiSE/SHiNE), Z211-N23 (Wittgenstein Award), and M2369-N33 (Meitner fellowship). 2 Supported partially by the ARC project \u201cNon-Zero Sum Game Graphs: Applications to Reactive Synthesis and Beyond\u201d (F\u00e9d\u00e9ration Wallonie-Bruxelles). 3 Supported by the European Research Council (FP7/2007-2013) / ERC grant agreement no 278410. \u00a9 Guy Avni, Shibashis Guha and Orna Kupferman; licensed under Creative Commons License CC-BY Leibniz International Proceedings in Informatics Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik, Dagstuhl Publishing, Germany arXiv:1808.04882v1 [cs.GT] 14 Aug 2018 \fXX:2 Timed Network Games with Clocks player has a source and a target vertex, and a strategy is a choice of a path that connects these two vertices. The cost a player pays for an edge depends on the load on it, namely the number of players that use the edge, and the total cost is the sum of costs of the edges she uses. In cost-sharing games, load has a positive effect on cost: each edge has a cost and the players that use it split the cost among them. Then, in congestion games4, load has a negative effect on cost: each edge has a non-decreasing latency function that maps the load on the edge to its cost. One limitation of NGs is that the cost of using a resource abstracts the fact that different users may use the resource at different times and for different durations. This is a real limitation, as time plays an important role in many real-life settings. For example, in a road or a communication system, congestion only affects cars or messages that use a road or a channel simultaneously. We are interested in settings in which congestion affects the quality of service (QoS) or the way a price is shared by entities using a resource at the same time (rather than affecting the travel time). 
For example, discomfort increases in a crowded train (in congestion games) or price is shared by the passengers in a taxi (in cost-sharing games). The need to address temporal behaviors has attracted a lot of research in theoretical computer science. Formalisms like temporal logic [46] enable the speci\ufb01cation of the temporal ordering of events. Its re\ufb01nement to formalisms like real-time temporal logic [8], interval temporal logic [42], and timed automata (TAs, for short) [7] enables the speci\ufb01cation of real-time behaviors. Extensions of TAs include priced timed automata (PTAs, for short) that assign costs to real-time behaviors. Thus, PTAs are suitable for reasoning about quality of real-time systems. They lack, however, the capability to reason about multi-agent systems in which the players\u2019 choices affect the incurred costs. We study timed network games (TNGs, for short) \u2013 a new model that adds a time component to NGs. A TNG is played on a timed-network in which edges are labeled by guards that specify time restrictions on when the edge can be traversed. Similar to NGs, each player has a source and target vertex, but a strategy is now a timed path that speci\ufb01es, in addition to which vertices are traversed, the amount of time that is spent in each vertex. Players pay for staying in vertices, and the cost of staying in a vertex v in a time interval I \u2286I R\u22650 is affected by the load in v during I. In [14], we studied a class of TNGs that offered a \ufb01rst extension of NGs to a timed variant in which the reference to time is restricted: the guards on the edges refer only to global time, i.e., the time that has elapsed since the beginning of the game. In the model in [14], it is impossible to refer to the duration of certain events that occur during the game, for example, it is not possible to express constraints that require staying exactly one time unit in a vertex. Accordingly, we refer to that class as global TNGs (GTNGs, for short). In this work, we signi\ufb01cantly extend the way time can be referred to in TNGs. We do this by adding clocks that may be reset along the edges, and by allowing the guards on the edges to refer to the values of all clocks. GTNGs can be viewed as a fragment in which there is only a single clock that is never reset. We demonstrate our model in the following example. \u25b6Example 1. Consider a setting in which messages are sent through a network of routers. Messages are owned by sel\ufb01sh agents who try to avoid congested routes, where there is a greater chance of loss or corruption. The owners of the messages decide how much time they spend in each router. Using TNGs, we can model constraints on these times, as well as constraints on global events, in particular, arrival time. Note that in some applications, c.f., advertising or security, messages need to patrol the network with a lower bound on their arrival time. 4 The name congestion games is sometimes used to refer to games with general latency functions. We \ufb01nd it more appropriate to use it to refer to games with non-decreasing functions. \fG. Avni, S. Guha and O. Kupferman XX:3 Consider the TNG appearing in Figure 1. The vertices in the TNG model the routers. There are two players that model two agents, each sending a message. The source of both messages is s and the targets are u1 and u2, for messages 1 and 2, respectively. 
The latency functions are described in the vertices, as a function of the load m; e.g., the latency function in v2 is \u2113v2(m) = 3m. Thus, when a single message stays in v2 the cost for each time unit is 3, and when the two messages visit v2 simultaneously, the cost for each of them is 6 per unit time. The network has two clocks, x and y. Clock x is reset in each transition and thus is used to impose restrictions on the time that can be spent in each router: since all transitions can be taken when 1 \u2264x \u22642, a message stays between 1 and 2 time units in a router. Clock y is never reset, thus it keeps track of the global time. The guards on clock y guarantee that message 1 reaches its destination by time 4 but not before time 3 and message 2 reaches its destination by time 5 but not before time 4. Suppose the \ufb01rst agent chooses the timed path (s, 2), (v1, 1), u1, thus message 1 stays in s for two time units and in v1 for one time unit before reaching its destination u1. Suppose the second agent chooses the path (s, 2), (v1, 2), (v2, 1), u2. Note that crossing an edge is instantaneous. Since both messages stay in the same vertices during the intervals I1 = [0, 2] and I2 = [2, 3], the load in the corresponding vertices is 2. During interval I1, each of the agents pays |I1| \u00b7 \u2113s(2) = 2 \u00b7 4 and during I2, each pays |I2| \u00b7 \u2113v1(2) = 1 \u00b7 2. Message 2 stays in v1 alone during the interval [3, 4] and in v2 during the interval [4, 5], for which it pays 1 and 3, respectively. The total costs are thus 10 and 14. \u25c0 m 2m 3m u1 v1 s v2 u2 1 \u2264x \u22642\u2227 3 \u2264y \u22644, {x} 1 \u2264x \u22642, {x} 1 \u2264x \u22642, {x} 1 \u2264x \u22642\u2227 4 \u2264y \u22645, {x} 1 \u2264x \u22642, {x} Figure 1 A congestion TNG. Before we elaborate on our contribution, let us survey relevant works, namely, extensions of NGs with temporal aspects and extensions of timed-automata to games. Extensions of NGs that involve reasoning about time mostly study a cost model in which the players try to minimize the time of arrival at their destinations (c.f., [36, 39, 47, 45]), where, for example, congestion affects the duration of crossing an edge. These works are different from ours since we consider a QoS cost model. An exception is [36], which studies the QoS costs. A key difference in the models is that there, time is discrete and the players have \ufb01nitely many strategies. Thus, reductions to classical resource allocation games is straightforward while for TNGs it is not possible, as we elaborate below. Games on timed automata were \ufb01rst studied in [12] in which an algorithm to solve timed games with timed reachability objective was given. The work was later generalized and improved [5, 20, 35, 23]. Average timed games, games with parity objectives, mean-payoff games and energy games have also been studied in the context of timed automata [3, 37, 27, 21, 34]. All the timed games above are twoplayer zero-sum ongoing games. Prices are \ufb01xed and there is no notion of load. Also, the questions studied on these games concern their decidability, namely \ufb01nding winners and strategies for them. TNGs are not zero-sum games, so winning strategies do not exist. Instead, the problems we study here concern rationality and stability. The \ufb01rst question that arises in the context of non-zero-sum games is the existence of stable outcomes. 
In the context of NGs, the most prominent stability concept is that of a (pure) Nash equilibrium (NE, for short) [43] \u2013 a profile such that no player can decrease her cost by unilaterally deviating from her current strategy. (Throughout this paper, we consider pure strategies, as is the case for the vast literature on NGs.) Decentralized decision-making may lead to solutions that are sub-optimal for the society as a whole. The standard measures to quantify the inefficiency incurred due to selfish behavior are the price of stability (PoS) [11] and the price of anarchy (PoA) [38]. In both measures we compare against the social optimum (SO, for short), namely a profile that minimizes the sum of costs of all players. The PoS (PoA, respectively) is the best-case (worst-case) inefficiency of an NE; that is, the ratio between the cost of a best (worst) NE and the SO. The picture of stability and equilibrium inefficiency for standard NGs is well understood. Every NG has an NE, and in fact these games are potential games [48], which have the following stronger property: a best-response sequence is a sequence of profiles P1, P2, . . . such that, for i \u22651, the profile Pi+1 is obtained from Pi by letting some player deviate and decrease her personal cost. In finite potential games, every best-response sequence converges to an NE. For k-player cost-sharing NGs, the PoS and PoA are log k and k, respectively [11]. For congestion games with affine cost functions, PoS \u2248 1.577 [29, 2] and PoA = 5/2 [30]. In [14], we showed that these positive results carry over to GTNGs. A key technical feature of GTNGs is that since guards refer to global time, it is easy to find an upper bound T on the time by which all players reach their destinations. Proving existence of NE follows from a reduction to NGs, using a zone-like structure [6, 18]. The introduction of clocks with resets breaks the direct reduction to NGs and questions the existence of a bound by which the players arrive at their destinations. Even with an upper bound on time, a reduction from TNGs to NGs is not likely. Consider the following example. From s1, the earliest absolute time at which vertex v2 is reached is 2, following the path \u27e8(s1, 0), (v1, 0), (v1, 2), (v2, 2)\u27e9, and the value of clock x at v2 is then 0. On the other hand, when v2 is reached from s2 following the path \u27e8(s2, 0), (v2, 0), (v2, 2)\u27e9, then at absolute time 2, the value of clock x at v2 is 2 and the transition to vertex u is thus enabled. This leads to a spurious path \u27e8(s1, 0), (v1, 0), (v1, 2), (v2, 2), (u, 2)\u27e9which does not correspond to a valid path in the TNG. Figure 2 An attempt to translate a TNG to an NG. Further, to see the difficulty in finding such a bound, consider, for example, a cost-sharing game in which all players, on their paths to their targets, need to stay for one time unit in a \u201cgateway\u201d vertex v that costs 1 (see details in Section 6). Assume also that, for 1 \u2264i \u2264k, Player i can only reach v at times that are multiples of pi, for relatively prime numbers p1, . . . , pk. The SO is obtained when all players synchronize their visits to v, and such a synchronization forces them to wait till time p1 \u00b7 . . .
\u00b7 pk, which is exponential in the TNG. The lack of an upper bound on the global time in TNGs demonstrates that we need a different approach to obtain positive results for general TNGs. We show that TNGs are guaranteed to have an NE. Our proof uses a combination of techniques from real-time models and resource allocation games. Recall that a PTA assigns a price to a timed word. We are able to reduce the best-response and the social-optimum problems to and from the problem of \ufb01nding cheapest runs in PTAs [19], showing that the problems are PSPACE-complete. Next, we show that TNGs are potential games. Note that since players have uncountably many strategies, the fact that TNGs are potential games does not immediately imply existence of an NE, as a best-response sequence may not be \ufb01nite. We show that there is a best-response sequence that terminates in an NE. For this, we \ufb01rst need to show the existence of an integral best-response, which is obtained from the reduction to PTAs. Finally, given a TNG, we \ufb01nd a time T such that there exists an NE in which all players reach their \fG. Avni, S. Guha and O. Kupferman XX:5 destination by time T. 2 Preliminaries 2.1 Resource allocation games and network games For k \u2208N, let [k] = {1, . . . , k}. A resource allocation game (RAG, for short) is R = \u27e8k, E, {\u03a3i}i\u2208[k], {\u2113e}e\u2208E\u27e9, where k \u2208N is the number of players; E is a set of resources; for i \u2208[k], the set strategies of Player i is \u03a3i \u22862E; and, for e \u2208E, the latency function \u2113e : [k] \u2192Q\u22650 maps a load on e to its cost under this load. A pro\ufb01le is a choice of a strategy for each player. The set of pro\ufb01les of R is pro\ufb01les(R) = \u03a31 \u00d7 . . . \u00d7 \u03a3k. For e \u2208E, we de\ufb01ne the load on e in a pro\ufb01le P = \u27e8\u03c31, . . . , \u03c3k\u27e9, denoted loadP (e), as the number of players using e in P, thus loadP (e) = |{i \u2208[k] : e \u2208\u03c3i}|. The cost a player pays in pro\ufb01le P, denoted costi(P), depends on the choices of the other players. We de\ufb01ne costi(P) = P e\u2208\u03c3i \u2113e(loadP (e)). Network games (NGs, for short) can be viewed as a special case of RAGs where strategies are succinctly represented by means of paths in graphs. An NG is N = \u27e8k, V , E, {\u27e8si, ui\u27e9}i\u2208[k], {\u2113e}e\u2208E\u27e9, where \u27e8V, E\u27e9is a directed graph; for i \u2208[k], the vertices si and ui are the source and target vertices of Player i; and the latency functions are as in RAGs. The set of strategies for Player i is the set of simple paths from si to ui in N. Thus, in NGs, the resources are the edges in the graph. We distinguish between two types of latency functions. In cost-sharing games, the players that visit a vertex share its cost equally. Formally, every e \u2208E has a cost ce \u2208Q\u22650 and its latency function is \u2113e(l) = ce l . Note that these latency functions are decreasing, thus the load has a positive effect on the cost. In contrast, in congestion games, the cost functions are non-decreasing and so the load has a negative effect on the cost. Typically, the latency functions are restricted to simple functions such as linear latency functions, polynomials, and so forth. 2.2 Timed networks and timed network games A clock is a variable that gets values from I R\u22650 and whose value increases as time elapses. A reset of a clock x assigns value 0 to x. 
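Before the timed definitions continue, the following minimal Python sketch (our own illustration; the names rag_costs, cost_sharing and affine are invented) spells out the untimed cost model of Section 2.1: loads are counted per resource, and each player pays the sum, over her chosen resources, of the latency under the current load, with cost-sharing and congestion latencies as the two extremes.
from collections import Counter

def rag_costs(profile, latency):
    # profile: one set of resources per player; latency[e]: load -> cost of e under that load
    load = Counter(e for strategy in profile for e in strategy)
    return [sum(latency[e](load[e]) for e in strategy) for strategy in profile]

def cost_sharing(c):
    # cost-sharing latency: a fixed cost c split equally among the players using the resource
    return lambda l: c / l

def affine(a, b):
    # congestion latency (non-decreasing, affine): a + b * load
    return lambda l: a + b * l

latency = {'e1': cost_sharing(6.0), 'e2': affine(1, 2)}
profile = [{'e1'}, {'e1', 'e2'}]       # two players, both use e1
print(rag_costs(profile, latency))     # [3.0, 6.0]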
A guard over a set C of clocks is a conjunction of clock constraints of the form x \u223cm, for x \u2208C, \u223c\u2208{\u2264, =, \u2265}, and m \u2208N. Note that we disallow guards that use the operators < and > (see Remark 4). A guard of the form V x\u2208C x \u22650 is called true. The set of guards over C is denoted \u03a6(C). A clock valuation is an assignment \u03ba : C \u2192I R\u22650. A clock valuation \u03ba satis\ufb01es a guard g, denoted \u03ba | = g, if the expression obtained from g by replacing each clock x \u2208C with the value \u03ba(x) is valid. A timed network is a tuple A = \u27e8C, V, E\u27e9, where C is a set of clocks, V is a set of vertices, and E \u2286V \u00d7 \u03a6(C) \u00d7 2C \u00d7 V is a set of directed edges in which each edge e is associated with a guard g \u2208\u03a6(C) that should be satis\ufb01ed when e is traversed and a set R \u2286C of clocks that are reset along the traversal of e. When traversing a path in a timed network, time is spent in vertices, and edges are traversed instantaneously. Accordingly, a timed path in A is a sequence \u03b7 = \u27e8\u03c41, e1\u27e9, . . . , \u27e8\u03c4n, en\u27e9\u2208(I R\u22650 \u00d7 E)\u2217, describing edges that the path traverses along with their traversal times. The timed path \u03b7 is legal if the edges are successive and the guards associated with them are satis\ufb01ed. Formally, there is a sequence \u27e8v0, t0\u27e9, . . . , \u27e8vn\u22121, tn\u22121\u27e9, vn \u2208(V \u00d7I R\u22650)\u2217\u00b7V , describing the vertices that \u03b7 visits and the time spent in these vertices, such that for every 1 \u2264j \u2264n, the following hold: (1) tj\u22121 = \u03c4j \u2212\u03c4j\u22121, with \u03c40 = 0, (2) there is gj \u2208\u03a6(C) and Rj \u2286C, such that ej = \u27e8vj\u22121, gj, Rj, vj\u27e9, (3) there is a clock valuation \u03baj that describes the values of the clocks before the incoming edge to vertex vj is traversed. Thus, \u03ba1(x) = t0, for all x \u2208C, and for 1 < j \u2264n, we distinguish between clocks that are reset when ej\u22121 is traversed and clocks that are not reset: for x \u2208Rj\u22121, we de\ufb01ne \u03baj(x) = tj\u22121, \fXX:6 Timed Network Games with Clocks and for x \u2208(C \\ Rj\u22121), we de\ufb01ne \u03baj(x) = \u03baj\u22121(x) + tj\u22121, and (4) for every 1 \u2264j \u2264n, we have that \u03baj | = gj. We sometimes refer to \u03b7 also as the sequence \u27e8v0, t0\u27e9, . . . , \u27e8vn\u22121, tn\u22121\u27e9, vn. Consider a \ufb01nite set T \u2286I R\u22650 of time points. We say that a timed path \u03b7 is a T-path if all edges in \u03b7 are taken at times in T. Formally, for all 1 \u2264j \u2264n, we have that \u03c4j \u2208T. We refer to the time at which \u03b7 ends as the time \u03c4n at which the destination is reached. We say that \u03b7 is integral if T \u2286N. A timed network game (TNG, for short) extends an NG by imposing constraints on the times at which edges may be traversed. Formally, T = \u27e8k, C, V, E, {\u2113v}v\u2208V , \u27e8si, ui\u27e9i\u2208[k]\u27e9includes a set C of clocks, and \u27e8C, V, E\u27e9is a timed network. Recall that while traversing a path in a timed network, time is spent in vertices. Accordingly, the latency functions now apply to vertices, thus \u2113v : [k] \u2192 Q\u22650 maps a load on vertex v to its cost under this load. Traversing an edge is instantaneous and is free of charge. A strategy for Player i, for i \u2208[k], is then a legal timed path from si to ui. 
We assume all players have at least one strategy. \u25b6Remark. A possible extension of TNGs is to allow costs on edges. Since edges are traversed instantaneously, these costs would not be affected by load. Such an extension does not affect our results and we leave it out for sake of simplicity. Another possible extension is allowing strict time guards, which we discuss in Remark 4. The cost Player i pays in pro\ufb01le P, denoted costi(P), depends on the vertices in her timed path, the time spent on them, and the load during the visits. In order to de\ufb01ne the cost formally, we need some de\ufb01nitions. For a \ufb01nite set T \u2286I R\u22650 of time points, we say that a timed path is a T-strategy if it is a T-path. Then, a pro\ufb01le P is a T-pro\ufb01le if it consists only of T-strategies. Let tmax = max(T). For t \u2208T such that t < tmax, let nextT (t) be the minimal time point in T that is strictly larger than t. We partition the interval [0, tmax] into a set \u03a5 of sub-intervals [m, nextT (m)] for every m \u2208(T \u222a{0}) \\ {tmax}. We refer to the sub-intervals in \u03a5 as periods. Suppose T is the minimal set such that P is a T-pro\ufb01le. Note that \u03a5 is the coarsest partition of [0, tmax] into periods such that no player crosses an edge within a period in \u03a5. We denote this partition by \u03a5P . For a player i \u2208[k] and a period \u03b3 \u2208\u03a5P , let visitsP (i, \u03b3) be the vertex that Player i visits during period \u03b3. That is, if \u03c0i = \u27e8vi 0, ti 0\u27e9, . . . , \u27e8vi ni\u22121, ti ni\u22121\u27e9, vi ni is a legal timed path that is a strategy for Player i and \u03b3 = [m1, m2], then visitsP (i, \u03b3) is the vertex vi j for the index 1 \u2264j < ni such that \u03c4 i j \u2264m1 \u2264m2 \u2264\u03c4 i j+1, and visitsP (i, \u03b3) is the vertex vi 0 if 0 = m1 \u2264m2 \u2264\u03c4 i 1. Note that since P is a T-pro\ufb01le, for each period \u03b3 \u2208\u03a5P , the number of players that stay in each vertex v during \u03b3 is \ufb01xed. Let loadP (v, \u03b3) denote this number. Formally loadP (v, \u03b3) = |{i : visitsP (i, \u03b3) = v}|. Finally, for a period \u03b3 = [m1, m2], let |\u03b3| = m2 \u2212m1 be the duration of \u03b3. Suppose Player i\u2019s path ends at time \u03c4 i. Let \u03a5i P \u2286\u03a5P denote the periods that end by time \u03c4i. Recall that the latency function \u2113v : [k] \u2212 \u2192Q\u22650 maps the number of players that simultaneously visit vertex v to the price that each of them pays per time unit. If visitsP (i, \u03b3) = v, then the cost of Player i in P, over the period \u03b3 is cost\u03b3,i(P) = \u2113v(loadP (v, \u03b3)) \u00b7 |\u03b3|. We de\ufb01ne costi(P) = P \u03b3\u2208\u03a5i P cost\u03b3,i(P). The cost of the pro\ufb01le P, denoted cost(P), is the total cost incurred by all the players, i.e., cost(P) = Pk i=1 costi(P). A T-strategy is called an integral strategy when T \u2286N, and similarly for integral pro\ufb01le. A pro\ufb01le P = \u27e8\u03c01, . . . , \u03c0k\u27e9is said to end by time \u03c4 if for each i \u2208[k], the strategy \u03c0i ends by time \u03c4. Consider a TNG T that has a cycle such that a clock x of T is reset on the cycle. It is not dif\ufb01cult to see that this may lead to T having in\ufb01nitely many integral pro\ufb01les that end by different times. A TNG T is called global if it has a single clock x that is never reset. We use GTNG to indicate that a TNG is global. 
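As a sanity check of these definitions, the following short Python sketch (our illustration; representing a strategy as a list of (vertex, duration) pairs is an assumption, not the paper's notation) partitions time at the crossing points of a profile, counts the load per vertex and period, and charges each player |\u03b3| \u00b7 \u2113v(load) per period until she reaches her target; on the strategies of Example 1 it recovers the costs 10 and 14.
def tng_costs(paths, latency):
    # paths[i]   : list of (vertex, duration) pairs for player i (target vertex omitted)
    # latency[v] : function load -> price per time unit in vertex v
    cuts = sorted({0.0} | {sum(t for _, t in p[:j + 1]) for p in paths for j in range(len(p))})
    periods = list(zip(cuts, cuts[1:]))

    def vertex_at(path, time):
        # vertex occupied during the period starting at 'time' (None after arrival)
        clock = 0.0
        for v, dur in path:
            if clock <= time < clock + dur:
                return v
            clock += dur
        return None

    costs = [0.0] * len(paths)
    for a, b in periods:
        occupied = [vertex_at(p, a) for p in paths]
        load = {v: occupied.count(v) for v in occupied if v is not None}
        for i, v in enumerate(occupied):
            if v is not None:
                costs[i] += (b - a) * latency[v](load[v])
    return costs

# Example 1 (Figure 1): latencies 2m in s, m in v1, 3m in v2.
latency = {'s': lambda m: 2 * m, 'v1': lambda m: m, 'v2': lambda m: 3 * m}
paths = [[('s', 2), ('v1', 1)],              # message 1, reaches u1 at time 3
         [('s', 2), ('v1', 2), ('v2', 1)]]   # message 2, reaches u2 at time 5
print(tng_costs(paths, latency))             # [10.0, 14.0]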
As in RAGs, we distinguish between cost-sharing TNGs that have cost-sharing latency functions and congestion TNGs in which the latency functions are non-decreasing. \fG. Avni, S. Guha and O. Kupferman XX:7 2.3 Stability and ef\ufb01ciency Consider a game G. For a pro\ufb01le P and a strategy \u03c0 of player i \u2208[k], let P[i \u2190\u03c0] denote the pro\ufb01le obtained from P by replacing the strategy of Player i in P by \u03c0. A pro\ufb01le P is said to be a (pure) Nash equilibrium (NE) if none of the players in [k] can bene\ufb01t from a unilateral deviation from her strategy in P to another strategy. Formally, for every Player i and every strategy \u03c0 for Player i, it holds that costi(P[i \u2190\u03c0]) \u2265costi(P). A social optimum (SO) of a game G is a pro\ufb01le that attains the in\ufb01mum cost over all pro\ufb01les. We denote by SO(G) the cost of an SO pro\ufb01le; i.e., SO(G) = infP \u2208pro\ufb01les(G) cost(P). It is well known that decentralized decision-making may lead to sub-optimal solutions from the point of view of the society as a whole. We quantify the inef\ufb01ciency incurred due to self-interested behavior by the price of anarchy (PoA) [38, 44] and price of stability (PoS) [11] measures. The PoA is the worst-case inef\ufb01ciency of a Nash equilibrium, while the PoS measures the best-case inef\ufb01ciency of a Nash equilibrium. Note that unlike resource allocation games in which the set of pro\ufb01les is \ufb01nite, in TNGs there can be uncountably many NEs, so both PoS and PoA need to be de\ufb01ned using in\ufb01mum/supremum rather than min/max. Formally, \u25b6De\ufb01nition 2. Let G be a family of games, and let G \u2208G be a game in G. Let \u0393(G) be the set of Nash equilibria of the game G. Assume that \u0393(G) \u0338= \u2205. The price of anarchy of G is PoA(G) = supP \u2208\u0393(G) cost(P)/SO(G). The price of anarchy of the family of games G is PoA(G) = supG\u2208GPoA(G). The price of stability of G is PoS(G) = infP \u2208\u0393(G) cost(P)/SO(G). The price of stability of the family of games G is PoS(G) = supG\u2208GPoS(G). 3 The Best-Response and the Social-Optimum Problems Consider a TNG T = \u27e8k, C, V, E, {\u2113v}v\u2208V , \u27e8si, ui\u27e9i\u2208[k]\u27e9. In the best-response problem (BR problem, for short), we ask how a player reacts to a choice of strategies of the other players. Formally, let \u03c01, . . . , \u03c0k\u22121 be a choice of integral6 strategies for Players 1, . . . , k \u22121 in T . We look for a strategy \u03c0k that minimizes costk(\u27e8\u03c01, . . . , \u03c0k\u27e9). The choice of allowing Player k to react is arbitrary and is done for convenience of notation. In the social optimum problem (SOPT problem, for short), we seek a pro\ufb01le that maximizes the social welfare, or in other words, minimizes the sum of players\u2019 costs. In this section we describe priced timed automata (PTAs, for short) [10, 17] and show that while they are different from TNGs both in terms of the model and the questions asked on it, they offer a useful framework for reasoning about TNGs. In particular, we solve the BR and SOPT problems by reductions to problems about PTAs. 3.1 From TNGs to priced timed automata A PTA [10, 17] is P = \u27e8C, V, E, {rv}v\u2208V \u27e9, where \u27e8C, V, E\u27e9is a timed network and rv \u2208Q\u22650 is the rate of vertex v \u2208V . Intuitively, the rate rv speci\ufb01es the cost of staying in v for a duration of one time unit. Thus, a timed path \u03b7 = \u27e8v0, t0\u27e9, . . . 
, \u27e8vn, tn\u27e9, vn+1 in a PTA has a price, denoted price(\u03b7), which is P 0\u2264j\u2264n rv \u00b7 tv. The size of P is |V | + |E| plus the number of bits needed in the binary encoding of the numbers appearing in guards and rates in P. 7 6 We choose integral strategies since strategies with irrational times cannot be represented as part of the input; for strategies that use rational times, the best response problem can be solved with little modi\ufb01cation in the proof of Theorem 4. 7 In general, PTAs have rates on transitions and strict time guards, which we do not need here. \fXX:8 Timed Network Games with Clocks Consider a PTA P and two vertices s and u. Let paths(s, u) be the set of timed paths from s to u. We are interested in cheapest timed paths in paths(s, u). A priori, there is no reason to assume that the minimal price is attained, thus we are interested in the optimal price, denoted opt(s, u), which we de\ufb01ne to be inf{price(\u03b7) : \u03b7 \u2208paths(s, u)}. The corresponding decision problem, called the cost optimal reachability problem (COR, for short) takes in addition a threshold \u00b5, and the goal is to decide whether opt(s, t) \u2264\u00b5. Recall that we do not allow the guards to use the operators < and >. \u25b6Theorem 3. [19, 32] The COR problem is PSPACE-complete for PTAs with two or more clocks. Moreover, the optimal price is attained by an integral path, i.e., there is an integral path \u03b7 \u2208paths(s, u) with price(\u03b7) = opt(s, u). In Sections 3.2 and 3.3 below, we reduce problems on TNGs to problems on PTAs. The reductions allow us to obtain properties on strategies and pro\ufb01les in TNGs using results on PTAs, which we later use in combination with techniques for NGs in order to solve problems on TNGs. 3.2 The best-response problem \u25b6Theorem 4. Consider a TNG T with n clocks and integral strategies \u03c01, . . . , \u03c0k\u22121 for Players 1, . . . , k\u22121. There is a PTA P with n+1 clocks and two vertices v and u such that there is a oneto-one cost-preserving correspondence between strategies for Player k in T and timed paths from v to u: for every strategy \u03c0k in T and its corresponding path \u03b7 in P, we have costk(\u27e8\u03c01, . . . , \u03c0k\u27e9) = price(\u03b7). Proof. Consider a TNG T = \u27e8k, V, E, C, {\u2113v}v\u2208V , \u27e8si, ui\u27e9i\u2208[k]\u27e9, where C = {x1, . . . , xm}. Let Q = \u27e8\u03c01, . . . , \u03c0k\u22121\u27e9be a choice of timed paths for Players 1, . . . , k \u22121. Note that Q can be seen as a pro\ufb01le in a game that is obtained from T by removing Player k, and we use the de\ufb01nitions for pro\ufb01les on Q in the expected manner. Let T \u2286Q be the minimal set of time points for which all the strategies in Q are T-strategies. Consider two consecutive time points a, b \u2208T, i.e., there is no c \u2208T with a < c < b. Then, there are players that cross edges at times a and b, and no player crosses an edge at time points in the interval (a, b). Moreover, let tmax be the latest time in T, then tmax is the latest time at which a player reaches her destination. Let \u03a5Q be a partition of [0, tmax] according to T. We obtain \u03a5\u2032 Q from \u03a5Q by adding the interval [tmax, \u221e). A key observation is that the load on all the vertices is unchanged during every interval in \u03a5\u2032 Q. For a vertex v \u2208V and \u03b4 \u2208\u03a5Q, the cost Player k pays per unit time for using v in the interval \u03b4 is \u2113v(loadQ(v, \u03b4) + 1). 
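Before the formal definition of the reduction, the following Python sketch (illustrative only; best_response_rates and the path representation are our assumptions) computes the ingredient that drives it: the interval partition induced by the opponents' integral strategies and, for each resulting copy, the rate \u2113v(load + 1) that Player k would pay per time unit if she joined vertex v during that interval.
def best_response_rates(opponent_paths, vertices, latency):
    # opponent_paths : list of (vertex, duration) lists for Players 1, ..., k-1
    # latency[v]     : function load -> price per time unit in v
    cuts = sorted({sum(t for _, t in p[:j + 1]) for p in opponent_paths for j in range(len(p))})
    if not cuts:
        intervals = [(0.0, None)]
    else:
        intervals = list(zip([0.0] + cuts, cuts)) + [(cuts[-1], None)]  # None: unbounded last copy

    def vertex_at(path, time):
        clock = 0.0
        for v, dur in path:
            if clock <= time < clock + dur:
                return v
            clock += dur
        return None  # this opponent has already reached her target

    rates = []
    for a, b in intervals:
        occupied = [vertex_at(p, a) for p in opponent_paths]
        copy_rates = {v: latency[v](occupied.count(v) + 1) for v in vertices}
        rates.append(((a, b), copy_rates))
    return rates

# The PTA itself takes one copy of the network per interval, with these rates on the
# copy's vertices; internal edges repeat the TNG edges (same guards and resets on the
# local clocks), and one extra never-reset clock forces the move from the copy of an
# interval [a, b] to the next copy exactly at global time b.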
On the other hand, since all k \u22121 players reach their destination by time tmax, the load on v after tmax is 0, and the cost Player k pays for using it then is \u2113v(1). The PTA P that we construct has |\u03a5\u2032 Q| copies of T , thus its vertices are V \u00d7 \u03a5\u2032 Q. Let \u03b40 = [0, b] \u2208\u03a5\u2032 Q be the \ufb01rst interval. We consider paths from the vertex v = \u27e8sk, \u03b40\u27e9, which is the copy of Player k\u2019s source in the \ufb01rst copy of T , to a target u, which is a new vertex we add and whose only incoming edges are from vertices of the form \u27e8uk, \u03b4\u27e9, namely, the copies of the target vertex uk of Player k. We construct P such that each such path \u03b7 from v to u in P corresponds to a legal strategy \u03c0k for Player k in T , and such that costk(\u27e8\u03c01, . . . , \u03c0k\u22121, \u03c0k\u27e9) = price(\u03b7). The main difference between the copies are the vertices\u2019 costs, which depend on the load as in the above. We refer to the n clocks in T as local clocks. In each copy of P, we use the local clocks and their guards in T as well as an additional global clock that is never reset to keep track of global time. Let \u03b4 = [a, b] \u2208\u03a5Q and \u03b4\u2032 = [b, c] \u2208\u03a5\u2032 Q be the following interval. Let T\u03b4 and T\u03b4\u2032 be the copies of T that corresponds to the respective intervals. The local clocks guarantee that a path in T\u03b4 is a legal path in T . The global clock allows us to make sure that (1) proceeding from T\u03b4 to T\u03b4\u2032 can only occur precisely at time b, and (2) proceeding from \u27e8uk, \u03b4\u27e9in T\u03b4 to the target u can only occur at a time in the interval \u03b4. We now formalize the intuition of the reduction given above. We de\ufb01ne P = \u27e8V \u2032, E\u2032, C \u222a {xn+1}, {rv}v\u2208V \u2032\u27e9, where V \u2032 = (V \u00d7 \u03a5\u2032 Q) \u222a{u}, and E\u2032 = E\u2032 l \u222aE\u2032 i \u222aE\u2032 t, where E\u2032 l is a set of \fG. Avni, S. Guha and O. Kupferman XX:9 external edges, E\u2032 i is a set of internal edges and E\u2032 t is a set of target edges. Let \u03b41, . . . , \u03b4|\u03a5\u2032 Q| be the set of intervals arranged according to increasing inf(\u03b4j), i.e., for all j \u2208[|\u03a5\u2032 Q| \u22121], we have that inf(\u03b4j) < inf(\u03b4j+1). Also for every interval \u03b4 = \u03b4j, we represent by next(\u03b4), the interval \u03b4j+1, and let \u03c4\u03b4j = sup(\u03b4j). For each v \u2208V , and \u03b4 \u2208\u03a5Q, there is an external edge of the form \u27e8\u27e8v, \u03b4\u27e9, {xn+1 = \u03c4\u03b4}, \u2205, \u27e8v, next(\u03b4)\u27e9\u27e9, where \u03c4\u03b4 = sup(\u03b4). Hence an external edge moves from copy \u03b4 to its next copy at global time \u03c4\u03b4. The internal edges in a copy match the ones in T . Thus, for every \u03b4 \u2208\u03a5Q, we have an edge e\u2032 = \u27e8\u27e8v, \u03b4\u27e9, g, R, \u27e8v\u2032, \u03b4\u27e9\u27e9in E\u2032 iff there is an edge e = \u27e8v, g, R, v\u2032\u27e9\u2208E. Also the guard and the clock reset on e\u2032 are exactly the same as that of e. Note that clock xn+1 is not used in the internal edges. For each copy corresponding to \u03b4 \u2208\u03a5Q, there is a target edge from the vertex (uk, \u03b4) to u with the guard xn+1 \u2264\u03c4\u03b4, and from the copy corresponding to the last interval \u03b4|\u03a5\u2032 Q|, there is an edge from (uk, \u03b4|\u03a5\u2032 Q|) to u with the guard xn+1 \u2265\u03c4\u03b4|\u03a5\u2032 Q|\u22121. 
Finally, we de\ufb01ne the rate of a vertex \u27e8v, i\u27e9. Let l be the load on v during time interval \u03b4, then the rate of \u27e8v, \u03b4\u27e9is \u2113v(l + 1). Note that for every vertex \u27e8v, \u03b4|\u03a5\u2032 Q|\u27e9, in the |\u03a5\u2032 Q|-th copy of P, the rate is \u2113v(1). We prove that the cost of the best response strategy of Player k in T is the same as the cost of a cost optimal path in P. We consider a strategy \u03c0 of Player k in T and let P = \u27e8\u03c01, . . . , \u03c0k\u27e9and show that there is a path \u03b7 in P such that costk(P) is the same as cost of the path \u03b7. Similarly, for a path \u03b7 in P, we show that there exists a strategy \u03c0 of Player k such that again costk(P) is the same as cost of the path \u03b7 in P. Consider the strategy \u03c0 = (v1, t1), . . . , (vp, tp), uk of Player k in T such that v1 = sk. We construct a timed path \u03c0\u2032 = \u27e8\u27e8v1, \u03b411\u27e9, t1\u27e9, \u27e8\u27e8v2, \u03b411\u27e9, t2\u27e9, . . . , \u27e8\u27e8v\u2113, \u03b4\u2113\u27e9, t\u2113\u27e9, \u27e8\u27e8v\u2113+1, \u03b4\u2113+1\u27e9, 0\u27e9, u in P in the following manner. Firstly, v1 = sk, v\u2113+1 = uk and i1 = 1. Consider the mapping g : I R\u22650 7\u2192\u03a5\u2032 Q such that g(0) = \u03b41, and for Pi j=1 tj \u2264\u03c41, we have g(Pi j=1 tj) = \u03b41 and when Pi j=1 tj \u2265\u03c41, for all 1 \u2264i \u2264\u2113, we have g(Pi j=1 tj) = \u03b4\u03be if \u03c4\u03be\u22121 \u2264g(Pi\u22121 j=1 tj) \u2264\u03c4\u03be. Intuitively, g maps a global time \u03c4 to the \u03beth copy of the TNG T in P if \u03c4\u03be\u22121 \u2264\u03c4 \u2264\u03c4\u03be. Consider (vi, ti) in the path \u03c0 for some i \u2208[p]. Suppose there are times \u03c4r, \u03c4r+1, . . . , \u03c4r+h \u2208T such that all these h + 1 times belong to the interval [Pi\u22121 j=1 tj, Pi j=1 tj]. Then in \u03c0\u2032, we replace (vi, ti) by \u27e8\u27e8vi, g(Pi\u22121 j=1 tj)\u27e9, \u03c4r \u2212Pi\u22121 j=1 tj\u27e9, \u27e8\u27e8vi, g(\u03c4r)\u27e9, \u03c4r+1 \u2212\u03c4r\u27e9, . . . , \u27e8\u27e8vi, g(\u03c4r+l)\u27e9, Pi j=1 tj \u2212\u03c4r+l\u27e9. Again from the de\ufb01nition of fv, for v \u2208V \u2032, we can see that the cost of the timed path \u03c0\u2032 in P is the same as costk(P). Now we consider the other direction. Consider a path \u03b7 = \u27e8\u27e8v1, \u03b41\u27e9, t1\u27e9, \u27e8\u27e8v2, \u03b42\u27e9, t2\u27e9, . . ., \u27e8\u27e8v\u2113, \u03b4\u2113\u27e9, t\u2113\u27e9, \u27e8\u27e8v\u2113+1, \u03b4\u2113+1\u27e9, 0\u27e9, u in P such that v1 = sk and v\u2113+1 = uk. We construct a path \u03c0 in T from \u03c0 as follows. Every internal edge (\u27e8\u27e8vj, \u03b4j\u27e9, tj\u27e9, \u27e8\u27e8vj+1, \u03b4j+1\u27e9, tj+1\u27e9) is replaced by the edge (\u27e8vj, tj\u27e9, \u27e8vj+1, tj+1\u27e9) in \u03c0. Note that here ij = ij+1. A sequence of external edges (\u27e8\u27e8vj, ij\u27e9, tj\u27e9, . . . , \u27e8\u27e8vj+l, ij+1\u27e9, tj+l\u27e9) such that vj = vj+1 = \u00b7 \u00b7 \u00b7 = vj+l is replaced by (\u27e8vj, tj + tj+1+\u00b7 \u00b7 \u00b7+tj+l\u27e9). Let t0 = 0. The cost along path \u03b7 in P is rv1(l1)\u00b7t1+rv2(l2)\u00b7t2+\u00b7 \u00b7 \u00b7+rv\u2113(l\u2113)\u00b7t\u2113, where for 1 \u2264j \u2264\u2113, we have lj = loadP (vj, [Pj q=1 tq \u2212Pj\u22121 q=1 tq]). Let P = \u27e8\u03c01, . . . , \u03c0k\u22121, \u03c0\u27e9is the pro\ufb01le obtained from the strategies of the k players. 
From the de\ufb01nition of fv for each v \u2208V \u2032, it is not dif\ufb01cult to see that costk(P) is the same as the cost of the timed path \u03b7. Note that given integral strategies of k \u22121 players, an integral path in P translates to an integral strategy of Player k in T . Since it is known that a cost optimal path in P can be an integral path [19], the best response of Player k in T is an integral strategy. \u25c0 We conclude with the computational complexity of the BR problem. The decision-problem variant gets as input a TNG T , integral strategies \u03c01, . . . , \u03c0k\u22121 for Players 1, . . ., k \u22121, and a value \u00b5, and the goal is to decide whether Player k has a strategy \u03c0k such that costk(\u27e8\u03c01, . . . , \u03c0k\u27e9) \u2264\u00b5. Theorem 4 implies a reduction from the BR problem to the COR problem and a reduction in the other direction is easy since PTAs can be seen as TNGs with a single player. For one-clock instances, we \fXX:10 Timed Network Games with Clocks show that the BR problem is NP-hard by a reduction from the subset-sum problem. Note the contrast with the COR problem in one-clock instances, which is NLOGSPACE-complete [41]. \u25b6Theorem 5. The BR problem is PSPACE-complete for TNGs with two or more clocks. For one-clock cost-sharing and congestion TNGs it is in PSPACE and NP-hard. Proof. We reduce the BR problem to and from the COR problem, which is PSPACE-complete for PTAs with at least two clocks [19]. A PTA can be seen as a one-player TNG, thus the BR problem for TNGs with two or more clocks is PSPACE-hard. For the upper bound, given a TNG T , strategies Q = \u27e8\u03c01, . . . , \u03c0k\u22121\u27e9for Players 1, . . . , k \u22121, and a threshold \u00b5, we construct a PTA P as in the proof of Theorem 4. Note that the size of P is polynomial in the size of the input and that P has one more clock than T . An optimal path in P is a best response for Player k, and such a path can be found in PSPACE. The \ufb01nal case to consider is TNGs with one clock. We show that the BR problem is NP-hard for such instances using a reduction from the subset-sum problem. The input to that problem is a set of natural numbers A = {a1, . . . , an} and \u00b5 \u2208N, and the goal is to decide whether there is a subset of A whose sum is \u00b5. We start with the cost-sharing case. The game we construct is a two-player game on a network that is depicted in Figure 3. Player 2 has a unique strategy that visits vertex vn+1 in the time interval [\u00b5, \u00b5 + 1]. A Player 1 strategy \u03c0 corresponds to a choice of a subset of A. Player 1\u2019s source is v1 and her target is u2. The vertex vn+1 is the only vertex that has a cost, which is 1, and the other vertices cost 0. For 1 \u2264i \u2264n, Player 1 needs to choose between staying in vertex vi for a duration of ai time units, and exiting the vertex through the top edge, or staying 0 time units, and exiting the vertex through the bottom edge. Finally, she must stay in vn+1 for exactly one time unit. The cost Player 1 pays for vn+1 depends on the load. If she stays there in the global time interval [\u00b5, \u00b5 + 1], she pays 1/2, and otherwise she pays 1. Thus, Player 1 has a strategy with which she pays 1/2 iff there is a subset of A whose sum is \u00b5, and we are done. 
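The correspondence that this reduction exploits can be stated very compactly; the sketch below is our reading of the construction, with best_response_cost an invented name. Player 1's possible arrival times at vn+1 are exactly the subset sums of A, and she pays 1/2 for the mandatory time unit there iff she can arrive exactly at time \u00b5, when Player 2 is present.
def best_response_cost(A, mu):
    # arrival times of Player 1 at v_{n+1} are exactly the subset sums of A
    reachable = {0}
    for a in A:
        reachable |= {t + a for t in reachable}
    # she shares the unit cost of v_{n+1} with Player 2 iff she arrives exactly at time mu
    return 0.5 if mu in reachable else 1.0

print(best_response_cost([3, 5, 7], 12))   # 0.5  (5 + 7 = 12, a yes-instance)
print(best_response_cost([3, 5, 7], 11))   # 1.0  (no subset of A sums to 11)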
v1 v2 v3 \u00b7 \u00b7 \u00b7 1 vn+1 u1 s2 u2 x = \u00b5, \u2205 x = \u00b5 + 1, \u2205 x = a1, {x} x = 0, {x} x = a2, {x} x = 0, {x} x = a3, {x} x = 0, {x} x = an, {x} x = 0, {x} x = 1, {x} Figure 3 NP-hardness proof of best response problem in one clock TNG The reduction for congestion games is similar. Recall that in congestion games, the cost increases with the load, thus a player would aim at using a vertex together with as few other players as possible. The network is the same as the one used above. Instead of two players, we use three players, where Players 2 and 3 have a unique strategy each. Player 2 must stay in vn+1 in the time interval [0, \u00b5] and Player 3 must stay there during the interval [\u00b5 + 1, P 1\u2264i\u2264n ai]. As in the above, Player 1 has a strategy in which she uses vn+1 alone in the time interval [\u00b5, \u00b5 + 1] iff there is a subset of A whose sum is \u00b5. \u25c0 3.3 The social-optimum problem \u25b6Theorem 6. Consider a TNG T = \u27e8k, C, V, E, {\u2113v}v\u2208V , \u27e8si, ui\u27e9i\u2208[k]\u27e9. There is a PTA P with k \u00b7 |C| clocks, |V |k vertices, and two vertices \u00af s and \u00af u such that there is a one-to-one cost-preserving correspondence between pro\ufb01les in T and paths from \u00af s to \u00af u; namely, for a pro\ufb01le P and its corresponding path \u03b7P , we have cost(P) = price(\u03b7P ). \fG. Avni, S. Guha and O. Kupferman XX:11 Proof. First, we show how to construct, given a TNG T with self loops, a TNG T \u2032 that has no self loops. Consider a vertex v that has a self loop e = \u27e8v, g, R, v\u27e9in T . In T \u2032, we remove e, we add a vertex v\u2032, a new clock x and \u201credirect\u201d e to v\u2032 while resetting x. Formally, we have the edge \u27e8v, g, R \u222a{x}, v\u2032\u27e9. We enforce that v\u2032 is left instantaneously using an edge \u27e8v\u2032, {x = 0}, \u2205, v\u27e9. Clearly, the strategies of the players in T and T \u2032 coincide. Recall that the social optimum is obtained when the players do not act sel\ufb01shly, rather they cooperate to \ufb01nd the pro\ufb01le that minimizes their sum of costs. Let T = \u27e8k, C, V, E, {\u2113v}v\u2208V , \u27e8si, ui\u27e9i\u2208[k]\u27e9. We construct a PTA P by taking k copies of T . For i \u2208[k], the i-th copy is used to keep track of the timed path that Player i uses. We need k copies of the clocks of T to guarantee that the individual paths are legal. Recall that the players\u2019 goal is to minimize their total cost, thus for each point in time, the price they pay in P is the sum of their individual costs in T . More formally, consider a vertex \u00af v = \u27e8v1, . . . , vk\u27e9in P and let S\u00af v \u2286V be the set of vertices that appear in \u00af v. Then, the load on a vertex v \u2208S\u00af v in \u00af v is load\u00af v(v) = |{i : vi = v}|, and the rate of \u00af v is P v\u2208S\u00af v \u2113v(load\u00af v(v)). We show below that the cost of the social optimum in T coincides with the price of the optimal timed path in P from \u27e8s1, . . . , sk\u27e9to the vertex \u27e8u1, . . . , uk\u27e9, i.e., the vertices that respectively correspond to the sources and targets of all players. Towards this, we show that for every path \u03b7 in the PTA P, there exists a pro\ufb01le P\u03b7 in T such that price(\u03b7) in P equals cost(P\u03b7) in T . Consider a timed path \u03b7 = (\u00af v1, t1), . . . , (\u00af vn, tn), \u00af u in P where \u00af s = \u00af v1 and \u00af vj = \u27e8v1 j , . . . , vk j \u27e9, for 1 \u2264j \u2264n. 
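Before the projection of \u03b7 onto the individual players, a small sketch of the combined price (our illustration, not the authors' code): each of the m players occupying a vertex v pays \u2113v(m) per time unit, so a product vertex is charged, per time unit, the sum over the occupied vertices of m \u00b7 \u2113v(m), which is exactly the sum of the individual costs that the construction is meant to track.
from collections import Counter

def product_rate(vbar, latency):
    # vbar : tuple (v_1, ..., v_k) of the vertices currently occupied by the k players
    # each occupied vertex v with m players contributes m * l_v(m) per time unit
    load = Counter(vbar)
    return sum(m * latency[v](m) for v, m in load.items())

latency = {'s': lambda m: 2 * m, 'v1': lambda m: m, 'v2': lambda m: 3 * m}
print(product_rate(('s', 's'), latency))    # 8, the two players' joint price while both sit in s
print(product_rate(('v1', 'v2'), latency))  # 1 + 3 = 4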
For each Player i \u2208[k], we construct a timed path \u03c0i in T as follows. Intuitively, we restrict \u03b7 to the i-th component and remove recurring vertices. Consider an index 1 \u2264j \u2264n in which Player i changes vertices in \u03b7, thus vi j \u0338= vi j+1. We call such an index j a changing index. Let j1, . . . , jm be the changing indices for Player i. Let j0 = 1 and jm+1 = n. Note that between changing indices Player i does not change vertices. We de\ufb01ne \u03c0i = \u27e8vi j0, ti 0\u27e9, \u27e8vi j1, ti 1\u27e9, . . . , \u27e8vi jm, ti m\u27e9, where for p \u2208{0} \u222a[m], we have ti p = P jp\u2264l costi(P[i \u2190\u03c0\u2032 i]). The pro\ufb01le P[i \u2190\u03c0\u2032 i] is considered next and the above procedure repeats. A potential function for a game is a function \u03a8 that maps pro\ufb01les to costs, such that the following holds: for every pro\ufb01le P = \u27e8\u03c01, . . . , \u03c0k\u27e9, i \u2208[k], and strategy \u03c0\u2032 i for Player i, we have \u03a8(P) \u2212 \u03a8(P[i \u2190\u03c0\u2032 i]) = costi(P)\u2212costi(P[i \u2190\u03c0\u2032 i]), i.e., the change in potential equals the change in cost of the deviating player. A game is a potential game if it has a potential function. In a potential game with \ufb01nitely many pro\ufb01les, since the potential of every pro\ufb01le is non-negative and in every step of a best-response sequence the potential strictly decreases, every best-response sequence terminates in an NE. It is well-known that RAGs are potential games [48] and since they are \ufb01nite, this implies that an NE always exists. The idea of our proof is as follows. First, we show that TNGs are potential games, which does not imply existence of NE since TNGs have in\ufb01nitely many pro\ufb01les. Then, we focus on a speci\ufb01c best-response sequence that starts from an integral pro\ufb01le and allows the players to deviate only to integral strategies. Finally, we de\ufb01ne normalized TNGs and show how to normalize a TNG in a way that preserves existence of NE. For normalized TNGs, we show that the potential reduces at least by 1 along each step in the best-response sequence, thus it converges to an NE. \fG. Avni, S. Guha and O. Kupferman XX:13 \u25b6Theorem 8. TNGs are potential games. Proof. Consider a TNG T = \u27e8k, C, V, E, {\u2113v}v\u2208V , \u27e8si, ui\u27e9i\u2208[k]\u27e9. Recall that for a pro\ufb01le P, the set of intervals that are used in P is \u03a5P . We de\ufb01ne a potential function \u03a8 that is an adaptation of Rosenthal\u2019s potential function [48] to TNGs. We decompose the de\ufb01nition of \u03a8 into smaller components, which will be helpful later on. For every \u03b3 \u2208\u03a5P and v \u2208V , we de\ufb01ne \u03a8\u03b3,v(P) = PloadP (v,\u03b3) j=1 |\u03b3| \u00b7 \u2113v(j), that is, we take the sum of |\u03b3| \u00b7 \u2113v(j) for all j \u2208[loadP (v, \u03b3)]. We de\ufb01ne \u03a8\u03b3(P) = P v\u2208V \u03a8\u03b3,v(P), and we de\ufb01ne \u03a8(P) = P \u03b3\u2208\u03a5P \u03a8\u03b3(P). Let for some i \u2208[k], we have P \u2032 to be a pro\ufb01le that is obtained by an unilateral deviation of Player i to a strictly bene\ufb01cial strategy \u03c0\u2032 i from her current strategy in P, that is P \u2032 = P[i \u2190\u03c0\u2032] for some i \u2208[k]. We show that \u03a8(P) \u2212\u03a8(P \u2032) = costi(P) \u2212costi(P \u2032). Let T and T \u2032 be the minimal sets such that P and P \u2032 are Tand T \u2032-pro\ufb01les, respectively. 
Let \u03a5P,P \u2032 be a set of intervals that re\ufb01ne the intervals in \u03a5P and \u03a5P \u2032 according to T \u222aT \u2032. Formally, consider an interval [a, b] \u2208\u03a5P . The de\ufb01nition of \u03a5P implies that there is no time point t \u2208T with a < t < b. On the other hand, if there exist time points t1, . . . , tn \u2208T \u2032 with a < t1 < . . . < tn < b, then [a, t1], [t1, t2], . . . , [tn, b] \u2208\u03a5P,P \u2032, and dually for intervals in \u03a5P \u2032. It is not hard to see that since \u03a5P,P \u2032 re\ufb01nes \u03a5P and \u03a5P \u2032, we have P \u03b3\u2208\u03a5P,P \u2032 \u03a8\u03b3(P) = \u03a8(P) and P \u03b3\u2208\u03a5P,P \u2032 \u03a8\u03b3(P \u2032) = \u03a8(P \u2032). Consider an interval \u03b3 \u2208\u03a5P,P \u2032. Recall that P \u2032 is obtained by letting Player i change her strategy from the one in P and the other players\u2019 strategies remain the same. Let v = visitsP (i, \u03b3) and v\u2032 = visitsP \u2032(i, \u03b3). For every v\u2032\u2032 \u0338= v, v\u2032, since the other players do not change their strategies, the loads stay the same over the duration \u03b3 and we have \u03a8\u03b3,v\u2032\u2032(P) = \u03a8\u03b3,v\u2032\u2032(P \u2032). We consider the case where v \u0338= v\u2032. Thus, Player i uses v in the interval \u03b3 in P and uses v\u2032 in the same interval in P \u2032, or the other way around. Thus, we have |loadP (v, \u03b3) \u2212 loadP \u2032(v, \u03b3)| = 1 and |loadP (v\u2032, \u03b3) \u2212loadP \u2032(v\u2032, \u03b3)| = 1. Suppose loadP (v, \u03b3) = loadP \u2032(v, \u03b3) + 1 and loadP (v\u2032, \u03b3) = loadP \u2032(v\u2032, \u03b3) \u22121. Thus \u03a8\u03b3(P) \u2212\u03a8\u03b3(P \u2032) = (\u03a8\u03b3,v(P) + \u03a8\u03b3,v\u2032(P)) \u2212 (\u03a8\u03b3,v(P \u2032)+\u03a8\u03b3,v\u2032(P \u2032)) = (\u03a8\u03b3,v(P)\u2212\u03a8\u03b3,v(P \u2032))+(\u03a8\u03b3,v\u2032(P)\u2212\u03a8\u03b3,v\u2032(P \u2032)) = |\u03b3|\u00b7\u2113v(loadP (v, \u03b3))\u2212 |\u03b3| \u00b7 \u2113v\u2032(loadP \u2032(v\u2032, \u03b3)). Now the costs of Player i in pro\ufb01le P and P \u2032 over the duration \u03b3 are |\u03b3|\u00b7\u2113v(loadP (v, \u03b3)) and |\u03b3|\u00b7\u2113v\u2032(loadP \u2032(v\u2032, \u03b3)) respectively. Thus \u03a8\u03b3(P)\u2212\u03a8\u03b3(P \u2032) = cost\u03b3,i(P)\u2212 cost\u03b3,i(P \u2032). Since we sum up for all \u03b3 \u2208\u03a5P,P \u2032, we get \u03a8(P) \u2212\u03a8(P \u2032) = costi(P) \u2212costi(P \u2032), and we are done. \u25c0 Recall from Theorem 4, that given a TNG, a pro\ufb01le P and an index i, we \ufb01nd the best response of Player i by constructing a PTA. If P is an integral pro\ufb01le, from Theorem 3, we have that the best response of Player i also leads to an integer pro\ufb01le. Thus we have the following lemma. \u25b6Lemma 9. Consider a TNG T and an integral pro\ufb01le P. For i \u2208[k], if Player i has a bene\ufb01cial deviation from P, then she has an integral bene\ufb01cial deviation. The last ingredient of the proof gives a lower bound for the difference in cost that is achieved in a bene\ufb01cial integral deviation for some player i \u2208[k], which in turn bounds the change in potential. We \ufb01rst need to introduce a normalized form of TNGs. Recall that the latency function in a TNG T is of the form \u2113v : [k] \u2192Q\u22650. In a normalized TNG all the latency functions map loads to natural numbers, thus for every vertex v \u2208V , we have \u2113v : [k] \u2192N. Constructing a normalized TNG from a TNG is easy. 
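To make the potential of Theorem 8 and the normalization concrete, here is a small Python sketch (our illustration; summarizing a profile as (period length, loads) pairs is an assumption, and the formal normalization is spelled out right below).
from fractions import Fraction
from math import lcm

def potential(period_loads, latency):
    # period_loads : list of (period_length, {vertex: load}) pairs describing a profile P
    # latency[v]   : function load -> price per time unit in v
    # Psi(P) = sum over periods g and vertices v of sum_{j=1..load} |g| * l_v(j)
    return sum(length * latency[v](j)
               for length, loads in period_loads
               for v, load in loads.items()
               for j in range(1, load + 1))

def normalize(latency_table):
    # latency_table[v] : rational values [l_v(1), ..., l_v(k)]; multiply everything by the
    # lcm L of the denominators, so a beneficial integral deviation improves a cost by >= 1
    denominators = [Fraction(c).denominator for col in latency_table.values() for c in col]
    L = lcm(*denominators)
    return {v: [Fraction(c) * L for c in col] for v, col in latency_table.items()}, L

table = {'v': [Fraction(1, 1), Fraction(1, 2), Fraction(1, 3)]}   # cost 1 shared by up to 3 players
normalized, L = normalize(table)
print(L, normalized['v'])   # 6 [Fraction(6, 1), Fraction(3, 1), Fraction(2, 1)]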
Let L be the least common multiple of the denominators of the elements in the set {\u2113v(l) : v \u2208V and l \u2208[k]}. For every latency function \u2113v and every l \u2208[k], we construct a new latency function \u2113\u2032 v by \u2113\u2032 v(l) = \u2113v(l) \u00b7 L. Consider a TNG T and let T \u2032 be the normalized TNG that is constructed from T . It is not hard to see that for every profile P and i \u2208[k], the cost costi(P) in T \u2032 equals L \u00b7 costi(P) in T . We can thus restrict attention to normalized TNGs, as the existence of NE and the convergence of best-response sequences in T \u2032 imply the same properties in T . In order to show that a best-response sequence converges in TNGs, we bound the change of potential in each best-response step by observing that in normalized TNGs, the cost a player pays is an integer. \u25b6Lemma 10. Let T be a normalized TNG, P = \u27e8\u03c01, . . . , \u03c0k\u27e9be an integral profile in T , and \u03c0\u2032 i be a beneficial integral deviation for Player i, for some i \u2208[k]. Then, costi(P)\u2212costi(P[i \u2190\u03c0\u2032 i]) \u22651. We can now prove the main result in this section. \u25b6Theorem 11. Every TNG has an integral NE. Moreover, from an integral profile P, there is a best-response sequence that converges to an integral NE. Proof. Lemma 9 allows us to restrict attention to integral deviations. Indeed, consider an integral profile P. Lemma 9 implies that if no player has a beneficial integral deviation from P, then P is an NE in T . We start a best-response sequence from some integral profile PI and allow the players to deviate with integral strategies only. Consider a profile P and let P \u2032 be a profile that is obtained from P by a deviation of Player i. Recall from Theorem 8 that costi(P) \u2212costi(P \u2032) = \u03a8(P) \u2212\u03a8(P \u2032). Lemma 10 implies that when the deviation is beneficial, we have \u03a8(P) \u2212\u03a8(P \u2032) \u22651. Since the potential is non-negative, the best-response sequence above converges within \u03a8(PI) steps. \u25c0 \u25b6Remark. A TNG that allows < and > operators on the guards is not guaranteed to have an NE. Indeed, in a PTA, which can be seen as a one-player TNG, strict guards imply that an optimal timed path may not be achieved. In turn, this means that an NE does not exist. To overcome this issue, we use \u03f5-NE, for \u03f5 > 0; an \u03f5-deviation is one that improves the payoff of a player by at least \u03f5, and an \u03f5-NE is a profile in which no player has an \u03f5-deviation. Our techniques can be adapted to show that \u03f5-NE exist in TNGs with strict guards. The proof uses the results of [19] that show that an \u03f5-optimal timed path exists in PTAs. The proof technique for existence of NE in TNGs with non-strict guards can then be adapted to the strict-guard case. 5 Equilibrium Inefficiency In this section we address the problem of measuring the degradation in social welfare due to selfish behavior, quantified by the PoS and PoA measures. We show that the upper bounds from RAGs on these two measures apply to TNGs. For cost-sharing TNGs, we show that the PoS and PoA are at most log k and k, respectively, as in cost-sharing RAGs. Matching lower bounds were given in [14] already for GTNGs. For congestion TNGs with affine latency functions, we show that the PoS and PoA are 1 + \u221a3/3 \u2248 1.577 and 5/2, respectively, as in congestion RAGs.
Again, a matching lower bound for PoA is shown in [14] for GTNGs, and a matching lower bound for the PoS remains open. Let F denote a family of latency functions and F-TNGs and F-RAGs denote, respectively, the family of TNGs and RAGs that use latency functions from this family. \u25b6Theorem 12. Consider a family of latency functions F. We have PoS(F-TNGs) \u2264PoS(F-RAGs) and PoA(F-TNGs) \u2264PoA(F-RAGs). In particular, the PoS and PoA for cost-sharing TNGs with k players is at most log(k) and k, respectively, and for congestion TNGs with af\ufb01ne latency functions it is at most roughly 1.577 and 5 2 respectively. Proof. We prove for PoS in cost-sharing games and the other proofs are similar. Consider a TNG T and let N 1, N 2, . . . be a sequence of NEs whose cost tends to c\u2217= infP \u2208\u0393(T ) cost(P). Let O be a social optimum pro\ufb01le in T , which exists due to Theorem 6. Thus, PoS(T ) = limj\u2192\u221ecost(N j)/ cost(O). We show that each element in the sequence is bounded above by PoS(cost-sharing RAGs), which implies that PoS(T ) \u2264PoS(cost-sharing RAGs), and hence PoS(cost-sharing TNGs) \u2264 PoS(cost-sharing RAGs). For each j \u22651, we construct below an RAG Rj that has PoS(Rj) = cost(N j)/cost(O), and since Rj is a cost-sharing RAG, we have PoS(Rj) \u2264PoS(costsharing RAGs). \fG. Avni, S. Guha and O. Kupferman XX:15 For j \u22651, we construct a RAG Rj in which, for each i \u2208[k], Player i has two strategies; one corresponding to her strategy in N j and one corresponding to her strategy in O. Formally, let \u03a5j = \u03a5Nj\u222a\u03a5O be the time periods of N j and O. We construct a RAG Rj = \u27e8k, Ej, {\u03a3j i}i\u2208[k], {\u2113e}e\u2208Ej\u27e9, where Ej = V \u00d7 \u03a5j and for each e = \u27e8v, \u03b3\u27e9\u2208Ej, we de\ufb01ne \u2113e such that for each l \u2208[k], we have that \u2113e(l) = |\u03b3| \u00b7 \u2113v(l). and we de\ufb01ne the players\u2019 strategies below. Recall that, for a pro\ufb01le P, i \u2208[k], and an interval \u03b3, visitsP (i, \u03b3) denotes the vertex at which Player i stays during \u03b3 in P. We extend the de\ufb01nition of visitsP to allow periods that occur after Player i has reached her destination, and de\ufb01ne that the function returns ui. Moreover, we assume w.l.o.g. that ui is a vertex with no outgoing edges, thus the paths of the other players do not traverse ui. Player i\u2019s two strategies are nj i = {visitsNj(i, \u03b3) : \u03b3 \u2208\u03a5Nj} and oj i = {visitsO(i, \u03b3) : \u03b3 \u2208\u03a5O}. Clearly, \u27e8oj 1, . . . , oj k\u27e9is the social optimum of Rj. Also, \u27e8nj 1, . . . , nj k\u27e9is an NE and we assume it is the cheapest NE in Rj. Otherwise, we can alter N j to match the best NE in Rj and only improve the sequence. Thus, we have PoS(Rj) = cost(N j)/cost(O). Since Rj is a cost-sharing RAG, we have PoS(Rj) \u2264PoS(cost-sharing RAGs), and we are done. \u25c0 6 Time Bounds Recall that due to resets of clocks, the time by which a pro\ufb01le ends can be potentially unbounded. It is interesting to know, given a TNG, whether there are time bounds within which some interesting pro\ufb01les like an NE and an SO are guaranteed to exist. Earlier we showed that every TNG is guaranteed to have an integral NE (Theorem 11) and an integral SO (Theorem 6). In this section we give bounds on the time by which such pro\ufb01les end. 
That is, given a TNG T , we \ufb01nd tNE(T ), TSO(T ) \u2208Q\u22650 such that an integral NE N and an integral SO O exist in T in which the players reach their destinations by time tNE(T ) and TSO(T ) respectively. We start by showing a time bound on an optimal timed path in a PTA, and then proceed to TNGs. \u25b6Lemma 13. Consider a PTA P = \u27e8C, V, E, {rv}v\u2208V \u27e9, and let \u03c7 be the largest constant appearing in the guards on the edges of P. Then, for every s, u \u2208V , there is an integral optimal timed path from s to u that ends by time |V | \u00b7 (\u03c7 + 2)|C|. Proof. Consider an optimal integral timed path \u03b7 in P that ends in the earliest time and includes no loop that is traversed instantaneously. Let v0, . . . , vn be the sequence of vertices that \u03b7 traverses, and, for 0 \u2264i < n, let \u03bai be the clock valuation before exiting the vertex vi. Since \u03b7 is integral, \u03bai assigns integral values to clocks. Note that since the largest constant appearing in a guard in P is \u03c7, the guards in P cannot differentiate between clock values greater than \u03c7. We abstract away such values and de\ufb01ne the restriction of a clock valuation \u03bai to be \u03b2i : C \u2192({0} \u222a[\u03c7] \u222a{\u22a4}) by setting, for x \u2208C, the value \u03b2i(x) = \u03bai(x), when \u03bai(x) \u2264\u03c7, and \u03b2i(x) = \u22a4, when \u03bai(x) > \u03c7. Assume towards contradiction that \u03b7 ends after time |V | \u00b7 (\u03c7 + 2)|C|. Then, there are 0 \u2264i < j < n such that \u27e8vi, \u03b2i\u27e9= \u27e8vj, \u03b2j\u27e9. Let \u03b7 = \u03b71 \u00b7 \u03b72 \u00b7 \u03b73 be a partition of \u03b7 such that \u03b72 is the sub-path between the i-th and j-th indices. Consider the path \u03b7\u2032 = \u03b7\u2032 1 \u00b7\u03b7\u2032 3 that is obtained from \u03b7 by removing the sub-path \u03b72. First, note that \u03b7\u2032 is a legal path. Indeed, the restrictions of the clock valuations in \u03b71 and \u03b73 match these in \u03b7\u2032 1 and \u03b7\u2032 3, that is, \u03b7\u2032 = \u03b71 \u00b7 \u03b73. Second, since we assume that traversing the loop \u03b72 is not instantaneous, we know that \u03b7\u2032 ends before \u03b7. Moreover, since the rates in P are non-negative, we have price(\u03b7\u2032) \u2264price(\u03b7), and we reach a contradiction to the fact that \u03b7 is an optimal timed path that ends earliest. \u25c0 \u25b6Theorem 14. For a k-player TNG T with a set V of vertices and a set C of clocks, there exists an SO that ends by time O(|V |k \u00b7 \u03c7k|C|), where \u03c7 is the maximum constant appearing in T . For every k \u22651, there is a k-player (cost-sharing and congestion) TNG Tk such that Tk has O(k) states, the boundaries in the guards in Tk are bounded by O(k log k), and any SO in Tk requires time 2\u2126(k). \fXX:16 Timed Network Games with Clocks Proof. We start with the upper bound. Consider a TNG T with a set V of vertices and a set C of clocks. By Theorem 6, we can construct a PTA P with |V |k vertices and k|C| clocks such that a social optimum of T is an optimal timed path in P. Applying Lemma 13, we are done. We turn to the lower bounds. We show that for every k \u22651, there is a k-player (cost-sharing and congestion) TNG Tk such that Tk has O(k) states, the boundaries in the guards in Tk are bounded by O(k log k), and any SO in Tk requires time 2\u2126(k). 1 . . . 
Figure 4 The time required for the SO is not polynomial. Consider the k-player cost-sharing TNG appearing on the left of Figure 4. Let p1, . . . , pk be relatively prime (e.g., the \ufb01rst k prime numbers). All the vertices in the TNG have cost 0, except for v, which has some positive cost function. Each player i has to spend one time unit in v in her path from si to u. In an SO, all k players spend this one time unit simultaneously, which forces them all to reach v at time p1 \u00b7 p2 \u00b7 \u00b7 \u00b7 pk. Since the i-th prime number is O(i log i) and the product of the \ufb01rst i prime numbers is 2\u2126(i), we are done. We note that we could de\ufb01ne the TNG also with no free vertices, that is vertices with 0 cost, by setting the cost in v to be much higher than those in the source vertices. For congestion games, the example is more complicated. We start with the case of two players. Consider the congestion TNG appearing on the right of Figure 4. Assume that p1 and p2 are relatively prime, rs1(1) = rs2(1) = 0, and rs1(2) = rs2(2) = 1. In the SO, the two players avoid each other in their paths from si to ui, and the way to do so is to wait p1 \u00b7 p2 time units before the edge from si to s3\u2212i is traversed. Below we generalize this example to k players. Again, we could de\ufb01ne the TNG with no free vertices. We generalize the 2-player congestion TNG appearing on the right of Figure 4 to an arbitrary number of players. The extension to 3 players appears in Figure 5 below. As in the case of 2 players, the cost function in the vertices s1, s2, and s3 is 0 for load 1 and strictly positive for higher loads. In order to reach her target, Player i has to traverse 2 edges in the triangle before she can take the edge to ui. In the SO that ends at the earliest possible time, the players perform these traversals together, so the game needs time 2 \u00b7 p1 \u00b7 p2 \u00b7 p3. Figure 5 The time required for the SO is 2 \u00b7 p1 \u00b7 p2 \u00b7 p3. In the extension to k players, the TNG consists of a k-vertex polygon (to which the target vertices are connected), and the players have to traverse k \u22121 edges in it. Doing this simultaneously requires time (k \u22121) \u00b7 p1 \u00b7 p2 \u00b7 \u00b7 \u00b7 pk. \u25c0 We proceed to derive a time bound for the existence of an NE. For a TNG T , let LT \u2208N be the smallest number such that multiplying the latency functions by LT results in a normalized TNG. Recall that SO(T ) is the cost of a social optimum in T . \u25b6Theorem 15. Consider a TNG T with k players, played on a timed network \u27e8V, E, C\u27e9, and let \u03c7 be the maximum constant appearing in a guard. Then, there is an NE in T that ends by time O(\u03d5 \u00b7 |V | \u00b7 \u03c7|C| + |V |k \u00b7 \u03c7k|C|), where \u03d5 = LT \u00b7 SO(T ) for congestion TNGs and \u03d5 = LT \u00b7 log(k) \u00b7 SO(T ) for cost-sharing TNGs. Proof. Recall the proof of Theorem 11 that shows that every TNG has an integral NE: we choose an initial integral pro\ufb01le P and perform integral best-response moves until an NE is reached. The number of iterations is bounded by the potential \u03a8(P) of P. We start the best-response sequence from a social-optimum pro\ufb01le O that ends earliest.
By Theorem 14, there is such a pro\ufb01le that ends by time O(|V |k \u00b7 \u03c7k|C|). Let \u03d5 = LT \u00b7 SO(T ) in the case of congestion TNGs and \u03d5 = LT \u00b7 (ln(k) + 1) \u00b7 SO(T ) in the case of cost-sharing TNGs. It is not hard to show that \u03a8(O) \u2264\u03d5. Next, we bound the time that is added in a best-response step. We recall the construction in Theorem 4 of the PTA P for \ufb01nding a best-response move. Consider a TNG T and a pro\ufb01le of strategies P, where, w.l.o.g., we look for a best-response for Player k. Suppose the strategies of Players 1, . . . , k \u22121 take transitions at times \u03c41, . . . , \u03c4n. We construct a PTA P with n + 1 copies of T . For 1 \u2264i \u2264n + 1, an optimal path in P starts in the \ufb01rst copy and moves from copy i to copy (i + 1) at time \u03c4i. We use the additional \u201cglobal\u201d clock to enforce these transitions. A key observation is that in the last copy, this additional clock is never used. Thus, the largest constant in a guard in the last copy coincides with \u03c7, the largest constant appearing in T . Let \u03b7 be an optimal path in P and \u03c0k the corresponding strategy for Player k. We distinguish between two cases. If \u03b7 does not enter the last copy of P, then it ends before time \u03c4n, namely the latest time at which a player reaches her destination. Then, the pro\ufb01le P[k \u2190\u03c0k] ends no later than P. In the second case, the path \u03b7 ends in the last copy of P. We view the last copy of P as a PTA. By Lemma 13, the time at which \u03b7 ends is within |V |\u00b7(\u03c7+2)|C| since its entrance into the copy, which is \u03c4n. Then, P[i \u2190\u03c0k] ends at most |V | \u00b7 (\u03c7 + 2)|C| time units after P. To conclude, the best-response sequence terminates in an NE that ends by time O(\u03d5 \u00b7 |V | \u00b7 (\u03c7 + 2)|C| + |V |k \u00b7 \u03c7k|C|). \u25c0 7 Discussion and Future Work The model of TNGs studied in this paper extends the model of GTNGs introduced in [14] by adding clocks. From a practical point of view, the addition of clocks makes TNGs signi\ufb01cantly more expressive than GTNGs and enables them to model the behavior of many systems that cannot be modeled using GTNGs. From a theoretical point of view, the analysis of TNGs poses different and dif\ufb01cult technical challenges. In the case of GTNGs, a main tool for obtaining positive results is a reduction between GTNGs and NGs. Here, in order to obtain positive results we need to combine techniques from NGs and PTAs. We left several open problems. In Theorem 11, we describe a method for \ufb01nding an integral NE through a sequence of BR moves. We leave open the complexity of \ufb01nding an NE in TNGs. For the upper bound, we conjecture that there is a PSPACE algorithm for the problem. For the lower bound, we would need to \ufb01nd an appropriate complexity class of search problems and show hardness for that class. For example, PLS [31], which lies \u201cclose\u201d to P, and includes the problem of \ufb01nding an NE in NGs, consists of search problems in which a local search, e.g., a BR sequence, terminates. Unlike NGs, where a BR can be found in polynomial time, in TNGs, the problem is PSPACE-complete. To \fXX:18 Timed Network Games with Clocks the best of our knowledge, complexity classes for search problems that are higher than PLS were not studied. Further we show that the BR and SO problems for one-clock TNGs is in PSPACE and is NP-hard, leaving open the tight complexity. 
This work belongs to a line of works that transfer concepts and ideas between the areas of formal veri\ufb01cation and algorithmic game theory: logics for specifying multi-agent systems [9, 26], studies of equilibria in games related to synthesis and repair problems [25, 24, 33, 4], and of non-zero-sum games in formal veri\ufb01cation [28, 22]. This line of work also includes ef\ufb01cient reasoning about NGs with huge networks [40, 13], an extension of NGs to objectives that are richer than reachability [16], and NGs in which the players select their paths dynamically [15]. For future work, we plan to apply the real-time behavior of TNGs to these last two concepts; namely, TNGs in which the players\u2019 objectives are given as a speci\ufb01cation that is more general than simple reachability or TNGs in which the players reveal their choice of timed path in steps, bringing TNGs closer to the timed games of [12, 3]." + }, + { + "url": "http://arxiv.org/abs/1804.04372v4", + "title": "Infinite-Duration Poorman-Bidding Games", + "abstract": "In two-player games on graphs, the players move a token through a graph to\nproduce an infinite path, which determines the winner or payoff of the game.\nSuch games are central in formal verification since they model the interaction\nbetween a non-terminating system and its environment. We study {\\em bidding\ngames} in which the players bid for the right to move the token. Two bidding\nrules have been defined. In {\\em Richman} bidding, in each round, the players\nsimultaneously submit bids, and the higher bidder moves the token and pays the\nother player. {\\em Poorman} bidding is similar except that the winner of the\nbidding pays the \"bank\" rather than the other player. While poorman\nreachability games have been studied before, we present, for the first time,\nresults on {\\em infinite-duration} poorman games. A central quantity in these\ngames is the {\\em ratio} between the two players' initial budgets. The\nquestions we study concern a necessary and sufficient ratio with which a player\ncan achieve a goal. For reachability objectives, such {\\em threshold ratios}\nare known to exist for both bidding rules. We show that the properties of\npoorman reachability games extend to complex qualitative objectives such as\nparity, similarly to the Richman case. Our most interesting results concern\nquantitative poorman games, namely poorman mean-payoff games, where we\nconstruct optimal strategies depending on the initial ratio, by showing a\nconnection with {\\em random-turn based games}. The connection in itself is\ninteresting, because it does not hold for reachability poorman games. We also\nsolve the complexity problems that arise in poorman bidding games.", + "authors": "Guy Avni, Thomas A. Henzinger, Rasmus Ibsen-Jensen", + "published": "2018-04-12", + "updated": "2020-01-27", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.LO" + ], + "main_content": "Introduction Two-player in\ufb01nite-duration games on graphs are a central class of games in formal veri\ufb01cation [4] and have deep connections to foundations of logic [43]. They are used to model the interaction between a system and its environment, and the problem of synthesizing a correct system then reduces to \ufb01nding a winning strategy in a graph game [41]. Theoretically, they have been widely studied. 
For example, the problem of deciding the winner in a parity game is a rare problem that is in NP and coNP [27], not known to be in P, and for which a quasi-polynomial algorithm was only recently discovered [15]. A graph game proceeds by placing a token on a vertex in the graph, which the players move throughout the graph to produce an in\ufb01nite path (\u201cplay\u201d) \u03c0. The game is zero-sum and \u03c0 determines the winner or payoff. Two ways to classify graph games are according to the type of objectives of the players, and according to the mode of moving the token. For example, in reachability games, the objective of Player 1 is to reach a designated vertex t, and the objective of Player 2 is to avoid t. An in\ufb01nite play \u03c0 is winning for Player 1 iff \u2217This paper is a full version of [7]. This research was supported in part by the Austrian Science Fund (FWF) under grants S11402-N23 (RiSE/SHiNE), Z211-N23 (Wittgenstein Award), and M 2369-N33 (Meitner fellowship). \u2020guy.avni@ist.ac.at \u2021tah@ist.ac.at \u00a7ribsen@ist.ac.at arXiv:1804.04372v4 [cs.GT] 27 Jan 2020 \fit visits t. The simplest mode of moving is turn based: the vertices are partitioned between the two players and whenever the token reaches a vertex that is controlled by a player, he decides how to move the token. We study a new mode of moving in in\ufb01nite-duration games, which is called bidding, and in which the players bid for the right to move the token. The bidding mode of moving was introduced in [31, 32] for reachability games, where two variants of \ufb01rst-price auctions where studied: Each player has a budget, and before each move, the players submit sealed bids simultaneously, where a bid is legal if it does not exceed the available budget, and the higher bidder moves the token. The bidding rules differ in where the higher bidder pays his bid. In Richman bidding (named after David Richman), the higher bidder pays the lower bidder. In poorman bidding, which is the bidding rule that we focus on in this paper, the higher bidder pays the \u201cbank\u201d. Thus, the bid is deducted from his budget and the money is lost. Note that while the sum of budgets is constant in Richman bidding, in poorman bidding, the sum of budgets shrinks as the game proceeds. One needs to devise a mechanism that resolves ties in biddings, and our results are not affected by the tie-breaking mechanism that is used. Bidding games naturally model decision-making settings in which agents need to invest resources in an ongoing manner. We argue that the modelling capabilities of poorman bidding exceed those of Richman bidding. Richman bidding is restricted to model \u201cscrip\u201d systems that use internal currency to avoid free riding and guarantee fairness. Poorman bidding, on the other hand, model a wider variety of settings since the bidders pay their bid to the auctioneer. We illustrate a speci\ufb01c application of in\ufb01nite-duration poorman bidding in reasoning about ongoing stateful auctions, which we elaborate on in Section 4.6. Example 1. Consider a setting in which two buyers compete in auction to buy k \u2208I N goods that are \u201crented\u201d for a speci\ufb01c time duration. For example, a webpage has k ad slots, and each slot is sold for a \ufb01xed time duration, e.g., one day. At time point 1 \u2264i \u2264k, good i is put up for sale in a second-price auction, where the higher bidder pays the auctioneer and keeps the good for the \ufb01xed duration of time. We focus on the \ufb01rst buyer. 
Each good entails a reward for him, and we are interested in devising a bidding strategy that maximize the long-run average of the rewards. For example, the simple case of a site with one ad slot is represented by the game that is depicted in Fig. 1, where the vertex v1 represents the case that Player 1\u2019s ad appears and v2 represents the case that Player 2\u2019s ad appears. Player 1\u2019s goal is to maximize the long-run average time that his ad appears, which intuitively amounts to \u201cstaying\u201d as much time as possible in v1. Player 1\u2019s goal is formally described as a mean-payoff objective, which we elaborate on below. Our results on mean-payoff poorman games allow us to construct an optimal strategy for the players. Another advantage of poorman bidding over the Richman bidding is that their de\ufb01nition generalizes easily to domains in which the restriction of a \ufb01xed sum of budgets is an obstacle. For example, in ongoing auctions as described in the example above, often a good is sold to multiple buyers with partial information of the budgets. These are two orthogonal concepts that have not been studied in bidding games and are both easier to de\ufb01ne in poorman bidding rather than in Richman bidding. A central quantity in bidding games is the ratio of the players\u2019 initial budgets. Formally, let Bi \u2208I R\u22650, for i \u2208{1, 2}, be Player i\u2019s initial budget. The total initial budget is B = B1 + B2 and Player i\u2019s initial ratio is Bi/B. The \ufb01rst question that arises in the context of bidding games is a necessary and suf\ufb01cient initial ratio for a player to guarantee winning. For reachability games, it was shown in [31, 32] that such threshold ratios exist in every reachability Richman and poorman game: for every vertex v there is a ratio Th(v) \u2208[0, 1] such that (1) if Player 1\u2019s initial ratio exceeds Th(v), he can guarantee winning, and (2) if his initial ratio is less than Th(v), Player 2 can guarantee winning. This is a central property of the game, which is a form of determinacy, and shows that no ties can occur.1 An intriguing equivalence was observed in [31, 32] between random-turn games [39] and reachability bidding games, but only with Richman-bidding. For r \u2208[0, 1], the random-turn game that corresponds to a 1When the initial budget of Player 1 is exactly Th(v), the winner of the game depends on how we resolve draws in biddings. 2 \fbidding game G w.r.t. r, denoted RTr(G), is a special case of stochastic game [23]: rather than bidding for moving, in each round, independently, Player 1 is chosen to move with probability r and Player 2 moves with the remaining probability of 1 \u2212r. Richman reachability games are equivalent to uniform randomturn games, i.e., with r = 0.5 (see Theorem 7 for a precise statement of the equivalence). For reachability poorman-bidding games, no such equivalence is known and it is unlikely to exist since there are (simple) \ufb01nite poorman games with irrational threshold ratios. The lack of such an equivalence makes poorman games technically more complicated. More interesting, from the synthesis and logic perspective, are in\ufb01nite winning conditions, but they have only been studied in the Richman setting previously [6]. We show, for the \ufb01rst time, existence of threshold ratios in qualitative poorman games with in\ufb01nite winning conditions such as parity. 
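One way to see where the irrational thresholds come from is to iterate the local poorman update that the paper establishes later (Theorem 7), Th(v) = Th(v+) / (1 - Th(v-) + Th(v+)), on a tiny double-reachability game. The sketch below is ours and only an illustration; the vertex names t1, a, b, t2 and the function name are hypothetical. The game is t1 <- a <-> b -> t2, where t1 is Player 1's target (threshold 0) and t2 is Player 2's target (threshold 1).

def poorman_thresholds(iters=200):
    a, b = 0.5, 0.5  # arbitrary initial guesses for the two interior vertices
    for _ in range(iters):
        # Neighbors of a are {t1, b}; neighbors of b are {a, t2}.
        a = b / (1.0 - 0.0 + b)      # v- = t1 (threshold 0), v+ = b
        b = 1.0 / (1.0 - a + 1.0)    # v- = a, v+ = t2 (threshold 1)
    return a, b

print(poorman_thresholds())  # approx (0.3819..., 0.6180...)

The two interior thresholds converge to (3 - sqrt(5))/2 and (sqrt(5) - 1)/2, the irrational golden-ratio values the paper reports for this game. For comparison, running the same iteration with the Richman update Th(v) = (Th(v+) + Th(v-))/2 converges to the rational values 1/3 and 2/3, consistent with Richman thresholds coming from uniform random-turn games.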
We show a linear reduction from poorman parity games to poorman reachability games, similarly to the proof in the Richman setting. First, we show that in a strongly-connected game, one of the players wins with any positive initial ratio, thus the bottom strongly-connected components (BSCCs, for short) of the game graph can be partitioned into \u201cwinning\u201d for Player 1 and \u201closing\u201d for Player 1. Second, we construct a reachability poorman game in which each player tries to force the game to a BSCC that is winning for him. Things get more interesting in mean-payoff poorman games, which are zero-sum quantitative games; an in\ufb01nite play of the game is associated with a payoff which is Player 1\u2019s reward and Player 2\u2019s cost, thus we respectively refer to the players in a mean-payoff game as Max and Min. The central question in these games is: Given a value c \u2208Q, what is the initial ratio that is necessary and suf\ufb01cient for Max to guarantee a payoff of c? More formally, we say that c is the value with respect to a ratio r \u2208[0, 1] if for every \u03f5 > 0, we have (1) when Max\u2019s initial ratio is r + \u03f5, he can guarantee a payoff of at least c, and (2) intuitively, Max cannot hope for more: if Max\u2019s initial ratio is r \u2212\u03f5, then Min can guarantee a payoff of at most c. Our most technically-involved contribution is a construction of optimal strategies in mean-payoff poorman games, which depend on the initial ratio r \u2208[0, 1]. The key component of the solution is a quantitative solution to strongly-connected games, which, similar to parity games, allows us to reduce general meanpayoff poorman games to reachability poorman games by reasoning about the BSCCs of the graph. Before describing our solution, let us highlight an interesting difference between Richman and poorman bidding. With Richman bidding, it is shown in [6] that a strongly-connected mean-payoff Richman-bidding game has a value that does not depend on the initial ratio and only on the structure of the game. It thus seems reasonable to guess that the initial ratio would not matter with poorman bidding as well. We show, however, that this is not the case; the higher Max\u2019s initial ratio is, the higher the payoff he can guarantee. We demonstrate this phenomenon with the following simple game. Technically, each vertex in the graph has a weight the payoff of an in\ufb01nite play \u03c0 is de\ufb01ned as follows. The energy of a pre\ufb01x \u03c0n of length n of \u03c0, denoted E(\u03c0n), is the sum of the weights it traverses. The payoff of \u03c0 is lim infn\u2192\u221eE(\u03c0n)/n. Example 2. Consider the mean-payoff poorman game that is depicted in Figure 1. We take the viewpoint of Min in this example. We consider the case of r = 1 2, and claim that the value with respect to r = 1 2 is 0. Suppose for convenience that Min wins ties. Note that the players\u2019 choices upon winning a bid in the game are obvious, and the dif\ufb01culty in devising a strategy is \ufb01nding the right bids. Intuitively, Min copies Max\u2019s bidding strategy. Suppose, for example, that Min starts with a budget of 1 + \u03f5 and Max starts with 1, for some \u03f5 > 0. A strategy for Min that ensures a payoff of 0 is based on a queue of numbers as follows: In round i, if the queue is empty Min bids \u03f5\u00b72\u2212i, and otherwise the maximal number in the queue. If Min wins, he removes the minimal number from the queue (if non-empty). 
If Max wins, Min adds Max\u2019s winning bid to the queue. For example, suppose Max\u2019s \ufb01rst bid is 0.2, he wins since Min bids \u03f5/2, and Min adds 0.2 to the empty queue. Min\u2019s second bid is 0.2. Suppose Max bids 0.3 in the second turn, thus he wins again. Min adds 0.3 to the queue and bids 0.3 in the third bidding. Suppose Max bids 0.1, thus Min wins and removes 0.3 from the queue. In the next bidding his bid is 0.2. We make several observations. (1) Min\u2019s strategy is legal: it never bids higher than the available budget. 3 \f1 \u22121 Figure 1: A mean-payoff game. 1 \u22121 \u22121 \u22122 v1 v2 v3 v4 Figure 2: A second mean-payoff game. (2) The size of the queue is an upper bound on the energy; indeed, every bid in the queue corresponds to a Max winning bid that is not \u201cmatched\u201d (the size is an upper bound since Min might win biddings when the queue is empty). (3) If Min\u2019s queue \ufb01lls, it will eventually empty. Indeed, if b \u2208I R is in the queue, in order to keep b in the queue, Max must bid at least b, thus eventually his budget runs out. Combining, since the energy is at most 0 when the queue empties, Min\u2019s strategy guarantees that the energy is at most 0 in\ufb01nitely often. Since we use lim inf in the de\ufb01nition of the payoff, Min guarantees a non-positive payoff. Showing that Max can guarantee a non-negative payoff with an initial ratio of 1 2 + \u03f5 is harder, and a proof for the general case can be found in Section 4. We show that the value c decreases with Max\u2019s initial ratio r. We set r = 1 3. Suppose, for example, that Min\u2019s initial budget is 2 + \u03f5 and Max\u2019s initial budget is 1. We claim that Min can guarantee a payoff of \u22121/3. His strategy is similar to the one above, only that whenever Max wins with b, Min pushes b to the queue twice. Observations (1-3) still hold. The difference is that now, since every Max win is matched by two Min wins, when the queue empties, the number of Min wins is at least twice as much as Max\u2019s wins, and the claim follows. This example shows the contrast between Richman and poorman bidding. When using Richman bidding, Min can guarantee a payoff of 0 with every initial budget, and cannot guarantee \u2212\u03f5, even with a ratio of 1 \u2212\u03b4, for any \u03f5, \u03b4 > 0. In order to solve strongly-connected mean-payoff poorman games, we identify the following equivalence with biased random-turn games. Consider a strongly-connected mean-payoff poorman game G and a ratio r \u2208[0, 1]. Recall that RTr(G) is the random-turn game in which Max is chosen with probability r and Min with probability 1 \u2212r. Since G is a mean-payoff game, the game RTr(G) is a stochastic mean-payoff game. Its value, denoted MP(RTr(G)), is the optimal expected payoff that the players can guarantee, and is known to exist [35]. For every \u03f5 > 0, we show that when Max\u2019s initial ratio is r + \u03f5, he can guarantee a payoff of MP(RTr(G)), and he cannot do better: Min can guarantee a payoff of at most MP(RTr(G)) with an initial ratio of 1 \u2212r + \u03f5. Thus, the value of G w.r.t. r equals MP(RTr(G)). One way to see this result is as a form of derandomization: we show that Max has a deterministic bidding strategy in G that ensures a behavior that is similar to the random behavior of RTr(G). 
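Min's queue-based copying strategy from Example 2 above is easy to simulate. The sketch below is ours and only an illustration: Max's bids are a random placeholder rather than a strategy from the paper, Min's bid is capped at his remaining budget so the run stays well-defined, and the assertion checks observation (2), that the queue size bounds the energy from above.

import random

def simulate(rounds=1000, eps=0.5, seed=0):
    rng = random.Random(seed)
    min_budget, max_budget = 1.0 + eps, 1.0
    queue, energy = [], 0                          # queue of Max's unmatched winning bids
    for i in range(1, rounds + 1):
        wanted = max(queue) if queue else eps * 2.0 ** (-i)
        min_bid = min(wanted, min_budget)          # cap at the remaining budget
        max_bid = rng.uniform(0.0, max_budget)     # placeholder opponent
        if max_bid > min_bid:                      # Max wins and pays the bank
            max_budget -= max_bid
            queue.append(max_bid)
            energy += 1
        else:                                      # Min wins (ties go to Min) and pays the bank
            min_budget -= min_bid
            if queue:
                queue.remove(min(queue))
            energy -= 1
        assert energy <= len(queue)                # observation (2) of Example 2
    return energy, len(queue)

print(simulate())  # the energy never exceeds the queue size, so it is at most 0 whenever the queue empties

For this particular game the equivalence is also easy to check by hand: in RT_r(G) the token sits at the +1 vertex a fraction r of the time, so MP(RT_r(G)) = 2r - 1, which is 0 at r = 1/2 and -1/3 at r = 1/3, exactly the values derived in Example 2.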
We \ufb01nd this equivalence between the two models particularly surprising due to the fact that, unlike Richman bidding, an equivalence between random-turn games and reachability poorman games is unlikely to exist. Second, while Richman games are equivalent to uniform random-turn games, we are not aware of any known equivalences between bidding games and biased random-turn games, i.e., r \u0338= 0.5. Recall that a strongly-connected mean-payoff Richman-bidding game G has a value c that does not depend on the initial ratio. The value comes from an equivalence with uniform random-turn games [6]: the value c of G under Richman bidding equals the value of the uniform stochastic mean-payoff game RT0.5(G). That is, with Richman bidding, Min can guarantee c with an initial ratio of \u03b4, and cannot guarantee c \u2212\u03f5 with an initial ratio of 1 \u2212\u03b4, for every \u03f5, \u03b4 > 0. One interesting corollary is that the value of G when viewed as a Richman game equals the value of G when viewed as a poorman game with respect to the initial ratio 0.5. We are not aware of previous such connections between the two bidding rules. Finally, we address, for the \ufb01rst time, complexity issues in poorman games; namely, we study the problem of \ufb01nding threshold ratios in poorman games. We show that for qualitative games, the corresponding decision problem is in PSPACE using the existential theory of the reals [16]. For mean-payoff games, the 4 \fproblem of \ufb01nding the value of the game with respect to a given ratio is also in PSPACE for general games, and for strongly-connected games, we show the value can be found in NP and coNP, and even in P for strongly-connected games with out-degree 2. Related work As mentioned above, bidding games can model ongoing auctions, like the ones that are used in internet companies such as Google to sell advertisement slots [37]. Sequential auctions, which are also ongoing, have been well studied, e.g., [33, 45], and let us speci\ufb01cally point [26, 44], which, similar to bidding games, studies two-player sequential auctions with perfect information. Bidding games differ from these models in two important aspects: (1) bidding games are zero-sum games, and (2) the budgets that are used for bidding do not contribute to the utility and are only used to determine which player moves. Point (2) implies that bidding games are particularly appropriate to model settings in which the budget has little or no value, similar in spirit to the well-studied Colonel Blotto games [13]. A dynamic version of Colonel Blotto games called all-pay bidding games has been recently studied [10]. Non-zero-sum Richman-bidding games have been used to reason about ongoing negotiations [34]. Graph games are popular to reason about systems in formal methods [22] and about multi-agent systems in AI [3]. Bidding games extend the modelling capabilities of these games and allow reasoning about multiprocess systems in which a scheduler accepts payment in return for priority. Blockchain technology is one example of such a technology. Simplifying the technology, a blockchain is a log of transactions issued by clients and maintained by miners. In order to write to the log, clients send their transactions and an offer for a transaction fee to a miner, who has freedom to decide transaction priority. 
We expect that a more precise modelling of such systems will assist in their veri\ufb01cation against attacks, which is a problem of special interest since bugs can result in signi\ufb01cant losses of money (see for example, [18] and a description of an attack http://bit.ly/2obzyE7). Note that poorman bidding models such settings better than Richman bidding since transaction fees are paid to the scheduler (the miners) rather than the other player. Richman bidding is appropriate when modelling \u201cscrip systems\u201d that use internal currency to prevent freeriding [28], and are popular in databases for example. In this work, we show that mean-payoff poorman games are equivalent to biased random-turn games. Thus, there is a contrast with mean-payoff Richman games, which are equivalent to uniform random-turn games. To better understand these differences between the seemingly similar bidding rules, mean-payoff taxman games where studied in [9]. Taxman bidding were de\ufb01ned and studied in [31] for reachability objectives span the spectrum between Richman and poorman bidding. They are parameterized by a constant \u03c4 \u2208[0, 1]: portion \u03c4 of the winning bid is paid to the other player, and portion 1 \u2212\u03c4 to the bank. Thus, with \u03c4 = 1 we obtain Richman bidding and with \u03c4 = 0, we obtain poorman bidding. It was shown that the value of a mean-payoff taxman bidding game G parameterized by \u03c4 and with initial ratio r equals MP(RTF(\u03c4,r)(G)), for F(\u03c4, r) = r+\u03c4\u00b7(1\u2212r) 1+\u03c4 . To the best of our knowledge, since their introduction, poorman games have not been studied. Motivated by recreational games, e.g., bidding chess [12, 30], discrete bidding games with Richman bidding rules are studied in [24], where the money is divided into chips, so a bid cannot be arbitrarily small unlike the bidding games we study. In\ufb01nite-duration discrete bidding games with Richman bidding and various tiebreaking mechanisms have been studied in [1], where they were shown to be a largely determined sub-class of concurrent games. 2 Preliminaries A graph game is played on a directed graph G = \u27e8V, E\u27e9, where V is a \ufb01nite set of vertices and E \u2286V \u00d7V is a set of edges. The neighbors of a vertex v \u2208V , denoted N(v), is the set of vertices {u \u2208V : \u27e8v, u\u27e9\u2208E}, 5 \fand we say that G has out-degree 2 if for every v \u2208V , we have |N(v)| = 2. A path in G is a \ufb01nite or in\ufb01nite sequence of vertices v1, v2, . . . such that for every i \u22651, we have \u27e8vi, vi+1\u27e9\u2208E. Objectives An objective O is a set of in\ufb01nite paths. In reachability games, Player 1 has a target vertex vR and an in\ufb01nite path is winning for him if it visits vR. In parity games each vertex has a parity index in {1, . . . , d}, and an in\ufb01nite path is winning for Player 1 iff the maximal parity index that is visited in\ufb01nitely often is odd. We also consider games that are played on a weighted graph \u27e8V, E, w\u27e9, where w : V \u2192Q. Consider an in\ufb01nite path \u03c0 = v1, v2, . . .. For n \u2208I N, we use \u03c0n to denote the pre\ufb01x of length n of \u03c0. We call the sum of weights that \u03c0n traverses the energy of the game, denoted E(\u03c0n). Thus, E(\u03c0n) = P 1\u2264j 0. Unlike the previous objectives, a path in a mean-payoff game is associated with a payoff, which is Player 1\u2019s reward and Player 2\u2019s cost. 
Accordingly, in mean-payoff games, we refer to Player 1 as Min and Player 2 as Max. We de\ufb01ne the payoff of \u03c0 to be lim infn\u2192\u221e1 nE(\u03c0n). We say that Max wins an in\ufb01nite path of a mean-payoff game if the payoff is non-negative. Strategies and plays A strategy prescribes to a player which action to take in a game, given a \ufb01nite history of the game, where we de\ufb01ne these two notions below. For example, in turn-based games, a strategy takes as input, the sequence of vertices that were visited so far, and it outputs the next vertex to move to. In bidding games, histories and strategies are more complicated as they maintain the information about the bids and winners of the bids. Formally, a history is a sequence \u03c4 = v0, \u27e8v1, b1, \u21131\u27e9, \u27e8v2, b2, \u21132\u27e9, . . . , \u27e8vk, bk, \u2113k\u27e9\u2208 V \u00b7 (V \u00d7 I R \u00d7 {1, 2})\u2217, where, for j \u22651, in the j-th round, the token is placed on vertex vj\u22121, the winning bid is bj, and the winner is Player \u2113j, and Player \u2113j moves the token to vertex vj. A strategy prescribes an action \u27e8b, v\u27e9, where b is a bid that does not exceed the available budget and v is a vertex to move to upon winning. The winner of the bidding is the player who bids higher, where we assume there is some mechanism to resolve draws, and our results are not affected by what the mechanism is. More formally, for i \u2208{1, 2}, let Bi be the initial budgets of Player i, and, for a \ufb01nite history \u03c0, let Wi(\u03c0) be the sum of Player i winning bids throughout \u03c0. In Richman bidding, the winner of a bidding pays the loser, thus Player 1\u2019s budget following \u03c0 is B1 \u2212W1 + W2. In poorman bidding, the winner pays the \u201cbank\u201d, thus Player 1\u2019s budget following \u03c0 is B1 \u2212W1. Note that in poorman bidding, the loser\u2019s budget does not change following a bidding. An initial vertex together with two strategies for the players determine a unique in\ufb01nite play \u03c0 for the game. The vertices that \u03c0 visits form an in\ufb01nite path path(\u03c0). Player 1 wins \u03c0 according to an objective O iff path(\u03c0) \u2208O. We call a strategy f winning for Player 1 if for every strategy g of Player 2 the play they determine satis\ufb01es O. Winning strategies for Player 2 are de\ufb01ned dually. De\ufb01nition 3. (Initial ratio) Suppose the initial budget of Player i is Bi, for i \u2208{1, 2}, then the total initial budget is B = B1 + B2 and Player i\u2019s initial ratio is Bi/B. We assume B > 0. The \ufb01rst question that arrises in the context of bidding games asks what is the necessary and suf\ufb01cient initial ratio to guarantee an objective. We generalize the de\ufb01nition in [31, 32]: De\ufb01nition 4. (Threshold ratios) Consider a poorman or Richman game G, a vertex v, and an initial ratio r and objective O for Player 1. The threshold ratio in v, denoted Th(v), is a ratio in [0, 1] such that \u2022 if r > Th(v), then Player 1 has a winning strategy that guarantees O is satis\ufb01ed, and \u2022 if r < Th(v), then Player 2 has a winning strategy that violates O. Recall that we say that Max wins a mean-payoff game G = \u27e8V, E, w\u27e9if the mean-payoff value is nonnegative. Finding Th(v) for a vertex v in G thus answers the question of what is the minimal ratio of the 6 \finitial budget that guarantees winning. A more re\ufb01ned question asks what is the optimal payoff Max can guarantee with an initial ratio r. 
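As a small bookkeeping illustration of the two payment rules just defined (a sketch with a hypothetical history, not from the paper), the budgets after a finite history are determined by the sums W1 and W2 of each player's winning bids:

def budgets(b1, b2, history, poorman):
    # history: list of (winner, winning_bid) pairs, winner in {1, 2}
    w1 = sum(b for p, b in history if p == 1)
    w2 = sum(b for p, b in history if p == 2)
    if poorman:
        return b1 - w1, b2 - w2              # B1 - W1 and B2 - W2: the winner pays the bank
    return b1 - w1 + w2, b2 - w2 + w1        # B1 - W1 + W2 and B2 - W2 + W1: the winner pays the loser

hist = [(1, 0.3), (2, 0.4)]
print(budgets(1.0, 1.0, hist, poorman=True))    # (0.7, 0.6): the total budget shrinks
print(budgets(1.0, 1.0, hist, poorman=False))   # (1.1, 0.9): the total budget stays constant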
Formally, for a constant c \u2208Q, let Gc be the mean-payoff game that is obtained from G by decreasing all weights by c. De\ufb01nition 5. (Mean-payoff values) Consider a mean-payoff game G = \u27e8V, E, w\u27e9and a ratio r \u2208[0, 1]. The value of G with respect to c, denoted MPr(G, v), is such that Th(v) = r in Gc. Random-turn games In a stochastic game the vertices of the graph are partitioned between two players and a nature player. As in turn-based games, whenever the game reaches a vertex of Player i, for i = 1, 2, he choses how the game proceeds, and whenever the game reaches a vertex v that is controlled by nature, the next vertex is chosen according to a probability distribution that depends only on v. Consider a game G = \u27e8V, E\u27e9. The random-turn game with ratio r \u2208[0, 1] that is associated with G is a stochastic game that intuitively simulates the fact that Player 1 chooses the next move with probability r and Player 2 chooses with probability 1 \u2212r. Formally, we de\ufb01ne RTr(G) = \u27e8V1, V2, VN, E, Pr, w\u27e9, where each vertex in V is split into three vertices, each controlled by a different player, thus for \u03b1 \u2208{1, 2, N}, we have V\u03b1 = {v\u03b1 : v \u2208V }, nature vertices simulate the fact that Player 1 chooses the next move with probability r, thus Pr[vN, v1] = r = 1 \u2212Pr[vN, v2], and reaching a vertex that is controlled by one of the two players means that he chooses the next move, thus E = {\u27e8v\u03b1, uN\u27e9: \u27e8v, u\u27e9\u2208E and \u03b1 \u2208{1, 2}}. When G is weighted, then the weights of v1, v2, and vN equal that of v. Fixing two strategies f and g for the two players in a stochastic game results in a Markov chain, which in turn gives rise to a probability distribution D(f, g) over in\ufb01nite sequences of vertices. A strategy f is optimal w.r.t. an objective O if it maximizes supf infg Pr\u03c0\u223cD(f,g)[\u03c0 \u2208O]. For the objectives we consider, it is well-known that optimal strategies exist, which are, in fact, positional; namely, strategies that only depend on the current position of the game and not on its history. De\ufb01nition 6. (Values) Let r \u2208[0, 1]. For a qualitative game G, the value of RTr(G), denoted val(RTr(G)), is the probability that Player 1 wins when he plays optimally. For a mean-payoff game G, the mean-payoff value of RTr(G), denoted MP(RTr(G)), is the maximal expected payoff Max obtains when he plays optimally. 3 Qualitative Poorman Games For qualitative objectives, poorman games have mostly similar properties to the corresponding Richman games, though they are technically more complicated than Richman bidding. We start with reachability objectives, which were studied in [32, 31]. The objective they study is slightly different than ours and we call it double-reachability: both players have targets and the game ends once one of the targets is reached. As we show below, for our purposes, the variants are equivalent since there are no draws in \ufb01nite-state double-reachability poorman and Richman games. Consider a double-reachability game G = \u27e8V, E, u1, u2\u27e9, where, for i = 1, 2, the target of Player i is ui. In both Richman and poorman bidding, trivially Player 1 wins in u1 with any initial budget and Player 2 wins in u2 with any initial budget, thus Th(u1) = 0 and Th(u2) = 1. For v \u2208V , let v+, v\u2212\u2208N(v) be such that, for every v\u2032 \u2208N(v), we have Th(v\u2212) \u2264Th(v\u2032) \u2264Th(v+). Theorem 7. 
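The random-turn construction described above is mechanical, and the following sketch (ours, with hypothetical names) builds RT_r(G) from a game graph: every vertex v of G is split into a nature vertex vN and two controlled copies v1, v2; nature moves to v1 with probability r and to v2 with probability 1 - r, the owner of a controlled copy picks a successor uN for an edge of G, and weights are copied from v.

def random_turn_game(vertices, edges, weight, r):
    V1 = {(v, 1) for v in vertices}
    V2 = {(v, 2) for v in vertices}
    VN = {(v, "N") for v in vertices}
    # Player edges: from a controlled copy of v to the nature copy of each successor u of v in G.
    player_edges = {((v, a), (u, "N")) for (v, u) in edges for a in (1, 2)}
    # Nature's distribution: from vN, go to v1 with probability r and to v2 with probability 1 - r.
    nature_prob = {((v, "N"), (v, 1)): r for v in vertices}
    nature_prob.update({((v, "N"), (v, 2)): 1.0 - r for v in vertices})
    new_weight = {(v, a): weight[v] for v in vertices for a in (1, 2, "N")}
    return V1, V2, VN, player_edges, nature_prob, new_weight

# A two-vertex game in the spirit of Figure 1 (our reading): weights +1 and -1,
# and every vertex reachable from every vertex, including itself.
V = {"v_plus", "v_minus"}
E = {(x, y) for x in V for y in V}
w = {"v_plus": 1, "v_minus": -1}
print(random_turn_game(V, E, w, r=0.5)[4])   # nature's transition probabilities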
[32, 31] Threshold ratios exist in reachability Richman and poorman games. Moreover, consider a double-reachability game G = \u27e8V, E, u1, u2\u27e9. \u2022 In Richman bidding, for v \u2208V \\ {u1, u2}, we have Th(v) = 1 2 \u0000Th(v+) + Th(v\u2212) \u0001 , and it follows that Th(v) = val(RT0.5(G, v)) and that Th(v) is a rational number. 7 \f\u2022 In poorman bidding, for v \u2208V \\{u1, u2}, we have Th(v) = Th(v+)/ \u00001\u2212Th(v\u2212)+Th(v+) \u0001 . There is a game G and a vertex v with an irrational Th(v). Proof. The proof here is similar to [31] and is included for completeness, with a slight difference: unlike [31], which assume that every vertex has a path to both targets, we also address the case where one of the targets is not reachable. This will prove helpful when reasoning about in\ufb01nite-duration bidding games. The Richman case is irrelevant for us and we leave it out. We start with the two simpler claims. Assume that in a double-reachability poorman game G, for each vertex v, we have Th(v) = Th(v+)/ \u00001 \u2212Th(v\u2212) + Th(v+) \u0001 . We show a double-reachability poorman game with irrational threshold ratios. Consider the game with vertices u1, v1, v2, and u2, and edges u1 \u2190 v1 \u2194v2 \u2192u2. Solving the equation above we get Th(v1) = ( \u221a 5\u22121)/2 and Th(v2) = (3\u2212 \u221a 5)/2, which are irrational. Next, we show existence of threshold ratios in a reachability poorman games by reducing them to doublereachability games. Consider a game G = \u27e8V, E, u1\u27e9. Let S \u2286V be the set of vertices that have no path to u1. Since Player 1 cannot win from any vertex in S, we have Th(v) = 1. Let G\u2032 = \u27e8V \u2032, E\u2032, u1, u2\u27e9be the double-reachability game that is obtained from G by setting V \u2032 = V \\ S and Player 2\u2019s target u2 to be a vertex in S. Consider a vertex v \u2208V \u2032. We claim that Th(v) in G\u2032 equals Th(v) in G. Indeed, if Player 1\u2019s ratio exceeds Th(v) he can draw the game to u1, and if Player 2\u2019s ratio exceeds 1 \u2212Th(v) he can draw the game to S. Finally, we show that every vertex in a double-reachability game has a threshold ratio. Consider a double-reachability poorman game G = \u27e8V, E, u1, u2\u27e9. It is shown in [31] that there exists a unique function f : V \u2192[0, 1] that satis\ufb01es the following conditions: we have f(u1) = 0 and f(u2) = 1, and for every v \u2208V , we have f(v) = f(v+) 1+f(v+)\u2212f(v\u2212), where v+, v\u2212\u2208N(v) are the neighbors of v that respectively maximize and minimize f, i.e., for every v\u2032 \u2208N(v), we have f(v\u2212) \u2264f(v\u2032) \u2264f(v+). We claim that for every v \u2208V , we have Th(v) = f(v). Our argument will be for Player 1 and duality gives an argument for Player 2. Suppose Player 1\u2019s budget is f(v) + \u03f5 and Player 2\u2019s budget is 1 \u2212f(v), for some \u03f5 > 0. Note that we implicitly assume that f(v) < 1. In case f(v) = 1 we do not show anything, but still, our dual strategy for Player 2 ensures that u2 is visited, when the initial budget for Player 2 is positive. We describe a Player 1 strategy that forces the game to u1. Similar to [31], we divide Player 1\u2019s budget ratio into his real budget and a slush fund. We will ensure the following invariants: 1. Whenever we are in state v, if x is Player 1\u2019s real budget and y is Player 2\u2019s budget, then f(v) = x/(x + y). 2. Every time Player 2 wins a bidding the slush fund increases by a constant factor. 
Formally, there exists a constant c > 1, such that when \u03f50 is the initial slush fund and \u03f5i is the slush fund after Player 2 wins for the i-th time, we have that \u03f5i > c \u00b7 \u03f5i\u22121, for all i \u22651. Note that these invariants are satis\ufb01ed initially. We describe a Player 1 strategy. Consider a round in vertex v in which Player 1\u2019s real budget is x\u2032, Player 2\u2019s budget is y\u2032 and the last time Player 2 won (or initially, in case Player 2 has not won yet) his slush fund was \u03f5\u2032. Player 1\u2019s bid is \u2206(v)\u00b7x\u2032 +\u03b4v \u00b7\u03f5\u2032, where we de\ufb01ne \u2206(v) and \u03b4v below. Upon winning, Player 1 moves to v\u2212, i.e., to the neighbor that minimizes f(v), or, when f(v) = 0, he moves to a vertex closer to u1. Upon winning, Player 1 pays \u2206(v) \u00b7 x\u2032 from his real budget and \u03b4v \u00b7 \u03f5\u2032 from his slush fund. For v \u2208V \\ {u1, u2}, if f(v) > 0 and f(v\u2212) < 1, let \u2206(v) = f(v)\u2212f(v\u2212) f(v)(1\u2212f(v\u2212)) and otherwise, let \u2206(v) = 0. Note that the second invariant indicates that Player 2 cannot win more than a \ufb01nite number of times, since whenever he wins, the slush fund increases by a constant and the slush fund cannot exceed 1, 8 \fbecause then it would be bigger than the total budget. This in turn shows that eventually Player 1 wins n times in a row, which ensures that the play reaches u1. We choose \u03b4v, for v \u2208V , and show that our choice implies that Player 1\u2019s strategy maintains the invariant above. Let \u2206min be the smallest positive number such that f(v) = \u2206min for some v, and \u2206min = 1 if f(v) = 0 for all v \u2208V . Let \u03b41 be 1 and \u03b4i be such that Pi\u22121 j=1 \u03b4j < \u2206min/2\u03b4i, for all i \u2208{2, . . . , |V |}. Also, let \u03b3 be such that P|V | j=1 \u03b4j < 1/\u03b3. For each state v (such that f(v) > 0), consider that Player 1 wins all bids and let dist(v) be the number of bids before the play ends up in u1 starting from v. When f(v) = 0, let dist(v) be the length of the shortest path from v to u1. Then, \u03b4v = \u03b3\u03b4i, for i = |V | \u2212dist(v). In case Player 1 wins, his real budget becomes x\u2032 \u2212\u2206(v)x\u2032, and Player 2\u2019s budget stays y\u2032. In that case, Player 1\u2019s new real budget ratio becomes (1\u2212\u2206(v))x\u2032 (1\u2212\u2206(v))x\u2032+y\u2032 = f(v\u2212), and the invariants are thus satis\ufb01ed. (His slush fund also decreases by \u03b4v\u03f5\u2032. We will not proof anything about the slush fund in this case, except noting that it stays positive). In case Player 2 wins, Player 1\u2019s real budget stays x\u2032 and Player 2\u2019s budget is at most y\u2032 \u2212\u2206(v)x\u2032 \u2212\u03b4v\u03f5\u2032. By construction, we have that if Player 2\u2019s budget became y\u2032\u2212\u2206(v)x\u2032, then Player 1\u2019s budget ratio becomes x\u2032 x\u2032+y\u2032\u2212\u2206(v)x\u2032 = f(v+), so even if Player 2 moves to v+, Player 2 has paid \u03b4v\u03f5\u2032 too much for Player 1\u2019s real budget ratio to be f(v+). Thus, the \ufb01rst invariant is satis\ufb01ed. Note that this also indicates that f(v+) \u0338= 1, in this case, since otherwise Player 1\u2019s budget ratio must be above 1, indicating that Player 2\u2019s budget is negative. When f(v+) > 0, we can move \u03b4v\u03f5\u2032f(v+)/(1 \u2212f(v+)) \u2265\u03b4v\u03f5\u2032\u2206min into the slush fund. When f(v+) = 0, the new slush fund is \u03b4v\u03f5\u2032. 
Let j be such that \u03b4j = \u03b4v. By construction of \u03b4v, we have that since the last time Player 2 won a bidding (or since the start if Player 2 never won a bid before), we have subtracted at most \u03f5\u2032 P|V | i=j+1 \u03b4i from the slush fond and now we have added \u03b4j\u03f5\u2032\u2206min. But \u03b4i was chosen such that P|V | i=j+1 \u03b4i was below \u03b4v\u2206min/2. Hence, we have added \u03b4v\u03f5\u2032\u2206min to the previous content of \u03f5\u2032. Because \u03b4v and \u2206min are constants, we have thus increased the slush fund by a constant factor. The invariants are thus satis\ufb01ed in this case. We continue to study poorman games with richer objectives. Theorem 8. Parity poorman games are linearly reducible to reachability poorman games. Speci\ufb01cally, threshold ratios exist in parity poorman games. Proof. The crux of the proof is to show that in a bottom strongly-connected component (BSCC, for short) of G, one of the players wins with every initial budget. Thus, the threshold ratios for vertices in BSCCs are either 0 or 1. For the rest of the vertices, we construct a reachability game in which a player\u2019s goal is to reach a BSCC that is \u201cwinning\u201d for him. Formally, consider a strongly-connected parity poorman game G = \u27e8V, E, p\u27e9. We claim that there is \u03b1 \u2208{0, 1} such that for every v \u2208V , we have Th(v) = \u03b1, i.e., when \u03b1 = 0, Player 1 wins with any positive initial budget, and similarly for \u03b1 = 1. Moreover, deciding which is the case is easy: let vMax \u2208V be the vertex with maximal parity index, then \u03b1 = 0 iff p(vMax) is odd. Suppose p(vMax) is odd and the proof for an even p(vMax) is dual. We prove in two steps. First, following the proof of Theorem 7, we have that when Player 1\u2019s initial budget is \u03f5 > 0, he can draw the game to vMax once. Second, we show that Player 1 can reach vMax in\ufb01nitely often when his initial budget is \u03f5 > 0. Player 1 splits his budget into parts \u03f51, \u03f52, . . ., where \u03f5i = \u03f5 \u00b7 2\u2212i, for i \u22651, thus P i\u22651 \u03f5i = \u03f5. Then, for i \u22650, following the i-th visit to vMax, he plays the strategy necessary to draw the game to vMax with initial budget \u03f5i+1. We turn to show the reduction from parity poorman games to double-reachability poorman games. Consider a parity poorman game G = \u27e8V, E, p\u27e9. Let S \u2286V be a BSCC in G. We call S winning for Player 1 if the vertex vMax with highest parity index in S has odd p(vMax). Dually, we call S winning for Player 2 if p(vMax) is even. Indeed, the claim above implies that for every S that is winning for Player 1 and v \u2208S, 9 \fwe have Th(v) = 0, and dually for Player 2. Let G\u2032 be a double-reachability poorman game that is obtained from G by setting the BSCCs that are winning for Player 1 in G to be his target in G\u2032 and the BSCCs that are winning for Player 2 in G to be his target in G\u2032. Similar to the proof of Theorem 7, we have that Th(v) in G equals Th(v) in G\u2032, and we are done. 4 Mean-Payoff Poorman Games This section consists of our most technically challenging contribution. We construct optimal strategies for the players in mean-payoff poorman games. The crux of the solution regards strongly-connected meanpayoff games, which we develop in the \ufb01rst three sub-sections. Consider a strongly-connected game G and an initial ratio r \u2208[0, 1]. We claim that the value in G w.r.t. r does not depend on the initial vertex. 
For a vertex v in G, recall that MPr(G, v) is the maximal payoff Max can guarantee when his initial ratio in v is r + \u03f5, for every \u03f5 > 0. We claim that for every vertex u \u0338= v in G, we have MPr(G, u) = MPr(G, v). Indeed, as in Theorem 8, Max can play as if his initial ratio is \u03f5/2 and draw the game from u to v, and from there play using an initial ratio of r + \u03f5/2. Since the energy that is accumulated until reaching v is constant, it does not affect the payoff of the in\ufb01nite play starting from v. We write MPr(G) to denote the value of G w.r.t. r. We show the equivalence with random-turn games: the value MPr(G) equals the value MP(RTr(G)) of the random-turn mean-payoff game RTr(G) in which Max chooses the next move with probability r and Min with probability 1 \u2212r. 4.1 Warm up: solving a simple game In this section we solve a simple game through which we demonstrate the ideas of the general case. Recall that in an energy game, Min wins a \ufb01nite play if the sum of weights it traverses, a.k.a. the energy, is 0 and Max wins an in\ufb01nite play in which the energy stays positive throughout the play. Lemma 9. [31] In the energy game that is depicted in Fig. 1, if the initial energy is k \u2208I N, then Max wins iff his initial ratio exceeds k+2 2k+2. The \ufb01rst implication in Lemma 9 is the important one for us. It shows that Max can guarantee a payoff of 0 with an initial budget that exceeds 0.5. Indeed, given an initial ratio of 0.5 + \u03f5, Max plays as if the initial energy is k \u2208I N such that k+2 2k+2 < 0.5 + \u03f5. He thus keeps the energy bounded from below by \u2212k, which implies that the payoff is non-negative. We describe an alternative proof for the \ufb01rst implication in Lemma 9 whose ideas we will later generalize. We need several de\ufb01nitions. For k \u2208I N, let Sk be the square of area k2. In Fig. 3, we depict S5. We split Sk into unit-area boxes such that each of its sides contains k boxes. A diagonal in Sk splits it into a smaller black triangle and a larger white one. For k \u2208I N, we respectively denote by tk and Tk the areas of the smaller black triangle and the larger white triangle of Sk. For example, we have t5 = 10 and T5 = 15, and in general tk = k(k\u22121) 2 and Tk = k(k+1) 2 . 2 3 4 5 6 tk 1 3 6 10 15 Tk 3 6 10 15 21 . . . Figure 3: The square S5 with area 25 and the sizes of some triangles. Suppose the game starts with energy \u03ba \u2208I N. We show that Max wins when his ratio exceeds \u03ba+2 2\u03ba+2, which equals T\u03ba+1 (\u03ba+1)2 . For ease of presentation, it is convenient to assume that the players\u2019 ratios add up to 10 \f1 + \u03f50, Max\u2019s initial ratio is T\u03ba+1 (\u03ba+1)2 + \u03f50, and Min\u2019s initial ratio is t\u03ba+1 (\u03ba+1)2 . For j \u22650, we think of \u03f5j as Max\u2019s slush fund in the j-th round of the game, though its role here is somewhat less signi\ufb01cant than in Theorem 7. Consider a play \u03c0. We think of changes in energy throughout \u03c0 and changes in budget ratio as representing two walks on two sequences. The energy sequence is I N and the budget sequence is {tk/Sk : k \u2208I N}, with the natural order in the two sets. We show a strategy for Max that maintains the invariant that whenever the energy is k \u2208I N, then Max\u2019s ratio is greater than Tk+1/(k + 1)2. That is, whenever Max wins a bidding, both sequences take a \u201cstep up\u201d and when he loses, both sequences take a \u201cstep down\u201d. We describe Max\u2019s strategy. 
Upon winning a bidding, Max proceeds to v1, thus the energy increases by one. We assume WLog. that upon winning, Min proceeds to v2, thus the energy decreases by one. The challenge is to \ufb01nd the right bids. Suppose the energy level is k at the j-th round. Thus, Max and Min\u2019s ratio are respectively Tk+1/(k+1)2 +\u03f5j and tk+1/(k+1)2. In other words, Min owns tk+1 boxes and Max owns a bit more than Tk+1 boxes. Max\u2019s bid consists of two parts. Max bids 1/(k +1)2 +\u03f5j/2, or in other words, a single box and half of his slush fund. We \ufb01rst show how the strategy maintains the invariant and then how it guarantees that an energy of 0 is never reached. Suppose \ufb01rst that Max wins the bidding. The total number of boxes decreases by one to (k + 1)2 \u22121, his slush fund is cut by half, and Min\u2019s budget is unchanged. Thus, Max\u2019s ratio of the budget is more than (Tk+1 \u22121)/ \u0000(k + 1)2 \u22121 \u0001 , which equals Tk+2/(k + 2)2. For example, let k = 4 and Max\u2019s ratio exceeds T5 t5+T5 . Following a bidding win the energy increases to k = 5 and Max\u2019s ratio is more than T5\u22121 t5+T5\u22121 = 15\u22121 25\u22121 = 21 36 = T6 t6+T6 . In other words, we take a step up in both sequences. The other case is when Min wins the bidding, the energy decreases by 1, and we show that the budget sequences takes a step down. Since Max bids more than one box, and Min overbids, Min bids at least one box. Max\u2019s new ratio is more than Tk+1/((k + 1)2 \u22121) = Tk/k2, thus dually, both sequences take a step down. For example, again let k = 4 and Max\u2019s ratio exceeds T5 t5+T5 . Upon losing a bidding, the energy decreases to k = 3 and Max\u2019s ratio is 15 25\u22121 = 10 16 = T4 t4+T4 . It is left to show that the energy never reaches 0, thus the walk on the budget sequence never reaches the \ufb01rst element. Suppose the energy is k = 1 in the j-th round, thus according to the invariant, Max\u2019s ratio is 3 4 + \u03f5j and Min\u2019s ratio is 1 4. Recall that Max bids 1 (k+1)2 + \u03f5j/2 at energy k. In particular, he bids 1 4 + \u03f5j/2 at energy 1, which exceeds Min\u2019s budget, thus Max necessarily wins the bidding, implying that the energy increases. 4.2 The potential and strength of vertices In an arbitrary strongly-connected game the bids in the different vertices cannot be the same. In this section we develop a technique to determine the \u201cimportance\u201d of a node v, which we call its strength and measures how high the bid should be in v compared with the other vertices. Consider a strongly-connected game G = \u27e8V, E, w\u27e9and r \u2208[0, 1]. Recall that RTr(G) is a random-turn game in which Max chooses the next move with probability r and Min with probability 1 \u2212r. A positional strategy is a strategy that always chooses the same action (edge) in a vertex. It is well known that there exist optimal positional strategies for both players in stochastic mean-payoff games. Consider two optimal positional strategies f and g in RTr(G), for Min and Max, respectively. For a vertex v \u2208V , let v\u2212, v+ \u2208V be such that v\u2212= f(vMin) and v+ = g(vMax). The potential of v, denoted Potr(v), is a known concept in probabilistic models and its existence is guaranteed [42]. We use the potential to de\ufb01ne the strength of v, denoted Str(v), which intuitively measures how much the potentials of the neighbors of v differ. We assume w.l.o.g. that MP(RTr(G)) = 0 as otherwise we can decrease all weights by this value. 
Let \u03bd \u2208Q be such that r = \u03bd \u03bd+1. The potential and strengths of v are functions that satisfy the following: Potr(v) = \u03bd \u00b7 Potr(v+) + Potr(v\u2212) 1 + \u03bd + w(v) and Str(v) = Potr(v+) \u2212Potr(v\u2212) 1 + \u03bd 11 \fThere are optimal strategies for which Potr(v\u2212) \u2264Potr(v\u2032) \u2264Potr(v+), for every v\u2032 \u2208N(v), which can be found for example using the strategy iteration algorithm. Consider a \ufb01nite path \u03c0 = v1, . . . , vn in G. We intuitively think of \u03c0 as a play, where for every 1 \u2264i < n, the bid of Max in vi is Str(vi) and he moves to v+ i upon winning. Thus, if vi+1 = v+ i , we say that Max won in vi, and if vi+1 \u0338= v+ i , we say that Max lost in vi. Let W(\u03c0) and L(\u03c0) respectively be the indices in which Max wins and loses in \u03c0. We call Max wins investments and Max loses gains, where intuitively he invests in increasing the energy and gains a higher ratio of the budget whenever the energy decreases. Let G(\u03c0) and I(\u03c0) be the sum of gains and investments in \u03c0, respectively, thus G(\u03c0) = P i\u2208L(\u03c0) Str(vi) and I(\u03c0) = P i\u2208W(\u03c0) Str(vi). Recall that the energy of \u03c0 is E(\u03c0) = P 1\u2264i0 and {\u03b2x}x>0, which we refer to as the budget sequence with properties on which we elaborate below. Max\u2019s bid depends on the position in the budget sequence as well as the strength of the vertex. We \ufb01nd it more convenient to normalize the strength. De\ufb01nition 12. (Normalized strength). Let S = maxv |Str(v)|. The normalized strength of a vertex v \u2208V is nStr(v) = Str(v)/S. Formally, when the token is placed on a vertex v \u2208V and the position of the walk is x, then Max bids \u03b2x \u00b7 nStr(v). Note that nStr(v) \u2208[0, 1], for all v \u2208V . We describe the intuition of the construction. We think of Max\u2019s strategy as maintaining a position x \u2208I R>0 on a walk, where his bidding strategy maintains the invariant that his ratio exceeds \u03bdx. For example, in Section 4.1, the vertices have the same importance, thus their strength is 1. For k \u2208I N, we have \u03bdk = Tk+1/(k + 1)2 and \u03b2k = 1/(k + 1)2, and whenever the position is x = k, Max\u2019s ratio exceeds \u03bdk. We distinguish between two cases. Suppose \ufb01rst that \u03bd \u22651. If Max wins the bidding in v, then the next position of the walk is x + nStr(v), and if Min wins the bidding, the next position is x \u2212nStr(v) \u00b7 \u03bd. When \u03bd < 1, the next position when Max wins is x + nStr(v) \u00b7 \u03bd\u22121, and when he loses, the next position is x \u2212nStr(v). There are two complications when comparing with the proof in Section 4.1. First, while in Section 4.1, we always take one step when winning a bidding, here the number of steps taken at a vertex v depends on the importance of v. Unlike that proof, a step of s \u2208Q does not necessarily correspond to a change of s in the energy. Lemma 10 guarantees that steps in the walk even out with changes in energy at the end of cycles, which suf\ufb01ces for our purposes. Second, that proof addresses the case of r = 1/2 and here we consider general ratios. When Max\u2019s initial ratio is r, winning a bidding is r-times more costly than winning a bidding for Min. This is illustrated in Example 2, where when Min has a budget of 2+\u03f5 and Max has a budget of 1, Min pushes a Max winning bid of b on the queue twice. We de\ufb01ne the following budget sequence. De\ufb01nition 13. 
Let r = \u03bd 1+\u03bd > 0 be an initial ratio. For x > 0, we de\ufb01ne \u03bdx = \u03bd(1+ 2 x) and \u03b2x = 2\u00b7min(1,\u03bd) x(x+1) . The most important property of the sequences is maintaining the invariant between x and the ratio \u03bdx. Recall that Max\u2019s budget exceeds \u03bdx at position x and Min\u2019s budget is 1. Suppose Max\u2019s bid is b. Then, upon winning, Max\u2019s new budget is \u03bdx \u2212b, and upon losing and re-normalizing Min\u2019s budget to 1, Max\u2019s new budget is at least \u03bdx/(1\u2212b). The following lemma shows that the invariant is maintained in both cases. Lemma 14. For any 0 < x, \u03bd and n \u2208[0, 1], if x(x + 1) > 2 \u00b7 n \u00b7 min(1, \u03bd), we have \u03bd(1 + 2 x) 1 \u22122\u00b7n\u00b7min(1,\u03bd) x(x+1) \u2265\u03bd \u0012 1 + 2 x \u2212n \u00b7 min(1, \u03bd) \u0013 and \u03bd(1+2 x)\u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) \u2265\u03bd \u0012 1 + 2 x + n \u00b7 min(1, \u03bd\u22121) \u0013 Proof. We start with the \ufb01rst claim and argue that x(x+1) > 2\u00b7n\u00b7min(1, \u03bd) implies that x > n min(1, \u03bd). If x > 1, the latter follows directly from our assumptions on n (and that min(1, \u03bd) \u22641). On the other hand, if 0 < x \u22641, the former can be written as xc > n \u00b7 min(1, \u03bd), for c = x+1 2 \u22641, which in particular, implies that x > n \u00b7 min(1, \u03bd). We have that \u03bd(1 + 2 x) 1 \u22122\u00b7n\u00b7min(1,\u03bd) x(x+1) = \u03bd \u00b7 x+2 x x(x+1)\u22122\u00b7n\u00b7min(1,\u03bd) x(x+1) = \u03bd \u00b7 (x + 2)(x + 1) x(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd) 13 \f(we have that the denominators are > 0 since x(x + 1) > 2 \u00b7 n \u00b7 min(1, \u03bd)). Also, \u03bd \u0012 1 + 2 x \u2212n \u00b7 min (1, \u03bd) \u0013 = \u03bd \u0012x \u2212n \u00b7 min(1, \u03bd) + 2 x \u2212n \u00b7 min(1, \u03bd) \u0013 . (we have that x \u2212n \u00b7 min (1, \u03bd) > 0 from above). Thus, \u03bd \u00001 + 2 x \u0001 1 \u22122\u00b7n\u00b7min(1,\u03bd) x(x+1) \u2265\u03bd \u0012 1 + 2 x \u2212n \u00b7 min(1, \u03bd) \u0013 \u21d4 (x + 2)(x + 1) x(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd) \u2265x \u2212n \u00b7 min(1, \u03bd) + 2 x \u2212n \u00b7 min(1, \u03bd) \u21d4 (x + 2)(x + 1)(x \u2212n \u00b7 min(1, \u03bd)) \u2265(x \u2212n \u00b7 min(1, \u03bd) + 2)(x(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd)) \u21d4 (x + 2)(x + 1)(x \u2212n \u00b7 min(1, \u03bd)) \u2212(x \u2212n \u00b7 min(1, \u03bd) + 2)(x(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd)) \u22650 \u21d4 2n min(1, \u03bd)(1 \u2212n min(1, \u03bd)) \u22650 Note that n and min(1, \u03bd) are in [0, 1] and thus, the inequality is true, because each factor is \u22650, and we are done. We proceed to the second claim and show that for any 0 < x, \u03bd and n \u2208[0, 1], we have \u03bd \u0012 1 + 2 x \u0013 \u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) \u2265\u03bd \u0012 1 + 2 x + n \u00b7 min(1, \u03bd\u22121) \u0013 We have that \u03bd \u0012 1 + 2 x \u0013 \u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) = \u03bd \u00b7 x + 2 x \u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) = \u03bd(x + 2)(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) . Also, \u03bd(1 + 2 x + n \u00b7 min(1, \u03bd\u22121)) = \u03bd \u00b7 x + n \u00b7 min(1, \u03bd\u22121) + 2 x + n \u00b7 min(1, \u03bd\u22121) . 
Thus, \u03bd \u0012 1 + 2 x \u0013 \u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) \u2265\u03bd \u0012 1 + 2 x + n \u00b7 min(1, \u03bd\u22121) \u0013 \u21d4 \u03bd(x + 2)(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd) x(x + 1) \u2265\u03bd \u00b7 x + n \u00b7 min(1, \u03bd\u22121) + 2 x + n \u00b7 min(1, \u03bd\u22121) \u21d4 (\u03bd(x + 2)(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd))(x + n \u00b7 min(1, \u03bd\u22121)) \u2265\u03bd \u00b7 x(x + 1) \u00b7 (x + n \u00b7 min(1, \u03bd\u22121) + 2) \u21d4 (\u03bd(x + 2)(x + 1) \u22122 \u00b7 n \u00b7 min(1, \u03bd))(x + n \u00b7 min(1, \u03bd\u22121)) \u2212\u03bd \u00b7 x(x + 1) \u00b7 (x + n \u00b7 min(1, \u03bd\u22121) + 2) \u22650 \u21d4 2n min(1, \u03bd)(1 \u2212n min(1, \u03bd\u22121)) \u22650 Note that n, min(1, \u03bd) and min(1, \u03bd\u22121) are in [0, 1] and thus, the inequality is true, because each factor is \u22650. 4.4 Putting it all together In this section we combine the ingredients developed in the previous sections to solve arbitrary stronglyconnected mean-payoff games. 14 \fTheorem 15. Consider a strongly-connected mean-payoff poorman game G and a ratio r \u2208[0, 1]. The value of G with respect to r equals the value of the random-turn mean-payoff game RTr(G) in which Max chooses the next move with probability r, thus MPr(G) = MP(RTr(G)). Proof. We assume w.l.o.g. that MP(RTr(G)) = 0 since otherwise we decrease this value from all weights. Also, the case where r \u2208{0, 1} is easy since RTr(G) is a graph and in G, one of the players can win all biddings. Thus, we assume r \u2208(0, 1). Recall that MP(\u03c0) = lim infn\u2192\u221e E(\u03c0n) n . We show a Max strategy that, when the game starts from a vertex v \u2208V and with an initial ratio of r + \u03f5, guarantees that the energy is bounded below by a constant, which implies MP(\u03c0) \u22650. Note that showing such a strategy for Max suf\ufb01ces to prove MPr(G) = 0 since our de\ufb01nition for a payoff favors Min. Consider the game G\u2032 that is obtained from G by multiplying all weights by \u22121. We associate Min in G with Max in G\u2032, thus an initial ratio of 1 \u2212r \u2212\u03f5 for Min in G is associated with an initial ratio of r + \u03f5 of Max in G\u2032. We have MP(RT1\u2212r(G\u2032)) = \u2212MP(RTr(G)) = 0. Let f be a Max strategy in G\u2032 that guarantees a non-negative payoff. Suppose Min plays in G according to f and let \u03c0 be a play when Max plays some strategy. Since f guarantees a non-negative payoff in G\u2032, we have lim supn\u2192\u221eE(\u03c0n)/n \u22640 in G, and in particular MP(\u03c0) = lim infn\u2192\u221eE(\u03c0n)/n \u22640. Before we describe Max\u2019s strategy, we need several de\ufb01nitions. In De\ufb01nition 13, we set \u03bdx = \u03bd \u00b7 (1 + 2/x), which clearly tends to \u03bd from above. We can thus choose \u03ba \u2208I N such that Max\u2019s ratio is greater than \u03bd\u03ba. Suppose Max is playing according to the strategy we describe below and Min is playing according to some strategy. The play induces a walk on {\u03bdx}x\u2208Q\u22650, which we refer to as the budget walk. Max\u2019s strategy guarantees the following: Invariant: Whenever the budget walk reaches an x \u2208Q, then Max\u2019s ratio is greater than \u03bdx. The walk starts in \u03ba and the invariant holds initially due to our choice of \u03ba. Suppose the token is placed on the vertex v \u2208V and the position of the walk is x. Max bids nStr(v) \u00b7 \u03b2x, and he moves to v+ upon winning. Suppose \ufb01rst that \u03bd \u22651. 
If Max wins the bidding, then the next position of the walk is x+nStr(v), and if Min wins the bidding, the next position is x \u2212nStr(v) \u00b7 \u03bd. When \u03bd < 1, the next position when Max wins is x + nStr(v) \u00b7 \u03bd\u22121, and when he loses, the next position is x \u2212nStr(v). Lemma 14 implies that in both cases the invariant is maintained. Claim: For every Min strategy, the budget walk stays on positive positions and never reaches x = 0. Suppose \u03bd \u22651. Thus, when Max loses with a bid of 2n/x(x + 1), we step down n steps. In order to reach x = 0, the position needs to be x = n. But then, Max\u2019s bid is 2n/n(n + 1) \u22651, thus Max wins the bidding since Min\u2019s budget is 1. Similarly, when \u03bd < 1, when the bid is 2n\u03bd/x(x + 1), we step down n \u00b7 \u03bd, and we need x = n \u00b7 \u03bd to reach x = 0. Again, since 2n\u03bd/n\u03bd(n\u03bd + 1) \u22651, Max wins the bidding. Claim: The strategy is legal; Max\u2019s bids never exceed his available budget. Indeed, we have 2n min(1, \u03bd)/x(x + 1) \u2264\u03bd(1 + 2/x), for every 0 \u2264n \u22641 and \u03bd > 0 since x > 0. Claim: The energy throughout a play is bounded from below. Formally, there exists a constant c \u2208I R such that for every Min strategy and a \ufb01nite play \u03c0, we have E(\u03c0) \u2265c. Consider a \ufb01nite play \u03c0. We view \u03c0 as a sequence of vertices in G. Recall that the budget walk starts at \u03ba, that G(\u03c0) and I(\u03c0) represent sums of strength of vertices, and that S = maxv\u2208V |Str(v)| and nStr(v) = Str(v)/S. Suppose the budget walk reaches x following the play \u03c0. Then, when \u03bd \u22651, we have x = \u03ba \u2212G(\u03c0)/S + I(\u03c0)/\u03bdS. Combining with x \u22650, we have S \u00b7 \u03ba \u00b7 \u03bd \u2264\u2212G(\u03c0) \u00b7 \u03bd + I(\u03c0). Let P = maxu,v Potr(u) \u2212Potr(v). Re-writing Lemma 10, we obtain \u2212G(\u03c0) \u00b7 \u03bd + I(\u03c0) \u2264E(\u03c0) + P. Combining the two, we have E(\u03c0) \u2265\u2212P \u2212S \u00b7 \u03ba \u00b7 \u03bd. Similarly, when \u03bd < 1, we have x = \u03ba \u2212G(\u03c0) \u00b7 \u03bd/S + I(\u03c0)/S and combining with Lemma 10, we obtain E(\u03c0) \u2265\u2212P \u2212S \u00b7 \u03ba, and we are done. Remark 16. Richman vs poorman bidding. An interesting connection between poorman and Richman biddings arrises from Theorem 15. Consider a strongly-connected mean-payoff game G. For an initial ratio 15 \fr \u2208[0, 1], let MPr P(G) denote the value of G with respect to r with poorman bidding. With Richman bidding [6], the value does not depend on the initial ratio rather it only depends on the structure of G and we can thus omit r and use MPR(G). Moreover, mean-payoff Richman-bidding games are equivalent to uniform random-turn games, thus MPR(G) = MP(RT0.5(G)). Our results show that poorman games with initial ratio 0.5 coincide with Richman games. Indeed, we have MPR(G) = MP0.5 P (G). To the best of our knowledge such a connection between the two bidding rules has not been identi\ufb01ed before. Remark 17. Energy poorman games. The proof technique in Theorem 15 extends to energy poorman games. Consider a strongly-connected mean-payoff game G, and let r \u2208[0, 1] such that MPr(G) = 0. Now, view G as an energy poorman game. The proof of Theorem 15 shows that when Max\u2019s initial ratio is r + \u03f5, there exists an initial energy level from which he can win the game. 
On the other hand, when Max\u2019s initial ratio is r \u2212\u03f5, Min can win the energy game from every initial energy. Indeed, consider the game G\u2032 that is obtained from G by multiplying all weights by \u22121. Again, using Theorem 15 and associating Min with Max, Min can keep the energy level bounded from above, which allows him, similar to the qualitative case, to play a strategy in which he either wins or increases his ratio by a constant. Eventually, his ratio is high enough to win arbitrarily many times in a row and drop the energy as low as required. Remark 18. A general budget sequence. The proof of Theorem 15 uses four properties of the \u201cbudget sequence\u201d {\u03bdx}x\u22650 and {\u03b2x}x\u22650 that is de\ufb01ned in De\ufb01nition 13: (1) the invariant between Max\u2019s ratio and rx is maintained (shown in Lemma 14), (2) the bids never exceed the available budget, (3) limx\u2192\u221e\u03bdx = \u03bd, and (4) the walk never reaches x = 0. The existence of a budget sequence with these properties is shown in [9] for taxman bidding, which generalize both Richman and poorman bidding: taxman bidding is parameterized with a constant \u03c4 \u2208[0, 1], where the higher bidder pays portion \u03c4 of his bid to the other player and portion (1\u2212\u03c4) to the bank. Unlike that proof, we de\ufb01ne an explicit budget sequence for poorman bidding. 4.5 Extention to general mean-payoff games We extend the solution in the previous sections to general graphs in a similar manner to the qualitative case; we \ufb01rst reason about the BSCCs of the graph and then construct an appropriate reachability game on the rest of the vertices. Recall that, for a vertex v in a mean-payoff game, the ratio Th(v) is a necessary and suf\ufb01cient initial ratio to guarantee a payoff of 0. Consider a mean-payoff poorman game G = \u27e8V, E, w\u27e9. Recall that, for v \u2208V , Th(v) is the necessary and suf\ufb01cient initial ratio for Max to guarantee a non-positive payoff. Let S1, . . . , Sk \u2286V be the BSCCs of G and S = S 1\u2264i\u2264k Si. For 1 \u2264i \u2264k, the mean-payoff poorman game Gi = \u27e8Si, E|Si, w|Si\u27e9is a strongly-connected game. We de\ufb01ne ri \u2208[0, 1] as follows. If there is an r \u2208[0, 1] such that MPr(Gi) = 0, then ri = r. Otherwise, if for every r, we have MPr(Gi) > 0, then ri = 0, and if for every r, we have MPr(Gi) < 0, then ri = 1. By Theorem 15, for every v \u2208Si, we have Th(v) = ri. We construct a generalized reachability game G\u2032 that corresponds to G by replacing every Si in G with a vertex ui. Player 1 wins a path in G iff it visits some ui and when it visits ui, Player 1\u2019s ratio is at least ri. It is not hard to generalize the proof of Theorem 7 to generalized reachability poorman games and obtain the following. Theorem 19. The threshold ratios in a mean-payoff poorman game G coincide with the threshold ratios in the generalized reachability game that corresponds to G. 4.6 Applying bidding games in reasoning about auctions for online advertisements In this section we show an application of mean-payoff poorman-bidding games in reasoning about auctions for online advertisements. A typical webpage has ad slots; e.g., in Google\u2019s search-results page, ads typically 16 \fappear above or beside the \u201cactual\u201d search results. Different slots have different value depending on their positions; e.g., slots at the top of the page are typically seen \ufb01rst, thus generate more clicks and are more valuable. 
A large chunk of the revenue of companies like Google comes from auctions for allocating ad slots that they regularly hold between advertisement companies. Consider the following auction mechanism. At each time point (e.g., each day), a slot is auctioned and the winner places an ad in the slot. It is common practice in auctions for online ads to hold secondprice auctions; namely, the higher bidder sets the ad and pays the bid of the second-highest bidder to the auctioneer. Suppose there are k \u2208I N ad slots. We take the view-point of an advertiser. The state of the webpage is given by \u00af s \u2208{0, 1}k, where an advertiser\u2019s ad appears in a slot 1 \u2264i \u2264k iff si = 1. We assume that we are given a reward function \u03c1 : {0, 1}k \u2192Q that assigns the utility obtained from each state \u00af s \u2208{0, 1}k; e.g., the reward can be the expected revenue, which is the expected number of clicks on his ads times the expected revenue from each click. The utility for an in\ufb01nite sequence \u00af s1, \u00af s2, . . . is the meanpayoff of \u03c1( \u00af s1), \u03c1( \u00af s2), . . .. We are interested in \ufb01nding an optimal bidding strategy in the ongoing auction under two simplifying assumptions: (1) the utility is obtained only from the ads and does not include the price paid for them, and (2) we assume two competitors and full information of the budgets. We obtain an optimal bidding strategy by \ufb01nding an optimal strategy for Max in a mean-payoff poorman-bidding game. In Section 6, we discuss extensions of the bidding games that we study in this paper, that are needed to weaken the two assumptions above. As a simple example, the special case of one ad slot is modelled as the game in Fig. 1: in each turn the ad slot is auctioned, Max gets a reward of 1 when his ad shows and a penalty of \u22121 when the competitor\u2019s ad is shown. We formalize the general case. Consider an ongoing auction with k slots and a reward function \u03c1. We construct a mean-payoff poorman-bidding game Ak,\u03c1 = \u27e8V, E, w\u27e9as follows. We de\ufb01ne V = {1, . . . , k} \u00d7 {0, 1}k. Consider v = \u27e8\u2113, \u00af s\u27e9\u2208V , where 1 \u2264\u2113\u2264k and \u00af s = \u27e8s1, . . . , sk\u27e9\u2208{0, 1}k. The vector \u00af s represents the state of the webpage following the previous bidding. The slot that is auctioned at v is \u2113, thus the vertex v has two neighbors u1 = \u27e8\u21131, \u00af s1\u27e9and u2 = \u27e8\u21132, \u00af s2\u27e9with \u21131 = \u21132 = \u2113+ 1 mod k. The state of the slots apart from the \u2113-th slot stay the same, thus for every i \u0338= \u2113, we have s1 i = s2 i = si. The vertex u1 represents a Max win in the bidding and u2 a Max lose, thus s1 \u2113= 1 and s2 \u2113= 0. Finally, the weight of v is \u03c1(\u00af s). Note that Ak,\u03c1 is a strongly-connected mean-payoff poorman-bidding game. Theorem 20. Consider a second-price ongoing auction with k slots and a reward function \u03c1. An optimal strategy for Max in the poorman-bidding game Ak,\u03c1 coincides with an optimal bidding strategy in the auction. Proof. The only point that requires proof is that mean-payoff poorman-bidding games are equivalent to mean-payoff games with second-price auctions. Consider a strongly-connected mean-payoff game G. Let r \u2208(0, 1). Suppose Max\u2019s initial budget is r + \u03f5, for \u03f5 > 0. Theorem 15 constructs a Max strategy f that guarantees a payoff of at least MP(RTr(G)) under poorman bidding rules. 
A close look at this strategy reveals that it ensures a payoff of at least MP(RTr(G)) under second-price rules. Indeed, let b be the Max bid prescribed by f following a \ufb01nite play. Then, if Max wins the bidding, his payment is at most b. On the other hand, if Min wins the bidding, he pays at least b. In both cases the invariant on Max\u2019s budget is maintained as in the proof of Theorem 15. Finally, a dual argument as in Theorem 15 shows that Min can guarantee a payoff of at most MP(RTr(G)) with second-price bidding rules. We thus conclude that the value of G under second-price bidding coincides with the value under poorman bidding, and we are done. We can use Theorem 20 to answer questions of the form \u201ccan an advertiser guarantee that his ad shows at least half the time, in the long run?\u201d. Indeed, set \u03c1(\u00af s) = 1 when the ad shows and \u03c1(\u00af s) = 0 when it does not. Then, the payoff corresponds to the long-run average time that the ad shows. 17 \f5 Computational Complexity We study the complexity of \ufb01nding the threshold ratios in poorman games. We formalize this search problem as the following decision problem. Recall that threshold ratios in reachability poorman games may be irrational (see Theorem 7). THRESH-BUD Given a bidding game G, a vertex v, and a ratio r \u2208[0, 1] \u2229Q, decide whether Th(v) \u2265r. Theorem 21. For poorman parity games, THRESH-BUD is in PSPACE. Proof. To show membership in PSPACE, we guess the optimal moves for the two players. To verify the guess, we construct a program of the existential theory of the reals that uses the relation between the threshold ratios that is described in Theorem 7. Deciding whether such a program has a solution is known to be in PSPACE [16]. Formally, given a parity poorman game G = \u27e8V, E, p\u27e9and a vertex v \u2208V , we guess, for each vertex u \u2208V , two neighbors u+, u\u2212\u2208N(u). We construct the following program. For every vertex u \u2208V , we introduce a variable xu, and we add constraints so that a satisfying assignment to xu coincides with the threshold ratio in u. Consider a BSCC S of G. Recall that the threshold ratios in S are all either 0 or 1, and verifying which is the case can be done in linear time. Suppose the threshold ratios are \u03b1 \u2208{0, 1}. We add constraints xu = \u03b1, for every u \u2208S. For every vertex u \u2208V that is not in a BSCC, we have constraints xu = xu+ 1\u2212xu\u2212+xu+ and xu\u2212\u2264xu\u2032 \u2264xu+, for every u\u2032 \u2208N(u). By Theorems 7 and 8, a satisfying assignment assigns to xu the ratio Th(u). We conclude by adding a \ufb01nal constraint xv \u2265r. Clearly, the program has a satisfying assignment iff Th(v) \u2265r, and we are done. We continue to study mean-payoff games. Theorem 22. For mean-payoff poorman games, THRESH-BUD is in PSPACE. For strongly-connected games, it is in NP and coNP. For strongly-connected games with out-degree 2, THRESH-BUD is in P. Proof. To show membership in PSPACE, we proceed similarly to the qualitative case, and show a nondeterministic polynomial-space that uses the existential theory of the reals to verify its guess. Given a game G, we construct a program that \ufb01nds, for each BSCC S of G, the threshold ratio for all the vertices in V . We then extend the program to propagate the threshold ratios to the rest of the vertices, similar to Theorem 19. Given a strongly-connected game G and a ratio r \u2208[0, 1], we construct RTr(G) in linear time. 
Then, deciding whether MP(RTr(G)) \u22650, is known to be in NP and coNP. The more challenging case is the solution for strongly-connected games with out-degree 2. Consider such a game G = \u27e8V, E, w\u27e9and r \u2208[0, 1]. We construct an MDP D on the structure of G such that MP(D) = MPr(G). Since \ufb01nding MP(D) is known to be in P, the claim follows. When r \u22651 2, then D is a max-MDP, and when r < 1 2, it is a min-MDP. Assume the \ufb01rst case, and the second case is similar. We split every vertex v \u2208V in three, where v \u2208VMax and v1, v2 \u2208VN. Suppose {u1, u2} = N(v). Intuitively, moving to v1 means that Max prefers moving to u1 over u2. Thus, we have Pr[v1, u1] = r = 1 \u2212Pr[v1, u2] and Pr[v2, u1] = 1 \u2212r = 1 \u2212Pr[v2, u2]. It is not hard to see that MP(D) = MPr(G). 6 Discussion We studied for the \ufb01rst time in\ufb01nite-duration poorman-bidding games. Historically, poorman bidding has been studied less than Richman bidding, but the reason was technical dif\ufb01culty, not lack of motivation. In practice, while the canonical use of Richman bidding is a richer notion of fairness, poorman bidding, on the other hand, are more common since they model an ongoing investment from a bounded budget. We show the existence of threshold ratios for poorman games with qualitative objectives. For mean-payoff poorman 18 \fgames, we construct optimal strategies with respect to the initial ratio of the budgets. We show an equivalence between mean-payoff poorman games and random-turn games, which, to the best of our knowledge, is the \ufb01rst such equivalence for poorman bidding. Unlike Richman bidding for which an equivalence with random-turn games holds for reachability objectives, for poorman bidding no such equivalence is known. We thus \ufb01nd the equivalence we show here to be particularly surprising. We expect the mathematical structure that we \ufb01nd for poorman bidding to be useful in adding to these games concepts that are important for modelling practical settings. For example, our modelling of ongoing auctions made two simplifying assumptions: (1) utility is only obtained from the weights in the graph, and (2) two companies compete for ads and there is full information on the company\u2019s budgets. Relaxing both assumptions are an interesting direction for future work. Relaxing the second assumption requires an addition of two orthogonal concepts that were never studied in bidding games: multiple players and partial information regarding the budgets. Finally, the deterministic nature of bidding games is questionable for practical applications, and a study of probabilistic behavior is initiated in [8]. To the best of our knowledge, we show the \ufb01rst complexity upper bounds on \ufb01nding threshold ratios in poorman games. We leave open the problem of improving the bounds we show; either improving the PSPACE upper bounds or showing non-trivial lower bounds, e.g., showing ETR-hardness. Since threshold ratios can be irrational, we conjecture that the problem is at least Sum-of-squares-hard. The complexity of \ufb01nding threshold ratios in un-directed reachability Richman-bidding games (a.k.a. \u201ctug-of-war\u201d games) was shown to be in P in [31], thereby solving the problem for uniform undirected random-turn games. Recently, the solution was extended to un-directed biased reachability random-turn games [40]. 
This work belongs to a line of works that transfer concepts and ideas between three areas with different takes on game theory: formal methods, algorithmic game theory [38], and AI. Examples of works in the intersection of these \ufb01elds include logics for specifying multi-agent systems [3, 20, 36], studies of equilibria in games related to synthesis and repair problems [19, 17, 25, 2], non-zero-sum games in formal veri\ufb01cation [21, 14], and applying concepts from formal methods to resource allocation games; e.g., network games with rich speci\ufb01cations [11] and an ef\ufb01cient reasoning about very large games [5, 29]." + }, + { + "url": "http://arxiv.org/abs/1705.01433v3", + "title": "Infinite-Duration Bidding Games", + "abstract": "Two-player games on graphs are widely studied in formal methods as they model\nthe interaction between a system and its environment. The game is played by\nmoving a token throughout a graph to produce an infinite path. There are\nseveral common modes to determine how the players move the token through the\ngraph; e.g., in turn-based games the players alternate turns in moving the\ntoken. We study the {\\em bidding} mode of moving the token, which, to the best\nof our knowledge, has never been studied in infinite-duration games. The\nfollowing bidding rule was previously defined and called Richman bidding. Both\nplayers have separate {\\em budgets}, which sum up to $1$. In each turn, a\nbidding takes place: Both players submit bids simultaneously, where a bid is\nlegal if it does not exceed the available budget, and the higher bidder pays\nhis bid to the other player and moves the token. The central question studied\nin bidding games is a necessary and sufficient initial budget for winning the\ngame: a {\\em threshold} budget in a vertex is a value $t \\in [0,1]$ such that\nif Player $1$'s budget exceeds $t$, he can win the game, and if Player $2$'s\nbudget exceeds $1-t$, he can win the game. Threshold budgets were previously\nshown to exist in every vertex of a reachability game, which have an\ninteresting connection with {\\em random-turn} games -- a sub-class of simple\nstochastic games in which the player who moves is chosen randomly. We show the\nexistence of threshold budgets for a qualitative class of infinite-duration\ngames, namely parity games, and a quantitative class, namely mean-payoff games.\nThe key component of the proof is a quantitative solution to strongly-connected\nmean-payoff bidding games in which we extend the connection with random-turn\ngames to these games, and construct explicit optimal strategies for both\nplayers.", + "authors": "Guy Avni, Thomas A. Henzinger, Ventsislav Chonev", + "published": "2017-05-03", + "updated": "2019-06-07", + "primary_cat": "cs.LO", + "cats": [ + "cs.LO", + "cs.GT" + ], + "main_content": "Introduction Two-player in\ufb01nite-duration games on graphs are an important class of games as they model the interaction between a system and its environment. Questions about the automatic synthesis of a reactive system from its speci\ufb01cation [46] can be reduced to \ufb01nding a winning strategy for the \u201csystem\u201d player in a two-player game. The game is played by placing a token on a vertex in the graph and allowing the players to move it through the graph, thus producing an in\ufb01nite play. The qualitative winner or quantitative payoff of the game is determined according to the play. 
There are several common modes to de\ufb01ne how the players move the token, which are used to model different types of systems. The most well-studied mode is turn-based, where the vertices are partitioned between the players and the player who controls the vertex on which the token is placed, moves it. Other modes include probabilistic and concurrent moves (see [5]). We study bidding games in which the mode of moving is \u201cbidding\u201d. Intuitively, in each turn, an auction determines which player moves the token. A concrete bidding rule, which was de\ufb01ned and studied for \ufb01nite-duration games in [37, 38] is called Richman bidding (named after David Richman). Both players have budgets, and in each turn a bidding takes place: The players simultaneously submit bids, where a bid is legal if it does not exceed the available budget, the higher bidder pays the other player, and moves the token. Ties can occur and one needs to devise a mechanism for resolving them (e.g., giving advantage to Player 1), but our results do not depend on a speci\ufb01c mechanism. Bidding arises in many settings that are relevant for several communities within Computer Science, and we list several examples below. In Formal Methods, the players in a two-player game often model concurrent processes. Bidding for moving can model an interaction with a scheduler. The process that wins the bidding gets scheduled and proceeds with its computation. Thus, moving has a cost and processes are interested in moving only when it is critical. Bidding for moving can thus be used to obtain a richer notion of fairness. When and how much to bid can be seen as quantifying the resources that are needed for a system to achieve its objective. Other takes on this problem include reasoning about which input signals need to be read by the system at its different states [21, 3] as well as allowing the system to read chunks of input signals before producing an output signal [29, 28, 34]. Also, our bidding game can model scrip systems that use internal currencies in order to prevent \u201cfree riding\u201d [32]; namely, agents who use the resources provided by the system without making their own contribution. Such systems are successfully used in various settings such as databases [50], group decision making [49], resource allocation, and peer-to-peer networks (see [31] and references therein). In Algorithmic Game Theory [44], auction design is a central research topic that is motivated by the abundance of auctions for online advertisements [43]. Repeated bidding is a form of a sequential auction [39], which is used in many settings including online advertising. In\ufb01nite-duration bidding games can model ongoing auctions and can be used to devise bidding strategies for objectives like: \u201cIn the long run, an advertiser\u2019s ad should show at least half of the time\u201d. In Arti\ufb01cial Intelligence, bidding games have been used to reason about combinatorial negotiations [40]. Recall that \u201cbidding\u201d is a mode of moving and can be studied in combination with any objective.
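As a minimal illustration of the Richman bidding rule described above, the following sketch resolves a single bidding: both bids must be legal, the higher bidder pays the other player and chooses the next vertex, and ties are resolved in favour of Player 1 here purely for concreteness.

```python
def richman_round(budget1, budget2, bid1, bid2, move1, move2):
    # budgets sum to 1; a legal bid does not exceed the bidder's budget
    assert 0 <= bid1 <= budget1 and 0 <= bid2 <= budget2, "illegal bid"
    if bid1 >= bid2:   # Player 1 wins (ties resolved in his favour here)
        return budget1 - bid1, budget2 + bid1, move1
    return budget1 + bid2, budget2 - bid2, move2

# example: budgets <0.6, 0.4>, Player 1 bids 0.25 and Player 2 bids 0.2,
# so Player 1 wins the bidding, pays 0.25 to Player 2, and moves to "u"
print(richman_round(0.6, 0.4, 0.25, 0.2, "u", "w"))
```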
Bidding reachability games were studied in [38, 37]: Player 1 has a target vertex and an in\ufb01nite play is winning for him iff it visits the target. The central question that is studied regards a necessary and suf\ufb01cient budget to guarantee winning, called the threshold budget. Formally, we assume that the budgets add up to 1. The threshold budget is a function TH : V \u2192[0, 1] such that if Player 1\u2019s budget exceeds TH(v) at a vertex v, then he has a strategy to win the game from v. On the other hand, if Player 2\u2019s budget exceeds 1\u2212TH(v), he can win the game from v. We illustrate the bidding model and threshold budgets in the following example. Example 1. Consider the reachability bidding game that is depicted in Figure 1. Player 1\u2019s goal is to reach t, and Player 2\u2019s goal is to prevent this from happening. What is a necessary and suf\ufb01cient initial budget for Player 1 to win from v0? We start with a naive solution by showing that Player 1 can win if his budget exceeds 0.75. Suppose that the budgets are \u27e80.75 + \u03f5, 0.25 \u2212\u03f5\u27e9, for Player 1 and 2, respectively, for \u03f5 > 0. In the \ufb01rst turn, Player 1 bids 0.25 and wins the bidding since Player 2 cannot bid above 0.25 \u2212\u03f5. He pays his bid to Player 2 and moves the token to v2. Thus, at the end of the round, the budgets are \u27e80.5+\u03f5, 0.5\u2212\u03f5\u27e9 and the token is placed on v2. In the second bidding, Player 1 bids all his budget, wins the bidding since Player 2 cannot bid above 0.5 \u2212\u03f5, moves the token to t, and wins the game. While an initial budget of 0.75 suf\ufb01ces for winning, it is is not necessary for winning. We continue to show that the necessary and suf\ufb01cient budget in v0, i.e., the threshold budget, is 2/3. That is, we show that for every \u03f5 > 0, Player 1 can win with a budget of 2/3 + \u03f5, and if his initial budget is 2/3 \u2212\u03f5, he loses since Player 2 can force the game to v1. We show a winning strategy for Player 1 assuming that the initial budgets are \u27e82/3 + \u03f5, 1/3 \u2212\u03f5\u27e9. Player 1\u2019s bid in the \ufb01rst bidding is 1/3, which he wins since Player 2 cannot bid beyond 1/3 \u2212\u03f5, and moves the token to v2. The new budgets are \u27e81/3 + \u03f5, 2/3 \u2212\u03f5\u27e9. Now, Player 1 bids 1/3 + \u03f5. If he wins, he proceeds to t and wins the game. Otherwise, Player 2 wins the bidding, and moves 2 \fthe token back to v0. Since Player 2 wins the bidding, he must overbid Player 1\u2019s bid and pay Player 1 at least 1/3 + \u03f5. In the worst case, the new budgets are \u27e82/3 + 2\u03f5, 1/3 \u22122\u03f5\u27e9. In other words, we are back to v0 only that Player 1\u2019s budget strictly increases. By continuing in a similar manner, Player 1 forces his budget to increase by a constant. It will eventually exceed 0.75 from which he can use the naive solution above to win. The same argument shows that Player 1 wins with a budget of 1/3 + \u03f5 in v2. Showing that Player 2 wins from v0 with 1/3 + \u03f5 and from v2 with 2/3 + \u03f5, is dual. To conclude the example, we note that TH(v1) = 1, which intuitively means that even with all the budget, Player 1 cannot win from v1, and TH(t) = 0, which intuitively means that even with no budget, Player 1 wins from t. v0 v2 t v1 1 2/3 1/3 0 Figure 1: A reachability bidding game with the threshold budgets of the vertices. 1 \u22121 v1 v2 Figure 2: A mean-payoff bidding game. 
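The strategy of Example 1 can be checked mechanically. The sketch below assumes the edges v0 -> {v1, v2} and v2 -> {v0, t} of Figure 1, lets Player 1 bid 1/3 at v0 and his entire budget at v2, and plays him against a greedy opponent that overbids whenever it can afford to and moves away from t; this opponent is only one possible adversary, and it is charged only Player 1's bid, which is the worst case for Player 1.

```python
from fractions import Fraction as F

def player1_wins_from_v0(eps, max_rounds=200):
    b1, v = F(2, 3) + eps, "v0"                # Player 1's budget; budgets sum to 1
    for _ in range(max_rounds):
        bid1 = F(1, 3) if v == "v0" else b1    # the strategy described above
        if 1 - b1 > bid1:                      # the opponent overbids whenever he can
            b1 += bid1                         # ... and pays (at least) Player 1's bid
            v = "v1" if v == "v0" else "v0"    # ... and moves away from t
            if v == "v1":
                return False
        else:                                  # Player 1 wins the bidding
            b1 -= bid1
            v = "v2" if v == "v0" else "t"
            if v == "t":
                return True
    return False

print(player1_wins_from_v0(F(1, 1000)))        # True: a budget of 2/3 + eps suffices
```

Each time the opponent pushes the token back to v0, Player 1's surplus over 2/3 at least doubles, so the run reaches t after roughly log2(1/(6*eps)) round trips, matching the argument above.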
It is shown in [38, 37] that a threshold budget exists in every vertex of a reachability bidding game. Moreover, it is shown that threshold budgets have the following property: the threshold budget of a vertex v equals 1 2(TH(v+) + TH(v\u2212)), where v+ and v\u2212are the successors of v with the maximal and minimal threshold budget, respectively. That is, for every successor v\u2032 of v, we have TH(v\u2212) \u2264TH(v\u2032) \u2264TH(v+). For example, in Example 1 we have TH(v0) = 2/3 = 1 2(1 + 1/3) = 1 2(TH(v1) + TH(v2)). This property of threshold budgets gives rise to an interesting probabilistic connection. In a random-turn game, instead of bidding, in each turn, we toss a fair coin. If it turns \u201cheads\u201d Player 1 moves, and if it turns \u201ctails\u201d, Player 2 moves. For a reachability bidding game G, we denote by RT(G), the random-turn game that is constructed on top of G, which is formally a simple stochastic game [23] (see Figure 3). It is well-known that every vertex v in G has a value in RT(G), denoted val(RT(G), v), which is the probability that Player 1 wins when both players play optimally. The probabilistic connection for reachability bidding games is the following: for every vertex v in G, TH(v) in G equals 1 \u2212val(RT(G), v). Random-turn based games have been extensively studied in their own right since the seminal paper [45]. v1 v0 v1 0 v2 0 v2 v1 2 v2 2 t 1 2 1 2 1 2 1 2 Figure 3: The random-turn game that corresponds to the game in Figure 1. The dashed edges are probabilistic and model coin tosses, square vertices are controlled by Player 1, and circle vertices by Player 2. We introduce and study in\ufb01nite-duration bidding games with richer qualitative objectives as well as quantitative objectives. Parity games are an important class of qualitative games. For example, the problem of reactive synthesis from LTL speci\ufb01cations is reduced to solving a parity game [46]. The vertices in a parity game are labeled by an index in {0, . . . , d}, for some d \u2208I N, and an in\ufb01nite play is winning for Player 1 iff the parity of the maximal index that is visited in\ufb01nitely often is odd. We show that parity bidding games are linearly-reducible to reachability bidding games allowing us to obtain all positive results from these games; threshold budgets exist and the problem of computing them is no harder than for reachability bidding games, 3 \fwhich is in turn in NP and coNP due to the probabilistic connection. We \ufb01nd this result somewhat surprising since for most other modes of moving, parity games are considerably harder than reachability games. The key component of the proof considers bottom strongly-connected components (BSCCs, for short) in the game graph, i.e., strongly-connected components with no exiting edges. We show that the BSCCs can be easily classi\ufb01ed into those that are \u201cwinning\u201d for Player 1 and those that are \u201closing\u201d for him, where in a winning BSCC, Player 1 wins with any positive initial budget, and in a losing BSCC, Player 2 wins with any positive initial budget. We can then construct a reachability bidding game by setting the target of Player 1 to be the winning BSCCs. Finally, we ask whether Player 1 can not only win, but win in a prompt manner [35]. In B\u00fcchi games, which are a special case of parity games, the goal is to visit an accepting vertex in\ufb01nitely often. 
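The local characterisation in Theorem 5 also gives a direct way to compute the thresholds of the game in Figure 1. The sketch below fixes TH(t) = 0 and TH(v1) = 1, iterates TH(v) = (TH(v+) + TH(v-))/2 over the same assumed edges as above, and recovers 2/3 and 1/3; plain fixed-point iteration happens to converge for this small game, while in general computing the thresholds amounts to solving the random-turn game RT(G).

```python
succ = {"v0": ["v1", "v2"], "v2": ["v0", "t"]}    # assumed edges of Figure 1
TH = {"v0": 1.0, "v2": 1.0, "v1": 1.0, "t": 0.0}  # boundary values fixed, rest pessimistic
for _ in range(200):
    for v, ns in succ.items():
        vals = [TH[u] for u in ns]
        TH[v] = (max(vals) + min(vals)) / 2       # the averaging property of Theorem 5
print({v: round(x, 4) for v, x in TH.items()})
# -> {'v0': 0.6667, 'v2': 0.3333, 'v1': 1.0, 't': 0.0}
```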
We say that Player 1 wins in a prompt manner if there is a k \u2208I N such that visits to accepting vertices occur within k turns. We show a negative result: under mild assumptions, Player 1 can never win promptly. That is, with any positive budget, Player 2 can guarantee arbitrarily long periods with no visits to accepting vertices. The quantitative games we focus on are mean-payoff games. The vertices of a mean-payoff game are labeled by weights in Z and an in\ufb01nite play has a payoff, which is the long-run average of the accumulated weights. The payoff is Player 1\u2019s cost and Player 2\u2019s reward, thus we refer to the players in a mean-payoff game as Maximizer (Max, for short) and Minimizer (Min, for short). We adapt threshold budgets to meanpayoff games: we ask what is a necessary and suf\ufb01cient initial budget to guarantee a payoff of 0. We show that threshold budgets exist in mean-payoff bidding games and that \ufb01nding them is again in NP and coNP. The key component of the proof, which consists of our most technically challenging result, is a quantitative solution for strongly-connected mean-payoff bidding games by showing an extended probabilistic connection for these games. We show that the optimal payoff Min can guarantee in a strongly-connected mean-payoff bidding game G does not depend on his initial budget. More formally, there exists a value c \u2208I R such that with every positive initial budget, Min can guarantee a payoff of at most c in G, and he cannot do better: for every \u03f5 > 0 and with any positive budget, Max can guarantee a payoff that exceeds c \u2212\u03f5 in G. Moreover, we show that the optimal payoff c equals the value of the random-turn mean-payoff game RT(G). Here, RT(G) is a stochastic mean-payoff game and its value is de\ufb01ned as the expected payoff when both players play optimally [41]. We show a constructive proof for the claim above in which we construct optimal bidding strategies for the two players. Intuitively, the strategies that we construct perform a de-randomization; with a deterministic bidding strategy, the players guarantee that the ratio of the time that is spent in each vertex is the same as in a random behavior. We illustrate our construction in the following example. Technically, consider an in\ufb01nite play \u03c0. The energy of a pre\ufb01x \u03c0n of length n of \u03c0, denoted E(\u03c0n), is the sum of the weights that it traverses. The payoff of \u03c0 is lim infn\u2192\u221eE(\u03c0n)/n. Note that the de\ufb01nition favors Min. The strategy we construct for Min guarantees that an in\ufb01nite play \u03c0 either has (1) in\ufb01nitely many pre\ufb01xes with E(\u03c0n) = 0, or (2) the energy is eventually bounded, thus there is N \u2208I N such that, after some point M, for every n \u2208I N with n > M, we have E(\u03c0n) \u2264N. It is not hard to see that this property implies that the payoff of \u03c0 is non-positive. We stress the point that there are two \u201ccurrencies\u201d in the game: the players\u2019 budgets are \u201cmonopoly money\u201d that they do not care about, rather a player\u2019s goal is to optimize the payoff, which arises from the weights that are traversed by the play. Example 2. Consider the mean-payoff bidding game G that is depicted in Figure 2. The value of the random-turn game that corresponds to G is 0. Indeed, in RT(G), Min always proceeds to v2 and Max always proceeds to v1. 
Since the players are selected to move uniformly at random, the game can be seen as a random walk that takes each edge with probability 0.5 and stays, in the long run, in v1 and in v2 the same portion of the time. We claim that Min has a deterministic strategy that guarantees a non-positive payoff. It intuitively guarantees that an in\ufb01nite play stays in v2 for at least half the time. Without loss of generality, Max always proceeds to v1 upon winning a bidding. Min\u2019s strategy is a tit-for-tat-like strategy, and he 4 \falways proceeds to v2 upon winning a bidding. The dif\ufb01culty is in \ufb01nding the right bids. Min maintains a queue. When the queue is empty, Min bids 0. If the queue is not empty, Min bids the smallest element in the queue, and removes it upon winning a bidding. If Max wins a bidding with b, then Min adds b to the queue. For example, suppose Max bids 1 3, 1 2, 1 6 in the \ufb01rst three biddings. Min\u2019s \ufb01rst bidding is 0, he loses, and adds 1 3 to the queue. In the second bidding, Min bids the minimal element 1 3 in the queue, loses again, and adds 1 2 to the queue. In the third bidding, Min wins with his bid of 1 3, removes it from the queue, and his bid in the fourth bidding is 1 2. For simplicity, we assume that Min wins whenever a tie occurs. We claim that the tit-for-tat strategy guarantees a non-positive mean-payoff value. Intuitively, elements in Min\u2019s queue can be thought of as Max winnings that are not \u201cmatched\u201d by a Min win. Thus, if the size of the queue is k, the energy is at most k (this is an upper bound since Min could win with 0 bids). In particular, if the queue is empty, the energy is at most 0. Suppose the minimal element in the queue is b. Then, we claim that the size of the queue, and in turn the accumulated energy, is at most \u23081/b\u2309. Indeed, since each bid in the queue represents an \u201cunmatched\u201d Max bid, if the queue size is greater than \u23081/b\u2309, then the sum of Max\u2019s winning bids is more than 1, which is impossible since he would need to invest more than the total budget. It follows that Min\u2019s strategy guarantees that in an in\ufb01nite play either (1) the queue empties in\ufb01nitely often, thus the energy hits 0 in\ufb01nitely often, or (2) if there is a point after which the queue stays non-empty, then its size is bounded, hence the energy is bounded. By the above, this property implies a non-positive payoff. As the tit-for-tat strategy above demonstrates, the strategies that we construct carefully match changes in budget with changes in energy. The \ufb01rst step in our construction for general strongly-connected games is to assign an \u201cimportance\u201d to each vertex in the game; the more important a vertex is, the higher a player bids in it. Our de\ufb01nition of importance uses the concept of potentials in stochastic games (see [47]), which were initially used in the context of the strategy iteration algorithm [30]. In the second component of the proof, we \ufb01nd a bid by carefully normalizing the importance of a vertex. Normalization is easier in Min\u2019s case because of the asymmetry in the de\ufb01nition of payoff. As demonstrated in the tit-for-tat strategy, Min keeps the energy bounded from above. Max strategy guarantees that the energy is bounded from below, which is more technically challenging to achieve. 
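The tit-for-tat strategy of Example 2 is easy to prototype. The sketch below keeps Min's unmatched winning bids of Max in a min-heap, as described above, and pits it against a Max player that bids a random fraction of his budget; this opponent is just for illustration. With the weights w(v1) = 1 and w(v2) = -1 of Figure 2, the run reports the average weight per round and the peak energy that was reached.

```python
import heapq, random

def simulate(rounds=100_000, seed=1):
    rng = random.Random(seed)
    max_budget, min_budget = 0.5, 0.5          # Richman bidding: budgets always sum to 1
    queue, energy, peak = [], 0, 0
    for _ in range(rounds):
        min_bid = queue[0] if queue else 0.0   # bid the smallest queued element, or 0
        max_bid = rng.uniform(0, max_budget)   # an arbitrary opponent
        if min_bid >= max_bid:                 # ties go to Min, as in the example
            min_budget -= min_bid
            max_budget += min_bid
            if queue:
                heapq.heappop(queue)           # this past Max win is now matched
            energy -= 1                        # Min moves to v2 (weight -1)
        else:
            max_budget -= max_bid
            min_budget += max_bid
            heapq.heappush(queue, max_bid)     # remember the unmatched Max win
            energy += 1                        # Max moves to v1 (weight +1)
        peak = max(peak, energy)
    return energy / rounds, peak

print(simulate())
```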
Results on other bidding mechanisms Since the \ufb01rst publication of this work, further results were obtained on in\ufb01nite-duration bidding games with other bidding mechanisms. A second bidding rule that was \ufb01rst de\ufb01ned in [37] is called poorman bidding: the winner of a bidding, rather than paying his bid to the loser, pays the bid to the \u201cbank\u201d, thus the sum of budgets decreases as the game proceeds. Poorman bidding naturally model settings in which the scheduler accepts payment such as miners in block-chain technology or the auctioneer in ongoing auctions. The mathematical structure of reachability poorman-bidding games is more involved than with Richman bidding. Namely, no probabilistic connection is known and it is unlikely to exist. Given the probabilistic connection for reachability Richman-bidding games, the probabilistic connection for mean-payoff Richman-bidding games may not be unexpected. The ideas that were developed in the constructions we show here were later used to show a surprising probabilistic connection for meanpayoff poorman-bidding games in [8], which is in fact richer than the one we observe here for mean-payoff Richman-bidding games. Then, to better understand the curious differences between the seemingly similar bidding rules, in\ufb01nite-duration bidding games with taxman bidding are studied in [10]. Taxman bidding, which was also de\ufb01ned in [37] and studied for reachability games, span the spectrum between Richman and poorman bidding. A probabilistic connection was shown for these games as well. We elaborate on these results in Section 4.3.1. 5 \fFurther related work on bidding games Motivated by recreational games, e.g., bidding chess, discrete bidding games are studied in [25], where the granularity of the bids is bounded by dividing the money into chips. The Richman calculus for reachability continuous-bidding games is extended to discrete-bidding in [25]. Unlike in continuous-bidding, ties play a crucial role in discrete-bidding. The question of which tiebreaking mechanism gives rise to determinacy in in\ufb01nite-duration discrete-bidding games is investigated in [1]. Non-zero-sum two-player games were studied in [40]. They consider a bidding game on a directed acyclic graph. Moving the token through the graph is done by means of bidding. The game ends once the token reaches a sink, and each sink is labeled with a pair of payoffs for the two players that do not necessarily sum up to 0. They show existence of subgame perfect equilibrium for every initial budget and a polynomial algorithm to compute it. 2 Preliminaries A graph game is played on a directed graph G = \u27e8V, E\u27e9, where V is a \ufb01nite set of vertices and E \u2286V \u00d7V is a set of edges. The neighbors of a vertex v \u2208V , denoted N(v), is the set of vertices {u \u2208V : \u27e8v, u\u27e9\u2208E}. We say that G has out-degree 2 if for every v \u2208V , we have |N(v)| = 2. A path in G is a \ufb01nite or in\ufb01nite sequence of vertices \u03b7 = v1, v2, . . . such that for every i \u22651, we have \u27e8vi, vi+1\u27e9\u2208E. A strongly-connected component of G is a set of vertices S such that for every u, v \u2208S there is a path from u to v in G. A bottom strongly-connected component (BSCC, for short) is a maximal strongly-connected component S that has no outgoing edges, i.e., there are no edges of the form \u27e8v, u\u27e9, where v \u2208S and u / \u2208S. 
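Since BSCCs are used repeatedly in the reductions that follow, here is a small self-contained helper that computes them with a plain reachability-based SCC computation (any off-the-shelf SCC routine would do as well); the example graph is the one of Figure 1, with self-loops added to the two sinks only so that every vertex has a successor.

```python
def reachable(succ, v):
    seen, todo = {v}, [v]
    while todo:
        for w in succ[todo.pop()]:
            if w not in seen:
                seen.add(w)
                todo.append(w)
    return seen

def bsccs(vertices, edges):
    succ = {v: [] for v in vertices}
    for u, v in edges:
        succ[u].append(v)
    reach = {v: reachable(succ, v) for v in vertices}
    comps, seen = [], set()
    for v in vertices:
        if v in seen:
            continue
        comp = {u for u in reach[v] if v in reach[u]}   # mutually reachable vertices
        seen |= comp
        comps.append(comp)
    # a BSCC is an SCC none of whose edges leaves it
    return [c for c in comps if all(w in c for u in c for w in succ[u])]

V = ["v0", "v1", "v2", "t"]
E = [("v0", "v1"), ("v0", "v2"), ("v2", "v0"), ("v2", "t"), ("v1", "v1"), ("t", "t")]
print(bsccs(V, E))   # -> [{'v1'}, {'t'}]
```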
Bidding for moving A graph game is a two-player game, which proceeds by placing a token on a vertex in a graph and letting the two players move it to produce an in\ufb01nite play. The play gives rise to a path that determines the qualitative winner or quantitative payoff of the game. We refer to the mechanism that determines how the token moves as the mode of moving of the game. For example, the simplest and most well-studied mode of moving is turn-based; the vertices are partitioned between the two players and the player who controls the vertex on which the token is placed, moves it. We study a different mode of moving, which we call bidding. Both players have budgets, where for convenience, we have B1 + B2 = 1. In each turn, a bidding takes place to determine which player moves the token: Both players simultaneously submit bids, where a bid is a real number in [0, Bi], for i \u2208{1, 2}, the player who bids higher pays the other player and moves the token. Note that the sum of budgets always remains 1. While draws can occur, our results are not affected by the tie-breaking mechanism that is used. To simplify the presentation, we \ufb01x the tie-breaking mechanism to always give advantage to Player 1. Strategies and plays A strategy is a recipe for how to play a game. It is a function that, given a \ufb01nite history of the game, prescribes to a player which action to take, where we de\ufb01ne these two notions below. For example, in turn-based games, a strategy takes as input, the sequence of vertices that were visited so far, and it outputs the next vertex to move to. In bidding games, histories and strategies are more involved since they maintain the information about the bids and winners of the bids. Formally, a history in a bidding game is \u03c0 = \u27e8v1, b1, i1\u27e9, . . . , \u27e8vk, bk, ik\u27e9, vk+1 \u2208(V \u00d7 I R \u00d7 {1, 2})\u2217\u00b7 V , where for 1 \u2264j \u2264k + 1, the token is placed on vertex vj at round j, for 1 \u2264j \u2264k, the winning bid is bj and the winner is Player ij. Consider a \ufb01nite history \u03c0. For i \u2208{1, 2}, let Wi(\u03c0) \u2286{1, . . . , k} denote the indices in which Player i is the winner of the bidding in \u03c0. We denote by Bi(\u03c0) Player i\u2019s budget following \u03c0. Let BI i be the initial budget of Player i. Player 1\u2019s budget following \u03c0 is B1(\u03c0) = BI i \u2212P j\u2208W1(\u03c0) bj + P j\u2208W2(\u03c0) bj, and Player 2\u2019s budget is de\ufb01ned dually. Given a history \u03c0 that ends in v, a strategy for Player i prescribes an action \u27e8b, v\u27e9, where b \u2264Bi(\u03c0) is a bid that does not exceed the available budget and v is a vertex to move to upon winning, where we require that v is a neighbor of vk+1. 6 \fAn initial vertex v1, initial budgets, and two strategies f1 and f2 for the players determine a unique in\ufb01nite play for the game, which we denote by play(v1, f1, f2), and we de\ufb01ne its pre\ufb01xes inductively. Let \u03c01 = v1. Assume that for n \u22651, we have de\ufb01ned the pre\ufb01x \u03c0n = \u27e8v1, b1, i1\u27e9, . . . , \u27e8vk, bk, in\u27e9, vn+1, and we de\ufb01ne the pre\ufb01x \u03c0n+1. Let \u27e8b\u2032, v\u2032\u27e9= f1(\u03c0n) and \u27e8b\u2032\u2032, v\u2032\u2032\u27e9= f2(\u03c0n) be the two actions proposed by the two players\u2019 strategies. Then, if b\u2032 \u2265b\u2032\u2032, Player 1 wins and we de\ufb01ne \u03c0n+1 = \u27e8v1, b2, i1\u27e9, . . . , \u27e8vk+1, b\u2032, 1\u27e9, v\u2032. 
If b\u2032\u2032 > b\u2032, Player 2 wins, and we de\ufb01ne \u03c0n+1 = \u27e8v1, b2, i1\u27e9, . . . , \u27e8vk+1, b\u2032\u2032, 2\u27e9, v\u2032\u2032. The path that play(v1, f1, f2) traverses is path(play(v1, f1, f2)) = v1, v2, . . .. Objectives An objective O is a set of in\ufb01nite paths. Player 1 wins an in\ufb01nite play \u03c0 iff path(\u03c0) \u2208O. We call a strategy f winning for Player 1 from a vertex v w.r.t. an objective O if for every strategy g of Player 2 play(v, f, g) is winning for Player 1. Winning strategies for Player 2 are de\ufb01ned dually. We consider the following qualitative objectives: 1. In reachability games, Player 1 has a target vertex t and an in\ufb01nite play is winning iff it visits t. We sometimes use a set of vertices T as the target of Player 1, then Player 1 wins iff a vertex in T is visited. 2. In parity games, each vertex is labeled with an index in {1, . . . , d}. An in\ufb01nite path is winning for Player 1 iff the parity of maximal index visited in\ufb01nitely often is odd. 3. Mean-payoff games are played on weighted directed graphs, with weights given by a function w : V \u2192Q. Consider an in\ufb01nite path \u03b7 = v1, v2, \u00b7 \u00b7 \u00b7 \u2208V \u03c9. For n \u2208I N, the pre\ufb01x of length n of \u03b7 is \u03b7n, and we de\ufb01ne its energy to be E(\u03b7n) = Pn i=1 w(vi). The payoff of \u03b7 is payoff(\u03b7) = lim infn\u2192\u221eE(\u03b7n)/n. Player 1 wins \u03b7 iff payoff(\u03b7) \u22650. Mean-payoff games are quantitative games. We think of the payoff as Player 1\u2019s reward and Player 2\u2019s cost, thus in mean-payoff games, we refer to Player 1 as Max and to Player 2 as Min. We elaborate on the quantitative solution to mean-payoff games in Section 4. Threshold budgets The \ufb01rst question that arises in the context of bidding games asks what is the necessary and suf\ufb01cient initial budget to guarantee an objective. We generalize the de\ufb01nition in [37, 38]: De\ufb01nition 3. (Threshold budgets) Consider a bidding game G, a vertex v, and an objective O for Player 1. The threshold budget in v, denoted TH(v), is a number in [0, 1] such that for an initial budget BI \u2208[0, 1] for Player 1 we have \u2022 if BI > TH(v), then Player 1 has a winning strategy that guarantees O is satis\ufb01ed, and \u2022 if BI < TH(v), then Player 2 has a winning strategy that violates O. Random-turn games A stochastic game is played on an arena \u27e8V1, V2, VN, \u2206, Pr\u27e9, where for i \u2208{1, 2}, Vi is a set of vertices that is controlled by Player i, VN is a set of probabilistic vertices that is controlled by \u201cNature\u201d, where all three sets are disjoint and we denote V = V1 \u222aV2 \u222aVN, \u2206\u2286(V1 \u222aV2) \u00d7 V is a set of deterministic edges, and Pr : VN \u00d7 V \u2192[0, 1] are probabilistic transitions, i.e., for each v \u2208VN, we have P u\u2208V Pr[v, u] = 1. As in turn-based games, whenever the game reaches a vertex in Vi that is controlled by Player i, for i \u2208{1, 2}, he choses how the game proceeds, and whenever the game reaches a vertex v \u2208VN, the next vertex is chosen probabilistically according to Pr. Consider a bidding game G that is played on a graph \u27e8V, E\u27e9. The random-turn game that is associated with G is a stochastic game that intuitively simulates the following process. In each turn we throw a fair coin. If it turns \u201cheads\u201d, then Player 1 moves the token, and Player 2 moves if the coin turns \u201ctails\u201d. See 7 \fan example in Figure 3. 
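The play construction described above is easy to prototype. In the sketch below, a strategy is modelled as a function of the history, the current vertex, and the player's own budget (a simplification of the formal definition, adopted only for illustration), ties go to Player 1 as fixed earlier, and the winner pays the loser and picks the next vertex; the stopping set is a convenience for games, such as the one of Figure 1, in which a play effectively ends at an absorbing vertex.

```python
def play(v0, b1_init, f1, f2, rounds, stop=()):
    history, v, b1 = [], v0, b1_init           # budgets sum to 1 throughout
    path = [v]
    for _ in range(rounds):
        if v in stop:                          # an absorbing vertex was reached
            break
        bid1, move1 = f1(history, v, b1)       # Player 1's action
        bid2, move2 = f2(history, v, 1 - b1)   # Player 2's action
        assert 0 <= bid1 <= b1 and 0 <= bid2 <= 1 - b1, "illegal bid"
        if bid1 >= bid2:                       # Player 1 wins and pays Player 2
            winner, bid, v, b1 = 1, bid1, move1, b1 - bid1
        else:                                  # Player 2 wins and pays Player 1
            winner, bid, v, b1 = 2, bid2, move2, b1 + bid2
        history.append((path[-1], bid, winner))
        path.append(v)
    return path

# a toy run on the game of Figure 1: Player 1 follows the Example-1 strategy,
# Player 2 bids at most 0.4 of the total budget and always moves away from t
f1 = lambda h, v, b: (1 / 3, "v2") if v == "v0" else (b, "t")
f2 = lambda h, v, b: (min(b, 0.4), {"v0": "v1", "v2": "v0"}[v])
print(play("v0", 2 / 3 + 0.01, f1, f2, rounds=50, stop={"t", "v1"}))
# -> ['v0', 'v2', 'v0', 'v2', 't']: pushed back once, then Player 1 reaches t
```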
Formally, we de\ufb01ne RT(G) = \u27e8V1, V2, V, \u2206, Pr\u27e9, where we make two additional copies of each vertex in V ; for i \u2208{1, 2}, we have Vi = {vi : v \u2208V }. Nature vertices simulate the coin toss: for v \u2208V , we have Pr[v, v1] = Pr[v, v2] = 1/2. Reaching a vertex vi \u2208Vi, for i \u2208{1, 2}, means that Player i won the coin toss and gets to choose a neighbor u \u2208N(v) to move the token to, thus we have \u2206= {\u27e8vi, u\u27e9: \u27e8v, u\u27e9\u2208E and i \u2208{1, 2}}. The objective of Player 1 in RT(G) is the same as his objective in G. When G is a reachability game, then RT(G) is called a simple stochastic game [24] and the target is the same as in G. When G is a mean-payoff game, then RT(G) is a stochastic mean-payoff game. The weight of v1, v2, and v all equal the weight of v in G. The following de\ufb01nitions are standard, and we refer the reader to [47] for more details. Two strategies f1 and f2 for the two players and an initial vertex v give rise to a probability distribution D(v, f1, f2) over in\ufb01nite paths that start in v. De\ufb01nition 4. (Values in stochastic games) Consider a stochastic game G. When G is a qualitative game with objective O, the value in a vertex v in G, denoted val(G, v), is supf1 inff2 Pr\u03b7\u223cD(v,f1,f2)[\u03b7 \u2208O]. When G is a mean-payoff game, the value in v is supf1 inff2 E\u03b7\u223cD(v,f1,f2)[payoff(\u03b7)]. For the objectives we consider, positional optimal strategies exist. The existence of positional optimal strategies implies that by letting Player 2 choose his strategy before Player 1, i.e., switching the order in the de\ufb01nitions to inff2 supf1, we obtain the same value. Moreover, restricting one or both of the players to use only positional strategies does not change the value. 3 Qualitative Bidding Games We start by surveying the results of [38, 37] on reachability games before moving to study parity bidding games. The model that is studied in [38, 37] uses a slightly different de\ufb01nition of reachability games, which we call double-reachability games: both players have a target, which we denote by vR and vS for \u201creach\u201d and \u201csafe\u201d, and the game ends once one of the targets is reached. We assume that all vertices apart from vR and vS have at least one path to both vR and vS. We later show that reachability bidding games are equivalent to double-reachability bidding games. Theorem 5. [38, 37] Consider a double-reachability bidding game G = \u27e8V, E, vR, vS\u27e9. Then TH(vR) = 0 and TH(vS) = 1, and for all other vertices v \u2208V \\ {vR, vS}, we have TH(v) = 1 2 \u0000TH(v+) + TH(v\u2212) \u0001 , where v\u2212, v+ \u2208N(v) are such that for every v\u2032 \u2208N(v), we have TH(v\u2212) \u2264TH(v\u2032) \u2264TH(v+). Moreover, for every vertex v \u2208V , we have TH(v) = 1 \u2212val(RT(G), v). Proof. We describe the key ideas in the proof for completeness. Consider two optimal memoryless strategies f1 and f2 in RT(G). For v \u2208V \\ {vS, vR}, we de\ufb01ne v\u2212, v+ \u2208N(v) according to these strategies: let v\u2212= f1(v1) and v+ = f2(v). It is not hard to see that val(RT(G), vS) = 0 and val(RT(G), vR) = 1, and for every v \u2208V \\ {vS, vR}, we have val(RT(G), v) \u2208(0, 1) and val(RT(G), v) = 1 2 \u0000val(RT(G), v+) + val(RT(G), v\u2212) \u0001 . We claim that if Player 1\u2019s budget BI at v \u2208V exceeds 1 \u2212val(RT(G), v), then he wins the game. Thus, we show that TH(v) \u22641 \u2212val(RT(G), v). The proof for the other direction is dual. 
For v ∈ {vS, vR}, the claim is trivial. Let B(v) = 1 − val(RT(G), v) and let ε > 0 be Player 1's "surplus", namely B_I = B(v) + ε. Intuitively, Player 1's strategy ensures that he either wins the game or his surplus increases by a constant. For u ∈ V \ {vS, vR}, let b(u) = 1/2 · (val(RT(G), u−) − val(RT(G), u+)), which, by this theorem, is equivalent to 1/2 · (TH(u+) − TH(u−)). For example, in the game that is depicted in Figure 1, we have TH(v1) = 1, TH(v2) = 1/3, and b(v0) = (1 − 1/3)/2 = 1/3. Let n = |V| and, for 1 ≤ i ≤ n, we define εi = ε · 2^{−i}. Until he loses a bidding, assuming 0 ≤ j ≤ n−1 turns have passed and the token is placed on a vertex u ∈ V, Player 1 bids b(u) + ε_{n−j} and proceeds to u− upon winning. Thus, Player 1's bid consists of two parts: the major part is b(u) and the minor part is ε_{n−j}. We show that no matter the outcome of the bidding, assuming the game continues to u′, Player 1's budget exceeds B(u′). If Player 1 wins the bidding, he moves the token to u−. His budget exceeds B(u−) since B(u) − b(u) = B(u−) and ∑_{1≤i≤n} εi ≤ ε. On the other hand, if Player 2 wins, in the worst case, he proceeds to u+ and pays Player 1 at least b(u) + ε_{n−j}. Then, Player 1's budget exceeds B(u+) since B(u) + b(u) = B(u+) and ∑_{0≤ℓ} …

… Assuming |T| = n, he chooses ε1 > · · · > εn, and, for 0 ≤ j ≤ n−1, he bids ε_{n−j} in the j-th bidding. Upon winning a bidding, he proceeds to a vertex that is closer to t than the current vertex. As in the proof of Theorem 5, if Player 1 wins n biddings, he wins the game, and if he loses a bidding, his budget increases by at least εn. By repeatedly following this strategy, his budget will eventually suffice for winning n biddings in a row. The final case is when v is in V \ (S ∪ T). Suppose Player 1 starts in G with a budget of TH(v) + ε in DR(G). Player 1 acts as if his budget is TH(v) + ε/2 and uses the winning strategy in DR(G) to force the game to a vertex in T. Then, he uses the strategy above with an initial budget of ε/2 to force the game to t.

We proceed to study parity bidding games.

Theorem 8. Parity bidding games are linearly reducible to reachability bidding games. Thus, threshold budgets exist in parity bidding games.

Proof. Consider a parity bidding game G = ⟨V, E, p⟩ and let S be a BSCC in G. We claim that there is α ∈ {0, 1} such that for every v ∈ S, we have TH(v) = α. In case α = 0, we call S "winning" for Player 1, and when α = 1, we call S "losing" for Player 1. Let v ∈ S be the vertex with the maximal parity in S. We claim that S is winning for Player 1 iff p(v) is odd. Suppose p(v) is odd; the other case is dual. We show that Player 1 can win from v with an initial budget of ε > 0. Proposition 7 implies that Player 1 can force the game from any vertex in S to v with any positive initial budget. Indeed, we construct a reachability bidding game GS by restricting G to S and setting the target of Player 1 to be v. Since S is a BSCC, there is no vertex from which there is no path to v, thus the proposition implies that the threshold budgets are all 0. Finally, Player 1 splits ε into infinitely many pieces ε1, ε2, . . .,
by defining εi = ε · 2^{−i}, for i ≥ 1. Initially, he plays as if his budget is ε1 and forces the game to visit v using the strategy in the reachability game. Once v is visited, he repeats the strategy with an initial budget of ε2. He continues similarly, forcing infinitely many visits to v. Since v is the vertex with maximal parity in S and it is odd, the strategy guarantees that Player 1 wins.

We now consider vertices in V that are not in a BSCC. Let W, L ⊆ V be the sets of vertices in V that belong to winning and losing BSCCs for Player 1, respectively. Let W′ and L′ be the sets of vertices with no path to vertices in L and W, respectively. Note that W ⊆ W′ and L ⊆ L′, and as in the above, for every v ∈ W′, we have TH(v) = 0, and for every v ∈ L′, we have TH(v) = 1. We construct a double-reachability bidding game DR(G) by setting the target for Player 1 to be W′ and the target for Player 2 to be L′. A similar argument to the one above shows that for every v ∈ V \ (W′ ∪ L′), TH(v) in G equals TH(v) in DR(G), and we are done.

We address the computational complexity of finding threshold budgets. We first phrase the problem as a decision problem.

Definition 9. The input to the THRESH-BUDGET problem is a parity bidding game G and a vertex v, and the goal is to decide whether TH(v) ≥ 1/2.

It is stated in [37] that THRESH-BUDGET is in NP and not known to be in P. It follows from Theorems 8 and 5 that THRESH-BUDGET is linearly reducible to the problem of solving a stochastic reachability game, which is known to be in NP and coNP [24]. Thus, we have the following.

Theorem 10. For parity bidding games, THRESH-BUDGET is in NP and coNP.

We conclude this section by studying a stronger notion of winning that is called promptness [35]. Büchi games are a special case of parity games in which the maximal parity index is 1: vertices with parity 1 are called accepting and a play is winning for Player 1 iff it visits an accepting state infinitely often. We show a negative result for Büchi bidding games: intuitively, we show that under mild assumptions Player 1 cannot win promptly.

Theorem 11. Consider a strongly-connected Büchi bidding game G = ⟨V, E⟩ and let F ⊆ V be a set of accepting vertices. If G contains a cycle that does not traverse a vertex in F, then for every k ∈ ℕ and every initial positive budget, Player 2 can force the game not to visit an accepting vertex for at least k turns.

Proof. Let C be a cycle in G with no accepting state. We construct a reachability game Cyc(G, C, k) = ⟨V × {0, . . . , k}, E′, t′⟩ from G in which we associate Player 2 in G with Player 1 in Cyc(G, C, k), and his goal is to traverse the cycle C k times in a row. Intuitively, the structure of Cyc(G, C, k) can be thought of as maintaining a counter: when the token is on the vertex ⟨v, i⟩, it means that C was traversed i times. We describe E′ formally. Let C = e1, . . . , en be a sequence of edges. Let i ∈ {1, . . . , k} and e = ⟨v, u⟩ ∈ E. Suppose e appears in C but it is not the first edge, thus e ≠ e1. Then, we have ⟨⟨v, i⟩, ⟨u, i⟩⟩ ∈ E′, which means that when the game proceeds on an edge in C, the counter stays unchanged.
When e = e1 is the \ufb01rst edge in C and i < k, we increment the counter, thus \u27e8\u27e8v, i\u27e9, \u27e8u, i+1\u27e9\u27e9\u2208 E\u2032. When e is not in C, we reset the counter and drop to the \ufb01rst level, thus \u27e8\u27e8v, i\u27e9, \u27e8u, 0\u27e9\u27e9\u2208E\u2032. Let v0 be the \ufb01rst vertex in C. Then, the target of Player 2 is \u27e8v0, k\u27e9, which means that the cycle is traversed k times in a row. It is not hard to see that a winning play for Player 1 in Cyc(G, C, k) corresponds to traversing the cycle C k times in a row in G. Moreover, since G is strongly-connected, the target is reachable from all the vertices in Cyc(G, C, k). Thus, by Proposition 7, the threshold budgets are 0 in all vertices. We describe a Player 2 strategy in G that ensures that there is no bound on the frequencies of visits to accepting states. Suppose Player 2 starts with a budget of \u03f5 > 0 in G. He splits his budget into in\ufb01nitely many parts \u03f51, \u03f52, . . .. For i \u22651, suppose the token is on v \u2208V . Player 2 plays according to Player 1\u2019s winning strategy from \u27e8v, 0\u27e9in Cyc(G, C, i) with an initial budget of \u03f5i to force the game to cycle C i times in a row. Thus, for every i \u22651, there is a sequence of i \u00b7 |C| with no visit to an accepting state, and we are done. 4 Mean-Payoff Bidding Games This section consists of our most technically challenging contribution. We show that threshold budgets exist in mean-payoff bidding games and construct optimal strategies for the players. The key component of the proof is a quantitative solution to strongly-connected mean-payoff bidding games. Similar to the proof structure for parity games, the solution allows us to solve general games by \ufb01rst reasoning about the bottom strongly-connected components of the game and then constructing a reachability game for the rest of the vertices. Consider a strongly-connected mean-payoff bidding game G. Recall that a play in a mean-payoff game has a payoff, which is Min\u2019s cost and Max\u2019s reward. Assuming both players start with a positive initial budget, we are intuitively interested in the minimal payoff Min can guarantee assuming his budget is r \u2208 (0, 1).1 Since G is strongly-connected and the de\ufb01nition of the payoff is pre\ufb01x independent, Proposition 7 implies that the optimal payoff does not depend on the initial vertex. Thus, it is meaningful to refer to the mean-payoff value of G, which we formally de\ufb01ne as follows. De\ufb01nition 12. (Mean-payoff value) Consider a strongly-connected game G and r \u2208(0, 1). The meanpayoff value of G w.r.t. r, denoted MPr(G) is a value c \u2208I R such that \u2022 If Min\u2019s budget is greater than r, then he can guarantee that the payoff is at most c. \u2022 Min cannot do better: for every \u03f5 > 0, if Max\u2019s initial budget is greater than 1 \u2212r, then he can guarantee a payoff of at least c \u2212\u03f5. We justify the asymmetry in the de\ufb01nition by noting that the de\ufb01nition of the payoff of a play uses lim inf and thus gives Min an advantage. The following theorem consists of the main technical contribution of this section. It intuitively states that the initial budget does not matter in strongly-connected mean-payoff bidding games and shows an extended probabilistic connection for these games. In Section 4.3.1, we contrast this property of the Richman-bidding mechanism that we use with the properties of mean-payoff bidding games with other bidding mechanisms. 
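Theorem 13 below is phrased in terms of MP(RT(G)), the mean-payoff value of the uniform random-turn game. As a rough, purely illustrative way to obtain this quantity, one can iterate the optimality equation g + h(v) = w(v) + 1/2 · (max_{u∈N(v)} h(u) + min_{u∈N(v)} h(u)). The sketch below is ours, the iteration count is arbitrary, and convergence of this plain relative value iteration is not guaranteed on every instance.

```python
# Relative value iteration sketch for the mean-payoff value of the uniform
# random-turn game RT(G) on a strongly-connected graph.  `neighbors` maps each
# vertex to its successors and `weight` to its weight; returns an estimate g of
# MP(RT(G)) together with relative values h.
def mean_payoff_random_turn(neighbors, weight, iters=10_000):
    h = {v: 0.0 for v in neighbors}
    ref = next(iter(neighbors))                  # reference vertex for normalization
    g = 0.0
    for _ in range(iters):
        new = {v: weight[v] + 0.5 * (max(h[u] for u in succ) + min(h[u] for u in succ))
               for v, succ in neighbors.items()}
        g = new[ref] - h[ref]                    # current estimate of the mean payoff
        h = {v: new[v] - new[ref] for v in new}  # re-center at the reference vertex
    return g, h
```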
Recall that the mean-payoff value of a vertex in a stochastic mean-payoff game is the expected payoff of the game when both players play optimally. It is not hard to show that since G is strongly-connected, the mean-payoff values of all the vertices in RT(G) is the same, thus it is meaningful to refer to the mean-payoff value of RT(G), which we denote by MP(RT(G)). 1We use r for \u201cratio\u201d of the total budget as is used in other bidding mechanisms in which the sum of budgets is not constant. See Section 4.3.1. 11 \fTheorem 13. Consider a strongly-connected mean-payoff bidding game G. The mean-payoff value of G exists and does not depend on the initial budget: there exists c \u2208I R such that for every r \u2208(0, 1), we have MPr(G) = c. Moreover, the value of G equals the mean-payoff value of the random-turn mean-payoff game RT(G) in which in each turn, the player who chooses a move is selected uniformly at random, thus for every r \u2208[0, 1], we have MPr(G) = MP(RT(G)). The two cases of Theorem 13 are proven separately for Min in Theorem 21 and for Max in Theorem 35 in the following sections. We now describe the theorem\u2019s implications. Recall that Min wins a mean-payoff game if he can guarantee that the payoff is non-positive. Theorem 14. Threshold budgets exist in mean-payoff bidding games. The THRESH-BUDGET problem for mean-payoff bidding games is in NP and coNP. Proof. Consider a general mean-payoff bidding game G = \u27e8V, E, w\u27e9. Consider v \u2208V that belongs to a BSCC S of G. Let GS be the game restricted to S. Theorem 13 states that if MP(RT(GS)) \u22640, then with every positive initial budget, Min can guarantee a payoff of at most 0. Thus, the threshold budget in v is 0. On the other hand, the theorem implies that if MP(RT(GS)) > 0, Max can guarantee a positive payoff with any positive initial budget, thus the threshold in v is 1. In the \ufb01rst case, we call S winning for Min, and in the second case, we call S losing for Min. We construct a double-reachability game DR(G) in which we associate Min with Player 1 and set his target to be the set of vertices from which there is no path to losing BSCC, and the target for Max, which we associate with Player 2, is the set of vertices from which there is no path to a BSCC that is winning for Min. Similarly to the proof of Theorem 8, the threshold budgets in DR(G) coincide with the threshold budgets in G. Finally, we show that THRESH-BUDGET is in NP by showing how to verify that TH(v) \u22651/2. For each BSCC S in G, we guess positional strategies in the stochastic game RT(GS). In addition, we guess two target sets of vertices T1, T2 \u2286V and construct the reachability stochastic game RT(DR(G)) using them. Finally, we guess two positional strategies in RT(DR(G)). We \ufb01rst verify that the strategies are optimal in the mean-payoff stochastic games, which can be done in polynomial time. Thus, we obtain the values in all these games. We use the values to verify our guess of the targets T1 and T2. Namely, we check whether every BSCC S that is winning for Min is contained in T1 and that there is no path from a vertex in T1 to a BSCC that is winning for Max, and dually for T2. Finally, we verify that the positional strategies in the reachability stochastic game are optimal. The solution to RT(DR(G)) gives us the threshold budget in v and we accept if it is at least 1/2. The size of the witness is polynomial in the input and the veri\ufb01cation of the guess can be done in polynomial time. 
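The proof of Theorem 14 only needs, per BSCC S, the sign of MP(RT(G_S)); the rest is the double-reachability reduction already used for parity games. The sketch below is not the NP certificate described above, just a direct computation of the two target sets; it assumes networkx for the SCC decomposition and takes a mean-payoff estimator as a parameter, for instance the relative value iteration sketched earlier.

```python
# Classify the BSCCs of a mean-payoff bidding game as winning/losing for Min by
# the sign of MP(RT(G_S)), and compute the two target sets of the
# double-reachability game DR(G) used in the proof of Theorem 14.
import networkx as nx

def reduce_to_double_reachability(edges, weight, mp_solver):
    G = nx.DiGraph(edges)
    cond = nx.condensation(G)                          # DAG of SCCs
    win_min, lose_min = set(), set()                   # vertices of winning/losing BSCCs
    for c in cond.nodes:
        if cond.out_degree(c) == 0:                    # bottom SCC
            S = cond.nodes[c]["members"]
            succ = {v: [u for u in G.successors(v) if u in S] for v in S}
            mp, _ = mp_solver(succ, {v: weight[v] for v in S})
            (win_min if mp <= 0 else lose_min).update(S)
    # Min's target: no path to a losing BSCC; Max's target: no path to a winning one.
    target_min = {v for v in G if not (nx.descendants(G, v) | {v}) & lose_min}
    target_max = {v for v in G if not (nx.descendants(G, v) | {v}) & win_min}
    return target_min, target_max
```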
The algorithm above shows that THRESH-BUDGET is in coNP since the only change is to accept when TH(v) < 1/2. 4.1 An optimal Min strategy in strongly-connected mean-payoff bidding games In this section we construct an optimal strategy for Min in a strongly-connected mean-payoff bidding game. Since the de\ufb01nition of payoff favors Min, this is technically easier than the construction for an optimal strategy for Max, which we construct in the following section. Consider a strongly-connected mean-payoff bidding game G. In this section, we assume w.l.o.g. that MP(RT(G)) = 0 as otherwise we can decrease all weights by this value. We construct a bidding strategy for Min that, with any positive initial budget, guarantees that the payoff is non-positive. Recall that the energy of a \ufb01nite play is the sum of the weights that it traverses. The following lemma shows that it suf\ufb01ces to construct a Min strategy that keeps the energy bounded from above. Lemma 15. Consider a mean-payoff bidding game G. Suppose that for every positive initial budget \u03f5 > 0 and initial energy kI \u2208I N, there is a constant N \u2208I N such that Min has a strategy fm that keeps the energy bounded by N. That is, for every Max strategy fM and initial vertex u, a \ufb01nite play \u03c0 = play(u, fm, fM) 12 \feither reaches energy 0 or has E(\u03c0n) \u2264N, for every 1 \u2264n \u2264|\u03c0|. Then, Min can guarantee a non-positive payoff in G. Proof. Suppose Min has a strategy fm as the above, and we describe a Min strategy f\u2032 m that guarantees a non-positive payoff. Suppose Min\u2019s initial budget is \u03f5 > 0. He splits his budget into in\ufb01nitely many parts \u03f51, \u03f52, . . .. Initially, Min plays according to fm as if his budget is \u03f51 until an energy of 0 is reached. When energy 0 is reached again, he bids 0 until the energy increases. Once the energy is positive, Min plays according to fm with an initial budget of \u03f52 until an energy of 0 is reached, and so on. Thus, the strategy guarantees that either (1) an energy of 0 is reached in\ufb01nitely often, or (2) if at some point an energy of 0 is never reached, then the energy stays bounded from above. Recall that the de\ufb01nition of the payoff of an in\ufb01nite path \u03b7 = v1, v2, . . . is payoff(\u03b7) = lim infn\u2192\u221eE(\u03b7n)/n. Note that an in\ufb01nite path that satis\ufb01es one of the properties (1) or (2) above has a non-negative payoff. The importance of moving in a vertex. The \ufb01rst component of the strategy construction devises a measure of how \u201cimportant\u201d it is to move in each vertex in the game. Our de\ufb01nition relies on the concept of potential, which was de\ufb01ned in the context of the strategy improvement algorithm to solve stochastic games [30]. The potential of v, denoted Po(v), is a known concept in probabilistic models and its existence is guaranteed [47]. We formalize the notion of the \u201cimportance\u201d of moving in a vertex v by de\ufb01ning its strength, which we denote by St(v), and is formally the maximal difference in potentials of the neighbors of v. De\ufb01nition 16. (Potentials and strengths) Consider two optimal positional strategies f and g in RT(G), for Min and Max, respectively. Recall that when constructing RT(G), for every vertex v \u2208V , we add two copies vMin and vMax, that are controlled by Min and Max, respectively. For v \u2208V , let v\u2212, v+ \u2208V be such that f(vMin) = v\u2212and g(vMax) = v+. 
The potential of v is a function that satisfies the following, and the strength in v is the difference in potentials:

Po(v) = 1/2 · (Po(v+) + Po(v−)) + w(v) − MP(RT(G))   and   St(v) = 1/2 · (Po(v+) − Po(v−)).

There are optimal strategies for which Po(v−) ≤ Po(v′) ≤ Po(v+), for every v′ ∈ N(v), which can be found for example using the strategy iteration algorithm.

Consider a strongly-connected mean-payoff bidding game G = ⟨V, E, w⟩. Consider a finite path η = v1, . . . , vn in G. We intuitively think of η as a play, where for every 1 ≤ i < n, the bid of Min in vi is St(vi) and he moves to v−_i upon winning. Thus, if v_{i+1} = v−_i, we say that Min won in vi, and if v_{i+1} ≠ v−_i, we say that Min lost in vi. Let W(η) and L(η) respectively denote the indices in which Min wins and loses in η. We call Min's wins investments and Min's losses gains, where intuitively he invests in decreasing the energy and gains budget whenever the energy increases. Let G(η) and I(η) be the sum of gains and investments in η, respectively, thus G(η) = ∑_{i∈L(η)} St(vi) and I(η) = ∑_{i∈W(η)} St(vi). Recall that the energy of η is E(η) = ∑_{1≤i<n} w(vi).

… Min chooses an N ∈ ℕ such that B_I > kI/N, which is clearly possible since kI is a constant and B_I is positive. Min always bids 1/N as long as the energy is positive. We show that the following invariant is maintained: if the energy level reaches 0 ≤ k ∈ ℕ, Min's budget is greater than k/N. First, our choice of N implies that the invariant holds initially. Second, assuming that the invariant holds before a bidding, we show that it holds after it. Suppose that the energy is k and Min's budget is k/N + ε. If Min wins, the energy decreases by 1 to k − 1 and his budget decreases by 1/N to (k − 1)/N + ε. On the other hand, if Max wins the bidding, he bids at least as much as Min, thus Min's budget increases by at least 1/N. The energy increases by 1 to k + 1 and Min's budget increases to (k + 1)/N + ε. The invariant implies that if the energy does not reach 0, then it is bounded by N. Indeed, if the energy reaches k = N, Min's budget is N/N + ε, which is impossible since the sum of budgets is 1.

We describe the intuition behind Min's strategy. In the example above, Min's strategy puts a price of 1/N on changing the energy: whenever the energy decreases by 1, he pays 1/N, and whenever the energy increases, he gains at least 1/N. Lemma 17 allows us to generalize this connection between changes in energy and changes in budget. In a vertex v, Min bids (1/N) · St(v) and proceeds to v− upon winning. For example, consider the game that is depicted in Figure 4 and the cycle π = v3, v2, v1, v3 that results from Min winning the two biddings followed by a Max win. The change in energy is w(v3) + w(v2) + w(v1) = −0.5 + 1 − 2 = −1.5, and Min's budget decreased by at most (1/N) · (St(v3) + St(v2) − St(v1)) = (1.5 + 2 − 2)/N = 1.5/N. Thus, Min invests at most c/N in a decrease of c units of energy, and he gains at least c/N units of budget when the energy increases by c units. A similar argument as in the lemma above shows an invariant between the energy and the budget and, in turn, that the energy stays bounded from above.
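Given the potentials (obtained, e.g., from the strategy iteration algorithm mentioned above, or approximated by the value-iteration sketch shown earlier), extracting v−, v+, the strengths, and Min's bids is mechanical. The following sketch is ours and only illustrates the bookkeeping; `Po` is assumed to map vertices to (approximate) potentials and `N` is the normalization constant chosen in the next paragraph.

```python
# Illustrative extraction of v-, v+, strengths, and Min's bids from potentials
# (Definition 16).  `Po` maps each vertex to its potential and `N` is the
# normalization constant; both are assumed to come from elsewhere.
def min_plan(neighbors, Po, N):
    plan = {}
    for v, succ in neighbors.items():
        v_minus = min(succ, key=lambda u: Po[u])   # Min's move upon winning
        v_plus = max(succ, key=lambda u: Po[u])    # Max's move upon winning
        strength = 0.5 * (Po[v_plus] - Po[v_minus])
        plan[v] = {"move": v_minus, "strength": strength, "bid": strength / N}
    return plan
```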
To formally define Min's strategy we show how to choose N in general graphs, which requires some book-keeping due to paths that are not cycles. We call Min's strategy fm. Consider a positive initial budget B ∈ (0, 1] for Min and an initial energy kI ∈ ℕ. Let PoM = max_{v∈V} |Po(v)| and StM = max_{v∈V} |St(v)|. We choose N ∈ ℕ such that B > (kI + StM + 2PoM)/N. When the game reaches v ∈ V, Min bids St(v)/N and moves to v− upon winning.

Lemma 20. Consider a Max strategy fM, an initial energy kI ∈ ℕ, and let π = play(fm, fM) be a finite play whose energy stays positive. Thus, for every prefix πn, for 0 ≤ n ≤ |π|, we have kI + E(πn) > 0. Let k = kI + E(π) be the energy following π. Then, Min's budget following π is at least (k + StM)/N.

Proof. The invariant clearly holds initially. With a slight abuse of notation, let G(π) be the sum of "gains" in path(π), namely the sum of strengths in vertices in which Max wins the bidding, and similarly let I(π) be the "investments" in path(π), namely the sum of strengths in vertices in which Min wins the bidding. Let B be Min's initial budget and B′ his budget following π. Since Min bids St(v)/N in a vertex v, we have B′ = B + (G(π) − I(π))/N. From Lemma 17, we have 2PoM − E(π) ≥ I(π) − G(π). By combining with k = kI + E(π) and re-arranging, we have

2PoM − E(π) ≥ I(π) − G(π) = N · (B − B′),   thus   B − (2PoM − k + kI)/N ≤ B′.

Since we define B > (kI + StM + 2PoM)/N, we obtain that B′ > (k + StM)/N, and we are done.

Note that the strategy fm is legal, i.e., Min always has sufficient budget to bid according to fm. Indeed, for a choice N ∈ ℕ made by the strategy, the maximal bid in a vertex in G is StM/N. Lemma 20 implies that Min has sufficient budget for the bid. Moreover, since Min's budget cannot exceed 1, Lemma 20 implies that if the energy does not reach 0, then it is bounded by N − StM. Combining with Lemma 15, we obtain the first direction in Theorem 13.

Theorem 21. Let G be a strongly-connected mean-payoff bidding game with MP(RT(G)) = 0. Then, from every vertex in G and with any positive initial budget, Min can guarantee a non-positive payoff.

4.2 An optimal Max strategy in strongly-connected mean-payoff bidding games

In this section we focus on the more challenging task of constructing an optimal strategy for Max: given a strongly-connected mean-payoff bidding game G with MP(RT(G)) > 0, we construct a bidding strategy for Max in G that guarantees a positive payoff. The following lemma reduces the problem of optimizing the payoff to the problem of bounding the energy from below.

Lemma 22. Assume that for every Max initial budget ε > 0 in a game G with MP(RT(G)) > 0, he can keep the energy bounded from below by a constant N(G, ε) ∈ Z. Then, Max can guarantee a positive mean-payoff value in G.

Proof. Let G′ be a mean-payoff bidding game that is obtained from G by decreasing MP(RT(G))/2 from all the weights in G. It is not hard to see that MP(RT(G′)) = MP(RT(G))/2, and in particular it is positive. Let ε > 0, and suppose Max plays in G according to a strategy that keeps the energy above N(G′, ε) in G′. For a finite play π in G, we have E(π) ≥ |π| · MP(RT(G))/2 + N(G′, ε).
Since N(G′, ε) is a constant, its contribution to the payoff vanishes as the length of π tends to infinity, thus the payoff is at least MP(RT(G))/2, which is positive.

Bounding the energy from below is more challenging than Min's goal in the previous section of bounding the energy from above. A first attempt for constructing a Max strategy would be to use a similar strategy as in the previous section, only with reversed roles: Max's strategy would guarantee that whenever the energy is k ∈ ℕ, his budget exceeds k/N, for some N ∈ ℕ. He would ensure that whenever the energy increases by one unit, his budget decreases by at most 1/N, and whenever the energy decreases by one unit, his budget increases by at least 1/N. This attempt fails since Min reacts by allowing Max to win for a while and draw the energy all the way up to N, where Max's budget runs out. When Min has all (or most of) the budget, he can win an arbitrary number of biddings in a row. Thus, he can draw the energy arbitrarily low, causing Max to lose since the energy would not be bounded from below. The moral of this attempt is that Max should avoid exhausting his budget. He cannot use a fixed normalization factor of 1/N. Rather, the normalization factor should decrease as the energy increases. In the next two sections we devise a normalization scheme, first in simpler strongly-connected components and then in general ones.

4.2.1 An optimal Max strategy in recurrent mean-payoff bidding games

A game G is called recurrent if it is strongly-connected and there is a vertex u ∈ V such that every cycle in G includes u (see Figure 5). We refer to u as the root of G. In this section, we construct an optimal strategy for Max in a recurrent mean-payoff bidding game.

An adapted definition of importance. Recall that Min's strategy in the previous section matches changes in budget with changes in energy. The first component in Max's strategy makes this connection asymmetric: we find z > 1 such that when the energy increases by c units, Max invests at most c units of budget, but when the energy decreases by c units, Max gains at least z · c units of budget. Consider a recurrent mean-payoff bidding game G = ⟨V, E, w⟩ with MP(RT(G)) > 0. We alter the weights to give an advantage to Min. For z > 1, let Gz = ⟨V, E, wz⟩, where wz(v) = w(v) if w(v) ≥ 0, and wz(v) = z · w(v) if w(v) < 0. Clearly, MP(RT(G)) ≥ MP(RT(Gz)). We select z > 1 such that MP(RT(Gz)) ≥ 0. This is possible since by additively changing all the weights in RT(G) by a constant c, the value changes by c. We select z such that z · max_{v∈V} w(v) ≤ MP(RT(G)).

Consider a finite path η in G. The following lemma connects the energy of η in G with its energy in Gz. Note that E(η) might be negative, thus neither claim follows from the other. Let Ez(η) be the accumulated energy in Gz.

Lemma 23. Consider a finite path η ∈ cycles(u). Then, E(η) ≥ Ez(η) and zE(η) ≥ Ez(η).

Proof. Let E≥0(η) and E<0(η) be the sum of non-negative weights and negative weights in η, respectively. We have E(η) = E≥0(η) + E<0(η) and Ez(η) = E≥0(η) + zE<0(η). The inequality E(η) ≥ Ez(η) is immediate.
For the second inequality, we multiply the first equality by z and subtract it from the second to get Ez(η) − zE(η) = E≥0(η) − zE≥0(η) ≤ 0, and we are done.

We adapt Lemma 17 to our setting. We find optimal positional strategies gm and gM for Min and Max, respectively, in the stochastic game RT(Gz). Using them, we define, for each vertex v ∈ V, vertices v− and v+ by setting v− = gm(vm) and v+ = gM(vM). We respectively denote by Poz and Stz the potential and strength functions of Gz. For a finite path η = v1, . . . , vn, we denote Gz(η) = ∑_{v_{i+1} ≠ v+_i} Stz(vi) and Iz(η) = ∑_{v_{i+1} = v+_i} Stz(vi). The proof of the following lemma is dual to the proof of Lemma 17.

Lemma 24. For a finite path η from v to u, we have Poz(v) − Poz(u) ≤ Ez(η) + Gz(η) − Iz(η).

Let pay(η) = Iz(η) − Gz(η). Combining the two lemmas above, we obtain the required asymmetry between "gaining" and "investing".

Lemma 25. Consider a path η ∈ cycles(u). When E(η) ≥ 0, we have pay(η) ≤ E(η), and when E(η) < 0, we have −pay(η) ≥ −z · E(η).

Figure 5: A recurrent mean-payoff bidding game G with MP(RT(G)) > 0. For z = 4/3, the altered weights in a vertex are depicted below the original weight.

Example 26. Consider the recurrent mean-payoff bidding game G that is depicted in Figure 5. We have MP(RT(G)) = 0.5, thus Max can guarantee a positive payoff. We illustrate Lemma 25. The weights of the vertices in G are depicted on top, and, for z = 3/2, the weights of the negative-weighted vertices are depicted below. Consider the path η = u, v1, v2, u. Thus, Max wins the first bidding and loses the second. The bids in vertices with only one outgoing edge are 0. With this choice of z, we have MP(RT(Gz)) = 0, thus we get equality between energy and budget in Gz. Indeed, the change in energy in Gz is Ez(η) = wz(u) + wz(v1) + wz(v2) = −3. On the other hand, Max's gain in η is Stz(v1) = 3.5 and his investment is Stz(u) = 0.5, thus his budget increases by 3.5 − 0.5 = 3. However, the "real" change in energy is the one exhibited in G, which is E(η) = w(u) + w(v1) + w(v2) = −2. Thus, in a decrease of 2 units of energy, Max gains 2 · z = 3 units of budget rather than only 2. The worst case for Max is in paths that traverse only positive or only negative weights. In paths that traverse a mix of weights, the inequality in Lemma 25 is strict. For example, consider the path u, v4, v5, u. The change in energy is w(u) + w(v4) + w(v5) = −1 and the change in budget is Stz(u) + Stz(v4) = 0.5 + 1.5 = 2 > 1 · 3/2.

Max's strategy. When the game reaches a vertex v, Max bids St(v) · γ, where γ is a normalization factor that depends on the energy in the last visit to u. That is, the normalization changes only after visiting u. In order to define γ, we select N ∈ ℕ and partition the natural numbers into energy blocks of size N. Each energy block is associated with its own normalization, which we call the currency of the block. Recall that we chose z > 1. For n ∈ ℕ, the currency of the n-th block is z^{−n}.
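The currency bookkeeping can be made concrete with two small helpers. The block size N, the factor z, and the convention that the n-th block is {N(n−1), . . . , Nn−1} are taken from the construction above; the function names and the choice to pass the strength in Gz explicitly are our own illustrative assumptions.

```python
# Helpers for Max's block/currency bookkeeping in recurrent games: energy k
# lies in the block N_n = {N*(n-1), ..., N*n - 1}, whose currency is z**(-n);
# the bid at v uses the currency of the block containing the energy at the
# last visit to the root u.
def block_index(energy, N):
    return energy // N + 1

def max_bid(strength_z, energy_at_last_root_visit, N, z):
    currency = z ** (-block_index(energy_at_last_root_visit, N))
    return strength_z * currency
```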
The key idea follows from combining with Lemma 25: investing in the n-th block is done in the currency of the n-th block while gaining in the n-block is in the higher currency of the (n \u22121)-th block. 17 \fExample 27. Consider the game G that is depicted in Figure 5, and consider two plays \u03c01 = u, v1, v3, u and \u03c02 = u, v1, v2, u. Suppose the energy in u is in the 3-rd block, thus the currency is 1.5\u22123. In \u03c01, the energy increases, i.e., we have E(\u03c01) = 4, and Max matches his change in budget in the currency of the 3-rd block, i.e., Max invests (0.5+ 3.5) \u00b71.5\u22123 = 4\u00b71.5\u22123. On the other hand, in \u03c02, we have a decrease of energy, i.e., we have E(\u03c02) = \u22122, and Max gains (\u22120.5 + 3.5) \u00b7 1.5\u22123 = 2 \u00b7 1.5\u22122, thus we have a connection between changes in energy and budget, only in the higher currency of the 2-nd block. We choose N \u2208I N as follows. Let cycles(u) be the set of paths that are simple cycles from u to itself. A crucial advantage of recurrent games is that all cycles pass through u. Our de\ufb01nition relies on the maximal energy of such a cycle, which we denote by EM = max\u03b7\u2208cycles(u) |E(\u03b7)|. We choose N \u2208I N such that N \u2265(Stz M + 3EM)/(1 \u2212z\u22121), where Stz M is the maximal strength of a vertex in Gz. For n \u22651, we refer to the n-th block as Nn, and we have Nn = {N(n \u22121), N(n \u22121) + 1, . . . , Nn \u22121}. We use \u03b2\u2193 n and \u03b2\u2191 n to mark the upper and lower boundaries of Nn, respectively. We use a N\u2265n to denote the set {Nn, Nn+1, . . .}. Consider a \ufb01nite play \u03c0 that ends in u and let visitu(\u03c0) be the set of indices in which \u03c0 visits u. Let kI \u2208I N be an initial energy. We say that \u03c0 visits Nn if kI + E(\u03c0) \u2208Nn. We say that \u03c0 stays in Nn starting from an index 1 \u2264i \u2264|\u03c0| if for all j \u2208visitu(\u03c0) such that j \u2265i, we have kI + E(\u03c01, . . . , \u03c0j) \u2208Nn. We are ready to describe Max\u2019s strategy, which we denote by fM. Max chooses kI \u2208I N and plays as if the initial energy in kI. With the right choice of kI, his strategy will keep the energy non-negative. In turn, assuming that the real initial energy is 0, we obtain that the energy stays above \u2212kI. Suppose the game reaches a vertex v and the energy in the last visit to u was in Nn, for n \u22651. Then, Max bids z\u2212n \u00b7 Stz(v) and proceeds to v+ upon winning. Consider an initial Max budget BI M > 0. We choose an initial energy kI \u2208I N with which fM guarantees that energy level 0 is never reached. Recall the intuition that increasing the energy by a unit requires an investment of a unit of budget in the right currency. Thus, increasing the energy from the lower boundary \u03b2\u2193 n of Nn to its upper boundary \u03b2\u2191 n, costs N \u00b7z\u2212n. We de\ufb01ne cost(Nn) = N \u00b7 z\u2212n and cost(N\u2265n) = P\u221e i=n cost(Nn). A \ufb01rst attempt for the de\ufb01nition of kI would be \u03b2\u2193 n such that BI M > cost(N\u2265n), which intuitively means that Max\u2019s initial budget would never run out even if he always wins. This is almost correct. We need some wiggle room to allow for changes in the currency. Also, note that drawing the energy to 0 from \u03b2\u2193 n would cost Min a total of Pn i=1 cost(Ni). We choose kI so that this cost is greater than 1, thus we ensure that the energy never reaches 0. De\ufb01nition 28. 
Let kI be \u03b2\u2193 n such that BI M > wiggle \u00b7 z\u2212(n\u22121) + cost(N\u2265n), where wiggle = 2EM + Stz M, and Pn i=1 cost(Ni) > 1. Correctness. We prove an invariant on Max\u2019s budget throughout the game, which will imply that the energy never reaches 0 when it starts from kI, and hence the correctness of the strategy. Consider a Min strategy fm, and let \u03c0 = play(fm, fM) be a \ufb01nite play. Let visitu(\u03c0) = \u03c4 1 \u00b7 . . . \u00b7 \u03c4 m be a partition of \u03c0 such that for each 1 \u2264i \u2264m, the path(\u03c4i) is a cycle-less path that ends in u. We de\ufb01ne a coarser partition of \u03c0 into sub-plays in which the same currency is used (recall that we change currency at u and when switching between energy blocks). Let \u03c0 = \u03c01 \u00b7 \u03c02 \u00b7 . . . \u00b7 \u03c0\u2113\u00b7 \u03c0\u2113+1, where for each 1 \u2264i \u2264\u2113, we have \u03c0i = \u03c4 i1 \u00b7 . . . \u03c4 ini, there is an energy block Nn such that the sub-play \u03c4 i1 \u00b7 . . . \u00b7 \u03c4 ini\u22121 stays in Nn and the sub-play \u03c0i visits a neighboring energy block Nn\u22121 or Nn+1. We then call Nn the energy block of \u03c0i. We use ei to denote the energy at the end of \u03c0i, thus ei = kI + E(\u03c0i). Let Nn be the energy block of \u03c0i. There can be two options; either the energy decreases in \u03c0i, thus the energy before it ei\u22121 is in Nn+1 and the energy after it ei is in Nn, or it increases, thus ei\u22121 \u2208Nn\u22121 and ei \u2208Nn. We then call \u03c0i decreasing and increasing, respectively. Recall that \u03b2\u2191 n and \u03b2\u2193 n are the upper and lower boundaries of the energy block Nn. Further recall that EM is the largest energy of a cycle in G. Thus, whenever the energy enters Nn it is within EM of the boundary (see Figure 6). In the case that \u03c0i is decreasing, the energy at the end of \u03c0i is ei \u2265\u03b2\u2191 n \u2212EM and in the case it is increasing, we have ei \u2264\u03b2\u2193 n + EM. Let \u21130 = 0, and for i \u22651, let \u2113i = (\u03b2\u2193 n+1 \u2212EM) \u2212ei in the \ufb01rst 18 \fcase and \u2113i = (\u03b2\u2193 n + EM) \u2212ei in the second case. Note that \u2113i \u2208{0, . . . , 2EM}. We prove the following invariant on Max\u2019s budget when changing between energy blocks. Lemma 29. For every i \u22650, suppose \u03c0i ends in Nn. The budget of Max at the end of \u03c0i is at least (wiggle + \u2113i) \u00b7 z\u2212(\u02c6 n\u22121) + cost(N\u2265\u02c6 n), where \u02c6 n = n + 1 if \u03c0i is decreasing and \u02c6 n = n if \u03c0i is increasing. Proof. The proof is by induction. The base case follows from our choice of initial energy. For i \u22651, assume the claim holds for \u03c0i\u22121 and we prove for \u03c0i. There are four cases for the energy changes in \u03c0i, which we depict in Figure 6. Recall that EM is maximal energy of a simple cycle from u and that we switch currencies at u. Thus, whenever we switch currency it means that the play visits a new energy block, and the \ufb01rst location in the block is within EM of the boundary. Intuitively, Case 1 is the simplest and follows from matching energy and budget in Nn. In Cases 3 and 4 Max invests and gains in the \u201cwrong\u201d currency. For example, in Case 3, if investing and gaining was in the same currency, Max would have gained in the currency of Nn instead of the higher currency of Nn\u22121. 
Finally, in Case 2, we again use this mismatch to ensure that the gain "covers" the cost of Nn and, in addition, there is a "surplus" that covers the required wiggle room. Let e_{i−1} be the energy at the end of π_{i−1}. Consider Cases 1, 3, and 4 in the figure. We prove the first of these cases; the others are similar. In Case 3, we have e_{i−1} ∈ N_{n+1}, πi decreases into Nn, and ei is near β↑_n. Thus, we have ℓ_{i−1} = (β↓_{n+1} + wM) − e_{i−1} and ℓi = (β↓_{n+1} + wM) − ei. Since we decrease in blocks, we have ℓ_{i−1} < ℓi and E(πi) = ℓ_{i−1} − ℓi. By Lemma 25, we have z^{n+1} · pay(πi) ≥ z · (ℓ_{i−1} − ℓi), thus the gain in budget in πi is at least (ℓi − ℓ_{i−1}) · z^{−n}. The induction hypothesis states that Max's budget in π_{i−1} is at least (EM + StM + ℓ_{i−1}) · z^{−n} + ∑_{j=n}^{∞} N · z^{−j}, thus his budget after πi is at least (EM + StM + ℓi) · z^{−n} + ∑_{j=n}^{∞} N · z^{−j}, and we are done. The final case is similar to Case 2 in the figure, with a slight difference: the figure depicts energy that crosses Nn, and we prove for energy that crosses N_{n+1} and ends in Nn. That is, the energy at π_{i−1} is in N_{n+1}, e_{i−1} ≥ β↑_{n+1} − EM, and ei ≤ β↓_{n+1} = β↑_n. The decrease in energy is E(πi) = (2EM − ℓ_{i−1}) + (N − 2EM) + ℓi, thus by Lemma 25, the increase in budget is E(πi) · z^{n−1}. We chose N such that (N − 2EM) · z^{−(n−1)} ≥ (EM + StM) · z^{−(n−1)} + N · z^{−n}. The claim follows from combining with the induction hypothesis, and we are done.

Figure 6: An illustration of the different cases of changing currency. Dark lines mark the boundary of an energy block and dotted lines mark a region of size EM around the boundary.

It is not hard to show that Lemma 29 implies that fM is legal. That is, consider a finite play π that starts immediately after a change in currency. Using Lemma 24, we can prove by induction on the length of π that Max has sufficient budget for bidding. The harder case is when π decreases, and the proof follows from the fact that wiggle is in the higher currency of the lower block. Combining Lemma 29 with our choice of the initial energy, we get that the energy never reaches 0, as otherwise Min invests a budget of more than 1. The following theorem follows by combining with Lemma 22.

Theorem 30. In a recurrent mean-payoff bidding game G with MP(RT(G)) > 0, with any positive initial budget Max has a strategy that guarantees a positive payoff.

4.2.2 An optimal Max strategy in general strongly-connected mean-payoff bidding games

In this section we develop the ideas of the previous section and construct an optimal strategy for Max in general strongly-connected games. Recall that by Lemma 22 it suffices to construct a strategy that guarantees that the energy is bounded from below. The following example shows that naively adapting the strategy from the previous section fails.
Figure 7: An example showing that the Max strategy developed in the previous section fails in general strongly-connected games. In a vertex v with negative weight, the weight w(v) of v is depicted on top and wz(v) on the bottom. We choose z = 2.

Example 31. Consider the strongly-connected mean-payoff bidding game G that is depicted in Figure 7. Note that G is not recurrent. Indeed, the candidates for the root would be u and v1, and there are cycles that avoid both of them. We choose z = 2. With this choice, we have v+_1 = v1. Indeed, Poz(v1) > Poz(v3). Thus, according to the strategy in the previous section, upon winning a bidding in v1, Max chooses the self-loop to stay in v1. Since the weight of v1 is positive, staying in v1 implies an increase of energy, which implies a decrease of budget. Since Max avoids exhausting his budget, the currency must change in v1. In other words, Max cannot wait for a visit to u to change the currency. The inability to wait for visits to the root is the challenge of devising a strategy in general strongly-connected games.

A naive solution would be to drop the assumption from the previous section that currency changes occur only at u. That is, we change currency upon entering a new energy block no matter what the current vertex is. We illustrate that this attempt fails, implying that a more involved adaptation is needed. The problem is with sinusoidal energy behaviors that occur on the boundary of an energy block. We describe such a play. Consider the cycle u, v1, v1, v3, v5, u, which intuitively corresponds to Max winning two biddings, then losing two biddings, and we ignore v5 since both players bid 0. In Gz, we have equality between energy and budget. Indeed, we have Stz(u) + Stz(v1) − Stz(v1) − Stz(v3) = 3 + 0.5 − 0.5 − 4 = −1 = wz(u) + wz(v1) + wz(v1) + wz(v3) + wz(v5). Note that the "real" energy is the one in G and it is unchanged following this path since 2w(v1) + w(v5) = 0. Assume we start from u when the current energy is at the top of the third energy block. Recall that z = 2, thus the initial currency is z^{−3} = 1/8. After visiting v1 twice, the energy increases and enters the fourth block, thus the currency is updated to 1/16. Adding the currencies to the calculation above, we get Stz(u) · z^{−3} + Stz(v1) · z^{−3} − Stz(v1) · z^{−4} − Stz(v3) · z^{−4} = 3 · 1/8 + 0.5 · 1/8 − 0.5 · 1/16 − 4 · 1/16 > 0. All in all, Max's payments are positive, thus his budget decreases, while the energy level stays the same. Min can thus continue with such a strategy until Max's budget is exhausted.

We develop further the ingredients from the previous sections. Recall that in recurrent games, we split the natural numbers into energy blocks, each block has a currency, increasing the energy by c units in the n-th block costs Max at most c units of budget in the currency of the n-th block, and decreasing the energy by c units in the n-th block rewards him with at least c units of budget in the higher currency of the (n − 1)-th block. In general strongly-connected games, we need stronger properties.
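For concreteness, the arithmetic behind the failing play of Example 31 can be replayed in a few lines, using the strengths and weights quoted in the example (z = 2, Stz(u) = 3, Stz(v1) = 0.5, Stz(v3) = 4, w(v1) = 0.5, w(v5) = −1); this is only a numerical check of the example, not part of the construction below.

```python
# Replaying the failing play of Example 31 under the naive "change currency at
# every block boundary" rule: Max's net payment is positive while the energy in
# G is unchanged, so Min can repeat the pattern until Max's budget runs out.
z = 2.0
invest = 3 * z**-3 + 0.5 * z**-3     # Max wins at u and at v1 (currency 1/8)
gain = 0.5 * z**-4 + 4 * z**-4       # Max loses at v1 and at v3 (currency 1/16)
energy_change = 2 * 0.5 + (-1)       # 2*w(v1) + w(v5) = 0
print(invest - gain, energy_change)  # 0.15625 > 0 payment, 0.0 energy change
```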
First, we increase the asymmetry between investing and gaining: while gaining in the n-th block is still in the higher currency of the lower (n \u22121)-th block, investing is now in the lower currency of the higher (n + 1)-th block. Thus, now, in every change to the energy within an energy block, Max registers a pro\ufb01t. The larger the change in energy, the larger the pro\ufb01t. Second, we differentiate between even blocks and odd blocks so that odd blocks serve as \u201cbuffers\u201d that ensure that a change in currency only occurs after a signi\ufb01cant change in energy. We formalize this intuition. Consider a strongly-connected mean-payoff bidding game G = \u27e8V, E, w\u27e9 having MP(RT(G)) > 0. For z > 1, let \u02dc Gz = \u27e8V, E, \u02dc wz\u27e9, where \u02dc wz(v) = ( w(v) \u00b7 z if w(v) < 0 w(v) \u00b7 1 z if w(v) \u22650 As in the previous section, it is not hard to choose z > 1 such that MP(RT( \u02dc Gz)) > 0. Let \u02dc Po z and \u02dc St z denote the potentials and strengths in \u02dc Gz, and for a \ufb01nite play we denote by \u02dc Ez be the sum of weights that \u03c0 traverses in \u02dc Gz. The proof of the following lemma is similar to Lemma 23. Lemma 32. Consider a \ufb01nite play \u03c0. We have \u02dc Ez(\u03c0) \u2264z \u00b7 E(\u03c0) and \u02dc Ez(\u03c0) \u22641 z \u00b7 E(\u03c0). We describe Max\u2019s strategy, which we refer to as fM. As in the previous section, Max chooses a kI \u2208I N and plays as if that is the initial energy while guaranteeing that the energy never reaches 0. We specify kI later. When reaching a vertex v \u2208V , Max bids \u02dc St z(v) \u00b7 \u03b3 and moves to v+ upon winning, where we de\ufb01ne the currency \u03b3 \u2208(0, 1) next. We partition I N into blocks of \u02dc N \u2208I N, where we choose \u02dc N later on. We refer to the n-th block as \u02dc Nn = { \u02dc N \u00b7 (n \u22121), . . . , \u02dc Nn \u22121}. Unlike the previous section, changes in currency can occur in all vertices and only depend on the energy. The currency in even and odd blocks differs. For n \u2208I N, when the energy level reaches an even block \u02dc N2n, the currency is z\u2212n. In order to determine the currency in the odd blocks, we take the history of the play into account; the currency matches the currency in the last energy block that was visited before entering \u02dc N2n+1. Thus, if it is \u02dc N2n, then the currency is z\u2212n and if it is \u02dc N2n+2, the currency is z\u2212(n+1). We say that a \ufb01nite outcome is \u03b3-consistent when all the bids Max performs in it are made in the same currency \u03b3. Lemma 24 clearly applies to \u02dc Gz. Let the maximal weight of a vertex in G be wM = maxv\u2208V |W(v)|. The following lemma follows from combining Lemma 24 with Lemma 32. Lemma 33. Consider a (z\u2212n)-consistent outcome \u03c0 that starts in v and ends in v\u2032. We have \u2212pay(\u03c0) \u2265 \u2212E(\u03c0) \u00b7 z\u2212(n\u22121) \u22122wM \u00b7 z\u2212n and pay(\u03c0) \u2264E(\u03c0) \u00b7 z\u2212(n+1) + 2wM \u00b7 z\u2212n. Suppose Max is playing according to fM and Min is playing according to some strategy fm. Let \u03c0 = play(fm, fM) be the resulting in\ufb01nite play. Let \u03c0 = \u03c01 \u00b7 \u03c02 \u00b7 . . . be a partition of \u03c0 into maximal \ufb01nite 21 \fplays that have a consistent currency. For i \u22651, let ei \u2208I N be the energy at the end of \u03c0i, thus ei = kI + E(\u03c01 . . . \u03c0i), where kI is an initial energy. 
Also, let β↑_n and β↓_n be, respectively, the upper and lower boundaries of the energy block Nn. Note that β↑_n = β↓_{n+1}.

Figure 8: The four cases of πi in the general setting.

Suppose a sub-play πi starts in a vertex v and ends in u. We make observations on the budget change during πi. There are four cases, which are depicted in Figure 8. Note that the currency in Cases 1 and 3 is z^{−n} and in Cases 2 and 4 it is z^{−(n+1)}. The energy change in πi in Cases 1 and 2 is at least 2Ñ and at most 2Ñ + 2wM, and in Cases 3 and 4 it is at least Ñ and at most Ñ + 2wM. We use Lemma 33 to obtain the following:

Lemma 34. The following bounds hold for the change in budget in the four cases depicted in Figure 8.
1. pay(πi) ≤ (2Ñ + 2wM) · z^{−(n+1)} + 2wM · z^{−n},
2. −pay(πi) ≥ 2Ñ · z^{−n} − 2wM · z^{−(n+1)},
3. pay(πi) ≤ (Ñ + 2wM) · z^{−(n+1)} + 2wM · z^{−n}, and
4. −pay(πi) ≥ Ñ · z^{−n} + 2wM · z^{−(n+1)}.

To conclude the construction, given an initial Max budget, we find an initial energy level kI with which Max can guarantee that the energy stays positive. We do this by finding an invariant on his budget at the end points of energy blocks. Recall the intuition that Max's budget should not run out even when the energy increases arbitrarily. We thus require his initial budget to be sufficient to "purchase" all the energy blocks above the initial energy. For n ∈ ℕ, the cost of the blocks Ñ_{2n} and Ñ_{2n+1} is Ñ · z^{−n}. Recall from the previous section that Max's budget at the bottom of an energy block Ñ_{2n} needs to include, in addition to the costs of the energy blocks Ñ_{≥2n}, wiggle room in the currency of the lower block. Going back to Lemma 34, we observe that Case 4 is the only problematic case. Indeed, in all other cases, the path πi crosses an energy block whose cost is given in a currency that is lower than the currency of gaining (when decreasing), or higher than the currency of investing (when increasing). Take for example Case 2. It crosses both Ñ_{2n+2} and Ñ_{2n+1}. The cost of Ñ_{2n+2} is Ñ · z^{−(n+1)} whereas the gain for it is roughly Ñ · z^{−n}. The situation in Case 4 is not that bad. The gain equals the cost of Ñ_{2n+1}, i.e., Ñ · z^{−n}, up to a constant, i.e., 2wM · z^{−n}. We add this constant in the invariant, thus we require Max's budget at β↑_{2n+1} to include the costs of the higher blocks, the wiggle room, and a surplus of 2wM · z^{−n}. We define the invariant on Max's budget formally. Recall that wiggle = 2wM + ˜St^z_M, where ˜St^z_M is the maximal bid, and it is used to guarantee that fM is legal in a play that stays in an energy block. We write Inv(β↑_ℓ) to refer to the budget that Max has when the currency changes near β↑_ℓ, thus within |wM| of β↑_ℓ. We have the following.
• Inv(β↑_{2n}) = wiggle · z^{−n} + z^{−n} · Ñ + ∑_{i=n+1}^{∞} 2z^{−i} · Ñ, and
• Inv(β↑_{2n+1}) = wiggle · z^{−n} + 2wM · z^{−(n+1)} + ∑_{i=n+1}^{∞} 2z^{−i} · Ñ.

To conclude the construction, we choose Ñ to be large enough so that the invariant is maintained assuming it is maintained initially. Also, given an initial budget for Max, we choose an initial energy level such that the invariant is initially maintained. Combining with Lemma 22, we obtain the second direction of Theorem 13.

Theorem 35. Consider a strongly-connected mean-payoff bidding game G with MP(RT(G)) > 0. Then, Max has a strategy that guarantees a positive payoff in G.

4.3 Remarks

4.3.1 Results for other bidding mechanisms

We elaborate on further results on infinite-duration bidding games that were obtained since an earlier publication of this paper. The bidding mechanism that we study in this paper is called Richman bidding. Poorman bidding is the same as Richman bidding only that the winner of the bidding pays the "bank" rather than the other player. Taxman bidding spans the spectrum between Richman and poorman bidding. It is parameterized by a constant τ ∈ [0, 1]: portion τ of the winning bid is paid to the other player, and portion 1 − τ to the bank. Richman bidding is obtained by setting τ = 1 and poorman bidding by setting τ = 0. Unlike Richman bidding, in both of these mechanisms the sum of budgets is not constant throughout the game. The central quantity that is studied is thus the ratio of the players' budgets: suppose that for i ∈ {1, 2}, Player i's budget is Bi; then Player 1's ratio is B1/(B1 + B2). Note that Player 1's ratio coincides with his budget in Richman bidding. For qualitative games, the central question is the existence of a threshold ratio, which is the straightforward adaptation of the threshold budgets we use (see Definition 3).

Reachability games with poorman and taxman bidding have been studied in [37]. It is shown that while threshold ratios exist in reachability poorman and taxman games, the structure of the game is more complicated and no probabilistic connection is known, and it is unlikely to exist: already in the reachability game that is depicted in Figure 1, the threshold ratios with poorman bidding are irrational numbers. Infinite-duration bidding games with poorman bidding were studied in [8] and with taxman bidding in [10]. Given the probabilistic connection for reachability Richman-bidding games (Theorem 5), the probabilistic connection for mean-payoff Richman-bidding games (Theorem 13) may not be unexpected. On the other hand, since no probabilistic connection is known for reachability poorman-bidding games, we find the following probabilistic connection for mean-payoff poorman- and taxman-bidding games surprising. The ideas that were developed in the constructions in this paper played a key role in the proof of the following theorem.

Theorem 36 ([8, 10]). Consider a strongly-connected mean-payoff taxman game G and a constant τ ∈ [0, 1]. The optimal payoff Max can guarantee with an initial ratio r ∈ (0, 1) in G equals the value of the biased random-turn game RT_{F(τ,r)}(G), for F(τ, r) = (r + τ · (1 − r))/(1 + τ), in which in each turn Max is chosen with probability F(τ, r) and Min with probability 1 − F(τ, r).
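The bias F(τ, r) is a one-line computation; the snippet below only evaluates the formula from Theorem 36 at the two extreme mechanisms and is not taken from the paper.

```python
# Bias of the random-turn game in Theorem 36: F(tau, r) = (r + tau*(1 - r)) / (1 + tau).
def bias(tau, r):
    return (r + tau * (1 - r)) / (1 + tau)

print(bias(1.0, 0.7))   # Richman (tau = 1): 0.5, independent of the ratio r
print(bias(0.0, 0.7))   # poorman (tau = 0): 0.7, the ratio itself
```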
In particular, for poorman bidding, the optimal payoff in G with initial ratio r equals MP(RTr(G)). Theorem 36 sheds new light on Theorem 13. Richman bidding is the exception of taxman bidding: For every \u03c4 < 1, the optimal payoff depends both on the structure of the game and the initial ratio. Only in Richman bidding does the optimal payoff depend only on the structure of the game and not on the initial ratio. For example, recall that in the game that is depicted in Figure 2, with Richman bidding, Min can guarantee a non-positive payoff no matter what positive initial budget he starts with (using the tit-for-tat 23 \fstrategy for example). With poorman bidding, on the other hand, when Max\u2019s initial budget is 2 and Min\u2019s initial budget is 1, Max\u2019s initial ratio is 2 3, and the optimal payoff that Max can guarantee is 2 3 \u00b71+ 1 3 \u00b7(\u22121) = 1 3. Theorem 36 implies an interesting connection between Richman and poorman bidding: the value in a mean-payoff bidding game with Richman bidding equals the value with poorman bidding and ratio 0.5. 4.3.2 An existential proof of Theorem 13 We describe an alternative existential proof of Theorem 13 that relies on a combination of the probabilistic connection for reachability bidding games that are played on in\ufb01nite graphs [37] and results on probabilistic models [14, 15]. The draw-back of this proof is that it does not give any insight on how to construct optimal strategies. That is, given a strongly-connected mean-payoff bidding game G, using Theorem 13 and the existential proof, the only knowledge we obtain is the optimal payoff a player can guarantee. There is no hint, however, on how to construct a strategy that achieves this payoff, which, as can be seen in the previous sections, can be a challenging task. Existential proof of Theorem 13. Consider a strongly-connected mean-payoff bidding game G = \u27e8V, E, w\u27e9, where w : V \u2192I N. The one-counter game2 that corresponds to G, denoted OCG(G), is played on the same graph only with a different objective: a counter tracks the energy in an in\ufb01nite play \u03c0, and \u03c0 is winning for Min iff there exists a \ufb01nite pre\ufb01x in which the energy is 0. That is, Max wins \u03c0 iff the energy stays positive in every \ufb01nite pre\ufb01x of \u03c0. A con\ufb01guration of OCG(G) is a pair \u27e8v, n\u27e9\u2208V \u00d7 I N, which intuitively means that the token is placed on v and the accumulated energy (the counter value) is n. Lemmas 15 and 22 can be rephrased to show the following correspondence between winning in the one-counter bidding game OCG(G) and guaranteeing an optimal payoff in G: Claim: If the threshold budget in every con\ufb01guration \u27e8v, n\u27e9in OCG(G) is 0, i.e., Min wins with any positive initial budget, then with every positive initial budget, Min guarantees a non-positive payoff in G. On the other hand, if for every vertex v \u2208V and a positive initial budget BM > 0 of Max there is an initial energy n \u2208I N such that BM > 1 \u2212TH(\u27e8v, n\u27e9) in OCG(G), i.e., Max can prevent Min from winning when the game starts from \u27e8v, n\u27e9, then Max can guarantee a positive payoff in G. The game OCG(G) is a reachability bidding game that is played on an in\ufb01nite graph. 
Formally, we have OCG(G) = \u27e8V \u00d7 I N, E\u2032, T\u27e9, where \u27e8v\u2032, n\u2032\u27e9is a neighbor of a vertex \u27e8v, n\u27e9iff \u27e8v, v\u2032\u27e9\u2208E and the update to the counter is correct and stays non-negative, i.e., n\u2032 = n + w(v) if n\u2032 \u22650 and n\u2032 = 0 otherwise, and the target for Min is the set of vertices V \u00d7 {0}. A key property of this game is that even though the graph is in\ufb01nite, the number of outgoing edges from each vertex is at most |E| and in particular \ufb01nite. The proof in [38] of the probabilistic connection for reachability bidding games (Theorem 5) extends to reachability games on in\ufb01nite graphs in which all vertices have a \ufb01nite out-degree. Thus, we have the following. Claim: The games OCG(G) and RT(OCG(G)) are equivalent: the threshold budget in a con\ufb01guration \u27e8v, m\u27e9\u2208V \u00d7 I N in OCG(G) equals the value of \u27e8v, n\u27e9in RT(OCG(G)), i.e., the probability of winning under optimal play. The game RT(OCG(G)) is a stochastic game with a one counter. Such games have been shown to have the following properties. Claim: [14, 15] When MP(RT(G)) \u22640, the value of every con\ufb01guration \u27e8v, n\u27e9in RT(OCG(G)) is 0. When MP(RT(G)) > 0, for every v \u2208V , the sequence val(RT(OCG(G)), \u27e8v, n\u27e9) tends to 0 as n tends to in\ufb01nity. The proof of the theorem follows from combining the three claims. 2Sometimes called an energy game [13]. 24 \f4.3.3 Strategy complexity In this section we discuss the memory requirements of the strategies that we construct for mean-payoff bidding games, which we call the complexity of the strategy. The complexity of a strategy is important since strategies are typically used to implement systems, and the complexity of the strategy translates to the complexity of the system. In all three strategies, when the token is placed on a vertex v, the strategy always prescribes the same vertex to move to upon winning the bidding, namely v\u2212for Min and v+ for Max, and the bid is of the form St(v) \u00b7 \u03b3, where St(v) is a constant and \u03b3 is the normalization factor, which changes as the game proceeds. Thus, a strategy uses memory only for determining the normalization factor. In Min\u2019s strategy, recall that \u03b3 is of the form 1/N, where N \u2208I N is chosen immediately after the energy hits 0. To compute the normalization, Min\u2019s strategy uses two variables that take integer values. One keeps track of the current energy level in order to observe that it hits 0 and that a new N needs to be chosen. The second variable keeps the current choice of N. In Max\u2019s strategy in recurrent games, the normalization, which is called the currency of the energy block, changes in the root vertex of the game depending on the energy level. Max\u2019s strategy again uses two variables that take integer values. The \ufb01rst keeps track of the energy and the second keeps the index of the energy block in the last visit to the root. In a vertex that is not the root, Max computes the normalization by referring to the stored index of the energy block. Finally, in Max\u2019s strategy in general strongly-connected games, the normalization changes when the energy visits an even energy block. Again, Max\u2019s strategy can be implemented using two variables that keep track of the current energy and the index of the energy block whose currency is currently being used. 
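Referring back to the one-counter game OCG(G) of Section 4.3.2, its (infinite) configuration graph is easy to enumerate locally. The following Python sketch is purely illustrative (the graph representation and all names are ours): it implements the neighbor relation defined above, with the counter updated by w(v) and clamped at 0, the value that constitutes Min's target.

```python
# Illustrative sketch: successors of a configuration <v, n> in OCG(G).
# `edges` maps each vertex of G to its neighbors, and `w` gives the vertex weights.
def ocg_successors(v, n, edges, w):
    """Yield the neighbors <v', n'> of <v, n> in OCG(G)."""
    for v_next in edges[v]:
        n_next = max(n + w[v], 0)   # energy update; energy 0 means Min has won
        yield (v_next, n_next)

# Tiny example with weights +1 and -1 on a two-vertex cycle.
edges = {"a": ["b"], "b": ["a"]}
w = {"a": +1, "b": -1}
print(list(ocg_successors("a", 3, edges, w)))  # [('b', 4)]
print(list(ocg_successors("b", 1, edges, w)))  # [('a', 0)] -- a target configuration for Min
```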
5 Discussion and Future Directions We introduce and study in\ufb01nite-duration bidding games in which the players bid for the right to move the token. We showed the existence of threshold budgets in parity bidding games by reducing them to reachability bidding games. We also showed the existence of threshold budgets in mean-payoff bidding games. The key to the qualitative solution was a quantitative solution to strongly-connected mean-payoff bidding games: we showed that these games are equivalent to uniform random-turn games in the sense that the optimal payoff a player can guarantee in the bidding game equals the expected payoff in the stochastic game with optimal play. Thus, we show that the initial budgets do not matter in mean-payoff bidding games with the bidding rules we use, namely Richman bidding. That is, the payoff depends only on the structure of the game and not on the initial budgets. As we elaborate in Section 4.3.1, this is not the case with other bidding mechanisms, where the payoff depends both on the structure of the game and the initial budgets. This work belongs to a line of works that transfer concepts and ideas between the areas of formal veri\ufb01cation and algorithmic game theory [44]. Examples of works in the intersection of the two \ufb01elds include logics for specifying multi-agent systems [4, 20, 42], studies of equilibria in games related to synthesis and repair problems [19, 18, 26, 2], non-zero-sum games in formal veri\ufb01cation [12, 16, 22], and applying concepts from formal methods to resource allocation games such as rich speci\ufb01cations [11], ef\ufb01cient reasoning about very large games [6, 36], reasoning about resource interfaces [17], and a dynamic selection of resources [9]. We discuss some directions for future work. We studied the computational complexity of \ufb01nding threshold budgets, which we formally de\ufb01ne as the THRESH-BUDGET problem, and showed that for the objectives we consider, the problem is in NP and coNP using the reduction to random-turn games. We leave open the problem of \ufb01nding a tighter classi\ufb01cation for THRESH-BUDGET. Our result hints that the problem is not NP-hard. A tighter classi\ufb01cation would be, optimistically, a polynomial-time algorithm for THRESH-BUDGET, or, pessimistically, showing that THRESH-BUDGET is as hard as solving general 25 \fsimple stochastic games, which is a problem in NP and coNP for which no polynomial-time algorithm is known. In Section 4.3.2, we discussed one-counter games in which Min wins if the energy hits 0 once in a play. Note that unlike parity and mean-payoff, this objective is not pre\ufb01x independent. The complexity of THRESH-BUDGET in one-counter games is interesting and is related to recent work on optimizing the probability of reaching a destination in a weighted MDP [27, 48]. For acyclic one-counter bidding games, the problem is PP-hard using a result in [27], and for a single-vertex games the problem is in P using the direct formula of [33]. For general games the problem is open. Acknowledgments We thank Petr Novotn\u00fd and Rasmus Iben-Jensen for helpful discussions and pointers." + } + ], + "James Martens": [ + { + "url": "http://arxiv.org/abs/2110.01765v1", + "title": "Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping", + "abstract": "Using an extended and formalized version of the Q/C map analysis of Poole et\nal. 
(2016), along with Neural Tangent Kernel theory, we identify the main\npathologies present in deep networks that prevent them from training fast and\ngeneralizing to unseen data, and show how these can be avoided by carefully\ncontrolling the \"shape\" of the network's initialization-time kernel function.\nWe then develop a method called Deep Kernel Shaping (DKS), which accomplishes\nthis using a combination of precise parameter initialization, activation\nfunction transformations, and small architectural tweaks, all of which preserve\nthe model class. In our experiments we show that DKS enables SGD training of\nresidual networks without normalization layers on Imagenet and CIFAR-10\nclassification tasks at speeds comparable to standard ResNetV2 and Wide-ResNet\nmodels, with only a small decrease in generalization performance. And when\nusing K-FAC as the optimizer, we achieve similar results for networks without\nskip connections. Our results apply for a large variety of activation\nfunctions, including those which traditionally perform very badly, such as the\nlogistic sigmoid. In addition to DKS, we contribute a detailed analysis of skip\nconnections, normalization layers, special activation functions like RELU and\nSELU, and various initialization schemes, explaining their effectiveness as\nalternative (and ultimately incomplete) ways of \"shaping\" the network's\ninitialization-time kernel.", + "authors": "James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, Samuel S. Schoenholz", + "published": "2021-10-05", + "updated": "2021-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "main_content": "Introduction The current standard approach to deep learning relies on a combination of architectural elements including skip connections, normalization layers, and carefully chosen activation functions (such as RELU) to overcome the well-documented optimization di\ufb03culties present in traditional deep neural networks (Io\ufb00e and Szegedy, 2015; He et al., 2016a; Szegedy et al., 2017). While this approach has proven very successful, enabling many applications in diverse \ufb01elds such as vision (e.g. He et al., 2016a; Tan and Le, 2019), language (e.g. Vaswani et al., 2017; Brown et al., 2020), protein folding (Jumper et al., 2021) and reinforcement learning (e.g. Espeholt et al., 2018; Silver et al., 2018), it is not entirely satisfying for at least several reasons. First, the precise mechanism of action of these elements, as well as their interaction, is still not well understood, despite some recent progress in this area. This lack of understanding makes it di\ufb03cult to design new network architectures, as architectural choices not only a\ufb00ect the network\u2019s expressivity, but also its trainability, and in ways that are hard to predict. Second, without competitive alternatives to compare to, it\u2019s not clear whether the current standard approach enables deep networks to reach their full potential, or whether it has unseen drawbacks and limitations. For example, while the use of skip connections helps very deep networks to train much faster, this might only be because it makes them behave like an ensemble of shallower networks (Veit et al., 2016). 
Finally, the extra complexity introduced by these architectural elements, and their non-trivial interactions, makes theoretical analyses much more di\ufb03cult, potentially holding us back from developing a more fundamental understanding of deep learning. And while existing theoretical analyses can (and often do) drop these elements, they do so at the risk of missing an essential piece of the picture. In an ideal world, modelling and trainability would be decoupled, so that architectures could be designed with only modelling considerations in mind, and rapid training would be guaranteed as long as they conformed to a well-de\ufb01ned set of rules. One might also hope that the components of such a framework would each have a clear purpose, be theoretically well-understood, and interact with each other in simple and predictable ways. In the present work we take an important step towards this \u201cideal world\u201d, while simultaneously providing a competitive alternative to the current standard approach to deep learning. We do so by developing a theoretically well-founded method for constructing deep networks which allows them to be rapidly trained without the use of skip connections, normalization layers, or standard activation functions. Our approach, which we call Deep Kernel Shaping (DKS), requires only minor model class-preserving modi\ufb01cations to the architecture and activation functions, and is fully compatible with existing analysis frameworks such as Neural Tangent Kernel (NTK) theory (Jacot et al., 2018). As we show in experiments, DKS enables very deep residual networks without normalization layers to be trained using SGD on Imagenet and CIFAR-10 classi\ufb01cation tasks at similar speeds to standard ResNetV2 (He et al., 2016b) and Wide-ResNet models (Zagoruyko and Komodakis, 2016). It also achieves the same for networks without skip connections or normalization layers when combined with stronger optimizers like K-FAC (Martens and Grosse, 2015) or Shampoo (Gupta et al., 2018). Moreover, it works well with a large variety of activation functions, including those that traditionally perform very poorly (such as the 2 \fDeep Kernel Shaping logistic sigmoid). As a caveat, we observe a small decrease in generalization performance compared to standard ResNets, which we believe can be addressed in future work. While there have been some recently proposed methods for training very deep networks without skip connections and normalization layers (e.g Schoenholz et al., 2017; Balduzzi et al., 2017; Xiao et al., 2018), to the best of our knowledge, DKS is the \ufb01rst to achieve training speeds competitive with standard ResNet models on a challenging dataset like Imagenet. And while our use of K-FAC plays an important role in these results, our experiments show that K-FAC alone is not enough, even when used in combination with the aforementioned methods. The starting point for our development of DKS is the work of Poole et al. (2016), who described the approximate initialization-time behavior of fully-connected combined layers (which we de\ufb01ne here as an a\ufb03ne layer followed by an element-wise nonlinear layer) using special one-dimensional maps known as \u201cQ and C maps\u201d, and then used the \ufb01xed-point behavior of these maps to describe the depth-limiting behavior of a network composed of many such layers in sequence. We also take inspiration from Schoenholz et al. 
(2017), who applied this analysis framework to design an initialization method which modulated the \ufb01xed point behavior of each layer\u2019s C map to slow the loss of \u201cgeometric information\u201d with depth, and demonstrated encouraging results training very deep networks without skip networks or normalization layers. While originally derived within the semi-rigorous framework of \u201cmean \ufb01eld analysis\u201d, it turns out that Q/C maps also describe the approximate behavior of a combined layer\u2019s kernel function in wide networks, and as we will show, can be applied to convolutional layers if one uses a Delta initialization for the \ufb01lter banks. These maps can be further extended to describe entire networks with arbitrary topologies, where they provide useful information outside of the depth-limiting case considered by Poole et al. (2016). In the case of a fully connected network f, its Q map approximates the mapping from \u2225x\u22252/ dim(x) to \u2225f(x)\u22252/ dim(f(x)), and its C map approximate the mapping from x\u22a4x\u2032/(\u2225x\u2225\u2225x\u2032\u2225) to f(x)\u22a4f(x\u2032)/(\u2225f(x)\u2225\u2225f(x\u2032)\u2225). In deeper networks, the C map can easily become \u201cdegenerate\u201d, mapping most of its input domain [\u22121, 1] to a small point-like subset of its codomain, [\u22121, 1]. The implication of this is that the distance between any pair of output vectors from the network is e\ufb00ectively independent of the distance between the corresponding pair of input vectors. As we argue, both heuristically and using NTK theory, this behavior inevitably leads to very slow training and/or poor generalization under gradient descent. We thus design DKS to prevent this problem, while also guarding against certain secondary pathologies such as a badly behaved Q map, high approximation error in the Q/C maps themselves, and network behavior that is \u201ctoo linear\u201d (which limits network expressivity under gradient descent). To do this, we relate the overall \u201cshape\u201d of the network\u2019s C map and its tendency to become degenerate, to its value and derivative at a couple of points, which we in turn relate to the values of derivatives of the C maps for the network\u2019s individual layers. We then control these properties (and a couple of additional ones to address the aforementioned secondary pathologies), by transforming each activation function using a model class preserving scale and shift operation its input and output. This transformation is the same for each nonlinear layer for a given activation function, but depends 3 \fMartens et al. on the global structure of the network. In theory, we also require that sum operations in the network are \u201cnormalized\u201d in a certain way, that a special kind of data preprocessing is used, and that pooling layers are replaced with certain roughly equivalent alternatives, although we \ufb01nd the latter two of these to be non-essential in practice. In addition to developing DKS, we also use Q/C maps to help explain the e\ufb00ectiveness of standard deep learning techniques such as normalization layers, skip connections, initialization methods, and common activation functions such as RELU and SELU (Klambauer et al., 2017), in terms of their e\ufb00ect on the network\u2019s initialization-time kernel. 
This is facilitated in part by the connections we establish between Q/C maps and alternative analysis frameworks such as \u201cvariance propagation\u201d and \u201csignal propagation\u201d which underlie many of the said techniques. 2. Outline This manuscript is organized into \ufb01ve parts. Part I gives our assumptions and establishes the theoretical concepts used in subsequent parts. We begin in Sections 3 and 4 by de\ufb01ning our notation and stating our initial assumptions on network architecture and initialization. In Section 5 we discuss kernel functions for networks conforming to these assumptions, and how they can be approximated with much simpler functions at initialization time. In Sections 6 and 7 we show how these kernel approximations can be further broken down in terms of a generalized version of the Q/C maps originally proposed in Poole et al. (2016). Derivative computations for Q/C maps are given in Section 8, and Section 9 discusses how to handle sum operations when computing Q/C maps. In Section 10, we show how C maps can be simpli\ufb01ed down to one dimensional functions (from three dimensions) using a special type of data preprocessing which is designed to make two of their three inputs constant. And in Section 11 we discuss additional consequences of this, including that C maps become \u201cpositive de\ufb01nite functions\u201d. With the theoretical groundwork established, Part II focuses on identifying desirable Q/C map properties and ways to achieve these. In Section 12 we discuss C map behavior in deep networks and how it can \u2013 and usually does \u2013 become \u201cdegenerate\u201d, leading to slow training and/or poor generalization. We then set out to analyze C maps with the hope of controlling their properties so as to prevent this. To that end, in Section 13 we use the positive de\ufb01niteness of a C map to show how its deviation from the identity function (which is large in degenerate maps) can be predicted from its derivative at 1 and value at 0, implying that we can prevent degeneration by enforcing certain conditions on these quantities. In Section 14 we identify another way that a network can fail to be trainable: that its parameters must move very far from their initial values before the network can exhibit any signi\ufb01cantly nonlinear behavior. We then show this failure mode can be avoided by enforcing a condition on the C map of each nonlinear layer. In Section 15 we identify the breakdown of our kernel approximations as a third problem that we must avoid, and propose several solutions to this, including a condition to enforce on the network\u2019s Q map. Having identi\ufb01ed three distinct ways that a network can fail to be trainable, and conditions to enforce on the Q and C maps to prevent or mitigate these failures, we proceed with the speci\ufb01cation and derivation of DKS in Part III. In Section 16 we list the four conditions 4 \fDeep Kernel Shaping on the Q and C maps of the network (or more precisely its \u201csubnetworks\u201d) which we will enforce. Then in Section 17 we show how these conditions can be reduced to ones on the Q/C maps of the individual layers of the network via a special translation mechanism called the \u201cmaximal slope function\u201d which encodes structural information about the network (including its depth). 
In Section 18 we describe our main mechanism of enforcement for these per-layer conditions: scaling and shifting operations applied to the input and output of each nonlinear layer\u2019s activation function (which preserve the model class). Finally, in in Sections 19 and 20 we discuss how to deal with normalization and pooling layers in DKS. With DKS fully derived, we give a step-wise summary of it in Section 21.1, and provide details for the more di\ufb03cult aspects of its implementation in Section 22. In Section 23 we demonstrate the application of DKS on the various modi\ufb01ed ResNet and Wide-ResNet models which we use in our experiments, including ones with skip connections and/or normalization layers removed. Before proceeding to experiments, in Part IV we delve deeper into the theory underlying DKS, and analyze various related approaches from the perspective of kernel approximations and Q/C maps. In Section 24 we review Neural Tangent Kernel (NTK) theory, and give an elegant expression for the NTK using (extended) C maps. We then show how NTK theory predicts slow training and poor generalization for networks with degenerate C maps, and characterize the form of the NTK for networks constructed using DKS. In Section 25 we review certain previously published methods for understanding the behavior of neural networks at initialization time (such as variance/signal propagation), show how they give rise to what are essentially Q and C maps (but di\ufb00erent interpretations for what they actually compute), and advocate for the use of approximate kernel analysis as a more \ufb02exible and mathematically rigorous alternative. Exploiting these connections, we then review and analyze some prior methods for initializing and constructing neural networks in Section 26, including standard techniques such as normalization layers and residual networks, as well as methods aimed at replacing them. In each case we argue that the method can interpreted as enforcing some set of conditions on the network\u2019s Q/C map, which is often a strict subset of those enforced by DKS. Finally, in Part V we discuss experiments and conclude. This begins in Sections 27 and 28, where we describe the setup of our experiments, and discuss their results. Our experiments include comparisons of DKS to standard ResNets, the methods reviewed/analyzed in Section 26, and various \u201cablated\u201d/modi\ufb01ed versions of DKS. We then summarize our conclusions in Section 29, and in Section 30 discuss the limitations of DKS and possible ways to address them in future work. 5 \fMartens et al. 
Table of contents

1 Introduction
2 Outline

I Theoretical preliminaries
3 Neural network terminology and architectural assumptions
4 Parameter distributions
5 Kernel function approximations for neural networks
6 Q and C maps for combined layers
7 Extended Q and C maps
8 Q and C map derivative computations
9 Handling weighted sum operations
10 Uniform q values
11 Additional consequences of uniform q values for C maps

II Desirable Q/C map behavior and how to achieve it
12 C map behavior in deep networks and necessary requirements for trainability
13 Mathematical analysis of C maps
14 C map behavior in linear networks and the problem of being "too linear"
15 Mitigating kernel approximation error

III Specification and derivation of Deep Kernel Shaping
16 Conditions on Q/C maps that we will enforce
17 From global map conditions to local ones
18 Activation function transformations
19 Addressing normalization layers
20 Addressing pooling layers
21 Summary of our method
22 Some implementation details
23 Application to various modified ResNets

IV Additional analysis of DKS and related methods
24 Neural Tangent Kernel analysis
25 Variance propagation, signal propagation, and their relationship to approximate kernel analysis
26 Review and analysis of related approaches for constructing and initializing deep neural networks

V Experiments and conclusions
27 Experimental setup
28 Experimental results
29 Conclusions
30 Limitations and future directions

VI Appendix
A Approximating average unit values
B Mathematical details for Section 8.1.2
C A detailed analysis of C map convergence in deep networks
D Mathematical details for Section 13
E Mathematical details for Section 20.1.2
F Mathematical details for Section 18.5
G Mathematical details for Section 20.2
H Mathematical details for Section 24
I Path-weight analysis and its relationship to approximate kernel analysis
J Analyzing (nearly) standard ResNets using Q/C maps
K Empirical evidence for the relationship between Q map derivatives and kernel approximation error
L Example learning rate schedules from FIRE PBT
M Meta-parameter studies
N Experiments with ablations and modifications of DKS

Part I Theoretical preliminaries

3. Neural network terminology and architectural assumptions

3.1 Basic neural network terminology

Throughout this work we will assume that the reader is already familiar with convolutional neural networks (Fukushima and Miyake, 1982; LeCun et al., 1998a), for which many overviews and tutorials are available (e.g. Goodfellow et al., 2016, Chapter 9). The purpose of this subsection won't be to define convolutional network concepts from scratch, but rather to lay out the specific terminology we will use when referring to them. In this work we will consider neural networks consisting of affine layers of the standard fully-connected and convolutional types, and nonlinear layers that compute element-wise activation functions (which are typically nonlinear). We define a combined layer to be an affine layer, immediately followed by a nonlinear layer.
(Note that a combined layer is what was traditionally referred to as a \u201clayer\u201d in the neural network literature, before the modern trend of referring to the individual a\ufb03ne and nonlinear parts as their own separate \u201clayers\u201d.) The input and output of convolutional layers (or networks) are called feature maps, and consist of an array of locations vectors, with the entries of these vectors being called channels. The parameters of an a\ufb03ne layer are its weights (sometimes called a \ufb01lter bank in the convolutional case), and its bias vector. So for example, in the fully-connected case, a combined layer would compute \u03c6(Wz + b), where z is its input vector, W its matrix of weights, b its bias vector, and \u03c6 its activation function (which is de\ufb01ned from R to R, and applied element-wise for higher dimensional inputs). In this work, the discussion will center around a single neural network which we will refer to simply as the network or sometimes the entire network. We will de\ufb01ne a subnetwork as a neural network formed from a subset of the entire network\u2019s layers which preserves all dependency relationships and has a well-de\ufb01ned and singular input and output (unlike the entire network, which can have multiple inputs and outputs in general). Subnetworks can be thought of as performing part of the computation of the network. So for example, if the network consists of a sequence of \ufb01ve layers, then layers 2, 3 and 4 form a subnetwork whose input is the input to layer 2, and whose output is the output of layer 4. But layers 2, 4, and 5 do not form a subnetwork since the dependency of layer 4 on layer 2 is not preserved. 3.2 Initial architectural assumptions We observe that a fully-connected layer is equivalent to an convolutional layer with a 1x1 feature map and 1x1 \ufb01lter size, where the input/output data dimensions are just the input/output channel dimensions. Thus, going forward, we will restrict our analysis to the convolution case, which implicitly handles the fully-connected case via this reduction. We will also assume, for now, that the network can be entirely built out of three components: combined layers (as de\ufb01ne above), non-zero constant scalar multiplications operations 9 \fMartens et al. applied to individual feature maps, and concatenation operations, which concatenate two feature maps of compatible sizes along their channel dimensions. We will permit a given feature map to act as the input to multiple operations/layers in the network, thus allowing \u201cbranching structures\u201d and multiple \u201coutput heads\u201d. The restriction to combined layers isn\u2019t as severe as it might seem, as an isolated a\ufb03ne layer is equivalent to a combined layer with an identity activation function. And while sum operations are not explicitly included among the allowed operations, under certain conditions they can be simulated via a simple construction whose details we will defer to Section 9. This means that our analysis can apply to networks containing actual sum operations, under said conditions. Two or more consecutive nonlinear layers are also not allowed by our assumptions, however one can simply fuse two such layers into a single one by composing their activation functions. For now we will assume that the network does not contain any pooling layers. We will (partially) relax this assumption later in Section 20. 4. 
Parameter distributions 4.1 Assumptions on the form of the parameter distributions In order to obtain a su\ufb03ciently simple characterization of the function computed by a neural network at initialization time, we will make certain assumptions about the distribution of its parameters at initialization. Our \ufb01rst one will be that the bias vector is initialized to zero. While not strictly necessary to the derivation and viability of DKS, this assumption will simplify our presentation. Our second will be that if the input of one layer depends on the output of another, either directly or indirectly, then the parameters of these layers must be initialized independently from each other. This rules out recurrent neural networks, for example, since parameters are shared across time-steps. Finally, except where stated otherwise, we will assume the use of a \u201cDelta initialization\u201d (Balduzzi et al., 2017; Xiao et al., 2018), which requires that \ufb01lter bank tensors are initialized to zero everywhere except for their central location/o\ufb00set (and have odd-sized \ufb01lter dimensions to make this possible). As an example, if we have a 5 \u00d7 5 \ufb01lter, then only the weights corresponding to entry (3, 3) would be non-zero. Note that for fully-connected layers there is only one location, so that a Delta initialization becomes equivalent to a standard one. The non-zero weights of a Delta-initialized \ufb01lter bank form a m \u00d7 k matrix, where k is the input channel dimension and m is the output channel dimension. To initialize this matrix we have two options. First, we can use an entry-wise iid Gaussian distribution with mean 0 and variance 1/k, which gives rise to the Gaussian Delta initialization. While it might seem restrictive to assume a variance of 1/k (instead of \u03c32/k for general \u03c3 > 0), this will simplify our presentation going forward, and other choices can be simulated by rescaling the network\u2019s activation functions (which will be part of DKS). The second option is to use a scaled-corrected uniform orthogonal (SUO) distribution, which is a special distribution of rescaled orthogonal matrices. When m \u2a7dk, samples from this distribution can be generated as (XX\u22a4)\u22121/2X, where X is a m\u00d7k matrix 10 \fDeep Kernel Shaping with entries sampled iid from N(0, 1). When m > k, we may apply the same procedure but with k and m reversed, and then transpose the result. The resulting distribution is given by the well-known Haar measure on orthogonal matrices (e.g. Meckes, 2019), and is also sometimes called the uniform distribution. To be consistent with the scaling characteristics of the Gaussian initialization, we further multiply by the scaling factor max \u0010p m/k, 1 \u0011 , which will have an e\ufb00ect only when m > k. We will call Delta initializations that use the SUO distribution Orthogonal Delta initializations. 4.2 A brief discussion about random orthogonal matrices and the SUO distribution The scaled-corrected uniform orthogonal distribution, as we have de\ufb01ned it, has the property that it is invariant to pre or post-multiplication of the matrix by a constant square orthonormal matrix (Eaton, 1989, Chapter 7). This implies that left-multiplying an input vector by an unobserved matrix sampled from this distribution erases all information about the vector\u2019s direction. The input vector\u2019s dimension-normalized squared norm (i.e. 
1 dim(x)\u2225x\u22252) can meanwhile be exactly recovered when k \u2264m, and is equal to the output vector\u2019s dimension-normalized squared norm. For the computations in the next section to be valid for a given orthogonal weight distribution, we require that the distribution satis\ufb01es these properties. However, many randomized procedures used in practice for sampling orthogonal matrices lack the directional invariance property. And even procedures whose distributions do possess it often don\u2019t include the max \u0010p m/k, 1 \u0011 scale correction factor, which is required for the dimensionnormalized squared norm to be preserved. Thus, we strongly recommend that anyone implementing DKS use the sampling procedure for orthogonal matrices that we have outlined, unless they are con\ufb01dent that their own procedure gives precisely the same distribution. Note that Saxe et al. (2014) and Xiao et al. (2018) have used distributions over orthogonal matrices to initialize neural networks. It turns out that the formulas they derive also require SUO-distributed weights to be correct, even though they did not state this explicitly. Finally, note that the entry-wise iid N(0, 1/k) distribution for m \u00d7 k matrices behaves very similarly to the SUO distribution with respect to multiplication by an input vector, and gives a distribution on the output vector which is identical up to a multiplication by a random scalar (which is distributed according to the chi distribution with m degrees of freedom). The output vector\u2019s dimension-normalized squared norm is thus a random multiplicative perturbation of the input vector\u2019s (instead of being equal to it), where the perturbation\u2019s mean and variance are 1 and 2/m respectively. From these observations we can see that Gaussian initializations, like SUO ones, give rise to directional invariance, but only approximately preserve the dimension-normalized squared norm (and in a way that gets more precise as m grows). 5. Kernel function approximations for neural networks The starting point for our analysis of the initialization-time behavior of neural networks will be kernel functions, and the approximations of these that hold at initialization-time when 11 \fMartens et al. the channel dimensions are large. This type of analysis was originally pioneered by Neal (1996), and developed further in various subsequent works (e.g. Williams, 1997; Rahimi and Recht, 2008; Cho and Saul, 2009; Mairal et al., 2014; Anselmi et al., 2015; Hazan and Jaakkola, 2015; Daniely et al., 2016; Matthews et al., 2018; Lee et al., 2018; Garriga-Alonso et al., 2018; Novak et al., 2018; Arora et al., 2019). In this section we will review these concepts and establish our notation and terminology for the key quantities. We will depart from the index-heavy tensor notation of some previous works (such as Novak et al., 2018) in favor of a more compact one based on matrices. 5.1 Simpli\ufb01ed version for the fully-connected case Before we launch into our full treatment of kernel function approximations for convolutional neural networks, in this subsection we will quickly give a simpli\ufb01ed version for the fullyconnected case, with the goal of building intuition. Note that the notation de\ufb01ned here is only a special case of the more general notation we will develop in subsequent subsections. For a vector-valued function f : Rk \u2192Rm, we de\ufb01ne its kernel function \u03baf by \u03baf(z, z\u2032) = 1 mf(z)\u22a4f(z\u2032). It turns out (e.g. 
Daniely et al., 2016) that when f is a su\ufb03ciently wide fully-connected combined layer with iid N(0, 1/k) weights and activation function \u03c6, \u03baf(z, z\u2032) is closely approximated with high probability by f \u03baf(\u03a3z,z\u2032), where f \u03baf(\u03a3z,z\u2032) = E\u0014 u1 u2 \u0015 \u223cN(0, \u03a3z,z\u2032)[\u03c6(u1)\u03c6(u2)], (1) where \u03a3z,z\u2032 = 1 k \u0014 \u2225z\u22252 z\u22a4z\u2032 z\u22a4z\u2032 \u2225z\u2032\u22252 \u0015 \u2208R2\u00d72. This can be derived by observing that any two units in f\u2019s nonlinear layer are Gaussian distributed (when conditioned on z and z\u2032) with mean zero and covariance matrix \u03a3z,z\u2032. And so if we consider enough of these units, their average statistics (given \u03baf(z, z\u2032)) converge in probability to the expectation. Using the notable fact that f \u03baf(z, z\u2032) only depends on \u03a3z,z\u2032 (and not the full details of z and z\u2032), we can then compose these layer-wise kernel approximations to form ones for networks consisting of many such layers. 5.2 Notation for feature maps and subnetworks Throughout this work we will represent feature maps as matrices in R#channels \u00d7 #locations, where #channels is the number of channels in the feature map and #locations is the number of locations. Note that for fully-connected layers these matrices are just column vectors. We will represent subnetworks (of which single layers are a special case) by symbols such as \u201cf\u201d or \u201cg\u201d. Implicit in these representations is a dependence on all the structural details of the subnetwork, including its parameters, its activation functions, and anything 12 \fDeep Kernel Shaping else we need in order to construct our various approximations. At the same time, we will use standard functional notation such as f(Z) when we want to treat f as a function from its input to its output. 5.3 Inner product matrices (IPMs) and Pair-location kernel functions (PKFs) Suppose X, Y \u2208Rk\u00d7\u2113are feature maps with channel dimension k and number of locations \u2113. We will de\ufb01ne the inner product matrix (or IPM) of X and Y , denoted as \u03a3X,Y , by \u03a3X,Y \u22611 k \u0014 X\u22a4X X\u22a4Y Y \u22a4X Y \u22a4Y \u0015 \u2208R2\u2113\u00d72\u2113. The entries of an IPM are the (dimension-normalized) inner products between all pairs of column vectors from X and Y , or in other words, the average (across channels) of the entry-wise products between pairs of location vectors from the feature maps X and Y . Now suppose f is a subnetwork whose output feature map is in Rm\u00d7\u2113. We de\ufb01ne the paired-location kernel function (or PKF) of f, denoted by \u03baf, as \u03baf(Z, Z\u2032) \u2261\u03a3f(Z),f(Z\u2032) = 1 m \u0014 f(Z)\u22a4f(Z) f(Z)\u22a4f(Z\u2032) f(Z\u2032)\u22a4f(Z) f(Z\u2032)\u22a4f(Z\u2032) \u0015 . If f is a fully-connected combined layer, then \u03baf(Z, Z\u2032) is just a 2x2 matrix, while for general convolutional combined layers it has a 2 \u00d7 2 block structure, with blocks of size \u2113\u00d7 \u2113. Note that PKFs are analogous to Novak et al.\u2019s (2018) \u201cactivation covariance matrices\u201d. f\u2019s PKF \u03baf gives us a \u201cgeometric view\u201d of f\u2019s input-output behavior. 
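The definitions above translate directly into code. The following NumPy sketch (function and variable names are ours, not the paper's) computes the IPM of two feature maps stored as (channels × locations) matrices, and the PKF of a subnetwork f as the IPM of its two outputs; for a fully-connected subnetwork there is a single location, so the PKF is just a 2 × 2 matrix.

```python
# Minimal sketch of the IPM and PKF definitions above (names ours).
import numpy as np

def ipm(X, Y):
    """Inner product matrix Sigma_{X,Y} of feature maps X, Y with shape (channels, locations)."""
    k = X.shape[0]
    top = np.concatenate([X.T @ X, X.T @ Y], axis=1)
    bot = np.concatenate([Y.T @ X, Y.T @ Y], axis=1)
    return np.concatenate([top, bot], axis=0) / k

def pkf(f, Z, Zp):
    """Paired-location kernel function kappa_f(Z, Z') = Sigma_{f(Z), f(Z')}."""
    return ipm(f(Z), f(Zp))

# Toy fully-connected combined layer (one location, so the PKF is a 2x2 matrix).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16)) / np.sqrt(16)   # iid N(0, 1/k) weights with k = 16
f = lambda Z: np.tanh(W @ Z)
print(pkf(f, rng.standard_normal((16, 1)), rng.standard_normal((16, 1))))
```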
In particular, because \u03baf determines the inner-products between all pairs of output vectors (across the di\ufb00erent locations and both inputs), it determines the distances between all such vectors via the formula \u2225x \u2212y\u2032\u2225= p x\u22a4x + y\u22a4y\u2032 \u22122x\u22a4y\u2032. 5.4 Initialization-time approximations to the PKF for combined layers In this subsection we will assume that f is a combined layer with element-wise activation function \u03c6. We will also assume Gaussian-distributed weights, as part of either a Delta or non-Delta initialization scheme. (An extension to SUO-distributed weights given in Subsection 5.9). We are interested in extracting a simple mathematical approximation of \u03baf that is valid at initialization time, which we can use in order to construct approximations of the PKF of larger subnetworks. To begin with, we will assume that the convolutional part of f uses padding and has a stride of 1, which means that input and output locations will be in one to one correspondence. (This assumption will be relaxed in the next subsection.) In general, computing \u03baf for combined layers f boils down to direct evaluation of the de\ufb01ning formula, with no simpli\ufb01cations possible. But when f\u2019s initial parameters are distributed as per Section 4, there exists a much simpler function f \u03baf that approximates \u03baf at initialization time with high probability, which we call the approximate pairedlocation kernel function (or APKF) of f. f \u03baf is obtained from \u03baf by taking the limit as the output channel dimension go to in\ufb01nity, and is a good approximation when the actual (\ufb01nite) output channel dimension is su\ufb03ciently large. 13 \fMartens et al. As shown by Garriga-Alonso et al. (2018) and Novak et al. (2018), the APKF for convolutional combined layers initialized with a standard Gaussian fan-in initialization1 (LeCun et al., 1998b) is given by f \u03baf(\u03a3Z,Z\u2032) = Eu\u223cN(0, A(\u03a3Z,Z\u2032))[\u03c6(u)\u03c6(u)\u22a4], (2) where A is the operator which maps \u03a3Z,Z\u2032 to \u03a3P(Z),P(Z\u2032), with P(Z) denoting the matrix of patch vectors2 generated from Z. A key property of f \u03baf is that it only depends on Z and Z\u2032 via the associated IPM \u03a3Z,Z\u2032. As discussed in Section 4, we are assuming the use of a Delta initialization scheme in this work. Intuitively, a Delta initialization makes a convolutional layer behave like a set of fully-connected layers that operate independently over locations in the feature map (and share parameters). This results in a simpli\ufb01ed form for f \u03baf which is a directly analogous to the kernel approximation for fully-connected combined layers (i.e. Equation 1). It is given by3 f \u03baf(\u03a3Z,Z\u2032) = Eu\u223cN(0,\u03a3Z,Z\u2032)[\u03c6(u)\u03c6(u)\u22a4]. A minor technical point is that \u03a3Z,Z\u2032 may be singular, in which case N(0, \u03a3Z,Z\u2032) will be \u201cdegenerate\u201d, and its density function technically unde\ufb01ned. The easiest way this can happen is if Z = Z\u2032. However, one can still meaningfully de\ufb01ne a distribution and sample from it using (\u03a3Z,Z\u2032)1/2v for v \u223cN(0, I), which is equivalent to adding \u03f5I to \u03a3Z,Z\u2032 and then letting \u03f5 \u21920. With this extended de\ufb01nition of N(0, \u03a3Z,Z\u2032) our formulas remain valid. 
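To see this approximation at work numerically, the simplified Delta-initialization formula above can be compared against the empirical PKF of a single wide layer. The sketch below (NumPy; all names ours, and a fully-connected layer is used so that the IPM is just 2 × 2) estimates the Gaussian expectation by Monte Carlo, drawing samples through a matrix square root of the IPM in the spirit of the (Σ)^{1/2}v construction just described.

```python
# Sketch: empirical PKF of a wide combined layer vs. a Monte Carlo estimate of its APKF.
import numpy as np

rng = np.random.default_rng(1)
phi = np.tanh
k, m = 32, 50_000                          # input / output channel dimensions (m large)

z, zp = rng.standard_normal((2, k))
Sigma = np.array([[z @ z, z @ zp], [zp @ z, zp @ zp]]) / k   # the IPM Sigma_{z, z'}

# Empirical PKF of one randomly initialized layer with iid N(0, 1/k) weights.
W = rng.standard_normal((m, k)) / np.sqrt(k)
F = phi(np.stack([W @ z, W @ zp]))         # shape (2, m)
pkf_empirical = F @ F.T / m

# Monte Carlo APKF: E_{u ~ N(0, Sigma)}[phi(u) phi(u)^T], sampled via a square root of Sigma.
L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(2))
U = L @ rng.standard_normal((2, 500_000))
apkf_mc = phi(U) @ phi(U).T / U.shape[1]

print(np.max(np.abs(pkf_empirical - apkf_mc)))   # small when m is large
```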
5.5 Padding, strides, and dropped locations If the stride of f\u2019s convolution is not 1, or if it doesn\u2019t use padding and has a \ufb01lter size larger than 1 \u00d7 1, then the locations in the input and output feature maps won\u2019t be in one to one correspondence. Instead, they will be related to each other via a projection function s(u), which maps input locations (given by the entries of u) to their corresponding output locations (given by the entries of s(u)). This results in the following generalized formula for \u03baf: f \u03baf(\u03a3Z,Z\u2032) = Eu\u223cN(0,\u03a3Z,Z\u2032)[\u03c6(s(u))\u03c6(s(u))\u22a4]. (3) When the input and output locations are in one to one correspondence, s is just the identity function. Otherwise, s essentially \u201cdrops\u201d the input locations that are never visited by the center of the \ufb01lter (i.e. s(u) will be independent of the entries of u that are \u201cdropped\u201d). We will refer to these as dropped locations, and most of our discussions going forward will assume that the location under consideration has not been dropped at the layer in question. When a location is dropped at some layer, both the exact and approximate PKFs of that layer (and all subsequent layers) will be e\ufb00ectively zero for that location. 1. This initialization uses an entry-wise iid Gaussian distribution with mean 0 and variance 1/d, where d is the \ufb01lter size times the input channel dimension. 2. A \u201cpatch vector\u201d is one formed by concatenating together the subset of columns of Z corresponding to a particular location visited by the convolutional \ufb01lter. They have dimension kb2 for b \u00d7 b convolutions. 3. This formula can be obtained from Equation 2 by observing that a Delta-initialized \ufb01lter bank behaves like a 1x1 \ufb01lter, and that A is the identity operator in the case of a 1x1 \ufb01lter (since P(Z) = Z). 14 \fDeep Kernel Shaping 5.6 Deriving APKFs given Gaussian distributed weights At a high level, the APKF formulas given above for a combined layer f can be derived by observing that each pair of outputs from the a\ufb03ne part of f are linear combinations of Gaussian random variables (i.e. the \ufb01lter weights) when conditioned on the two inputs Z and Z\u2032, and are thus are jointly Gaussian distributed with mean zero. A straightforward computation then shows that the covariance matrix C of this distribution is \u03a3Z,Z\u2032 \u2297Im\u00d7m or A(\u03a3Z,Z\u2032) \u2297Im\u00d7m, where \u2297denotes the Kronecker product. Because units in di\ufb00erent channels have zero covariance they are independent, and so \u03baf(Z, Z\u2032) is equal to an average over output channels of iid random variables, and thus converges in probability to its expectation as the number of output channels goes to in\ufb01nity. We set f \u03baf(\u03a3Z,Z\u2032) equal to this expectation, whose formula then follows from the one for C. Probabilistic bounds on the approximation error can then be obtained using concentration inequalities. 5.7 The APKF Condition and network-level PKF approximations The main approximation which we will use going forward is that the PKF of each combined layer is equal to its associated APKF at initialization time. Or in other words, that \u03a3f(Z),f(Z\u2032) = \u03baf(Z, Z\u2032) \u2248f \u03baf(\u03a3Z,Z\u2032) for each combined layer f of the network. We will refer to this as the APKF Condition. Observe that a combined layer\u2019s APKF depends on Z and Z\u2032 only through the associated IPM \u03a3Z,Z\u2032. 
Thus, under the APKF Condition, we can compose APKFs for each combined layer to form an initialization-time approximations of the PKFs for arbitrary subnetworks, which we will call network-level PKF approximations. Extending our notation from the combined layer case, we will denote these approximations by f \u03baf for arbitrary subnetworks f. (Note that we rely on the property that subnetworks have a single input and output feature maps for this de\ufb01nition and notation to make sense.) An additional complication that we must deal with when constructing network-level PKF approximations is the presence of concatenation operations, where Z is the concatenation of two feature maps X and Y along their channel dimensions. In this cases, we observe that \u03a3Z,Z\u2032 = k1\u03a3X,X\u2032 + k2\u03a3Y,Y \u2032 k1 + k1 , (4) where k1 and k2 are the number of channels in X and Y respectively. As we will see in the following sections, network-level PKF approximations are amenable to detailed analysis, and expose several key properties which end up being crucial determinants of network trainability (and which can be controlled through careful interventions). 5.8 How accurate are these approximations? As discussed above, the APKF for a combined layer f is derived by observing that the entries of \u03baf(Z, Z\u2032) are empirical averages of iid variables that converge in probability to their expectations as the output channel dimension goes to in\ufb01nity. Applying concentration inequalities then leads to statements of the form: \u201cfor any \u03f5 > 0 and \u03b4 > 0 there exists 15 \fMartens et al. an integer m0(\u03f5, \u03b4) such that if the output channel dimension satis\ufb01es m \u2a7em0(\u03f5, \u03b4) then \u2225\u03baf(Z, Z\u2032)\u2212f \u03baf(\u03a3Z,Z\u2032)\u2225< \u03f5 with probability 1\u2212\u03b4.\u201d. The precise dependency of m0(\u03f5, \u03b4) and on \u03f5 and \u03b4 is of practical interest, as the output channel dimension of real neural network layers is \ufb01nite, and may not even be particularly large in some cases. Ultimately, we are interested in bounding the kernel approximation error not just for single combined layers but for entire networks. In general, such bounds will be worse than anything provable for single layers, as approximation error will compound with depth (since the output of one approximation is fed as input into the next). The only work we are aware of that gives such bounds is that of Daniely et al. (2016). In that work, the authors analyze what are essentially networks of fully-connected combined layers arranged in arbitrary topologies, with certain technical conditions imposed on their input data and activation functions. Translating their main result into the language and assumptions of this work yields the following theorem: Theorem 1 (Adapted from Theorem 2 of Daniely et al. (2016)) Suppose that f is a network containing only fully-connected combined layers and concatenation operations, the former of which are initialized independently of each other with a standard Gaussian fan-in initialization, and use the same activation function \u03c6. 
Suppose further that \u03c6 is twice continuously di\ufb00erentiable and satis\ufb01es Ex\u223cN(0,1)[\u03c6(x)2] = 1 and \u2225\u03c6\u2225\u221e, \u2225\u03c6\u2032\u2225\u221e, \u2225\u03c6\u2032\u2032\u2225\u221e\u2a7dC for some C (with \u2225\u00b7 \u2225\u221edenoting the supremal value), and that each layer has output dimension (aka \u201cwidth\u201d) greater than or equal to (4C4)D log(8L/\u03b4) \u03f52 , where D is maximum number of nonlinear layers in any input-output path through the network (i.e. its \u201cdepth\u201d), L is its number of combined layers, and \u03b4, \u03f5 > 0. Then at initialization time, for all input vectors z and z\u2032 to f satisfying \u2225z\u22252 = \u2225z\u2032\u22252 = dim(z), we have that |[\u03baf(z, z\u2032)]1,2 \u2212[f \u03baf(\u03a3z,z\u2032)]1,2| \u2a7d\u03f5 with probability at least 1 \u2212\u03b4. Remark 2 Note that in our notation, both \u03baf(z, z\u2032) and f \u03baf(\u03a3z,z\u2032) are 2 \u00d7 2 matrices, and [\u00b7]1,2 extracts the (1, 2)-th entry, or in other words, the value of 1 dim(f(z))f(z)\u22a4f(z\u2032) and its approximation. One can estimate the error for diagonal entries simply by setting z = z\u2032. Remark 3 Because 1 = q Eu\u223cN(0,1)[\u03c6(u)2] \u2a7d q Eu\u223cN(0,1)[\u2225\u03c6\u22252 \u221e] = \u2225\u03c6\u2225\u221e, it thus follows that C \u2a7e1 in the above theorem. And while the theorem assumes the use of the Gaussian fan-in initialization, we note that for fully-connected networks this is equivalent to the Gaussian Delta initialization. Remark 4 This theorem statement di\ufb00ers from the one in Daniely et al. (2016) by explicitly assuming that the activation function \u03c6 satis\ufb01es Ex\u223cN(0,1)[\u03c6(x)2] = 1, or is in other words \u201cnormalized\u201d. As far as we can tell, this assumption is implicit in the de\ufb01nitions made by Daniely et al. (2016). 16 \fDeep Kernel Shaping Remark 5 The condition that Ex\u223cN(0,1)[\u03c6(x)2] = 1 can be achieved by normalizing the output of the activation functions by an appropriate constant. And the condition that \u2225z\u22252 = \u2225z\u2032\u22252 = dim(z) can be achieved through data pre-processing (as discussed in Section 10.2). Both of these conditions will be enforced as part of DKS (although motivated di\ufb00erently). The bound in Theorem 1 predicts an exponential dependence of the minimum required width and depth D, and a 1/\u03f52 dependence on the error tolerance \u03f5. The exponential dependence on D means that this bound could never realistically be applied to a moderately deep network running on actual hardware, as the required width would be prohibitive. While it could easily be the case that some choices of \u03c6 give an exponential dependence as the bound predicts, we conjecture that with more carefully designed assumptions on the properties of \u03c6, a bound with better dependence could be proven. Indeed, Daniely et al. (2016) themselves give a more specialized bound for networks with rescaled RELU activations (which technically violate the hypotheses of Theorem 1 since they are unbounded and not di\ufb00erentiable everywhere), where the required width is only quadratic in D. The main limitation of Theorem 1 is that it applies only to networks of fully-connected combined layers that don\u2019t share weights. We conjecture that a similar result may also hold for networks with convolutional layers and a restricted type of inter-layer weight sharing. 
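The exponential dependence on depth discussed above is easy to see by plugging numbers into the width bound of Theorem 1. The short calculation below (not from the paper) evaluates (4C⁴)^D log(8L/δ)/ε² at the most favorable value C = 1 (recall C ⩾ 1 by Remark 3), taking L = D as for a simple chain of combined layers.

```python
# Sketch: evaluating the minimum-width expression from Theorem 1.
import math

def theorem1_min_width(C, D, L, delta, eps):
    return (4 * C ** 4) ** D * math.log(8 * L / delta) / eps ** 2

for D in (1, 2, 4, 8):
    print(D, theorem1_min_width(C=1.0, D=D, L=D, delta=0.01, eps=0.1))
# The required width grows by a factor of at least 4 with every extra unit of depth.
```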
5.9 The orthogonal initialization case (assuming SUO-distributed weights) The kernel formulas and theory given so far in this section have all assumed the use of Gaussian Delta initializations. However, our assumptions also permit the use of Orthogonal Delta initializations, which as discussed in Section 4, use the SUO distribution instead of an iid Gaussian one to initialize the non-zero weights of the \ufb01lter. While some previous works (e.g. Xiao et al., 2018) have used these kinds of kernel approximation formulas in the orthogonal case, and have appealed to the vague notion that random orthogonal matrices \u201clook like\u201d Gaussian-distributed ones in high dimensions, there hasn\u2019t been any mathematically rigorous justi\ufb01cation of this practice until the recent work of Martens (2021). The following theorem, which is adapted from Martens (2021), establishes convergence in probability of the APKF to the associated PKF for a fully-connected combined layer with SUO-distributed weight matrix. Like Theorem 1, it provides an explicit and fairly reasonable convergence rate. An extension of this result to multi-layer networks would likely proceed along similar lines to the argument given in Daniely et al. (2016) for the Gaussian case. Theorem 6 (Adapted from Theorem 2 of Martens (2021)) Let f be a fully-connected combined layer with an SUO distributed m \u00d7 k weight matrix W, a bias vector equal to 0, and an activation function \u03c6 satisfying \u2225\u03c6\u2225\u221e, \u2225\u03c6\u2032\u2225\u221e\u2a7dC for some C (with \u2225\u00b7 \u2225\u221edenoting the supremal value). Denote n = max(k, m), and suppose that for \u03b4, \u03f5 \u2a7e0 we have m5/2 (n + 1)2 \u2a7elog(2/\u03b4) and n \u22121 m3/4 \u2a7e8 \u221a 2C2 \u03f5 . Then, at initialization time, for all pairs of vectors z, z\u2032 \u2208Rk satisfying \u2225z\u22252 = \u2225z\u2032\u22252 = k, we have that |[\u03baf(z, z\u2032)]1,2 \u2212[f \u03baf(\u03a3z,z\u2032)]1,2| \u2a7d\u03f5 17 \fMartens et al. with probability at least 1 \u2212\u03b4. Remark 7 The conditions on k, m, and n \u2261max(k, m) in the theorem statement will be satis\ufb01ed as long as n is su\ufb03ciently large and k is not too much larger than m. In the case where m \u2a7ek, the LHS\u2019s of these bounds simpli\ufb01es to approximately m1/2 and m1/4, respectively. It thus follows that the APKF converges in probability to the PKF as the output dimension m goes to \u221e. Remark 8 In the case where m \u2a7ek, the conditions imply that m \u2273128C4 log(2/\u03b4)2 \u03f52 , which is similar to the width bound from Theorem 1 for D = 1. Remark 9 Note that while this theorem is stated only for fully-connected combined layers, it also applies to convolutional combined layers that use Orthogonal Delta initializations by taking z and z\u2032 to be any pair of vectors from the union of the columns Z and Z\u2032. 6. Q and C maps for combined layers Q maps and C maps are mathematical constructs introduced by Saxe et al. (2014) and Poole et al. (2016) that describe the initialization time behavior of deep fully-connected networks. While original derived within the semi-rigorous \u201csignal propagation\u201d framework (which is discussed in Section 25.6), they can also be applied under certain conditions within the more rigorous context of kernel function approximations. In that context, they provide a compact alternative representation of approximate kernel functions that is easier to work with. 
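Returning briefly to the SUO distribution before developing Q/C maps: the sampling procedure recommended in Sections 4.2 and 5.9 above is straightforward to implement. The following NumPy sketch (the function name is ours) draws an m × k SUO matrix via (XX⊤)^{-1/2}X, computed here through a thin SVD, applies the max(√(m/k), 1) scale correction, and checks the exact preservation of the dimension-normalized squared norm noted in Section 4.2 for the case k ⩽ m.

```python
# Sketch of the SUO sampling procedure described above (function name ours).
import numpy as np

def sample_suo(m: int, k: int, rng) -> np.ndarray:
    """Draw an m x k matrix from the scaled-corrected uniform orthogonal (SUO) distribution."""
    a, b = min(m, k), max(m, k)
    X = rng.standard_normal((a, b))
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    Q = U @ Vt                      # equals (X X^T)^{-1/2} X for (almost surely) full-rank X
    if m > k:
        Q = Q.T                     # base procedure assumes m <= k; transpose back otherwise
    return max(np.sqrt(m / k), 1.0) * Q

rng = np.random.default_rng(2)
W = sample_suo(300, 100, rng)
z = rng.standard_normal(100)
print(np.linalg.norm(z) ** 2 / 100, np.linalg.norm(W @ z) ** 2 / 300)   # equal (k <= m case)
```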
As will be discussed later in Part II, the Q/C maps of a network tell us a lot about its trainability. Indeed, they have appeared either implicitly or explicitly, often in simpli\ufb01ed forms, in much of the previous work on network design and initialization (as will be made clear in Sections 25 and 26). They are also central to the derivation of DKS, and over the next few sections we will develop the generalized version of them that we will use in this work. In this section we will formally introduce Q/C maps maps and their associated notation, and give formulas to compute them for combined layers under our stated hypotheses. Note that while the connection between Q/C maps and approximate kernel functions has been previously observed (e.g. Lee et al., 2018), it hasn\u2019t before been carefully worked out, nor has it been generalized to convolutional layers (as we will do here). In the section that follows we will show how Q/C maps can be naturally extended beyond single combined layers to describe the behavior of network-level PKF approximations for arbitrary subnetworks with complex topologies. 6.1 Q maps for combined layers Consider a combined layer f with \u03c6 as its element-wise activation function, and Z and Z\u2032 as its two inputs. By Equation 3 and basic properties of Gaussian expectations, any given 18 \fDeep Kernel Shaping diagonal entry qout of f \u03baf(\u03a3Z,Z\u2032) depends only on the corresponding diagonal entry qin of \u03a3Z,Z\u2032, and can be computed as qout = Qf(qin) = Eu\u223cN(0,qin)[\u03c6(u)2] = Ex\u223cN(0,1) h \u03c6 (\u221aqinx)2i , (5) where Qf is de\ufb01ned as the Q map of f. We will call such diagonal entries q values, and note that they are equal to the dimension-normalized squared norms of their associated location vectors under the APKF Condition. Notably, the form of the Q map is the same for each location, and so we may associate them with combined layers in a location-independent way. 6.2 C maps for combined layers An o\ufb00-diagonal entry mout of f \u03baf(\u03a3Z,Z\u2032) has a slightly more complex dependence on \u03a3Z,Z\u2032 in Equation 3, as it depends on both the corresponding entry min of \u03a3Z,Z\u2032, as well as the two associated diagonal entries (q1 and q2) that share a row or column. It is given by mout = E\u0014 u1 u2 \u0015 \u223cN \u0012 0, \u0014 q1 min min q2 \u0015\u0013[\u03c6(u1)\u03c6(u2)]. (6) We call such o\ufb00-diagonal entries m values, and note that they are equal to the dimensionnormalized inner product of their two associated location vectors under the APKF Condition. Following Poole et al. (2016), we focus on \u201clength-normalized\u201d versions of the m values called c values. A c value can be obtained from an m value by dividing it by the square root of the product of its two associated q values. (e.g. cin = min/\u221aq1q2 in the context of Equation 6.) Under the APKF Condition, c values are equal to the cosine similarity between their two associated location vectors. 
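Before turning to C maps, we note that Equation 5 is straightforward to evaluate numerically. The following is a minimal sketch using Gauss-Hermite quadrature, with tanh chosen arbitrarily as the activation (the helper names are ours).

```python
import numpy as np

# Gauss-Hermite rule for E_{x ~ N(0,1)}[g(x)]
t, w = np.polynomial.hermite.hermgauss(80)
gauss_expect = lambda g: (w * g(np.sqrt(2.0) * t)).sum() / np.sqrt(np.pi)

def local_Q_map(phi, q_in):
    """Equation 5: Q_f(q_in) = E_{x ~ N(0,1)}[phi(sqrt(q_in) x)^2]."""
    return gauss_expect(lambda x: phi(np.sqrt(q_in) * x) ** 2)

# Example: the output q value of a tanh combined layer fed a q value of 1.
print(local_Q_map(np.tanh, 1.0))        # roughly 0.39
```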
c values are computed using C maps, which for a combined layer f are given by cout = Cf(cin, q1, q2) \u2261 1 p Qf(q1)Qf(q2)E\u0014 u1 u2 \u0015 \u223cN \u0012 0, \u0014 q1 min min q2 \u0015\u0013[\u03c6(u1)\u03c6(u2)] = 1 p Qf(q1)Qf(q2)E\u0014 v1 v2 \u0015 \u223cN \u0012 0, \u0014 1 cin cin 1 \u0015\u0013 [\u03c6 (\u221aq1v1) \u03c6 (\u221aq2v2)] = 1 p Qf(q1)Qf(q2)Ex,y\u223cN(0,1) \u0014 \u03c6 (\u221aq1x) \u03c6 \u0012\u221aq2 \u0012 cinx + q 1 \u2212c2 iny \u0013\u0013\u0015 , (7) where we have used the fact that \u221aq1x and \u221aq2 \u0010 cinx + q 1 \u2212c2 iny \u0011 are mean-zero Gaussian distributed with covariance matrix \u0014 q1 \u221aq1q2cin \u221aq1q2cin q2 \u0015 = \u0014 q1 min min q2 \u0015 . Like the Q map, the C map is the same for each location, and so we may associate a single C map to each combined layer. We note that Cf, when considered as a function of c, maps from [\u22121, 1] to [\u22121, 1]. This is immediate from the interpretation of c values as cosine similarities if we assume 19 \fMartens et al. the APKF Condition, but is true more generally. Intuitively, it must be the case, since the APKF Condition becomes exact in the limit as the channel dimension grows, and thus c values are precisely equal to cosine similarities in in\ufb01nite-dimensional spaces. To be more rigorous, one may apply H\u00a8 older\u2019s inequality within the Hilbert space of functions de\ufb01ned by the inner product \u27e8g, h\u27e9= Ex,y\u223cN(0,1)[g(x, y)h(x, y)], taking g(x, y) = \u03c6 \u0000\u221aq1x \u0001 and h(x, y) = \u03c6 \u0010\u221aq2 \u0010 cinx + q 1 \u2212c2 iny \u0011\u0011 . 6.3 Q/C maps for more general combined layers? Note that the existence of Q/C maps, as we have de\ufb01ned them, depends on our stated hypotheses for combined layers. In particular, that they are convolutional (or fully-connected), and use a Delta initialization scheme. While APKFs do exist for certain other layer types and initialization schemes, they may not always give rise to low dimensional maps that fully describe their behavior. For example, if we use a conventional fan-in initialization instead of a Delta initialization for the \ufb01lter weights, then the resulting APKF (given in Equation 2) implies a more complex dependence of the entries of the output IPM on the input IPM, where output q values will depend on (many) input c values. 7. Extended Q and C maps In Poole et al. (2016) and Schoenholz et al. (2017), the neural networks analyzed were assumed to be sequences of D fully-connected combined layers, each with the same activation function. Thus, the network\u2019s initialization-time behavior could be approximated using a single per-layer Q/C map composed with itself D times, and a dynamical systems analysis of this map could thus be performed. This analysis looked for the map\u2019s stable points and attractors, and characterized its asymptotic behavior as the number of self-compositions D (i.e. the network\u2019s depth) went to in\ufb01nity. In this work we consider architectures with a more general structure, and with layers that can be convolutional and employ a variety of activation functions. We are also interested in the given architecture\u2019s \ufb01nite structure, instead of its depth-limiting behavior, as this will allow us to more carefully tailor our manipulations to the given network. To facilitate this, in this section we will extend the notion of Q maps and C maps to arbitrary subnetworks (consisting of potentially many layers) in the natural way. 
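Equation 7 can likewise be evaluated with a two-dimensional Gauss-Hermite rule. A minimal sketch (ours), again using tanh purely for illustration:

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(60)
x2d, y2d = np.meshgrid(np.sqrt(2.0) * t, np.sqrt(2.0) * t)
w2d = np.outer(w, w) / np.pi
gauss_expect_2d = lambda g: (w2d * g(x2d, y2d)).sum()   # E over two iid N(0,1) variables

def local_Q_map(phi, q):
    return gauss_expect_2d(lambda x, y: phi(np.sqrt(q) * x) ** 2)

def local_C_map(phi, c, q1, q2):
    """Equation 7, using u1 = sqrt(q1) x and u2 = sqrt(q2)(c x + sqrt(1 - c^2) y)."""
    num = gauss_expect_2d(lambda x, y: phi(np.sqrt(q1) * x)
                          * phi(np.sqrt(q2) * (c * x + np.sqrt(1 - c ** 2) * y)))
    return num / np.sqrt(local_Q_map(phi, q1) * local_Q_map(phi, q2))

# Example: one tanh layer maps an input c value of 0.5 (at q1 = q2 = 1)
# to an output c value just below 0.5.
print(local_C_map(np.tanh, 0.5, 1.0, 1.0))
```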
Going forward, we will refer to Q maps and C maps de\ufb01ned speci\ufb01cally for combined layers as local Q/C maps, and maps de\ufb01ned speci\ufb01cally for larger subnetworks, via the extension procedure de\ufb01ned in the next subsection, as extended Q/C maps. Unquali\ufb01ed, \u201cQ/C maps\u201d will be a general term referring to both. 7.1 De\ufb01nition of extended Q/C maps The de\ufb01nition for extended Q/C maps is the natural generalization of the de\ufb01nition for local Q/C maps, where we replace APKF approximations for combined layers with network-level PKF approximations for subnetworks. In particular, given a subnetwork f, an extended Q map maps input q values, corresponding to the diagonal entries of the input IPM \u03a3Z,Z\u2032, to the associated output q values, corresponding to the diagonal entries of the associated 20 \fDeep Kernel Shaping output IPM f \u03baf(\u03a3Z,Z\u2032), as computed by the network-level PKF approximation f \u03baf. The de\ufb01nition for extended C maps is similar. That these de\ufb01nitions can be made in a location-independent way (as with the de\ufb01nitions of local Q/C maps), follows from the fact that extended Q/C maps can be constructed from local ones via composition and weighted averaging (as will be detailed below), which are both operations that preserve the location-independence property. 7.2 Computing extended maps Because Q maps compose with each other, and C maps compose with the combination of both, we can take the per-combined-layer maps and compose them in a way that mirrors the composition of the subnetwork\u2019s combined layers, analogously to how we assembled network-level PKF approximations from APKF approximations of each combined layer. For example, if we have two consecutive combined layers f and g, and wish to compute the Q and C map for the subnetwork h consisting of their composition, this is simply Qh(q) = Qg(Qf(q)) and Ch(c, q1, q2) = Cg(Cf(c, q1, q2), Qf(q1), Qf(q2)). The only complication is that we need to describe how q and c values can be computed when feature maps are concatenated along their channel dimensions, or when they are multiplied by a non-zero scalar constant. To handle the former situation, we recall from Equation 4 that concatenation leads to a weighted averaging of the feature maps\u2019 associated IPMs, with weights given by their respective number of channels. Thus, the q values, which are the diagonal entries of these matrices, average in the same way under concatenation. So given the channel dimensions k1 and k2, and the q values q1 and q2, we have that the associated q value of the concatenation is simply k1q1 + k2q2 k1 + k2 . (8) c values are slightly more complicated to deal with, but still relatively straightforward. We note that m values, the unnormalized counterparts of c values, are the o\ufb00-diagonal entries of the IPMs, and thus exhibit the same kind of averaging as q values. We can thus obtain the c values by \ufb01rst converting them to m values, performing the required weighted average, and then converting back to c values. This gives us the analogous formula k1\u221aq1,1q1,2c1 + k2\u221aq2,1q2,2c2 k1\u221aq1,1q1,2 + k2\u221aq2,1q2,2 , (9) where qi,j refers to the j-th q value associated with the c value from the i-th feature map being concatenated. (Recall that each c value is associated to two q values.) 
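The composition and concatenation rules above are simple to mechanize. The sketch below (with hypothetical helper names, assuming local maps of the kind constructed in the earlier snippets) threads q and c values through a chain of layers and implements Equations 8 and 9 for concatenations.

```python
import numpy as np

def extended_Q_map(local_Qs, q):
    # Extended Q map of a chain of layers: compose the local Q maps in order.
    for Q in local_Qs:
        q = Q(q)
    return q

def extended_C_map(local_Qs, local_Cs, c, q1, q2):
    # Extended C map of a chain: thread the c value and both q values through each layer.
    for Q, C in zip(local_Qs, local_Cs):
        c = C(c, q1, q2)
        q1, q2 = Q(q1), Q(q2)
    return c

def concat_q(qs, ks):
    # Equation 8: q value of a channel-wise concatenation.
    qs, ks = np.asarray(qs, float), np.asarray(ks, float)
    return (ks * qs).sum() / ks.sum()

def concat_c(cs, q_pairs, ks):
    # Equation 9: convert c values to m values, average, convert back.
    cs, ks = np.asarray(cs, float), np.asarray(ks, float)
    geo = np.array([np.sqrt(qa * qb) for qa, qb in q_pairs])
    return (ks * geo * cs).sum() / (ks * geo).sum()

print(concat_q([1.0, 4.0], [16, 48]))                            # 3.25
print(concat_c([0.2, 0.8], [(1.0, 1.0), (4.0, 4.0)], [16, 48]))  # about 0.754
```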
Note that the property that local C maps send [\u22121, 1] to [\u22121, 1] carries over to extended C maps, as this clearly preserved under composition and weighted averages. To handle multiplication of a feature map by a constant \u03b1 \u0338= 0, we note that the IPM of \u03b1Z and \u03b1Z\u2032 is equal to \u03b12 times the IPM of Z and Z\u2032, or in other words: \u03a3\u03b1Z,\u03b1Z\u2032 = \u03b12\u03a3Z,Z\u2032. We thus have that an output q value (or m value) for such an operation is simply \u03b12 times the corresponding input q value (or m value). And an output c value is just equal to the corresponding input c value, since the constant \u03b12 will cancel out when we divide by the geometric mean of the q values. 21 \fMartens et al. 7.3 Generalization to subnetworks with isolated a\ufb03ne and nonlinear layers Because it will simplify Q/C map computations for certain architectures (such as residual networks), we will also generalize Q/C maps to subnetworks that may contain a\ufb03ne or nonlinear layers in isolation (i.e. separated from their parent combined layer). To do this, we will de\ufb01ne local Q/C maps for isolated a\ufb03ne and nonlinear layers in a way that is consistent with our previous de\ufb01nitions (with one small proviso), and then use the previous composition argument to extend Q/C maps to larger subnetworks containing such layers. The isolated a\ufb03ne layer case is trivial, as an a\ufb03ne layer is equivalent to a combined layer with an identity activation function, and so is covered under the previous discussion. It follows that a\ufb03ne layers have local Q and C maps that are the identity function (which can easily be veri\ufb01ed by setting \u03c6(u) = u in Equations 5 and 7), and can thus be essentially ignored in the extended map computations. The case of nonlinear layers is more subtle. APKFs, from which local Q and C maps are de\ufb01ned, don\u2019t actually exist for nonlinear layers in isolation. In particular, for arbitrary input vectors it is not the case that one can closely approximate the norm of the output vector given only the norm of the input vector (with high probability). However, when the layer is part of a larger network in which its input vector is always the output of some a\ufb03ne layer (with a suitable parameter distribution), such a prediction can be made, and is given by the APKF for the corresponding combined layer. (To see this, note a\ufb03ne layers have identity Q and C maps and thus the input to the nonlinear layer has the same q and c values as the input to the corresponding combined layer.) Thus, we can de\ufb01ne the local Q and C map for an isolated nonlinear layer to be equal to the local Q and C maps for its associated combined layer, with the proviso that it describes the layer\u2019s kernel behavior only for \u201ctypical\u201d input vectors (i.e. those that are produced with high probability by the previous layers\u2019 computation) and not arbitrary input vectors. Note that these de\ufb01nitions are consistent with our de\ufb01nitions for combined layers, as the composition of the local Q/C map for an a\ufb03ne and nonlinear layer (as we have de\ufb01ned them here) does indeed recover the local Q/C map of the associated combined layer. Also, it should be emphasized that these arguments rely crucially on the fact that nonlinear layers may be \u201cisolated\u201d only from the point of the view of a given subnetwork. 
From the perspective of the entire network, it is still required that they are always part of a combined layer, or in other words, are always directly preceded by an a\ufb03ne layer. 8. Q and C map derivative computations Central to our analysis of Q and C maps are their derivatives, which encode many of the properties that we will care about. In this section we show how to compute them, \ufb01rst for local maps, and then for extended maps of arbitrary subnetworks. 8.1 Local map case A conceivable approach to computing the derivatives of local maps would be to derive a closed form expression for the required integrals, and then apply standard di\ufb00erentiation 22 \fDeep Kernel Shaping techniques. Unfortunately, closed form expressions for these integrals are not generally available for most the activation functions. Instead, following Poole et al. (2016), we will give integral expressions for the derivatives which are similar to the original maps themselves, and which can be e\ufb03ciently approximated using numerical integration (as discussed in Section 22.2). 8.1.1 Local Q map derivative for combined layers (or isolated nonlinear layers) Let f be a combined layer (or an isolated nonlinear layer) with element-wise activation function \u03c6. The derivative for Qf(q) with respect to q, which we denote by Q\u2032 f(q), can be computed straightforwardly from Equation 5, and is equal to Q\u2032 f(q) = 1 \u221aqEx\u223cN(0,1) \u0002 \u03c6 (\u221aqx) \u03c6\u2032 (\u221aqx) x \u0003 , where \u03c6\u2032 is the derivative of \u03c6. Note that because \u03c6 is continuous, we are still able to compute this expectation, and similar ones to follow, when \u03c6\u2032 is unde\ufb01ned on a \ufb01nite set of inputs (which is permitted under our global assumptions). 8.1.2 Local C map derivatives The derivative of local C maps with respect to their c value argument has an especially nice form which we make use of later in Section 11. We begin by de\ufb01ning the following notation: \u0393\u03c6(c, q1, q2) \u2261Ex,y\u223cN(0,1) h \u03c6 (\u221aq1x) \u03c6 \u0010\u221aq2 \u0010 cx + p 1 \u2212c2y \u0011\u0011i . (10) This function is closely related to the local C map of f (given by Equation 7) in the sense that Cf(c, q1, q2) = 1 \u221a Qf(q1)Qf(q2)\u0393\u03c6(c, q1, q2). The derivative of \u0393\u03c6(c, q1, q2) with respect to c, which we denote as \u0393\u2032 \u03c6(c, q1, q2), is given by \u0393\u2032 \u03c6(c, q1, q2) = \u221aq1q2\u0393\u03c6\u2032(c, q1, q2), This elegant formula was stated in Poole et al. (2016), although no explicit derivation of it was given. For completeness we provide one in Appendix B. An immediate consequence of this result is that the i-th derivative of \u0393\u03c6(c, q1, q2) with respect to c, which we denote by \u0393(i) \u03c6 (c, q1, q2), is equal to \u0393(i) \u03c6 (c, q1, q2) = (q1q2)i/2\u0393\u03c6(i)(c, q1, q2), where \u03c6(i) denotes the i-th derivative of \u03c6. From this it follows that the i-th derivative of Cf(c, q1, q2) w.r.t. c can be written as C(i) f (c, q1, q2) = (q1q2)i/2 p Qf(q1)Qf(q2)\u0393\u03c6(i)(c, q1, q2). (11) 23 \fMartens et al. This formula is valid even when i = 0, where the 0-th derivative is de\ufb01ned as the function itself (i.e. \u03c6(0) = \u03c6), as is standard convention. When \u03c6(i)(u) isn\u2019t de\ufb01ned on a measure zero set of points, the formula may still be valid, provided that \u03c6(i\u22121) is continuous. 
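The derivative formula above is easy to check numerically. In the sketch below (ours; tanh and q1 = q2 = 1 chosen for convenience), Equation 11 with i = 1 is compared against a central finite difference of the C map itself.

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(60)
x2d, y2d = np.meshgrid(np.sqrt(2.0) * t, np.sqrt(2.0) * t)
w2d = np.outer(w, w) / np.pi

def Gamma(g, c, q1, q2):
    """Equation 10, for an arbitrary element-wise function g."""
    return (w2d * g(np.sqrt(q1) * x2d)
                * g(np.sqrt(q2) * (c * x2d + np.sqrt(1 - c ** 2) * y2d))).sum()

phi, dphi = np.tanh, lambda u: 1.0 / np.cosh(u) ** 2      # tanh and its derivative
Q1 = Gamma(phi, 1.0, 1.0, 1.0)                            # Q_f(1) = Gamma_phi(1, 1, 1)
Cf = lambda c: Gamma(phi, c, 1.0, 1.0) / Q1               # C map at q1 = q2 = 1

def Cf_prime(c):
    # Equation 11 with i = 1:
    # C'_f(c) = sqrt(q1 q2) Gamma_{phi'}(c, q1, q2) / sqrt(Qf(q1) Qf(q2)), here with q1 = q2 = 1.
    return Gamma(dphi, c, 1.0, 1.0) / Q1

c0, eps = 0.3, 1e-5
print(Cf_prime(c0), (Cf(c0 + eps) - Cf(c0 - eps)) / (2 * eps))   # should agree closely
```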
For example, if \u03c6 is the RELU function, \u03c6(u) is continuous everywhere and has a derivative everywhere except at u = 0, so the formula is valid for i = 1. However, \u03c6(1)(u) is not continuous at u = 0, and one can use Equation 15 to show that C(2) f (c, 1, 1) \u2192\u221eas c \u21921, while the formula would wrongly predict a value of 0. 8.2 Derivatives of extended maps Because extended maps can be expressed as compositions and weighted averages of local maps, their derivative computations can be performed straightforwardly using automatic di\ufb00erentiation. The resulting formulae will still depend on the derivatives of local maps, but these can be computed (or numerically approximated) as per the previous subsection. In such a scheme, composition corresponds to multiplication, and weighted averages correspond to weighted averages (since di\ufb00erentiation is linear). Notably, because q values don\u2019t depend on c values, the derivative of extended C maps with respect to their input c values can be computed as if all the q values in the network are constant (although they still depend on the network\u2019s input in general). So for example, the C map derivative for a composition of many combined layers is just the product of the local C map derivatives for each layer, evaluated at the appropriate values of c as per the forward evaluation. 9. Handling weighted sum operations An operation commonly performed in neural network models is the (weighted) sum of two or more feature maps. For example, in the ResNet-V2 architecture (which is described in detail in Section 23.1), the input to a \u201cresidual block\u201d is added to its output, using what is known as a \u201cresidual connection\u201d. Since sum operations are not among those listed as allowed in Section 3.2, it would seem that our assumptions rule out such architectures. However, for the purposes of our analysis, there is no requirement that a network be formally constructed the same way it would implemented in code or drawn in a diagram; it only matters that it can be constructed in a way that conforms to the assumptions outlined in Section 3.2. With this in mind, we will now describe a way that a certain restricted class of weighted sum operations can be simulated using only directly supported operations. The consequence of this is that our analysis will in fact apply to architectures that contain such sum operations. Typically, the feature maps that are summed in neural networks are the outputs of a set a\ufb03ne layers f1, f2, . . . , fn that don\u2019t share parameters. (This is true in ResNet-V2 architectures, for example.) In such situations, we can replace the sum Pn i=1 fi(Zi) with a single a\ufb03ne layer h \u0010\u0002 Z\u22a4 1 Z\u22a4 2 \u00b7 \u00b7 \u00b7 Z\u22a4 n \u0003\u22a4\u0011 , which is obtained by concatenating the \ufb01lter banks, the bias vectors, and the input feature maps (i.e. the Zi\u2019s) together along their respective channel dimensions. (If Pn i=1 fi(Zi) is followed by a nonlinear layer in the network, then one simply forms a new combined layer consisting of this and h.) 24 \fDeep Kernel Shaping While almost good enough, the issue with this construction is that the implied initial distribution of h\u2019s \ufb01lter bank parameters is not one of the ones described in Section 4, and in particular, the variance/scale is not correct. 
To account for this, we must renormalize by the new number of channels (after the concatenation), the e\ufb00ect of which is that h will instead compute a weighted sum of the form Pn i=1 \u221akifi(Zi) pPn i=1 ki , where ki is the input channel dimension for fi. Fortunately, we can extend this construction to support a more general class of weighted sums (with weights wi) by multiplying each Zi by a scalar \u03b1i = wi pPn i=1 ki/\u221aki before concatenating them. Doing so gives Pn i=1 \u221akifi(\u03b1iZi) pPn i=1 ki = Pn i=1 \u221aki\u03b1ifi(Zi) pPn i=1 ki = n X i=1 wifi(Zi), where we have used the fact that the a\ufb03ne fi\u2019s are in fact linear (given that the biases are initialized to 0). If the layer fi is still in the network after this replacement is performed for some i (e.g. because its output is used in more than one place), this creates parameter sharing between h and fi. However, as long as the network with sum operations that we are trying to simulate doesn\u2019t violate our parameter independence assumptions from Section 4, neither will our simulating network. The existence of this construction thus implies that weighted sums between the outputs of two or more a\ufb03ne layers (and directly followed by an optional nonlinear layer) are supported within our framework, provided that said a\ufb03ne layers don\u2019t share parameters. Note that the weighed sum operation can be performed directly in the model code, and the concatenation-based construction only needs to be referenced in the theoretical analysis. To deal with sum operations in Q map computations, we observe that the q value of \u0002 \u03b11Z\u22a4 1 \u03b12Z\u22a4 2 \u00b7 \u00b7 \u00b7 \u03b1nZ\u22a4 n \u0003\u22a4is, according to Equation 8, equal to Pn i=1 ki\u03b12 i qi Pn i=1 ki = Pn i=1 kiw2 i (Pn i=1 ki) /kiqi Pn i=1 ki = n X i=1 w2 i qi, (12) where qi is the q value associated with Zi, and we have used the fact that the q value for \u03b1iZi is \u03b12 i qi. From this it follows that the output q value from the sum is also Pn i=1 w2 i qi, since Qh is just the identity function. Given uniform q values, a similar derivation based on Equation 9 lets us compute the corresponding c value as Pn i=1 w2 i qici Pn i=1 w2 i qi , (13) where ci is the c value associated with Zi. Note that unlike the formula for the q value, this is always a weighted average of the ci\u2019s, regardless of the values of the wi\u2019s. And in the case where all input q values are equal, it simpli\ufb01es to \u0000Pn i=1 w2 i ci \u0001 / Pn i=1 w2 i . 25 \fMartens et al. 10. Uniform q values In general, C maps are three dimensional functions that depend on an input c value and two associated q values. While simpler objects than a network\u2019s PKF (or even the network-level PKF approximation), they are not yet simple enough for our purposes. In particular, the behavior of C maps depends strongly on the two input q values, and q values can vary signi\ufb01cantly between di\ufb00erent network inputs and/or feature map locations. Finding a single scheme that controls the behavior of the C map for all conceivable input q pairs is likely impossible in general, and so we look to restrict the possible q values through some sort of active intervention. The one we propose in this section is a form of input data preprocessing, which ensures that all q values for a given layer are equal (across all possible locations in the feature map and inputs to the network). 
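Equations 12 and 13 are cheap to compute directly; below is a small sketch (ours) for the q and c values of a weighted sum, illustrated on a residual-style sum of two branches.

```python
import numpy as np

def sum_q_value(weights, qs):
    # Equation 12: the q value of sum_i w_i f_i(Z_i) is sum_i w_i^2 q_i.
    weights, qs = np.asarray(weights, float), np.asarray(qs, float)
    return (weights ** 2 * qs).sum()

def sum_c_value(weights, qs, cs):
    # Equation 13: the corresponding c value (given uniform q values within each branch).
    weights, qs, cs = (np.asarray(a, float) for a in (weights, qs, cs))
    return (weights ** 2 * qs * cs).sum() / (weights ** 2 * qs).sum()

# Example: an unweighted residual-style sum of two branches, both at q = 1,
# carrying c values of 0.4 and 0.9 respectively.
print(sum_q_value([1.0, 1.0], [1.0, 1.0]))              # 2.0
print(sum_c_value([1.0, 1.0], [1.0, 1.0], [0.4, 0.9]))  # 0.65
```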
(Note that this condition does not require that q values be the same across di\ufb00erent layers.) We will call this condition uniform q values. 10.1 A previous solution to this problem In Poole et al. (2016) it was observed that local Q maps can have stable \ufb01xed points, and that if the network consists of a composition of many combined layers of the same type, then its q values will converge with depth to such a point. Thus, a reasonable approximation, especially for deeper layers, is to assume that this convergence has already taken place, and that the q values over the entire network are equal. (Note that this is strictly stronger condition that uniform q values.) As discussed in Section 7, our setting is di\ufb00erent from Poole et al.\u2019s (2016) in that we consider more general architectures, and are interested in the precise behavior of a \ufb01nite network architecture instead of its depth limiting behavior. Moreover, it may be a poor approximation to assume that q values are close to convergence in the earlier layers of the network, especially if there are no constraints placed on the initial q values (which are determined by the network\u2019s input). 10.2 Uniform q values via Per-Location Normalization Our solution to the problem of unpredictable q values is a type of input data pre-processing which we call Per-Location Normalization (PLN ). This is related to the data normalization done in Daniely et al. (2016) for fully-connected networks, but generalized to convolutional networks. PLN ensures that each location vector in the network\u2019s input feature map has a dimension-normalized squared norm of 1, or in other words, that the q values for the network\u2019s input layer are all 1. Because subsequent q values are fully determined by previous q values via location-agnostic computations (i.e. Q maps), it thus follows by induction that each layer will have uniform q values under PLN. PLN can be easily realized through a number of di\ufb00erent possible transformations of the network\u2019s input, although care must be taken not to destroy information. The naive approach of normalizing the vector at each location of the input feature map (and multiplying by the square root of the channel dimension) destroys information because the vector goes from having k degrees of freedom to k \u22121 degrees of freedom (where k is the number 26 \fDeep Kernel Shaping of channels). This can be seen most starkly when k = 1, in which case all location-wise \u201cvectors\u201d are reduced to \u00b11 scalar values. The naive approach can however be repaired, by \ufb01rst adding an extra channel to the network\u2019s input. In our experiments we used the value \u0000 1 kEj[\u2225xj\u22252] \u0001 1 2 for this extra channel, where the expectation is an average over location vectors xj for the given input feature map X. This results in a vector of the form (k + 1) 1 2 \u0000\u2225xi\u22252 + 1 kEj[\u2225xj\u22252] \u0001 1 2 \" xi \u0000 1 kEj[\u2225xj\u22252] \u0001 1 2 # for each location i. Note that this approach to PLN still destroys some information, although it\u2019s only one degree of freedom across X, which includes all locations and channels. This can be seen most clearly in the case of only one location vector x, in which case the formula becomes (k + 1) 1 2 \u0000\u2225x\u22252 + 1 k\u2225x\u22252\u0001 1 2 \u0014 x \u2225x\u2225/ \u221a k \u0015 = \u0014 \u221a kx/\u2225x\u2225 1 \u0015 , from which we cannot recover the norm of x. 
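The data-dependent version of PLN described in the next subsection amounts to a few lines of code; the following is a sketch (ours) for an input feature map stored as a [locations, channels] array.

```python
import numpy as np

def per_location_normalization(X):
    """PLN with a data-dependent extra channel, as described in Section 10.2.

    X has shape [num_locations, k]. A channel with value sqrt(mean_j ||x_j||^2 / k)
    is appended, and each location vector is rescaled so that its dimension-normalized
    squared norm (its q value) is exactly 1.
    """
    num_locations, k = X.shape
    extra = np.sqrt(np.mean(np.sum(X ** 2, axis=1)) / k)      # shared across locations
    Xa = np.concatenate([X, np.full((num_locations, 1), extra)], axis=1)
    norms_sq = np.sum(Xa ** 2, axis=1, keepdims=True)         # ||x_i||^2 + extra^2
    return np.sqrt(k + 1) * Xa / np.sqrt(norms_sq)

X = np.random.default_rng(0).standard_normal((10, 3)) * 7.0   # badly scaled input
Y = per_location_normalization(X)
print(np.sum(Y ** 2, axis=1) / Y.shape[1])                    # every q value equals 1
```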
Thus, it only makes sense to use this form of PLN when there are a large number of locations and/or channels. For cases where there is only one location (i.e. in a fully-connected network) and the channel dimension is small, one possible alternative is to use a data-independent constant value for the extra channel. Using a value of 1 gives (k + 1) 1 2 (\u2225x\u22252 + 1) 1 2 \u0014 x 1 \u0015 = \u221a k + 1 \u0014 x/ p \u2225x\u22252 + 1 1/ p \u2225x\u22252 + 1 \u0015 , which doesn\u2019t destroy any information about x (since we can invert the last entry to get p (\u2225x\u22252 + 1)/(k + 1), and then multiply that by the other entries to recover x). The disadvantage of this approach is that the scale of x could di\ufb00er very signi\ufb01cantly from 1, so that after normalization, the value of the extra channel could either dominate the overall vector, or be minuscule. Performing PLN may not always be important in practice, as we demonstrate later in our ablation experiments. Moreover, the version we have proposed seems to slightly degrade optimization performance in our benchmarks, possibly because of the extra parameters it adds to the \ufb01rst layer (for the extra channel dimension), or because of the nonlinear warping it applies to the input space. On the other hand, our experiments also demonstrate that very badly scaled input data can sometimes cause DKS to perform poorly, unless PLN is applied as a corrective measure. (See Appendix N.6 for the relevant results.) There are other ways we can produce normalized vectors without destroying information, such as those discussed in Daniely et al. (2016) for fully-connected networks. Of all of the aspects of DKS, our method of realizing PLN is the least explored, and we wouldn\u2019t be surprised if there was a signi\ufb01cantly better way of doing it. 27 \fMartens et al. 10.3 Assuming uniform q values going forward From this point forward we will assume that the uniform q value condition holds. This thus allows us to treat C maps as one dimensional functions, as the q value for each layer will be constant. As we will show in the next section, it also imbues C maps with a set of very useful properties which end up being crucial to our subsequent analysis of them in Section 13. 11. Additional consequences of uniform q values for C maps 11.1 Local C maps are positive de\ufb01nite functions and map c values of 1 to 1 For convenience, when we have uniform q values we will drop the formal dependence of the C map on its input q values, and instead treat these as known constants within the expression (which are both equal to the same value). This allows us to view C maps as essentially one dimensional functions, and we can use notation of the form \u201cCf(c)\u201d for them going forward. Under the assumption of uniform q values, several interesting and useful properties of local C maps emerge. Suppose f is a combined layer (or an isolated nonlinear layer) with activation function \u03c6. We begin by setting q1 = q2 = q in Equation 11, which gives C(i) f (c) = qi Qf(q)Ex,y\u223cN(0,1) h \u03c6(i) (\u221aqx) \u03c6(i) \u0010\u221aq \u0010 cx + p 1 \u2212c2y \u0011\u0011i . From this expression we can deduce that Cf(1) = 1 Qf(q)Ex\u223cN(0,1) h \u03c6 (\u221aqx)2i = Qf(q) Qf(q) = 1 and C(i) f (0) = qi Qf(q)Ex,y\u223cN(0,1) h \u03c6(i) (\u221aqx) \u03c6(i) (\u221aqy) i = qi Qf(q)Ex\u223cN(0,1) h \u03c6(i) (\u221aqx) i2 \u22650. (14) The second consequence implies two interesting properties. 
First, that for local maps, Cf(0) = 0 if and only if Ex\u223cN(0,1) \u0002 \u03c6 \u0000\u221aqx \u0001\u0003 = 0, where we note the interpretation of Ex\u223cN(0,1) \u0002 \u03c6 \u0000\u221aqx \u0001\u0003 from Appendix A. And second, that the Taylor series expansion of Cf(c) about c = 0, which is given by \u221e X i=0 1 i!C(i) f (0)ci, will have all non-negative coe\ufb03cients. Provided that the Taylor series converges and is equal to Cf(c) (which it will under mild technical conditions), it thus follows that Cf(c) is 28 \fDeep Kernel Shaping a positive de\ufb01nite function (Schoenberg, 1988; Daniely et al., 2016), which is de\ufb01ned as a function from [\u22121, 1] to R that can be written as \u221e X i=0 bici, for non-negative coe\ufb03cient bi. 11.2 Properties of positive de\ufb01nite functions Positive de\ufb01nite functions have many interesting and useful properties which thus carry over to C maps. These include: 1. The set of positive de\ufb01nite functions is closed under di\ufb00erentiation4. 2. Positive de\ufb01nite functions are non-negative, non-decreasing, and convex on the nonnegative part of their domain. (This follows from the fact that their derivatives are also positive de\ufb01nite functions, and thus non-negative for non-negative inputs.) 3. The set of positive de\ufb01nite functions is closed under composition and weighted averages with non-negative weights5. 11.3 Extended C maps are positive de\ufb01nite functions and map c values of 1 to 1 Given that local C maps are positive de\ufb01nite functions and map c values of 1 to 1, it\u2019s easy to show that the same applies to extended C maps. First, the property that c values of 1 map to 1 is clearly preserved under composition and weighted averaging, and thus carries over to extended maps since they are constructed from local maps this way. Second, the property of C maps being positive de\ufb01nite functions also carries over, since positive de\ufb01nite functions are closed under composition and non-negative weighted averages as mentioned above. Another way that one can show that extended C maps are positive de\ufb01nite is by observing that they describe the exact one-dimensional kernel function [\u03baf(z, z\u2032)]1,2 of a fullyconnected network f in the limit of in\ufb01nite width (where we substitute convolutional layers in the original network with fully-connected layers). Because this kernel depends only on the inner product of its inputs via the function Cf, it is thus invariant to orthogonal transformations of its input, and so by Schoenberg\u2019s Theorem (Schoenberg, 1988) it is a positive de\ufb01nite function of this inner product (i.e. Cf is positive de\ufb01nite). Note that this argument works even for non-smooth activation functions for which Equation 11 may not apply. See Daniely et al. (2016) for more details. 4. Closedness under di\ufb00erentiation can be easily veri\ufb01ed by observing that the derivative of P\u221e i=0 bici with respect to c is just P\u221e i=1 ibici\u22121, which is also positive de\ufb01nite since ibi \u2a7e0 when bi \u2a7e0. 5. Closedness under composition can be easily veri\ufb01ed by substituting one series into the other, expanding, and observing that the coe\ufb03cients of the resulting series are non-negative combinations of coe\ufb03cients from the two original series. 
Similarly, closedness under weighting averaging follows by observing that the coe\ufb03cients of the series for the weighted average are just weighted averages of the corresponding coe\ufb03cients from the original two series. 29 \fMartens et al. 11.4 A complementary perspective based on \u201cdual activations functions\u201d Assuming input q values of 1, local C maps are essentially equivalent to the \u201cdual activation functions\u201d de\ufb01ned in Daniely et al. (2016), with the only di\ufb00erence being that dual activation functions aren\u2019t normalized by the output q values (as C maps are). Using the notation of Equation 10, the dual activation function \u02dc \u03c6 of \u03c6 can be written as \u02dc \u03c6(c) = \u0393\u03c6(c, 1, 1). Assuming that we normalize each activation function so that its output q value is 1, these dual activation functions can be composed and averaged in order to form what are called \u201ccompositional kernels\u201d, which are approximations of the kernel function for the entire network, and are analogous to our network-level PKF approximation in the case where there is only one location (i.e. the fully-connected case). Given this connection, it may thus be an appealing prospect for us to adopt Daniely et al.\u2019s (2016) framework instead of the one we\u2019ve presented, as it\u2019s very carefully laid out and rigorously developed, and comes packaged with the best known error bounds for initialization-time kernel approximations of neural networks (one of which we adapt in Section 5.8). However, while their framework can deal with local receptive \ufb01elds, it cannot directly deal with the weight sharing used in convolutional layers, and it would likely require signi\ufb01cant work to extend it in that direction. Indeed, to deal with convolutional layers in a way that allows kernel approximations for individual layers to be naturally composed, one seemingly must de\ufb01ne something like our APKFs which keep track of approximations to entire IPMs (which contain inner products between every pair of locations in the feature maps of both inputs). Moreover, without assuming a Delta initialization, a decomposition of the kernel approximations into 1 dimensional functions (such as Q/C maps) becomes impossible, since APKFs for standard initializations involve non-trivial interactions between all the locations in the feature map (as seen in Equation 2). While we do indeed restrict our attention to Delta initializations in this work, without the PKF/APKF formalism we would not be able to extend our analysis to mean pooling layers, since the kernel approximation for such layers also involves interactions between locations. As we will see later in Section 23, this extension will be necessary later in order to understand how DKS can be applied to standard convolutional neural network architectures. 30 \fDeep Kernel Shaping Part II Desirable Q/C map behavior and how to achieve it 12. C map behavior in deep networks and necessary requirements for trainability C maps approximate a network\u2019s PKF at initialization time. In this view, c values approximate the cosine similarity between pairs of vectors (corresponding to di\ufb00erent locations/inputs), and their evolution via C maps thus describes how these cosine similarities evolve in the network. 
Given uniform q values, the norms of these vectors are approximately constant (for a given layer), and thus their relative distance is related to their cosine similarity c via \u2225x \u2212y\u2225 1 2(\u2225x\u2225+ \u2225y\u2225) = \u2225x \u2212y\u2225 p \u2225x\u2225\u2225y\u2225 = p 2(1 \u2212c). A (sub)network\u2019s C map thus provides a complete description of how it warps the geometry of its input space at initialization time. As we will argue in this section, the preservation of some amount of this geometric information through the network, as indicated by a \u201cwell-behaved\u201d C map, is a necessary condition for the network to be trainable. When C maps \u201cdegenerate\u201d in certain ways, as we will show they do for standard deep neural networks, it means that the relative distances between the network\u2019s (location-wise) input vectors are hard to infer from the network\u2019s outputs, making gradient-based training di\ufb03cult. 12.1 RELU networks The local C map of a combined layer f with a RELU activation function is given by Cf(c) = \u221a 1 \u2212c2 + (\u03c0 \u2212cos\u22121(c))c \u03c0 . (15) This formula is stated in Daniely et al. (2016), and is based on a derivation by Cho and Saul (2009), where it corresponds to a normalized version of the \u201c1st-order arc-cosine kernel function\u201d. Note that while in general C maps depend on the input q value, this formula is valid for any q value, which is a consequence of the fact that RELUs are positively homogeneous (i.e. RELU(\u03bbu) = RELU \u03c6(u) for all \u03bb \u2a7e0). One interesting fact about Cf is that C\u2032 f(1) = 1, which can be veri\ufb01ed by taking the derivative of Equation 15 and letting c \u21921. Moreover, because Cf(1) = 1 (which is true for general C maps), we have that a deep RELU network g consisting of the composition of D combined layers will also have the property that C\u2032 g(1) = 1D = 1. The following is a plot of Cf: 31 \fMartens et al. From this we can see that the entire domain [\u22121, 1] of input c values is compressed to the range [0, 1], which makes intuitive sense since the RELU function is non-negative. Other than that, Cf resembles a slightly shifted and rescaled identity function, and so is reasonably well-behaved. However, if we build a deep network as the composition of many RELU combined layers, compression of the C map\u2019s output becomes much more extreme, with outputs rapidly concentrating around 1 as depth increases. This can be seen below in the plot of the C map for RELU networks of depths 5, 20 and 100 (which we obtain by iterating Equation 15 the required number of times): Here, the C map for depth 100 has the property that maps the entire interval [\u22121, 1] to [0.996, 1], which represents an extreme amount of compression. 32 \fDeep Kernel Shaping 12.2 Sigmoidal networks (using the erf activation) Another example of an activation function whose associated local Q and C maps have closedform expressions is the classical \u201cerror function\u201d, which is given by erf(u) = 2 \u221a\u03c0 R u 0 exp(\u2212t2) d t. This function has a \u201csigmoidal shape\u201d, which makes it a reasonable stand-in for the more common sigmoidal activation functions like tanh and the logistic sigmoid. The local Q map for a combined layer f with an erf activation function is given by Qf(q) = 2 \u03c0 sin\u22121 \u0012 2q 1 + 2q \u0013 , and the local C map is given by Cf(c) = 1 Qf(q) 2 \u03c0 sin\u22121 \u0012 2cq 1 + 2q \u0013 . 
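Both of these closed forms are easy to iterate numerically. The sketch below (ours) reproduces the compression behavior described in this section, for RELU networks of depth 5, 20 and 100 and erf networks of depth 50, 150 and 500.

```python
import numpy as np

def relu_C(c):
    # Equation 15: local C map of a RELU combined layer (independent of the input q value).
    return (np.sqrt(1 - c ** 2) + (np.pi - np.arccos(c)) * c) / np.pi

def erf_Q(q):
    return (2 / np.pi) * np.arcsin(2 * q / (1 + 2 * q))

def erf_C(c, q):
    return (2 / np.pi) * np.arcsin(2 * c * q / (1 + 2 * q)) / erf_Q(q)

def deep_relu_C(c, depth):
    for _ in range(depth):
        c = relu_C(c)
    return c

def deep_erf_C(c, depth, q=1.0):
    for _ in range(depth):            # track q and c values together through the layers
        c = erf_C(c, q)
        q = erf_Q(q)
    return c

c_grid = np.linspace(-0.99, 0.99, 5)
for depth in (5, 20, 100):
    print("relu", depth, np.round(deep_relu_C(c_grid, depth), 3))
for depth in (50, 150, 500):
    print("erf ", depth, np.round(deep_erf_C(c_grid, depth), 3))
# Deep RELU compositions squash the whole interval towards 1, while deep erf
# compositions progressively squash interior c values towards 0.
```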
These formulas follow from equation 11 of Williams (1997). Unlike in the RELU case, the local C map depends on the input q value, and so to plot it we must make an assumption about this value. One natural choice is q = 1, which gives the following plot: Visually, this function is almost indistinguishable from the identity function. To compute the C maps for deeper RELU networks we need to track both the q and c values through each layer, using the previously stated equations. Doing so for depths 50, 150, and 500 gives the following plot: 33 \fMartens et al. From this plot we can see that at high depths, the network\u2019s C map has a tendency to compress nearly all input c values to a small region around 0. Moreover, this behavior only becomes more extreme as the depth increases. 12.3 C map degeneration in more general deep nonlinear networks The following proposition establishes that the C map degeneration we observed above for deep RELU and tanh networks happens for a larger class of deep networks. Moreover, the point c\u2217towards which (nearly) all c values get mapped as the depth increases is unique. Proposition 10 Suppose f is a deep network consisting of a composition of D subnetworks, each with the same C map C. Then for all c \u2208(\u22121, 1) we have lim D\u2192\u221eCf(c) = c\u2217, for some c\u2217\u2208[0, 1]. The proof of this proposition is a straightforward generalization6 of the proof of \u201cClaim 1\u201d from Daniely et al. (2016). While Proposition 10 describes the convergence of Cf(c) in the limit of in\ufb01nite depth, it is still informative about Cf(c) at \ufb01nite depths (which is the setting we actually care about). In particular, it essentially says that for any \u03f5 there is a constant D\u03f5 so that for when D \u2a7eD\u03f5, nearly all input c values get compressed to a region of radius \u03f5 around c\u2217. We will call C maps exhibiting this compressive behavior (with a small \u03f5) degenerate. 6. While the statement of their claim assumes that each of the D subnetworks is a combined layer, the only fact they use about C in their proof is that it is positive de\ufb01nite, which holds for more general subnetworks by Section 11.3. 34 \fDeep Kernel Shaping Remark 11 Note that Proposition 10 assumes that C is the same for all values of D. If, for example, we were to modify the network\u2019s activation functions based on the value of D (which DKS will do), the convergence seen in the proposition may not occur. Remark 12 Also note that the hypothesis in Proposition 10 that each subnetwork has the same C map C will rule out many common cases. For example, it is violated for the deep erf network we looked at before, because the q values are di\ufb00erent for each layer (which leads to di\ufb00erent local C maps). However, because the q values converge rapidly to a \ufb01xed point in such networks, convergence of the c values will still occur. As shown in Appendix J, the \u201cresidual blocks\u201d of ResNets (which are repeated many times in sequence) also violate this hypothesis, but their q values do not converge to a \ufb01xed point. In Appendix C (Theorems 40, 41, and 42) we give a much more detailed analysis of the convergence of Cf(c) in terms of the properties of C and the location of c in [\u22121, 1]. When C\u2032(1) \u0338= 1 we prove exponential convergence (as a function of the depth D) with precise rates, thus establishing that degeneration can happen very quickly in deep networks. In contrast to the related analyses of Poole et al. 
(2016), our results apply pre-asymptotically. 12.4 Types of degeneration and their implications for trainability As we\u2019ve seen above, degenerate C maps send a large range of input c values to a small (and sometimes point-like) region near some \ufb01xed point c\u2217. This means that the original geometric relationships between the corresponding input vectors are obscured by the action of the network, becoming essentially impossible to recover from its outputs. While it seems intuitively plausible that this would make gradient-based optimization of such networks di\ufb03cult (as has been argued by Schoenholz et al. (2017)), it\u2019s worth examining the situation in more detail. In this subsection we will give a detailed intuitive argument. A more rigorous argument which con\ufb01rms these intuitions will appear later in Section 24. Suppose f is some subnetwork of the overall network that we wish to train. There are two basic cases, corresponding to di\ufb00erent possible values for c\u2217. 12.4.1 c\u2217= 1: the collapsing case If Cf sends nearly all input c values to a small neighborhood near c\u2217= 1, this means that regardless of the original distance of the two associated input vectors, their corresponding output vectors under f will be nearly identical (i.e. have a relative distance p 2(1 \u2212Cf(c)) \u2248 p 2(1 \u2212c\u2217) = 0). And because this holds for all pairs of vectors, it means that f is nearly constant, with only a very weak dependence on its input. This has a di\ufb00erent set of consequences for layers in the network before f versus layers after. For layers before f there are two cases. If f\u2019s Jacobian is non-negligible for most inputs, then it will have to vary wildly over f\u2019s input space, as this is the only that a function can achieve a nearly constant output while having a non-negligible Jacobian. (Balduzzi et al. (2017) observed a similar phenomenon for early layers in deep RELU networks, likening the gradient function to a \u201crandom noise process\u201d.) This will make learning di\ufb03cult, or at the very least unlikely to generalize, as similar pairs of training cases will produce 35 \fMartens et al. very di\ufb00erent gradients. If on the other hand f\u2019s Jacobian is negligible, this means that gradient magnitudes for layers before f will be very small compared to those for other layers. This makes simultaneous optimization of the network\u2019s layers with gradient descent very di\ufb03cult, and even sophisticated 2nd-order methods may struggle in the more extreme cases. (This is arguably related to the well-known \u201cvanishing gradients\u201d phenomenon identi\ufb01ed in Hochreiter et al. (2001).) Meanwhile, layers after f won\u2019t be able to learn anything more than a constant output prediction, as their inputs will be nearly constant. And even if the output produced by f has enough variance across the training data to overcome the limits of numerical precision, the part of the network after f would need to have a very large Lipschitz constant in order to produce well-separated outputs for di\ufb00erent training cases. 12.4.2 0 \u2a7dc\u2217< 1: the exploding case If Cf sends nearly all input c values to a small neighborhood around c\u2217with 0 \u2a7dc\u2217< 1, then any two input vectors (that aren\u2019t either almost identical or negations of each other) will be mapped by f to output vectors that are nearly a constant relative distance d = p 2(1 \u2212c\u2217) > 0 apart. 
Given this condition, and that f is di\ufb00erentiable, it follows that f\u2019s Jacobian must be very large in certain regions of the input space (and possibly everywhere). This means that the gradient magnitudes for layers in the network before f will be much larger than those for other layers, making it di\ufb03cult to optimize them simultaneously. (This is arguably related to the well-known \u201cexploding gradient\u201d phenomenon discussed in Hochreiter et al. (2001).) A network containing such a subnetwork f possibly stands a greater chance of being trainable than in the previous \u201ccollapsing case\u201d, since f\u2019s output vectors will still be distinguishable for di\ufb00erent inputs vectors. Optimization of the layers after f (or towards the end of f) could even conceivably learn a map from these vectors to their associated targets in the training set. However, it is highly unlikely that the resulting model would generalize well, as the similarity between two such output vectors would have no discernible relationship to the similarity of the associated two input vectors. 12.5 Are well-behaved C maps su\ufb03cient? While a well-behaved (i.e. non-degenerate) C map seems like a necessary condition for trainability (as argued above), it should be noted that without additional hypotheses, no condition on the C map can be a su\ufb03cient. For example, because the C map is invariant to the network\u2019s parameterization7, or even whether its parameters are considered trainable at all, it cannot completely predict the performance of a gradient-based optimizer. Even if we assume the standard network parameterization, and a model class which is equivalent to a standard deep nonlinear network, interesting counterexamples to su\ufb03ciency still exist, as we will show in Section 14.2. 7. This can be seen by observing that APKFs, from which local Q/C maps and ultimately extended Q/C maps are de\ufb01ned, only depend on the functional behavior of combined layers, and not which variables are formally considered \u201cparameters\u201d from the standpoint of the optimizer. Indeed, one could reparameterize a layer\u2019s weights using any invertible function without changing what it computes at initialization time, or the form of its PKF/APKF. 36 \fDeep Kernel Shaping One way to incorporate hypotheses about the network\u2019s parameterization, and the optimizer used, is to analyze gradient descent training from the perspective of Neural Tangent Kernel (NTK) theory (Jacot et al., 2018). Later in Section 24 we will argue that for a deep fully-connected network, a degenerate C map leads to a form for the NTK which implies very poor generalization and/or slow optimization under gradient descent. Conversely, we will also show that a network constructed using DKS has an NTK which is suggestive of good generalization and fast optimization (although doesn\u2019t necessarily guarantee it). This NTK-based analysis can be viewed as a rigorous version of the intuitive argument given in the previous subsection. 13. Mathematical analysis of C maps As we saw in Section 12, C maps can become degenerate in deep networks by mapping nearly all input c values to a point-like region around some c\u2217\u2208[0, 1], which leads to di\ufb03culty when training with standard optimization methods. In this section we will analyze C maps in closer detail, and show how their overall deviation from the identity function (which serves as a measure of degeneration) can be predicted from their slopes at c = 0 and/or c = 1. 
(We will ultimately design DKS to control these slopes in order to prevent degeneration.) We will also establish connections between the properties of a local C map and its associated activation function, and characterize the slope behavior of degenerate C maps over [\u22121, 1]. 13.1 Measures of deviation from the identity Suppose f is some subnetwork. The question of how to measure the deviation of Cf from the identity function, which we will use as a measure of \u201cdegeneracy\u201d, is an interesting one. Since we ultimately want to forbid extreme behavior of Cf for all input c values, it makes sense to look at the worst-case ones. This suggests the following two options, which compare Cf to the identity function using either its values or its derivatives: 1. maxc\u2208[\u22121,1] |c \u2212Cf(c)| and 2. maxc\u2208[\u22121,1] |1 \u2212C\u2032 f(c)|. The \ufb01rst of these, while a reasonable choice, could fail to detect small ranges of c values where geometric information is lost due to the slope of Cf getting close to zero. The second option will detect such regions, but unlike the \ufb01rst measure, is insensitive to Cf being shifted by an additive constant, and is only weakly sensitive to it being \u201cangled\u201d away from the identity function. Fortunately, we can avoid having to choose between the two, since as we will see next, they can be simultaneously bounded using a few easily-computed properties of Cf. 13.2 Bounding deviation from the identity The following result relates the deviation of Cf from the identity function to the values of Cf(0), C\u2032 f(0), and C\u2032 f(1). Its proof, which makes strong use of the fact that Cf is a positive de\ufb01nite function, is given in Appendix D.1. 37 \fMartens et al. Theorem 13 For any subnetwork f we have 1 4(1 \u2212C\u2032 f(0)) \u2a7d maxc\u2208[\u22121,1] |Cf(c) \u2212c| \u2a7d 2(1 \u2212C\u2032 f(0)) and max c\u2208[\u22121,1] |C\u2032 f(c) \u22121| \u2a7d 2(1 \u2212C\u2032 f(0)) + (C\u2032 f(1) \u22121). If Cf(0) = 0, then we additionally have that max c\u2208[\u22121,1] |Cf(c) \u2212c| \u2a7d 2(C\u2032 f(1) \u22121) and max c\u2208[\u22121,1] |C\u2032 f(c) \u22121| \u2a7d 3(C\u2032 f(1) \u22121). From this result we see that the \ufb01rst measure of deviation is within a factor 4 of 1\u2212C\u2032 f(0) (and also upper bounded by 2[C\u2032 f(1)\u22121] when Cf(0) = 0), and the second measure is within a factor 3 of C\u2032 f(1) \u22121 when Cf(0) = 0. This suggests that we can control the deviation of Cf from the identity function by simply controlling the distance of C\u2032 f(0) and/or C\u2032 f(1) from 1. Remark 14 Note that a simple consequence of this result is that C\u2032 f(1) \u2a7e1 for subnetworks f satisfying Cf(0) = 0. It is also true more generally that C\u2032 f(0) \u2a7d1 (as shown in Appendix D.1). Remark 15 The lower bounds in Theorem 13 do not imply that Cf will look globally nonlinear when C\u2032 f(1) is large (in contrast to the theorem\u2019s upper bounds which do imply it will look globally linear when C\u2032 f(1) is small). For example, the function \u00001 \u22121/\u221aj \u0001 c + \u00001/\u221aj \u0001 cj is a valid C map and has a derivative \u2a7e\u221aj at c = 1, and yet is very close to linear everywhere except near c = 1 and c = \u22121 when j is large. 
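As a numerical illustration of Theorem 13 (a sketch, ours, using a tanh layer at q = 1), the maximum deviation of the C map from the identity can be compared directly against the bounds built from 1 − C′_f(0):

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(80)
x2d, y2d = np.meshgrid(np.sqrt(2.0) * t, np.sqrt(2.0) * t)
w2d = np.outer(w, w) / np.pi
E2 = lambda g: (w2d * g(x2d, y2d)).sum()                 # E over two iid N(0,1) variables

phi, dphi = np.tanh, lambda u: 1.0 / np.cosh(u) ** 2
Q1 = E2(lambda x, y: phi(x) ** 2)                        # Q_f(1)

Cf = lambda c: E2(lambda x, y: phi(x) * phi(c * x + np.sqrt(1 - c ** 2) * y)) / Q1
Cf_prime = lambda c: E2(lambda x, y: dphi(x) * dphi(c * x + np.sqrt(1 - c ** 2) * y)) / Q1

grid = np.linspace(-1.0, 1.0, 401)
deviation = max(abs(Cf(c) - c) for c in grid)
gap = 1.0 - Cf_prime(0.0)
# Theorem 13: gap / 4  <=  max deviation  <=  2 * gap
print(gap / 4, deviation, 2 * gap)
```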
13.3 The relationship between identity C maps and linear activations The local C map for a nonlinear layer with a linear activation function is the identity (which can be easily veri\ufb01ed from Equation 7 by taking \u03c6(u) = \u03bbu with \u03bb \u0338= 0). However, from this observation it\u2019s not immediately obvious that a local C map will converge to the identity function as its associated activation function becomes \u201cmore linear\u201d, or vice versa. In this subsection we will show that this is indeed the case for certain carefully chosen measures on both function spaces, and we will give the rate of this convergence. We will assume that the activation functions live in a Hilbert H space with inner product given by \u27e8\u03c6, \u03c8\u27e9H = Ex\u223cN(0,1)[\u03c6(x)\u03c8(x)], whose associated norm is \u2225\u03c6\u2225H = p \u27e8\u03c6, \u03c6\u27e9. (By de\ufb01nition, elements of this Hilbert space are those functions \u03c6 for which \u2225\u03c6\u2225H exists.). The standard measure on H, de\ufb01ned by \u2225\u03c6 \u2212\u03c8\u2225H = q Ex\u223cN(0,1)[(\u03c6(x) \u2212\u03c8(x))2], 38 \fDeep Kernel Shaping is arguably the most natural one to use, as it employs a weighting over x that precisely re\ufb02ects the input distribution we would expect given an input q value of 1. This measure is also closely related to Q/C maps in the sense that \u2225\u03c6\u22252 H = \u0393\u03c6(1, 1, 1) = Qf(1) (with \u0393\u03c6 de\ufb01ned as in Equation 10), where f is a nonlinear/combined layer with \u03c6 as its activation function. \u03c6 is linear insofar as it\u2019s close to a multiple of the identity function h1. With this in mind, and given the fact that \u2225h1\u2225H = 1, we will measure the level of nonlinearity of \u03c6 according to nl(\u03c6) \u2261\u2225\u03c6 \u2212\u27e8\u03c6, h1\u27e9Hh1\u2225H \u2225\u03c6\u2225H , where we normalize by \u2225\u03c6\u2225H to keep nl(\u03c6) invariant to changes in the overall scale of \u03c6. Note that with this de\ufb01nition, \u03c6 is perfectly linear (i.e. a multiple of h1) if and only if nl(\u03c6) = 0. It turns out that we can relate nl(\u03c6) to properties of Cf, as is established in the following proposition whose proof is given in Appendix D.2: Proposition 16 Suppose f is a nonlinear/combined layer with activation function \u03c6 and input q value 1. Then we have nl(\u03c6)2 = 1 \u2212C\u2032 f(0). Moreover, 1 4 nl(\u03c6)2 \u2a7d maxc\u2208[\u22121,1] |Cf(c) \u2212c| \u2a7d 2 nl(\u03c6)2. This result shows a strong relationship between the level of linearity of \u03c6, and the distance between Cf and the identity function (as measured by the in\ufb01nity norm). Moreover, one converges to 0 (as we vary \u03c6) if and only if the other one does. Remark 17 Proposition 16 can be straightforwardly extended to the case of general input q values by modifying the de\ufb01nition of the inner product used for H. 13.4 The relationship between a\ufb03ne C maps and a\ufb03ne activations An a\ufb03ne function is, by de\ufb01nition, a linear function plus a constant term. Equivalently, it is a function whose derivative is constant (i.e. is a multiple of the constant function h0(x) = 1). From this second characterization, we can measure the \u201cnon-a\ufb03neness\u201d of \u03c6 by na(\u03c6) \u2261\u2225\u03c6\u2032 \u2212\u27e8\u03c6\u2032, h0\u27e9Hh0\u2225H \u2225\u03c6\u2032\u2225H . Note that with this de\ufb01nition, \u03c6 is perfectly a\ufb03ne if and only if nl(\u03c6) = 0. 
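Before relating na(φ) to C maps, we note that Proposition 16 itself is easy to verify numerically; the sketch below (ours) computes nl(φ)^2 directly from its definition and compares it with 1 − C′_f(0) obtained from Equation 14, using tanh as the example activation.

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(80)
E1 = lambda g: (w * g(np.sqrt(2.0) * t)).sum() / np.sqrt(np.pi)   # E_{x ~ N(0,1)}[g(x)]

def nl_squared(phi):
    """nl(phi)^2 = ||phi - <phi, h1> h1||^2 / ||phi||^2,
    with <f, g> = E[f(x) g(x)] and h1(x) = x (so ||h1|| = 1)."""
    norm_sq = E1(lambda x: phi(x) ** 2)
    proj = E1(lambda x: x * phi(x))                    # <phi, h1>
    return (norm_sq - proj ** 2) / norm_sq

def one_minus_C_prime_0(phi, dphi):
    """1 - C'_f(0), with C'_f(0) = E[phi'(x)]^2 / Q_f(1) from Equation 14 (i = 1, q = 1)."""
    return 1.0 - E1(dphi) ** 2 / E1(lambda x: phi(x) ** 2)

# Proposition 16 says these two quantities coincide (here for tanh).
print(nl_squared(np.tanh), one_minus_C_prime_0(np.tanh, lambda u: 1.0 / np.cosh(u) ** 2))
```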
It turns out that we can relate na(\u03c6) to properties of Cf, as is established in the following proposition (whose proof is given in Appendix D.3). Proposition 18 Suppose f is a nonlinear/combined layer with \u03c6 as its activation function. Then na(\u03c6)2 = 1 \u2212 C\u2032 f(0) C\u2032 f(1). 39 \fMartens et al. From the above expression we can see that \u03c6 becomes more a\ufb03ne as the ratio C\u2032 f(0)/C\u2032 f(1) approaches 1. Moreover, \u03c6 approaches an a\ufb03ne function if and only if Cf itself does, as C\u2032 f(0)/C\u2032 f(1) is a measure of how a\ufb03ne Cf is. (To see this, note that C\u2032 f(0) \u2a7dC\u2032 f(c) \u2a7dC\u2032 f(1) for all c \u2208[0, 1] since Cf is convex on [0, 1] by Section 11.2, and thus C\u2032 f approaches a constant function on [0, 1] as C\u2032 f(0)/C\u2032 f(1) \u21921, which extends to all of [\u22121, 1] since Cf is analytic.) 13.5 Slope properties of degenerate C maps In subsection 13.2 we saw how certain conditions on the slope of a C map at c = 0 and/or c = 1 ensure that it is well behaved (i.e. not degenerate). In this subsection we will establish the converse: that degenerate C maps necessarily have extreme values for these slopes. As shown in Section 12, C maps in deep networks can become degenerate in the sense that they map almost their entire input domain (except points very close to \u00b11) to a small region around some limiting c value c\u2217. One way to quantify this behavior is to look at how \u201c\ufb02at\u201d the function is up to some c value c s.t. |c| < 1, which we can measure using Ff(c) \u2261Cf(|c|) \u2212Cf(0) |c| \u2a7e0 for c \u0338= 0. When Cf is degenerate, or in other words very \ufb02at, Ff(c) will be very small (for values of c not too close to \u00b11). While the interpretation of Ff(c) is clear for c > 0, it is less clear for c < 0. In the following proposition (proved in Appendix D.4) we show that Ff(c) does indeed work as a measure of \ufb02atness for values of c less than 0. Proposition 19 Suppose f is a subnetwork, and c \u2208[\u22121, 1] with c \u0338= 0. Then \f \f \f \f Cf(c) \u2212Cf(0) c \f \f \f \f \u2a7dFf(c). Note that because Cf is convex and non-decreasing on [0, 1] (by Section 11.2) and Cf(1) = 1, we have that Ff(c\u2032) \u2a7dFf(c) \u2a7d1 for any valid c, c\u2032 s.t. |c\u2032| \u2a7d|c|. Thus, Ff(c) being small implies \ufb02atness over the entire domain [\u2212c, c], and not just at c. A more \u201canalytic\u201d way to measure \ufb02atness is to look |C\u2032 f(c)|, which intuitively should be small in \ufb02at regions of Cf. It turns out that this intuition is basically correct, as we establish in the following proposition (whose proof is given in Appendix D.5). Proposition 20 Suppose f is a subnetwork, and c \u2208(\u22121, 1) with c \u0338= 0. Then |C\u2032 f(c)| \u2a7d Ff(c) log Ff(c) |c| log |c| (1 + |c|). This bound establishes that |C\u2032 f(c)| will be small whenever Ff(c) is (which is to say, when Cf is degenerate), provided that |c| is not too close to 1. 40 \fDeep Kernel Shaping Remark 21 While Proposition 20 doesn\u2019t address the value of |C\u2032 f(0)| directly, we can still bound it by applying Proposition 20 with c = \u03f5 for some 0 < \u03b5 < 1 and then use the fact that 0 \u2a7dC\u2032 f(0) \u2a7dC\u2032 f(\u03b5) = |C\u2032 f(\u03b5)| for any 0 < \u03b5 < 1 (which is true because Cf is non-decreasing and convex on [0, 1] by Section 11.2). In general, we cannot say that much about C\u2032 f(1) or C\u2032 f(\u22121) when Cf is degenerate. 
However, the following two propositions (proved in Appendices D.6 and D.7) give us some basic information about these values in certain special cases. Proposition 22 Suppose f is a composition of D subnetworks each having the C map C, and that c\u2217= 1. Then we have that C\u2032 f(1) = C\u2032(1)D with 0 \u2a7dC\u2032(1) \u2a7d1, and either C\u2032 f(\u22121) = \u2212C\u2032 f(1) or limD\u2192\u221eC\u2032 f(\u22121) = 0. Remark 23 Note that the claim made in Proposition 22 does not hold for more general types of networks. For example, if we have a sequence of networks (fn)\u221e n=1 such that Cfn(c) = 1 \u22121/n + cn2/n, then for all c \u2208[\u22121, 1] we have Cfn(c) \u21921, Ffn(c) \u21920 and C\u2032 fn(1) = n \u2192\u221eas n \u2192\u221e. Remark 24 For RELU combined layers we have C\u2032(1) = 1 (which follows from Equation 15 by taking the derivative and letting c \u21921), and thus the bound C\u2032(1) \u2a7d1 in Proposition 22 is tight. Proposition 25 Suppose f is a subnetwork. For all 0 < \u03f5 < 1 we have C\u2032 f(1) \u2a7e 1 \u2212Cf(0) \u2212Ff(1 \u2212\u03f5) \u03f5 . By taking a small value for \u03f5, Proposition 25 tells us that for degenerate C maps with c\u2217< 1 (so that Cf(0) \u2248c\u2217< 1), C\u2032 f(1) will be large provided that the \ufb02atness measure Ff(1 \u2212\u03f5) is small. See Section 12.2 for an example of degenerate C map where C\u2032 f(1) is indeed very large. 14. C map behavior in linear networks and the problem of being \u201ctoo linear\u201d 14.1 Linear networks have identity C maps and are easy to train In Section 12 we saw that deep nonlinear networks can easily have degenerate C maps, which makes them very hard to train with gradient-based methods. One might wonder if this pathology is reserved to nonlinear networks, or if deep linear networks8 also su\ufb00er from it. Given our assumption that the initial biases are zero, it turns out that the answer is no. The local C map for a combined layer with a non-zero linear activation function is equal to the identity. Because identity functions are preserved under composition and weighted 8. Here, a linear network is de\ufb01ned as one whose activation functions are a constant multiple of the identity function. 41 \fMartens et al. averages, it thus follows that the extended C map for any subnetwork is also the identity function, and is therefore well-behaved. Does this mean that linear networks are easy to train? Well, as discussed in Section 12.5, more hypotheses are required to say anything about trainability. But if one adopts the standard parameterization, very deep linear networks are surprisingly easy to train both in theory and practice using standard techniques (Saxe et al., 2014), provided that they are initialized using orthogonal weights. Linear networks thus represent an interesting example of where our necessary condition for trainability (i.e. having a well-behaved C map) also appears to be su\ufb03cient. Despite how easy they are to train, we obviously can\u2019t use linear networks in practice, as their expressivity is fundamentally limited. But these observations do suggest a possible strategy to address problem of degenerate C maps in nonlinear networks: we can transform the network\u2019s activation functions so that they appear \u201csu\ufb03ciently linear\u201d at initialization time. However, as we will see in the next subsections, going overboard on this idea will lead to a special type of untrainability that exists only in networks with very well-behaved C maps. 
14.2 The problem of being "too linear"

As suggested in the previous subsection, one way to achieve a well-behaved C map would be to transform the activation functions in a network to resemble the identity function (or a multiple thereof). We can do this for the RELU activation function (defined by $\mathrm{RELU}(u) = \max(0, u)$) by adding a large constant $a$ to its input and subtracting the same constant from its output. In other words, we set
$$\phi(u) = \mathrm{RELU}(u + a) - a = \max(0, u + a) - a,$$
which is equivalent to the identity function for all inputs $u \geq -a$. If $a$ is extremely large, say $10^{100}$, this means that all practically sized inputs to $\phi$ will satisfy this constraint, and thus the function can be treated as the identity for all practical purposes. Moreover, the expectation formulas for local Q and C maps given in Section 6 will produce practically identical results to the identity function case, since the probability mass associated with inputs $u < -a$ to $\phi$ will be vanishingly small. (See Section 13.3 for a formal justification of this.) Thus, the C map for networks consisting of the composition of many combined layers with these transformed RELUs will be equal to the identity function up to a vanishingly small approximation error.

Meanwhile, because nonlinear layers are always preceded and followed by linear layers with learnable biases, the network can in principle learn to undo these transformations and thus simulate a standard deep RELU network. The model class is thus technically no different from a standard RELU network, assuming a perfect optimizer. But despite this, these transformed networks will never actually learn nonlinear behavior via standard gradient-based methods in a reasonable amount of time, and so their hypothetical expressive power will fail to be properly utilized. Indeed, unless the optimizer manages to change the parameters by a factor on the order of $10^{100}$, the network will behave nearly identically to the corresponding linear network (which computes only affine functions of its input) throughout the entire course of optimization, both in terms of its function values and its gradient/curvature estimates.

The basic problem here is that the transformed network has become "too linear" in the sense that we require a very large change in its parameters to see any significant nonlinear behavior. While such a network may be readily trainable within the class of linear functions (as linear networks are), it will be severely limited compared to a standard RELU network in terms of its effective expressive power under gradient descent optimization. Thus, to achieve trainable networks that are also expressive, one must avoid this failure mode in addition to requiring a well-behaved C map.

14.3 How to avoid networks that are "too linear"

If a subnetwork $f$ has a C map which is very close to the identity function, this will usually mean the local C maps of its nonlinear layers are also very close to the identity function (and perhaps much more so). By Section 13.3, this implies that the activation functions must therefore be very close to linear, so that the network is at risk of being "too linear" (as defined above). We may thus hope to prevent this by insisting that the network's C map isn't too close to the identity function. However, this alone won't be good enough, as shown in the following example.
Consider modifying the transformed RELU activations in the previous example by adding 1 to their output and dividing the result by \u221a 2, so that they compute \u03c6(x) = (x + 1)/ \u221a 2 over their high-probability range of inputs (which is an a\ufb03ne function of x). With this change, gradient-based learning will still be e\ufb00ectively restricted to the class of linear networks (which compute a\ufb03ne functions). Meanwhile, a straightforward calculation via Equations 5 and 7 shows that Qf(q) \u2248q and Cf(c) \u2248(c + 1)/2 for nonlinear/combined layers f with activation function \u03c6, and so Cf(c) di\ufb00ers signi\ufb01cantly from the identity. Fortunately, by leveraging the previous analysis, there is a simple way we can use C map properties to provably avoid networks that are \u201ctoo linear\u201d. From Section 13.4, the degree of \u201cnon-a\ufb03neness\u201d of \u03c6, denoted na(\u03c6), is given by na(\u03c6)2 = 1 \u2212 C\u2032 f(0) C\u2032 f(1). As we have C\u2032 f(0) \u2a7d1 by Remark 14, it thus follows that na(\u03c6)2 \u2a7e1 \u2212 1 C\u2032 f(1). This shows that we can avoid activation functions that are too a\ufb03ne (which is su\ufb03cient to avoid networks that are \u201ctoo linear\u201d) by requiring that C\u2032 f(1) be su\ufb03ciently greater than 1 for every nonlinear layer f. 15. Mitigating kernel approximation error Our analysis of the initialization-time behavior of deep neural networks via Q/C maps relies on the assumption that the APKF approximation, when applied over multiple layers in a 43 \fMartens et al. nested fashion, is a reasonable one to make. If this isn\u2019t true, then Q/C maps will cease to describe the network\u2019s PKF at initialization time, and our attempts to make the network trainable by controlling their properties will be doomed to failure. In Section 5.8 we discussed the error bounds from Daniely et al. (2016) in order to help justify our use of nested APKF approximations in deep networks. These bounds make high-probability statements about the error of initialization-time kernel approximations of neural networks, and give a maximum value which shrinks with the square root of the width and grows exponentially with depth. While they represent the best rigorous account of neural network kernel approximations, they are still too pessimistic to be useful in practical settings, either for predicting the approximation error or controlling it. In this section we will propose a heuristic way of looking at how approximation error originates and evolves across multiple layers which we have found to be quite predictive in practice, and which implies certain error mitigation strategies that we can incorporate into DKS. 15.1 Minimizing propagation of errors by controlling Q map derivatives Suppose f is a subnetwork consisting of a composition of many combined layers. A perturbation to the input q value to Qf, representing the error from approximations made at previous layers, will manifest as a perturbation of Qf\u2019s output. Up to \ufb01rst order, the size of the latter will be approximated by that of the former, multiplied by Qf\u2019s derivative. As discussed in Section 8.2, the derivative of Qf is equal to the product of the derivatives for its constituent local Q maps (i.e. those for each of f\u2019s combined layers), and thus will grow or shrink in an exponential fashion as a function of the depth. 
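As a rough, self-contained illustration of this compounding effect (our own sketch; it treats the q value as staying near 1 across the chain, which is only a heuristic, and uses $Q'(1) = \mathbb{E}_{x \sim N(0,1)}[\phi(x)\phi'(x)x]$, obtained by differentiating $Q(q) = \mathbb{E}_{x \sim N(0,1)}[\phi(\sqrt{q}\,x)^2]$ at $q = 1$):

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(80)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def q_map_slope_at_1(phi, dphi):
    """Q'(1) = E_{x~N(0,1)}[phi(x) phi'(x) x] for a combined layer with
    input q value 1."""
    return np.sum(w * phi(z) * dphi(z) * z)

depth = 50  # hypothetical number of combined layers in the chain
for name, phi, dphi in [
    ("tanh", np.tanh, lambda x: 1.0 / np.cosh(x) ** 2),
    ("softplus", lambda x: np.log1p(np.exp(x)), lambda x: 1.0 / (1.0 + np.exp(-x))),
]:
    s = q_map_slope_at_1(phi, dphi)
    # Compounded first-order error factor over the whole chain (heuristic).
    print(f"{name}: Q'(1) = {s:.4f}, amplification over {depth} layers ~ {s ** depth:.3e}")
```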
Thus, it can easily be the case that deep networks will have very large Q map derivatives, which suggests a very large amplification of error through successive layers of the network. One way we can avoid this issue is by requiring that the derivative of each local Q map, when evaluated at its expected input, is less than or equal to 1.

A closely related perspective, which applies to networks consisting of a sequence of combined layers each with local Q map given by $Q$, is that a fixed point $q^*$ of $Q$ will be attractive if $Q'(q^*) < 1$. Attractive fixed points have the property that $Q^D(q)$ will converge to $q^*$ as $D \to \infty$ for all values of $q$ sufficiently close to $q^*$, and are thus naturally "robust" to reasonably sized errors in q. See Appendix K for empirical evidence of the relationship between Q map derivatives and kernel approximation error.

15.2 Minimizing errors using large width and SUO-distributed weights

In addition to controlling the propagation of errors across layers, another way to mitigate error is to increase the quality of each of the layer-wise APKF approximations from which the errors first originate. In the case of Gaussian-distributed weights, APKFs use analytic expectations to approximate finite averages (over unit outputs), where each element being averaged is an iid unbiased estimator of the expectation. Increasing the width/channel dimension $m$ will thus reduce the variance of the overall estimator, and thus reduce error (as predicted by Theorems 1 and 6).

While not originally conceptualized as such, the use of SUO-distributed weights (as defined in Section 4) provides another way to mitigate the kernel approximation error that is essentially free. When using the SUO distribution, the weights are no longer statistically independent or Gaussian distributed, and so the unit outputs being averaged across are neither iid nor unbiased estimators of the kernel approximation formula. Nonetheless, their average is a consistent (but biased) estimator, whose variance goes to zero as $m$ increases (as is established in Theorem 6).

It is well known that biased estimators can sometimes have lower variance than unbiased ones, and this does seem to be the case here. Recall the discussion from the end of Section 4.2, where it was observed that the distribution of an output vector from a linear layer is identical for the Gaussian and SUO-distributed cases, except that the former introduces a random multiplicative perturbation (with mean 1 and variance $1/m$) on the vector's dimension-normalized squared length (which is an estimator of the associated q value). While this perturbation is required for the implied estimator (after the nonlinearity) to be unbiased, it leads to additional variance. We conjecture that this extra variance is more significant than the bias, and thus SUO-distributed weights yield an overall lower approximation error for APKFs.

Note that the upper bounds given in Theorems 1 and 6 do not reflect these intuitions, as they suggest an overall lower approximation error for Gaussian-distributed weights. However, because these are only upper bounds without matching lower bounds, and are likely quite loose/pessimistic, one cannot draw any conclusions. Indeed, we conjecture that tighter bounds could be obtained for SUO-distributed weights given a more careful analysis than the one in Martens (2021).

Part III Specification and derivation of Deep Kernel Shaping

16.
Conditions on Q/C maps that we will enforce Having established a detailed understanding of Q and C maps, and how their properties relate to network trainability, we are now in a position to state and justify the speci\ufb01c conditions which we will attempt to enforce with DKS. Our particular mechanism for doing this, which involves certain transformations of the network\u2019s activation functions, will be described in later sections, and isn\u2019t important for the present discussion. In the follow series of subsections we describe each of the conditions. Note that because we want the entire network\u2019s capacity to be utilized, and not just the subnetwork corresponding to the most direct input-output path, we will enforce these conditions for all subnetworks of the network. 45 \fMartens et al. 16.1 Qf(1) = 1 for all subnetworks f The uniform q condition ensures that the q values for a given layer are independent of location (in the feature map) and the network\u2019s input. While this alone is a su\ufb03cient hypothesis to derive an approach similar to DKS, we will go a step further and standardize to a q value of 1, which will allow us to reuse local Q/C map computations across the entire network. Note that given our use of PLN (which ensures initial q values are 1), this is equivalent to the condition that Qf(1) = 1 for all subnetworks f. The choice to standardize to a q value of 1 (as opposed to some other positive constant) is somewhat arbitrary and not particularly important. 2 would have worked equally well, for example. However, the choice of 1 does lead to somewhat simpler expressions for the local Q/C map and their derivatives, and corresponds to an output scale which is in the range of \u201cinteresting behavior\u201d for most typical loss functions (such as the commonly used softmax cross-entropy error). 16.2 Q\u2032 f(1) = 1 for all subnetworks f As discussed in Section 15, the size of the error in our kernel approximations can be roughly predicted from the size of the derivatives of the Q maps. Thus, in order for Q/C maps to be an accurate description of the network\u2019s true kernel function, we must keep the size of these derivatives under control. To this end, we will require that Q\u2032 f(1) = 1 for all subnetworks f. We look at the derivative at q = 1 in particular since this is the input q value we expect in the absence of error, due to the condition Qf(1) = 1. In principle, we also care about the value of Q\u2032 f(q) for q\u2019s close to 1, which would give us a more complete picture of the approximation error (and perhaps let us to establish rigorous bounds on it). Unfortunately, we don\u2019t yet have a powerful theory for the global properties Q maps like we do for C maps, and so the best we can do is to look at their properties at particular points. That being said, because we know that Q maps are smooth, it\u2019s likely that Q\u2032 f(q) will be reasonably well approximated by Q\u2032 f(1) for values of q close to 1. The choice to make Q\u2032 f(1) equal 1, which corresponds to the error neither growing nor shrinking, is somewhat arbitrary, and other choices for this value are possible. For example, we could minimize Q\u2032 f(1) instead of setting it to 1, thus suppressing the growth of errors as much as possible. Minimizing Q\u2032 f(1) did seem to work in our experiments, however we found that for certain activation functions (such as tanh) it resulted in slower optimization compared to using Q\u2032 f(1) = 1. (See Appendix N.4 for these experiments.) 
We are currently not sure why setting Q\u2032 f(1) = 1 works better than minimizing it for some activation functions. One possible explanation is that Q\u2032 f(1) = 1 allows f to transmit information about the overall scale of its input vector as a roughly linear function of q. Meanwhile, networks where Q\u2032 f(1) is minimized will tend to \u201csquash\u201d the range around q = 1, making it harder to recover the original input q value from the network\u2019s output. One can perhaps draw a rough analogy between this and the preservation of \u201cgeometric information\u201d by C maps as discussed in Section 12. 46 \fDeep Kernel Shaping 16.3 Cf(0) = 0 for all subnetworks f In order to apply the analysis of Section 13 we require that Cf(0) = 0 for all subnetworks f. While this might seem like an overly stringent requirement, it is worth noting that arbitrary deviation of Cf from the identity function is possible if we don\u2019t place any restrictions on the value of Cf(0). This is the case even when C\u2032 f(1) = 1 is enforced, as can be seen in Section 12.1 for deep RELU networks. 16.4 C\u2032 f(1) \u2264\u03b6 for all subnetworks f As discussed in Section 12, degenerate C maps correspond to networks that are di\ufb03cult to train with gradient-based methods. Avoiding this degeneration is the central aim of DKS. As argued in Section 13, we can do this for a given Cf by bounding its maximum deviation from identity function (which is the canonical non-degenerate C map). Given the condition Cf(0) = 0, Theorem 13 says that this deviation is roughly equal to C\u2032 f(1) \u22121. Thus, we will enforce the condition C\u2032 f(1) \u2264\u03b6 for all subnetworks f, where \u03b6 > 1 is a hyper-parameter which we will sometimes refer to as the global slope bound. (Note that C\u2032 f(1) \u2a7e1 is true automatically as consequence of Theorem 13.) This condition can be thought of as imposing a limit on how \u201cnon-linear\u201d any given subnetwork is allowed to look. 16.5 minf[C\u2032 f(1)] is maximized Even assuming that our kernel approximations are exact, a well-behaved C map is not a su\ufb03cient condition for a nonlinear network to be trainable. As discussed in Section 14.2, one way such a network can fail to be trainable is if it\u2019s too far away in parameter-space from a signi\ufb01cantly nonlinear function, or in other words is \u201ctoo linear\u201d. In such cases, a gradient-based optimizer will struggle to utilize the full expressive power of the network. For neural networks with standard parameterizations, this issue will manifest as nonlinear layers with activation functions that behave too much like a\ufb03ne functions. As argued in Section 14.3, this can be avoided by requiring that C\u2032 f(1) be su\ufb03ciently larger than 1 for such layers f. Thus, it makes sense to balance the condition in Subsection 16.4 with one requiring that minf[C\u2032 f(1)] is maximized, where the minimum is taken over all nonlinear layers f in the network. 16.6 Choosing the global slope bound \u03b6 Given the above two conditions, the global slope bound \u03b6 corresponds to the maximum value of C\u2032 f(1) over all subnetworks (and must be \u2a7e1). Heuristically, the degree to which \u03b6 is greater than 1 tells us how nonlinear the network\u2019s functional mapping is at initialization time. 
If \u03b6 is too large, then the C map for the network (or one of its subnetworks) will experience the \u201cexploding\u201d type of degeneration discussed in Section 12.4.2, where c values are squashed towards c0 = 0. If it\u2019s too close to 1, then the C map will be very close to the identity, and we run the risk of making the network \u201ctoo linear\u201d (as per Section 14.2). In our experiments on 100 layer networks we tried only a few values of \u03b6 before settling on \u03b6 = 1.5, and in general we found that DKS is reasonably robust to signi\ufb01cant variations 47 \fMartens et al. in \u03b6 (or more precisely, log(\u03b6 \u22121)). For example, \u03b6 = 1.01 and \u03b6 = 100 both produced depth 100 networks that trained at competitive speeds, being only somewhat outperformed by networks that used \u03b6 = 1.5. More extreme choices like \u03b6 = 1.001 and \u03b6 = 10000 meanwhile produced signi\ufb01cantly slower training. See Appendix M.3 for these results. For depths 200 or greater we found that it was sometimes necessary to use a value of \u03b6 less than 1.5 (such as 1.1) to achieve stable training. We speculate that this is because the kernel approximations underlying our Q/C maps tend to break down at very high depths (given our modest layer widths), but that this can be mitigated by making the network \u201cmore linear\u201d. An interesting systematic trend we observed is that larger \u03b6 values (up to a certain limit) tended to produce slightly faster optimization, whereas smaller ones led to slightly improved generalization, possibly because this made the inductive bias of the model (plus optimizer) favor a more linear solution. Relevant experimental data is presented in Appendix M.4. 17. From global map conditions to local ones In this section we will describe how the \u201cglobal\u201d map conditions given in the previous section can be achieved by enforcing an equivalent set of conditions on the local Q/C maps of the network. The particular mechanism we will use to enforce these \u201clocal\u201d conditions will be discussed later in Section 18. 17.1 Slope polynomials and maximal slope functions Before we can write down the local map conditions we will de\ufb01ne a special construction that allows us to relate the slope at 1 of extended C maps to the slope at 1 of local C maps. Let f be an arbitrary subnetwork. As discussed in Section 8.2, we may apply automatic di\ufb00erentiation to compute C\u2032 f(1) from the derivatives of the local C maps of f\u2019s constituent layers, where composition corresponds to multiplication and weighted averages (due to concatenations or sum operations) correspond to weighted averages (with the same weights). Since c values of 1 always map to 1 (as argued in Section 11), the expression for the derivative will be a polynomial function of the local C map derivatives at c = 1. If we further assume that there is a constant \u03c8 such that C\u2032 g(1) = \u03c8 for each nonlinear layer g in f, then we can express C\u2032 f(1) as a polynomial function of \u03c8, as the local maps for all other layers are the identity function. We will call this function the slope polynomial of f and denote it by pf(\u03c8). Note that since C\u2032 g(1) \u2a7e1 whenever Cg(0) = 0 (which we are enforcing), we may thus assume \u03c8 \u2a7e1 without loss of generality. 
Because slope polynomials can be constructed from products and weighted averages of lower-degree slope polynomials, and the value of 1 is preserved under multiplication and weighted averages, it follows that $p_f(1) = 1$ for any subnetwork $f$. And since $\psi \mapsto \psi$ and $\psi \mapsto 1$ are positive definite functions of $\psi$ (trivially), and positive definite functions are closed under multiplication and non-negative weighted averaging (as discussed in Section 11.2), it also follows that slope polynomials are positive definite functions, just like C maps. They are thus non-decreasing for $\psi \geq 0$, and indeed strictly increasing provided that the subnetwork contains a nonlinear layer. From this it also follows that $p_f(\psi) \geq 1$ for all $\psi \geq 1$.

As we will be interested in computing the most extreme slope over a network, we will define a related function called the maximal slope function, which is given by $\mu(\psi) = \max_f[p_f(\psi)]$, where the maximum is taken over all subnetworks $f$ of the entire network. Because subnetworks with no nonlinear layers won't influence the maximum, and the maximum of a set of strictly increasing functions is strictly increasing, we have that the maximal slope function is strictly increasing provided that the network contains at least one nonlinear layer. And because it is the maximum over a set of continuous functions, the maximal slope function is also continuous, and therefore invertible, which is a fact we will make use of later.

17.2 Computing maximal slope functions

The number of distinct subnetworks in a network can be very large, and so computing the maximal slope function naively from the definition can be laborious. Fortunately, we can eliminate most of these subnetworks from consideration immediately. Observe that if a subnetwork is formed by feeding the output of one subnetwork into the input of another, i.e. $h = f \circ g$, then we have $p_h(\psi) = p_f(\psi)p_g(\psi)$ by the chain rule. And because $p_f(\psi) \geq 1$ and $p_g(\psi) \geq 1$ for all $\psi \geq 1$, it thus follows that $p_h(\psi) \geq p_f(\psi)$ and $p_h(\psi) \geq p_g(\psi)$ for $\psi \geq 1$. Therefore, any subnetwork that is part of another subnetwork in this particular sense can be ignored when computing the maximum. Moreover, without assuming any relationship between $f$, $g$, and $h$, if $p_f(\psi)$ is a factor of $p_h(\psi)$, then $p_h(\psi)/p_f(\psi)$ is also a valid slope polynomial, and therefore $p_h(\psi) \geq p_f(\psi)$ for all $\psi \geq 1$, thus allowing us to ignore $f$ in the maximum.

Note this does not therefore imply that the maximal slope function is always the slope polynomial of the entire network9, as not every subnetwork can be related to the entire network in this way. For example, if we have a very deep nonlinear network with $D$ nonlinear layers and a skip connection from the initial input to the final output, so that the final output is $1/\sqrt{2}$ times the initial input plus $1/\sqrt{2}$ times the output of the nonlinear subnetwork, then the slope polynomial for the nonlinear subnetwork is $\psi^D$, while the slope polynomial for the entire network is
$$\left(\frac{1}{\sqrt{2}}\right)^{\!2} \cdot 1 + \left(\frac{1}{\sqrt{2}}\right)^{\!2} \psi^D = \frac{1}{2}\bigl(1 + \psi^D\bigr),$$
which is strictly smaller than $\psi^D$ for all $\psi > 1$. (This formula can be derived by following the recipe given in Section 21.3.) For this network, the maximal slope function is in fact $\psi^D$.
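As a small illustration of how these quantities might be handled in practice (our own sketch, not code from DKS), the snippet below evaluates the two slope polynomials from the skip-connection example above and numerically inverts the resulting maximal slope function, which is possible because $\mu$ is continuous and strictly increasing:

```python
from scipy.optimize import brentq

D = 100  # number of nonlinear layers in the chain (our example value)

def p_chain(psi):
    """Slope polynomial of the deep nonlinear chain (the dominant subnetwork)."""
    return psi ** D

def p_whole(psi):
    """Slope polynomial of the whole network, including the skip connection
    with 1/sqrt(2) weights on each branch."""
    return 0.5 * (1.0 + psi ** D)

def mu(psi):
    """Maximal slope function: the maximum over the relevant subnetworks,
    which here reduces to the chain's slope polynomial."""
    return max(p_chain(psi), p_whole(psi))

def mu_inverse(zeta, psi_hi=10.0):
    """Invert mu on [1, psi_hi] by root finding (mu is strictly increasing)."""
    return brentq(lambda psi: mu(psi) - zeta, 1.0, psi_hi)

zeta = 1.5
psi = mu_inverse(zeta)
print(f"mu^-1({zeta}) = {psi:.6f}")                    # roughly 1.5 ** (1/100)
print(f"whole-network slope at c = 1: {p_whole(psi):.4f}  (stays below zeta)")
```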
An even more interesting example is the same network, but with additional nonlinear layer added to the end, after the skip connection. The maximal slope function of this network is max{\u03c8D, \u03c8(1 + \u03c8D)/2}, which cannot be reduced to a polynomial as there are settings of \u03c8 for which either input to the max is larger. 9. For the entire network to even have a slope polynomial requires that it be a valid subnetwork of itself, which is only the case for networks with a singular input and output. 49 \fMartens et al. 17.3 The equivalent local map conditions Having de\ufb01ned the maximal slope function, we are now in a position to derive the equivalent local map conditions to the global ones given in Section 16. First, observe that local Q/C maps for a\ufb03ne layers are identity functions, and can essentially be ignored when computing extended Q/C maps. What remains are nonlinear layers and weighted sum operations, and so we will concentrate on these. If we have that Qf(1) = 1 for all nonlinear layers f, then the analogous property automatically holds for all subnetworks that don\u2019t contain weighted sums or constant scalar multiplications, as it is clearly preserved under composition and weighted averages (arising due to concatenations). The same reasoning also applies to the condition Cf(0) = 0. While weighted sum operations are constructed from concatenation operations (which are accounted for in the above argument), the construction also introduces scalar multipliers which can a\ufb00ect the q values. To account for this, we must ensure that the output q value of each weighted sum operation is 1. By Equation 12, this is equivalent to requiring that the squares of the weights sum to 1, assuming that the inputs to the sum have q values of 1. We will call weighted sums satisfying this condition \u201cnormalized sums\u201d. Given that a weighted sum is normalized, and that its input q values are 1, it additionally follows (by Equations 12 and 13) that the corresponding output q and c values will be weighted averages of the input q and c values, with weights given by the squares of the weights of the sum itself. To \ufb01nally achieve Qf(1) = 1 for all subnetworks f we must remove any constant scalar multiplication operations from the network, except those that are part of the above normalized sums. With this done, a simple inductive argument then establishes that Qf(1) = 1 for all subnetworks f. Given constant q values of 1, and weighted sums that are all normalized, we may compute Q\u2032 g(1) using the same slope polynomials used to compute C\u2032 g(1), provided that Q\u2032 f(1) is the same for all nonlinear layers f. Thus if we impose the condition Q\u2032 f(1) = 1 for all nonlinear layers f, it will follow that Q\u2032 g(1) = pg(1) = 1 for all subnetworks g. Finally, maximizing minf[C\u2032 f(1)], while requiring that C\u2032 f(1) \u2264\u03b6 for all subnetworks f, is equivalent to setting C\u2032 f(1) = \u03c8 for all nonlinear layers f (since a single nonlinear layer is a subnetwork), where \u03c8 = \u00b5\u22121(\u03b6) and \u00b5\u22121 is the inverse of the maximal slope function for the network (which exists as long as the network has at least one nonlinear layer). Summarizing, the equivalent local map conditions are: 1. Qf(1) = 1, 2. Q\u2032 f(1) = 1, 3. Cf(0) = 0, and 4. C\u2032 f(1) = \u00b5\u22121(\u03b6), for all nonlinear layers f, with the additional requirement that all weighted sum operations in the network are normalized (i.e. 
that the squares of their weights sum to 1).

18. Activation function transformations

In addition to PLN and the use of Delta initializations, our main mechanism of control over the initialization-time behavior of neural networks will be to apply transformations to their activation functions. In particular, we will apply constant scalar multiplications and shifts to both their inputs and outputs. For most typical activation functions this will give us sufficient control over a combined/nonlinear layer's local Q/C maps to enforce the "equivalent local map conditions" from the previous section.

18.1 Basic definition

Suppose $\phi$ is some element-wise activation function in the network. We propose to make the following replacement:
$$\phi(u) \;\longrightarrow\; \hat\phi(u) \equiv \gamma\bigl(\phi(\alpha u + \beta) + \delta\bigr),$$
where $\alpha$, $\beta$, $\gamma$, and $\delta$ are static scalar constants (that we do not train). Note that these constants are the same for all channels and feature map locations within a given layer, but can differ between layers.

18.2 Equivalent parameters and preservation of the model class

Provided that each nonlinear layer is both preceded by and followed by an affine layer (which is true for most architectures), this way of transforming the activation functions has the property that it preserves the model class of the original network. By this we mean that for any network with transformed activation functions, there exists an equivalent network with untransformed activations that has precisely the same functional behavior. We will call the filter weights and biases of this second network the equivalent parameters.

Computing the equivalent parameters is relatively straightforward. Because each nonlinear layer is both preceded by and followed by an affine layer, we can essentially absorb the input/output scale and shift operations into these layers. To be more explicit, in the case of a standard fully-connected layer with weight matrix $W$ and bias vector $b$, $W(\gamma x + \delta 1) + b$ becomes $W'x + b'$ with $W' = \gamma W$ and $b' = \delta W 1 + b$ (where $1$ denotes the vector of ones). Similarly, $\alpha(Wx + b) + \beta 1$ becomes $W'x + b'$ with $W' = \alpha W$ and $b' = \alpha b + \beta 1$. The construction for convolutional layers is similar, and relies on the fact that the scaling and shifting constants are the same for each location (just as the filter weights and biases are).

18.3 Our method for transforming activation functions viewed as an initialization scheme

The existence of equivalent parameters, and their relatively straightforward computation, makes it possible to turn our method for transforming activation functions into an initialization scheme for the network's parameters. One simply computes the constants needed to appropriately transform the activation functions, and then uses them to instead compute the equivalent parameters, starting from a network initialized as per Section 4. If we view this process as a sampling procedure for the network's parameters, then it corresponds to a distribution with non-trivial correlations between the weights and biases of each affine layer.

Note that while a transformed network and an untransformed network (with equivalent parameters) compute the same function, they correspond to different parameterizations of the same model class, and thus may give rise to different optimization dynamics.
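As an illustration of the equivalent-parameter computation in Section 18.2 (a minimal sketch for the fully-connected case only; the function names are ours, not from the paper), the two helpers below absorb the output-side and input-side constants of a transformed activation into the adjacent affine layers:

```python
import numpy as np

def absorb_output_constants(W, b, gamma, delta):
    """Rewrite W @ (gamma * x + delta * 1) + b as W_new @ x + b_new,
    absorbing the output scale/shift of the preceding transformed activation."""
    ones = np.ones(W.shape[1])
    return gamma * W, delta * (W @ ones) + b

def absorb_input_constants(W, b, alpha, beta):
    """Rewrite alpha * (W @ x + b) + beta * 1 as W_new @ x + b_new,
    absorbing the input scale/shift of the following transformed activation."""
    ones = np.ones(W.shape[0])
    return alpha * W, alpha * b + beta * ones

# Quick check that the rewritten layer matches the original computation.
rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)
gamma, delta = 2.0, -0.5
W2, b2 = absorb_output_constants(W, b, gamma, delta)
assert np.allclose(W @ (gamma * x + delta) + b, W2 @ x + b2)
```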
Stochastic gradient descent, for example, is not invariant to reparameterizations of this type, and so we would expect it to behave differently on either network. The K-FAC optimizer (Martens and Grosse, 2015), on the other hand, is approximately invariant to reparameterizations involving affine transformations of layer inputs and outputs (Martens and Grosse, 2015; Grosse and Martens, 2016; Luk and Grosse, 2018). Our experimental results indicate that the transformed networks are easier to optimize with stochastic gradient descent than networks with equivalent parameters. Meanwhile, as predicted by the theory, the optimization performance with K-FAC is roughly the same for both versions. See Appendix N.7 for the relevant results.

18.4 Achieving local map conditions with activation function transformations

Suppose $f$ is a nonlinear layer with activation function $\phi$ which we propose to replace by $\hat\phi(u) \equiv \gamma(\phi(\alpha u + \beta) + \delta)$. The four equivalent local map conditions (from Section 17.3) give rise to a system of four nonlinear equations, with the four scalar constants ($\alpha$, $\beta$, $\delta$, and $\gamma$) as its unknowns. In this subsection we will show how to solve for these constants, assuming that a solution exists.

By Equation 14, the condition $C_f(0) = 0$ holds if and only if $\mathbb{E}_{x \sim N(0,1)}[\hat\phi(x)] = 0$. Noting that $\mathbb{E}_{x \sim N(0,1)}[\hat\phi(x)] = \gamma(\mathbb{E}_{x \sim N(0,1)}[\phi(\alpha x + \beta)] + \delta)$, this becomes equivalent to
$$\delta = -\mathbb{E}_{x \sim N(0,1)}[\phi(\alpha x + \beta)].$$
Thus $\delta$ is fully determined by $\alpha$ and $\beta$, which eliminates a single degree of freedom.

From Equation 5, and basic properties of expectations, we have that
$$Q_f(1) = \mathbb{E}_{x \sim N(0,1)}[\hat\phi(x)^2] = \gamma^2\,\mathbb{E}_{x \sim N(0,1)}\bigl[(\phi(\alpha x + \beta) + \delta)^2\bigr] = \gamma^2\,\mathrm{Var}_{x \sim N(0,1)}[\phi(\alpha x + \beta)]. \tag{16}$$
Thus the condition $Q_f(1) = 1$ is equivalent to
$$\gamma = \Bigl(\mathbb{E}_{x \sim N(0,1)}\bigl[(\phi(\alpha x + \beta) + \delta)^2\bigr]\Bigr)^{-\frac{1}{2}} = \mathrm{Var}_{x \sim N(0,1)}[\phi(\alpha x + \beta)]^{-\frac{1}{2}}.$$
This fully determines the value of $\gamma$ in terms of the other constants, thus eliminating another degree of freedom.

Given the above solutions for $\gamma$ and $\delta$, which we will treat as functions $\gamma(\alpha, \beta)$ and $\delta(\alpha, \beta)$ of $\alpha$ and $\beta$, it remains to solve for the values of $\alpha$ and $\beta$ which satisfy the final two conditions $Q'_f(1) = 1$ and $C'_f(1) = \mu^{-1}(\zeta)$. From the fact that $Q_f(1) = 1$ (for our choice of $\gamma$), these two conditions can be written as:

i. $\mathbb{E}_{x \sim N(0,1)}[\hat\phi(x)\hat\phi'(x)x] = 1$, and
ii. $\mathbb{E}_{x \sim N(0,1)}[\hat\phi'(x)^2] = \mu^{-1}(\zeta)$,

where the dependence on $\alpha$ and $\beta$ is implicit in $\hat\phi(x) = \gamma(\alpha, \beta)(\phi(\alpha x + \beta) + \delta(\alpha, \beta))$ and $\hat\phi'(x) = \alpha\gamma(\alpha, \beta)\phi'(\alpha x + \beta)$.

We are not aware of any closed-form solution for this two-dimensional system. However, because it's only two-dimensional, and the expectations required to evaluate it are one-dimensional (including those needed to compute $\delta$ and $\gamma$), we can readily solve it using blackbox numerical software, assuming a solution exists. And because the system of equations only depends on the functional form of $\phi$ and no other details about $f$, we only need to solve it once for each distinct activation function in the network.
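The following is a minimal sketch of how one might solve this system with off-the-shelf tools (our own code and helper names, using Gauss-Hermite quadrature for the one-dimensional expectations and scipy's fsolve as the blackbox solver); it illustrates the procedure above and is not the reference implementation:

```python
import numpy as np
from scipy.optimize import fsolve

# Gauss-Hermite quadrature for expectations under N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(100)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

def expect(vals):
    return np.sum(w * vals)

def solve_transform(phi, dphi, target_slope, x0=(0.1, -0.5)):
    """Solve for (alpha, beta, delta, gamma) in
    phi_hat(u) = gamma * (phi(alpha*u + beta) + delta)
    so that C_f(0) = 0, Q_f(1) = 1, Q'_f(1) = 1 and C'_f(1) = target_slope,
    where target_slope plays the role of mu^{-1}(zeta)."""
    def gamma_delta(alpha, beta):
        y = phi(alpha * z + beta)
        delta = -expect(y)                                  # C_f(0) = 0
        gamma = 1.0 / np.sqrt(expect((y + delta) ** 2))     # Q_f(1) = 1
        return gamma, delta

    def residuals(ab):
        alpha, beta = ab
        gamma, delta = gamma_delta(alpha, beta)
        y_hat = gamma * (phi(alpha * z + beta) + delta)
        dy_hat = alpha * gamma * dphi(alpha * z + beta)
        return [expect(y_hat * dy_hat * z) - 1.0,           # Q'_f(1) = 1
                expect(dy_hat ** 2) - target_slope]         # C'_f(1) = mu^{-1}(zeta)

    alpha, beta = fsolve(residuals, x0=list(x0))
    gamma, delta = gamma_delta(alpha, beta)
    return alpha, beta, delta, gamma

# Example: tanh for a 100-layer chain with zeta = 1.5, so mu^{-1}(zeta) = 1.5**(1/100).
alpha, beta, delta, gamma = solve_transform(
    np.tanh, lambda x: 1.0 / np.cosh(x) ** 2, 1.5 ** (1.0 / 100.0))
print(f"alpha={alpha:.5f}, beta={beta:.5f}, delta={delta:.5f}, gamma={gamma:.5f}")
```

With this choice of $\mu^{-1}(\zeta)$, the tanh solution found by such a solver should land close to the values reported later in Section 18.7, though the exact numbers depend on the starting point, the quadrature accuracy, and any sign symmetries of the activation function.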
Implementation details are given in Section 22. 18.5 When will solutions exist? While we found in our experiments that solutions for \u03b1 and \u03b2 exist for nearly all commonly used nonlinear activation functions, the popular RELU is a notable exception (which we will examine in the next subsection). Thus, it is worth delving deeper into the question of the existence of these solutions. Noting that \u00b5\u22121(\u03b6) will typically be quite close to 1, if we can show that lim\u03b1\u21920 C\u2032 f(1) = 1, this will suggest that C\u2032 f(1) = \u00b5\u22121(\u03b6) is achievable by choosing a su\ufb03ciently small value of \u03b1. Intuitively speaking, shrinking \u03b1 allows us to e\ufb00ectively narrow the interval of typical inputs to \u03c6, meaning that \u03c6 starts to resemble an a\ufb03ne function over this interval (since di\ufb00erentiable functions are, by de\ufb01nition, closely approximated by their 1st-order Taylor approximations within any su\ufb03ciently small neighborhood). As discussed in Section 13.4, this means that C\u2032 f(0)/C\u2032 f(1) \u21921 as \u03b1 \u21920, which in turn implies that C\u2032 f(1) \u21921 (as we have by Remark 14 that C\u2032 f(0) \u2a7d1 \u2a7dC\u2032 f(1) when Cf(0) = 0). The following proposition formalizes this intuition, although is proved (in Appendix F.1) using a di\ufb00erent technique. Proposition 26 Let f be a nonlinear layer with transformed activation function \u02c6 \u03c6 de\ufb01ned as above, with \u03b4 and \u03b3 chosen as per Section 18.4. If \u03c6\u2032(\u03b2) \u0338= 0 then we have lim \u03b1\u21920 C\u2032 f(1) = 1. The hypothesis that \u03c6\u2032(\u03b2) \u0338= 0 is required here since otherwise \u02c6 \u03c6 will tend to the zero function as \u03b1 \u21920 (whose C map is unde\ufb01ned). Apart from this restriction, there is no obvious requirement on \u03b2 for the condition C\u2032 f(1) = \u00b5\u22121(\u03b6) to hold, and indeed in our preliminary tests we found that we could satisfy this for nearly all reasonable choices of \u03b2 for most activation functions. The role of \u03b2 can thus be thought of selecting the position in \u03c6\u2019s graph to \u201czoom in on\u201d, and gives us the extra \ufb02exibility needed to control the value of Q\u2032 f(1). 18.6 The problem with positively homogeneous activation functions A positively homogeneous activation function \u03c6(u) of degree k is one where \u03c6(\u03bbu) = \u03bbk\u03c6(u) for all non-negative scalars \u03bb. A well-known example for k = 1 is the RELU activation function, which is given by \u03c6(u) = max(u, 0). 53 \fMartens et al. Due to their de\ufb01ning property, positively homogeneous activation functions yield at most three e\ufb00ective degrees of freedom under our parameterized transformation, instead of the typical four. This can be seen by observing that \u02c6 \u03c6(u) = \u03b3(\u03c6(\u03b1u + \u03b2) + \u03b4) = \u03b3(\u03c6(|\u03b1|(sign(\u03b1)u + \u03b2/|\u03b1|)) + \u03b4) = |\u03b1|k\u03b3(\u03c6(sign(\u03b1)u + \u03b2/|\u03b1|) + \u03b4/|\u03b1|k) = \u02dc \u03b3(\u03c6(sign(\u03b1)u + \u02dc \u03b2) + \u02dc \u03b4), where we have de\ufb01ned \u02dc \u03b3 = |\u03b1|k\u03b3, \u02dc \u03b2 = \u03b2/|\u03b1|, and \u02dc \u03b4 = \u03b4/|\u03b1|k. Apart from the sign of \u03b1, which can take only two discrete values, we e\ufb00ectively have only three free real-valued random variables: \u02c6 \u03b3, \u02c6 \u03b2, and \u02c6 \u03b4. 
Because of this reduction in the degrees of freedom for positively homogeneous activation functions, we can only enforce at most three of our four local map conditions. The only one which is arguably optional is the condition that $Q'_f(1) = 1$ for all combined layers $f$, which corresponds to the equation $\mathbb{E}_{x \sim N(0,1)}[\hat\phi(x)\hat\phi'(x)x] = 1$. Thus, in all of our experiments with RELU networks we dropped this condition, and while it did produce networks that trained reasonably well, optimization performance was still slower compared to all other activation functions we tested, at least for skip connection-free networks trained with K-FAC. (See Section 28.4 for these results.)

18.7 Examples of transformed activation functions

In this subsection we will give some examples of transformed activation functions produced by DKS. Our examples will assume a basic feedforward network of 100 combined layers, and a global slope bound $\zeta = 1.5$. We will consider the standard tanh and RELU activation functions, as well as Swish (Prajit et al., 2017), SELU (Klambauer et al., 2017), and a commonly used smooth substitute for RELU called "softplus" (which is given by $\phi(x) = \log(1 + \exp(x))$). The following table gives the approximate values for the activation function parameters found by DKS:

Activation function | $\alpha$ value | $\beta$ value | $\delta$ value | $\gamma$ value
tanh | 0.090438 | -0.56011 | 0.50500 | 14.9025
softplus | 0.22802 | 0.40751 | -0.92372 | 7.30325
relu | 0.387604 | 1.0000 | -1.0006 | 2.5916
swish | 0.12945 | 0.349475 | -0.20889 | 11.50455
selu | 0.088294 | -0.25244 | 0.38694 | 8.25434

In the following plots we compare the default and transformed activation functions over the input interval $[-10, 10]$ for tanh, softplus, and RELU. Assuming uniform q values of 1, and that the error in our kernel approximations is relatively low, this interval contains all the inputs that our nonlinear units will see at initialization time with overwhelming probability.

[Plots: default vs. DKS-transformed activation functions over $[-10, 10]$ for tanh, softplus, and RELU.]

We can see from these plots that the transformed activation functions tend to look more like the identity function than the default ones do (over the relevant range of inputs). In fact, they all bear a resemblance to each other (especially tanh, softplus and swish), as can be seen in the following plot:

[Plot: the transformed activation functions overlaid on one another.]

19. Addressing normalization layers

19.1 Batch Normalization layers

Batch Normalization (BN) layers (Ioffe and Szegedy, 2015) are an important component in many neural network architectures, especially convolutional networks. For each unit scalar $u$ in their input, BN layers compute a mean $\mu$ and variance $\sigma^2$ of $u$ over the training mini-batch, and then output a "normalized" version $(u - \mu)/\sqrt{\sigma^2 + \epsilon}$, where $\epsilon$ is a small constant. This is sometimes followed by the application of per-channel learnable bias parameters, which are initialized to zero.

Because they use statistics computed over the mini-batch, BN layers cannot really be described in the Q/C map framework we have presented, and are therefore incompatible with DKS. In particular, our formalism assumes that the network's computation for a single training input depends only on that input, and not on other elements of the mini-batch. To account for such interactions, one would have to introduce hypotheses on the size of the mini-batch and the statistical distribution of its vectors, as the behavior of BN layers is highly dependent on these factors.
Moreover, the evolution of q and c values would not happen independently across the mini-batch, which would likely preclude a simple onedimensional description like Q and C maps. 19.2 Layer Normalization layers Layer Normalization (LN) layers (Ba et al., 2016) are a popular ingredient in neural network architectures such as Transformers, and are sometimes used as an alternative to BN layers. For each location vector z in its input feature map, an LN layer computes the scalar mean \u00b5 = 1 k1\u22a4zi and variance \u03c32 = 1 k\u2225z\u2212\u00b51\u22252 over the k entries of z (where 1 denotes the vector of 1\u2019s), and outputs a \u201cnormalized\u201d version (z \u2212\u00b51)/ \u221a \u03c32 + \u03f5, where \u03f5 is a small constant. This is this sometimes followed by the application of learnable per-channel gain and bias parameters, which are initialized to 1 and 0 respectively. Note that LN layers were not explicitly de\ufb01ned for convolutional networks in the original paper. Thus, one could also conceivably de\ufb01ne them as computing a mean \u00b5 and variance \u03c32 over both locations and channels, instead of individually per location. In this work we will assume our previous de\ufb01nition, and anything we say regarding LN layers from this point will apply only to that de\ufb01nition. Unlike BN layers, LN layers perform their computations and transformations individually per training case, and do not involve any computations across the mini-batch. Averaging of statistics instead occurs over entries (i.e. channels) of the location vectors, and the same scale and shift is applied to all entries. In general, \u00b5 and \u03c32 will be di\ufb00erent for each input to the network, so that the learnable gain and bias cannot ever actually \u201cundo\u201d the normalization for all training cases cases simultaneously. This means that introducing LN layers into a network will fundamentally change its model class. As we will show next, LN layers can be understood within our Q/C map framework, and are thus compatible with DKS. The formulas for their local Q/C maps are given below. 19.2.1 Q/C map computations for Layer Normalization layers As we are concerned with the network\u2019s initialization-time behavior when computing Q/C maps, we will assume going forward that the LN layers f\u2019s learnable gain and bias parameters, if they are indeed used, are set to their initial values (1 and 0). Given this assumption, and taking \u03f5 = 0, the output of f will always have a dimension-normalized squared length of 1, as \u03c32 = 1 k\u2225z \u2212\u00b51\u22252 by de\ufb01nition. As this is precisely the quantity approximated by q values, we can thus de\ufb01ne Qf(q) = 1. 57 \fMartens et al. To understand how f will a\ufb00ect c values, it su\ufb03ces to analyze it as a mapping from z to z \u2212\u00b51, since c values are invariant to scalar multiplications of the underlying vectors. Suppose z1 and z2 are two di\ufb00erent vector inputs to f (for a particular location), with q values q1, q2 and c values and c1, c2 (respectively), and de\ufb01ne \u00b5i = 1 k1\u22a4zi for i = 1, 2. Then we have 1 k(zi \u2212\u00b5i1)\u22a4(zj \u2212\u00b5j1) = 1 kz\u22a4 i zj \u2212\u00b5i\u00b5j for i, j \u2208{1, 2}. 
$C_f(c, q_1, q_2)$ approximates the cosine similarity of $z_1 - \mu_1 1$ and $z_2 - \mu_2 1$, which can therefore be written as
$$\frac{\frac{1}{k}z_1^\top z_2 - \mu_1\mu_2}{\sqrt{\bigl(\frac{1}{k}\|z_1\|^2 - \mu_1^2\bigr)\bigl(\frac{1}{k}\|z_2\|^2 - \mu_2^2\bigr)}} \;\approx\; \frac{\sqrt{q_1 q_2}\,c - \mu_1\mu_2}{\sqrt{(q_1 - \mu_1^2)(q_2 - \mu_2^2)}}.$$
For a network initialized as per Section 4, we have by Equation 24 that $\mu_i \approx \mathbb{E}_{x \sim N(0,1)}\bigl[\phi(\sqrt{q_i}\,x)\bigr]$ for $i = 1, 2$, where $\phi$ is the activation function of the immediately preceding combined layer $g$ (with $\phi$ being the identity if $g$ is affine). If we have uniform q values (so that $q_1 = q_2 = q$), then by Equation 14 this implies $\mu_1 = \mu_2 = \sqrt{q\,C_g(0)}$, so that the above expression for $f$'s C map simplifies to
$$C_f(c) = \frac{qc - qC_g(0)}{q - qC_g(0)} = \frac{c - C_g(0)}{1 - C_g(0)}. \tag{17}$$
When $g$ is an affine layer (or a sum over multiple affine layers), or is a combined/nonlinear layer transformed via DKS, we have $C_g(0) = 0$. In this case, the above expression for $f$'s C map reduces to the identity function. More generally, we note that
$$C_{f \circ g}(0) = C_f(C_g(0)) = \frac{C_g(0) - C_g(0)}{1 - C_g(0)} = 0,$$
and so the application of the LN layer $f$ after $g$ thus has the effect of ensuring that $C_{f \circ g}(0) = 0$ even when $C_g(0) \neq 0$.

20. Addressing pooling layers

Pooling layers are a type of layer used in certain convolutional network architectures to compress information from a larger feature map into a smaller one (with fewer locations). In this section we will discuss why standard pooling layers aren't compatible with our Q/C map framework, and describe potential replacements for them which are. We will also give mathematical arguments and empirical evidence suggesting that it may nonetheless be okay to use them with DKS in practice.

20.1 (Global) mean-pooling layers

Mean-pooling layers function similarly to convolutional layers, except that instead of computing a (learnable) affine function of each "patch" of activation vectors, they simply compute the average of those vectors. Typically these patches don't overlap, and thus a mean-pooling layer reduces the number of locations (while preserving the channels).

In order to simplify the discussion we will restrict our attention to "global" mean-pooling layers, which average over all locations, and are the most common type used in practice. The same basic conclusions will apply to general mean-pooling layers, with somewhat more complicated formulas for the associated kernel functions. Formally, a global mean-pooling layer $f$ computes
$$f(Z) = \frac{1}{\ell} Z 1, \tag{18}$$
where $\ell$ is the number of locations of the feature map $Z$, and $1$ denotes a vector of 1's of the appropriate dimension (which will change based on context).
Noting that the input and output channel dimension are equal for mean-pooling layers, we have \u03a3Z,Z\u2032 = \u0014 1 kZ\u22a4Z 1 kZ\u22a4Z\u2032 1 kZ\u2032\u22a4Z 1 kZ\u2032\u22a4Z\u2032 \u0015 , and so \u03baf(Z, Z\u2032) only depends on the inputs Z and Z\u2032 via their IPM \u03a3Z,Z\u2032. Thus, \u03baf can be composed with APKFs to form a network-level PKF approximation. Unfortunately, while \u03baf depends only on input q and c values (i.e. the entries of \u03a3Z,Z\u2032), it cannot be broken down in terms of Q and C maps that operate independently across di\ufb00erent locations. For example, the output q value is given by 1 \u21132 1\u22a4\u0000 1 kZ\u22a4Z \u0001 1, which would be computed as an average of multiple input q and c values. Even if we assume that the input q values to \u03baf are uniform, the output q values will di\ufb00er for each input to the network and each location due to their dependence on the c values. This invalidates our C map analysis for subsequent layers, which is predicated on uniform q values. 20.1.2 A possible replacement: weighted mean-pooling layers A possible solution to the issues associated with mean-pooling layers is to replace them with layers that can be more easily handled within our framework, and which ideally don\u2019t shrink the model class. (Expansion of the model class is less objectionable, provided that it doesn\u2019t signi\ufb01cantly harm generalization performance.) One natural option is to use convolutional layers whose \ufb01lter size equal is equal to that of the entire feature map. This won\u2019t shrink the model class, as such layers can easily simulate global mean-pooling layers by setting all \ufb01lter weights to \u0010 \u2113 \u221a k \u0011\u22121 . Unfortunately, such a layer will most likely have a very large \ufb01lter bank matrix, and this may signi\ufb01cantly increase the total number of parameters in the network. 59 \fMartens et al. Another option is something we call weighted mean-pooling layers, which are de\ufb01ned similarly to regular mean-pooling layers, except that the vector of 1\u2019s in Equation 18 is replaced by a learnable vector of weights w, giving f(Z) = Zw. These can clearly also simulate regular mean-pooling layers (by setting w = (1/\u2113)1). And because they don\u2019t introduce as many new parameters as the previous option, they have a better chance of preserving the model\u2019s generalization characteristics. Suppose f is a weighted mean-pooling layer. As with combined layers, we can compute an APKF f \u03baf which approximates f\u2019s PKF \u03baf at initialization-time (with high probability), under the assumption that w \u223cN(0, (1/\u2113)I). As shown in Appendix E, this is given by f \u03baf(\u03a3Z,Z\u2032) = 1 \u2113 \" tr \u0000 1 kZ\u22a4Z \u0001 tr \u0000 1 kZ\u22a4Z\u2032\u0001 tr \u0010 1 kZ\u2032\u22a4Z \u0011 tr \u0010 1 kZ\u2032\u22a4Z\u2032\u0011 # , and becomes a more precise approximation as \u2113grows, provided that the average absolute input c value across all pairs of locations simultaneously goes to zero. This latter condition could occur if the feature vectors for nearly all pairs of locations in the network\u2019s input image have a small absolute cosine similarity, since c values always decrease with depth under DKS. It could also occur for more general input images if DKS is used with a large \u03b6 parameter, and f is su\ufb03cient deep into the network (so that the C map up to f maps most inputs to a relatively small region near zero). 
f \u03baf has more favorable properties than the PKF for mean pooling layers given in Equation 19. In particular, since the output q/c value is just the average across locations of the input q/c values, the property of uniform q values of 1 will be preserved, thus enabling our C map analysis to be valid for subsequent layers. Complicating the story somewhat is the fact that the c values for di\ufb00erent locations are averaged together, as our analysis up to this point has assumed them to be separate and independently evolving. This means that geometric information about each individual location is no longer strictly preserved, as the averaging operation makes recovery of the individual c values impossible. It is true however that each location still has a proportional e\ufb00ect on the output, and thus the degeneration discussed in Section 12.4 can still be avoided, as long as the C map of the subnetwork up to f is su\ufb03ciently well-behaved. Because the input c values are given by Cg(ci) for locations i = 1, 2, . . . , \u2113(for some ci\u2019s) where g is the subnetwork up to f, and Cg is a convex and increasing function on [0, 1] (by Section 11.2), we have that Cg 1 n X i ci ! \u2a7d1 \u2113 X i Cg(ci) \u2a7dCg(max i {ci}), for ci \u2208[0, 1]. The output c value associated f \u03baf, which is given by 1 \u2113 P i Cg(ci), is thus closely related to Cg as applied to a single input. It is on this basis that we will treat f as having an identity C map in our computations which, for lack of a richer multi-dimensional theory, seems like a reasonable heuristic. 60 \fDeep Kernel Shaping In some of our experiments on convolutional networks we tried using a weighted meanpooling layer in place of the usual global mean-pooling operation near the end of the network. While this worked well, we found that it didn\u2019t provide any optimization bene\ufb01t. (See Appendix N.9 for these experiments.) Thus in our main set of experiments with DKS we continued to use standard global mean-pooling layers, despite their apparent incompatibility with our theoretical framework. 20.2 Max-pooling layers A max-pooling layer is similar to a mean-pooling layer, but instead of taking the mean of a set of location vectors, it takes the coordinate-wise maximum. 20.2.1 PKFs for max-pooling layers and approximate map properties In the previous subsection we saw that the PKF for a mean-pooling layer, despite having a simple form that depended only on the IPM (\u03a3Z,Z\u2032) of its input, had unfavorable properties that made it impossible to properly analyze within our Q/C map framework. The situation with max-pooling layers is arguably even worse, as its PKF has a more general dependence on its input, and thus cannot be composed with APKFs of combined layers to form a network-level PKF approximation. But despite this, we can still make some non-trivial statements about a max-pooling layer\u2019s PKF that will be useful in understanding how DKS may possibly still apply to networks containing such layers. Consider a patch of locations over which the max operation is applied. If all the location vectors in the patch are nearly equal to each other, then the max operation simply outputs a close approximation of the vector in the center of the patch, and thus has a PKF approximated by the identity (for non-dropped locations). It is therefore reasonable to approximate max-pooling layers as having local Q and C maps equal to the identity in this case. 
This situation is fairly common when max-pooling layers are used very early in the network, since nearby pixels tend to be similar to each other in natural image data, which, assuming well-behaved C maps, means that the corresponding vectors for subsequent layers will be similar too (as measured by their cosine similarity). Analogously, if the pixels within a patch fall into two tight clusters, which can happen if the patch overlaps an edge or the corner of an object, then the subsequent vectors will also fall into two tight clusters. If this is the situation, and we assume uniform q values and wide layers, then it can be shown that the output q value of a max-pooling layer will be closely approximated by its input q value (so that we can treat the layer as having an identity Q map). This is shown in Appendix G, and relies on the somewhat surprising fact that E_{(u1,u2) ∼ N(0, [1, c ; c, 1])}[max{u1, u2}^2] = 1 for all c ∈ [−1, 1], along with the mild assumption that the max-pooling layer in question is directly preceded by a convolutional layer. Note that for clusters of 3 or more pixel values this approximation doesn't work, although the output q value will only deviate from the input q value by a factor that grows slowly with the number of clusters. 20.2.2 Possible replacements for max-pooling layers As with mean-pooling layers, we could consider replacing max-pooling layers with ones that are handled within our framework. However, unlike the mean operation, the max operation is difficult to elegantly simulate using our standard layer types, and so there are no obvious substitutions that would preserve the model class. In some architectures, max-pooling layers are used merely to reduce the size of a feature map, with the particular choice of pooling operation (max or mean) being unimportant from a modeling perspective. In such cases it may thus be quite reasonable to replace max-pooling layers with weighted mean-pooling layers. 21. Summary of our method 21.1 Architectural requirements In order to apply DKS we must observe certain architectural requirements on the network. These are summarized below: 1. The network must be constructed from combined layers (defined as an affine layer followed optionally by a nonlinear layer), weighted sums between the outputs of two or more affine layers (followed optionally by a nonlinear layer), concatenations of two or more feature maps along their channel dimension, mean-pooling layers, and max-pooling layers (although the latter should be used with caution as discussed in Section 20.2). 2. Batch Normalization layers must not be used. However, Layer Normalization layers are allowed, provided that their associated gain and bias parameters are initialized as per Item 5 of this list. (See Section 19 for additional discussion of normalization layers.) 3. Nonlinear layers must use element-wise activation functions. Positively homogeneous activations, such as RELU, are allowed but not recommended, as they lead to a limited version of DKS (as discussed in Section 18.6). 4. Multiplication operations, such as those used in attention mechanisms, are also not allowed (although we hypothesize that DKS can be extended to handle these in the future). 5. The network should not contain any extraneous trainable parameters such as scalar multiplications or shifts, unless these parameters have no effect at initialization time (e.g. a shift that is initialized to 0).
Constant scalar multiplications are allowed, although these will typically be removed as part of the application of DKS. 6. Similarly, constant multiplications and shifts are not allowed, except as part of weighted sum operations. (Note that if such constants are normally required for the network to be trainable with standard optimization methods, it's likely that DKS will render them unnecessary/obsolete.) 21.2 Execution steps To apply DKS to a given network one performs the following steps: 1. Initialize each bias vector to 0, and each weight matrix/filter bank using either a Gaussian Delta initialization or an Orthogonal Delta initialization (which are both defined in Section 4). 2. Choose a value larger than 1 for the scalar hyperparameter ζ (such as 1.5 or 1.1). Note that ζ roughly corresponds to the "degree of nonlinearity" of the network. See Section 16.6 for additional discussion of this. As observed in Section 16.6, lower values of ζ tend to be associated with slightly better generalization, at the cost of somewhat slower optimization. 3. (optional) Apply some version of Per-Location Normalization (PLN) to the input data. Note that this can be done entirely online, as it only requires the current example, and not any aggregate statistics over the entire training set. (See Section 10.2 for more details.) 4. Remove any constant scalar multiply operations from the network. 5. Replace any weighted sums between feature maps Y1, . . . , Yn with "normalized sums" of the form Σ_{i=1}^n w_i Y_i, for weights w_i satisfying Σ_{i=1}^n w_i^2 = 1 (which may be chosen freely). Note that if w_i = w_j for all i and j this simplifies to (1/√n) Σ_{i=1}^n Y_i, although other choices are permitted and may indeed be preferable (as demonstrated in Section 23.3). 6. (optional) Replace any mean-pooling layers with "weighted mean-pooling layers" as defined in Section 20.1.2. 7. Compute the network's maximal slope function µ(ψ) (or some approximation of this). One can use the recipe given in Section 21.3. 8. Using the fact that µ is a 1D monotonically increasing function, compute µ^{-1}(ζ) using binary search (or a similar such method). 9. For each distinct activation function φ in the network, do the following: i. Given µ^{-1}(ζ), solve for α, β, γ, and δ as per Section 18.4, using the numerical methods described in Section 22 (or some alternative). ii. Replace all instances of φ in the network with φ̂(u) ≡ γ(φ(αu + β) + δ). 21.3 Recipe for computing slope polynomials and maximal slope functions As per Sections 17.1 and 17.2, the maximal slope function µ(ψ) is computed as µ(ψ) = max_f [p_f(ψ)], where p_f(ψ) is the network polynomial of f (whose computation we will describe below), and the maximum is taken over all subnetworks f of the entire network. Note that while the number of distinct subnetworks may be quadratic (or worse) in the depth, when computing the maximum we may ignore any subnetwork that can be composed with another one to form a strictly larger subnetwork, or, more generally, any subnetwork whose slope polynomial is a factor of the slope polynomial of another subnetwork.
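As a rough illustration of Steps 8 and 9(ii) above, the following sketch (ours; it assumes µ, φ, and the constants α, β, γ, δ are already available from the procedures of Sections 18.4 and 21.3, and uses a tolerance on ψ rather than on the function value) inverts µ by bisection and constructs the transformed activation φ̂:

```python
import numpy as np

def invert_maximal_slope(mu, zeta, tol=1e-6):
    """Find psi >= 1 with mu(psi) = zeta by bisection (mu is increasing, mu(1) = 1)."""
    lo, hi = 1.0, 2.0
    while mu(hi) < zeta:            # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mu(mid) < zeta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def transform_activation(phi, alpha, beta, gamma, delta):
    """Step 9(ii): phi_hat(u) = gamma * (phi(alpha * u + beta) + delta)."""
    return lambda u: gamma * (phi(alpha * u + beta) + delta)

# example: a plain chain of 10 combined layers has mu(psi) = psi**10
psi_star = invert_maximal_slope(lambda psi: psi**10, zeta=1.5)
print(psi_star, psi_star**10)       # psi_star**10 should be close to 1.5
```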
For a given subnetwork f, the computational graph of the network polynomial pf(\u03c8) may be obtained from the computational graph of f by recursively applying the following rules (which are essentially just the result of applying automatic di\ufb00erentiation to the graph of Cf(c) and then evaluating the result at c = 1 to obtain pf(\u03c8) = C\u2032 f(1)): 1. Composition g \u25e6h of two subnetworks g and h maps to pg(\u03c8)ph(\u03c8). 2. A\ufb03ne layers map to the constant 1. 3. Nonlinear layers map to \u03c8. 4. Concatenation operations (over the channel dimension) between the outputs of subnetworks g1, g2, . . . , gn map to 1 Pn i=1 ki (k1pg1(\u03c8) + k2pg2(\u03c8) + \u00b7 \u00b7 \u00b7 + knpgn(\u03c8)), where ki is the number of output channels of gi. 5. Normalized sums with weights w1, w2, . . . , wn over the outputs of subnetworks g1, g2, . . . , gn map10 to w2 1pg1(\u03c8) + w2 2pg2(\u03c8) + \u00b7 \u00b7 \u00b7 + w2 npgn(\u03c8). 6. Layer normalization layers map to the constant 1. 7. Max-pooling and weighted mean-pooling layers map to the constant 1. (Standard meanpooling layers can be heuristically mapped to 1, although they technically break our network polynomial formalism.) 8. The network\u2019s input maps to the constant 1. Note that this recipe for computing can be generalized to compute C\u2032 f(1) for networks in which C\u2032 g(1) may be di\ufb00erent for each nonlinear layer (i.e. not equal to some common \u03c8) by mapping nonlinear layers g to C\u2032 g(1) instead of \u03c8. Provided that the network architecture is compatible with DKS, a quick way to compute slope polynomials is to count the number k of nonlinear layers in a given sequence of layers (to get a slope polynomial of \u03c8k for that subnetwork), and then apply the rule for normalized sums where appropriate. See Sections 17.2 and 23.4 for instructive examples of how to the compute maximal slope function for certain architectures. 10. For reference, when computing the slope polynomial for a network whose q values may vary between layers (which won\u2019t come up when applying DKS), normalized sums instead map to 1 Pn i=1 w2 i qi (w2 1qipg1(\u03c8) + w2 2qipg2(\u03c8) + \u00b7 \u00b7 \u00b7 + w2 nqipgn(\u03c8)), where qi is the output q value associated with gi. 64 \fDeep Kernel Shaping 22. Some implementation details Through careful optimization and engineering, and a lot of trial and error, we were able to get the runtime of DKS down to a few seconds for typical large-scale networks. In this section we describe the aspects of this that were the most challenging and non-obvious. 22.1 Solving for the \u03b1 and \u03b2 constants As we saw in Section 18.4, \ufb01nding the appropriate \u03b1 and \u03b2 constants for a particular activation function \u03c6 amounts to solving a system of two nonlinear equations. Since we don\u2019t have a closed form solution for this, we must resort to numerical methods. After trying several possibilities, we got the best results using scipy.optimize.root(), which is part of the popular SciPy Python package (Jones et al., 2001\u2013). We call this with the arguments method=\"hybr\" and jac=False, and leave all other options at their defaults. This invokes an implementation of the modi\ufb01ed Powell algorithm (Powell, 1964). Because the implicit regression loss of the system is non-convex in general, the solver sometimes fails to \ufb01nd a solution from its initial guess. 
Our solution to this is simply to call it repeatedly with di\ufb00erent initial guesses until it returns successfully. In our experiments it never took more than 4 calls to \ufb01nd a solution for any of the eleven activation functions we tried. We took our \ufb01rst six initial guesses for (\u03b1, \u03b2) from the list [(1, 0), (1, 1), (1, \u22121), (0.1, 0), (0.1, 1), (0.1, \u22121)] and then generated subsequent ones randomly using numpy.random.uniform(low=0.0, high=2.0) for \u03b1, and numpy.random.uniform(low=-3.0, high=3.0) for \u03b2. 22.2 High-quality estimates of the expectations In order to guarantee fast and reliable convergence, the solver scipy.optimize.root requires the LHS values of the nonlinear equations to be computed to a very high precision. Moreover, the system we need to solve may be numerically sensitive in general, regardless of the particular solver algorithm used. Thus it is important that we compute high quality estimates of the four Gaussian expectations that determine these LHS values. A naive estimate based on sampling x\u2019s from N(0, 1) will perform very poorly, as its variance scales as 1/n, where n is the number of sample points. Fortunately, all four of the Gaussian expectations are one dimensional, which opens the door to heavy-duty numerical integration methods capable of achieving near numerical precision in a reasonable amount of time. After experimenting with several such methods built into SciPy, we found that the best performing one by far was scipy.integrate.fixed_quad, which implements \ufb01xedorder Gauss-Legendre quadrature to compute one dimensional de\ufb01nite integrals. We used this method to approximate Gaussian expectations as Ex\u223cN(0,1)[h(x)] = 1 \u221a 2\u03c0 Z \u221e \u2212\u221e h(x) exp(\u2212x2/2)dx \u2248 1 \u221a 2\u03c0 Z 10 \u221210 h(x) exp(\u2212x2/2)dx, which is justi\ufb01ed by the fact that the integrand is negligible for values of x outside of [\u221210, 10] for all the h(x)\u2019s we care about. To ensure high quality estimates, we set the \u201corder\u201d parameter n to 105 (which is considered very high). 65 \fMartens et al. By far the most expensive part of fixed_quad\u2019s computation is the calculation of the sample points and weights (via the roots_legendre function), which can take around 15 minutes on a modern CPU when n = 105. Fortunately, because this part of the computation only depends on the order parameter n, it is cached by fixed_quad during the \ufb01rst call and reused in subsequent calls, allowing these calls to execute almost instantly. In our codebase we went a step further and stored the results of roots_legendre in a \ufb01le which was then loaded and monkey-patched into the SciPy library before the \ufb01rst call to fixed_quad, thus eliminating this 15 minute overhead completely. 22.3 Computing and inverting the maximal slope function in software As they are determined by a network\u2019s architecture, maximal slope functions can in principle be computed automatically and e\ufb03ciently, according to recipe in Section 21.3. Automating this in software requires access to a high-level description of network\u2019s structure, which could possibly be extracted from the API calls made to the neural network library. In our experiments we just computed them by hand, as we only experimented with a small handful of architectures. An alternative to manual computation or automation is to approximate the maximal slope function by a reasonable surrogate. 
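Putting Sections 22.1 and 22.2 together, a compressed sketch of the mechanics (ours, not the authors' code) is given below. The two target conditions inside `residual` are arbitrary illustrative stand-ins for the actual system of Section 18.4; only the quadrature and retry logic follow the description above:

```python
import numpy as np
from scipy.integrate import fixed_quad
from scipy.optimize import root

def gauss_expect(h, order=10_000):
    """E_{x ~ N(0,1)}[h(x)] via fixed-order Gauss-Legendre quadrature on [-10, 10]."""
    integrand = lambda x: h(x) * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    value, _ = fixed_quad(integrand, -10.0, 10.0, n=order)
    return value

def solve_alpha_beta(residual, guesses):
    """Call scipy.optimize.root repeatedly until a solution is found."""
    for guess in guesses:
        sol = root(residual, guess, method="hybr", jac=False)
        if sol.success:
            return sol.x
    raise RuntimeError("no solution found from the supplied initial guesses")

phi = np.tanh
def residual(ab):
    a, b = ab
    m1 = gauss_expect(lambda x: phi(a * x + b))
    m2 = gauss_expect(lambda x: phi(a * x + b) ** 2)
    return [m1 - 0.0, m2 - 0.5]   # illustrative targets, NOT the Section 18.4 system

guesses = [(1, 0), (1, 1), (1, -1), (0.1, 0), (0.1, 1), (0.1, -1)]
print(solve_alpha_beta(residual, guesses))
```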
As for such surrogate approximations: for networks whose "deepest path" has D nonlinear layers, a natural choice is ψ^D. However, for network architectures that involve extensive use of skip connections, such as ResNets, this approximation may be very poor, as it fails to account for how the skip connections make the network's computation "more linear". (See Section 23.4 for more details.) As discussed in Section 17.1, maximal slope functions are continuous 1-dimensional functions of ψ that are strictly increasing (as long as the network has at least one nonlinear layer), and thus they can be inverted up to a fixed tolerance using a simple binary search. Our implementation started with an interval of [1, 2] for the solution, and kept doubling the maximum if the solution was determined to lie outside of it. We used a convergence tolerance of 10^{-6} on the function value, and used full-precision 64-bit floating point numbers in all computations (which are cheap and can be done on the CPU). Note that for certain simple special cases, such as when µ(ψ) = ψ^D, closed-form solutions can be used if desired. 23. Application to various modified ResNets In this section we will discuss the very commonly used ResNet architecture (He et al., 2016a,b) and certain modified versions of it, and then go over the application of DKS to these different versions. In addition to being instructive in the application of DKS, these examples will be the primary focus of our later experiments. 23.1 Standard ResNet-V2 architectures and terminology In this work we will only consider the "V2" version of the ResNet architecture (He et al., 2016b) as opposed to the "V1" version (He et al., 2016a), as the former is conceptually simpler and is usually preferred by practitioners. We will also concentrate on the version of ResNet-V2 designed specifically for use in 224x224 Imagenet classification, noting that versions of the architecture for other problems and datasets can differ, especially in terms of their first and last few layers. We will denote by D the "depth parameter" of the ResNet architecture, which corresponds to the total number of nonlinear layers plus 1. The standard values for D are 50, 101, and 152. The input is assumed to be 224x224 feature maps with 3-dimensional pixel features (possibly extended to 4 dimensions if PLN is used as per Section 10.2). This is then fed into a 7x7 convolutional layer with 64 output channels and a stride of 2. Following this is a 3x3 max-pooling layer with a stride of 2. These two early layers are particular to the Imagenet version of ResNet-V2, and have the purpose of reducing the dimension of the feature representation to a smaller size for processing by subsequent layers. Following this is a long sequence of residual blocks that form the large bulk of the network. Each of these is parameterized by an associated output channel dimension, a "bottleneck" channel dimension, and a stride, which can differ from block to block. The particular values for these parameters are determined by D. Let k be the input channel dimension, d the output channel dimension, b the bottleneck channel dimension, and s the stride associated with a particular residual block. The residual block contains two "branches" from its input that get summed together at the output.
The \ufb01rst is called the residual branch, and consists of the following sequence of layers: a Batch Normalization (BN) layer, a RELU nonlinear layer, a 1x1 convolutional layer with stride 1 and output channel dimension b, a BN layer, a RELU nonlinear layer, a 3x3 convolutional layer with stride s and output channel dimension b, a BN layer, a RELU nonlinear layer, and \ufb01nally a 1x1 convolutional layer with stride 1 and output channel dimension d. The second branch is called the shortcut branch, and consists of the identity map if k = d and s = 1, or a 2x2 max-pooling layer if k = d and s > 1. Otherwise, if k \u0338= d (which is the case for transition blocks), it consists of the following sequence of layers: a BN layer, a RELU nonlinear layer11, and a 1x1 convolutional layer with stride 1 and output channel dimension d. The shortcut branch is meant to act as an identity function or a reasonable approximation to one, except when its input and output channel dimensions di\ufb00er (which is only the case for transition blocks). Meanwhile, the residual branch, which always contains 3 nonlinear layers, is what performs the interesting nonlinear computation in the network. After the sequence of residual blocks, there is a BN layer and a RELU nonlinear layer, followed by a \u201cglobal\u201d mean-pooling layer which reduces the number of locations down to 1. The \ufb01nal layer of the network is a 1x1 convolutional layer operating on this single location (or equivalently a fully-connected layer), whose output channel dimension is the number of classes. Convolutional layers in ResNets typically do not have bias parameters, since these are made pointless by the mean-subtraction done by the BN layers that always immediately follow them. To compensate for this, BN layers will sometimes include trainable gain and/or bias parameters applied after their centering and normalization operations. 11. This nonlinear layer in the shortcut branch does not contribute to the total number of nonlinear layers for the purposes of computing D. Moreover, it can be identi\ufb01ed with the \ufb01rst nonlinear layer of the residual branch (as they compute the same thing), in which case both the shortcut branch and residual branches can be seen as \u201cbranching o\ufb00\u201d from this layer\u2019s output (instead of from the block\u2019s original input). 67 \fMartens et al. 23.2 Modi\ufb01ed ResNet architecture In this subsection we will describe the particular changes we made to the ResNet-V2 architecture in order to conform to the requirements listed in Section 21.1 and thus achieve compatibility with DKS. These changes don\u2019t perfectly preserve the model class, although we tried to make them as innocuous as possible in order to facilitate the fairest comparison to standard ResNets in our experiments. Note that other modi\ufb01cation schemes are possible, and the one we present here is not meant to be in any way \u201ccanonical\u201d for this or any other architecture. As BN layers are incompatible with DKS we elect to remove them, while adding learnable bias parameters (initialized at zero) back into the convolutional layers. Another option would be to replace BN layers with Layer Normalization layers, as the latter are compatible with DKS. As discussed in Section 18.6, while technically supported, RELU activations force us to use a diminished version of DKS. 
Thus in our main experiments we often used alternative activation functions instead, including ones with a "RELU-like" shape, such as softplus. Max-pooling layers are provisionally supported by DKS, especially if they occur early in the network and have a relatively small kernel size. (See Section 20.2 for more details about this.) The 3x3 max-pooling layer near the beginning of the network meets these criteria, and so we elect to leave it in. 23.3 Further modifications made by DKS Having achieved compatibility by making the above changes, we can now apply DKS to the resulting modified ResNet architecture. In this subsection we will describe the subsequent changes made to the network as part of the execution of DKS itself, whose steps are outlined in Section 21.2. Note that some of these steps are optional, or involve degrees of freedom, and are all designed to preserve (or slightly expand) the model class. First, the mean-pooling layer near the end of the network can optionally be replaced by a weighted mean-pooling layer (as described in Section 20.1.2). While this replacement is necessary for the Q/C map computations to make sense, we found that it didn't significantly improve optimization performance in our preliminary experiments, and so we didn't do it in our main ones. One possible explanation for this finding is that because there is only a single nonlinear layer after the mean-pooling layer, the non-uniform q values produced by the latter can have only a limited effect on the network's overall C map. Next, we must replace the sum operations, which occur in ResNets at the end of each residual block (where the residual and shortcut branches are combined together), with normalized sums. Each normalized sum involves two weights and one constraint (that the squares of the weights sum to 1), and so has one degree of freedom. A natural choice is to set both weights to 1/√2, which naively seems like the best option for reproducing the behavior of an unmodified ResNet. However, as we will discuss in Section 26.6, for networks that forgo normalization layers and/or use bounded activation functions (as our modified ResNets do), placing more weight on the shortcut branch will result in better behavior that more closely matches that of a standard ResNet. This is confirmed in our experiments in Appendix M.1. The final modification made to the network as part of DKS is to replace all of the activation functions with their transformed versions, as described in Step 9 of Section 21.2. 23.4 Computing the maximal slope function for modified ResNets Having described the modifications we made to ResNets to achieve compatibility with DKS, and the further ones made by DKS itself, we are now in a position to compute the maximal slope function following the recipe in Section 21.3. For simplicity, we will assume that the normalized sums at the end of the residual blocks each have a weight of w on their residual branches. (A weight of √(1 − w^2) on the shortcut branches is then implied.) The subnetwork before the sequence of residual blocks is just an affine layer followed by a max-pooling layer, and so has a slope polynomial of 1. Consider any non-transition block. The slope polynomial for the residual branch is ψ^3, as it has 3 nonlinear layers, and is 1 for the shortcut branch. Thus, the overall slope polynomial for the block is w^2 ψ^3 + (1 − w^2).
Similarly, the slope polynomial for a transition block (which has a single combined layer in its shortcut branch) is w2\u03c83 + (1 \u2212w2)\u03c8. The subnetwork after the sequence of blocks consists of nonlinear layer, a (possibly weighted) mean-pooling layer, and then an a\ufb03ne layer, and so has a slope polynomial of \u03c8. Noting that the total number of residual blocks is (D \u22122)/3, and the number transition blocks is 4 for all values of D, the overall slope polynomial for the network f (which has a single input and output and so is a subnetwork of itself) is pf(\u03c8) = (w2\u03c83 + (1 \u2212w2))(D\u22122)/3\u22124(w2\u03c83 + (1 \u2212w2)\u03c8)4\u03c8, which can be simpli\ufb01ed to pf(\u03c8) = (w2\u03c83 + (1 \u2212w2))(D\u221214)/3(w2\u03c82 + (1 \u2212w2))4\u03c85. (20) The only subnetworks of f that don\u2019t compose with other subnetworks to form larger ones are the residual branches, and so their slope polynomials are the only other ones to consider when computing the maximal slope function \u00b5(\u03c8). As they are simple compositions of layers with 3 (or 2) nonlinear layers total, their slope polynomials are \u03c83 (or \u03c82). Noting that \u03c83 (or \u03c82) is a factor of pf(\u03c8), we may ignore them when computing the maximum, and thus conclude that \u00b5(\u03c8) = pf(\u03c8). It is worthwhile to note the dependency of \u00b5(\u03c8) on the value of the residual branch weight w. For w = 0 we have \u00b5(\u03c8) = \u03c85, and for w = 1 we have \u00b5(\u03c8) = \u03c8D\u22121. More generally, \u00b5(\u03c8) will be a degree D \u22121 polynomial in \u03c8, where the coe\ufb03cients (which must sum to 1) will more heavily favor high order terms as w approaches 1, and low-order terms as w approaches 0. Thus, much like \u03c8, w can be thought of as controlling the overall \u201cdegree of nonlinearity\u201d of the network f (as quanti\ufb01ed by C\u2032 f(1)). 23.5 Equivalent standard convolutional networks To help demonstrate the power of DKS in our experiments, we will consider a skip-connectionfree convolutional network architecture obtained from the above modi\ufb01ed ResNet-V2 architecture by simple removal of the shortcut branches. The resulting architecture retains 69 \fMartens et al. the channel dimensions, strides, etc., of the original ResNet architecture, including it use of \u201cbottleneck\u201d layers in the residual branches, but is otherwise a standard deep convolutional network. Given the straightforward sequential structure of this architecture, with its D \u22121 nonlinear layers, its network polynomial and maximal slope function are simply \u03c8D\u22121 (which corresponds to the w = 0 case above). 23.6 \u201cWide\u201d ResNet variants for CIFAR-10 For our experiments involving the CIFAR-10 dataset (Krizhevsky and Hinton, 2009) we will make use of Wide Residual Networks (Zagoruyko and Komodakis, 2016), which are a well-known variant of the standard ResNet architecture. The Wide-ResNets we used in our experiments di\ufb00er from standard ResNets in the following ways: \u2022 The initial subnetwork before the sequence of residual blocks consists of just a 3x3 convolutional layer with 16 output channels and a stride of 1. There is no max-pooling layer. 
\u2022 Given per-block parameters s and d, a residual branch consist of the following sequence: a BN layer, a RELU nonlinear layer, a 3x3 convolutional layer with stride s and output channel dimension d, a BN layer, and RELU nonlinear layer, and \ufb01nally a 3x3 convolutional layer with stride 1 and output channel dimension d. Note that there are only 2 nonlinear layers instead of the 3 normally present in standard ResNets. \u2022 There is a global \u201cwidth\u201d parameter which acts as multiplier on the output channel dimensions of all the residual blocks. In our experiments this was set to 2. \u2022 The scheme for mapping D to a con\ufb01guration for the residual blocks is generalized to work with any value of D such that D \u22124 is divisible by 6. Here, D represents the number of nonlinear layers plus 3, so that there are (D \u22124)/2 total blocks, 3 of which are transition blocks. As we did for standard ResNets, to achieve compatibility of Wide-ResNets with DKS we will modify the architecture by removing the BN layers, adding back in learnable biases to the convolutional layers, and (possibly) replacing the RELU activation functions with various alternatives. Following a similar derivation to the one in Subsection 23.4, the maximal slope function for these networks is given by \u00b5(\u03c8) = (w2\u03c82 + (1 \u2212w2))(D\u221210)/2(w2\u03c8 + (1 \u2212w2))3\u03c84, where like before we have assumed weights w and \u221a 1 \u2212w2 for the weighted sum operation at the end of each residual block. We can also de\ufb01ne a skip-connection-free version of this architecture by removing the shortcut branches. The maximal slope polynomial associated with such a network is \u03c8D\u22123, as there are D \u22123 total nonlinear layers. 70 \fDeep Kernel Shaping Part IV Additional analysis of DKS and related methods 24. Neural Tangent Kernel analysis Recent advances in the theoretical understanding of neural network training have shown that highly overparameterized networks behave like linear functions of their parameters over the entire course of training by gradient descent (Jacot et al., 2018; Li and Liang, 2018; Du et al., 2019b,a; Allen-Zhu et al., 2019; Arora et al., 2019). This analysis works by approximating the network function by its own 1st-order Taylor series with respect to its parameters (centered at their initial values), and then showing that the parameters remain close enough to their initial values throughout training that the approximation remains a good one. Under this approximation, which becomes exact as the width of each layer goes to in\ufb01nity, training a neural network with gradient descent resembles kernel regression, with a kernel12 known as the Neural Tangent Kernel (NTK) that is computed from the network\u2019s Jacobian at initialization time. This enables one to accurately predict the functional form of the trained network, and precisely characterize the rate of convergence to this solution by gradient descent. While this type of analysis has been extended to exact and approximate natural gradient descent methods (Zhang et al., 2019b; Cai et al., 2019; Karakida and Osawa, 2020), we will only consider the gradient descent version in this work. 
Even though real networks trained on challenging datasets like Imagenet are typically not wide enough to satisfy the formal requirements of NTK theory (especially when random dataset transformations are employed), the setting where this 1st-order Taylor approximation works well – known colloquially as the "NTK regime" – may still serve as a rough analogy to more realistic training. It is thus interesting to consider what effect the network's architecture, activation functions, and initialization have on the NTK, and what this says about training in the NTK regime. In this section we will review the basics of NTK theory, characterize the NTK in terms of the properties of the network's C map, and show how the C map degeneration which happens naturally in deep networks (as shown in Section 12) leads to a form for the NTK which implies very slow optimization and/or very poor generalization. We will then show how the form of the NTK under DKS, assuming a reasonable choice for the global slope bound ζ, is much nicer, and leaves open the possibility of fast optimization and good generalization. (Although actually proving that it necessarily leads to these things would require assumptions on the dataset, and is beyond the scope of this work.) Note that the Neural Tangent Kernel is related to, but distinct from, the kernels we have been analyzing in this work so far. 24.1 Assumptions of this analysis For the remainder of this section we will assume that the network in question is a standard feed-forward MLP comprised of D fully-connected combined layers, where the last such layer has an identity activation function. We will represent the network as the function f(x, θ) for input vector x and parameter vector θ. For notational simplicity we will assume that the network's output dimension is 1. The parameter vector θ will be split across layers into D segments denoted θ_i for i = 1, 2, . . . , D. Each θ_i corresponds to √(d_i) W_i (as opposed to W_i itself), where W_i is the weight matrix for layer i and d_i is its input dimension. This non-standard parameterization, which is known as the NTK parameterization, is what we apply gradient descent on, and is required for the NTK to have its desired properties. We will not consider bias parameters in this analysis. The training set will consist of n input-output pairs (x_i, y_i), i = 1, . . . , n, satisfying ∥x_i∥^2 = d_0 for all i (where d_0 is the network's input dimension), and the objective function used to train the network will be the standard mean squared error: (1/2) Σ_{i=1}^n (y_i − f(x_i, θ))^2. We will denote by θ(t) (or θ_i(t)) the parameters at iteration t of optimization. θ(0) (or θ_i(0)) will denote their random initial value, which is determined by a Gaussian fan-in initialization applied to the standard parameters (i.e. the original W_i's). In addition to our global assumption that the activation functions are infinitely differentiable everywhere except for a finite set of points, we will also assume that they are Lipschitz continuous, which is required in order to apply the results in Jacot et al. (2018). 24.2 NTK definition The Neural Tangent Kernel (NTK) is given by Θ(x, x′) = Σ_{i=1}^D Θ_i(x, x′), where Θ_i(x, x′) ∈ R denotes the inner product ⟨∂f(x, θ)/∂θ_i |_{θ=θ(0)}, ∂f(x′, θ)/∂θ_i |_{θ=θ(0)}⟩.
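To make this definition concrete, the following toy sketch (ours) computes the two per-layer NTK inner products for a one-hidden-layer network in the NTK parameterization, using hand-derived gradients:

```python
import numpy as np

def per_layer_ntk(x, xp, W, v, phi, dphi):
    """Per-layer NTK inner products for f(x) = v.T @ phi(W @ x / sqrt(d)) / sqrt(m)."""
    d, m = x.size, v.size
    h, hp = W @ x / np.sqrt(d), W @ xp / np.sqrt(d)
    # layer 1: gradient wrt W[j, k] is v[j] * phi'(h[j]) * x[k] / sqrt(m * d)
    theta1 = (x @ xp / d) * np.sum(v**2 * dphi(h) * dphi(hp)) / m
    # layer 2: gradient wrt v[j] is phi(h[j]) / sqrt(m)
    theta2 = phi(h) @ phi(hp) / m
    return theta1, theta2

d, m = 10, 4096
rng = np.random.default_rng(0)
W, v = rng.normal(size=(m, d)), rng.normal(size=m)
x, xp = rng.normal(size=d), rng.normal(size=d)
x, xp = x * np.sqrt(d) / np.linalg.norm(x), xp * np.sqrt(d) / np.linalg.norm(xp)
print(per_layer_ntk(x, xp, W, v, np.tanh, lambda u: 1 - np.tanh(u)**2))
```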
Given the NTK \u0398(x, x\u2032) and training dataset (xi, yi)n i=1, the NTK matrix K \u2208Rn\u00d7n is de\ufb01ned by Ki = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u0398(x1, x1) \u0398(x1, x2) \u00b7 \u00b7 \u00b7 \u0398(x1, xn) \u0398(x2, x1) \u0398(x2, x2) . . . . . . ... \u0398(xn, x1) \u00b7 \u00b7 \u00b7 \u0398(xn, xn) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb . We can similarly de\ufb01ne the per-layer NTK matrix Ki \u2208Rn\u00d7n for layer i by Ki = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u0398i(x1, x1) \u0398i(x1, x2) \u00b7 \u00b7 \u00b7 \u0398i(x1, xn) \u0398i(x2, x1) \u0398i(x2, x2) . . . . . . ... \u0398i(xn, x1) \u00b7 \u00b7 \u00b7 \u0398i(xn, xn) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , noting that K = PD i=1 Ki. 72 \fDeep Kernel Shaping 24.3 Training in the NTK regime: a brief review There are various NTK-type results that bound the convergence rate of gradient descent in the case of \ufb01nite width layers (e.g Du et al., 2019b). However, such results are complicated to prove, and seem to be fairly pessimistic in terms of the rate of convergence13 they predict, and the width they require. On the other hand, the situation simpli\ufb01es considerably in the limit of in\ufb01nite width (for all layers but input and output ones, whose width is \ufb01xed), and quite simple and elegant expressions exist for both the convergence rate of gradient descent, and the function computed at the converged solution (Jacot et al., 2018). While the in\ufb01nite width limit is unrealistic, and totally ignores how kernel approximation error a\ufb00ects the theoretical predictions, we will nonetheless use it in our analysis for the sake of simplicity and clarity. 24.3.1 Notation and basic results Before we begin we must de\ufb01ne some additional notation. Let y = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 y1 y2 . . . yn \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb\u2208Rn, f(t) = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 f(x1, \u03b8(t)) f(x2, \u03b8(t)) . . . f(xn, \u03b8(t)) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb\u2208Rn, ki(x) = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 \u0398i(x, x1) \u0398i(x, x2) . . . \u0398i(x, xn) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb\u2208Rn, and de\ufb01ne k(x) = PD i=1 ki(x). In the in\ufb01nite width limit, provided that K is positive de\ufb01nite (i.e. non-singular), a standard result of NTK theory is that the t-th iterate \u03b8(t) produced by gradient descent (with learning rate \u03b7) satis\ufb01es f(x, \u03b8(t)) = k(x)\u22a4K\u22121(I \u2212(I \u2212\u03b7K)t)(y \u2212f(0)) + f(x, \u03b8(0)) (21) for all valid x. If 0 < \u03b7 \u2a7d1/\u03bb1(K), where \u03bbi(K) denotes the i-th largest eigenvalue of K, then \u03b8(t) converges to some \u03b8\u22c6. At this solution, the form of f is given by taking t \u2192\u221ein the above equation, yielding: f (x, \u03b8\u22c6) = k(x)\u22a4K\u22121(y \u2212f(0)) + f(x, \u03b8(0)). (22) 24.3.2 Convergence behavior on the training set Observing that k(xi)\u22a4is the i-th row of K, we can \u201cstack\u201d both sides of Equation 21 to obtain f(t) = KK\u22121(I \u2212(I \u2212\u03b7K)t)(y \u2212f(0)) + f(0) = y \u2212f(0) \u2212(I \u2212\u03b7K)t(y \u2212f(0)) + f(0) = y \u2212(I \u2212\u03b7K)t(y \u2212f(0)) = y \u2212 n X i=1 (1 \u2212\u03b7\u03bbi(K))t(v\u22a4 i (y \u2212f(0)))vi, 13. While these results predict exponential convergence, the associated rate constants are close enough to 1 that convergence requires a prohibitively large number of iterations. 73 \fMartens et al. 
where v_i denotes the eigenvector of K corresponding to the eigenvalue λ_i(K). Plugging this expression into the objective function and using the fact that the v_i's are mutually orthogonal gives the following expression for the training loss: (1/2) Σ_{i=1}^n (y_i − f(x_i, θ(t)))^2 = (1/2)∥y − f(t)∥^2 = (1/2) Σ_{i=1}^n (1 − ηλ_i(K))^{2t} (v_i⊤(y − f(0)))^2. When 0 < η ⩽ 1/λ_1(K), this expression converges to 0, which implies that θ⋆ is indeed a global minimizer of the objective. Moreover, if we employ early stopping, then directions in function space corresponding to eigenvectors with smaller eigenvalues in K will have converged less than the others. As observed by Jacot et al. (2018), this may help explain early stopping's regularization benefits. A complete picture of the convergence of the objective requires us to know the entire spectrum of K and the coefficients v_i⊤(y − f(0)). However, assuming that (v_n⊤(y − f(0)))^2 is significantly large, the convergence speed will tend to (1 − ηλ_n(K))^{2t} asymptotically. With the optimal learning rate of η = 1/λ_1(K) this becomes (1 − 1/cond(K))^{2t}, where cond(K) = λ_1(K)/λ_n(K) is the condition number of K. 24.3.3 Training only certain layers If we only optimize layer i, we may replace K by K_i and k(x) by k_i(x) in the above formulas (provided that K_i is positive definite) in order to obtain a description of the resulting convergence. As before, the training error will converge to zero at a speed determined by the eigenvalues of K_i. An analogous statement is also true if we optimize an arbitrary subset S of the layers, in which case we replace K by Σ_{i∈S} K_i and k(x) by Σ_{i∈S} k_i(x). Note that because we are assuming infinitely wide layers there is no paradox here; each layer has enough capacity to memorize the training data entirely by itself. The form of the NTK matrix allows us to gain insight into the relative contribution of each layer to the overall solution. A layer whose per-layer NTK matrix is much smaller than those of the other layers (in the PSD sense: A is smaller than B, written A ≺ B, if B − A is positive definite) will have much smaller gradients, and the changes to its weights made during training will have a much smaller effect on the overall solution. While training any single layer is sufficient to achieve zero error in the infinite width case, what this arguably means for realistically sized networks is that layers with very small per-layer NTKs will train much slower than other layers. 24.4 An elegant expression for the limiting NTK using C maps The inner product which defines the per-layer NTK Θ_i(x, x′) is a random variable that depends on the random initial value θ(0) of the parameters θ. In the limit as the width of the network goes to infinity, Θ_i(x, x′) converges in probability to a deterministic function, in much the same way that the network's kernel function does. Indeed, an approximation result directly analogous to Theorem 1 exists for the NTK (Arora et al., 2019, Theorem 3.1). As we are performing our analysis in the infinite width limit, we will take Θ_i(x, x′) to be this limiting value going forward.
74 \fDeep Kernel Shaping Let gi be the subnetwork that maps the network\u2019s input to the input of the i-th combined layer (which is the output of the (i \u22121)-th combined layer when i \u2a7e2). Jacot et al. (2018) show that \u0398i(x, x\u2032) = \u0002 f \u03bagi(\u03a3x,x\u2032) \u0003 1,2 D Y j=i h Eu\u223cN(0,g \u03bagj (\u03a3x,x\u2032))[\u03c6\u2032 j(u)\u03c6\u2032 j(u)\u22a4] i 1,2 , where we note that f \u03bag1(\u03a3x,x\u2032) = x\u22a4x\u2032/d0 (since g1 is the identity), and that the quantities inside of [\u00b7]1,2 are 2 \u00d7 2 matrices (so that [\u00b7]1,2 extracts their top corner entry). Let fi represent the i-th combined layer of the network, qi its output q value (with q0 = 1 being the q value for the network\u2019s input), and \u03c6i its activation function. By Equations 7, 10, and 11, we can write the above expression for the NTK as \u0398i(x, x\u2032) = qi\u22121Cgi(c0) D Y j=i \u0393\u03c6\u2032 j(Cgj(c0), qj\u22121, qj\u22121) = qi\u22121Cgi(c0) D Y j=i qj qj\u22121 C\u2032 fj(Cgj(c0)) = qD+1Cgi(c0) D Y j=i C\u2032 fj(Cgj(c0)), where c0 \u2261x\u22a4x\u2032/d0 is the c value for the network\u2019s input (recalling that \u2225x\u22252 = \u2225x\u2032\u22252 = d0 by assumption). Denote by hi the subnetwork that maps the input of fi to the network\u2019s \ufb01nal output. Since we have hi = fD \u25e6fD\u22121 \u25e6\u00b7 \u00b7 \u00b7 \u25e6fi it follows that Chi = CfD \u25e6CfD\u22121\u25e6\u00b7 \u00b7 \u00b7 \u25e6Cfi, and so by the chain rule we have C\u2032 hi(Cgi(c0)) = QD j=i C\u2032 fj(Cgj(c0)). Plugging this into the above equation we arrive at the elegant formula \u0398i(x, x\u2032) = qDCgi(c0)C\u2032 hi(Cgi(c0)). (23) While this formula has only been proven for deep MLPs (consisting of a composition of a sequence combined layers), we conjecture that it holds for more general architectures. 24.5 The form of the NTK matrix given a degenerate C map and implications for gradient descent training In this subsection we will consider the situation where a deep network f has a \u201cdegenerate\u201d C map Cf that sends nearly all input c values to a small region around some value c\u2217 (in the sense of Section 12), and argue that this implies slow optimization and/or poor generalization in the NTK regime. This analysis can be seen as a more rigorous version of the intuitive argument given in Section 12.4, and overlaps with the results of Xiao et al. (2020). 75 \fMartens et al. 24.5.1 Additional assumptions of this analysis To simplify the discussion, we will assume that each combined layer has the same activation function (except the last one, which is required to be linear), which means that the network\u2019s C map Cf is just the composition of D \u22121 copies of some local C map C . We will further assume that C is itself \u201cwell-behaved\u201d in the sense that C\u2032(1) is reasonably close to 1, so that the overall C map Cf is degenerate only because D is large. Moreover, any su\ufb03ciently \u201cdeep\u201d subnetwork of f is also degenerate, and any su\ufb03ciently \u201cshallow\u201d subnetwork is well-behaved. Additionally, we will assume that qD = 1 (without loss of generality), and that there are no two distinct inputs x and x\u2032, from either the training or test set, for which x\u22a4x\u2032/d0 is very close to 1 or \u22121 (which would imply that either x \u2248x\u2032 or x \u2248\u2212x\u2032 given our previous assumption that \u2225x\u22252 = \u2225x\u2032\u22252 = d0). 
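Under the assumptions above, Equation 23 is straightforward to evaluate numerically for a deep MLP with a common local C map, by iterating C to obtain C_{g_i}(c0) and applying the chain rule for C′_{h_i}. The sketch below (ours) uses the well-known normalized-RELU (arc-cosine) local C map, C(c) = (√(1 − c^2) + (π − arccos c)·c)/π, which has C′(1) = 1 and therefore corresponds to the second collapsing case analyzed below:

```python
import numpy as np

def relu_cmap(c):
    """Local C map of a combined layer with (normalized) RELU activation."""
    c = np.clip(c, -1.0, 1.0)
    return (np.sqrt(1 - c**2) + (np.pi - np.arccos(c)) * c) / np.pi

def relu_cmap_deriv(c):
    return (np.pi - np.arccos(np.clip(c, -1.0, 1.0))) / np.pi

def per_layer_ntk_from_cmaps(c0, depth, C, dC):
    """Limiting per-layer NTKs Theta_i(x, x') via Equation 23, with q_D = 1.

    Layers 1..depth-1 share the local C map C; layer `depth` is linear.
    """
    cs = [c0]                        # cs[i-1] = C_{g_i}(c0)
    for _ in range(depth - 1):
        cs.append(C(cs[-1]))
    thetas = []
    for i in range(1, depth + 1):
        slope = 1.0                  # C' of the final (linear) layer is 1
        for j in range(i, depth):
            slope *= dC(cs[j - 1])   # chain rule: C'_{h_i} evaluated at C_{g_i}(c0)
        thetas.append(cs[i - 1] * slope)
    return thetas

thetas = per_layer_ntk_from_cmaps(c0=0.2, depth=50, C=relu_cmap, dC=relu_cmap_deriv)
print([round(t, 4) for t in thetas[:3]], "...", [round(t, 4) for t in thetas[-3:]])
```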
24.5.2 NTK matrix estimates As in Section 12.4 there are two main cases to consider for c\u2217: the \u201ccollapsing case\u201d, where c\u2217= 1, and the \u201cexploding case\u201d, where 0 \u2a7dc\u2217< 1 with c\u2217\u0338\u22481. For the collapsing case we must have C\u2032(1) \u2a7d1 since c\u2217= 1 is an attractive \ufb01xed point of C. And for the exploding case we must have that 1 is a non-attractive \ufb01xed point (since C can only have one such point by Proposition 10), and so C\u2032(1) > 1. For the layer index i there are three cases to consider: i is small so that the layer is \u201cearly\u201d in f, D \u2212i is small so that the layer is \u201clate\u201d in f, and the default case where neither i nor D \u2212i are small, so that the layer is the \u201cmiddle\u201d of the network. The following table gives estimates of the layer-wise NTK matrix for each combination of cases. These estimates are computed in Appendix H.1 using a semi-rigorous style of argument. The results of these computations have been checked numerically for the case of RELU and Erf activation functions (whose C maps have convenient analytic forms). Here, the symbol E denotes the matrix of 1\u2019s. Type of degeneration Early layers Middle layers Later layers Collapsing case (c\u2217 = 1) w/ C\u2032(1) < 1 Ki \u22480 Ki \u22480 Ki \u2248C\u2032(1)D\u2212iE Collapsing case (c\u2217 = 1) w/ C\u2032(1) = 1 Ki \u2248I Ki \u2248I + \u03b1i(E \u2212I) where 0 = \u03b11 \u2a7d \u03b12 \u2a7d\u00b7 \u00b7 \u00b7 \u2a7d\u03b1D = 1 (Observed empirically, and conjectured to be true in general.) Ki \u2248E Exploding case (0 \u2a7d c\u2217< 1, c\u2217\u0338\u22481) w/ C\u2032(1) > 1 Ki \u2248C\u2032(1)D\u2212iI (very large) Ki \u2248C\u2032(1)D\u2212iI (very large, but still much smaller than for early layers) Ki \u2248C\u2032(1)D\u2212iI + c\u2217C\u2032(c\u2217)D\u2212i(E \u2212I) (not very large) 76 \fDeep Kernel Shaping From the above values we can compute an estimate of the overall NTK matrix. This is given the following table, which is computed in Appendix H.2: Type of degeneration Overall NTK matrix Collapsing case (c\u2217= 1) w/ C\u2032(1) < 1 K \u2248 1 1\u2212C\u2032(1)E Collapsing case (c\u2217= 1) w/ C\u2032(1) = 1 K \u2248D(I + \u00af \u03b1(E \u2212I)) for some 0 \u2a7d\u00af \u03b1 \u2a7d1 (Observed empirically for deep RELU networks with \u00af \u03b1 = 1/4, and for other networks with \u00af \u03b1 = 1/3. Conjectured to be true in general.) Exploding case (0 \u2a7dc\u2217< 1, c\u2217\u0338\u22481) w/ C\u2032(1) > 1 K \u22481\u2212C\u2032(1)D 1\u2212C\u2032(1) I + c\u2217 1\u2212C\u2032(c\u2217)(E \u2212I) 24.5.3 Implications for speed and generalization of gradient descent training There are several implications for gradient descent training that we can infer from the above estimates, all of which are bad. Firstly, in the collapsing case with C\u2032(1) < 1, and in the exploding case, the magnitude of the per-layer NTK matrices di\ufb00er substantially over the network. This means that the layers whose per-layer NTKs are not amoung the largest will train very slowly. Given that such layers are only a small fraction of the total, this implies that only a few layers of the network will have the potential to train quickly. 
While this is technically su\ufb03cient to minimize the training loss in the NTK regime, in practice, our networks often won\u2019t be highly overparameterized to the extent required by NTK theory, and so we actually will need to train all of the layers in order to \ufb01t the dataset. Insofar as the NTK regime is an analogy to this more realistic setting, this analysis thus predicts slow training. Secondly, in the collapsing case with C\u2032(1) < 1, we have that the per-layer and overall NTK matrices are approximately rank 1, which implies that they have a very high condition number. This means neither the individual layers, nor the overall network, will train quickly, and so the training loss will take a very long time to be minimized no matter what subset of layers we elect to train. Finally, in all cases we have that the approximate form of the per-layer and overall NTK matrices does not depend on the input training data. Additionally, we have that the vector k(x) does not depend on the training data, since by the derivations in Appendices H.1 and H.2, \u0398i(x, x\u2032) doesn\u2019t depend on x or x\u2032 (except to detect when x \u2248x\u2032 or x \u2248\u2212x\u2032). And f(0) also won\u2019t depend on the training data, since it will either look like a multiple of the ones vector in the collapsing case, or a completely random vector in the exploding case. It thus follows from Equation 22 that the predictions made by the fully trained network for a test point x will not actually depend on the input training data in any signi\ufb01cant way, making it impossible for the network to generalize. 24.5.4 Comparison to the results of Xiao et al. (2020) The results of this subsection overlap with those of Xiao et al. (2020), who derive approximations to the overall NTK matrix for deep networks (although not for individual layers) 77 \fMartens et al. using a di\ufb00erent style of argument. Their results mostly agree with ours, except that for the case C\u2032(1) = 1 they predict a universal value of \u00af \u03b1 = 1/3 (whereas we observe \u00af \u03b1 = 1/4 for deep RELU networks), and for C\u2032(1) = 1 they estimate the second (and less signi\ufb01cant) term of K to be 1 1\u2212C\u2032(c\u2217)(E \u2212I), whereas we predict c\u2217 1\u2212C\u2032(c\u2217)(E \u2212I). Numerical studies we performed on the Q/C maps of deep RELU/erf networks seem to con\ufb01rm our predictions in these cases. 24.6 The form of the NTK under DKS The following theorem is proved in Appendix H.3 using Theorem 13 and Equation 23. Theorem 27 Suppose that \u0398i is the per-layer NTK (for layer i) of a network conforming to the assumptions of Section 24.1 which has been transformed using DKS with global slope bound \u03b6. Then we have \f \f \f \f\u0398i(x, x\u2032) \u22121 d0 x\u22a4x\u2032 \f \f \f \f \u2a7d11(\u03b6 \u22121). The bound in this theorem establishes that each per-layer NTK matrix Ki converges to the training data Gram matrix X\u22a4X/d0 as \u03b6 approaches 1, where X = \u0002 x1 x2 \u00b7 \u00b7 \u00b7 xn \u0003 . It also allows us to reason about larger values of \u03b6 to a limited extent, although it arguably only becomes interesting when \u03b6 < 1 + 1 11. (We suspect that with a tighter and/or more detailed analysis, interesting statements about the relationship of \u03b6 and the layer-wise NTK could be made for larger values of \u03b6.) 
If X\u22a4X/d0 is low rank, which it will be in the common case that dim(x) < n, this means that the K will approach a low-rank matrix as \u03b6 approaches 1, which corresponds to slow/impossible training under gradient descent. Intuitively this makes sense, since a value of \u03b6 very close (or equal) to 1 corresponds to a network that looks almost perfectly linear at initialization time (by Theorem 13), and thus could fail to properly train as per the discussion in Section 14.2. Indeed, the foundational works on NTK only predict that the NTK will be positive de\ufb01nite (i.e. full-rank) when the activation functions are nonpolynomial (and thus nonlinear) functions, and DKS makes them approach linear functions as \u03b6 \u21921. So while a value of \u03b6 very close to 1 is clearly a bad choice, a value somewhat close to 1 (such as 1.5; which we use in most of our experiments) will allow K to retain some of the structure of X\u22a4X/d0, thereby ensuring that the network\u2019s prediction (given in Equation 22) depends on the training data and thus has the potential to generalize. It will also allow K to deviate enough from X\u22a4X/d0 to be full rank with a potentially small condition number (which would imply fast training). Unfortunately, cond(K) is di\ufb03cult to accurately estimate without full knowledge of both X\u22a4X/d0 and the behavior of the C map over its entire domain, and existing methods to bound cond(K) (e.g. Du et al., 2019b) seem unlikely to produce useful results in our context. We leave the problem of accurately estimating the value of cond(K) under DKS to future work. 78 \fDeep Kernel Shaping 25. Variance propagation, signal propagation, and their relationship to approximate kernel analysis Many previous methods for constructing and initializing neural networks are justi\ufb01ed using analysis frameworks which attempt to characterize the initialization-time behavior of neural networks. The two most prominent examples of such frameworks are \u201cvariance propagation\u201d and \u201csignal propagation\u201d. In this section we review these frameworks, highlight certain mathematical issues with them, and provide counterexamples to their general claims where possible. We also relate them and their predictions to the kernel approximation framework underlying DKS, and advocate for the latter as a more powerful and mathematically rigorous alternative. 25.1 The original variance propagation analysis of LeCun et al. (1998b) The earliest such analysis that we are aware of appeared in LeCun et al. (1998b, Section 4.6), which we will call \u201cvariance propagation\u201d. It is based on the idea of computing the per-unit variance for each layer as a function of the per-unit variance of the previous layer, where the underlying distribution is over training cases. Typically, all units within a layer will have the same variance, and so only one scalar needs to be \u201cpropagated\u201d. For fully-connected layers, LeCun et al. (1998b) argue that the variance for a particular output unit is equal to the \u21132-norm of that unit\u2019s vector of weights, multiplied by the perunit variance of the input. For this to hold, they require that the input units are uncorrelated with each other and have the same variance. For nonlinear layers they assume the use of the activation functions that approximately preserve the mean and variance of their inputs. 
As an example, they give a transformed tanh activation function which resembles the identity function within a prescribed range of "typical" inputs around 0. Assuming that each weight vector has a norm of approximately 1, and that the training input data is whitened, one might try to apply this single-layer argument recursively to all layers of the network, starting from the input. Unfortunately, this doesn't seem to work. While a whitening transform applied to the training data ensures that the input units of the first layer are uncorrelated with variance 1, uncorrelatedness will fail to hold for the output of this layer, making it impossible to apply the same argument for subsequent layers. As no assumption is made about the network's parameters, beyond that the weight vectors must have a norm of ∼1, one can design a counterexample where these variance computations break down after the first fully-connected layer. For example, consider a linear neural network with a 1-dimensional input x, a 2-dimensional hidden layer, and a 1-dimensional output y, defined by the equations h1 = x, h2 = −x, and y = (1/√2) h1 + (1/√2) h2. The input weight vector for each unit has norm 1, and yet y = 0 for all x, so that the network will not preserve the per-unit variance of x. (Intuitively, this is because h1 and h2 have strong negative correlation.) Note that this example can easily be generalized to arbitrarily wide layers.

Another more subtle issue with the analysis in LeCun et al. (1998b) is that the approximation errors arising from the analysis of nonlinear layers can easily accumulate with depth, and may push the activation functions out of their assumed range of inputs.

25.2 Klambauer et al.'s (2017) modified variance propagation

Klambauer et al. (2017) present a modified version of LeCun et al.'s (1998b) variance propagation analysis, which would seem to address the issues we've highlighted. They do this by arguing that as long as the inputs to a fully-connected layer are independent, its outputs (which are fixed linear combinations of its inputs as determined by the weights) will be approximately Gaussian distributed, thanks to the Central Limit Theorem (CLT) and the assumption of wide layers. Using this approximation they then compute the moments of the subsequent nonlinear layer using Gaussian integrals (similar to those that define Q maps), without having to make any strong assumptions on its activation function. Unfortunately, the CLT is not actually applicable to arbitrary weighted sums of variables, even when those variables are perfectly iid. For example, if the weights of the sum are (1, 0, . . . , 0), then the output will have the same distribution as the first input unit, which won't be Gaussian in general. Even if we somehow ruled out such weight vectors as having "low probability", and focused only on weight vectors for which the CLT would apply, there would still be major difficulties to overcome. Firstly, since we need to show that the output units of a fully-connected layer are approximately independent (in order to recursively apply the same analysis to subsequent layers), we would need to show that they are jointly Gaussian distributed with a diagonal covariance matrix. This would require the use of one of the multi-dimensional versions of the CLT, all of which require significant additional hypotheses compared to the standard one-dimensional versions.
Secondly, the approximate independence provided by the CLT would not be sufficient to recursively apply the same argument to subsequent layers, as the CLT typically requires exact independence of the variables under summation [15]. Thirdly, because the CLT requires that the number of variables under summation is large, it usually won't be applicable to the first layer of the network (where the input dimension is a fixed property of the training data).

[15] In an attempt to preempt this criticism, Klambauer et al. (2017) refer to Bradley (1981), which proves a version of the CLT that relaxes the independence assumption. However, this result assumes a very specific type of weak dependence which is unlikely to be satisfied in this setting, and also only applies to one-dimensional variables.

25.3 The version of variance propagation in Glorot and Bengio (2010)

Glorot and Bengio (2010) present a modified version of LeCun et al.'s (1998b) variance propagation analysis, which has formed the basis of many subsequent analyses over the years. The first change they make is to compute per-unit variances with respect to the joint distribution on network inputs and parameters (as opposed to just the inputs). Their second modification is to directly assume that the network's activation functions behave like the identity function over typical inputs, thus implying that they preserve the mean and variance of their inputs.

Assuming that the weights of a given fully-connected layer are iid with mean zero and variance σ², and that its input units are mean zero with variance v, they show that the per-unit variance of the layer's output is simply kσ²v, where k is the input dimension. Notably, by computing variances with respect to training cases and parameters, they do not require the input units to a layer to be uncorrelated. (Intuitively, this is because the multiplication by the independent mean-zero random weights causes any two random variables to become decorrelated.) This addresses one of the main problems of the original variance propagation analysis, and allows it to be recursively applied over the entire network without issue. Unfortunately, the modification also introduces a new issue not present in the prior analysis: the variances no longer refer to any single network (with a particular parameter setting), but rather to a distribution over networks. This makes the interpretation of these variances unclear, and represents a subtle but serious issue in their analysis. One could possibly argue that, with high probability, a single network sampled from this distribution would have variances similar to those computed over the whole distribution. However, this would require additional hypotheses, since otherwise there are simple counterexamples to the general claim. For example, consider a linear network with D ≫ 1 fully-connected layers of width 1, where the biases are zero and the weights are sampled iid from N(0, 1). The function computed by this network amounts to just multiplying its input by the product of D scalar weights drawn independently from N(0, 1). Variance propagation would predict that such a network will exactly preserve the variance of its input, or in other words, that the product of these D weights would be approximately 1.
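The following minimal numpy sketch is a Monte Carlo simulation of this counterexample (the depth D and sample count are hypothetical); it illustrates how badly the prediction fails in a typical realization.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_samples = 50, 100_000            # hypothetical depth and number of sampled networks

# Each sampled "network" multiplies its input by the product of D iid N(0, 1) weights.
products = np.prod(rng.standard_normal((n_samples, D)), axis=1)

# The true variance of this product is exactly 1 (the second moments multiply),
# yet essentially every sampled network squashes its input towards zero:
print("median |product|:              ", np.median(np.abs(products)))
print("fraction with |product| < 1e-3:", np.mean(np.abs(products) < 1e-3))
# (The empirical variance over any feasible number of samples is also wildly unreliable
# here, since it is dominated by astronomically rare, astronomically large products.)
```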
However, the distribution of the product of D independent samples from N(0, 1) is highly concentrated around zero for even moderately large values of D (which can be seen via Monte Carlo simulation), despite the fact that the variance of this product is 1. These counterexamples are not restricted to narrow networks either. If, for example, the weights are drawn iid from a heavy-tailed distribution that is highly concentrated around zero and has variance 1/k (where k is the width), then even for large k there will be an overwhelming probability that all the weights will be close to zero, leading to a network which "squashes" its input. This contradicts the prediction made by variance propagation, which is that such a network would approximately preserve the variance of its input.

25.4 Extension of variance propagation to RELUs

He et al. (2015) extend the version of variance propagation in Glorot and Bengio (2010) to deal specifically with RELU activation functions, as there is no zero-centered range of inputs for which RELUs resemble the identity function. To do this, they introduce the additional hypotheses that the weights have a symmetric distribution around zero, that the biases are initialized to zero, and that each RELU layer is directly preceded by a fully-connected layer. Given these hypotheses, it follows that the input to each RELU layer is distributed symmetrically around zero, and thus the expected squared value of a RELU unit will be exactly 1/2 times its input variance. One can then use this expected squared value in place of the input variance for the variance propagation calculation at the next layer, since multiplication by the mean-zero weights of said layer will restore a mean of zero.

While it deals with nonlinear layers in a cleaner fashion (at least for RELU networks), this analysis retains the central issue present in Glorot and Bengio's (2010) analysis, which is that the variances do not necessarily describe the behavior of a single network. Moreover, while the variance propagation formulas were originally derived for fully-connected networks, He et al. (2015) also apply them to convolutional networks without any additional justification. (This is problematic since the weight sharing violates the iid weights assumption.)

25.5 Extensions of variance propagation to normalizer-free residual networks

Zhang et al. (2019c) [16], De and Smith (2020), and Shao et al. (2020) [17] adopt Glorot and Bengio's (2010) variance propagation framework to perform an analysis of ResNets (which are described in detail in Section 23.1) without normalization layers. While a mostly straightforward application of the existing variance propagation formulas, they require an additional one which says that the output variance of a residual block is the sum of the output variances of its two branches. While the formula itself is correct (given the conceits of variance propagation), as far as we can tell it has never been properly justified. Zhang et al. (2019c) attempt to justify it using Var(x + y) = E[Var(y|x)] + Var(x), although this formula is shown to be incorrect by taking x = y. Shao et al. (2020) give a different argument which assumes that the input units to a residual block are uncorrelated (which won't be true in general), and which treats the weights as fixed instead of random variables (which violates one of the core premises of variance propagation). For reference, we will give an argument here for fully-connected networks.
Let x be the input to the residual block and z be the input to the final fully-connected layer of the residual branch, which has weight matrix W. Given that E[W] = 0 and W is independent of x and z, we have

Cov(Wz, x) = E[Wz(x − E[x])⊤] = E[W] E[z(x − E[x])⊤] = 0,

where we have used E[Wz] = E[W] E[z] = 0. From this it follows that Var(Wz + x) = Var(Wz) + Var(x).

[16] Zhang et al. (2019c) claim that their analysis actually describes the case of a constant network input, where only the network's parameters are random variables. However, all of their variance propagation equations express the per-unit output variance of a layer as a function of its per-unit input variance. As this variance will be zero for the first layer when the network's input is constant, this claim appears to be unsupported.

[17] Technically, Shao et al. (2020) never actually specify what distribution they compute variances over. However, the only interpretation which makes sense, given the majority of their derivations, is that this is the distribution over network inputs and parameters. Despite this, they treat the parameters as fixed in some of their discussions.

25.6 Signal propagation (aka mean field analysis)

Closely related to variance propagation is an approach for understanding the initialization-time behavior of neural networks commonly referred to as "signal propagation" [18] or "mean field analysis" (Poole et al., 2016). In this approach, instead of propagating variances, one propagates per-unit expected squared values, or expected products between corresponding units from two copies of the same network (each fed different inputs). Here, expectations are taken with respect to the distribution on network parameters, and on the two network inputs (which may be correlated). In order to propagate through nonlinear layers, one approximates their input as being Gaussian distributed with mean zero and covariance matrix determined by the expectations from the previous layer. As will be explained in the next subsection, the expectations computed under signal propagation end up being equal to q and m values (or c values, after suitable normalization) as we have defined them in this work. Indeed, Q/C maps were originally derived by Poole et al. (2016) in the context of signal propagation.

The mathematical justification of signal propagation given by Poole et al. (2016) in the case of a single fully-connected combined layer is roughly as follows. One starts from the assumption that the k entries of the input vector are iid random variables with expected squared values given by q. Then, multiplication by an m×k random matrix with mean-zero iid entries of variance σ²/k produces m outputs, each of which has bounded variance σ²q, and is a sum of k iid terms. For large k one applies the Central Limit Theorem (CLT) to conclude that these sums will be approximately iid Gaussian distributed, with mean zero and variance σ²q. It then follows that the entry-wise outputs of the nonlinear layer are approximately iid, with expected squared values given by Gaussian integrals. Expected products are then handled using a straightforward generalisation of this argument. In principle, this single-layer argument can be applied recursively to a composition of combined layers, always starting from the hypothesis that the entry-wise inputs to a given layer are iid with some known expected squared value.
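To illustrate the Gaussian-integral computation at the heart of this argument, here is a small, hedged numpy sketch: it compares the Gaussian-integral ("Q map") prediction for the expected squared output of a tanh nonlinear layer against an empirical wide random layer. The widths, input q value, and choice of tanh are all hypothetical.

```python
import numpy as np

def q_map_tanh(q, n_quad=80):
    """Expected squared value of tanh(z) for z ~ N(0, q), via Gauss-Hermite quadrature."""
    # hermegauss gives nodes/weights for the weight function exp(-x^2 / 2).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    return np.sum(weights * np.tanh(np.sqrt(q) * nodes) ** 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
k, m, q_in = 2000, 2000, 1.3                    # hypothetical widths and input q value

x = np.sqrt(q_in) * rng.standard_normal(k)      # input with expected squared value q_in
W = rng.standard_normal((m, k)) / np.sqrt(k)    # fan-in scaled weights (variance 1/k)
h = np.tanh(W @ x)                              # one combined layer

print("empirical mean squared output:", np.mean(h ** 2))
print("Gaussian-integral prediction: ", q_map_tanh(q_in))
```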
Unfortunately, this recursive approach runs into the same problems with the CLT discussed in the last paragraph of Section 25.2, which cannot be easily repaired [19]. As with variance propagation, the interpretation of the expectations computed under signal propagation isn't clear. In particular, there is no obvious relationship between these expectations and the properties of a single randomly initialized network. Signal propagation's two main advantages over variance propagation are that it handles nonlinearities in a much more general and precise way (via Gaussian integrals), and that it also describes the propagation of expected unit products for correlated network inputs. These features make it a much more powerful framework for understanding the initialization-time behavior of neural networks, and for designing initialization schemes. However, as we will discuss next, approximate kernel analysis has the same advantages while also being mathematically rigorous and more clearly interpretable.

[18] Note that some works (e.g. De and Smith, 2020) use the term "signal propagation" to refer to certain versions of what we have been calling "variance propagation". In such works elements of both types of analysis often appear, and the precise distinction between them becomes a bit blurry.

[19] To the best of our knowledge, the only mathematically rigorous CLT-based treatment of the width-limiting behavior of random networks is that of Matthews et al. (2018), which is given in the context of approximate kernel analysis. It's not immediately obvious if/how Matthews et al.'s (2018) arguments can be used to rigorously justify signal propagation.

25.7 Relationship of variance/signal propagation to approximate kernel analysis

When σ² = 1 (in the notation of the previous subsection), signal propagation's defining equations for fully-connected combined layers are precisely equivalent to the local Q and C maps computed under approximate kernel analysis. We may thus interpret the quantities propagated by signal propagation as q and m values, and their normalized versions as c values. And for other values of σ², a similar statement holds for a slightly generalized notion of Q/C maps (as given in Poole et al. (2016)). In some sense, this equivalence acts as a mathematical justification of signal propagation's equations, although with a different meaning for the quantities being propagated. In particular, the expected squared values computed by signal propagation correspond to q values, and can thus be thought of as approximations of dimension-normalized squared norms of the associated feature map's vectors. Similarly, the expected products computed by signal propagation correspond to m values, and can thus be viewed as approximations of the dimension-normalized inner product between two such vectors (or the same vector for two different network inputs). Given this relationship between approximate kernel analysis and signal propagation, we can also relate approximate kernel analysis to Glorot and Bengio's (2010) version of variance propagation (and its extensions). To see this, note that insofar as the units in each layer have mean zero (under variance/signal propagation's assumed distribution), their expected squared values are equal to their variances, in which case variance propagation also computes q values.
Moreover, even when the means are not zero, as is the case for RELU networks, one can modify variance propagation to deal directly with expected squared values in a manner similar to He et al. (2015). While approximate kernel analysis provides the same level of description as signal propagation, it has several advantages. The first is that it is based on a rigorous mathematical theory with clearly defined hypotheses and probabilistic error estimates. This allows one to be confident in determining which architectures it can be applied to, and to have a rigorous pathway for extending it to new architectures (which we exploited in our treatment of normalization and pooling layers). The second advantage is that the quantities it computes have a clear relationship to the (high probability) initialization-time behavior of actual randomly initialized networks with definite inputs and weights. The third is that it applies to networks with low-dimensional inputs, for which the CLT-based arguments commonly used to justify signal propagation are inapplicable. And while these advantages come at the cost of additional/stronger hypotheses (such as Gaussian or SUO-distributed weights), such hypotheses are likely required in order for the predictions made by the equations to be accurate in general.

25.8 Extensions of variance/signal propagation to networks with Batch Normalization layers

De and Smith (2020) propose an extension of variance propagation to networks with Batch Normalization (BN) layers, in order to analyze standard ResNets. To do this, they argue that for large mini-batches, BN layers will compute a per-unit empirical variance which closely matches the per-unit variance computed under variance propagation. Thus, after normalization by the square root of this variance, the per-unit output variance of a BN layer will always be 1, regardless of its per-unit input variance. There appears to be a subtle issue with this argument. As discussed above, variance propagation is a faithful description of a single randomly initialized network (with definite inputs) only insofar as the variances it computes correspond to q values. q values in turn are approximations of dimension-normalized squared norms of entire activation vectors, and have no clear relationship to the properties of individual units within a layer of such a network. So in general, the empirical unit-wise variances computed by a BN layer will not correspond to the variances computed by variance propagation, even approximately. It is conceivable that with additional hypotheses on the batch size and distribution, the network, and the initialization, the empirical distributions of the values of each input unit to a BN layer (taken across the mini-batch, for fixed parameters) might all have roughly the same variance with high probability, in which case the approximation in De and Smith (2020) would be a valid one. However, formalizing this would likely be quite difficult. Yang et al. (2019) propose an extension of signal propagation/mean field analysis to networks where BN layers are inserted between affine and nonlinear layers. To facilitate this, they propagate B × B matrices representing the expected products between different copies of the same unit for each of B possible inputs to the network, where B is the batch size.
For networks without BN layers the propagation equations decompose nicely in terms of low-dimensional Q/C maps, while for networks with BN layers no such decomposition exists, due to the way different elements of the mini-batch interact in BN layers. Yang et al. (2019) are nonetheless able to analyze the resulting high-dimensional propagation equations using various sophisticated approximations, and characterize their asymptotic fixed point behavior. In their approach, the batch size, as well as the distribution used to generate the mini-batch, are encoded via the initial B × B matrix of expectations passed to the first layer. Thus, their analysis is not dependent on B being large, or on any strong distributional assumptions about the mini-batch. However, unlike for networks with element-wise nonlinearities (where approximate kernel analysis gives rise to the same equations as signal propagation), there is no mathematically rigorous derivation of their generalized equations for BN layers. Thus, it remains an open question as to whether these equations are accurate approximations in the sense of Section 5.8. Yang et al. (2019) provide empirical evidence that they are, at least for fairly wide networks with some commonly used activation functions.

26. Review and analysis of related approaches for constructing and initializing deep neural networks

In this section we will review some existing techniques, both standard and otherwise, for constructing and initializing neural networks in order to make them easier to train. We will further analyze these techniques from the perspective of approximate kernel theory by exploiting the latter's connections with variance/signal propagation established in Section 25.

26.1 The fan-in initialization and related approaches derived from variance propagation

The classical fan-in initialization (LeCun et al., 1998b) for fully-connected neural networks samples filter weights iid with mean zero and variance 1/k, where k is the total input dimension. Here, 1/k is precisely the value required for their version of variance propagation to predict constant per-unit variances throughout the entire network, with other values leading to an exponential increase or decrease with depth. However, as LeCun et al.'s (1998b) variance propagation analysis is only a reasonable approximation for activation functions that preserve the mean and variance of their input, their initialization will tend to fail in more realistic settings, especially as the network's depth increases (He et al., 2015). Glorot and Bengio (2010) use their own version of variance propagation to motivate a similar initialization scheme, where the weight variance is 2/(k+m), with m being the output dimension. This choice is made as a "compromise" between the following two competing constraints: that the per-unit variances should be uniform across layers, and that the variances of the per-layer gradients should also be similarly uniform. We would argue that 2/(k+m) is not a good choice in general compared to 1/k. For example, if the layer widths alternate between n and 2n for some n ⩾ 1, running variance propagation across two consecutive combined layers would predict a decrease in the variance by a factor n(2n)(2/(n + 2n))² = 8/9. This will lead to an exponential convergence of the variance towards zero as depth increases.
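A minimal numpy simulation (with hypothetical widths and depth) of a deep linear network with alternating layer widths makes this concrete: with the 2/(k+m) variance the mean squared activations shrink by roughly 8/9 every two layers, while the 1/k fan-in choice shows no such systematic decay.

```python
import numpy as np

def mean_sq_per_layer(widths, weight_var_fn, n_inputs=512, seed=0):
    """Propagate random inputs through a deep linear net; record per-layer mean squared activations."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((widths[0], n_inputs))
    stats = []
    for k, m in zip(widths[:-1], widths[1:]):
        W = rng.standard_normal((m, k)) * np.sqrt(weight_var_fn(k, m))
        x = W @ x
        stats.append(np.mean(x ** 2))
    return stats

n = 64
widths = [n if i % 2 == 0 else 2 * n for i in range(41)]   # alternating widths n, 2n, n, 2n, ...

glorot = mean_sq_per_layer(widths, lambda k, m: 2.0 / (k + m))
fan_in = mean_sq_per_layer(widths, lambda k, m: 1.0 / k)

print("Glorot 2/(k+m), layers 10/20/40:", glorot[9], glorot[19], glorot[39])  # shrinks roughly like (8/9)^(depth/2)
print("fan-in 1/k,     layers 10/20/40:", fan_in[9], fan_in[19], fan_in[39])  # no systematic exponential decay
```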
Meanwhile, for the choice 1/k, variance propagation (or approximate kernel analysis) predicts no such exponential increase or decrease for any choice of widths. He et al. (2015) propose to use a weight variance of 2/k specifically in RELU networks, which compensates for the fact that RELU nonlinear layers decrease the variance by a factor of 1/2 instead of preserving it. This is based on their expanded version of variance propagation that handles RELU activation functions. Setting aside issues of mathematical rigor and the interpretation of the quantities being propagated, variance propagation and approximate kernel analysis involve similar calculations (as discussed in Section 25.7), and so these three initialization schemes can all be viewed as methods to control the q values of the network. When combined with the normalization of the input vectors (as per Section 10.2), and applied to standard feedforward fully-connected networks with suitable activation functions [20], the fan-in initialization method and its extensions achieve q values of ∼1 throughout the network, which is one of the four constraints enforced by DKS.

[20] Here, "suitable" means (approximately) mean and variance preserving for the standard fan-in initialization, or RELU for He et al.'s (2015) modified version. Note that the large majority of activation functions do not fall into the former category.

Having q values of 1 ensures that local C maps are the same for each combined layer (assuming they all use the same activation function), and that the final output of the network falls within a reasonable range. If this is not done, q values can grow very large or small with increasing depth, leading to various problems. In particular, very large values can cause bounded monotonic activation functions like tanh to "saturate", so that local C maps become increasingly degenerate with depth. And very small values can cause most activation functions to behave in a way that is "too linear", which may limit the effective expressivity of the network (as per the discussion in Section 14.2). Notably, the RELU activation function is immune to both of these issues due to it being positively homogeneous, which perhaps explains its popularity. However, by not enforcing the other three conditions of DKS, networks using these initializations can still have degenerate network-level C maps, and can experience an exponential accumulation of kernel approximation errors with depth (so that the q values won't actually be constant in practice). As a concrete example of the former problem, consider the example from Section 12.1 of a standard deep RELU network. This network's C map doesn't depend on the input q value at all (as long as it's uniform), but still develops degenerate behavior at very high depths, leading to a network that is essentially untrainable. Another more subtle issue with these initializations is that q values of 1 will work very badly for certain activation functions. For example, consider the activation function defined by φ(x) = tanh(αx). As α increases, an input q value of 1 becomes arbitrarily bad, leading to increasing levels of saturation and consequent C map degeneration. The reason that q values of 1 work reasonably well in practice is that the most commonly used activation functions in the literature happen to work well with it, or have local C map behavior that is insensitive to q values (as is the case for RELUs).
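A small numerical check of this point (the values of α below are hypothetical): with a q value of 1, i.e. layer inputs distributed roughly as N(0, 1), the fraction of nearly saturated tanh(αx) outputs grows quickly with α.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)            # layer inputs under a q value of 1

for alpha in [1.0, 3.0, 10.0]:                # hypothetical activation-function scales
    out = np.tanh(alpha * z)
    saturated = np.mean(np.abs(out) > 0.99)   # fraction of units in the flat region of tanh
    print(f"alpha = {alpha:4.1f}: mean squared output = {np.mean(out ** 2):.3f}, "
          f"fraction saturated = {saturated:.3f}")
```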
Notably, DKS does not suffer from this issue (despite also enforcing q values of 1), as its use of a multiplier on the input of each activation function ensures that 1 will always be optimal (since any other value can be effectively "simulated").

26.2 Layer-Sequential Unit-Variance initialization and Within-Layer initialization

Mishkin and Matas (2015) proposed an initialization method called Layer-Sequential Unit-Variance (LSUV), which uses an iterative procedure that starts from a standard random initialization and adjusts the scale of each weight matrix/filter bank to achieve the condition that the variance of the output of each affine layer – taken over the channels, locations and training cases – is approximately equal to 1. These variances are computed by evaluating the network empirically on random mini-batches of training data. By taking φ in Equation 24 to be the identity function, we have that the average value (across channels) for each location-wise output vector of an affine layer is approximately zero with high probability. The variances computed by LSUV can therefore be interpreted as estimates of the length-normalized squared norms of the location vectors for each layer, except that they are also averaged over locations and network inputs. Thus, we can think of LSUV as enforcing the condition that the "average q value" for the output of each affine layer is equal to 1. This condition is similar to the one that the fan-in initialization (and its variants) are trying to achieve, and thus our discussion and critique of those methods (in Section 26.1) also applies to LSUV. In particular, a network initialized with LSUV can still have degenerate C maps, with all of their consequent problems. Because LSUV uses empirically computed statistics instead of canned formulas, it takes into account the given architecture and network topology, as well as the properties of the input training vectors. By contrast, variants of the fan-in initialization are usually only valid for the particular activation function and network topology they were derived for, despite often being applied more generally. They also implicitly assume that the training input vectors are appropriately normalized, which isn't always the case in practice. (Note that DKS, while it is also based on formulas instead of empirical evaluations of the network, takes into account the activation functions and network topology, and is packaged with a data preprocessing technique for the input vectors.) The "Within-Layer Initialization" (WLI) of Krähenbühl et al. (2016) can be viewed as a modification of LSUV where one enforces the condition that the mean and variance of the output of each layer and each channel are 0 and 1 respectively (as opposed to LSUV, which considers the average variance across channels). This is done by rescaling the filter bank weights separately for each output channel, and setting the bias appropriately. Because it enforces conditions per-channel instead of averaging across channels, this modification is harder to compare directly to fan-in initializations or DKS. It is perhaps more closely related to Batch Normalization (which is discussed later in this section), as it achieves the same conditions that BN does before the first step of optimization.
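The following is a minimal, hedged sketch of an LSUV-style rescaling pass for a toy fully-connected network (represented simply as a list of weight matrices, with tanh nonlinearities). It is not the exact procedure of Mishkin and Matas (2015), but it illustrates the basic idea of rescaling each weight matrix until the empirical output variance of its affine layer is close to 1.

```python
import numpy as np

def lsuv_style_rescale(weights, x_batch, phi=np.tanh, tol=0.05, max_iters=20):
    """Rescale each weight matrix in turn so its affine output has empirical variance ~1."""
    h = x_batch
    for i, W in enumerate(weights):
        for _ in range(max_iters):
            a = W @ h                          # affine layer output on the mini-batch
            v = np.var(a)                      # variance over units and batch elements
            if abs(v - 1.0) < tol:
                break
            W = W / np.sqrt(v)                 # rescale towards unit output variance
        weights[i] = W
        h = phi(W @ h)                         # propagate to the next layer's input
    return weights

rng = np.random.default_rng(0)
dims = [100, 256, 256, 256, 10]                # hypothetical layer widths
weights = [rng.standard_normal((m, k)) for k, m in zip(dims[:-1], dims[1:])]  # badly scaled init
x = rng.standard_normal((100, 512))            # a random mini-batch of inputs

weights = lsuv_style_rescale(weights, x)
# After the pass, each affine layer's output variance on this batch is ~1,
# regardless of how badly scaled the initial weights were.
```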
26.3 Self-normalizing neural networks

Self-normalizing neural networks (Klambauer et al., 2017) use Scaled Exponential Linear Unit (SELU) activation functions to achieve a per-unit mean of 0 and variance of 1 asymptotically with depth, as computed under variance propagation. Due to the relationship between variance propagation and Q/C maps (discussed in Section 25.7), this is equivalent to Qg(q) having an attractive fixed point at q = 1, and Cg(c) having an attractive fixed point at c = 0, where g is a SELU nonlinear layer. Assuming the use of PLN, and a standard feed-forward architecture (or normalized sums in more general architectures), these conditions imply that Qf(1) = 1 and Cf(0) = 0 for all subnetworks f, which is two of the four conditions enforced by DKS. As previously discussed, Qf(1) = 1 for all subnetworks f is a good condition to have, and will prevent extreme q values from developing in deep networks (which can adversely affect C maps). However, it won't in general guarantee a well-behaved C map in deep networks, even when combined with the condition Cf(0) = 0. For a SELU nonlinear layer g we have C′g(1) ≈ 1.0716, which can be computed numerically using Equation 22.2 and the methods described in Section 22.2. Along with the condition Cg(0) = 0, this guarantees a well-behaved C map up to a modest depth. For example, given a standard 100 layer network f we have Cf(0) = 0 and C′f(1) = C′(1)^100 ≈ 1.005 · 10³, so that Cf is reasonably well behaved according to Theorem 13. However, if f has 300 layers then we have C′f(1) ≈ 1.0157 · 10⁹, which indicates degenerate behavior with c∗ = 0 (in the sense of Section 12.4).

26.4 The "Edge of Chaos" (EOC) method

Consider a network f defined by a composition of D combined layers, each with the same nonlinear activation function φ. Every combined layer will have the same Q map, which we denote by Q. As discussed in Section 10.1, Q will typically have a fixed point q∗ which is rapidly converged to under repeated applications, and thus one may approximate the q values as uniform and constant across layers. This allows one to define a local C map C which only depends on the input c value, and which is the same for each combined layer. As argued by Schoenholz et al. (2017), C′(1) will describe the asymptotic dynamics of the c values as they evolve through the layers of the network in the limit as D → ∞. C′(1) < 1 indicates rapid convergence of the c values to 1, while C′(1) > 1 indicates rapid convergence to a value c0 < 1. Because of its close proximity to these two undesirable depth-limiting asymptotic behaviors, a network with C′(1) = 1 is said to be "on the edge of chaos", and will have its c values converge slowly towards 1 at an asymptotic rate which is sub-exponential. In the initialization method proposed by Schoenholz et al. (2017), which we will call the "Edge of Chaos" method (EOC), one initializes the weights using a standard Gaussian fan-in method, with the variances of the weights and biases chosen so that C′(1) = 1. (Note that q∗ also depends on these variances, which is taken into account when computing C′(1).) As observed by Schoenholz et al. (2017), there are typically infinitely many combinations of these two variances which achieve C′(1) = 1 (assuming any exist for the given activation function), and so one is chosen arbitrarily.
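As a concrete illustration of what such a computation looks like (it is not taken from Schoenholz et al. (2017) or the main text), the following hedged numpy sketch finds one such combination for a tanh network: it fixes a hypothetical bias variance, computes the fixed point q∗ of the Q map by iteration, evaluates C′(1) = σw² E[φ′(√q∗ z)²] by Gauss-Hermite quadrature, and bisects on the weight variance until C′(1) = 1.

```python
import numpy as np

NODES, WEIGHTS = np.polynomial.hermite_e.hermegauss(80)

def gauss_mean(f):
    """E[f(z)] for z ~ N(0, 1), via Gauss-Hermite quadrature."""
    return np.sum(WEIGHTS * f(NODES)) / np.sqrt(2 * np.pi)

def q_fixed_point(var_w, var_b, n_iters=300):
    """Fixed point q* of the Q map: q -> var_w * E[tanh(sqrt(q) z)^2] + var_b."""
    q = 1.0
    for _ in range(n_iters):
        q = var_w * gauss_mean(lambda z: np.tanh(np.sqrt(q) * z) ** 2) + var_b
    return q

def c_map_slope_at_one(var_w, var_b):
    """C'(1) = var_w * E[tanh'(sqrt(q*) z)^2], evaluated at the Q map's fixed point."""
    q = q_fixed_point(var_w, var_b)
    return var_w * gauss_mean(lambda z: 1.0 / np.cosh(np.sqrt(q) * z) ** 4)

var_b = 0.05                     # hypothetical bias variance; EOC leaves this choice free
lo, hi = 0.5, 4.0                # bracket for the weight variance
for _ in range(60):              # bisection on C'(1) - 1 = 0
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if c_map_slope_at_one(mid, var_b) < 1.0 else (lo, mid)

print("EOC weight variance for tanh with bias variance 0.05:", 0.5 * (lo + hi))
```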
Notably, the condition C′(1) = 1 is based entirely on the properties of C, and as such does not depend on the depth of the network. In Xiao et al. (2018), a version of EOC was proposed that used an Orthogonal Delta initialization technique for convolutional layers, with variances chosen so that C′(1) = 1. In their experiments, Xiao et al. (2018) showed that basic convolutional neural networks (without skip connections or batch normalization) can be successfully trained on CIFAR-10 with the resulting initialization at depths of up to 10,000. This was a remarkable result, as such networks are considered essentially impossible to train at even modestly large depths when initialized using standard methods.

26.4.1 Relationship to DKS

DKS is in many ways a spiritual successor to EOC, and is derived using an extended version of the same basic Q/C map analysis that underlies the latter. Like EOC, DKS also makes use of the Orthogonal Delta initialization technique. But despite these similarities, there are many important differences between the two methods, which we will discuss in sequence below.

Firstly, DKS enforces uniform q values via data pre-processing instead of relying on the (presumed) convergent fixed-point behavior of the Q maps to achieve this asymptotically. See Section 10.1 for a more detailed discussion of this point.

Secondly, instead of targeting the condition C′(1) = 1 for each combined layer f, DKS targets C′(1) = µ⁻¹(ζ), so that the "degree of nonlinearity" is calibrated to the given architecture. This is motivated by looking at the overall C map behavior of the network, instead of the fixed point convergence behavior of its local C maps. From the perspective of fixed point convergence, DKS achieves an exponential rate towards c∗ = 0, while EOC achieves a sub-exponential rate towards c∗ = 1. While this might seem like a point against DKS, one must remember that the precise rate of its exponential convergence will depend [21] on the overall depth D, the effect of which is that the c values will be far from converged even by the D-th layer.

The third difference between DKS and EOC is that DKS manipulates the network by transforming the input and output of the activation functions, as opposed to changing the variances of the weights and biases. If we consider the "equivalent parameters" (as per Section 18.3), DKS is implicitly searching over a space of distributions with more degrees of freedom than the two used by EOC, and one which allows for non-zero correlations between the weights and biases.

Fourthly, despite having more degrees of freedom with which to manipulate the network, DKS makes use of all of them in order to enforce a total of four conditions on the Q and C maps of the network (which are listed in Section 17.3). EOC meanwhile only enforces a single condition (C′(1) = 1), which leaves one of its two degrees of freedom unconstrained. (The effect of this is that EOC has a manifold of possible weight and bias variances from which to choose, and is therefore under-determined.)

Fifthly and finally, DKS is applicable to a diverse set of architectures, thanks to our generalized notion of Q/C maps, analysis of pooling layers, and network polynomial construction. As it was originally developed, EOC assumes a strictly feedforward network consisting of a sequence of fully-connected or convolutional combined layers.
26.4.2 Possible failure modes of EOC

It is worth pointing out that the condition C′(1) = 1 enforced by EOC does not necessarily imply that the network will look perfectly linear at initialization time, as C(0) = 0 is not enforced in EOC as it is in DKS. Nonetheless, depending on the activation function, there may be choices for the weight and bias variances which can make the network look "too linear", thus leading to very slow training as per the discussion in Section 14.2. As such choices are not explicitly forbidden in EOC, and are indeed compatible with the condition C′(1) = 1, this may represent a failure mode of the method. Conversely, without C(0) = 0, the condition C′(1) = 1 may not be sufficient to ensure that the entire network's C map Cf is well-behaved. For example, given a choice of 1 and 0 for the variances of the weights and biases respectively, we have C′(1) = 1 for unmodified RELU activation functions (by Section 12.1), and yet Cf quickly degenerates as D grows, as can be seen from the figures in Section 12.1. Moreover, our experiments in Section 28.2 confirm that standard deep RELU networks (without BN layers or skip connections) are not readily trained at high depths, even with a high-powered optimizer like K-FAC.

[21] In particular, the worst-case rate of convergence to the fixed point will be given by C′(0), which by Equation 27 satisfies C′(0) ⩾ 2 − C′(1) = 2 − µ⁻¹(ζ). For a D layer convolutional network this is 2 − ζ^(1/D), which will be just slightly below 1.0 when D is large (given typical choices for ζ).

26.5 The Looks Linear method

Balduzzi et al. (2017) use path-weight analysis (which we review in Appendix I) to argue that gradients with respect to the network's input will decorrelate or "shatter" in deep RELU networks, leading to difficulties when training with gradient descent [22]. They also argue that this happens to a much lesser extent in ResNets. Motivated by these observations, and by the fact that this effect doesn't occur in a purely linear network, Balduzzi et al. (2017) propose the Looks Linear (LL) method for initializing/constructing RELU networks. This method exploits the fact that φ(x) − φ(−x) = x when φ is the RELU function in order to construct a network that behaves exactly like a linear one at initialization. In particular, one replaces φ(x) with the pair (φ(x), φ(−x)) for each RELU nonlinear layer (which effectively doubles its output channel dimension), and initializes the weights of the affine layer after each RELU layer according to (W, −W), where W is sampled according to a Delta Orthogonal initialization. For fully-connected combined layers this produces the overall computation Wφ(x) − Wφ(−x) = W(φ(x) − φ(−x)) = Wx, while for convolutional combined layers the computation is similarly linear (although harder to express in standard matrix notation). From the perspective of our analysis, perfectly linear-looking networks have (very) well-behaved C maps, and thus satisfy one of the main necessary conditions for trainability. With such networks there is always the danger that they may be "too linear" in the sense of Section 14.2, but LL-initialized networks avoid this because W1φ(x) + W2φ(−x) will become highly nonlinear given relatively small perturbations of W1 away from −W2.
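A minimal numpy sketch (hypothetical widths, fully-connected case, with a plain Gaussian matrix standing in for the Delta Orthogonal sample) verifying the exact linearity of an LL-constructed layer pair at initialization, and how it disappears once the mirrored symmetry of the weights is broken:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, n_inputs = 32, 16, 5                   # hypothetical widths and number of test inputs
relu = lambda t: np.maximum(t, 0.0)

W = rng.standard_normal((m, k))              # stand-in for the sampled weight matrix
x = rng.standard_normal((k, n_inputs))

# LL construction: the nonlinearity outputs (relu(x), relu(-x)), and the following
# affine layer is initialized as (W, -W).
ll_output = W @ relu(x) + (-W) @ relu(-x)
print("max |LL output - Wx| at init:", np.max(np.abs(ll_output - W @ x)))   # ~0 (round-off only)

# Perturbing the mirrored weights slightly breaks the exact linearity.
W2 = -W + 0.01 * rng.standard_normal((m, k))
perturbed = W @ relu(x) + W2 @ relu(-x)
print("max |perturbed output - Wx|: ", np.max(np.abs(perturbed - W @ x)))   # clearly nonzero
```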
The two main obvious disadvantages of the LL approach are that it only works for RELU networks, and that it doubles the widths of RELU layers (without proportionally increasing the network's expressivity/capacity). Beyond these things, the main difference between LL and DKS is the precise mechanism used to make the network "look linear", and the implications that this has for optimization (which is not well understood in either case). In DKS, the degree of nonlinearity of a nonlinear layer, as measured by properties of its C map (such as the slope at c = 1), varies smoothly as a function of the parameters of the transformed activation functions, and so one could argue that it should also vary smoothly as a function of the network's parameters, resulting in easier optimization. By contrast, the linearity property of the LL method depends on a delicate mirrored symmetry of the weights in each layer, so that relatively small perturbations in these could lead to large changes in the degree of nonlinearity of the network. This sensitivity may make optimization more difficult, and may explain the optimization difficulties we observed in our experiments with the LL approach. (See Section 28.5.5 for more details.)

[22] Note that this prediction agrees with our NTK analysis in the sense that the per-layer NTK matrix of the first layer of a deep RELU network is approximately the identity, although the implications for training are somewhat different; see Appendix I.

26.6 Residual connections

Residual Networks, aka ResNets (He et al., 2016a,b), which are described in detail in Section 23.1, have become the dominant neural network architecture for computer vision problems. What makes ResNets so successful isn't that they are more powerful or expressive than other more traditional deep convolutional architectures like VGG (Simonyan and Zisserman, 2015), but rather that they are easier to train with stochastic gradient descent at very high depths (He et al., 2016a; Szegedy et al., 2017). This easier training is owed to their use of skip connections (aka shortcut connections, which have been a feature of network architectures since the 1990s), Batch Normalization (BN) layers (Ioffe and Szegedy, 2015), RELU nonlinearities, and the surprising interplay between all three of these components (De and Smith, 2020). Moreover, popular new architectures such as EfficientNets (Tan and Le, 2019) and Transformers (Vaswani et al., 2017) are based on the same high-level residual block structure, and differ only in terms of the layers contained in their residual branches. ResNets, and their generalizations, thus represent a solution to the problem of how to achieve fast and stable training of very deep neural networks. And while the nature of this solution is still not totally understood, there has been progress in this direction (of which we will cover only a small subset). Veit et al. (2016) argued that residual networks behave like an ensemble of shallow networks of varying depth throughout training. They gave evidence for this by showing that deep residual networks are highly robust to "lesion" operations which remove or rearrange layers, and that the network's gradient is dominated by contributions made by paths through the network with fewer nonlinear layers.
Zhang et al. (2019c) observed that if one removes the BN layers from a ResNet-V2 network and initializes the last convolutional layer of each residual branch to zero (along with a few other smaller tweaks to the architecture and its initialization), the resulting network achieves training speed comparable to a standard ResNet, at least at modest batch sizes. In such networks, the residual blocks act as identity functions at initialization time, only becoming nonlinear as training progresses. Subsequent work showed that one could achieve similar results in BN-free networks simply by using learnable weights on the residual branches that are initialized to zero (De and Smith, 2020; Bachlechner et al., 2020). More recently, it was found by Shao et al. (2020) that the branch sum can use static (non-learnable) weights, where the relative size of the weight on the residual branch is set to a small value (that can vary between blocks). To help explain these findings for BN-free networks, De and Smith (2020) applied a version of variance propagation to argue that the per-unit output variance of a residual block will be roughly 1 plus its per-unit input variance, so that the i-th residual block has a variance proportional to i. Then, because the output variance of each residual branch is constant (due to the use of BN), it follows that the relative contribution to the block's output made by the residual branch shrinks as 1/i. This, they argue, leads to a network which behaves more like a linear function than it otherwise would. In Appendix J we make this argument more rigorous by computing q values in a (nearly) standard ResNet, and showing that their growth over layers leads to a better behaved C map. We also show that an identical C map can be obtained in a network without normalization layers via careful selection of weights on the residual and shortcut branches.

26.7 Normalization layers

Normalization layers (the two most common types of which are defined in Section 19) have become a standard component in neural networks since the introduction of Batch Normalization (BN) by Ioffe and Szegedy (2015). In addition to the important and specific role they play in ResNet-style architectures (as discussed in Section 26.6 and Appendix J), these layers have been observed to make deep neural networks easier to train on their own. In this subsection we will discuss possible explanations for this, with a particular focus on ones arising from Q/C map analysis. We will also give some arguments for why normalization layers alone are insufficient to enable fast training of deep networks.

26.7.1 Layer Normalization layers

As discussed in Section 19.2.1, a Layer Normalization (LN) layer f has the property that Qf(q) = 1 for all q, provided that its learnable gain and bias are set to their initial values. Additionally, when applied after a combined/nonlinear layer g, the C map of the composition has the property that Cf∘g(0) = 0, regardless of the C map behavior of g. Thus, when used after each nonlinear layer in a network initialized as per Section 4, LN layers achieve uniform q values of 1 throughout the network, and also Ch(0) = 0 for all subnetworks h, which are two of the four conditions enforced by DKS. (Note that if LN layers are instead inserted before each nonlinear layer, they will not achieve the latter condition.) The way LN layers achieve these conditions differs from DKS in at least two ways.
First, they perform a direct calculation of the relevant quantities instead of using q and c values as approximations. This allows them to work with arbitrary initializations of the parameters (including badly scaled ones), poorly scaled input data, and without any explicit knowledge of the network's structure or activation functions. Second, LN layers continue to enforce a version of these conditions throughout training, or at least as long as their learnable gain and bias remain close to their initial values of 1 and 0.

As discussed previously (e.g. Subsection 26.1), uniform q values of 1 is a useful property to have, but is far from sufficient to ensure trainability. The condition Ch(0) = 0 for all subnetworks h is meanwhile only one half of the two conditions required by Theorem 13 to ensure well-behaved C maps, and arguably the less important of the two. To make this discussion more concrete, we will consider how putting LN layers after each nonlinear layer will affect the C map of a standard deep RELU network. For a RELU nonlinear layer g we have by Equation 15 that Cg(0) = 1/π and C′g(1) = 1 (which follows from Equation 15 by taking the derivative and letting c → 1). Taking the derivative in Equation 17 we have C′f(c) = 1/(1 − Cg(0)) = π/(π − 1), and so by the chain rule C′f∘g(1) = C′f(Cg(1)) C′g(1) = C′f(1) C′g(1) = π/(π − 1) ≈ 1.467. Thus we see that while the use of an LN layer after a RELU layer gives us Cf∘g(0) = 0, it comes at the price of increasing the C map slope from 1 to ∼1.467. The following plot shows the extended C map for a RELU network h with 20 combined layers, with and without LN layers used after each nonlinear layer:

[Figure: extended C maps of a 20 combined layer RELU network, with and without LN layers after each nonlinear layer.]

From this plot we can see that the C map for the network with LN layers has a much larger output range. However, for the vast majority of its input domain, the output is restricted to a small region around 0, and it is still highly degenerate in the sense of Section 12.4 (and thus suggestive of poor training).

Beyond their effect on the initial behavior of the network, LN layers may also have an independent and possibly beneficial effect on optimization, as they change the relationship of the loss and the parameters. Ba et al. (2016) argue that LN layers lead to a Fisher information matrix with more favorable properties for optimization. Another intuition is that an LN layer decouples the scale and direction of the weights of its immediately preceding affine layer, which may encourage faster optimization with gradient descent. Networks with LN layers may also be smoother when considered as functions of either their parameters or their inputs, since the change in the output of an LN layer is always bounded. Despite these intuitions, as far as we know there has yet to be strong theoretical or empirical evidence in favor of a specific optimization benefit to LN layers beyond their effect on the network's initial behavior.

26.7.2 Batch Normalization layers

As discussed in Section 19.1, Batch Normalization (BN) layers cannot be analyzed within the Q/C map framework we have presented. Despite this, we can still make some observations regarding their effect on network behavior in the context of our previous discussions.
As shown in Section 12.4, one common way that a deep network f can become difficult/impossible to train is when all input vectors map to approximately the same output vector (as measured by cosine similarity) at deeper layers of the network. This happens naturally in deep RELU networks, where BN is typically applied. Placing BN layers throughout the network may mitigate this particular pathology by ensuring that the empirical distribution of each unit over the mini-batch has a large variance compared to its mean. However, this won't obviously do anything to help with the opposite problem discussed in Section 12.4, where output vectors appear "random", and in particular fail to reflect the geometric relationships between the original input vectors. These intuitions are confirmed by Yang et al.'s (2019) signal propagation analysis of RELU networks with BN layers (which we discuss in Section 25.8). In particular, Yang et al. (2019) predict that the distances between the output vectors (generated from different inputs) will converge to a constant as depth increases, and that this leads to an exponential increase in the norm of the gradient.

Like with LN layers, placing BN layers after each affine layer makes the network insensitive to the scale of its weight parameters, which can thus correct for badly scaled initial weights. One can perhaps also view BN layers as ensuring that the "average q value" across the mini-batch is 1, although this is an imperfect analogy since BN layers operate on a per-channel basis instead of averaging over channels. Similar to LN layers, BN layers may also have an effect on optimization which is independent from their effect on the network's initial behavior. Evidence for this includes the fact that various methods which modify networks and their initializations in an attempt to eliminate the need for BN (such as those we've previously discussed) fail to achieve the same optimization performance under SGD, except perhaps at small mini-batch sizes (where classical optimization considerations like curvature matter a lot less, as argued in Zhang et al. (2019a)). In support of the optimization-effect hypothesis, Santurkar et al. (2018) argue that BN layers make a network's output a smoother function of its parameters, and that this helps improve the performance of gradient descent. Li and Arora (2019) argue that gradient descent applied to networks with BN layers behaves similarly to gradient descent applied to a normalizer-free network with a decaying learning rate, thus allowing gradient descent with a constant learning rate to converge in the stochastic setting (where it otherwise might not). Finally, Grosse (2021) argues that placing a BN layer after an affine layer g will make the network invariant to scaling and shifting of g's input, and that this leads to a curvature matrix for f's parameters which is better conditioned.

Part V: Experiments and conclusions

27. Experimental setup

In this section we will describe and justify the setup we will use in our experiments, which will depart somewhat from common practice.

27.1 Training problem and datasets

The benchmark training problem we use in all of our experiments is image classification, on either the Imagenet (Deng et al., 2009) or CIFAR-10 (Krizhevsky and Hinton, 2009) datasets.
The training objective is the average loss over the training set, with the loss given by the cross-entropy error between the network's output (interpreted as the "logits" of a softmax) and the dataset labels. We also measure top-1 classification accuracy, and report this instead of the loss in our plots due to its higher interpretability. For Imagenet, we use an image preprocessing and random augmentation pipeline similar to the one from Szegedy et al. (2015) to obtain images of size 224 × 224. The training set is obtained from the standard Imagenet training set, minus the last 10000 cases (which are used as a new validation set), and the test set is obtained from the usual Imagenet validation set. Training accuracy is reported using the examples actually used during training, which are subject to random augmentation. Test accuracy is meanwhile reported using examples from the test set without random augmentation. For CIFAR-10, we apply the standard preprocessing consisting of mean subtraction and normalization of each color channel. The training and test sets are their standard versions. For both datasets we apply Per-Location Normalization (as described in Section 10.2) as a final stage of processing before feeding the inputs to the network. This is done for all approaches and experiments unless stated otherwise, in the interest of fairness.

27.2 Focusing on optimization speed

While we will report test set accuracy in many of our experiments, our primary focus will be on optimization speed, as measured using training accuracy. Moreover, the decisions we make while designing our experiments will be in the interest of obtaining the cleanest and fairest comparison of optimization speed, and we will tune various components (like the learning rate schedule, regularization, etc.) with this in mind. In this subsection we will explain our rationale for this decision.

The current standard approach to deep learning is to train normalized residual architectures with RELU nonlinearities, such as ResNets or Transformers, with basic optimizers like SGD or Adam. Alternative approaches (such as normalization-free networks using Fixup (Zhang et al., 2019c), standard deep convolutional networks initialized with EOC, or DKS) can underperform the standard approach in one of two basic ways. First, they can yield networks whose training plateaus earlier, resulting in underfitting, or whose training is just much slower overall. And second, they can yield a worse inductive bias for typical training problems (like Imagenet classification), resulting in increased overfitting. In our initial experiments we found that while alternative deep learning approaches are typically affected by both of these problems, slower training is by far the more significant one, particularly for networks without skip connections. Moreover, the resulting underfitting problem (given a finite optimization step budget) led to a commensurate degradation in test set performance. (These findings echo those of He et al. (2016a) and Szegedy et al. (2017), who observed that the main benefit of adding skip connections was faster optimization.) Thus, by focusing on training speed, we are isolating what is the more serious problem currently affecting alternative methods, and the one which arguably should be addressed before attempting to close the generalization gap.
We believe that the increased generalization gap we observed on Imagenet for alternative approaches such as ours is small enough that it can be overcome through the use of additional regularization strategies, dataset augmentation, scheduling of the optimizer 96 \fDeep Kernel Shaping hyperparameters, architectural tweaks, etc. We will leave this to future work. This position is echoed by Zhang et al. (2019c), and was arguably validated in the recent work of Brock et al. (2021) on \u201cNormalizer-Free Networks\u201d, which used a combination of these techniques to close the generalization gap for one such alternative approach. It is also our view that over\ufb01tting may become less of a concern as the machine learning community moves beyond supervised benchmark problems like Imagenet classi\ufb01cation, and towards giant/streaming datasets and unsupervised methods. 27.3 Network architectures and regularization In our experiments we train standard and modi\ufb01ed ResNets and Wide-ResNet models for Imagenet and CIFAR-10 image classi\ufb01cation. (A detailed description of all the relevant architectures is given in Section 23.) We will place particular emphasis on \u201cablated\u201d versions of ResNets, where Batch Normalization (BN) layers and/or the skip connections are removed, leaving everything else unchanged. The motivation for doing this is that we want to facilitate the fairest possible comparison to the standard deep learning approach. In particular, since we are focused mostly on optimization speed in our comparisons, we want to use models that are provably no more powerful than standard ResNets in terms of the class of functions they can express, so that the fundamental data \ufb01tting problem doesn\u2019t become any \u201ceasier\u201d. In the interest of making our experiments fair we also didn\u2019t include the standard L2 regularization that is often used when training ResNets. This is because the e\ufb00ect of L2 regularization on the e\ufb00ective capacity of a model is highly dependent on the model\u2019s parameterization, and this will vary signi\ufb01cantly across the di\ufb00erent approaches we consider. For example, due to the way BN layers are invariant to scalar multiplication of their inputs, one can rescale the weights of any a\ufb03ne layer that precedes a BN layer without changing the overall output of the network. Thus, networks with BN layers can e\ufb00ectively \u201ccheat\u201d the L2 regularization penalty in a way that networks without BN layers cannot. In our experiments we found that the removal of L2 regularization did have a small but still signi\ufb01cant e\ufb00ect on the test set performance of standard ResNets, which is re\ufb02ected in our reported results. Note that our purpose in experimenting with these modi\ufb01ed ResNets is not to show they are a good replacement for standard ResNets in practice. Rather, our purpose is to determine the extent to which we can replace the ingredients of the standard deep learning approach with various alternatives (that preserve the model class), while retaining its fast training capabilities. If we were primarily interested in maximizing test set performance in our evaluations, then we would be free to design a network architecture best suited to DKS, and to include whatever regularization scheme we found to be most e\ufb00ective. 
And while this does seem like an interesting direction to explore, as has been done in the context of other alternative approaches to deep learning (Brock et al., 2021), it is beyond the scope of the present work. 27.4 Automatic learning rate schedules with Fire PBT Achieving a near optimal rate of convergence for standard ResNet training with SGD seems to require a carefully designed learning rate schedule (and not just a \ufb01xed value), especially 97 \fMartens et al. for more di\ufb03cult datasets like Imagenet. Through extensive and costly trial and error, the community has produced learning rate schedules which seem to work well on certain standard problems, such as Imagenet classi\ufb01cation with ResNets. These typically involve a quick \u201cwarm-up\u201d of the learning rate from a moderate starting value to a larger one, followed by a decay or step-wise descent towards zero. In our experiments we consider a large variety of approaches for training deep networks, most of which depart from the standard one along directions such as architectural choices, optimizers, initialization, etc. There is no reason to think that a learning rate schedule tuned for standard ResNet training with SGD should perform well for all such approaches, and this was borne out in our initial experiments. (By contrast, it seemed like the momentum hyperparameter was much less important.) Thus, in order to conduct fair experiments, which are minimally confounded by hyperparameter tuning, we need a way of determining a near-optimal learning rate schedule for each approach. And this should ideally be done in an automatic way in order to reduce the role of experimenter bias. Recently, Dalibard and Jaderberg (2021) proposed an alternative version of Population Based Training (PBT, Jaderberg et al., 2017) called FIRE PBT, which is designed speci\ufb01cally for the dynamic adjustment of optimizer hyperparameters. Like many other methods for automatically tuning the learning rate, standard PBT falls into the trap of being too greedy, and tends to lower the learning rate too quickly for the sake of short-term improvements in the loss (Wu et al., 2018). FIRE PBT is designed to tackle this issue, using a strategy which we will now brie\ufb02y explain. Both PBT and FIRE PBT work by having many workers independently train neural networks, each with their own values for the hyperparameters. Both methods also associate a \ufb01tness to each of their workers which guides an evolutionary procedure. In PBT, this \ufb01tness is simply the current value of the objective function (which can be de\ufb01ned on the training or test sets). In FIRE PBT these \ufb01tnesses are altered in order to promote population members which may have a worse objective but are promising in other ways. In particular, a separate class of workers, called evaluators, periodically copy the model parameters of other workers, change the hyperparameters (e.g. decay the learning rate), and measure the rate at which the objective function improves while training with the new hyperparameters. The higher the rate of improvement as measured by the evaluator, the higher the \ufb01tness FIRE PBT will associate to the original worker whose model parameters were copied. This approach encourages workers to use \u201cnon-greedy\u201d hyperparameters (such as high learning rates), if it is shown that doing so leads to better performance after training with di\ufb00erent hyperparameters (such as lower learning rates) in the long run. 
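To make the exploit/explore mechanics concrete, here is a minimal, self-contained sketch of a PBT-style truncation-selection step in the spirit of the description above. It is not the FIRE PBT implementation: the Worker class, the truncation fractions, and the perturbation range are illustrative stand-ins (our actual settings are listed below), and the evaluator machinery that FIRE PBT adds on top is only summarized in the comments.

import copy
import random

class Worker:
    def __init__(self, params, learning_rate):
        self.params = params              # model parameters (placeholder object)
        self.learning_rate = learning_rate
        # In plain PBT this would be the current objective value; in FIRE PBT
        # it is instead derived from how quickly evaluators improve after
        # copying this worker's parameters and changing the hyperparameters.
        self.fitness = float("-inf")

def exploit_and_explore(population, bottom_frac=0.25, top_frac=0.25,
                        perturb_range=(0.8, 1.25)):
    """One PBT-style truncation-selection step over a list of Workers."""
    ranked = sorted(population, key=lambda w: w.fitness)
    n = len(ranked)
    bottom = ranked[: int(bottom_frac * n)]
    top = ranked[int((1.0 - top_frac) * n):]
    for worker in bottom:
        source = random.choice(top)
        # Exploit: copy the weights and hyperparameters of a top worker.
        worker.params = copy.deepcopy(source.params)
        worker.learning_rate = source.learning_rate
        # Explore: multiplicatively perturb the learning rate.
        worker.learning_rate *= random.uniform(*perturb_range)

In FIRE PBT, a separate set of evaluator workers would additionally copy parameters from regular workers, continue training them with modified hyperparameters (such as a decayed learning rate), and use the resulting rate of improvement to set the fitnesses consumed by a step like the one above.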
In their experiments, Dalibard and Jaderberg (2021) showed that FIRE PBT worked very well at automatically generating learning rate schedules on the fly for standard ResNet training with SGD, matching or exceeding the performance of the previously mentioned community-tuned schedules. In our initial experiments we found that this capability carried over nicely to non-standard deep learning approaches as well, and so we decided to use it in all of our subsequent experiments. We now discuss the technical settings related to our use of FIRE PBT. We follow the presentation of Dalibard and Jaderberg (2021). Each experiment uses 36 workers. We divide them into three sub-populations P1, P2, P3, each of size 8, and the evaluator set H, which includes the remaining 12 workers. We train for a maximum of 200,000 steps when training on ImageNet and for a maximum of 25,000 when training on CIFAR-10. Hyperparameters: We optimise the learning rate hyperparameter. When using SGD or Adam, the learning rate is initially sampled log-uniformly in the range [10^-5, 1]. When using K-FAC, we instead use the range [10^-7, 10^-3]. Objective function: We evaluate the current model by reporting the current negated training loss. Ready: A member of the population is deemed ready to exploit and explore every 500 steps when training on ImageNet and every 50 steps when training on CIFAR-10. Exploit: We use a truncation selector: if a population member has a fitness in the bottom 25% of the population, it copies the neural network weights and hyperparameters of a random member in the top 25% of the population. Explore: We multiply the learning rate by a value sampled uniformly at random from the interval [0.8, 1.25]. We further set the FIRE PBT hyperparameter max_eval_steps to 7200 when training on ImageNet, and 360 when training on CIFAR-10. We set the hyperparameter min_steps_before_eval to 5000 and 10000 for P2 and P3 respectively when training on ImageNet, and to 250 and 500 when training on CIFAR-10. The training curves plotted in our results section use the values recorded by the sequence of workers that led to the best eventual objective value (negative loss). In Appendix L we plot the learning rate schedules found by FIRE PBT for some of our main experiments. We note that apart from some small fluctuations, these schedules are fairly simple and natural looking, and typically involve an initial rapid increase in the learning rate, followed by a gradual decay. Thus, we don't believe that the qualitative nature of our results is highly dependent on our use of FIRE PBT versus a simpler approach for learning rate tuning. 27.5 Optimizers In our experiments we used SGD (with momentum), K-FAC (Martens and Grosse, 2015), Adam (Kingma and Ba, 2014), and Shampoo (Gupta et al., 2018; Anil et al., 2020) as optimizers, with the majority just using SGD and K-FAC. Our motivation for considering stronger optimizers is that alternative deep learning approaches such as ours seem to benefit substantially from using them. For all optimizers we used a momentum parameter of 0.9, and adjusted the learning rate dynamically throughout training using FIRE PBT. For Imagenet experiments we used a batch size of 512 with all optimizers, and for CIFAR-10 we used a batch size of 1024. For Adam we used a value of 10^-5 for the "ϵ" parameter, which performed slightly better than the default value of 10^-8.
For K-FAC, we used a 0.99 exponential decay of the curvature matrix, and computed its inverse every 50 iterations. We initialized K-FAC's damping parameter λ to 10^-3, and exponentially decayed it at the rate 0.98 every 50 iterations to a minimum value of 10^-6. Finally, we enforced a maximum norm of 10^-2 on all updates, with the norm computed using K-FAC's approximate curvature matrix (as in Ba et al. (2017)). For Shampoo we used an epsilon parameter of 10^-5 and an exponential decay factor of 0.99 for the second moments. In order to achieve optimization performance which was competitive with K-FAC, we used an "exponent multiplier" of 3 (which increases the exponent of all factors of the preconditioner by a factor of 3, with 1 being the default value), and enabled "grafting" (which uses Adagrad (Duchi et al., 2011) to compute the magnitude of the update for each parameter tensor, and the usual Shampoo formula to compute its direction). 27.6 Hardware and implementation details All of our experiments were implemented in TensorFlow (Abadi et al., 2015). Each of the 36 workers used by FIRE PBT ran on a 16-chip, 32-core Cloud TPU v3 Pod (Google, 2018). For multi-core TPU Pods, each core ran a "replica" of the entire gradient computation on its assigned subset of the training mini-batch, with gradients and other key optimization quantities being averaged across the cores to simulate a single-core computation. As long as training cases are independent of each other in the forward pass, this simulation is exact. However, this independence is slightly violated for networks with BN layers, and the resulting simulation is thus imperfect. Handling BN in this way in the multi-core setting has nonetheless become standard practice, and is even thought to be beneficial, as it increases the "noise" originating from BN layers, which is thought to have a regularizing effect. 28. Experimental results In this section we will present our main experimental results as a series of plots of training/test top-1 accuracy vs iteration number, with some discussion. Most of our experiments will use the standard RELU, tanh, and softplus activation functions, the latter of which is a smooth analogue of the RELU function defined by φ(x) = log(1 + exp(x)). The remainder of this section is organized as follows: Section 28.1 gives the default hyperparameters and network configurations assumed for each experiment; Section 28.2 compares DKS with skip-free nets to the standard baselines; Section 28.3 examines DKS with and without skip connections; Section 28.4 compares DKS across different activation functions; Section 28.5 compares DKS to other approaches (the Gaussian fan-in initialization, the Glorot uniform initialization, LSUV and WLI, self-normalizing neural networks, the Looks Linear method, the Edge of Chaos (EOC) method, and Fix-up); Section 28.6 presents meta-parameter studies; and Section 28.7 covers ablations and modifications of DKS.
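Before presenting results, we give a brief sketch of the update-norm constraint mentioned in Section 27.5 for K-FAC. The code below is one plausible reading of "a maximum norm of 10^-2, with the norm computed using K-FAC's approximate curvature matrix": the update is rescaled whenever its curvature-weighted norm sqrt(u^T F u) exceeds the threshold. The function names and the way the curvature-vector product is supplied are assumptions made for illustration, not a description of our implementation.

import numpy as np

def clip_update_by_curvature_norm(update, curvature_matvec, max_norm=1e-2):
    # update: flat parameter-update vector proposed by the optimizer.
    # curvature_matvec: function returning the product of the approximate
    #   curvature matrix (e.g. the Kronecker-factored Fisher) with a vector.
    curvature_norm = float(np.sqrt(np.dot(update, curvature_matvec(update))))
    if curvature_norm > max_norm:
        # Rescale so the curvature-weighted norm equals the allowed maximum.
        update = update * (max_norm / curvature_norm)
    return update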
28.1 Default hyperparameters and network configurations assumed for each experiment We use a value of 1.5 in all experiments for DKS's global slope bound parameter ζ, unless stated otherwise. Except for DKS and the Looks Linear method, whenever using RELU activation functions we multiply the network's initial weights by √2. This has become standard practice in the literature following He et al. (2015), and can be interpreted as making the local Q map of combined RELU layers equal to the identity. We don't do this for DKS or the Looks Linear method since those methods achieve identity local Q maps through other means. Unless otherwise indicated, all results will be given for a skip connection-free, BN-free modified ResNet-101 architecture trained on Imagenet. "Standard ResNet" will refer to a standard unmodified ResNet with RELU activation functions, initialized with the standard Gaussian fan-in initialization (with a √2 multiplier). For networks trained on CIFAR-10 we will use a modified Wide-ResNet with 250 layers and a width multiplier of 2. 28.2 DKS with skip-free nets vs standard baselines In this subsection we present our main results, in which we compare DKS networks without skip connections or BN layers to both standard ResNets and various "ablated" ResNets that are missing skip connections or BN layers (or both). From this first plot we can see that, with K-FAC, DKS enables skip-free BN-free networks to train as fast as a standard ResNet on Imagenet, which is the first time this has been demonstrated to the best of our knowledge. Meanwhile, the ablated ResNets exhibit significantly slower optimization or underfitting. We also see that DKS underperforms for RELU compared to other activation functions, perhaps for the reasons discussed in Section 18.6. The story is somewhat different for SGD training. With SGD and no skip connections, DKS networks fail to match the training speed of standard ResNets, although they still outperform the ablated ResNets. Interestingly, RELUs give the same performance with DKS as the other activation functions do in this setting. For test set performance with K-FAC training we observe increased overfitting with DKS compared to standard ResNets, resulting in an overall lower test accuracy. Notably however, test accuracy is still higher than for the ablated ResNets. Once again the story is somewhat similar for SGD training, although with a larger performance gap vs standard ResNets due to the additional effect of underfitting from using SGD (without skip connections) instead of K-FAC. Note that the test error numbers for standard ResNet training with SGD are a few percentage points worse than the commonly reported values. This is for a number of reasons, including the fact that we don't include L2 regularization (as discussed in Section 27.3), that we configured FIRE PBT to maximize training speed and not test set performance, and that we use PLN to process the data. (Because these things affect DKS as well, we believe the comparison to still be fair.) The remaining results in this subsection are analogous to the previous ones, but use CIFAR-10 with modified/ablated Wide-ResNet models.
The observations from these results are similar, although we note that the performance gap between the DKS networks and the ablated Wide-ResNets is considerably larger, likely due to the greater depth (250 vs 100) used in these experiments. 28.3 DKS with and without skip connections In this subsection we compare the performance, with and without skip connections, of BN-free networks constructed with DKS. We use weights of √0.05 and √0.95 for the residual and shortcut branches respectively (so that all the sums in the network are normalized as per Section 21.2). The value √0.05 was selected from several candidate options in order to maximize training speed, as shown in Appendix M.1. When using K-FAC we see that the training speed remains the same whether or not we use skip connections, except in the case of RELU activation functions. For RELUs, skip connections seem to help significantly, closing the performance gap with the other activation functions. With SGD the story is different, and skip connections allow us to match the training speed of standard ResNets with DKS, at least when using softplus or RELU activation functions. For K-FAC, the improvement to test set accuracy from using skip connections with DKS appears to be minimal, with the notable exception of RELU networks (where the improvement is likely due to improved fitting/optimization, as opposed to improved generalization). By contrast, in the context of SGD training we see a significant improvement to the test set accuracy from using skip connections with DKS. Although again, this is likely due to improved fitting enabled by the use of skip connections with SGD, rather than improved generalization. 28.4 DKS with different activation functions In this subsection we compare the performance of DKS with twelve different activation functions. In addition to certain well-known mathematical functions, we also include SELU (Klambauer et al., 2017), Softsign (Bergstra et al., 2009), Swish (Ramachandran et al., 2017; Elfwing et al., 2018), Elu (Clevert et al., 2016), and BentId (defined by φ(x) = x + (√(x^2 + 1) − 1)/2). For K-FAC we see fairly similar training speeds for each of the twelve activation functions, with RELU being the notable outlier. For SGD, there is a larger deviation in performance observed for the different options, and RELU is notably no longer an outlier. Results for test set accuracy were qualitatively very similar, and so we won't report them here. 28.5 Comparisons to other approaches In this subsection we compare DKS to various other approaches for initializing and constructing neural networks. We will focus primarily on skip connection-free, BN-free networks, except when comparing to Fix-up (which requires the use of skip connections). We will omit test set accuracy in these comparisons, as we found that it gave qualitatively similar results to training accuracy. (This is likely because nearly all competing methods yield significant underfitting for skip-free BN-free networks, which overwhelms any possible advantage they might have in terms of generalization.)
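As a small aside, the normalized-sum weighting of Section 28.3 above can be written in a few lines; the sketch below is illustrative (function and variable names are our own), and simply combines the shortcut and residual branches with weights √0.95 and √0.05, so that the squared weights sum to 1.

import numpy as np

RESIDUAL_WEIGHT = np.sqrt(0.05)   # weight on the residual (transformed) branch
SHORTCUT_WEIGHT = np.sqrt(0.95)   # weight on the shortcut branch

def normalized_residual_sum(shortcut, residual):
    # Because 0.05 + 0.95 = 1, a weighted sum of two branches with unit
    # q values (average squared activations) again has a unit q value.
    return SHORTCUT_WEIGHT * shortcut + RESIDUAL_WEIGHT * residual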
28.5.1 Gaussian fan-in initialization The Gaussian fan-in initialization (aka \u201cvariance scaling initialization\u201d or \u201cLecun initialization\u201d), which is discussed in Section 26.1, is the default initialization method used in many modern neural network frameworks, and is the \ufb01rst method we compare to. 109 \fMartens et al. From these results we can see that DKS signi\ufb01cantly outperforms this canonical approach, whose poor performance in this setting is not surprising given the analysis of Section 12. Note that it is common in practice to use a truncated Gaussian distribution or uniform distribution to sample the weights in a fan-in initialization, instead of the usual Gaussian distribution. When used with an appropriate rescaling term, these distributions produce weights with the same variance as the standard Gaussian distribution, although they won\u2019t necessarily give rise to the same approximate kernel functions. We ran additional experiments using these distributions, and found that they gave similar results to those presented above. 28.5.2 Glorot uniform initialization Glorot initialization (aka \u201cXavier initialization\u201d) is a commonly used modi\ufb01cation of the Gaussian fan-in initialization which we discuss in Section 26.1. As with the Gaussian fan-in method, it is also often used with a truncated Gaussian or uniform distribution, the latter of which we will present results for. (We also performed experiments using truncated and non-truncated Gaussian distributions for the weights, which yielded similar \ufb01ndings.) 110 \fDeep Kernel Shaping From these results we can see that the Glorot approach is signi\ufb01cantly outperformed by DKS, and completely fails to produce a trainable network for both the RELU and softplus activation functions. 28.5.3 LSUV and WLI The LSUV and WLI approaches, which are discussed in Section 26.2, represent the \ufb01rst generation of methods which attempt to capture the bene\ufb01ts of Batch Normalization through initialization. They are fairly similar in their implementation, which is why we consider them together here. 111 \fMartens et al. From these plots we can see that these methods outperform simple initializations schemes like fan-in and Glorot, but are still signi\ufb01cantly outperformed by DKS. 28.5.4 Self-normalizing neural networks Self-normalizing neural networks (which we discuss in Section 26.3) use SELU activation functions, together with a standard Gaussian fan-in initialization, to achieve certain conditions under variance propagation which are essentially equivalent to two of the four conditions enforced by DKS. 112 \fDeep Kernel Shaping From these results we see that DKS applied to a softplus network matches or exceeds the optimization performance of a self-normalizing network. DKS also improves the performance of a SELU network optimized with K-FAC, although slightly degrades it for SGD. 28.5.5 Looks linear method The Looks Linear method, which is discussed in Section 26.5, is an approach for constructing and initializing RELU networks which makes them behave like perfectly linear functions at initialization time, without the use of skip connections. The method is somewhat di\ufb03cult to fairly compare to other ones, as it involves doubling the channel dimension of each layer, 113 \fMartens et al. while using a form of weight sharing which makes the resulting network less expressive than a standard one of the same dimensions. 
Our imperfect solution to this problem is to use the original dimensions when constructing networks with DKS, which will disadvantage DKS in the comparison. We had some trouble optimizing the networks constructed with the Looks Linear method. K-FAC would quickly diverge for all the hyperparameter settings we tried, perhaps because it broke the delicate symmetry of the initial weights too quickly, leading to extreme nonlinear behavior. We had more luck with Adam and SGD, although we found that it was necessary to threshold the maximum update magnitude at 1 to achieve stable optimization (which is an approach known as \u201cclipping\u201d (Pascanu et al., 2013)). Because we couldn\u2019t get K-FAC to work well with the Looks Linear method, we used it with Adam instead in our \ufb01rst comparison. We note that with Adam, DKS performs similarly to the Looks Linear method, but when used with K-FAC, DKS signi\ufb01cantly outperforms it. 114 \fDeep Kernel Shaping For SGD both methods seem to perform similarly, and notably better than both the fan-in/Glorot initializations, and also the LSUV/WLI methods. We also conducted experiments with CIFAR-10, which yielded similar results. These are given below without commentary. 115 \fMartens et al. 28.5.6 Edge of Chaos (EOC) method The Edge of Chaos (EOC) method (described in detail in Section 26.4) is the closest approach to ours in the existing literature, and the one which directly inspired it. The version in Xiao et al. (2018), which we will use here, involves two ingredients: choosing variances for the weight and bias distributions so that C\u2032(1) = 1 for each local C map C, and using the Delta Orthogonal initialization for the weights (which is rescaled to achieve the target variance). A clean comparison to EOC is somewhat di\ufb03cult, as it is not fully speci\ufb01ed. In particular, for most activation functions there are in\ufb01nitely many combinations of the two variances which achieve C\u2032(1) = 1. And for the RELU activation function, the condition C\u2032(1) = 1 holds for any weight variance (given zero bias variance), so that the method reduces to an Orthogonal Delta initialization. Xiao et al. (2018) focused their experiments on tanh networks, and following their advice we will take the variance of the weights and biases to be 1.01/k and 1.654355 \u00b7 10\u22127 (respectively) for tanh nets, where k is the input channel dimension for the given layer. We will also consider RELU networks, with a weight variance of 2/k and a bias variance of 0. 116 \fDeep Kernel Shaping 117 \fMartens et al. From these results we see that DKS signi\ufb01cantly outperforms EOC in terms of optimization speed (for both Imagenet and CIFAR-10), which in turn outperforms the simple Fan-in initialization method. 28.5.7 Fix-up Fix-up, which we brie\ufb02y discuss in Section 26.6, is a recent method for constructing and initializing networks with residual connections which is designed to eliminate the need for normalization layers. It involves initializing the weights of the \ufb01nal convolutional layer in each residual block to zero (so that the residual blocks behave like identity functions at 118 \fDeep Kernel Shaping initialization), using a special formula for the variance of the weights distribution, as well as introducing learnable scalar multiplication and bias operations throughout the network. We were not able to get K-FAC to work well with Fix-up. 
This might have been due to a bad interaction with K-FAC and the extra parameters introduced by Fix-up (as KFAC is designed speci\ufb01cally for the standard neural network parameters). Another possible explanation is that, like networks created with the Looks Linear method, the larger steps taken by K-FAC cause Fix-up networks to transition too quickly to extreme nonlinear behavior (after being essentially linear at initialization). As Fix-up requires the use a skip connections, for the sake of fairness we compared it to BN-free networks constructed with DKS that also used skip connections. And because we couldn\u2019t get K-FAC to work well with Fix-up, we instead used Adam with Fix-up in our \ufb01rst comparison. (While this may seem unfair, we note that for networks with skip connections, K-FAC and SGD perform similarly, as shown in Subsection 28.3.) 119 \fMartens et al. From these results we see that Fix-up performs similarly to DKS for RELU activation functions, but falls behind for tanh and softplus. 28.6 Meta-parameter studies The in\ufb02uence of various training \u201cmeta-parameters\u201d on the optimization and generalization performance of DKS networks is considered in Appendix M. These meta-parameters include the weight on the residual branch when using skip connections, DKS\u2019s \u03b6 parameter, and the choice of optimizer. Our conclusions from these studies are summarized as follows: \u2022 When using DKS with skip connections, a weight of \u221a 0.05 on the residual branch works the best overall among several other sensible options, although this is likely to be contingent on details of the architecture (such as depth). \u2022 In terms of optimization performance, \u03b6 = 1.5 typically works better than values that are much larger, or much closer to 1, although the di\ufb00erence isn\u2019t very big. In terms of generalization performance, somewhat smaller values (such as 1.1) may work slightly better. \u2022 For networks without skip connections, K-FAC is the best optimizer in terms of speed, followed closely by Shampoo. Following that are Adam and then SGD, which both perform signi\ufb01cantly worse than Shampoo in this setting. For networks with skip connections, the gap between K-FAC and SGD narrows substantially. 28.7 Ablations and modi\ufb01cations of DKS Various ablations and modi\ufb01cations of DKS are considered in Appendix N. The overall conclusion of these studies is that each component of DKS, except perhaps for PLN (assuming reasonably well scaled input data), is required to achieve the highest optimization 120 \fDeep Kernel Shaping speed. When considering test error the conclusions are similar but somewhat muted, with the single exception that using weighted mean-pooling layers with K-FAC seems to improve test set performance while degrading training set performance. 29." + }, + { + "url": "http://arxiv.org/abs/1503.05671v7", + "title": "Optimizing Neural Networks with Kronecker-factored Approximate Curvature", + "abstract": "We propose an efficient method for approximating natural gradient descent in\nneural networks which we call Kronecker-Factored Approximate Curvature (K-FAC).\nK-FAC is based on an efficiently invertible approximation of a neural network's\nFisher information matrix which is neither diagonal nor low-rank, and in some\ncases is completely non-sparse. It is derived by approximating various large\nblocks of the Fisher (corresponding to entire layers) as being the Kronecker\nproduct of two much smaller matrices. 
While only several times more expensive\nto compute than the plain stochastic gradient, the updates produced by K-FAC\nmake much more progress optimizing the objective, which results in an algorithm\nthat can be much faster than stochastic gradient descent with momentum in\npractice. And unlike some previously proposed approximate\nnatural-gradient/Newton methods which use high-quality non-diagonal curvature\nmatrices (such as Hessian-free optimization), K-FAC works very well in highly\nstochastic optimization regimes. This is because the cost of storing and\ninverting K-FAC's approximation to the curvature matrix does not depend on the\namount of data used to estimate it, which is a feature typically associated\nonly with diagonal or low-rank approximations to the curvature matrix.", + "authors": "James Martens, Roger Grosse", + "published": "2015-03-19", + "updated": "2020-06-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "stat.ML" + ], + "main_content": "Introduction The problem of training neural networks is one of the most important and highly investigated ones in machine learning. Despite work on layer-wise pretraining schemes, and various sophisticated optimization methods which try to approximate Newton-Raphson updates or natural gradient updates, stochastic gradient descent (SGD), possibly augmented with momentum, remains the method of choice for large-scale neural network training (Sutskever et al., 2013). From the work on Hessian-free optimization (HF) (Martens, 2010) and related methods (e.g. \u2217jmartens@cs.toronto.edu \u2020rgrosse@cs.toronto.edu 1 arXiv:1503.05671v7 [cs.LG] 8 Jun 2020 \fVinyals and Povey, 2012) we know that updates computed using local curvature information can make much more progress per iteration than the scaled gradient. The reason that HF sees fewer practical applications than SGD are twofold. Firstly, its updates are much more expensive to compute, as they involve running linear conjugate gradient (CG) for potentially hundreds of iterations, each of which requires a matrix-vector product with the curvature matrix (which are as expensive to compute as the stochastic gradient on the current mini-batch). Secondly, HF\u2019s estimate of the curvature matrix must remain \ufb01xed while CG iterates, and thus the method is able to go through much less data than SGD can in a comparable amount of time, making it less well suited to stochastic optimizations. As discussed in Martens and Sutskever (2012) and Sutskever et al. (2013), CG has the potential to be much faster at local optimization than gradient descent, when applied to quadratic objective functions. Thus, insofar as the objective can be locally approximated by a quadratic, each step of CG could potentially be doing a lot more work than each iteration of SGD, which would result in HF being much faster overall than SGD. However, there are examples of quadratic functions (e.g. Li, 2005), characterized by curvature matrices with highly spread-out eigenvalue distributions, where CG will have no distinct advantage over well-tuned gradient descent with momentum. Thus, insofar as the quadratic functions being optimized by CG within HF are of this character, HF shouldn\u2019t in principle be faster than well-tuned SGD with momentum. The extent to which neural network objective functions give rise to such quadratics is unclear, although Sutskever et al. (2013) provides some preliminary evidence that they do. CG falls victim to this worst-case analysis because it is a \ufb01rst-order method. 
This motivates us to consider methods which don\u2019t rely on \ufb01rst-order methods like CG as their primary engines of optimization. One such class of methods which have been widely studied are those which work by directly inverting a diagonal, block-diagonal, or low-rank approximation to the curvature matrix (e.g. Becker and LeCun, 1989; Schaul et al., 2013; Zeiler, 2013; Le Roux et al., 2008; Ollivier, 2013). In fact, a diagonal approximation of the Fisher information matrix is used within HF as a preconditioner for CG. However, these methods provide only a limited performance improvement in practice, especially compared to SGD with momentum (see for example Schraudolph et al., 2007; Zeiler, 2013), and many practitioners tend to forgo them in favor of SGD or SGD with momentum. We know that the curvature associated with neural network objective functions is highly nondiagonal, and that updates which properly respect and account for this non-diagonal curvature, such as those generated by HF, can make much more progress minimizing the objective than the plain gradient or updates computed from diagonal approximations of the curvature (usually \u223c102 HF updates are required to adequately minimize most objectives, compared to the \u223c 104 \u2212105 required by methods that use diagonal approximations). Thus, if we had an ef\ufb01cient and direct way to compute the inverse of a high-quality non-diagonal approximation to the curvature matrix (i.e. without relying on \ufb01rst-order methods like CG) this could potentially yield an optimization method whose updates would be large and powerful like HF\u2019s, while being (almost) as cheap to compute as the stochastic gradient. 2 \fIn this work we develop such a method, which we call Kronecker-factored Approximate Curvature (K-FAC). We show that our method can be much faster in practice than even highly tuned implementations of SGD with momentum on certain standard neural network optimization benchmarks. The main ingredient in K-FAC is a sophisticated approximation to the Fisher information matrix, which despite being neither diagonal nor low-rank, nor even block-diagonal with small blocks, can be inverted very ef\ufb01ciently, and can be estimated in an online fashion using arbitrarily large subsets of the training data (without increasing the cost of inversion). This approximation is built in two stages. In the \ufb01rst, the rows and columns of the Fisher are divided into groups, each of which corresponds to all the weights in a given layer, and this gives rise to a block-partitioning of the matrix (where the blocks are much larger than those used by Le Roux et al. (2008) or Ollivier (2013)). These blocks are then approximated as Kronecker products between much smaller matrices, which we show is equivalent to making certain approximating assumptions regarding the statistics of the network\u2019s gradients. In the second stage, this matrix is further approximated as having an inverse which is either block-diagonal or block-tridiagonal. We justify this approximation through a careful examination of the relationships between inverse covariances, tree-structured graphical models, and linear regression. Notably, this justi\ufb01cation doesn\u2019t apply to the Fisher itself, and our experiments con\ufb01rm that while the inverse Fisher does indeed possess this structure (approximately), the Fisher itself does not. The rest of this paper is organized as follows. 
Section 2 gives basic background and notation for neural networks and the natural gradient. Section 3 describes our initial Kronecker product approximation to the Fisher. Section 4 describes our further block-diagonal and block-tridiagonal approximations of the inverse Fisher, and how these can be used to derive an ef\ufb01cient inversion algorithm. Section 5 describes how we compute online estimates of the quantities required by our inverse Fisher approximation over a large \u201cwindow\u201d of previously processed mini-batches (which makes K-FAC very different from methods like HF or KSD, which base their estimates of the curvature on a single mini-batch). Section 6 describes how we use our approximate Fisher to obtain a practical and robust optimization algorithm which requires very little manual tuning, through the careful application of various theoretically well-founded \u201cdamping\u201d techniques that are standard in the optimization literature. Note that damping techniques compensate both for the local quadratic approximation being implicitly made to the objective, and for our further approximation of the Fisher, and are non-optional for essentially any 2nd-order method like K-FAC to work properly, as is well established by both theory and practice within the optimization literature (Nocedal and Wright, 2006). Section 7 describes a simple and effective way of adding a type of \u201cmomentum\u201d to K-FAC, which we have found works very well in practice. Section 8 describes the computational costs associated with K-FAC, and various ways to reduce them to the point where each update is at most only several times more expensive to compute than the stochastic gradient. Section 9 gives complete high-level pseudocode for K-FAC. Section 10 characterizes a broad class of network transformations and reparameterizations to which K-FAC is essentially invariant. Section 3 \f11 considers some related prior methods for neural network optimization. Proofs of formal results are located in the appendix. 2 Background and notation 2.1 Neural Networks In this section we will de\ufb01ne the basic notation for feed-forward neural networks which we will use throughout this paper. Note that this presentation closely follows the one from Martens (2014). A neural network transforms its input a0 = x to an output f(x, \u03b8) = a\u2113through a series of \u2113layers, each of which consists of a bank of units/neurons. The units each receive as input a weighted sum of the outputs of units from the previous layer and compute their output via a nonlinear \u201cactivation\u201d function. We denote by si the vector of these weighted sums for the i-th layer, and by ai the vector of unit outputs (aka \u201cactivities\u201d). The precise computation performed at each layer i \u2208{1, . . . , \u2113} is given as follows: si = Wi\u00af ai\u22121 ai = \u03c6i(si) where \u03c6i is an element-wise nonlinear function, Wi is a weight matrix, and \u00af ai is de\ufb01ned as the vector formed by appending to ai an additional homogeneous coordinate with value 1. Note that we do not include explicit bias parameters here as these are captured implicitly through our use of homogeneous coordinates. In particular, the last column of each weight matrix Wi corresponds to what is usually thought of as the \u201cbias vector\u201d. Figure 1 illustrates our de\ufb01nition for \u2113= 2. We will de\ufb01ne \u03b8 = [vec(W1)\u22a4vec(W2)\u22a4. . . 
vec(W\u2113)\u22a4]\u22a4, which is the vector consisting of all of the network\u2019s parameters concatenated together, where vec is the operator which vectorizes matrices by stacking their columns together. We let L(y, z) denote the loss function which measures the disagreement between a prediction z made by the network, and a target y. The training objective function h(\u03b8) is the average (or expectation) of losses L(y, f(x, \u03b8)) with respect to a training distribution \u02c6 Qx,y over input-target pairs (x, y). h(\u03b8) is a proxy for the objective which we actually care about but don\u2019t have access to, which is the expectation of the loss taken with respect to the true data distribution Qx,y. We will assume that the loss is given by the negative log probability associated with a simple predictive distribution Ry|z for y parameterized by z, i.e. that we have L(y, z) = \u2212log r(y|z) where r is Ry|z\u2019s density function. This is the case for both the standard least-squares and crossentropy objective functions, where the predictive distributions are multivariate normal and multinomial, respectively. 4 \fWe will let Py|x(\u03b8) = Ry|f(x,\u03b8) denote the conditional distribution de\ufb01ned by the neural network, as parameterized by \u03b8, and p(y|x, \u03b8) = r(y|f(x, \u03b8)) its density function. Note that minimizing the objective function h(\u03b8) can be seen as maximum likelihood learning of the model Py|x(\u03b8). For convenience we will de\ufb01ne the following additional notation: Dv = dL(y, f(x, \u03b8)) dv = \u2212d log p(y|x, \u03b8) dv and gi = Dsi Algorithm 1 shows how to compute the gradient D\u03b8 of the loss function of a neural network using standard backpropagation. Algorithm 1 An algorithm for computing the gradient of the loss L(y, f(x, \u03b8)) for a given (x, y). Note that we are assuming here for simplicity that the \u03c6i are de\ufb01ned as coordinate-wise functions. input: a0 = x; \u03b8 mapped to (W1, W2, . . . , W\u2113). /* Forward pass */ for all i from 1 to \u2113do si \u2190Wi\u00af ai\u22121 ai \u2190\u03c6i(si) end for /* Loss derivative computation */ Da\u2113\u2190\u2202L(y, z) \u2202z \f \f \f \f z=a\u2113 /* Backwards pass */ for all i from \u2113downto 1 do gi \u2190Dai \u2299\u03c6\u2032 i(si) DWi \u2190gi\u00af a\u22a4 i\u22121 Dai\u22121 \u2190W \u22a4 i gi end for output: D\u03b8 = [vec(DW1)\u22a4vec(DW2)\u22a4. . . vec(DW\u2113)\u22a4]\u22a4 5 \fFigure 1: A depiction of a standard feed-forward neural network for \u2113= 2. 2.2 The Natural Gradient Because our network de\ufb01nes a conditional model Py|x(\u03b8), it has an associated Fisher information matrix (which we will simply call \u201cthe Fisher\u201d) which is given by F = E \" d log p(y|x, \u03b8) d\u03b8 d log p(y|x, \u03b8) d\u03b8 \u22a4# = E[D\u03b8D\u03b8\u22a4] Here, the expectation is taken with respect to the data distribution Qx over inputs x, and the model\u2019s predictive distribution Py|x(\u03b8) over y. Since we usually don\u2019t have access to Qx, and the above expectation would likely be intractable even if we did, we will instead compute F using the training distribution \u02c6 Qx over inputs x. The well-known natural gradient (Amari, 1998) is de\ufb01ned as F \u22121\u2207h(\u03b8). Motivated from the perspective of information geometry (Amari and Nagaoka, 2000), the natural gradient de\ufb01nes the direction in parameter space which gives the largest change in the objective per unit of change in the model, as measured by the KL-divergence. 
This is to be contrasted with the standard gradient, which can be de\ufb01ned as the direction in parameter space which gives the largest change in the objective per unit of change in the parameters, as measured by the standard Euclidean metric. The natural gradient also has links to several classical ideas from optimization. It can be shown (Martens, 2014; Pascanu and Bengio, 2014) that the Fisher is equivalent to the Generalized Gauss-Newton matrix (GGN) (Schraudolph, 2002; Martens and Sutskever, 2012) in certain important cases, which is a well-known positive semi-de\ufb01nite approximation to the Hessian of the objective function. In particular, (Martens, 2014) showed that when the GGN is de\ufb01ned so that the 6 \fnetwork is linearized up to the loss function, and the loss function corresponds to the negative log probability of observations under an exponential family model Ry|z with z representing the natural parameters, then the Fisher corresponds exactly to the GGN.1 The GGN has served as the curvature matrix of choice in HF and related methods, and so in light of its equivalence to the Fisher, these 2nd-order methods can be seen as approximate natural gradient methods. And perhaps more importantly from a practical perspective, natural gradientbased optimization methods can conversely be viewed as 2nd-order optimization methods, which as pointed out by Martens (2014)), brings to bare the vast wisdom that has accumulated about how to make such methods work well in both theory and practice (e.g Nocedal and Wright, 2006). In Section 6 we productively make use of these connections in order to design a robust and highly effective optimization method using our approximation to the natural gradient/Fisher (which is developed in Sections 3 and 4). For some good recent discussion and analysis of the natural gradient, see Arnold et al. (2011); Martens (2014); Pascanu and Bengio (2014). 3 A block-wise Kronecker-factored Fisher approximation The main computational challenge associated with using the natural gradient is computing F \u22121 (or its product with \u2207h). For large networks, with potentially millions of parameters, computing this inverse naively is computationally impractical. In this section we develop an initial approximation of F which will be a key ingredient in deriving our ef\ufb01ciently computable approximation to F \u22121 and the natural gradient. Note that D\u03b8 = [vec(DW1)\u22a4vec(DW2)\u22a4\u00b7 \u00b7 \u00b7 vec(DW\u2113)\u22a4]\u22a4and so F can be expressed as F = E \u0002 D\u03b8D\u03b8\u22a4\u0003 = E \u0002 [vec(DW1)\u22a4vec(DW2)\u22a4\u00b7 \u00b7 \u00b7 vec(DW\u2113)\u22a4]\u22a4[vec(DW1)\u22a4vec(DW2)\u22a4\u00b7 \u00b7 \u00b7 vec(DW\u2113)\u22a4] \u0003 = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 E \u0002 vec(DW1) vec(DW1)\u22a4\u0003 E \u0002 vec(DW1) vec(DW2)\u22a4\u0003 \u00b7 \u00b7 \u00b7 E \u0002 vec(DW1) vec(DW\u2113)\u22a4\u0003 E \u0002 vec(DW2) vec(DW1)\u22a4\u0003 E \u0002 vec(DW2) vec(DW2)\u22a4\u0003 \u00b7 \u00b7 \u00b7 E \u0002 vec(DW2) vec(DW\u2113)\u22a4\u0003 . . . . . . ... . . . E \u0002 vec(DW\u2113) vec(DW1)\u22a4\u0003 E \u0002 vec(DW\u2113) vec(DW2)\u22a4\u0003 \u00b7 \u00b7 \u00b7 E \u0002 vec(DW\u2113) vec(DW\u2113)\u22a4\u0003 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb Thus, we see that F can be viewed as an \u2113by \u2113block matrix, with the (i, j)-th block Fi,j given by Fi,j = E \u0002 vec(DWi) vec(DWj)\u22a4\u0003 . 
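As a small, self-contained sketch of how such blocks of F can be estimated from Algorithm 1, the NumPy code below builds a tiny two-layer tanh network with homogeneous coordinates, samples targets y from the model's own predictive distribution (a softmax over the output here), backpropagates to obtain Dθ, and averages Dθ Dθ⊤ over samples. The layer widths, the input distribution, and the choice of a softmax output are arbitrary illustrative assumptions, not the networks used elsewhere in this paper.

import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2 = 4, 3, 2                                       # layer widths (illustrative)
W1 = rng.standard_normal((d1, d0 + 1)) / np.sqrt(d0 + 1)   # +1 column: homogeneous coord
W2 = rng.standard_normal((d2, d1 + 1)) / np.sqrt(d1 + 1)

def append_one(a):
    return np.concatenate([a, [1.0]])

def sample_Dtheta(x):
    # Forward pass (as in Algorithm 1), with phi_1 = tanh and identity output.
    a0 = x
    s1 = W1 @ append_one(a0); a1 = np.tanh(s1)
    s2 = W2 @ append_one(a1); z = s2                        # logits
    # Sample y from the model's predictive distribution (softmax over logits).
    p = np.exp(z - z.max()); p /= p.sum()
    y = rng.multinomial(1, p)
    # Backward pass for the cross-entropy loss: dL/ds2 = p - y.
    g2 = p - y
    DW2 = np.outer(g2, append_one(a1))                      # DW_i = g_i abar_{i-1}^T
    Da1 = W2[:, :d1].T @ g2                                 # drop the homogeneous column
    g1 = Da1 * (1.0 - np.tanh(s1) ** 2)
    DW1 = np.outer(g1, append_one(a0))
    return np.concatenate([DW1.ravel(order="F"), DW2.ravel(order="F")])

# Monte Carlo estimate of F = E[Dtheta Dtheta^T] over inputs and model outputs.
samples = [sample_Dtheta(rng.standard_normal(d0)) for _ in range(5000)]
F_hat = np.mean([np.outer(d, d) for d in samples], axis=0)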
1Note that the condition that z represents the natural parameters might require one to formally include the nonlinear transformation usually performed by the \ufb01nal nonlinearity \u03c6\u2113of the network (such as the logistic-sigmoid transform before a cross-entropy error) as part of the loss function L instead. Equivalently, one could linearize the network only up to the input s\u2113to \u03c6\u2113when computing the GGN (see Martens and Sutskever (2012)). 7 \fNoting that DWi = gi\u00af a\u22a4 i\u22121 and that vec(uv\u22a4) = v \u2297u we have vec(DWi) = vec(gi\u00af a\u22a4 i\u22121) = \u00af ai\u22121 \u2297gi, and thus we can rewrite Fi,j as Fi,j = E \u0002 vec(DWi) vec(DWj)\u22a4\u0003 = E \u0002 (\u00af ai\u22121 \u2297gi)(\u00af aj\u22121 \u2297gj)\u22a4\u0003 = E \u0002 (\u00af ai\u22121 \u2297gi)(\u00af a\u22a4 j\u22121 \u2297g\u22a4 j ) \u0003 = E \u0002 \u00af ai\u22121\u00af a\u22a4 j\u22121 \u2297gig\u22a4 j \u0003 where A \u2297B denotes the Kronecker product between A \u2208Rm\u00d7n and B, and is given by A \u2297B \u2261 \uf8ee \uf8ef \uf8f0 [A]1,1B \u00b7 \u00b7 \u00b7 [A]1,nB . . . ... . . . [A]m,1B \u00b7 \u00b7 \u00b7 [A]m,nB \uf8f9 \uf8fa \uf8fb Note that the Kronecker product satis\ufb01es many convenient properties that we will make use of in this paper, especially the identity (A \u2297B)\u22121 = A\u22121 \u2297B\u22121. See Van Loan (2000) for a good discussion of the Kronecker product. Our initial approximation \u02dc F to F will be de\ufb01ned by the following block-wise approximation: Fi,j = E \u0002 \u00af ai\u22121\u00af a\u22a4 j\u22121 \u2297gig\u22a4 j \u0003 \u2248E \u0002 \u00af ai\u22121\u00af a\u22a4 j\u22121 \u0003 \u2297E \u0002 gig\u22a4 j \u0003 = \u00af Ai\u22121,j\u22121 \u2297Gi,j = \u02dc Fi,j (1) where \u00af Ai,j = E \u0002 \u00af ai\u00af a\u22a4 j \u0003 and Gi,j = E \u0002 gig\u22a4 j \u0003 . This gives \u02dc F = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 \u00af A0,0 \u2297G1,1 \u00af A0,1 \u2297G1,2 \u00b7 \u00b7 \u00b7 \u00af A0,\u2113\u22121 \u2297G1,\u2113 \u00af A1,0 \u2297G2,1 \u00af A1,1 \u2297G2,2 \u00b7 \u00b7 \u00b7 \u00af A1,\u2113\u22121 \u2297G2,\u2113 . . . . . . ... . . . \u00af A\u2113\u22121,0 \u2297G\u2113,1 \u00af A\u2113\u22121,1 \u2297G\u2113,2 \u00b7 \u00b7 \u00b7 \u00af A\u2113\u22121,\u2113\u22121 \u2297G\u2113,\u2113 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb which has the form of what is known as a Khatri-Rao product in multivariate statistics. The expectation of a Kronecker product is, in general, not equal to the Kronecker product of expectations, and so this is indeed a major approximation to make, and one which likely won\u2019t become exact under any realistic set of assumptions, or as a limiting case in some kind of asymptotic analysis. Nevertheless, it seems to be fairly accurate in practice, and is able to successfully capture the \u201ccoarse structure\u201d of the Fisher, as demonstrated in Figure 2 for an example network. As we will see in later sections, this approximation leads to signi\ufb01cant computational savings in terms of storage and inversion, which we will be able to leverage in order to design an ef\ufb01cient algorithm for computing an approximation to the natural gradient. 3.1 Interpretations of this approximation Consider an arbitrary pair of weights [Wi]k1,k2 and [Wj]k3,k4 from the network, where [\u00b7]i,j denotes the value of the (i, j)-th entry. 
We have that the corresponding derivatives of these weights are 8 \fFigure 2: A comparison of the exact Fisher F and our block-wise Kronecker-factored approximation \u02dc F, for the middle 4 layers of a standard deep neural network partially trained to classify a 16x16 down-scaled version of MNIST. The network was trained with 7 iterations of K-FAC in batch mode, achieving 5% error (the error reached 0% after 22 iterations) . The network architecture is 256-20-20-20-20-20-10 and uses standard tanh units. On the left is the exact Fisher F, in the middle is our approximation \u02dc F, and on the right is the difference of these. The dashed lines delineate the blocks. Note that for the purposes of visibility we plot the absolute values of the entries, with the white level corresponding linearly to the size of these values (up to some maximum, which is the same in each image). given by D[Wi]k1,k2 = \u00af a(1)g(1) and D[Wj]k3,k4 = \u00af a(2)g(2), where we denote for convenience \u00af a(1) = [\u00af ai\u22121]k1, \u00af a(2) = [\u00af aj\u22121]k3, g(1) = [gi]k2, and g(2) = [gj]k4. The approximation given by eqn. 1 is equivalent to making the following approximation for each pair of weights: E [D[Wi]k1,k2D[Wj]k3,k4] = E \u0002 (\u00af a(1)g(1))(\u00af a(2)g(2)) \u0003 = E \u0002 \u00af a(1)\u00af a(2) g(1)g(2)\u0003 \u2248E \u0002 \u00af a(1)\u00af a(2)\u0003 E \u0002 g(1)g(2)\u0003 (2) And thus one way to interpret the approximation in eqn. 1 is that we are assuming statistical independence between products \u00af a(1)\u00af a(2) of unit activities and products g(1)g(2) of unit input derivatives. Another more detailed interpretation of the approximation emerges by considering the following expression for the approximation error E \u0002 \u00af a(1)\u00af a(2) g(1)g(2)\u0003 \u2212E \u0002 \u00af a(1)\u00af a(2)\u0003 E \u0002 g(1)g(2)\u0003 (which is derived in the appendix): \u03ba(\u00af a(1), \u00af a(2), g(1), g(2)) + E[\u00af a(1)]\u03ba(\u00af a(2), g(1), g(2)) + E[\u00af a(2)]\u03ba(\u00af a(1), g(1), g(2)) (3) Here \u03ba(\u00b7) denotes the cumulant of its arguments. Cumulants are a natural generalization of the concept of mean and variance to higher orders, and indeed 1st-order cumulants are means and 2nd-order cumulants are covariances. Intuitively, cumulants of order k measure the degree to which the interaction between variables is intrinsically of order k, as opposed to arising from many lower-order interactions. A basic upper bound for the approximation error is |\u03ba(\u00af a(1), \u00af a(2), g(1), g(2))| + | E[\u00af a(1)]||\u03ba(\u00af a(2), g(1), g(2))| + | E[\u00af a(2)]||\u03ba(\u00af a(1), g(1), g(2))| (4) 9 \fwhich will be small if all of the higher-order cumulants are small (i.e. those of order 3 or higher). Note that in principle this upper bound may be loose due to possible cancellations between the terms in eqn. 3. Because higher-order cumulants are zero for variables jointly distributed according to a multivariate Gaussian, it follows that this upper bound on the approximation error will be small insofar as the joint distribution over \u00af a(1), \u00af a(2), g(1), and g(2) is well approximated by a multivariate Gaussian. And while we are not aware of an argument for why this should be the case in practice, it does seem to be the case that for the example network from Figure 2, the size of the error is well predicted by the size of the higher-order cumulants. 
In particular, the total approximation error, summed over all pairs of weights in the middle 4 layers, is 2894.4, and is of roughly the same size as the corresponding upper bound (4134.6), whose size is tied to that of the higher order cumulants (due to the impossibility of cancellations in eqn. 4). 4 Additional approximations to \u02dc F and inverse computations To the best of our knowledge there is no ef\ufb01cient general method for inverting a Khatri-Rao product like \u02dc F. Thus, we must make further approximations if we hope to obtain an ef\ufb01ciently computable approximation of the inverse Fisher. In the following subsections we argue that the inverse of \u02dc F can be reasonably approximated as having one of two special structures, either of which make it ef\ufb01ciently computable. The second of these will be slightly less restrictive than the \ufb01rst (and hence a better approximation) at the cost of some additional complexity. We will then show how matrix-vector products with these approximate inverses can be ef\ufb01ciently computed, which will thus give an ef\ufb01cient algorithm for computing an approximation to the natural gradient. 4.1 Structured inverses and the connection to linear regression Suppose we are given a multivariate distribution whose associated covariance matrix is \u03a3. De\ufb01ne the matrix B so that for i \u0338= j, [B]i,j is the coef\ufb01cient on the j-th variable in the optimal linear predictor of the i-th variable from all the other variables, and for i = j, [B]i,j = 0. Then de\ufb01ne the matrix D to be the diagonal matrix where [D]i,i is the variance of the error associated with such a predictor of the i-th variable. Pourahmadi (2011) showed that B and D can be obtained from the inverse covariance \u03a3\u22121 by the formulas [B]i,j = \u2212[\u03a3\u22121]i,j [\u03a3\u22121]i,i and [D]i,i = 1 [\u03a3\u22121]i,i 10 \ffrom which it follows that the inverse covariance matrix can be expressed as \u03a3\u22121 = D\u22121(I \u2212B) Intuitively, this result says that each row of the inverse covariance \u03a3\u22121 is given by the coef\ufb01cients of the optimal linear predictor of the i-th variable from the others, up to a scaling factor. So if the j-th variable is much less \u201cuseful\u201d than the other variables for predicting the i-th variable, we can expect that the (i, j)-th entry of the inverse covariance will be relatively small. Note that \u201cusefulness\u201d is a subtle property as we have informally de\ufb01ned it. In particular, it is not equivalent to the degree of correlation between the j-th and i-th variables, or any such simple measure. As a simple example, consider the case where the j-th variable is equal to the k-th variable plus independent Gaussian noise. Since any linear predictor can achieve a lower variance simply by shifting weight from the j-th variable to the k-th variable, we have that the j-th variable is not useful (and its coef\ufb01cient will thus be zero) in the task of predicting the i-th variable for any setting of i other than i = j or i = k. Noting that the Fisher F is a covariance matrix over D\u03b8 w.r.t. the model\u2019s distribution (because E[D\u03b8] = 0 by Lemma 4), we can thus apply the above analysis to the distribution over D\u03b8 to gain insight into the approximate structure of F \u22121, and by extension its approximation \u02dc F \u22121. Consider the derivative DWi of the loss with respect to the weights Wi of layer i. 
Intuitively, if we are trying to predict one of the entries of DWi from the other entries of D\u03b8, those entries also in DWi will likely be the most useful in this regard. Thus, it stands to reason that the largest entries of \u02dc F \u22121 will be those on the diagonal blocks, so that \u02dc F \u22121 will be well approximated as block-diagonal, with each block corresponding to a different DWi. Beyond the other entries of DWi, it is the entries of DWi+1 and DWi\u22121 (i.e. those associated with adjacent layers) that will arguably be the most useful in predicting a given entry of DWi. This is because the true process for computing the loss gradient only uses information from the layer below (during the forward pass) and from the layer above (during the backwards pass). Thus, approximating \u02dc F \u22121 as block-tridiagonal seems like a reasonable and milder alternative than taking it to be block-diagonal. Indeed, this approximation would be exact if the distribution over D\u03b8 were given by a directed graphical model which generated each of the DWi\u2019s, one layer at a time, from either DWi+1 or DWi\u22121. Or equivalently, if DWi were distributed according to an undirected Gaussian graphical model with binary potentials only between entries in the same or adjacent layers. Both of these models are depicted in Figure 4. Now while in reality the DWi\u2019s are generated using information from adjacent layers according to a process that is neither linear nor Gaussian, it nonetheless stands to reason that their joint statistics might be reasonably approximated by such a model. In fact, the idea of approximating the distribution over loss gradients with a directed graphical model forms the basis of the recent FANG method of Grosse and Salakhutdinov (2015). Figure 3 examines the extent to which the inverse Fisher is well approximated as blockdiagonal or block-tridiagonal for an example network. 11 \fFigure 3: A comparison of our block-wise Kronecker-factored approximation \u02dc F, and its inverse, using the example neural network from Figure 2. On the left is \u02dc F, in the middle is its exact inverse, and on the right is a 4x4 matrix containing the averages of the absolute values of the entries in each block of the inverse. As predicted by our theory, the inverse exhibits an approximate block-tridiagonal structure, whereas \u02dc F itself does not. Note that the corresponding plots for the exact F and its inverse look similar. The very small blocks visible on the diagonal of the inverse each correspond to the weights on the outgoing connections of a particular unit. The inverse was computed subject to the factored Tikhonov damping technique described in Sections 6.3 and 6.6, using the same value of \u03b3 that was used by K-FAC at the iteration from which this example was taken (see Figure 2). Note that for the purposes of visibility we plot the absolute values of the entries, with the white level corresponding linearly to the size of these values (up to some maximum, which is chosen differently for the Fisher approximation and its inverse, due to the highly differing scales of these matrices). 12 \fIn the following two subsections we show how both the block-diagonal and block-tridiagonal approximations to \u02dc F \u22121 give rise to computationally ef\ufb01cient methods for computing matrix-vector products with it. And at the end of Section 4 we present two \ufb01gures (Figures 5 and 6) which examine the quality of these approximations for an example network. 
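Before moving on, the following short NumPy check (ours; the covariance matrix is just an arbitrary random positive-definite matrix) illustrates the structured-inverse result quoted in Section 4.1: it forms B and D from the precision matrix, confirms the factorization of the inverse covariance as D^{-1}(I - B), and confirms that a row of B holds the optimal linear-regression coefficients for predicting that variable from the others.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)        # an arbitrary well-conditioned covariance
P = np.linalg.inv(Sigma)               # precision matrix Sigma^{-1}

# B and D as defined from the precision matrix (Pourahmadi, 2011).
B = -P / np.diag(P)[:, None]
np.fill_diagonal(B, 0.0)
D = np.diag(1.0 / np.diag(P))

# Check the factorization Sigma^{-1} = D^{-1} (I - B).
assert np.allclose(np.linalg.inv(D) @ (np.eye(n) - B), P)

# Check that row i of B holds the optimal linear-predictor coefficients for
# variable i given the remaining variables: Sigma[i, rest] @ inv(Sigma[rest, rest]).
i = 2
rest = [j for j in range(n) if j != i]
coef = Sigma[i, rest] @ np.linalg.inv(Sigma[np.ix_(rest, rest)])
assert np.allclose(B[i, rest], coef)
print("structured-inverse identities verified")
```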
4.2 Approximating \u02dc F \u22121 as block-diagonal Approximating \u02dc F \u22121 as block-diagonal is equivalent to approximating \u02dc F as block-diagonal. A natural choice for such an approximation \u02d8 F of \u02dc F, is to take the block-diagonal of \u02d8 F to be that of \u02dc F. This gives the matrix \u02d8 F = diag \u0010 \u02dc F1,1, \u02dc F2,2, . . . , , \u02dc F\u2113,\u2113 \u0011 = diag \u0000 \u00af A0,0 \u2297G1,1, \u00af A1,1 \u2297G2,2, . . . , \u00af A\u2113\u22121,\u2113\u22121 \u2297G\u2113,\u2113 \u0001 Using the identity (A \u2297B)\u22121 = A\u22121 \u2297B\u22121 we can easily compute the inverse of \u02d8 F as \u02d8 F \u22121 = diag \u0000 \u00af A\u22121 0,0 \u2297G\u22121 1,1, \u00af A\u22121 1,1 \u2297G\u22121 2,2, . . . , \u00af A\u22121 \u2113\u22121,\u2113\u22121 \u2297G\u22121 \u2113,\u2113 \u0001 Thus, computing \u02d8 F \u22121 amounts to computing the inverses of 2\u2113smaller matrices. Then to compute u = \u02d8 F \u22121v, we can make use of the well-known identity (A \u2297B) vec(X) = vec(BXA\u22a4) to get Ui = G\u22121 i,i Vi \u00af A\u22121 i\u22121,i\u22121 where v maps to (V1, V2, . . . , V\u2113) and u maps to (U1, U2, . . . , U\u2113) in an analogous way to how \u03b8 maps to (W1, W2, . . . , W\u2113). Note that block-diagonal approximations to the Fisher information have been proposed before in TONGA (Le Roux et al., 2008), where each block corresponds to the weights associated with a particular unit. In our block-diagonal approximation, the blocks correspond to all the parameters in a given layer, and are thus much larger. In fact, they are so large that they would be impractical to invert as general matrices. 4.3 Approximating \u02dc F \u22121 as block-tridiagonal Note that unlike in the above block-diagonal case, approximating \u02dc F \u22121 as block-tridiagonal is not equivalent to approximating \u02dc F as block-tridiagonal. Thus we require a more sophisticated approach to deal with such an approximation. We develop such an approach in this subsection. To start, we will de\ufb01ne \u02c6 F to be the matrix which agrees with \u02dc F on the tridiagonal blocks, and which satis\ufb01es the property that \u02c6 F \u22121 is block-tridiagonal. Note that this de\ufb01nition implies certain values for the off-tridiagonal blocks of \u02c6 F which will differ from those of \u02dc F insofar as \u02dc F \u22121 is not actually block-tridiagonal. 13 \f. . . . . . Figure 4: A diagram depicting the UGGM corresponding to \u02c6 F \u22121 and its equivalent DGGM. The UGGM\u2019s edges are labeled with the corresponding weights of the model (these are distinct from the network\u2019s weights). Here, ( \u02c6 F \u22121)i,j denotes the (i, j)-th block of \u02c6 F \u22121. The DGGM\u2019s edges are labeled with the matrices that specify the linear mapping from the source node to the conditional mean of the destination node (whose conditional covariance is given by its label). To establish that such a matrix \u02c6 F is well de\ufb01ned and can be inverted ef\ufb01ciently, we \ufb01rst observe that assuming that \u02c6 F \u22121 is block-tridiagonal is equivalent to assuming that it is the precision matrix of an undirected Gaussian graphical model (UGGM) over D\u03b8 (as depicted in Figure 4), whose density function is proportional to exp(\u2212D\u03b8\u22a4\u02c6 F \u22121D\u03b8). 
As this graphical model has a tree structure, there is an equivalent directed graphical model with the same distribution and the same (undirected) graphical structure (e.g. Bishop, 2006), where the directionality of the edges is given by a directed acyclic graph (DAG). Moreover, this equivalent directed model will also be linear/Gaussian, and hence a directed Gaussian Graphical model (DGGM). Next we will show how the parameters of such a DGGM corresponding to \u02c6 F can be ef\ufb01ciently recovered from the tridiagonal blocks of \u02c6 F, so that \u02c6 F is uniquely determined by these blocks (and hence well-de\ufb01ned). We will assume here that the direction of the edges is from the higher layers to the lower ones. Note that a different choice for these directions would yield a super\ufb01cially different algorithm for computing the inverse of \u02c6 F that would nonetheless yield the same output. For each i, we will denote the conditional covariance matrix of vec(DWi) on vec(DWi+1) by \u03a3i|i+1 and the linear coef\ufb01cients from vec(DWi+1) to vec(DWi) by the matrix \u03a8i,i+1, so that the conditional distributions de\ufb01ning the model are vec(DWi) \u223cN \u0000\u03a8i,i+1vec(DWi+1), \u03a3i|i+1 \u0001 and vec(DW\u2113) \u223cN \u0010 \u20d7 0, \u03a3\u2113 \u0011 Since \u03a3\u2113is just the covariance of vec(DW\u2113), it is given simply by \u02c6 F\u2113,\u2113= \u02dc F\u2113,\u2113. And for i \u2264\u2113\u22121, we can see that \u03a8i,i+1 is given by \u03a8i,i+1 = \u02c6 Fi,i+1 \u02c6 F \u22121 i+1,i+1 = \u02dc Fi,i+1 \u02dc F \u22121 i+1,i+1 = \u0000 \u00af Ai\u22121,i \u2297Gi,i+1 \u0001 \u0000 \u00af Ai,i \u2297Gi+1,i+1 \u0001\u22121 = \u03a8 \u00af A i\u22121,i \u2297\u03a8G i,i+1 14 \fwhere \u03a8 \u00af A i\u22121,i = \u00af Ai\u22121,i \u00af A\u22121 i,i and \u03a8G i,i+1 = Gi,i+1G\u22121 i+1,i+1 The conditional covariance \u03a3i|i+1 is thus given by \u03a3i|i+1 = \u02c6 Fi,i \u2212\u03a8i,i+1 \u02c6 Fi+1,i+1\u03a8\u22a4 i,i+1 = \u02dc Fi,i \u2212\u03a8i,i+1 \u02dc Fi+1,i+1\u03a8\u22a4 i,i+1 = \u00af Ai\u22121,i\u22121 \u2297Gi,i \u2212\u03a8 \u00af A i\u22121,i \u00af Ai,i\u03a8 \u00af A\u22a4 i\u22121,i \u2297\u03a8G i,i+1Gi+1,i+1\u03a8G\u22a4 i,i+1 Following the work of Grosse and Salakhutdinov (2015), we use the block generalization of well-known \u201cCholesky\u201d decomposition of the precision matrix of DGGMs (Pourahmadi, 1999), which gives \u02c6 F \u22121 = \u039e\u22a4\u039b\u039e where, \u039b = diag \u0010 \u03a3\u22121 1|2, \u03a3\u22121 2|3, . . . , \u03a3\u22121 \u2113\u22121|\u2113, \u03a3\u22121 \u2113 \u0011 and \u039e = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 I \u2212\u03a81,2 I \u2212\u03a82,3 I ... ... \u2212\u03a8\u2113\u22121,\u2113 I \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb Thus, matrix-vector multiplication with \u02c6 F \u22121 amounts to performing matrix-vector multiplication by \u039e, followed by \u039b, and then by \u039e\u22a4. As in the block-diagonal case considered in the previous subsection, matrix-vector products with \u039e (and \u039e\u22a4) can be ef\ufb01ciently computed using the well-known identity (A \u2297B)\u22121 = A\u22121 \u2297 B\u22121. 
In particular, u = \u039e\u22a4v can be computed as Ui = Vi \u2212\u03a8G\u22a4 i\u22121,iVi\u22121\u03a8 \u00af A i\u22122,i\u22121 and U1 = V1 and similarly u = \u039ev can be computed as Ui = Vi \u2212\u03a8G i,i+1Vi+1\u03a8 \u00af A\u22a4 i\u22121,i and U\u2113= V\u2113 where the Ui\u2019s and Vi\u2019s are de\ufb01ned in terms of u and v as in the previous subsection. Multiplying a vector v by \u039b amounts to multiplying each vec(Vi) by the corresponding \u03a3\u22121 i|i+1. This is slightly tricky because \u03a3i|i+1 is the difference of Kronecker products, so we cannot use the straightforward identity (A \u2297B)\u22121 = A\u22121 \u2297B\u22121. Fortunately, there are ef\ufb01cient techniques for inverting such matrices which we discuss in detail in Appendix B. 15 \f4.4 Examining the approximation quality Figures 5 and 6 examine the quality of the approximations \u02d8 F and \u02c6 F of \u02dc F, which are derived by approximating \u02dc F \u22121 as block-diagonal and block-tridiagonal (resp.), for an example network. From Figure 5, which compares \u02d8 F and \u02c6 F directly to \u02dc F, we can see that while \u02d8 F and \u02c6 F exactly capture the diagonal and tridiagonal blocks (resp.) of \u02dc F, as they must by de\ufb01nition, \u02c6 F ends up approximating the off-tridiagonal blocks of \u02dc F very well too. This is likely owed to the fact that the approximating assumption used to derive \u02c6 F, that \u02dc F \u22121 is block-tridiagonal, is a very reasonable one in practice (judging by Figure 3). Figure 6, which compares \u02d8 F \u22121 and \u02c6 F \u22121 to \u02dc F \u22121, paints an arguably more interesting and relevant picture, as the quality of the approximation of the natural gradient will be roughly proportional2 to the quality of approximation of the inverse Fisher. We can see from this \ufb01gure that due to the approximate block-diagonal structure of \u02dc F \u22121, \u02d8 F \u22121 is actually a reasonably good approximation of \u02dc F \u22121, despite \u02d8 F being a rather poor approximation of \u02dc F (based on Figure 5). Meanwhile, we can see that by accounting for the tri-diagonal blocks, \u02c6 F \u22121 is indeed a signi\ufb01cantly better approximation of \u02dc F \u22121 than \u02d8 F \u22121 is, even on the diagonal blocks. 2The error in any approximation F \u22121 0 \u2207h of the natural gradient F \u22121\u2207h will be roughly proportional to the error in the approximation F \u22121 0 of the associated inverse Fisher F \u22121, since \u2225F \u22121\u2207h \u2212F \u22121 0 \u2207h\u2225\u2264\u2225\u2207h\u2225\u2225F \u22121 \u2212F \u22121 0 \u2225. 16 \fFigure 5: A comparison of our block-wise Kronecker-factored approximation \u02dc F, and its approximations \u02d8 F and \u02c6 F (which are based on approximating the inverse \u02dc F \u22121 as either block-diagonal or block-tridiagonal, respectively), using the example neural network from Figure 2. On the left is \u02dc F, in the middle its approximation, and on the right is the absolute difference of these. The top row compares to \u02d8 F and the bottom row compares to \u02c6 F. While the diagonal blocks of the top right matrix, and the tridiagonal blocks of the bottom right matrix are exactly zero due to how \u02d8 F and \u02c6 F (resp.) are constructed, the off-tridiagonal blocks of the bottom right matrix, while being very close to zero, are actually non-zero (which is hard to see from the plot). 
Note that for the purposes of visibility we plot the absolute values of the entries, with the white level corresponding linearly to the size of these values (up to some maximum, which is the same in each image). 17 \fFigure 6: A comparison of the exact inverse \u02dc F \u22121 of our block-wise Kronecker-factored approximation \u02dc F, and its block-diagonal and block-tridiagonal approximations \u02d8 F \u22121 and \u02c6 F \u22121 (resp.), using the example neural network from Figure 2. On the left is \u02dc F \u22121, in the middle its approximation, and on the right is the absolute difference of these. The top row compares to \u02d8 F \u22121 and the bottom row compares to \u02c6 F \u22121. The inverse was computed subject to the factored Tikhonov damping technique described in Sections 6.3 and 6.6, using the same value of \u03b3 that was used by K-FAC at the iteration from which this example was taken (see Figure 2). Note that for the purposes of visibility we plot the absolute values of the entries, with the white level corresponding linearly to the size of these values (up to some maximum, which is the same in each image). 5 Estimating the required statistics Recall that \u00af Ai,j = E \u0002 \u00af ai\u00af a\u22a4 j \u0003 and Gi,j = E \u0002 gig\u22a4 j \u0003 . Both approximate Fisher inverses discussed in Section 4 require some subset of these. In particular, the block-diagonal approximation requires them for i = j, while the block-tridiagonal approximation requires them for j \u2208{i, i + 1} (noting that \u00af A\u22a4 i,j = \u00af Aj,i and G\u22a4 i,j = Gj,i). Since the \u00af ai\u2019s don\u2019t depend on y, we can take the expectation E \u0002 \u00af ai\u00af a\u22a4 j \u0003 with respect to just the training distribution \u02c6 Qx over the inputs x. On the other hand, the gi\u2019s do depend on y, and so the expectation3 E \u0002 gig\u22a4 j \u0003 must be taken with respect to both \u02c6 Qx and the network\u2019s predictive 3It is important to note this expectation should not be taken with respect to the training/data distribution over y (i.e. 18 \fdistribution Py|x. While computing matrix-vector products with the Gi,j could be done exactly and ef\ufb01ciently for a given input x (or small mini-batch of x\u2019s) by adapting the methods of Schraudolph (2002), there doesn\u2019t seem to be a suf\ufb01ciently ef\ufb01cient method for computing the entire matrix itself. Indeed, the hardness results of Martens et al. (2012) suggest that this would require, for each example x in the mini-batch, work that is asymptotically equivalent to matrix-matrix multiplication involving matrices the same size as Gi,j. While a small constant number of such multiplications is arguably an acceptable cost (see Section 8), a number which grows with the size of the mini-batch would not be. Instead, we will approximate the expectation over y by a standard Monte-Carlo estimate obtained by sampling y\u2019s from the network\u2019s predictive distribution and then rerunning the backwards phase of backpropagation (see Algorithm 1) as if these were the training targets. Note that computing/estimating the required \u00af Ai,j/Gi,j\u2019s involves computing averages over outer products of various \u00af ai\u2019s from network\u2019s usual forward pass, and gi\u2019s from the modi\ufb01ed backwards pass (with targets sampled as above). 
Thus we can compute/estimate these quantities on the same input data used to compute the gradient \u2207h, at the cost of one or more additional backwards passes, and a few additional outer-product averages. Fortunately, this turns out to be quite inexpensive, as we have found that just one modi\ufb01ed backwards pass is suf\ufb01cient to obtain a good quality estimate in practice, and the required outer-product averages are similar to those already used to compute the gradient in the usual backpropagation algorithm. In the case of online/stochastic optimization we have found that the best strategy is to maintain running estimates of the required \u00af Ai,j\u2019s and Gi,j\u2019s using a simple exponentially decaying averaging scheme. In particular, we take the new running estimate to be the old one weighted by \u03f5, plus the estimate on the new mini-batch weighted by 1 \u2212\u03f5, for some 0 \u2264\u03f5 < 1. In our experiments we used \u03f5 = min{1 \u22121/k, 0.95}, where k is the iteration number. Note that the more naive averaging scheme where the estimates from each iteration are given equal weight would be inappropriate here. This is because the \u00af Ai,j\u2019s and Gi,j\u2019s depend on the network\u2019s parameters \u03b8, and these will slowly change over time as optimization proceeds, so that estimates computed many iterations ago will become stale. This kind of exponentially decaying averaging scheme is commonly used in methods involving diagonal or block-diagonal approximations (with much smaller blocks than ours) to the curvature matrix (e.g. LeCun et al., 1998; Park et al., 2000; Schaul et al., 2013). Such schemes have the desirable property that they allow the curvature estimate to depend on much more data than can be \u02c6 Qy|x or Qy|x). Using the training/data distribution for y would perhaps give an approximation to a quantity known as the \u201cempirical Fisher information matrix\u201d, which lacks the previously discussed equivalence to the Generalized GaussNewton matrix, and would not be compatible with the theoretical analysis performed in Section 3.1 (in particular, Lemma 4 would break down). Moreover, such a choice would not give rise to what is usually thought of as the natural gradient, and based on the \ufb01ndings of Martens (2010), would likely perform worse in practice as part of an optimization algorithm. See Martens (2014) for a more detailed discussion of the empirical Fisher and reasons why it may be a poor choice for a curvature matrix compared to the standard Fisher. 19 \freasonably processed in a single mini-batch. Notably, for methods like HF which deal with the exact Fisher indirectly via matrix-vector products, such a scheme would be impossible to implement ef\ufb01ciently, as the exact Fisher matrix (or GGN) seemingly cannot be summarized using a compact data structure whose size is independent of the amount of data used to estimate it. Indeed, it seems that the only representation of the exact Fisher which would be independent of the amount of data used to estimate it would be an explicit n\u00d7n matrix (which is far too big to be practical). Because of this, HF and related methods must base their curvature estimates only on subsets of data that can be reasonably processed all at once, which limits their effectiveness in the stochastic optimization regime. 
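A minimal sketch (ours; the class and variable names are illustrative, shapes assume a weight matrix of size d_i x (d_{i-1}+1), and a plain ridge term stands in for the damping of Section 6) of how these running estimates combine with the block-diagonal inverse of Section 4.2 to produce a layer's approximate natural gradient:

```python
import numpy as np

class LayerFactors:
    """Running Kronecker factors A_bar = E[a_bar a_bar^T] and G = E[g g^T] for one layer."""
    def __init__(self, d_in_plus_1, d_out):
        self.A = np.zeros((d_in_plus_1, d_in_plus_1))
        self.G = np.zeros((d_out, d_out))

    def update(self, a_bar, g_sampled, k):
        # a_bar: (m, d_in+1) homogeneous activations from the forward pass
        # g_sampled: (m, d_out) derivatives from the extra backwards pass whose
        #            targets are drawn from the model's predictive distribution
        m = a_bar.shape[0]
        eps = min(1.0 - 1.0 / k, 0.95)            # decay schedule from this section
        self.A = eps * self.A + (1 - eps) * (a_bar.T @ a_bar) / m
        self.G = eps * self.G + (1 - eps) * (g_sampled.T @ g_sampled) / m

    def nat_grad(self, grad_W, ridge=1e-3):
        # Block-diagonal approximate natural gradient for this layer (Section 4.2):
        # U = G^{-1} (grad_W) A_bar^{-1}; the small ridge is a crude stand-in
        # for the factored Tikhonov damping described later.
        G_inv = np.linalg.inv(self.G + ridge * np.eye(self.G.shape[0]))
        A_inv = np.linalg.inv(self.A + ridge * np.eye(self.A.shape[0]))
        return G_inv @ grad_W @ A_inv

# toy usage with random data standing in for a real forward/backward pass
rng = np.random.default_rng(2)
lf = LayerFactors(d_in_plus_1=6, d_out=4)
for k in range(1, 4):
    lf.update(rng.standard_normal((32, 6)), rng.standard_normal((32, 4)), k)
grad_W = rng.standard_normal((4, 6))       # gradient of h w.r.t. this layer's weights
print(lf.nat_grad(grad_W).shape)           # (4, 6)
```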
6 Update damping 6.1 Background and motivation The idealized natural gradient approach is to follow the smooth path in the Riemannian manifold (implied by the Fisher information matrix viewed as a metric tensor) that is generated by taking a series of in\ufb01nitesimally small steps (in the original parameter space) in the direction of the natural gradient (which gets recomputed at each point). While this is clearly impractical as a real optimization method, one can take larger steps and still follow these paths approximately. But in our experience, to obtain an update which satis\ufb01es the minimal requirement of not worsening the objective function value, it is often the case that one must make the step size so small that the resulting optimization algorithm performs poorly in practice. The reason that the natural gradient can only be reliably followed a short distance is that it is de\ufb01ned merely as an optimal direction (which trades off improvement in the objective versus change in the predictive distribution), and not a discrete update. Fortunately, as observed by Martens (2014), the natural gradient can be understood using a more traditional optimization-theoretic perspective which implies how it can be used to generate updates that will be useful over larger distances. In particular, when Ry|z is an exponential family model with z as its natural parameters (as it will be in our experiments), Martens (2014) showed that the Fisher becomes equivalent to the Generalized Gauss-Newton matrix (GGN), which is a positive semi-de\ufb01nite approximation of the Hessian of h. Additionally, there is the well-known fact that when L(x, f(x, \u03b8)) is the negative log-likelihood function associated with a given (x, y) pair (as we are assuming in this work), the Hessian H of h and the Fisher F are closely related in the sense H is the expected Hessian of L under the training distribution \u02c6 Qx,y, while F is the expected Hessian of L under the model\u2019s distribution Px,y (de\ufb01ned by the density p(x, y) = p(y|x)q(x)). From these observations it follows that M(\u03b4) = 1 2\u03b4\u22a4F\u03b4 + \u2207h(\u03b8)\u22a4\u03b4 + h(\u03b8) (5) 20 \fcan be viewed as a convex approximation of the 2nd-order Taylor series of expansion of h(\u03b4 + \u03b8), whose minimizer \u03b4\u2217is the (negative) natural gradient \u2212F \u22121\u2207h(\u03b8). Note that if we add an \u21132 or \u201cweight-decay\u201d regularization term to h of the form \u03b7 2\u2225\u03b8\u22252 2, then similarly F +\u03b7I can be viewed as an approximation of the Hessian of h, and replacing F with F + \u03b7I in M(\u03b4) yields an approximation of the 2nd-order Taylor series, whose minimizer is a kind of \u201cregularized\u201d (negative) natural gradient \u2212(F + \u03b7I)\u22121\u2207h(\u03b8), which is what we end up using in practice. From the interpretation of the natural gradient as the minimizer of M(\u03b4), we can see that it fails to be useful as a local update only insofar as M(\u03b4) fails to be a good local approximation to h(\u03b4 + \u03b8). And so as argued by Martens (2014), it is natural to make use of the various \u201cdamping\u201d techniques that have been developed in the optimization literature for dealing with the breakdowns in local quadratic approximations that inevitably occur during optimization. 
Notably, this breakdown usually won\u2019t occur in the \ufb01nal \u201clocal convergence\u201d stage of optimization where the function becomes well approximated as a convex quadratic within a suf\ufb01ciently large neighborhood of the local optimum. This is the phase traditionally analyzed in most theoretical results, and while it is important that an optimizer be able to converge well in this \ufb01nal phase, it is arguably much more important from a practical standpoint that it behaves sensibly before this phase. This initial \u201cexploration phase\u201d (Darken and Moody, 1990) is where damping techniques help in ways that are not apparent from the asymptotic convergence theorems alone, which is not to say there are not strong mathematical arguments that support their use (see Nocedal and Wright, 2006). In particular, in the exploration phase it will often still be true that h(\u03b8 + \u03b4) is accurately approximated by a convex quadratic locally within some region around \u03b4 = 0, and that therefor optimization can be most ef\ufb01ciently performed by minimizing a sequence of such convex quadratic approximations within adaptively sized local regions. Note that well designed damping techniques, such as the ones we will employ, automatically adapt to the local properties of the function, and effectively \u201cturn themselves off\u201d when the quadratic model becomes a suf\ufb01ciently accurate local approximation of h, allowing the optimizer to achieve the desired asymptotic convergence behavior (Mor\u00b4 e, 1978). Successful and theoretically well-founded damping techniques include Tikhonov damping (aka Tikhonov regularization, which is closely connected to the trust-region method) with LevenbergMarquardt style adaptation (Mor\u00b4 e, 1978), line-searches, and trust regions, truncation, etc., all of which tend to be much more effective in practice than merely applying a learning rate to the update, or adding a \ufb01xed multiple of the identity to the curvature matrix. Indeed, a subset of these techniques was exploited in the work of Martens (2010), and primitive versions of them have appeared implicitly in older works such as Becker and LeCun (1989), and also in many recent diagonal methods like that of Zeiler (2013), although often without a good understanding of what they are doing and why they help. Crucially, more powerful 2nd-order optimizers like HF and K-FAC, which have the capability of taking much larger steps than 1st-order methods (or methods which use diagonal curvature matrices), require more sophisticated damping solutions to work well, and will usually completely fail without them, which is consistent with predictions made in various theoretical analyses (e.g. 21 \fNocedal and Wright, 2006). As an analogy one can think of such powerful 2nd-order optimizers as extremely fast racing cars that need more sophisticated control systems than standard cars to prevent them from \ufb02ying off the road. Arguably one of the reasons why high-powered 2nd-order optimization methods have historically tended to under-perform in machine learning applications, and in neural network training in particular, is that their designers did not understand or take seriously the issue of quadratic model approximation quality, and did not employ the more sophisticated and effective damping techniques that are available to deal with this issue. 
For a detailed review and discussion of various damping techniques and their crucial role in practical 2nd-order optimization methods, we refer the reader to Martens and Sutskever (2012). 6.2 A highly effective damping scheme for K-FAC Methods like HF which use the exact Fisher seem to work reasonably well with an adaptive Tikhonov regularization technique where \u03bbI is added to F + \u03b7I, and where \u03bb is adapted according to Levenberg-Marquardt style adjustment rule. This common and well-studied method can be shown to be equivalent to imposing an adaptive spherical region (known as a \u201ctrust region\u201d) which constrains the optimization of the quadratic model (e.g Nocedal and Wright, 2006). However, we found that this simple technique is insuf\ufb01cient when used with our approximate natural gradient update proposals. In particular, we have found that there never seems to be a \u201cgood\u201d choice for \u03bb that gives rise to updates which are of a quality comparable to those produced by methods that use the exact Fisher, such as HF. One possible explanation for this \ufb01nding is that, unlike quadratic models based on the exact Fisher (or equivalently, the GGN), the one underlying K-FAC has no guarantee of being accurate up to 2nd-order. Thus, \u03bb must remain large in order to compensate for this intrinsic 2nd-order inaccuracy of the model, which has the side effect of \u201cwashing out\u201d the small eigenvalues (which represent important low-curvature directions). Fortunately, through trial and error, we were able to \ufb01nd a relatively simple and highly effective damping scheme, which combines several different techniques, and which works well within K-FAC. Our scheme works by computing an initial update proposal using a version of the above described adaptive Tikhonov damping/regularization method, and then re-scaling this according to quadratic model computed using the exact Fisher. This second step is made practical by the fact that it only requires a single matrix-vector product with the exact Fisher, and this can be computed ef\ufb01ciently using standard methods. We discuss the details of this scheme in the following subsections. 6.3 A factored Tikhonov regularization technique In the \ufb01rst stage of our damping scheme we generate a candidate update proposal \u2206by applying a slightly modi\ufb01ed form of Tikhononv damping to our approximate Fisher, before multiplying \u2212\u2207h 22 \fby its inverse. In the usual Tikhonov regularization/damping technique, one adds (\u03bb + \u03b7)I to the curvature matrix (where \u03b7 accounts for the \u21132 regularization), which is equivalent to adding a term of the form \u03bb + \u03b7 2 \u2225\u03b4\u22252 2 to the corresponding quadratic model (given by M(\u03b4) with F replaced by our approximation). For the block-diagonal approximation \u02d8 F of \u02dc F (from Section 4.2) this amounts to adding (\u03bb + \u03b7)I (for a lower dimensional I) to each of the individual diagonal blocks, which gives modi\ufb01ed diagonal blocks of the form \u00af Ai\u22121,i\u22121 \u2297Gi,i + (\u03bb + \u03b7)I = \u00af Ai\u22121,i\u22121 \u2297Gi,i + (\u03bb + \u03b7)I \u2297I (6) Because this is the sum of two Kronecker products we cannot use the simple identity (A\u2297B)\u22121 = A\u22121 \u2297B\u22121 anymore. Fortunately however, there are ef\ufb01cient techniques for inverting such matrices, which we discuss in detail in Appendix B. 
If we try to apply this same Tikhonov technique to our more sophisticated approximation \u02c6 F of \u02dc F (from Section 4.3) by adding (\u03bb + \u03b7)I to each of the diagonal blocks of \u02c6 F, it is no longer clear how to ef\ufb01ciently invert \u02c6 F. Instead, a solution which we have found works very well in practice (and which we also use for the block-diagonal approximation \u02d8 F), is to add \u03c0i(\u221a\u03bb + \u03b7)I and 1 \u03c0i ( p \u03bb + \u03b7)I for a scalar constant \u03c0i to the individual Kronecker factors \u00af Ai\u22121,i\u22121 and Gi,i (resp.) of each diagonal block, giving \u0010 \u00af Ai\u22121,i\u22121 + \u03c0i( p \u03bb + \u03b7)I \u0011 \u2297 \u0012 Gi,i + 1 \u03c0i ( p \u03bb + \u03b7)I \u0013 (7) As this is a single Kronecker product, all of the computations described in Sections 4.2 and 4.3 can still be used here too, simply by replacing each \u00af Ai\u22121,i\u22121 and Gi,i with their modi\ufb01ed versions \u00af Ai\u22121,i\u22121 + \u03c0i(\u221a\u03bb + \u03b7)I and Gi,i + 1 \u03c0i ( p \u03bb + \u03b7)I. To see why the expression in eqn. 7 is a reasonable approximation to eqn. 6, note that expanding it gives \u00af Ai\u22121,i\u22121 \u2297Gi,i + \u03c0i( p \u03bb + \u03b7)I \u2297Gi,i + 1 \u03c0i ( p \u03bb + \u03b7) \u00af Ai\u22121,i\u22121 \u2297I + (\u03bb + \u03b7)I \u2297I which differs from eqn. 6 by the residual error expression \u03c0i( p \u03bb + \u03b7)I \u2297Gi,i + 1 \u03c0i ( p \u03bb + \u03b7) \u00af Ai\u22121,i\u22121 \u2297I While the choice of \u03c0i = 1 is simple and can sometimes work well in practice, a slightly more principled choice can be found by minimizing the obvious upper bound (following from the triangle inequality) on the matrix norm of this residual expression, for some matrix norm \u2225\u00b7 \u2225\u03c5. 23 \fThis gives \u03c0i = s \u2225\u00af Ai\u22121,i\u22121 \u2297I\u2225\u03c5 \u2225I \u2297Gi,i\u2225\u03c5 Evaluating this expression can be done ef\ufb01ciently for various common choices of the matrix norm \u2225\u00b7 \u2225\u03c5. For example, for a general B we have \u2225I \u2297B\u2225F = \u2225B \u2297I\u2225F = \u221a d\u2225B\u2225F where d is the height/dimension of I, and also \u2225I \u2297B\u22252 = \u2225B \u2297I\u22252 = \u2225B\u22252. In our experience, one of the best and must robust choices for the norm \u2225\u00b7\u2225\u03c5 is the trace-norm, which for PSD matrices is given by the trace. With this choice, the formula for \u03c0i has the following simple form: \u03c0i = s tr( \u00af Ai\u22121,i\u22121)/(di\u22121 + 1) tr(Gi,i)/di where di is the dimension (number of units) in layer i. Intuitively, the inner fraction is just the average eigenvalue of \u00af Ai\u22121,i\u22121 divided by the average eigenvalue of Gi,i. Interestingly, we have found that this factored approximate Tikhonov approach, which was originally motivated by computational concerns, often works better than the exact version (eqn. 6) in practice. The reasons for this are still somewhat mysterious to us, but it may have to do with the fact that the inverse of the product of two quantities is often most robustly estimated as the inverse of the product of their individually regularized estimates. 
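The factored Tikhonov modification is simple to state in code. The sketch below (ours; the factor matrices are random stand-ins and the value of the l2 constant is arbitrary) computes the constant pi_i with the trace norm and returns the damped factors of eqn. 7, which remain a single Kronecker product and so stay compatible with the inversion identities of Sections 4.2 and 4.3:

```python
import numpy as np

def factored_tikhonov(A_bar, G, lam, eta):
    """Damped Kronecker factors for one layer, as in eqn. 7.

    A_bar: (d_{i-1}+1, d_{i-1}+1) activation second-moment factor
    G:     (d_i, d_i) pre-activation-derivative factor
    lam, eta: damping strength and l2 regularization constant
    """
    d_in_plus_1 = A_bar.shape[0]
    d_out = G.shape[0]
    gamma = np.sqrt(lam + eta)
    # pi_i from the trace norm: sqrt of (avg eigenvalue of A_bar) / (avg eigenvalue of G)
    pi = np.sqrt((np.trace(A_bar) / d_in_plus_1) / (np.trace(G) / d_out))
    A_damped = A_bar + pi * gamma * np.eye(d_in_plus_1)
    G_damped = G + (gamma / pi) * np.eye(d_out)
    return A_damped, G_damped

# toy check that the damped block is still a single Kronecker product
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)); A = A @ A.T
G = rng.standard_normal((4, 4)); G = G @ G.T
A_d, G_d = factored_tikhonov(A, G, lam=150.0, eta=1e-5)   # lam=150 is the paper's starting value; eta is arbitrary here
print(np.kron(A_d, G_d).shape)                             # (20, 20)
```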
6.4 Re-scaling according to the exact F Given an update proposal \u2206produced by multiplying the negative gradient \u2212\u2207h by our approximate Fisher inverse (subject to the Tikhonov technique described in the previous subsection), the second stage of our proposed damping scheme re-scales \u2206according to the quadratic model M as computed with the exact F, to produce a \ufb01nal update \u03b4 = \u03b1\u2206. More precisely, we optimize \u03b1 according to the value of the quadratic model M(\u03b4) = M(\u03b1\u2206) = \u03b12 2 \u2206\u22a4(F + (\u03bb + \u03b7)I)\u2206+ \u03b1\u2207h\u22a4\u2206+ h(\u03b8) as computed using an estimate of the exact Fisher F (to which we add the \u21132 regularization + Tikhonov term (\u03bb + \u03b7)I). Because this is a 1-dimensional quadratic minimization problem, the formula for the optimal \u03b1 can be computed very ef\ufb01ciently as \u03b1\u2217= \u2212\u2207h\u22a4\u2206 \u2206\u22a4(F + (\u03bb + \u03b7)I)\u2206= \u2212\u2207h\u22a4\u2206 \u2206\u22a4F\u2206+ (\u03bb + \u03b7)\u2225\u2206\u22252 2 24 \fTo evaluate this formula we use the current stochastic gradient \u2207h (i.e. the same one used to produce \u2206), and compute matrix-vector products with F using the input data from the same minibatch. While using a mini-batch to compute F gets away from the idea of basing our estimate of the curvature on a long history of data (as we do with our approximate Fisher), it is made slightly less objectionable by the fact that we are only using it to estimate a single scalar quantity (\u2206\u22a4F\u2206). This is to be contrasted with methods like HF which perform a long and careful optimization of M(\u03b4) using such an estimate of F. Because the matrix-vector products with F are only used to compute scalar quantities in KFAC, we can reduce their computational cost by roughly one half (versus standard matrix-vector products with F) using a simple trick which is discussed in Appendix C. Intuitively, this second stage of our damping scheme effectively compensates for the intrinsic inaccuracy of the approximate quadratic model (based on our approximate Fisher) used to generate the initial update proposal \u2206, by essentially falling back on a more accurate quadratic model based on the exact Fisher. Interestingly, by re-scaling \u2206according to M(\u03b4), K-FAC can be viewed as a version of HF which uses our approximate Fisher as a preconditioning matrix (instead of the traditional diagonal preconditioner), and runs CG for only 1 step, initializing it from 0. This observation suggests running CG for longer, thus obtaining an algorithm which is even closer to HF (although using a much better preconditioner for CG). Indeed, this approach works reasonably well in our experience, but suffers from some of the same problems that HF has in the stochastic setting, due its much stronger use of the mini-batch\u2013estimated exact F. Figure 7 demonstrates the effectiveness of this re-scaling technique versus the simpler method of just using the raw \u2206as an update proposal. We can see that \u2206, without being re-scaled, is a very poor update to \u03b8, and won\u2019t even give any improvement in the objective function unless the strength of the factored Tikhonov damping terms is made very large. On the other hand, when the update is re-scaled, we can afford to compute \u2206using a much smaller strength for the factored Tikhonov damping terms, and overall this yields a much larger and more effective update to \u03b8. 
6.5 Adapting \u03bb Tikhonov damping can be interpreted as implementing a trust-region constraint on the update \u03b4, so that in particular the constraint \u2225\u03b4\u2225\u2264r is imposed for some r, where r depends on \u03bb and the curvature matrix (e.g. Nocedal and Wright, 2006). While some approaches adjust r and then seek to \ufb01nd the matching \u03bb, it is often simpler just to adjust \u03bb directly, as the precise relationship between \u03bb and r is complicated, and the curvature matrix is constantly evolving as optimization takes place. The theoretically well-founded Levenberg-Marquardt style rule used by HF for doing this, which we will adopt for K-FAC, is given by if \u03c1 > 3/4 then \u03bb \u2190\u03c91\u03bb 25 \f10 \u22127 10 \u22126 10 \u22125 10 \u22124 10 \u22123 10 \u22122 10 \u22121 10 0 10 1 \u22125 0 5 10 15 x 10 \u22123 strength of factored Tikhonov damping (\u03b3) improvement in objective (higher is better) with re\u2212scaling without re\u2212scaling (no moment.) with re\u2212scaling (no moment.) Figure 7: A comparison of the effectiveness of the proposed damping scheme, with and without the rescaling techniques described in Section 6.4. The network used for this comparison is the one produced at iteration 500 by K-FAC (with the block-tridiagonal inverse approximation) on the MNIST autoencoder problem described in Section 13. The y-axis is the improvement in the objective function h (i.e. h(\u03b8)\u2212h(\u03b8+ \u03b4)) produced by the update \u03b4, while the x-axis is the strength constant used in the factored Tikhonov damping technique (which is denoted by \u201c\u03b3\u201d as described in Section 6.6). In the legend, \u201cno moment.\u201d indicates that the momentum technique developed for K-FAC in Section 7 (which relies on the use of re-scaling) was not used. if \u03c1 < 1/4 then \u03bb \u21901 \u03c91 \u03bb where \u03c1 \u2261h(\u03b8 + \u03b4) \u2212h(\u03b8) M(\u03b4) \u2212M(0) is the \u201creduction ratio\u201d and 0 < \u03c91 < 1 is some decay constant, and all quantities are computed on the current mini-batch (and M uses the exact F). Intuitively, this rule tries to make \u03bb as small as possible (and hence the implicit trust-region as large as possible) while maintaining the property that the quadratic model M(\u03b4) remains a good local approximation to h (in the sense that it accurately predicts the value of h(\u03b8 + \u03b4) for the \u03b4 which gets chosen at each iteration). It has the desirable property that as the optimization enters the \ufb01nal convergence stage where M becomes an almost exact approximation in a suf\ufb01ciently large neighborhood of the local minimum, the value of \u03bb will go rapidly enough towards 0 that it doesn\u2019t interfere with the asymptotic local convergence theory enjoyed by 2nd-order methods (Mor\u00b4 e, 1978). In our experiments we applied this rule every T1 iterations of K-FAC, with \u03c91 = (19/20)T1 and T1 = 5, from a starting value of \u03bb = 150. Note that the optimal value of \u03c91 and the starting value of \u03bb may be application dependent, and setting them inappropriately could signi\ufb01cantly slow down K-FAC in practice. Computing \u03c1 can be done quite ef\ufb01ciently. Note that for the optimal \u03b4, M(\u03b4) \u2212M(0) = 1 2\u2207h\u22a4\u03b4, and h(\u03b8) is available from the usual forward pass. The only remaining quantity which is needed to evaluate \u03c1 is thus h(\u03b8+\u03b4), which will require an additional forward pass. 
But fortunately, we only need to perform this once every T1 iterations. 26 \f6.6 Maintaining a separate damping strength for the approximate Fisher While the scheme described in the previous sections works reasonably well in most situations, we have found that in order to avoid certain failure cases and to be truly robust in a large variety of situations, the Tikhonov damping strength parameter for the factored Tikhonov technique described in Section 6.3 should be maintained and adjusted independently of \u03bb. To this end we replace the expression \u221a\u03bb + \u03b7 in Section 6.3 with a separate constant \u03b3, which we initialize to \u221a\u03bb + \u03b7 but which is then adjusted using a different rule, which is described at the end of this section. The reasoning behind this modi\ufb01cation is as follows. The role of \u03bb, according to the Levenberg Marquardt theory (Mor\u00b4 e, 1978), is to be as small as possible while maintaining the property that the quadratic model M remains a trust-worthy approximation of the true objective. Meanwhile, \u03b3\u2019s role is to ensure that the initial update proposal \u2206is as good an approximation as possible to the true optimum of M (as computed using a mini-batch estimate of the exact F), so that in particular the re-scaling performed in Section 6.4 is as benign as possible. While one might hope that adding the same multiple of the identity to our approximate Fisher as we do to the exact F (as it appears in M) would produce the best \u2206in this regard, this isn\u2019t obviously the case. In particular, using a larger multiple may help compensate for the approximation we are making to the Fisher when computing \u2206, and thus help produce a more \u201cconservative\u201d but ultimately more useful initial update proposal \u2206, which is what we observe happens in practice. A simple measure of the quality of our choice of \u03b3 is the (negative) value of the quadratic model M(\u03b4) = M(\u03b1\u2206) for the optimally chosen \u03b1. To adjust \u03b3 based on this measure (or others like it) we use a simple greedy adjustment rule. In particular, every T2 iterations during the optimization we try 3 different values of \u03b3 (\u03b30, \u03c92\u03b30, and (1/\u03c92)\u03b30, where \u03b30 is the current value) and choose the new \u03b3 to be the best of these, as measured by our quality metric. In our experiments we used T2 = 20 (which must be a multiple of the constant T3 as de\ufb01ned in Section 8), and \u03c92 = ( p 19/20)T2. We have found that M(\u03b4) works well in practice as a measure of the quality of \u03b3, and has the added bonus that it can be computed at essentially no additional cost from the incidental quantities already computed when solving for the optimal \u03b1. In our initial experiments we found that using it gave similar results to those obtained by using other obvious measures for the quality of \u03b3, such as h(\u03b8 + \u03b4). 7 Momentum Sutskever et al. (2013) found that momentum (Polyak, 1964; Plaut et al., 1986) was very helpful in the context of stochastic gradient descent optimization of deep neural networks. A version of momentum is also present in the original HF method, and it plays an arguably even more important role in more \u201cstochastic\u201d versions of HF (Martens and Sutskever, 2012; Kiros, 2013). 
A natural way of adding momentum to K-FAC, and one which we have found works well 27 \fin practice, is to take the update to be \u03b4 = \u03b1\u2206+ \u00b5\u03b40, where \u03b40 is the \ufb01nal update computed at the previous iteration, and where \u03b1 and \u00b5 are chosen to minimize M(\u03b4). This allows K-FAC to effectively build up a better solution to the local quadratic optimization problem min\u03b4 M(\u03b4) (where M uses the exact F) over many iterations, somewhat similarly to how Matrix Momentum (Scarpetta et al., 1999) and HF do this (see Sutskever et al., 2013). The optimal solution for \u03b1 and \u00b5 can be computed as \u0014\u03b1\u2217 \u00b5\u2217 \u0015 = \u2212 \u0014\u2206\u22a4F\u2206+ (\u03bb + \u03b7)\u2225\u2206\u22252 2 \u2206\u22a4F\u03b40 + (\u03bb + \u03b7)\u2206\u22a4\u03b40 \u2206\u22a4F\u03b40 + (\u03bb + \u03b7)\u2206\u22a4\u03b40 \u03b4\u22a4 0 F\u03b40 + (\u03bb + \u03b7)\u2225\u03b40\u22252 2 \u0015\u22121 \u0014\u2207h\u22a4\u2206 \u2207h\u22a4\u03b40 \u0015 The main cost in evaluating this formula is computing the two matrix-vector products F\u2206 and F\u03b40. Fortunately, the technique discussed in Appendix C can be applied here to compute the 4 required scalars at the cost of only two forwards passes (equivalent to the cost of only one matrix-vector product with F). Empirically we have found that this type of momentum provides substantial acceleration in regimes where the gradient signal has a low noise to signal ratio, which is usually the case in the early to mid stages of stochastic optimization, but can also be the case in later stages if the mini-batch size is made suf\ufb01ciently large. These \ufb01ndings are consistent with predictions made by convex optimization theory, and with older empirical work done on neural network optimization (LeCun et al., 1998). Notably, because the implicit \u201cmomentum decay constant\u201d \u00b5 in our method is being computed on the \ufb02y, one doesn\u2019t have to worry about setting schedules for it, or adjusting it via heuristics, as one often does in the context of SGD. Interestingly, if h is a quadratic function (so the de\ufb01nition of M(\u03b4) remains \ufb01xed at each iteration) and all quantities are computed deterministically (i.e. without noise), then using this type of momentum makes K-FAC equivalent to performing preconditioned linear CG on M(\u03b4), with the preconditioner given by our approximate Fisher. This follows from the fact that linear CG can be interpreted as a momentum method where the learning rate \u03b1 and momentum decay coef\ufb01cient \u00b5 are chosen to jointly minimize M(\u03b4) at the current iteration. 8 Computational Costs and Ef\ufb01ciency Improvements Let d be the typical number of units in each layer and m the mini-batch size. The signi\ufb01cant computational tasks required to compute a single update/iteration of K-FAC, and rough estimates of their associated computational costs, are as follows: 1. standard forwards and backwards pass: 2C1\u2113d2m 2. computation of the gradient \u2207h on the current mini-batch using quantities computed in backwards pass: C2\u2113d2m 28 \f3. additional backwards pass with random targets (as described in Section 5): C1\u2113d2m 4. updating the estimates of the required \u00af Ai,j\u2019s and Gi,j\u2019s from quantities computed in the forwards pass and the additional randomized backwards pass: 2C2\u2113d2m 5. 
matrix inverses (or SVDs for the block-tridiagonal inverse, as described in Appendix B) required to compute the inverse of the approximate Fisher: C3\u2113d3 for the block-diagonal inverse, C4\u2113d3 for the block-tridiagonal inverse 6. various matrix-matrix products required to compute the matrix-vector product of the approximate inverse with the stochastic gradient: C5\u2113d3 for the block-diagonal inverse, C6\u2113d3 for the block-tridiagonal inverse 7. matrix-vector products with the exact F on the current mini-batch using the approach in Appendix C: 4C1\u2113d2m with momentum, 2C1\u2113d2m without momentum 8. additional forward pass required to evaluate the reduction ratio \u03c1 needed to apply the \u03bb adjustment rule described in Section 6.5: C1\u2113d2m every T1 iterations Here the Ci are various constants that account for implementation details, and we are assuming the use of the naive cubic matrix-matrix multiplication and inversion algorithms when producing the cost estimates. Note that it it is hard to assign precise values to the constants, as they very much depend on how these various tasks are implemented. Note that most of the computations required for these tasks will be sped up greatly by performing them in parallel across units, layers, training cases, or all of these. The above cost estimates however measure sequential operations, and thus may not accurately re\ufb02ect the true computation times enjoyed by a parallel implementation. In our experiments we used a vectorized implementation that performed the computations in parallel over units and training cases, although not over layers (which is possible for computations that don\u2019t involve a sequential forwards or backwards \u201cpass\u201d over the layers). Tasks 1 and 2 represent the standard stochastic gradient computation. The costs of tasks 3 and 4 are similar and slightly smaller than those of tasks 1 and 2. One way to signi\ufb01cantly reduce them is to use a random subset of the current mini-batch of size \u03c41m to update the estimates of the required \u00af Ai,j\u2019s and Gi,j\u2019s. One can similarly reduce the cost of task 7 by computing the (factored) matrix-vector product with F using such a subset of size \u03c42m, although we recommend proceeding with caution when doing this, as using inconsistent sets of data for the quadratic and linear terms in M(\u03b4) can hypothetically cause instability problems which are avoided by using consistent data (see Martens and Sutskever (2012), Section 13.1). In our experiments in Section 13 we used \u03c41 = 1/8 and \u03c42 = 1/4, which seemed to have a negligible effect on the quality of the resultant updates, while signi\ufb01cantly reducing per-iteration computation time. In a separate set of unreported experiments we found that in certain situations, such as when \u21132 regularization isn\u2019t used and the network starts heavily over\ufb01tting the data, or when smaller mini-batches were used, we had to revert to using \u03c42 = 1 to prevent signi\ufb01cant deterioration in the quality of the updates. 29 \fThe cost of task 8 can be made relatively insigni\ufb01cant by making the adjustment period T1 for \u03bb large enough. We used T1 = 5 in our experiments. The costs of tasks 5 and 6 are hard to compare directly with the costs associated with computing the gradient, as their relative sizes will depend on factors such as the architecture of the neural network being trained, as well as the particulars of the implementation. 
However, one quick observation we can make is that both tasks 5 and 6 involve computations that be performed in parallel across the different layers, which is to be contrasted with many of the other tasks which require sequential passes over the layers of the network. Clearly, if m \u226bd, then the cost of tasks 5 and 6 becomes negligible in comparison to the others. However, it is more often the case that m is comparable or perhaps smaller than d. Moreover, while algorithms for inverses and SVDs tend to have the same asymptotic cost as matrix-matrix multiplication, they are at least several times more expensive in practice, in addition to being harder to parallelize on modern GPU architectures (indeed, CPU implementations are often faster in our experience). Thus, C3 and C4 will typically be (much) larger than C5 and C6, and so in a basic/naive implementation of K-FAC, task 5 can dominate the overall per-iteration cost. Fortunately, there are several possible ways to mitigate the cost of task 5. As mentioned above, one way is to perform the computations for each layer in parallel, and even simultaneously with the gradient computation and other tasks. In the case of our block-tridiagonal approximation to the inverse, one can avoid computing any SVDs or matrix square roots by using an iterative Stein-equation solver (see Appendix B). And there are also ways of reducing matrix-inversion (and even matrix square-root) to a short sequence of matrix-matrix multiplications using iterative methods (Pan and Schreiber, 1991). Furthermore, because the matrices in question only change slowly over time, one can consider hot-starting these iterative inversion methods from previous solutions. In the extreme case where d is very large, one can also consider using low-rank + diagonal approximations of the \u00af Ai,j and Gi,j matrices maintained online (e.g. using a similar strategy as Le Roux et al. (2008)) from which inverses and/or SVDs can be more easily computed. Although based on our experience such approximations can, in some cases, lead to a substantial degradation in the quality of the updates. While these ideas work reasonably well in practice, perhaps the simplest method, and the one we ended up settling on for our experiments, is to simply recompute the approximate Fisher inverse only every T3 iterations (we used T3 = 20 in our experiments). As it turns out, the curvature of the objective stays relatively stable during optimization, especially in the later stages, and so in our experience this strategy results in only a modest decrease in the quality of the updates. If m is much smaller than d, the costs associated with task 6 can begin to dominate (provided T3 is suf\ufb01ciently large so that the cost of task 5 is relatively small). And unlike task 5, task 6 must be performed at every iteration. While the simplest solution is to increase m (while reaping the bene\ufb01ts of a less noisy gradient), in the case of the block-diagonal inverse it turns out that we can change the cost of task 6 from C5\u2113d3 to C5\u2113d2m by taking advantage of the low-rank structure of the stochastic gradient. The method for doing this is described below. 30 \fLet \u00af Ai and Gi be matrices whose columns are the m \u00af ai\u2019s and gi\u2019s (resp.) associated with the current mini-batch. Let \u2207Wih denote the gradient of h with respect to Wi, shaped as a matrix (instead of a vector). The estimate of \u2207Wih over the mini-batch is given by 1 mGi \u00af A\u22a4 i\u22121, which is of rank-m. 
From Section 4.2, computing the \u02d8 F \u22121\u2207h amounts to computing Ui = G\u22121 i,i (\u2207Wih) \u00af A\u22121 i\u22121,i\u22121. Substituting in our mini-batch estimate of \u2207Wih gives Ui = G\u22121 i,i \u0012 1 mGi \u00af A\u22a4 i\u22121 \u0013 \u00af A\u22121 i\u22121,i\u22121 = 1 m \u0000G\u22121 i,i Gi \u0001 \u0000 \u00af A\u22a4 i\u22121 \u00af A\u22121 i\u22121,i\u22121 \u0001 Direct evaluation of the expression on the right-hand side involves only matrix-matrix multiplications between matrices of size m \u00d7 d and d \u00d7 m (or between those of size d \u00d7 d and d \u00d7 m), and thus we can reduce the cost of task 6 to C5\u2113d2m. Note that the use of standard \u21132 weight-decay is not compatible with this trick. This is because the contribution of the weight-decay term to each \u2207Wih is \u03b7Wi, which will typically not be lowrank. Some possible ways around this issue include computing the weight-decay contribution \u03b7 \u02d8 F \u22121\u03b8 separately and refreshing it only occasionally, or using a different regularization method, such as drop-out (Hinton et al., 2012) or weight-magnitude constraints. Note that the adjustment technique for \u03b3 described in Section 6.6 requires that, at every T2 iterations, we compute 3 different versions of the update for each of 3 candidate values of \u03b3. In an ideal implementation these could be computed in parallel with each other, although in the summary analysis below we will assume they are computed serially. Summarizing, we have that with all of the various ef\ufb01ciency improvements discussed in this section, the average per-iteration computational cost of K-FAC, in terms of serial arithmetic operations, is [(2 + \u03c41 + 2(1 + \u03c7mom)(1 + 2/T2)\u03c42 + 1/T1)C1 + (1 + 2\u03c41)C2]\u2113d2m + (1 + 2/T2)[(C4/T3 + C6)\u03c7tri + C3/T3(1 \u2212\u03c7tri)]\u2113d3 + (1 + 2/T2)C5(1 \u2212\u03c7tri)\u2113d2 min{d, m} where \u03c7mom, \u03c7tri \u2208{0, 1} are \ufb02ag variables indicating whether momentum and the block-tridiagonal inverse approximation (resp.) are used. Plugging in the values of these various constants that we used in our experiments, for the block-diagonal inverse approximation (\u03c7tri = 0) this becomes (3.425C1 + 1.25C2)\u2113d2m + 0.055C3\u2113d3 + 1.1C5\u2113d2 min{d, m} and for the block-tridiagonal inverse approximation (\u03c7tri = 1) (3.425C1 + 1.25C2)\u2113d2m + (0.055C4 + 1.1C6)\u2113d3 which is to be compared to the per-iteration cost of SGD, as given by (2C1 + C2)\u2113d2m 31 \f9 Pseudocode for K-FAC Algorithm 2 gives high-level pseudocode for the K-FAC method, with the details of how to perform the computations required for each major step left to their respective sections. Algorithm 2 High-level pseudocode for K-FAC \u2022 Initialize \u03b81 (e.g. using a good method such as the ones described in Martens (2010) or Glorot and Bengio (2010)) \u2022 Choose initial values of \u03bb (err on the side of making it too large) \u2022 \u03b3 \u2190\u221a\u03bb + \u03b7 \u2022 k \u21901 while \u03b8k is not satisfactory do \u2022 Choose a mini-batch size m (e.g. 
using a \ufb01xed value, an adaptive rule, or some prede\ufb01ned schedule) \u2022 Select a random mini-batch S\u2032 \u2282S of training cases of size m \u2022 Select a random subset S1 \u2282S\u2032 of size \u03c41|S\u2032| \u2022 Select a random subset S2 \u2282S\u2032 of size \u03c42|S\u2032| \u2022 Perform a forward and backward pass on S\u2032 to estimate the gradient \u2207h(\u03b8k) (see Algorithm 1) \u2022 Perform an additional backwards pass on S1 using random targets generated from the model\u2019s predictive distribution (as described in Section 5) \u2022 Update the estimates of the required \u00af Ai,j\u2019s and Gi,j\u2019s using the ai\u2019s computed in forward pass for S1, and the gi\u2019s computed in the additional backwards pass for S1 (as described Section 5) \u2022 Choose a set \u0393 of new candidate \u03b3\u2019s as described in Section 6.6 (setting \u0393 = {\u03b3} if not adjusting \u03b3 at this iteration, i.e. if k \u0338\u22610 (mod T2) ) for each \u03b3 \u2208\u0393 do if recomputing the approximate Fisher inverse this iteration (i.e. if k \u22610 (mod T3) or k \u22643) then \u2022 Compute the approximate Fisher inverse (using the formulas derived in Section 4.2 or Section 4.3) from versions of the current \u00af Ai,j\u2019s and Gi,j\u2019s which are modi\ufb01ed as per the factored Tikhonov damping technique described in Section 6.3 (but using \u03b3 as described in Section 6.6) end if \u2022 Compute the update proposal \u2206by multiplying current estimate of approximate Fisher inverse by the estimate of \u2207h (using the formulas derived in Section 4.2 or Section 4.3). For layers with size d < m consider using trick described at the end of Section 8 for increased ef\ufb01ciency. \u2022 Compute the \ufb01nal update \u03b4 from \u2206as described in Section 6.4 (or Section 7 if using momentum) where the matrix-vector products with F are estimated on S2 using the ai\u2019s computed in the forward pass end for \u2022 Select the \u03b4 and the new \u03b3 computing in the above loop that correspond to the lowest value of M(\u03b4) (see Section 6.6) if updating \u03bb this iteration (i.e. if k \u22610 (mod T1)) then \u2022 Update \u03bb with the Levenberg-Marquardt style rule described in Section 6.5 end if \u2022 \u03b8k+1 \u2190\u03b8k + \u03b4 \u2022 k \u2190k + 1 end while 32 \f10 Invariance Properties and the Relationship to Whitening and Centering When computed with the exact Fisher, the natural gradient speci\ufb01es a direction in the space of predictive distributions which is invariant to the speci\ufb01c way that the model is parameterized. This invariance means that the smooth path through distribution space produced by following the natural gradient with in\ufb01nitesimally small steps will be similarly invariant. For a practical natural gradient based optimization method which takes large discrete steps in the direction of the natural gradient, this invariance of the optimization path will only hold approximately. As shown by Martens (2014), the approximation error will go to zero as the effects of damping diminish and the reparameterizing function \u03b6 tends to a locally linear function. Note that the latter will happen as \u03b6 becomes smoother, or the local region containing the update shrinks to zero. Because K-FAC uses an approximation of the natural gradient, these invariance results are not applicable in our case. 
Fortunately, as was shown by Martens (2014), one can establish invariance of an update direction with respect to a given reparameterization of the model by verifying certain simple properties of the curvature matrix C used to compute the update. We will use this result to show that, under the assumption that damping is absent (or negligible in its affect), K-FAC is invariant to a broad and natural class of transformations of the network. This class of transformations is given by the following modi\ufb01ed network de\ufb01nition (c.f. the de\ufb01nition in Section 2.1): s\u2020 i = W \u2020 i \u00af a\u2020 i\u22121 \u00af a\u2020 i = \u2126i \u00af \u03c6i(\u03a6is\u2020 i) where \u00af \u03c6i is the function that computes \u03c6i and then appends a homogeneous coordinate (with value 1), \u2126i and \u03a6i are arbitrary invertible matrices of the appropriate sizes (except that we assume \u2126\u2113= I), \u00af a\u2020 0 = \u21260\u00af a0, and where the network\u2019s output is given by f \u2020(x, \u03b8) = a\u2020 \u2113. Note that because \u2126i multiplies \u00af \u03c6i(\u03a6is\u2020 i), it can implement arbitrary translations of the unit activities \u03c6i(\u03a6is\u2020 i) in addition to arbitrary linear transformations. Figure 8 illustrates our modi\ufb01ed network de\ufb01nition for \u2113= 2 (c.f. Figure 1). Here, and going forward, we will add a \u201c\u2020\u201d superscript to any network-dependent quantity in order to denote the analogous version of it computed by the transformed network. Note that under this identi\ufb01cation, the loss derivative formulas for the transformed network are analogous to those of the original network, and so our various Fisher approximations are still well de\ufb01ned. 33 \fFigure 8: A depiction of a transformed network for \u2113= 2. Note that the quantities labeled with \u00af ai and si (without \u201c\u2020\u201d) will be equal to the analogous quantities from the default network, provided that \u03b8 = \u03b6(\u03b8\u2020) as in Theorem 1. 34 \fThe following theorem describes the main technical result of this section. Theorem 1. There exists an invertible linear function \u03b8 = \u03b6(\u03b8\u2020) so that f \u2020(x, \u03b8\u2020) = f(x, \u03b8) = f(x, \u03b6(\u03b8\u2020)), and thus the transformed network can be viewed as a reparameterization of the original network by \u03b8\u2020. Moreover, additively updating \u03b8 by \u03b4 = \u2212\u03b1 \u02d8 F \u22121\u2207h or \u03b4 = \u2212\u03b1 \u02c6 F \u22121\u2207h in the original network is equivalent to additively updating \u03b8\u2020 by \u03b4\u2020 = \u2212\u03b1 \u02d8 F \u2020\u22121\u2207h\u2020 or \u03b4\u2020 = \u2212\u03b1 \u02c6 F \u2020\u22121\u2207h\u2020 (resp.) in the transformed network, in the sense that \u03b6(\u03b8\u2020 + \u03b4\u2020) = \u03b8 + \u03b4. This immediately implies the following corollary which characterizes the invariance of a basic version of K-FAC to the given class of network transformations. Corollary 2. The optimization path taken by K-FAC (using either of our Fisher approximations \u02d8 F or \u02c6 F) through the space of predictive distributions is the same for the default network as it is for the transformed network (where the \u2126i\u2019s and \u03a6i\u2019s remain \ufb01xed). 
This assumes the use of an equivalent initialization (\u03b80 = \u03b6(\u03b8\u2020 0)), and a basic version of K-FAC where damping is absent or negligible in effect, momentum is not used, and where the learning rates are chosen in a way that is independent of the network\u2019s parameterization. While this corollary assumes that the \u2126i\u2019s and \u03a6i\u2019s are \ufb01xed, if we relax this assumption so that they are allowed to vary smoothly with \u03b8, then \u03b6 will be a smooth function of \u03b8, and so as discussed in Martens (2014), invariance of the optimization path will hold approximately in a way that depends on the smoothness of \u03b6 (which measures how quickly the \u2126i\u2019s and \u03a6i\u2019s change) and the size of the update. Moreover, invariance will hold exactly in the limit as the learning rate goes to 0. Note that the network transformations can be interpreted as replacing the network\u2019s nonlinearity \u00af \u03c6i(si) at each layer i with a \u201ctransformed\u201d version \u2126i \u00af \u03c6i(\u03a6isi). So since the well-known logistic sigmoid and tanh functions are related to each other by such a transformation, an immediate consequence of Corollary 2 is that K-FAC is invariant to the choice of logistic sigmoid vs. tanh activation functions (provided that equivalent initializations are used and that the effect of damping is negligible, etc.). Also note that because the network inputs are also transformed by \u21260, K-FAC is thus invariant to arbitrary af\ufb01ne transformations of the input, which includes many popular training data preprocessing techniques. Many other natural network transformations, such as ones which \u201ccenter\u201d and normalize unit activities so that they have mean 0 and variance 1 can be described using diagonal choices for the \u2126i\u2019s and \u03a6i\u2019s which vary smoothly with \u03b8. In addition to being approximately invariant to such transformations (or exactly, in the limit as the step size goes to 0), K-FAC is similarly invariant to a more general class of such transformations, such as those which transform the units so that they have a mean of 0, so they are \u201ccentered\u201d, and a covariance matrix of I, so they are \u201cwhitened\u201d, which is a much stronger condition than the variances of the individual units each being 1. In the case where we use the block-diagonal approximation \u02d8 F and compute updates without damping, Theorem 1 affords us an additional elegant interpretation of what K-FAC is doing. In 35 \fparticular, the updates produced by K-FAC end up being equivalent to those produced by standard gradient descent using a network which is transformed so that the unit activities and the unitgradients are both centered and whitened (with respect to the model\u2019s distribution). This is stated formally in the following corollary. Corollary 3. Additively updating \u03b8 by \u2212\u03b1 \u02d8 F \u22121\u2207h in the original network is equivalent to additively updating \u03b8\u2020 by the gradient descent update \u2212\u03b1\u2207h\u2020 (where \u03b8 = \u03b6(\u03b8\u2020) as in Theorem 1) in a transformed version of the network where the unit activities a\u2020 i and the unit-gradients g\u2020 i are both centered and whitened with respect to the model\u2019s distribution. 11 Related Work The Hessian-free optimization method of Martens (2010) uses linear conjugate gradient (CG) to optimize local quadratic models of the form of eqn. 
5 (subject to an adaptive Tikhonov damping technique) in lieu of directly solving it using matrix inverses. As discussed in the introduction, the main advantages of K-FAC over HF are twofold. Firstly, K-FAC uses an ef\ufb01ciently computable direct solution for the inverse of the curvature matrix and thus avoids the costly matrix-vector products associated with running CG within HF. Secondly, it can estimate the curvature matrix from a lot of data by using an online exponentially-decayed average, as opposed to relatively small-sized \ufb01xed mini-batches used by HF. The cost of doing this is of course the use of an inexact approximation to the curvature matrix. Le Roux et al. (2008) proposed a neural network optimization method known as TONGA based on a block-diagonal approximation of the empirical Fisher where each block corresponds to the weights associated with a particular unit. By contrast, K-FAC uses much larger blocks, each of which corresponds to all the weights within a particular layer. The matrices which are inverted in K-FAC are roughly the same size as those which are inverted in TONGA, but rather than there being one per unit as in TONGA, there are only two per layer. Therefore, K-FAC is signi\ufb01cantly less computationally intensive than TONGA, despite using what is arguably a much more accurate approximation to the Fisher. Note that to help mitigate the cost of the many matrix inversions it requires, TONGA approximates the blocks as being low-rank plus a diagonal term, although this introduces further approximation error. Centering methods work by either modifying the gradient (Schraudolph, 1998) or dynamically reparameterizing the network itself (Raiko et al., 2012; Vatanen et al., 2013; Wiesler et al., 2014), so that various unit-wise scalar quantities like the activities (the ai\u2019s) and local derivatives (the \u03c6\u2032 i(si)\u2019s) are 0 on average (i.e. \u201ccentered\u201d), as they appear in the formula for the gradient. Typically, these methods require the introduction of additional \u201cskip\u201d connections (which bypass the nonlinearities of a given layer) in order to preserve the expressive power/ef\ufb01ciency of the network after these transformations are applied. It is argued by Raiko et al. (2012) that the application of the centering transformation makes the Fisher of the resulting network closer to a diagonal matrix, and thus makes its gradient more 36 \fclosely resemble its natural gradient. However, this argument uses the strong approximating assumption that the correlations between various network-dependent quantities, such as the activities of different units within a given layer, are zero. In our notation, this would be like assuming that the Gi,i\u2019s are diagonal, and that the \u00af Ai,i\u2019s are rank-1 plus a diagonal term. Indeed, using such an approximation within the block-diagonal version of K-FAC would yield an algorithm similar to standard centering, although without the need for skip connections (and hence similar to the version of centering proposed by Wiesler et al. (2014)). As shown in Corollary 3, K-FAC can also be interpreted as using the gradient of a transformed network as its update direction, although one in which the gi\u2019s and ai\u2019s are both centered and whitened (with respect to the model\u2019s distribution). Intuitively, it is this whitening which accounts for the correlations between activities (or back-propagated gradients) within a given layer. 
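This interpretation is easy to verify numerically: if a layer's activities and unit-gradients are whitened (here with respect to the empirical distribution of a synthetic mini-batch, rather than the model's distribution as in Corollary 3), the Kronecker factors become the identity and the block-diagonal K-FAC update coincides with the plain gradient. The following is a small illustrative sketch; all sizes and names are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, m = 6, 4, 1000

def whiten(X):
    # Transform the columns of X to have empirical mean 0 and identity covariance.
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / X.shape[1]
    evals, evecs = np.linalg.eigh(C)
    return (evecs @ np.diag(evals ** -0.5) @ evecs.T) @ Xc

# Correlated activations and back-propagated derivatives (columns are cases), then whitened.
a_bar = whiten(rng.standard_normal((d_in, d_in)) @ rng.standard_normal((d_in, m)))
g = whiten(rng.standard_normal((d_out, d_out)) @ rng.standard_normal((d_out, m)))

A = a_bar @ a_bar.T / m        # \bar{A}_{i-1,i-1} factor; ~ identity after whitening
G = g @ g.T / m                # G_{i,i} factor; ~ identity after whitening
grad_W = g @ a_bar.T / m       # mini-batch gradient for the layer's weights

# The block-diagonal K-FAC update G^{-1} (grad) A^{-1} reduces to the plain gradient.
kfac_update = np.linalg.solve(G, grad_W) @ np.linalg.inv(A)
assert np.allclose(kfac_update, grad_W)
```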
Ollivier (2013) proposed a neural network optimization method which uses a block-diagonal approximation of the Fisher, with the blocks corresponding to the incoming weights (and bias) of each unit. This method is similar to TONGA, except that it approximates the Fisher instead of the empirical Fisher (see Martens (2014) for a discussion of the difference between these). Because computing blocks of the Fisher is expensive (it requires k backpropagations, where k is the number of output units), this method uses a biased deterministic approximation which can be computed more ef\ufb01ciently, and is similar in spirit to the deterministic approximation used by LeCun et al. (1998). Note that while such an approximation could hypothetically be used within K-FAC to compute the Gi,j\u2019s, we have found that our basic unbiased stochastic approximation works nearly as well as the exact values in practice. The work most closely related to ours is that of Heskes (2000), who proposed an approximation of the Fisher of feed-forward neural networks similar to our Kronecker-factored blockdiagonal approximation \u02d8 F from Section 4.2, and used it to derive an ef\ufb01cient approximate naturalgradient based optimization method by exploiting the identity (A \u2297B)\u22121 = A\u22121 \u2297B\u22121. K-FAC differs from Heskes\u2019 method in several important ways which turn out to be crucial to it working well in practice. In Heskes\u2019 method, update damping is accomplished using a basic factored Tikhonov technique where \u03b3I is added to each Gi,i and \u00af Ai,i for a \ufb01xed parameter \u03b3 > 0 which is set by hand. By contrast, K-FAC uses a factored Tikhonov technique where \u03b3 adapted dynamically as described in Section 6.6, combined with a re-scaling technique based on a local quadratic model computed using the exact Fisher (see Section 6.4). Note that the adaptation of \u03b3 is important since what constitutes a good or even merely acceptable value of \u03b3 will change signi\ufb01cantly over the course of optimization. And the use of our re-scaling technique, or something similar to it, is also crucial as we have observed empirically that basic Tikhonov damping is incapable of producing high quality updates by itself, even when \u03b3 is chosen optimally at each iteration (see Figure 7 of Section 6.4). Also, while Heskes\u2019 method computes the Gi,i\u2019s exactly, K-FAC uses a stochastic approximation which scales ef\ufb01ciently to neural networks with much higher-dimensional outputs (see Section 5). 37 \fOther advances we have introduced include the more accurate block-tridiagonal approximation to the inverse Fisher, a parameter-free type of momentum (see Section 7), online estimation of the Gi,i and \u00af Ai,i matrices, and various improvements in computational ef\ufb01ciency (see Section 8). We have found that each of these additional elements is important in order for K-FAC to work as well as it does in various settings. Concurrently with this work Povey et al. (2015) has developed a neural network optimization method which uses a block-diagonal Kronecker-factored approximation similar to the one from Heskes (2000). This approach differs from K-FAC in numerous ways, including its use of the empirical Fisher (which doesn\u2019t work as well as the standard Fisher in our experience \u2013 see Section 5), and its use of only a basic factored Tikhonov damping technique without adaptive re-scaling or any form of momentum. One interesting idea introduced by Povey et al. 
(2015) is a particular method for maintaining an online low-rank plus diagonal approximation of the factor matrices for each block, which allows their inverses to be computed more ef\ufb01ciently (although subject to an approximation). While our experiments with similar kinds of methods for maintaining such online estimates found that they performed poorly in practice compared to the solution of refreshing the inverses only occasionally (see Section 8), the particular one developed by Povey et al. (2015) could potentially still work well, and may be especially useful for networks with very wide layers. 12 Heskes\u2019 interpretation of the block-diagonal approximation Heskes (2000) discussed an alternative interpretation of the block-diagonal approximation which yields some useful insight to complement our own theoretical analysis. In particular, he observed that the block-diagonal Fisher approximation \u02d8 F is the curvature matrix corresponding to the following quadratic function which measures the difference between the new parameter value \u03b8\u2032 and the current value \u03b8: D(\u03b8\u2032, \u03b8) = 1 2 \u2113 X i=1 E \u0002 (si \u2212s\u2032 i)\u22a4Gi,i(si \u2212s\u2032 i) \u0003 Here, s\u2032 i = W \u2032 i\u00af ai\u22121, and the si\u2019s and \u00af ai\u2019s are determined by \u03b8 and are independent of \u03b8\u2032 (which determines the W \u2032 i\u2019s). D(\u03b8\u2032, \u03b8) can be interpreted as a reweighted sum of squared changes of each of the si\u2019s. The reweighing matrix Gi,i is given by Gi,i = E \u0002 gig\u22a4 i \u0003 = E \u0014 FP (i) y|si \u0015 where P (i) y|si is the network\u2019s predictive distribution as parameterized by si, and FP (i) y|si is its Fisher information matrix, and where the expectation is taken w.r.t. the distribution on si (as induced by the distribution on the network\u2019s input x). Thus, the effect of reweighing by the Gi,i\u2019s is to 38 \f(approximately) translate changes in si into changes in the predictive distribution over y, although using the expected/average Fisher Gi,i = E[FP (i) y|si ] instead of the more speci\ufb01c Fisher FP (i) y|si . Interestingly, if one used FP (i) y|si instead of Gi,i in the expression for D(\u03b8\u2032, \u03b8), then D(\u03b8\u2032, \u03b8) would correspond to a basic layer-wise block-diagonal approximation of F where the blocks are computed exactly (i.e. without the Kronecker-factorizing approximation introduced in Section 3). Such an approximate Fisher would have the interpretation of being the Hessian w.r.t. \u03b8\u2032 of either of the measures \u2113 X i=1 E h KL \u0010 P (i) y|si \u2225P (i) y|s\u2032 i \u0011i or \u2113 X i=1 E h KL \u0010 P (i) y|s\u2032 i \u2225P (i) y|si \u0011i Note that each term in either of these sums is a function measuring an intrinsic quantity (i.e. changes in the output distribution), and so overall these are intrinsic measures except insofar as they assume that \u03b8 is divided into \u2113independent groups that each parameterize one of the \u2113different predictive distributions (which are each conditioned on their respective ai\u22121\u2019s). It is not clear whether \u02d8 F, with its Kronecker-factorizing structure can similarly be interpreted as the Hessian of such a self-evidently intrinsic measure. If it could be, then this would considerably simplify the proof of our Theorem 1 (e.g. using the techniques of Arnold et al. (2011)). Note that D(\u03b8\u2032, \u03b8) itself doesn\u2019t work, as it isn\u2019t obviously intrinsic. 
Despite this, as shown in Section 10, both \u02d8 F and our more advanced approximation \u02c6 F produce updates which have strong invariance properties. 13 Experiments To investigate the practical performance of K-FAC we applied it to the 3 deep autoencoder optimization problems from Hinton and Salakhutdinov (2006), which use the \u201cMNIST\u201d, \u201cCURVES\u201d, and \u201cFACES\u201d datasets respectively (see Hinton and Salakhutdinov (2006) for a complete description of the network architectures and datasets). Due to their high dif\ufb01culty, performance on these problems has become a standard benchmark for neural network optimization methods (e.g. Martens, 2010; Vinyals and Povey, 2012; Sutskever et al., 2013). We included \u21132 regularization with a coef\ufb01cient of \u03b7 = 10\u22125 in each of these three optimization problems (i.e. so that \u03b7 2\u2225\u03b8\u22252 2 was added to the objective), which is lower than what was used by Martens (2010), but higher than what was used by Sutskever et al. (2013). As our baseline we used the version of SGD with momentum based on Nesterov\u2019s Accelerated Gradient (Nesterov, 1983) described in Sutskever et al. (2013), which was calibrated to work well on these particular deep autoencoder problems. For each problem we followed the prescription given by Sutskever et al. (2013) for determining the learning rate, and the increasing schedule for the decay parameter \u00b5. We did not compare to methods based on diagonal approximations of the curvature matrix, as in our experience such methods tend not perform as well on these kinds of 39 \foptimization problems as the baseline does (an observation which is consistent with the \ufb01ndings of Schraudolph (2002); Zeiler (2013)). Our implementation of K-FAC used most of the ef\ufb01ciency improvements described in Section 8, except that all \u201ctasks\u201d were computed serially (and thus with better engineering and more hardware, a faster implementation could likely be obtained). Because the mini-batch size m tended to be comparable to or larger than the typical/average layer size d, we did not use the technique described at the end of Section 8 for accelerating the computation of the approximate inverse, as this only improves ef\ufb01ciency in the case where m < d, and will otherwise decrease ef\ufb01ciency. Both K-FAC and the baseline were implemented using vectorized MATLAB code accelerated with the GPU package Jacket. The code for K-FAC is available for download4. All tests were performed on a single computer with a 4.4 Ghz 6 core Intel CPU and an NVidia GTX 580 GPU with 3GB of memory. Each method used the same initial parameter setting, which was generated using the \u201csparse initialization\u201d technique from Martens (2010) (which was also used by Sutskever et al. (2013)). To help mitigate the detrimental effect that the noise in the stochastic gradient has on the convergence of the baseline (and to a lesser extent K-FAC as well) we used a exponentially decayed iterate averaging approach based loosely on Polyak averaging (e.g. Swersky et al., 2010). In particular, at each iteration we took the \u201caveraged\u201d parameter estimate to be the previous such estimate, multiplied by \u03be, plus the new iterate produced by the optimizer, multiplied by 1 \u2212\u03be, for \u03be = 0.99. 
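For concreteness, a minimal sketch of this iterate-averaging scheme follows; the quadratic objective and step function are purely illustrative stand-ins, not the actual K-FAC or baseline updates used in our experiments:

```python
import numpy as np

def run_with_iterate_averaging(theta0, step_fn, num_iters, xi=0.99):
    # Maintain an exponentially decayed average of the optimizer's iterates alongside them.
    theta = theta0.copy()
    theta_avg = theta0.copy()
    for k in range(num_iters):
        theta = step_fn(theta, k)                        # optimizer produces its new iterate
        theta_avg = xi * theta_avg + (1.0 - xi) * theta  # decayed "Polyak-style" average
    return theta, theta_avg

# Toy usage: gradient descent on a noisy quadratic (a stand-in for K-FAC or the SGD baseline).
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0])
noisy_step = lambda th, k: th - 0.05 * (A @ th + 0.1 * rng.standard_normal(2))
last_iterate, averaged_iterate = run_with_iterate_averaging(np.array([5.0, 5.0]), noisy_step, 2000)
# The averaged iterate is typically less affected by gradient noise than the last iterate.
```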
Since the training error associated with the optimizer\u2019s current iterate may sometimes be lower than the training error associated with the averaged estimate (which will often be the case when the mini-batch size m is very large), we report the minimum of these two quantities. To be consistent with the numbers given in previous papers we report the reconstruction error instead of the actual objective function value (although these are almost perfectly correlated in our experience). And we report the error on the training set as opposed to the test set, as we are chie\ufb02y interested in optimization speed and not the generalization capabilities of the networks themselves. In our \ufb01rst experiment we examined the relationship between the mini-batch size m and the per-iteration rate of progress made by K-FAC and the baseline on the MNIST problem. The results from this experiment are plotted in Figure 9. They strongly suggest that the per-iteration rate of progress of K-FAC tends to a superlinear function of m (which can be most clearly seen by examining the plots of training error vs training cases processed), which is to be contrasted with the baseline, where increasing m has a much smaller effect on the per-iteration rate of progress, and with K-FAC without momentum, where the per-iteration rate of progress seems to be a linear or slightly sublinear function of m. It thus appears that the main limiting factor in the convergence of K-FAC (with momentum applied) is the noise in the gradient, at least in later stages of optimization, and that this is not true of the baseline to nearly the same extent. This would seem to suggest that K-FAC, much more than SGD, would bene\ufb01t from a massively parallel distributed implementation which makes use of more computational resources than a single GPU. 
4 http://www.cs.toronto.edu/~jmartens/docs/KFAC3-MATLAB.zip

Figure 9: Results from our first experiment examining the relationship between the mini-batch size m and the per-iteration progress (left column) or per-training-case progress (right column) made by K-FAC on the MNIST deep autoencoder problem. Here, "Blk-TriDiag K-FAC" is the block-tridiagonal version of K-FAC, "Blk-Diag K-FAC" is the block-diagonal version, and "no moment." indicates that momentum was not used. The bottom row consists of zoomed-in versions of the right plot from the row above it, with the left plot concentrating on the beginning stage of optimization, and the right plot concentrating on the later stage. Note that the x-axes of these two last plots are at significantly different scales (10^6 vs 10^7).

But even in the single CPU/GPU setting, the fact that the per-iteration rate of progress tends to a superlinear function of m, while the per-iteration computational cost of K-FAC is a roughly linear function of m, suggests that in order to obtain the best per-second rate of progress with K-FAC, we should use a rapidly increasing schedule for m. To this end we designed an exponentially increasing schedule for m, given by m_k = min(m_1 exp((k - 1)/b), |S|), where k is the current iteration, m_1 = 1000, and where b is chosen so that m_500 = |S|. The approach of increasing the mini-batch size in this way is analyzed by Friedlander and Schmidt (2012). Note that for other neural network optimization problems, such as ones involving larger training datasets than these autoencoder problems, a more slowly increasing schedule, or one that stops increasing well before m reaches |S|, may be more appropriate. One may also consider using an approach similar to that of Byrd et al. (2012) for adaptively determining a suitable mini-batch size.
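A minimal sketch of this schedule follows (written in Python for illustration; the dataset size shown is an arbitrary example, and in our experiments b was chosen per problem so that m_500 = |S|):

```python
import numpy as np

def minibatch_schedule(k, m1=1000, dataset_size=50000, k_full=500):
    # m_k = min(m1 * exp((k - 1)/b), |S|), with b chosen so the schedule reaches |S| at k = k_full.
    b = (k_full - 1) / np.log(dataset_size / m1)
    return int(round(min(m1 * np.exp((k - 1) / b), dataset_size)))

# The schedule starts at m_1 = 1000 and grows to the full dataset by iteration 500,
# e.g. roughly 2173 cases at k = 100 and 7043 at k = 250 for |S| = 50000.
sizes = [minibatch_schedule(k) for k in (1, 100, 250, 500)]
```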
In our second experiment we evaluated the performance of our implementation of K-FAC versus the baseline on all 3 deep autoencoder problems, where we used the above described exponentially increasing schedule for m for K-FAC, and a fixed setting of m for the baseline and momentum-less K-FAC (which was chosen from a small range of candidates to give the best overall per-second rate of progress). The relatively high values of m chosen for the baseline (m = 250 for CURVES, and m = 500 for MNIST and FACES, compared to the m = 200 which was used by Sutskever et al. (2013)) reflect the fact that our implementation of the baseline uses a high-performance GPU and a highly optimized linear algebra package, which allows for many training cases to be efficiently processed in parallel. Indeed, after a certain point, making m much smaller didn't result in a significant reduction in the baseline's per-iteration computation time. Note that in order to process the very large mini-batches required for the exponentially increasing schedule without overwhelming the memory of the GPU, we partitioned the mini-batches into smaller "chunks" and performed all computations involving the mini-batches, or subsets thereof, one chunk at a time.

The results from this second experiment are plotted in Figures 10 and 11. For each problem K-FAC had a per-iteration rate of progress which was orders of magnitude higher than that of the baseline's (Figure 11), provided that momentum was used, which translated into an overall much higher per-second rate of progress (Figure 10), despite the higher cost of K-FAC's iterations (due mostly to the much larger mini-batch sizes used). Note that Polyak averaging didn't produce a significant increase in convergence rate of K-FAC in this second experiment (actually, it hurt a bit) as the increasing schedule for m provided a much more effective (although expensive) solution to the problem of noise in the gradient. The importance of using some form of momentum on these problems is emphasized in these experiments by the fact that without the momentum technique developed in Section 7, K-FAC wasn't significantly faster than the baseline (which itself used a strong form of momentum). These results echo those of Sutskever et al. (2013), who found that without momentum, SGD was orders of magnitude slower on these particular problems. Indeed, if we had included results for the baseline without momentum they wouldn't even have appeared in the axes boundaries of the plots in Figure 10.

Figure 10: Results from our second experiment showing training error versus computation time on the CURVES (top), MNIST (middle), and FACES (bottom) deep autoencoder problems.
Figure 11: More results from our second experiment showing training error versus iteration on the CURVES (top row), MNIST (middle row), and FACES (bottom row) deep autoencoder problems. The plots on the right are zoomed-in versions of those on the left which highlight the difference in per-iteration progress made by the different versions of K-FAC.

Recall that the type of momentum used by K-FAC compensates for the inexactness of our approximation to the Fisher by allowing K-FAC to build up a better solution to the exact quadratic model minimization problem (defined using the exact Fisher) across many iterations. Thus, if we were to use a much stronger approximation to the Fisher when computing our update proposals ∆, the benefit of using this type of momentum would have likely been much smaller than what we observed. One might hypothesize that it is the particular type of momentum used by K-FAC that is mostly responsible for its advantages over the SGD baseline. However, in our testing we found that for SGD the more conventional type of momentum used by Sutskever et al. (2013) performs significantly better.

From Figure 11 we can see that the block-tridiagonal version of K-FAC has a per-iteration rate of progress which is typically 25% to 40% larger than the simpler block-diagonal version. This observation provides empirical support for the idea that the block-tridiagonal approximate inverse Fisher F̂^{-1} is a more accurate approximation of F^{-1} than the block-diagonal approximation F̆^{-1}. However, due to the higher cost of the iterations in the block-tridiagonal version, its overall per-second rate of progress seems to be only moderately higher than the block-diagonal version's, depending on the problem.

Note that while matrix-matrix multiplication, matrix inverse, and SVD computation all have the same computational complexity, in practice their costs differ significantly (in increasing order as listed). Computation of the approximate Fisher inverse, which is performed in our experiments once every 20 iterations (and for the first 3 iterations), requires matrix inverses for the block-diagonal version, and SVDs for the block-tridiagonal version. For the FACES problem, where the layers can have as many as 2000 units, this accounted for a significant portion of the difference in the average per-iteration computational cost of the two versions (as these operations must be performed on 2000 × 2000 sized matrices).
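The practical gap between these operations, despite their identical asymptotic complexity, is easy to observe with a simple timing check such as the following illustrative sketch (this is not the benchmark setup used in our experiments, and timings are machine- and library-dependent):

```python
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)   # symmetric positive-definite, like a damped Kronecker factor

for name, op in [("matmul", lambda: A @ A),
                 ("inverse", lambda: np.linalg.inv(A)),
                 ("SVD", lambda: np.linalg.svd(A))]:
    start = time.perf_counter()
    op()
    print(f"{name}: {time.perf_counter() - start:.2f}s")  # typically matmul < inverse < SVD
```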
While our results suggest that the block-diagonal version is probably the better option overall due to its greater simplicity (and comparable per-second progress rate), the situation may be different given a more ef\ufb01cient implementation of K-FAC where the more expensive SVDs required by the tri-diagonal version are computed approximately and/or in parallel with the other tasks, or perhaps even while the network is being optimized. Our results also suggest that K-FAC may be much better suited than the SGD baseline for a massively distributed implementation, since it would require far fewer synchronization steps (by virtue of the fact that it requires far fewer iterations). 14" + }, + { + "url": "http://arxiv.org/abs/1411.7717v3", + "title": "On the Expressive Efficiency of Sum Product Networks", + "abstract": "Sum Product Networks (SPNs) are a recently developed class of deep generative\nmodels which compute their associated unnormalized density functions using a\nspecial type of arithmetic circuit. When certain sufficient conditions, called\nthe decomposability and completeness conditions (or \"D&C\" conditions), are\nimposed on the structure of these circuits, marginal densities and other useful\nquantities, which are typically intractable for other deep generative models,\ncan be computed by what amounts to a single evaluation of the network (which is\na property known as \"validity\"). However, the effect that the D&C conditions\nhave on the capabilities of D&C SPNs is not well understood.\n In this work we analyze the D&C conditions, expose the various connections\nthat D&C SPNs have with multilinear arithmetic circuits, and consider the\nquestion of how well they can capture various distributions as a function of\ntheir size and depth. Among our various contributions is a result which\nestablishes the existence of a relatively simple distribution with fully\ntractable marginal densities which cannot be efficiently captured by D&C SPNs\nof any depth, but which can be efficiently captured by various other deep\ngenerative models. We also show that with each additional layer of depth\npermitted, the set of distributions which can be efficiently captured by D&C\nSPNs grows in size. This kind of \"depth hierarchy\" property has been widely\nconjectured to hold for various deep models, but has never been proven for any\nof them. Some of our other contributions include a new characterization of the\nD&C conditions as sufficient and necessary ones for a slightly strengthened\nnotion of validity, and various state-machine characterizations of the types of\ncomputations that can be performed efficiently by D&C SPNs.", + "authors": "James Martens, Venkatesh Medabalimi", + "published": "2014-11-27", + "updated": "2015-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Sum Product Networks (SPNs) [Poon and Domingos, 2011] are a recently developed class of deep generative models which compute their associated unnormalized density functions using a special type of arithmetic circuit. Like neural networks, arithmetic circuits [e.g. Shpilka and Yehudayoff, \u2217jmartens@cs.toronto.edu \u2020venkatm@cs.toronto.edu 1 \f2010] are feed-forward circuits whose gates/nodes compute real values, and whose connections have associated real-valued weights. Each node in an arithmetic circuit computes either a weighted sum or a product over their real-valued inputs. 
For an important special class of SPNs called "valid SPNs", computing the normalizing constant, along with any marginals, can be performed by what amounts to a single evaluation of the network. This is to be contrasted with other deep generative models like Deep Boltzmann Machines [Salakhutdinov and Hinton, 2009], where quantities crucial to learning and model evaluation (such as the normalizing constant) are provably intractable, unless P = #P [Roth, 1996]. The tractability properties of valid SPNs are the primary reason they are interesting, both from a theoretical and a practical perspective. However, validity is typically enforced via the so-called "decomposability" and "completeness" conditions (which we will abbreviate as "D&C"). While easy to describe and verify, the D&C conditions impose stringent structural restrictions on SPNs which limit the kinds of architectures that are allowed. While some learning algorithms have been developed that can respect these conditions [e.g. Gens and Domingos, 2013, Peharz et al., 2013, Rooshenas and Lowd, 2014], the extent to which they limit the expressive efficiency1 of SPNs versus various other deep generative models remains unclear. Like most models, D&C SPNs are "universal" in the sense that they can capture any distribution if they are allowed to be of a size which is exponential in the dimension n of the data/input. However, any distribution function which can be efficiently captured by D&C SPNs, which is to say by one of polynomial size, must therefore be tractable (in the sense of having computable marginals, etc.). And given complexity-theoretic assumptions like P ≠ #P, it is easy to come up with density functions whose marginals/normalizers are intractable, but which nonetheless correspond to distributions which can be efficiently captured by various other deep generative models (e.g. using the simulation results from Martens [2014]). Thus we see that the tractability properties enjoyed by D&C SPNs indeed come with a price. However, one could argue that the intractability of these kinds of "hard" distributions would make it difficult or even impossible to learn them in practice. Moreover, any model which can efficiently capture them must therefore lack an efficient general-case inference/learning algorithm.

1 By this we mean the extent to which they can efficiently capture various distributions. A distribution is "efficiently captured" if it is contained in the closure of the set of distributions corresponding to different settings of the model's parameters, for polynomial-sized (in the dimension n of the data/input) instances of the model, where size is measured by the number of "units" or parameters. Often these will be realistic low-order polynomials, although this depends on how exactly the constructions are done. Note that a distribution being "efficiently captured" says nothing about how easily the marginal densities or partition function of its associated density can be computed (except in the case of D&C SPNs, of course).
The concept of expressive ef\ufb01ciency is also sometimes called \u201cexpressive power\u201d or \u201crepresentational power\u201d, although we will use the word \u201cef\ufb01ciency\u201d instead of \u201cpower\u201d to emphasize our focus on the question of whether or not certain distributions which can be captured ef\ufb01ciently by the model, instead of the question of whether or not they can be captured at all (i.e. by super-polynomially sized instances of the model). This latter question is the topic of papers which present so-called \u201cuniversality\u201d results which show how some models can capture any distribution if they are allowed be exponentially large in n (by essentially simulating a giant look-up table). Such results are fairly straightforward, and indeed it easy to show that D&C SPNs are universal in this sense. 2 \fThis is a valid point, and it suggests the obvious follow-up question: is there a fully tractable distribution (in the sense that its marginal densities and partition function can be computed ef\ufb01ciently) which can be ef\ufb01ciently captured by other deep models, but not by D&C SPNs? In this work we answer this question in the af\ufb01rmative, without assuming any complexity theoretic conjectures. This result thus establishes that D&C SPNs are in some sense less expressively ef\ufb01cient than many other deep models (since, by the results of Martens [2014], such models can ef\ufb01ciently simulate D&C SPNs), even if we restrict our attention only to tractable distributions. Moreover, it suggests existence of a hypothetical model which could share the tractability properties of D&C SPNs, while being more expressively ef\ufb01cient. In addition to this result, we also analyze the effect of depth, and other structural characteristics, on the expressive ef\ufb01ciency of D&C SPNs. Perhaps most notably, we use existing results from arithmetic circuit theory to establish that D&C SPNs gain expressive ef\ufb01ciency with each additional layer of depth. In particular, we show that the set of distributions which can be ef\ufb01ciently captured by D&C SPNs grows with each layer of depth permitted. This kind of \u201cdepth hierarchy\u201d property has never before been shown to hold for any other well-known deep model, despite the widespread belief that it does hold for most of them [e.g. Bengio and Delalleau, 2011]. Along with these two results, we also make numerous other contributions to the theoretical understanding of SPNs which are summarized below. In Section 2, we \ufb01rst propose a generalized de\ufb01nition of SPNs that captures all previous de\ufb01nitions. We then illuminate the various connections between SPNs and multilinear arithmetic circuits, allowing us to exploit the many powerful results which have already been proved for the latter. In Section 3 we provide new insights regarding the D&C conditions and their relationship to validity, and introduce a slightly strengthened version of validity which we show to be equivalent to the D&C conditions (whereas standard validity is merely implied by them). We also show that for a slightly generalized de\ufb01nition of SPNs, testing for standard validity is a co-NP hard problem. In Section 5 we give examples of various state-based models of computation which can be ef\ufb01ciently simulated by D&C SPNs, and show how these can be used to give constructive proofs that various simple density functions can be ef\ufb01ciently computed by D&C SPNs. 
In Section 6 we address the prior work on the expressive ef\ufb01ciency of D&C SPNs due to Delalleau and Bengio [2011], and give a much shorter proof of their results using powerful techniques borrowed from circuit theory. We go on to show how these techniques allow us to signi\ufb01cantly strengthen and extend the results of Delalleau and Bengio [2011], answering an open question which they posed. In Section 7 we leverage prior work done on multilinear arithmetic circuits to prove several very powerful results regarding the relationship between depth and expressive ef\ufb01ciency of D&C SPNs. First, we show that with each extra layer of depth added, there is an expansion of the set of functions ef\ufb01ciently computable by D&C SPNs (thus giving a strict \u201chierarchy of depth\u201d). Next 3 \fwe show that if depth is allowed to grow with the input dimension n, that its effect on expressive ef\ufb01ciency greatly diminishes after it reaches O(log(n)2). In Section 8 we show that when D&C SPNs are constrained to have a recursive \u201cformula\u201d structure, as they are when learned using the approach of [Gens and Domingos, 2013], they lose expressive ef\ufb01ciency. In particular we use prior work on multilinear arithmetic circuits to produce example functions which can be ef\ufb01ciently computed by general D&C SPNs, but not by ones constrained to have a formula structure. Finally, in Section 9 we give what is perhaps our most signi\ufb01cant and dif\ufb01cult result, which is the existence of a simple density function whose marginals and normalizer are computable by an O(n1.19) time algorithm, and whose corresponding distribution can be ef\ufb01ciently captured by various other deep models (in terms of their size), but which cannot be ef\ufb01ciently computed, or even ef\ufb01ciently approximated, by a D&C SPN of any depth. 2 De\ufb01nitions and Notation 2.1 Arithmetic circuits Arithmetic circuits [e.g. Shpilka and Yehudayoff, 2010] are a type of circuit, similar to Boolean logic circuits, or neural networks. But instead of having gates/nodes which compute basic logical operations like AND, or sigmoidal non-linearities, they have nodes which perform one of the two fundamental operations of arithmetic: addition and multiplication. Their formal de\ufb01nition follows. An arithmetic circuit \u03a6 over a set/tuple2 of real-valued variables y = (y1, y2, .., y\u2113) will be de\ufb01ned as a special type of directed acyclic graph with the following properties. Each node of the graph with in-degree 0 is labeled by either a variable from y or an element from R. Every other node is labeled by either \u00d7 or +, and are known as product nodes or sum nodes respectively. All the incoming edges to a sum node are labeled with weights from R. Nodes with no outgoing edges are referred to as output nodes. We will assume that arithmetic circuits only have one output node, which we will refer to as the root. Nodes with edges going into a node u in \u03a6 are referred to as u\u2019s children. The set of such children is denoted by C(u). Given values of the elements of y, a node u of an arithmetic circuit computes a real-valued output, which we denote by qu(y), according to the following rules. When u is labeled with an element of y or R, the node simply computes its label. The latter type of nodes are referred to as constant nodes, since they compute constants that don\u2019t depend on y. Product nodes compute the product of the outputs of their children, i.e. 
qu(y) = Q v\u2208C(u) qv(y), while sum nodes compute a weighted sum of the outputs of their children, i.e. qu(y) = P v\u2208C(u) wv,uqv(y), where wu,v denotes the weight labeling the edge from from v to u. Given these de\ufb01nitions, it is not hard to see that for 2By \u201cset/tuple\u201d we mean a tuple like t = (a, b, c) which we will occasionally treat like a standard set, so that expressions such as t \u2229s are well de\ufb01ned and have the natural interpretation. 4 \feach node u of an arithmetic circuit, qu(y) is a multivariate polynomial function in the elements of y. The output of \u03a6, denoted by q\u03a6(y), is de\ufb01ned as the output of its singular root/output node (i.e. qr(y), where r is the root/output node of \u03a6). For a node u in \u03a6, \u03a6u denotes the subcircuit of \u03a6 rooted at u. This subcircuit is formed by taking only the nodes in \u03a6 that are on a path to u. An arithmetic circuit is said to be monotone if all of its weights and constants are non-negative elements of R. The scope of a node u, denoted by yu, is de\ufb01ned as the subset of the elements of y which appear as labels in the sub-circuit rooted at u. These are the variables which u\u2019s output essentially \u201cdepends on\u201d. The size of \u03a6, denoted by |\u03a6|, is de\ufb01ned as the number of nodes in \u03a6, and its depth is de\ufb01ned as the length of the longest directed path in \u03a6. An alternative notion of depth, called product depth [Raz and Yehudayoff, 2009], is de\ufb01ned as the largest number of product nodes which appear in a directed path in \u03a6. Note that in general, nodes in an arithmetic circuit can have out-degree greater than 1, thus allowing the quantities they compute to be used in multiple subsequent computations by other nodes. When this is not the case and the nodes of \u03a6 each have out-degree at most 1, \u03a6 is said to be an arithmetic formula, because it can be written out compactly as a formula. 2.2 Sum Product Networks (SPNs) In this section we will give our generalized de\ufb01nition of Sum Product Networks (SPNs). Let x = (x1, x2, .., xn) be a set/tuple of variables where xi can take values in a range set Ri \u2286R and M1, M2, ..., Mn are measures over the respective Ri\u2019s. For each i let fi = (fi,1(xi), fi,2(xi), ..., fi,mi(xi)) denote a set/tuple of mi non-negative real-valued univariate functions of xi, each with a \ufb01nite Mi-integral over Ri. f will denote the set/tuple whose elements are given by appending all of these fi\u2019s together, and M will denote the product measure M1 \u00d7 M2 \u00d7 ... \u00d7 Mn. A Sum Product Network (SPN) \u03a6 is de\ufb01ned as a monotone arithmetic circuit over f. It inherits all of the properties of monotone arithmetic circuits, and gains some additional ones, which are discussed below. Because an SPN is an arithmetic circuit over f, any one of its nodes u computes a polynomial function qu(f) in f. But because the elements of f are functions of the elements of x, a node u of an SPN can be also viewed as computing a function of x, as given by qu(f(x)), where f(x) denotes the set/tuple obtained by replacing each element fi,j in f with the value of fi,j(xi). The dependency-scope of a node u is de\ufb01ned as the set of elements of x on which members of u\u2019s scope fu depend. The dependency-scope is denoted by xu. 5 \fSPNs are primarily used to model distributions over x. 
They do this by de\ufb01ning a density function given by p\u03a6(x) = 1 Zq\u03a6(f(x)), where Z = R q\u03a6(f(x))dM(x) is a normalizing constant known as the partition function. Because q\u03a6(f(x)) is non-negative this density is well de\ufb01ned, provided that Z is non-zero and \ufb01nite. A formula SPN is de\ufb01ned as an SPN which is also an arithmetic formula. In other words, a formula SPN is one whose nodes each have an out-degree of at most 1. It is important to remember that the domains (the Ri\u2019s) and the measures (Mi\u2019s) can be de\ufb01ned however we want, so that SPNs can represent both continuous and discrete distributions. For example, to represent a discrete distribution, we can choose Mi to be the counting measure with support given by a \ufb01nite subset, such as {0, 1}. In such a case, integration of some function g(xi) w.r.t. such a Mi amounts to the summation P xi\u2208{0,1} g(xi). 2.3 Validity, decomposability, and completeness Treated as density models, general SPNs suffer from many of the same intractability issues that plague other deep density models, such as Deep Boltzmann Machines [Salakhutdinov and Hinton, 2009]. In particular, there is no ef\ufb01cient general algorithm for computing their associated partition function Z or marginal densities. However, it turns out that for a special class of SPNs, called valid SPNs, computing the the partition function and marginal densities can be accomplished by what is essentially a single evaluation of the network. Moreover, the validity of a given SPN can be established using certain easy-to-test structural conditions called decomposability and completeness, which we will discuss later. De\ufb01nition 1 (Valid SPN) An SPN \u03a6, is said to be valid if the following condition always holds. Let I = (i1, i2, ..., i\u2113) where each ij is a distinct element of [n] = {1, 2, ..., n}, and let Si1 \u2286Ri1, Si2 \u2286 Ri2, ..., Si\u2113\u2286Ri\u2113be subsets of the ranges of the respective xij\u2019s. For any \ufb01xed value of x[n]\\I we have Z Si1\u00d7Si2\u00d7...\u00d7Si\u2113 q\u03a6(f(x)) dMI(xI) = q\u03a6(AI(SI, x[n]\\I)) where xI = (xi1, xi2, ..., xi\u2113) (with x[n]\\I de\ufb01ned analogously), MI = Mi1 \u00d7 Mi2 \u00d7 ... \u00d7 Mi\u2113 (with SI de\ufb01ned analogously), and where AI(SI, x[n]\\I) denotes the set/tuple obtained by taking f and for each i \u2208I and each j replacing fi,j with its integral R Si fi,j(xi)dMi(xi) over Si, and also replacing fi,j for each i \u2208[n] \\ I with fi,j(xi). Decoding the notation, this de\ufb01nition says that for a valid SPN \u03a6 we can compute the integral of the output function q\u03a6(f(x)) with respect to a subset of the input variables (given by the index set I) over corresponding subsets of their respective domains (the Si\u2019s), simply by computing the corresponding integrals over the respective univariate functions (the fi,j\u2019s) and evaluating the circuit by having nodes labeled by these fi,j\u2019s compute said integrals. 6 \fNote that for a subsets of the range of Ri1 \u00d7 Ri2 \u00d7 ... \u00d7 Ri\u2113of xI that do not have the form of a Cartesian product Si1 \u00d7 Si2 \u00d7 ... \u00d7 Si\u2113, validity doesn\u2019t say anything. In general, the integral over such a set will be intractable for valid SPNs. Validity is a very useful property for an SPN \u03a6 to have, as it allows us to ef\ufb01ciently integrate q\u03a6(f(x)) with respect to any subset of variables/elements of x by performing what amounts to a single evaluation of \u03a6. 
Among other uses [Gens and Domingos, 2013], this allows us to ef\ufb01ciently compute the partition function3 which normalizes \u03a6\u2019s associated density function p\u03a6 by taking I = [n] and Si = Ri for each i \u2208[n]. It also allows us to ef\ufb01ciently compute any marginal density function by taking I \u2282[n] and Si = Ri for each i \u2208I. While validity may seem like a magical property for an SPN to have, as shown by Poon and Domingos [2011] there is a pair of easy to enforce (and verify) structural properties which, when they appear together, imply validity. These are known as \u201cdecomposability\u201d and \u201ccompleteness\u201d, and are de\ufb01ned as follows. De\ufb01nition 2 (Decomposable) An SPN \u03a6 is decomposable if for every product node u in \u03a6 the dependency-scopes of its children are pairwise disjoint. De\ufb01nition 3 (Completeness) An SPN \u03a6 is complete if for every sum node u in \u03a6 the dependencyscopes of its children are all the same. As was the case in the work of Poon and Domingos [2011], decomposability and completeness turn out to be suf\ufb01cient conditions, but not necessary ones, for ensuring validity according to our more general set of de\ufb01nitions. Moreover, we will show that for a natural strengthening of the concept of validity, decomposability and completeness become necessary conditions as well. The tractability of the partition function and marginal densities is a virtually unheard of property for deep probabilistic models, is the primary reason that decomposably and complete SPNs are so appealing. For the sake of brevity we will call an SPN which satis\ufb01es the decomposability and completeness conditions a D&C SPN. A notion related to decomposability which was discussed in Poon and Domingos [2011] is that of \u201cconsistency\u201d, which is de\ufb01ned only for SPNs whose univariate functions f are either the identity function g(z) = z or the negation function g(z) = 1 \u2212z, and whose inputs variables x are all 0/1-valued. Such an SPN is said to be consistent if each product node satis\ufb01es the property that if one of its children has the identity function of xi in its scope, then none of the other children can have the negation function of xi in their scopes. This is a weaker condition than decomposability, and is also known to imply validity [Poon and Domingos, 2011]. Note that for 0/1-valued variables we have x2 i = xi and (1\u2212xi)2 = 1\u2212xi, and so it is possible to construct an equivalent decomposable SPN from a consistent SPN by modifying the children 3As an aside, note that validity also acts as a proof of the \ufb01niteness of the partition function, provided each integral R Ri fi,j(xi)dM(xi) is \ufb01nite. 7 \fof each product node so as to remove the \u201credundant\u201d factors of xi (or 1 \u2212xi). Note that such a construction may require the introduction of polynomially many additional nodes, as in the proof of Proposition 10. In light of this, and the fact that consistency only applies to a narrowly de\ufb01ned sub-class of SPNs, we can conclude that consistency is not a particularly interesting property to study by itself, and so we will not discuss it any further. 2.4 Top-down view of D&C SPNs For a D&C SPN \u03a6 it is known (and is straightforward to show) that if the weights on the incoming edges to each sum node sum to 1, and the univariate functions have integrals of 1 (i.e. 
so they are normalized density functions), then the normalizing constant of \u03a6\u2019s associated density is 1, and each node can be interpreted as computing a normalized density over the variables in its dependency scope. We will call such a \u03a6 \u201cweight-normalized\u201d. A weight-normalized D&C SPN \u03a6 can be interpreted as a top-down directed generative model where each sum node corresponds to a mixture distribution over the distributions associated with its children (with mixture weights given by the corresponding edge weights), and where each product node corresponds to factorized distribution, with factors given by the distributions of its children [Gens and Domingos, 2013]. Given this interpretation it is not hard to see that sampling from \u03a6 can be accomplished in a top-down fashion starting at the root, just like in a standard directed acyclic graphical model. One interesting observation we can make is that it is always possible to transform a general D&C SPN into an equivalent weight-normalized one, as is formalized in the following proposition: Proposition 4. Given a D&C SPN \u03a6 there exists a weight-normalized D&C SPN \u03a6\u2032 with the same structure as \u03a6 and with the same associated distribution. 2.5 Relationship to previous de\ufb01nitions Our de\ufb01nitions of SPNs and related concepts subsume those given by Poon and Domingos [2011] and later by Gens and Domingos [2013]. Thus the various results we prove in this paper will still be valid according to those older de\ufb01nitions. The purpose of this subsection is justify the above claim, with a brief discussion which assumes pre-existing familiarity with the de\ufb01nitions given in the above cited works. First, to see that our de\ufb01nition of SPNs generalizes that of Poon and Domingos [2011], observe that we can take the univariate functions to be of the form xi or xi = 1 \u2212xi, and that we can choose the domains of measures so that the xi\u2019s are discrete {0, 1}-valued variables, and choose the associated measures so that integration over values of xi becomes equivalent to summation. 8 \fSecond, to see that our de\ufb01nition generalizes that of Gens and Domingos [2013], observe that we can take the univariate functions to be univariate density functions. And while Gens and Domingos [2013] formally de\ufb01ned SPNs as always being decomposable and complete, we will keep the concepts of SPNs and D&C SPNs separate in our discussions, as in the original paper by Poon and Domingos [2011]. 2.6 Polynomials and multilinearity In this section we will de\ufb01ne some additional basic concepts which will be useful in our analysis of SPNs in the coming sections. Given a set/tuple of formal variables y = (y1, y2, ..., y\u2113), a monomial is de\ufb01ned as a product of elements of y (allowing repeats). For example, y1y3 2 is a monomial. In an arithmetic expression, a \u201cmonomial term\u201d refers to a monomial times a coef\ufb01cient from R. For example 4y1y4 2 is a monomial term in 2y1 + 4y1y4 2. In general, polynomials over y are de\ufb01ned as a \ufb01nite sum of monomial terms. Given a monomial m, its associated coef\ufb01cient in a polynomial q will refer to the coef\ufb01cient of the monomial term whose associated monomial is m (we will assume that like terms have been collected, so this is unique). As a short-hand, we will say that a monomial m is \u201cin q\u201d if the associated coef\ufb01cient of m in q is non-zero. 
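As a concrete illustration of this terminology (the polynomial below is our own small variation on the example just given), a computer-algebra system can collect like terms and report associated coefficients directly, so that a monomial is "in" the polynomial exactly when its reported coefficient is non-zero:

```python
import sympy as sp

# Sketch (example is ours): collect like terms and read off associated coefficients.
y1, y2 = sp.symbols("y1 y2")
q = sp.Poly(2*y1 + 4*y1*y2**4 + 3*y1, y1, y2)   # like terms collected: 5*y1 + 4*y1*y2**4

print(q.coeff_monomial(y1))            # 5  -> the monomial y1 is "in" q
print(q.coeff_monomial(y1*y2**4))      # 4  -> so is y1*y2**4
print(q.coeff_monomial(y2))            # 0  -> y2 is not "in" q
```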
The zero polynomial is de\ufb01ned as a polynomial which has no monomials in it. While the zero polynomial clearly computes the zero function, non-zero polynomials can sometimes also compute the zero function over the domain of y, and thus these are related but distinct concepts 4. A polynomial is called non-negative if the coef\ufb01cients of each of its monomials are nonnegative. Non-negativity is related to the monotonicity of arithmetic circuits in the following way: Fact 5. If \u03a6 is a monotone arithmetic circuit over y (such as an SPN with y = f), then q\u03a6 is a non-negative polynomial. We will de\ufb01ne the scope of a polynomial q in y, denoted by yq, to be the set of variables which appear as factors in at least one of its monomials. Note that for an node u in an arithmetic circuit, the scope yu of u can easily be shown to be a superset of the scope of its output polynomial qu (i.e. yqu), but it will not be equal to yqu in general. A central concept in our analysis of D&C SPNs will be that of multilinearity, which is closely related to the decomposability condition. De\ufb01nition 6 (Multilinear Polynomial) A polynomial q in y is multilinear if the degree of each element of y is at most one in each monomial in q. For example, y1 + y2y3 is a multilinear polynomial. 4For example, y1(1 \u2212y1) computes the zero function when the domain of y1 is {0, 1} but is clearly not the zero polynomial. 9 \fSome more interesting examples of multilinear polynomials include the permanent and determinant of a matrix (where we view the entries of the matrix as the variables). De\ufb01nition 7 (Multilinear Arithmetic Circuit) If every node of an arithmetic circuit \u03a6 over y computes a multilinear polynomial in y, \u03a6 is said to be a (semantically) multilinear arithmetic circuit. And if for every product node in \u03a6, the scopes of its child nodes are pair-wise disjoint, \u03a6 is said to be a syntactically multilinear arithmetic circuit. It is easy to show that a syntactically multilinear arithmetic circuit is also a semantically multilinear circuit. However, it is an open question as to whether one can convert a semantically multilinear arithmetic circuit into a syntactically multilinear one without increasing its size by a super-polynomial factor Raz et al. [2008]. In the case of formulas however, given a semantically multilinear formula of size s one can transform it into an equivalent syntactically multilinear formula of size at most s Raz [2004]. It should be obvious by this point that there is an important connection between syntactic multilinearity and decomposability. In particular, if our univariate functions of the xi\u2019s are all identity functions, then scope and dependency-scope become equivalent, and thus so do syntactic multilinearity and decomposability. Given this observation we have that a monotone syntactically multilinear arithmetic circuit over x can be viewed as a decomposable SPN. A somewhat less obvious fact (which will be very useful later) is that any decomposable SPN over 0/1-valued xi\u2019s can be viewed as a syntactically multilinear arithmetic circuit over x, of a similar size and depth. To see this, note that any arbitrary univariate function g of a 0/1valued variable z can always be written as an af\ufb01ne function of z, i.e. of the form az + b with a = g(1) \u2212g(0) and b = g(0). 
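The affine rewriting invoked here is easy to check numerically. In the sketch below (the particular function g is our own arbitrary choice) the coefficient a = g(1) − g(0) happens to be negative, which is exactly why the resulting multilinear circuit is in general non-monotone:

```python
# Sketch of the affine rewriting above: a univariate function g of a {0,1}-valued variable z
# always equals a*z + b with a = g(1) - g(0) and b = g(0). The example g is our own choice.

def affine_form(g):
    return g(1) - g(0), g(0)

g = lambda z: 3.5 if z == 0 else -1.0    # arbitrary function of a binary variable
a, b = affine_form(g)                    # a = -4.5 (negative, hence non-monotone), b = 3.5
assert all(g(z) == a * z + b for z in (0, 1))
```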
Thus we can replace each node computing a univariate function of some xi with a subcircuit computing this af\ufb01ne function, and this yields a (non-monotone) syntactically multilinear arithmetic circuit over x, with a single additional layer of sum nodes (of size O(n)). An extension of the concept of multilinearity is that of set-multilinearity [e.g. Shpilka and Yehudayoff, 2010]. To de\ufb01ne set-multilinearity we must \ufb01rst de\ufb01ne some additional notation which we will carry through the rest of the paper. Let G1, G2, G3, ..., Gk be a partitioning of the elements of y into disjoint sets. The set-scope Gq of a polynomial q is the sub-collection of the collection {G1, ..., Gk} de\ufb01ned as consisting of those sets Gi which have some element in common with the scope yq of q. i.e. Gq = {Gi : yq \u2229Gi \u0338= \u2205}. Similarly, the set-scope Gu of a node u in an arithmetic circuit \u03a6 is the subcollection of the collection {G1, ..., Gk} de\ufb01ned as consisting of those sets Gi which have some element in common with the scope of u. i.e. Gu = {Gi : yu \u2229Gi \u0338= \u2205}. De\ufb01nition 8 (Set-multilinear polynomial) A polynomial is set-multilinear if each of its monomials has exactly one factor from each of the Gi\u2019s in its set-scope. For example, 3y1y3\u2212y2y4 is a set-multilinear polynomial when G1 = {y1, y2}, G2 = {y3, y4}, 10 \fwhile y1y2+2y2y4 is not. The permanent and determinant of a matrix also turn out to be non-trivial examples of set-multilinear polynomials, if we de\ufb01ne the collection of sets so that Gi consists of the entries in the ith row of the matrix. De\ufb01nition 9 (Set-multilinear arithmetic circuits) An arithmetic circuit is called (semantically) setmultilinear if each of its nodes computes a set-multilinear polynomial. An arithmetic circuit \u03a6 is called syntactically set-multilinear if it satis\ufb01es the following two properties: \u2022 for each product node u in \u03a6, the set-scopes of the children of u are pairwise disjoint \u2022 for each sum node u, the set-scopes of the children of u are all the same. A crucial observation is that the concepts of set-multilineary in arithmetic circuits and decomposability and completeness in SPNs are even more closely related than syntactic multilinearity is to decomposability. In particular, it is not hard to see that if we take k = n, y = f and Gi = fi, then set-scope (of nodes) and dependency-scope become analogous concepts, and D&C SPNs correspond precisely to monotone syntactically set-multilinear arithmetic circuits in f. Because of the usefulness of this connection, we will use the above identifcations for the remainder of this paper whenever we discuss set-multilinearity in the speci\ufb01c context of SPNs. This connection also motivates a natural de\ufb01nition for the dependency-scope for polynomials over f. In particular, the dependency-scope of the polynomial q over f will be de\ufb01ned as the set of variables on which the members of q\u2019s scope fu depend. We will denote the dependency-scope by xq. 3 Analysis of Validity, Decomposability and Completeness In this section we give a novel analysis of the relationship between validity, decomposability and completeness, making use of many of the concepts from circuit theory reviewed in the previous section. First, we will give a quick result which shows that an incomplete SPN can always be ef\ufb01ciently transformed into a complete one which computes the same function of x. 
Note that this is not a paradoxical result, as the new SPN will behave differently than the original one when used to evaluate integrals (in the sense of de\ufb01nition of valid SPNs in Section 2). Proposition 10. Given an SPN \u03a6 of size s there exists a complete SPN \u03a6\u2032 of size s+n+k \u2208O(s2), and an expanded set/tuple of univariate functions f \u2032 s.t. q\u03a6\u2032(f \u2032(x)) = q\u03a6(f(x)) for all values of x, where k is the sum over the fan-in\u2019s of the sum nodes of \u03a6. Moreover, \u03a6\u2032 is decomposable if \u03a6 is. So in some sense we can always get completeness \u201cfor free\u201d, and of the two properties, decomposability will be the one which actually constrains SPNs in a way that affects their expressive ef\ufb01ciency. 11 \fUnlike with decomposability and completeness, validity depends on the particular de\ufb01nitions of univariate functions making up f, and thus cannot be described in purely structural terms like set-multilinearity. This leads us to propose a slightly stronger condition which we call strong validity, which is independent of the particular choice of univariate functions making up f. De\ufb01nition 11 An SPN \u03a6 is said to be strongly valid if it is valid for every possible choice of the univiariate functions making up f. Note: only the values computed by each fi,j are allowed to vary here, not the identities of the dependent variables. The following theorem establishes the fundamental connection between set-multilinearity and strong validity. Theorem 12. Suppose the elements of x are all non-trivial variables (as de\ufb01ned below). Then an SPN \u03a6 is strongly valid if and only if its output polynomial is set-multilinear. A variable xi is non-trivial if there are at least two disjoint subsets of xi\u2019s range Ri which have \ufb01nite and non-zero measure under Mi. Non-triviality is a very mild condition. In the discrete case, it is equivalent to requiring that there is more than one element of the range set Ri which has mass under the associated measure. Trivial variables can essentially be thought of as \u201cconstants in disguise\u201d, and we can easily just replace them with constant nodes without affecting the input-output behavior of the circuit. It is worth noting that the non-triviality hypothesis is a necessary one for the forward direction of Theorem 12 (although not the reverse direction). To see this, consider for example the SPN \u03a6 which computes f1,1(x1)2f2,1(x2) in the obvious way, where R1 = {1} and R2 = {0, 1}, and the Mi are the standard counting measures. While \u03a6\u2019s output polynomial q\u03a6 is not set-multilinear by inspection, it is relatively easy to show that \u03a6 is indeed strongly valid, as it is basically equivalent to cf2,1(x2) for a constant c. While Theorem 12 is interesting by itself as it provides a complete characterization of strong validity in terms of purely algebraic properties of an SPN\u2019s output polynomial, its main application in this paper will be to help prove the equivalence of strong validity with the decomposability and completeness conditions. Note that such an equivalence does not hold for standard validity, as was \ufb01rst demonstrated by Poon and Domingos [2011]. To see this, consider the SPN which computes the expression (f1,1(x1)f1,2(x1) + 1)f2,1(x2) in the obvious way, where the xi are 0/1-valued, the Mi are the standard counting measures, and f1,1(x1) = x1, f1,2(x1) = x1 = 1 \u2212x1, and f2,1(x2) = x2. 
Clearly this SPN is neither decomposable nor complete, and yet an exhaustive case analysis shows that it is valid for these particular choices of the fi,j\u2019s. Before we can prove the equivalence of strong validity with the decomposability and completeness conditions, we need to introduce another mild hypothesis which we call \u201cnon-degeneracy\u201d. De\ufb01nition 13 A monotone arithmetic circuit (such as an SPN) is called non-degenerate if all of its weights and constants are non-zero (i.e. strictly positive). 12 \fLike non-triviality, non-degeneracy is a very mild condition to impose, since weights which are zero don\u2019t actually \u201caffect\u201d the output. Moreover, there is a simple and size-preserving procedure which can transform a degenerate monotone arithmetic circuit to a non-degenerate one which computes precisely the same output polynomial, and also preserves structural properties like decomposability and completeness in SPNs. The procedure is as follows. First we remove all edges with weight 0. Then we repeatedly remove any nodes with fan-out 0 (except for the original output node) or fan-in 0 (except input node and constant nodes), making sure to remove any product node which is a parent of a node we remove. It is not hard to see that deletion of a node by this procedure is a proof that it computes the zero-polynomial and thus doesn\u2019t affect the \ufb01nal output. Without non-degeneracy, the equivalence between strong validity and the decomposability and completeness conditions does not hold for SPNs, as can be seen by considering the SPN \u03a6 which computes the expression 0f1,1(x1)2 + f1,1(x1)f2,1(x2) in the obvious way, where the xi are 0/1-valued and the Mi are the standard counting measures, and the fi,j\u2019s are the identity function (i.e. fi,j(xi) = xi). Because the output polynomial q\u03a6 of \u03a6 is equivalent to f1,1(x1)f2,1(x2), it is indeed valid. However, the product node within \u03a6 which computes f1,1(x1)2 violates the decomposability condition, even though this computation is never actually \u201cused\u201d in the \ufb01nal output (due to how it is weighted by 0). Non-degeneracy allows us to prove many convenient properties, which are given in the lemma below. Lemma 14. Suppose \u03a6 is a non-degenerate monotone arithmetic circuit. Denote by r the root of \u03a6, and {ui}i its child nodes. We have the following facts: 1. Each of the \u03a6ui\u2019s are non-degenerate monotone arithmetic circuits. 2. If r is a product node, the set of monomials in qr is equal to the set consisting of every possible product formed by taking one monomial from each of the qui\u2019s. NOTE: This is true even for degenerate circuits. 3. If r is a sum node, the set of monomials in qr is equal to the union over the sets of monomials in the qui\u2019s. 4. qr is not the zero polynomial. 5. The set-scope of r is equal to the set-scope of qr. We are now in a position to prove the following theorem. Theorem 15. A non-degenerate monotone arithmetic circuit \u03a6 has a set-multilinear output polynomial if and only if it is syntactically set-multilinear. Given this theorem, and utilizing the previously discussed connection between syntactic setmultilinearity and the decomposability and completeness conditions, the following corollary is immediate: 13 \fCorollary 16. A non-degenerate SPN \u03a6 has a set-multilinear output polynomial if and only if it is decomposable and complete. 
And from this and Theorem 12, we have a 3-way equivalence between strong validity, the decomposability and completeness conditions, and the set-multilinearity of the output polynomial. This is stated as the following theorem. Theorem 17. Suppose \u03a6 is a non-degenerate SPN whose input variables (the elements of x) are all non-trivial. Then the following 3 conditions are equivalent: 1. \u03a6 is strongly valid 2. \u03a6 is decomposable and complete 3. \u03a6\u2019s output polynomial is set-multilinear Because SPNs can always be ef\ufb01ciently transformed so that the non-degeneracy and nontriviality hypotheses are both satis\ufb01ed (as discussed above), this equivalence between strong validity and the D&C conditions makes the former easy to verify (since the D&C conditions themselves are). However, as we will see in later sections, decomposability and completeness are restrictive conditions that limit the expressive power of SPNs in a fundamental way. And so a worthwhile question to ask is whether a set of ef\ufb01ciently testable criteria exist for verifying standard/weak validity. We will shed some light on this question by proving a result which shows that a criterion cannot be both ef\ufb01ciently testable and capture all valid SPNs, provided that P \u0338= NP. A caveat to this result is that we can only prove it for a slightly extended de\ufb01nition of SPNs where negative weights and constants are permitted. Theorem 18. De\ufb01ne an extended SPN as one which is allowed to have negative weights and constants. The problem of deciding whether a given extended SPN is valid is co-NP-hard. We leave it as an open question as to whether a similar co-NP-hardness property holds for validity checking of standard SPNs. 4 Focusing on D&C SPNs One of the main goals of this paper is to advance the understanding of the expressive ef\ufb01ciency of SPNs. In this section we explore possible directions we can take towards this goal, and ultimately propose to focus exclusively on D&C SPNs. It is well known that standard arithmetic circuits can ef\ufb01ciently simulate Boolean logic circuits with only a constant factor overhead. Thus they are as ef\ufb01cient at computing a given function as any standard model of computation, up to a polynomial factor. However, we cannot easily exploit 14 \fthis fact to study SPNs, as this simulation requires negative weights, and the weights of an SPN are constrained to be non-negative (i.e. they are monotone arithmetic circuits). And while SPNs have access to non-negative valued univariate functions of the input which standard monotone arithmetic circuits do not, this fact cannot obviously be used to construct a simulation of Boolean logic circuits. Another possible way to gain insight into general SPNs would be to apply existing results for monotone arithmetic circuits. However, a direct application of such results is impossible, as SPNs are monotone arithmetic circuits over f and not x, and indeed their univariate functions can compute various non-negative functions of x (such as 1 \u2212xi for values of xi in {0, 1}) which a monotone circuit could not. But while it seems that the existing circuit theory literature doesn\u2019t offer much insight into general SPNs, there are many interesting results available for multilinear and set-multilinear arithmetic circuits. And as we saw in Section 3, these are closely related to D&C SPNs. 
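Since decomposability and completeness are purely structural conditions, verifying them (as noted above) takes only a single bottom-up traversal that computes dependency-scopes. The following sketch is one minimal way to do this; the node encoding is our own, with leaves identified only by the input variable they depend on:

```python
# Minimal sketch (our own encoding) of checking the D&C conditions.
# Nodes: ("leaf", var), ("prod", children), ("sum", weights, children).

def dependency_scope(node, cache):
    if id(node) not in cache:
        if node[0] == "leaf":
            cache[id(node)] = frozenset([node[1]])
        else:
            children = node[1] if node[0] == "prod" else node[2]
            cache[id(node)] = frozenset().union(*(dependency_scope(c, cache) for c in children))
    return cache[id(node)]

def is_decomposable_and_complete(root):
    cache, stack, seen = {}, [root], set()
    while stack:
        node = stack.pop()
        if id(node) in seen or node[0] == "leaf":
            continue
        seen.add(id(node))
        children = node[1] if node[0] == "prod" else node[2]
        scopes = [dependency_scope(c, cache) for c in children]
        if node[0] == "prod":     # decomposability: pairwise-disjoint dependency-scopes
            if sum(len(s) for s in scopes) != len(frozenset().union(*scopes)):
                return False
        else:                      # completeness: all children share one dependency-scope
            if len(set(scopes)) != 1:
                return False
        stack.extend(children)
    return True

ok = ("sum", [0.5, 0.5], [("prod", (("leaf", "x1"), ("leaf", "x2"))),
                          ("prod", (("leaf", "x1"), ("leaf", "x2")))])
bad = ("prod", (("leaf", "x1"), ("leaf", "x1")))    # violates decomposability
print(is_decomposable_and_complete(ok), is_decomposable_and_complete(bad))   # True False
```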
Moreover, it makes sense to study D&C SPNs, as they are arguably the most interesting class of SPNs, both from a theoretical and practical perspective. Indeed, the main reason why SPNs are interesting and useful in the \ufb01rst place is that valid SPNs avoid the intractability problems that plague conventional deep models like Deep Boltzmann Machines. Meanwhile the D&C conditions are the only ef\ufb01ciently testable conditions for ensuring validity that we are aware of, and as we showed in Section 3, they are also necessary conditions for a slightly strengthened notion of validity. Thus, D&C SPNs will be our focus for the rest of the paper. 5 Capabilities of D&C SPNs Intuitively, D&C SPNs seem very limited compared to general arithmetic circuits. In addition to being restricted to use non-negative weights and constants like general SPNs, decomposability heavily restricts the kinds of structure the networks can have, and hence the kinds of computations they can perform. For example, something as simple as squaring the number computed by some node u becomes impossible. In order to address the theoretical question of what kinds of functions D&C SPNs can compute ef\ufb01ciently, despite their apparent limitations, we will construct explicit D&C SPNs that ef\ufb01ciently compute various example functions. This is dif\ufb01cult to do directly because the decomposability condition prevents us from using the basic computational operations we are accustomed to working with when designing algorithms or writing down formulae. To overcome this dif\ufb01culty we will provide a couple of related examples of computational systems which we will show can be ef\ufb01ciently simulated by SPNs. These systems will be closer to more traditional models of computation like state-space machines, so that our existing intuitions about algorithm design will be more directly applicable to them. 15 \fThe \ufb01rst such system we will call a Fixed-Permutation Linear Model (FPLM), which works as follows. We start by initializing a \u201cworking vector\u201d v with a value a, and then we process the input (the xi\u2019s) in sequence, according to a \ufb01xed order given by a permutation \u03c0 of [n]. At each stage we multiply v by a matrix which is determined by the value of the current xi. After seeing the whole input, we then take the inner product of v with another vector b, which gives us our real-valued output. More formally, we can de\ufb01ne FPLMs as follows. De\ufb01nition 19 A Fixed-Permutation Linear Model (FPLM) will by de\ufb01ned by a \ufb01xed permutation \u03c0 of [n], a \u2018dimension\u2019 constant k (which in some sense measures the size of the FPLM), vectors a, b \u2208Rk \u22650 and for each i \u2208[n], a matrix-valued function Ti from xi to Rk\u00d7k \u22650 . The output of a FPLM is de\ufb01ned as b\u22a4T\u03c0(n)(x\u03c0(n))T\u03c0(n\u22121)(x\u03c0(n\u22121)) \u00b7 \u00b7 \u00b7T\u03c0(1)(x\u03c0(1))a. An FPLM can be viewed as a computational system which must process its input in a \ufb01xed order and maintains its memory/state as a k-dimensional vector. Crucially, an FPLM cannot revisit inputs that it has already processed, which is a similar limitation to the one faced by read-once Turing Machines. The state vector can be transformed at each stage by a linear transformation which is a function of the current input. 
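Definition 19 translates directly into a chain of matrix-vector products. The sketch below (the dimension, permutation, and transition matrices are our own toy choices, not from the text) evaluates an FPLM whose non-negative transition matrices simply accumulate a count of the ones seen so far:

```python
import numpy as np

# Sketch of evaluating a Fixed-Permutation Linear Model (cf. Definition 19).

def fplm_output(perm, a, b, T, x):
    """Compute b^T T_{perm[-1]}(x_{perm[-1]}) ... T_{perm[0]}(x_{perm[0]}) a."""
    v = a.copy()
    for i in perm:                 # process inputs in the fixed order, never revisiting them
        v = T[i](x[i]) @ v         # state update: a linear map chosen by the current x_i
    return float(b @ v)

n, k = 4, 2
perm = list(range(n))              # identity permutation
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# T_i(x_i): if x_i = 1, add the first coordinate into the second; otherwise the identity.
# All entries are non-negative, as the definition requires.
T = [lambda xi: np.array([[1.0, 0.0], [float(xi), 1.0]]) for _ in range(n)]

print(fplm_output(perm, a, b, T, [1, 0, 1, 1]))   # 3.0, the number of ones in the input
```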
While its k-dimensional state vector allows an FPLM to use powerful distributed representations which clearly possess enough information capacity to memorize the input seen so far, the fundamental limitation of FPLMs lies in their limited tools for manipulating this representation. In particular, they can only use linear transformations (given by matrices with positive entries). If they had access to arbitrary transformations of their state then it is not hard to see that any function could be ef\ufb01ciently computed by them. The following result establishes that D&C SPNs can ef\ufb01ciently simulate FPLMs. Proposition 20. Given a FPLM of dimension k there exists a D&C SPN of size O(nk2) which computes the same function. Thus D&C SPNs are at least as expressively ef\ufb01cient as FPLMs. This suggests the following question: are they strictly more expressively ef\ufb01cient than FPLMs, or are they equivalent? It turns out that they are more expressively ef\ufb01cient. We sketch a proof of this fact below. Suppose that x takes values in {0, 1}n. As observed in Section 2.6, this allows us to assume without loss of generality that any univariate function of one of the xi\u2019s is af\ufb01ne in xi. In particular, we can assume that the matrix-valued functions Ti used in FPLMs are af\ufb01ne functions of the respective xi\u2019s. In this setting it turns out that FPLMs can be viewed as a special case of a computational system called \u201cordered syntactically multilinear branching programs\u201d, as they are de\ufb01ned by Jansen [2008]. Jansen [2008] showed that there exists a polynomial function in x whose computation by such a system requires exponential size (corresponding to an FPLM with an exponentially large dimension k). Moreover, this function is computable by a polynomially sized monotone syntactically multilinear arithmetic. As observed in Section 2.6, such a circuit can be viewed as a decomposable SPN whose univariate functions are just identity functions. Then using Proposition 10 we can convert such a decomposable SPN to a D&C SPN while only squaring its size. So the polynomial provided by Jansen [2008] is indeed computed by a D&C SPN of poly16 \fnomial size, while requiring exponential size to be computed by a FPLM, thus proving that D&C SPNs are indeed more expressively ef\ufb01cient. Given this result, we see that FPLMs do not fully characterize the capabilities of D&C SPNs. Nevertheless, if we can construct an FPLM which computes some function ef\ufb01ciently, this constitutes proof of existence of a similarly ef\ufb01cient D&C SPN for computing said function. To make the construction of such FPLMs simpler, we will de\ufb01ne a third computational system which we call a Fixed-Permutation State-Space Model (FPSSM) which is even easier to understand than FPLMs, and then show that FPLMs (and hence also D&C SPNs) can ef\ufb01ciently simulate FPSSMs. An FPSSM works as follows. We initialize our \u201cworking state\u201d u as c, and then we process the input (the xi\u2019s) in sequence, according to a \ufb01xed order given by the permutation \u03c0 of [n]. At each stage we transform u by computing g\u03c0(i)(x\u03c0(i), u), where the transition function g\u03c0(i) can be de\ufb01ned arbitrarily. After seeing the whole input, we then decode the state u as the non-negative real number h(u). More formally we have the following de\ufb01nition. 
De\ufb01nition 21 A Fixed-Permutation State-Space Model (FPSSM) will by de\ufb01ned by a \ufb01xed permutation \u03c0 of [n], a \u2018state-size\u2019 constant k (which in some sense measures the size of the FPSSM), an initial state c \u2208[k], a decoding function h from [k] to R\u22650, and for each i \u2208[n] an arbitrary function gi which maps values of xi and elements of [k] to elements of [k]. The output of an FPSSM will be de\ufb01ned as h(g\u03c0(n)(x\u03c0(n), g\u03c0(n\u22121)(\u00b7 \u00b7 \u00b7 g\u03c0(1)(x\u03c0(1), c) \u00b7 \u00b7 \u00b7))) for an arbitrary function h mapping elements of [k] to R\u22650. FPSSMs can be seen as general state-space machines (of state size k), which like FPLMs, are subject to the restriction that they must process their inputs in a \ufb01xed order which is determined ahead of time, and are not allowed to revisit past inputs. If the state-space is large enough to be able to memorize every input seen so far, it is clear that FPSSMs can compute any function, given that their state-transition function can be arbitrary. But this would require their state-size constant k to grow exponentially in n, as one needs a state size of 2b in order to memorize b input bits. FPSSMs of realistic sizes can only memorize a number of bits which is logarithmic in n. And this, combined with their inability to revisit past inputs, clearly limits their ability to compute certain functions ef\ufb01ciently. This is to be contrasted with FPLMs, whose combinatorial/distributed state have a high information capacity even for small FPLMs, but are limited instead in how they can manipulate this state. The following result establishes that FPLMs can ef\ufb01ciently simulate FPSSMs. Proposition 22. Given a FPSSM of state-size k there exists a FPLM of dimension k which computes the same function. Note that this result also implies that FPSSMs are no more expressively ef\ufb01cient than FPLMs, and are thus strictly less expressively ef\ufb01cient than D&C SPNs. 17 \fThe following Corollary follows directly from Propositions 20 and 22: Corollary 23. Given a FPSSM of state-size k there exists a D&C SPN of size O(nk2) which computes the same function. Unlike with D&C SPNs, our intuitions about algorithm design readily apply to FPSSMs, making it easy to directly construct FPSSMs which implement algorithmic solutions to particular problems. For example, suppose we wish to compute the number of inputs whose value is equal to 1. We can solve this with an FPSSM with a state size of k = n by taking \u03c0 to be the identity permutation, and the state u to be the number of 1\u2019s seen so far, which we increment whenever the current xi has value 1. We can similarly compute the parity of the number of ones (which is a well known and theoretically important function often referred to simply as \u201cPARITY\u201d) by storing the current number of them modulo 2, which only requires the state size k to be 1. We can also decide if the majority of the xi\u2019s are 1 (which is a well known function and theoretically important often referred to as \u201cMAJORITY\u201d or \u201cMAJ\u201d) by storing a count of the number of ones (which requires a state size of k = n), and then outputting 1 if s \u2265n/2 and 0 otherwise. It is noteworthy that the simulations of various models given in this section each require an D&C SPN of depth n. However, as we will see in Section 7.2, the depth of any D&C SPN can be reduced to O(log(n)2), while only increasing its size polynomially. 
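The FPSSM examples given above (counting the ones, PARITY, and MAJORITY) are short to write out explicitly. In the sketch below (the encoding and the sample input are ours) a single generic driver runs all three state machines:

```python
# Sketch of the FPSSM constructions described above (encoding is ours): process the bits in a
# fixed order, carry a small state, and decode the final state at the end.

def run_fpssm(perm, init, transition, decode, x):
    state = init
    for i in perm:
        state = transition(i, x[i], state)
    return decode(state)

x = [1, 0, 1, 1, 0, 1]
n = len(x)
perm = list(range(n))

# Count of ones: state size k = n + 1.
count = run_fpssm(perm, 0, lambda i, xi, s: s + xi, lambda s: s, x)

# PARITY: keep the count modulo 2, so a state size of 2 suffices.
parity = run_fpssm(perm, 0, lambda i, xi, s: (s + xi) % 2, lambda s: s, x)

# MAJORITY: count the ones and threshold against n / 2 when decoding.
majority = run_fpssm(perm, 0, lambda i, xi, s: s + xi, lambda s: int(s >= n / 2), x)

print(count, parity, majority)   # 4 0 1
```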
6 Separating Depth 3 From Higher Depths The only prior work on the expressive ef\ufb01ciency of SPNs which we are aware of is that of Delalleau and Bengio [2011]. In that work, the authors give a pair of results which demonstrate a difference in expressive ef\ufb01ciency between D&C SPNs of depth 3, and those of higher depths. Their \ufb01rst result establishes the existence of an n-dimensional function g (for each n) which can be computed by a D&C SPN of size O(n) and depth O(log(n)), but which requires size \u2126(2 \u221an) to be computed by a D&C SPN of depth5 3. In their second result they show that for each d \u22654 there is an n-dimensional function hd which can be computed by a D&C SPN of size O(dn) and depth d, but which requires size \u2126(nd) to be computed by a D&C SPN of depth 3. It is important to note that these results do not establish a separation in expressive ef\ufb01ciency between D&C SPNs of any two depths both larger than 3 (e.g. between depths 4 and 5). So in particular, despite how the size lower bound increases with d in their second result6 this does not imply that the set of ef\ufb01ciently computable functions is larger for D&C SPNs of depth d + k than 5Note that in our presentation the input layer counts as the \ufb01rst layer and contributes to the total depth. 6As shown by our Theorem 29, there is a much stronger separation between depths 3 and 4 than is proved by Delalleau and Bengio [2011] to exist between depths 3 and d for any d \u22654, and thus this apparent increase in the size of their lower bound isn\u2019t due to the increasing power of D&C SPNs with depth so much as it an artifact of their particular proof techniques. 18 \ffor those of depth d, for any k > 0, except when d \u22643. This is to be contrasted with our much stronger \u201cdepth hierarchy\u201d result (Theorem 29 of Section 7.1) which shows that D&C SPNs do in fact have this property (even with k = 1) for all choices of d, where depth is measured in terms of product-depth. In the next subsection we will show how basic circuit theoretic techniques can be used to give a short proof of a result which is stronger than both of the separation results of Delalleau and Bengio [2011], using example functions which are natural and simple to understand. Beyond providing a simpli\ufb01ed proof of existing results, this will also serve as a demonstration of some of the techniques underlying the more advanced results from circuit theory which we will later make use of in Sections 7.1 and 8. Moreover, by employing these more general and powerful proof techniques, we are able to prove a stronger result which seperates functions that can be ef\ufb01ciently approximated by D&C SPNs of depth 3 from those which can be computed by D&C SPNs of depth 4 and higher. This addresses the open question posed by Delalleau and Bengio [2011]. 6.1 Basic separation results We begin by de\ufb01ning some basic concepts and notation which are standard in circuit theory. For an arbitrary function g of x, and a partition (A, B) of the set [n] of indices of the elements of x, de\ufb01ne MA,B g to be the 2|A| by 2|B| matrix of values that g takes for different values of x, where the rows of MA,B g are indexed by possible values of xA, and the columns of MA,B g are indexed by possible values of xB. MA,B g is called a \u201ccommunication matrix\u201d in the context of communication complexity, and appears frequently as a tool to prove lower bounds. 
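For small n the communication matrix can simply be materialized and its rank computed numerically. The sketch below (our own code) does this for the EQUAL function introduced in the next subsection, whose matrix under the half/half partition is the 2^{n/2} × 2^{n/2} identity:

```python
import itertools
import numpy as np

# Sketch (our own code): build M^{A,B}_g for a Boolean function g of n bits and compute its rank.

def communication_matrix(g, A, B):
    rows = list(itertools.product([0, 1], repeat=len(A)))   # assignments to x_A index the rows
    cols = list(itertools.product([0, 1], repeat=len(B)))   # assignments to x_B index the columns
    M = np.zeros((len(rows), len(cols)))
    for r, xa in enumerate(rows):
        for c, xb in enumerate(cols):
            x = {**dict(zip(A, xa)), **dict(zip(B, xb))}
            M[r, c] = g(x)
    return M

n = 6
A, B = list(range(n // 2)), list(range(n // 2, n))
# EQUAL (defined in the next subsection): 1 iff the first half of the input equals the second half.
equal = lambda x: int(all(x[i] == x[i + n // 2] for i in range(n // 2)))

M = communication_matrix(equal, A, B)
print(M.shape, np.linalg.matrix_rank(M))   # (8, 8) 8, i.e. rank 2^(n/2)
```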
Its usefulness in lower bounding the size of D&C SPNs of depth 3 is established by the following theorem. Theorem 24. Suppose \u03a6 is a D&C SPN of depth 3 with k nodes in its second layer. For any partition (A, B) of [n] we have k \u2265rank \u0010 MA,B q\u03a6(f(x)) \u0011 . Note that the proof of this theorem doesn\u2019t use the non-negativity of the weights of the SPN, and thus applies to the \u201cextended\u201d version of SPNs discussed in Section 3. We will now de\ufb01ne the separating function which we will use to separate the expressive ef\ufb01ciency of depth 3 and 4 D&C SPNs. De\ufb01ne H1 = {1, 2, ..., n/2} and H2 = {n/2 + 1, n/2 + 2, ..., n}. We will de\ufb01ne the function EQUAL for {0, 1}n-valued x to be 1 when xH1 = xH2 (i.e. the \ufb01rst half of the input is equal to the second half) and 0 otherwise. Observe that MH1,H2 EQUAL = I and so this matrix has rank 2n/2. This gives the following simple corollary of the above theorem: 19 \fCorollary 25. Any D&C SPN of depth 3 with computes EQUAL(x) must have at least 2n/2 nodes in its second layer. Meanwhile, EQUAL(x) is easily shown to be ef\ufb01ciently computed by a D&C SPN of depth 4. This is stated as the following proposition. Proposition 26. EQUAL can be computed by an D&C SPN of size O(n) and depth 4. Note that the combination of Corollary 25 and Proposition 26 gives a stronger separation result than both of the aforementioned results of Delalleau and Bengio [2011]. Our result also has the advantage of using an example function which is easy to interpret, and can be easily extended to prove separation results for other functions which have a high rank communication matrix. 6.2 Separations for approximate computation An open question posed by Delalleau and Bengio [2011] asked whether a separation in expressive ef\ufb01ciency exists between D&C SPNs of depth 3 and 4 if the former are only required to compute an approximation to the desired function. In this section we answer this question in the af\ufb01rmative by making use of Theorem 24 and an additional technical result which lower bounds the rank of the perturbed versions of the identity matrix. Theorem 27. Suppose \u03a6 is a D&C SPN of depth 3 whose associated distribution is such that each value of x with EQUAL(x) = 1 has an associated probability between a/2 and a for some a > 0 (so that all such values of x have roughly equal probability), and that the total probability \u03b4 of all of the values of x satisfying EQUAL(x) = 0 obeys \u03b4 \u22641/4 (so that the probability of drawing a sample with EQUAL(x) = 0 is \u22641/4). Then \u03a6 must have at least 2n/2\u22122/3 nodes in its second layer. To prove this result using Theorem 24 we will make use of the following lemma which lower bounds the rank of matrices of the form I + D = MH1,H2 EQUAL + D for some \u201cperturbation matrix\u201d D, in terms of a measure of the total size of the entries of D. Lemma 28. Suppose D \u2208Rk\u00d7k is a real-valued matrix such that P i,j |[D]i,j| = \u2206for some \u2206\u22650. Then rank(I + D) \u2265k/2 \u2212\u2206/2. 20 \f7 Depth Analysis 7.1 A depth hierarchy for D&C SPNs In this section we show that D&C SPNs become more expressively ef\ufb01cient as their product-depth7 d increases, in the sense that the set of ef\ufb01ciently computable density functions expands as d grows. This is stated formally as follows: Theorem 29. For every integer d \u22651 and input size n there exists a real-valued function gd+1 of x such that: 1. 
There is a D&C SPN of product-depth d+1 and size O(n2) which computes gd+1 for all values of x in {0, 1}n, where the SPN\u2019s univariate functions f consist only of identity functions. 2. For any choice of the univariate functions f, a D&C SPN of product-depth d that computes gd+1 for all values of x in {0, 1}n must be of size n\u2126(log(n)1/2d) (which is super-polynomial in n). Previously, the only available result on the relationship of depth and expressive ef\ufb01ciency of D&C SPNs has been that of Delalleau and Bengio [2011], who showed that D&C SPNs of depth 3 are less expressively ef\ufb01cient than D&C SPNs of depth 4. An anologous result seperating very shallow networks from deeper ones also exists for neural networks. In particular, it is known that under various realistic constraints on their weights, threshold-based neural networks with one hidden layer (not counting the output layer) are less expressively ef\ufb01cient those with 2 or more hidden layers [Hajnal et al., 1993, Forster, 2002]. More recently, Martens et al. [2013] showed that Restricted Boltzmann Machines are incapable of ef\ufb01ciently capturing certain simple distributions, which by the results of Martens [2014], can be ef\ufb01ciently captured by Deep Boltzmann Machines. A \u201cdepth-hierarchy\u201d property analogous to Theorem 29 is believed to hold for various other deep models like neural networks, Deep Boltzmann Machines [Salakhutdinov and Hinton, 2009], and Sigmoid Belief Networks [Neal, 1992], but has never been proven to hold for any of them. Thus, to the best of our knowledge, Theorem 29 represents the \ufb01rst time that a practical and nontrivial deep model has been rigorously shown to gain expressive ef\ufb01ciency with each additional layer of depth added. To prove Theorem 29, we will make use of the following analogous result which is a slight modi\ufb01cation of one proved by Raz and Yehudayoff [2009] in the context of multilinear circuits. Theorem 30. (Adapted from Theorem 1.2 of Raz and Yehudayoff [2009]) For every integer d \u22651 and input size n there exists a real-valued function gd+1 of x such that: 7Product-depth is de\ufb01ned in Section 2.1. Note that it can be shown to be equivalent to standard depth up to a factor of 2 (e.g. by \u2018merging\u2019 sum nodes that are connected as parent/child). 21 \f1. There is a monotone syntactically multilinear arithmetic circuit over x of product-depth d+1, size O(n) which computes gd+1 for all values of x in Rn. 2. Any syntactically multilinear arithmetic circuit over x of product-depth d that computes gd+1 for all values of x in Rn must be of size n\u2126(log(n)1/2d). Note that the original Theorem 1.2 from Raz and Yehudayoff [2009] uses a slightly different de\ufb01nition of arithmetic circuits from ours (they do not permit weighted connections), and the constructed circuits are not stated to be monotone. However we have con\ufb01rmed with the authors that their result still holds even with our de\ufb01nition, and the circuits constructed in their proof are indeed monotone [Raz, 2014]. There are several issues which must be overcome before we can use Theorem 30 to prove Theorem 29. The most serious one is that syntactically multilinear arithmetic circuits are not equivalent to D&C SPNs as either type of circuit has capabilities that the other does not. Thus the ability or inability of syntactically multilinear arithmetic circuits to compute certain functions does not immediately imply the same thing for D&C SPNs. 
To address this issue, we will consider the case where x is binary-valued (i.e. takes values in {0, 1}n) so that we may exploit the close relationship which exists between syntactically multilinear arithmetic circuits and decomposable SPNs over binary-valued inputs x (as discussed in Section 2.6). Another issue is that Theorem 30 deals only with the hardness of computing certain functions over all of Rn instead of just {0, 1}n (which could be easier in principle). However, it turns out that for circuits with multilinear output polynomials, computing a function over {0, 1}n is equivalent to computing it over Rn, as is established by the following lemma. Lemma 31. If q1 and q2 are two multilinear polynomials over y = (y1, ..., y\u2113) with q1(y) = q2(y) \u2200y \u2208{0, 1}\u2113, then q1(y) = q2(y) \u2200y \u2208R\u2113. With these observations in place the proof of Theorem 29 from Theorem 30 becomes straightforward (and is given in the appendix). 7.2 The limits of depth Next, we give a result which shows that the depth of any polynomially sized D&C SPN can be essentially compressed down to O(log(n)2), at the cost of only a polynomial increase in its total size. Thus, beyond this sublinear threshold, adding depth to a D&C SPN does not increase the set of functions which can be computed ef\ufb01ciently (where we use this term liberally to mean \u201cwith polynomial size\u201d). Note that this does not contradict Theorem 29 from the previous subsection as that dealt with the case where the depth d is a \ufb01xed constant and not allowed to grow with n. To prove this result, we will make use of a similar result proved by Raz and Yehudayoff [2008] in the context of multilinear circuits. In particular, Theorem 3.1 from Raz and Yehudayoff 22 \f[2008] states, roughly speaking, that for any syntactically multilinear arithmetic circuit over y = (y1, ..., y\u2113) of size s (of arbitrary depth) there exists a syntactically multilinear circuit of size O(\u21136s3) and depth O(log(\u2113) log(s)) which computes the same function. Because this depth-reducing transformation doesn\u2019t explicitly preserve monotonicity, and deals with multilinear circuits instead of set-multilinear circuits, while using a slightly different de\ufb01nition of arithmetic circuits, it cannot be directly applied to prove an analogous statement for D&C SPNs. However, it turns out that the proof contained in Raz and Yehudayoff [2008] does in fact support a result which doesn\u2019t have these issues [Raz, 2014]. We state this as the following theorem. Theorem 32. (Adapted from Theorem 3.1 of Raz and Yehudayoff [2008]) Given a monotone syntactically set-multilinear arithmetic circuit (over y = (y1, ..., y\u2113) with sets given by G1,...,Gn) of size s and arbitrary depth, there exists a monotone syntactically set-multilinear arithmetic circuit of size O(s3) and depth O(log(n) log(s)) which computes the same function. Note that the size of the constructed circuit is smaller here than in Theorem 3.1 of Raz and Yehudayoff [2008] because we can avoid the \u201chomogenization\u201d step required in the original proof, as syntactically set-multilinear arithmetic circuits automatically have this property. Given this theorem and the equivalence between monotone syntactically set-multilinear arithmetic circuits and D&C SPNs which was discussed near the end of Section 2.6, the following corollary is immediate. Corollary 33. 
Given a D&C SPN of size s and arbitrary depth there exists a D&C SPN of size O(s3) and depth O(log(n) log(s)) which computes the same function. Note that when the size s is a polynomial function of n, this depth bound is stated more simply as O(log(n)2). 8 Circuits vs Formulas In Gens and Domingos [2013] the authors gave a learning algorithm for SPNs which produced D&C SPN formulas. Recall that formulas are distinguished from more general circuits in that each node has fan-out at most 1. They are called \u201cformulas\u201d because they can be written down directly as formula expressions without the need to de\ufb01ne temporary variables. It is worthwhile asking whether this kind of structural restriction limits the expressive ef\ufb01ciency of D&C SPNs. As we show in this section, the answer turns out to be yes, and indeed D&C SPN formulas are strictly less expressively ef\ufb01cient than more general D&C SPNs. This is stated formally as the following theorem: Theorem 34. For every input size n there exists a real-valued function g of x such that: 23 \f1. There is a D&C SPN of size O(n4/3) which computes g, where the SPN\u2019s univariate functions f consist only of identity functions. 2. For any choice of the univariate functions f, a D&C SPN formula that computes g must be of size n\u2126(log(n)) (which is super-polynomial in n). As in Section 7.1, to prove Theorem 34, we will make use of an analogous result which is a slight modi\ufb01cation of one proved by Raz and Yehudayoff [2008] in the context of multilinear circuits. This is stated below. Theorem 35. (Adapted from Theorem 4.4 of Raz and Yehudayoff [2008]) For every input size n there exists a real-valued function g of x such that: 1. There is a monotone syntactically multilinear arithmetic circuit over x of size O(n) with nodes of maximum in-degree O(n1/3) which computes g for all values of x in Rn. 2. Any syntactically multilinear arithmetic formula over x that computes g for all values of x in Rn must be of size n\u2126(log(n)). As in Section 7.1, the original Theorem 4.4 from Raz and Yehudayoff [2008] uses a slightly different de\ufb01nition of arithmetic circuits from ours (they do not permit weighted connections), and the constructed circuits are not stated to be monotone. However we have con\ufb01rmed with the authors that their result still holds even with our de\ufb01nition, and the circuits constructed in their proof are indeed monotone [Raz, 2014]. When trying to use Theorem 35 to prove Theorem 34, we encounter similar obstacles to those encountered in Section 7.1. Fortunately, the transformation between decomposable SPNs and multilinear arithmetic circuits (for the case of binary-valued inputs) happens to preserve formula structure. Thus the ideas discussed in Section 7.1 for overcoming these obstacles also apply here. 9 A Tractable Distribution Separating D&C SPNs and Other Deep Models The existence of a D&C SPN of size s for computing some density function (possibly unnormalized) implies that the corresponding marginal densities can be computed by an O(s) time algorithm. Thus, it follows that D&C SPNs cannot ef\ufb01ciently compute densities whose marginals are known to be intractable. And if we assume the widely believed complexity theoretic conjecture that P \u0338= #P, such examples are plentiful. However, it is debatable whether this should be considered a major drawback of D&C SPNs, since distributions with intractable marginals are unlikely to be learnable using any model. 
Thus we are left with an important question: can D&C SPNs ef\ufb01ciently compute any density with tractable marginals? 24 \fIn Poon and Domingos [2011] it was observed that essentially every known model with tractable marginal densities can be viewed as a D&C SPN, and it was speculated that the answer to this question is yes. In this section we refute this speculation by giving a counter example. In particular, we construct a simple distribution D whose density function and corresponding marginals can be evaluated by ef\ufb01cient algorithms, but which provably cannot be computed by a sub-exponentially sized D&C SPN of any depth. Notably, this density function can be computed by a Boolean circuit of modest depth and size, and so by the simulation results of [Martens, 2014] the distribution can in fact be captured ef\ufb01ciently by various other deep probabilistic models like Deep Boltzmann Machines (DBMs). Notably, our proof of the lower bound on the size of D&C SPNs computing this density function will not use any unproven complexity theoretic conjectures, such as P \u0338= #P. It is worthwhile considering whether there might be distributions which can be ef\ufb01ciently modeled by D&C SPNs but not by other deep generative models like DBMs or Contrastive Backprop Networks [Hinton et al., 2004]. The answer to this question turns out to be no. To see this, note that arithmetic circuits can be ef\ufb01ciently approximated by Boolean circuits, and even more ef\ufb01ciently approximated by linear threshold networks (which are a simple type of neural network). Thus, by the simulations results of Martens [2014] the aforementioned deep models can ef\ufb01ciently simulate D&C SPNs of similar depths (up to a reasonable approximation factor). Here \u201cef\ufb01ciently\u201d means \u201cwith a polynomial increase in size\u201d, although in practice this polynomial can be of low order, depending on how exactly one decides to simulate the required arithmetic. For linear threshold networks (and hence also Contrastive Back-prop Nets), very ef\ufb01cient simulations of arithmetic circuits can be performed using the results of Reif and Tate [1992], for example. 9.1 Constructing the distribution To construct the promised distribution over values of x we will view each xi as an indicator variable for the presence or absence of a particular labeled edge in a subgraph Gx of Km, where Km denotes the complete graph on m vertices. In particular, xi will take the value 1 if the edge labeled by i is present in Gx and 0 otherwise. Note that there are \u0000m 2 \u0001 total edges in Km and so the total input size is n = \u0000m 2 \u0001 . The distribution D will then be de\ufb01ned simply as the uniform distribution over values of x satisfying the property that Gx is a spanning tree of Km. We will denote its density function by d(x). Computing d(x) up to a normalizing constant8 amounts to deciding if the graph Gx represented by x is indeed a spanning tree of Km, and outputting 1 if it is, and 0 otherwise. And to 8The normalizing constant in this case is given by Z = mm\u22122 by Cayley\u2019s Formula. 25 \fdecide if the graph Gx is a spanning tree amounts to checking that it is connected, and that it has exactly m \u22121 edges. The \ufb01rst problem can be ef\ufb01ciently solved by a Boolean circuit with O(n) gates and depth O(log(n)) using the well-known trick of repeatedly squaring the adjacency matrix. 
The second can be solved by adding all of the entries of x together, which can also be done with a Boolean circuit with O(n) gates and depth O(log(n)) [Paterson et al., 1990]. Due to how neural networks with simple linear threshold nonlinearities can simulate Boolean circuits in a 1-1 manner [e.g Parberry, 1994], it follows that such networks of a similar size and depth can compute d(x). And since linear threshold gates are easily simulated by a few sigmoid or recti\ufb01ed linear units, it follows that neural networks of the same dimensions equipped with such nonlinearities can also compute d(x), or at least approximate it arbitrarily well (see Martens [2014] for a review of these basic simulation results). Moreover, by the results of Martens [2014] we know that any distribution whose density is computable up to a normalization constant by Boolean circuits can be captured, to an arbitrarily small KL divergence, by various deep probabilistic models of similar dimensions. In particular, these results imply that Deep Boltzmann Machines of size O(n) and Constrastive Backprop Networks of size O(n) and depth O(log(n)) can model the distribution D to an arbitrary degree of accuracy. And since we can sample (n\u22122)-length Pr\u00a8 ufer sequences [Pr\u00a8 ufer, 1918] and implement the algorithm for converting these sequences to trees using a threshold network of size O(n2) and depth O(n) it follows from Martens [2014] that we can also approximate D using Sigmoid Belief Networks [Neal, 1992] of this size and depth. While the existence of small circuits for computing d(x) isn\u2019t too surprising, it is a somewhat remarkable fact that it is possible to evaluate any marginal of d(x) using an O(n1.19)-time algorithm. That is, given a subset I of {1, ..., n}, and associated \ufb01xed values of the corresponding variables (i.e. xI), we can compute the sum of d(x) over all possible values of the remaining variables (i.e. x{1,...n}\\I) using an algorithm which runs in time O(n1.19). To construct this algorithm we \ufb01rst observe that the problem of computing these marginal densities reduces to the problem of counting the number of spanning trees consistent with a given setting of xI (for a given I). And it turns out that this is a problem we can attack directly by \ufb01rst reducing it to the problem of counting the total number of spanning trees of a certain auxiliary graph derived from xI, and then reducing it to the problem of computing determinants of the Laplacian matrix of this auxiliary graph via an application of generalized version of Kirchoff\u2019s famous Matrix Tree Theorem [Tutte, 2001]. This argument is formalized in the proof of the following theorem. Theorem 36. There exists a O(n1.19)-time algorithm, which given as input a set I \u2282{1, ..., n} and corresponding \ufb01xed values of xI, outputs the number of edge-labeled spanning trees T of Km which are consistent with those values. 26 \f9.2 Main lower bound result The main result of this section is stated as follows: Theorem 37. Suppose that d(x) can be approximated arbitrarily well by D&C SPNs of size \u2264s and m \u226520. Then s \u22652m/30240. By \u201capproximated arbitrarily well by D&C SPNs of size \u2264s\u201d we mean that there is a sequence of D&C SPNs of size \u2264s whose output approaches d(x), where the univariate functions f are allowed to be different for each SPN in the sequence. 
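The reduction behind Theorem 36 can be followed step by step: contract the edges fixed to be present (answering 0 if they already contain a cycle), delete the edges fixed to be absent, and count the spanning trees of the resulting contracted multigraph with the Matrix-Tree Theorem. The sketch below (our own code) implements this recipe with an ordinary floating-point determinant, so it is only meant for small m; the O(n^{1.19}) bound in the theorem presumably relies on exact, fast-matrix-multiplication-based determinant computation, which this sketch makes no attempt at.

```python
import itertools
import numpy as np

# Sketch (ours): count spanning trees of K_m that contain every edge in `present`
# and no edge in `absent`, via contraction plus the Matrix-Tree Theorem.

def count_consistent_spanning_trees(m, present, absent):
    # Contract the forced-present edges with union-find; a cycle among them means 0 trees.
    parent = list(range(m))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in present:
        ru, rv = find(u), find(v)
        if ru == rv:
            return 0
        parent[ru] = rv
    comps = sorted({find(v) for v in range(m)})
    index = {c: i for i, c in enumerate(comps)}
    k = len(comps)
    if k == 1:
        return 1      # the forced-present edges already form a spanning tree
    # Laplacian of the contracted multigraph over the remaining (free) edges of K_m.
    L = np.zeros((k, k))
    forced = set(map(frozenset, present)) | set(map(frozenset, absent))
    for u, v in itertools.combinations(range(m), 2):
        if frozenset((u, v)) in forced:
            continue
        cu, cv = index[find(u)], index[find(v)]
        if cu == cv:
            continue  # self-loops never appear in a spanning tree
        L[cu, cu] += 1; L[cv, cv] += 1
        L[cu, cv] -= 1; L[cv, cu] -= 1
    # Matrix-Tree Theorem: delete one row and column and take the determinant.
    return int(round(np.linalg.det(L[1:, 1:])))

# With nothing fixed we recover Cayley's formula m^(m-2); e.g. 5^3 = 125 for m = 5.
print(count_consistent_spanning_trees(5, present=[], absent=[]))                     # 125
print(count_consistent_spanning_trees(5, present=[(0, 1), (1, 2)], absent=[(3, 4)])) # 9
```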
Observe that d(x) being computed exactly by a D&C SPN of size s trivially implies that it can be approximated arbitrarily well by D&C SPNs of size ≤ s. Note that the large constant in the denominator of the exponent can likely be lowered substantially with a tighter analysis than the one we will present. However, for our purposes, we will be content simply to show that the lower bound on s is exponential in m (and hence also in $\sqrt{n}$). Our strategy for proving Theorem 37 involves two major steps. In the first we will show that the output polynomial of any D&C SPN of size s can be “decomposed” into the sum of $s^2$ “weak” functions. We will then extend this result to show that the same is true of any function which can be computed as the limit of the outputs of an infinite sequence of D&C SPNs of size ≤ s. This will be Theorem 39. In the second step of the proof we will show that in order to express d(x) as the sum of such “weak” functions, the size k of the sum must be exponentially large in m, and thus so must s. This will follow from the fact (which we will show) that each “weak” function can only be non-zero on a very small fraction of all the spanning trees of $K_m$ (to avoid being non-zero for a non-spanning-tree graph), and so if a sum of them has the property of being non-zero for all of the spanning trees, then that sum must be very large. This will be Theorem 40. Theorem 37 will then follow directly from Theorems 39 and 40. 9.3 Decomposing D&C SPNs The following theorem shows how the output polynomial of a D&C SPN of size s can be “decomposed” into a sum of $s^2$ non-negative functions which are “weak” in the sense that they factorize over two relatively equal-sized partitions of the set of input variables. Theorem 38. Suppose $\Phi$ is a D&C SPN over f of size s. Then we have $q_\Phi = \sum_{i=1}^{k} g_i h_i$, where $k \leq s^2$, and where the $g_i$'s and $h_i$'s are non-negative polynomials in f satisfying the conditions $\frac{n}{3} \leq |x_{g_i}|, |x_{h_i}| \leq \frac{2n}{3}$, $x_{g_i} \cap x_{h_i} = \emptyset$, $x_{g_i} \cup x_{h_i} = x$ (1). It should be noted that this result is similar to an existing one proved by Raz and Yehudayoff [2011] for monotone multilinear circuits, although we arrived at it independently. While Theorem 38 provides a useful characterization of the form of functions which can be computed exactly by a D&C SPN of size s, it doesn't say anything about functions which can only be computed approximately. To address this, we will strengthen this result by showing that any function which can be approximated arbitrarily well by D&C SPNs of size s also has a decomposition which is analogous to the one in Theorem 38. This is stated as follows. Theorem 39. Suppose $\{\Phi_j\}_{j=1}^{\infty}$ is a sequence of D&C SPNs of size at most s (where the definitions of the univariate functions f are allowed to be different for each), such that the sequence $\{q_{\Phi_j}\}_{j=1}^{\infty}$ of corresponding output polynomials converges pointwise (considered as functions of x) to some function $\gamma$ of x. And further suppose that the size of the range of possible values of x is given by some finite d. Then we have that $\gamma$ can be written as $\gamma = \sum_{i=1}^{k} g_i h_i$ (2), where $k \leq s^2$ and, for all i, $g_i$ and $h_i$ are real-valued non-negative functions of $y_i$ and $z_i$ (resp.), where $y_i$ and $z_i$ are sub-sets/tuples of the variables in x satisfying $\frac{n}{3} \leq |y_i|, |z_i| \leq \frac{2n}{3}$, $y_i \cap z_i = \emptyset$, $y_i \cup z_i = x$.
9.4 A lower bound on k In this section we will show that if d(x) is of the same form of \u03b3(x) from eqn. 2, then the size of the size k of the sum must grow exponentially with m (and hence \u221an). In particular, we will prove the following theorem. Theorem 40. Suppose d(x) is of the form from eqn. 2, and m \u226520. Then we must have that k \u22652m/15120. Our strategy to prove this result will be to show that each term in the sum can only be non-zero on an exponentially small fraction of all the spanning trees of Km (and is thus \u201cweak\u201d). And since the sum must be non-zero on all the spanning trees in order to give d(x), it will follow that k will have to be exponentially large. We will start with the simple observation that, due to the non-negativity of the gi\u2019s and hi\u2019s, each factored term gihi in the sum d = Pk i=1 gihi must agree with d wherever d is 0 (i.e. because 28 \fwe have d(x) \u2265gi(yi)hi(zi) for each i). And in particular, for each value of x with d(x) = 0, either gi(yi) or hi(zi) must be 0. Intuitively, this is a very stringent requirement. As an analogy, we can think of each factor (gi or hi) as \u201cseeing\u201d roughly half the input edges, and voting \u201cyes, I think this is a spanning tree\u201d, or \u201cno, I don\u2019t think this is a spanning tree\u201d by outputting either a value > 0 for \u201cyes\u201d or 0 for \u201cno\u201d, with tie votes always going \u201cno\u201d. The requirement can thus be stated as: \u201ceach pair of factors is never allowed to reach an incorrect \u2018yes\u2019 decision\u201d. Despite both factors in each pair being arbitrary functions of their respective inputs (at least in principle), each only \u201csees\u201d the status of roughly half the edges in the input graph, and so cannot say much about whether the entire graph actually is a spanning tree. While some potential cycles might be entirely visible from the part of the graph visible to one of the factors, this will not be true of most potential cycles. Thus, to avoid ever producing an incorrect \u201cyes\u201d decision, the factors are forced to vote using a very conservative strategy which will favor \u201cno\u201d. The remainder of this section is devoted to formalizing this argument by essentially characterizing this conservative voting strategy and showing that it leads to a situation where only a very small fraction of all of the possible spanning trees of Km can receive two \u201cyes\u201d votes. Lemma 41. Suppose g(y) and h(z) are real-valued non-negative functions of the same form as those described in eqn. 2, and that for any value of x, d(x) = 0 implies g(y) = 0 or h(z) = 0. De\ufb01ne P = |{x \u2208{0, 1}m : d(x) = 1 and g(y)h(z) > 0}| and Z = |{x \u2208{0, 1}m : d(x) = 1}| = mm\u22122. Then for m \u226520 we have P Z \u2264 1 2m/15120 It is not hard to see that this lemma will immediately imply Theorem 40. In particular, provided that d(x) = 0 implies that each term in the sum is 0, we have each term can be non-zero on at most a proportion 1 2m/15120 of the values of x for which d(x) = 1, and thus the entire sum can be non-zero on at most a proportion at most k 2m/15120 . Thus we must have that k 1 2m/15120 \u22651, i.e. k \u22652m/15120. The rest of this section will be devoted to the proof of Lemma 41, which begins as follows. Suppose we are given such a g and h. We will color all of the edges of the complete graph Km as red or blue according to whether they correspond to input variables from y or z (resp.). 
We de\ufb01ne a \u201ctriangle\u201d of a graph to be any complete subgraph on 3 vertices. Km has \u0000m 3 \u0001 triangles total since it is fully connected. After coloring Km, each triangle is either monochromatic (having edges with only one color), or dichromatic, having 2 edges of one color and 1 edge of the other color. We will refer to these dichromatic triangles as \u201cconstraint triangles\u201d, for reasons which will soon become clear. Clearly any graph Gx which is a spanning tree of Km can\u2019t contain any triangles, as these are simple examples of cycles. And determining whether Gx contains all 3 edges of a given constraint 29 \ftriangle is impossible for g or h by themselves, since neither of them gets to see the status of all 3 edges. Because of this, g and h must jointly employ one of several very conservative strategies with regards to each constraint triangle in order to avoid deciding \u201cyes\u201d for some graph containing said triangle. In particular, we can show that either g must always vote \u2018no\u2019 whenever all of the red edges of the triangle are present in the input graph Gx, or h must vote \u201cno\u201d whenever all of the blues edges of the triangle are present in Gx. This is formalized in the following proposition. Proposition 42. Let a, b and c be edges that form a constraint triangle in Kn. Suppose that a and b are both of a different color from c. Then one of the following two properties holds: \u2022 g(y)h(z) = 0 for all values of x such that Gx contains both a and b \u2022 g(y)h(z) = 0 for all values of x such that Gx contains c Thus we can see that each constraint triangle over edges a, b, and c in Km gives rise to distinct constraint which must be obeyed by any graph Gx for which g(y)h(z) > 0. These are each one of two basic forms: 1. Gx doesn\u2019t contain both a and b 2. Gx doesn\u2019t contain c We now give a lower bound on the number of constraint triangles (i.e. the number of dichromatic triangles) in Km as a function of the number edges of each color. Lemma 43. Given any coloring of the complete graph Km with m \u226520 which has r red edges and n \u2212r blue edges (recall n = \u0000m 2 \u0001 is the total number of edges), for n/3 \u2264r \u22642n/3, the total number of dichromatic triangles is lower bounded by m3/60. Our proof of the above lemma makes use of a known upper bound of the number of triangles in an arbitrary graph due to Fisher [1989]. As the choice of y and z implies the hypothesis n/3 \u2264r \u22642n/3 we can apply this lemma to conclude that there are at least m3/60 constraint triangles, and thus any graph Gx for which g(y)h(z) > 0 must obey m3/60 distinct constraints of the forms given above. It remains to show that the requirement of obeying m3/60 such constraints limits the number of graphs Gx for which g(y)h(z) > 0 to be an exponentially small proportion of the total. Our strategy for doing this will be as follows. We will consider a randomized procedure [due to Aldous, 1990] that samples uniformly from the set of all spanning trees of Km by performing a type of random walk on Km, adding an edge from the previous vertex whenever it visits a previously unvisited vertex. We will then show that the sequence of vertices produced by this random walk will, with very high probability, contain a length-3 subsequence which implies that the sampled tree violates at least one of the constraints. 30 \fThis argument is formalized in the proof of the following lemma. Lemma 44. 
Suppose we are given C distinct constraints which are each one of the two forms discussed above. Then, of all the spanning trees of Km, a proportion of at most \u0012 1 \u2212C m3 \u0013C/(6m2) of them obey all of the constraints. As we have C \u2265m3/60 constraints, this lemma tells us that the proportion of spanning trees Gx for which g(y)h(z) > 0 is upper bounded by (1 \u22121/60)m/360 = 1 2\u2212log2(1\u22121/60)m/360 \u2264 1 2(1/42)m/360 = 1 2m/15120 This \ufb01nally proves Lemma 41, and thus Theorem 40. 10 Discussion and future directions We have shown that there are tractable distributions which D&C SPNs cannot ef\ufb01cient capture, but other deep models can. However, our separating distribution D, which is the uniform distribution over adjacency matrices of spanning trees of the complete graph, is a somewhat \u201ccomplicated\u201d one, and seems to require log(n) depth to be ef\ufb01ciently captured by other deep models. Some questions worth considering are: \u2022 Is a distribution like D learnable by other deep models in practice? \u2022 Is there a simpler example than D of a tractable separating distribution? \u2022 Can we extend D&C SPNs in some natural way that would allow them to capture distributions like D? \u2022 Should we care that D&C SPNs have this limitation, or are most \u201cnatural\u201d distributions that we might want to model with D&C SPNs of a fundamentally different character than D? Far from showing that D&C SPNs are uninteresting, we feel that this paper has established that they are a very attractive objects for theoretical analysis. While the D&C conditions limit SPNs, they also make it possible for us to prove much stronger statements about them than we otherwise could. Indeed, it is worth underlining the point that the results we have proved about the expressive ef\ufb01ciency of D&C SPNs are much stronger and more thorough than results available for other deep models. This is likely owed to the intrinsically tractable nature of D&C SPNs, which makes them amenable to analysis using known mathematical methods, avoiding the various proof barriers that exist for more general circuits. 31 \fOne aspect of SPNs which we have not touched on in this work is their learnability. It is strongly believed that for conditional models like neural networks, which are capable of ef\ufb01ciently simulating Boolean circuits, learning is hard in general [Daniely et al., 2014]. However, D&C SPNs don\u2019t seem to fall into this category, and to the best of our knowledge, it is still an open question as to whether there is a provably effective and ef\ufb01cient learning algorithm for them. It seems likely that the \u201ctractable\u201d nature of D&C SPNs, which has allowed us to prove so many strong statements about their expressive ef\ufb01ciency, might also make it possible to prove strong statements about their learnability. Acknowledgments The authors would like to thank Ran Raz for his helpful discussions regarding multilinear circuits. James Martens was supported by a Google Fellowship." + }, + { + "url": "http://arxiv.org/abs/1206.6464v2", + "title": "Estimating the Hessian by Back-propagating Curvature", + "abstract": "In this work we develop Curvature Propagation (CP), a general technique for\nefficiently computing unbiased approximations of the Hessian of any function\nthat is computed using a computational graph. 
At the cost of roughly two\ngradient evaluations, CP can give a rank-1 approximation of the whole Hessian,\nand can be repeatedly applied to give increasingly precise unbiased estimates\nof any or all of the entries of the Hessian. Of particular interest is the\ndiagonal of the Hessian, for which no general approach is known to exist that\nis both efficient and accurate. We show in experiments that CP turns out to\nwork well in practice, giving very accurate estimates of the Hessian of neural\nnetworks, for example, with a relatively small amount of work. We also apply CP\nto Score Matching, where a diagonal of a Hessian plays an integral role in the\nScore Matching objective, and where it is usually computed exactly using\ninefficient algorithms which do not scale to larger and more complex models.", + "authors": "James Martens, Ilya Sutskever, Kevin Swersky", + "published": "2012-06-27", + "updated": "2012-09-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction There are many models and learning algorithms where it becomes necessary, or is at least very useful, to compute entries of the Hessian of some complicated function. For functions that can be computed using a computational graph there are automatic methods available for computing Hessian-vector products exactly (e.g. Pearlmutter, 1994). These can be used to recover speci\ufb01c columns of the Hessian, but are inef\ufb01cient at recovering other parts of the matrix such as large blocks, or the diagonal. For the diagonal of the Hessian of a neural network training objective, there are deterministic approximations available such as that of Becker and Le Cun (1988), but these are not guaranteed to be accurate. Recently Chapelle and Erhan (2011) showed how to compute an unbiased estimate of the diagonal of the GaussNewton matrix, and used this to perform preconditionAppearing in Proceedings of the 29 th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012. Copyright 2012 by the author(s)/owner(s). ing within a Hessian-free Newton optimization algorithm (Martens, 2010). In this paper we build upon this idea and develop a family of algorithms, which we call Curvature Propagation (CP), for ef\ufb01ciently computing unbiased estimators of the Hessians of arbitrary functions. Estimating entries of the Hessian turns out to be strictly harder than doing the same for the Gauss-Newton matrix, and the resulting approach is necessarily more complex, requiring several additional ideas. As with the algorithm of Chapelle and Erhan (2011), CP involves reverse sweeps of the computational graph of the function, which can be repeated to obtain higher-rank estimates of arbitrary accuracy. And when applied to a function which decomposes as the sum of M terms, such as typical training objective functions, applying CP to the terms individually results in an estimate of rank M, at no additional expense than applying it to the sum. This is useful in many applications. For example, the diagonal of the Hessian can be used as a preconditioner for \ufb01rst and second order nonlinear optimizers, which is the motivating application of Becker and Le Cun (1988) and Chapelle and Erhan (2011). Another example is Score Matching (Hyvarinen, 2006), a method for parameter estimation in Markov Random Fields. Because Score Matching uses the diagonal of the Hessian within its objective it is expensive to apply the method to all but the simplest models. 
As we will see, CP makes it possible to ef\ufb01ciently apply Score Matching to any model. 2. Derivation of CP In the following section we develop the Curvature Propagation method (CP) for functions that are de\ufb01ned in terms of general computational graphs. We will present one version of the approach that relies on the use of complex arithmetic, and later also give a version that uses only real arithmetic. At a high level, we will de\ufb01ne complex vector-valued linear function on the computational graph of our target function f, and then show through a series of lemmas that the expectation of the self outer-product of this function is in fact the Hessian matrix. This function can be computed by what amounts to a modi\ufb01cation of reverse-mode automatic differentiation, where noise is injected at each node. \fEstimating the Hessian by Back-propagating Curvature 2.1. Setting and notation Let f : Rn \u2212 ! R be a twice differentiable function. We will assume that f can be computed via a computation graph consisting of a set of nodes N = {i : 1 \uf8ffi \uf8ffL} and directed edges E = (i, j) : i, j 2 N, where at each node i there is a vector valued output yi 2 Rni is computed via yi = fi(xi) for some twice-differentiable function fi. Here xi 2 Rmi is the total input to node i, and is given by the concatenation of vectors yk for k 2 Pi and Pi = {k : k is a parent of i} = {k : (k, i) 2 E}. We identify node 1 as input or \u201csource\u201d node (so that P1 = ;) and node L as the output or \u201csink\u201d node, with yL = f(y1) being the \ufb01nal output of the graph. Let Ja b denote the Jacobian of a w.r.t. b where a and b are vectors, or in other words, @a @b . And let Hc a,b denote the Hessian of the scalar function c w.r.t. a and then w.r.t. b (the order matters since it determines the dimension of the matrix). Note that if a and b are quantities associated with nodes i and j (resp.) in the computational graph, Ja b and Hc a,b will only be well-de\ufb01ned when j does not depend directly or indirectly on i, i.e. i 62 Aj, where Aj = {k : k is an ancestor of j}. Also note that when there is no dependency on b of a it will be the case that Ja b = 0. Under this notation, the Hessian of f w.r.t. its input is denoted by Hf y1,y1, but we will use the short-hand H for convenience. For k 2 Pi, let Ri,k denote the projection matrix which maps the output yk of node k to the their positions in node i\u2019s input vector xi, so that we have xi = P k2Pi Ri,kyk. Summarizing, we have the following set of recursive de\ufb01nitions for computing yL = f(y1) which are iterated for i ranging from 2 to L: xi = X k2Pi Ri,kyk yi = fi(xi) Note that Ri,k need not appear explicitly as a matrix when implementing these recursions in actual code, but is merely the formal mathematical representation we will use to describe the projective mapping which is performed whenever outputs from a given computational node k are used as input to another node i. 2.2. Computing gradients and Hessians Reverse-mode automatic differentiation1 is a well known method for computing the gradient of functions which are 1also known as back-propagation (Rumelhart et al., 1986) in the context of neural networks de\ufb01ned in terms of computation graphs. It works by starting at the \ufb01nal node L and going backwards through the graph, recursively computing the gradient of f w.r.t. the yi for each i once the same has been done for all of i\u2019s children. 
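As a concrete, purely illustrative rendering of this formalism, a forward pass over such a graph amounts to concatenating parent outputs and applying each local function in topological order; the dictionary-based representation below is our own choice.

```python
# Minimal sketch of the forward recursion x_i = sum_k R_{i,k} y_k, y_i = f_i(x_i): the
# projection matrices R_{i,k} are realized implicitly by concatenating parent outputs.
import numpy as np

def forward(nodes, y1):
    """nodes: list of (parent_ids, f_i) pairs for nodes i = 2..L, in topological order."""
    y = {1: y1}
    for i, (parents, f_i) in enumerate(nodes, start=2):
        x_i = np.concatenate([y[k] for k in parents])   # total input to node i
        y[i] = f_i(x_i)
    return y                                            # y[L] holds the output f(y1)

# Example: f(y1) = sum(tanh(y1)) expressed as a chain of two nodes above the input node.
outputs = forward([([1], np.tanh), ([2], lambda x: np.atleast_1d(x.sum()))],
                  np.array([0.1, -0.3, 0.2]))
```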
Using the vector-valued computational graph formalism and notation we have established, the recursions for computing the gradient rf = Jf y1 (remembering that f \u2318yL) are given by Jf yL = 1 (1) Jf yi = X k2Ci Jf xkJxk yi = X k2Ci Jf xkR> k,i (2) Jf xi = Jf yiJyi xi (3) where Ci = {k : k is a child of i} and we have used the fact that Jxk yi = R> k,i. For this method to yield a realizable algorithm, it is assumed that for each node i, the function fi is simple enough that direct computation of and/or multiplication by the \u201clocal\u201d Jacobian Jyi xi = f 0 i(xi) is easy. If for a particular node i this is not the case, then the usual procedure is to split i into several new nodes which effectively break fi into several computationally simpler pieces. By computing the vector derivative of both sides of each of the above equations w.r.t. yL, yj, and yj respectively (for j 62 Ai), the following recursions can be derived Hf yL,yL = 0 (4) Hf yi,yj = X k2Ci Rk,iHf xk,yj (5) Hf xi,yj = Jyi xi >Hf yi,yj + MiJxi yj (6) where Mi \u2318 ni X q=1 Jf yi,qHyi,q xi,xi (7) and where yi,q denotes the q-th component of yi. In deriving the above it is important to remember that Rk,i is a constant, so that its Jacobian w.r.t. yj is the zero matrix. Also note that Jf yi,q is a scalar and that Hyi,q xi,xi is the Hessian of the local nonlinearity fi. The overall Hessian of f, Hf y1,y1 can be obtained by applying these recursions in a backwards manner (assuming that the various Jacobians are already computed). The additional Jacobian terms of the form Jxi yj which appear in eqn. 6 can be computed according to recursions analogous to those used to compute the gradient, which are given by the equations below: Jxi xi = Imi\u21e5mi (8) Jxi xj = Jxi yj Jyj xj (9) Jxi yj = X k2Cj Jxi xkJxk yj = X k2Cj Jxi xkRk,j 8i 62 Aj (10) Jxi xj = 0 8i 2 Aj (11) \fEstimating the Hessian by Back-propagating Curvature where, for convenience, we have de\ufb01ned Jxi,xj to be zero whenever i is an ancestor of j, whereas otherwise it would be unde\ufb01ned. In general, using these recursions for direct computation of Hf y1,y1 will be highly impractical unless the computation tree for f involves a small total number of nodes, each with small associated output and input dimensions ni and mi. The purpose in giving them is to reveal how the \u201cstructure\u201d of the Hessian follows the computation tree, which will become critically important in both motivating the CP algorithm and then proving its correctness. 2.3. The S function We now de\ufb01ne an ef\ufb01ciently computable function S that will allow us to obtain rank-1 estimates of the Hessian. Its argument consists of an ordered list of vectors V \u2318 {vi}L i=1 where vi 2 R`i, and its output is a n-dimensional vector (which may be complex valued). It will be de\ufb01ned as S(V ) \u2318Sy1(V ), where Syi(V ) 2 Cni and Sxi(V ) 2 Cmi are vector-valued functions of V de\ufb01ned recursively via the equations SyL(V ) = 0 (12) Syi(V ) = X k2Ci R> k,iSxk(V ) (13) Sxi(V ) = F > i vi + Jyi xi >Syi(V ) (14) where each Fi is a (not necessarily square) complex-valued matrix in C`i\u21e5mi satisfying F > i Fi = Mi. Such an Fi is guaranteed to exist because Mi is symmetric, which follows from the fact that it is a linear combination of Hessian matrices. Note that these recursions closely resemble those given previously for computing the gradient (eqn. 1, 2, and 3). 
The multiplication by Jyi xi > of the vector Syi(V ) at each stage of the recursion is easy to perform since this is precisely what happens at each stage of reverse-mode automatic differentiation used to compute the gradient of f. In general, the cost of computing S is similar to that of computing the gradient, which itself is similar to that of evaluating f. The practical aspects computing S(V ) will be discussed further in section 5. 2.4. Properties of the S function with stochastic inputs Suppose that the random variable V satis\ufb01es: 8i E \u21e5 viv> i \u21e4 = I and 8j 6= i, E \u21e5 viv> j \u21e4 = 0 (15) For example, each vi could be drawn from a multivariate normal with mean 0 and covariance matrix I. We will now give a result which establishes the usefulness of S(V ) as a tool for approximating H. The proof of this theorem and others will be located in the appendix/supplement. Theorem 2.1. E[S(V )S(V )>] = Hf y1,y1(\u2318H) In addition to being an unbiased estimator of H, S(V )S(V )> will be symmetric and possibly complexvalued. To achieve a real-valued estimate we can instead use only the real component of S(V )S(V )>, which itself will also be an unbiased estimator for H since the imaginary part of S(V )S(V )> is zero in expectation. 2.5. Avoiding complex numbers The factorization of the Mi\u2019s and resulting complex arithmetic associated with using these factors can be avoided if we rede\ufb01ne V so that each vi is of dimension mi (instead of `i), and we de\ufb01ne the real vector-valued functions T(V ) \u2318Ty1(V ) and U(V ) \u2318Uy1(V ) according to the following recursions: TyL(V ) = 0 UyL(V ) = 0 Tyi(V ) = X k2Ci Jxk yi >Txk(V ) Uyi(V ) = X k2Ci Jxk yi >Uxk(V ) Txi(V ) = Mivi + Jyi xi >Tyi(V ) Uxi(V ) = vi + Jyi xi >Uyi(V ) Both these recursions for T and U are trivial modi\ufb01cations of those given for S(V ), with the only difference being the matrix which multiplies vi (it\u2019s F > i for S, Mi for T, and I for U). And because they do not involve complex quantities at any point, they will be real-valued. Theorem 2.2. T(V )U(V )> is an unbiased estimator of H Since H is symmetric, it follows directly from this result that % T(V )U(V )>&> = U(V )T(V )> is also an unbiased estimator of H. Note however that while both T(V )U(V )> and U(V )T(V )> will be symmetric in expectation (since Hf y1,y1 is), for any particular choice of V they generally will not be. This issue can be addressed by instead using the estimator 1 2 % T(V )U(V )> + U(V )T(V )>& which will be symmetric for any V . However, despite the fact that S(V )S(V )> and this alternative estimator are both symmetric for all V \u2019s and also unbiased, they will not, in general, be equal. While computing both T and U will require a total of 2 sweeps over the computational graph versus only the one required for S(V ), the total amount of work will be the same due to the doubly expensive complex-valued arithmetic required to evaluate S(V ). 2.6. Matrix interpretation of S, T and U Suppose we represent V as a large vector v \u2318[v> 1 . . . v> L ]> with dimension m \u2318P i mi. Then the functions S, T and U are linear in the vi\u2019s (a fact which follows from the recursive de\ufb01nitions of these functions) and hence v. Thus S, T, and U have an associated representation as matrices \u02dc S 2 Cn\u21e5m, \u02dc T 2 Rn\u21e5m, and \u02dc U 2 Rn\u21e5m w.r.t. the coordinate bases given by \u02dc v. 
Then noting that S(V )S(V )> = \u02dc Svv> \u02dc S>, and that condition (15) is equivalent to E[vv>] = I, we obtain Hf y1,y1 = E h \u02dc Svv> \u02dc S>i = \u02dc S E \u21e5 vv>\u21e4\u02dc S> = \u02dc S \u02dc S> \fEstimating the Hessian by Back-propagating Curvature and thus we can see that \u02dc S has an interpretation as a \u201cfactor\u201d of Hf y1,y1. Similarly we have \u02dc T \u02dc U > = Hf y1,y1 and \u02dc U \u02dc T > = Hf y1,y1. 3. A simpler method? At the cost of roughly two passes through the computational graph it is possible to compute the Hessian-vector Hw for an arbitrary vector w 2 Rn (e.g. Pearlmutter, 1994). This suggests the following simple approach to computing an unbiased rank-1 estimate of H: draw w from a distribution satisfying E[ww>] = I and then take the outer product of Hw with w. It is easy to see that this is unbiased, since E \u21e5 HwwT \u21e4 = H E \u21e5 wwT \u21e4 = H (16) Computationally, this estimator is just as expensive as CP, but since there are several pre-existing methods computing Hessian vector products, it may be easier to implement. However, we will prove in the next section that the CP estimator will have much lower variance in most situations, and later con\ufb01rm these \ufb01ndings experimentally. And in addition to this, there are certain situations, which arise frequently in machine learning applications, where vectorized implementations of CP will consume far less memory than similar vectorized implementations of this simpler estimator ever could, and we will demonstrate this in the speci\ufb01c case when f is a neural network training objective function. It is also worth noting that this estimator underlies the Hessian norm estimation technique used in Rifai et al. (2011). That this is true is due to the equivalence between the stochastic \ufb01nite-difference formulation used in that work and matrix-vector products with randomly drawn vectors. We will make this rigorous in the appendix/supplement. 4. Covariance analysis Let AB> be an arbitrary matrix factorization of H, with A, B 2 Cn\u21e5`. Given a vector valued random variable u 2 R` satisfying E[uu>] = I, we can use this factorization to produce an unbiased rank-1 estimate of the Hessian, HA,B \u2318(Au)(Bu)> = Auu>B>. Note that the various CP estimators, as well as the simpler one discussed in the previous section are all of this form, and differ only in their choices of A and B. Expanding we have: E[HA,B ij HA,B kl ] = E 2 4X a,b Ai,auaubBj,b X c,d Ak,cucudBl,d 3 5 (17) = X a,b,c,d AiaBjbAkcBld E [uaubucud] (18) where here (and in the remainder of this section) the subscripts on u refer to scalar components of the vector u and not elements of a collection of vectors. If we assume u \u21e0G \u2318Normal(0, I), we can use the wellknow formula EG[uaubucud] = \u03b4ab\u03b4cd + \u03b4ac\u03b4bd + \u03b4ad\u03b4bc and simplify this further to: = X a,b,c,d AiaBjbAkcBld(\u03b4ab\u03b4cd + \u03b4ac\u03b4bd + \u03b4ad\u03b4bc) = (A> i Bj)(A> k Bl) + (A> i Ak)(B> j Bl) + (A> i Bl)(A> k Bj) = HijHkl + (A> i Ak)(B> j Bl) + HilHjk where Ai is a vector consisting of the i-th row of A, and similarly for Bi, and where we have used Hij = A> i Bj. Consequently, the variance is given by: CovG h HA,B ij , HA,B kl i = EG h HA,B ij HA,B kl i \u2212HijHkl = (A> i Ak)(B> j Bl) + HilHjk Note that when A = B = \u02dc S, we have that (A> i Ak)(B> j Bl) = ( \u02dc S> i \u02dc Sk)( \u02dc S> j \u02dc Sl) = HikHjl. 
Thus the estimator H \u02dc S, \u02dc S has the following desirable property: its covariance depends only on H and not on the speci\ufb01c details of the computational graph used to construct the S function. If on the other hand we assume that u \u21e0 K \u2318 Bernoilli({\u22121, 1})`, i.e. K is a multivariate distribution of independent Bernoulli random variables on {\u22121, 1}, we have EB[uaubucud] = \u03b4ab\u03b4cd + \u03b4ac\u03b4bd + \u03b4ad\u03b4bc \u2212 2\u03b4ab\u03b4bc\u03b4cd, which when plugged into (18) gives: CovK h HA,B ij HA,B kl i = (A> i Ak)(B> j Bl) + HilHjk \u22122 X a BiaAjaBkaAla = CovG h HA,B ij HA,B kl i \u22122 X a BiaAjaBkaAla Of particular interest is the self-variance of HA,B ij (i.e. Var h HA,B ij i = Cov h HA,B ij , HA,B ij i ). In this case we have that: VarK h HA,B ij i = VarG h HA,B ij i \u22122 X a (BiaAja)2 and we see that variance of estimator that uses K will always be strictly smaller than the one that uses G, unless P a(BiaAja)2 = 0 (which would imply that P a BiaAja = Hij = 0). Returning to the case that u \u21e0G, we can prove the following result, which shows that when it comes to estimating the diagonal entries Hii of H, the estimator which uses A = B = \u02dc S has the lowest variance among all possible estimators of the form HA,B: Theorem 4.1. 8i and 8A, B s.t. AB> = H we have: VarG h HA,B ii i \u2265VarG h H \u02dc S, \u02dc S ii i = 2H2 ii \fEstimating the Hessian by Back-propagating Curvature Moreover, in the particular case of using the \u2018simple\u2019 estimator (which is given by A = H, B = I) the variance of the diagonal entries is given by: VarG h HH,I ii i = H> i Hi + H2 ii = X j6=i H2 ij + VarG h H \u02dc S, \u02dc S ii i and so we can see that the CP estimator based on S always gives a lower variance, and is strictly lower in most cases. 5. Practical aspects 5.1. Computing and factoring the Mi\u2019s Computing the matrices Mi for each node i is necessary in order to compute the S, T and U functions, and for S we must also be able to factor them. Fortunately, each Mi can be computed straightforwardly according to eqn. 7 as long as the operations performed at node i are simple enough. And each Hyi,q xi,xi is determined completely by the local function fi computed at node i. The Jacobian term Jf yi,q = \u21e5 Jf yi \u21e4 q which appears in the formula for Mi is just a scalar, and is the derivative of f w.r.t. yi,q. This can be made cheaply and easily available by performing, in parallel with the computation of S(V ), the standard backwards automatic differentiation pass for computing the gradient of f w.r.t. to y1, which will produce the gradient of f w.r.t. each yi along the way. Alternatively, this gradient information may be cached from a gradient computation which is performed ahead of time (which in many applications is done anyway). In general, when Mi is block or banded diagonal, Fi will be as well (with the same pattern), which will greatly reduce the associated computational and storage requirements. For example, when fi corresponds to the element-wise nonlinearities computed in a particular layer of a neural network, Mi will be diagonal and hence so will Fi, and these matrices can be stored as such. Also, if Mi happens to be sparse or low rank, without any other obvious special structure, there are algorithms which can compute factors Fi which will also be sparse or low-rank. 
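As a small illustration of the factorization requirement $F_i^T F_i = M_i$, the eigendecomposition route below is one standard option (our own choice, not prescribed by the method); in the common diagonal case it reduces to an element-wise square root.

```python
# Obtain a factor F with F^T F = M for a symmetric real local Hessian term M.  Negative
# eigenvalues simply make the corresponding rows of F imaginary, which is why the S-based
# estimator works with complex arithmetic.
import numpy as np

def factor_symmetric(M):
    """Return F (possibly complex) with F.T @ F == M for symmetric real M."""
    eigvals, V = np.linalg.eigh(M)
    sqrt_vals = np.sqrt(eigvals.astype(complex))      # imaginary for negative eigenvalues
    return sqrt_vals[:, None] * V.T                   # F = diag(sqrt(lambda)) V^T

M = np.array([[2.0, 1.0], [1.0, -1.5]])               # symmetric but indefinite
F = factor_symmetric(M)
assert np.allclose(F.T @ F, M)                        # plain transpose, not conjugate
```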
Alternatively, in the most extreme case, the vector valued nodes in the graph can be sub-divided to produce a graph with the property that every node outputs only a scalar and has at most 2 inputs. In such a case, each Mi will be no bigger than 2 \u21e52. Such an approach is best avoided unless deemed necessary since the vector formalism allows for a much more vectorized and thus ef\ufb01cient implementation in most situations which arise in practice. Another option to consider if it turns out that Mi is easy to work with but hard to factor, is to use the T, U based estimator instead of the S based one. It may also be the case that Mi or Fi will have a special sparse form which makes sampling the entire vector vi unnecessary. For example, if a node copies a large input vector to its output and transforms a single entry by some nonlinear function, Mi will be all zeros with a single element on the diagonal (and hence so will its factor Fi), making it possible to sample only the component of vi that corresponds to that entry. 5.2. Increasing the rank As with any unbiased estimator, the estimate can be made more accurate by collecting multiple samples. Fortunately, sampling and computing S(V ) for multiple V \u2019s is trivially parallelizeable. And it can be easily implemented in vectorized code for k samples by taking the de\ufb01ning recursions for S (eqn. 12, 13, and 14) and rede\ufb01ning Syi(V ) and Sxi(V ) to be matrix valued functions (with k columns) and vi to be a mi \u21e5k matrix of column vectors which are generated from independent draws from the usual distribution for vi. In the case where f is a sum of B similarly structured terms, which occurs frequently in machine learning such as when f is sum of regression errors or log-likelihood terms over a collection of training cases, one can apply CP individually to each term in the sum at almost no extra cost as just applying it to f, thus obtaining a rank-k estimate of f instead of a rank-1 estimate. 5.3. Curvature Propagation for Diagonal Hessian Estimation in Feedforward Neural Networks In this section we will apply CP to the speci\ufb01c example of computing an unbiased estimate diag( \u02c6 H) of the diagonal of H (diag(H)) of a feed-forward neural networks with ` layers. The pseudocode below computes the objective f of our neural network for a batch of B cases. 1: Input: z1, a matrix of inputs (with B columns, one per case) 2: for all i from 1 to ` \u22121 do 3: ui+1 Wizi 4: zi+1 g(ui+1) 5: end for 6: f PB b=1 Lb(z`,b)/B 7: Output: f Here g(x) is a coordinate-wise nonlinearity, zi are matrices containing the outputs of the neuronal units at layer i for all the cases, and similarly the matrices ui contain their inputs. Lb denotes the loss-function associated with case b (the dependency on b is necessary so we can include targets). For simplicity we will assume that Lb is the standard squared loss given by Lb(z`,b) = 1/2kz`,b \u2212tbk2 for target vector tb (where t will denote the matrix of these vectors). The special structure of this objective permits us to ef\ufb01ciently apply CP to each scalar term of the average PB b=1 Lb(z`,b)/B, instead of to f directly. By summing the estimates of the diagonal Hessians for each Lb(z`,b)/B we thus obtain a rank-B estimate of H instead of merely a rank-1 estimate. That this is just as ef\ufb01cient as applying CP directly to f is due to the fact that the computations of each z`,b are performed independently of each other. 
For ease of presentation, we will rede\ufb01ne V \u2318{vi}L i=1 so that each vi is not a single vector, but a matrix of \fEstimating the Hessian by Back-propagating Curvature such vectors with B columns. We construct the computational graph so that the element-wise nonlinearities and the weight matrix multiplications performed at each of the ` layers each correspond to a node in the graph. We de\ufb01ne Sui \u2318Sxji (V ) where ji is the node corresponding to the computation of ui (from zi\u22121 and Wi\u22121), Szi \u2318Sxki (V ) where ki is the node correspond to the computation of zi (from ui), and SWi \u2318[Sy1(V )]Wi where [\u00b7]Wi denotes extraction of the rows in y1 corresponding to the i-th weightmatrix (Wi). The variables dzi and dui are derivatives w.r.t. ui and zi and are computed with backpropagation. Consistent with our mild rede\ufb01nition / abuse of notation for V , each of Sui, Szi and SWi will be matrix-valued with a column for each of the B training cases. Finally, we let a\u2299b denote the element-wise product, a\u22992 the element-wise power, outer(a, b) \u2318ab>, outer2(a, b) \u2318outer(a\u22992, b\u22992), and vec(\u00b7) the vectorization operator. Under this notation, the algorithm below estimates the diagonal of the Hessian of f by estimating the sub-objective corresponding to each case, and then averaging the results. Like the pseudo-code for the neural network objective itself, it makes use of vectorization, which allows for an easily parallelized implementation. 1: Sz` vj` ; dz` z` \u2212t 2: Su` Sz` ; du` dz` 3: for all i from ` \u22121 down to 1 do 4: Szi W > i Sui+1 ; dzi W > i dui+1 5: [diag( \u02c6 H)]Wi vec(outer2 % zi, Sui+1 & /B) 6: Ki g00(ui) \u2299dzi 7: Sui Szi \u2299g0(ui) + vki \u2299K\u22991/2 i 8: dui dzi \u2299g0(ui) 9: end for For i < `, each Ki is a B-columned matrix of vectors containing the diagonals for each training case of the local matrices Mki for each case occurring at node ki. Because Mj corresponds to an element-wise non-linear function, it is diagonal, and so K\u22991/2 i will be a matrix of vectors corresponding to the diagonals of the factors Fki (which are themselves diagonal). Note that the above algorithm makes use of the fact that the local matrices Mji can be set to zero and the estimator of the diagonal will remain unbiased. At no point in the above implementation do we need to store any matrix the size of SWi, as the computation of [diag( \u02c6 H)]Wi, which involves an element-wise square of SWi and a sum over cases (as accomplished by line 5), can be performed as soon as the one and only contribution to SWi from other nodes in the graph is available. This is desirable since SWi will usually be much larger than the various other intermediate quantities which we need to store, such as zi or Sui. In functions f where large groups of parameters are accessed repeatedly throughout the computation graph, such as in the training objective of recurrent neural networks, we may have to temporally store some matrices the size Sy1 (or certain row-restrictions of this, like SWi) as the contributions from different cases are collected and summed together, which can make CP less practical. Notably, despite the structural similarities of backprop (BP) to CP, this problem doesn\u2019t exist with BP since one can store incomplete contributions from each case in the batch into a single n dimensional vector, which is impossible in CP due to the need to take the entry-wise square of Sy1 before summing over cases. 6. 
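The pseudocode above translates into runnable code fairly directly. In the sketch below the layer sizes, the tanh nonlinearity, the linear output layer (so that lines 1-2 apply verbatim), the Rademacher noise and the finite-difference check are all illustrative choices; averaging over many noise draws should match the finite-difference diagonal up to Monte Carlo error.

```python
# Illustrative implementation of the diagonal-Hessian estimator above for a small network
# with squared loss (complex-S variant, vectorized over the batch).
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 3, 2]                                     # z1 -> (tanh) -> z2 -> (linear) -> z3
Ws = [rng.normal(scale=0.5, size=(sizes[k + 1], sizes[k])) for k in range(len(sizes) - 1)]
B = 5
Z1 = rng.normal(size=(sizes[0], B))                   # inputs, one column per case
T = rng.normal(size=(sizes[-1], B))                   # targets

gp = lambda u: 1.0 - np.tanh(u) ** 2                  # g'
gpp = lambda u: -2.0 * np.tanh(u) * (1.0 - np.tanh(u) ** 2)   # g''

def forward(Ws):
    zs, us = [Z1], [None]
    for k, W in enumerate(Ws):
        us.append(W @ zs[-1])
        zs.append(np.tanh(us[-1]) if k < len(Ws) - 1 else us[-1])   # linear output layer
    return zs, us

def objective(Ws):
    zs, _ = forward(Ws)
    return 0.5 * np.sum((zs[-1] - T) ** 2) / B

def cp_diag_sample(Ws, zs, us):
    """One rank-B sample of diag(H) w.r.t. each weight matrix."""
    n = len(Ws)
    diag = [None] * n
    Su = rng.choice([-1.0, 1.0], size=zs[n].shape).astype(complex)  # lines 1-2: S_{z_l} = S_{u_l} = v
    du = zs[n] - T                                                   # dz_l = du_l = z_l - t
    for j in range(n - 1, -1, -1):                                   # the paper's i = l-1 .. 1
        diag[j] = np.real(Su ** 2) @ (zs[j] ** 2).T / B              # line 5 (real part of S^2)
        if j == 0:
            break                                                    # z_1 is the data: no u_1
        Sz = Ws[j].T @ Su                                            # line 4
        dz = Ws[j].T @ du
        K = gpp(us[j]) * dz                                          # line 6
        v = rng.choice([-1.0, 1.0], size=us[j].shape)
        Su = Sz * gp(us[j]) + v * np.sqrt(K.astype(complex))         # line 7
        du = dz * gp(us[j])                                          # line 8
    return diag

def fd_diag(Ws, j, eps=1e-4):
    """Central-difference reference for diag(H) w.r.t. the entries of Ws[j]."""
    base, D = objective(Ws), np.zeros_like(Ws[j])
    for idx in np.ndindex(*Ws[j].shape):
        Wp = [W.copy() for W in Ws]; Wp[j][idx] += eps
        Wm = [W.copy() for W in Ws]; Wm[j][idx] -= eps
        D[idx] = (objective(Wp) - 2.0 * base + objective(Wm)) / eps ** 2
    return D

zs, us = forward(Ws)
est = [np.zeros_like(W) for W in Ws]
n_samples = 5000
for _ in range(n_samples):
    for j, d in enumerate(cp_diag_sample(Ws, zs, us)):
        est[j] += d / n_samples
for j in range(len(Ws)):
    print(j, np.max(np.abs(est[j] - fd_diag(Ws, j))))  # small, up to Monte Carlo error
```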
Hardness of exact computation An approach like CP wouldn\u2019t be as useful if there was an ef\ufb01cient and exact algorithm for computing the diagonal of the Hessian of the function de\ufb01ned by an arbitrary computation graph. In this section we will argue why such an algorithm is unlikely to exist. To do this we will reduce the problem of multiplying two matrices to that of computing (exactly) the diagonal of the Hessian of a certain function f, and then appeal to a hardness due to Raz and Shpilka (2001) which shows that matrix multiplication will require asymptotically more computation than CP does when it is applied to f. This result assumes a limited computational model consisting of bounded depth arithmetic circuits with arbitrary fan-in gates. While not a fully general model of ef\ufb01cient computation, it nonetheless captures most natural algebraic formulae and algorithms that one might try to use to compute the diagonal of f. The function f will be de\ufb01ned by: f(y) \u2318 1/2y>W >ZWy, where Z 2 R2n\u21e52n is symmetric, and W \u2318[P >Q]> with P 2 Rn\u21e5n and Q 2 Rn\u21e5n. Note that f may be easily evaluated in O(n2) time by multiplying y \ufb01rst by W, obtaining z, and then multiplying z by Z, obtaining Zz, and \ufb01nally pre-multiplying by z> obtaining z>Zz = y>W >ZWy. Thus applying CP is relatively straight-forward, with the only potential dif\ufb01culty being that the matrix Z, which is the local Hessian associated with the node that computes z>Zz, may not be easy to factorize. But using the T/U variant of CP gets around this issue, and achieves a O(n2) computational cost. Moreover, it is easy to see how the required passes could be implemented by a \ufb01xed-depth arithmetic circuit (with gates of arbitrary fan-in) with O(n2) edge-cost since the critical operations required are just a few matrix-vector multiplications. The goal of the next theorem is to show that there can be no such circuit of edge cost O(n2) for computing the exact Hessian of f. Theorem 6.1. Any family of bounded depth arithmetic circuits with arbitrary fan-in gates which computes the diagonal of f given inputs W and Z will have an edge count which is superlinear in n2. The basic idea of the proof is to use the existence of such a circuit family to construct a family of circuits with bounded depth and edge count O(n2), that can multiply arbitrary n \u21e5n matrices (which will turn out to be the matrices P and Q that parameterize f), contradicting a theorem of Raz and Shpilka (2001) which shows that any such circuit fam\fEstimating the Hessian by Back-propagating Curvature ily must have edge count which is superlinear n2. The following lemma accomplishes this construction: Lemma 6.2. If an arithmetic circuit with arbitrary fan-in gates computes the diagonal of the Hessian of f for arbitrary P, Q and Z, then there is also a circuit of twice the depth + O(1), and three times the number of edges + O(n2), which computes the product PQ for arbitrary input matrices P, Q 2 Rn\u21e5n. The results presented in this section rule out, or make extremely unlikely, the possible existence of algorithms which could perform a constant number of backwards and forwards \u201cpasses\u201d through the computational graph of f to \ufb01nd its exact Hessian. 7. Related work The simplest way of computing the entries of the Hessian, including the diagonal, is by using an algorithm for Hessian-vector multiplication and running through the vectors ei for i = 1...n, recovering each column of H in turn. 
Unfortunately this method is too expensive in most situations, and in the example function f used in Section 6, would require O(n3) time. The method of Chapelle and Erhan (2011) can be viewed as a special case of CP, where all the Mi\u2019s except for the Mi associated with the \ufb01nal nonlinearity are set to zero. Because of this, all of the results proved in this paper also apply to this approach, but with the Hessian replaced by the Gauss-Newton matrix. Becker and Le Cun (1988) gave an approach for approximating the diagonal of the Hessian of a neural network training objective using a deterministic algorithm which does several passes through the computation tree. This method applies recursions similar to (4)-(6), except that all the \u201cintermediate Hessians\u201d at each layer are approximated by their diagonals, thus producing a biased estimate (unless the intermediate Hessians really are diagonal). We numerically compare CP to this approach in Section 8. In Bishop (1992), a method for computing entries of the Hessian of a feedforward neural network was derived. This method, while being exact, and more ef\ufb01cient than the naive approach discussed at the start of this section, is not practical for large networks, since it requires a number of passes which will be at least as big as the total number of hidden and outputs units. CP by contrast requires only 1 pass to obtain a single unbiased rank-B estimate, where B is the number of training cases. 8. Experiments 8.1. Accuracy Evaluation In this section we test the accuracy of CP on a small neural network as we vary the number of samples. The network consists of 3 hidden layers, each with 20 units. The input and output layers are of size 256 and 10 respectively giving a total of 6190 parameters. We tested both a network with random weights set by Gaussian noise with a variance of 0.01, and one trained to classify handwritten digits from the USPS dataset 2. For the random vectors v, we tested both Gaussian and {\u22121, 1}-Bernoulli noise using the CP estimators based on using S and T/U, and the simpler estimator discussed in Section 3 based on using H/I. For the sake of comparison, we also included the deterministic method of (Becker and Le Cun, 1988). The experiments were carried out by picking a subset of 1000 data points from the USPS dataset and keeping it \ufb01xed. Note that sample size refers to the number of random vectors generated per data case. This means that a sample size of 1 corresponds to an aggregation of 1000 rank-1 estimates. Our results in 8.1 show that the accuracy of each estimator improves by roughly an order of magnitude for every order of magnitude increase in samples. It also shows that the Sbased estimator along with binary noise is by far the most ef\ufb01cient and the simple H/I based estimator is the least ef\ufb01cient by an order of magnitude. 8.2. Score-Matching Experiments To test the effectiveness of CP in a more practical scenario, we focus on estimating the parameters of a Markov random \ufb01eld using the score matching technique. Score matching is a simple alternative to maximum likelihood that has been widely used to train energy-based models (K\u00a8 oster and Hyv\u00a8 arinen, 2007; Swersky et al., 2011). One of its drawbacks is that the learning objective requires the diagonal Hessian of the log-likelihood with respect to the data, which can render it unreasonably slow for deep and otherwise complicated models. 
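For reference, the score matching objective in question can be written, up to an additive constant independent of the parameters, as follows; only the diagonal second derivatives of the log-density with respect to the data enter, which is exactly the quantity estimated with CP here.

```latex
J(\theta) \;=\; \mathbb{E}_{x}
  \sum_{i=1}^{d} \left[ \frac{\partial^{2} \log p(x;\theta)}{\partial x_{i}^{2}}
  \;+\; \frac{1}{2} \left( \frac{\partial \log p(x;\theta)}{\partial x_{i}} \right)^{2} \right]
```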
Our speci\ufb01c test involves learning the parameters of a covariance-restricted Boltzmann machine (cRBM; Ranzato et al., 2010). This can be seen as a two-layer network where the \ufb01rst layer uses the squared activation function followed by a second layer that uses the softplus activation function: log(1+exp(x)). The details of applying score matching to this model can be found in Swersky et al. (2011). In this experiment, we attempted to train a cRBM using stochastic gradient descent on minibatches of size 100. Our setup is identical to Ranzato et al. (2010). In particular, our cRBM contained 256 factors and hidden units. We trained the model on 11000 image patches of size 16\u21e516 from the Berkeley dataset 3. For our training procedure, we optimize the \ufb01rst layer for 100 epochs, then freeze those weights and train the second layer for another 25 epochs. Score-matching requires the derivatives (w.r.t. the model parameters) of the sum of the diagonal entries of the Hessian (w.r.t. the data). We can thus use CP to estimate the 2http://cs.nyu.edu/\u02dcroweis/data/usps_all. mat 3http://www.cs.berkeley.edu/projects/ vision/grouping/segbench \fEstimating the Hessian by Back-propagating Curvature 100 101 102 103 Sample Size per Case 10-10 10-9 10-8 10-7 10-6 10-5 10-4 10-3 10-2 10-1 100 Sum of Squared Error Det B-S B-U/T G-S G-U/T B-H/I G-H/I (a) Random Weights 100 101 102 103 Sample Size per Case 10-9 10-8 10-7 10-6 10-5 10-4 10-3 10-2 Sum of Squared Error Det B-S B-U/T G-S G-U/T B-H/I G-H/I (b) Trained Weights 0 20 40 60 80 100 120 140 Epoch \u22121200 \u22121000 \u2212800 \u2212600 \u2212400 \u2212200 0 Score Matching Objective Train-approx::Eval-approx Train-approx::Eval-exact Train-exact::Eval-approx Train-exact::Eval-exact (c) Score Matching cRBM Figure 1. 1(a)-1(b): Accuracy of various estimators for the diagonal Hessian of a small neural network as the number of randomly drawn vectors per data case increases. B and G indicate the type of noise used (Binary or Gaussian), S and U/T are the complex and noncomplex variants of CP, H/I is the simple approach discussed in Section 3, and Det is the approach of Becker and Le Cun (1988). 1(c): Score matching loss versus epoch when training using exact minibatch gradient and approximate minibatch gradient. In addition, when training with exact or approximate methods we also evaluate and plot the approximate/exact objective to ensure that they are not too different. Training the second layer begins after epoch 100. Figure 2. Covariance \ufb01lters (left) and examples of second-layer pooling (right) from a cRBM learned with score matching on natural image patches using a stochastic objective. score-matching gradient by applying automatic differentiation to the CP estimator itself (sampling and then \ufb01xing the random noise V ), exploiting the facts that the linear sum over the diagonal respects expectation, and the derivative of the expectation over V is the expectation of the derivative, and so this will indeed produce an unbiased estimate of the required gradient. A random subset of covariance \ufb01lters from the trained model are shown in Figure 8.2. As expected the \ufb01lters appear Gabor-like, with various spatial locations, frequencies, and orientations. The second layer also reproduces the desired effect of pooling similar \ufb01lters from the layer below. 
To demonstrate that learning can proceed with no loss in accuracy we trained two different versions of the model, one where we use the exact minibatch gradient, and one where we use approximate gradients via our estimator. We plot the training loss versus epoch, and our results in Figure 1(c) show that the noise incurred from our unbiased approximation does not affect accuracy during learning with minibatches. Unfortunately, it is dif\ufb01cult to train for many epochs in the second layer because evaluating the exact objective is prohibitively expensive in this model. ACKNOWLEDGEMENTS We thank Olivier Chapelle for his helpful discussions." + } + ], + "Nicolas Ballas": [ + { + "url": "http://arxiv.org/abs/1511.06432v4", + "title": "Delving Deeper into Convolutional Networks for Learning Video Representations", + "abstract": "We propose an approach to learn spatio-temporal features in videos from\nintermediate visual representations we call \"percepts\" using\nGated-Recurrent-Unit Recurrent Networks (GRUs).Our method relies on percepts\nthat are extracted from all level of a deep convolutional network trained on\nthe large ImageNet dataset. While high-level percepts contain highly\ndiscriminative information, they tend to have a low-spatial resolution.\nLow-level percepts, on the other hand, preserve a higher spatial resolution\nfrom which we can model finer motion patterns. Using low-level percepts can\nleads to high-dimensionality video representations. To mitigate this effect and\ncontrol the model number of parameters, we introduce a variant of the GRU model\nthat leverages the convolution operations to enforce sparse connectivity of the\nmodel units and share parameters across the input spatial locations.\n We empirically validate our approach on both Human Action Recognition and\nVideo Captioning tasks. In particular, we achieve results equivalent to\nstate-of-art on the YouTube2Text dataset using a simpler text-decoder model and\nwithout extra 3D CNN features.", + "authors": "Nicolas Ballas, Li Yao, Chris Pal, Aaron Courville", + "published": "2015-11-19", + "updated": "2016-03-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.NE" + ], + "main_content": "INTRODUCTION Video analysis and understanding represents a major challenge for computer vision and machine learning research. While previous work has traditionally relied on hand-crafted and task-speci\ufb01c representations (Wang et al., 2011; Sadanand & Corso, 2012), there is a growing interest in designing general video representations that could help solve tasks in video understanding such as human action recognition, video retrieval or video captionning (Tran et al., 2014). Two-dimensional Convolutional Neural Networks (CNN) have exhibited state-of-art performance in still image tasks such as classi\ufb01cation or detection (Simonyan & Zisserman, 2014b). However, such models discard temporal information that has been shown to provide important cues in videos (Wang et al., 2011). On the other hand, recurrent neural networks (RNN) have demonstrated the ability to understand temporal sequences in various learning tasks such as speech recognition (Graves & Jaitly, 2014) or machine translation (Bahdanau et al., 2014). Consequently, Recurrent Convolution Networks (RCN) (Srivastava et al., 2015; Donahue et al., 2014; Ng et al., 2015) that leverage both recurrence and convolution have recently been introduced for learning video representation. 
Such approaches typically extract \u201cvisual percepts\u201d by applying a 2D CNN on the video frames and then feed the CNN activations to an RNN in order to characterize the video temporal variation. Previous works on RCNs has tended to focus on high-level visual percepts extracted from the 2D CNN top-layers. CNNs, however, hierarchically build-up spatial invariance through pooling layers (LeCun et al., 1998; Simonyan & Zisserman, 2014b) as Figure 2 highlights. While CNNs tends to discard local information in their top layers, frame-to-frame temporal variation is known to be smooth. The motion of video patches tend to be restricted to a local neighborhood (Brox & Malik, 2011). For this reason, we argue that current RCN architectures are not well suited for capturing \ufb01ne motion information. Instead, they are more likely focus on global appearance changes such as shot transitions. To address this issue, we introduce a novel RCN architecture that applies an RNN not solely on the 2D CNN top-layer but also on the intermediate convolutional layers. Convolutional 1 arXiv:1511.06432v4 [cs.CV] 1 Mar 2016 \fPublished as a conference paper at ICLR 2016 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 Figure 1: Visualization of convolutional maps on successive frames in video. As we go up in the CNN hierarchy, we observe that the convolutional maps are more stable over time, and thus discard variation over short temporal windows. layer activations, or convolutional maps, preserve a \ufb01ner spatial resolution of the input video from which local spatio-temporal patterns are extracted. Applying an RNN directly on intermediate convolutional maps, however, inevitably results in a drastic number of parameters characterizing the input-to-hidden transformation due to the convolutional maps size. On the other hand, convolutional maps preserve the frame spatial topology. We propose to leverage this topology by introducing sparsity and locality in the RNN units to reduce the memory requirement. We extend the GRU-RNN model (Cho et al., 2014) and replace the fully-connected RNN linear product operation with a convolution. Our GRU-extension therefore encodes the locality and temporal smoothness prior of videos directly in the model structure. We evaluate our solution on UCF101 human action recognition from Soomro et al. (2012) as well as the YouTube2text video captioning dataset from Chen & Dolan (2011). Our experiments show that leveraging \u201cpercepts\u201d at multiple resolutions to model temporal variation improves performance over our baseline model with respective gains of 3.4% for action recognition and 10% for video captioning. 2 GRU: GATED RECURRENT UNIT NETWORKS In this section, we review Gated-Recurrent-Unit (GRU) networks which are a particular type of RNN. An RNN model is applied to a sequence of inputs, which can have variable lengths. It de\ufb01nes a recurrent hidden state whose activation at each time is dependent on that of the previous time. Speci\ufb01cally, given a sequence X = (x1, x2, ..., xT ), the RNN hidden state at time t is de\ufb01ned as ht = \u03c6(ht\u22121, xt), where \u03c6 is a nonlinear activation function. RNNs are known to be dif\ufb01cult to train due to the exploding or vanishing gradient effect (Bengio et al., 1994). 
However, variants of RNNs such as Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) or Gated Recurrent Units (GRU) (Cho et al., 2014) have empirically demonstrated their ability to model long-term temporal dependencies in various tasks such as machine translation or image/video caption generation. In this paper, we mainly focus on GRU networks as they have shown performance similar to LSTMs but with a lower memory requirement (Chung et al., 2014). GRU networks allow each recurrent unit to adaptively capture dependencies at different time scales. The activation h_t of the GRU is defined by the following equations:

z_t = \sigma(W_z x_t + U_z h_{t-1}),                      (1)
r_t = \sigma(W_r x_t + U_r h_{t-1}),                      (2)
\tilde{h}_t = \tanh(W x_t + U(r_t \odot h_{t-1})),        (3)
h_t = (1 - z_t) h_{t-1} + z_t \tilde{h}_t,                (4)

where \odot is an element-wise multiplication. z_t is an update gate that decides the degree to which the unit updates its activation, or content. r_t is a reset gate, and \sigma is the sigmoid function. When a reset gate r^i_t is close to 0, it forgets the previously computed state and makes the unit act as if it were reading the first symbol of an input sequence. \tilde{h}_t is a candidate activation which is computed similarly to that of the traditional recurrent unit in an RNN.

[Figure 2: High-level visualization of our model. Our approach leverages convolutional maps from different layers of a pretrained convnet. Each map is given as input to a convolutional GRU-RNN (hence GRU-RCN) at a different time step. Bottom-up connections may optionally be added between RCN layers to form Stack-GRU-RCN.]

3 DELVING DEEPER INTO CONVOLUTIONAL NEURAL NETWORKS
This section delves into the main contributions of this work. We aim at leveraging visual percepts from different convolutional levels in order to capture temporal patterns that occur at different spatial resolutions. Consider (x^1_t, ..., x^{L-1}_t, x^L_t), for t = 1..T, a set of 2D convolutional maps extracted from L layers at different time steps in a video. We propose two alternative RCN architectures, GRU-RCN and Stacked-GRU-RCN (illustrated in Figure 2), that combine information extracted from those convolutional maps.

3.1 GRU-RCN: In the first RCN architecture, we propose to apply L RNNs independently on each convolutional map. We define L RNNs as \phi^1, ..., \phi^L, such that h^l_t = \phi^l(x^l_t, h^l_{t-1}). The hidden representations of the final time step, h^1_T, ..., h^L_T, are then fed to a classification layer in the case of action recognition, or to a text-decoder RNN for caption generation. To implement the RNN recurrent function \phi^l, we propose to leverage Gated Recurrent Units (Cho et al., 2014). GRUs were originally introduced for machine translation. They model input-to-hidden-state and hidden-to-hidden transitions using fully connected units. However, convolutional map inputs are 3D tensors (spatial dimensions and input channels). Applying a GRU directly can lead to a drastic number of parameters. Let N_1, N_2 and O_x be the input convolutional map spatial size and number of channels. Applying a GRU directly would require the input-to-hidden parameters W^l, W^l_z and W^l_r to be of size N_1 x N_2 x O_x x O_h, where O_h is the dimensionality of the GRU hidden representation.
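For reference, the fully-connected GRU update of eqs. (1)-(4) can be written in a few lines. The following is a minimal NumPy sketch; the parameter names, sizes, and the toy sequence are our own illustrative choices, not the authors' implementation.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    # Update gate (eq. 1): how much of the new candidate enters the state.
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)
    # Reset gate (eq. 2): how much of the previous state feeds the candidate.
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)
    # Candidate activation (eq. 3).
    h_tilde = np.tanh(W @ x_t + U @ (r_t * h_prev))
    # Convex combination of old state and candidate (eq. 4).
    return (1.0 - z_t) * h_prev + z_t * h_tilde

# Toy usage: hidden size 8, input size 16, sequence length 5.
rng = np.random.default_rng(0)
h = np.zeros(8)
params = [rng.standard_normal(s) for s in [(8, 16), (8, 8), (8, 16), (8, 8), (8, 16), (8, 8)]]
for x in rng.standard_normal((5, 16)):
    h = gru_step(x, h, *params)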
Fully-connected GRUs do not take advantage of the underlying structure of convolutional maps. Indeed, convolutional maps are extracted from images that are composed of patterns with strong local correlation, repeated over different spatial locations. In addition, videos have smooth temporal variation over time, i.e., the motion associated with a given patch in successive frames is restricted to a local spatial neighborhood. We embed such priors in our model structure and replace the fully-connected units in the GRU with convolution operations. We therefore obtain recurrent units that have sparse connectivity and share their parameters across different input spatial locations:

z^l_t = \sigma(W^l_z * x^l_t + U^l_z * h^l_{t-1}),                    (5)
r^l_t = \sigma(W^l_r * x^l_t + U^l_r * h^l_{t-1}),                    (6)
\tilde{h}^l_t = \tanh(W^l * x^l_t + U^l * (r^l_t \odot h^l_{t-1})),   (7)
h^l_t = (1 - z^l_t) h^l_{t-1} + z^l_t \tilde{h}^l_t,                  (8)

where * denotes a convolution operation. In this formulation, the model parameters W^l, W^l_z, W^l_r and U^l, U^l_z, U^l_r are 2D convolutional kernels. Our model results in a hidden recurrent representation that preserves the spatial topology, h^l_t = (h^l_t(i, j)), where h^l_t(i, j) is a feature vector defined at location (i, j). To ensure that the spatial size of the hidden representation remains fixed over time, we use zero-padding in the recurrent convolutions. Using convolutions, the parameters W^l, W^l_z and W^l_r have a size of k_1 x k_2 x O_x x O_h, where k_1 x k_2 is the convolutional kernel spatial size (usually 3 x 3), chosen to be significantly smaller than the convolutional map size N_1 x N_2. The candidate hidden representation \tilde{h}_t(i, j), the activation gate z_t(i, j) and the reset gate r_t(i, j) are defined based on a local neighborhood of size (k_1 x k_2) at location (i, j) in both the input data x_t and the previous hidden state h_{t-1}. In addition, the size of the receptive field associated with h^l_t(i, j) increases in the previous representations h^l_{t-1}, h^l_{t-2}, ... as we go back further in time. Our model is therefore capable of characterizing spatio-temporal patterns with high spatial variation in time.

A GRU-RCN layer applies six 2D convolutions at each time step (two per GRU gate and two for computing the candidate activation). If we assume for simplicity that the input-to-hidden and hidden-to-hidden convolutions have the same kernel size and preserve the input dimension, GRU-RCN requires O(3 T N_1 N_2 k_1 k_2 (O_x O_h + O_h O_h)) multiplications. The sparse connectivity of GRU-RCN therefore saves computation compared to a fully-connected RNN that would require O(3 T N_1 N_2 N_1 N_2 (O_x O_h + O_h O_h)) computations. Memory-wise, GRU-RCN needs to store the parameters of all six convolution kernels, leading to O(3 k_1 k_2 (O_x O_h + O_h O_h)) parameters.

3.2 STACKED GRU-RCN: In the second RCN architecture, we investigate the importance of bottom-up connections across RNNs. While GRU-RCN applies each layer-wise GRU-RNN in an independent fashion, Stacked GRU-RCN preconditions each GRU-RNN on the output of the previous GRU-RNN at the current time step: h^l_t = \phi^l(h^l_{t-1}, h^{l-1}_t, x^l_t).
The previous RNN hidden representation is given as an extra input to the GRU convolutional units:

z^l_t = \sigma(W^l_z * x^l_t + W^l_{zl} * h^{l-1}_t + U^l_z * h^l_{t-1}),    (9)
r^l_t = \sigma(W^l_r * x^l_t + W^l_{rl} * h^{l-1}_t + U^l_r * h^l_{t-1}),    (10)
\tilde{h}^l_t = \tanh(W^l * x^l_t + U^l * (r^l_t \odot h^l_{t-1})),          (11)
h^l_t = (1 - z^l_t) h^l_{t-1} + z^l_t \tilde{h}^l_t,                         (12)

Adding this extra connection brings more flexibility and gives the model the opportunity to leverage representations with different resolutions.

4 RELATED WORK
Deep learning approaches have recently been used to learn video representations and have produced state-of-the-art results (Karpathy et al., 2014; Simonyan & Zisserman, 2014a; Wang et al., 2015b; Tran et al., 2014). Karpathy et al. (2014) and Tran et al. (2014) proposed to use 3D CNNs to learn video representations, leveraging large training datasets such as Sports 1 Million. However, unlike image classification (Simonyan & Zisserman, 2014b), CNNs did not yield a large improvement over traditional methods (Lan et al., 2014), highlighting the difficulty of learning video representations even with a large training dataset. Simonyan & Zisserman (2014a) introduced a two-stream framework where they train CNNs independently on RGB and optical flow inputs. While the flow stream focuses only on motion information, the RGB stream can leverage 2D CNNs pre-trained on image datasets. Based on the two-stream representation, Wang et al. (2015a) extracted deep features and conducted trajectory-constrained pooling to aggregate convolutional features into video representations. RNN models have also been used to encode temporal information for learning video representations in conjunction with 2D CNNs. Ng et al. (2015) and Donahue et al. (2014) applied an RNN on top of the two-stream framework, while Srivastava et al. (2015) proposed, in addition, to investigate the benefit of learning a video representation in an unsupervised manner. Previous work on this topic has tended to focus only on high-level CNN "visual percepts". In contrast, our approach proposes to leverage visual "percepts" extracted from different layers of the 2D CNN. Recently, Shi et al. (2015) also proposed to leverage convolutional units inside an RNN network. However, they focus on a different task (now-casting) and a different RNN model based on an LSTM. In addition, they applied their model directly on pixels. Here, we use recurrent convolutional units on pre-trained CNN convolutional maps, to extract temporal patterns from visual "percepts" with different spatial sizes.

5 EXPERIMENTATION
This section presents an empirical evaluation of the proposed GRU-RCN and Stacked GRU-RCN architectures. We conduct experiments on two different tasks: human action recognition and video caption generation.

5.1 ACTION RECOGNITION
We evaluate our approach on the UCF101 dataset Soomro et al. (2012). This dataset has 101 action classes spanning 13,320 YouTube video clips. Videos composing the dataset are subject to large camera motion, viewpoint change and cluttered backgrounds. We report results on the first split of UCF101, as this is the most commonly used split in the literature. To perform proper hyperparameter search, we use the videos from the UCF-Thumos validation split Jiang et al. (2014) as the validation set.
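As a brief aside before describing the models, the memory savings promised by the convolutional parameterization of eqs. (5)-(8) can be checked with a quick count. The sketch below is our own back-of-the-envelope calculation; the pool4 dimensions and the treatment of the fully-connected hidden state as a flat O_h-dimensional vector are our own simplifying assumptions.

def gru_param_count(Ox, Oh, N1=None, N2=None, k1=3, k2=3, convolutional=True):
    # Six kernels in total: input-to-hidden and hidden-to-hidden for the two
    # gates and the candidate activation (eqs. 5-8).
    if convolutional:
        return 3 * (k1 * k2 * Ox * Oh + k1 * k2 * Oh * Oh)
    # Fully-connected GRU reading a flattened N1 x N2 x Ox map into an Oh-dim state.
    return 3 * (N1 * N2 * Ox * Oh + Oh * Oh)

# Example: VGG-16 pool4 percepts (14 x 14 x 512) with a 256-channel hidden state.
print(gru_param_count(512, 256))                                     # ~5.3M parameters
print(gru_param_count(512, 256, N1=14, N2=14, convolutional=False))  # ~77M parameters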
5.1.1 MODEL ARCHITECTURE In this experiment, we consider the RGB and \ufb02ow representations of videos as inputs. We extract visual \u201cpercept\u201d using VGG-16 CNNs that consider either RGB or \ufb02ow inputs. VGG-16 CNNs are pretrained on ImageNet (Simonyan & Zisserman, 2014b) and \ufb01ne-tuned on the UCF-101 dataset, following the protocol in Wang et al. (2015b). We then extract the convolution maps from pool2, pool3, pool4, pool5 layers and the fully-connected map from layer fc-7 (which can be view as a feature map with a 1 \u00d7 1 spatial dimension). Those features maps are given as inputs to our RCN models. We design and evaluate three RCN architectures for action recognition. In the \ufb01rst RCN architecture, GRU-RCN, we apply 5 convolutional GRU-RNNs independently on each convolutional map. Each convolution in the GRU-RCN has zero-padded 3 \u00d7 3 convolutions that preserves the spatial dimension of the inputs . The number of channels of each respective GRU-RNN hidden-representations are 64, 128, 256, 256, 512. After the RCN operation we obtain 5 hidden-representations for each time step. We apply average pooling on the hidden-representations of the last time-step to reduce their spatial dimension to 1 \u00d7 1, and feed the representations to 5 classi\ufb01ers, composed by a linear layer with a softmax nonlineary. Each classi\ufb01er therefore focuses on only 1 hidden-representation extracted from the convolutional map of a speci\ufb01c layer. The classi\ufb01er outputs are then averaged to get the \ufb01nal decision. A dropout ratio of 0.7 is applied on the input of each classi\ufb01ers. In the second RCN architecture, Stacked GRU-RCN, we investigate the usefulness of bottom-up connections. Our stacked GRU-RCN uses the same base architecture as the GRU-RCN, consisting of 5 convolutional GRU-RNNs having 64, 128, 256, 256 channels respectively. However, each 5 \fPublished as a conference paper at ICLR 2016 convolutional GRU-RNN is now preconditioned on the hidden-representation that the GRU-RNN applied on the previous convolution-map outputs. We apply max-pooling on the hidden representations between the GRU-RNN layers for the compatibility of the spatial dimensions. As for the previous architecture, each GRU-RNN hidden-representation at the last time step is pooled and then given as input to a classi\ufb01er. Finally, in our bi-directional GRU-RCN, we investigate the importance of reverse temporal information. Given convolutional maps extracted from one layer, we run the GRU-RCN twice, considering the inputs in both sequential and reverse temporal order. We then concatenate the last hiddenrepresentations of the foward GRU-RCN and backward GRU-RCN, and give the resulting vector to a classi\ufb01er. 5.1.2 MODEL TRAINING AND EVALUATION We follow the training procedure introduced by the two-stream framework Simonyan & Zisserman (2014a). At each iteration, a batch of 64 videos are sampled randomly from the the training set. To perform scale-augmentation, we randomly sample the cropping width and height from 256, 224, 192, 168. The temporal cropping size is set to 10. We then resize the cropped volume to 224 \u00d7 224 \u00d7 10. We estimate each model parameters by maximizing the model log-likelihood: L(\u03b8) = 1 N N X n=1 log p(yn | c(xn), \u03b8), where there are N training video-action pairs (xn, yn), c is a function that takes a crop at random. We use Adam Kingma & Ba (2014) with the gradient computed by the backpropagation algorithm. 
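A rough sketch of the scale-augmentation crop function c used in the objective above could look as follows. The helper names, the nearest-neighbour resizing, and the dummy clip are our own assumptions for illustration, not the authors' code.

import numpy as np

def random_scale_crop(video, sizes=(256, 224, 192, 168), out_hw=224, out_t=10, rng=None):
    # video: array of shape (T, H, W, C). Randomly crop a (out_t, ch, cw) volume
    # and resize its spatial dimensions to out_hw x out_hw.
    rng = rng or np.random.default_rng()
    T, H, W, _ = video.shape
    ch, cw = rng.choice(sizes), rng.choice(sizes)
    t0 = rng.integers(0, T - out_t + 1)
    y0 = rng.integers(0, H - ch + 1)
    x0 = rng.integers(0, W - cw + 1)
    crop = video[t0:t0 + out_t, y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbour spatial resize to out_hw x out_hw.
    ys = np.arange(out_hw) * ch // out_hw
    xs = np.arange(out_hw) * cw // out_hw
    return crop[:, ys][:, :, xs]

# Example: a dummy 30-frame 256x340 RGB clip.
clip = np.zeros((30, 256, 340, 3), dtype=np.uint8)
print(random_scale_crop(clip).shape)   # (10, 224, 224, 3)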
We perform early stopping and choose the parameters that maximize the log-probability of the validation set. We also follow the evaluation protocol of the two-stream framework Simonyan & Zisserman (2014a). At test time, we sample 25 equally spaced video sub-volumes with a temporal size of 10 frames. From each of these selected sub-volumes, we obtain 10 inputs for our model, i.e. 4 corners, 1 center, and their horizontal flipping. The final prediction score is obtained by averaging across the sampled sub-volumes and their cropped regions.

5.1.3 RESULTS

Table 1: Classification accuracy of different variants of the model on the UCF101 split 1. We report the performance of previous works that learn representations using only RGB information.

Method                                                                RGB    Flow
VGG-16                                                                78.0   85.4
VGG-16 RNN                                                            78.1   84.9
GRU-RCN                                                               79.9   85.7
Stacked-GRU RCN                                                       78.3   -
Bi-directional GRU-RCN                                                80.7   -

Two-Stream (Simonyan & Zisserman, 2014b)                              72.8   81.2
Two-Stream + LSTM (Donahue et al., 2014)                              71.1   76.9
Two-Stream + LSTM + Unsupervised (Srivastava et al., 2015)            77.7   83.7
Improved Two-Stream (Wang et al., 2015b)                              79.8   85.7

C3D one network (Tran et al., 2014), 1 million videos as training     82.3   -
C3D ensemble (Tran et al., 2014), 1 million videos as training        85.2   -
Deep networks (Karpathy et al., 2014), 1 million videos as training   65.2   -

We compare our approach with two different baselines, VGG-16 and VGG-16 RNN. VGG-16 is the 2D spatial stream described in Wang et al. (2015b). We take the VGG-16 model, pretrained on ImageNet, and fine-tune it on the UCF-101 dataset. The VGG-16 RNN baseline applies an RNN, using fully-connected gated recurrent units, on top of VGG-16. It takes as input the VGG-16 fully-connected representation fc-7. Following the GRU-RCN top layer, the VGG-16 RNN has a hidden-representation dimensionality of 512. The first column of Table 1 focuses on RGB inputs. We first report the results of the different GRU-RCN variants and compare them with the two baselines, VGG-16 and VGG-16 RNN. Our GRU-RCN variants all outperform the baselines, showing the benefit of delving deeper into a CNN in order to learn a video representation. We notice that VGG-16 RNN only slightly improves over the VGG-16 baseline, 78.1 against 78.0. This result confirms that the CNN top layer tends to discard temporal variation over short temporal windows. Stacked-GRU RCN performs significantly lower than GRU-RCN and Bi-directional GRU-RCN. We argue that the bottom-up connections, which increase the depth of the model, combined with the lack of training data (the UCF-101 training set is composed of only 9,500 videos), make Stacked-GRU RCN learning difficult. The Bi-directional GRU-RCN performs best among the GRU-RCN variants with an accuracy of 80.7, showing the advantage of modeling temporal information in both sequential and reverse order. Bi-directional GRU-RCN obtains a performance gain of 3.4% relative to the baselines that focus only on the VGG-16 top layer. Table 1 also reports results from other state-of-the-art approaches using RGB inputs. C3D Tran et al. (2014) obtains the best performance on UCF-101 with 85.2. However, it should be noted that C3D is trained on over 1 million videos. The other approaches use only the 9,500 videos of the UCF101 training set for learning temporal patterns. Our Bi-directional GRU-RCN compares favorably with the other Recurrent Convolution Networks (second block of the table), confirming the benefit of using different CNN layers to model temporal variation.
Table 1 also evaluates the GRU-RCN model applied \ufb02ow inputs. VGG-16 RNN baseline actually decreases the performance compared to the VGG-16 baseline. On the other hand, GRU-RCN outperforms the VGG-16 baseline achieving 85.7 against 85.4. While the improvement is less important than the RGB stream, it should be noted that the \ufb02ow stream of VGG-16 is applied on 10 consecutive \ufb02ow inputs to extract visual \u201cpercepts\u201d, and therefore already captures some motion information. Finally, we investigate the combination of the RGB and \ufb02ow streams. Following Wang et al. (2015b), we use a weighted linear combination of their prediction scores, where the weight is set to 2 as for the \ufb02ow stream net and 1 for the temporal stream. Fusion the VGG-16 model baseline achieve an accuracy of 89.1. Combining the RGB Bi-directional GRU-RCN with the \ufb02ow GRU-RCN achieves a performance gain of 1.9% over baseline, reaching 90.8. Our model is on part with Wang et al. (2015b) that obtain state-of-art results using both RGB and \ufb02ow streams which obtains 90.9. 5.2 VIDEO CAPTIONING We also evaluate our representation on the video captioning task using YouTube2Text video corpus Chen & Dolan (2011). The dataset has 1,970 video clips with multiple natural language descriptions for each video clip. The dataset is open-domain and covers a wide range of topics such as sports, animals, music and movie clips. Following Yao et al. (2015b), we split the dataset into a training set of 1,200 video clips, a validation set of 100 clips and a test set consisting of the remaining clips. 5.2.1 MODEL SPECIFICATIONS To perform video captioning, we use the so-called encoder-decoder framework Cho et al. (2014). In this framework the encoder maps input videos into abstract representations that precondition a caption-generating decoder. As for encoder, we compare both VGG-16 CNN and Bi-directional GRU-RCN. Both models have been \ufb01ne-tuned on the UCF-101 dataset and therefore focus on detecting actions. To extract an abstract representation from a video, we sample K equally-space segments. When using the VGG16 encoder, we provide the fc7 layer activations of the each segment\u2019s \ufb01rst frame as the input to the text-decoder. For the GRU-RCN, we apply our model on the segment\u2019s 10 \ufb01rst frames. We concatenate the GRU-RCN hidden-representation from the last time step. The concatenated vector is given as the input to the text decoder. As it has been shown that characterizing entities in addition of action is important for the caption-generation task Yao et al. (2015a), we also use as encoder a CNN Szegedy et al. (2014), pretrained on ImageNet, that focuses on detecting static visual object categories. As for the decoder, we use an LSTM text-generator with soft-attention on the video temporal frames Yao et al. (2015b). 
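The encoder side of this captioning setup can be summarized with a short sketch: sample K equally spaced segments and apply a percept extractor to each, stacking the results for the attention-based text decoder. The function names, the value of K, and the feature dimension below are placeholders of our own, not the authors' code.

import numpy as np

def encode_video(frames, K, segment_features):
    # For each of K equally spaced segments, apply `segment_features` (e.g. VGG-16 fc7
    # of the segment's first frame, or the concatenated last-step Bi-directional
    # GRU-RCN states of its first 10 frames) and stack the results for the decoder.
    T = frames.shape[0]
    starts = np.linspace(0, max(T - 10, 0), K).astype(int)
    return np.stack([segment_features(frames[s:s + 10]) for s in starts])

# Toy usage with a stand-in feature extractor.
video = np.zeros((120, 224, 224, 3))
dummy = lambda clip: np.zeros(512)   # placeholder for a real percept extractor
print(encode_video(video, K=8, segment_features=dummy).shape)   # (8, 512)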
7 \fPublished as a conference paper at ICLR 2016 YouTube2Text Model model selection BLEU METEOR CIDEr VGG-16 Encoder BLEU 0.3700 0.2640 0.4330 Bi-directional GRU-RCN Encoder BLEU 0.4100 0.2850 0.5010 GoogleNet Encoder BLEU 0.4128 0.2900 0.4804 GoogleNet + Bi-directional GRU-RCN Encoder BLEU 0.4963 0.3075 0.5937 GoogleNet + Bi-directional GRU-RCN Encoder NLL 0.4790 0.3114 0.6782 GoogleNet + Bi-directional GRU-RCN Encoder METEOR 0.4842 0.3170 0.6538 GoogleNet + Bi-directional GRU-RCN Encoder CIDEr 0.4326 0.3160 0.6801 GoogleNet + HRNE (Pan et al., 2015) 0.436 0.321 VGG + p-RNN (Yu et al., 2015) 0.443 0.311 VGG + C3D + p-RNN (Yu et al., 2015) 0.499 0.326 Soft-attention Yao et al. (2015b) 0.4192 0.2960 0.5167 Venugopalan et al. Venugopalan et al. (2015) 0.3119 0.2687 + Extra Data (Flickr30k, COCO) 0.3329 0.2907 Thomason et al. Thomason et al. (2014) 0.1368 0.2390 Table 2: Performance of different variants of the model on YouTube2Text for video captioning. Representations obtained with the proposed RCN architecture combined with decoders from Yao et al. (2015b) offer a signi\ufb01cant performance boost, reaching the performance of the other state-ofthe-art models. 5.2.2 TRAINING For all video captioning models, we estimated the parameters \u03b8 of the decoder by maximizing the log-likelihood: L(\u03b8) = 1 N N X n=1 tn X i=1 log p(yn i | yn 0}. We first argue that a \u0338\u2261L aa but aa \u2261L aan for all n \u22651. Taking u = v = u0 = \u03b5 and v0 = b, observe that the preconditions for the syntactic congruence are satisfied but ab \u2208L while aab / \u2208L, therefore a \u0338\u2261L aa. Now, let n \u22652, and consider the words aa and aan. Intuitively, since there is no x, y \u2208b \u03a3\u2217such that xaay \u2208L and xaany \u2208L, we show that whenever the preconditions for the congruence are satisfied, both longer words are out of L. Given u, v, u0, v0 \u2208b \u03a3\u2217such that u0a \u2208b \u03a3\u2217 \u2265 \u22d7, av0 \u2208b \u03a3\u2217 \u2264 \u22d6, and u[u0aav0]v, we assume towards \f10 Regular Methods for OPLs contradiction that uu0aav0v \u2208L. Since uu0aav0v \u2208L and u0a \u2208b \u03a3\u2217 \u2265 \u22d7, we have u0 = \u03b5. Moreover, since av0 \u2208b \u03a3\u2217 \u2264 \u22d6, we have that v0 is either of the from a\u2217or a\u2217b. Consequently, \u03bb(u0aav0) is aaa\u2217or aaa\u2217b. This contradicts that u[u0aav0]v because a \u22d6a, and therefore uu0aav0v / \u2208L. The same argument shows that uu0aanv0v / \u2208L, implying that aa \u2261L aan. Similarly as above, we can show that u \u0338\u2261L v but v \u2261L w for all u, v, w \u2208b \u03a3\u2217such that \u03bb(u) = (ab)ia, \u03bb(v) = (ab)jaa, and \u03bb(w) = (ab)kaan, where n, i, j, k \u22651. We now show that the syntactic congruence is chain-monotonic. \u25b6Theorem 23. For every L \u2286b \u03a3\u2217, \u2261L is a chain-monotonic equivalence relation. The main result of this section is the characterization theorem below. We prove each direction separately in Sections 3.1 and 3.2. \u25b6Theorem 24. A language L is an OPL iff \u2261L admits finitely many equivalence classes. 3.1 Finiteness of the Syntactic Congruence Let b \u03a3 be an operator precedence alphabet, A = (Q, I, F, \u2206) be an OPA over b \u03a3, and \u22c6/ \u2208\u03a3 be a fresh letter for which we extend the precedence relation with a \u22d6\u22c6for all a \u2208\u03a3. 
For every word w \u2208b \u03a3\u2217, we define the functions fw : Q \u00d7 (\u0393 \u222a{\u22a5}) \u21922Q and \u03a6w : Q \u00d7 (\u0393 \u222a{\u22a5}) \u21922\u0393+\u222a{\u22a5} such that for all q \u2208Q and all \u03b3 \u2208\u0393 \u222a{\u22a5}, we have fw(q, \u03b3) = {qw \u2208 Q | \u2203\u03b3w \u2208\u0393+ \u222a{\u22a5}, (q, w\u22c6, \u03b3) \u2217(qw, \u22c6, \u03b3w)} and \u03a6w(q, \u03b3) = {\u03b3w \u2208\u0393+ \u222a{\u22a5} | \u2203qw \u2208 Q, (q, w\u22c6, \u03b3) \u2217(qw, \u22c6, \u03b3w)}. Intuitively, the states in fw(q, \u03b3) and the stacks in \u03a6w(q, \u03b3) come from the configurations that A can reach after reading w from the state in q, but before triggering any pop-transition due to reaching the end of the word w. Furthermore, for every w \u2208b \u03a3\u2217, we define the function gw : Q2 \u00d7 (\u0393 \u222a{\u22a5}) \u21922Q such that for all q1, q2 \u2208Q and all \u03b3 \u2208\u0393 \u222a{\u22a5} we have gw(q1, q2, \u03b3) = {pw \u2208Q | \u2203\u03b3w \u2208 \u03a6w(q1, \u03b3), (q2, \u03b5, \u03b3w) \u2217(pw, \u03b5, \u22a5)}. Intuitively, gw(q1, q2, \u03b3) is the set of states that A can reach after triggering from q2 the pop-transitions that empty the (unique) stack \u03b3w \u2208\u03a6w(q1, \u03b3) that was generated by reading w while moving from the state q1 to some state in fw(q1, \u03b3). Recall that for a given stack \u03b8 \u2208\u0393+ \u222a{\u22a5}, we denote by \u03b8\u22a4the stack symbol at the top of \u03b8, which is \u03b5 when \u03b8 = \u22a5. Moreover, for a given set of stacks \u0398 \u2286\u0393+ \u222a{\u22a5}, let us define \u0398\u22a4= {\u03b8\u22a4| \u03b8 \u2208\u0398}. For the sequel, we define the following equivalence relation: \u25b6Definition 25 (structural congruence). Given an OPA A = (Q, I, F, \u2206), we define the relation \u2261A over b \u03a3\u2217as follows: x \u2261A y \u21d0 \u21d2x \u2248y\u2227fx = fy \u2227gx = gy \u2227 \u0000\u2200q \u2208Q, \u2200\u03b3 \u2208\u0393\u222a{\u22a5}, (\u03a6x(q, \u03b3))\u22a4= (\u03a6y(q, \u03b3))\u22a4\u0001 First, we show that the structural congruence of any OPA has a finite index. \u25b6Lemma 26. For every OPA A with n states and m input letters, the structural congruence \u2261A has at most O(m)O(m\u00d7n)O(1) equivalence classes. Then, we show that for any OPA the syntactic congruence of its language is coarser than its structural congruence, therefore has a finite index as well. \u25b6Lemma 27. For every OPA A, the congruence \u2261L(A) is coarser than the congruence \u2261A. As a direct result of Lemmas 26 and 27 above, we obtain the following. \u25b6Corollary 28. For every L \u2286b \u03a3\u2217, if L is a b \u03a3-OPL then \u2261L has finite index. \fT. A. Henzinger and P. Kebis and N. Mazzocchi and N. E. Sara\u00e7 11 3.2 From the Syntactic Congruence to Operator Precedence Automata Consider a language L \u2286b \u03a3\u2217such that \u2261L has finitely many equivalence classes. We construct a deterministic OPA that recognizes L and whose states are based on the equivalence classes of \u2261L. Given w \u2208b \u03a3\u2217, we denote by [w] its equivalence class with respect to \u2261L. 
We construct A = (Q, {q0}, F, \u2206) with the set of states Q = {([u], [v]) | u, v \u2208b \u03a3\u2217}, the initial state q0 = ([\u03b5], [\u03b5]), the set of accepting states F = {([\u03b5], [w]) | w \u2208L}, and the b \u03a3-driven transition function \u2206: Q \u00d7 \u03a3 \u00d7 (\u0393+ \u222a{\u22a5}) \u2192Q \u00d7 (\u03a3 \u222a{\u03b5}) \u00d7 (\u0393+ \u222a{\u22a5}), where \u0393 = \u03a3 \u00d7 Q, is defined as follows: \u2206maps (([u], [v]), a, \u27e8b, ([u\u2032], [v\u2032])\u27e9\u03b8) to (([a], [\u03b5]), \u03b5, \u27e8a, ([u], [v])\u27e9\u27e8b, ([u\u2032], [v\u2032])\u27e9\u03b8) if b \u22d6a, it returns (([uva], [\u03b5]), \u03b5, \u27e8a, ([u\u2032], [v\u2032])\u27e9\u03b8) if b \u02d9 = a, and (([u\u2032], [v\u2032uv]), a, \u03b8) if b \u22d7a. The soundness of our construction is given by the proof of the following lemma in Appendix. \u25b6Lemma 29. For every L \u2286b \u03a3\u2217, if \u2261L has finite index then L is a b \u03a3-OPL. 4 Antichain-based Inclusion Checking Considering two languages L1 and L2 given by some automata, the classical approach for deciding whether L1 \u2286L2 holds is to first compute the complement L2 of L2, and then decide the emptiness of L1 \u2229L2. The major drawback with this approach is that the complementation requires the determinization of the automaton denoting L2. A way to avoid the determinization is to search among words of L1 for a counterexample to L1 \u2286L2. For this, a breadth-first search can be performed symbolically as a fixpoint iteration. In order to guarantee its termination, the search is equipped with a well quasi-order, and considers only words that are not subsumed, i.e., the minima of L1 with respect to the quasi-order. It is known that well quasi-orders satisfy the finite basis property, i.e., all sets of words have finitely many minima. Our approach is inspired by [36] which, in the context of unstructured words, presents the antichain approach as a Galois connection, and observes that the upward closure of the quasi-order is a complete abstraction of concatenation according to the standard notion of completeness in abstract interpretation [16]. We identify, in the context of structured words, sufficient conditions on quasi-orders to enable the antichain approach, by defining the class of language abstraction quasi-orders (which satisfy the finite basis property). Further, we relax the syntactic congruence into a quasi-order that is a language abstraction of a given OPL. In particular, we prove that the syntactic congruence itself is a language abstraction for its language. Then, we design our inclusion algorithm based on a fixpoint characterization of OPLs, which allows us to iterate breadth-first over all words accepted by a given OPA. Once equipped with a language abstraction quasi-order, this fixpoint is guaranteed to terminate, thus to synthesize a finite set T \u2286L1 of membership queries for L2 which suffices to decide whether L1 \u2286L2 holds. 4.1 Language Abstraction by Quasi-order Let E be a set of elements and \u227cbe a binary relation over E. The relation \u227cis a quasi-order when it is reflexive and transitive. A quasi-order \u227cover E is decidable if for all x, y \u2208E, determining whether x \u227cy holds is computable. Given a subset X of E, we define its upward closure with respect to the quasi-order \u227cby \u227c\u21bfX = {e \u2208E | \u2203x \u2208X, x \u227ce}. 
Given two subsets X, Y \u2286E the set X is a basis for Y with respect to \u227c, denoted B(X \u227cY ), whenever X \u2286Y and \u227c\u21bfX = \u227c\u21bfY . The quasi-order \u227cis a well quasi-order if and only if for each set Y \u2286E there exists a finite set X \u2286E such that B(X \u227cY ). This property on bases is also known as the finite basis property. Other equivalent definitions of well quasi-orders can be found in the literature [23], we will use the following two: (\u2020) For every sequence {ei}i\u2208N in E, there exists i, j \u2208N with i < j such that ei \u227cej. \f12 Regular Methods for OPLs b \u03a3cr c r \u03b5 c \u22d6\u02d9 = \u22d7 r \u22d7\u22d7\u22d7 \u03b5 \u22d6\u22d6\u02d9 = Figure 5 (left) OPA A over b \u03a3cr recognizing the VPL of well-matched call/return words. Figure 6 (right) OPA B over b \u03a3cr recognizing the regular language of words of even length. (\u2021) There is no sequence {Xi}i\u2208N in 2E such that \u227c\u21bfX1 \u228a\u227c\u21bfX2 \u228a. . . holds. Let L1, L2 be two languages. The main idea behind our inclusion algorithm is to compute a finite subset T of L1, called a query-basis, such that T \u2286L2 \u21d4L1 \u2286L2. Then, L1 \u2286L2 holds if and only if each word of T belongs to L2, which is checked via finitely many membership queries. The computation of a query-basis consists of collecting enough words of L1 to obtain a finite basis T for L1 with respect to a quasi-order \u227cthat abstracts L2. When \u227cis a well quasi-order, some basis is guaranteed to exist thanks to the finite basis property. To ensure the equivalence L1 \u2286L2 \u21d4T \u2286L2 for any T such that B(T \u227cL1), a counterexample w \u2208L1 \\ L2 can be discarded (not included in T), only if it there exists w0 \u2208T such that w0 \u227cw and w0 is also a counterexample. Thus, we introduce the language saturation property asking a quasi-order \u227cto satisfy the following: for all w0, w \u2208b \u03a3\u2217if w0 \u227cw and w0 \u2208L2 then w \u2208L2, or equivalently, \u227c\u21bfL2 = L2. Intuitively, language saturation ensures the completeness of the language abstraction with respect to the inclusion. Finally, to guarantee that the query-basis T is iteratively constructible with an effective fixpoint computation, the quasi-order \u227cmust be both chain-monotonic and decidable. We now define the notion of language abstraction to identify the properties for a quasi-order over structured words that allow an effectively computable query-basis, as was done in [25, 36] in the context of B\u00fcchi automata for quasi-orders over unstructured infinite words. \u25b6Definition 30 (language abstraction). Let L \u2286b \u03a3\u2217. A quasi-order \u227cover b \u03a3\u2217is a language abstraction of L iff (1) it is decidable, (2) it is chain-monotonic, (3) it is a well quasi-order, and (4) it saturates L. In the next section, we provide an effective computation of a query-basis for an OPA, thanks to a quasi-order that abstracts its language. \u25b6Example 31. 
The operator precedence alphabet b \u03a3cr of A and B from Figures 5 and 6 induces four families of words: (1) the words of b \u03a3\u2217 \u02d9 = where every c matches an r, (2) the words of b \u03a3\u2217 \u22d6= b \u03a3\u2217 \u2264 \u22d6\\ b \u03a3\u2217 \u02d9 = where some c is pending for an r on its right, (3) the words of b \u03a3\u2217 \u22d7= b \u03a3\u2217 \u2265 \u22d7\\ b \u03a3\u2217 \u02d9 = where some r is pending for a c on its left, and (4) all other words of b \u03a3\u2217 \u0338= \u02d9 = = \u03a3\u2217\\ \u0010 b \u03a3\u2217 \u2264 \u22d6\u222ab \u03a3\u2217 \u2265 \u22d7 \u0011 . We focus on deciding whether L(B) is a subset of L(A) and suppose that we are given the quasi-order \u226athat is a language abstraction of L(A). Additionally, we have that two words compare with \u226aonly if they belong to the same family, and we have the following bases: B({cr} \u226ab \u03a3\u2217 \u02d9 =), B({c} \u226ab \u03a3\u2217 \u22d6), B({r} \u226ab \u03a3\u2217 \u22d7), and B({rc} \u226ab \u03a3\u2217 \u0338= \u02d9 =). We observe that \u226asaturates L(A) since b \u03a3\u2217 \u02d9 = \u2286L(A) and b \u03a3\u2217 \u2264 \u22d6, b \u03a3\u2217 \u2265 \u22d7, b \u03a3\u2217 \u0338= \u02d9 = \u2288L(A). Among the representatives cr, c, r, and rc, we can construct the set T = {cr, rc} since c, r / \u2208L(B). The set T is a query-basis for deciding whether L(B) is a subset of L(A). In particular, rc \u2208T witnesses that L(B) \u2288L(A). \fT. A. Henzinger and P. Kebis and N. Mazzocchi and N. E. Sara\u00e7 13 Note that the syntactic congruence is a natural language abstraction of OPLs. \u25b6Proposition 32. For every OPL L, \u2261L is a language abstraction of L. When the language to be abstracted is given by an OPA we are able to define a quasi-order, called structural quasi-order, that is based on the underlying structure of the automaton. \u25b6Definition 33 (structural quasi-order). Given an OPA A = (Q, I, F, \u2206), we define the relation \u2a7dA over b \u03a3\u2217as follows: x \u2a7dA y \u21d0 \u21d2x \u2248y \u2227\u2200q, q\u2032 \u2208Q, \u2200\u03b3 \u2208\u0393 \u222a{\u22a5} ^ \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 fx(q, \u03b3) \u2286fy(q, \u03b3) gx(q, q\u2032, \u03b3) \u2286gy(q, q\u2032, \u03b3) (\u03a6x(q, \u03b3))\u22a4\u2286(\u03a6y(q, \u03b3))\u22a4 \u25b6Remark 34. For every OPA A, the quasi-order \u2a7dA relaxes the congruence \u2261A from Section 3. For every OPA A, the quasi-order \u2a7dA relaxes the congruence \u2261A from Section 3. Note that, for every OPA A, the set Q \u00d7 (\u0393 \u222a{\u22a5}) is finite. Consequently, \u2a7dA is computable, and it is a well quasi-order since there cannot exist an infinite sequence of incomparable elements, i.e., (\u2020) holds. \u25b6Proposition 35. For every OPA A, \u2a7dA is a computable chain-monotonic well quasi-order. Next, we establish that structural quasi-orders saturate their languages. \u25b6Lemma 36. For every OPA A and w1, w2 \u2208b \u03a3\u2217, if w1 \u2a7dA w2 and w1 \u2208L(A) then w2 \u2208L(A). The following comes as a direct consequence of Proposition 35 and Lemma 36. \u25b6Corollary 37. For every OPA A, \u2a7dA is a language abstraction of L(A). We continue Example 31, showing that the structural quasi-order agrees with the considered bases above. \u25b6Example 38. The quasi-order \u226adescribed in Example 31 agrees with the structural quasi-order \u2a7dA of the OPA A in Figure 5. 
Indeed, due to the constraint that two comparable words x, y \u2208b \u03a3\u2217should be chain equivalent, i.e., x \u2248y, the quasi-order \u2a7dA compares only the words from the same families among b \u03a3\u2217 \u02d9 =, b \u03a3\u2217 \u22d6, b \u03a3\u2217 \u22d7, and b \u03a3\u2217 \u0338= \u02d9 =. We also note that, for all words, adding a factor in b \u03a3\u2217 \u02d9 = cannot change the accessibility in A since reading such a factor has no effect on the stack or the current state. Additionally, reading several c in a row triggers a self loop and reading several r is not possible in A. As a consequence, the base predicates mentioned in Example 31 hold, that is, B({cr} \u2a7dA b \u03a3\u2217 \u02d9 =), B({c} \u2a7dA b \u03a3\u2217 \u22d6), B({r} \u2a7dA b \u03a3\u2217 \u22d7), and B({rc} \u2a7dA b \u03a3\u2217 \u0338= \u02d9 =). Yet, we have that cr \u2a7dA \u03b5 because (q0, cr, \u22a5) \u2217(q2, \u03b5, \u27e8c, q0\u27e9) but (q0, \u03b5, \u22a5) / \u2217(q2, \u03b5, \u27e8c, q0\u27e9). 4.2 Fixpoint Characterization of Languages and Inclusion In order to formulate our inclusion algorithm, it remains to give an effective computation of a query-basis. We do so through a fixpoint characterization of the languages recognized by OPAs. We introduce the function Cat to construct words that follow the runs of the given OPA. Iterating the Cat function n \u2208N times captures all words of length up to n, and the fixpoint of the iteration captures the entire language of a given OPA. Let A = (Q, I, F, \u2206) be an OPA. Consider a vector of set of words \u20d7 X that accesses its fields with two states s, t \u2208Q, and three letters a, b, c \u2208b \u03a3 \u222a{\u03b5}. Intuitively, we aim at \f14 Regular Methods for OPLs constructing \u20d7 X iteratively such that, reading any w \u2208\u20d7 Xa,b,c s,t from the configuration (s, wc, \u03b1) where \u03b1\u22a4= a allows reaching (t, c, \u03b2) where \u03b2\u22a4= b in A. We recall that \u22a5\u22a4= \u03b5. As the base case, we take \u20d7 Xa,b,c s,t = \u03b5 when a = b and s = t, otherwise \u20d7 Xa,b,c s,t = \u2205. Then, we introduce operations (more explicitly, functions from sets of words to sets of words) that use the transitivity of \u2217in A to extend the sets of \u20d7 X. We first introduce: CatShift( \u20d7 Xa,b,c s,t ) = ( ub\u2032v a\u2032, b\u2032 \u2208\u03a3, q, s\u2032, t\u2032 \u2208Q, u \u2208\u20d7 Xa,a\u2032,b\u2032 s,s\u2032 , v \u2208\u20d7 Xb\u2032,b,c t\u2032,t , (s\u2032, \u27e8a\u2032, q\u27e9\u22a5) b\u2032 (t\u2032, \u27e8b\u2032, q\u27e9\u22a5) ) Essentially, CatShift adds ub\u2032v to \u20d7 Xa,b,c s,t when some run over u can be appended with b\u2032 thanks to a shift-transition, and some run of v requires starting with b\u2032 at the top of the stack. Next, we introduce: CatChain( \u20d7 Xa,b,c s,t ) = ( ub\u2032v a\u2032, b\u2032, c\u2032 \u2208\u03a3, q, s\u2032, t\u2032 \u2208Q, u \u2208\u20d7 Xa,b,b\u2032 s,q , v \u2208\u20d7 Xb\u2032,c\u2032,c s\u2032,t\u2032 , b \u22d6b\u2032 \u2227(q, \u22a5) b\u2032 (s\u2032, \u27e8b\u2032, q\u27e9\u22a5) \u2227(t\u2032, \u27e8c\u2032, q\u27e9\u22a5) c (t, \u22a5) ) Intuitively, CatChain adds ub\u2032v to \u20d7 Xa,b,c s,t when some run over u can be appended with b\u2032 thanks to a push-transition, and some run of v requires starting with b\u2032 at the top of the stack. Additionally, b\u2032 is guaranteed to be removed from the stack thanks to a pop-transition on the incoming letter c. 
Finally, we define: Cat( \u20d7 Xa,b,c s,t ) = \u20d7 Xa,b,c s,t \u222aCatShift( \u20d7 Xa,b,c s,t ) \u222aCatChain( \u20d7 Xa,b,c s,t ) Note that the function Cat never removes words from the sets of \u20d7 X, i.e., \u20d7 Xa,b,c s,t \u2286Cat( \u20d7 Xa,b,c s,t ). Iterating the Cat function n \u2208N times allows us to extend the sets of \u20d7 X to words of length at most n that follow some run of A. In particular, Cat characterizes the language of A by w \u2208L(A) if and only if w \u2208Cat\u2217( \u20d7 X\u03b5,\u03b5,\u03b5 qI,qF ) for some qI \u2208I and qF \u2208F. This is formalized by the following lemma. \u25b6Lemma 39. Let A = (Q, I, F, \u2206) be an OPA, and let \u0393 = \u03a3 \u00d7 Q. Considering \u20d7 U a,b,c s,t = \u03b5 when a = b and s = t, otherwise \u20d7 U a,b,c s,t = \u2205. The following holds for all n > 0: Catn(\u20d7 U a,b,c s,t )= \b u | (s, uc, \u03b1) \u2217(t, c, \u03b2), |u| = n, \u03b1 \u2208\u0398a, \u03b2 \u2208\u0398b, au \u2208b \u03a3\u2217 \u2264 \u22d6, uc \u2208b \u03a3\u2217 \u2265 \u22d7, u\u25b7= b \t where, for all a \u2208b \u03a3, the set of stack symbols \u0398a \u2286\u0393 \u222a{\u22a5} is defined by \u0398a = {\u22a5} if a = \u03b5, and \u0398a = {\u27e8a, q\u27e9| q \u2208Q} otherwise. We continue Example 31, showing that Cat agrees with the considered query-basis. \u25b6Example 40. Let \u20d7 U a,b,c s,t = \u03b5 when a = b and s = t, otherwise \u20d7 U a,b,c s,t = \u2205. Thanks to Lemma 39, we have that L(B) = Cat\u2217(\u20d7 U \u03b5,\u03b5,\u03b5 p0,p0). First observe that c, r / \u2208Cat\u2217(\u20d7 U \u03b5,\u03b5,\u03b5 p0,p0). This comes from Lemma 39 and the fact that there is no run of B from p0 to p0 that reads a single letter. Next, we prove that cr, rc \u2208Cat2(\u20d7 U \u03b5,\u03b5,\u03b5 p0,p0). We show that r \u2208Cat(\u20d7 U \u03b5,\u03b5,c p0,p1) by CatChain. Indeed, we have \u03b5 \u2208\u20d7 U \u03b5,\u03b5,r p0,p0, \u03b5 \u2208\u20d7 U r,r,c p1,p1, \u03b5 \u22d6r, and (p0, \u22a5) r (p1, \u27e8r, p1\u27e9\u22a5) c (p1, \u22a5). Then, rc \u2208Cat2(\u20d7 U \u03b5,\u03b5,\u03b5 p0,p0) by CatChain since r \u2208Cat(\u20d7 U \u03b5,\u03b5,c p0,p1), \u03b5 \u2208\u20d7 U c,c,\u03b5 p0,p0, \u03b5 \u22d6c, and (p1, \u22a5) c (p0, \u27e8c, p1\u27e9\u22a5) \u03b5 (p1, \u22a5). We show that r \u2208Cat(\u20d7 U c,r,\u03b5 p1,p0) by CatShift. Indeed, we have \u03b5 \u2208\u20d7 U c,c,r p1,p1, \u03b5 \u2208\u20d7 U r,r,\u03b5 p0,p0, and (p1, \u27e8c, p\u27e9\u22a5) r (p0, \u27e8r, p\u27e9\u22a5), for all p \u2208{p0, p1}. Then, cr \u2208Cat2(\u20d7 U \u03b5,\u03b5,\u03b5 p0,p0) by CatChain since \u03b5 \u2208\u20d7 U \u03b5,\u03b5,c p0,p0, r \u2208Cat(\u20d7 U c,r,\u03b5 p1,p0), \u03b5 \u22d6c, (p0, \u22a5) c (p1, \u27e8c, p0\u27e9\u22a5), and (p0, \u27e8r, p0\u27e9) \u03b5 (p0, \u22a5). \fT. A. Henzinger and P. Kebis and N. Mazzocchi and N. E. Sara\u00e7 15 The computation of a query-basis for deciding whether L1 is a subset of L2 consists of iterating Cat to collect enough words to obtain a vector of finite bases with respect to the quasi-order \u227cthat is a language abstraction of L2. In other words, we search for n \u2208N such that Catn( \u20d7 Xa,b,c s,t ) is a basis for limk7\u2192\u221eCatk(\u20d7 U a,b,c s,t ) with respect to \u227c. The following lemma shows that when B(Catn( \u20d7 Xa,b,c s,t ) \u227cCatn+1( \u20d7 Xs,b,c s,t )) holds for some n \u2208N, then B(Catn( \u20d7 Xa,b,c s,t ) \u227climk7\u2192\u221eCatk( \u20d7 Xa,b,c s,t )) holds also, as long as the used quasi-order is chain-monotonic. 
\u25b6Lemma 41. Let \u227cbe a chain-monotonic quasi-order over b \u03a3\u2217. For every A = (Q, I, F, \u2206) and \u20d7 X, \u20d7 Y such that B( \u20d7 Xa,b,c s,t \u227c\u20d7 Y a,b,c s,t ) holds for all s, t \u2208Q and all a, b, c \u2208\u03a3 \u222a{\u03b5}, we have B(Cat( \u20d7 Xa,b,c s,t ) \u227cCat(\u20d7 Y a,b,c s,t )) holds also for all s, t \u2208Q and all a, b, c \u2208\u03a3 \u222a{\u03b5}. Input: an OPL L1 given by the OPA (Q, I, F, \u2206) Input: a language L2 with a procedure deciding if w \u2208L2 Input: a quasi-order \u227cthat is a language abstraction of L2 Output: Returns ok if L1 \u2286L2 and ko otherwise 1 Function: 2 let \u20d7 U as \u20d7 U a,b,c s,t := \u03b5 if a = b \u2227s = t else \u20d7 U a,b,c s,t := \u2205 3 \u20d7 X := \u20d7 U 4 repeat 5 let \u20d7 X as \u20d7 Xa,b,c s,t := Cat( \u20d7 Xa,b,c s,t ) 6 until B( \u20d7 Xa,b,c s,t \u227cCat( \u20d7 Xa,b,c s,t )) for all s, t \u2208Q and all a, b, c \u2208\u03a3 \u222a{\u03b5} 7 for each (qI, qF ) \u2208I \u00d7 F do 8 for each w \u2208\u20d7 X\u03b5,\u03b5,\u03b5 qI,qF do 9 if w / \u2208L2 then return ko 10 return ok Figure 7 Antichain inclusion algorithm. Our inclusion algorithm is given in Figure 7. We can prove that it always terminates thanks to the finite base property of language abstractions. Additionally, its correctness is based on the following: Lemmas 39 and 41 ensure that the repeat-until loop computes a basis of the language L1 given by an OPA while the language saturation ensures the completeness of this basis with respect to the inclusion problem. \u25b6Theorem 42. The algorithm from Figure 7 terminates and decides language inclusion. We establish that our inclusion algorithm for OPAs is in ExpTime as a consequence of Lemma 26, Remark 34, the facts that the vector \u20d7 X maintains polynomially many sets of words and the membership problem for OPAs is in PTime (Remark 17). We recall that inclusion and universality are ExpTime-C for both OPLs and VPLs [3, 43]. \u25b6Theorem 43. For all OPAs A, B with respectively nA, nB states and m input letters, the inclusion algorithm from Figure 7 with \u2a7dB as the language abstraction quasi-order decides if L(A) \u2286L(B) in time O(m \u00d7 nA)O(m\u00d7nB)O(1). 5" + }, + { + "url": "http://arxiv.org/abs/2012.08185v2", + "title": "Scalable Verification of Quantized Neural Networks (Technical Report)", + "abstract": "Formal verification of neural networks is an active topic of research, and\nrecent advances have significantly increased the size of the networks that\nverification tools can handle. However, most methods are designed for\nverification of an idealized model of the actual network which works over real\narithmetic and ignores rounding imprecisions. This idealization is in stark\ncontrast to network quantization, which is a technique that trades numerical\nprecision for computational efficiency and is, therefore, often applied in\npractice. Neglecting rounding errors of such low-bit quantized neural networks\nhas been shown to lead to wrong conclusions about the network's correctness.\nThus, the desired approach for verifying quantized neural networks would be one\nthat takes these rounding errors into account. In this paper, we show that\nverifying the bit-exact implementation of quantized neural networks with\nbit-vector specifications is PSPACE-hard, even though verifying idealized\nreal-valued networks and satisfiability of bit-vector specifications alone are\neach in NP. 
Furthermore, we explore several practical heuristics toward closing\nthe complexity gap between idealized and bit-exact verification. In particular,\nwe propose three techniques for making SMT-based verification of quantized\nneural networks more scalable. Our experiments demonstrate that our proposed\nmethods allow a speedup of up to three orders of magnitude over existing\napproaches.", + "authors": "Thomas A. Henzinger, Mathias Lechner, \u0110or\u0111e \u017dikeli\u0107", + "published": "2020-12-15", + "updated": "2022-04-05", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction Deep neural networks for image classi\ufb01cation typically consist of a large number of sequentially composed layers. Computing the output of such a network for a single input sample may require more than a billion \ufb02oating-point operations (Tan and Le 2019). Consequently, deploying a trained deep neural network imposes demanding requirements on the computational resources available at the computing device that runs the network. Quantization of neural networks is a technique that reduces the computational cost of running a neural network by reducing the arithmetic precision of computations inside the network (Jacob et al. 2018). As a result, quantization has been widely adapted in industry for deploying neural networks in a resource-friendly way. For instance, Tesla\u2019s Autopilot Hardware 3.0 is designed for running 8-bit quantized neural networks (wikichip.org (accessed December 14, 2020)). Copyright \u00a9 2021, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. The veri\ufb01cation problem for neural networks consists of checking validity of some input-output relation. More precisely, given two conditions over inputs and outputs of the network, the goal is to check if for every input sample which satis\ufb01es the input condition, the corresponding output of the neural network satis\ufb01es the output condition. Veri\ufb01cation of neural networks has many important practical applications such as checking robustness to adversarial attacks (Szegedy et al. 2013; Tjeng, Xiao, and Tedrake 2019), proving safety in safety-critical applications (Huang et al. 2017; Lechner et al. 2021a,b) or output range analysis (Dutta, Chen, and Sankaranarayanan 2019), to name a few. There are many ef\ufb01cient methods for veri\ufb01cation of neural networks (e.g. (Katz et al. 2017; Tjeng, Xiao, and Tedrake 2019; Bunel et al. 2018)), however most of them ignore rounding errors in computations. The few approaches that can handle the semantics of rounding operations are overapproximation-based methods, i.e., incomplete veri\ufb01cation (Singh et al. 2018, 2019). The imprecision introduced by quantization stands in stark contrast with the idealization made by veri\ufb01cation methods for standard neural networks, which disregards rounding errors that appear due to the network\u2019s semantics. Consequently, veri\ufb01cation methods developed for standard networks are not sound for and cannot be applied to quantized neural networks. Indeed, recently it has been shown that speci\ufb01cations that hold for a \ufb02oating-point representation of a network need not necessarily hold after quantizing the network (Giacobbe, Henzinger, and Lechner 2020). As a result, specialized veri\ufb01cation methods that take quantization into account need to be developed, due to more complex semantics of quantized neural networks. 
Groundwork on such methods demonstrated that special encodings of networks in terms of Satis\ufb01ability Modulo Theories (SMT) (Clark and Cesare 2018) with bitvector (Giacobbe, Henzinger, and Lechner 2020) or \ufb01xedpoint (Baranowski et al. 2020) theories present a promising approach towards the veri\ufb01cation of quantized networks. However, the size of networks that these tools can handle and runtimes of these approaches do not match the ef\ufb01ciency of advanced veri\ufb01cation methods developed for standard networks like Reluplex(Katz et al. 2017) and Neurify (Wang et al. 2018a). In this paper, we provide \ufb01rst evidence that the veri\ufb01ca\ftion problem for quantized neural networks is harder compared to veri\ufb01cation of their idealized counterparts, thus explaining the scalability-gap between existing methods for standard and quantized network veri\ufb01cation. In particular, we show that verifying quantized neural networks with bitvector speci\ufb01cations is PSPACE-hard, despite the satis\ufb01ability problem of formulas in the given speci\ufb01cation logic being in NP. As veri\ufb01cation of neural networks without quantization is known to be NP-complete (Katz et al. 2017), this implies that the veri\ufb01cation of quantized neural networks is a harder problem. We then address the scalability limitation of SMT-based methods for veri\ufb01cation of quantized neural networks, and propose three techniques for their more ef\ufb01cient SMT encoding. First, we introduce a technique for identifying those variables and constraints whose value can be determined in advance, thus decreasing the size of SMT-encodings of networks. Second, we show how to encode variables as bit-vectors of minimal necessary bit-width. This signi\ufb01cantly reduces the size of bit-vector encoding of networks in (Giacobbe, Henzinger, and Lechner 2020). Third, we propose a redundancy elimination heuristic which exploits bitlevel redundancies occurring in the semantics of the network. Finally, we propose a new method for the analysis of the quantized network\u2019s reachable value range, which is based on abstract interpretation and assists our new techniques for SMT-encoding of quantized networks. We evaluate our approach on two well-studied adversarial robustness veri\ufb01cation benchmarks. Our evaluation demonstrates that the combined effect of our techniques is a speed-up of over three orders of magnitude compared to the existing tools. The rest of this work is organized as follows: First, we provide background and discuss related works on the veri\ufb01cation of neural networks and quantized neural networks. We then start with our contribution by showing that the veri\ufb01cation problem for quantized neural networks with bitvector speci\ufb01cations is PSPACE-hard. In the following section, we propose several improvements to the existing SMTencodings of quantized neural networks. Finally, we present our experimental evaluation to assess the performance impacts of our techniques. Background and Related work A neural network is a function f : Rn \u2192Rm that consists of several layers f = l1 \u25e6l2 \u25e6\u00b7 \u00b7 \u00b7 \u25e6lk that are sequentially composed, with each layer parameterized by learned weight values. Commonly found types of layers are linear l(x) = Wx + b, W \u2208Rno\u00d7ni, b \u2208Rno, (1) ReLU l(x) = max{x, 0}, and convolutional layers (LeCun et al. 1998). In practice, the function f is implemented by \ufb02oatingpoint arithmetic instead of real-valued computations. 
To distinguish a neural network from its approximation, we de\ufb01ne an interpretation JfK as a map which assigns a new function to each network, i.e. JK : (Rn \u2192Rm) \u2192(D \u2192Rm), (2) where D \u2282Rn is the admissible input domain. For instance, we denote by JfKR : f 7\u2192f the idealized real-valued abstraction of a network f, whereas JfK\ufb02oat32 denotes its \ufb02oating-point implementation, i.e. the realization of f using 32-bit IEEE \ufb02oating-point (Kahan 1996) instead of real arithmetic. Evaluating f, even under \ufb02oating-point interpretation, can be costly in terms of computations and memory resources. In order to reduce these resource requirements, networks are usually quantized before being deployed to end devices (Jacob et al. 2018). Formally, quantization is an interpretation JfKint-k that evaluates a network f which uses k-bit \ufb01xed-point arithmetic (Smith et al. 1997), e.g. 4 to 8 bits. Let [Z]k = {0, 1}k denote the set of all bit-vectors of bit-width k. For each layer l : [Z]ni k \u2192[Z]n0 k in JfKint-k, we de\ufb01ne its semantics by de\ufb01ning l(x1, . . . , xni) = (y1, . . . , yn0) as follows: x\u2032 i = ni X j=1 wijxj + bi, (3) x\u2032\u2032 i = round(x\u2032 i, ki) = \u230ax\u2032 i \u00b7 2\u2212ki\u230b, and (4) yi = max{0, min{2Ni \u22121, x\u2032\u2032 i }}, (5) Here, wi,j and bi for each 1 \u2264j \u2264ni and 1 \u2264i \u2264n0 denote the learned weights and biases of f, and ki and Ni denote the bit-shift and the cut-off value associated to each variable yi, respectively. Eq. (3) multiplies the inputs xj with the weight values wij and adds the bias bi, eq. (4) rounds the result to the nearest valid k-bit \ufb01xed-point value, and eq. (5) is a non-linear ReLU-N activation function 1. An illustration of how the computations inside a network differ based on the used interpretation is shown in Fig. 1. Veri\ufb01cation of neural networks The veri\ufb01cation problem for a neural network and its given interpretation consists of verifying some input-output relation. More formally, given a neural network f, its interpretation JfK and two predicates \u03d5 and \u03c8 over the input domain D and output domain Rm of JfK, we want to check validity of the following formula (i.e. whether it holds for each x \u2208D) \u03d5(x) \u2227JfK(x) = y = \u21d2\u03c8(y). (6) We refer to the formula in eq. (6) as the formal speci\ufb01cation that needs to be proved. In order to formally verify a neural network, it is insuf\ufb01cient to just specify the network without also providing a particular interpretation. A property that holds with respect to one interpretation need not necessarily remain true if we consider a different interpretation. For example, robustness of the real-valued abstraction does not imply robustness of the \ufb02oating-point implementation of a network (Giacobbe, Henzinger, and Lechner 2020; Jia and Rinard 2020). Ideally, we would like to verify neural networks under the exact semantics that are used for running networks on 1Note that for quanitzed neural networks, the double-side bounded ReLU-N activation is preferred over the standard ReLU activation function (Jacob et al. 2018) \fA) Idealized real-valued network JfKR 0.94374 . . . 1.382723 . . . 2.57799431 . . . 
Ideally, we would like to verify neural networks under the exact semantics that are used for running networks on the end device, i.e., JfKfloat32 most of the time.

[Figure 1: Illustration of how different interpretations of the same network run with different numerical precision. A) JfKR assumes infinite precision. B) JfKfloat32 rounds the mantissa according to the IEEE 754 standard. C) JfKint-8 rounds to a fixed number of digits before and after the comma. (Note that the figure serves as a hypothetical example in decimal format; the actual computations run with the base-2 representation.)]

However, as verification methods for IEEE floating-point arithmetic are extremely inefficient, research has focused on verifying the idealized real-valued abstraction JfKR of f. In particular, efficient methods have been developed for a popular type of networks that only consist of linear and ReLU operations (Figure 2 a) (Katz et al. 2017; Ehlers 2017; Tjeng, Xiao, and Tedrake 2019; Bunel et al. 2018). The piecewise linearity of such ReLU networks allows the use of Linear Programming (LP) techniques, which make the verification methods more efficient. The underlying verification problem of ReLU networks with linear inequality specifications was shown to be NP-complete in the number of ReLU operations (Katz et al. 2017); nevertheless, advanced tools scale beyond toy networks. Although these methods can handle networks of large size, they build on the assumption that

JfKfloat32 ≈ JfKR,   (7)

i.e. that the rounding errors introduced by the IEEE floating-point arithmetic of both the network and the verification algorithm can be neglected. It has recently been shown that this need not always be true. For example, Jia and Rinard (Jia and Rinard 2020) crafted adversarial counterexamples to the floating-point implementation of a neural network whose idealized interpretation was verified to be robust against such attacks, by exploiting subtle numerical differences between JfKfloat32 and JfKR.

[Figure 2: Illustration of a) the ReLU activation function y = JReLU(x)KR under real-valued semantics, and b) the ReLU-N activation y = JReLU-N(x)Kint-k under fixed-point semantics.]

Verification of quantized neural networks
The low numerical precision of few-bit fixed-point arithmetic implies that JfKint-k ≠ JfKR. Indeed, (Giacobbe, Henzinger, and Lechner 2020) constructed a prototypical network that either satisfies or violates a formal specification, depending on the numerical precision used to evaluate the network. Moreover, they observed such discrepancies in networks found in practice. Thus, no formal guarantee on JfKint-k can be obtained by verifying JfKR or JfKfloat32. In order to verify fixed-point implementations of (i.e. quantized) neural networks, new approaches are required. Fig. 2 depicts the ReLU activation function for idealized real-valued ReLU networks and for quantized ReLU networks, respectively. The activation function under fixed-point semantics consists of an exponential number of piecewise constant intervals, thus making the LP-based techniques, which otherwise work well for real-valued networks, extremely inefficient. Hence, the approaches developed for idealized real-valued ReLU networks cannot be efficiently applied to quantized networks.
Existing veri\ufb01cation methods for quantized neural networks are based on bit-exact Boolean Satis\ufb01ability (SAT) and SMT encodings. For 1-bit networks, i.e., binarized neural networks, Narodytska et al. (Narodytska et al.) and (Cheng et al. 2018) proposed to encode the network semantics and the formal speci\ufb01cation into an SAT formula, which is then checked by an off-the-shelf SAT solver. While their approach could handle networks of decent size, the use of SAT-solving is limited to binarized networks, which are not very common in practice. (Giacobbe, Henzinger, and Lechner 2020) proposed to verify many-bit quantized neural network by encoding their semantics and speci\ufb01cations into quanti\ufb01er-free bit-vector SMT (QF_BV) formulas. The authors showed that, by reordering linear summations inside the network, such monolithic bit-vector SMT encodings could scale to the veri\ufb01cation of small but interestingly sized networks. (Baranowski et al. 2020) introduced an SMT theory for \ufb01xed-point arithmetic and showed that the semantics of quantized neural networks could be encoded in this theory very naturally. However, as the authors only proposed prototype solvers for reference purposes, the size of the veri\ufb01ed networks was limited. \fLimitations of neural network veri\ufb01cation The existing techniques for veri\ufb01cation of idealized real-valued abstractions of neural networks have significantly increased the size of networks that can be veri\ufb01ed (Ehlers 2017; Katz et al. 2017; Bunel et al. 2018; Tjeng, Xiao, and Tedrake 2019). However, scalability remains the key challenge hindering formal veri\ufb01cation of neural networks in practice. For instance, even the largest networks veri\ufb01ed by the existing methods (Ruan, Huang, and Kwiatkowska 2018) are tiny compared to the network architectures used for object detection and image classi\ufb01cation (He et al. 2016). Regarding the veri\ufb01cation of quantized neural networks, no advanced techniques aiming at performance improvements have been studied so far. In this paper, we address the scalability of quantized neural network veri\ufb01cation methods that rely on SMT-solving. Hardness of Veri\ufb01cation of Quantized Neural Networks The size of quantized neural networks that existing veri\ufb01cation methods can handle is signi\ufb01cantly smaller compared to the real arithmetic networks that can be veri\ufb01ed by the state-of-the-art tools like (Katz et al. 2017; Tjeng, Xiao, and Tedrake 2019; Bunel et al. 2018). Thus, a natural question is whether this gap in scalability is only because existing methods for quantized neural networks are less ef\ufb01cient, or if the veri\ufb01cation problem for quantized neural networks is computationally harder. In this section, we study the computational complexity of the veri\ufb01cation problem for quantized neural networks. For idealized real arithmetic interpretation of neural networks, it was shown in (Katz et al. 2017) that, if predicates on inputs and outputs are given as conjunctions of linear inequalities, then the problem is NP-complete. The fact that the problem is NP-hard is established by reduction from 3-SAT, and the same argument can be used to show that the veri\ufb01cation problem for quantized neural networks is also NP-hard. In this work, we argue that the veri\ufb01cation problem for quantized neural networks with bit-vector speci\ufb01cations is in fact PSPACE-hard, and thus harder then verifying real arithmetic neural networks. 
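To give a flavor of the bit-vector encodings discussed above, the following hedged sketch encodes a single quantized neuron and a toy specification in the QF_BV style using the Z3 Python API; the weights, widths, and property are invented for illustration and do not reproduce the encoding of (Giacobbe, Henzinger, and Lechner 2020).

```python
from z3 import BitVec, BitVecVal, Solver, If, sat

ACC = 16                                   # accumulator width, wide enough to avoid overflow
x1, x2 = BitVec('x1', ACC), BitVec('x2', ACC)
w1, w2, b = BitVecVal(3, ACC), BitVecVal(-2, ACC), BitVecVal(5, ACC)

acc = w1 * x1 + w2 * x2 + b                # eq. (3) over bit-vectors
shifted = acc >> 2                         # eq. (4), arithmetic shift as fixed-point rounding
zero, top = BitVecVal(0, ACC), BitVecVal(2**6 - 1, ACC)
y = If(shifted < zero, zero, If(shifted > top, top, shifted))   # eq. (5), ReLU-N as If-Then-Else

s = Solver()
s.add(x1 >= 0, x1 <= 63, x2 >= 0, x2 <= 63)   # input predicate phi
s.add(y == top)                                # negation of a toy output predicate psi
print('counterexample:', s.model() if s.check() == sat else None)
```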
Moreover, we show that this holds even for the special case when there are no constraints on the inputs of the network, i.e. when the predicate on inputs is assumed to be a tautology. The veri\ufb01cation problem for a quantized neural network f that we consider consists of checking validity of a given input-output relation formula JfKint-k(x) = y = \u21d2\u03c8(y). Here, JfKint-k is the k-bit \ufb01xed point arithmetic interpretation of f, and \u03c8 is a predicate in some speci\ufb01cation logic over the outputs of JfKint-k. Equivalently, we may also check satis\ufb01ability of the dual formula JfKint-k(x) = y \u2227\u00ac\u03c8(y). (8) In order to study complexity of the veri\ufb01cation problem, we also need to specify the speci\ufb01cation logic to which formula \u03c8 belongs. In this work, we study hardness with respect to the fragment QF_BV2bw of the \ufb01xed-size bit-vector logic QF_BV2 (Kov\u00e1sznai, Fr\u00f6hlich, and Biere 2016). The fragment QF_BV2bw allows bit-wise logical operations (such as bit-wise conjunction, disjunction and negation) and the equality operator. The index 2 in QF_BV2bw is used to denote that the constants and bitwidths are given in binary representation. It was shown in (Kov\u00e1sznai, Fr\u00f6hlich, and Biere 2016) that the satis\ufb01ability problem for formulas in QF_BV2bw is NP-complete. Even though QF_BV2bw itself allows only bit-vector operations and not linear integer arithmetic, we show that by introducing dummy output variables in JfKint-k we may still encode formal speci\ufb01cations on outputs that are boolean combinations of linear inequalities over network\u2019s outputs. Thus, this speci\ufb01cation logic is suf\ufb01ciently expressive to encode formal speci\ufb01cations most often seen in practice. Let y1, . . . , ym denote output variables of JfKint-k. In order to encode an inequality of the form a1y1 +\u00b7 \u00b7 \u00b7+amym +b \u22650 into the output speci\ufb01cation, we do the following: \u2022 Introduce an additional output neuron \u02dc y and a directed edge from each output neuron yi to \u02dc y. Let ai be the weight of an edge from yi to \u02dc y, b be the bias term of \u02dc y, k \u22121 be the bit-shift value of \u02dc y, and N = k be the number of bits de\ufb01ning the cut-off value of \u02dc y. Then \u02dc y = ReLU-N(round(2\u2212(k\u22121)(a1y1 + \u00b7 \u00b7 \u00b7 + amym + b))). Thus, as we work with bit-vectors of bit-width k, \u02dc y is just the sign bit of a1y1 + \u00b7 \u00b7 \u00b7 + asys + b preceded by zeros. \u2022 As a1y1 + \u00b7 \u00b7 \u00b7 + asys + b \u22650 holds if and only if the sign bit of a1y1 + \u00b7 \u00b7 \u00b7 + asys + b is 0, in order to encode the inequality into the output speci\ufb01cation it suf\ufb01ces to encode that \u02dc y = 0, which is a formula expressible in QF_BV2bw. By doing this for each linear inequality in the speci\ufb01cation and since the logical operations are allowed by QF_BV2bw, it follows that we may use QF_BV2bw to encode boolean combinations of linear inequalities over outputs as formal speci\ufb01cations that are to be veri\ufb01ed. Our main result in this section is that, if \u03c8 in eq. (8) is assumed to be a formula in QF_BV2bw, then the veri\ufb01cation problem for quantized neural networks is PSPACEhard. Since checking satis\ufb01ability of \u03c8 can be done in nondeterministic polynomial time, this means that the additional hardness really comes from the quantized neural networks. Theorem 1 (Complexity of veri\ufb01cation of QNNs). 
If the predicate on outputs is assumed to be a formula in QF_BV2bw, the veri\ufb01cation problem for quantized neural networks is PSPACE-hard. Proof sketch. Here we summarize the key ideas of our proof. For the complete proof, see the appendix. To prove PSPACE-hardness, we exhibit a reduction from TQBF which is known to be PSPACEcomplete (Arora and Barak 2009). TQBF is the problem of deciding whether a quanti\ufb01ed boolean formula (QBF) of the form Q1x1. Q2x2. . . . Qnxn. \u03c6(x1, x2, . . . , xn) is true, where each Qi \u2208{\u2203, \u2200} and \u03c6 is a quanti\ufb01er-free formula in propositional logic over the variables x1, . . . , xn. A QBF \fformula is true if it admits a truth table for each existentially quanti\ufb01ed variable xi, where the truth table for xi speci\ufb01es a value in {0, 1} for each valuation of those universally quanti\ufb01ed variables xj on which xi depends (i.e. xj with j < i). Thus, the size of each truth table is at most 2k, where k is the total number of universally quanti\ufb01ed variables in the formula. In our reduction, given an instance of the TQBF problem Q1x1. Q2x2. . . . Qnxn. \u03c6(x1, x2, . . . , xn) we map it to the corresponding veri\ufb01cation problem as follows. The interpretation JfKint-k of the neural network f consists of n+ 1 disjoint gadgets f1, . . . , fn, g, each having a single input and a single output neuron of bit-width 2k. Note that bit-widths are given in binary representation, thus this is still polynomial in the size of the problem. We use these gadgets to encode all possible inputs to the QBF formula, whereas the postcondition in the veri\ufb01cation problem encodes the quanti\ufb01er-free formula itself. For a universally quanti\ufb01ed variable xi, the output of fi is always a constant vector encoding the values of xi in each of the 2k valuations of universally quanti\ufb01ed variables (for a \ufb01xed ordering of the valuations). For existentially quanti\ufb01ed xi, we use fi and its input neuron to encode 2k possible choices for the value of xi, one for each valuation of universally quanti\ufb01ed variables, and thus to encode the truth table for xi. Finally, the gadget g is used to return a constant bit-vector 1 of bit-width 2k on any possible input. The predicate \u03c8 on the outputs is then de\ufb01ned as \u03c8 := (\u03c6bw(y1, . . . , yn) = 1), where \u03c6bw is the quanti\ufb01er-free formula in QF_BV2bw identical to \u03c6, with only difference being that the inputs of \u03c6bw are bit-vectors of bit-width 2k instead of boolean variables, and logical operations are also de\ufb01ned over bitvectors (again, since bit-widths are encoded in binary representation, this is of polynomial size). The output of \u03c6bw is thus tested if it equals 1 for each valuation of universally quanti\ufb01ed variables and the corresponding values of existentially quanti\ufb01ed variables de\ufb01ned by the truth tables. Our construction ensures that any satisfying input for the neural networks induces satisfying truth tables for the TQBF instance and vice-versa, which completes the reduction. Theorem 1 is to our best knowledge the \ufb01rst theoretical result which indicates that the veri\ufb01cation problem for quantized neural networks is harder than verifying their idealized real arithmetic counterparts. 
It sheds some light on the scalability gap of existing SMT-based methods for their veri\ufb01cation, and shows that this gap is not solely due to practical inef\ufb01ciency of existing methods for quantized neural networks, but also due to the fact that the problem is computationally harder. While Theorem 1 gives a lower bound on the hardness of verifying quantized neural networks, it is easy to see that an upper bound on the complexity of this problem is NEXP since the inputs to the veri\ufb01cation problem are of size that is exponential in the size of the problem. Closing the gap and identifying tight complexity bounds is an interesting direction of future work. Note though that the speci\ufb01cation logic QF_BV2bw used to encode predicates over outputs is strictly more expressive than what we need to express boolean combinations of linear integer inequalities, which is the most common form of formal speci\ufb01cations seen in practice. This is because QF_BV2bw also allows logical operations over bit vectors, and not just over single bits. Nevertheless, our result presents the \ufb01rst step towards understanding computational hardness of the quantized neural network veri\ufb01cation problem. Improvements to bit-vector SMT-encodings In this section, we study ef\ufb01cient SMT-encodings of quantized neural networks that would improve scalability of veri\ufb01cation methods for them. In particular, we propose three simpli\ufb01cations to the monolithic SMT encoding of eq. (3), (4), and (5) introduced in (Giacobbe, Henzinger, and Lechner 2020), which encodes quantized neural networks and formal speci\ufb01cations as formulas in the QF_BV2 logic : I) Remove dead branches of the If-Then-Else encoding of the activation function in eq. (5), i.e., branches that are guaranteed to never be taken; II) Allocate only the minimal number of bits for each bit-vector variable in the formula; and III) Eliminate sub-expressions from the summation in eq. (3). To obtain the information needed by the techniques I and II we further propose an abstract interpretation framework for quantized neural networks. Abstract interpretation analysis Abstract interpretation (Cousot and Cousot 1977) is a technique for constructing over-approximations to the behavior of a system. Initially developed for software veri\ufb01cation, the method has recently been adapted to robustness veri\ufb01cation of neural networks and is used to over-approximate the output range of variables in the network. Instead of considering all possible subsets of real numbers, it only considers an abstract domain which consists of subsets of suitable form (e.g. intervals, boxes or polyhedra). This allows modeling each operation in the network in terms of operations over the elements of the abstract domain, thus over-approximating the semantics of the network. While it leads to some impreision, abstract interpretation allows more ef\ufb01cient output range analysis for variables. Due to its over-approximating nature, it remains sound for verifying neural networks. Interval (Wang et al. 2018b; Tjeng, Xiao, and Tedrake 2019), zonotope (Mirman, Gehr, and Vechev 2018; Singh et al. 2018), and convex polytope (Katz et al. 2017; Ehlers 2017; Bunel et al. 2018; Wang et al. 2018a) abstractions have emerged in literature as ef\ufb01cient and yet precise choices for the abstract domains of real-valued neural networks. The obtained abstract domains have been used for output range analysis (Wang et al. 
2018b), as well as for removing decision points from the search process of complete verification algorithms (Tjeng, Xiao, and Tedrake 2019; Katz et al. 2017). One important difference between standard and quantized networks is the use of double-sided bounded activation functions in quantized neural networks, i.e., ReLU-N instead of ReLU (Jacob et al. 2018). This additional non-linear transition, on the one hand, renders linear abstractions less effective, while on the other hand it provides hard upper bounds for each neuron, which bounds the over-approximation error. Consequently, we adopt interval abstractions (IA) on the quantized interpretation of a network to obtain reachability sets for each neuron in the network. As discussed in (Tjeng, Xiao, and Tedrake 2019), using a tighter abstract interpretation poses a tradeoff between verification and pre-processing complexity.

Dead branch removal
Suppose that through our abstract interpretation we obtained an interval [lb, ub] for the input x of a ReLU-N operation y = ReLU-N(x). Then, we can substitute the formulation of the ReLU-N by

y = 0                                if ub ≤ 0,
y = 2^N − 1                          if lb ≥ 2^N − 1,
y = x                                if lb ≥ 0 and ub ≤ 2^N − 1,
y = max{0, x}                        if 0 < ub ≤ 2^N − 1,
y = min{2^N − 1, x}                  if 0 ≤ lb < 2^N − 1,
y = max{0, min{2^N − 1, x}}          otherwise,

which reduces the number of decision points in the SMT formula.

Minimum bit allocation
A k-bit quantized neural network represents each neuron and weight variable by a k-bit integer. However, when computing the values of certain types of layers, such as the linear layer in eq. (1), a wider register is necessary. The binary multiplication of a k-bit weight and a k-bit neuron value results in a number that is represented by 2k bits. Furthermore, summing up n such 2k-bit integers requires

b_naive = 2k + log_2(n) + 1   (9)

bits to be safely represented without resulting in an overflow. Thus, linear combinations are in practice usually computed on 32-bit integer registers. Application of fixed-point rounding and the activation function then reduces the neuron values back to a k-bit representation (Jacob et al. 2018). QF_BV2 reasons over fixed-size bit-vectors, i.e. the bit width of each variable must be fixed in the formula regardless of the variable's value. (Giacobbe, Henzinger, and Lechner 2020) showed that the number of bits used for all weight and neuron variables in the formula affects the runtime of the SMT-solver significantly. For example, omitting the least significant bit of each variable cuts the runtime on average by half. However, the SMT encoding of (Giacobbe, Henzinger, and Lechner 2020) allocates b_naive bits according to eq. (9) for each accumulation variable of a linear layer. Our approach uses the interval [lb, ub] obtained for each variable by abstract interpretation to compute the minimal number of bits necessary to express any value in the interval. As signed bit-vector variables are represented in the two's complement format, we can compute the bit width b of a variable x with computed interval [lb, ub] as

b_minimal = 1 + log_2(max{|lb|, |ub|} + 1).   (10)

Trivially, one can show that b_minimal < b_naive, as |ub| ≤ 2^{2k}·n and |lb| ≤ 2^{2k}·n.
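A minimal sketch (with illustrative function names, not the paper's implementation) of how the interval analysis and the two simplifications above fit together: intervals are propagated through eqs. (3)-(5), the pre-activation bounds select a dead-branch rewriting of ReLU-N, and eq. (10) gives the per-variable bit width, rounded up here to an integer.

```python
import math
import numpy as np

def interval_layer(lb, ub, W, b, k_shift, N_bits):
    # sound interval propagation through one quantized layer (eqs. (3)-(5))
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    acc_lb = W_pos @ lb + W_neg @ ub + b          # bounds of the accumulation
    acc_ub = W_pos @ ub + W_neg @ lb + b
    pre_lb, pre_ub = acc_lb >> k_shift, acc_ub >> k_shift   # rounding is monotone
    top = 2**N_bits - 1
    return (pre_lb, pre_ub), (np.clip(pre_lb, 0, top), np.clip(pre_ub, 0, top))

def bits_minimal(lb, ub):
    # eq. (10): two's-complement width sufficient for any value in [lb, ub]
    return 1 + math.ceil(math.log2(max(abs(int(lb)), abs(int(ub))) + 1))

def simplify_relu_n(lb, ub, N_bits):
    # dead-branch rule: cheapest rewriting of y = ReLU-N(x) given x in [lb, ub]
    top = 2**N_bits - 1
    if ub <= 0:                return '0'
    if lb >= top:              return str(top)
    if lb >= 0 and ub <= top:  return 'x'
    if ub <= top:              return 'max(0, x)'
    if lb >= 0:                return f'min({top}, x)'
    return f'max(0, min({top}, x))'

lb0, ub0 = np.array([0, 0]), np.array([63, 63])
W, b = np.array([[2, -1], [4, 5]]), np.array([1, -2])
(pre_lb, pre_ub), post = interval_layer(lb0, ub0, W, b, k_shift=2, N_bits=6)
print([simplify_relu_n(l, u, 6) for l, u in zip(pre_lb, pre_ub)],
      [bits_minimal(l, u) for l, u in zip(pre_lb, pre_ub)])
```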
Redundant multiplication elimination
Another difference between quantized and standard neural networks is the rounding of the weight values to the nearest representable value of the employed fixed-point format. Consequently, there is a considerable chance that two connections outgoing from the same source neuron will have the same weight value. For instance, assuming an 8-bit network and a uniform weight distribution, the chance of two connections having the same weight value is around 0.4%, compared to the much lower 4 · 10^−8 % for the same scenario happening in a floating-point network. Moreover, many weight values express some subtle form of redundancy on the bit level. For instance, both multiplication by 2 and multiplication by 6 contain a shift operation by 1 digit in their binary representation. Thus, the computations

y_1 = 3 · x_1,   y_2 = 6 · x_1   (11)

can be rewritten as

y_1 = 3 · x_1,   y_2 = y_1 << 1,   (12)

where << is a binary shift to the left by 1 digit. As a result, a multiplication by 6 is replaced by a much simpler shift operation. Based on this motivation, we propose a redundancy elimination heuristic to remove redundant and partially redundant multiplications from the SMT formula. Our heuristic first orders all outgoing weights of a neuron in ascending order and then sequentially applies a rule-matching for each weight value. The rules try to find a simpler way to compute the multiplication of the weight and the neuron value by using already performed multiplications. The algorithm and the rules in full are provided in the appendix. Note that a similar idea was introduced by (Cheng et al. 2018) in the form of a neuron factoring algorithm for the encoding of binarized (1-bit) neural networks into SAT formulas. However, the heuristic of (Cheng et al. 2018) removes redundant additions, whereas we consider bit-level redundancies in multiplications. For many-bit quantization, the probability of two neurons sharing more than one incoming weight is negligible, thus making the neuron factoring proposed in (Cheng et al. 2018) less effective.

Experimental Evaluation
We create an experimental setup to evaluate how much the proposed techniques affect the runtime and efficiency of the SMT-solver. Our reference baseline is the approach of (Giacobbe, Henzinger, and Lechner 2020), which consists of a monolithic and \"balanced\" bit-vector formulation for the Boolector SMT-solver. We implement our techniques on top of this baseline. We limited our evaluation to Boolector, as other SMT-solvers supporting bit-vector theories, such as Z3 (De Moura and Bjørner 2008), CVC4 (Barrett et al. 2011), and Yices (Dutertre 2014), performed much worse in the evaluation of (Giacobbe, Henzinger, and Lechner 2020). Our evaluation comprises two benchmarks. The first evaluation considers the adversarial robustness verification of an image classifier trained on the MNIST dataset (LeCun et al. 1998). In particular, we check the l∞ robustness of networks against adversarial attacks (Szegedy et al. 2013).

Table 1: Number of solved instances of adversarial robustness verification on the MNIST dataset. Absolute numbers, with percentages of checked instances in parentheses.
Attack radius | Baseline (+ Lingeling) | Baseline (+ CaDiCal) | Ours
ε = 1         | 63 (63.6%)             | 92 (92.9%)           | 99 (100.0%)
ε = 2         |  0 (0.0%)              | 20 (20.2%)           | 94 (94.9%)
ε = 3         |  0 (0.0%)              |  2 (2.1%)            | 71 (74.0%)
ε = 4         |  0 (0.0%)              |  1 (1.0%)            | 54 (55.7%)
Table 2: Median / mean runtime of the adversarial robustness verification process per sample. The reported values only account for non-timed-out samples.
Dataset       | Baseline (+ Lingeling) | Baseline (+ CaDiCal) | Ours
MNIST         | 8803 / 8789            | 2798 / 3931          | 5 / 90
Fashion-MNIST | 6927 / 6927            | 3105 / 3474          | 4 / 49

Other norms, such as l1 and l2, can be expressed in bit-vector SMT constraints as well, although with potentially negative effects on the solver runtime. In the second evaluation, we repeat the experiment on the slightly more complex Fashion-MNIST dataset (Xiao, Rasul, and Vollgraf 2017). All experiments are run on a 14-core Intel W-2175 CPU with 64GB of memory. We used Boolector (Niemetz, Preiner, and Biere 2015) with the SAT solvers Lingeling (Biere 2017) (baseline only) and CaDiCal (Biere 2019) (baseline + our improvements) as the SAT backend. The adversarial robustness specification can be expressed as

|x − x_i|_∞ ≤ ε ∧ y = JfKint-k(x) ⟹ y = y_i,   (13)

where (x_i, y_i) is a human-labeled test sample and ε is a fixed attack radius. As shown in eq. (13), the space of possible attacks increases with ε. Consequently, we evaluate with different attack radii ε and study the runtimes individually. In particular, for MNIST we check the first 100 test samples with an attack radius of ε = 1, the next 100 test samples with ε = 2, and the next 200 test samples with ε = 3 and ε = 4, respectively. For our Fashion-MNIST evaluation, we reduce the number of samples to 50 per attack radius value for ε > 2 due to time and compute limitations. The network studied in our benchmark consists of four fully-connected layers (784, 64, 32, 10), resulting in 52,650 parameters in total. It was trained using a quantization-aware training scheme with a 6-bit quantization. The results for the MNIST evaluation in terms of solved instances and median solver runtime are shown in Table 1 and Table 2, respectively. Table 3 and Table 2 show the results for the Fashion-MNIST benchmark.

Ablation analysis
We perform an ablation analysis where we re-run our robustness evaluation with one of our proposed techniques disabled. The objective of our ablation analysis is to understand how the individual techniques affect the observed efficiency gains. Due to time and computational limitations we focus our ablation experiments on MNIST exclusively.

Table 3: Number of solved instances of adversarial robustness verification on the Fashion-MNIST dataset. Absolute numbers, with percentages of checked instances in parentheses.
Attack radius | Baseline (+ Lingeling) | Baseline (+ CaDiCal) | Ours
ε = 1         | 2 (2.3%)               | 44 (50.6%)           | 76 (87.4%)
ε = 2         | 0 (0.0%)               |  7 (7.8%)            | 73 (81.1%)
ε = 3         | 0 (0.0%)               |  1 (2.3%)            | 27 (62.8%)
ε = 4         | 0 (0.0%)               |  0 (0.0%)            | 18 (40.9%)

Table 4: Results of our ablation analysis on the MNIST dataset. The cumulative runtime only accounts for non-timed-out samples.
Method                     | Total solved instances | Cumulative runtime
No redundancy elimination  | 316 (80.8%)            | 7.7 h
No minimum bitwidth        | 315 (80.6%)            | 5.1 h
No ReLU simplify           | 88 (22.5%)             | 83.2 h
No abstract interpretation | 107 (27.4%)            | 126.0 h
All enabled                | 318 (81.3%)            | 7.9 h

The results in Table 4 show that the highest number of solved instances was achieved when all our techniques were enabled. Nonetheless, Table 4 demonstrates that these gains are not equally distributed across the three techniques.
In particular, the ReLU simpli\ufb01cation has a much higher contribution for explaining the gains compared to the redundancy elimination and minimum bitwidth methods. The limited bene\ufb01ts observed for these two techniques may be explain by the inner workings of the Boolector SMT-solver. The Boolector SMT-solver (Niemetz, Preiner, and Biere 2015) is based on a portfolio approach which sequentially applies several different heuristics to \ufb01nd a satisfying assignment of the input formula (Wintersteiger, Hamadi, and De Moura 2009). In particular, Boolector starts with fast but incomplete local search heuristics and falls back to slower but complete bit-blasting (Clark and Cesare 2018) in case the incomplete search is unsuccessful (Niemetz, Preiner, and Biere 2019). Although our redundancy elimination and minimum bitwidth techniques simplify the bit-blasted representation of the encoding, it introduces additional dependencies between different bit-vector variables. As a result, we believe these extra dependencies make the local search heuristics of Boolector less effective and thus enabling only limited performance improvements." + }, + { + "url": "http://arxiv.org/abs/1005.0747v1", + "title": "Hybrid Numerical Solution of the Chemical Master Equation", + "abstract": "We present a numerical approximation technique for the analysis of\ncontinuous-time Markov chains that describe networks of biochemical reactions\nand play an important role in the stochastic modeling of biological systems.\nOur approach is based on the construction of a stochastic hybrid model in which\ncertain discrete random variables of the original Markov chain are approximated\nby continuous deterministic variables. We compute the solution of the\nstochastic hybrid model using a numerical algorithm that discretizes time and\nin each step performs a mutual update of the transient probability distribution\nof the discrete stochastic variables and the values of the continuous\ndeterministic variables. We implemented the algorithm and we demonstrate its\nusefulness and efficiency on several case studies from systems biology.", + "authors": "Thomas A. Henzinger, Maria Mateescu, Linar Mikeev, Verena Wolf", + "published": "2010-05-05", + "updated": "2010-05-05", + "primary_cat": "q-bio.QM", + "cats": [ + "q-bio.QM", + "cs.NA", + "60J28" + ], + "main_content": "INTRODUCTION A common dynamical model in systems biology is a system of ordinary di\ufb00erential equations (ODEs) that describes the time evolution of the concentrations of certain proteins in a biological compartment. This macroscopic model is based on the theory of chemical kinetics and assumes that the concentrations of chemical species in a well-stirred system change deterministically and continuously in time. It provides an appropriate description of a chemically reacting system as long as the numbers of molecules of the chemical species are large. However, in living cells the chemical populations can be low (e.g. a single DNA molecule, tens or a few hundreds of RNA or protein molecules). In this case the underlying assumptions of the ODE approach are violated and a more detailed model is necessary, which takes into account the inherently discrete and stochastic nature of chemical reactions [24, 30, 8, 27, 34]. The theory of stochastic chemical kinetics provides an appropriate description by means of a discrete-state Markov process, that is, a continuous-time Markov chain (CTMC) that represents the chemical populations as random variables [9, 10]. 
If n is the number of di\ufb00erent types of molecules, then we describe the state of the system at a certain time instant by an n-dimensional random vector whose i-th entry represents the number of molecules of type i. In the thermodynamic limit (when the number of molecules and the volume of the system approach in\ufb01nity) the Markov model and the macroscopic ODE description are equal [21]. Therefore, the ODE approach can be used to approximate the CTMC only if all populations are large. The evolution of the CTMC is given by a system of linear ordinary di\ufb00erential equations, known as the chemical master equation (CME). A single equation in the CME describes the time derivative of the probability of a certain state at all times t \u22650. Thus, the solution of the CME is the probability distribution over all states of the CTMC at a particular time t, that is, the transient state probabilities at time t. The solution of the CME can then be used to derive measures of interest such as the distribution of switching delays [23], the distribution of the time of DNA replication initiation at di\ufb00erent origins [26], or the distribution of gene expression products [35]. Moreover, many parameter estimation methods require the computation of the posterior distribution because means and variances do not provide enough information to calibrate parameters [15]. The more detailed description of chemical reactions using a CTMC comes at a price of signi\ufb01cantly increased computational complexity because the underlying state space is usually very large or even in\ufb01nite. Therefore, Monte Carlo simulation is in widespread use, because it allows to generate random trajectories of the model while requiring only little memory. Estimates of the measures of interest can be derived once the number of trajectories is large enough to achieve the desired statistical accuracy. However, the main drawback of simulative solution techniques is that a large number of trajectories is necessary to obtain reliable results. For instance, in order to halve the con\ufb01dence interval of an estimate, four times more trajectories have to be generated. Consequently, often stochastic simulation is only feasible with a very low level of con\ufb01dence in the accuracy of the results. Recently, e\ufb03cient numerical algorithms have been developed to compute an approximation of the CME [16, 25, 3, 5, 6, 13, \f32, 7, 18]. Many of them are based on the idea of restricting the analysis of the model during a certain time interval to a subset of states that have \u201csigni\ufb01cant\u201d probability. While some of these methods rely on an a priori estimation of the geometric bounds of the signi\ufb01cant subset [16, 25, 3], others are based on a conversion to discrete time and they decide dynamically which states to consider at a certain time step [5, 6, 32]. If the system under consideration contains large populations, then the numerical algorithms mentioned above perform poorly. The reason is that the random variables that represent large populations have a large variance. Thus, a large number of states have a signi\ufb01cant probability, which renders the numerical approximation of the distribution computationally expensive or infeasible. In this paper we use a stochastic hybrid approach to e\ufb03ciently approximate the solution of systems containing both small and large populations. 
More precisely, we maintain the discrete stochastic representation for small populations, but at the same time we exploit the small relative variance of large populations and represent them by continuous deterministic variables. Since population sizes change over time we decide dynamically (\u201don-the-\ufb02y\u201d) whether we represent a population by a continuous deterministic variable or keep the discrete stochastic representation. Our criterion for changing from a discrete to a continuous treatment of a variable and vice versa is based on a population threshold. For the solution of the stochastic hybrid model, we propose a numerical approximation method that discretizes time and performs a mutual update of the distributions of the discrete stochastic variables and the values of the continuous deterministic variables. Hence, we compute the solution of a CME with a reduced dimension as well as the solution of a system of (non-linear) ordinary di\ufb00erential equations. The former describes the distribution of the discrete stochastic variables and the latter the values of the continuous deterministic variables, and the two descriptions depend on each other. Assume, for instance, that a system has two chemical species. The two population sizes at time t are represented by the random variables X(t) and Y (t), where X(t) is large and Y (t) is small. Then, we consider for Y (t) all events Y (t) = y that have signi\ufb01cant probability, i.e., Pr(Y (t) = y) is greater than a certain threshold. For X(t) we consider the conditional expectations E[X(t) | Y (t) = y] and assume that they change continuously and deterministically in time. We iterate over small time steps h > 0 and, given the distribution for Y (t) and the values E[X(t) | Y (t) = y], we compute the distribution of Y (t+h) and the values E[X(t+h) | Y (t+h) = y]. Again, we restrict our computation to those values of y that have signi\ufb01cant probability. To demonstrate the e\ufb00ectiveness of our approach, we have implemented the algorithm and applied it successfully to several examples from systems biology. Our most complex example has 6 di\ufb00erent chemical species and 10 reactions. We compare our results with our earlier purely discrete stochastic approach and with the purely continuous deterministic approach in terms of running times and accuracy. Related Work. Di\ufb00erent hybrid approaches have been proposed in the literature [12, 31, 29]. As opposed to our approach, they focus on Monte Carlo simulation and consider the problem of multiple time scales. They do not use deterministic variables but try to reduce the computational complexity of generating a trajectory of the model by approximating the number of reactions during a certain time step. The closest work to ours is the hybrid approach proposed by Hellander and L\u00a8 otstedt [14]. They approximate large populations by normally distributed random variables with a small variance and use Monte Carlo simulation to statistically estimate the probability distribution of the remaining populations with small sizes. They consider a single ODE to approximate the expected sizes of the large populations. As opposed to that, here we consider a set of ODEs to approximate the expected sizes of the large populations conditioned on the small populations. This allows us to track the dependencies between the di\ufb00erent populations more acurately. 
Moreover, instead of a statistical estimation of probabilities, we provide a direct numerical method to solve the stochastic hybrid model. The direct numerical method that we use for the computation of the probability distributions of the stochastic variables has shown to be superior to Monte Carlo simulation [5]. Another di\ufb00erence is that the method in [14] does not allow a dynamic switching between stochastic and deterministic treatment of variables. Finally, our approach is related to the stochastic hybrid models considered in [4, 2] and to \ufb02uid stochastic Petri nets [17]. These approaches di\ufb00er from our approach in that they use probability distributions for the di\ufb00erent values a continuous variable can take. In our setting, at a \ufb01xed point in time we only consider the conditional expectations of the continuous variables, which is based on the assumption that the respective populations are large and their relative variance is small. This allows us to provide an e\ufb03cient numerical approximation algorithm that can be applied to systems with large state spaces. The stochastic hybrid models in [4, 2, 17] cannot be solved numerically except in the case of small state spaces. 2. DISCRETE-STATE STOCHASTIC MODEL According to Gillespie\u2019s theory of stochastic chemical kinetics, a well-stirred mixture of n molecular species in a volume with \ufb01xed size and \ufb01xed temperature can be represented as a continuous-time Markov chain (X(t), t \u22650) [9]. The random vector X(t) = (X1(t), . . . , Xn(t)) describes the chemical populations at time t, i.e., Xi(t) is the number of molecules of type i \u2208{1, . . . , n}. Thus, the state space of X is Zn + = {0, 1, . . .}n. The state changes of X are triggered by the occurrences of chemical reactions, which come in m di\ufb00erent types. For j \u2208{1, . . . , m} let uj \u2208Zn be the change vector of the j-th reaction type, that is, uj = u\u2212 j +u+ j where u\u2212 j contains only non-positive entries that specify how many molecules of each species are consumed (reactants) if an instance of the reaction occurs and vector u+ j contains only non-negative entries that specify how many molecules of each species are produced (products). Thus, if X(t) = x for some x \u2208Zn + with x + u\u2212 j being non-negative, then X(t + dt) = x + uj is the state of the system after the occurrence of the j-th reaction within the in\ufb01nitesimal time \fP1 P2 P1 or P2 gene 1 gene 2 common promotor Figure 1: Illustration of the exclusive switch in Ex. 1 (picture is adapted from [22]). The stochastic hybrid model with only three discrete stochastic states and two di\ufb00erential equations per state. interval [t, t + dt). As rigorously derived by Gillespie [10], each reaction type has an associated propensity function, denoted by \u03b11, . . . , \u03b1m, which is such that \u03b1j(x) \u00b7 dt is the probability that, given X(t) = x, one instance of the j-th reaction occurs within [t, t + dt). The value \u03b1j(x) is proportional to the number of distinct reactant combinations in state x. More precisely, if x = (x1, . . . , xn) is a state for which x + u\u2212 j is nonnegative then \u03b1j(x) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 cj if u\u2212 j = (0, . . . 
, 0), cj \u00b7 xi if u\u2212 j = \u2212ei, cj \u00b7 xi \u00b7 x\u2113 if u\u2212 j = \u2212ei \u2212e\u2113, cj \u00b7 \u0000xi 2 \u0001 = cj \u00b7 xi\u00b7(xi\u22121) 2 if u\u2212 j = \u22122 \u00b7 ei, (1) where i \u0338= \u2113, cj > 0 is a constant, and ei is the vector with the i-th entry 1 and all other entries 0. We set \u03b1j(x) = 0 whenever the vector x + u\u2212 j contains negative entries, that is, when not enough reactant molecules are available. The constant cj refers to the probability that a randomly selected pair of reactants collides and undergoes the j-th chemical reaction. Thus, if N is the volume (in liters) times Avogadro\u2019s number, then cj \u2022 scales inversely with N in the case of two reactants, \u2022 is independent of N in the case of a single reactant, \u2022 is proportional to N in the case of no reactants. Since reactions of higher order (requiring more than two reactants) are usually the result of several successive lower order reactions, we do not consider the case of more than two reactants. Example 1. We consider a gene regulatory network, called the exclusive switch [22]. It consists of two genes with a common promotor region. Each of the two gene products P1 and P2 inhibits the expression of the other product if a molecule is bound to the promotor region. More precisely, if the promotor region is free, molecules of both types P1 and P2 are produced. If a molecule of type P1 (P2) is bound to the promotor region, only molecules of type P1 (P2) are produced, respectively. We illustrate the system in Fig. 1. The system has \ufb01ve chemical species of which two have an in\ufb01nite range, namely P1 and P2. If x = (x1, . . . , x5) is the current state, then the \ufb01rst two entries represent the populations of P1 and P2, respectively. The entry x3 denotes the number of unbound DNA molecules which is either zero or one. The entry x4 (x5) is one of a molecule of type P1 (P2) is bound to the promotor region and zero otherwise. The chemical reactions are as follows. Let j \u2208{1, 2}. \u2022 We describe production of Pj by DNA \u2192DNA+Pj. Thus, uj = ej and \u03b1j(x) = cj \u00b7 x3. \u2022 We describe degradation of Pj by Pj \u2192\u2205with uj+2 = \u2212ej and \u03b1j+2(x) = cj+2 \u00b7 xj. \u2022 We model the binding of Pj to the promotor by DNA + Pj \u2192DNA.Pj with uj+4 = \u2212ej\u2212e3+ej+3 and \u03b1j+4(x) = cj+4 \u00b7 xj \u00b7 x3. \u2022 For unbinding of Pj we use DNA.Pj \u2192DNA + Pj with uj+6(x) = ej + e3 \u2212ej+3 and \u03b1j+6(x) = cj+6 \u00b7 xj+3. \u2022 Finally, we have production of Pj if a molecule of type Pj is bound to the promotor, i.e., DNA.Pj \u2192DNA.Pj + Pj with uj+8(x) = ej and \u03b1j+8(x) = cj+8 \u00b7 xj+3. Depending on the chosen parameters, the probability distribution of the exclusive switch is bistable, i.e. most of the probability mass concentrates on two distinct regions in the state space. In particular, if binding to the promotor is likely, then these two regions correspond to the two con\ufb01gurations where either the production of P1 or the production of P2 is inhibited. The Chemical Master Equation. For x \u2208Zn + and t \u22650, let p(x, t) denote the probability that the current population vector is x, i.e., p(x, t) = Pr(X(t) = x). Let p(t) be the row vector with entries p(x, t). Given u\u2212 1 , . . . , u\u2212 m, u+ 1 , . . . , u+ m, \u03b11, . . . , \u03b1m, and some initial distribution p(0), the Markov chain X is uniquely speci\ufb01ed if the propensity functions are of the form in Eq. (1). 
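The following is a small illustrative sketch (not code from the paper) of how the exclusive switch in Example 1 can be encoded programmatically: the change vectors u_j and the propensity functions α_j of Eq. (1), with the state ordered as (P1, P2, free DNA, DNA.P1, DNA.P2) and rate constants c = (c_1, ..., c_10).

```python
# change vectors u_j for the ten reactions of the exclusive switch (Example 1)
CHANGE_VECTORS = [
    ( 1,  0,  0,  0,  0),   # DNA -> DNA + P1
    ( 0,  1,  0,  0,  0),   # DNA -> DNA + P2
    (-1,  0,  0,  0,  0),   # degradation of P1
    ( 0, -1,  0,  0,  0),   # degradation of P2
    (-1,  0, -1,  1,  0),   # DNA + P1 -> DNA.P1
    ( 0, -1, -1,  0,  1),   # DNA + P2 -> DNA.P2
    ( 1,  0,  1, -1,  0),   # DNA.P1 -> DNA + P1
    ( 0,  1,  1,  0, -1),   # DNA.P2 -> DNA + P2
    ( 1,  0,  0,  0,  0),   # DNA.P1 -> DNA.P1 + P1
    ( 0,  1,  0,  0,  0),   # DNA.P2 -> DNA.P2 + P2
]

def propensities(x, c):
    # alpha_j(x) as in Eq. (1); each propensity vanishes when a reactant is absent
    x1, x2, x3, x4, x5 = x
    return [c[0] * x3, c[1] * x3,            # production of P1, P2
            c[2] * x1, c[3] * x2,            # degradation of P1, P2
            c[4] * x1 * x3, c[5] * x2 * x3,  # binding of P1, P2
            c[6] * x4, c[7] * x5,            # unbinding of P1, P2
            c[8] * x4, c[9] * x5]            # production while P1 / P2 is bound
```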
The evolution of X is given by the chemical master equation (CME), which equates the change d dtp(x, t) of the probability in state x and the sum over all reactions of the\u201cin\ufb02ow\u201d\u03b1j(x\u2212uj)\u00b7p(x\u2212uj, t) and\u201cout\ufb02ow\u201d \u03b1j(x) \u00b7 p(x, t) of probability [20]. Thus, d dtp(x, t) = m X j=1 \u0000\u03b1j(x\u2212uj)\u00b7p(x\u2212uj, t)\u2212\u03b1j(x)\u00b7p(x, t) \u0001 . (2) Since the CME is linear it can be written as d dtp(t) = p(t)\u00b7Q, where Q is the generator matrix of X with Q(x, x + uj) = \u03b1j(x) and Q(x, x) = \u2212Pm j=1 \u03b1j(x). If Q is bounded, then Eq (2) has the general solution p(t) = p(0) \u00b7 eQt, (3) where the matrix exponential is de\ufb01ned as eQt = P\u221e i=0 (Qt)i i! . If the state space is in\ufb01nite, then we can only compute approximations of p(t) and even if Q is \ufb01nite, the size of the matrix Q is often large because it grows exponentially with the number of state variables. Moreover, even if Q is sparse, as it usually is because the number of reaction types is small compared to the number of states, standard numerical solution techniques for systems of \ufb01rst-order linear equations of the form of Eq. (2), such as uniformization [19], approximations in the Krylov subspace [28], or numerical integration [33], are infeasible. The reason is that the number of nonzero entries in Q often exceeds the available memory capacity for systems of realistic size. If the populations of all species remain small (at most a few hundreds) then the solution of the CME can be e\ufb03ciently approximated using projection methods [16, 25, 3] or fast uniformization methods [5, 6, 32]. The idea of these methods is to avoid an exhaustive state space exploration and, depending on a certain time interval, restrict the analysis of the system to a \fsubset of states. Fast Solution of the Discrete Stochastic Model. Here, we present a method similar to our previous work [6] that e\ufb03ciently approximates the solution of the CME if the chemical populations remain small. We use it in Section 4 to solve the discrete part of the stochastic hybrid model. The algorithm, called fast RK4, is based on the numerical integration of Eq. (2) using an explicit fourth-order RungeKutta method. The main idea is to integrate only those di\ufb00erential equations in Eq. (2) that correspond to states with \u201csigni\ufb01cant probability\u201d. This reduces the computational e\ufb00ort signi\ufb01cantly since in each iteration step only a comparatively small subset of states is considered. We dynamically decide which states to drop/add based on a \ufb01xed probability threshold \u03b4 > 0. Due to the regular structure of the Markov model the approximation error of the algorithm remains small since probability mass is usually concentrated at certain parts of the state space. The farther away a state is from such a \u201csigni\ufb01cant set\u201d the smaller is its probability. Thus, the total error of the approximation remains small. Unless otherwise speci\ufb01ed, in our experiments we \ufb01x \u03b4 to 10\u221214. This value that has been shown to lead to accurate approximations [6]. The standard explicit fourth-order Runge-Kutta method applied to Eq. 
(2) yields the iteration step [33]

p(t + h) = p(t) + h · (k_1 + 2·k_2 + 2·k_3 + k_4)/6,   (4)

where h > 0 is the time step of the method and the vectors k_1, k_2, k_3, k_4 are given by

k_1 = p(t) · Q,
k_2 = (p(t) + (h/2) · k_1) · Q,
k_3 = (p(t) + (h/2) · k_2) · Q,
k_4 = (p(t) + h · k_3) · Q.   (5)

Note that the entries k_1(x), . . . , k_4(x) of state x in the vectors k_1, . . . , k_4 are given by

k_1(x) = \sum_{j=1}^{m} ( α_j(x−u_j)·p(x−u_j, t) − α_j(x)·p(x, t) ),
k_{i+1}(x) = \sum_{j=1}^{m} ( α_j(x−u_j)·(p(x−u_j, t) + h·k_i(x−u_j)/2) − α_j(x)·(p(x, t) + h·k_i(x)/2) )   for i ∈ {1, 2},
k_4(x) = \sum_{j=1}^{m} ( α_j(x−u_j)·(p(x−u_j, t) + h·k_3(x−u_j)) − α_j(x)·(p(x, t) + h·k_3(x)) ).   (6)

In order to avoid the explicit construction of Q and in order to work with a dynamic set Sig of significant states that changes in each step, we use for a state x a data structure with the following components:
• a field x.prob for the current probability of x,
• fields x.k1, . . . , x.k4 for the four terms in the equation of state x in the system of Eq. (5),
• for all j with x + u−_j ≥ 0 a pointer to the successor state x + u_j as well as the rate α_j(x).

We start at time t = 0 and initialize the set Sig as the set of all states that have initially a probability greater than δ, i.e. Sig := {x | p(x, 0) > δ}. We perform a step of the iteration in Eq. (4) by traversing the set Sig five times. In the first four rounds we compute k_1, . . . , k_4, and in the final round we accumulate the summands. While processing state x in round i, i < 5, for each reaction j, we transfer probability mass from state x to its successor x + u_j, by subtracting a term from k_i(x) (see Eq. (6)) and adding the same term to k_i(x + u_j). A single iteration step is illustrated in pseudocode in Table 1.

Table 1: A single iteration step of the fast RK4 algorithm, which approximates the solution of the CME.
 1  choose step size h;
 2  for i = 1, 2, 3, 4 do                  // traverse Sig four times
 3      // decide which fields from the state data structure
 4      // are needed for k_i
 5      switch i
 6          case i = 1:       coeff := 1;    field := prob;
 7          case i ∈ {2, 3}:  coeff := h/2;  field := k_{i−1};
 8          case i = 4:       coeff := h;    field := k_{i−1};
 9      x.k_i := x.k_1;
10      for all x ∈ Sig do
11          for j = 1, . . . , m with x + u_j ≥ 0 do
12              x.k_i := x.k_i − coeff · x.field · α_j(x);
13              if x + u_j ∉ Sig then
14                  Sig := Sig ∪ {x + u_j};
15              (x + u_j).k_i := (x + u_j).k_i + coeff · x.field · α_j(x);
16  for all x ∈ Sig do
17      x.prob := x.prob + h · (x.k_1 + 2·x.k_2 + 2·x.k_3 + x.k_4)/6;
18      x.k_1 := 0; x.k_2 := 0; x.k_3 := 0; x.k_4 := 0;
19      if x.prob < δ then
20          Sig := Sig \ {x};

In line 20, we ensure that Sig does not contain states with a probability less than δ. We choose the step size h in line 1 as suggested in [33]. In lines 2-15 we compute the values k_1(x), . . . , k_4(x) for all x ∈ Sig (see Eq. (5)). The fifth round starts in line 16, and in line 17 the approximation of the probability p(x, t + h) is calculated. Note that the fields x.k_1, . . . , x.k_4 are initialized with zero. Clearly, for the solution of the CME the same ideas as above can be used for many other numerical integration methods.
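As a hedged illustration of one step of the scheme in Table 1, the sketch below stores the truncated distribution as a Python dictionary over the significant states and reuses the CHANGE_VECTORS/propensities sketch given earlier; it is a straightforward transcription of the idea, not the authors' implementation.

```python
from collections import defaultdict

def fast_rk4_step(p, c, h, delta):
    # p maps significant states (tuples) to probabilities; delta is the threshold
    def flow(q):
        # evaluates q * Q restricted to the significant states and their successors
        k = defaultdict(float)
        for x, mass in q.items():
            for u, a in zip(CHANGE_VECTORS, propensities(x, c)):
                succ = tuple(xi + ui for xi, ui in zip(x, u))
                k[x] -= a * mass
                k[succ] += a * mass
        return k

    k1 = flow(p)
    k2 = flow({x: p.get(x, 0.0) + h / 2 * v for x, v in k1.items()})
    k3 = flow({x: p.get(x, 0.0) + h / 2 * v for x, v in k2.items()})
    k4 = flow({x: p.get(x, 0.0) + h * v for x, v in k3.items()})

    new_p = {}
    for x in set(p) | set(k1) | set(k2) | set(k3) | set(k4):
        mass = p.get(x, 0.0) + h / 6 * (k1.get(x, 0.0) + 2 * k2.get(x, 0.0)
                                        + 2 * k3.get(x, 0.0) + k4.get(x, 0.0))
        if mass > delta:          # keep only states with significant probability
            new_p[x] = mass
    return new_p
```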
Here, we focus on the explicit RK4 method and do not consider more advanced numerical integration methods to keep our presentation simple. The focus of this paper is not on particular numerical methods to solve di\ufb00erential equations but rather on general strategies for the approximate solution of the stochastic models that we consider. Moreover, we do not use uniformization methods as in previous work since uniformization is ine\ufb03cient for very small time horizons. But small time steps are necessary for the solution of the hybrid model in order to take into account the dependencies between the stochastic and the deterministic variables. 3. DERIVATION OF THE DETERMINISTIC LIMIT The numerical approximation presented in the previous section works well as long as only the main part of the probability mass is concentrated on a small subset of the state space. If the system contains large populations then the probability mass distributes on a very large number of states whereas the information content is rather low since we dis\ftinguish, for instance, the cases of having Xi(t) = 10000, Xi(t) = 10001, etc. In such cases no direct numerical approximation of the CME is possible without resorting to Monte Carlo techniques or discarding the discreteness of the state space. If all populations are large the solution of X can be accurately approximated by considering the deterministic limit of X. Here, we shortly recall the basic steps for the derivation of the deterministic limit. For a detailed discussion, we refer to Kurtz [21]. We \ufb01rst de\ufb01ne a set of functions \u03b2j such that if N is large (recall that N is the volume times the Avogadro\u2019s number) then the propensity functions can be approximated as \u03b1j(x) \u2248N \u00b7 \u03b2j(z), where z = (z1, . . . , zn) = x \u00b7 N \u22121 corresponds to the vector of concentrations of chemical species and belongs to Rn. Recall the dependencies of cj on the scaling factor N as described at the beginning of Section 2. For constants kj > 0 that are independent of N, \u2022 cj = kj \u00b7 N in the case of no reactants, \u2022 cj = kj in the case of a single reactant, \u2022 cj = kj/N in the case of two reactants. From this, it follows that except for the case of bimolecular reactions, we can construct the functions \u03b2j such that \u03b1j(x) = N \u00b7 \u03b2j(z). \u03b2j(z) = \u03b1j(x) N = \uf8f1 \uf8f2 \uf8f3 cj N = kj if u\u2212 j = (0, . . . , 0), cj \u00b7 xi N = kj \u00b7zi if u\u2212 j = \u2212ei, cj \u00b7xi\u00b7 x\u2113 N = kj \u00b7zi\u00b7z\u2113if u\u2212 j = \u2212ei \u2212e\u2113, where i \u0338= \u2113. In the case of bimolecular reactins (u\u2212 j = \u22122 \u00b7 ei), we use the approximation N \u00b7\u03b2j(z) = kj\u00b7N \u00b7z2 i = kj \u00b7xi\u00b7zi = ( 1 2cjN)\u00b7xi\u00b7zi = 1 2cj \u00b7x2 i \u22481 2cj \u00b7xi(xi \u22121) = \u03b1j(x), which is accurate if xi is large, In order to derive the deterministic limit for the vector X(t) = (X1(t), . . . , Xn(t)) that describes the chemical populations, we \ufb01rst write X(t) as X(t) = X(0) + m X j=1 uj \u00b7 Cj(t), where X(0) is the initial population vector and Cj(t) denotes the number of occurrences of the j-th reaction until time t. The process Cj(t) is a counting process with intensity \u03b1j(X(t)) and it can be regarded as a Poisson process whose time-dependent intensity changes according to the stochastic process X(t). 
Now, recall that a Poisson process \u02dc Y (t) with time-dependent intensity \u03bb(t) can be transformed into a Poisson process Y (u) with constant intensity one, using the simple time transform u = R t 0 \u03bb(s)ds, that is, Y (u) = Y ( R t 0 \u03bb(s)ds) = \u02dc Y (t). Similarly, we can describe Cj(t) as a Poisson process with intensity one, i.e., Cj(t) = Yj \u0012Z t 0 \u03b1j(X(s))ds \u0013 , where Yj are independent Poisson processes with intensity one. Hence, for i \u2208{1, . . . , n} Xi(t) = Xi(0) + m X j=1 uji \u00b7 Yj \u0012Z t 0 \u03b1j(X(s))ds \u0013 , (7) where uj = (uj1, . . . , ujn). The next step is to de\ufb01ne Z(t) = X(t) \u00b7 N \u22121, that is, Z(t) = (Z1(t), . . . , Zn(t)) contains the concentrations of the chemical species in moles per liter at time t. Thus, Zi(t) = Zi(0) + m X j=1 uji \u00b7 N \u22121 \u00b7 Yj \u0012Z t 0 \u03b1j(X(s))ds \u0013 , (8) and using the fact that \u03b1j(x) \u2248N \u00b7 \u03b2j(z) yields Zi(t) \u2248Zi(0)+ m X j=1 uji \u00b7N \u22121 \u00b7Yj \u0012 N \u00b7 Z t 0 \u03b2j(Z(s))ds \u0013 . (9) By the law of large numbers, the unit Poisson process Yj will approach N \u00b7 u at time N \u00b7 u for large N \u00b7 u. Thus, Yj(N \u00b7 u) \u2248N \u00b7 u and hence, Zi(t) \u2248Zi(0) + m X j=1 uji \u00b7 Z t 0 \u03b2j(Z(s))ds. (10) The right-hand side of the above integral equation is the solution z(t) of the system of ODEs d dtz(t) = m X j=1 uj \u00b7 \u03b2j(z(t)). (11) As shown by Kurtz [21], in the large volume limit, where the volume and the number of molecules approach in\ufb01nity (while the concentrations remain constant), Z(t) \u2192z(t) in probability for \ufb01nite times t. Note that the chemical concentrations z(t) evolve continuously and deterministically in time. This continuous deterministic approximation is reasonable if all species have a small relative variance and if they are only weakly correlated. The reason is that only in this case the assumption that Zi(t) is deterministic is appropriate. Note that for most models this is the case if the population of species i is large since this implies that E[Xi(t)] is large whereas the occurrence of chemical reactions results only in a marginal relative change of the value of Xi(t). Example 1 (cont.). The ODEs of the exclusive switch are given by d dtz1(t) = k1 \u00b7 z3(t) \u2212k3 \u00b7 z1(t) \u2212k5 \u00b7 z1(t) \u00b7 z3(t) +k7 \u00b7 z4(t) + k9 \u00b7 z4(t) d dtz2(t) = k2 \u00b7 z3(t) \u2212k4 \u00b7 z2(t) \u2212k6 \u00b7 z2(t) \u00b7 z3(t) +k8 \u00b7 z5(t) + k10 \u00b7 z5(t) d dtz3(t) = \u2212k5 \u00b7 z1(t) \u00b7 z3(t) \u2212k6 \u00b7 z2(t) \u00b7 z3(t) +k7 \u00b7 z4(t) + k8 \u00b7 z5(t) d dtz4(t) = k5 \u00b7 z1(t) \u00b7 z3(t) \u2212k7 \u00b7 z4(t) d dtz5(t) = k6 \u00b7 z2(t) \u00b7 z3(t) \u2212k8 \u00b7 z5(t) where z1(t), z2(t), z3(t), z4(t), z5(t) denote the respective chemical concentrations. Moreover, cj = N \u22121 \u00b7 kj for j \u2208{5, 6} and cj = kj for j \u0338\u2208{5, 6}. In [1], Ball et al. scale only a subset of the populations in order to approximate the behavior of the system if certain populations are large and others are small. Additionally, they take into account the di\ufb00erent speeds of the chemical reactions. For a selected number of examples, they give analytical expressions for the distributions in the limit, i.e., when the scaling parameter approaches in\ufb01nity. 
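For illustration, the reaction-rate equations (11) for the exclusive switch can be integrated with an off-the-shelf ODE solver; the sketch below uses SciPy with placeholder rate constants k_1, ..., k_10 and an arbitrary initial condition, so the numbers are not results from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([10.0, 10.0, 0.1, 0.1, 5.0, 5.0, 1.0, 1.0, 10.0, 10.0])  # placeholders

def rhs(t, z):
    # right-hand side of the exclusive switch ODEs from Example 1 (cont.)
    z1, z2, z3, z4, z5 = z
    return [k[0]*z3 - k[2]*z1 - k[4]*z1*z3 + k[6]*z4 + k[8]*z4,
            k[1]*z3 - k[3]*z2 - k[5]*z2*z3 + k[7]*z5 + k[9]*z5,
            -k[4]*z1*z3 - k[5]*z2*z3 + k[6]*z4 + k[7]*z5,
            k[4]*z1*z3 - k[6]*z4,
            k[5]*z2*z3 - k[7]*z5]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 1.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])   # concentrations z1, ..., z5 at t = 10
```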
In the next section, we will construct a stochastic hybrid model that is equivalent to the one considered in [1] if we scale the continuous components and consider the deterministic limit. \f4. STOCHASTIC HYBRID MODEL A straightforward consequence of the CME is that the time derivative of the populations\u2019 expectations are given by d dtE[X(t)] = Pm j=1 uj \u00b7 E [\u03b1j (X(t))] . (12) If all reactions of the system involve at most one reactant, Eq. (12) can be simpli\ufb01ed to d dtE[X(t)] = Pm j=1 uj \u00b7 \u03b1j (E[X(t)]) . (13) because the propensity functions \u03b1j are linear in x. But in the case of bimolecular reactions, we have either \u03b1j(x) = cj \u00b7xi \u00b7x\u2113for some i, \u2113with i \u0338= \u2113or \u03b1j(x) = cj \u00b7xi \u00b7(xi \u22121)/2 if the j-th reaction involves two reactants of type i. But this means that E [\u03b1j (X(t))] = cj \u00b7 E [Xi(t) \u00b7 X\u2113(t)] or E [\u03b1j (X(t))] = 1 2 \u00b7 cj \u00b7 E \u0002 (Xi(t))2\u0003 \u2212E [(Xi(t))] , respectively. In both cases new di\ufb00erential equations are necessary to describe the unknown values of E [Xi(t) \u00b7 X\u2113(t)] and E \u0002 (Xi(t))2\u0003 . This problem repeats and leads to an in\ufb01nite system of ODEs. As shown in the sequel, we can, however, exploit Eq. (12) to derive a stochastic hybrid model. Assume we have a system where certain species have a large population. In that case we approximate them with continuous deterministic variables. The remaining variables are kept discrete stochastic. This is done because it is usually infeasible or at least computationally very costly to solve a purely stochastic model with high populations since in the respective dimensions the number of signi\ufb01cant states is large. Therefore, we propose to switch to a hybrid model where the stochastic part does not contain large populations. In this way we can guarantee an e\ufb03cient approximation of the solution. Formally, we split X(t) into small populations V(t) and large populations W(t), i.e. X(t) = (V(t), W(t)). Let \u02dc n be the dimension of V(t) and \u02c6 n the dimension of W(t), i.e. n = \u02dc n + \u02c6 n. Moreover, let \u02dc D and \u02c6 D be the set of indices that correspond to the populations in V and W, respectively. Thus, \u02dc D, \u02c6 D \u2286{1, . . . , n} and | \u02dc D| = \u02dc n, | \u02c6 D| = \u02c6 n. We de\ufb01ne \u02dc uj and \u02c6 uj as the components of uj that belong to \u02dc D and \u02c6 D, respectively. Under the condition that V(t) = v and W(t) = w, we assume that for an in\ufb01nitesimal time interval of length dt the evolution of W is given by the stochastic di\ufb00erential equation W(t + dt) = W(t) + Pm j=1 \u02c6 uj \u00b7 \u03b1j(v, w) \u00b7 dt. (14) The evolution of V remains unchanged, i.e., Pr(V(t + dt) = v + \u02dc uj | V(t) = v, W(t) = w) = \u03b1j(v, w)\u00b7dt The density function h(v, w, t) of the Markov process {(V(t), W(t)), t \u22650} can be derived in the same way as done by Horton et al. [17]. Here, for simplicity we consider only the case \u02c6 n = 1 which means that w = w is a scalar. The generalization to higher dimensions is straightforward. If w > 0 then the following partial di\ufb00erential equation holds for h. \u2202h(v, w, t) \u2202t + \u2202 \u0000h(v, w, t) \u00b7 P j \u02c6 uj \u00b7 \u03b1j(v, w) \u0001 \u2202w = P j \u03b1j(v\u2212\u02dc uj, w) \u00b7 h(v\u2212\u02dc uj, w, t)\u2212P j \u03b1j(v, w) \u00b7 h(v, w, t). 
If w = 0 then we have probability mass g(v, w, t) in state (v, w) where \u2202g(v, w, t) \u2202t + h(v, w, t) \u00b7 P j \u02c6 uj \u00b7 \u03b1j(v, w) = P j \u03b1j(v\u2212\u02dc uj, w) \u00b7 g(v\u2212\u02dc uj, w, t)\u2212P j \u03b1j(v, w) \u00b7 g(v, w, t). As explained in-depth by Horton et al., the above equations express that probability mass must be conserved, i.e. the change of probability mass in a\u201ccell\u201dwith boundaries (v, w\u2212 dw) and (v, w + dw) equals the total mass of probability entering the cell minus the total mass leaving the cell. In order to exploit the fact that the relative variance of W is small, we suggest an approximative solution of the stochastic hybrid model given above. The main idea is not to compute the full density h and the mass function g but only the distribution of V as well as the conditional expectations E[W(t) = w | V(t) = v]. Thus, in our numerical procedure the distribution of W is approximated by the di\ufb00erent values E[W(t) = w | V(t) = v], v \u2208N\u02dc n that are taken by W(t) with probability Pr(V(t) = v). Assume that at time t we have the approximation p(t) of the discrete stochastic model as described in Section 2, that is, for all states x that have a probability that is greater than \u03b4 we have p(x, t) > 0 and for all other states x we have p(x, t) = 0. At time t the expectations of one or more populations reached a certain large population threshold. Thus, we switch to a hybrid model where the large populations (index set \u02c6 D) are represented as continuous deterministic variables W(t) while the small populations (index set \u02dc D) are represented by V(t). We \ufb01rst compute the vector of conditional expectations \u03a8v(t) := E[W(t) = w | V(t) = v] = P x:\u02dc x=v w \u00b7 p(x, t). Here, \u02dc x is the subvector of x that corresponds to \u02dc D. We also compute the distribution p(t) of V(t) as r(v, t) := P x:\u02dc x=v p(x, t). Now, we integrate the system for a small time interval of length h > 0. This is done in three steps as described below. We will write \u03a8v(t\u2032) for the approximation of E[W(t\u2032) = w | V(t\u2032) = v]. The i-th element of the \u02c6 n-dimensional vector \u03a8v(t\u2032) is denoted by \u03c8v(i, t\u2032). The value r(v, t\u2032) denotes the approximation of Pr(V(t) = v) where t\u2032 \u2208[t, t + h). The vector r(t\u2032) contains the elements r(v, t\u2032). (1) Update distribution. We \ufb01rst integrate r(t) for h time units according to a CME with dimension \u02dc n to approximate the probabilities Pr(V(t + h) = v) by r(v, t+h), that is, r(t + h) is the solution of the system of ODEs dr(v, t\u2032) dt\u2032 = P j \u03b1j \u0000v \u2212\u02dc uj, \u03a8v\u2212\u02dc uj (t\u2032) \u0001 \u00b7 r(v \u2212\u02dc uj, t\u2032) \u2212P j \u03b1j(v, \u03a8v(t\u2032)) \u00b7 r(v, t\u2032) with initial condition r(t). Note that this equation is as Eq. (2) except that the dimensions in \u02c6 D are removed. Moreover, the population sizes w are replaced by the conditional expectations \u03a8v(t\u2032). (2) Integrate. For each state v with r(v, t) > \u03b4, we compute an approximation \u03a6v(t+h) of the conditional expecta\ftion E[W(t + h) | V(t\u2032) = v, t\u2032 \u2208[t, t + h)], that is, we assume that the system remains in state v during [t, t+h) and that the expected numbers of the large populations W change deterministically and continuously in time. 
Thus, the n̂-dimensional vector Φv(t+h) is obtained by numerical integration of the ODE
d/dt′ Φv(t′) = Σ_{j=1}^{m} ûj · αj(v, Φv(t′))
with initial condition Φv(t). The above ODEs are similar to Eq. (12) except that for t′ ∈ [t, t + h) the value E[αj(X(t′))] is approximated by αj(v, Φv(t′)). For instance, if the j-th reaction is a bimolecular reaction that involves two populations with indices i, ℓ in D̂ then E[αj(v, W(t′)) | V(t′) = v] is approximated by cj · φv(i, t′) · φv(ℓ, t′) where the two last factors are the elements of the vector Φv(t′) corresponding to the i-th and ℓ-th population. Thus, in this case the correlations between the i-th and the ℓ-th populations are not taken into account, which is reasonable if the two populations are large. Note that the correlations are taken into account when at least one population is represented as a discrete stochastic variable. If, for instance, i ∈ D̃ and ℓ ∈ D̂, then we use the approximation cj · vi · φv(ℓ, t′) where vi is the entry in vector v that represents the size of the i-th population. (3) Distribute. In order to approximate E[W(t+h) | V(t+h)] by Ψv(t + h) for all states v, we have to replace the condition V(t′) = v, t′ ∈ [t, t + h), by V(t + h) = v in the conditional expectation Φv(t+h) that was computed in step 2. This is done by "distributing" Φv(t+h) according to the change in the distribution of V(t) as explained below. The idea is to take into account that V enters state v from v′ during the interval [t, t + h). Assume that [t, t + h) is an infinitesimal time interval and that q(v′, v, h), v ≠ v′, is the probability to enter v from v′ within [t, t + h). Then
P(V(t+h)=v) = Σ_{v′≠v} q(v′, v, h) · P(V(t)=v′) + (1 − Σ_{v′≠v} q(v, v′, h)) · P(V(t)=v). (15)
Thus, we approximate E[W(t + h) | V(t + h) = v] as
Σ_{v′≠v} Φv′(t+h) · q(v′, v, h) · P(V(t)=v′ | V(t+h)=v) + Φv(t+h) · (1 − Σ_{v′≠v} q(v, v′, h)) · P(V(t)=v | V(t+h)=v). (16)
Obviously, we can make use of the current approximations r(t) and r(t + h) to compute the conditional probabilities P(V(t)=v′ | V(t+h)=v). For a small time step h, q(v′, v, h) ≈ h · αj(v′, Ψv′(t)) if v′ = v − ũj, and q(v′, v, h) ≈ 0 otherwise. Using Eq. (16), we compute the approximation Ψv(t + h) ≈ E[W(t+h) | V(t+h) = v] as
Ψv(t + h) = Σ_j Φ_{v−ũj}(t+h) · (p(v−ũj, t) / p(v, t+h)) · αj(v−ũj, Ψ_{v−ũj}(t)) · h + Φv(t+h) · (p(v, t) / p(v, t+h)) · (1 − Σ_j αj(v, Ψv(t)) · h). (17)
Note that the first sum runs over all direct predecessors v−ũj of v.
Figure 2: Probability distribution of the exclusive switch in Ex. 1.
Figure 3: The discrete stochastic part of the stochastic hybrid model of Ex. 1.
Example 1 (cont.). In the exclusive switch the expected number of molecules of type P1 and/or P2 may become high, depending on the chosen parameters.
If, for instance, c1 = c2 = c9 = c10 = 0.5, c3 = c4 = c7 = c8 = 0.005, c5 = c6 = 0.01, and we start initially without any proteins, i.e. with probability one in state y = (0, 0, 1, 0, 0), then after 500 time units most of the probability mass is located around the states x = (92, 2, 0, 1, 0) and x = (2, 92, 0, 0, 1) (compare the plot in Fig. 2, left). Note that x3 = 0, x4 = 1, x5 = 0 refers to the case that a molecule of type P1 is bound to the promotor and x3 = x4 = 0, x5 = 1 refers to the case that a molecule of type P2 is bound to the promotor. Since for these parameters the system is symmetric, the expected populations of P1 and P2 are identical. Assume that at a certain time instant, both populations reach the threshold from which on we approximate them by continuous deterministic variables (we consider the unsymmetric case later, in Section 5). The remaining discrete model then becomes finite since only P1 and P2 have an infinite range in the original model (n̂ = 2, ñ = 3). More precisely, it contains only 3 states, namely the state v1 where the promotor is free (x3 = 1, x4 = x5 = 0), the state v2 where P1 is bound to the promotor (x3 = 0, x4 = 1, x5 = 0), and the state v3 where P2 is bound to the promotor (x3 = x4 = 0, x5 = 1), see also Fig. 1. For i ∈ {1, 2} let Wi(t) be the population size of Pi. The differential equations which are used to approximate the conditional expectations Ψvj(t + h), j ∈ {1, 2, 3}, are
dφv1(i, t′)/dt′ = ci − c2+i · φv1(i, t′) − c4+i · φv1(i, t′)
dφv2(i, t′)/dt′ = −c2+i · φv2(i, t′) + (c7 + c9) · (2 − i)
dφv3(i, t′)/dt′ = −c2+i · φv3(i, t′) + (c8 + c10) · (i − 1)
where φv(1, t′) and φv(2, t′) are the elements of the vector Φv(t′). Note that each of the 3 states v1, v2, v3 has a system of two differential equations, one for P1 and one for P2. The transition rates in the discrete stochastic part of the model are illustrated in Fig. 3. Thus, after solving the differential equations above to compute Φvj(t + h) we obtain the vector Ψvj(t + h) of the two conditional expectations for P1 and P2 by distributing Φv1(t+h), Φv2(t+h), Φv3(t+h) among the 3 states as defined in Eq. (17). For the parameters used in Fig. 1, left, the conditional expectations of the states v2 and v3 accurately predict the two stable regions where most of the probability mass is located. The state v1 has small probability and its conditional expectation is located between the two stable regions.
Table 2: Results for the exclusive switch example.
pset | purely stochastic: ex. time, |Sig|, error | pop. thres. | stochastic hybrid: ex. time, |Sig|, m1, m2, m3 | purely determ.: ex. time, m1
1a | 11h 46min, 8·10^5, 7·10^-5 | 50 | 15sec, 4·10^2, 0.005, 0.2, 0.30 | 1sec, 0.03
1a | | 100 | 1min 50sec, 3·10^3, 0.004, 0.2, 0.30 |
1b | 7min 43sec, 5·10^4, 7·10^-7 | 50 | 1min 19sec, 6·10^3, 0.01, 0.19, 0.30 | 1sec, 0.03
1b | | 100 | 2min 50sec, 3·10^4, 0.01, 0.19, 0.30 |
2 | 4h 51min, 2·10^5, 4·10^-5 | 50 | 25sec, 4·10^2, 0.06, 0.08, 0.09 | 1sec, 0.45
2 | | 100 | 28sec, 6·10^2, 0.06, 0.07, 0.09 |
3 | 2min 21sec, 7·10^5, 6·10^-5 | 50 | 18sec, 6·10^3, 0.02, 0.08, 0.16 | 1sec, 0.05
3 | | 100 | 1min 41sec, 4·10^4, 0.01, 0.05, 0.12 |
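To illustrate how steps (1)-(3) interact on this reduced three-state model, the following Python sketch performs one hybrid update of length h, keeping a distribution r over the states v1, v2, v3 and one conditional expectation per state and protein. It is not the authors' implementation: it uses a simple Euler step instead of RK4 to stay short, and the rate constants, h, and the initial values are placeholders.

```python
# Illustrative sketch (placeholder values): one hybrid update of length h for the
# reduced exclusive-switch model with discrete states v1 (promotor free),
# v2 (P1 bound), v3 (P2 bound) and conditional expectations for P1, P2.

c = [None, 0.5, 0.5, 0.005, 0.005, 0.01, 0.01, 0.005, 0.005, 0.5, 0.5]  # hypothetical c_1..c_10
h = 0.01

# distribution r over the 3 discrete states and conditional expectations psi[v] = [P1, P2]
r = {"v1": 0.2, "v2": 0.4, "v3": 0.4}                                   # assumed values
psi = {"v1": [60.0, 60.0], "v2": [90.0, 5.0], "v3": [5.0, 90.0]}        # assumed values

def rates(psi):
    """Transition rates of the discrete part, evaluated at the conditional expectations."""
    return {("v1", "v2"): c[5] * psi["v1"][0],   # P1 binds to the free promotor
            ("v1", "v3"): c[6] * psi["v1"][1],   # P2 binds to the free promotor
            ("v2", "v1"): c[7],                  # P1 unbinds
            ("v3", "v1"): c[8]}                  # P2 unbinds

a = rates(psi)

# Step 1: update the distribution r (Euler step of the reduced master equation)
r_new = dict(r)
for (u, v), rate in a.items():
    flow = rate * r[u] * h
    r_new[u] -= flow
    r_new[v] += flow

# Step 2: integrate the conditional expectations, assuming the discrete state is fixed
def dphi(state, phi):
    out = []
    for i in (1, 2):
        if state == "v1":
            d = c[i] - c[2 + i] * phi[i - 1] - c[4 + i] * phi[i - 1]
        elif state == "v2":
            d = -c[2 + i] * phi[i - 1] + (c[7] + c[9]) * (2 - i)
        else:  # state v3
            d = -c[2 + i] * phi[i - 1] + (c[8] + c[10]) * (i - 1)
        out.append(d)
    return out

phi = {v: [p + h * d for p, d in zip(psi[v], dphi(v, psi[v]))] for v in psi}

# Step 3: distribute the expectations according to the probability flows (cf. Eq. 17)
psi_new = {}
for v in r:
    out_rate = sum(rate for (u, w), rate in a.items() if u == v)
    num = [phi[v][i] * r[v] * (1 - out_rate * h) for i in range(2)]
    for (u, w), rate in a.items():
        if w == v:
            num = [num[i] + phi[u][i] * r[u] * rate * h for i in range(2)]
    psi_new[v] = [num[i] / r_new[v] for i in range(2)]

print("r(t+h)   =", {v: round(p, 4) for v, p in r_new.items()})
print("psi(t+h) =", {v: [round(x, 2) for x in xs] for v, xs in psi_new.items()})
```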
It is important to point out that, for this example, a purely deterministic solution cannot detect the bistability because the deterministic model has a single steady-state [22]. Finally, we remark that in this example the number of states in the reduced discrete model is very small. If, however, populations with an in\ufb01nite range but small expectations are present, we use the truncation described in Section 2 to keep the number of states small. If at time t a population, say the i-th population, is represented by its conditional expectations, it is possible to switch back to the original discrete stochastic treatment. This is done by adding an entry to the states v for the i-th dimension. This entry then equals \u03c8v(i, t). This means that at this point we assume that the conditional probability distribution has mass one for the value \u03c8v(i, t). Note that here switching back and forth between discrete stochastic and continuous deterministic representations is based on a population threshold. Thus, if the expectation of a population oscillates we may switch back and forth in each period. 5. EXPERIMENTAL RESULTS We implemented the numerical solution of the stochastic hybrid model described above in C++ as well as the fast solution of the discrete stochastic model described in Section 2. In our implementation we dynamically switch the representation of a random variable whenever it reaches a certain population threshold. We ran experiments with two di\ufb00erent thresholds (50 and 100) on an Intel 2.5GHz Linux workstation with 8GB of RAM. In this section we present 3 examples to that we applied our algorithm, namely the exclusive switch, Goutsias\u2019 model, and a predator-prey model. Our most complex example has 6 di\ufb00erent chemical species and 10 reactions. We compare our results to a purely stochastic solution where switching is turned o\ufb00as well as to a purely deterministic solution. For all experiments, we \ufb01xed the cutting threshold \u03b4 = 10\u221214 to truncate the in\ufb01nite state space as explained in Sec. 2. Exclusive Switch. We chose di\ufb00erent parameters for the exclusive switch in order to test whether our hybrid approach works well if 1) the populations of P1 and P2 are large (a) or small (b), 2) the model is unsymmetric (e.g. P1 is produced at a higher rate than P2 and degrades at a slower rate than P2), 3) the bistable form of the distribution is destroyed (i.e. promotor binding is less likely, unbinding is more likely). The following table lists the parameter sets (psets): pset c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 1a 5 5 0.0005 0.0005 0.1 0.1 0.005 0.005 5 5 1b 0.5 0.5 0.0005 0.0005 0.1 0.1 0.005 0.005 0.5 0.5 2 5 0.5 0.0005 0.005 0.1 0.1 0.005 0.005 5 0.5 3 0.5 0.5 0.0005 0.0005 0.01 0.01 0.1 0.1 0.5 0.5 We chose a time horizon of t = 500 for all parameter sets. Note that in the case of pset 3 the probability distribution forms a thick line in the state space (compare the plot in Fig. 2, right). We list our results in Table 2 where the \ufb01rst column refers to the parameter set. Column 2 to 4 list the results of a purely stochastic solution (see Section 2) where \u201cex. time\u201d refers to the execution time, |Sig| to the average size of the set of signi\ufb01cant states and \u201cerror\u201d refers to the amount of probability mass lost due to the truncation with threshold \u03b4, i.e. 1 \u2212P x\u2208Sig p(x, t). 
The columns 6-10 list the results of our stochastic hybrid approach and column 5 lists the population threshold used for switching in the representations in the stochastic hybrid model. Here, \u201cm1\u201d, \u201cm2\u201d, \u201cm3\u201d refer to the relative error of the \ufb01rst three moments of the joint probability distribution at the \ufb01nal time instant. For this, we compare the (approximate) solution of the hybrid model with the solution of the purely stochastic model. Since we have \ufb01ve species, we simply take the average relative error over all species. Note that even if a species is represented by its conditional expectations, we can approximate its \ufb01rst three moments by E[W(t)i] \u2248P v (\u03a8v(t))i \u00b7 r(v, t) where the i-th power of the vectors are taken componentwise. Finally, in the last two columns we list the results of a purely deterministic solution as explained in Section 3. The last column refers to the average relative error of the expected populations when we compare the purely deterministic solution to the purely stochastic solution. Note that the deterministic solution of the exclusive switch yields an accurate approximation of the \ufb01rst moment (except for pset 2) because of the symmetry of the model. It does, however, not reveal the bistability of the distribution. As opposed to that, the hybrid solution does show this important property. For pset 1 and 3, the conditional expectations of the 3 discrete states are such that two of them match exactly the two stable regions where most of the probability mass is located (see also Example 1 in Sec. 4) . The remaining conditional \fmodel purely stochastic stochastic hybrid purely determ. ex. time |Sig| error pop. thres. ex. time |Sig| m1 m2 m3 ex. time m1 Goutsias 1h 16min 1 \u00b7 106 4 \u00b7 10\u22127 50 8min 47sec 1 \u00b7 105 0.001 0.07 0.13 1sec 0.95 100 48min 57sec 6 \u00b7 105 0.0001 0.0003 0.001 p.-prey 6h 6min 5 \u00b7 105 1 \u00b7 10\u22127 50 8min 56sec 2 \u00b7 104 0.06 0.15 0.27 1sec 0.86 100 1h 2min 8 \u00b7 104 0.04 0.11 0.23 Table 3: Results for Goutsias\u2019 model and the predator-prey model. 0 1 2 3 4 10 12 14 16 18 Species M stoch determ hyb 0 1 2 3 4 0 20 40 60 80 100 120 Species RNA stoch determ hyb Figure 4: Expected populations in Goutsias\u2019 model. expectation of the state where the promotor region is free has small probability and predicts a conditional expectation between the two stable regions. The execution time of the purely stochastic approach is high in the case of pset 1a, because the expected populations of P1 and P2 are high. This yields large sizes of Sig while we iterate over time. During the hybrid solution, we switch when the populations reach the threshold and the size of Sig drops to 3. Thus, the average number of signi\ufb01cant states is much smaller. In the case of pset 1b, the expected populations are small and we use a deterministic representation for protein populations only during a short time interval (at the end of the time horizon). For pset 2, the accuracy of the purely deterministic solution is poor because the model is no longer symmetric. The accuracy of the hybrid solution on the other hand is independent of the symmetry of the model. Goutsias\u2019 Model. In [11], Goutsias de\ufb01nes a model for the transcription regulation of a repressor protein in bacteriophage \u03bb. This protein is responsible for maintaining lysogeny of the \u03bb virus in E. coli. The model involves 6 di\ufb00erent species and the following 10 reactions. 
1: RNA \u2192RNA+M 6: DNA.D \u2192DNA+D 2: M \u2192\u2205 7: DNA.D+D \u2192DNA.2D 3: DNA.D \u2192RNA+DNA.D 8: DNA.2D \u2192DNA.D+D 4: RNA \u2192\u2205 9: M + M \u2192D 5: DNA+D \u2192DNA.D 10: D \u2192M+M We used the following parameters that di\ufb00er from the original parameters used in [11] in that they increases the number of RNA molecules (because with the original parameters, all populations remain small). c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 0.043 7e-4 71.5 3.9e-6 0.02 0.48 2e-4 9e-12 0.08 0.5 Table 3 shows the results for the Goutsias\u2019 model where we use the same column labels as above. We always start initially with 10 molecules of RNA, M, and D, as well as 2 DNA molecules. We choose the time horizon as t = 4. Note that the hybrid solution as well as the purely deterministic solution are feasible for much longer time horizons. The increase of the size of the set of signi\ufb01cant states makes the purely stochastic solution infeasible for longer time horizons. As opposed to that the memory requirements of the hybrid solution remain tractable. In Fig. 4 we plot the means of two of the six species obtained from the purely stochastic (stoch), purely deterministic (determ), and the hybrid (hyb) solution. Note that a purely deterministic solution yields very poor accuracy (relative error of the means is 95%). Predator Prey. We apply our algorithm to the predator prey model described in [9]. It involves two species A and B and the reactions are A \u21922A, A + B \u21922B, and B \u2192\u2205. The model shows sustainable periodic oscillations until eventually the population of B reaches zero. We use this example to test the switching mechanism of our algorithm. We choose rate constants c1 = 1, c2 = 0.03, c3 = 1 and start initially with 30 molecules of type A and 120 molecules of type B. For a population threshold of 50, we start with a stochastic representation of A and a deterministic representation of B. Then, around time 1.3 we switch to a purely stochastic representation since the expectation of B becomes less than 50. Around time t = 6.1 we switch the representation of A because E[A(t)] > 50, etc. We present our detailed results in Table 3. Similar to Goutsias\u2019 model, the deterministic solution has a high relative error whereas the hybrid solution yields accurate results. 6." + } + ], + "Josef Tkadlec": [ + { + "url": "http://arxiv.org/abs/2111.10890v1", + "title": "Natural selection of mutants that modify population structure", + "abstract": "Evolution occurs in populations of reproducing individuals. It is well known\nthat population structure can affect evolutionary dynamics. Traditionally,\nnatural selection is studied between mutants that differ in reproductive rate,\nbut are subject to the same population structure. Here we study how natural\nselection acts on mutants that have the same reproductive rate, but experience\ndifferent population structures. In our framework, mutation alters population\nstructure, which is given by a graph that specifies the dispersal of offspring.\nReproduction can be either genetic or cultural. Competing mutants disperse\ntheir offspring on different graphs. A more connected graph implies higher\nmotility. We show that enhanced motility tends to increase an invader's\nfixation probability, but there are interesting exceptions. For island models,\nwe show that the magnitude of the effect depends crucially on the exact layout\nof the additional links. 
Finally, we show that for low-dimensional lattices,\nthe effect of altered motility is comparable to that of altered fitness: in the\nlimit of large population size, the invader's fixation probability is either\nconstant or exponentially small, depending on whether it is more or less motile\nthan the resident.", + "authors": "Josef Tkadlec, Kamran Kaveh, Krishnendu Chatterjee, Martin A. Nowak", + "published": "2021-11-21", + "updated": "2021-11-21", + "primary_cat": "q-bio.PE", + "cats": [ + "q-bio.PE" + ], + "main_content": "Introduction Evolutionary dynamics is the study of how di\ufb00erent traits arise and disappear in a population of reproducing individuals. Each trait might confer a \ufb01tness advantage (or disadvantage) on its bearer, thus in turn altering the probability that the trait spreads through the population (an event called \ufb01xation) or disappears (extinction). Besides the \ufb01tness advantage, another important factor in determining the fate of a trait over time (its \ufb01xation or extinction) is the spatial structure of the population [1, 2, 3, 4, 5]. For instance, the population might be subdivided into \u201cislands\u201d: An o\ufb00spring of a reproducing individual then typically stays in the same island, but occasionally it migrates to some nearby island. The \ufb01xation probability of a trait then crucially depends on the dispersal pattern, that is, the migration rates among the islands. Incorporation of population structure into a model of selection dynamics substantially improves the descriptive power of the model [1, 5, 6, 7, 8, 9, 10, 11]. Evolutionary graph theory is a powerful framework for studying natural selection in population structures with arbitrarily complex dispersal patterns [12, 13, 14, 15, 16, 17, 18]. On an evolutionary graph (network), individuals occupy the nodes (vertices), and the edges (links) specify where the o\ufb00spring can migrate. Graphs can represent spatial structures, contact networks in epidemiology, social networks, and phenotypic or genotypic structures in biological populations [12, 19, 20, 21, 22, 23]. The question is then: How does a graph structure a\ufb00ect the \ufb01xation probability of a new mutant introduced into a background population of residents? Extensive research over the past decade has produced many remarkable population structures with various desirable properties [24, 25, 26, 27, 28]. As one example, consider a mutation that increases the reproduction rate of the a\ufb00ected individual. Population structures that increase the \ufb01xation probability of such mutations, as compared to the baseline case of unstructured (well-mixed) populations, are known as 1 arXiv:2111.10890v1 [q-bio.PE] 21 Nov 2021 \ftype A: mutant type B: resident Figure 1: In epithelial tissues, di\ufb00erent cell types align along di\ufb00erent lattice-like structures. ampli\ufb01ers of selection. Many ampli\ufb01ers of selection are known, both simple ones and strong ones [29, 30, 31, 32]. In this work, we consider mutations that do not change the reproductive rate of the a\ufb00ected individual, but rather its motility potential. In nature, an altered motility potential could arise in a variety of scenarios. We give three examples. First, consider a species occupying a region that is split by a geographical barrier into two parts. 
If the mutation allows the o\ufb00spring to successfully cross the barrier, the mutants will perceive the population structure as being close to well-mixed, whereas the residents will continue perceiving it as being split into two parts (islands). As a second example, consider structured multicellular organisms. There, cells are arranged in symmetric lattice structures known as epithelia. An epithelial tissue may be described as a two-dimensional sheet de\ufb01ned by vertex points representing wall junctions, one-dimensional edges representing cell walls, and twodimensional faces representing cells. The form of this tissue network is determined by the extracellular matrix (ECM). The ECM is a network consisting of extracellular macromolecules, collagen, and enzymes that provide structural and biochemical support to surrounding cells. The composition of ECM varies between multicellular structures [33, 34, 35, 36, 37]. Thus, when discussing somatic evolution in multicellular organisms, the invading genotype might di\ufb00er in what network structure it is forming [38, 35]. In other words, each type, in the absence of the other type, forms its own and di\ufb00erent extracellular matrix. This leads to di\ufb00erent alignment of cells and thus a new population structure, see Fig. 1. Carcinoma is yet another example of how the tissue organization of the invader and resident type can di\ufb00er from each other. In this case, tumor cells normally have a highly disorganized neighborhood structure, due to the variability in cell-cell adhesion and the lack of proper epithelial programs among tumor cells in the tumor microenvironment [39, 40]. Normal epithelial cells, on the other hand, typically follow symmetric geometric lattice patterns. This change in structure between an invading trait and the resident type can have substantial consequences on the outcome of the evolutionary process. However, in the context of evolutionary graph models, such considerations have not yet received appropriate attention. In order to model di\ufb00erences in the motility potential within the framework of evolutionary graph theory, we represent the population structure as two graphs GA, GB overlaid on top of each other on the same set of nodes [41]. The two graphs GA, GB represent the dispersal patterns for the mutants and residents, respectively. In other words, mutant o\ufb00spring migrate along the edges of GA, whereas resident o\ufb00spring migrate along the edges of GB. We study the \ufb01xation probability \u03c1(GA, GB) of a single neutral mutant who appears at a random node and perceives the population structure as GA, as it attempts to invade a population of residents who perceive the population through GB. There is a large body of literature on the evolution and ecology of migration and dispersal [42, 43, 44, 45, 46], especially for population structures formed by islands (also called patches, demes, or metapopulations) [47, 48, 49]. Our framework is a generalization of this approach in the same way that evolutionary graph theory is a generalization of the vast literature on evolution and ecology in spatially structured populations [12, 9]. The framework is \ufb02exible, allowing us to study both simple and arbitrarily complex population structures of any population size. As such, it facilitates a discovery of new phenomena. Among the graph-theoretical approaches, other ways to model motility and dispersal have been suggested in the literature. 
They allow for the o\ufb00springs to disperse in more complex forms and reach locations that 2 \fare not directly connected to the mother location. This introduces migration potential as an independent quantity relative to the proliferation potential of the types [50, 51, 52, 53, 54, 55, 56]. In those cases, the motility potential is representative of a random motion and it is typically decoupled from the reproduction events. Such random motility and motion has an anti-synergistic relationship with the proliferation potential. In other words, if invaders are more motile, their \ufb01xation probability tends to decrease [51, 52, 53]. Here we show that, in contrast to random motility, enhanced structured motility generally leads to an increase in the \ufb01xation probability of the invading mutant. Speci\ufb01cally, we prove that for any population size N the Complete graph KN is \u201clocally optimal\u201d. That is, if mutants instead perceive the population through a graph MN that misses a single edge, their \ufb01xation probability is decreased. However, we show that the obvious generalization of this claim is not true: By numerically computing the \ufb01xation probabilities for small population sizes, we identify speci\ufb01c circumstances in which making mutants less motile actually increases their \ufb01xation probability. Next, we show that even for simple population structures that correspond to island models, the extent to which increased motility helps the mutant \ufb01xate can vary considerably, depending on the exact layout of the extra connections. Finally, we show that for low-dimensional lattices, the e\ufb00ect of altered motility is comparable to the e\ufb00ect of altered reproductive rate: in the limit of large population size, the \ufb01xation probability of a mutant is either constant or exponentially small, depending on whether it is more or less motile than the residents. 1 Model Standard Moran process on a graph. Within the framework of Evolutionary graph theory [12], a population structure is described as a graph (network), where nodes (vertices) represent locations (sites) and the graph connectivity de\ufb01nes the topology and the neighborhood. There are N nodes and each node is occupied by a single individual. Each individual is either of type A (mutant) with \ufb01tness rA, or of type B (resident) with \ufb01tness rB. The evolutionary dynamics is governed by the standard stochastic discrete-time Moran Birth-death process, adapted to the population structure: at each time point, a single individual is picked for reproduction, proportionally to its \ufb01tness. This focal individual produces o\ufb00spring (a copy of itself), and the o\ufb00spring then migrates and replaces a random neighboring individual. The probability of migration from node i to node j is given by an N \u00d7N dispersal matrix M = (mi,j)N i,j=1. Thus, for undirected, unweighted graphs (which are the focus of this work), the entries mi,j of the dispersal matrix M satisfy mi,j = ( 1/ deg(i), if nodes i and j are adjacent, 0, otherwise. (Here deg(u) is the degree of node u, that is, the number of nodes adjacent to u.) Moran process on two graphs. It is commonly assumed that the dispersal matrix is independent of the two types, that is, both types of individuals perceive the population through the same population structure. Following the recent work of Melissourgos et al. [41], here we study a more general case in which the dispersal pattern depends on the type of the o\ufb00spring that migrates. 
Thus, we consider two graphs GA, GB and the corresponding dispersal matrices M^A = (m^A_{i,j})_{i,j=1}^N, M^B = (m^B_{i,j})_{i,j=1}^N. That is, any time a type A individual reproduces at a node i, the offspring replaces an individual at node j with probability m^A_{ij}. In contrast, the offspring of a type B individual reproducing at node i migrates to node j with probability m^B_{ij}, see Fig. 2.
Figure 2: Moran process with type-dependent dispersal patterns. In each discrete time-step, a random individual reproduces and the offspring proliferates to a neighboring node. Type-A offspring (mutant, blue) migrate along the edges of the blue graph GA, whereas type-B offspring (residents, red) migrate along the red edges of GB. The key quantity is the fixation probability ρ(GA, GB) that a single initial mutant successfully invades the population of residents.
The state of the population at any given time point is described by a vector n = (n1, . . . , nN) of N zeros and ones, where ni = 1 denotes that node i is currently occupied by a type A individual (mutant). The model is a Markov chain with 2^N possible states. Two of the states are absorbing, and they correspond to homogeneous population consisting purely of type A individuals (state n1 = (1, . . . , 1)) or type B individuals (state n0 = (0, . . . , 0)). Formally, the transition probabilities between the states are given by the following equations:
p+_i(n) := P[(n1, . . . , ni, . . . , nN) → (n1, . . . , ni + 1, . . . , nN)] = (Σ_j nj (1 − ni) rA m^A_{ji}) / (Σ_k (nk rA + (1 − nk) rB)),
p−_i(n) := P[(n1, . . . , ni, . . . , nN) → (n1, . . . , ni − 1, . . . , nN)] = (Σ_j (1 − nj) ni rB m^B_{ji}) / (Σ_k (nk rA + (1 − nk) rB)). (1)
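The following Python sketch (not the authors' code) simulates this Markov chain directly for the neutral case rA = rB = 1: each step samples a reproducing node uniformly at random and lets the offspring replace a random neighbour in GA or GB, depending on the parent's type. Repeating many runs, each started from a single mutant at a uniformly random node, gives a Monte Carlo estimate of ρ(GA, GB); for small N one could instead solve the 2^N-state linear system exactly. The example graphs at the bottom are placeholders.

```python
# Illustrative sketch: Monte Carlo estimate of the neutral fixation probability
# rho(G_A, G_B) for the Moran Birth-death process with type-dependent dispersal.
# Graphs are adjacency lists; the example graphs below are placeholders.
import random

def fixation_probability(adj_A, adj_B, runs=20000, rng=random.Random(1)):
    N = len(adj_A)
    fixed = 0
    for _ in range(runs):
        # single mutant (type A) at a uniformly random node
        is_mutant = [False] * N
        is_mutant[rng.randrange(N)] = True
        count = 1
        while 0 < count < N:
            i = rng.randrange(N)                      # neutral case: uniform parent choice
            adj = adj_A if is_mutant[i] else adj_B    # offspring disperses on the parent's graph
            j = rng.choice(adj[i])
            if is_mutant[j] != is_mutant[i]:
                count += 1 if is_mutant[i] else -1
                is_mutant[j] = is_mutant[i]
        fixed += (count == N)
    return fixed / runs

# placeholder example: mutants on a complete graph K_4, residents on a cycle C_4
N = 4
K4 = [[j for j in range(N) if j != i] for i in range(N)]
C4 = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
print("estimated rho(K_4, C_4) ~", fixation_probability(K4, C4))
```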
Given this complexity, we proceed to study pairs of regular structures. To address the second question, in Section 2.2 we consider certain population structures that correspond to island models with two equal islands. We show that two such structures with the same total number of edges exhibit a substantially di\ufb00erent behavior in the limit N \u2192\u221e. This implies that the e\ufb00ect of altered motility in dense regular graphs can not be easily quanti\ufb01ed in terms of a single parameter (the total number of edges). Then, motivated by tissue organization in multicellular organisms, in Section 2.3 we consider 1and 2-dimensional lattices. We show that in this setting, the di\ufb00erence in motility can be quanti\ufb01ed and it 4 \fhas analogous e\ufb00ect to a di\ufb00erence in reproductive rate: increased motility results in mutant \ufb01xation with constant probability, whereas decreased motility causes the \ufb01xation probability to be exponentially small. Related work. The question of computing \ufb01xation probabilities for various versions of Moran processes on graphs has been studied extensively. In principle, for any population structure the \ufb01xation probability can be computed numerically by solving a system of linear equations [58]. However since the size of the system is generally exponential in the population size, this approach is practically feasible only for very small populations, or for very speci\ufb01c population structures [27, 59, 60]. For large population sizes, there exist e\ufb03cient approximation algorithms either in the limit of weak selection [18, 28, 61] or when the underlying graph is undirected [15, 62]. While this manuscript was under preperation, Melissourgos et al. [41] extended the latter result to a special case of the two-graph setting, namely for mutants with reproductive advantage (rA \u2269rB) who perceive the population as a Complete graph (GA = KN). They also established bounds for certain special pairs of graphs, such as the Complete graph invading the Star graph. In contrast, in this work we consider the problem from a biological perspective and we study mutants with no reproductive advantage (rA = rB) who, similarly to the residents, perceive the population structure either as an island model or as a low-dimensional lattice. In this way, the two manuscripts complement each other. We also answer some questions stated in [41] related to the best-response dynamics in the space of all graphs. Namely, we show that while the Complete graph is locally optimal (see Theorem 1 in the Appendix), it is not always the best response (see Fig. 9). 2 Results 2.1 Small Graphs In this section we consider population structures on N labelled nodes, for small values of N. In this regime, the \ufb01xation probability \u03c1(GA, GB) can be computed exactly, by numerically solving a system of 2N linear equations. For N = 2 there is only one connected graph and, by symmetry, the \ufb01xation probability of a single type A individual is equal to 1/2. For N = 3 there are four undirected graphs: a single graph G0 with three edges (equivalently a complete graph, or a cycle), and three di\ufb00erent graphs G1, G2, G3 with two edges each. The corresponding \ufb01xation probabilities are given in Fig. 3b. Note that \u03c1(GA, GB) = 1/N when GA and GB are identical, but in general \u03c1(GA, GB) could be both more than 1/N or less than 1/N, even when GA and GB are isomorphic (if they are not identical), see Fig. 3c. 
G3 G0 G1 G2 \u03c1(GA, GB) G0 G1 G2 G3 G0 G1 G2 G3 GA GB a b 1 3 1 3 1 3 1 3 1 3 1 3 1 3 1 3 1 3 1 3 1 4 5 12 c S4 S\u2032 4 5 12 5 12 1 4 1 4 \u03c1( , ) = 63 208 > 1 4 S\u2032 4 S4 Figure 3: Small populations N = 3. a, There are four connected graphs G0, . . . , G3 on N = 3 labeled nodes. b, The \ufb01xation probabilities \u03c1(GA, GB) for all 4 \u00b7 4 = 16 combinations. c, When GA and GB are isomorphic but not identical, the \ufb01xation probability is not necessarily equal to 1/N. For instance, we have \u03c1(S4, S\u2032 4) = 63/208 . = 0.31. For general N, there are 2N 2\u2212N pairs of graphs on N labeled nodes. Already for N = 6 this is more than a billion pairs, hence in what follows we focus on the case when one of the graphs GA, GB is a Complete graph, denoted KN. We use a shorthand notation \u03c1(G) = \u03c1(G, KN), for the \ufb01xation probability of a single mutant who perceives the population structure as a graph G and invades a population of residents who perceive the population structure as a Complete graph KN. Analogously, we denote by \u03c1\u22c6(G) = \u03c1(KN, G) 5 \fthe \ufb01xation probability of a single mutant living on a Complete graph KN and invading a population of residents who live on G. Fig. 4 shows \u03c1(G) and \u03c1\u22c6(G) for all undirected graphs on N = 6 vertices, based on the number of edges in G. a P6 TT6 b LP6 M6 S6 K6 C6 L6 Residents on a Complete graph Mutants on a Complete graph Figure 4: Small populations N = 6. The \ufb01xation probabilities a, \u03c1(G) = \u03c1(G, KN) and b, \u03c1\u22c6(G) = \u03c1(KN, G) for all 112 graphs G on N = 6 vertices. Each dot corresponds to a graph G, the orange dots correspond to regular graphs. When G = KN, both \u03c1(G) and \u03c1\u22c6(G) are equal to 1/N. Other graphs G6 on six vertices satisfy \u03c1(G6) < 1/6 and \u03c1\u22c6(G6) > 1/6. Maximal and minimal \ufb01xation probability. Among the graphs on 6 vertices, \ufb01xation probability \u03c1(G) is maximized when G is the Complete graph K6. Recall that \u03c1(KN) = 1/N, for any integer N. In relation to this, we prove that \u03c1(KN) is \u201clocally maximal\u201d: that is, we show that if one edge is removed from the Complete graph KN, then the resulting graph MN satis\ufb01es \u03c1(MN) = N\u22122 (N\u22121)2 < 1 N = \u03c1(KN). Similarly, we prove that KN is locally minimal with respect to \u03c1\u22c6(G): we show that \u03c1\u22c6(MN) = 1/(N \u22121) > 1/N, see Theorem 1 in the Appendix. Note that, in contrast, for N = 6 the \ufb01xation probability \u03c1(G) is minimized for the Star graph S6. Here a Star graph, denoted SN, consists of a single node (\u201ccenter\u201d) connected to all other nodes (\u201cleaves\u201d). It is known [41] that \u03c1(SN) \u22641/(N \u22122)! and \u03c1\u22c6(SN) \u21921 as N \u2192\u221e. Relation to the number of edges. In general, \ufb01xation probability \u03c1(G) tends to be higher for graphs G with more edges. However, this is only a rule of thumb. For instance, the Lollipop graph LP6 has a relatively low \ufb01xation probability \u03c1(LP6), given its number of edges. Here a Lollipop graph, denoted LPN, consists of a Complete graph on N \u22121 vertices and a single extra edge connecting the last node. Moreover, adding edges to a graph G to produce a graph G\u2032 sometimes does not increase the \ufb01xation probability but rather decreases it: this is illustrated by the Pan graph P6 and the Treetop graph TT6 for which we have \u03c1(P6) > 0.071 and \u03c1(TT6) < 0.065. 
Here a Pan graph, denoted PN, consists of a cycle on N \u22121 nodes and a single extra edge connecting the last node. In a Treetop graph, denoted TTN, the vertex with degree 3 is further connected to all other vertices. Regular graphs. Recall that a graph is regular if all its nodes have the same degree (that is, the same number of neighbors). Fig. 4 shows that, given a \ufb01xed number of edges, the \ufb01xation probability \u03c1(G) tends to be higher for regular (or almost regular) graphs as compared to non-regular graphs. For instance, for the Cycle graph C6 and the Line graph L6, the \ufb01xation probabilities \u03c1(C6), \u03c1(L6) are relatively high, given the low number of edges of C6 and L6. Here a Cycle graph, denoted CN, is the connected graph where each node is connected to two neighbors, and a Line graph, denoted LN, is the Cycle graph with one edge 6 \fmissing. However, we prove that the Line graph generally does not maximize the \ufb01xation probability among the connected graphs with N \u22121 edges (so-called trees): in particular, for N = 8 the graph G8 consisting of three paths of lengths 2, 2, and 3 meeting at a single vertex satis\ufb01es \u03c1(G8) > 0.0098 > 0.0095 > \u03c1(L8). Moreover, the Isothermal Theorem of [12] does not hold: for two di\ufb00erent regular graphs G, G\u2032 (even with the same degree) the \ufb01xation probabilities \u03c1(G), \u03c1(G\u2032) are generally di\ufb00erent, as witnessed by the two 3-regular graphs with N = 6 nodes and 9 edges. 2.2 Dense regular graphs As suggested by Fig. 4, regular graphs G have high \ufb01xation probability \u03c1(G), compared to other graphs with the same number of edges. Here we consider certain simple regular graphs that contain approximately half of the total possible number of edges. We show that for some such graphs, the \ufb01xation probability is comparable to that of a Complete graph, whereas for other graphs it is substantially smaller. Thus the Isothermal Theorem [12] is strongly violated. Given a population size N (with N even), let BN = KN/2,N/2 be a (complete) Bipartite graph with equal parts N/2, N/2 and let TN be a Two-clique graph obtained by adding N/2 matching edges to a union of two disjoint Complete graphs of size N/2 each, see Fig. 5a. Note that both BN and TN have precisely 1 4N 2 edges, which is roughly half of the edges of KN. Also, note that both BN and TN represent populations subdivided into two large islands: in case of BN, the o\ufb00spring always migrates to the opposite island, whereas in case of TN the o\ufb00spring mostly stays in the same island and it migrates only rarely (namely with probability of the order of 1/N). a b c K8 B8 T8 Complete graph Bipartite graph Two-clique graph Mutants on {KN, BN, TN} invading KN Mutants on KN invading {KN, BN, TN} Figure 5: Dense regular graphs. a, In a (complete) Bipartite graph BN and a Two-clique graph TN, each vertex is connected to N/2 other vertices (here N is even). b, When the mutant lives on BN, the \ufb01xation probability satis\ufb01es \u03c1(BN) \u22480.82 \u00b7 1 N . In contrast, when the mutant lives on TN, the \ufb01xation probability \u03c1(TN) tends to zero faster than 1/N. c, When the residents live on BN or TN, we have \u03c1\u22c6(BN) \u22481.1 \u00b7 1 N ans \u03c1\u22c6(TN) \u22481.4 \u00b7 1 N . We prove that \u03c1(BN) > 0.58/N (see Theorem 2 in the Appendix). 
Since \u03c1(KN) = 1/N, this implies that missing roughly half of the edges only reduces the \ufb01xation probability by a constant factor, independent of the population size N. In fact, numerical computation shows that N \u00b7 \u03c1(BN) \u22480.82 whereas for the Two-clique graph we observe N \u00b7 \u03c1(TN) \u21920, see Fig. 5b. The intuition for this distinction is as follows. On both graphs, the state of the system at any given time point is completely described by the frequencies NL \u2208[0, N] and NR \u2208[0, N] of mutants in the left and the right half. On BN, the two frequencies remain roughly equal throughout the process (NL \u2248NR): indeed, once say NL \u226bNR, more mutant o\ufb00spring is produced on the left and they migrate to the right, thereby helping balance the numbers again. In contrast, on TN the mutants migrate rarely, thus the lineage produced 7 \fby the initial mutant remains trapped in one half for substantial amount of time. Throughout that time, the mutants are \u201cblocking\u201d each other from spreading more than they would block each other if they were split evenly between the two halves: indeed, with all mutants in one half, the probability that a reproducing mutant replaces another mutant (thus not increasing the size of the mutant subpopulation) is twice as large, as compared to the situation where the mutants are evenly split. For small mutant subpopulations, this e\ufb00ect is non-negligible and it causes the \ufb01xation probability \u03c1(BN) to decay faster than inversely proportionally to N. Regarding \u03c1\u22c6, we observe N \u00b7 \u03c1\u22c6(BN) \u22481.11 and N \u00b7 \u03c1\u22c6(TN) \u22481.4, see Fig. 5c. The intuition is that when mutants live on a Complete graph KN, the o\ufb00spring is equally likely to migrate to any location. By randomness, the condition NL \u2248NR is thus maintained throughout most of the early stages of the process. Therefore, as with \u03c1(BN), both \u03c1\u22c6(BN) and \u03c1\u22c6(TN) are inversely proportional to N. To sum up, the graphs BN and TN show a considerably di\ufb00erent behavior in terms of \u03c1 but a qualitatively comparable behavior in terms of \u03c1\u22c6. 2.3 Lattice graphs Here we study sparse regular graphs, speci\ufb01cally lattice graphs. Lattices exist in any number of dimensions. We focus on oneand two-dimensional lattices, since those are biologically relevant. For each dimension, we study the e\ufb00ect of increased or decreased connectivity (degree) of the lattice on the \ufb01xation probability of an invading mutant. One-dimensional lattices. In one dimension, we consider circulation graphs Cird N (already studied in this context from a di\ufb00erent point of view, see [41]). For a \ufb01xed even integer d, a d-Circulation graph, denoted Cird N, consists of N vertices arranged in a cycle, where each vertex is connected to d other vertices, namely the next d/2 vertices and the previous d/2 vertices in the cyclic order, see Fig. 6a. To shorten the notation, we denote by \u03c11D N (d1, d2) = \u03c1(Cird1 N , Cird2 N ) the \ufb01xation probability of a mutant living on a one-dimensional lattice Cird1 N with degree d1 versus a population of residents living on a onedimensional lattice Cird2 N with degree d2. Note that when d1 = d2 = d then \u03c11D N (d, d) = 1/N. 
a b c Cir2 9 Mutants on a denser circulation Mutants on a sparser circulation Cir4 9 Cir6 9 1-D lattices \u03c1(Cir4 N, Cir2 N) \u03c1(Cir6 N, Cir2 N) \u03c1(Cir6 N, Cir4 N) \u03c1(Cir2 N, Cir4 N) \u03c1(Cir4 N, Cir6 N) \u03c1(Cir2 N, Cir6 N) Figure 6: Overlaying 1-D lattices with di\ufb00erent connectivities. a, A circulation graph Cird N is a 1dimensional lattice with periodic boundary and connectivity (degree) d. We consider d \u2208{2, 4, 6}. b, When mutants live on a less connected graph (d1 < d2), their \ufb01xation probability decays to 0 at an exponential rate as N \u2192\u221e(here y-axis is log-scale). c, In contrast, when mutants live on a more densely connected graph (d1 > d2), their \ufb01xation probability tends to a constant. In both panels, the black dashed line shows the neutral baseline 1/N. The values for N \u226413 are computed by numerically solving a large system of linear equations. The values for N \u226514 are obtained by simulating the process 105 times and reporting the proportion of the runs that terminated with the mutant \ufb01xating. 8 \fWhen the degrees d1, d2 of the mutant and resident graph di\ufb00er, the \ufb01xation probability crucially depends on which of the two degrees is larger. When the mutant graph has a lower connectivity (d1 < d2) then \u03c11D N (d1, d2) tends to 0 exponentially quickly as N \u2192\u221e, see Fig. 6b. In contrast, when the mutant graph has a higher connectivity (d1 > d2) then \u03c11D N (d1, d2) tends to a positive constant c that depends on d1 and d2, see Fig. 6c. Speci\ufb01cally, for large N we observe that \u03c11D N (4, 2) \u22480.16, \u03c11D N (6, 2) \u22480.17 and \u03c11D N (6, 4) \u22480.09. Those results are in agreement with bounds 0.11 \u2264\u03c11D N (4, 2) \u22640.25 that we prove analytically by a stochastic domination argument (see Theorem 3 in the Appendix). The intuition behind the argument is that once the mutants form a contiguous block of a large size, the block is more likely to expand rather than to diminish at both interfaces. Indeed, the probability of gaining the boundary node is the same as losing the (other) boundary node but, on top of that, mutants could skip the boundary node, invade the interior of the resident territory and only after that gain the skipped node. This event has a non-negligible probability of happening, hence there is a positive bias favoring the spread of mutants. For a formal proof, see Theorem 3 in the Appendix. Two-dimensional lattices. In two dimensions, we consider graphs drawn on a square lattice with periodic boundary condition. For instance, by connecting each vertex to its 4 closest vertices (Von Neumann neighborhood), we obtain a graph Sq4 N, see Fig. 7a. Similarly, by connecting to 8 closest vertices (Moore neighborhood) we obtain a graph Sq8 N. We also consider other graphs Sqd N with di\ufb00erent connectivities d \u2208{6, 12, 20}. We again shorten the notation by denoting \u03c12D N (d1, d2) = \u03c1(Sqd1 N , Sqd2 N ). a b c Mutants on a sparser 2-D lattice 2-D lattices 1/N 1/N Sq4 16 Sq6 16 Sq8 16 Mutants on a denser 2-D lattice \u03c12D N (4, 6) \u03c12D N (6, 8) \u03c12D N (4, 8) \u03c12D N (6, 4) \u03c12D N (8, 6) \u03c12D N (8, 4) Figure 7: Overlaying 2-D lattices with di\ufb00erent connectivities. a, We consider two-dimensional lattices with degree 4 (Vonn Neumann neighborhood), 6 (triangular grid), and 8 (Moore neighborhood), and with dimensions 3 \u00d7 3, 3 \u00d7 4, . . . , 30 \u00d7 30. 
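A minimal Python sketch of how such lattice dispersal graphs can be built as adjacency lists, to be combined, for example, with a fixation-probability simulation like the one sketched in the Model section. The sizes and degrees used in the example are placeholders.

```python
# Illustrative sketch: adjacency lists for the 1-D circulation graph Cir^d_N and
# the 2-D periodic square lattice Sq^d with d = 4 (von Neumann) or d = 8 (Moore).

def circulation(N, d):
    """Cir^d_N: each node is connected to the next and previous d/2 nodes (d even)."""
    half = d // 2
    return [[(i + s) % N for s in range(-half, half + 1) if s != 0] for i in range(N)]

def square_lattice(L, d=4):
    """Periodic L x L square lattice with d = 4 or d = 8 nearest neighbours."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if d == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    adj = []
    for i in range(L * L):
        r, c = divmod(i, L)
        adj.append([((r + dr) % L) * L + (c + dc) % L for dr, dc in offsets])
    return adj

# placeholder example: overlaying a degree-4 circulation on a degree-2 cycle,
# and an 8-neighbour 2-D lattice on a 5 x 5 torus
cir2 = circulation(20, 2)
cir4 = circulation(20, 4)
sq8 = square_lattice(5, d=8)
print(len(cir2), len(cir4), len(sq8), "nodes; degrees:",
      len(cir2[0]), len(cir4[0]), len(sq8[0]))
```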
b, c Similarly to the 1-D case, the \ufb01xation probability decays to 0 exponentially quickly when d1 < d2, whereas it tends to a positive constant when d1 > d2. The black dashed line shows the baseline 1/N. The values are obtained by simulating the process (at least 105 repetitions per data point). The results are analogous to the case of one-dimensional lattices. When the mutants live on a less connected lattice, their \ufb01xation probability tends to 0 exponentially quickly. In contrast, when they live on a more densely connected lattice, their \ufb01xation probability tends to a constant as the population size N tends to in\ufb01nity (see Fig. 7). E\ufb00ective \ufb01tness. The behavior of the \ufb01xation probability for pairs of low-dimensional lattices is reminiscent of the behavior of the \ufb01xation probability \u03c1(KN; r) of a single mutant with relative reproductive rate r \u0338= 1 in a well-mixed population of N \u22121 other residents. In that setting, we have \u03c1(KN; r) = 1\u22121/r 1\u22121/rN . For any \ufb01xed r \u0338= 1, the formula exhibits one of two possible behaviors in the limit N \u2192\u221e. When r < 1 9 \fthen \u03c1(KN; r) decays approximately as 1/rN. In contrast, when r > 1 then it tends to a positive constant 1 \u22121/r. (When r = 1 we have \u03c1(KN; r) = 1/N by symmetry.) This suggests a possible interpretation: for the neutral mutant, living on a more densely connected lattice has a comparable e\ufb00ect on the \ufb01xation probability as having a certain relative reproductive advantage rd1,d2. Formally, given a population size N and two lattices LN, L\u2032 N we de\ufb01ne the e\ufb00ective \ufb01tness, denoted r(LN, L\u2032 N), as the unique number r such that \u03c1(LN, L\u2032 N) = \u03c1(KN; r). In other words, the e\ufb00ective \ufb01tness is such a number r(LN, L\u2032 N), that a neutral mutant on a lattice LN invading a lattice L\u2032 N has the same \ufb01xation probability as a mutant with relative reproductive advantage r(LN, L\u2032 N) in a well-mixed population. For pairs of low-dimensional lattices with di\ufb00erent connectivities d1, d2, the e\ufb00ective \ufb01tness can be computed from the data presented above, see Fig. 8. We observe that while the e\ufb00ective \ufb01tness depends on the connectivities d, d\u2032 of the two lattices and on their dimensionality, it is mostly independent of the population size N. a Effective r for pairs of 1-D lattices b Effective r for 2-D lattices r1D N (4, 2) r1D N (6, 4) r1D N (6, 2) r1D N (2, 4) r1D N (2, 6) r1D N (4, 6) r2D N (6, 4) r2D N (8, 6) r2D N (8, 4) r2D N (4, 6) r2D N (4, 8) r2D N (6, 8) Figure 8: E\ufb00ective \ufb01tness. Given the connectivities d, d\u2032 of the mutant and resident lattice, we compute the e\ufb00ective \ufb01tness that would result in the same \ufb01xation probability, had both types lived on a Complete graph. a, One-dimensional lattices Cird N with d \u2208{2, 4, 6}. b, Two-dimensional lattices Sqd N for d \u2208{4, 6, 8}. We have r2D N (8, 4) \u22481.06, r2D N (6, 4) \u2248r2D N (8, 6) \u22481.03, and r2D N (6, 8) \u22480.96, r2D N (4, 6) \u22480.95, r2D N (4, 8) \u22480.92. In both panels, the black dashed line shows the neutral baseline r = 1. 3 Discussion In this work, we studied the e\ufb00ect of mutations that, rather than altering the reproductive rate of the a\ufb00ected individual, alter how the individual experiences the population structure. 
To that end, we considered a powerful framework based on the classical Moran Birth-death process on graphs, in which the two types of individuals (the novel mutant and the existing residents) perceive the population structure through di\ufb00erent graphs. As the key quantity, we studied the probability \u03c1(GA, GB) that a single neutral mutant who perceives the population structure as a graph GA successfully invades the population of residents who perceive the population structure as a graph GB. For small population sizes, we computed the pairwise \ufb01xation probabilities numerically, and we observed that \u03c1(GA, GB) tends to be higher when GA contains many edges (that is, the mutant is more motile) and when GA is regular. We note that the latter aspect 10 \fcontrasts with other models of motility, where an increased dispersal potential of the mutant generally diminishes the \ufb01xation probability [51, 52, 53]. Next, motivated by island models, we considered two regular graphs with the same total number of edges and we showed that the corresponding \ufb01xation probabilities are asymptotically di\ufb00erent. In particular, as the population size N increases, the \ufb01xation probabilities decay at di\ufb00erent rates. Thus, in the asymptotic sense, the Isothermal Theorem of [12] is strongly violated. Finally, we studied the biologically relevant case of 1and 2-dimensional lattices and we showed that the dispersal radius has similar e\ufb00ect on the \ufb01xation probability as the reproductive rate. Recall that in large unstructured populations, a bene\ufb01cial mutation \ufb01xates with constant probability, whereas the \ufb01xation probability of a deleterious mutation is exponentially small. Likewise, neutral mutants on lattices with larger dispersal radius have a constant chance of successfully \ufb01xating, whereas having lower dispersal radius leads to \ufb01xation of the mutant only with exponentially small probability. Thus, in terms of the \ufb01xation probability of the mutant, perceiving the population through a more densely connected lattice is e\ufb00ectively equivalent to having an increased reproductive rate. Moving on to more complex (though perhaps less realistic) population structures, many natural questions arise. We conclude by commenting on three of them. Recall that for any graph GN on N nodes we have \u03c1(GN, GN) = 1/N [57]. First, Fig. 4 suggests that \u03c1(GA N, KN) < 1/N for all mutant graphs GA N \u0338= KN. While we can prove that \u03c1(MN, KN) < 1/N for a graph MN that misses a single edge (see Theorem 1 in the Appendix), the general claim is left as an open problem. Similarly, we do not know whether \u03c1(KN, GB N) > 1/N holds for all resident graphs GB N \u0338= KN (we do know that it holds for GB N = MN). Second, following the game theory perspective, Melissourgos et al. [41] asked what is the best mutant response to a given resident graph. That is, given a resident graph GB N on N nodes, which mutant graph GA N on N nodes maximizes the \ufb01xation probability \u03c1(GA N, GB N)? Our results for small graphs show that although the Complete graph KN is frequently the best mutant response, it is not always the case, see Fig. 9. In particular, when the residents live on a Star graph S6, the population is easier to invade through a graph M6 that misses a single edge, rather than through the Complete graph K6 \u2013 direct computation gives \u03c1(M6, S6) > 0.643 > 0.641 > \u03c1(K6, S6). 
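The small-population values reported above are described as coming from "numerically solving a large system of linear equations". Below is a minimal sketch of one way to set up that computation for the neutral two-graph Birth-death process, under my reading of the update rule (the reproducing individual is chosen uniformly at random, and its offspring replaces a uniformly random neighbour in the reproducer's own graph); the helper names and the S6/M6/K6 check are illustrative, not the authors' code, and the dense solver is feasible only for small N.

```python
# Sketch of the exact "linear system" computation for the neutral two-graph
# Birth-death process: mutants disperse along GA, residents along GB.
from itertools import combinations
import numpy as np

def fixation_probability(GA, GB):
    """GA[v], GB[v]: neighbour lists of vertex v in the mutant / resident graph."""
    N = len(GA)
    states = [frozenset(c) for k in range(N + 1) for c in combinations(range(N), k)]
    idx = {s: i for i, s in enumerate(states)}
    A, b = np.eye(len(states)), np.zeros(len(states))
    for s, i in idx.items():
        if len(s) == N:
            b[i] = 1.0                       # all-mutant state: absorbing, value 1
        elif 0 < len(s) < N:
            for v in range(N):               # v reproduces with probability 1/N (neutral)
                nbrs = GA[v] if v in s else GB[v]
                for u in nbrs:               # offspring lands on u with prob. 1/deg(v)
                    t = (s | {u}) if v in s else (s - {u})
                    A[i, idx[t]] -= 1.0 / (N * len(nbrs))
    x = np.linalg.solve(A, b)                # x[s] = fixation probability from state s
    return float(np.mean([x[idx[frozenset({v})]] for v in range(N)]))

def complete(n):
    return [[u for u in range(n) if u != v] for v in range(n)]

def star(n):
    return [list(range(1, n))] + [[0] for _ in range(n - 1)]

def missing_one_edge(n):                     # complete graph with the edge {0, 1} removed
    g = complete(n)
    g[0].remove(1)
    g[1].remove(0)
    return g

if __name__ == "__main__":
    S6, K6, M6 = star(6), complete(6), missing_one_edge(6)
    # Under this reading, the outputs should be close to the values quoted above
    # (rho(M6, S6) > 0.643 > 0.641 > rho(K6, S6)).
    print(fixation_probability(M6, S6), fixation_probability(K6, S6))
```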
We note that the di\ufb00erence is minor \u2013 both mutant graphs M6 and K6 provide a \ufb01xation probability well over the neutral threshold value 1/6 \u22480.167. S6 M6 G6 G\u2032 6 a b Figure 9: Best-response graphs. The Complete graph is sometimes not the best response when optimizing the \ufb01xation probability \u03c1. a. The resident population living on the Star graph S6 (red) is easier to invade by mutants living on M6 (blue) than by mutants living on the Complete graph K6. b, The mutants living on a graph G6 (blue) have a harder time invading the graph G\u2032 6 than they have invading the Complete graph K6. For the complementary question of what is the best resident response GB N to a given mutant graph GA N, the situation is analogous: while the Complete graph is generally hard to invade, it is sometimes not the hardest one. As an example (see Fig. 9b), when mutants live on a graph G6 then for the graph G\u2032 6 we have \u03c1(G6, G\u2032 6) < 0.025 < 0.026 < \u03c1(G6, K6). Third, we observe that when mutants and residents live on di\ufb00erent graphs GA N \u0338= GB N with the same edge density, the \ufb01xation probability \u03c1(GA N, GB N) typically drops below 1/N. The intuition is that the mutant subpopulation tends to form clusters in GA N but not necessarily in GB N. As a consequence, mutants block each other from spawning onto a resident but they do not guard each other from being replaced by residents. As an extreme example of this phenomenon, suppose that mutants live on a long cycle GA N = CN and they currently form a contiguous block of 10 individuals. The probability p+ that, in a single step, a mutant is selected for reproduction and its o\ufb00spring replaces a resident is equal to p+ = 1/N. However, if the residents perceive the population as a completely di\ufb00erent cycle GB N = C\u2032 N (such that no two mutants are 11 \fadjacent in C\u2032 N), then the probability p\u2212that a resident replaces a mutant equals p\u2212= 10/N. Thus, in a single step, the size of the mutant subpopulation is 10\u00d7 more likely to decrease than it is to increase. To some extent, similar e\ufb00ects occur whenever the two graphs GA N and GB N di\ufb00er. This suggests that for distinct graphs GA N \u0338= GB N we typically have \u03c1(GA N, GB N) < 1/N. In one direction, this phenomenon can be easily overcome, for instance when one graph is denser than the other one. Moreover, as witnessed by the two Star graphs depicted in Fig. 3, there are pairs of irregular graphs for which the phenomenon is overcome in both directions. However, we are not aware of any such pair GN, G\u2032 N of regular graphs. Hence this is another open problem: do there exist two regular graphs such that both \u03c1(GN, G\u2032 N) > 1/N and \u03c1(G\u2032 N, GN) > 1/N? Acknowledgements K.C. acknowledges support from ERC Consolidator grant no. (863818: ForM-SMart). Author Contributions All authors designed the research. J.T. and K.K. performed the mathematical analysis. J.T. wrote the computer code and produced the \ufb01gures. All authors wrote the manuscript. Competing interests The authors declare no competing interests. Code and Data availability The datasets generated during and/or analysed during the current study are available in the Figshare repository, https://figshare.com/s/2d9cc41100151547b61a." 
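For larger populations, the paper reports estimating ρ(G_A, G_B) by repeated simulation (10^5 runs per data point). Below is a Monte Carlo sketch under the same reading of the neutral Birth-death update rule as above; the function names, run counts and the example pair of cycles (echoing the contiguous-block discussion) are illustrative assumptions, not the authors' code.

```python
# Monte Carlo estimate of rho(GA, GB) for the neutral two-graph process.
import random

def simulate_once(GA, GB, rng):
    N = len(GA)
    mutants = {rng.randrange(N)}                 # uniformly placed initial mutant
    while 0 < len(mutants) < N:
        v = rng.randrange(N)                     # neutral: reproducer chosen uniformly
        nbrs = GA[v] if v in mutants else GB[v]
        u = rng.choice(nbrs)                     # offspring replaces a random neighbour
        if v in mutants:
            mutants.add(u)
        else:
            mutants.discard(u)
    return len(mutants) == N

def estimate_rho(GA, GB, runs=100_000, seed=0):
    rng = random.Random(seed)
    return sum(simulate_once(GA, GB, rng) for _ in range(runs)) / runs

if __name__ == "__main__":
    # mutants perceive one cycle, residents a differently ordered cycle on the
    # same 20 vertices, as in the contiguous-block example discussed above
    n = 20
    GA = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
    order = list(range(n))
    random.Random(1).shuffle(order)
    pos = {vertex: i for i, vertex in enumerate(order)}
    GB = [[order[(pos[v] - 1) % n], order[(pos[v] + 1) % n]] for v in range(n)]
    print(estimate_rho(GA, GB, runs=20_000))     # expected below 1/n per the discussion above
```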
+ }, + { + "url": "http://arxiv.org/abs/1906.02785v1", + "title": "Limits on amplifiers of natural selection under death-Birth updating", + "abstract": "The fixation probability of a single mutant invading a population of\nresidents is among the most widely-studied quantities in evolutionary dynamics.\nAmplifiers of natural selection are population structures that increase the\nfixation probability of advantageous mutants, compared to well-mixed\npopulations. Extensive studies have shown that many amplifiers exist for the\nBirth-death Moran process, some of them substantially increasing the fixation\nprobability or even guaranteeing fixation in the limit of large population\nsize. On the other hand, no amplifiers are known for the death-Birth Moran\nprocess, and computer-assisted exhaustive searches have failed to discover\namplification. In this work we resolve this disparity, by showing that any\namplification under death-Birth updating is necessarily \\emph{bounded} and\n\\emph{transient}. Our boundedness result states that even if a population\nstructure does amplify selection, the resulting fixation probability is close\nto that of the well-mixed population. Our transience result states that for any\npopulation structure there exists a threshold $r^*$ such that the population\nstructure ceases to amplify selection if the mutant fitness advantage $r$ is\nlarger than $r^\\star$. Finally, we also extend the above results to\n$\\delta$-death-Birth updating, which is a combination of Birth-death and\ndeath-Birth updating. On the positive side, we identify population structures\nthat maintain amplification for a wide range of values $r$ and $\\delta$. These\nresults demonstrate that amplification of natural selection depends on the\nspecific mechanisms of the evolutionary process.", + "authors": "Josef Tkadlec, Andreas Pavlogiannis, Krishnendu Chatterjee, Martin A. Nowak", + "published": "2019-06-06", + "updated": "2019-06-06", + "primary_cat": "q-bio.PE", + "cats": [ + "q-bio.PE", + "math.PR" + ], + "main_content": "Introduction The evolutionary rate of populations is determined by their ability to accumulate advantageous mutations [1, 2, 3, 4, 5]. Once a new mutant has been randomly generated in a population, its fate is governed by the dynamics of natural selection and random drift. The most important quantity in this process is the \ufb01xation probability which is the probability that the invading mutant \ufb01xates in the population as opposed to being swept away. A classical mathematical framework for rigorous study of the mutant spread is the discrete-time Moran process [6]. Given a population of N individuals, at each time step, (1) an individual is chosen randomly for reproduction proportionally to its \ufb01tness and (2) an individual dies uniformly at random; then the o\ufb00spring of the reproducing individual replaces the dead individual, and the population size remains constant. Many evolutionary properties are a\ufb00ected by the spatial arrangement of the population [7, 8, 9, 10, 11, 12, 13, 14, 15]. Evolutionary graph theory represents population structure of size N by a graph (network) GN [16, 17, 18, 19, 20, 21]: each individual occupies a vertex, and neighboring vertices mark sites of spatial proximity (see Fig. 1a). Mutant spread must respect the structure, in that the o\ufb00spring of a reproducing individual in one vertex can only move to a neighboring vertex. 
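The Moran step just described is straightforward to simulate directly in the well-mixed case, where only the number of mutants matters; the estimate can then be compared with the Birth-death closed form quoted in the next paragraph. A minimal sketch (names are illustrative, not from the paper):

```python
# Well-mixed Moran Birth-death process, tracking only the mutant count k.
import random

def moran_well_mixed(N: int, r: float, rng: random.Random) -> bool:
    """Run to absorption; return True if the single initial mutant fixates."""
    k = 1
    while 0 < k < N:
        birth_is_mutant = rng.random() < r * k / (r * k + (N - k))  # fitness-proportional birth
        death_is_mutant = rng.random() < k / N                      # uniform death
        if birth_is_mutant and not death_is_mutant:
            k += 1
        elif death_is_mutant and not birth_is_mutant:
            k -= 1
    return k == N

if __name__ == "__main__":
    rng = random.Random(0)
    N, r, runs = 20, 1.5, 10_000
    estimate = sum(moran_well_mixed(N, r, rng) for _ in range(runs)) / runs
    print(estimate, (1 - 1/r) / (1 - r**-N))   # the two numbers should roughly agree
```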
The Moran process on graphs has two distinct variants: \u2022 In the Birth-death Moran process, the death event is conditioned on the Birth event. That is, \ufb01rst an individual is chosen for reproduction and then its o\ufb00spring replaces a random neighbor (see Fig. 1b). \u2022 In the death-Birth Moran process, the Birth event is conditioned on the death event. That is, \ufb01rst an individual is chosen for death and then its neighbors compete to \ufb01ll the vacancy with their o\ufb00spring (see Fig. 1c). The \ufb01xation probability of the invading mutant is a function of its \ufb01tness r, as well as the graph GN. In alignment with most of the literature, we focus on advantageous mutants, where r > 1. The well-mixed population of size N is represented by a complete graph KN. In the Birth-death Moran process, the \ufb01xation probability in the well-mixed population is \u03c1Bd(KN, r) = (1\u22121/r)/(1\u2212 1/rN) [3]. Under death-Birth updating, the \ufb01xation probability is \u03c1dB(KN, r) = (1 \u22121/N) \u00b7 (1 \u2212 1/r)/(1 \u22121/rN\u22121) [22]. Speci\ufb01cally, as N \u2192\u221e, both the expressions converge to 1 \u22121/r. Ampli\ufb01ers of natural selection are graphs that increase the \ufb01xation probability of the advantageous mutants compared to the well-mixed population [23, 16]. Under Birth-death updating, many amplifying families of graphs have been constructed, such as the Star graph [24, 25, 26], the Complete Bipartite graph [27] and the Comet graph [28], as well as families that guarantee \ufb01xation in the limit of large population size [16, 29, 30, 31, 32]. Extensive computer simulations on small populations have also shown that many graphs have amplifying properties [33, 34, 35]. While the above results hold for the Birth-death Moran process, no ampli\ufb01ers are known for the death-Birth 2 \fa b Birth-death c death-Birth Figure 1: Moran process on graphs. a, The spatial structure is represented by a graph. Each vertex represents a site and is occupied either by a resident (red) with \ufb01tness 1 or by a mutant (blue) with relative \ufb01tness r > 1. Each edge can be one-way (arrow) or two-way. b, In each step of the Birth-death process, one individual is sampled for reproduction proportionally to \ufb01tness, and then its o\ufb00spring replaces a random neighbor. c, In each step of the death-Birth process, a random individual dies and then it is replaced by a neighbor sampled proportionally to \ufb01tness. Moran process, and computer-assisted search has found that, under death-Birth updating, most small graphs suppress the \ufb01xation probability rather than amplifying it [34]. Here we prove two negative results on the existence of ampli\ufb01ers under death-Birth updating. Our \ufb01rst result states that the \ufb01xation probability in any graph is bounded by 1\u22121/(r+1). Hence, even if ampli\ufb01ers do exist, they can provide only limited ampli\ufb01cation. In particular, there are no families of graphs that would guarantee \ufb01xation in the limit of large population size. Our second result states that for any graph GN, there exists a threshold r\u2217such that for all r \u2265r\u2217, the \ufb01xation probability is bounded by \u03c1dB(r, KN). Hence, even if some graphs amplify for certain values of r, their amplifying property is necessarily transient, and lost when the mutant \ufb01tness advantage r becomes large enough. We note that a companion work [36] identi\ufb01es transient ampli\ufb01ers among graphs that have weighted edges. 
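The two well-mixed baselines just quoted are simple to evaluate; the sketch below transcribes them and illustrates that both converge to 1 − 1/r as N → ∞ (function names are illustrative).

```python
# Direct transcription of the well-mixed fixation probabilities on K_N.

def rho_bd_complete(N: int, r: float) -> float:
    """Birth-death updating on K_N: (1 - 1/r) / (1 - 1/r^N)."""
    return (1 - 1/r) / (1 - r**-N)

def rho_db_complete(N: int, r: float) -> float:
    """death-Birth updating on K_N: (1 - 1/N) * (1 - 1/r) / (1 - 1/r^(N-1))."""
    return (1 - 1/N) * (1 - 1/r) / (1 - r**-(N - 1))

if __name__ == "__main__":
    r = 2.0
    for N in (10, 100, 1000):
        print(N, rho_bd_complete(N, r), rho_db_complete(N, r))
    # both columns approach 1 - 1/r = 0.5 as N grows, as stated in the text
```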
Finally, we also study the mixed \u03b4-death-Birth Moran process, for \u03b4 \u2208[0, 1], under which death-Birth and Birth-death updates happen with rate \u03b4 and 1 \u2212\u03b4, respectively [37]. We establish analogous negative results for mixed \u03b4-updating, for any \ufb01xed \u03b4 > 0. Note that as \u03b4 vanishes (\u03b4 \u21920), we approach (pure) Birth-death Moran process for which both universal and super ampli\ufb01ers exist. We \ufb01nd that some of those ampli\ufb01ers are less sensitive to variations in \u03b4 than other. In particular, certain bipartite structures achieve transient ampli\ufb01cation for \u03b4 as big as 0.5. 2 Model The Moran process on graphs In evolutionary graph theory, a population structure has traditionally been represented by a graph GN = (V, E), where V is the set of N vertices representing sites and E \u2286V \u00d7 V is the set of edges representing neighborships between the sites. We say that GN is undirected when all edges are two-way, that is, (v, u) is an edge whenever (u, v) is. Since the focus of this work is on death-Birth updating, we require that there are no self-loops in GN (that is, (u, u) is never an edge). More generally, a population structure can be represented by a weighted graph. In that case, every edge (u, v) is assigned a weight wu,v \u2208[0, 1] which indicates the strength of interaction from site u to site v. In full generality, we allow for non-symmetric weights (that is, possibly wu,v \u0338= wv,u). The family of unweighted graphs is recovered when we insist that all edges have weight 1. Even though 3 \four primary focus is on unweighted graphs, our results apply to weighted graphs too. A population of N residents inhabits the graph GN with a single individual occupying each of the vertices of GN. In the beginning of the Moran process, one vertex is chosen uniformly at random to host the initial mutant. The mutant has a \ufb01tness advantage r > 1, whereas each of the residents has \ufb01tness normalized to 1. We denote by f(u) the \ufb01tness of the individual occupying the vertex u. From that point on, the process proceeds in discrete time steps, according to one of the two variants of updating: 1. Under death-Birth (dB) updating, \ufb01rst an individual is selected to die uniformly at random. This leaves a vacancy in the corresponding vertex v of GN, and the neighbors of v compete to \ufb01ll it. Speci\ufb01cally, every neighbor u of v is chosen for reproduction with probability proportional to f(u) \u00b7 wu,v, and the selected individual places a copy of itself on v. 2. Under Birth-death (Bd) updating, \ufb01rst an individual is selected to reproduce with probability proportional to its \ufb01tness, that is, with probability proportional to f(u) for the individual occupying the vertex u. The o\ufb00spring of u then replaces a random neighbor v of u with probability proportional to wu,v. We also consider a combination of dB and Bd updating, which yields the mixed \u03b4-death-Birth Moran process. 3 Under \u03b4-death-Birth (\u03b4-dB) updating, for \u03b4 \u2208[0, 1], in each step, the Moran process follows a dB update with probability \u03b4, and a Bd update with probability 1 \u2212\u03b4. In this notation, \u03b4 = 1 corresponds to pure dB updating, and \u03b4 = 0 corresponds to pure Bd updating. 
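The three update rules defined above (dB, Bd, and the mixed δ-dB rule) translate directly into a single simulation step on a weighted, possibly directed graph. The sketch below is one such transcription under the stated definitions; it is not the authors' code, and the weight-matrix representation, function names and example parameters are assumptions made for illustration.

```python
# One step of delta-dB updating on a graph given by a weight matrix w
# (w[u][v] > 0 iff there is an edge from u to v; no self-loops).
import random

def weighted_choice(items, weights, rng):
    return rng.choices(items, weights=weights, k=1)[0]

def delta_db_step(w, mutants, r, delta, rng):
    """Advance the set of mutant-occupied vertices by one delta-dB step (in place)."""
    N = len(w)
    fitness = lambda x: r if x in mutants else 1.0
    if rng.random() < delta:
        # dB: a uniformly random individual v dies; an in-neighbour u fills the
        # vacancy with probability proportional to fitness(u) * w[u][v]
        v = rng.randrange(N)
        nbrs = [u for u in range(N) if w[u][v] > 0]
        u = weighted_choice(nbrs, [fitness(u) * w[u][v] for u in nbrs], rng)
    else:
        # Bd: u reproduces with probability proportional to fitness(u); its
        # offspring replaces an out-neighbour v with probability prop. to w[u][v]
        u = weighted_choice(range(N), [fitness(x) for x in range(N)], rng)
        nbrs = [v for v in range(N) if w[u][v] > 0]
        v = weighted_choice(nbrs, [w[u][v] for v in nbrs], rng)
    if u in mutants:
        mutants.add(v)
    else:
        mutants.discard(v)

def simulate(w, r, delta, rng):
    """Uniform initialization; run to fixation (True) or extinction (False)."""
    N = len(w)
    mutants = {rng.randrange(N)}
    while 0 < len(mutants) < N:
        delta_db_step(w, mutants, r, delta, rng)
    return len(mutants) == N

if __name__ == "__main__":
    # unweighted ring on 6 vertices, r = 2, delta = 0.5 (illustrative parameters)
    N = 6
    w = [[1 if abs(i - j) % N in (1, N - 1) else 0 for j in range(N)] for i in range(N)]
    rng = random.Random(0)
    print(sum(simulate(w, 2.0, 0.5, rng) for _ in range(5_000)) / 5_000)
```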
We only consider strongly connected graphs for which, with probability 1 in the long run, the Moran process leads either to the \ufb01xation of the mutant in the population (all vertices are eventually occupied by mutants) or to the extinction of the mutant (all vertices are eventually occupied by residents). We denote by \u03c1dB(GN, r), \u03c1Bd(GN, r) and \u03c1\u03b4(GN, r) the \ufb01xation probability under dB, Bd and \u03b4-dB updating, respectively. Ampli\ufb01ers The well-mixed population is modelled by the undirected complete graph KN. The \ufb01xation probability on KN under Bd updating is [3] \u03c1Bd(KN, r) = 1 \u22121/r 1 \u22121/rN . (1) Similarly, the \ufb01xation probability on KN under dB updating is [22] \u03c1dB(KN, r) = \u0012 1 \u22121 N \u0013 \u00b7 1 \u22121/r 1 \u22121/rN\u22121 . (2) Speci\ufb01cally, as N \u2192\u221e, both the expressions converge to 1 \u22121/r. Population structure can a\ufb00ect the \ufb01xation probability of advantageous mutants. Given r > 1, a graph GN is a Bd (resp., dB) r-ampli\ufb01er if \u03c1Bd(GN, r) > \u03c1Bd(KN, r) (resp., \u03c1dB(GN, r) > \u03c1dB(KN, r)). In words, a graph GN is an r-ampl\ufb01ier the \ufb01xation probability of mutants with \ufb01tness advantage r on GN is larger than the \ufb01xation probability of the same mutants on the well-mixed 4 \fpopulation. If GN is an r-ampli\ufb01er for all r > 1, then we say that GN is a universal ampli\ufb01er. (In the earlier literature, the word \u201campli\ufb01er\u201d had typically been used to mean \u201cuniversal ampli\ufb01er\u201d.) If GN is an r-ampli\ufb01er for only a limited range of r-values, that is, there exists a threshold value r\u22c6 such that GN does not amplify for any r > r\u22c6, we say that GN is a transient ampli\ufb01er. (Note that, in principle, an ampli\ufb01er could be neither universal nor transient \u2013 it could inde\ufb01nitely alternate between amplifying and suppressing as r grows larger.) A notorious example of a universal Bd ampli\ufb01er is a Star graph SN (of any \ufb01xed size N \u22653) which consists of one central vertex connected to each of the N \u22121 surrounding leaf vertices. As N \u2192\u221e, the \ufb01xation probability of a mutant with \ufb01tness advantage r on SN converges to \u03c1Bd(SN, r) \u2192N\u2192\u221e1 \u22121/r2. E\ufb00ectively, the Star-like population structure rescales the \ufb01tness of the mutant from r to r2. Under Bd updating, there also exist population structures that amplify for only some values of r > 1 [38]. Although the \ufb01xation probability on KN is known for dB and Bd updating, there is no known formula for \u03c1\u03b4(KN, r). We computed \u03c1\u03b4(KN, r) numerically for various values of N and r and we observed that \u03c1\u03b4(KN, r) is essentially indistinguishable from the linear interpolation b \u03c1\u03b4(KN, r) = \u03b4 \u00b7 \u03c1dB(KN, r) + (1 \u2212\u03b4) \u00b7 \u03c1Bd(KN, r) (3) between \u03c1Bd(KN, r) and \u03c1dB(KN, r) (see Fig. 2a). In fact, the ratio b \u03c1\u03b4(KN, r)/\u03c1\u03b4(KN, r) appears to be well within 1 % of 1, and most of the time even within 0.01 % of 1 (see Fig. 2b). Therefore, in \u03b4-dB updating we use b \u03c1\u03b4(KN, r) as the baseline comparison, and say that a graph GN is a \u03b4-dB r-ampli\ufb01er if \u03c1\u03b4(GN, r) > b \u03c1\u03b4(KN, r). Implied scale of \ufb01tness We are typically interested in the \ufb01xation probability when the population size is large. 
This leads us to the study of families of graphs {GN}\u221e N=1 of increasing population size, the \ufb01xation probability of which is taken in the limit of N \u2192\u221e. Graph families can be classi\ufb01ed by ampli\ufb01cation strength. Given such a family, the implied scale of \ufb01tness for that family [23] is a function isf(r) such that lim inf N\u2192\u221e\u03c1dB(GN, r) = 1 \u22121/ isf(r) Speci\ufb01cally, for the family of complete graphs KN we have isf(r) = r, under both dB updating and Bd updating. We say that the family is (at most) a bounded ampli\ufb01er if isf(r) \u2264r + c0 for some constant c0. We say that the family is (at least) a linear ampli\ufb01er if isf(r) \u2265c1r + c0 for some constants c1 > 1, c0. We say that the family is (at least) a quadratic ampli\ufb01er if isf(r) \u2265 c2r2 + c1r + c0 for some constants c2 > 0, c1, c0. For instance, Star graphs are quadratic ampli\ufb01ers under Bd updating [3], however they do not amplify under dB updating [22]. Finally, the family is a super ampli\ufb01er if isf(r) = \u221efor any r > 1. That is, for any r > 1 we have \u03c1dB(GN, r) \u2192N\u2192\u221e= 1 and hence \ufb01xation is guaranteed in the limit of large population size (see Fig. 3). The above de\ufb01nitions carry naturally to the \u03b4-dB Moran process, where the implied scale of \ufb01tness is de\ufb01ned such that lim inf N\u2192\u221e\u03c1\u03b4(GN, r) = 1 \u22121/ isf(r) 5 \fb a Figure 2: Linear interpolation for \u03b4-dB updating. On a complete graph KN, the \ufb01xation probability \u03c1\u03b4(KN, r) under \u03b4-dB updating is essentially indistinguishable from the linear interpolation b \u03c1\u03b4(KN, r) between \ufb01xation probability under pure dB and pure Bd updating. a, The x-axis shows \u03b4 \u2208[0, 1], the y-axis shows the \ufb01xation probability \u03c1\u03b4(KN, r) (marks) and the linear interpolation b \u03c1\u03b4(KN, r) (lines) for several pairs (N, r). The marks lie almost exactly on the lines. b, The ratio b \u03c1\u03b4(KN, r)/\u03c1\u03b4(KN, r) is well within 1 %, typically even within 0.1 % of 1. The interpolation is exact for N = 2. Questions For the Bd Moran process, various results on ampli\ufb01ers exist. The Star graph is a prominent example of a graph that is a quadratic ampli\ufb01er for any r > 1 [24, 25, 27, 26, 28] and there exist super ampli\ufb01ers, that is, families of graphs that guarantee \ufb01xation in the limit of large population size, for any \ufb01xed r > 1 [16, 29, 30, 31, 32]. Furthermore, computer simulations on small populations have shown that many small graphs have amplifying properties [33, 34, 35]. Given the vast literature on results under Bd updating, the following questions arise naturally. Q1: Do there exist universal ampli\ufb01ers for the dB Moran process? Q2: Do there exist families that are amplifying for the dB Moran process? More speci\ufb01cally, do there exist linear, quadratic, or even super ampli\ufb01ers? The \ufb01rst question is concerned with small populations, and asks for a graph that ampli\ufb01es for all r > 1. The second question asks for ampli\ufb01cation in the limit of large populations. 3 Results Here we establish some useful observations about the dB Moran process, and then answer questions Q1 and Q2. 6 \fc Birth-death death-Birth a b Figure 3: Implied scale of \ufb01tness. The implied scale of \ufb01tness for several graph families. a, Complete graphs KN, Ring graphs RN, complete Bipartite graphs B\u221a N,N\u2212 \u221a N and Star graphs SN. 
b, Under Birth-death updating, the Star graphs and the Bipartite graphs are quadratic ampli\ufb01ers, whereas the Ring graphs are equivalent to Complete graphs. There also exist super ampli\ufb01ers that guarantee \ufb01xation with probability 1 for any r > 1. (To model the limit N \u2192\u221ewe show values for N = 400.) c, Under death-Birth updating, none of Bipartite graphs, Star graphs or Ring graphs amplify selection. First, consider the dB Moran process on any (\ufb01xed) graph GN. The \ufb01xation probability can be bounded from above in terms of the number of neighbors of the vertex where the initial mutant has appeared. As a simple example, consider that GN is unweighted and undirected, and each vertex has precisely d neighbors (e.g. a square lattice where d = 4). Denote by v the vertex that hosts the initial mutant. We observe that if v is selected for death before any of its d neighbors then the mutants have just gone extinct. Since this event has probability 1/(d + 1), the \ufb01xation probability is at most 1\u22121/(d+1) = d/(d+1), regardless of r. A more re\ufb01ned version of this argument, which also accounts for arbitrary graphs, yields the following stronger bound. Lemma 1. Fix r > 1 and let GN be a graph (possibly with directed and/or weighted edges) with average out-degree d. Then \u03c1dB(GN, r) \u2264 d \u00b7 r d \u00b7 r + d + r \u22121. For large enough r and small (\ufb01xed) d, the bound of Lemma 1 coincides with the bound we obtained with our sketchy argument above. Observe that even when r \u2192\u221e, the lemma yields an upper-bound on the \ufb01xation probability that is strictly less than 1. On the other hand, under Bd updating, the \ufb01xation probability tends to 1 as r \u2192\u221e, regardless of the graph. Hence we have the following corollary, which states that for all population structures, Bd updating favors \ufb01xation more than dB updating, provided that the \ufb01tness advantage is large enough. Corollary 1. For any graph GN, there exists some r\u22c6, such that for all r > r\u22c6, we have \u03c1Bd(GN, r) > \u03c1dB(GN, r). 7 \fAmpli\ufb01ers of the dB Moran process Here we answer the two questions Q1, Q2. We start with Q1 which asks for the existence of universal ampli\ufb01ers under dB updating. We show the following theorem. Theorem 1 (All dB ampli\ufb01ers are transient). Fix a non-complete graph GN (possibly with directed and/or weighted edges). Then there exists r\u22c6> 1 such that for all r > r\u22c6we have \u03c1dB(GN, r) < \u03c1dB(KN, r), where KN is the complete graph on N vertices. In particular, we can take r\u22c6= 2N2. Since our baseline for ampli\ufb01ers is the complete graph KN, Theorem 1 implies that, under dB updating, every (unweighted) graph is, at best, a transient ampli\ufb01er. Moreover, the only graph that may be a universal (that is, non-transient) ampli\ufb01er is a weighted version of the complete graph KN. This is in sharp contrast to Bd updating, for which universal ampli\ufb01ers exist (e.g., the Star graph [27]). To sketch the intuition behind Theorem 1, consider again, for simplicity, our toy example of an unweighted undirected graph GN where each vertex has precisely d neighbors. Then the \ufb01xation probability is at most d/(d + 1), regardless of r. On the other hand, equation 2 implies that the \ufb01xation probability on a complete graph tends to 1 \u22121/N as r \u2192\u221e. 
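The comparison underlying Theorem 1 — the Lemma 1 bound against ρ_dB(K_N, r) from equation (2) — can be checked numerically for a concrete N and average degree d. A small illustrative sketch (the grid search and function names are assumptions, not the authors' code):

```python
# For d < N - 1, the Lemma 1 bound d*r/(d*r + d + r - 1) eventually drops below
# rho_dB(K_N, r), so amplification cannot persist as r grows.

def lemma1_bound(d: float, r: float) -> float:
    return d * r / (d * r + d + r - 1)

def rho_db_complete(N: int, r: float) -> float:
    return (1 - 1/N) * (1 - 1/r) / (1 - r**-(N - 1))

def transience_threshold(N: int, d: float, step: float = 0.01):
    """Smallest grid value of r at which the Lemma 1 bound falls below
    rho_dB(K_N, r); Theorem 1 guarantees such a threshold of at most 2*N^2."""
    r = 1.0 + step
    while r <= 2 * N * N:
        if lemma1_bound(d, r) < rho_db_complete(N, r):
            return r
        r += step
    return None

if __name__ == "__main__":
    # e.g. any 4-regular graph on N = 100 vertices (d = 4 < N - 1)
    print(transience_threshold(N=100, d=4))
```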
If d < N \u22121, then 1 \u22121/N is strictly more than d/(d + 1), hence the graph GN ceases to amplify in the limit r \u2192\u221e. In the proof, we use Lemma 1 which applies to possibly weighted, directed, and/or non-regular graphs and which yields an explicit bound on the threshold r-value r\u22c6\u22642N2. Second, we turn our attention to question Q2, which asks for the existence of strong amplifying families. We establish the following theorem, which answers Q2 in negative. Theorem 2 (All dB ampli\ufb01ers are bounded). Fix r > 1. Then for any graph GN (possibly with directed and/or weighted edges) we have \u03c1dB(GN, r) \u22641 \u2212 1 r+1. In particular, Theorem 2 implies that, under dB updating, the implied scale of \ufb01tness of any graph is at most r+1. Thus every graph is, at best, a bounded ampli\ufb01er (see Fig. 3b). In particular, there exist no linear ampli\ufb01ers, and thus no quadratic ampli\ufb01ers or super ampli\ufb01ers. Again, this is in sharp contrast to Bd updating for which super ampli\ufb01ers exist [16, 29, 30] and, in fact, are abundant [31]. The proof again follows from Lemma 1: for any r > 1, the fraction on the right-hand side of Lemma 1 is at most the desired 1 \u22121/(r + 1), with equality when d \u2192\u221e. We remark that even though universal ampli\ufb01cation is impossible by Theorem 1, some population structures might achieve certain level of ampli\ufb01cation for certain range of r-values. In fact, a companion work [36] presents weighted population structures called Fans that, in the appropriate limit, amplify selection in a range 1 < r < (1+ \u221a 5)/2. The extent to which these structures amplify is well within the bounds provided by Theorem 2 (see Fig. 4). It is not known whether there exist unweighted graphs that provide transient ampli\ufb01cation. Extensions to \u03b4-dB ampli\ufb01ers Given the negative answers to questions Q1 and Q2 above, we proceed with studying the \u03b4-dB Moran process, in which the death-Birth updates are interleaved with the Birth-death updates. The insight of Corollary 1 is that mutants have a higher \ufb01xation probability under Bd updating, compared to dB updating (given a large enough \ufb01tness advantage r). Qualitatively, we expect that given a \ufb01xed population structure under \u03b4-dB updating, the \ufb01xation probability increases as 8 \fa b \u03b5 1 1+ \u221a 5 2 Figure 4: Transient ampli\ufb01ers under death-Birth updating. A companion work [36] identi\ufb01ed certain weighted graphs that are transient dB-ampli\ufb01ers. a, The Fan graph FN,\u03b5 with N blades is a weighted graph obtained from a Star graph S2N+1 by pairing up the 2N leaves and rescaling the weight of each edge coming from the center to \u03b5 < 1. b, The implied scale of \ufb01tness of a large Fan (here N = 101 and \u03b5 = 10\u22125). If r is small enough then the Fan ampli\ufb01es selection under dB updating. The level of ampli\ufb01cation is well within the scope allowed by Theorem 2 (shaded region). For comparison, we again show the implied scale of \ufb01tness for the Complete, Bipartite, Star, and Ring graphs (N = 400). \u03b4 decreases. Fig. 5 con\ufb01rms this intuition numerically, for Complete graphs, Ring graphs and Star graphs. The two extremes of the \u03b4-dB Moran process are the pure Bd (\u03b4 = 0) and pure dB (\u03b4 = 1) Moran processes. It is known that under Bd updating, both universal ampli\ufb01ers and super ampli\ufb01ers exist. 
On the other hand, we have shown here that under pure dB updating, any ampli\ufb01cation is inevitably transient and bounded. The next two natural questions are to investigate whether universal or strong ampli\ufb01ers exist for small values of \u03b4 \u2208(0, 1), for which the process is heavily biased towards Bd updating. Perhaps surprisingly, we answer both questions in negative. Concerning universality, we show the following theorem. Theorem 3 (All \u03b4-dB ampli\ufb01ers are transient). Fix a non-complete graph GN on N vertices (possibly with directed and/or weighted edges) and \u03b4 \u2208(0, 1]. Then there exists r\u22c6> 1 such that for all r > r\u22c6we have \u03c1\u03b4(GN, r) < b \u03c1\u03b4(KN, r), where KN is the complete graph on N vertices. Theorem 3 is a \u03b4-dB analogue of Theorem 1. It implies that, compared to the baseline given by a weighted average b \u03c1\u03b4(KN, r) between \u03c1dB(KN, r) and \u03c1Bd(KN, r), every unweighted graph is at best a transient ampli\ufb01er, and a weighted graph can only be a universal ampli\ufb01er if it is a weighted version of the complete graph KN. Hence for any positive \u03b4 > 0, no matter how small, universal ampli\ufb01cation is impossible among unweighted graphs. Next, we turn our attention to the limit of large N, and ask whether strong ampli\ufb01cation is possible for the \u03b4-dB Moran process. We show the following theorem. 9 \fb a c Figure 5: Fixation probability under \u03b4-dB updating. Three di\ufb00erent graphs on N = 10 vertices: a Complete graph, b Ring graph, c Star graph. For each \u03b4 \u2208{0, 0.25, 0.5, 0.75, 1} we show the \ufb01xation probability under \u03b4-dB updating as a function of r. On the latter two graphs, the dependence of the \ufb01xation probability on \u03b4 is more pronounced and not roughly linear as is the case for the Complete graph. The Star graph is an ampli\ufb01er under Bd updating and also a \u03b4-dB ampli\ufb01er for small \u03b4 (e.g. for \u03b4 = 0.2 and r = 2 we have \u03c1\u03b4(S10, r) > 0.494 > 0.491 > b \u03c1\u03b4(K10, r)) but ceases to be an ampli\ufb01er for large \u03b4 (e.g. for \u03b4 = 0.5 and r = 2 we have \u03c1\u03b4(S10, r) < 0.37 < 0.47 < b \u03c1\u03b4(K10, r)). Theorem 4 (All \u03b4-dB ampli\ufb01ers are at most linear). Fix r > 1 and \u03b4 \u2208(0, 1]. Then for any graph GN (possibly with directed and/or weighted edges) we have \u03c1\u03b4(GN, r) \u22641 \u2212 1 (r/\u03b4)+1. Theorem 4 implies that for \ufb01xed \u03b4 > 0, no matter how small, no better than linear ampli\ufb01ers exist. In particular, there are no quadratic ampli\ufb01ers and no super ampli\ufb01ers. For \u03b4 \u21921 (pure dB updating), the bound coincides with the one given in Theorem 2. For \u03b4 \u21920 (pure Bd updating), the bound becomes vacuous (it simpli\ufb01es to \u03c1Bd(G, r) \u22641) which is in alignment with the existence of quadratic and super ampli\ufb01ers under (pure) Bd updating. The proofs of Theorems 3 and 4 rely on a \u03b4-analogue of Lemma 1. Even though universal ampli\ufb01cation and super ampli\ufb01cation are impossible for any \u03b4 > 0 due to Theorems 3 and 4, some population structures do achieve reasonable levels of ampli\ufb01cation for various combinations of r and \u03b4. Speci\ufb01cally, we consider Star graphs, Bipartite graphs, and Ring graphs of \ufb01xed size N = 10 and N = 100 and show how strongly they amplify, depending on the \ufb01tness advantage r of the initial mutant and on the portion \u03b4 of dB updates (see Fig. 6). 
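The baseline values used in the Figure 5 caption above follow directly from equations (1)–(3); a minimal check (the closed forms are restated for self-containment, and the names are illustrative):

```python
# Interpolated delta-dB baseline on K_N, evaluated at the parameters quoted above.

def rho_bd_complete(N, r):
    return (1 - 1/r) / (1 - r**-N)

def rho_db_complete(N, r):
    return (1 - 1/N) * (1 - 1/r) / (1 - r**-(N - 1))

def baseline(N, r, delta):
    """Equation (3): delta * rho_dB(K_N, r) + (1 - delta) * rho_Bd(K_N, r)."""
    return delta * rho_db_complete(N, r) + (1 - delta) * rho_bd_complete(N, r)

if __name__ == "__main__":
    print(baseline(10, 2.0, 0.2))   # ~0.49, versus rho_delta(S10, 2) > 0.494 quoted above
    print(baseline(10, 2.0, 0.5))   # ~0.48, versus rho_delta(S10, 2) < 0.37 quoted above
```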
We make several observations. First, when \u03b4 is small enough, both Star graphs and Bipartite graphs do amplify selection, for a certain range of r > 1. Interestingly, large Bipartite graphs are less sensitive to variations in \u03b4 than Star graphs, and for small r > 1 they maintain ampli\ufb01cation even for \u03b4 almost as big as 0.5. On the other hand, if \u03b4 is small enough, Star graphs tend to achieve stronger ampli\ufb01cation than Bipartite graphs. Second, for any of the six population structures and any \ufb01xed r, increasing \u03b4 diminishes any bene\ufb01t that the population structure provides to advantageous mutants. Speci\ufb01cally, there appears to be no regime (r, \u03b4) where a ring graph would amplify selection. This is in perfect alignment with Corollary 1. 10 \fN = 10 N = 100 Fitness advantage, r Portion of dB-updates, \u03b4 a Star graph b Bipartite graph c Ring graph dB Bd Figure 6: Strength of ampli\ufb01cation in terms of r and \u03b4. a, Star graphs, b, complete Bipartite graphs with smaller part of size \u221a N, and c, Ring graphs, of size either N = 10 (top row) or N = 100 (bottom row). For each of the six graphs, we plot the ratio \u03c1\u03b4(GN, r)/b \u03c1\u03b4(KN, r) as a function of the \ufb01tness advantage r (x-axis) and the portion of dB-updates \u03b4 (y-axis). Red (blue) color signi\ufb01es that the population structure ampli\ufb01es (suppresses) selection for the given regime (r, \u03b4). Green curves denote regimes where the ratio equals 1. When r = 1, the \ufb01xation probability equals 1/N regardless of \u03b4 and the population structure. By Theorem 3, all \u03b4-ampli\ufb01ers are transient, hence the \u201chorizontal\u201d green curves eventually hit the x-axis for r large enough. Plotted values were obtained by numerically solving large systems of equations for every r \u2208{1, 1.025, . . . , 3} and \u03b4 \u2208{0, 0.025, . . . , 1}. 4 Discussion In this work, we have investigated the existence of ampli\ufb01ers for the death-Birth (dB) Moran process. We have shown that such ampli\ufb01ers, if they exist, must be both transient and bounded. Transience means that any population structure can amplify selection only in a limited range r \u2208 (1, r\u22c6) of relative \ufb01tness values r of the mutant. Boundedness means that even when a population structure does amplify selection for a \ufb01xed r > 1, it can do so only to a limited extent. In particular, quadratic ampli\ufb01cation which is achieved by the Star graphs under Birth-death (Bd) updating is impossible to achieve under dB updating. As a consequence, there are no super ampli\ufb01ers under dB updating. These results are in sharp contrast to the Bd Moran process, for which ampli\ufb01ers and super ampli\ufb01ers have been constructed repeatedly [16, 27, 30, 32], and, in fact, can be abundant [34]. Our \ufb01ndings suggest that the existence of ampli\ufb01ers is sensitive to speci\ufb01c mechanisms of the evolutionary process, and hence their biological realization depends on which process captures 11 \factual population dynamics more faithfully. Note that the situation is more favorable in the broader family of weighted population structures. Under Bd updating, super ampli\ufb01ers are abundant [31], and under dB updating, transient ampli\ufb01ers have recently been constructed in a companion work [36]. It remains to be seen whether transient ampli\ufb01cation can be achieved by unweighted structures. 
To reconcile the apparent discrepancy in the results of the two processes, we have also investigated the mixed \u03b4-dB Moran process, which combines dB and Bd updating. On one hand, we have extended our boundedness and transience results to \u03b4-dB updating. Speci\ufb01cally, our results imply that for any \ufb01xed \u03b4 > 0, any ampli\ufb01cation is necessarily transient and that there are no quadratic ampli\ufb01ers or super ampli\ufb01ers under \u03b4-dB updating. In this sense, the case of the (pure) Bd updating is singular. On the other hand, when \u03b4 is small, some population structures that amplify for the pure Bd updating (\u03b4 = 0) maintain reasonable level of ampli\ufb01cation under \u03b4-dB updating, for a wide range of \ufb01tness advantages r. Speci\ufb01cally, we \ufb01nd that suitable Bipartite graphs are less sensitive to variations in \u03b4 than the Star graphs, and maintain ampli\ufb01cation for \u03b4 as big as 0.5, when r is close to 1. There is an interesting connection to the situation of evolutionary games on graphs. There, the desirable population structures are those that promote cooperation. It is known that under any \u03b4-dB updating for \u03b4 > 0, population structures can promote cooperation [37], whereas for pure Bd updating, no regular structure that promotes cooperation exists [39]. Therefore, in the setting of games, the desirable structures exist for all \u03b4 > 0, whereas in our setting of constant selection, the desirable structures (strong and/or universal ampli\ufb01ers) exist only in the regime \u03b4 = 0. In both settings, the case of pure Birth-death updating appears to be a singular one. 5 Methods Here we formally describe the model of Moran process on graphs together with the relevant notions of ampli\ufb01ers and implied scale of \ufb01tness. Population structure. In evolutionary graph theory, a population structure is represented by a graph that has N sites (nodes), some of which are connected by edges. Each site is occupied by a single individual. The edge from node u to node v represents that the individual at node u can replace the individual at node v. Directions and weights. The edges could be undirected (two-way) or directed (one-way) and they could be weighted. Formally, for a pair of nodes u, v, the weight of an edge (u, v) is denoted by wu,v. If the nodes u, v are not connected then wu,v = 0. In the special case of unweighted graphs, each edge is considered to have weight 1. In the special case of undirected graphs, each edge is two-way. In the most general case of directed graphs with weighted edges, two nodes u, v could be interacting in both directions with di\ufb00erent weights wu,v \u0338= wv,u. We don\u2019t allow self-loops, that is, wu,u = 0 for each node u. Mutant initialization. Initially, each site is occupied by a single resident with \ufb01tness 1. Then a single mutant with \ufb01tness r appears at a certain node. This initial mutant node can be selected uniformly at random (uniform initialization) or with probability proportional to the turnover rate of each node (temperature initialization). Unless speci\ufb01ed otherwise, we assume that the initialization is uniform and that the mutation is advantageous (r > 1). 12 \fMoran dB and Bd updating. Once a mutant has appeared, some version of Moran process takes place. Moran process is a discrete-time stochastic process. At each step, one individual is replaced by a copy of another (neighbouring) individual, hence the population size remains constant. 
Denote by f(v) the \ufb01tness of the individual at node v. The two prototypical updatings are: \u2022 Moran death-Birth (dB) updating. An individual v is selected uniformly at random for death. The individuals at the neighbouring sites then compete for the vacant spot. Speci\ufb01cally, once v is \ufb01xed, an individual u is selected for placing a copy of itself on v with probability proportional to f(u) \u00b7 wu,v. Note that \ufb01tness of an individual doesn\u2019t play a role in the death step (thus \u201cd\u201d is lower case) but it does play a role in the birth step (thus \u201cB\u201d is upper case). \u2022 Moran Birth-death (Bd) updating. An individual u is selected for reproduction with probability proportional to its \ufb01tness f(u). Then it replaces a random neighbor. Speci\ufb01cally, once u is \ufb01xed, an individual v is replaced by a copy of u with probability proportional to wu,v. Mixed \u03b4-dB updating. The two regimes dB and Bd can be understood as two extreme points of a spectrum. We also consider mixed updating where some steps of the process follow the dB updating while the other ones follow Bd updating. Generally, given a \u03b4 \u2208[0, 1], a \u03b4-dB updating is an update rule in which each step is a dB event with probability \u03b4 and a Bd event with probability 1 \u2212\u03b4, independently of all the other steps. With this notation, a 1-dB updating is the same as (pure) dB updating and 0-dB updating is the same as (pure) Bd updating. Fixation probability. Given a graph G, r > 1 and \u03b4 \u2208[0, 1], we denote by \u03c1\u03b4(G, r) the \ufb01xation probability of a \u03b4-dB updating, when the \ufb01rst mutant is initialized uniformly at random. The complement, that is the probability that the evolutionary trajectory goes extinct, is denoted by 1 \u2212\u03c1\u03b4(G, r) = 1 \u2212\u03c1\u03b4(G, r). Speci\ufb01cally, for \u03b4 = 1 we denote the \ufb01xation (resp. extinction) probability under pure dB updating by \u03c1dB(G, r) (resp. epdB(G, r)) and similarly for the pure Bd updating which corresponds to \u03b4 = 0. Fixation probability on well-mixed populations. When studying the e\ufb00ect of population structure on the \ufb01xation probability, our baseline is the \ufb01xation probability on a well-mixed population of the same size. A well-mixed population is modelled by a complete (unweighted) graph KN, without self-loops. Under pure dB and Bd updating there are exact formulas for \ufb01xation probability [22, 16]: \u03c1dB(KN, r) = N \u22121 N \u00b7 1 \u22121 r 1 \u2212 1 rN\u22121 and \u03c1Bd(KN, r) = 1 \u22121 r 1 \u2212 1 rN . For \u03b4-dB updating, no analogous formula is known but numerical computations for various values of N and r show that \u03c1\u03b4(KN, r) is essentially indistinguishable from the linear interpolation b \u03c1\u03b4(KN, r) = \u03b4 \u00b7 \u03c1dB(KN, r) + (1 \u2212\u03b4) \u00b7 \u03c1Bd(KN, r) between \u03c1dB(KN, r) and \u03c1Bd(KN, r) (see Fig. 2 from the main text). Therefore, in \u03b4-dB updating we use b \u03c1\u03b4(KN, r) as the baseline comparison. Ampli\ufb01ers of selection. Given r > 1, some population structures enhance the \ufb01xation probability of mutants, compared to the well-mixed population, whereas others decrease it. We refer to the former ones as ampli\ufb01ers of selection and to the latter ones as suppressors of selection. 
Formally, 13 \fgiven a graph GN with N nodes and some r > 1, we say that GN is an r-ampli\ufb01er under dB updating if \u03c1dB(GN, r) > \u03c1dB(KN, r), where KN is a complete graph that represents a well-mixed population. If G is an r-ampli\ufb01er under dB updating for all r > 1, we call it universal. In contrast, graphs that amplify only for some range of values r \u2208(1, r\u22c6) are called transient. Similarly, we say that GN is an r-ampli\ufb01er under Bd updating if \u03c1Bd(GN, r) > \u03c1Bd(KN, r) (note that the baseline is the complete graph KN under Bd updating) and, for a \ufb01xed \u03b4 \u2208[0, 1], we say that GN is an r-ampli\ufb01er under \u03b4-dB updating if \u03c1\u03b4(GN, r) > b \u03c1\u03b4(KN, r). Classi\ufb01cation of ampli\ufb01ers by strength: Implied scale of \ufb01tness. Ampli\ufb01ers can be further classi\ufb01ed by strength [23]. We single out bounded ampli\ufb01ers, linear ampli\ufb01ers, quadratic ampli\ufb01ers and super ampli\ufb01ers. The intuition behind the classi\ufb01cation is that, in the limit of large population size, \ufb01xation probability can often be written as 1 \u22121/ isf(r) for a suitable function isf(r) of r. For instance, for large well-mixed population we have isf(r) = r (under any of dB, Bd, \u03b4-dB updating) and for large Star graphs under Bd updating we have isf(r) = r2. The extent to which a large population structure G distorts this \ufb01xation probability can thus be classi\ufb01ed by looking at the function isf(r). Formally, given a family of graphs {GN}\u221e N=1 of increasing population size, the implied scale of \ufb01tness of the family is a function isf(r): (1, \u221e) \u2192R such that lim inf N\u2192\u221e\u03c1dB(GN, r) = 1 \u22121/ isf(r). We say that the family is 1. an (at most) bounded ampli\ufb01er if isf(r) \u2264r + c0 for some constant c0. 2. an (at least) linear ampli\ufb01er if isf(r) \u2265c1r + c0 for some constants c1 > 1, c0. 3. an (at least) quadratic ampli\ufb01er if isf(r) \u2265c2r2 + c1r + c0 for some constants c2 > 0, c1, c0. 4. a super ampli\ufb01er if isf(r) = \u221efor all r > 1. These de\ufb01nitions naturally carry over to Bd updating and \u03b4-dB updating. Remark on the regimes considered. We intentionally restrict our attention to the following regimes: 1. r > 1. If r = 1 then \u03c1\u03b4(GN, r) = 1/N, regardless of the population structure. If r < 1 then \u03c1\u03b4(GN, r) < 1/N \u2192N\u2192\u221e0 for any GN. Thus we focus on r > 1. 2. Uniform initialization. For dB updating, the notions of uniform and temperature initialization coincide, since every node is, on average, selected for death and replaced equally often. Thus we focus on uniform initialization only. 3. No self-loops. For dB updating, self-loops are not biologically realistic: An individual who has just died can not replace itself. Thus we consider graphs with possibly directed and/or weighted edges but without self-loops. 14 \f6 Proofs Our proofs rely on Jensen\u2019s inequality. For reference purposes, we state it here. Essentially, given a convex (or concave) function f and several real numbers x1, . . . , xk, Jensen\u2019s inequality bounds the (weighted) average of values f(x1), . . . , f(xk) by the value that f takes at the (weighted) average of x1, . . . , xk. Claim (Jensen\u2019s inequality). Let a1, . . . , an be non-negative real numbers that sum up to 1 and let f be a real continuous function. Then \u2022 If f is convex then k X i=1 ai \u00b7 f(xi) \u2265f k X i=1 ai \u00b7 xi ! 
\u2022 If f is concave then k X i=1 ai \u00b7 f(xi) \u2264f k X i=1 ai \u00b7 xi ! 6.1 Theorems on dB updating The key to proving our theorems on dB updating is the following lemma that gives an upper bound on the \ufb01xation probability \u03c1dB(G, r) on an arbitrary graph (possibly with directed and/or weighted edges), in terms of the average in-degree d and the relative \ufb01tness r > 1 of the mutant. Recall that given a graph G and its node v, the in-degree of v is the number of nodes u for which there is an edge (u, v). If G is undirected then the in-degree of a node is the same as the degree (the number of neighbors). For any graph G, the average in-degree is the same as the average out-degree (and as the average degree if G is undirected). Lemma 2. Fix r > 1 and let G be a graph (possibly with directed and/or weighted edges) with average out-degree d. Then \u03c1dB(G, r) \u2264 d \u00b7 r d \u00b7 r + d + r \u22121. Proof. Denote by u the initial node occupied by the mutant and recall that epdB(u) is the extinction probability under dB updating if the initial mutant appears at u. Then epdB(G, r) = 1 N P u epdB(u). Denote by E\u2212(u) (resp. E+(u)) the event that in the next step of the dB updating the number of mutants decreases (resp. increases) and by p\u2212(u) (resp. p+(u)) the corresponding probability. Note that if neither of E\u2212(u), E+(u) happens, the set of nodes occupied by the mutants stays the same, and if E\u2212(u) happens before E+(u), the mutants go extinct. Therefore the extinction probability epdB(u) starting from a con\ufb01guration with a single mutant at node u satis\ufb01es epdB(u) \u2265 p\u2212(u) p\u2212(u) + p+(u) = 1 1 + p+(u) p\u2212(u) . We now compute p\u2212(u) and p+(u). The number of mutants decreases if and only if we select the single mutant for death, i.e. p\u2212(u) = 1/N, for any node u. The number of mutants increases if 15 \fand only if for death we select some node that neighbors u and then we select u for producing an o\ufb00spring. Hence p+(u) = X v p+ u,v, where p+ u,v = 1 N \u00b7 r \u00b7 wu,v (r \u22121) \u00b7 wu,v + P u\u2032 wu\u2032,v is the probability that v was selected for death and then u (the only mutant on G) was selected to place a copy of itself on v. Now we bound epBd(G, r) in terms of p\u2212(u) and p+(u). In the last step we use Jensen\u2019s inequality for a function f(x) = 1/(1 + x) which is convex on x \u2208(0, \u221e): epdB(G, r) = 1 N X u epdB(u) \u22651 N X u 1 1 + p+(u) p\u2212(u) \u2265 1 1 + 1 N P u p+(u) p\u2212(u) . Since p\u2212(u) = 1/N for all u, the right-hand side simpli\ufb01es and we get epdB(G, r) \u2265 1 1 + P u p+(u). In the rest, we \ufb01nd a tight upper bound on P u p+(u). We \ufb01rst rewrite each p+(u) using p+ u,v and interchange the sums to get X u p+(u) = X u X v p+ u,v = X v X u p+ u,v. We focus on the inner sum. Fix a node v and denote by s(v) = P u\u2032 wu\u2032,v the total weight of all edges incoming to v. Using the formula for p+ u,v we obtain X u p+ u,v = 1 N X u r \u00b7 wu,v (r \u22121) \u00b7 wu,v + s(v). We make three observations. First, the summation has at most din(v) terms, where din(v) is the number of incoming edges to v. Second, we have P u wu,v = s(v). Third, for \ufb01xed r > 0 and any s > 0, the function g(x) = r\u00b7x (r\u22121)x+s is concave on x \u2208(0, s). 
Therefore, by another application of Jensen\u2019s inequality we can write X u p+ u,v \u22641 N \u00b7 din(v) \u00b7 r \u00b7 s(v) din(v) (r \u22121) \u00b7 s(v) din(v) + s(v) = 1 N \u00b7 r \u00b7 din(v) r \u22121 + din(v), Finally, summing up over v we obtain X u p+(u) = X v X u p+ u,v \u22641 N X v r \u00b7 din(v) r \u22121 + din(v) \u2264 r \u00b7 d r \u22121 + d, where in the last step we yet again used Jensen\u2019s inequality, this time for the function h(x) = r\u00b7x r\u22121+x that is concave on x \u2208(0, \u221e), and the fact that the average in-degree of a graph is the same as its average out-degree. 16 \fWe conclude by observing that this upper bound on P u p+(u) yields epdB(G, r) \u2265 1 1 + P u p+(u) \u2265 1 1 + r\u00b7d r\u22121+d = d + r \u22121 dr + d + r \u22121, hence \u03c1dB(G, r) \u22641 \u2212epdB(G, r) \u2264 d \u00b7 r d \u00b7 r + d + r \u22121 as desired With the lemma at hand, we can prove the \ufb01rst two Theorems. Theorem 1 (All dB ampli\ufb01ers are transient). Fix a non-complete graph GN (possibly with directed and/or weighted edges). Then there exists r\u22c6> 1 such that for all r > r\u22c6we have \u03c1dB(GN, r) < \u03c1dB(KN, r), where KN is the complete graph on N vertices. In particular, we can take r\u22c6= 2N2. Proof. Recall that \u03c1dB(KN) = (1 \u22121/N) 1 \u22121/r 1 \u22121/rN\u22121 \u2265(1 \u22121/N)(1 \u22121/r), hence epdB(KN) \u2264N + r \u22121 Nr . Using Lemma 2, it su\ufb03ces to show that for all su\ufb03ciently large r we have d + r \u22121 dr + d + r \u22121 > N + r \u22121 Nr which, after clearing the denominators, is equivalent to r2 (N \u22121 \u2212d) \u22122r(N \u22121) \u2212(d \u22121)(N \u22121) > 0. Since G is not complete, d < N \u22121 (a strict inequality), hence the coe\ufb03cient by r2 is positive and the inequality holds for all su\ufb03ciently large r. In particular, it is straightforward to check that r = 2N2 is large enough: If G misses at least one edge then d \u2264N \u22121 \u22121 N hence for r \u22652N2 the right-hand side is at least (2N2)2 \u00b7 1 N \u22124N2(N \u22121) \u2212N2 = 3N2 > 0. Theorem 2 (All dB ampli\ufb01ers are bounded). Fix r > 1. Then for any graph GN (possibly with directed and/or weighted edges) we have \u03c1dB(GN, r) \u22641 \u2212 1 r+1. Proof. Using Lemma 2, it su\ufb03ces to check that d + r \u22121 dr + d + r \u22121 \u2265 1 r + 1 which, after clearing the denominators, is equivalent to r(r \u22121) \u22650. The equality holds for r = 1. 17 \f6.2 Theorems on \u03b4-dB updating In order to prove Theorems 4 and 3 we \ufb01rst use a similar technique as before to establish an analogue of Lemma 2 that applies to \u03b4-dB updating. Lemma 3. Fix r > 1 and let G be a graph (possibly with directed and/or weighted edges) with average out-degree d. Then 1 \u2212\u03c1\u03b4(G, r) \u2265 1 1 + dr d+r\u22121 + 1\u2212\u03b4 \u03b4 \u00b7 Nr N+r\u22121 . Proof. Denote the initial mutant node by u and, as in Lemma 2, let p\u2212(u) (resp. p+(u)) be the probability that after a single step of \u03b4-dB updating, the number of mutants in the population decreases (resp. increases). The values p\u2212(u) and p+(u) are weighted averages of the corresponding values under (pure) dB and Bd updating, with weights \u03b4, 1 \u2212\u03b4. That is, p\u2212(u) = \u03b4 \u00b7 1 N + (1 \u2212\u03b4) \u00b7 X t 1 N + r \u22121 \u00b7 wt,u P u\u2032 wt,u\u2032 and, using the notation p+ u,v from Lemma 2, p+(u) = \u03b4 \u00b7 X v p+ u,v + (1 \u2212\u03b4) \u00b7 r N + r \u22121. 
As in Lemma 2, we get 1 \u2212\u03c1\u03b4(G, r) \u2265 1 1 + 1 N P u p+(u) p\u2212(u) . For each \ufb01xed u, we bound p\u2212(u) from below by ignoring the whole Bd-contribution. We get p\u2212(u) \u2265\u03b4 N which yields 1 \u2212\u03c1\u03b4(G, r) \u2265 1 1 + 1 \u03b4 P u p+(u) and it remains to bound P u p+(u) from above. In P u p+(u), the total Bd-contribution (summed over u) equals (1\u2212\u03b4) Nr N+r\u22121 and, as in Lemma 2, the total dB-contribution is at most \u03b4\u00b7P u P v p+ u,v \u2264 \u03b4 \u00b7 rd r\u22121+d. In total, this yields 1 \u2212\u03c1\u03b4(G, r) \u2265 1 1 + dr d+r\u22121 + 1\u2212\u03b4 \u03b4 \u00b7 Nr N+r\u22121 as desired. Using Lemma 3 we present proofs of Theorems 3 and 4 from the main text. Theorem 3 (All \u03b4-dB ampli\ufb01ers are transient). Fix a non-complete graph G on N vertices (possibly with directed and/or weighted edges) and \u03b4 \u2208(0, 1]. Then there exists r\u22c6> 1 such that for all r > r\u22c6 we have \u03c1\u03b4(G, r) < b \u03c1\u03b4(KN, r), where KN is the complete graph on N vertices. 18 \fProof. Let d be the average in-degree of G. Since G is not complete, we have d < N \u22121 (a strict inequality). As in the proof of Theorem 1, recall that epdB(KN, r) \u2264 N+r\u22121 Nr . Moreover, \u03c1Bd(KN, r) = 1\u22121/r 1\u22121/rN \u22651 \u22121/r, hence epBd(KN, r) \u22641 r. This yields 1 \u2212b \u03c1\u03b4(KN, r) = b ep\u03b4(KN, r) = \u03b4 \u00b7 epdB(KN, r) + (1 \u2212\u03b4) epBd(KN, r) \u22641 r + \u03b4 \u00b7 r \u22121 Nr and by Lemma 3 it su\ufb03ces to show that for all su\ufb03ciently large r we have 1 1 + dr d+r\u22121 + 1\u2212\u03b4 \u03b4 \u00b7 Nr N+r\u22121 \u22651 r + \u03b4 \u00b7 r \u22121 Nr , Since N, d and \u03b4 are all \ufb01xed, we can consider both sides as functions of r. As r \u2192\u221e, the left-hand side tends to 1 1+d+ 1\u2212\u03b4 \u03b4 N while the right-hand side tends to \u03b4 N . In order to conclude, it su\ufb03ces to show strict inequality between the respective limits: 1 1 + d + 1\u2212\u03b4 \u03b4 N > \u03b4 N . After clearing the denominators, this is equivalent to \u03b4(N \u22121 \u2212d) > 0 which indeed holds for any \u03b4 > 0 and any non-complete graph KN. Theorem 4 (All \u03b4-dB ampli\ufb01ers are at most linear). Fix r > 1 and \u03b4 \u2208(0, 1]. Then for any graph G (possibly with directed and/or weighted edges) we have \u03c1\u03b4(G, r) \u22641 \u2212 1 (r/\u03b4)+1. Proof. Since d \u2264N \u22121 < N and r > 1, we have dr d + r \u22121 < Nr N + r \u22121, hence Lemma 3 gives 1 \u2212\u03c1\u03b4(G, r) \u2265 1 1 + dr d+r\u22121 + 1\u2212\u03b4 \u03b4 \u00b7 Nr N+r\u22121 > 1 1 + 1 \u03b4 \u00b7 Nr N+r\u22121 \u2265 \u03b4 r + \u03b4 = 1 (r/\u03b4) + 1, where the last inequality is equivalent to \u03b4 \u00b7r(r \u22121) \u22650 after clearing the denominators. The result follows. 7 Further directions Here we list several interesting open questions. 1. Unweighted transient ampli\ufb01ers under dB updating. Weighted transient ampli\ufb01ers for dB updating have been constructed in a companion work [36]. Do there exist transient ampli\ufb01ers among unweighted graphs? 2. Towards universal ampli\ufb01cation under dB updating. The weighted transient ampli\ufb01ers constructed in the companion work [36] amplify for r less than a golden ratio \u03c6 = 1 2(1+ \u221a 5) . = 1.618. Does there exist a graph that ampli\ufb01es for r > \u03c6? If so, does for every r\u22c6exist a graph that ampli\ufb01es for all r \u2208(1, r\u22c6)? 
If so, does there exist a universal ampli\ufb01er for dB updating? Theorem 1 implies that if so, it has to be a weighted version of a complete graph. 19 \f3. Towards universal ampli\ufb01cation under \u03b4-dB updating. How do the answers change under \u03b4-dB updating instead of (pure) dB updating? Speci\ufb01cally, do large complete Bipartite graphs amplify on arbitrarily large intervals (1, r\u22c6), provided that \u03b4 is small enough? 4. Well-mixed populations with \u03b4-updating. Is there a simple formula for \ufb01xation probability on a complete graph under \u03b4-dB updating for \u03b4 \u2208(0, 1)? 5. Monotonicity in \u03b4. Is \u03c1dB(GN, r) < \u03c1Bd(GN, r) for any \ufb01xed graph G and any \ufb01xed r > 1? If so, is \u03c1\u03b4(GN, r) a decreasing function of \u03b4, for any \ufb01xed graph G and any \ufb01xed r > 1? 6. Optimal graph for a given r. For \ufb01xed r > 1, what is the highest possible \ufb01xation probability \u03c1dB(G, r), attained by any graph G? Theorem 2 states that \u03c1dB(G, r) \u22641\u22121/(r+ 1) for any \ufb01xed r > 1 and any graph G. The bound is attained for r = 1 due to K2 and is relatively tight for r \u2192\u221edue to large Complete graphs which give \u03c1dB(KN, r) \u2192N\u2192\u221e1\u22121/r (see Figure 7). Are those graphs optimal? Or does there exist r > 1 and a graph G (of any size) such that \u03c1dB(G, r) > max{1 2, 1 \u22121 r}? Upper bound \u03c1(K2, r) \u03c1(K3, r) \u03c1(K5, r) \u03c1(K10, r) \u03c1(K100, r) 0 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 r Fixation probability Figure 7: Additional \ufb01gure: Tightness of the upper bound. We consider Complete graphs of sizes N \u2208{2, 3, 5, 10, 100} under dB updating. The \ufb01xation probability is always below the upper bound given by Theorem 2. For r = 1 the bound precisely matches the \ufb01xation probability on K2. For large r, the bound is relatively tight with respect to large Complete graphs. 20 \fAcknowledgments J.T. and K.C. acknowledge support from ERC Start grant no. (279307: Graph Games), Austrian Science Fund (FWF) grant no. P23499-N23 and S11407-N23 (RiSE). A.P. acknowledges support from FWF Grant No. J-4220. M.A.N. acknowledges support from O\ufb03ce of Naval Research grant N00014-16-1-2914 and from the John Templeton Foundation. The Program for Evolutionary Dynamics is supported in part by a gift from B. Wu and E. Larson." + }, + { + "url": "http://arxiv.org/abs/1810.02687v2", + "title": "Fixation probability and fixation time in structured populations", + "abstract": "The rate of biological evolution depends on the fixation probability and on\nthe fixation time of new mutants. Intensive research has focused on identifying\npopulation structures that augment the fixation probability of advantageous\nmutants. But these `amplifiers of natural selection' typically increase\nfixation time. Here we study population structures that achieve a trade-off\nbetween high fixation probability and short fixation time. First, we show that\nno amplifiers can have asymptotically lower absorption time than the well-mixed\npopulation. Then we design population structures that substantially augment the\nfixation probability with just a minor increase in fixation time. Finally, we\nshow that those structures enable higher effective rate of evolution than the\nwell-mixed population provided that the rate of generating advantageous mutants\nis relatively low. Our work sheds light on how population structure affects the\nrate of evolution. 
Moreover, our structures could be useful for lab-based,\nmedical or industrial applications of evolutionary optimization.", + "authors": "Josef Tkadlec, Andreas Pavlogiannis, Krishnendu Chatterjee, Martin A. Nowak", + "published": "2018-09-27", + "updated": "2019-03-08", + "primary_cat": "q-bio.PE", + "cats": [ + "q-bio.PE", + "cs.DM" + ], + "main_content": "Introduction The two primary forces that drive evolutionary processes are mutation and selection. Mutation generates new variants in a population. Selection chooses among them depending on the reproductive rates of individuals. 1 arXiv:1810.02687v2 [q-bio.PE] 8 Mar 2019 \fEvolutionary processes are intrinsically random. A new mutant that is initially present in the population at low frequency can go extinct due to random drift. The key quantities of evolutionary dynamics which a\ufb00ect the rate of evolution are [1, 2, 3, 4, 5]: (a) the mutation rate \u00b5, which is the rate at which new mutants are generated; (b) the \ufb01xation probability \u03c1, which is the probability that the lineage of a mutant takes over the whole population; and (c) the \ufb01xation time \u03c4, which is the expected time until the lineage of a mutant \ufb01xates in the population. A classical and well-studied evolutionary process is the discrete-time Moran birth-death process [6]. Given a population of N individuals, at each time step an individual is chosen for reproduction proportionally to its \ufb01tness; then the o\ufb00spring replaces a random individual (see Figure 1a). In the case of a well-mixed population, each o\ufb00spring is equally likely to replace any other individual. For a single new mutant with relative \ufb01tness r, its \ufb01xation probability is \u03c1 = (1 \u22121/r)/(1 \u22121/rN). Thus, for r > 1 and large N we have \u03c1 \u22481 \u22121/r [7, 3]. For measuring time, there are two natural options. The absorption time is the average number of steps of the Moran process until the population becomes homogeneous, regardless of whether the mutant \ufb01xates or becomes extinct. Alternatively, the (conditional) \ufb01xation time is the average number of steps of those evolutionary trajectories that lead to the \ufb01xation of the mutant, ignoring trajectories that lead to the extinction of the mutant. Since the evolutionary trajectories leading to extinction are typically shorter than those leading to \ufb01xation, the \ufb01xation time tends to be longer than the absorption time. Therefore, in our results concerning time we present lower bounds on the absorption time and upper bounds on the \ufb01xation time. For the well-mixed population, both the absorption time and the \ufb01xation time are of order of N log N [8, 9]. Speci\ufb01cally, for r > 1 and large N, the absorption time is approximately r+1 r \u00b7 N log N while the \ufb01xation time is approximately r+1 r\u22121 \u00b7 N log N. For neutral evolution, r = 1, the absorption time is approximately N log N while the \ufb01xation time is (N \u22121)2. Both the \ufb01xation probability and the \ufb01xation time depend on population structure [10, 11, 12, 13, 14, 15, 16, 17, 18]. Evolutionary graph theory is a framework to study the e\ufb00ect of population structure. In evolutionary graph theory, the structure of a population is represented by a graph [7, 19, 20, 21, 22, 23]: each individual occupies a vertex; the edges represent the connections to neighboring sites where a reproducing individual can place an o\ufb00spring. 
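These closed-form quantities are easy to evaluate directly. The following Python sketch is a minimal illustration (not code from the paper; the function names and the choice r = 1.1 are mine): it computes the exact well-mixed fixation probability rho = (1 - 1/r)/(1 - 1/r^N) together with the quoted large-N approximations of the absorption and fixation times.

    import math

    def fixation_probability_well_mixed(r, N):
        # rho = (1 - 1/r) / (1 - 1/r^N) for a single mutant of relative fitness r
        return (1 - 1 / r) / (1 - r ** (-N))

    def absorption_time_approx(r, N):
        # roughly (r+1)/r * N log N steps of the Moran process (r > 1, large N)
        return (r + 1) / r * N * math.log(N)

    def fixation_time_approx(r, N):
        # roughly (r+1)/(r-1) * N log N steps (r > 1, large N)
        return (r + 1) / (r - 1) * N * math.log(N)

    r = 1.1
    for N in (10, 100, 1000):
        print(N, round(fixation_probability_well_mixed(r, N), 4),
              round(absorption_time_approx(r, N)), round(fixation_time_approx(r, N)))

For large N the probability approaches 1 - 1/r (about 0.091 at r = 1.1), matching the approximation quoted above.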
The edge weights represent the proportional preference to make such a choice. The well-mixed population is given by the complete graph KN where each individual is connected to each other individual (Figure 1b). Graphs can also represent deme structured populations, where islands are represented by complete graphs and connections (of di\ufb00erent weights) exist between islands. Graphs can also represent spatial lattices or asymmetric structures. A well-studied example is the star graph SN, which has one central vertex and N \u22121 surrounding vertices each connected to the central vertex (Figure 1b). For the star graph, the \ufb01xation probability tends to approximately 1 \u22121/r2 for r > 1 and large N while both the absorption and the \ufb01xation time is of the order of N 2 log N [24, 25, 26]. Hence, if a mutant has 10 % \ufb01tness advantage, which means r = 1.1, the star graph ampli\ufb01es the advantage to 21 %, but at the cost of increasing the time to \ufb01xation (Figure 1c). Several population structures have been identi\ufb01ed that alter the \ufb01xation probability of advantageous mutants. Structures that decrease the \ufb01xation probability are known as suppressors of selection and those that increase it are known as ampli\ufb01ers of selection [7, 27, 28, 29]. However, ampli\ufb01cation is usually achieved at the cost of increasing \ufb01xation time [13, 17, 30, 31]. For example, the star graph has higher \ufb01xation probability but also longer \ufb01xation time as compared to the well-mixed population. There also exist superampli\ufb01ers (also known as arbitrarily strong ampli\ufb01ers of natural selection) that guarantee \ufb01xation of advantageous mutants in the limit of large population size [32, 33, 34, 35]. But those structures tend to require even longer \ufb01xation times. We can refer to population structures that decrease the \ufb01xation time with respect to the well-mixed population as accelerators. Both the \ufb01xation probability and the \ufb01xation time play an important role in the speed of evolution. Ideally, we prefer a population structure that is both an ampli\ufb01er and an accelerator, but all known ampli\ufb01ers achieve ampli\ufb01cation at the cost of deceleration. In fact, this slowdown can be so prominent that it outweighs the ampli\ufb01cation and leads to longer evolutionary timescales [17]. 2 \f. . . a b . . . c Complete Star New mutant Extinction Extinction Fixation Fixation Figure 1: Moran process on graphs. a, A new mutant (blue) appears in a population of \ufb01nite size. The lineage of the new mutant can either become extinct or reach \ufb01xation. The Moran process is a birth-death process; in any one time step one new o\ufb00spring is generated and one individual dies. b, All \ufb01xed spatial structures can be described by graphs. The classical, well-mixed population corresponds to a complete graph, where all positions are equivalent. The star graph is a well-studied example of extreme heterogeneity, where one individual, the center, is connected to all others, but each leaf is only connected to the center. c, Population structure in\ufb02uences both the \ufb01xation probability and the \ufb01xation time. An advantageous mutant introduced at a random vertex of a star graph is more likely to \ufb01xate than on a complete graph (the arrows pointing to the right are thicker), but the (average) \ufb01xation time on the star graph is much longer than on the complete graph (the arrows are longer). 
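The "10 % advantage amplified to 21 %" statement can be checked with a few lines of arithmetic. This Python fragment is an illustration only (the variable names are mine): it compares the large-N limits 1 - 1/r for the complete graph and 1 - 1/r^2 for the star, and shows that the star treats an r = 1.1 mutant as if it had relative fitness r^2 = 1.21.

    r = 1.1
    fp_complete_limit = 1 - 1 / r    # ~0.0909, well-mixed limit
    fp_star_limit = 1 - 1 / r ** 2   # ~0.1736, star-graph limit
    effective_fitness = r ** 2       # the star behaves like a well-mixed population
                                     # hosting a mutant of fitness 1.21
    print(fp_complete_limit, fp_star_limit, effective_fitness)

The price, as stated above, is a fixation time of order N^2 log N instead of N log N.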
The star graph achieves ampli\ufb01cation at the cost of deceleration. Here we show that absorption time on any ampli\ufb01er is asymptotically at least as large as both the absorption and the \ufb01xation time on the well-mixed population. Given this negative result, we proceed to study the trade-o\ufb00between \ufb01xation probability and time more closely. We have computed \ufb01xation probabilities and \ufb01xation times for a large class of graphs. While within this class, the well-mixed population is optimal with respect to \ufb01xation time, and the star graph is favorable with respect to \ufb01xation probability, there is a very interesting trade-o\ufb00curve between \ufb01xation probability and \ufb01xation time. In other words, there exist population structures which provide di\ufb00erent trade-o\ufb00s between high \ufb01xation probability and short \ufb01xation time. As our main analytical results, we present population structures that asymptotically achieve \ufb01xation probability equal to that of star graphs and \ufb01xation time similar to that of well-mixed populations. Thus, we achieve ampli\ufb01cation with negligible deceleration. Finally, while the above analytical results are established for large population sizes, we also study evolutionary processes on population structures of small or intermediate size by numerical simulation. Speci\ufb01cally, we consider the e\ufb00ective rate of evolution as proposed by Frean, Rainey, and Traulsen [17]. Generally speaking, the well-mixed population has a high e\ufb00ective rate of evolution if the mutation rate is high, while the star graph has a high e\ufb00ective rate of evolution if the mutation rate is very low. We show that for a wide range of intermediate mutation rates, our new structures achieve higher e\ufb00ective rate of evolution than both the well-mixed population and the star graph. 2 Results We study several fundamental questions related to the probability-time trade-o\ufb00of a single advantageous mutant in a population of size N. Mutants can arise either spontaneously or during reproduction. Mutants that arise spontaneously appear at a vertex chosen uniformly at random among all N vertices. This is called uniform initialization. Mutants that arise during reproduction appear at each vertex proportionally to its replacement rate, which is called temperature of that vertex. This is called temperature initialization. We 3 \fstudy the probability-time trade-o\ufb00for both types of initialization. 2.1 Ampli\ufb01ers and accelerators First, we investigate whether there are population structures that are ampli\ufb01ers and asymptotic accelerators of selection as compared to the complete graph (well-mixed population). We show that for any ampli\ufb01er with population size N, the absorption time is of the order of at least N log N, for both types of initialization. Since the \ufb01xation time tends to be even longer than the absorption time and the \ufb01xation time for the complete graph is also of the order of N log N, regardless of initialization, this suggests that no ampli\ufb01er is an asymptotic accelerator. Moreover, we show that the same conclusion holds for graphs that decrease the \ufb01xation probability by not more than a constant factor. 
Our result doesn\u2019t completely exclude the possibility of population structures with absorption time asymptotically shorter than that of the complete graphs but it shows that, if such structures do exist, then the \ufb01xation probability has to tend to 0 as the population size N grows large. While the above results holds in the limit of large population size, we present a small directed graph that is a suppressor and has a slightly shorter \ufb01xation time than the complete graph for the same population size (see Appendix 1). 2.2 Uniform initialization: \u03b1-Balanced bipartite graphs Second, we consider uniform initialization. There are two interesting questions: (1) For \ufb01xed (small) population size, how do di\ufb00erent population structures fare with respect to the probability-time trade-o\ufb00? (2) In the limit of large population size, do there exists population structures that achieve the same ampli\ufb01cation as the star graph, with shorter \ufb01xation time? Our results are as follows: (1) For small population size, both \ufb01xation probability and \ufb01xation time can be computed numerically [36]. We do this for all graphs with N = 8 vertices and a mutant with relative \ufb01tness advantage r = 1.1 (see Figure 2). We observe that the complete graph has the shortest \ufb01xation time, and the star graph has the highest \ufb01xation probability. However, the star graph has much longer \ufb01xation time than the complete graph. While some graphs have smaller \ufb01xation probability and longer \ufb01xation time than the complete graph, there are other graphs which provide a trade-o\ufb00between high \ufb01xation probability and short \ufb01xation time. In particular, there are Pareto-optimal graphs. Recall that in twoor multidimensional optimization problems, the Pareto front is the set of non-dominated objects. In our case, the Pareto front consists of graphs for which the \ufb01xation probability can not be improved without increasing the \ufb01xation time. For N = 8 and r = 1.1 the complete graph and the star graph are the two extreme points of the Pareto front. This \ufb01nding holds for other values of r > 1 and N as well (see Figures 7 and 8). (2) We answer the second question in the a\ufb03rmative. The trade-o\ufb00results (Figure 2) that we study allow us to obtain graphs which we call \u03b1-Balanced bipartite graphs. Intuitively, they are de\ufb01ned as follows: We split the vertices into two groups such that one is much smaller than the other, but both are relatively large. Then we connect every two vertices that belong to di\ufb00erent groups. We show that, in the limit of large population size, this bipartite graph achieves the same \ufb01xation probability as the star graph and that its \ufb01xation time asymptotically approaches that of the complete graph. Formally, an \u03b1-Balanced bipartite graph BN,\u03b1 is a complete bipartite graph with the parts containing N and N 1\u2212\u03b1 vertices (see Figure 3a for illustration with N = 8 and \u03b1 = 1/3). We show that the \ufb01xation probability of such graphs tends to 1 \u22121/r2 while the \ufb01xation time is of the order of N 1+\u03b1 log N, for any \u03b1 > 0 (compared to N log N \ufb01xation time of complete graph). Thus we achieve the best of two worlds, that is, we present a graph family that, in the limit of large population size, is as good an ampli\ufb01er as the star graph and almost as good with respect to time as the complete graph. 
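The exhaustive N = 8 computations referenced here amount to solving the absorbing Markov chain of the Moran process exactly. The sketch below is my own minimal version of such a computation (assuming an unweighted graph given as an adjacency list and uniform initialization); rather than assembling the full linear system over the 2^N mutant configurations, it iterates the fixed-point equations for the per-configuration fixation probabilities, which suffices for population sizes around 8.

    from itertools import combinations

    def exact_fixation_probability(adj, r, tol=1e-10):
        """adj: dict vertex -> list of neighbours (undirected, unweighted graph).
        fp(G, r) under uniform initialization, obtained (up to the stated tolerance)
        by iterating the fixed-point equations of the 2^N-state Birth-death chain."""
        vertices = sorted(adj)
        N = len(vertices)
        states = [frozenset(c) for k in range(N + 1) for c in combinations(vertices, k)]
        phi = {s: (1.0 if len(s) == N else 0.0) for s in states}
        while True:
            change = 0.0
            for s in states:
                if len(s) in (0, N):        # absorbing configurations
                    continue
                total = sum(r if v in s else 1.0 for v in vertices)
                stay, move = 0.0, {}
                for v in vertices:          # v reproduces ...
                    pick = (r if v in s else 1.0) / total
                    for u in adj[v]:        # ... and its offspring replaces u
                        p = pick / len(adj[v])
                        t = (s | {u}) if v in s else (s - {u})
                        if t == s:
                            stay += p
                        else:
                            move[t] = move.get(t, 0.0) + p
                new = sum(p * phi[t] for t, p in move.items()) / (1.0 - stay)
                change = max(change, abs(new - phi[s]))
                phi[s] = new
            if change < tol:
                return sum(phi[frozenset([v])] for v in vertices) / N

    # Example: complete graph and star graph on 6 vertices, r = 1.1.
    K6 = {v: [u for u in range(6) if u != v] for v in range(6)}
    S6 = {0: list(range(1, 6)), **{v: [0] for v in range(1, 6)}}
    print(exact_fixation_probability(K6, 1.1), exact_fixation_probability(S6, 1.1))

For the complete graph this reproduces the closed form (1 - 1/r)/(1 - 1/r^N), roughly 0.209 for N = 6 and r = 1.1; the same idea, extended to expected hitting times, underlies the probability-time clouds of Figure 2.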
As a byproduct, we prove that on a star graph, both the absorption and the \ufb01xation time are of the order of N 2 log N for any \ufb01xed r > 1 which is in alignment with known bounds and approximation results [25, 26]. 4 \fPareto front N = 8 Figure 2: Fixation probability and time under uniform initialization. Numerical solutions for all 11,117 undirected connected graphs of size N = 8. Each graph is represented by a dot and color corresponds to the number of its edges. The xand y-coordinates show the \ufb01xation probability and the \ufb01xation time for a single mutant with relative \ufb01tness r = 1.1, under uniform initialization. The graphs to the right of the complete graph are ampli\ufb01ers of selection: they increase the \ufb01xation probability. Any graph below the complete graph would be an accelerator of selection: it would decrease the \ufb01xation time. Graphs close to the bottom right corner provide good trade-o\ufb00between high \ufb01xation probability and short \ufb01xation time. All the values are computed by numerically solving large systems of linear equations (see e.g. [36]). See Figures 7 and 8 for other r values and N = 9. 5 \f\u03b1-Balanced bipartite graph a N1\u2212\u03b1 N Cycle Path Complete Trees Erd\u02dd os\u2013R\u00b4 enyi + edges Star Bipartite + edges N = 100 b c d Figure 3: \u03b1-Balanced bipartite graphs. a, An \u03b1-Balanced bipartite graph BN,\u03b1 is a complete bipartite graph with N vertices in the larger part and N 1\u2212\u03b1 vertices in the smaller part. Here N = 8 and \u03b1 = 1/3. We prove that for large N, the \u03b1-Balanced bipartite graphs achieve the \ufb01xation probability of a star and, for \u03b1 small, approach the \ufb01xation time of the complete graph (see Figures 9 and 10). b, In general, bipartite graphs provide great trade-o\ufb00s between high \ufb01xation probability and short \ufb01xation time. Comparison is with selected graphs of size N = 100 such as Trees (100\u00d7), random Erd\u02dd os\u2013Renyi graphs (100\u00d7, p = 0.03), star graphs with additional 10, 30, 50, 100 random edges (10\u00d7 each), and cycle graphs with additional 1, 3, 5, 10 random edges (5\u00d7 each). The values were obtained by simulating the Moran process 105 times. c, Star graph with several random edges. d, Cycle graph with several random edges. Moreover, we support the analytical result with computer simulations for \ufb01xed population size N = 100. We compute the \ufb01xation probability and time for selected families of graphs, such as Trees or random Erd\u02dd os\u2013 R\u00b4 enyi graphs (see Figure 3b). The \u03b1-Balanced bipartite graphs outperform all of them. Hence the analytical results are interesting not only in the limit of large population but already for relatively small population sizes. 2.3 Temperature initialization: \u03b1-Weighted bipartite graphs Third, we consider temperature initialization. The above questions for uniform initialization are also the relevant questions for temperature initialization. Our results are as follows: (1) Simulation results. Again we numerically compute the \ufb01xation probability and the \ufb01xation time for all graphs with N = 8 vertices (see Figure 4). In contrast to the results for uniform initialization (Figure 2), under temperature initialization, the complete graph has both the highest \ufb01xation probability and the shortest \ufb01xation time. This \ufb01nding holds for other values of r > 1 and N as well (see Figures 11 and 12). (2) Analytical results. 
Figure 4 shows that there is no trade-o\ufb00for temperature initialization. The result is not surprising as it has recently been shown that, for temperature initialization, no unweighted graphs can achieve substantial ampli\ufb01cation [35], and in the present work we have established that the complete graph is asymptotically optimal among ampli\ufb01ers with respect to absorption time. Thus, 6 \fN = 8 Figure 4: Fixation probability and time under temperature initialization. Numerical solutions for all undirected connected graphs of size N = 8, under temperature initialization (r = 1.1). There are no ampli\ufb01ers and no (strict) accelerators. By the isothermal theorem [7], all the regular graphs achieve the same \ufb01xation probability as the complete graph. See Figures 11 and 12 for other r values and N = 9. 7 \fa \u03b1-Weighted bipartite graph N1\u2212\u03b1/2 N N1\u2212\u03b1 b Star Bipartite \u03b1-Weighted bipartite Weighted Star N = 100 Cycle Path + edges Complete Trees + edges Erd\u02dd os\u2013R\u00b4 enyi Figure 5: \u03b1-Weighted bipartite graphs. a, An \u03b1-Weighted bipartite graph WN,\u03b1 is obtained by adding self-loops with weight w . = N 1\u2212\u03b1/2 to all vertices in the larger part of an \u03b1-Balanced bipartite graph. Here N = 8 and \u03b1 = 1/3. We prove that for large N, the \u03b1-Weighted bipartite graphs improve the \ufb01xation probability to 1 \u22121/r2 and, for \u03b1 small, approach the \ufb01xation time of the complete graph. b, Computer simulations for selected graphs of size N = 100 (as in Figure 3b). It is known than among unweighted graphs, only a very limited ampli\ufb01cation can be achieved [35]. Our \u03b1-Weighted bipartite graphs (with self-loops of varying weight) overcome this limitation and provide trade-o\ufb00s between high \ufb01xation probability and short \ufb01xation time. the relevant analytical question is whether weighted graphs can achieve interesting trade-o\ufb00s between \ufb01xation probability and time. We answer this question in the a\ufb03rmative by presenting a weighted version of \u03b1-Balanced bipartite graphs (see Figure 5a). Intuitively, we add weighted self-loops to all vertices in the larger group of an \u03b1-Balanced bipartite graph, such that when such a vertex is selected for reproduction, its o\ufb00spring replaces the parent most of the time and migrates to the smaller group only rarely. Formally, the \u03b1-Weighted bipartite graph WN,\u03b1 is a complete bipartite graph with the parts containing N and N 1\u2212\u03b1 vertices. Moreover, each vertex in the larger part has an extra self-loop of weight approximately N 1\u2212\u03b1/2. We show that, in the limit of large population size, this weighted bipartite graph structure achieves \ufb01xation probability 1 \u22121/r2 (which is the same as the star graph under uniform initialization), while the \ufb01xation time is of the order of N 1+\u03b1 log N, for any \u03b1 > 0 (compared to N log N \ufb01xation time of complete graph). Thus we again achieve the best of two worlds, that is, we present a graph family that, in the limit of large population, is as good an ampli\ufb01er as the star graph (under uniform initialization) and almost as good with respect to time as the complete graph. As before, Figure 5b shows computer simulations for N = 100, including Trees, random Erd\u02dd os\u2013R\u00b4 enyi graphs and the Bipartite graphs. 
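To make the construction concrete, the sketch below builds W_{N,alpha} as a weighted adjacency map (my own illustration; the helper name and the example values N = 400, alpha = 0.5 are assumptions). The self-loop weight w is chosen so that an offspring produced in the large part migrates to the small part with probability N^(-alpha/2), which works out to w = N^(1-alpha) (N^(alpha/2) - 1), approximately the N^(1-alpha/2) quoted in the caption; the precise normalization is spelled out again in the Methods section.

    def weighted_bipartite(N, alpha):
        """alpha-Weighted bipartite graph W_{N,alpha}: small part of size N^(1-alpha),
        large part of size N, every large-part vertex carrying a self-loop of weight w."""
        n_small = round(N ** (1 - alpha))
        small = [("s", i) for i in range(n_small)]
        large = [("l", i) for i in range(N)]
        w = n_small * (N ** (alpha / 2) - 1)   # ~ N^(1-alpha/2) - N^(1-alpha), i.e. about N^(1-alpha/2)
        weights = {}
        for v in large:
            weights[(v, v)] = w                # self-loop
            for u in small:
                weights[(v, u)] = 1.0          # unit-weight edges across the two parts
                weights[(u, v)] = 1.0
        return small, large, weights

    N, alpha = 400, 0.5
    small, large, weights = weighted_bipartite(N, alpha)
    v = large[0]
    migration = len(small) / (weights[(v, v)] + len(small))
    print(len(small), round(weights[(v, v)], 2), round(migration, 4), round(N ** (-alpha / 2), 4))

Smaller alpha keeps migration frequent, which is what brings the fixation time down towards the N log N of the complete graph.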
The \u03b1-Weighted bipartite graphs are the only graphs that considerably increase the \ufb01xation probability as compared to the complete graph. 2.4 E\ufb00ective rate of evolution Finally, we study the e\ufb00ectiveness of the presented population structures for small population sizes. We use an elegant mathematical formula for the e\ufb00ective rate of evolution that combines both \ufb01xation probability 8 \fand \ufb01xation time [17]. Let t1 = 1 N\u00b5\u03c1 denote the expected number of generations to generate a mutant that eventually \ufb01xates, where N is the population size, \u00b5 is the mutation rate and \u03c1 is the \ufb01xation probability. Let t2 = \u03c4 N denote the expected number of generations for a mutant to \ufb01xate once it is generated. Note that \u03c4 is the \ufb01xation time measured in steps of the Moran process, and \u03c4 N represents the number of generations. The e\ufb00ective rate of evolution is de\ufb01ned as the inverse of the sum of the above two quantities, i.e., 1 t1+t2 . The e\ufb00ective rate of evolution was studied for the complete graph and for the star graph under uniform initialization [17]. Here we further investigate the e\ufb00ective rate of evolution for \u03b1-Balanced bipartite graph under uniform initialization, and for \u03b1-Weighted bipartite graphs under temperature initialization, for relatively small population sizes. Regarding uniform initialization, we numerically compute the e\ufb00ective rate of evolution on \u03b1-Balanced bipartite graphs for a wide range of mutation rates \u00b5 and compare it to the e\ufb00ective rate of evolution on complete graphs and star graphs (see Figure 6a for \ufb01xed population size and Figure 6b for varying population sizes). The complete graph is more e\ufb00ective for high mutation rates and the star graph is more e\ufb00ective for low mutation rates but in the intermediate regime, suitable \u03b1-Balanced bipartite graphs are more e\ufb00ective than both the complete graph and the star graph. This is in a perfect alignment with the Pareto front presented in Figure 2. Regarding temperature initialization, we study \u03b1-Weighted bipartite graphs instead of \u03b1-Balanced bipartite graphs (Figure 6c,d). As before, the complete graph is the most e\ufb00ective population structure for high mutation rates. However, star graph is a suppressor under temperature initialization and performs poorly. Therefore, except for the high mutation rate regime, various \u03b1-Weighted bipartite graphs achieve higher e\ufb00ective rate of evolution than both the complete graph and the star graph. 3 Discussion Many previous studies have explored how population structure a\ufb00ects the \ufb01xation probability of new mutants [7, 16, 19, 20, 21, 23, 32, 33, 34, 35, 37, 38]. While such studies cover one major aspect of evolutionary dynamics, the other aspect, which is \ufb01xation time, is much less studied. Both \ufb01xation probability and \ufb01xation time play an important role in determining the rate of evolution. If the mutation rate is low, the rate-limiting step is waiting for an advantageous mutant to occur. In this regime the \ufb01xation probability is more important than the \ufb01xation time. Conversely, if the mutation rate is high, then \ufb01xation time is more relevant than \ufb01xation probability. In the intermediate-mutation rate regime, the trade-o\ufb00between \ufb01xation probability and \ufb01xation time must be considered. 
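The crossover described here can be reproduced directly from the definition. The sketch below is a back-of-the-envelope comparison (all names are mine; the star's fixation-time constant is set to 1, a rough assumption rather than a value from the paper, so only the qualitative shape of the crossover is meaningful): it evaluates 1/(t1 + t2) for the complete graph and the star over a range of mutation rates, using the large-N formulas quoted earlier.

    import math

    def effective_rate(mu, N, rho, tau):
        t1 = 1 / (N * mu * rho)    # generations until a mutant destined to fixate appears
        t2 = tau / N               # generations that mutant needs to fixate
        return 1 / (t1 + t2)

    N, r = 100, 1.1
    rho_complete = (1 - 1 / r) / (1 - r ** (-N))
    rho_star = 1 - 1 / r ** 2
    tau_complete = (r + 1) / (r - 1) * N * math.log(N)   # ~ N log N
    tau_star = N ** 2 * math.log(N)                      # ~ N^2 log N, constant assumed to be 1

    for mu in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7):
        ratio = effective_rate(mu, N, rho_star, tau_star) / effective_rate(mu, N, rho_complete, tau_complete)
        print(f"mu = {mu:.0e}   star / complete = {ratio:.2f}")

Figure 6 reports the same quantity with numerically computed probabilities and times in place of these asymptotic stand-ins.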
We study this trade-o\ufb00and propose population structures, called \u03b1-Balanced bipartite graphs and \u03b1-Weighted bipartite graphs, that provide substantial ampli\ufb01cation with negligible increase in the \ufb01xation time. This is in stark contrast with all previous works that achieve ampli\ufb01cation at the cost of asymptotically increasing the \ufb01xation time. As a consequence, compared to previous works, our population structures enable higher e\ufb00ective rate of evolution than the well-mixed population for a wide range of mutation-rate regimes. There are some interesting mathematical questions that remain open. While we show that (i) ampli\ufb01ers cannot have better asymptotic absorption time than the well-mixed population (in the limit of large population size, N \u2192\u221e), and (ii) there are graphs of \ufb01xed population size N, that are suppressors and have shorter \ufb01xation time than the well-mixed population, two particularly interesting questions are: (a) Does there exist an ampli\ufb01er of \ufb01xed population size that has shorter \ufb01xation time than the well-mixed population? (b) Does there exist a graph family (which must be suppressing) that has better asymptotic \ufb01xation time (for N \u2192\u221e) than the well-mixed population? Note that, in general, clonal interference can occur and the \ufb01xation of a mutant need not be achieved before the next mutation arrives [4, 39, 40]. Thus, the \ufb01xation probability and \ufb01xation time alone may not completely characterize the performance of a population structure with respect to the overall rate and e\ufb03ciency of an evolutionary search process. Nevertheless, the e\ufb00ective rate of evolution and the probabilitytime trade-o\ufb00curves are indicative of the e\ufb03cacy of each population structure in speeding-up evolution. The numerical and experimental study of population structures in the presence of clonal interference is another interesting direction for future work. 9 \fa b c d Uniform initialization Temperature initialization Star Complete \u03b5-Balanced Complete \u03b5-Weighted N = 100 N = 100 , , , , Figure 6: Fig. 4. E\ufb00ective rate of evolution. The e\ufb00ective rate of evolution depends on the population size, N, the mutation rate, \u00b5, and the population structure. For uniform initialization, we compare \ufb01ve di\ufb00erent population structures: the complete graph (blue), \u03b1-Balanced graphs with \u03b1 \u2208{0.1, 0.25, 0.5} (orange, green, red), and the star graph (purple), always showing the relative rate of evolution with respect to the complete graph. a, We \ufb01x N = 100, r = 1.1 and vary \u00b5 = 10\u22127, . . . , 100. The complete graph has a higher e\ufb00ective rate of evolution if the mutation rate is high (\u00b5 > 10\u22123) and star graph is favorable if the mutation rate is low (\u00b5 < 3\u00b710\u22126). In the intermediate regime, suitable \u03b1-Balanced graphs outperform both of them. b, We \ufb01x r = 1.1 and N \u00b7 \u00b5 \u2208{10\u22122, 10\u22123, 10\u22124} and vary N = 10, 20, . . . , 500. The star graph is favorable if mutations are rare (N \u00b7\u00b5 = 10\u22124 and N small). Otherwise, suitable \u03b1-Balanced graphs are more e\ufb03cient. c, d Analogous data for temperature initialization. This time we compare the complete graph (blue) and the star graph (purple) with \u03b1-Weighted bipartite graphs for \u03b1 \u2208{0.25, 0.5, 1} (orange, green, red). 
The complete graph dominates if mutations are common (N \u00b7 \u00b5 = 10\u22122). In other cases, \u03b1-Weighted bipartite graphs are preferred. The star is not an ampli\ufb01er for temperature initialization. 10 \fThe population structures which we have described here could become an important tool for in vitro evolution [41, 42, 43, 44], since they can substantially speed up the process of \ufb01nding advantageous mutants. In vitro evolution, can be used to discover optimized protein or nucleotide sequences for any medical or industrial purpose. Depending on the mutation-rate regime, our work shows that di\ufb00erent population structures can lead to more e\ufb00ective time scales of discovery. 4 Methods In this section we introduce the model in detail and formally state our results, pointing to the relevant appendices for the full derivation. Moran process on graphs Moran Birth-death process is a discrete-time stochastic (random) process that models evolutionary dynamics in a spatially structured population. The population structure is represented by a connected graph G, possibly with weighted edges and/or self-loops. At all times, each vertex of the graph is occupied by a single individual that is of one of two types: either a resident or a mutant. The individuals of one type are considered indistinguishable. Moreover, residents are assigned (normalized) \ufb01tness 1 while the mutants have \ufb01tness r. Here we consider advantageous mutants (r > 1). In one step of the process, an individual is selected for reproduction randomly and proportionally to its \ufb01tness. This individual produces an o\ufb00spring that is a copy of itself. This o\ufb00spring then selects one of the adjacent edges proportionally to the edge weight and travels along that edge to replace the individual at its other endpoint. (If the selected edge happened to be a self-loop then the o\ufb00spring replaces the parent and nothing changes.) These steps continue until the population becomes homogeneous: either all individuals are mutants (\ufb01xation occurred) or they are all residents (extinction occurred). The well-mixed population is modelled by an unweighted complete graph (without self-loops). Initialization scheme We study the situation of a single mutant invading a population of residents. This initial mutant can appear either spontaneously or during reproduction. In the \ufb01rst case, called uniform initialization, the mutant is placed at a vertex chosen uniformly at random. In the second case, called temperature initialization, we perform one step of the Moran process in a population that consists entirely of residents and place the mutant at the vertex that the o\ufb00spring migrates to. Formally, the mutant is placed at a random vertex, proportionally to the temperature (or turnover rate) of that vertex. Here temperature t(v) of a vertex v is de\ufb01ned by t(v) = X u\u2208N(v) w(u, v) P v\u2032\u2208N(u) w(u, v\u2032), where w(u, v) is the weight of edge between u and v and N(v) is the set of neighbors of v, that is vertices connected to v by an edge. Fixation probability and time Given a graph G with N vertices and one speci\ufb01c vertex v, we denote by fp(G, v, r) the \ufb01xation probability of a single mutant with \ufb01tness r starting at vertex v, in a standard Moran Birth-death process. Then the \ufb01xation probability under uniform initialization is simply the average fp(G, r) = 1 N P v fp(G, v, r). 
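The definitions above translate almost line by line into a Monte Carlo estimator. The following sketch is a minimal illustration (not the authors' code; the function names, the example star graph on 10 vertices and the choices r = 1.5 and 2000 runs are mine): it implements one step of the weighted Birth-death process, the temperature formula t(v), and both initialization schemes, and estimates fp(G, r) by repeated simulation.

    import random

    def temperatures(weights, vertices):
        """t(v) = sum_u w(u, v) / (sum_{v'} w(u, v')), as defined above."""
        out_sum = {u: 0.0 for u in vertices}
        for (u, v), w in weights.items():
            out_sum[u] += w
        t = {v: 0.0 for v in vertices}
        for (u, v), w in weights.items():
            t[v] += w / out_sum[u]
        return t

    def moran_fixated(weights, vertices, r, start):
        """Simulate the Birth-death process to absorption; True iff the mutant lineage fixates."""
        out = {u: ([], []) for u in vertices}        # offspring targets and edge weights
        for (u, v), w in weights.items():
            out[u][0].append(v)
            out[u][1].append(w)
        mutants = {start}
        while 0 < len(mutants) < len(vertices):
            fitness = [r if v in mutants else 1.0 for v in vertices]
            parent = random.choices(vertices, weights=fitness)[0]
            child_site = random.choices(out[parent][0], weights=out[parent][1])[0]
            if parent in mutants:
                mutants.add(child_site)
            else:
                mutants.discard(child_site)
        return len(mutants) == len(vertices)

    def estimate_fp(weights, vertices, r, init="uniform", runs=2000):
        temps = temperatures(weights, vertices)
        hits = 0
        for _ in range(runs):
            if init == "uniform":
                start = random.choice(vertices)
            else:                                    # temperature initialization
                start = random.choices(vertices, weights=[temps[v] for v in vertices])[0]
            hits += moran_fixated(weights, vertices, r, start)
        return hits / runs

    # Unweighted star on 10 vertices (centre 0), r = 1.5.
    vertices = list(range(10))
    weights = {}
    for leaf in range(1, 10):
        weights[(0, leaf)] = weights[(leaf, 0)] = 1.0
    print(estimate_fp(weights, vertices, 1.5, "uniform"),
          estimate_fp(weights, vertices, 1.5, "temperature"))

Because the centre of this star has temperature N - 1 = 9, temperature initialization places the mutant there nine times out of ten, and such a mutant is usually wiped out quickly; the second estimate therefore comes out far below the first.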
The \ufb01xation probability under temperature initialization is a weighted average fpT (G, r) = P v t(v) \u00b7 fp(G, v, r), where t(v) is the temperature of vertex v. Similarly, we de\ufb01ne T(G, r) (or TT (G, r)) to be the \ufb01xation time, that is the expected number of steps of the Moran process until the mutants reach \ufb01xation (conditioning on them doing so). Likewise we de\ufb01ne ET(G, r) (or ETT (G, r)) to be the extinction time and AT(G, r) (or ATT (G, r)) to be the (unconditional) absorption time. 11 \fAmpli\ufb01ers and suppressors A graph GN with N vertices is called an ampli\ufb01er if it increases the \ufb01xation probability of any advantageous mutant, as compared to the Complete graph (that is, fp(GN, r) > fp(KN, r) for any r > 1). On the other hand, a graph G\u2032 N with N vertices is called a suppressor if it decreases the \ufb01xation probability of any advantageous mutant, as compared to the Complete graph (that is, fp(G\u2032 N, r) < fp(KN, r) for any r > 1). Notation for asymptotic behavior To talk about asymptotic behavior (in the limit of large population size N), we use standard mathematical notations o(\u00b7), O(\u00b7), and \u0398(\u00b7) that denote asymptotically strictly smaller, asymptotically less than or equal to, and asymptotically equal to (up to a constant factor), respectively. For example, we will write 1/N = o(1) (as 1/N is much smaller than 1, for large N) or 1 2N(N + 1) = \u0398(N 2). For detailed treatment see [45, Section 1.3]. Graphs We introduce and study the following graphs. Complete graph Complete graph KN on N vertices models a well-mixed population. This case is well understood. In particular, the \ufb01xation probability satis\ufb01es fp(KN, r) = fpT (KN, r) = 1 \u22121/r 1 \u22121/rN \u21921 \u22121/r for r > 1 as N \u2192\u221eand the (unconditional) absorption time is of the order of \u0398(N log N) [8]. In fact, using a standard di\ufb00erence method one can derive that, for r > 1, we have AT(KN, r) \u2248r+1 r \u00b7 N log N and T(KN, r) \u2248r+1 r\u22121 \u00b7 N log N. For reference purposes we present those proofs in Appendix 4. Star graph Star graph SN consists of one central vertex connected to each of the remaining N \u22121 vertices on the periphery. For large N, it is known that fp(SN, r) \u21921 \u22121/r2 and that the absorption and \ufb01xation time are of the order of at most O(N 2 log N) and O(N 3), respectively [25]. In fact, as a corollary of our results on \u03b1-Balanced bipartite graph, we show that both the absorption time and the \ufb01xation time are of the order of \u0398(N 2 log N). The bottom line is that, under uniform initialization, the Star graph ampli\ufb01es the \ufb01xation probability but at the cost of substantially increasing the \ufb01xation time. Under temperature initialization, the star graph is a suppressor (in fact, fpT (Sn, r) \u21920 as n \u2192\u221e). \u03b1-Balanced bipartite graph For uniform initialization we present a family of graphs that, in the limit of large population size, achieve the \ufb01xation probability of the Star graph and the \ufb01xation time almost as good as the Complete graph. The graphs are complete bipartite graphs with both parts large but one part asymptotically larger than the other one. Formally, given N and \u03b1 \u2208(0, 1], the \u03b1-Balanced bipartite graph BN,\u03b1 is a complete bipartite graph with parts of size N 1\u2212\u03b1 and N. 
That is, there are N 1\u2212\u03b1 vertices in one part, N vertices in the other part, and all edges that connect vertices in di\ufb00erent parts. The case \u03b1 = 1 corresponds to a Star graph. 12 \f\u03b1-Weighted bipartite graphs For temperature initialization, the Star graph and the \u03b1-Balanced bipartite graphs fail to amplify. We present another family of weighted graphs with self-loops that, in the limit of large population size, provide \ufb01xation probability 1 \u22121/r2 (the same as Star graph under uniform initialization) and the \ufb01xation time almost as good as the Complete graph. The graphs are obtained by adding self-loops of relatively large weight to all vertices in the larger part of an \u03b1-Balanced bipartite graph. Formally, given N and \u03b1 \u2208(0, 1), the Weighted bipartite graph WN,\u03b1 is a complete bipartite graphs with one (smaller) part of size N 1\u2212\u03b1, one (larger) part of size N, and every vertex of the larger part having a self-loop of such a weight w that N \u2212\u03b1/2 = N 1\u2212\u03b1 w+N 1\u2212\u03b1 . The case \u03b1 = 1 is closely related to a Looping Star [27]. Analytical results Here we summarize our analytical results. They are all related to the trade-o\ufb00between \ufb01xation probability and \ufb01xation time, under both uniform and temperature initialization. First, we prove that no ampli\ufb01er is asymptotically faster than the Complete graph in terms of absorption time (recall that T(KN, r) = \u0398(N log N), see Appendix 4). Informally, the idea is as follows: For every k = 1, . . . , N \u22121, we denote by tX k the expected time it takes to gain a single mutant from any con\ufb01guration X consisting of k mutants. To gain a mutant, one of the k mutants has to be selected for reproduction (and then the mutant has to replace a resident). We show that this yields a lower bound for tk that is proportional to N/k. Summing over all k\u2019s we get that the total absorption time is of the order of at least N/1+N/2+\u00b7 \u00b7 \u00b7+N/(N \u22121) \u2248N log N. Since the absorption time for the complete graph is also proportional to N log N, no ampli\ufb01er is signi\ufb01cantly faster than the complete graph. Theorem 1. Fix r > 1. Let G be any graph with N \u22652 vertices and let p = fp(G, r) be the \ufb01xation probability of a single mutant under uniform initialization. Then AT(G, r) \u2265p r \u00b7 N \u00b7 HN\u22121, where HN\u22121 = 1 1 + 1 2 + \u00b7 \u00b7 \u00b7 + 1 N\u22121 \u2265log N. In particular, AT(G, r) \u2265p r \u00b7 N log N for an arbitrary graph G and AT(A, r) \u2265r\u22121 r2 \u00b7 N log N for an arbitrary ampli\ufb01er A. For the formal proof, see Appendix 1. Second, we give tight results for the \ufb01xation time on Bipartite graphs. In particular, we prove that under uniform initialization, certain \u03b1-Balanced bipartite graphs BN,\u03b1 asymptotically achieve the \ufb01xation probability of the Star graph and the \ufb01xation time almost as good as the Complete graph. The analysis of \ufb01xation probability is relatively straightforward. For \ufb01xation time, we provide tight lower and upper bounds. We \ufb01rst present the lower bound that is proportional to N 1+\u03b1 log N. For the upper bound we then distinguish two cases: If the size of the smaller part is small, that is N 1\u2212\u03b1 = o( \u221a N), then the argument is simpler and we get a matching upper bound. If the size of the smaller part is relatively close to N, the upper bound has an additional factor of N \u03b1. 
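A quick numerical reading of Theorem 1 (my own illustration, not part of the proof): taking p >= 1 - 1/r, as any amplifier must satisfy, the lower bound (p/r) N H_{N-1} and the complete-graph approximation ((r+1)/r) N log N differ only by a factor that does not grow with N.

    import math

    def harmonic(n):
        return sum(1 / k for k in range(1, n + 1))

    r = 1.1
    for N in (10, 100, 1000):
        H = harmonic(N - 1)                          # H_{N-1} >= log N
        amplifier_bound = (1 - 1 / r) / r * N * H    # Theorem 1 with p >= 1 - 1/r
        complete_approx = (r + 1) / r * N * math.log(N)
        print(N, round(amplifier_bound), round(complete_approx),
              round(complete_approx / amplifier_bound, 1))

The printed ratio stays near 20 rather than growing with N; in the limit it approaches r(r+1)/(r-1), about 23 at r = 1.1.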
As a consequence, we can prove the following theorem. Theorem 2. Fix \u03b1 \u2208(0, 1] and r > 1. Let BN,\u03b1 be the \u03b1-Balanced bipartite graph. Then \u2022 fp(BN,\u03b1, r) \u21921 \u22121/r2. \u2013 (small center) If \u03b1 \u2208(0.5, 1) then there exist constants c1, c2 such that c1 \u00b7 N 1+\u03b1 log N \u2264AT(BN,\u03b1, r) \u2264c2 \u00b7 N 1+\u03b1 log N. \u2013 (large center) If \u03b1 \u2208(0, 0.5) then there exist constants c1, c2 such that c1 \u00b7 N 1+\u03b1 log N \u2264AT(BN,\u03b1, r) \u2264c2 \u00b7 N 1+2\u03b1 log N. Moreover, the \ufb01xation time T(BN,\u03b1, r) satis\ufb01es the same inequalities. 13 \fAs an immediate corollary, we obtain that for any \ufb01xed r > 1, both the absorption and the \ufb01xation time on a Star graph (\u03b1 = 1) are of the order of \u0398(N 2 log N). This is in alignment with earlier results [25, 26]. For the formal proof, see Appendix 2. Third, we prove that under temperature initialization, analogous results can be achieved using \u03b1-Weighted bipartite graphs WN,\u03b1. Theorem 3. Fix \u03b1 \u2208(0, 1] and r > 1. Let WN,\u03b1 be the Weighted bipartite graph. Then \u2022 fp(WN,\u03b1, r) \u21921 \u22121/r2. \u2022 There exist constants c1, c2 such that c1 \u00b7 N 1+\u03b1 log N \u2264AT(BN,\u03b1, r) \u2264c2 \u00b7 N 1+ 3 2 \u03b1 log N. Moreover, the \ufb01xation time T(BN,\u03b1, r) satis\ufb01es the same inequalities. For the formal proof, see Appendix 3. Finally, for reference purposes we compute the absorption, \ufb01xation, and extinction times of a single advantageous mutant (r > 1) on a Complete graph, using the standard di\ufb00erence method. Theorem 4. Fix r > 1 and let KN be the Complete graph on N vertices. Then AT(KN, r) = (N \u22121)HN\u22121 \u00b7 r + 1 r + (N \u22121) \u00b7 log(1 \u22121/r) \u2212 1 r(r \u22121) + o(1), T(KN, r) = (N \u22121)HN\u22121 \u00b7 r + 1 r \u22121 + (N \u22121) \u00b7 r + 1 r \u22121 log(1 \u22121/r) + o(N), ET(KN, r) = (N \u22121) \u00b7 log \u0012 r r \u22121 \u0013 + o(N). In particular for r = 1 + s, s > 0 small, we have AT(KN, r) \u22482 \u00b7 N log N, T(KN, r) \u22482 s \u00b7 N log N, and ET(KN, r) \u22481 s \u00b7 N. For the formal proof, see Appendix 4. Acknowledgments J.T. and K.C. acknowledge support from ERC Start grant no. (279307: Graph Games), Austrian Science Fund (FWF) grant no. P23499-N23 and S11407-N23 (RiSE). A.P. acknowledges support from FWF Grant No. J-4220. M.A.N. acknowledges support from O\ufb03ce of Naval Research grant N00014-16-1-2914 and from the John Templeton Foundation. The Program for Evolutionary Dynamics is supported in part by a gift from B. Wu and E. Larson. Competing interests The authors declare that no competing interests exist." + }, + { + "url": "http://arxiv.org/abs/0712.3714v1", + "title": "Atomistic and orthoatomistic effect algebras", + "abstract": "We characterize atomistic effect algebras, prove that a weakly orthocomplete\nArchimedean atomic effect algebra is orthoatomistic and present an example of\nan orthoatomistic orthomodular poset that is not weakly orthocomplete.", + "authors": "Josef Tkadlec", + "published": "2007-12-21", + "updated": "2007-12-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "main_content": "Introduction One of the basic concepts in the foundation of quantum physics is a quantum e\ufb00ect that play an important role in the theory of the so-called unsharp measurements [1, 2]. Quantum e\ufb00ects are studied within a general algebraic framework called an e\ufb00ect algebra [2, 3, 5]. 

Atoms (minimal nonzero elements) play an important role in quantum structures, especially if every element of the structure can be built up from atoms, i.e., if the structure is atomistic or orthoatomistic; hence these properties are of particular interest [3, 7, 8, 9, 10]. In this paper we generalize some results concerning atomistic and orthoatomistic quantum structures and present a few illustrating examples. 2 Basic notions and properties 2.1. Definition. An effect algebra is an algebraic structure (E, \u2295, 0, 1) such that E is a set, 0 and 1 are different elements of E and \u2295 is a partial binary operation on E such that for every a, b, c \u2208 E the following conditions hold: (1) a \u2295 b = b \u2295 a if a \u2295 b exists, (2) (a \u2295 b) \u2295 c = a \u2295 (b \u2295 c) if (a \u2295 b) \u2295 c exists, (3) there is a unique a\u2032 \u2208 E such that a \u2295 a\u2032 = 1 (orthosupplement), (4) a = 0 whenever a \u2295 1 is defined. For simplicity, we use the notation E for an effect algebra. A partial ordering on an effect algebra E is defined by a \u2264 b iff there is a c \u2208 E such that b = a \u2295 c. Such an element c is unique (if it exists) and is denoted by b \u2296 a. 0 (1, resp.) is the least (the greatest, resp.) element of E with respect to this partial ordering. For every a, b \u2208 E, a\u2032\u2032 = a and b\u2032 \u2264 a\u2032 whenever a \u2264 b. It can be shown that a \u2295 0 = a for every a \u2208 E and that a cancellation law is valid: for every a, b, c \u2208 E with a \u2295 b \u2264 a \u2295 c we have b \u2264 c. An orthogonality relation on E is defined by a \u22a5 b iff a \u2295 b exists (iff a \u2264 b\u2032). See, e.g., [2, 3]. Obviously, if a \u22a5 b and a \u2228 b exist in an effect algebra, then a \u2228 b \u2264 a \u2295 b. The reverse inequality need not be true (it holds in orthomodular posets). 2.2. Definition. Let E be an effect algebra. An element a \u2208 E is principal if b \u2295 c \u2264 a for every b, c \u2208 E such that b, c \u2264 a and b \u22a5 c. 2.3. Definition. An orthoalgebra is an effect algebra E in which, for every a \u2208 E, a = 0 whenever a \u2295 a is defined. An orthomodular poset is an effect algebra in which every element is principal. An orthomodular lattice is an orthomodular poset that is a lattice. Every orthomodular poset is an orthoalgebra. Indeed, if a \u2295 a is defined then a \u2295 a \u2264 a = a \u2295 0 and, according to the cancellation law, a \u2264 0 and therefore a = 0. Orthomodular posets are characterized as effect algebras such that a \u2295 b = a \u2228 b for every orthogonal pair a, b. (See [3, 4].) Let us remark that an orthomodular poset is usually defined as a bounded partially ordered set with an orthocomplementation in which the orthomodular law is valid. 2.4. Definition. Let E be an effect algebra. The isotropic index of an element a \u2208 E is sup{n \u2208 N : na is defined}, where na = a \u2295 \u00b7\u00b7\u00b7 \u2295 a is the sum of n copies of a. An effect algebra is Archimedean if every nonzero element of it has a finite isotropic index. The isotropic index of 0 is \u221e. In an orthoalgebra we have that a \u2295 a is defined only for a = 0, hence the isotropic index of every nonzero element is 1. Therefore we obtain: 2.5. Proposition. Every orthoalgebra is Archimedean. 2.6. Definition. 
Let E be an e\ufb00ect algebra. A system (ai)i\u2208I of (not necessarily distinct) elements of E is called orthogonal, if L i\u2208F ai is de\ufb01ned for every \ufb01nite set F \u2282I. We de\ufb01ne L i\u2208I ai = W{L i\u2208F ai : F \u2282I is \ufb01nite} if the supremum exists. An e\ufb00ect algebra E is orthocomplete if L i\u2208I ai is de\ufb01ned for every orthogonal system (ai)i\u2208I of elements of E. An e\ufb00ect algebra E is weakly orthocomplete if for every orthogonal system (ai)i\u2208I of elements of E either L i\u2208I ai exists or there is no minimal upper bound of the set {L i\u2208F ai : F \u2282I is \ufb01nite} in E. Every pair of elements of an orthogonal system is orthogonal. On the other hand, there are mutually orthogonal elements that do not form an orthogonal system if the e\ufb00ect algebra is not an orthomodular poset. Since only the zero element is orthogonal to itself in an orthoalgebra, we may consider sets instead of systems in orthoalgebras. 2.7. Proposition. Every orthocomplete e\ufb00ect algebra is Archimedean. 2 \fProof. Let E be an orthocomplete e\ufb00ect algebra and let a \u2208E has an in\ufb01nite isotropic index. There is an element b \u2208E such that b = L n\u2208N a = W n\u2208N na. Since a \u2264b, there is an element c \u2208E such that b = a \u2295c. For every n \u2208N we have a \u2295c = b \u2265(n + 1)a = a \u2295na and therefore, according to the cancellation law, c \u2265na. Hence c \u22950 = c \u2265W n\u2208N na = b = c \u2295a and, according to the cancellation law, 0 \u2265a and therefore a = 0. 2.8. De\ufb01nition. An atom of an e\ufb00ect algebra E is a minimal element of E \\ {0}. An e\ufb00ect algebra is atomic if every nonzero element dominates an atom (i.e., there is an atom less than or equal to it). An e\ufb00ect algebra is atomistic if every nonzero element is a supremum of a set of atoms (i.e., of the set of all atoms it dominates). An e\ufb00ect algebra is orthoatomistic if every nonzero element is a sum of a set of atoms. It is easy to see that every atomistic and every orthoatomistic e\ufb00ect algebra is atomic and that every orthoatomistic orthomodular poset is atomistic. There are atomic orthomodular posets that are not atomistic [7], atomistic orthomodular posets that are not orthoatomistic [8] and orthoatomistic orthoalgebras that are not atomistic\u2014e.g., the so-called Wright triangle [4, Example 2.13]. 3 Results First, let us present a characterization of atomistic e\ufb00ect algebras that generalizes the result of [8] stated for orthomodular posets. 3.1. De\ufb01nition. An e\ufb00ect algebra E is disjunctive if for every a, b \u2208E with a \u0338\u2264b there is a nonzero element c \u2208E such that c \u2264a and c \u2227b = 0. 3.2. Theorem. An e\ufb00ect algebra is atomistic if and only if it is atomic and disjunctive. Proof. Let E be an e\ufb00ect algebra and let us for every x \u2208E denote by Ax the set of atoms dominated by x. \u21d2: Obviously, every atomistic e\ufb00ect algebra is atomic. Let a, b \u2208E such that a \u0338\u2264b. Then there is an atom c \u2208Aa \\ Ab, hence c \u2264a and c \u2227b = 0. \u21d0: Let us prove that a \u2264b for every nonzero a \u2208E and for every upper bound b \u2208E of Aa (hence a = W Aa). Let us suppose that a \u0338\u2264b and seek a contradiction. Since E is disjunctive, there is a nonzero element c \u2208E such that c \u2264a and c\u2227b = 0. Since E is atomic, there is an atom d \u2208E such that d \u2264c. Hence d \u2264a and d\u2227b = 0. 
Since d is an atom, d \u0338\u2264b and therefore d \u2208Aa \\ Ab\u2014a contradiction. Before stating the second main result of this paper, let us discuss relations of some properties. 3.3. Proposition. Let E be an e\ufb00ect algebra ful\ufb01lling at least one of the following conditions: (OC) E is orthocomplete. 3 \f(L) E is a lattice. Then E is weakly orthocomplete. Proof. (OC): Obvious. (L): Let (ai)i\u2208I be an orthogonal system of elements of E. Let us show that if a minimal upper bound a of the set A = {L i\u2208F ai : F \u2282I is \ufb01nite} exists then a = W A. Let b be an upper bound of A. Then b \u2227a \u2264a is an upper bound of A and, since a is minimal, b \u2227a = a. Hence a \u2264b. Let us present examples showing that the scheme of implications in the previous proposition cannot be improved. 3.4. Example. Let X be a countable in\ufb01nite set. Let E be a family of \ufb01nite and co\ufb01nite subsets of X with the \u2295operation de\ufb01ned as the union of disjoint sets. Then (E, \u2295, \u2205, X) is an orthomodular lattice (it forms a Boolean algebra) that is not orthocomplete. 3.5. Example. Let X be a 6-element set. Let E be the family of even-element subsets of X with the \u2295operation de\ufb01ned as the union of disjoint sets from E. Then (E, \u2295, \u2205, X) is a \ufb01nite (hence orthocomplete) orthomodular poset that is not a lattice. 3.6. Example. Let X1, X2, X3, X4 be mutually disjoint in\ufb01nite sets, X = S4 i=1 Xi, E0 = {\u2205, X1 \u222aX2, X2 \u222aX3, X3 \u222aX4, X4 \u222aX1, X} , E = {(A \\ F) \u222a(F \\ A) : F \u2282X is \ufb01nite, A \u2208E0} , A \u2295B = A \u222aB for disjoint A, B \u2208E. Then (E, \u2295, \u2205, X) is a weakly orthocomplete orthomodular poset that is neither orthocomplete (e.g., W\b {x} : x \u2208X1 \t does not exist) nor a lattice (e.g., (X1 \u222aX2) \u2227(X2 \u222aX3) does not exist). 3.7. Theorem. Every weakly orthocomplete Archimedean atomic e\ufb00ect algebra is orthoatomistic. Proof. Let E be a weakly orthocomplete Archimedean atomic e\ufb00ect algebra and let a \u2208E \\ {0}. Let us consider orthogonal systems of atoms such that their \ufb01nite sums are dominated by a. Since E is atomic, there are such systems. Since E is Archimedean, using the Zorn\u2019s lemma we obtain that there is a maximal such system, M. Let us show that a is a minimal upper bound of the set A = {L F : F \u2282M is \ufb01nite}. Indeed, if there is an upper bound b \u2208E of A such that b < a then a\u2296b \u0338= 0, there is an atom c \u2208E such that c \u2264a\u2296b and therefore c\u2295L F \u2264a for every \ufb01nite set F \u2282M\u2014this contradicts to the maximality of M. Since E is weakly orthocomplete, a = W A = L M. The previous theorem generalizes the result of [8] stated for weakly orthocomplete atomic orthomodular posets, the result of [3, Proposition 4.11] stated for chain \ufb01nite e\ufb00ect algebras and the result of [9, Theorem 3.1] stated for lattice Archimedean atomic e\ufb00ect algebras. 4 \fNone of the assumptions in Theorem 3.7 can be omitted. Indeed, there are atomistic orthomodular posets that are not orthoatomistic [8], Boolean algebras that are not atomic (e.g., exp N|F (N) where F(N) denotes the family of \ufb01nite subsets of the set N of natural numbers), and, as the following example shows, weakly orthocomplete atomic e\ufb00ect algebras that are not orthoatomistic. 3.8. Example. Let E = {0, 1, 2, . . . , n, . . . , n\u2032, . . . 
, 2\u2032, 1\u2032, 0\u2032} with the \u2295operation de\ufb01ned by m \u2295n = m + n for every m, n \u2208N and m \u2295n\u2032 = (n \u2212m)\u2032 for every m, n \u2208N with m \u2264n. Then (E, \u2295, 0, 0\u2032) is an atomic e\ufb00ect algebra (it forms a chain) that is weakly orthocomplete. Indeed, if an orthogonal system M of nonzero elements of E is \ufb01nite then L M is de\ufb01ned; if M is in\ufb01nite then the set of \ufb01nite sums of elements of M forms an unbounded set of natural numbers and therefore does not have a minimal upper bound. The e\ufb00ect algebra is not orthoatomistic because no element n\u2032, n \u2208N, is a sum of atoms. Let us present an example that an orthoatomistic orthomodular poset need not be weakly orthocomplete. 3.9. Example. Let X, Y be disjoint in\ufb01nite countable sets, E0 = {A \u2282(X \u222aY ) : card(A \u2229X) = card(A \u2229Y ) is \ufb01nite} , E = E0 \u222a{(X \u222aY ) \\ A : A \u2208E0} , A \u2295B = A \u222aB for disjoint A, B \u2208E. Then (E, \u2295, \u2205, X \u222aY ) is an orthomodular poset. It is orthoatomistic because for every nonempty A \u2208E we have card(A\u2229X) = card(A \u2229Y ), there is a bijection f : (A \u2229X) \u2192(A \u2229Y ) and A = L\b {x, f(x)} : x \u2208(A \u2229X) \t . The orthomodular poset is not weakly orthocomplete because for x0 \u2208X, y0 \u2208Y there is a bijection f : X \u2192(Y \\ {y0}) and the orthogonal set \b {x, f(x)} : x \u2208X \\{x0} \t has di\ufb00erent minimal upper bounds (X \u222aY )\\{x0, f(x0)} and (X \u222aY ) \\ {x0, y0}. Acknowledgements The work was supported by the grant of the Grant Agency of the Czech Republic no. 201/07/1051 and by the research plan of the Ministry of Education of the Czech Republic no. 6840770010." + } + ], + "Shibashis Guha": [ + { + "url": "http://arxiv.org/abs/2207.07694v3", + "title": "Parikh Automata over Infinite Words", + "abstract": "Parikh automata extend finite automata by counters that can be tested for\nmembership in a semilinear set, but only at the end of a run, thereby\npreserving many of the desirable algorithmic properties of finite automata.\nHere, we study the extension of the classical framework onto infinite inputs:\nWe introduce reachability, safety, B\\\"uchi, and co-B\\\"uchi Parikh automata on\ninfinite words and study expressiveness, closure properties, and the complexity\nof verification problems.\n We show that almost all classes of automata have pairwise incomparable\nexpressiveness, both in the deterministic and the nondeterministic case; a\nresult that sharply contrasts with the well-known hierarchy in the\n$\\omega$-regular setting. Furthermore, emptiness is shown decidable for Parikh\nautomata with reachability or B\\\"uchi acceptance, but undecidable for safety\nand co-B\\\"uchi acceptance. Most importantly, we show decidability of model\nchecking with specifications given by deterministic Parikh automata with safety\nor co-B\\\"uchi acceptance, but also undecidability for all other types of\nautomata. Finally, solving games is undecidable for all types.", + "authors": "Shibashis Guha, Isma\u00ebl Jecker, Karoliina Lehtinen, Martin Zimmermann", + "published": "2022-07-15", + "updated": "2022-12-20", + "primary_cat": "cs.FL", + "cats": [ + "cs.FL", + "cs.LO" + ], + "main_content": "Introduction While \ufb01nite-state automata are the keystone of automata-theoretic veri\ufb01cation, they are not expressive enough to deal with the many nonregular aspects of realistic veri\ufb01cation problems. 
Various extensions of \ufb01nite automata have emerged over the years, to allow for the speci\ufb01cation of context-free properties and beyond, as well as the modelling of timed and quantitative aspects of systems. Among these extensions, Parikh automata, introduced by Klaedtke and Rue\u00df [18], consist of \ufb01nite automata augmented with counters that can only be incremented. A Parikh automaton only accepts a word if the \ufb01nal counter-con\ufb01guration is within a semilinear set speci\ufb01ed by the automaton. As the counters do not interfere with the control \ufb02ow of the automaton, that is, counter values do not a\ufb00ect whether transitions are enabled, they allow for mild quantitative computations without the full power of vector addition systems or other more powerful models. \u2217Shibashis Guha is supported by the DST-SERB project SRG/2021/000466 Zero-sum and Nonzero-sum Games for Controller Synthesis of Reactive Systems. Isma\u00a8 el Jecker is supported by the ERC grant 950398 (INFSYS). Martin Zimmermann is supported by DIREC \u2013 Digital Research Centre Denmark. 1 \fFor example, the nonregular language of words that have more a\u2019s than b\u2019s is accepted by a Parikh automaton obtained from the one-state DFA accepting {a, b}\u2217by equipping it with two counters, one counting the a\u2019s in the input, the other counting the b\u2019s, and a semilinear set ensuring that the \ufb01rst counter is larger than the second one. With a similar approach, one can construct a Parikh automaton accepting the noncontext-free language of words that have more a\u2019s than b\u2019s and more a\u2019s than c\u2019s. Klaedtke and Rue\u00df [18] showed Parikh automata to be expressively equivalent to a quantitative version of existential WMSO that allows for reasoning about set cardinalities. Their expressiveness also coincides with that of reversal-bounded counter machines [18], in which counters can go from decrementing to incrementing only a bounded number of times, but in which counters a\ufb00ect control \ufb02ow [17]. The (weakly) unambiguous restriction of Parikh automata, that is, those that have at most one accepting run, on the other hand, coincide with unambiguous reversal-bounded counter machines [2]. Parikh automata are also expressively equivalent to weighted \ufb01nite automata over the groups (Zk, +, 0) [9, 20] for k \u2a7e1. This shows that Parikh automata accept a natural class of quantitative speci\ufb01cations. Despite their expressiveness, Parikh automata retain some decidability: nonemptiness, in particular, is NP-complete [12]. For weakly unambiguous Parikh automata, inclusion [5] and regular separability [6] are decidable as well. Figueira and Libkin [12] also argued that this model is well-suited for querying graph databases, while mitigating some of the complexity issues related with more expressive query languages. Further, they have been used in the model checking of transducer properties [14]. As Parikh automata have been established as a robust and useful model, many variants thereof exist: pushdown (visibly [8] and otherwise [22]), two-way with [8] and without stack [13], unambiguous [4], and weakly unambiguous [2] Parikh automata, to name a few. Despite this attention, so far, some more elementary questions have remained unanswered. For instance, despite Klaedtke and Rue\u00df\u2019s suggestion in [18] that the model could be extended to in\ufb01nite words, we are not aware of previous work on \u03c9-Parikh automata. 
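To make the informal description above concrete, here is a minimal Python sketch (ours, not the authors') of the second automaton mentioned earlier in this introduction: a one-state Parikh automaton over {a, b, c} that counts each letter separately and accepts exactly the words with more a's than b's and more a's than c's. The counters never influence which transitions are available; only the final counter vector is tested against the semilinear set. The dictionary encoding and function names are illustrative choices, not part of the model's definition.

```python
# Minimal sketch of a one-state Parikh automaton for the non-context-free
# language of words with more a's than b's and more a's than c's.  Each letter
# adds a unit vector to the counters; the counters are only inspected at the
# very end, via the semilinear set {(x, y, z) : x > y and x > z}.

DELTA = {"a": (1, 0, 0), "b": (0, 1, 0), "c": (0, 0, 1)}

def in_target_set(vec):
    x, y, z = vec
    return x > y and x > z

def accepts(word):
    counters = (0, 0, 0)
    for letter in word:
        counters = tuple(c + d for c, d in zip(counters, DELTA[letter]))
    return in_target_set(counters)

assert accepts("aacba") and not accepts("abc") and not accepts("aabb")
```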
Yet, speci\ufb01cations over in\ufb01nite words are a crucial part of the modern veri\ufb01cation landscape. Indeed, programs, especially safety-critical ones, are often expected to run continuously, possibly in interaction with an environment. Then, executions are better described by in\ufb01nite words, and accordingly, automata over in\ufb01nite, rather than \ufb01nite, words are appropriate for capturing speci\ufb01cations. This is the starting point of our contribution: we extend Parikh automata to in\ufb01nite inputs, and consider reachability, safety, B\u00a8 uchi, and co-B\u00a8 uchi acceptance conditions. We observe that when it comes to reachability and B\u00a8 uchi, there are two possible de\ufb01nitions: an asynchronous one that just requires both an accepting state and the semilinear set to be reached (once or in\ufb01nitely often) by the run, but not necessarily at the same time, and a synchronous one that requires both to be reached (once or in\ufb01nitely often) simultaneously. Parikh automata on in\ufb01nite words accept, for example, the languages of in\ufb01nite words \u2022 with some pre\ufb01x having more a\u2019s than b\u2019s (reachability acceptance), \u2022 with all nonempty pre\ufb01xes having more a\u2019s than b\u2019s (safety acceptance), \u2022 with in\ufb01nitely many pre\ufb01xes having more a\u2019s than b\u2019s (B\u00a8 uchi acceptance), and \u2022 with almost all pre\ufb01xes having more a\u2019s than b\u2019s (co-B\u00a8 uchi acceptance). We establish that, both for reachability and B\u00a8 uchi acceptance, both the synchronous and the asynchronous variant are linearly equivalent in the presence of nondeterminism, but not for deterministic automata. Hence, by considering all acceptance conditions and (non)determinism, we end up with twelve di\ufb00erent classes of automata. We show that almost all of these classes have pairwise incomparable expressiveness, which is in sharp contrast to the well-known hierarchies in the \u03c9-regular case. Furthermore, we establish an almost complete picture of the Boolean closure properties of these twelve classes of automata. Most notably, they lack closure under negation, even for nondeterministic B\u00a8 uchi Parikh automata. Again, this result should be contrasted with the \u03c9-regular case, where nondeterministic B\u00a8 uchi automata are closed under negation [21]. We then study the complexity of the most important veri\ufb01cation problems, e.g., nonemptiness, universality, model checking, and solving games. We show that nonemptiness is undecidable for deterministic safety 2 \fand co-B\u00a8 uchi Parikh automata. However, perhaps surprisingly, we also show that nonemptiness is decidable, in fact NP-complete, for reachability and B\u00a8 uchi Parikh automata, both for the synchronous and the asynchronous versions. Strikingly, for Parikh automata, the B\u00a8 uchi acceptance condition is algorithmically simpler than the safety one (recall that their expressiveness is pairwise incomparable). Next, we consider model checking, arguably the most successful application of automata theory in the \ufb01eld of automated veri\ufb01cation. Model checking asks whether a given \ufb01nite-state system satis\ufb01es a given speci\ufb01cation. Here, we consider quantitative speci\ufb01cations given by Parikh automata. Model checking is decidable for speci\ufb01cations given by deterministic Parikh automata with safety or co-B\u00a8 uchi acceptance. On the other hand, the problem is undecidable for all other classes of automata. 
The positive results imply that one can model-check an arbiter serving requests from two clients against speci\ufb01cations like \u201cthe accumulated waiting time between requests and responses of client 1 is always at most twice the accumulated waiting time for client 2 and vice versa\u201d and \u201cthe di\ufb00erence between the number of responses for client 1 and the number of responses for client 2 is from some point onward bounded by 100\u201d. Note that both properties are not \u03c9-regular. Finally, we consider solving games with winning conditions expressed by Parikh automata. Zero-sum two-player games are a key formalism used to model the interaction of programs with an uncontrollable environment. In particular, they are at the heart of solving synthesis problems in which, rather than verifying the correctness of an existing program, we are interested in generating a program that is correct by construction, from its speci\ufb01cations. In these games, the speci\ufb01cation corresponds to the winning condition: one player tries to build a word (i.e., behaviour) that is in the speci\ufb01cation, while the other tries to prevent this. As with model checking, using Parikh automata to capture the speci\ufb01cation would enable these well-understood game-based techniques to be extended to mildly quantitative speci\ufb01cations. However, we show that games with winning conditions speci\ufb01ed by Parikh automata are undecidable for all acceptance conditions we consider. All proofs omitted due to space restrictions can be found in the appendix. 2 De\ufb01nitions An alphabet is a \ufb01nite nonempty set \u03a3 of letters. As usual, \u03b5 denotes the empty word, \u03a3\u2217(\u03a3+, \u03a3\u03c9) denotes the set of \ufb01nite (\ufb01nite nonempty, in\ufb01nite) words over \u03a3. The length of a \ufb01nite word w is denoted by |w| and, for notational convenience, we de\ufb01ne |w| = \u221efor in\ufb01nite words w. The number of occurrences of the letter a in a \ufb01nite word w is denoted by |w|a. Let a, b \u2208\u03a3. A word w \u2208\u03a3\u2217is (a, b)-balanced if |w|a = |w|b, otherwise it is (a, b)-unbalanced. Note that the empty word is (a, b)-balanced. Semilinear Sets Let N denote the set of nonnegative integers. Let \u20d7 v = (v0, . . . , vd\u22121) \u2208Nd and \u20d7 v \u2032 = (v\u2032 0, . . . , v\u2032 d\u2032\u22121) \u2208Nd\u2032 be a pair of vectors. We de\ufb01ne their concatenation as \u20d7 v\u00b7\u20d7 v \u2032 = (v0, . . . , vd\u22121, v\u2032 0, . . . , v\u2032 d\u2032\u22121) \u2208 Nd+d\u2032. We lift the concatenation of vectors to sets D \u2286Nd and D\u2032 \u2286Nd\u2032via D\u00b7D\u2032 = {\u20d7 v \u00b7\u20d7 v \u2032 | \u20d7 v \u2208D and \u20d7 v \u2032 \u2208 D\u2032}. Let d \u2a7e1. A set C \u2286Nd is linear if there are vectors \u20d7 v0, . . . ,\u20d7 vk \u2208Nd such that C = \u001a \u20d7 v0 + Xk i=1 ci\u20d7 vi \f \f \f \f ci \u2208N for i = 1, . . . , k \u001b . Furthermore, a subset of Nd is semilinear if it is a \ufb01nite union of linear sets. Proposition 1 ([16]). If C, C\u2032 \u2286Nd are semilinear, then so are C \u222aC\u2032, C \u2229C\u2032, Nd \\ C, as well as Nd\u2032 \u00b7 C and C \u00b7 Nd\u2032 for every d\u2032 \u2a7e1. 
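As a small illustration of the definition of (semi)linear sets, the following Python snippet checks membership in a linear set by bounded enumeration of the coefficients c_1, ..., c_k, and in a semilinear set by trying each linear component. Representing a linear set as a base vector plus a list of period vectors follows the definition above; the brute-force search and the assumption that period vectors are nonzero are simplifications made here for the sketch, not an efficient decision procedure.

```python
from itertools import product

def in_linear_set(target, v0, periods):
    """Brute-force membership test for the linear set
    { v0 + c1*p1 + ... + ck*pk : ci in N }.
    Assumes every period vector is nonzero (zero periods can simply be dropped),
    so each coefficient ci is bounded by the coordinates of the target."""
    d = len(target)
    bounds = []
    for p in periods:
        # ci is at most min over coordinates j with p[j] > 0 of target[j] // p[j]
        bounds.append(min(target[j] // p[j] for j in range(d) if p[j] > 0))
    for coeffs in product(*(range(b + 1) for b in bounds)):
        vec = tuple(v0[j] + sum(c * p[j] for c, p in zip(coeffs, periods))
                    for j in range(d))
        if vec == target:
            return True
    return False

def in_semilinear_set(target, linear_sets):
    """A semilinear set is a finite union of linear sets, each given as (v0, periods)."""
    return any(in_linear_set(target, v0, ps) for v0, ps in linear_sets)

# The set C = {(n, n)} union {(n, 2n)} used in Example 1 below:
C = [((0, 0), [(1, 1)]), ((0, 0), [(1, 2)])]
assert in_semilinear_set((3, 3), C) and in_semilinear_set((3, 6), C)
assert not in_semilinear_set((3, 4), C)
```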
Finite Automata A (nondeterministic) \ufb01nite automaton (NFA) A = (Q, \u03a3, qI, \u2206, F) over \u03a3 consists of a \ufb01nite set Q of states containing the initial state qI, an alphabet \u03a3, a transition relation \u2206\u2286Q \u00d7 \u03a3 \u00d7 Q, 3 \fa, (1, 0) b, (0, 1) b, (0, 1) Figure 1: The automaton for Example 1. and a set F \u2286Q of accepting states. The NFA is deterministic (i.e., a DFA) if for every state q \u2208Q and every letter a \u2208\u03a3, there is at most one q\u2032 \u2208Q such that (q, a, q\u2032) is a transition of A. A run of A is a (possibly empty) sequence (q0, w0, q1)(q1, w1, q2) \u00b7 \u00b7 \u00b7 (qn\u22121, wn\u22121, qn) of transitions with q0 = qI. It processes the word w0w1 \u00b7 \u00b7 \u00b7 wn\u22121 \u2208\u03a3\u2217. The run is accepting if it is either empty and the initial state is accepting or if it is nonempty and qn is accepting. The language L(A) of A contains all \ufb01nite words w \u2208\u03a3\u2217such that A has an accepting run processing w. Parikh Automata Let \u03a3 be an alphabet, d \u2a7e1, and D a \ufb01nite subset of Nd. Furthermore, let w = (a0,\u20d7 v0) \u00b7 \u00b7 \u00b7 (an\u22121,\u20d7 vn\u22121) be a word over \u03a3 \u00d7 D. The \u03a3-projection of w is p\u03a3(w) = a0 \u00b7 \u00b7 \u00b7 an\u22121 \u2208\u03a3\u2217 and its extended Parikh image is \u03a6e(w) = Pn\u22121 j=0 \u20d7 vj \u2208Nd with the convention \u03a6e(\u03b5) = \u20d7 0, where \u20d7 0 is the d-dimensional zero vector. A Parikh automaton (PA) is a pair (A, C) such that A is an NFA over \u03a3 \u00d7 D for some input alphabet \u03a3 and some \ufb01nite D \u2286Nd for some d \u2a7e1, and C \u2286Nd is semilinear. The language of (A, C) consists of the \u03a3-projections of words w \u2208L(A) whose extended Parikh image is in C, i.e., L(A, C) = {p\u03a3(w) | w \u2208L(A) with \u03a6e(w) \u2208C}. The automaton (A, C) is deterministic, if for every state q of A and every a \u2208\u03a3, there is at most one pair (q\u2032,\u20d7 v) \u2208Q \u00d7 D such that (q, (a,\u20d7 v), q\u2032) is a transition of A. Note that this de\ufb01nition does not coincide with A being deterministic: As mentioned above, A accepts words over \u03a3 \u00d7 D while (A, C) accepts words over \u03a3. Therefore, determinism is de\ufb01ned with respect to \u03a3 only. Note that the above de\ufb01nition of L(A, C) coincides with the following alternative de\ufb01nition via accepting runs: A run \u03c1 of (A, C) is a run \u03c1 = (q0, (a0,\u20d7 v0), q1)(q1, (a1,\u20d7 v1), q2) \u00b7 \u00b7 \u00b7 (qn\u22121, (an\u22121,\u20d7 vn\u22121), qn) of A. We say that \u03c1 processes the word a0a1 \u00b7 \u00b7 \u00b7 an\u22121 \u2208\u03a3\u2217, i.e., the \u20d7 vj are ignored, and that \u03c1\u2019s extended Parikh image is Pn\u22121 j=0 \u20d7 vj. The run is accepting, if it is either empty and both the initial state of A is accepting and the zero vector (the extended Parikh image of the empty run) is in C, or if it is nonempty, qn is accepting, and \u03c1\u2019s extended Parikh image is in C. Finally, (A, C) accepts w \u2208\u03a3\u2217if it has an accepting run processing w. Example 1. Consider the deterministic PA (A, C) with A in Figure 1 and C = {(n, n) | n \u2208N} \u222a{(n, 2n) | n \u2208N}. It accepts the language {anbn | n \u2208N} \u222a{anb2n | n \u2208N}. 
A cycle is a nonempty \ufb01nite run in\ufb01x (q0, w0, q1)(q1, w1, q2) \u00b7 \u00b7 \u00b7 (qn\u22121, wn\u22121, qn)(qn, wn, q0) starting and ending in the same state and such that the qj are pairwise di\ufb00erent. Note that every run in\ufb01x containing at least n transitions contains a cycle, where n is the number of states of the automaton. Many of our proofs rely on the following shifting argument, which has been used before to establish inexpressibility results for Parikh automata [3]. Remark 1. Let \u03c10\u03c11\u03c12\u03c13 be a run of a PA such that \u03c11 and \u03c13 are cycles starting in the same state. Then, \u03a6e(\u03c10\u03c11\u03c12\u03c13) = \u03a6e(\u03c10\u03c12\u03c11\u03c13) = \u03a6e(\u03c10\u03c11\u03c13\u03c12). Furthermore, all three runs end in the same state and visit the same set of states (but maybe in di\ufb00erent orders). 4 \f3 Parikh Automata over In\ufb01nite Words In this section, we introduce Parikh automata over in\ufb01nite words by lifting safety, reachability, B\u00a8 uchi, and co-B\u00a8 uchi acceptance from \ufb01nite automata to Parikh automata. Recall that a Parikh automaton on \ufb01nite words accepts if the last state of the run is accepting and the extended Parikh image of the run is in the semilinear set, i.e., both events are synchronized. For reachability and B\u00a8 uchi acceptance it is natural to consider both a synchronous and an asynchronous variant while for safety and co-B\u00a8 uchi there is only a synchronous variant. All these automata have the same format as Parikh automata on \ufb01nite words, but are now processing in\ufb01nite words. Formally, consider (A, C) with A = (Q, \u03a3\u00d7D, qI, \u2206, F). Fix an in\ufb01nite run (q0, w0, q1)(q1, w1, q2)(q2, w2, q3) \u00b7 \u00b7 \u00b7 of A with q0 = qI (recall that each wj is in \u03a3 \u00d7 D), which we say processes p\u03a3(w0w1w2 \u00b7 \u00b7 \u00b7 ). \u2022 The run is safety accepting if \u03a6e(w0 \u00b7 \u00b7 \u00b7 wn\u22121) \u2208C and qn \u2208F for all n \u2a7e0. \u2022 The run is synchronous reachability accepting if \u03a6e(w0 \u00b7 \u00b7 \u00b7 wn\u22121) \u2208C and qn \u2208F for some n \u2a7e0. \u2022 The run is asynchronous reachability accepting if \u03a6e(w0 \u00b7 \u00b7 \u00b7 wn\u22121) \u2208C for some n \u2a7e0 and qn\u2032 \u2208F for some n\u2032 \u2a7e0. \u2022 The run is synchronous B\u00a8 uchi accepting if \u03a6e(w0 \u00b7 \u00b7 \u00b7 wn\u22121) \u2208C and qn \u2208F for in\ufb01nitely many n \u2a7e0. \u2022 The run is asynchronous B\u00a8 uchi accepting if \u03a6e(w0 \u00b7 \u00b7 \u00b7 wn\u22121) \u2208C for in\ufb01nitely many n \u2a7e0 and qn\u2032 \u2208F for in\ufb01nitely many n\u2032 \u2a7e0. \u2022 The run is co-B\u00a8 uchi accepting if there is an n0 such that \u03a6e(w0 \u00b7 \u00b7 \u00b7 wn\u22121) \u2208C and qn \u2208F for every n \u2a7en0. As mentioned before, we do not distinguish between synchronous and asynchronous co-B\u00a8 uchi acceptance, as these de\ufb01nitions are equivalent. Also, note that all our de\ufb01nitions are conjunctive in the sense that acceptance requires visits to accepting states and extended Parikh images in C. Thus, e.g., reachability and safety are not dual on a syntactic level. Nevertheless, we later prove dualities on a semantic level. 
Similarly, one can easily show that a disjunctive de\ufb01nition is equivalent to our conjunctive one: One can re\ufb02ect in the extended Parikh image of a run pre\ufb01x whether it ends in an accepting stateand then encode acceptance in the semilinear set. So, any given Parikh automaton (A, C) (with disjunctive or conjunctive acceptance) can be turned into another one (A\u2032, C\u2032) capturing acceptance in (A, C) by Parikh images only. So, with empty (full) set of accepting states and C\u2032 mimicking disjunction (conjunction), it is equivalent to the original automaton with disjunctive (conjunctive) acceptance. Now, the language LS(A, C) of a safety Parikh automaton (SPA) (A, C) contains those words w \u2208\u03a3\u03c9 such that (A, C) has a safety accepting run processing w. Similarly, we de\ufb01ne the languages \u2022 Ls R(A, C) of synchronous reachability Parikh automata (sRPA), \u2022 La R(A, C) of asynchronous reachability Parikh automata (aRPA), \u2022 Ls B(A, C) of synchronous B\u00a8 uchi Parikh automata (aBPA), \u2022 La B(A, C) of asynchronous B\u00a8 uchi Parikh automata (sBPA), and \u2022 LC(A, C) of co-B\u00a8 uchi Parikh automata (CPA). Determinism for all types of automata is de\ufb01ned as for Parikh automata on \ufb01nite words. Unless explicitly stated otherwise, every automaton is assumed to be nondeterministic. Example 2. Let A be the DFA shown in Figure 2 and let C = {(n, n) | n \u2208N} and C = {(n, n\u2032) | n \u0338= n\u2032} = N2 \\ C. Recall that a \ufb01nite word w is (a, b)-balanced if |w|a = |w|b, i.e., the number of a\u2019s and b\u2019s in w is equal. The empty word is (a, b)-balanced and every odd-length word over {a, b} is (a, b)-unbalanced. 5 \fa, (1, 0) b, (0, 1) Figure 2: The automaton for Example 2. 1. When interpreting (A, C) as a PA, it accepts the language of \ufb01nite (a, b)-balanced words; when interpreting (A, C) as a PA, it accepts the language of \ufb01nite (a, b)-unbalanced words. 2. When interpreting (A, C) as an aRPA or sRPA, it accepts the language of in\ufb01nite words that have an (a, b)-balanced pre\ufb01x; when interpreting (A, C) as an aRPA or sRPA, it accepts the language of in\ufb01nite words that have an (a, b)-unbalanced pre\ufb01x. Note that both languages are universal, as the empty pre\ufb01x is always (a, b)-balanced and every odd-length pre\ufb01x is (a, b)-unbalanced. 3. When interpreting (A, C) as an SPA, it accepts the language of in\ufb01nite words that have only (a, b)balanced pre\ufb01xes; when interpreting (A, C) as an SPA, it accepts the language of in\ufb01nite words that have only (a, b)-unbalanced pre\ufb01xes. Here, both languages are empty, which follows from the same arguments as for universality in the previous case. 4. When interpreting (A, C) as an aBPA or sBPA, it accepts the language of in\ufb01nite words with in\ufb01nitely many (a, b)-balanced pre\ufb01xes; when interpreting (A, C) as an aBPA or sBPA, it accepts the language of in\ufb01nite words with in\ufb01nitely many (a, b)-unbalanced pre\ufb01xes. The latter language is universal, as every odd-length pre\ufb01x is unbalanced. 5. When interpreting (A, C) as a CPA, it accepts the language of in\ufb01nite words such that almost all pre\ufb01xes are (a, b)-balanced; when interpreting (A, C) as a CPA, it accepts the language of in\ufb01nite words such that almost all pre\ufb01xes are (a, b)-unbalanced. Again, the former language is empty. Let (A, C) be a Parikh automaton. 
We say that a run pre\ufb01x is an F-pre\ufb01x if it ends in an accepting state of A, a C-pre\ufb01x if its extended Parikh image is in C, and an FC-pre\ufb01x if it is both an F-pre\ufb01x and a C-pre\ufb01x. Note that both asynchronous acceptance conditions are de\ufb01ned in terms of the existence of F-pre\ufb01xes and C-pre\ufb01xes and all other acceptance conditions in terms of the existence of FC-pre\ufb01xes. Remark 2. sRPA and aRPA (SPA, sBPA and aBPA, CPA) are strictly more expressive than \u03c9-regular reachability (safety, B\u00a8 uchi, co-B\u00a8 uchi) automata. Inclusion follows by de\ufb01nition while strictness is witnessed by the languages presented in Example 2. 4 Expressiveness In this section, we study the expressiveness of the various types of Parikh automata on in\ufb01nite words introduced above, by comparing synchronous and asynchronous variants, deterministic and nondeterministic variants, and the di\ufb00erent acceptance conditions. Remark 3. In this, and only this, section, we consider only reachability Parikh automata that are complete in the following sense: For every state q and every letter a there is a vector \u20d7 v and a state q\u2032 such that (q, (a,\u20d7 v), q\u2032) is a transition of A, i.e., every letter can be processed from every state. Without this requirement, one can express safety conditions by incompleteness, while we want to study the expressiveness of \u201cpure\u201d reachability automata. Safety, B\u00a8 uchi, and co-B\u00a8 uchi automata can be assumed, without loss of generality, to be complete, as one can always add a nonaccepting sink to complete such an automaton without modifying the accepted language. We begin our study by comparing the synchronous and asynchronous variants of reachability and B\u00a8 uchi automata. All transformations proving the following inclusions are e\ufb00ective and lead to a linear increase in the number of states and a constant increase in the dimension of the semilinear sets. 6 \fTheorem 1. 1. aRPA and sRPA are equally expressive. 2. Deterministic aRPA are strictly more expressive than deterministic sRPA. 3. aBPA and sBPA are equally expressive. 4. Deterministic aBPA are strictly more expressive than deterministic sBPA. Due to the equivalence of synchronous and asynchronous (nondeterministic) reachability Parikh automata, we drop the quali\ufb01ers whenever possible and just speak of reachability Parikh automata (RPA). We do the same for (nondeterministic) B\u00a8 uchi Parikh automata (BPA). Next, we compare the deterministic and nondeterministic variants for each acceptance condition. Note that all separations are as strong as possible, i.e., for reachability and B\u00a8 uchi we consider deterministic asynchronous automata, which are more expressive than their synchronous counterparts (see Theorem 1). Theorem 2. 1. Nondeterministic RPA are strictly more expressive than deterministic aRPA. 2. Nondeterministic SPA are strictly more expressive than deterministic SPA. 3. Nondeterministic BPA are strictly more expressive than deterministic aBPA. 4. Nondeterministic CPA are strictly more expressive than deterministic CPA. After having separated deterministic and nondeterministic automata for all acceptance conditions, we now consider inclusions and separations between the di\ufb00erent acceptance conditions. Here, the picture is notably di\ufb00erent than in the classical \u03c9-regular setting, as almost all classes can be separated. Theorem 3. 
Every RPA can be turned into an equivalent BPA and into an equivalent CPA. All other automata types are pairwise incomparable. Our separations between the di\ufb00erent acceptance conditions are as strong as possible, e.g., when we show that not every RPA has an equivalent SPA, we exhibit a deterministic sRPA (the weakest class of RPA) whose language is not accepted by any nondeterministic SPA (the strongest class of SPA). The same is true for all other separations. 5 Closure Properties In this section, we study the closure properties of Parikh automata on in\ufb01nite words. We begin by showing that, for deterministic synchronous automata, reachability and safety acceptance as well as B\u00a8 uchi and coB\u00a8 uchi acceptance are dual, although they are not syntactically dual due to all acceptance conditions being de\ufb01ned by a conjunction. On the other hand, deterministic asynchronous automata can still be complemented, however only into nondeterministic automata. Theorem 4. 1. Let (A, C) be a deterministic sRPA. The complement of Ls R(A, C) is accepted by a deterministic SPA. 2. Let (A, C) be a deterministic aRPA. The complement of La R(A, C) is accepted by an SPA, but not necessarily by a deterministic SPA. 3. Let (A, C) be a deterministic SPA. The complement of LS(A, C) is accepted by a deterministic sRPA. 4. Let (A, C) be a deterministic sBPA. The complement of Ls B(A, C) is accepted by a deterministic CPA. 7 \fClosure Decision Problems \u222a \u2229 Nonemptiness Universality Model Check. Games RPA \u2713 \u2713 \u2717 NP-compl. undec. undec. undec. det. aRPA \u2717 \u2717 \u2717 NP-compl. undec. undec. undec. det. sRPA \u2713 \u2717 \u2717 NP-compl. undec. undec. undec. SPA \u2713 \u2713 \u2717 undec. undec. undec. undec. det. SPA \u2717 \u2713 \u2717 undec. coNP-compl. coNP-compl. undec. BPA \u2713 \u2717 \u2717 NP-compl. undec. undec. undec. det. aBPA ? \u2717 \u2717 NP-compl. undec. undec. undec. det. sBPA \u2713 \u2717 \u2717 NP-compl. undec. undec. undec. CPA \u2713 \u2713 \u2717 undec. undec. undec. undec. det. CPA \u2717 \u2713 \u2717 undec. coNP-compl. coNP-compl. undec. Table 1: Closure properties and decidability of decision problems for Parikh automata on in\ufb01nite words. 5. Let (A, C) be a deterministic aBPA. The complement of La B(A, C) is accepted by a CPA, but not necessarily by a deterministic CPA. 6. Let (A, C) be a deterministic CPA. The complement of LC(A, C) is accepted by a deterministic sBPA. The positive results above are for deterministic automata. For nondeterministic automata, the analogous statements fail. Theorem 5. 1. There exists an sRPA (A, C) such that no SPA accepts the complement of Ls R(A, C). 2. There exists an SPA (A, C) such that no RPA accepts the complement of LS(A, C). 3. There exists an sBPA (A, C) such that no CPA accepts the complement of Ls B(A, C). 4. There exists a CPA (A, C) such that no BPA accepts the complement of LC(A, C). Next, we consider closure under union, intersection, and complementation of the various classes of Parikh automata on in\ufb01nite words. Notably, all nondeterministic (and some deterministic) classes are closed under union, the picture for intersection is more scattered, and we prove failure of complement closure for all classes. Again, this is in sharp contrast to the setting of classical B\u00a8 uchi automata, which are closed under all three Boolean operations. Theorem 6. The closure properties depicted in Table 1 hold. 
Note that there is one question mark in the closure properties columns in Table 1, which we leave for further research. 6 Decision Problems In this section, we study the complexity of the nonemptiness and the universality problem, model checking, and solving games for Parikh automata on in\ufb01nite words. Before we can do so, we need to specify how a Parikh automaton (A, C) is represented as input for algorithms: The vectors labeling the transitions of A are represented in binary and a linear set \u001a \u20d7 v0 + Xk i=1 ci\u20d7 vi \f \f \f \f ci \u2208N for i = 1, . . . , k \u001b 8 \fis represented by the list (\u20d7 v0, . . . ,\u20d7 vk) of vectors, again encoded in binary. A semilinear set is then represented by a set of such lists. 6.1 Nonemptiness We begin by settling the complexity of the nonemptiness problem. The positive results are obtained by reductions to the nonemptiness of Parikh automata on \ufb01nite words while the undecidability results are reductions from the termination problem for two-counter machines. Theorem 7. The following problems are NP-complete: 1. Given an RPA, is its language nonempty? 2. Given a BPA, is its language nonempty? The following problems are undecidable: 3. Given a deterministic SPA, is its language nonempty? 4. Given a deterministic CPA, is its language nonempty? Proof. 1.) Due to Theorem 1.1, we only consider the case of sRPA for the NP upper bound. Given such an automaton (A, C) with A = (Q, \u03a3 \u00d7 D, qI, \u2206, F) let F \u2032 \u2286F be the set of accepting states from which a cycle is reachable. Now, de\ufb01ne A\u2032 = (Q, \u03a3 \u00d7 D, qI, \u2206, F \u2032). Then, we have Ls R(A, C) \u0338= \u2205if and only if L(A\u2032, C) \u0338= \u2205(i.e., we treat (A\u2032, C) as a PA), with the latter problem being in NP [12]. The matching NP lower bound is, again due to Theorem 1.1, only shown for sRPA. We proceed by a reduction from the NP-complete [12] nonemptiness problem for Parikh automata. Given a Parikh automaton (A, C), let A\u2032 be obtained from A by adding a fresh state q with a self-loop labeled by (#,\u20d7 0) as well as transitions labeled by (#,\u20d7 0) leading from the accepting states of A to q. Here, # is a fresh letter and \u20d7 0 is the zero vector of the correct dimension. By declaring q to be the only accepting state in A\u2032, we have that Ls R(A\u2032, C) is nonempty if and only if L(A, C) is nonempty. Note that hardness holds already for deterministic automata, as one can always rename letters to make a nondeterministic PA deterministic without changing the answer to the nonemptiness problem. 2.) Due to Theorem 1.3, it is enough to consider synchronous B\u00a8 uchi acceptance for the upper bound. So, \ufb01x some sBPA (A, C) with A = (Q, \u03a3 \u00d7 D, qI, \u2206, F). Let C = S i Li where the Li are linear sets. The language Ls B(A, C) is nonempty if and only if Ls B(A, Li) is nonempty for some i. Hence, we show how to solve nonemptiness for automata with linear C, say C = n \u20d7 v0 + Pk i=1 ci\u20d7 vi \f \f \f ci \u2208N for i = 1, . . . , k o . We de\ufb01ne P = nPk i=1 ci\u20d7 vi \f \f \f ci \u2208N for i = 1, . . . , k o and, for a given state q \u2208Q, the NFA \u2022 Aq obtained from A by replacing the set of accepting states by {q}, and \u2022 Aq,q obtained from A by replacing the initial state by q, by replacing the set of accepting states by {q}, and by modifying the resulting NFA such that it does not accept the empty word (but leaving its language unchanged otherwise). 
We claim that Ls B(A, C) is nonempty if and only if there is a q \u2208F such that both L(Aq, C) and L(Aq,q, P) are nonempty. As nonemptiness of Parikh automata is in NP, this yields the desired upper bound. So, assume there is such a q. Then, there is a \ufb01nite run \u03c11 of A that starts in qI, ends in q, and processes some w1 \u2208\u03a3\u2217with extended Parikh image in C. Also, there is a \ufb01nite run \u03c12 of A that starts and ends in q and processes some nonempty w2 \u2208\u03a3\u2217with extended Parikh image in P. For every n \u2a7e1, \u03c11(\u03c12)n is a \ufb01nite run of (A, C) ending in the accepting state q that processes w1(w2)n and whose extended Parikh image is in C. So, \u03c11(\u03c12)\u03c9 is a synchronous B\u00a8 uchi accepting run of (A, C). For the converse direction, assume that there is some synchronous B\u00a8 uchi accepting run (q0, w0, q1)(q1, w1, q2)(q2, w2, q3) \u00b7 \u00b7 \u00b7 of (A, C). Then, there is also an accepting state q \u2208F and an in\ufb01nite set of positions S \u2286N such that 9 \fqs = q and \u03a6e(w0 \u00b7 \u00b7 \u00b7 ws\u22121) \u2208C for all s \u2208S. Hence, for every s \u2208S there is a vector (cs 1, . . . , cs k) \u2208Nk such that \u03a6e(w0 \u00b7 \u00b7 \u00b7 ws\u22121) = \u20d7 v0 + Pk i=1 cs i\u20d7 vi. By Dickson\u2019s Lemma [10], there are s1 < s2 such that cs1 j \u2264cs2 j for every 1 \u2264j \u2264k. Then, \u03a6e(ws1 \u00b7 \u00b7 \u00b7 ws2\u22121) = Pk i=1(cs2 i \u2212cs1 i )\u20d7 vi, which implies \u03a6e(ws1 \u00b7 \u00b7 \u00b7 ws2\u22121) \u2208P. Thus, the pre\ufb01x of \u03c1 of length s1 is an accepting run of the PA (Aq, C) and the next (s2 \u2212s1) transitions of \u03c1 form a nonempty accepting run of the PA (Aq,q, P). The NP lower bound follows from the proof of Theorem 7.1 by noticing that we also have that Ls B(A\u2032, C) is nonempty if and only if L(A, C) is nonempty. 3.) and 4.) The two undecidability proofs, based on reductions from undecidable problems for twocounter machines, are relegated to the appendix, which introduces all the required technical details on such machines. We conclude Subsection 6.1 by displaying an interesting consequence of the decomposition used in the proof of Theorem 7.2: we show that the language of every BPA (on in\ufb01nite words) can be expressed by combining the languages of well-chosen PA (on \ufb01nite words). This is similar to what happens in other settings: For instance, \u03c9-regular languages are exactly the languages of the form Sn j=1 Lj \u00b7(L\u2032 j)\u03c9, where each Lj, L\u2032 j is regular. Analogously, \u03c9-context-free languages can be characterized by context-free languages [7]. For B\u00a8 uchi Parikh automata, one direction of the above characterization holds: Lemma 1. If a language L is accepted by a BPA then L = Sn j=1 Lj \u00b7 (L\u2032 j)\u03c9, where each Lj, L\u2032 j is accepted by some PA. Proof. Let L be accepted by an sBPA (A, C) with C = S j\u2208J Cj for some \ufb01nite set J, where each Cj is a linear set. In the proof of Theorem 7.2, we have de\ufb01ned the NFA Aq and Aq,q for every state q of A. Now, consider some w \u2208L and an accepting run \u03c1 = (q0, w0, q1)(q1, w1, q2)(q2, w2, q3) \u00b7 \u00b7 \u00b7 of (A, C) processing w. Then, there is an accepting state q of A, some Cj, and an in\ufb01nite set S \u2286N of positions such that qs = q and \u03a6e(w0 \u00b7 \u00b7 \u00b7 ws\u22121) \u2208Cj for all s \u2208S. Let Cj = n \u20d7 v0 + Pk i=1 ci\u20d7 vi \f \f \f ci \u2208N for i = 1, . . . 
, k o and let Pj = nPk i=1 ci\u20d7 vi \f \f \f ci \u2208N for i = 1, . . . , k o . As before, for every s \u2208S, there is a vector (cs 1, . . . , cs k) \u2208Nk such that \u03a6e(w0 \u00b7 \u00b7 \u00b7 ws\u22121) = \u20d7 v0 + Pk i=1 cs i\u20d7 vi. Now, we apply an equivalent formulation of Dickson\u2019s Lemma [10], which yields an in\ufb01nite subset S\u2032 \u2286S with cs j \u2264cs\u2032 j for all 1 \u2264j \u2264k and all s, s\u2032 \u2208S\u2032 with s < s\u2032, i.e., we have an increasing chain in S. Let s0 < s1 < s2 < \u00b7 \u00b7 \u00b7 be an enumeration of S\u2032. As above, \u03a6e(wsn \u00b7 \u00b7 \u00b7 wsn+1\u22121) \u2208Pj for all n. So, (q0, w0, q1) \u00b7 \u00b7 \u00b7 (qs0\u22121, ws0\u22121, qs0) is an accepting run of the PA (Aq, C) and each (qsn, wsn, qsn+1) \u00b7 \u00b7 \u00b7 (qsn+1\u22121, wsn+1\u22121, qsn+1) is an accepting run of the PA (Aq,q, P). So, w \u2208S j\u2208J S q\u2208Q L(Aq, Cj) \u00b7 (L(Aq,q, Pj))\u03c9. Recall that a word is ultimately periodic if it is of the form xy\u03c9. Every nonempty \u03c9-regular and every nonempty \u03c9-context-free language contains an ultimately periodic word, which is a simple consequence of them being of the form Sn j=1 Lj \u00b7 (L\u2032 j)\u03c9. Corollary 1. Every nonempty language accepted by a BPA contains an ultimately periodic word. Let us brie\ufb02y comment on the other direction of the implication stated in Lemma 1, i.e., is every language of the form Sn j=1 Lj \u00b7 (L\u2032 j)\u03c9, where each Lj, L\u2032 j is accepted by some PA, also accepted by some BPA? The answer is no: Consider L = {anbn | n > 1}, which is accepted by a deterministic PA. However, using the shifting technique (see Remark 1), one can show that L\u03c9 is not accepted by any BPA: Every accepting run of an n-state BPA processing (anbn)\u03c9 can be turned into an accepting run on a word of the form (anbn)\u2217an+kbn(anbn)\u2217an\u2212kbn(anbn)\u03c9 for some k > 0 by shifting some cycle to the front while preserving B\u00a8 uchi acceptance. For reachability acceptance, a similar characterization holds, as every RPA can be turned into an equivalent BPA. But for safety and co-B\u00a8 uchi acceptance the characterization question is nontrivial, as for these acceptance conditions all (almost all) run pre\ufb01xes have to be FC-pre\ufb01xes. We leave this problem for future work. 10 \f6.2 Universality Now, we consider the universality problem. Here, the positive results follow from the duality of deterministic SPA and RPA (CPA and BPA) and the decidability of nonemptiness for the dual automata classes. Similarly, the undecidability proofs for deterministic sRPA and sBPA follow from duality and undecidability of nonemptiness for the dual automata classes. Finally, the remaining undecidability results follow from reductions from undecidable problems for two-counter machines and Parikh automata over \ufb01nite words. Theorem 8. The following problems are coNP-complete: 1. Given a deterministic SPA, is its language universal? 2. Given a deterministic CPA, is its language universal? The following problems are undecidable: 3. Given a deterministic sRPA, is its language universal? 4. Given an SPA, is its language universal? 5. Given a deterministic sBPA, is its language universal? 6. Given a CPA, is its language universal? Proof. The proofs of the results for deterministic automata follow immediately from the fact that a language is universal if and only if its complement is empty, Theorem 4, and Theorem 7. 
The proof of undecidability for SPA, based on a reduction from the termination problem for two-counter machines, is relegated to the appendix where all the necessary technical details are presented. To conclude, let us consider universality of CPA. 6.) Universality of Parikh automata over \ufb01nite words is undecidable [18]. Now, given a PA (A, C) over \u03a3, one can construct a CPA (A\u2032, C\u2032) for the language L(A, C) \u00b7 # \u00b7 (\u03a3 \u222a{#})\u03c9 \u222a\u03a3\u03c9, where # / \u2208\u03a3 is a fresh letter. This construction relies on freezing the counters (i.e., moving to a copy of the automaton with the same transition structure, but where the counters are no longer updated) and closure of CPA under union. Now, L(A, C) is universal if and only if LC(A\u2032, C\u2032) is universal. 6.3 Model Checking Model checking is arguably the most successful application of automata theory to automated veri\ufb01cation. The problem asks whether a given system satis\ufb01es a speci\ufb01cation, often given by an automaton. More formally, and for the sake of notational convenience, we say that a transition system T is a (possibly incomplete) SPA (A, C) so that every state of A is accepting and C = Nd, i.e., every run is accepting. Now, the model-checking problem for a class L of languages of in\ufb01nite words asks, given a transition system T and a language L \u2208L, whether LS(T ) \u2286L, i.e., whether every word in the transition system satis\ufb01es the speci\ufb01cation L. Note that our de\ufb01nition here is equivalent to the standard de\ufb01nition of model checking of \ufb01nite-state transition systems. Here, we study the model-checking problem for di\ufb00erent types of Parikh automata. Theorem 9. The following problems are coNP-complete: 1. Given a transition system T and a deterministic SPA (A, C), is LS(T ) \u2286LS(A, C)? 2. Given a transition system T and a deterministic CPA (A, C), is LS(T ) \u2286LC(A, C)? The following problems are undecidable: 3. Given a transition system T and a deterministic sRPA (A, C), is LS(T ) \u2286Ls R(A, C)? 4. Given a transition system T and an SPA (A, C), is LS(T ) \u2286LS(A, C)? 11 \f5. Given a transition system T and a deterministic sBPA (A, C), is LS(T ) \u2286Ls B(A, C)? 6. Given a transition system T and a CPA (A, C), is LS(T ) \u2286LC(A, C)? Proof. Let T be a transition system with LS(T ) = \u03a3\u03c9, e.g., a one-state transition system with a self-loop labeled with all letters in \u03a3. Then, L \u2286\u03a3\u03c9 is universal if and only if L LS(T ) \u2286L. Thus, all six lower bounds (coNP-hardness and undecidability) immediately follow from the analogous lower bounds for universality (see Theorem 8). So, it remains to consider the two coNP upper bounds. So, \ufb01x a deterministic SPA (A, C) and a transition system T . We apply the usual approach to automatatheoretic model checking: We have LS(T ) \u2286LS(A, C) if and only if LS(T ) \u2229LS(A, C) = \u2205. Due to Theorem 4.3 there is a deterministic sRPA (A\u2032, C\u2032) accepting LS(A, C). Furthermore, using a product construction, one can construct an RPA (A\u2032\u2032, C\u2032\u2032) accepting LS(T ) \u2229LS(A, C), which can then be tested for emptiness, which is in coNP (see Theorem 7). Note that the product construction depends on the fact that every run of T is accepting, i.e., the acceptance condition of the product automaton only has to check one acceptance condition. 
The proof for deterministic co-B\u00a8 uchi Parikh automata is analogous, but using B\u00a8 uchi automata (A\u2032, C\u2032) and (A\u2032\u2032, C\u2032\u2032). 6.4 In\ufb01nite Games In this section, we study in\ufb01nite games with winning conditions speci\ufb01ed by Parikh automata. Such games are the technical core of the synthesis problem, the problem of determining whether there is a reactive system satisfying a given speci\ufb01cation on its input-output behavior. Our main result is that solving in\ufb01nite games is undecidable for all acceptance conditions we consider here. Here, we consider Gale-Stewart games [15], abstract games induced by a language L of in\ufb01nite words, in which two players alternately pick letters, thereby constructing an in\ufb01nite word w. One player aims to ensure that w is in L while the other aims to ensure that it is not in L. Formally, given a language L \u2286(\u03a31 \u00d7 \u03a32)\u03c9, the game G(L) is played between Player 1 and Player 2 in rounds i = 0, 1, 2, . . . as follows: At each round i, \ufb01rst Player 1 plays a letter ai \u2208\u03a31 and then Player 2 answers with a letter bi \u2208\u03a32. A play of G(L) is an in\ufb01nite outcome w = \u0000a0 b0 \u0001\u0000a1 b1 \u0001 \u00b7 \u00b7 \u00b7 and Player 2 wins it if and only if w \u2208L. A strategy for Player 2 in G(L) is a mapping from \u03a3+ 1 to \u03a32 that gives for each pre\ufb01x played by Player 1 the next letter to play. An outcome \u0000a0 b0 \u0001\u0000a1 b1 \u0001 \u00b7 \u00b7 \u00b7 agrees with a strategy \u03c3 if for each i, we have that bi = \u03c3(a0a1 . . . ai). Player 2 wins G(L) if she has a strategy that only agrees with outcomes that are winning for Player 2. The next result follows immediately from the fact that for all classes of deterministic Parikh automata, either nonemptiness or universality is undecidable, and that these two problems can be reduced to solving Gale-Stewart games. Theorem 10. The problem \u201cGiven an automaton (A, C), does Player 2 win G(L(A, C))?\u201d is undecidable for the following classes of automata: (deterministic) sRPA, (deterministic) SPA, (deterministic) sBPA, and (deterministic) CPA. Proof. The results follow immediately from the following two facts and the undecidability of nonemptiness or universality for the corresponding automata types. Fix a language L. \u2022 Player 2 wins G( \u0000# L \u0001 ) with \u0000# L \u0001 = { \u0000 # w0 \u0001\u0000 # w1 \u0001\u0000 # w2 \u0001 \u00b7 \u00b7 \u00b7 | w0w1w2 \u00b7 \u00b7 \u00b7 \u2208L} if and only if L is nonempty. \u2022 Player 2 wins G( \u0000L # \u0001 ) with \u0000L # \u0001 = { \u0000w0 # \u0001\u0000w1 # \u0001\u0000w2 # \u0001 \u00b7 \u00b7 \u00b7 | w0w1w2 \u00b7 \u00b7 \u00b7 \u2208L} if and only if L is universal. To conclude, note that a Parikh automaton for L can be turned into an equivalent one for \u0000# L \u0001 and \u0000L # \u0001 while preserving determinism and the acceptance type, by just replacing each transition label a by \u0000# a \u0001 and \u0000 a # \u0001 , respectively. 12 \f7" + }, + { + "url": "http://arxiv.org/abs/1607.08480v1", + "title": "Mean-Payoff Games on Timed Automata", + "abstract": "Mean-payoff games on timed automata are played on the infinite weighted graph\nof configurations of priced timed automata between two players, Player Min and\nPlayer Max, by moving a token along the states of the graph to form an infinite\nrun. 
The goal of Player Min is to minimize the limit average weight of the run,\nwhile the goal of the Player Max is the opposite. Brenguier, Cassez, and Raskin\nrecently studied a variation of these games and showed that mean-payoff games\nare undecidable for timed automata with five or more clocks. We refine this\nresult by proving the undecidability of mean-payoff games with three clocks. On\na positive side, we show the decidability of mean-payoff games on one-clock\ntimed automata with binary price-rates. A key contribution of this paper is the\napplication of dynamic programming based proof techniques applied in the\ncontext of average reward optimization on an uncountable state and action\nspace.", + "authors": "Shibashis Guha, Marcin Jurdzinski, Krishna S., Ashutosh Trivedi", + "published": "2016-07-28", + "updated": "2016-07-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.FL", + "cs.LO" + ], + "main_content": "Introduction The classical mean-payo\ufb00games [24, 13, 15, 4] are two-player zero-sum games that are played on weighted \ufb01nite graphs, where two players\u2014Max and Min\u2014take turn to move a token along the edges of the graph to jointly construct an in\ufb01nite play. The objectives of the players Max and Min are to respectively maximize and minimize the limit average reward associated with the play. Mean-payo\ufb00games are well-studied in the context of optimal controller synthesis in the framework of Ramadge-Wonham [22], where the goal of the game is to \ufb01nd a control strategy that maximises the average reward earned during the evolution of the system. Mean-payo\ufb00games enjoy a special status in veri\ufb01cation, since \u00b5-calculus model checking and parity games can be reduced in polynomial-time to solving mean-payo\ufb00 games. Mean-payo\ufb00objectives can also be considered as quantitative extensions [16] of classical B\u00fcchi objectives, where we are interested in the limit-average share of occurrences of accepting states rather than merely in whether or not in\ufb01nitely many accepting states occur. For a broader discussion on quantitative veri\ufb01cation, in general, and the transition from the classical qualitative to the modern quantitative interpretation of deterministic B\u00fcchi automata, we refer the reader to Henzinger\u2019s excellent survey [16]. We study mean-payo\ufb00games played on an in\ufb01nite con\ufb01guration graph of timed automata. Asarin and Maler [3] were the \ufb01rst to study games on timed automata and they gave an algorithm to solve timed games with reachability time objective. Their work was later generalized and improved upon by Alur et al. [1] and Bouyer et al. [8]. Bouyer et al. [7, 5] also studied the more di\ufb03cult average payo\ufb00s, but only in the context of scheduling, which in game-theoretic terminology corresponds to 1-player games. However, they left the problem of proving decidability of 2-player average reward games on priced timed automata open. Jurdzi\u0144ski and Trivedi [19] proved the decidability of the special case of average time games \u00a9 Guha, Jurdzi\u0144ski, Krishna, and Trivedi; licensed under Creative Commons License CC-BY Leibniz International Proceedings in Informatics Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik, Dagstuhl Publishing, Germany arXiv:1607.08480v1 [cs.GT] 28 Jul 2016 \fXX:2 Mean-Payo\ufb00Games on Timed Automata where all locations have unit costs. 
More recently, mean-payo\ufb00games on timed automata have been studied by Brenguier, Cassez and Raskin [10] where they consider average payo\ufb00 per time-unit. Using the undecidability of energy games [9], they showed undecidability of mean-payo\ufb00games on weighted timed games with \ufb01ve or more clocks. They also gave a semi-algorithm to solve cycle-forming games on timed automata and characterized the conditions under which a solution of these games gives a solution for mean-payo\ufb00games. On the positive side, we characterize general conditions under which dynamic programming based techniques can be used to solve the mean-payo\ufb00games on timed automata. As a proof-of-concept, we consider one-clock binary-priced timed games, and prove the decidability of mean-payo\ufb00games for this subclass. Our decidability result can be considered as the average-payo\ufb00analog of the decidability result by Brihaye et al. [11] for reachability-price games on timed automata. We strengthen the known undecidability results for mean-payo\ufb00 games on timed automata in three ways: i) we show that the mean-payo\ufb00games over priced timed games is undecidable for timed games with only three clocks; ii) secondly, we show that undecidability can be achieved with binary price-rates; and \ufb01nally, iii) our undecidability results are applicable for problems where the average payo\ufb00is considered per move as well as for problems when it is de\ufb01ned per time-unit. Howard [17, 21] introduced gain and bias optimality equations to characterize optimal average on one-player \ufb01nite game arenas. Gain and bias optimality equations based characterization has been extended to two-player game arenas [14] as well as many subclasses of uncountable state and action spaces [12, 6]. The work of Bouyer et al. [6] is perhaps the closest to our approach\u2014they extended optimality equations approach to solve games on hybrid automata with certain strong reset assumption that requires all continuous variables to be reset at each transition, which in the case of timed automata is akin to requiring all clocks to be reset at each transition. To the best of our knowledge, the exact decidability for timed games does not immediately follow from any previously known results. Howard\u2019s Optimality equations requires two variable per state: the gain of the state and the bias of the state. Informally speaking, the gain of a state corresponds to the optimal mean-payo\ufb00for games starting from that state, while the bias corresponds to the limit of transient sum of step-wise deviations from the optimal average. Hence, intuitively at a given point in a game, both players would prefer to \ufb01rst optimize the gain, and then choose to optimize bias among choices with equal gains. We give general conditions under which a solution of gain-bias equations for a \ufb01nitary abstraction of timed games can provide a solution of gain-bias equations for the original timed game. For this purpose, we exploit a region-graph like abstraction of timed automata [18] called the boundary region abstraction (BRA). Our key contribution is the theorem that states that every solution of gain-bias optimality equations for boundary region abstraction carries over to the original timed game, as long as for every region, the gain values are constant and the bias values are a\ufb03ne. The paper is organized in the following manner. 
In Section 2 we describe mean-payo\ufb00 games and introduce the notions of gain and bias optimality equations. This section also introduces mean-payo\ufb00games over timed automata and states the key results of the paper. Section 3 introduces the boundary region abstraction for timed automata and characterizes the conditions under which the solution of a game played over the boundary region abstraction can be lifted to a solution of mean payo\ufb00game over priced timed automata. In Section 4 we present the strategy improvement algorithm to solve optimality equations for mean-payo\ufb00games played over boundary region abstraction and connect them to solution of optimality equations over corresponding timed automata. Finally, Section 5 sketches the undecidability of mean-payo\ufb00games for binary-priced timed automata with three clocks. \fGuha, Jurdzi\u0144ski, Krishna, and Trivedi XX:3 2 Mean-Payo\ufb00Games on Timed Automata We begin this section by introducing mean-payo\ufb00games on graphs with uncountably in\ufb01nite vertices and edges, and show how, and under what conditions, gain-bias optimality equations characterize the value of mean-payo\ufb00games. We then set-up mean-payo\ufb00games for timed automata and state our key contributions. 2.1 Mean-Payo\ufb00Games \u25b6De\ufb01nition 1 (Turn-Based Game Arena). A game arena \u0393 is a tuple (S, SMin, SMax, A, T, \u03c0) where S is a (potentially uncountable) set of states partitioned between sets SMin and SMax of states controlled by Player Min and Player Max, respectively; A is a (potentially uncountable) set of actions; T : S \u00d7 A \u2192S is a partial function called transition function; and \u03c0 : S \u00d7 A \u2192R is a partial function called price function. We say that a game arena is \ufb01nite if both S and A are \ufb01nite. For any state s \u2208S, we let A(s) denote the set of actions available in s, i.e., the actions a \u2208A for which T(s, a) and \u03c0(s, a) are de\ufb01ned. A transition of a game arena is a tuple (s, a, s\u2032) \u2208S\u00d7A\u00d7S such that s\u2032 = T(s, a) and we write s a \u2212 \u2192s\u2032. A \ufb01nite play starting at a state s0 is a sequence of transitions \u27e8s0, a1, s1, a2, . . . , sn\u27e9\u2208S\u00d7(A\u00d7S)\u2217such that for all 0 \u2a7di < n we have that si ai+1 \u2212 \u2212 \u2212 \u2192si+1 is a transition. For a \ufb01nite play \u03c1 = \u27e8s0, a1, . . . , sn\u27e9we write Last(\u03c1) for the \ufb01nal state of \u03c1, here Last(\u03c1) = sn. The concept of an in\ufb01nite play \u27e8s0, a1, s1, . . .\u27e9is de\ufb01ned in an analogous way. We write Runs(s) and Runs\ufb01n(s) for the set of plays and the set of \ufb01nite plays starting at s \u2208S respectively. A strategy of Player Min is a function \u00b5 : Runs\ufb01n \u2192A such that \u00b5(\u03c1) \u2208A(Last(\u03c1)) for all \ufb01nite plays \u03c1 \u2208Runs\ufb01n, i.e. for any \ufb01nite play, a strategy of Min returns an action available to Min in the last state of the play. A strategy \u03c7 of Max is de\ufb01ned analogously and we let \u03a3Min and \u03a3Max denote the sets of strategies of Min and Max, respectively. A strategy \u03c3 is positional if Last(\u03c1)=Last(\u03c1\u2032) implies \u03c3(\u03c1)=\u03c3(\u03c1\u2032) for all \u03c1, \u03c1\u2032 \u2208Runs\ufb01n. This allows us to represent a positional strategy as a function in [S \u2192A]. Let \u03a0Min and \u03a0Max denote the set of positional strategies of Min and Max, respectively. 
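As a toy illustration of Definition 1 and of positional strategies, the following Python sketch encodes a two-state finite game arena together with a pair of positional strategies, and approximates the average price of the play they induce from s0 by averaging over a long finite prefix (the induced play Run(s, mu, chi) and the limit-average payoffs are defined formally just below). All states, actions and prices here are made up for illustration.

```python
# Two states: s0 controlled by Min, s1 controlled by Max, two actions each.
S_MIN, S_MAX = {"s0"}, {"s1"}
T = {("s0", "a"): "s1", ("s0", "b"): "s0",      # transition function
     ("s1", "a"): "s0", ("s1", "b"): "s1"}
PI = {("s0", "a"): 4, ("s0", "b"): 1,           # price function
      ("s1", "a"): 0, ("s1", "b"): 3}

mu  = {"s0": "a"}        # positional strategy of Min
chi = {"s1": "a"}        # positional strategy of Max

def average_price(start, n_steps=1000):
    state, total = start, 0
    for _ in range(n_steps):
        action = mu[state] if state in S_MIN else chi[state]
        total += PI[(state, action)]
        state = T[(state, action)]
    return total / n_steps

print(average_price("s0"))   # the play alternates s0 -a-> s1 -a-> s0 ..., average price 2.0
```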
For any state s and strategy pair (\u00b5, \u03c7) \u2208\u03a3Min\u00d7\u03a3Max, let Run(s, \u00b5, \u03c7) denote the unique in\ufb01nite play \u27e8s0, a1, s1, . . .\u27e9in which Min and Max play according to \u00b5 and \u03c7, respectively, i.e. for all i \u2a7e0 we have that si \u2208SMin implies ai+1 = \u00b5(\u27e8s0, a1, . . . , si\u27e9) and si \u2208SMax implies ai+1 = \u03c7(\u27e8s0, a1, . . . , si\u27e9). In a mean-payo\ufb00game on a game arena, players Min and Max move a token along the transitions inde\ufb01nitely thus forming an in\ufb01nite play \u03c1 = \u27e8s0, a1, s1, . . .\u27e9in the game graph. The goal of player Min is to minimize AMin(\u03c1) = lim supn\u2192\u221e 1 n \u00b7 Pn\u22121 i=0 \u03c0(si, ai+1) and the goal of player Max is to maximize AMax(\u03c1) = lim infn\u2192\u221e1 n \u00b7 Pn\u22121 i=0 \u03c0(si, ai+1). The upper value Val\u2217(s) and the lower value Val\u2217(s) of a state s \u2208S are de\ufb01ned as: Val\u2217(s) = inf \u00b5\u2208\u03a3Min sup \u03c7\u2208\u03a3Max AMin(Run(s, \u00b5, \u03c7)) and Val\u2217(s) = sup \u03c7\u2208\u03a3Max inf \u00b5\u2208\u03a3Min AMax(Run(s, \u00b5, \u03c7)) respectively. It is always the case that Val\u2217(s) \u2a7dVal\u2217(s). A mean-payo\ufb00game is called determined if for every state s \u2208S we have that Val\u2217(s) = Val\u2217(s). Then, we write Val(s) for this number and we call it the value of the mean-payo\ufb00game at state s. We say that a game is positionally-determined if for every \u03b5 > 0 we have strategies \u00b5\u03b5 \u2208\u03a0Min and \u03c7\u03b5 \u2208\u03a0Max such that for every initial state s \u2208S, we have that Val\u2217(s)\u2212\u03b5 \u2a7d inf \u00b5\u2032\u2208\u03a3Min AMax(Run(s, \u00b5\u2032, \u03c7\u03b5)) and Val\u2217(s)+\u03b5 \u2a7e sup \u03c7\u2032\u2208\u03a3Max AMin(Run(s, \u00b5\u03b5, \u03c7\u2032)). \fXX:4 Mean-Payo\ufb00Games on Timed Automata For a given \u03b5 we call each such strategy an \u03b5-optimal strategy for the respective player. Given two functions G : S \u2192R (gain) and B : S \u2192R (bias), we say that (G, B) is a solution to the optimality equations for mean-payo\ufb00game on \u0393 = (S, SMin, SMax, A, T, \u03c0), denoted (G, B) | = Opt(\u0393) if G(s) = ( supa\u2208A(s){G(s\u2032) : s a \u2212 \u2192s\u2032} if s \u2208SMax infa\u2208A(s){G(s\u2032) : s a \u2212 \u2192s\u2032} if s \u2208SMin. B(s) = ( supa\u2208A(s){\u03c0(s, a) \u2212G(s) + B(s\u2032) : s a \u2212 \u2192s\u2032 and G(s) = G(s\u2032)} if s \u2208SMax infa\u2208A(s){\u03c0(s, a) \u2212G(s) + B(s\u2032) : s a \u2212 \u2192s\u2032 and G(s) = G(s\u2032)} if s \u2208SMin. We prove the following theorem connecting a solution of the optimality equations with meanpayo\ufb00games. We exploit this theorem to solve mean-payo\ufb00games on timed automata. \u25b6Theorem 2. If there exists a function G : S \u2192R with \ufb01nite image and a function B : S \u2192R with bounded image such that (G, B) | = Opt(\u0393) then for every state s \u2208S, we have that G(s) = Val(s) and for every \u03b5 > 0 both players have positional \u03b5-optimal strategies. Proof. Assume that we are given the functions G : S \u2192R with \ufb01nite image and B : S \u2192R with bounded image such that (G, B) | = Opt(\u0393). 
In order to prove the result we show, for every \u03b5 > 0, the existence of positional strategies \u00b5\u03b5 and \u03c7\u03b5 such that G(s) \u2212\u03b5 \u2a7d inf \u00b5\u2032\u2208\u03a3Min AMax(Run(s, \u00b5\u2032, \u03c7\u03b5)) and G(s) + \u03b5 \u2a7e sup \u03c7\u2032\u2208\u03a3Max AMin(Run(s, \u00b5\u03b5, \u03c7\u2032)). The proof is in two parts. Given \u03b5 > 0 we compute the positional strategy \u00b5\u03b5 \u2208\u03a0Min satisfying the following conditions: \u00b5\u03b5(s) = a if G(s) = G(s\u2032) (1) B(s) \u2a7e \u03c0(s, a) \u2212G(s) + B(s\u2032) \u2212\u03b5, (2) where s a \u2212 \u2192s\u2032. Notice that it is always possible to \ufb01nd such strategy since (G, B) satis\ufb01es optimality equations and G is \ufb01nite image. Now consider an arbitrary strategy \u03c7 \u2208\u03a3Max and consider the run Run(s, \u00b5\u03b5, \u03c7) = \u27e8s0, a1, s1, . . . , sn, . . .\u27e9. Notice that for every i \u2a7e0 we have that G(si) \u2a7eG(si+1) if si \u2208SMax and G(si) = G(si+1) if si \u2208SMin. Hence G(s0), G(s1), . . . is a non-increasing sequence. Since G is \ufb01nite image, the sequence eventually becomes constant. Assume that for i \u2a7eN we have that G(si) = g. Now notice that for all i \u2a7eN we have that B(si) \u2a7e\u03c0(si, ai+1) \u2212g + B(si+1) if si \u2208SMax and B(si) \u2a7e\u03c0(si, ai+1) \u2212g + B(si+1) \u2212\u03b5 if si \u2208SMin. Summing these equations sidewise form i = N to N + k we have that B(sN) \u2a7ePN+k i=N \u03c0(si, ai+1) \u2212(k + 1) \u00b7 g + B(sN+k+1) \u2212(k + 1) \u00b7 \u03b5. Rearranging, we get g \u2a7e 1 k + 1 N+k X i=N \u03c0(si, ai+1) + 1 k + 1(B(sN+k+1) \u2212B(sN)) \u2212\u03b5. Hence g \u2a7e lim sup k\u2192\u221e 1 k + 1 N+k X i=N \u03c0(si, ai+1) + lim sup k\u2192\u221e 1 k + 1(B(sN+k+1) \u2212B(sN)) \u2212\u03b5 = lim sup k\u2192\u221e 1 k k X i=0 \u03c0(si, ai+1) \u2212\u03b5 Hence G(s) + \u03b5 \u2a7e AMin(Run(s, \u00b5\u03b5, \u03c7)). Since \u03c7 is an arbitrary strategy in \u03a3Max, we have G(s)+\u03b5 \u2a7esup\u03c7\u2032\u2208\u03a3Max AMin(Run(s, \u00b5\u03b5, \u03c7\u2032)). This part is analogous to the \ufb01rst part of the proof and is omitted. The proof is now complete. \u25c0 \fGuha, Jurdzi\u0144ski, Krishna, and Trivedi XX:5 2.2 Timed Automata Priced Timed Game Arenas (PTGAs) extend classical timed automata [2] with a partition of the actions between two players Min and Max. Before we present the syntax and semantics of PTGAs, we need to introduce the concept of clock variables and related notions. Clocks. Let X be a \ufb01nite set of clocks. A clock valuation on X is a function \u03bd : X\u2192R\u2a7e0 and we write V (X) (or just V when X is clear from the context) for the set of clock valuations. Abusing notation, we also treat a valuation \u03bd as a point in (R\u2a7e0)|X|. Let 0 denote the clock valuation that assigns 0 to all clocks. If \u03bd \u2208V and t \u2208R\u2a7e0 then we write \u03bd+t for the clock valuation de\ufb01ned by (\u03bd+t)(c) = \u03bd(c)+t for all c \u2208X. For C \u2286X, we write \u03bdC for the valuation where \u03bdC(c) equals 0 if c \u2208C and \u03bd(c) otherwise. For X \u2286V (X), we write X for the smallest closed set in V containing X. Although clocks are usually allowed to take arbitrary non-negative values, for notational convenience we assume that there is a K \u2208N such that for every c \u2208X we have \u03bd(c) \u2a7dK. Clock Constraints. 
A clock constraint over X with upper bound K \u2208N is a conjunction of simple constraints of the form c \u25b7 \u25c1i or c\u2212c\u2032 \u25b7 \u25c1i, where c, c\u2032 \u2208X, i \u2208N, i\u2a7dK, and \u25b7 \u25c1\u2208{<, >, =, \u2a7d, \u2a7e}. For \u03bd \u2208V (X) and K \u2208N, let CC(\u03bd, K) be the set of clock constraints with upper bound K which hold in \u03bd, i.e. those constraints that resolve to true after substituting each occurrence of a clock x with \u03bd(x). Regions and Zones. Every clock region is an equivalence class of the indistinguishabilityby-clock-constraints relation. For a given set of clocks X and upper bound K \u2208N on clock constraints, a clock region is a maximal set \u03b6\u2286V (X) such that CC(\u03bd, K)=CC(\u03bd\u2032, K) for all \u03bd, \u03bd\u2032 \u2208\u03b6. For the set of clocks X and upper bound K we write R(X, K) for the corresponding \ufb01nite set of clock regions. We write [\u03bd] for the clock region of \u03bd. A clock zone is a convex set of clock valuations that satis\ufb01es constraints of the form \u03b3 ::= c1 \u25b7 \u25c1k | c1 \u2212c2 \u25b7 \u25c1k | \u03b3 \u2227\u03b3, k \u2208N, c1, c2 \u2208X and \u25b7 \u25c1\u2208{\u2264, <, =, >, \u2265}. We write Z(X, K) for the set of clock zones over the set of clocks X and upper bound K. When X and K are clear from the context we write R and Z for the set of regions and zones. In this paper we \ufb01x a positive integer K, and work with K-bounded clocks and clock constraints. 2.3 Priced Timed Game Arena: Syntax and Semantics \u25b6De\ufb01nition 3. A priced timed game arena is a tuple T=(LMin, LMax, Act, X, Inv, E, \u03c1, \u03b4, p) where LMin and LMax are sets of locations controlled by Player Min and Player Max and we write L = LMin \u222aLMax; Act is a \ufb01nite set of actions; X is a \ufb01nite set of clocks; Inv : L \u2192Z is an invariant condition; E : L\u00d7Act \u2192Z is an action enabledness function; \u03c1 : Act \u21922C is a clock reset function; \u03b4 : L\u00d7Act \u2192L is a transition function; and p : L \u222aL\u00d7Act \u2192R is a price information function. A PTGA is binary-priced when p(\u2113) \u2208{0, 1} for all \u2113\u2208L. When we consider a PTGA as an input of an algorithm, its size is understood as the sum of the sizes of encodings of L, X, Inv, Act, E, \u03c1, \u03b4 and p. We draw the states of Min players as circles, while states of Max player as boxes. Let T = (LMin, LMax, Act, X, Inv, E, \u03c1, \u03b4, p) be a PTGA. A con\ufb01guration of a PTGA is a pair (\u2113, \u03bd), where \u2113is a location and \u03bd a clock valuation such that \u03bd \u2208Inv(\u2113). For any t \u2208R\u2a7e0, we let (\u2113, \u03bd)+t equal the con\ufb01guration (\u2113, \u03bd+t). In a con\ufb01guration (\u2113, \u03bd), a timed action (time-action pair) (t, a) is available if and only if the invariant condition Inv(\u2113) is continuously satis\ufb01ed while t time units elapse, and a is enabled (i.e. the enabling condition E(\u2113, a) is satis\ufb01ed) after t time units have elapsed. Furthermore, if the timed action (t, a) is performed, then the next con\ufb01guration is determined by the transition relation \u03b4 and the reset function \u03c1, i.e. the clocks in \u03c1(a) are reset and we move to the location \u03b4(\u2113, a). 
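The clock machinery above (valuations, delay, reset, and satisfaction of simple and diagonal constraints) can be made concrete with a small sketch; the dictionary representation and the particular clocks and bounds are illustrative choices, not part of the formal definitions.

```python
import operator

# A clock valuation as a dictionary from clock names to non-negative reals.
def delay(nu, t):
    """nu + t: let t time units elapse on every clock."""
    return {c: v + t for c, v in nu.items()}

def reset(nu, C):
    """nu[C := 0]: reset the clocks in C, keep the others."""
    return {c: (0.0 if c in C else v) for c, v in nu.items()}

OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">": operator.gt, ">=": operator.ge}

def satisfies(nu, constraints):
    """Conjunction of simple constraints ('x', '<=', 2) and diagonal
    constraints ('y', '-', 'x', '<', 1), as in the definition above."""
    for atom in constraints:
        if len(atom) == 3:
            c, op, k = atom
            lhs = nu[c]
        else:
            c, _, c2, op, k = atom
            lhs = nu[c] - nu[c2]
        if not OPS[op](lhs, k):
            return False
    return True

nu = {"x": 0.25, "y": 1.25}
nu2 = delay(nu, 0.75)                                              # {'x': 1.0, 'y': 2.0}
print(satisfies(nu2, [("x", "=", 1), ("y", "-", "x", ">=", 1)]))   # True
print(reset(nu2, {"x"}))                                           # {'x': 0.0, 'y': 2.0}
```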
\fXX:6 Mean-Payo\ufb00Games on Timed Automata A game on a PTGA starts in an initial con\ufb01guration (\u2113, \u03bd) \u2208L \u00d7 V and players Min and Max construct an in\ufb01nite play by taking turns to choose available timed actions (t, a) whenever the current location is controlled by them and the price p(\u2113) \u00b7 t + p(\u2113, a) is paid to the Max by player Min. Formally, PTGA semantics is given as a game arena. \u25b6De\ufb01nition 4 (PTGA Semantics). Let T = (LMin, LMax, Act, X, Inv, E, \u03c1, \u03b4, p) be a PTGA. The semantics of T is given by game arena [ [T] ]=(S, SMin, SMax, A, T, \u03c0) where S \u2286L\u00d7V is the set of states such that (\u2113, \u03bd) \u2208S if and only if \u03bd \u2208Inv(\u2113); (\u2113, \u03bd) \u2208SMin (or (\u2113, \u03bd) \u2208SMax) if (\u2113, \u03bd) \u2208S and \u2113\u2208LMin (or \u2113\u2208LMax, respectively). A = Act\u00d7R\u2a7e0 is the set of timed actions; T : S \u00d7 Act \u2192S is the transition function such that for (\u2113, \u03bd) \u2208S and (a, t) \u2208Act, we have T((\u2113, \u03bd), (a, t)) = (\u2113\u2032, \u03bd\u2032) if and only if \u03bd+t\u2032 \u2208Inv(\u2113) for all t\u2032 \u2208[0, t]; \u03bd+t \u2208E(\u2113, a); (\u2113\u2032, \u03bd\u2032) \u2208S, \u03b4(\u2113, a) = \u2113\u2032, (\u03bd +t)\u03c1(a) = \u03bd\u2032. \u03c0 : S\u00d7Act\u2192R is the reward function where \u03c0((\u2113, \u03bd), (a, t))=p(\u2113, a) + p(\u2113) \u00b7 t. We are interested in the mean-payo\ufb00decision problem for timed automata T that asks to decide whether the value of the mean-payo\ufb00game for a given state is below a given budget. For a PTGA T and budget r \u2208R, we write MPG(T, r) for the r-mean payo\ufb00 decision problem that asks whether value of the game at the state (\u2113, 0) is smaller than r. The following theorem summarizes the key contribution of this paper. \u25b6Theorem 5. The decision problem MPG(T, r) for binary-priced timed automata T is undecidable for automata with three clocks, and decidable for automata with one clock. 3 Boundary Region Graph Abstraction In this section we introduce an abstraction of priced timed games called the boundary region abstraction (that generalizes classical corner-point abstraction [7]), and characterize conditions under which a solution of optimality equations for the boundary region abstraction can be lifted to a solution of optimality equations for timed automata. Observe that in order to keep our result as general as possible, we present the abstraction and corresponding results for timed automata with an arbitrary number of clocks. In the following section, we show that the required conditions hold for the case of one-clock binary-priced timed automata. Timed Successor Regions. Recall that R is the set of clock regions. For \u03b6, \u03b6\u2032 \u2208R, we say that \u03b6\u2032 is in the future of \u03b6, denoted \u03b6 \u2217 \u2212 \u2192\u03b6\u2032, if there exist \u03bd \u2208\u03b6, \u03bd\u2032 \u2208\u03b6\u2032 and t \u2208R\u2a7e0 such that \u03bd\u2032 = \u03bd+t and say \u03b6\u2032 is the time successor of \u03b6 if \u03bd+t\u2032 \u2208\u03b6 \u222a\u03b6\u2032 for all t\u2032 \u2a7dt and write \u03b6 \u2192\u03b6\u2032, or equivalently \u03b6\u2032 \u2190\u03b6, to denote this fact. 
For regions \u03b6, \u03b6\u2032 \u2208R such that \u03b6 \u2217 \u2212 \u2192\u03b6\u2032 we write [\u03b6, \u03b6\u2032] for the zone S{\u03b6\u2032\u2032 | \u03b6 \u2217 \u2212 \u2192\u03b6\u2032\u2032 \u2227\u03b6\u2032\u2032 \u2217 \u2212 \u2192\u03b6\u2032}. Thin and Thick Regions. We say that a region \u03b6 is thin if [\u03bd]\u0338=[\u03bd+\u03b5] for every \u03bd \u2208\u03b6 and \u03b5>0 and thick otherwise. We write RThin and RThick for the sets of thin and thick regions, respectively. Observe that if \u03b6 \u2208RThick then, for any \u03bd \u2208\u03b6, there exists \u03b5>0, such that [\u03bd]=[\u03bd+\u03b5] and the time successor of a thin region is thick, and vice versa. Intuition for the Boundary Region Graph (BRG). Recall that K is an upper bound on clock values and let JKKN = {0, 1, . . . , K}. For any \u03bd \u2208V , b \u2208JKKN and c \u2208X, we de\ufb01ne time(\u03bd, (b, c)) def =b\u2212\u03bd(c) if \u03bd(c)\u2a7db, and time(\u03bd, (b, c)) def =0 if \u03bd(c)>b. Intuitively, time(\u03bd, (b, c)) returns the amount of time that must elapse in \u03bd before the clock c reaches the integer value b. Observe that, for any \u03b6\u2032 \u2208RThin, there exists b \u2208JKKN and c \u2208X, such that \u03bd \u2208\u03b6 implies (\u03bd+(b\u2212\u03bd(c)) \u2208\u03b6\u2032 for all \u03b6 \u2208R in the past of \u03b6\u2032 and write \u03b6 \u2192b,c \u03b6\u2032. The boundary region abstraction is motivated by the following. Consider a \u2208Act, (\u2113, \u03bd) and \u03b6 \u2217 \u2212 \u2192\u03b6\u2032 such that \u03bd \u2208\u03b6, [\u03b6, \u03b6\u2032] \u2286Inv(\u2113) and \u03bd\u2032 \u2208E(\u2113, a). (For illustration, see Figure 2 in Appendix). \fGuha, Jurdzi\u0144ski, Krishna, and Trivedi XX:7 If \u03b6\u2032 \u2208RThick, then there are in\ufb01nitely many t \u2208R\u2a7e0 such that \u03bd+t \u2208\u03b6\u2032. However, amongst all such t\u2019s, for one of the boundaries of \u03b6\u2032, the closer \u03bd+t is to this boundary, the \u2018better\u2019 the timed action (t, a) becomes for a player\u2019s objective. However, since \u03b6\u2032 is a thick region, the set {t \u2208R\u2a7e0 | \u03bd+t \u2208\u03b6\u2032} is an open interval, and hence does not contain its boundary values. Let the closest boundary of \u03b6\u2032 from \u03bd be de\ufb01ned by the hyperplane c = binf and the farthest boundary of \u03b6\u2032 from \u03bd be de\ufb01ned by the hyperplane c = bsup. binf, bsup \u2208N are such that binf \u2212\u03bd(c) (bsup\u2212\u03bd(c)) is the in\ufb01mum (supremum) of the time spent to reach the lower (upper) boundary of region \u03b6\u2032. Let the zones that correspond to these boundaries be denoted by \u03b6\u2032 inf and \u03b6\u2032 sup respectively. Then \u03b6 \u2192binf,c \u03b6\u2032 inf \u2192\u03b6\u2032 and \u03b6 \u2192bsup,c \u03b6\u2032 sup \u2190\u03b6\u2032. In the boundary region abstraction we include these \u2018best\u2019 timed actions through (binf, c, a, \u03b6\u2032) and (bsup, c, a, \u03b6\u2032). If \u03b6\u2032 \u2208RThin, then there exists a unique t \u2208R\u2a7e0 such that \u03bd+t \u2208\u03b6\u2032. Moreover since \u03b6\u2032 is a thin region, there exists a clock c \u2208C and a number b \u2208N such that \u03b6 \u2192b,c \u03b6\u2032 and t = b\u2212\u03bd(c). In the boundary region abstraction we summarise this \u2018best\u2019 timed action from region \u03b6 via region \u03b6\u2032 through the action (b, c, a, \u03b6\u2032). 
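The quantity time(ν, (b, c)) and the finitely many "boundary" delays that the abstraction keeps can be illustrated with a short sketch; the clock names, bound K and valuations are invented, and the full construction additionally records the region ζ′ being targeted.

```python
# time(nu, (b, c)): the time to elapse in nu before clock c reaches the integer b,
# and 0 if c is already beyond b -- exactly as defined above.
def time_to_boundary(nu, b, c):
    return b - nu[c] if nu[c] <= b else 0.0

# The boundary region abstraction replaces the continuum of possible delays by the
# finitely many delays that land a clock exactly on an integer boundary <= K.
def boundary_delays(nu, K):
    delays = set()
    for c, v in nu.items():
        start = int(v) + (0 if v == int(v) else 1)   # first integer boundary not yet passed
        for b in range(start, K + 1):
            delays.add((time_to_boundary(nu, b, c), b, c))
    return sorted(delays)

nu = {"x": 0.25, "y": 1.5}
print(time_to_boundary(nu, 1, "x"))   # 0.75
print(boundary_delays(nu, 2))         # [(0.5, 2, 'y'), (0.75, 1, 'x'), (1.75, 2, 'x')]
```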
Based on this intuition above the boundary region abstraction (BRA) is de\ufb01ned as follows. \u25b6De\ufb01nition 6. For a priced timed game arena T = (LMin, LMax, Act, X, Inv, E, \u03c1, \u03b4, p) the boundary region abstraction of T is given by the game arena b T = (b S, b SMin, b SMax, b A, b T, b \u03c0) b S \u2286L\u00d7V \u00d7R is the set of states such that (\u2113, \u03bd, \u03b6) \u2208b S if and only if \u03b6 \u2286Inv(\u2113) and \u03bd \u2208\u03b6 (recall that \u03b6 denotes the closure of \u03b6); (\u2113, \u03bd, \u03b6) \u2208b SMin (or (\u2113, \u03bd, \u03b6) \u2208b SMax) if (\u2113, \u03bd, \u03b6) \u2208b S and \u2113\u2208LMin (or \u2113\u2208LMax, resp.). b A = (JKKN\u00d7X\u00d7Act\u00d7R) is the set of actions; For \u02c6 s=(\u2113, \u03bd, \u03b6)\u2208b S and \u03b1=(b\u03b1, c\u03b1, a\u03b1, \u03b6\u03b1)\u2208b A, function b T(\u02c6 s, \u03b1) is de\ufb01ned if [\u03b6, \u03b6\u03b1]\u2286Inv(\u2113) and \u03b6\u03b1 \u2286E(\u2113, a\u03b1) and it equals (\u2113\u2032, \u03bd\u2032, \u03b6\u2032) \u2208b S where \u03b4(\u2113, a\u03b1) = \u2113\u2032, \u03bd\u03b1[C:=0] = \u03bd\u2032 and \u03b6\u03b1[C:=0] = \u03b6\u2032 with \u03bd\u03b1 = \u03bd+time(\u03bd, (b\u03b1, c\u03b1)) and one of the following conditions holds: \u03b6 \u2192b\u03b1,c\u03b1 \u03b6\u03b1; \u03b6 \u2192b\u03b1,c\u03b1 \u03b6inf \u2192\u03b6\u03b1 for some \u03b6inf \u2208R; \u03b6 \u2192b\u03b1,c\u03b1 \u03b6sup \u2190\u03b6\u03b1 for some \u03b6sup \u2208R; for (\u2113, \u03bd, \u03b6) \u2208b S and (b\u03b1, c\u03b1, a\u03b1, \u03b6\u03b1) \u2208b A the reward function b \u03c0 is given by: b \u03c0((\u2113, \u03bd, \u03b6), (b\u03b1, c\u03b1, a\u03b1, \u03b6\u03b1)) = p(\u2113, a\u03b1) + p(\u2113) \u00b7 (b\u03b1\u2212\u03bd(c\u03b1)) Although the boundary region abstraction is not a \ufb01nite game arena, every state has only \ufb01nitely many time-successor (the boundaries of the regions) and for a \ufb01xed initial state we can restrict attention to a \ufb01nite game arena due to the following observation. \u25b6Lemma 7 ( [23].). Let T be a priced timed game arena and b T the corresponding BRA. For any state of b T, its reachable sub-graph is \ufb01nite and can be constructed in time exponential in the size of T when T has more than one clock. For one clock T, the reachable sub-graph of b T can be constructed in time polynomial in the size of T. Moreover, the reachable sub-graph from the initial location and clock valuation is precisely the corner-point abstraction. 3.1 Reduction to Boundary Region Abstraction In what follows, unless speci\ufb01ed otherwise, we \ufb01x a PTGA T = (LMin, LMax, Act, X, Inv, E, \u03c1, \u03b4, p) with semantics [ [T] ]=(S, SMin, SMax, A, T, \u03c0) and BRA b T = (b S, b SMin, b SMax, b A, b T, b \u03c0). Let G : b S \u2192R and B : b S \u2192R be such that (G, B) | = Opt(b T), i.e. for every \u02c6 s \u2208b S we have that G(\u02c6 s) = \uf8f1 \uf8f2 \uf8f3 max\u03b1\u2208b A(\u02c6 s){G(\u02c6 s\u2032) : \u02c6 s \u03b1 \u2212 \u2192\u02c6 s\u2032} if \u02c6 s \u2208b SMax min\u03b1\u2208b A(\u02c6 s){G(\u02c6 s\u2032) : \u02c6 s \u03b1 \u2212 \u2192\u02c6 s\u2032} if \u02c6 s \u2208b SMin. 
\[ B(\hat{s}) = \begin{cases} \max_{\alpha \in \hat{A}(\hat{s})} \{ \hat{\pi}(\hat{s}, \alpha) - G(\hat{s}) + B(\hat{s}') : \hat{s} \xrightarrow{\alpha} \hat{s}' \text{ and } G(\hat{s}) = G(\hat{s}') \} & \text{if } \hat{s} \in \hat{S}_{\mathrm{Max}} \\ \min_{\alpha \in \hat{A}(\hat{s})} \{ \hat{\pi}(\hat{s}, \alpha) - G(\hat{s}) + B(\hat{s}') : \hat{s} \xrightarrow{\alpha} \hat{s}' \text{ and } G(\hat{s}) = G(\hat{s}') \} & \text{if } \hat{s} \in \hat{S}_{\mathrm{Min}}. \end{cases} \]
For a function F : Ŝ → R we define a function F⊞ : S → R as (ℓ, ν) ↦ F(ℓ, ν, [ν]). In this section we show under what conditions we can lift a solution (G, B) of the optimality equations of the BRA to (G⊞, B⊞) for the priced timed game arena. Given a set of valuations X ⊆ V, a function f : X → R≥0 is affine if for any valuations ν_x, ν_y ∈ X and all λ ∈ [0, 1], f(λν_x + (1−λ)ν_y) = λf(ν_x) + (1−λ)f(ν_y). We say that a function f : Ŝ → R≥0 is regionally affine if f(ℓ, ·, ζ) is affine over a region for all ℓ ∈ L and ζ ∈ R, and f is regionally constant if f(ℓ, ·, ζ) is constant over a region for all ℓ ∈ L and ζ ∈ R. Some properties of affine functions that are useful in the proof of the key lemma are given in Lemma 8. ▶Lemma 8. Let X ⊆ V and Y ⊆ R≥0 be convex sets. Let f : X → R and w : X × Y → R be affine functions. Then for C ⊆ X we have that φ_C(ν, t) = w(ν, t) + f((ν + t)[C:=0]) is also an affine function, and inf t1 = 0.
(Figure 1: Simulation to decrement counter C1; mean cost is ε for error ε.) The widget WD1_2 has exactly the same structure and guards on all transitions as WD1_1, but the price signs are reversed. Proof. We first show the undecidability result of the mean-payoff problem MPG(T, 0) with location prices {1, 0, −1} and no edge prices. We prove the result by reducing the non-halting problem of two-counter machines. Our reduction uses a PTGA with 3 clocks x1, x2, x3, location prices {1, 0, −1}, and no edge prices. Each counter machine instruction (increment, decrement, zero check) is specified using a PTGA module. The main invariant in our reduction is that on entry into any module, we have x1 = 1/(5^{c1} 7^{c2}), x2 = 0 and x3 = 0, where c1, c2 are the values of counters C1, C2. We outline the construction for the decrement instruction of counter C1 in Figure 1. For conciseness, we present here modules using arbitrary location prices. However, we can redraw these with extra locations and edges using only the location prices from {1, 0, −1}, as shown for WD1_1 in Figure 5 in the Appendix. The role of the Min player is to faithfully simulate the two-counter machine, by choosing appropriate delays to adjust the clocks to reflect changes in counter values. Player Max will have the opportunity to verify that player Min did not cheat while simulating the machine. We enter location ℓ_k with x1 = 1/(5^{c1} 7^{c2}), x2 = 0 and x3 = 0. Let us denote by x_old the value 1/(5^{c1} 7^{c2}). To correctly decrement C1, player Min should choose a delay of 4·x_old at location ℓ_k.
At location Check, there is no time elapse and player Max has three possibilities : (i) to go to \u2113k+1 and continue the simulation, or (ii) to enter the widget WD1 1, or (iii) to enter the widget WD1 2. If player Min makes an error, and delays 4xold + \u03b5 or 4xold \u2212\u03b5 at \u2113k (\u03b5 > 0), then player Max can enter one of the widgets and punish player Min. Player Max enters widget WD1 1 if the error made by player Min is of the form 4xold +\u03b5 at \u2113k and enters widget WD1 2 if the error made by player Min is of the form 4xold \u2212\u03b5 at \u2113k. Let us examine the widget WD1 1. When we enter WD1 1 for the \ufb01rst time, we have x1 = xold + 4xold + \u03b5, x2 = 4xold + \u03b5 and x3 = 0. In WD1 1, the cost of going once from location A to E is 5\u03b5. Also, when we get back to A after going through the loop once, the clock values with which we entered WD1 1 are restored; thus, each time, we come back to A, we restore the starting values with which we enter WD1 1. The third clock is really useful for this purpose only. It can be seen that the mean cost of transiting from A to A through E is \u03b5. In a similar way, it can be checked that the mean cost of transiting from A to A through E in widget WD1 2 is \u03b5 when player Min chooses a delay 4xold \u2212\u03b5 at \u2113k. Thus, if player Min makes a simulation error, player Max can always choose to goto one of the widgets, and ensure that the mean pay-o\ufb00is not \u2a7d0. Note that when \u03b5 = 0, then player Min will achieve his objective: the mean pay-o\ufb00will be 0. Details of other gadgets are in Appendix D.1. \u25c0 In the Appendix D.2, we show how this undecidability results extends (with the same parameters) if one de\ufb01nes mean payo\ufb00per time unit instead of per step. This way of averaging across time spent was considered in [10], where the authors show the undecidability of MPG(T, 0) with 5 clocks. We improve this result to show undecidability already in 3 clocks. \fGuha, Jurdzi\u0144ski, Krishna, and Trivedi XX:13" + }, + { + "url": "http://arxiv.org/abs/1507.05787v1", + "title": "Revisiting Robustness in Priced Timed Games", + "abstract": "Priced timed games are optimal-cost reachability games played between two\nplayers---the controller and the environment---by moving a token along the\nedges of infinite graphs of configurations of priced timed automata. The goal\nof the controller is to reach a given set of target locations as cheaply as\npossible, while the goal of the environment is the opposite. Priced timed games\nare known to be undecidable for timed automata with $3$ or more clocks, while\nthey are known to be decidable for automata with $1$ clock.\n In an attempt to recover decidability for priced timed games Bouyer, Markey,\nand Sankur studied robust priced timed games where the environment has the\npower to slightly perturb delays proposed by the controller. Unfortunately,\nhowever, they showed that the natural problem of deciding the existence of\noptimal limit-strategy---optimal strategy of the controller where the\nperturbations tend to vanish in the limit---is undecidable with $10$ or more\nclocks. In this paper we revisit this problem and improve our understanding of\nthe decidability of these games. We show that the limit-strategy problem is\nalready undecidable for a subclass of robust priced timed games with $5$ or\nmore clocks. 
On a positive side, we show the decidability of the existence of\nalmost optimal strategies for the same subclass of one-clock robust priced\ntimed games by adapting a classical construction by Bouyer at al. for one-clock\npriced timed games.", + "authors": "Shibashis Guha, Shankara Narayanan Krishna, Lakshmi Manasa, Ashutosh Trivedi", + "published": "2015-07-21", + "updated": "2015-07-21", + "primary_cat": "cs.LO", + "cats": [ + "cs.LO", + "cs.FL" + ], + "main_content": "Introduction Two-player zero-sum games on priced timed automata provide a mathematically elegant modeling framework for the control-program synthesis problem in real-time systems. In these games, two players\u2014the controller and the environment\u2014move a token along the edges of the in\ufb01nite graph of con\ufb01gurations of a timed automaton to construct an in\ufb01nite execution of the automaton in order to optimize a given performance criterion. The optimal strategy of the controller in such game then corresponds to control-program with the optimal performance. By priced timed games (PTGs) we refer to such games on priced timed automata with optimal reachability-cost objective. The problem of deciding the existence of the optimal controller strategy in PTGs is undecidable [8] with 3 or more clocks, while it is known to be decidable [5] for automata with 1 clock. Also, the \u03b5-optimal strategies can be computed for priced timed games under the non-Zeno assumption [1, 4]. Unfortunately, however, the optimal controller strategies obtained as a result of solving games on timed automata may not be physically realizable due to unrealistic assumptions made in the modeling using timed automata, regarding the capability of the controller in enforcing precise delays. This severely limits the application of priced timed games in control-program synthesis for real-time systems. In order to overcome this limitation, Bouyer, Markey, and Sankur [7] argued the need for considering the existence of robust optimal strategies and introduced two di\ufb00erent robustness semantics\u2014excess and conservative\u2014in priced timed games. The key assumption in their modeling is that the controller may not be able to apply an action at the exact time delays suggested by the optimal strategy. This phenomenon is modeled as a perturbation game where the time delay suggested by the controller can be perturbed by a bounded quantity. Notice that such a perturbation may result in the guard of the corresponding action being disabled. In the conservative semantics, it is the controller\u2019s responsibility to make sure that the guards are satis\ufb01ed after the perturbation. On the other hand, in the excess semantics, the controller is supposed to make sure that the guard is satis\ufb01ed before the perturbation: an action can be executed even when its guard is disabled (\u201cexcess\u201d) post perturbation and the valuations post perturbation will be re\ufb02ected in the next state. The game based characterization for robustness in timed automata under \u201cexcess\u201d semantics was \ufb01rst proposed by Bouyer, Markey, and Sankur [6] where they study the parameterized robust (qualitative) reachability problem and show it to be EXPTIME-complete. The \u201cconservative\u201d semantics were studied for reachability and B\u00fcchi objectives in [13] and shown to be PSPACE-complete. For a detailed survey on robustness in timed setting we refer to an excellent survey by Markey [11]. 
Bouyer, Markey, and Sankur [7] showed that the problem for deciding the existence of the optimal strategy is undecidable for priced timed games with 10 or more clocks under the excess semantics. In this paper we further improve the understanding of the decidability of these games. However, to keep the presentation simple, we restrict our attention to turn-based games under excess semantics. To further generalize the setting, we permit both positive and negative price rates with the restriction that the accumulated cost in any cycle is non-negative (akin to the standard no-negative-cycle restriction in shortest path game problems on \ufb01nite graphs). We improve the undecidability result of [7] by proving that optimal reachability remains undecidable for robust priced timed automata with 5 clocks. Our second key result is that, for a \ufb01xed \u03b4, the cost optimal reachability problem for one clock priced timed games with no-negative-cycle restriction is decidable for robust priced timed games with given bound on perturbations. To the best of our knowledge, this is the \ufb01rst decidability result known for robust timed games under the excess semantics. A closely related result is [9], where decidability is shown for robust timed games under the conservative semantics for a \ufb01xed \u03b4. 2 Preliminaries We write R for the set of reals and Z for the set of integers. Let C be a \ufb01nite set of real-valued variables called clocks. A valuation on C is a function \u03bd : C \u2192R. We assume an arbitrary \fGuha, Krishna, Manasa and Trivedi 3 but \ufb01xed ordering on the clocks and write xi for the clock with order i. This allows us to treat a valuation \u03bd as a point (\u03bd(x1), \u03bd(x2), . . . , \u03bd(xn)) \u2208R|C|. Abusing notations slightly, we use a valuation on C and a point in R|C| interchangeably. For a subset of clocks X \u2286C and valuation \u03bd \u2208R|C|, we write \u03bd[X:=0] for the valuation where \u03bd[X:=0](x) = 0 if x \u2208X, and \u03bd[X:=0](x) = \u03bd(x) otherwise. The valuation 0 \u2208R|C| is a special valuation such that 0(x) = 0 for all x \u2208C. A clock constraint over C is a subset of R|C|. We say that a constraint is rectangular if it is a conjunction of a \ufb01nite set of constraints of the form x \u25b7 \u25c1k, where k \u2208Z, x \u2208C, and \u25b7 \u25c1\u2208{<, \u2264, =, >, \u2265}. For a constraint g \u2208\u03d5(C), we write [ [g] ] for the set of valuations in R|C| satisfying g. We write \u03d5(C) for the set of rectangular constraints over C. We use the terms constraints and guards interchangeably. Following [5] we introduce priced timed games with external cost function on target locations (see Appendix A). For this purpose, we de\ufb01ne a cost function[5] as a piecewise a\ufb03ne continuous function f : Rn \u22650 \u2192R \u222a{+\u221e, \u2212\u221e}. We write F for the set of all cost functions. \u25b6De\ufb01nition 1 (Priced Timed Games). A turn-based two player priced timed game is a tuple G = (L1, L2, Linit, C, X, \u03b7, T, fgoal) where Li is a \ufb01nite set of locations of Player i, Linit \u2286 L1 \u222aL2(let L1 \u222aL2 = L) is a set of initial locations, C is an (ordered) set of clocks, X \u2286 L \u00d7 \u03d5(C) \u00d7 2C \u00d7 (L \u222aT) is the transition relation, \u03b7: L \u2192Z is the price function, T is the set of target locations, T \u2229L = \u2205; and fgoal : T \u2192F assigns external cost functions to target locations. We refer to Player 1 as the controller and Player 2 as the environment. 
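The external cost functions fgoal attached to target locations are piecewise-affine continuous; a minimal one-clock encoding as a breakpoint list, used here purely for illustration, looks as follows.

```python
# A piecewise-affine continuous cost function over one clock, encoded by its
# breakpoints [(x0, y0), (x1, y1), ...] with increasing x_i; values in between
# are obtained by linear interpolation.  Illustrative encoding only.
def evaluate(pwa, x):
    for (x0, y0), (x1, y1) in zip(pwa, pwa[1:]):
        if x0 <= x <= x1:
            lam = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            return (1 - lam) * y0 + lam * y1
    raise ValueError("valuation outside the function's domain")

# Example target cost: 3 at x = 0, falling linearly to 1 at x = 0.5, then constant.
f_goal = [(0.0, 3.0), (0.5, 1.0), (1.0, 1.0)]
print(evaluate(f_goal, 0.25))   # 2.0
```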
A priced timed game begins with a token placed on some initial location \u2113with valuation 0 and cost accumulated being so far being 0. At each round, the player who controls the current location \u2113chooses a delay t (to be elapsed in l) and an outgoing transition e = (\u2113, g, r, \u2113\u2032) \u2208X to be taken after t delay at \u2113. The clock valuation is then updated according to the delay t, the reset r, the cost is incremented by \u03b7(\u2113) \u00b7 t and the token is moved to the location \u2113\u2032. The two players continue moving the token in this fashion, and give rise to a sequence of locations and transitions called a play of the game. A con\ufb01guration or state of a PTG is a tuple (\u2113, \u03bd, c) where \u2113\u2208L is a location, \u03bd \u2208R|C| is a valuation, and c is the cost accumulated from the start of the play. We assume, w.l.o.g [2], that the clock valuations are bounded. \u25b6De\ufb01nition 2 (PTG semantics). The semantics of a PTG G is a labelled state-transition game arena [ [G] ] = (S = S1 \u228eS2, Sinit, A, E, \u03c0, \u03ba) where Sj = Lj \u00d7 R|C| are the Player j states with S = S1 \u228eS2, Sinit \u2286S are initial states s.t. (\u2113, \u03bd) \u2208Sinit if \u2113\u2208Linit, \u03bd = 0, A = R\u22650 \u00d7 X is the set of timed moves, E : (S \u00d7 A) \u2192S is the transition function s.t. for s = (\u2113, \u03bd), s\u2032 = (\u2113\u2032, \u03bd\u2032)\u2208S and \u03c4 = (t, e) \u2208A the function E(s, \u03c4) is de\ufb01ned if e = (\u2113, g, r, \u2113\u2032) is a transition of the PTG and \u03bd \u2208[ [g] ]; moreover E(s, \u03c4) = s\u2032 if \u03bd\u2032 = (\u03bd + t)[r:=0] (we write s \u03c4 \u2212 \u2192s\u2032 when E(s, \u03c4) = s\u2032); \u03c0 : S \u00d7 A \u2192R is the price function such that \u03c0((\u2113, \u03bd), (t, e)) = \u03b7(\u2113) \u00b7 t; and \u03ba : S \u2192R is an external cost function such that \u03ba(\u2113, \u03bd) is de\ufb01ned when \u2113\u2208T such that \u03ba(\u2113, \u03bd) = fgoal(\u2113)(\u03bd). A play \u03c1 = \u27e8s0, \u03c41, s1, \u03c42, . . . , sn\u27e9is a \ufb01nite sequence of states and actions s.t. s0 \u2208Sinit and si \u03c4i+1 \u2212 \u2212 \u2212 \u2192si+1 for all 0 \u2264i < n. The in\ufb01nite plays are de\ufb01ned in an analogous manner. For a \ufb01nite play \u03c1 we write its last state as last(\u03c1) = sn. For a (in\ufb01nite or \ufb01nite) play \u03c1 we write stop(\u03c1) for the index of \ufb01rst target state and if it doesn\u2019t visit a target state then stop(\u03c1) = \u221e. We denote the set of plays as PlaysG. For a play \u03c1 = \u27e8s0, (t1, a1), s1, (t2, a2), . . .\u27e9 if stop(\u03c1) = n < \u221ethen CostG(\u03c1) = \u03ba(sn) + Pn j=1 \u03c0(si\u22121, (ti, ai)) else CostG(\u03c1) = +\u221e. A strategy of player j in G is a function \u03c3 : PlaysG \u2192A such that for a play \u03c1 the function \u03c3(\u03c1) is de\ufb01ned if last(\u03c1) \u2208Sj. We say that a strategy \u03c3 is memoryless if \u03c3(\u03c1) = \u03c3(\u03c1\u2032) when last(\u03c1) = last(\u03c1\u2032), otherwise we call it memoryful. We write Strat1 and Strat2 for the set of strategies of player 1 and 2, respectively. A play \u03c1 is said to be compatible to a strategy \u03c3 of player j \u2208{1, 2} if for every state si in \u03c1 that belongs to Player j, si+1 = \u03c3(si). 
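Accumulating CostG(ρ) along a finite play that reaches a target combines the location prices with the elapsed delays and the external cost at the target; a small sketch follows, in which the locations, price rates and target cost function are invented.

```python
# Cost of a finite play: sum of eta(location) * delay over the visited locations,
# plus the external cost at the target state.  All numbers are invented.
eta = {"l0": 2, "l1": -1}            # price rate of each location
f_goal = lambda x: 3.0 - x           # external cost at the target, per clock value

def play_cost(steps, clock_at_target):
    """steps = [(location, delay), ...] for the non-target part of the play."""
    return sum(eta[loc] * t for loc, t in steps) + f_goal(clock_at_target)

# Spend 0.5 at l0 (rate 2) and 1.0 at l1 (rate -1), reach the target with x = 1.5.
print(play_cost([("l0", 0.5), ("l1", 1.0)], 1.5))   # 2*0.5 - 1*1.0 + (3 - 1.5) = 1.5
```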
Given a pair of strategies (\u03c31, \u03c32) \u2208Strat1 \u00d7 Strat2, \f4 Revisiting Robustness in Priced Timed Games and a state s, the outcome of (\u03c31, \u03c32) from s denoted Outcome(s, \u03c31, \u03c32) is the unique play that starts at s and is compatible with both strategies. Given a player 1 strategy \u03c31 \u2208Strat1 we de\ufb01ne its cost CostG(s, \u03c31) as sup\u03c32\u2208Strat2(Cost(Outcome(s, \u03c31, \u03c32))). We now de\ufb01ne the optimal reachability-cost for Player 1 from a state s as OptCostG(s) = inf \u03c31\u2208Strat1 sup \u03c32\u2208Strat2 (Cost(Outcome(s, \u03c31, \u03c32))). A strategy \u03c31 \u2208Strat1 is said to be optimal from s if CostG(s, \u03c31) = OptCostG(s). Since the optimal strategies may not always exist [5] we de\ufb01ne \u03f5 optimal strategies. For \u03f5 > 0 a strategy \u03c3\u03f5 \u2208Strat1 is called \u03f5-optimal if OptCostG(s) \u2264CostG(s, \u03c3\u03f5) < OptCostG(s)+\u03f5. Given a PTG G and a bound K \u2208Z, the cost-optimal reachability problem for PTGs is to decide whether there exists a strategy for player 1 such that OptCostG(s) \u2264K from some starting state s. \u25b6Theorem 3 ([3]). Cost-optimal reachability problem is undecidable for PTGs with 3 clocks. \u25b6Theorem 4 ([5, 10, 12]). The \u03f5-optimal strategy is computable for 1 clock PTGs. 3 Robust Semantics Under the robust semantics of priced timed games the environment player\u2014also called as the perturbator\u2014is more privileged as it has the power to perturb any delay chosen by the controller by an amount in [\u2212\u03b4, \u03b4], where \u03b4 > 0 is a pre-de\ufb01ned bounded quantity. However, in order to ensure time-divergence there is a restriction that the time delay at all locations of the RPTG must be \u2265\u03b4. There are the following two perturbation semantics as de\ufb01ned in [7]. Excess semantics. At any controller location, the time delay t chosen by the controller is altered to some t\u2032 \u2208[t \u2212\u03b4, t + \u03b4] by the perturbator. However, the constraints on the outgoing transitions of the controller locations are evaluated with respect to the time elapse t chosen by the controller. If the constraint is satis\ufb01ed with respect to t, then the values of all variables which are not reset on the transition are updated with respect to t\u2032; the variables which are reset obtain value 0. Conservative semantics. In this, the constraints on the outgoing transitions are evaluated with respect to t\u2032. In both semantics, the delays chosen by perturbator at his locations are not altered, and the constraints on outgoing transitions are evaluated in the usual way, as in PTG. A Robust-Priced Timed Automata (RPTA) is an RPTG which has only controller locations. At all these locations, for any time delay t chosen by controller, perturbator can implicitely perturb t by a quantity in [\u2212\u03b4, \u03b4]. The excess as well as the conservative perturbation semantics for RPTA are de\ufb01ned in the same way as in the RPTG. Note that our RPTA coincides with that of [7] when the cost functions at all target locations are of the form cf : Rn \u22650 \u2192{0}. Our RPTG are turn-based, and have cost funtions at the targets, while RPTGs studied in [7] are concurrent. \u25b6De\ufb01nition 5 (Excess Perturbation Semantics). Let R = (L1, L2, Linit, C, X, \u03b7, T, fgoal) be a RPTG. 
Given a \u03b4 > 0, the excess perturbation semantics of RPTG R is a LTS [ [R] ] = (S, A, E) where S = S1 \u222aS2 \u222a(T \u00d7 R\u22650), A = A1 \u222aA2 and E = E1 \u222aE2. We de\ufb01ne the set of states, actions and transitions for each player below. S1 = L1 \u00d7 R|C| are the controller states, S2 = (L2 \u00d7 R|C|) \u222a(S1 \u00d7 R\u22650 \u00d7 X) are the perturbator states. The \ufb01rst kind of states are encountered at perturbator locations. The second kind of states are encountered when controller chooses a delay t \u2208R\u22650 and a transition e \u2208X at a controller location. A1 = R\u22650 \u00d7 X are controller actions A2 = (R\u22650 \u00d7 X) \u222a[\u2212\u03b4, \u03b4] are perturbator actions. The \ufb01rst kind of actions (R\u22650 \u00d7 X) are chosen at states of the form L2 \u00d7 R|C| \u2208S2, while the second kind of actions are chosen at states of the form S1 \u00d7 R\u22650 \u00d7 X \u2208S2, \fGuha, Krishna, Manasa and Trivedi 5 E1 = (S1 \u00d7 A1 \u00d7 S2) is the set of controller transitions such that for a controller state (l, \u03bd) and a controller action (t, e), E1((l, \u03bd), (t, e)) is de\ufb01ned i\ufb00there is a transition e = (l, g, a, r, l\u2032) in R such that \u03bd + t \u2208[ [g] ]. E2 = S2 \u00d7 A2 \u00d7 (S1 \u222aS2 \u222a(T \u00d7 R\u22650)) is the set of perturbator transitions such that For a perturbator state of the type (l, \u03bd) and a perturbator action (t, e), we have (l\u2032, \u03bd\u2032) = E2((l, \u03bd), (t, e)) i\ufb00there is a transition e = (l, g, a, r, l\u2032) in R such that \u03bd + t \u2208 [ [g] ], \u03bd\u2032 = (\u03bd + t)[r := 0], For a perturbator state of type ((l, \u03bd), t, e) and a perturbator action \u03b5 \u2208[\u2212\u03b4, \u03b4], we have (l\u2032, \u03bd\u2032) = E2(((l, \u03bd), t, e), \u03b5) i\ufb00e = (l, g, a, r, l\u2032), and \u03bd\u2032 = (\u03bd + t + \u03b5)[r := 0]. We now de\ufb01ne the cost of the transitions, denoted as Cost(t, e) as follows : For controller transitions : (l, \u03bd) (t,e) \u2212 \u2212 \u2212 \u2192((l, \u03bd), t, e) : the cost accumulated is Cost(t, e) = 0. For perturbator transitions : From perturbator states of type (l, \u03bd) : (l, \u03bd) t,e \u2212 \u2212 \u2192(l\u2032, \u03bd\u2032), the cost accumulated is Cost(t, e) = t \u2217\u03b7(l). From perturbator states of type ((l, \u03bd), t, e) : ((l, \u03bd), t, e) \u03b5 \u2212 \u2192(l\u2032, \u03bd\u2032), the cost accumulated is (t + \u03b5) \u2217\u03b7(l). Note that although this transition has no edge choice involved and the perturbation delay chosen is \u03b5 \u2208[\u2212\u03b4, \u03b4], the controller action (t, e) chosen in the state (l, \u03bd) comes into e\ufb00ect in this transition. Hence for the sake of uniformity, we denote the cost accumulated in this transition to be Cost(t + \u03b5, e) = (t + \u03b5) \u2217\u03b7(l). Note that we check satis\ufb01ability of the constraint g before the perturbation; however, the reset occurs after the perturbation. The notions of a path and a winning play are the same as in PTG. We shall now adapt the de\ufb01nitions of cost of a play, and a strategy for the excess perturbation semantics. Let \u03c1 = \u27e8s1, (t1, e1), s2, (t2, e2), \u00b7 \u00b7 \u00b7 (tn\u22121, en\u22121), sn\u27e9 be a path in the LTS [ [R] ]. 
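One round of the excess perturbation semantics just defined can be sketched as follows (a single-clock illustration with an invented location, guard and price): the guard is evaluated for the controller's unperturbed delay, while the resulting valuation and the accumulated cost reflect the perturbed delay.

```python
# One controller round under the excess semantics: the controller proposes (t, e);
# the guard of e is checked at nu + t (before perturbation); the perturbator picks
# eps in [-delta, delta]; resets apply to nu + t + eps; the cost is (t + eps)*eta(l).
# Single clock x; location names, guard and prices are invented for illustration.
def excess_step(loc, nu, t, edge, eps, delta, eta):
    guard, resets, target = edge
    assert -delta <= eps <= delta, "perturbation outside [-delta, delta]"
    assert guard(nu + t), "guard must hold for the unperturbed delay"
    nu_next = 0.0 if "x" in resets else nu + t + eps
    return target, nu_next, (t + eps) * eta[loc]

eta = {"l0": 2}
edge = (lambda x: x == 1.0, set(), "l1")        # guard x = 1, no reset, go to l1
print(excess_step("l0", 0.25, 0.75, edge, 0.0625, 0.125, eta))
# ('l1', 1.0625, 1.625): the guard is checked at x = 1, but the play continues with x = 1.0625
```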
Given a δ > 0, for a finite play ρ ending in a target location, we define Cost^δ_R(ρ) = Σ_{i=1}^{n} Cost(t_i, e_i) + f_goal(l_n)(ν_n), i.e. the sum of the costs of all transitions as defined above together with the value of the cost function at the target location l_n. Also, we re-define the cost of a strategy σ1 from a state s for a given δ > 0 as Cost^δ_R(s, σ1) = sup_{σ2 ∈ Strat2(R)} Cost^δ_R(Outcome(s, σ1, σ2)). Similarly, OptCost^δ_R is the optimal cost under the excess perturbation semantics for a given δ > 0, defined as OptCost^δ_R(s) = inf_{σ1 ∈ Strat1(R)} sup_{σ2 ∈ Strat2(R)} Cost^δ_R(Outcome(s, σ1, σ2)). Since optimal strategies may not always exist, we define ε-optimal strategies such that for every ε > 0, OptCost^δ_R(s) ≤ Cost^δ_R(s, σ1) < OptCost^δ_R(s) + ε. Given a δ and an RPTG R with a single clock x, a strategy σ1 is called (ε, N)-acceptable [5] for ε > 0, N ∈ N when (1) it is memoryless, (2) it is ε-optimal, and (3) there exist N consecutive intervals (I_i)_{1≤i≤N} partitioning [0, 1] such that for every location l, every 1 ≤ i ≤ N and every integer α < M (where M is the maximum bound on the clock value), the function that maps the clock value ν(x) to the cost of the strategy σ1 at the state (l, ν(x)), i.e. ν(x) ↦ Cost^δ_R((l, ν(x)), σ1), is affine on every interval α + I_i. Also, the strategy σ1 is constant over the values α + I_i at all locations, that is, when ν(x) ∈ α + I_i, the strategy σ1(l, ν(x)) is constant. The number N is an important attribute of the strategy, as it establishes that the strategy does not fluctuate infinitely often and is implementable. We now define limit variations of costs, strategies and values as δ → 0. The limit-cost of a controller strategy σ1 from state s is defined over all plays ρ starting from s that are compatible with σ1 as LimCost_R(s, σ1) = lim_{δ→0} sup_{σ2 ∈ Strat2(R)} Cost^δ_R(Outcome(s, σ1, σ2)). The limit strategy upper-bound problem [7] for the excess perturbation semantics asks, given an RPTG R, a state s = (l, 0) with cost 0 and a rational number K, whether there exists a strategy σ1 such that LimCost_R(s, σ1) ≤ K. The following are the main results of [7].
(Figure: (Non-)Negative Cycles — two example one-clock cycles, one whose accumulated cost is always non-negative and one whose accumulated cost is always negative.)
▶Theorem 6 (Known results [7]). 1. The limit-strategy upper-bound problem is undecidable for RPTA and RPTG under excess perturbation semantics, for ≥ 10 clocks. 2. For a fixed δ ∈ [0, 1/3], a given RPTA A, a target location l and a rational K, it is undecidable whether inf_{σ1} sup_{σ2} cost_{σ1,σ2}(ρ) < K such that ρ ends in l, where cost_{σ1,σ2}(ρ) is the cost of the unique run ρ obtained from the pair of strategies (σ1, σ2). We consider a semantic subclass of RPTGs in which the accumulated cost of any cycle is non-negative: that is, any iteration of a cycle will always have a non-negative cost. Consider the two cycles depicted. The one on top has a non-negative cost, while the one below always has a negative cost.
In the cycle below, the perturbator will not perturb, since that will lead to a target state. In the rest of the paper, we consider this semantic class of RPTGs (RPTAs), and prove decidability and undecidability results; however, we will refer to them as RPTGs(RPTAs). Our key contributions are the following theorems. \u25b6Theorem 7. The limit-strategy upper-bound problem is undecidable for RPTA with 5 clocks, location prices in {0, 1}, and cost functions cf : Rn \u22650 \u2192{0} at all target locations. \u25b6Theorem 8. Given a 1-clock RPTG R and a \u03b4 > 0, we can compute OptCost\u03b4 R(s) for every state s = (l, \u03bd). For every \u03f5 > 0, there exists an N \u2208N such that the controller has an (\u03f5, N)-acceptable strategy. The rest of the paper is devoted to the proof sketches of these two theorems, while we give detailed proofs in the appendix. 4 Undecidability with 5 clocks In this section, we improve the result of [7] by showing that the limit strategy upper bound problem is undecidable for robust priced timed automata with 5 or more clocks. The undecidability result is obtained using a reduction to the halting problem of two-counter machines. A two-counter machine has counters C1 and C2, and a list of instructions I1, I2, . . . , In, where In is the halt instruction. For each 1 \u2264i \u2264n\u22121, Ii is one of the following instructions: increment cb: cb := cb + 1; goto Ij, for b = 1 or 2, decrement cb with zero test: if (cb = 0) goto Ij else cb := cb \u22121; goto Ij, where c1, c2 represent the counter values. The initial values of both counters are 0. Given the initial con\ufb01guration (I1, 0, 0) the halting problem for two counter machines is to \ufb01nd if the con\ufb01guration (In, c1, c2) is reachable, with c1, c2 \u22650. This problem is known to be undecidable. We simulate the two counter machine using a RPTA with 5 clocks x1, z, x2, y1 and y2 under the excess perturbation semantics. The counters are encoded in clocks x1 and z as x1 = 1 2i + \u03b51 and z = 1 2j + \u03b52 where i, j are respectively the values of counters C1, C2, and \u03b51 and \u03b52 denote accumulated values due to possible perturbations. Clocks x2, y1 and y2 help with the rough work. The simulation is achieved as follows: for each instruction, we have a module simulating it. Upon entering the module, the clocks are in their normal form i.e. x1 = 1 2i + \u03b51, z = 1 2j + \u03b52 and x2 = 0 and y1 = y2 = 0. 4.1 Increment module The module in Figure 1 simulates the increment of counter C1. The value of counter C2 remains unchanged since the value of clock z remains unchanged at the exit from the module. Upon entering A the clock values are x1 = 1 2i + \u03b51, z = 1 2j + \u03b52, x2 = y1 = y2 = 0. Here \u03b51 and \u03b52 respectively denote the perturbations accumulated so far. We denote by \u03b1, the value of clock x1, i.e. 1 2i + \u03b51. Thus at A, the delay is 1 \u2212\u03b1. Note that the dashed edges are unperturbed (this is a short hand notation. A small gadget that implements this is described in Appendix B), so x1 = 1 on entering B. 
No time elapse happens at B, and at C, controller \fGuha, Krishna, Manasa and Trivedi 7 0 A 0 B 0 C 0 D mChoice 0 E RestoreC1C2 Inc RestoreC2C1 Inc 0 F x2=0 {y2} x1=1 {x2} x2=0 {x1} x1\u22641 {x1} y1=1 {y2} y1=0 {x2, y2} y1=0 y1=0 y1=0 y1=0 Choice Test IncC1 < y1=0 y1=0 Test IncC1 > 1 A\u2032 0 B\u2032 1 C\u2032 1 D\u2032 0 y1=0 x1=7 {x1} x2=8 x1=1 {x1} x1=1 Figure 1 Increment C1 module : The module keeps the fractional part of the clock z unchanged. The dashed edges represent unperturbed edges (detailed in Appendix B). chooses a delay t. This t must be \u03b1 2 to simulate the increment correctly. t can be perturbed by an amount \u03b4 by the perurbator, where \u03b4 can be both positive or negative, obtaining x2 = t + \u03b4, x1 = 0, y1 = 1 \u2212\u03b1 + t + \u03b4 on entering D. At D, the delay is \u03b1 \u2212t \u2212\u03b4. Thus the total delay from the entry point A in this module to the mChoice module is 1 time unit. At the entry of the mChoice (mChoice and Restore modules are in Appendix B) module, the clock values are x1 = \u03b1 \u2212t \u2212\u03b4, z = 1 + 1 2j + \u03b52, x2 = \u03b1, y1 = 1, y2 = 0. To correctly simulate the increment of C1, t should be exactly \u03b1 2 . At the mChoice module, perturbator can either continue the simulation (by going through the Restore module) or verify the correctness of controller\u2019s delay (check t = \u03b1 2 ). The mChoice module adds 3 units to the values of x1, x2 and z, and resets y1, y2. Due to the mChoice module, the clock values are x1 = 3 + \u03b1 \u2212t \u2212\u03b4, z = 4 + 1 2j + \u03b52, x2 = 3 + \u03b1, y1 = 1, y2 = 0. If perturbator chooses to continue the simulation, then Restore module brings all the clocks back to normal form. Hence upon entering F, the clock values are x1 = \u03b1 \u2212t \u2212\u03b4, z = 1 2j + \u03b52, x2 = y1 = 1, y2 = 0. This value of x1 is \u03b1 2 + \u03b51, since t = \u03b1 2 and \u03b51 = \u2212\u03b4, the perturbation e\ufb00ect. Let us now see how perturbator veri\ufb01es t = \u03b1 2 by entering the Choice module. The Choice module also adds 3 units to the values of x1, x2 and z, and resets y1, y2. The module Test IncC1 > is invoked to check if t > \u03b1 2 , and the module Test IncC1 < is invoked to check if t < \u03b1 2 . Note that using the mChoice module and the Choice module one after the other, the clock values upon entering Test IncC1 > or Test IncC1 < are x1 = 6+\u03b1\u2212t\u2212\u03b4, z = 7+ 1 2j +\u03b52, x2 = 6 + \u03b1, y1 = 0, y2 = 0. Test IncC1 > : The delay at A\u2032 is 1 \u2212\u03b1 + t + \u03b4, obtaining x2 = 7 + t + \u03b4, and the cost accumulated is 1 \u2212\u03b1 + t + \u03b4. At B\u2032, 1 \u2212t \u2212\u03b4 time is spent, obtaining x1 = 1 \u2212t \u2212\u03b4. Finally, at C\u2032, a time t + \u03b4 is spent, and at D\u2032, one time unit, making the total cost accumulated 2 \u2212\u03b1 + 2t + 2\u03b4 at the target location. The cost function at the target assigns the cost 0 for all valuations, hence the total cost to reach the target is 2 + 2t \u2212\u03b1 + 2\u03b4 which is greater than 2 + 2\u03b4 i\ufb002t \u2212\u03b1 > 0, i.e. i\ufb00t > \u03b1 2 . \u25b6Lemma 9. Assume that an increment Cb (b \u2208{0, 1}) module is entered with the clock valuations in their normal forms. Then controller has a strategy to reach either location lj corresponding to instruction Ij of the two-counter machine or a target location is reached with cost at most 2 + |2\u03b4|, where \u03b4 is the perturbation added by perturbator. 
4.2 Complete Reduction The entire reduction consists of constructing a module corresponding to each instruction I_i, 1 ≤ i ≤ n, of the two-counter machine. The first location of the module corresponding to instruction I1 is the initial location. We simulate the halting instruction I_n by a target location with cost function cf : R^5_{≥0} → {0}. We denote the robust timed automaton simulating the two-counter machine by A; s is the initial state (l, 0, 0). ▶Lemma 10. The two-counter machine halts if and only if there is a strategy σ of the controller such that limcost_A(σ, s) ≤ 2. The details of the decrement and zero-test modules are in Appendix B. They are similar to the increment module; if player 2 desires to verify the correctness of player 1's simulation, a cost > 2 + |2δ| is accumulated on reaching a target location iff player 1 cheats. In the limit, as δ → 0, the limcost will be > 2 iff the controller cheats. The other possibility to obtain a limcost > 2 is when the two-counter machine does not halt.
5 Decidability of One-clock RPTG
(Figure: A dwell-time PTG — location A with price −1 and dwell-time [1, 2], location B with price 1 and dwell-time [0, 3].)
In order to show the decidability of the optimal reachability game for a 1-clock RPTG R and a fixed δ > 0, we perform a series of reachability- and optimal-cost-preserving transformations. The idea is to reduce the RPTG into a simpler priced timed game while preserving the optimal costs. The advantages of this conversion are that the semantics of PTGs are easier to understand and that one could adapt known algorithms to solve PTGs. On the other hand, the PTGs that we obtain are 1-clock PTGs with dwell-time requirements (restrictions on the minimum as well as maximum amount of time spent at certain locations); see, for example, the dwell-time PTG with two locations A and B depicted above. A minimum of 1 and a maximum of 2 units of time should be spent at A, while a maximum of 3 time units can be spent at B. If we wish to model this using standard PTGs, we need one extra clock, and we cannot use the decidability results for 1-clock PTGs to show the decidability of our model. We show in Section 5.4 how to solve 1-clock PTGs with dwell-time requirements. Our transformations are as follows: (i) for a given δ, our first transformation reduces the RPTG R into a dwell-time PTG G (Section 5.1); (ii) our second transformation restricts to dwell-time PTGs where the clock is bounded by 1 + δ; to achieve this, we use a notion of fractional resets, and denote these PTGs as GF (Section 5.2); (iii) our third and last transformation removes the resets from GF (Section 5.3). The reset-free dwell-time PTG is denoted GF′. For each transformation, we prove that the optimal cost in each state of the original game is the same as the optimal cost at some corresponding state of the new game. We also show that an (ε, N)-strategy of the original game can be computed from some (ε′, N′)-strategy in the new game. The details of each transformation and its correctness are established in the subsequent sections. We then solve GF′ employing a technique inspired by [5], while ensuring that the robust semantics are satisfied.
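A minimal way to record the dwell-time restrictions used from here on, together with a check that a play respects them, is sketched below; the two locations and their bounds follow the example above, while the encoding itself is an illustrative choice.

```python
# Dwell-time bounds per location: any visit must last between lo and hi time units.
dwell = {"A": (1.0, 2.0), "B": (0.0, 3.0)}      # the two-location example above

def respects_dwell_times(steps):
    """steps = [(location, time_spent), ...]"""
    return all(lo <= t <= hi for loc, t in steps for lo, hi in [dwell[loc]])

print(respects_dwell_times([("A", 1.5), ("B", 2.0)]))   # True
print(respects_dwell_times([("A", 0.5), ("B", 2.0)]))   # False: A is left too early
```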
5.1 Transformation 1: RPTG R to dwell-time PTG G
(Figure: R and G — a controller edge e = (A, g, r, B) of R, with A of price k and B of price k′, and the corresponding gadget in G: A with guard g′ leads to the urgent player 2 location (A, e) of price 0, which branches to (A, e)+ with dwell-time [δ, 2δ] and (A, e)− with dwell-time [0, δ], both of price k, and from there, with reset r, to B.)
Given a one-clock RPTG R = (L1, L2, {x}, X, η, T, fgoal) and a δ > 0, we construct a dwell-time PTG G = (L1, L2 ∪ L′, {x}, X′, η′, T, fgoal). All the controller and perturbator locations of R (L1 and L2) are carried over as player 1 and player 2 locations in G, respectively. In addition, we have some new player 2 locations L′ in G. The dwell-time PTG G has dwell-time restrictions for the new player 2 locations L′: the locations of L′ are either urgent, or have a dwell-time of [δ, 2δ] or [0, δ]. All the perturbator transitions of R are retained as they are in G. Every transition in R from a controller location A to some location B is replaced in G by a game graph as shown. Let e = (A, g, r, B) be the transition from a controller location A to a location B with guard g and reset r. Depending on the guard g, in the transformed game graph we have the new guard g′: if g is x = H, then g′ is x = H − δ, while if g is H < x < H + 1, then g′ is H − δ < x < H + 1 − δ, for H > 0. When g is 0 < x < K, then g′ is 0 ≤ x < K − δ, and x = 0 stays unchanged. It can be seen that applying this transformation to all the controller edges of an RPTG R gives rise to a dwell-time PTG G. Let us consider the transition from A to B in R. Assume that the transition from A to B (called edge e) had a constraint x = 1, and assume that x = ν on entering A. Then, in R, the controller elapses a time 1 − ν and reaches B; however, on reaching B, the value of x is in the range [1 − δ, 1 + δ] depending on the perturbation. Also, the cost accumulated at A is k · (1 − ν + γ), where γ ∈ [−δ, δ]. To take into consideration these semantic restrictions of R, we transform the RPTG R into a dwell-time PTG G. First of all, we change the constraint x = 1 into x = 1 − δ from A (a player 1 location) and enter a new player 2 location (A, e). This player 2 location is an urgent location. The correct strategy for player 1 is to spend a time 1 − ν − δ at A (corresponding to the time 1 − ν he spent at A in R). At (A, e), player 2 can proceed to one of the player 2 locations (A, e)− or (A, e)+. The player 2 location (A, e) models the perturbator's choices of positive or negative perturbation in R. If player 2 goes to (A, e)−, then on reaching B the value of x is in the interval [1 − δ, 1] (this corresponds to the perturbator's choice in [−δ, 0] in R), and if he goes to (A, e)+, then the value of x at B is in the interval [1, 1 + δ] (this corresponds to the perturbator's choice in [0, δ] in R). The reset happening on the transition from A to B in R is now done on the transitions from (A, e)− to B and from (A, e)+ to B. Thus, the possible ranges of x as well as the accumulated cost in R while reaching B are preserved in the transformed dwell-time PTG. ▶Lemma 11. Let R be an RPTG and G the corresponding dwell-time PTG obtained using the transformation above. Then for every state s in R, OptCost_R(s) = OptCost_G(s). An (ε, N)-strategy in R can be computed from an (ε, N)-strategy in G and vice versa. Proof in Appendix C.
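The guard-shifting rule of Transformation 1 can be written down directly; the tuple encoding of guards below is only an illustration of the four cases described above.

```python
# Shift a one-clock guard by delta, following Transformation 1:
#   x = H (H > 0)      ->  x = H - delta
#   H < x < H + 1      ->  H - delta < x < H + 1 - delta      (H > 0)
#   0 < x < K          ->  0 <= x < K - delta
#   x = 0              ->  unchanged
def shift_guard(guard, delta):
    kind = guard[0]
    if kind == "eq":                       # x = H
        H = guard[1]
        return guard if H == 0 else ("eq", H - delta)
    if kind == "unit_open":                # H < x < H + 1, H > 0
        H = guard[1]
        return ("open", H - delta, H + 1 - delta)
    if kind == "zero_open":                # 0 < x < K
        K = guard[1]
        return ("half_open", 0, K - delta) # 0 <= x < K - delta
    raise ValueError(f"unknown guard kind: {kind}")

delta = 0.1
print(shift_guard(("eq", 1), delta))          # ('eq', 0.9)
print(shift_guard(("unit_open", 2), delta))   # ('open', 1.9, 2.9)
print(shift_guard(("zero_open", 3), delta))   # ('half_open', 0, 2.9)
print(shift_guard(("eq", 0), delta))          # ('eq', 0): x = 0 stays unchanged
```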
5.2 Transformation 2: Dwell-time PTG G to Dwell-time FRPTG GF

[Figure: the gadget of Section 5.1 redrawn in GF for integral part b: A_b (price k) with guard x = 1 − δ leads to the urgent location (A, e)_b (price 0), which leads to (A, e)+_b (price k, dwell-time [δ, 2δ]) and (A, e)−_b (price k, dwell-time [0, δ]); edges with reset r lead to B_b (price k′), while (A, e)+_b has a transition with guard x ≥ 1 and fractional reset [x] := 0 to the urgent location (A, e)0_{b+1}, which leads to B_{b+1} (price k′).]

Recall that the locations of the dwell-time PTG G are L1 ∪ L2 ∪ L′, where L1 ∪ L2 are the locations of R and L′ are the new player 2 locations introduced in G. In this section, we transform the dwell-time PTG G into a dwell-time PTG GF with the restriction that the value of x is in [0, 1] at all locations corresponding to L1 ∪ L2, and in [0, 1 + δ] at all locations corresponding to L′. While this transformation is the same as that used in [5], the main difference is that we introduce special resets, called fractional resets, which reset only the integral part of clock x while its fractional part is retained. For instance, if the value of x was 1.3, then the operation [x] := 0 makes the value of x equal to 0.3.

Given a one-clock dwell-time PTG G = (L1, L2 ∪ L′, {x}, X, η, T, fgoals) with M being the maximum value that can be assumed by clock x, we define a dwell-time PTG with fractional resets (FRPTG) GF. In GF, we have M + 1 copies of the locations in L1 ∪ L2, as well as of the locations in L′ with dwell-time [0, δ] or [0, 0]; these copies of L′ keep the same dwell-time restrictions in GF. The copies are indexed by i, 0 ≤ i ≤ M, capturing the integral part of clock x in G. Finally, G has the locations of L′ with dwell-time restriction [δ, 2δ]; for each such location (A, e)+, we have in GF the locations (A, e)+_i and (A, e)0_{i+1}, for 0 ≤ i ≤ M. The dwell-time restriction of (A, e)+_i is the same as that of (A, e)+, while the locations (A, e)0_{i+1} are urgent. The prices of locations are carried over as they are to the various copies. The transitions of GF consist of the following (here g − i denotes the constraint obtained from g by shifting it by −i): (1) l_i --(g − i) ∧ 0 ≤ x < 1--> m_i if l --g--> m ∈ X; (2) l_i --(g − i) ∧ 0 ≤ x < 1; {x}--> m_0 if l --g; {x}--> m ∈ X; (3) l_i --x = 1; {x}--> l_{i+1} for l ∈ L1 ∪ L2, and (A, e)+_i --x ≥ 1; [x] := 0--> (A, e)0_{i+1} for i < M.

Consider, for example, the constraint g′ between A and (A, e) being x = (b + 1) − δ in G. Then the value of x is b + (1 − δ), for b < M, when (A, e)+ is entered in G. The location (A, e)+ with ν(x) = b + (1 − δ) is represented in GF as (A, e)+_b with ν(x) = 1 − δ. If player 2 spends [δ, 2δ] time at (A, e)+ in G, then ν(x) ∈ [b + 1, b + 1 + δ]. If there is no reset on the way to B, then ν(x) ∈ [b + 1, (b + 1) + δ] at B. Correspondingly, in GF, ν(x) ∈ [1, 1 + δ] at (A, e)+_b. By construction, B_b is not reachable, since we check 0 ≤ x < 1 on the transition to B_b. The fractional reset is employed to retain the fractional part x ∈ [0, δ] while moving to (A, e)0_{b+1}. This ensures that x ∈ [0, δ] on reaching B_{b+1}, thereby preserving the perturbation and keeping x < 1. A normal reset would have destroyed the value obtained by the perturbation.
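A fractional reset differs from an ordinary reset only in what it keeps. A minimal sketch (the function names are ours):

import math

def ordinary_reset(x):
    # x := 0 -- discards the whole clock value, including any accumulated perturbation.
    return 0.0

def fractional_reset(x):
    # [x] := 0 -- discards only the integral part; the fractional part survives.
    # E.g. 1.3 -> 0.3, so a perturbation carried past clock value 1 is preserved.
    return x - math.floor(x)

assert ordinary_reset(1.3) == 0.0
assert abs(fractional_reset(1.3) - 0.3) < 1e-9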
The mapping f between states of G and GF is as follows, for b < M and x ∈ [b, b + 1]: f(l, x) = (l_b, x − b) for l ∈ L1 ∪ L2; f((A, e), x) = ((A, e)_b, x − b); f((A, e)−, x) = ((A, e)−_b, x − b). Finally, f((A, e)+, x) = ((A, e)+_b, x − b) for b < M and x ∈ [b, b + 1] ∪ [b + 1, b + 2]. Note that in the last case the value of x − b can exceed 1, but is at most 1 + δ.

▶ Lemma 12. For every state (l, ν) in G, OptCost_G(l, ν) = OptCost_GF(f(l, ν)). For every ε > 0 and N ∈ ℕ, an (ε, N)-acceptable strategy in G can be computed from an (ε, N)-acceptable strategy in GF and vice versa.

5.3 Transformation 3: Dwell-time FRPTG GF to reset-free FRPTG GF′

We now apply the final transformation to the FRPTG GF and construct a reset-free version of it, denoted GF′. Assume that there are a total of n resets (including fractional resets) in the FRPTG. GF′ consists of n + 1 copies of the FRPTG: GF_0, GF_1, ..., GF_n. Given the locations L of the FRPTG, the locations of GF_i are L_i, 0 ≤ i ≤ n. GF_0 starts with l_0, where l is the initial location of the FRPTG, and continues until a resetting transition happens. At the first resetting transition, GF_0 makes a transition to GF_1. On the (n + 1)th reset, the nth copy is directed to a sink target location S with cost function cf : ℝ≥0 → {+∞}. Note that each GF_i is reset-free. One crucial property of each GF_i is that, on entering it with some value of x in [0, δ], the value of x only increases along the transitions of GF_i; moreover, x ≤ 1 + δ in each GF_i by construction. The formal details and the proof of Lemma 13 can be found in Appendix E.

[Figure: an example of two cost functions f1, f2 plotted against x, their superimposition, their interior (lower envelope), and their exterior (upper envelope).]

Using the cost function of S and those of the targets, we compute the optimal cost functions for all the locations of the deepest component GF_n. The cost functions of the locations of GF_i are used to compute those of GF_{i−1}, and so on, until the cost function of l_0, the starting location of GF_0, is computed. An example can be seen in Appendix F.

▶ Lemma 13. For every state (l, ν) in GF, OptCost_GF(l, ν) = OptCost_GF′(l_0, ν), where GF′ is the reset-free FRPTG. For every ε > 0 and N ∈ ℕ, given an (ε, N)-acceptable strategy σ′ in GF′, we can compute a (2ε, N)-acceptable strategy σ in GF and vice versa.

5.4 Solving the Reset-free FRPTG

Before we sketch the details, let us introduce some key notation. Observe that after our simplifying transformations, the cost functions cf are piecewise-affine continuous functions that assign a value to every valuation x ∈ [0, 1 + δ] (the construction of the FRPTG ensures x ≤ 1 + δ always). The interior of two cost functions f1 and f2 is the cost function f3 : [0, 1 + δ] → ℝ defined by f3(x) = min(f1(x), f2(x)). Similarly, the exterior of f1 and f2 is the cost function f4 : [0, 1 + δ] → ℝ defined by f4(x) = max(f1(x), f2(x)). Clearly, f3 and f4 are also piecewise-affine continuous. The interior and exterior can easily be computed by superimposing f1 and f2 and taking the lower and upper envelope respectively, as shown graphically in the example figure.
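Since interiors and exteriors of piecewise-affine continuous functions are used repeatedly below, here is a small Python sketch of how they could be computed by superimposition. The breakpoint-list representation and the names evaluate and envelope are our own assumptions, not the paper's; for more than two cost functions, the envelope can simply be folded pairwise.

from bisect import bisect_right

# A piecewise-affine continuous cost function is a list of breakpoints
# [(x0, y0), ..., (xk, yk)] with x0 < ... < xk, read by linear interpolation.

def evaluate(f, x):
    xs = [p[0] for p in f]
    i = max(1, min(len(f) - 1, bisect_right(xs, x)))
    (x0, y0), (x1, y1) = f[i - 1], f[i]
    t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def envelope(f1, f2, lower=True):
    # Interior (lower=True, pointwise min) or exterior (lower=False, pointwise max).
    pick = min if lower else max
    xs = sorted({x for x, _ in f1} | {x for x, _ in f2})
    extra = []
    for a, b in zip(xs, xs[1:]):
        # On [a, b] both functions are affine, so there is at most one crossing.
        da = evaluate(f1, a) - evaluate(f2, a)
        db = evaluate(f1, b) - evaluate(f2, b)
        if da * db < 0:
            extra.append(a + (b - a) * da / (da - db))
    xs = sorted(set(xs) | set(extra))
    return [(x, pick(evaluate(f1, x), evaluate(f2, x))) for x in xs]

# Example over [0, 1]: f1 falls from 3 to 0, f2 is constant 1; they cross at x = 2/3.
f1 = [(0.0, 3.0), (1.0, 0.0)]
f2 = [(0.0, 1.0), (1.0, 1.0)]
interior = envelope(f1, f2, lower=True)    # lower envelope (min)
exterior = envelope(f1, f2, lower=False)   # upper envelope (max)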
We now work on the reset-free components GF_i and give an algorithm to compute OptCost_GFi(l, ν) for every state (l, ν) of GF_i with ν(x) ∈ [0, 1 + δ]. We also show the existence of an N such that, for any ε > 0 and every l ∈ L_i, ν(x) ∈ [0, 1 + δ], an (ε, N)-acceptable strategy can be computed. Consider the location of GF_i that has the smallest price and call it l_min. If this is a player 1 location, then intuitively player 1 would want to spend as much time as possible here, and if it is a player 2 location, then player 2 would want to spend as little time as possible here. By our assumption, all the cycles in GF_i are non-negative, and hence if l_min is part of a cycle, revisiting it can only increase the total cost, if at all. Player 1 thus would like to spend all the time he wants during the first visit itself. We now prove that this is indeed the case. We consider two cases separately.

5.4.1 l_min is a Player 1 location

We split GF_i so that l_min is visited only once. We transform GF_i into GF′′, which has two copies of all locations except l_min: corresponding to every location l ≠ l_min, we have the copies (l, 0) and (l, 1). A special target location S is added, with a cost function assigning +∞ to all clock valuations.

[Figure: unrolling of a component with locations A, B, l_min, C, D — every location other than l_min is duplicated into copies (·, 0) and (·, 1); transitions into l_min from the copy-1 part are redirected to the sink with cost +∞.]

Given the transitions X of GF_i, the FRPTG GF′′ has the following transitions: (i) if l --g--> l′ ∈ X and l, l′ ≠ l_min, then (l, 0) --g--> (l′, 0) and (l, 1) --g--> (l′, 1); (ii) if l --g--> l′ ∈ X and l′ = l_min, then (l, 0) --g--> l_min and (l, 1) --g--> S; (iii) if l_min --g--> l, then l_min --g--> (l, 1).

▶ Lemma 14. For every state (l, ν) with ν ∈ [0, 1 + δ] and l ≠ l_min, we have OptCost_GFi(l, ν) = OptCost_GF′′((l, 0), ν), and OptCost_GFi(l_min, ν) = OptCost_GF′′(l_min, ν).

We give an intuition for Lemma 14. Locations (l, 0) have all the transitions available to location l in GF_i. Also, by construction of GF′′, any play in GF′′ that is compatible with a winning strategy of player 1 in GF_i contains only one of the locations (l, 0), (l, 1). The outcomes from (l, 0) are more favourable than those from (l, 1) for l as a player 1 location. Based on these intuitions, we conclude that OptCost_GFi(l, ν) is the same as that of ((l, 0), ν). This observation also means that the ε-optimal strategy is the same as that of (l, 0): given a strategy σ′ in GF′′, we construct σ in GF_i as σ(l, ν) = σ′((l, 0), ν). Further, any strategy that revisits l_min in GF_i cannot be winning for player 1, since all cycles are non-negative; in GF′′ such a strategy ends up at S with cost ∞. However, all strategies that do not revisit l_min in GF_i are preserved in GF′′, and hence OptCost_GFi(l_min, ν) = OptCost_GF′′(l_min, ν). We iteratively solve the part of GF′′ with locations indexed 1 (i.e., (l, 1)) in the same fashion (picking minimal-price locations), each time obtaining a smaller PTG. Computing the cost function of the minimal-price location of the last such PTG and propagating it backward, we compute the cost function of l_min. We then use the cost function of l_min to solve the part of GF′′ with locations indexed 0 (i.e., (l, 0)).
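The duplication of locations into copies (l, 0) and (l, 1) can be expressed as a small graph transformation. The sketch below (our own encoding of transitions as (source, guard, target) triples; the name unroll and the sink name S are hypothetical) follows the three transition rules above.

def unroll(transitions, l_min, sink="S"):
    # Duplicate every location l != l_min into (l, 0) and (l, 1); route re-entries of
    # l_min from the copy-1 part to the +infinity sink, so l_min is visited at most once.
    out = []
    for (l, g, m) in transitions:
        if l != l_min and m != l_min:
            out.append(((l, 0), g, (m, 0)))
            out.append(((l, 1), g, (m, 1)))
        elif l != l_min and m == l_min:
            out.append(((l, 0), g, l_min))
            out.append(((l, 1), g, sink))      # revisiting l_min is made losing
        else:  # l == l_min
            out.append((l_min, g, (m, 1)))
    return out

# Tiny example: a cycle A -> l_min -> B -> A.
example = unroll([("A", "g1", "lmin"), ("lmin", "g2", "B"), ("B", "g3", "A")], "lmin")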
Computing the OptCost function of l_min: Algorithm 1 computes the optcost function for a player 1 location l_min, assuming all the constraints on outgoing transitions from l_min are the same, namely x ∈ [0, 1]. We discuss how to adapt the algorithm to transitions with different constraints in Appendix G. A few words on the notation used: if a location l has price η(l), then the slope associated with l is −η(l) (see STEP 3 of Algorithm 1).

Algorithm 1: Optimal cost algorithm when l_min is a player 1 location.
  Let l_1, ..., l_n be the successors of l_min, with optcost functions f_1, ..., f_n.
  STEP 1 (Superimpose): superimpose all the optcost functions f_1, ..., f_n.
  STEP 2 (Interior): take the interior of the superimposition; call it f. Let f be composed of line segments g_1, ..., g_m such that g_i ∈ {f_1, ..., f_n} for all i; for every k, let the domain of g_k be [u_k, v_k]. Set i = m.
  STEP 3 (Selective replacement): while i ≥ 1 do:
    if the slope of g_i is ≤ −η(l_min): replace g_i by the line h_i with slope −η(l_min) passing through (v_i, g_i(v_i)); let h_i intersect g_j (for the largest j < i) at some point x = v′′_j with v′′_j ∈ [u_j, v_j]; update the domain of g_j from [u_j, v_j] to [u_j, v′′_j]; if j < i − 1, remove the functions g_{j+1}, ..., g_{i−1} from f; set i = j;
    else: set i = i − 1.
  STEP 4 (Refresh interior): take the interior obtained after STEP 3 and call it f′. If l′′ → l_min, update the optcost function of l′′.

Let l_1, ..., l_n be the successors of l_min, with cost functions f_1, ..., f_n. Each of these cost functions is piecewise-affine continuous over the domain [0, 1]. The first thing to do is to superimpose f_1, ..., f_n and obtain the cost function f corresponding to the interior of f_1, ..., f_n (l_min is a player 1 location and would like to obtain the minimal cost, hence the interior). The line segments comprising f come from the various f_i. Let dom(f) = [0, 1] be composed of 0 = u_{i_1} ≤ v_{i_1} = u_{i_2} ≤ ... ≤ u_{i_m} ≤ v_{i_m} = 1; that is, f(x) = f_{i_j}(x) on dom(f_{i_j}) = [u_{i_j}, v_{i_j}], with i_j ∈ {1, 2, ..., n} and 1 ≤ j ≤ m. Let us denote f_{i_j} by g_j, for 1 ≤ j ≤ m. Then f is composed of g_1, g_2, ..., g_m, and dom(f) is composed of dom(g_1), ..., dom(g_m) from left to right. Let dom(g_i) = [u_i, v_i]. STEP 2 of the algorithm achieves this. For a given valuation ν(x), if l_min is an urgent location, then player 1 goes to a location l_k such that f(ν(x)) = g_k(ν(x)) (the least cost is given by g_k, obtained from the outside cost function of l_k). If l_min is not an urgent location, then player 1 would prefer delaying t units at l_min so that ν(x) + t ∈ [u_i, v_i], rather than going to some location l_i, whenever g_i(ν(x)) > η(l_min)(v_i − ν(x)) + g_i(v_i). Again, g_i is part of the outside cost function of l_i, and player 1 prefers delaying at l_min rather than going to l_i since that minimises the cost. In this case, the cost function f is refined by replacing the line segment g_i over [u_i, v_i] by another line segment h_i passing through (v_i, g_i(v_i)) and having slope −η(l_min). STEP 3 of the algorithm does this.
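What STEPs 3 and 4 compute symbolically, segment by segment, is the function f′(x) = min over v ∈ [x, 1] of η(l_min)·(v − x) + f(v): the cheapest way to wait at l_min until some clock value v and then leave with cost f(v). The brute-force sketch below (our own helper, not the paper's algorithm) evaluates the same quantity on a grid and can serve as a sanity check for a segment-based implementation.

def wait_refined(f, eta_lmin, x, steps=1000):
    # Approximate f'(x) by discretizing the choice of the departure point v in [x, 1].
    best = f(x)                                  # leaving immediately is always allowed
    for k in range(steps + 1):
        v = x + (1.0 - x) * k / steps
        best = min(best, eta_lmin * (v - x) + f(v))
    return best

# Example: the successor cost f falls from 3 at x = 0 to 0 at x = 1 (slope -3), while
# waiting at l_min costs eta = 1 per time unit, so waiting until v = 1 is optimal.
f = lambda x: 3.0 * (1.0 - x)
print(wait_refined(f, eta_lmin=1.0, x=0.25))     # ~ 0.75, versus f(0.25) = 2.25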
Recall that by our Transformation 2, the value of clock x at any player 1 location is ≤ 1 − δ. The value of x is in [1 − δ, 1 + δ] only at a player 2 location ((A, e)+_b in the FRPTG, Section 5.2). Hence the domain of the cost functions of player 1 locations is actually [0, 1 − δ], and not [0, 1 + δ]. Let the domain of g_m be [u_m, 1]. Then we can split g_m into two functions g_m^1, g_m^2 with domains [u_m, 1 − δ] and [1 − δ, 1]. Now we ensure that no time is spent at the player 1 location l_min over dom(g_m^2), by not applying STEP 3 of the algorithm to g_m^2. This way, selective replacement of the cost functions g_i occurs only on the domain [0, 1 − δ], and we remain faithful to Transformation 2 and the semantics of RPTGs.

Computing Almost Optimal Strategies: The strategy corresponding to the computed optcost is derived as follows. f′ is the optcost of location l_min computed in STEP 4 of the algorithm. f′ is composed of two kinds of functions: (a) the functions g_i computed in STEP 2 as the interior of the superimposition, and (b) the functions h_i which replaced some functions g_j of f, corresponding to delay at l_min. For functions h_j of f′ with domain [u_j, v_j], we prescribe the strategy that delays at l_min until x = v_j when l_min is entered with clock x ∈ [u_j, v_j]. For functions g_i that come from f at STEP 2, where g_i is part of some optcost function f_k (f_k being the optcost function of one of the successors l_k of l_min), the strategy dictates moving immediately to l_k when l_min is entered with clock x ∈ [u_i, v_i].

Termination: Finally, we prove the existence of a bound N on the number of affine segments that appear in the cost functions of all locations. Start with a reset-free FRPTG with m locations having p segments in the outside cost functions, and let α(m, p) denote the total number of affine segments appearing in the cost functions across all locations. The transformation of the reset-free component GF into GF′′ gives rise to two smaller reset-free FRPTGs with m − 1 locations each, after separating out l_min. The reset-free FRPTG (GF, 1) with the m − 1 locations indexed 1, of the form (l, 1), is solved first; its cost functions are added as outside cost functions to solve l_min; and finally, the cost function of l_min is added as an outside cost function to solve the reset-free FRPTG (GF, 0) with the m − 1 locations indexed 0, of the form (l, 0). Taking into account the new sink target location added, we have at most p + 1 segments in the outside cost functions of (GF, 1). This gives at most β = α(m − 1, p + 1) segments in solving (GF, 1), then γ = α(1, p + β) segments to solve l_min, and finally α(m − 1, p + γ) segments to solve (GF, 0). Solving this recurrence, one can check that α(m, p) is at most triply exponential in the number of locations m of the reset-free component GF. Having obtained a bound on the number of affine segments, it is easy to see that Algorithm 1 terminates; the time taken to compute almost optimal strategies and optcost functions is triply exponential. We illustrate the computation of the optcost of a player 1 location in Figure 2. The proof of Lemma 15 is given in Appendix G, while Lemma 16 follows from Lemma 15 and STEP 4 of Algorithm 1.
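As an illustration of how the prescribed strategy (see "Computing Almost Optimal Strategies" above) can be read off f′, the following sketch tags each affine piece either as a "wait" piece h_j or as a "move" piece g_i and returns the corresponding action; the piece encoding and the function name prescribe are ours, purely for exposition.

def prescribe(pieces, x):
    # pieces: list of (u, v, kind, target) with kind in {"wait", "move"}, sorted by u,
    # covering the domain of f'.  "wait" means delay at l_min until clock = v; "move"
    # means move immediately to the successor stored in target.
    for (u, v, kind, target) in pieces:
        if u <= x <= v:
            if kind == "wait":
                return ("delay until clock =", v)
            return ("move now to", target)
    raise ValueError("clock value outside the domain of f'")

# Example: on [0, 0.5] the optimum is to wait until x = 0.5; on [0.5, 1] it is to move to l_k.
pieces = [(0.0, 0.5, "wait", None), (0.5, 1.0, "move", "l_k")]
print(prescribe(pieces, 0.2))   # ('delay until clock =', 0.5)
print(prescribe(pieces, 0.7))   # ('move now to', 'l_k')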
▶ Lemma 15. In Algorithm 1, if a function g_i (in f of STEP 2) has domain [u_i, v_i] and slope ≤ −η(l), then OptCost(l, ν) = (v_i − ν) · η(l) + g_i(v_i).

▶ Lemma 16. The function f′ in Algorithm 1 computes the optcost at any location l; that is, for all x ∈ [0, 1], OptCost_G(l, x) = f′(x).

Note that the strategy under construction is a player 1 strategy, and player 1 has no control over the interval [1, 1 + δ]: the clock satisfies x ∈ [1, 1 + δ] only after a positive perturbation, and this interval is under player 2's control. Thus, at a player 1 location, proving the claim for x ∈ [0, 1] suffices.

5.4.2 l_min is a Player 2 location

If l_min is a player 2 location of the reset-free component GF_i, then intuitively player 2 would want to spend as little time as possible there. Keeping this in mind, we first run STEPs 1 and 2 of Algorithm 1 taking the exterior of f_1, ..., f_n instead of the interior (player 2 wants to maximise the cost); there is no time elapse at l_min in STEPs 1 and 2. Let f be the exterior computed by STEPs 1 and 2. If f comprises functions g_i having a slope greater than −η(l), then it is better to delay at l_min to increase the cost; in this case, player 2 improves his optcost using STEP 3, by spending time at l_min. Finally, in STEP 4, we take the exterior of the replaced functions h_i and the old functions g_i. Recall that our transformations resulted in three kinds of player 2 locations: urgent, those with dwell-time restriction [0, δ], and those with [δ, 2δ]. The three cases are discussed in detail in Appendix H. 6" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file