diff --git "a/abs_29K_G/test_abstract_long_2405.01029v2.json" "b/abs_29K_G/test_abstract_long_2405.01029v2.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01029v2.json" @@ -0,0 +1,457 @@ +{ + "url": "http://arxiv.org/abs/2405.01029v2", + "title": "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts", + "abstract": "Learning to solve vehicle routing problems (VRPs) has garnered much\nattention. However, most neural solvers are only structured and trained\nindependently on a specific problem, making them less generic and practical. In\nthis paper, we aim to develop a unified neural solver that can cope with a\nrange of VRP variants simultaneously. Specifically, we propose a multi-task\nvehicle routing solver with mixture-of-experts (MVMoE), which greatly enhances\nthe model capacity without a proportional increase in computation. We further\ndevelop a hierarchical gating mechanism for the MVMoE, delivering a good\ntrade-off between empirical performance and computational complexity.\nExperimentally, our method significantly promotes zero-shot generalization\nperformance on 10 unseen VRP variants, and showcases decent results on the\nfew-shot setting and real-world benchmark instances. We further conduct\nextensive studies on the effect of MoE configurations in solving VRPs, and\nobserve the superiority of hierarchical gating when facing out-of-distribution\ndata. The source code is available at:\nhttps://github.com/RoyalSkye/Routing-MVMoE.", + "authors": "Jianan Zhou, Zhiguang Cao, Yaoxin Wu, Wen Song, Yining Ma, Jie Zhang, Chi Xu", + "published": "2024-05-02", + "updated": "2024-05-06", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Learning to solve vehicle routing problems (VRPs) has garnered much\nattention. However, most neural solvers are only structured and trained\nindependently on a specific problem, making them less generic and practical. In\nthis paper, we aim to develop a unified neural solver that can cope with a\nrange of VRP variants simultaneously. Specifically, we propose a multi-task\nvehicle routing solver with mixture-of-experts (MVMoE), which greatly enhances\nthe model capacity without a proportional increase in computation. We further\ndevelop a hierarchical gating mechanism for the MVMoE, delivering a good\ntrade-off between empirical performance and computational complexity.\nExperimentally, our method significantly promotes zero-shot generalization\nperformance on 10 unseen VRP variants, and showcases decent results on the\nfew-shot setting and real-world benchmark instances. We further conduct\nextensive studies on the effect of MoE configurations in solving VRPs, and\nobserve the superiority of hierarchical gating when facing out-of-distribution\ndata. 
The source code is available at:\nhttps://github.com/RoyalSkye/Routing-MVMoE.", + "main_content": "Introduction Vehicle routing problems (VRPs) are a class of canonical combinatorial optimization problems (COPs) in operation research and computer science, with a wide spectrum of 1College of Computing and Data Science, Nanyang Technological University, Singapore 2School of Computing and Information Systems, Singapore Management University, Singapore 3Department of Information Systems, Eindhoven University of Technology, The Netherlands 4Institute of Marine Science and Technology, Shandong University, China 5Singapore Institute of Manufacturing Technology (SIMTech), Agency for Science, Technology and Research (A*STAR), Singapore. Correspondence to: Yaoxin Wu . Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). applications in logistics (Cattaruzza et al., 2017), transportation (Wu et al., 2023), and manufacturing (Zhang et al., 2023). The intrinsic NP-hard nature makes VRPs exponentially expensive to be solved by exact solvers. As an alternative, heuristic solvers deliver suboptimal solutions within reasonable time, but need substantial domain expertise to be designed for each problem. Recently, learning to solve VRPs has received much attention (Bengio et al., 2021; Bogyrbayeva et al., 2024), with fruitful neural solvers being developed. Most of them apply deep neural networks to learn solution construction policies via various training paradigms (e.g., reinforcement learning (RL)). Besides gaining decent performance, they are characterized by less computational overhead and domain expertise than conventional solvers. However, prevailing neural solvers still need network structures tailored and trained independently for each specific VRP, instigating prohibitive training overhead and less practicality when facing multiple VRPs. In this paper, we aim to develop a unified neural solver, which can be trained for solving a range of VRP variants simultaneously, and has decent zero-shot generalization capability on unseen VRPs. A few recent works explore similar problem settings. Wang & Yu (2023) applies multi-armed bandits to solve multiple VRPs, while Lin et al. (2024) adapts the model pretrained on one base VRP to target VRPs by efficient fine-tuning. They fail to achieve zero-shot generalization to unseen VRPs due to the dependence on networks structured for predetermined problem variants. Liu et al. (2024) empowers the neural solver with such generalizability by the compositional zero-shot learning (Ruis et al., 2021), which treats VRP variants as different combinations of a set of underlying attributes and uses a shared network to learn their representations. However, it still leverages existing network structure proposed for simple VRPs, which is limited by its model capacity and empirical performance. Motivated by the recent advance of large language models (LLMs) (Kaplan et al., 2020; Floridi & Chiriatti, 2020; Touvron et al., 2023), we propose a multi-task VRP solver with mixture-of-experts (MVMoE). Typically, a mixture-ofexpert (MoE) layer replaces a feed-forward network (FFN) with several \"experts\" in a Transformer-based model, which are a group of FFNs with respective trainable parameters. 
An input to the MoE layer is routed to specific expert(s) by a gating network, and only the parameters in the selected expert(s) are activated (i.e., conditional computation (Jacobs et al., 1991; Jordan & Jacobs, 1994)). In this manner, the partially activated parameters effectively enhance the model capacity without a proportional increase in computation, making the training and deployment of LLMs viable.
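To make this concrete, a minimal sparse MoE layer with top-k gating might look like the sketch below (PyTorch; the module structure, dimensions, and softmax-renormalized top-k weights are our illustrative assumptions, not the paper's exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal sparse mixture-of-experts layer: a gating network routes each
    input to its top-k experts, so only k of the m expert FFNs are computed."""

    def __init__(self, dim=128, hidden=512, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is an FFN with its own trainable parameters.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(dim, num_experts)  # gating network

    def forward(self, x):                      # x: (batch, dim)
        scores = self.gate(x)                  # (batch, num_experts)
        top_val, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_val, dim=-1)   # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx = top_idx[:, slot]             # expert chosen per input
            for e in idx.unique().tolist():
                sel = idx == e                 # inputs routed to expert e
                out[sel] += weights[sel, slot].unsqueeze(-1) * self.experts[e](x[sel])
        return out                             # unselected experts cost nothing
```

With m experts and top-k routing, the parameter count grows with m while the per-input computation scales only with k, which is the conditional-computation effect described above.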
Therefore, towards a more generic and powerful neural solver, we propose an MoE-based neural VRP solver, and present a hierarchical gating mechanism that strikes a good trade-off between empirical performance and computational complexity. We choose the setting from Liu et al. (2024) as a test bed due to its potential to solve an exponential number of new VRP variants, i.e., any combination of the underlying attributes. Our contributions are summarized as follows. 1) We propose a unified neural solver, MVMoE, to solve multiple VRPs, which is the first to bring MoEs into the study of COPs. A single MVMoE can be trained on diverse VRP variants and exhibits a strong zero-shot generalization capability on unseen VRPs. 2) We develop a hierarchical gating mechanism for MVMoE to attain a favorable balance between empirical performance and computational overhead. Surprisingly, it exhibits much stronger out-of-distribution generalization capability than the base gating. 3) Extensive experiments demonstrate that MVMoE significantly improves zero-shot generalization over baselines on 10 unseen VRP variants, and achieves decent results in the few-shot setting and on real-world instances. We further provide extensive studies on the effect of MoE configurations (such as the position of MoEs, the number of experts, and the gating mechanism) on zero-shot generalization performance. 2. Related Work Neural VRP Solvers. Two mainstreams exist in the literature on learning to solve VRPs. 1) Construction-based solvers, which learn policies to construct solutions in an end-to-end manner. Vinyals et al. (2015) proposes the Pointer Network to estimate the optimal solution to the traveling salesman problem (TSP) in an autoregressive way. Follow-up works apply RL to explore better approximate solutions to TSP (Bello et al., 2017) and the capacitated vehicle routing problem (CVRP) (Nazari et al., 2018). Kool et al. (2018) proposes an attention-based model (AM) that uses the Transformer to solve a series of VRPs independently. By leveraging the symmetry in solutions, Kwon et al. (2020) proposes policy optimization with multiple optima (POMO) to further improve performance on TSP and CVRP. Other construction-based solvers are often developed on top of AM and POMO (Kwon et al., 2021; Li et al., 2021a; Kim et al., 2022; Berto et al., 2023; Chen et al., 2023; Grinsztajn et al., 2023; Chalumeau et al., 2023; Hottung et al., 2024). Besides the autoregressive manner, several works construct a heatmap to solve VRPs non-autoregressively (Joshi et al., 2019; Fu et al., 2021; Kool et al., 2022; Qiu et al., 2022; Sun & Yang, 2023; Min et al., 2023; Ye et al., 2023; Kim et al., 2024). 2) Improvement-based solvers, which learn policies to iteratively refine an initial solution until a termination condition is satisfied. The policies are often trained in the context of classic local search (Croes, 1958; Shaw, 1998) or specialized heuristic solvers (Helsgaun, 2017) to obtain more efficient or effective search components (Chen & Tian, 2019; Lu et al., 2020; Hottung & Tierney, 2020; d O Costa et al., 2020; Wu et al., 2021; Xin et al., 2021; Hudson et al., 2022; Zhou et al., 2023a; Ma et al., 2023). In general, construction-based solvers can efficiently achieve desired performance, whereas improvement-based solvers have the potential to deliver better solutions given prolonged inference time. Recent research uncovers the deficient generalization capability of neural solvers, which suffer drastic performance drops on unseen data (Joshi et al., 2021). Previous works mainly focus on cross-size generalization (Fu et al., 2021; Hou et al., 2023; Son et al., 2023; Luo et al., 2023; Drakulic et al., 2023), cross-distribution generalization (Zhang et al., 2022; Geisler et al., 2022; Bi et al., 2022; Jiang et al., 2023), or both (Manchanda et al., 2022; Zhou et al., 2023b; Wang et al., 2024), on a single problem. In this paper, we go a step further and explore generalization across different VRP variants (Wang & Yu, 2023; Liu et al., 2024; Lin et al., 2024). Mixture-of-Experts. The original idea of MoEs was proposed three decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994). In early concepts, an expert was defined as an entire neural network, and hence an MoE resembled an ensemble of neural networks. Eigen et al. (2013) launches the era of applying MoEs as components within larger neural networks. As an early success of MoEs in large neural networks, Shazeer et al. (2017) introduces sparsely-gated MoEs in language modeling and machine translation, achieving state-of-the-art results at the time with only minor losses in computational efficiency. Follow-up works mainly focus on improving the gating mechanism (Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2022; Zhou et al., 2022; Puigcerver et al., 2024; Xue et al., 2024) or on applications to other domains (Lepikhin et al., 2020; Riquelme et al., 2021; Fedus et al., 2022b). We refer interested readers to Yuksel et al. (2012) and Fedus et al. (2022a) for comprehensive surveys. 3. Preliminaries In this section, we first present the definition of CVRP, and then introduce its variants featured by additional constraints. Afterwards, we delineate recent construction-based neural solvers for VRPs (Kool et al., 2018; Kwon et al., 2020). [Figure 1. Illustrations of sub-tours with various constraints: open route (O), backhaul (B), duration limit (L), and time window (TW).] VRP Variants. We define a CVRP instance of size n over a graph G = {V, E}, where V includes a depot node v_0 and customer nodes {v_i}_{i=1}^{n}, and E includes edges e(v_i, v_j) between nodes v_i and v_j (i \u2260 j). Each customer node is associated with a demand \u03b4_i, and a capacity limit Q is set for each vehicle. The solution (i.e., tour) \u03c4 is represented as a sequence of nodes, consisting of multiple sub-tours. Each sub-tour represents a vehicle that starts from the depot, visits a subset of customer nodes, and returns to the depot. The solution is feasible if each customer node is visited exactly once, and the total demand in each sub-tour does not exceed the capacity limit Q. We consider the Euclidean space with the cost function c(\u00b7) defined as the total length of the tour. The objective is to find the optimal tour \u03c4^* with the minimal cost: \u03c4^* = arg min_{\u03c4 \u2208 \u03a6} c(\u03c4|G), where \u03a6 is the discrete search space that contains all feasible tours.
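As a minimal grounding of these definitions, the sketch below computes the tour cost and checks CVRP feasibility for a flat, depot-delimited tour encoding; this encoding and the helper names are our illustrative assumptions, not the paper's representation:

```python
import math

def tour_cost(tour, coords):
    """Total Euclidean length of a tour given as a node-index sequence,
    e.g., [0, 3, 1, 0, 2, 4, 0], where sub-tours are delimited by depot 0."""
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(tour, tour[1:]))

def is_feasible_cvrp(tour, demands, capacity):
    """Check CVRP feasibility: every customer is visited exactly once, and
    the total demand on each sub-tour stays within the vehicle capacity Q."""
    customers = [v for v in tour if v != 0]
    if sorted(customers) != list(range(1, len(demands))):  # visited exactly once
        return False
    load = 0.0
    for v in tour:
        load = 0.0 if v == 0 else load + demands[v]  # reset load at the depot
        if load > capacity:
            return False
    return True
```

A neural solver never enumerates the search space explicitly; instead, the masking mechanism introduced next guarantees that every constructed tour would pass such feasibility checks by construction.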
On top of CVRP (featured by the capacity constraint (C)), several VRP variants involve additional practical constraints. 1) Open Route (O): the vehicle does not need to return to the depot v_0 after visiting customers. 2) Backhaul (B): the demand \u03b4_i is positive in CVRP, representing goods that a vehicle unloads at the customer node. In practice, a customer can also have a negative demand, requiring the vehicle to load goods. We refer to customer nodes with \u03b4_i > 0 as linehauls and those with \u03b4_i < 0 as backhauls. Hence, VRP with backhaul allows the vehicle to traverse linehauls and backhauls in a mixed manner, without strict precedence between them. 3) Duration Limit (L): to maintain a reasonable workload, the cost (i.e., length) of each route is upper bounded by a predefined threshold. 4) Time Window (TW): each node v_i \u2208 V is associated with a time window [e_i, l_i] and a service time s_i. A vehicle must start serving customer v_i within the time slot from e_i to l_i; if it arrives earlier than e_i, it has to wait until e_i. All vehicles must return to the depot v_0 no later than l_0. The aforementioned constraints are illustrated in Fig. 1. By combining them, we can obtain 16 typical VRP variants, which are summarized in Table 3. Note that the combination is not a trivial addition of different constraints. For example, when the open route is coupled with the time window, the vehicle does not need to return to the depot, and hence the constraint imposed by l_0 at the depot is relaxed. We present more details of the VRP variants and the associated data generation process in Appendix A. Learning to Solve VRPs. Typical neural solvers (Kool et al., 2018; Kwon et al., 2020) parameterize the solution construction policy by an attention-based neural network \u03c0_\u03b8, which is trained to generate a solution in an autoregressive way. The feasibility of the generated solution is guaranteed by the masking mechanism during decoding. Without loss of generality, we consider the RL training paradigm, wherein the solution construction process is formulated as a Markov Decision Process (MDP). Given an input instance, the encoder processes it to obtain all node embeddings, which, together with the context representation of the constructed partial tour, represent the current state. The decoder takes them as inputs and outputs the probabilities of valid nodes (i.e., actions) to be selected. After a complete solution \u03c4 is constructed, its probability can be factorized via the chain rule: p_\u03b8(\u03c4|G) = \u220f_{t=1}^{T} p_\u03b8(\u03c0_\u03b8^{(t)} | \u03c0_\u03b8^{(<t)}, G), where \u03c0_\u03b8^{(t)} denotes the node selected at step t and \u03c0_\u03b8^{(<t)} the partial solution constructed before step t.
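To make the MDP concrete, here is a sketch of one greedy decoding rollout with masking; the policy interface (encode, decode, feasible_mask, step) is a hypothetical stand-in for the encoder-decoder described above, not the paper's actual API:

```python
import torch

@torch.no_grad()
def greedy_rollout(policy, instance):
    """Construct one tour autoregressively: at each step, mask infeasible
    nodes, take the most probable feasible action, and update the state.
    `policy` and `instance` follow a hypothetical interface for illustration."""
    state = policy.encode(instance)            # node embeddings + context
    tour, log_prob = [], 0.0
    while not state.all_served():
        logits = policy.decode(state)          # (num_nodes,) action scores
        logits[~state.feasible_mask()] = float("-inf")  # masking mechanism
        probs = torch.softmax(logits, dim=-1)
        action = int(probs.argmax())           # greedy; sample when training
        log_prob += torch.log(probs[action])   # accumulates the tour log-prob
        tour.append(action)
        state = state.step(action)             # serve node, update load/time
    return tour, log_prob
```

Summing the per-step log-probabilities realizes the chain-rule factorization above, and the mask ensures that the constructed solution respects whichever constraints (C, O, B, L, TW) are active.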