diff --git "a/related_34K/test_related_short_2405.00568v1.json" "b/related_34K/test_related_short_2405.00568v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2405.00568v1.json" @@ -0,0 +1,1435 @@ +[ + { + "url": "http://arxiv.org/abs/2405.00568v1", + "title": "Powering In-Database Dynamic Model Slicing for Structured Data Analytics", + "abstract": "Relational database management systems (RDBMS) are widely used for the\nstorage and retrieval of structured data. To derive insights beyond statistical\naggregation, we typically have to extract specific subdatasets from the\ndatabase using conventional database operations, and then apply deep neural\nnetworks (DNN) training and inference on these respective subdatasets in a\nseparate machine learning system. The process can be prohibitively expensive,\nespecially when there are a combinatorial number of subdatasets extracted for\ndifferent analytical purposes. This calls for efficient in-database support of\nadvanced analytical methods In this paper, we introduce LEADS, a novel\nSQL-aware dynamic model slicing technique to customize models for subdatasets\nspecified by SQL queries. LEADS improves the predictive modeling of structured\ndata via the mixture of experts (MoE) technique and maintains inference\nefficiency by a SQL-aware gating network. At the core of LEADS is the\nconstruction of a general model with multiple expert sub-models via MoE trained\nover the entire database. This SQL-aware MoE technique scales up the modeling\ncapacity, enhances effectiveness, and preserves efficiency by activating only\nnecessary experts via the gating network during inference. Additionally, we\nintroduce two regularization terms during the training process of LEADS to\nstrike a balance between effectiveness and efficiency. We also design and build\nan in-database inference system, called INDICES, to support end-to-end advanced\nstructured data analytics by non-intrusively incorporating LEADS onto\nPostgreSQL. Our extensive experiments on real-world datasets demonstrate that\nLEADS consistently outperforms baseline models, and INDICES delivers effective\nin-database analytics with a considerable reduction in inference latency\ncompared to traditional solutions.", + "authors": "Lingze Zeng, Naili Xing, Shaofeng Cai, Gang Chen, Beng Chin Ooi, Jian Pei, Yuncheng Wu", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Mixture AND of AND Experts", + "gt": "Mixture-of-Experts [15, 27, 62] integrates the outputs of different experts in an input-driven manner. In the case of Sparse MoE, only a small subset of experts is chosen for each input, facilitating Model Loading Data Retrieval Data Copying Data Preprocessing & Inference 0.0 500.0 1.0k 1.5k 2.0k 2.5k Response Time (ms) INDICES w/ all optimizations INDICES w/o memory sharing INDICES w/o SPI INDICES w/o state caching INDICES w/o all optimizations Figure 14: Effects of INDICES\u2019 optimization techniques. substantial model scaling without additional computation overhead. Sparse MoE has been used to build large language models [16, 33, 48] and applied to vision-related tasks[47, 57]. Our research delves into the potential of MoE in structured data analytics. We closely combine it with database data analytics, selecting experts based on the filter conditions in SQL queries. In-Database Machine Learning involves running machine learning directly within the database. 
MADlib [24] is an open-source library providing SQL-based ML functions in PostgreSQL. Google ML library[2], and Microsoft\u2019s SQL Server Machine Learning Services [4] offer SQL APIs for ML functions on Oracle, bigquery, and Microsoft SQL Server, respectively. Recently, [54] proposed to integrate neural architectural search (NAS) model selection into PostgreSQL as an extension, which is orthogonal to our proposal. In summary, these works incorporate existing ML algorithms into the database for analysts but typically lack optimization for specific data analysis scenarios and seldom support deep learning algorithms.", + "pre_questions": [], + "main_content": "INTRODUCTION Relational Database Management Systems (RDBMS) are extensively employed as the primary storage solution for structured data across various applications [30, 34, 41, 46]. They serve as a fundamental infrastructure for various domains and are critical to the operation of numerous businesses [23, 28, 64]. In the contemporary business landscape, structured data analytics via databases has become an indispensable component for driving business growth and success [23, 28, 37, 43, 64]. Traditional structured data analytics approaches rely on database-driven filtering or aggregation operations to derive insights. However, these insights only offer a limited statistical view, which often fails to capture the complexity and intricacies of the underlying patterns [21, 45]. Fortunately, recent advancements in Deep Neural Networks (DNNs) open up new horizons for advanced analytics beyond simple statistical aggregation [8, 9, 19, 35]. At its core, exploiting DNNs for advanced structured data analytics comprises two main phases: training and inference [19]. The former primarily involves the construction of a DNN model and the training of this model on targeted data, while the latter utilizes the trained model to make predictions on new data. Notably, to deliver advanced DNN-driven analytics for informed decision-making, effectiveness and efficiency are the two most important metrics to optimize for [7, 14, 32, 52]. Specifically, effectiveness focuses on the inference phase, measuring the extent to which the predictions delivered by the model are accurate. Meanwhile, efficiency evaluates the requirements of the model in terms of response time and computational resources in both phases [32]. In real-world scenarios, analysts are often more interested in performing analytics on specific subsets of data. For instance, they may assess trends among patients diagnosed with a particular disease, or, study behaviors of consumers of a certain age group. Consider the scenario illustrated in Figure 1, where an analyst aims to evaluate the influence of education and city location on the incomes of different subdatasets, i.e., tuples grouped by gender and age. Naturally, the analyst seeks to build a predictive model that is effective, delivering accurate predictions for these subsets of tuples, and meanwhile, executes predictions efficiently with minimal response time and computational resources. However, there are two main challenges in achieving this objective. First, achieving efficient training for effective predictive modeling across analyst-specified subdatasets is challenging. Conventionally, a single general model is trained to support inference across all data tuples [19, 23, 30]. This approach is efficient, which requires training only one model. 
However, such a model, optimized to capture the common patterns and general behaviors of the whole dataset, is likely not as effective in providing accurate predictions as a dedicated model trained on a specific subdataset of interest. Taking for example the scenario in Figure 1, a model trained explicitly for the group of the gender male and age 24 would probably identify finer-grained patterns and behaviors pertinent to this subdataset, given sufficient training tuples, this dedicated model could outperform the general model significantly. Nonetheless, training a separate model for each subdataset is computationally prohibitive due to the combinatorial nature of potential subdatasets. Second, efficiently integrating the inference phase into an RDBMS while ensuring effectiveness is also challenging from a system perspective. One major obstacle is how to reconcile the practices of managing structured data within an RDBMS and the execution of inference on a separate ML system. Many existing solutions support arXiv:2405.00568v1 [cs.DB] 1 May 2024 Lingze Zeng1, Naili Xing1, Shaofeng Cai1, Gang Chen2, Beng Chin Ooi1 Jian Pei3, Yuncheng Wu4 Application Healthcare Analytics Business Analytics \u2026 ... Sliced Model SQL-Aware Dynamic Model Slicing RDBMS Table Filtered Candidate Tuples Result Base Model \ud835\udc3eexperts Base Model --UDF --propositional formula -- tableName SELECT indices_inference \u201cCensus\u201d, \u201cgender = \u2018M\u2019 and age = 24\u201d ) -- taskName \u201cIncomePredict\u201d, ( EXECUTE IncomePredict ON YoungMale WITH YoungMale AS SELECT * FROM Census WHERE gender = \u2018M\u2019 and age = 24; Database Encoder Gating Weight Invoke Inference Task Figure 1: An illustration of in-database analytics on income via SQL-aware dynamic model slicing. the inference process with two separate systems [13, 51], which requires transferring the inference data from an RDBMS, typically a subset of tuples, to another inference system. Such a process is timeconsuming, susceptible to errors, and might also violate privacy and security requirements [55]. Recently, several preliminary attempts have been made to integrate the inference process directly into RDBMS via User-Defined Functions (UDFs) [5, 17, 24, 55, 58, 63], which improves user experience by enabling in-database inference through SQL statements. Specifically, running Python in UDFs can tap into its rich machine learning libraries [18], while it misses the opportunity to leverage the more efficient data retrieval APIs offered by RDBMS, e.g., server programming interface (SPI) A proposed enhancement involves utilizing multiple programming languages within UDFs, aiming to harness both data retrieval APIs and advanced ML libraries. However, this approach introduces extra overhead and affects inference efficiency, especially when conversions and copying of inference data between different language execution environments become necessary [18]. Therefore, achieving efficient and seamless integration of the inference process into RDBMS is an imperative problem to address. To address the above challenges, we build an efficient and effective IN-Database InferenCE System (INDICES). The system is designed to produce effective predictions across subsets of data dynamically specified and retrieved by SQL queries. 
To this end, we propose a novel SQL-awarE dynAmic moDel Slicing (LEADS) technique, which enhances the effectiveness of the base model via the mixture of experts (MoE) technique, and maintains the inference efficiency using a SQL-aware gating network for dynamic model customization for subdatasets specified by SQL queries. Specifically, in LEADS, we propose to enhance the modeling capacity of the base model by constructing a general model that is composed of multiple replicas of this base model. These replicas, termed as experts, are trained to specialize in different problem subspaces for more effective predictive modeling. To enhance effectiveness via MoE without incurring reduced inference efficiency, we further introduce a SQL-aware gating network that dynamically generates sparse gating weights based on filter conditions in the SQL query to slice a subset of only necessary experts from the general model. Such a sliced model is optimized for the corresponding SQL query during training, and is dedicated to the specified subdataset for enhancing inference effectiveness while maintaining efficiency. To support end-to-end structured data analytics, our system INDICES seamlessly incorporates LEADS into PostgreSQL, an opensource RDBMS widely used in both industry and academia. For ease of use and inference efficiency, we divide the proposed indatabase inference process into four separate stages and propose three optimization techniques to minimize the overhead of each stage: efficient execution allocation, memory sharing, and state caching. Given that all the stages of the inference process are supported within a single UDF, analysts can now conveniently invoke inference queries using a single SQL statement. This approach obviates the need to transfer and manage data in separate systems and reduce data copying overhead between executions of different programming languages. Additionally, while the current system is supported by PostgreSQL, INDICES can be readily integrated into other RDBMSs, e.g., MySQL. We summarize the main contributions as follows. \u2022 We formulate the SQL-aware structured data analytics problem, which requires efficient and effective predictive modeling on subdatasets specified by corresponding SQL queries. To the best of our knowledge, this is the first work that develops techniques and a system to address the problem. \u2022 We propose a novel SQL-aware dynamic model slicing technique LEADS, which scales up the modeling capacity of the base model via MoE and devises a SQL-aware gating network for efficient and effective dynamic model customization for SQL-specified subdataset. \u2022 We design and build an end-to-end in-database inference system INDICES for advanced structured data analytics, which nonintrusively incorporate LEADS onto PostgreSQL with three optimization techniques for further improving the inference efficiency. \u2022 We conduct extensive experiments on four real-world datasets. The results confirm the effectiveness of LEADS, with up to 3.95% improvement in accuracy for given workloads of datasets compared with the baseline models, while INDICES achieves up to Powering In-Database Dynamic Model Slicing for Structured Data Analytics 2.06x speedup in terms of inference efficiency compared with the traditional solution. In the remainder of this paper, we introduce preliminaries in Section 2. We formulate our problem in Section 3. We present LEADS with detailed descriptions of its modules and optimization schemes in Section 4. 
We discuss the integration of LEADS and INDICES with PostgreSQL in Section 5. Experimental results are presented in Section 6. We review related work in Section 7 and conclude the paper in Section 8. 2 PRELIMINARIES In this section, we present two key techniques central to our system, namely Mixture of Experts (MoE) for scaling up the model capacity while maintaining its inference efficiency via conditional computation [48], and sparse softmax [38] for the informed selection of active experts for enhancing efficiency. Scalars, vectors and matrices are denoted by \ud835\udc65, x and X respectively. Mixture of Experts (MoE) [29, 48, 53] is a general ensemble learning and conditional computation technique to scale up the modeling capacity without incurring much computational overhead. In DNNbased MoE, a series of expert DNNs are adopted to divide problem space into different regions, where each expert specializes in handling a certain sub-region. MoE is particularly useful when the data exhibits complex patterns or variations [16, 47, 65] due to its capability of enhancing model capacity. There are two main components in an MoE layer: expert models and a gating network. For simplicity of construction, expert models can be composed of homogeneous models that share the same model architecture. During training, expert models are trained to specialize in different problem subspaces. The gating network is also trained to produce a set of gating weights dynamically, which determines the importance assigned to corresponding experts. Denoting the gating weights and outputs of experts as w = [\ud835\udc641,\ud835\udc642, ...,\ud835\udc64\ud835\udc3e] and H = [h1, h2..., h\ud835\udc3e] respectively, where \ud835\udc3eis the number of experts, and h\ud835\udc56is the output of the \ud835\udc56-th expert, the output of the MoE for the current input is then a weighted average of these experts: \u02c6 y = \u00cd\ud835\udc3e \ud835\udc56=1 \ud835\udc64\ud835\udc56h\ud835\udc56. During training, the MoE model optimizes the gating network and experts simultaneously. The gating network learns to assign appropriate weight to experts, while the experts learn to make accurate predictions within their respective regions of expertise. MoE has found extensive application in various domains, notably in the large language model GPT-4 [42] for texts and the large visionlanguage model MoE-LLaVA [36] for images, which combines the benefits of large model capacity with efficient computation, by only engaging a fraction of the model parameters for each input. In LEADS, we focus on the applicability of the MoE technique to structured data analytics, intending to harness its scalable modeling capacity for enhanced predictive accuracy and efficiency. Sparse Softmax. Softmax transformation is a crucial function in the gating network, which maps an input vector z into a probability distribution p whose probabilities correspond proportionally to the exponential of its input values, i.e., softmax(zj) = exp(zj) \u00cd i exp(zi) . The output of softmax can thus be subsequently used as the network output denoting the class probabilities or weights indicating the importance of corresponding inputs. Specifically, denoting the \ud835\udc51dimension probability as \u2206d := {p \u2208Rd : p \u22650. 
||p||1 = 1}, softmax can be interpreted in the variational form with entropy: softmax(z) = argmax p\u2208\u2206d pTz + HS(p) (1) where HS(p) = \u2212\u00cd \ud835\udc57\ud835\udc5d\ud835\udc57log\ud835\udc5d\ud835\udc57is the Shannon entropy. The softmax function is extensively used in DNNs, largely due to its differentiable and convex properties. However, softmax always assigns dense probabilities to inputs, which is less interpretable and effective [8, 20], as compared with sparse credit assignment. To overcome this limitation, sparse softmax is proposed to produce sparse distributions, by assigning zero probability to certain outputs. Particularly, \ud835\udefc-entmax [44] generalizes both dense and sparse softmax with Tsallis \ud835\udefc-entropies HT \ud835\udefc(p) [50]: \ud835\udefc-entmax(z) = argmax p\u2208\u2206d pTz + HS \ud835\udefc(p) (2) where HS(p) = \u2212\u00cd \ud835\udc57(\ud835\udc5d\ud835\udc57= \ud835\udc5d\ud835\udefc \ud835\udc57) if \ud835\udefc\u22601, else HT 1 (p) = HS(p). With a larger \ud835\udefc, \ud835\udefc-entmax tends to produce a sparser probability distribution. Another appealing property is that the hyper-parameter \ud835\udefc, which controls the shape and sparsity of the mapping, can be learned adaptively to the predictive task in the training stage. Let p\u2217= \ud835\udefc-entmax(z) denote the distribution e \ud835\udc5d\ud835\udc56= (\ud835\udc5d\u2217 \ud835\udc56)2\u2212\ud835\udefc/\u00cd \ud835\udc57(\ud835\udc5d\u2217 \ud835\udc57)2\u2212\ud835\udefc and the Shannon entropy \u210e\ud835\udc56= \u2212(\ud835\udc5d\u2217 \ud835\udc56)\ud835\udc59\ud835\udc5c\ud835\udc54(\ud835\udc5d\u2217 \ud835\udc56). The gradient of \ud835\udefcis derived as: \ud835\udf15\ud835\udefc-entmax(z) \ud835\udf15\ud835\udefc = (\ud835\udc5d\u2217 \ud835\udc56)\u2212e \ud835\udc5d\ud835\udc56 (\ud835\udefc\u22121)2 + \u210e\ud835\udc56\u2212e \ud835\udc5d\ud835\udc56 \u00cd \ud835\udc57\u210e\ud835\udc57 \ud835\udefc\u22121 , \ud835\udefc> 1 , which can be optimized end-to-end together with the parameters of the predictive model [44]. We aim to adopt a learnable sparse softmax in LEADS to improve model training and further enhance the predictive efficiency. 3 PROBLEM FORMULATION Technically, structured data can be viewed as one logical table T, which comprises \ud835\udc41rows and \ud835\udc40attributes within RDBMS. Each row, represented as a tuple x = (\ud835\udc651,\ud835\udc652, \u00b7 \u00b7 \u00b7 ,\ud835\udc65\ud835\udc40), serves as a feature vector in predictive modeling, with \ud835\udc65\ud835\udc56denoting the value of the \ud835\udc56-th attribute. In structured data analytics, data analysts typically focus on specific subsets of data characterized by shared attributes. For example, analysts may assess the readmission rates among the patients diagnosed with a certain disease, or predict the ecommerce click-through rate (CTR) within a particular age group. Typically, for complex analytical queries that involve prediction, WHERE statement in a SQL query is executed first to select relevant tuples, to which DNNs are applied subsequently for prediction. In this paper, we refer to this process as SQL-aware predictive modeling. Given a SQL query, denoted by \ud835\udc5e, there are two main steps in SQL-aware predictive modeling: data selection and model prediction. 
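To make these two steps concrete, the following is a minimal sketch in which a pandas DataFrame stands in for the logical table T and an off-the-shelf classifier stands in for the DNN; the table, column names, and model are illustrative placeholders rather than part of LEADS.

```python
# Minimal sketch of SQL-aware predictive modeling: (1) data selection via a
# filter condition, (2) model prediction on the selected sub-dataset.
# `census` and `income_model` are hypothetical placeholders, not part of LEADS.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# A toy table T with a few attributes plus a label column.
census = pd.DataFrame({
    "gender": ["M", "F", "M", "M"],
    "age":    [24, 31, 24, 52],
    "edu":    ["BSc", "MSc", "BSc", "PhD"],
    "income_gt_50k": [0, 1, 0, 1],
})

# Step 1: data selection, sigma_phi(T) with phi = (gender = 'M' AND age = 24).
subset = census.query("gender == 'M' and age == 24")

# Step 2: model prediction on the selected tuples (any fitted model works here;
# LEADS would instead apply a model sliced for this particular filter).
features = pd.get_dummies(census[["gender", "age", "edu"]])
income_model = LogisticRegression().fit(features, census["income_gt_50k"])
preds = income_model.predict_proba(features.loc[subset.index])[:, 1]
print(list(zip(subset.index, preds)))
```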
Utilizing relational algebra, a generalized SQL query selection \ud835\udc5eis expressed as \ud835\udf0e\ud835\udf11(T), where \ud835\udf0eis the unary operator for selection and \ud835\udf11is the propositional formula in \ud835\udc5e. Typically, \ud835\udf11consists of multiple predicates connected by logical operators. The selection \ud835\udf0e\ud835\udf11(T) retrieves all tuples in table T that satisfies \ud835\udf11, formally defined as \ud835\udf0e\ud835\udf11(T) = {x : x \u2208T,\ud835\udf11(x)}. For simplicity, the subdataset retrieved by the SQL query \ud835\udc5eis denoted as T\ud835\udf11= {x1, x2, \u00b7 \u00b7 \u00b7 , x\ud835\udc5b}, where \ud835\udc5bis the number of tuples. Each tuple x\ud835\udc56\u2208R\ud835\udc40in T\ud835\udf11comprises Lingze Zeng1, Naili Xing1, Shaofeng Cai1, Gang Chen2, Beng Chin Ooi1 Jian Pei3, Yuncheng Wu4 SELECT * FROM Census WHERE city = \u201cNYC\u201d OR city = \u201cBOS\u201d SELECT * FROM Census WHERE age > 25 AND gender = \u201cMale\u201d SELECT * FROM Census WHERE edu = \u201cMSc.\u201d AND gender = \u201cMale\u201d (a) Examples of a primitive SQL query. SELECT * FROM Census WHERE city = \u201cNYC\u201d AND gender = \u201cMale\u201d AND age = 24 Discretization age = \u201c20-25\u201d Encode Encode Encode SQL query embedding vector [ \u2206! , 3 , \u2026 , 14 , \u2206\" , \u2206\"#! , \u2026 , 45] (b) Process of Encoding SQL query. Figure 2: SQL query encoder. \ud835\udc40attributes, and x\ud835\udc56can be represented as a vector of categorical and/or numerical features, i.e., x\ud835\udc56= [\ud835\udc65\ud835\udc56,1,\ud835\udc65\ud835\udc56,2, \u00b7 \u00b7 \u00b7 ,\ud835\udc65\ud835\udc56,\ud835\udc40]. DNNs are then applied to perform prediction on these selected tuples, e.g., to predict the labels y = {\ud835\udc661,\ud835\udc662, \u00b7 \u00b7 \u00b7 ,\ud835\udc66\ud835\udc5b}, aiming to derive meaningful insights, such as patients readmission rates in healthcare analytics or CTR in e-commerce. Technically, SQL-aware predictive modeling refers to making predictions on a selected subset of tuples retrieved from a logical table T based on a SQL query \ud835\udc5ewith a propositional formula \ud835\udf11. 4 SQL-AWARE DYNAMIC MODEL SLICING In contrast to conventional machine learning paradigms, SQLaware predictive modeling makes use of constraints specified in SQL queries to provide more accurate predictions with respect to the data of interest. For example, the query outlined in Figure 2 is interested in data constrained based on age, location, and gender. Such selection constraints present optimization opportunities for prediction accuracy and efficiency. To this end, we propose SQLaware dynamic model slicing technique, LEADS, to leverage the propositional formula \ud835\udf11from SQL queries as meta-information to customize the base model, as a means to improve the effectiveness and efficiency of the prediction of a specified datasubset. In this section, we first introduce the SQL query encoder to translate \ud835\udf11into a vectorized format. Next, we present two key components of LEADS for scaling up the modeling capacity of the base model via the Mixture of Experts (MoE) technique and dynamic model slicing via a SQL-aware gating network. For optimization, we further design two regularization terms to strike a balance between effectiveness and efficiency. 4.1 SQL Query Encoder In SQL-aware predictive modeling, the WHERE clause of SQL queries filters tuples according to a predefined propositional formula \ud835\udf11. 
This formula comprises one or more predicates, each setting a logical condition on a particular attribute. For instance, \"gender = \u2018M\u2019 \" mandates that the gender attribute of the filtered tuples must be \u2018M\u2019. These predicates are interconnected via logical operators such as \"AND\" or \"OR\" to form a complete propositional formula. The exponential number of possible predicate combinations in SQL queries renders the direct development of models for every conceivable subdataset, as in traditional machine learning approaches, both complex and intractable. For the SQL query encoder, we focus on individual queries, referred to as primitive SQL query. Considering a table T with \ud835\udc40 attributes, the \ud835\udc57-th attribute denoted as \ud835\udc34\ud835\udc57, each attribute is linked to either a numerical or categorical feature in predictive modeling. Particularly, each numerical feature needs to be converted into a corresponding categorical feature through discretization, which will be detailed subsequently. In a primitive SQL query, each attribute \ud835\udc34\ud835\udc57 may be associated with zero or one predicate, with predicates across attributes conjoined using the logical operator \u2227(AND), as depicted in Figure 2a. Technically, a predicate for attribute \ud835\udc34\ud835\udc57in primitive SQL query can be expressed as \ud835\udc43\ud835\udc57: \ud835\udc34\ud835\udc57= \ud835\udc4e\ud835\udc57, where \ud835\udc4e\ud835\udc57\u2208D\ud835\udc57\u222a{\u0394\ud835\udc57}, D\ud835\udc57represents the domain of possible values for \ud835\udc34\ud835\udc57, and \u0394\ud835\udc57denotes a default value assigned to \ud835\udc34\ud835\udc57when it is not specified in the query. Figure 2a illustrates a valid primitive SQL query example, contrasting with two non-examples. Thus, the propositional formula \ud835\udf11can be represented as: \ud835\udf11= \ud835\udc431 \u2227\ud835\udc432 \u2227\u00b7 \u00b7 \u00b7 \u2227\ud835\udc43\ud835\udc40. The objective of the SQL query encoder is to generate a categorical feature vector q for each primitive SQL query based on the metainformation \ud835\udf11, achieved by concatenating the attribute values of the predicates. Formally, the feature vector of the SQL query encoding can be obtained by: q = [\ud835\udc5e1,\ud835\udc5e2, \u00b7 \u00b7 \u00b7 ,\ud835\udc5e\ud835\udc40] where\ud835\udc5e\ud835\udc57is the categorical attribute value for predicate \ud835\udc43\ud835\udc57. Figure 2b demonstrates the transformation of a primitive SQL query into a feature vector. Notably, the numerical attribute \"age\" here is first discretized before being encoded alongside categorical attributes \"city\" and \"gender\", and columns lacking predicates are filled by the default value \u0394\ud835\udc56. Discretization. Discretization is essential for encoding numerical attributes, e.g., weight or salary. The infinite possible values of numerical attributes make direct encoding infeasible, hence requiring discretization. This process first partitions the domain D of each numerical attribute into a fixed number of bins, akin to approximating a k-nearest neighbors classifier in predictive modeling. The goal of discretization is to preserve the key information in the embedding space for maintaining predictive modeling capacity. To this end, we employ a supervised discretization approach that accounts for the correlation between numerical attributes and the target attribute. 
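As an illustration of this encoder, the sketch below turns a conjunctive WHERE clause into the categorical vector q, filling unconstrained attributes with a default token and mapping the numerical attribute to a bin label. The fixed equal-width bins are only a placeholder for the supervised, IV-driven discretization used by LEADS, and the attribute names and values are hypothetical.

```python
# Minimal sketch of the SQL query encoder: a conjunctive filter condition is
# turned into one categorical value per attribute, with a default token for
# attributes that carry no predicate. The hard-coded bins below are a
# placeholder for the supervised, IV-maximizing discretization in the paper.
import re

ATTRIBUTES = ["city", "gender", "age"]               # attribute order fixes the slots in q
DEFAULT = "<DEFAULT>"                                # the Delta_j token
AGE_BINS = [(0, 20), (20, 25), (25, 40), (40, 200)]  # placeholder bins for "age"

def discretize_age(value: float) -> str:
    for lo, hi in AGE_BINS:
        if lo <= value < hi:
            return f"{lo}-{hi}"
    return DEFAULT

def encode_query(where_clause: str) -> list:
    """Encode a primitive query like "city = 'NYC' AND gender = 'M' AND age = 24"."""
    q = {attr: DEFAULT for attr in ATTRIBUTES}
    for pred in re.split(r"\s+AND\s+", where_clause, flags=re.IGNORECASE):
        attr, value = [s.strip(" '\"") for s in pred.split("=", 1)]
        q[attr] = discretize_age(float(value)) if attr == "age" else value
    return [q[attr] for attr in ATTRIBUTES]

print(encode_query("city = 'NYC' AND gender = 'M' AND age = 24"))
# -> ['NYC', 'M', '20-25']
print(encode_query("gender = 'M'"))
# -> ['<DEFAULT>', 'M', '<DEFAULT>']
```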
This aims to maximize information value (IV), which measures the reduction of uncertainty within each bin relative to the prediction target. Higher IV values indicate a significant decrease in uncertainty, thereby preserving the predictive capacity. In particular, we introduce the open-source OptBinning [40] implementation for discretization, which optimizes IV effectively while supporting constraints like the maximum bin count per attribute. 4.2 SQL-Aware Dynamic Model Slicing The categorical feature vector q, obtained from the SQL query encoder, captures key information that facilitates dynamic customization of a predictive model to an optimal configuration for Powering In-Database Dynamic Model Slicing for Structured Data Analytics \u2026 \u2026 0 1 0 3 0 1 \u2026 Gating Network Sparse Softmax Recalibrated Gating Weights ! \ud835\udc30 Base Model \u2026 \u2026 Activated Deactivated K-Experts Base Model Selected Experts \ud835\udc64! \ud835\udc64\"#$ Sliced Gating Weights \ud835\udc30\ud835\udc2c Output Slice \u2026 General Model Sliced Model Based on SQL SQL Query Input Structured Data Input Structured Data Input \ud835\udc64$ \u2026 \u2026 0 1 0 Figure 3: Overview of SQL-aware dynamic model slicing. the targeted subdataset. The customization should significantly enhance predictive performance in SQL-aware predictive modeling. As illustrated in Figure 3, LEADS first scales up the modeling capacity via MoE by replicating the base model to construct a general model, and subsequently, LEADS integrates a SQL-aware gating network based on the SQL encoding vector q to selectively activate experts in the general model to derive a sliced model for higher efficiency and effectiveness. In this subsection, we will first introduce the preprocessing module that prepares the embedding vector q for the predictive modeling, elaborate on the two key modules, i.e., the general model and the SQL-aware gating network, and finally, explain our SQL-aware dynamic model slicing technique in detail. 4.2.1 Preprocessing Module. There are two sets of input constructed for the SQL-aware prediction modeling given an input tuple. The first set of input is constructed for the gating network and can be uniformly represented as a categorical feature vector q. q = [\ud835\udc5e1,\ud835\udc5e2, . . . ,\ud835\udc5e\ud835\udc40] comprises \ud835\udc40feature values from respective attribute fields, where numerical attributes need to be converted into categorical attributes via discretization, as discussed in the previous subsection. The second set is the attribute values of the input tuple x = [\ud835\udc651,\ud835\udc652, . . . ,\ud835\udc65\ud835\udc40], and each attribute value \ud835\udc65\ud835\udc56can be either categorical or numerical. For both q and x, each field of attribute value \ud835\udc63\ud835\udc56(\ud835\udc5e\ud835\udc56/\ud835\udc65\ud835\udc56) needs to be transformed into a corresponding embedding vector e\ud835\udc56to participate the subsequent predictive modeling. Specifically, each categorical attribute is transformed via embedding lookup, i.e., e\ud835\udc56= E\ud835\udc56[\ud835\udc5e\ud835\udc56], e\ud835\udc56\u2208R\ud835\udc5b\ud835\udc52, where \ud835\udc5b\ud835\udc52is the feature embedding size, and E\ud835\udc56 is the embedding matrix of this categorical attribute. Note that different embedding vectors of E\ud835\udc56correspond to their respective values of this attribute. 
As for each numerical attribute \ud835\udc65\ud835\udc57of x, the corresponding embedding vector is obtained by linearly scaling up a learnable embedding vector \u02c6 e\ud835\udc57for this numerical attribute, namely e\ud835\udc57= \ud835\udc65\ud835\udc57\u00b7 \u02c6 e\ud835\udc57. In this way, we obtain fixed-size inputs, i.e., embedding vectors \u02c6 q = [q1, q2, . . . , q\ud835\udc40] and \u02c6 x = [x1, x2, . . . , x\ud835\udc40]. 4.2.2 General Model and SQL-aware Gating Network. The general model comprises a set of \ud835\udc3ereplicated base models, denoted as F = [F1, F2, . . . , F\ud835\udc3e], which are referred to as \"expert models\". These experts share the same model architecture but learn distinct model parameters during training, which take the same input \u02c6 x and produce different outputs that need to be aggregated for final predictions. The output of the \ud835\udc56-th expert for a given input x is denoted as F\ud835\udc56(\u02c6 x). As for the SQL-aware gating network G, it takes the SQL query embedding vectors \u02c6 q as input to produce a \ud835\udc3e-dimensional vector, termed the gating weight w, with w \u2208R\ud835\udc3e. Specifically, a two-layer multilayer perceptron (MLP) is employed as the gating network following the practice [16, 42, 47]. We concatenate all embeddings in \u02c6 q as the input of the gating network e q = q1 \u2295q2 . . . \u2295q\ud835\udc40, where e q \u2208R\ud835\udc40\u00b7\ud835\udc5b\ud835\udc52, then feed e q to G, and obtain the gating weight w by: z = \ud835\udf19(W1e q + b1) w = \ud835\udc3a(s) = W2z + b2 (3) where W1 \u2208R\ud835\udc5b\ud835\udc67\u00d7\ud835\udc40\ud835\udc5b\ud835\udc52, W2 \u2208R\ud835\udc3e\u00d7\ud835\udc5b\ud835\udc67and b1 \u2208R\ud835\udc5b\ud835\udc67, b2 \u2208R\ud835\udc3eare the weights and biases respectively, \ud835\udc5b\ud835\udc67is the hidden layer size, and \ud835\udf19represents the ReLU activation function. Given the gating weight w, the \ud835\udefc-entmax function [12, 44] is further applied to recalibrate w to a probability distribution. As introduced in Section 2, the hyper-parameter \ud835\udefcin \ud835\udefc-entmax controls the level of sparsity, and a larger value of \ud835\udefcsets more gating weights to zero and thus deactivates more experts for higher efficiency. The output of \ud835\udefc-entmax e w is thus: e w = \ud835\udefc-entmax(w), e w \u2208R\ud835\udc3e (4) which is used to aggregate expert outputs. The final output of the general model is a weighted average of expert outputs: \u02c6 y = \ud835\udc3e \u2211\ufe01 \ud835\udc56=1 e \ud835\udc64\ud835\udc56\u00b7 F\ud835\udc56(x) (5) where \u02c6 y is the prediction given the input x and the corresponding query q of the SQL-aware predictive modeling. 4.2.3 Dynamic Model Slicing via Gating Network. For a given SQL query, all the retrieved data tuples share the same recalibrated Lingze Zeng1, Naili Xing1, Shaofeng Cai1, Gang Chen2, Beng Chin Ooi1 Jian Pei3, Yuncheng Wu4 gating weight e w. Further, e \ud835\udc64\ud835\udc56= 0 in Equation 5 indicates that the corresponding \ud835\udc56-th expert is not required in the current predictive modeling, and thus, only a small fraction of experts F\ud835\udc56need to be activated for prediction for much higher computational efficiency. 
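The following is a minimal PyTorch sketch of Eqs. (3)-(5) together with this slicing step: the encoded query is embedded, passed through the two-layer gating MLP, recalibrated with a sparse softmax, and only experts with nonzero weight are evaluated. Sparsemax (the alpha = 2 case of alpha-entmax) stands in here for the learnable alpha-entmax, and the expert architecture, vocabulary, and all sizes are placeholders.

```python
# Minimal sketch of the SQL-aware gating network (Eqs. 3-5) and model slicing.
# sparsemax below is the alpha = 2 special case of alpha-entmax and stands in
# for the learnable alpha-entmax used by LEADS; all sizes are placeholders.
import torch
import torch.nn as nn

def sparsemax(z: torch.Tensor) -> torch.Tensor:
    """Project logits onto the simplex, assigning exact zeros to weak experts."""
    z_sorted, _ = torch.sort(z, dim=-1, descending=True)
    k = torch.arange(1, z.size(-1) + 1, device=z.device, dtype=z.dtype)
    cumsum = z_sorted.cumsum(dim=-1)
    support = 1 + k * z_sorted > cumsum           # which sorted entries stay active
    k_z = support.sum(dim=-1, keepdim=True)       # support size
    tau = (cumsum.gather(-1, k_z - 1) - 1) / k_z  # threshold
    return torch.clamp(z - tau, min=0.0)

class SQLAwareMoE(nn.Module):
    def __init__(self, n_attrs=3, vocab=50, emb=10, n_experts=16, hidden=32, gate_hidden=32):
        super().__init__()
        # Shared embedding table; query ids are assumed pre-offset per attribute.
        self.query_emb = nn.Embedding(n_attrs * vocab, emb)
        self.gate = nn.Sequential(                 # two-layer gating MLP (Eq. 3)
            nn.Linear(n_attrs * emb, gate_hidden), nn.ReLU(),
            nn.Linear(gate_hidden, n_experts))
        self.experts = nn.ModuleList([             # K replicas of a toy base model
            nn.Sequential(nn.Linear(n_attrs, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_experts)])

    def forward(self, q_ids: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        tilde_q = self.query_emb(q_ids).flatten(1)  # concatenated query embeddings
        w = sparsemax(self.gate(tilde_q))           # recalibrated gating weights (Eq. 4)
        # Slice the general model: evaluate only experts with nonzero weight.
        active = (w.sum(dim=0) > 0).nonzero(as_tuple=True)[0]
        out = torch.zeros(x.size(0), 1, device=x.device)
        for i in active.tolist():
            out = out + w[:, i:i + 1] * self.experts[i](x)  # weighted sum (Eq. 5)
        return out

model = SQLAwareMoE()
q_ids = torch.tensor([[3, 52, 101]])          # encoded filter condition (one query)
x = torch.rand(8, 3)                          # 8 tuples retrieved by that query
print(model(q_ids.expand(8, -1), x).shape)    # -> torch.Size([8, 1])
```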
Denoting the set of indices of activated experts as {\ud835\udc3c1, \ud835\udc3c2, \u00b7 \u00b7 \u00b7 , \ud835\udc3c\ud835\udc5b\ud835\udc5c}, where \ud835\udc5b\ud835\udc5cis the current number of activated experts and e \ud835\udc64\ud835\udc3c\ud835\udc57\u2260 0, \u2200\ud835\udc57\u2208{1, 2, . . . ,\ud835\udc5b\ud835\udc5c}, and given the corresponding SQL query encoding q, we index the activated experts to form a sliced model, i.e., Fq = [F\ud835\udc3c1, F\ud835\udc3c2, \u00b7 \u00b7 \u00b7 , F\ud835\udc3c\ud835\udc5b\ud835\udc5c]. Therefore, the final output of the sliced model is as follows: \u02c6 y = \ud835\udc5b\ud835\udc5c \u2211\ufe01 \ud835\udc57=1 e \ud835\udc64[\ud835\udc3c\ud835\udc57] \u00b7 F\ud835\udc3c\ud835\udc57(x) (6) where the number of activated experts \ud835\udc5b\ud835\udc5cdirectly affects the effectiveness and efficiency of the sliced model. A large \ud835\udc5b\ud835\udc5cindicates larger model capacity while incurring higher computational overhead, and vice versa. In LEADS, \ud835\udc5b\ud835\udc5cis determined by the gating network based on the SQL query encoding q and the hyper-parameter \ud835\udefcof the sparse softmax function. Notably, instead of predefining a fixed value, \ud835\udefcin \ud835\udefc-entmax is learnable and optimized based on the input tuples and corresponding queries during training. Subsequently, during inference, LEADS can dynamically adapt \ud835\udc5b\ud835\udc5cbased on the current SQL query, trading off between the effectiveness and efficiency of the predictive modeling. 4.3 Optimization Our LEADS framework can be applied to different predictive tasks by configuring a proper objective function, such as mean squared error (MSE) for regression or cross-entropy for classification. For instance, in binary classification, the objective function employed is binary cross-entropy: LogLoss(\u02c6 y, y) = \u22121 \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56 {\ud835\udc66\ud835\udc56log\ud835\udf0e( \u02c6 \ud835\udc66\ud835\udc56) + (1 \u2212\ud835\udc66\ud835\udc56)log(1 \u2212\ud835\udf0e( \u02c6 \ud835\udc66\ud835\udc56))} (7) where \u02c6 y represents the prediction labels, y denotes the ground truth labels, \ud835\udc41is the number of tuples for prediction, and \ud835\udf0e(\u00b7) is the sigmoid function. To make the optimization more robust and effective, we introduce two regularization terms to the main loss function. The first term is the balance loss, L\ud835\udc4f\ud835\udc4e\ud835\udc59\ud835\udc5b, which is introduced to address the issue of imbalanced expert utilization. This imbalance occurs when the gating network G tends to favor a small subset of experts, leading to a skewed training process where these preferred experts are overutilized while others are underutilized. Such a scenario undermines the capacity of MoE and can detrimentally affect the model performance. Let X denote a mini-batch of training instances with \ud835\udc5b\ud835\udc4ftuples, and e W = [e w1, e w2, \u00b7 \u00b7 \u00b7 , e w\ud835\udc5b\ud835\udc4f] is the recalibrated gating weights of X, where e \ud835\udc64\ud835\udc56\ud835\udc57is the \ud835\udc57-th weight of e w\ud835\udc56. 
L\ud835\udc4f\ud835\udc4e\ud835\udc59\ud835\udc5bis defined as: L\ud835\udc4f\ud835\udc4e\ud835\udc59\ud835\udc5b= cv(\u03a6) = \ud835\udc3e \u2211\ufe01 \ud835\udc57=1 \ud835\udf19\ud835\udc57\u2212E(\u03a6) E(\u03a6)2 \u03a6 = [\ud835\udf191,\ud835\udf192, \u00b7 \u00b7 \u00b7 ,\ud835\udf19\ud835\udc3e],\ud835\udf19\ud835\udc57= \ud835\udc5b\ud835\udc4f \u2211\ufe01 \ud835\udc56=1 e \ud835\udc64\ud835\udc56\ud835\udc57 (8) where E(\u03a6) = 1 \ud835\udc3e \u00cd\ud835\udc3e \ud835\udc57=1 \ud835\udf19\ud835\udc57. The balance loss term \ud835\udc3f\ud835\udc4f\ud835\udc4e\ud835\udc59\ud835\udc5bencourages uniform distribution of weights across experts within the minibatch to ensure more balanced importance among all experts. The balance loss term is designed to encourage a more balanced selection of experts, which however, empirically results in activating a large number of experts despite the introduction of sparse softmax. To maintain sparsity within the model and counteract this tendency for more efficient computation, we further introduce a sparsity loss term, L\ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc60: L\ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc60= \u22121 \ud835\udc5b\ud835\udc4f \ud835\udc5b\ud835\udc4f \u2211\ufe01 \ud835\udc56=1 (e w\ud835\udc56)2 (9) which encourages the gating network to allocate higher weights to select a few experts while minimal or zero weights to others. Both loss terms are scaled by their respective regularization coefficient, \ud835\udf061 and \ud835\udf062, and then added to the main loss: \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60= LogLoss(\u02c6 y, y) + \ud835\udf061\ud835\udc3f\ud835\udc4f\ud835\udc4e\ud835\udc59\ud835\udc5b+ \ud835\udf062\ud835\udc3f\ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc60. (10) With this objective function, LEADS can then be trained effectively with gradient-based optimizers, e.g., SGD or Adam [31]. 5 IN-DATABASE MODEL INFERENCE In this section, we present our in-database model inference system INDICES. We use PostgreSQL [61] as our underlying database system. By exploiting Postgres extension and User-Defined-Functions (UDFs), we seamlessly incorporate the SQL-aware dynamic model slicing technique LEADS onto PostgreSQL to enable in-database model inference. The typical model inference pipeline consists of four stages: model loading, data retrieval, data preprocessing, and inference. A naive way to support LEADS is to decouple the database system and the inference system. That is, analysts retrieve data from the database via SQL query, preprocess the data, and perform inference in a dedicated inference system. However, this decoupled solution presents three drawbacks. First, moving the data out from the database can expose it to potential security risks and may not align with compliance standards. Second, it is troublesome for the users to maintain two separate systems with complicated data analytics workflow. For traditional data analysts accustomed to SQL queries, learning an additional inference framework represents an extra burden. Third, moving data from the database to the inference system may incur additional overhead and latency, particularly when dealing with large datasets being moved across the network to an external inference system. We conduct a profiling experiment to evaluate the time usage breakdown of the decoupled solution using LEADS with the PyTorch [26] runtime. As shown in Figure 4, data retrieval time occupies approximately 33% to 43% of the total inference time. 
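Stepping back to the training objective of Section 4.3, the sketch below assembles Eqs. (7)-(10) for one mini-batch. The balance term is written as the squared coefficient of variation of per-expert importance, the common MoE formulation, so the exact normalization in Eq. (8) should be treated as authoritative; the gating weights and labels here are random placeholders.

```python
# Minimal sketch of the LEADS training objective (Section 4.3): task loss plus
# a balance term and a sparsity term over the recalibrated gating weights.
import torch
import torch.nn.functional as F

def leads_loss(logits, labels, gating_weights, lambda_bal=1e-3, lambda_sparse=1e-3):
    """logits, labels: (n_b,); gating_weights: (n_b, K) rows summing to 1."""
    task_loss = F.binary_cross_entropy_with_logits(logits, labels)      # Eq. (7)

    importance = gating_weights.sum(dim=0)                              # phi_j per expert
    mean_imp = importance.mean()
    balance = ((importance - mean_imp) ** 2).mean() / mean_imp ** 2     # L_baln, cf. Eq. (8)

    sparsity = -gating_weights.pow(2).sum(dim=-1).mean()                # L_sprs, Eq. (9)

    return task_loss + lambda_bal * balance + lambda_sparse * sparsity  # Eq. (10)

# Toy usage with random placeholders.
n_b, K = 32, 16
logits = torch.randn(n_b)
labels = torch.randint(0, 2, (n_b,)).float()
w = torch.softmax(torch.randn(n_b, K), dim=-1)
print(leads_loss(logits, labels, w))
```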
This overhead primarily stems from the database connection, serialization, network communication, and deserialization for data movement. Therefore, we focus on creating an in-database inference system by integrating the inference procedure onto PostgreSQL via UDF to avoid transferring data out from the database and reduce data retrieval overhead. Powering In-Database Dynamic Model Slicing for Structured Data Analytics Number of Records 80K 160k 320k 0% 20% 40% 60% 80% 100% 53% 51% 56% 43% 42% 33% 4% 7% 10% Model Loading Data Retrieval Data Preprocessing & Inference Figure 4: The breakdown response time of the inference stages for the decoupled approach. 5.1 Inference UDF Design By utilizing a UDF that encompasses all stages of the inference process, users can initiate queries by specifying parameters such as \u2018TableName\u2019 and \u2018WHERE\u2019 conditions, as illustrated in Figure 5. The UDF then retrieves relevant subdatasets from the database by applying the \u2018WHERE\u2019 condition on \u2018TableName\u2019. Subsequently, the UDF dynamically loads the trained model and performs the designated inference task. Upon completion, the prediction results are returned to the users. However, relying solely on Python in UDFs for the entire model inference pipeline remains suboptimal due to its inefficient data retrieval process. Therefore, we introduce three optimizations to improve the UDF inference efficiency. Efficient Execution Allocation. We utilize a multi-language strategy in UDFs, combining low-level languages like C or Rust for efficient data retrieval, and high-level languages like Python for model loading, data preprocessing, and inference with its extensive ML libraries, such as Pytorch and Sklearn. Rust is employed in our system due to its Postgres extension development library PGRX [1] which helps access advanced low-level data retrieval APIs in Postgres like the Server Programming Interface (SPI) for faster data retrieval. However, even with this approach, there are two main challenges for efficient model inference: (1) data copying overhead, arises from different execution environments and data representations between RDBMS and the inference runtime. It necessitates extensive copying and conversations. (2) state initializing overhead, comes from repeatedly loading and releasing the deep learning model when handling an inference request. To mitigate these overheads, we further design memory sharing and state caching techniques in INDICES to improve inference efficiency. Memory Sharing. Due to the isolation between Python and Rust execution environments, data transfer between them requires two read-write operations: first, data is fetched from an RDBMS and stored in Rust\u2019s environment, then it is transferred from Rust\u2019s memory to Python\u2019s memory. To mitigate data transfer inefficiencies, we leverage shared memory to bypass redundant read-write operations. Initially, data is filtered and retrieved within Rust\u2019s environment using SPI. The data is then directly written to shared memory. This shared memory, allocated at the commencement of UDFs invocation, is accessible in both environments. Thus, the Python environment can directly access and extract data, eliminating the need for an additional copying step. State Caching. In handling numerous inference requests, the frequent loading and releasing of the model during each inference execution incur significant overhead. 
To address it, we persistently cache the general model trained via the LEADS technique at the PostgreSQL session-level and maintain a state cache for the utilized sliced model. Specifically, when our inference UDF encounters an SQL query with a new filter condition, it first checks the cache for Compiled MoE Inference UDF Rust Execution Environment Python Execution Environment Prediction General Model SPI Retrieval Data SELECT * FROM Table WHERE \u2026 UDF Parameters Table Name WHERE conditions Result Data Tables Pre-\u2028 processing Shared Memory WHERE conditions LRU Cache Sliced Model Sliced Model Sliced Model Figure 5: INDICES inference UDF execution. Table 1: Dataset statistics. Dataset Tuples Positive Ratio Attributes Features Payment 30,000 21.4% 23 350 Credit 244,280 7.8% 69 550 Census 269,356 6.4% 41 540 Diabetes 101,766 46.8% 48 850 the existing sliced model related to this condition. If the corresponding dedicated model is not found, a new model is derived from the general model and stored in the cache. To ensure efficient memory utilization and achieve constant time complexity for management, we adopt the least recently used (LRU) caching policy to manage cached sliced models. 5.2 System Workflow Next, we present INDICES\u2019 workflow, which comprises training and in-database inference phases, as shown in Figure 6. Training Phase. In the training phase, we construct a general model based on tables in RDBMS following LEADS for the prediction task. We collect SQL query logs from the real-world database to help construct the training workload. The frequent filter conditions in these logs reveal the attributes and features that data analysts consider most relevant and significant. Using these queries, we can extract the corresponding subdatasets as the training dataset. Both the SQL queries and the selected subdatasets are preprocessed, vectorized, and fed into the general model for iterative training (Step 1 in Figure 6). Once this well-trained general model is prepared, it is serialized and saved as a state dictionary (Step 2). When the associated UDF is invoked, the model is loaded into Postgres and dynamically sliced based on the SQL query, allowing for the handling of online inference requests. In-Database Inference Phase. In the inference phase, we integrate the inference process into Postgres by implementing a UDF through extension installation. The UDF, named indices_inference, offers a SQL interface for issuing inference queries using the following statement: SELECT indices_inference(, , ); which accepts three arguments, the tableName refers to the table in RDBMS from which the subdataset is selected, the taskName represents the prediction target (e.g., click-through-rate, readmissionrate). The final argument filter denotes the propositional formula following the \u201cWHERE\u201d clause. Lingze Zeng1, Naili Xing1, Shaofeng Cai1, Gang Chen2, Beng Chin Ooi1 Jian Pei3, Yuncheng Wu4 Load RDBMS id t1 t2 F1 x1,1 x2,1 F2 x1,2 x2,2 Save General Model Gating Network K-Experts \u2026 Expert 1 Expert 2 Expert K Training Phase In-Database Inference Phase Query Parser UDF Runtime Compiled MoE Inference UDF SPI Query to retrieve data Execute UDF Tables on Disk Online SQL Query Fetch Training Data State Dict Execution Engine Storage Engine SQL Query Log Training Data Fetch SQL Query Log 1 1 2 3 5 4 6 Figure 6: INDICES workflow: model training and in-database inference. 
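The sketch below illustrates the session-level state cache of Section 5.1: sliced models are keyed by the (task, filter) pair and evicted with an LRU policy. Here, slice_general_model is a hypothetical stand-in for the LEADS slicing step, and the capacity value is arbitrary.

```python
# Minimal sketch of the session-level state cache (Section 5.1): sliced models
# are cached per (task_name, filter_sql) key and evicted least-recently-used.
# `slice_general_model` is a hypothetical stand-in for the LEADS slicing step.
from collections import OrderedDict

class SlicedModelCache:
    def __init__(self, capacity: int = 32):
        self.capacity = capacity
        self._cache = OrderedDict()          # (task_name, filter_sql) -> sliced model

    def get_or_slice(self, task_name, filter_sql, general_model):
        key = (task_name, filter_sql)
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as most recently used
            return self._cache[key]
        sliced = slice_general_model(general_model, filter_sql)
        self._cache[key] = sliced
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict the least recently used entry
        return sliced

def slice_general_model(general_model, filter_sql):
    # Placeholder: encode the filter, run the gating network, and keep only the
    # experts with nonzero weights (see Section 4.2.3).
    return f"model<{filter_sql}>"

cache = SlicedModelCache(capacity=2)
cache.get_or_slice("IncomePredict", "gender = 'M' AND age = 24", general_model=None)
cache.get_or_slice("IncomePredict", "edu = 'MSc'", general_model=None)
cache.get_or_slice("IncomePredict", "gender = 'M' AND age = 24", general_model=None)  # cache hit
```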
When the query parser receives the inference query (Step 3), it initiates the execution of the UDF within the PostgreSQL UDF runtime environment (Step 4), as illustrated in Figure 5. Upon activation, the UDF performs four main tasks for an online SQL query. First, using the initial two parameters, it identifies the prediction target and determines the required general model. It checks whether the model is already cached; if not, it locates and loads the trained general model (Step 5). Second, the UDF customizes the model via LEADS technique mentioned in Section 4 based on the filter in the query and subsequently caches it in the Least Recently Used (LRU) cache. Third, it retrieves the relevant data based on the filter conditions specified in the query via the server programming interface (SPI) and writes the selected data to a shared memory (Step 6). The model inference stage then reads the data from the shared memory and executes model inference. Finally, the UDF creates a view based on the original table, incorporating a new column that holds the predictive results. 6 EXPERIMENTS In this section, we evaluate the effectiveness of LEADS and efficiency of our in-database inference system, INDICES, using four real-world datasets. Particularly, we devise the experiments to answer the following three key research questions (RQs): \u2022 RQ1: Does the LEADS technique improve the SQL-Aware predictive modeling task compared with the original base models? \u2022 RQ2: How effective is each component of LEADS in these prediction tasks? \u2022 RQ3: Does the INDICES system improve the inference efficiency compared with the traditional decoupled approach? We report our findings with regards to the above questions respectively in Sections 6.2, 6.3, and 6.4. 6.1 Experimental Setup 6.1.1 Datasets. We conduct experiments on four real-world datasets from the domains of finance, sociology, and healthcare. The statistics of the datasets are summarized in Table 1. (1) Payment [59, 60] consists of the profile of credit card clients and their past bill payments. The task is to predict whether the payment on a credit card will be in default in the next month. (2) Credit [6, 25] is gathered by Home Credit Group, focusing on the unbanked population. The task is to predict the repayment abilities of this population for better loan experience. (3) Census [3, 56] contains data from the Current Population Survey conducted by the U.S. Census Bureau. The task is to determine Algorithm 1 Synthetic Workload Generation Require: dataset \ud835\udc37, the number of SELECT queries \ud835\udc41, the maximum filter condition size \ud835\udc5a\ud835\udc4e\ud835\udc65_\ud835\udc50\ud835\udc5c\ud835\udc59 Ensure: a synthetic workload \ud835\udc4acontaining \ud835\udc41SELECT queries 1: \ud835\udc4a= \u2205 2: for \ud835\udc56\u21901 to \ud835\udc41do 3: Randomly select a data tuple x \u2208R\ud835\udc40from \ud835\udc37 4: Randomly sample the number of selected columns \ud835\udc5a\u2208 [1, min(\ud835\udc5a\ud835\udc4e\ud835\udc65_\ud835\udc50\ud835\udc5c\ud835\udc59, \ud835\udc40)] 5: Randomly sample \ud835\udc5acolumns from data tuple x along with their corresponding values 6: Form a SELECT query with a filter condition of size\ud835\udc5abased on the selected columns and values 7: Add the generated SELECT query to the workload \ud835\udc4a 8: end for 9: return synthetic workload \ud835\udc4a whether a person\u2019s annual income exceeds 50K based on their profile information, including age, class education, etc. 
(4) Diabetes [10, 49] contains ten years of clinical care at 130 US hospitals. Each tuple pertains to hospital records of patients diagnosed with diabetes, including details like medications and laboratory results. The task is to predict the patient\u2019s readmission. 6.1.2 Workloads. In SQL-aware predictive modeling, there is currently no benchmark that fulfills the criteria of having both SQL queries and supervised data. Traditional OLAP benchmarks, like TPC-DS [39] and YCSB [11], primarily focus on assessing query performance with complex operations such as JOIN and GROUPBY. However, they lack prediction tasks and labeled data. On the other hand, conventional datasets for evaluating deep learning algorithms do not incorporate OLAP queries. To bridge this gap, we opt to create synthetic inference queries as workloads based on deep learning datasets to evaluate LEADS and INDICES. Our workload generation method is outlined in Algorithm 1, which employs a random strategy to generate a workload that comprises a set of synthetic SQL queries, with each query retrieving a subset of the dataset for prediction. The procedure begins by randomly selecting a data tuple x from the dataset \ud835\udc37(Step 3). Then, a value \ud835\udc5ais sampled from the range [1, min(\ud835\udc5a\ud835\udc4e\ud835\udc65_\ud835\udc50\ud835\udc5c\ud835\udc59, \ud835\udc40)] to determine the number of predicates in the SQL query, where \ud835\udc40is the number of attributes in \ud835\udc37(Step 4). \ud835\udc5a\ud835\udc4e\ud835\udc65_\ud835\udc50\ud835\udc5c\ud835\udc59is a parameter that indicates the maximum number of predicates in any SQL query. Subsequently, \ud835\udc5aattributes are randomly chosen from x, and the Powering In-Database Dynamic Model Slicing for Structured Data Analytics Table 2: Evaluation of performance improvements with LEADS. DNN CIN AFN ARMNet Datasets Metric w/o w/ Imprv. w/o w/ Imprv. w/o w/ Imprv. w/o w/ Imprv. Workload-AUC 0.7003 0.7089 +1.23% 0.7164 0.7189 +0.35% 0.7067 0.7143 +1.08% 0.7141 0.7212 +0.99% Payment Worst-AUC 0.4733 0.5467 +15.51% 0.3836 0.4463 +16.35% 0.4467 0.6333 +41.77% 0.5267 0.6067 +15.19% Workload-AUC 0.7145 0.7427 +3.95% 0.7234 0.7408 +2.41% 0.7171 0.7218 +0.66% 0.7231 0.7347 +1.60% Credit Worst-AUC 0.3852 0.6000 +55.76% 0.3333 0.4074 +22.23% 0.3852 0.4074 +5.76% 0.4444 0.6264 +40.95% Workload-AUC 0.9157 0.9200 +0.47% 0.9187 0.9224 +0.40% 0.9151 0.9216 +0.71% 0.9196 0.9237 +0.45% Census Worst-AUC 0.7692 0.8041 +4.54% 0.7692 0.7845 +1.99% 0.7577 0.7892 +4.16% 0.7692 0.7962 +3.51% Workload-AUC 0.8308 0.8375 +0.81% 0.8322 0.8419 +1.17% 0.8329 0.8390 +0.73% 0.8342 0.8402 +0.72% Diabetes Worst-AUC 0.5495 0.6374 +16.00% 0.6264 0.7033 +12.28% 0.6484 0.6813 +5.07% 0.6044 0.6593 +9.08% Table 3: Top-4 SQL queries in terms of AUC improvement due to LEADS. query# no. of tuples (test/train) propositional formula in query 1 20/134 change = \u201cNo\u201d && admission_type = 3 2 20/153 outpatient = 20 && metformin.pio = \u201cUp\u201d 3 55/451 glipizide = \u201cDown\u201d 4 61/481 diag_1 = \u201c50\u201d values of these selected attributes are collected to form a propositional formula for the generated SQL query (Steps 5-6). This query is put into the workload (Step 7). We repeat this process \ud835\udc41times to create a complete workload. In the experiments, we set \ud835\udc41to 30 and \ud835\udc5a\ud835\udc4e\ud835\udc65_\ud835\udc50\ud835\udc5c\ud835\udc59to 3, and generate workloads for each dataset. 6.1.3 Baseline Methods. 
We select four kinds of base models designed for structured data and enhance these models via the LEADS technique. We evaluate LEADS\u2019s effectiveness by comparing the performance of these base models with and without the integration of LEADS. We introduce each base model as follows. (1) DNN [19]: it is a perceptron with multiple linear and activation layers, representing the most fundamental neural network. (2) CIN [35]: it is a convolutional layer-based neural network, which models higher-order feature interactions through compressed interaction with input embeddings. (3) AFN [9]: it incorporates logarithm neurons in the network layer, aiding in capturing the feature interaction in arbitrary order. (4) ARMNet [8]: it introduces multi-head attention to adaptively extract the combination of features, demonstrating state-of-the-art performance in structured data prediction tasks. Moreover, to evaluate the efficiency of our in-database inference system INDICES, we compare it with INDICES-decoupled, a variant of INDICES that follows the traditional inference approach mentioned in Section 5. For INDICES-decoupled, since data is retrieved from PostgreSQL through network communication based on psycopg, there is no data copy process between different execution environments as in INDICES. To ensure a fair comparison, we warm up both systems by caching the general model in advance. 6.1.4 Evaluation Metric. Notice that a workload contains a series of prediction queries, with each query representing a specific prediction task. We use the AUC (Area Under the ROC Curve) metric 16.00% 5.87% 4.77% 2.78% Figure 7: AUC improvement of SQL queries listed in Table 3. to evaluate the effectiveness of LEADS on a query, denoted by \ud835\udc5e. A higher value indicates a better performance. Then, we use two metrics to assess the overall performance of LEADS on the entire workload. The first is the average AUC value across all queries in the workload, denoted as \ud835\udc4a\ud835\udc5c\ud835\udc5f\ud835\udc58\ud835\udc59\ud835\udc5c\ud835\udc4e\ud835\udc51-\ud835\udc34\ud835\udc48\ud835\udc36, which is calculated by: Workload-AUC(\ud835\udc4a) = 1 \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=0 \ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udc5e\ud835\udc56), (11) where \ud835\udc41is the number of SQL queries in the workload \ud835\udc4a. The second is the lowest AUC value among all prediction queries in the workload, termed \ud835\udc4a\ud835\udc5c\ud835\udc5f\ud835\udc60\ud835\udc61-\ud835\udc34\ud835\udc48\ud835\udc36, which is calculated as follows: Worst-AUC(\ud835\udc4a) = \ud835\udc40\ud835\udc56\ud835\udc5b(\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udc5e1),\ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udc5e2) . . . \ud835\udc34\ud835\udc48\ud835\udc36(\ud835\udc5e\ud835\udc41)). (12) In fields like finance or healthcare, where prediction errors can result in significant losses, focusing on the lower bound of the performance is crucial. The Worst-AUC metric provides insights into the worst-case scenario, ensuring that the technique\u2019s performance is reliable and does not lead to the worst decisions. For model-level efficiency, we utilize the floating-point operations per second (FLOPs) metric to measure the computations during the inference phase of a workload. 
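A minimal sketch of these two workload-level metrics follows, assuming per-query ground-truth labels and predicted scores are available and using scikit-learn's roc_auc_score for the per-query AUC; the toy workload values are placeholders.

```python
# Minimal sketch of the workload-level metrics in Eqs. (11)-(12): the mean and
# the minimum per-query AUC over all SELECT queries in a workload.
from sklearn.metrics import roc_auc_score

def workload_metrics(per_query_results):
    """per_query_results: list of (labels, scores) pairs, one per query."""
    aucs = [roc_auc_score(labels, scores) for labels, scores in per_query_results]
    workload_auc = sum(aucs) / len(aucs)   # Eq. (11)
    worst_auc = min(aucs)                  # Eq. (12)
    return workload_auc, worst_auc

# Toy workload with two queries (labels and scores are placeholders).
workload = [
    ([0, 1, 1, 0], [0.2, 0.8, 0.6, 0.4]),
    ([1, 0, 1, 1], [0.7, 0.3, 0.4, 0.9]),
]
print(workload_metrics(workload))
```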
As for our system INDICES, we measure its performance using the end-to-end response time, i.e., the CPU time elapsed from the moment a user invokes an inference query to the moment the user receives the prediction results. 6.1.5 Hyper-parameter Settings. For fair comparisons, we fix the feature embedding size at 10 and set the hidden layer size to 32 for all the base models. Given the ability to select multiple experts in LEADS, we reduce the hidden layer size of each expert to 16 for better efficiency. The depth of each base model is set to 3. For ARMNet, we set the number of heads and the hidden size of the self-attention module to 8 and 16, respectively. The initial alpha in α-entmax is set to 2.5. The number of experts in LEADS is searched over 2 to 256 and fixed at 16. Also, the balance regularization factor λ1 and the sparsity regularization factor λ2 are searched over 1e-3 to 5e-2 and fixed at 1e-3. We perform a sensitivity analysis on these hyper-parameters and report the best results.

Figure 8: Effects of the SQL-aware gating network on accuracy (Workload-AUC of DNN, CIN, AFN, and ARMNet on the Payment, Credit, Census, and Diabetes datasets).
Figure 9: Effects of α-entmax and the number of experts on accuracy (Workload-AUC on Payment and Credit).
Figure 10: Effects of α-entmax and the number of experts on efficiency (millions of FLOPs on Payment and Credit).

6.1.6 Training Details. Since there are no specific SQL queries in the training dataset to update the parameters of the gating network, we simulate a SQL query for each input tuple following Steps 3-6 in Algorithm 1. Additionally, we adopt the Adam [31] optimizer with a learning rate searched over 1e-3 to 0.1 and a batch size of 1024 for all base models and datasets. All the experiments are conducted on a server equipped with a Xeon(R) Silver 4114 CPU @ 2.2GHz (10 cores), 256 GB of memory, and a GeForce RTX 3090 Ti. We implement all the models with PyTorch 1.6.0 and CUDA 10.2. 6.2 SQL-aware Predictions. To evaluate the efficacy of LEADS, we investigate the performance improvement of the four base models after integrating LEADS on the generated workloads. The experimental results are summarized in Table 2. The main observation is that the prediction performance w.r.t. both Workload-AUC and Worst-AUC consistently improves when utilizing LEADS, for all base models and all workloads. Further, we note that the most significant improvement is in the Worst-AUC metric. For instance, when using DNN as the base model, LEADS achieves improvements of 55.76% and 16.00% on the Credit and Diabetes datasets, respectively. The likely reason for the base model's low Worst-AUC is that the retrieved subsets contain variability or nuances that are not well represented in the training data; as a consequence, the trained base model fails to provide accurate predictions for these instances.
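For clarity, the "Imprv." column in Table 2 is consistent with relative improvement, i.e., (AUC with LEADS - AUC without LEADS) / AUC without LEADS; a quick Python check reproduces the reported numbers.

def relative_improvement(auc_without, auc_with):
    # Relative AUC gain of the LEADS-enhanced model over the base model.
    return (auc_with - auc_without) / auc_without

# Payment / Worst-AUC with the DNN base model in Table 2:
print(f"{relative_improvement(0.4733, 0.5467):.2%}")   # -> 15.51%
# Credit / Worst-AUC with the DNN base model:
print(f"{relative_improvement(0.3852, 0.6000):.2%}")   # -> 55.76%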
To further analyze the Worst-AUC improvement, we perform a breakdown analysis on the Diabetes dataset with a DNN base model. Table 3 lists the top-4 SQL queries in terms of AUC improvement due to LEADS, and Figure 7 presents the respective AUC improvement of these queries. We observe that the AUC values under these SQL queries are lower than the Workload-AUC (see Table 2), i.e., the average AUC value over all queries in the workload. In addition, the number of training/testing tuples in the subset selected by each of these SQL queries is very small. For instance, there are only 134 training tuples for SQL query #1, while the whole training dataset contains 101,766 tuples (see Table 1). Without sufficient examples, the base model cannot generalize effectively on these subdatasets, resulting in misleading predictions. In LEADS, the SQL-aware gating network leverages the propositional formula of the SQL query as meta-information to help the general model learn the patterns associated with these subdatasets. Therefore, a base model enhanced by LEADS yields better performance for queries with limited numbers of related training samples. We note that the propositional formulas of these four queries on the Diabetes dataset all revolve around the drug status. For instance, in query #3, glipizide = "Down" denotes that the drug glipizide has been prescribed and its dosage reduced. In healthcare records, such drug statuses are sparse and appear in only a few training samples. Despite their rarity, they significantly affect readmission rates, making it essential for analysts to gain insights from the retrieved subsets. This makes LEADS valuable for improving prediction models and preventing performance collapse, especially in healthcare, where insights from specific subsets matter. Our experimental results demonstrate that LEADS effectively handles this scenario. The improvement results from dynamically integrating multiple expert models: when faced with challenging input samples, LEADS strategically allocates more experts to the sliced model, enhancing the network's capacity and improving prediction quality. 6.3 Ablation Study. In this subsection, we conduct an ablation study to evaluate the effectiveness of each component in LEADS. SQL-aware gating network. In this evaluation, we compare LEADS with two methods: w/o LEADS and LEADS w/o the SQL-aware gating network. For the latter, we create a default SQL query vector by concatenating a default value for each attribute, denoted as q_d = [Δ_1, Δ_2, ..., Δ_M], indicating the absence of predicates in the SQL query used for slicing a model. The comparison results are shown in Figure 8. There are two main observations. First, the w/o LEADS method achieves the lowest Workload-AUC because it simply uses the base model to handle all SQL queries. Second, the LEADS w/o SQL-aware gating network method results in a performance reduction across all four base models compared to LEADS; for example, on the Credit dataset, this reduction reaches up to 0.02 in terms of Workload-AUC. This is because, without the SQL-aware gating network, LEADS loses the ability to dynamically customize the model based on SQL query vectors, leading to unsatisfactory results.
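To make the role of the query vector concrete, here is a minimal sketch of how a propositional formula could be encoded into the per-attribute vector consumed by the gating network, with the default value Δ standing in for attributes that do not appear in any predicate; the encoding below (one category id per attribute, with id 0 reserved for Δ) is an illustrative assumption rather than the exact LEADS encoding.

def encode_query(predicates, attributes, vocab):
    """Encode a conjunction of equality predicates into a query vector.

    predicates: dict mapping attribute name -> literal value from the WHERE clause.
    attributes: ordered list of the M attribute names of the table.
    vocab:      dict mapping (attribute, value) -> positive integer id; 0 is
                reserved for the default token Delta (attribute absent from the query).
    Returns a list of M integer ids, one per attribute.
    """
    DEFAULT = 0  # the default value Delta for attributes without a predicate
    return [vocab.get((a, predicates[a]), DEFAULT) if a in predicates else DEFAULT
            for a in attributes]

# Example: query #3 of Table 3 only constrains the attribute "glipizide":
#   q = encode_query({"glipizide": "Down"}, attributes, vocab)
# The all-default vector [0, 0, ..., 0] corresponds to q_d used in the ablation above.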
α-entmax. In this investigation, we evaluate the effect of the α-entmax function in LEADS. We compare LEADS with w/o LEADS and LEADS w/o α-entmax, where the latter is a variant that substitutes the α-entmax function with the softmax function. We use DNN as the base model and vary the number of experts from 2 to 256 to evaluate the performance w.r.t. Workload-AUC and FLOPs on the Payment and Credit datasets. The results are presented in Figures 9-10. We observe from Figure 9 that as the number of experts increases from 2 to 32, there is a notable improvement in AUC. This is expected because the model is able to generate more accurate predictions with additional experts. However, when the number of experts exceeds 32, there is a reduction on the Credit dataset for LEADS w/o α-entmax, because the model becomes overly complex, which results in overfitting and subsequently a decline in AUC performance. Figure 10 highlights the advantages of α-entmax in terms of FLOPs saving, particularly as the number of experts increases. Specifically, the FLOPs of LEADS w/o α-entmax increase linearly with the growing number of experts. In contrast, LEADS with α-entmax experiences an initial increase with a much gentler incline, because α-entmax assigns small weights to exactly zero instead of retaining all experts as softmax does. In the slicing process, we then remove these unused experts with zero weights, effectively conserving computational resources.
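The FLOPs saving comes from dropping experts whose gate weight is exactly zero. Below is a minimal PyTorch sketch of this sparse slicing step; it assumes the gating network has already produced one score per expert and uses entmax_bisect from the open-source entmax package as a stand-in for α-entmax, so it illustrates the mechanism rather than reproducing the LEADS code.

import torch
from entmax import entmax_bisect  # pip install entmax; assumed alpha-entmax implementation

def sliced_moe_forward(gate_scores, experts, x, alpha=2.5):
    """Combine only the experts that receive a non-zero alpha-entmax gate weight.

    gate_scores: tensor of shape (num_experts,) from the SQL-aware gating network.
    experts:     list of expert sub-networks (torch.nn.Module), one per expert.
    x:           input embeddings of the tuples retrieved by the query.
    alpha:       entmax exponent; the paper initializes it to 2.5 and learns it.
    """
    weights = entmax_bisect(gate_scores, alpha=alpha, dim=-1)   # sparse, sums to 1
    active = torch.nonzero(weights, as_tuple=False).flatten()   # experts kept after slicing
    # Only the active experts are evaluated, which is where the FLOPs saving comes from.
    out = sum(weights[i] * experts[i](x) for i in active.tolist())
    return out, active

With softmax in place of entmax_bisect, every weight would be strictly positive and all experts would have to be evaluated, which matches the linear FLOPs growth observed for LEADS w/o α-entmax in Figure 10.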
Figure 11: Effects of the regularization terms on accuracy (Workload-AUC on the Payment and Credit datasets).
Figure 12: Analysis of the activation frequency of each expert w.r.t. the query workload, for LEADS and its variants w/o both terms, w/o L_baln, and w/o L_sprs. The x-axis denotes the expert ID, and the y-axis denotes the frequency (1.0 means that the expert is activated in every SQL query).

Regularization terms. For the balance term L_baln and the sparsity term L_sprs, we compare the performance of LEADS with three variants: without the balance term (LEADS w/o L_baln), without the sparsity term (LEADS w/o L_sprs), and without both terms (LEADS w/o both). We use DNN as the base model and conduct experiments on the Payment and Credit datasets. Figure 11 presents the comparison results w.r.t. Workload-AUC, and Figure 12 provides a breakdown analysis of the activation frequency of each expert during the execution of the query workload, where a higher frequency indicates more extensive usage of that expert. There are three main findings from the results. First, removing the balance term significantly reduces the Workload-AUC of LEADS, as shown in Figure 11. Since the sparsity term is designed to counteract the effect of the balance term, using it alone leads to a much smaller number of experts being utilized for each SQL query, resulting in lower prediction accuracy; this can be validated in Figure 12, where only two experts are predominantly selected. Second, when only the balance term is added, the performance is slightly lower than that of LEADS (see Figure 11), but the model utilizes almost all experts for every SQL query (see Figure 12), increasing computational costs. This is because the balance term encourages an even expert selection and drives LEADS to utilize as many experts as possible. Third, when both terms are enabled in LEADS, the experts are utilized in a balanced manner while achieving the best performance, which demonstrates the effectiveness of our regularization terms. 6.4 System Efficiency. In this subsection, we evaluate the efficiency of INDICES in terms of its end-to-end response time and compare it with the INDICES-decoupled baseline (see Section 6.1.3). We also measure the breakdown of the response time for each system.

Figure 13: Efficiency evaluation of INDICES in terms of response time. (a) Response time (broken down into model loading, data retrieval, data copying, data preprocessing, and inference) for predicting 100k records on the four datasets; (b) response time w.r.t. the number of predicted records on the Payment dataset. In the sub-figures, the left bar denotes INDICES-decoupled (I-d) and the right bar denotes INDICES (I).

Comparison with the baseline. We utilize a SQL query that selects 100k records for inference and report the response time of INDICES and INDICES-decoupled on the four datasets, as presented in Figure 13a. Compared to INDICES-decoupled, INDICES achieves speedups of 1.94x, 2.06x, 2.00x, and 1.82x on the Census, Credit, Diabetes, and Payment datasets, respectively. There are three main reasons for this superior performance. First, INDICES reduces the costly data movement overhead between PostgreSQL and the inference system, leading to lower data retrieval time. Second, INDICES is further enhanced with the aforementioned optimizations: shared memory to reduce the data copying overhead, and state caching to eliminate the cost of model loading during the inference UDF execution. Last, within Frame's Python execution, data from shared memory is read directly as a numpy.ndarray [22], which enables quicker conversion to tensors for inference and results in lower data preprocessing time in INDICES. Effects of the number of predicting records. We next evaluate the effect of the number of records selected by the SQL query on the response time. We create SQL queries that select various numbers of records, ranging from 40k to 640k, from the Payment dataset. Figure 13b shows the response time of INDICES and INDICES-decoupled. We observe that INDICES consistently surpasses INDICES-decoupled across all record sizes, with performance improvements ranging from 1.47x to 1.93x. Moreover, the response time of INDICES increases more slowly than that of INDICES-decoupled, because the data movement overhead between the database and the inference system becomes more pronounced with more records. Evaluation of optimization techniques in INDICES. Further, we conduct experiments to evaluate the benefits of the optimizations in Section 5.1. Specifically, we compare INDICES with (i) INDICES without memory sharing; (ii) INDICES without SPI; (iii) INDICES without state caching; and (iv) INDICES without any optimizations. Figure 14 presents the comparison results w.r.t.
the response time to predict 100k records on the Payment dataset. The absence of shared memory results in significant data copying overhead between Rust and Python execution environments. Likewise, without SPI, the data retrieval time is high. Besides, if we do not enable state caching, it results in a substantial model loading overhead. With all the optimizations enabled, INDICES can greatly reduce the in-database inference response time. In this paper, we propose a novel SQL-aware dynamic model slicing technique called LEADS. We enhance the general model trained on the entire database with the Mixture of Experts (MoE) technique and devise a SQL-aware gating network to effectively customize a sliced model given the propositional formula in the user\u2019s SQL query. Further, we integrate LEADS into our end-to-end indatabase inference system INDICES. We build the system on top of a full-fledged and open-source RDBMS, PostgreSQL, and introduce three optimization techniques to reduce the response time of inference queries. Extensive experiments conducted on four real-world datasets demonstrate that LEADS consistently outperforms four baseline models and INDICES significantly reduces the inference time compared to the conventional approach. Powering In-Database Dynamic Model Slicing for Structured Data Analytics" + }, + { + "url": "http://arxiv.org/abs/2006.16668v1", + "title": "GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding", + "abstract": "Neural network scaling has been critical for improving the model quality in\nmany real-world machine learning applications with vast amounts of training\ndata and compute. Although this trend of scaling is affirmed to be a sure-fire\napproach for better model quality, there are challenges on the path such as the\ncomputation cost, ease of programming, and efficient implementation on parallel\ndevices. GShard is a module composed of a set of lightweight annotation APIs\nand an extension to the XLA compiler. It provides an elegant way to express a\nwide range of parallel computation patterns with minimal changes to the\nexisting model code. GShard enabled us to scale up multilingual neural machine\ntranslation Transformer model with Sparsely-Gated Mixture-of-Experts beyond 600\nbillion parameters using automatic sharding. We demonstrate that such a giant\nmodel can efficiently be trained on 2048 TPU v3 accelerators in 4 days to\nachieve far superior quality for translation from 100 languages to English\ncompared to the prior art.", + "authors": "Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, Zhifeng Chen", + "published": "2020-06-30", + "updated": "2020-06-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1701.06538v1", + "title": "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer", + "abstract": "The capacity of a neural network to absorb information is limited by its\nnumber of parameters. Conditional computation, where parts of the network are\nactive on a per-example basis, has been proposed in theory as a way of\ndramatically increasing model capacity without a proportional increase in\ncomputation. In practice, however, there are significant algorithmic and\nperformance challenges. 
In this work, we address these challenges and finally\nrealize the promise of conditional computation, achieving greater than 1000x\nimprovements in model capacity with only minor losses in computational\nefficiency on modern GPU clusters. We introduce a Sparsely-Gated\nMixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward\nsub-networks. A trainable gating network determines a sparse combination of\nthese experts to use for each example. We apply the MoE to the tasks of\nlanguage modeling and machine translation, where model capacity is critical for\nabsorbing the vast quantities of knowledge available in the training corpora.\nWe present model architectures in which a MoE with up to 137 billion parameters\nis applied convolutionally between stacked LSTM layers. On large language\nmodeling and machine translation benchmarks, these models achieve significantly\nbetter results than state-of-the-art at lower computational cost.", + "authors": "Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean", + "published": "2017-01-23", + "updated": "2017-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.NE", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2101.03961v3", + "title": "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity", + "abstract": "In deep learning, models typically reuse the same parameters for all inputs.\nMixture of Experts (MoE) defies this and instead selects different parameters\nfor each incoming example. The result is a sparsely-activated model -- with\noutrageous numbers of parameters -- but a constant computational cost. However,\ndespite several notable successes of MoE, widespread adoption has been hindered\nby complexity, communication costs and training instability -- we address these\nwith the Switch Transformer. We simplify the MoE routing algorithm and design\nintuitive improved models with reduced communication and computational costs.\nOur proposed training techniques help wrangle the instabilities and we show\nlarge sparse models may be trained, for the first time, with lower precision\n(bfloat16) formats. We design models based off T5-Base and T5-Large to obtain\nup to 7x increases in pre-training speed with the same computational resources.\nThese improvements extend into multilingual settings where we measure gains\nover the mT5-Base version across all 101 languages. Finally, we advance the\ncurrent scale of language models by pre-training up to trillion parameter\nmodels on the \"Colossal Clean Crawled Corpus\" and achieve a 4x speedup over the\nT5-XXL model.", + "authors": "William Fedus, Barret Zoph, Noam Shazeer", + "published": "2021-01-11", + "updated": "2022-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2107.11817v3", + "title": "Go Wider Instead of Deeper", + "abstract": "More transformer blocks with residual connections have recently achieved\nimpressive results on various tasks. To achieve better performance with fewer\ntrainable parameters, recent methods are proposed to go shallower by parameter\nsharing or model compressing along with the depth. However, weak modeling\ncapacity limits their performance. 
Contrastively, going wider by inducing more\ntrainable matrixes and parameters would produce a huge model requiring advanced\nparallelism to train and inference.\n In this paper, we propose a parameter-efficient framework, going wider\ninstead of deeper. Specially, following existing works, we adapt parameter\nsharing to compress along depth. But, such deployment would limit the\nperformance. To maximize modeling capacity, we scale along model width by\nreplacing feed-forward network (FFN) with mixture-of-experts (MoE). Across\ntransformer blocks, instead of sharing normalization layers, we propose to use\nindividual layernorms to transform various semantic representations in a more\nparameter-efficient way. To evaluate our plug-and-run framework, we design\nWideNet and conduct comprehensive experiments on popular computer vision and\nnatural language processing benchmarks. On ImageNet-1K, our best model\noutperforms Vision Transformer (ViT) by $1.5\\%$ with $0.72 \\times$ trainable\nparameters. Using $0.46 \\times$ and $0.13 \\times$ parameters, our WideNet can\nstill surpass ViT and ViT-MoE by $0.8\\%$ and $2.1\\%$, respectively. On four\nnatural language processing datasets, WideNet outperforms ALBERT by $1.8\\%$ on\naverage and surpass BERT using factorized embedding parameterization by $0.8\\%$\nwith fewer parameters.", + "authors": "Fuzhao Xue, Ziji Shi, Futao Wei, Yuxuan Lou, Yong Liu, Yang You", + "published": "2021-07-25", + "updated": "2021-09-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.05974v1", + "title": "Scaling Vision with Sparse Mixture of Experts", + "abstract": "Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent\nscalability in Natural Language Processing. In Computer Vision, however, almost\nall performant networks are \"dense\", that is, every input is processed by every\nparameter. We present a Vision MoE (V-MoE), a sparse version of the Vision\nTransformer, that is scalable and competitive with the largest dense networks.\nWhen applied to image recognition, V-MoE matches the performance of\nstate-of-the-art networks, while requiring as little as half of the compute at\ninference time. Further, we propose an extension to the routing algorithm that\ncan prioritize subsets of each input across the entire batch, leading to\nadaptive per-image compute. This allows V-MoE to trade-off performance and\ncompute smoothly at test-time. Finally, we demonstrate the potential of V-MoE\nto scale vision models, and train a 15B parameter model that attains 90.35% on\nImageNet.", + "authors": "Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, Andr\u00e9 Susano Pinto, Daniel Keysers, Neil Houlsby", + "published": "2021-06-10", + "updated": "2021-06-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1312.4314v3", + "title": "Learning Factored Representations in a Deep Mixture of Experts", + "abstract": "Mixtures of Experts combine the outputs of several \"expert\" networks, each of\nwhich specializes in a different part of the input space. This is achieved by\ntraining a \"gating\" network that maps each input to a distribution over the\nexperts. Such models show promise for building larger networks that are still\ncheap to compute at test time, and more parallelizable at training time. 
In\nthis this work, we extend the Mixture of Experts to a stacked model, the Deep\nMixture of Experts, with multiple sets of gating and experts. This\nexponentially increases the number of effective experts by associating each\ninput with a combination of experts at each layer, yet maintains a modest model\nsize. On a randomly translated version of the MNIST dataset, we find that the\nDeep Mixture of Experts automatically learns to develop location-dependent\n(\"where\") experts at the first layer, and class-specific (\"what\") experts at\nthe second layer. In addition, we see that the different combinations are in\nuse when the model is applied to a dataset of speech monophones. These\ndemonstrate effective use of all expert combinations.", + "authors": "David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever", + "published": "2013-12-16", + "updated": "2014-03-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.00951v1", + "title": "From Sparse to Soft Mixtures of Experts", + "abstract": "Sparse mixture of expert architectures (MoEs) scale model capacity without\nlarge increases in training or inference costs. Despite their success, MoEs\nsuffer from a number of issues: training instability, token dropping, inability\nto scale the number of experts, or ineffective finetuning. In this work, we\nproposeSoft MoE, a fully-differentiable sparse Transformer that addresses these\nchallenges, while maintaining the benefits of MoEs. Soft MoE performs an\nimplicit soft assignment by passing different weighted combinations of all\ninput tokens to each expert. As in other MoE works, experts in Soft MoE only\nprocess a subset of the (combined) tokens, enabling larger model capacity at\nlower inference cost. In the context of visual recognition, Soft MoE greatly\noutperforms standard Transformers (ViTs) and popular MoE variants (Tokens\nChoice and Experts Choice). For example, Soft MoE-Base/16 requires 10.5x lower\ninference cost (5.7x lower wall-clock time) than ViT-Huge/14 while matching its\nperformance after similar training. Soft MoE also scales well: Soft MoE Huge/14\nwith 128 experts in 16 MoE layers has over 40x more parameters than ViT\nHuge/14, while inference time cost grows by only 2%, and it performs\nsubstantially better.", + "authors": "Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Neil Houlsby", + "published": "2023-08-02", + "updated": "2023-08-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2405.01778v1", + "title": "Hierarchical mixture of discriminative Generalized Dirichlet classifiers", + "abstract": "This paper presents a discriminative classifier for compositional data. This\nclassifier is based on the posterior distribution of the Generalized Dirichlet\nwhich is the discriminative counterpart of Generalized Dirichlet mixture model.\nMoreover, following the mixture of experts paradigm, we proposed a hierarchical\nmixture of this classifier. In order to learn the models parameters, we use a\nvariational approximation by deriving an upper-bound for the Generalized\nDirichlet mixture. To the best of our knownledge, this is the first time this\nbound is proposed in the literature. 
Experimental results are presented for\nspam detection and color space identification.", + "authors": "Elvis Togban, Djemel Ziou", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.05526v1", + "title": "Buffer Overflow in Mixture of Experts", + "abstract": "Mixture of Experts (MoE) has become a key ingredient for scaling large\nfoundation models while keeping inference costs steady. We show that expert\nrouting strategies that have cross-batch dependencies are vulnerable to\nattacks. Malicious queries can be sent to a model and can affect a model's\noutput on other benign queries if they are grouped in the same batch. We\ndemonstrate this via a proof-of-concept attack in a toy experimental setting.", + "authors": "Jamie Hayes, Ilia Shumailov, Itay Yona", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2311.10768v1", + "title": "Memory Augmented Language Models through Mixture of Word Experts", + "abstract": "Scaling up the number of parameters of language models has proven to be an\neffective approach to improve performance. For dense models, increasing model\nsize proportionally increases the model's computation footprint. In this work,\nwe seek to aggressively decouple learning capacity and FLOPs through\nMixture-of-Experts (MoE) style models with large knowledge-rich vocabulary\nbased routing functions and experts. Our proposed approach, dubbed Mixture of\nWord Experts (MoWE), can be seen as a memory augmented model, where a large set\nof word-specific experts play the role of a sparse memory. We demonstrate that\nMoWE performs significantly better than the T5 family of models with similar\nnumber of FLOPs in a variety of NLP tasks. Additionally, MoWE outperforms\nregular MoE models on knowledge intensive tasks and has similar performance to\nmore complex memory augmented approaches that often require to invoke custom\nmechanisms to search the sparse memory.", + "authors": "Cicero Nogueira dos Santos, James Lee-Thorp, Isaac Noble, Chung-Ching Chang, David Uthus", + "published": "2023-11-15", + "updated": "2023-11-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2312.00968v2", + "title": "Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts", + "abstract": "Large multi-modal models (LMMs) exhibit remarkable performance across\nnumerous tasks. However, generalist LMMs often suffer from performance\ndegradation when tuned over a large collection of tasks. Recent research\nsuggests that Mixture of Experts (MoE) architectures are useful for instruction\ntuning, but for LMMs of parameter size around O(50-100B), the prohibitive cost\nof replicating and storing the expert models severely limits the number of\nexperts we can use. We propose Omni-SMoLA, an architecture that uses the Soft\nMoE approach to (softly) mix many multimodal low rank experts, and avoids\nintroducing a significant number of new parameters compared to conventional MoE\nmodels. The core intuition here is that the large model provides a foundational\nbackbone, while different lightweight experts residually learn specialized\nknowledge, either per-modality or multimodally. 
Extensive experiments\ndemonstrate that the SMoLA approach helps improve the generalist performance\nacross a broad range of generative vision-and-language tasks, achieving new\nSoTA generalist performance that often matches or outperforms single\nspecialized LMM baselines, as well as new SoTA specialist performance.", + "authors": "Jialin Wu, Xia Hu, Yaqing Wang, Bo Pang, Radu Soricut", + "published": "2023-12-01", + "updated": "2024-04-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2401.06066v1", + "title": "DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models", + "abstract": "In the era of large language models, Mixture-of-Experts (MoE) is a promising\narchitecture for managing computational costs when scaling up model parameters.\nHowever, conventional MoE architectures like GShard, which activate the top-$K$\nout of $N$ experts, face challenges in ensuring expert specialization, i.e.\neach expert acquires non-overlapping and focused knowledge. In response, we\npropose the DeepSeekMoE architecture towards ultimate expert specialization. It\ninvolves two principal strategies: (1) finely segmenting the experts into $mN$\nones and activating $mK$ from them, allowing for a more flexible combination of\nactivated experts; (2) isolating $K_s$ experts as shared ones, aiming at\ncapturing common knowledge and mitigating redundancy in routed experts.\nStarting from a modest scale with 2B parameters, we demonstrate that\nDeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5\ntimes the expert parameters and computation. In addition, DeepSeekMoE 2B nearly\napproaches the performance of its dense counterpart with the same number of\ntotal parameters, which set the upper bound of MoE models. Subsequently, we\nscale up DeepSeekMoE to 16B parameters and show that it achieves comparable\nperformance with LLaMA2 7B, with only about 40% of computations. Further, our\npreliminary efforts to scale up DeepSeekMoE to 145B parameters consistently\nvalidate its substantial advantages over the GShard architecture, and show its\nperformance comparable with DeepSeek 67B, using only 28.5% (maybe even 18.2%)\nof computations.", + "authors": "Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, Wenfeng Liang", + "published": "2024-01-11", + "updated": "2024-01-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2304.02806v2", + "title": "Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling", + "abstract": "Graph neural networks (GNNs) have found extensive applications in learning\nfrom graph data. However, real-world graphs often possess diverse structures\nand comprise nodes and edges of varying types. To bolster the generalization\ncapacity of GNNs, it has become customary to augment training graph structures\nthrough techniques like graph augmentations and large-scale pre-training on a\nwider array of graphs. 
Balancing this diversity while avoiding increased\ncomputational costs and the notorious trainability issues of GNNs is crucial.\nThis study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the\naim of augmenting their capacity to adapt to a diverse range of training graph\nstructures, without incurring explosive computational overhead. The proposed\nGraph Mixture of Experts (GMoE) model empowers individual nodes in the graph to\ndynamically and adaptively select more general information aggregation experts.\nThese experts are trained to capture distinct subgroups of graph structures and\nto incorporate information with varying hop sizes, where those with larger hop\nsizes specialize in gathering information over longer distances. The\neffectiveness of GMoE is validated through a series of experiments on a diverse\nset of tasks, including graph, node, and link prediction, using the OGB\nbenchmark. Notably, it enhances ROC-AUC by $1.81\\%$ in ogbg-molhiv and by\n$1.40\\%$ in ogbg-molbbbp, when compared to the non-MoE baselines. Our code is\npublicly available at https://github.com/VITA-Group/Graph-Mixture-of-Experts.", + "authors": "Haotao Wang, Ziyu Jiang, Yuning You, Yan Han, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Zhangyang Wang", + "published": "2023-04-06", + "updated": "2023-10-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2405.00361v1", + "title": "AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts", + "abstract": "We introduce AdaMoLE, a novel method for fine-tuning large language models\n(LLMs) through an Adaptive Mixture of Low-Rank Adaptation (LoRA) Experts.\nMoving beyond conventional methods that employ a static top-k strategy for\nactivating experts, AdaMoLE dynamically adjusts the activation threshold using\na dedicated threshold network, adaptively responding to the varying\ncomplexities of different tasks. By replacing a single LoRA in a layer with\nmultiple LoRA experts and integrating a gating function with the threshold\nmechanism, AdaMoLE effectively selects and activates the most appropriate\nexperts based on the input context. Our extensive evaluations across a variety\nof commonsense reasoning and natural language processing tasks show that\nAdaMoLE exceeds baseline performance. This enhancement highlights the\nadvantages of AdaMoLE's adaptive selection of LoRA experts, improving model\neffectiveness without a corresponding increase in the expert count. The\nexperimental validation not only confirms AdaMoLE as a robust approach for\nenhancing LLMs but also suggests valuable directions for future research in\nadaptive expert selection mechanisms, potentially broadening the scope for\noptimizing model performance across diverse language processing tasks.", + "authors": "Zefang Liu, Jiahua Luo", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1605.01652v1", + "title": "LSTM-based Mixture-of-Experts for Knowledge-Aware Dialogues", + "abstract": "We introduce an LSTM-based method for dynamically integrating several\nword-prediction experts to obtain a conditional language model which can be\ngood simultaneously at several subtasks. 
We illustrate this general approach\nwith an application to dialogue where we integrate a neural chat model, good at\nconversational aspects, with a neural question-answering model, good at\nretrieving precise information from a knowledge-base, and show how the\nintegration combines the strengths of the independent components. We hope that\nthis focused contribution will attract attention on the benefits of using such\nmixtures of experts in NLP.", + "authors": "Phong Le, Marc Dymetman, Jean-Michel Renders", + "published": "2016-05-05", + "updated": "2016-05-05", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2012.02130v4", + "title": "A similarity-based Bayesian mixture-of-experts model", + "abstract": "We present a new nonparametric mixture-of-experts model for multivariate\nregression problems, inspired by the probabilistic k-nearest neighbors\nalgorithm. Using a conditionally specified model, predictions for out-of-sample\ninputs are based on similarities to each observed data point, yielding\npredictive distributions represented by Gaussian mixtures. Posterior inference\nis performed on the parameters of the mixture components as well as the\ndistance metric using a mean-field variational Bayes algorithm accompanied with\na stochastic gradient-based optimization procedure. The proposed method is\nespecially advantageous in settings where inputs are of relatively high\ndimension in comparison to the data size, where input-output relationships are\ncomplex, and where predictive distributions may be skewed or multimodal.\nComputational studies on five datasets, of which two are synthetically\ngenerated, illustrate clear advantages of our mixture-of-experts method for\nhigh-dimensional inputs, outperforming competitor models both in terms of\nvalidation metrics and visual inspection.", + "authors": "Tianfang Zhang, Rasmus Bokrantz, Jimmy Olsson", + "published": "2020-12-03", + "updated": "2022-08-03", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "stat.ME" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2206.00277v2", + "title": "Task-Specific Expert Pruning for Sparse Mixture-of-Experts", + "abstract": "The sparse Mixture-of-Experts (MoE) model is powerful for large-scale\npre-training and has achieved promising results due to its model capacity.\nHowever, with trillions of parameters, MoE is hard to be deployed on cloud or\nmobile environment. The inference of MoE requires expert parallelism, which is\nnot hardware-friendly and communication expensive. Especially for\nresource-limited downstream tasks, such sparse structure has to sacrifice a lot\nof computing efficiency for limited performance gains. In this work, we observe\nmost experts contribute scarcely little to the MoE fine-tuning and inference.\nWe further propose a general method to progressively drop the non-professional\nexperts for the target downstream task, which preserves the benefits of MoE\nwhile reducing the MoE model into one single-expert dense model. 
Our\nexperiments reveal that the fine-tuned single-expert model could preserve 99.3%\nbenefits from MoE across six different types of tasks while enjoying 2x\ninference speed with free communication cost.", + "authors": "Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei", + "published": "2022-06-01", + "updated": "2022-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2310.09762v1", + "title": "Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer", + "abstract": "The Mixture of Experts (MoE) has emerged as a highly successful technique in\ndeep learning, based on the principle of divide-and-conquer to maximize model\ncapacity without significant additional computational cost. Even in the era of\nlarge-scale language models (LLMs), MoE continues to play a crucial role, as\nsome researchers have indicated that GPT-4 adopts the MoE structure to ensure\ndiverse inference results. However, MoE is susceptible to performance\ndegeneracy, particularly evident in the issues of imbalance and homogeneous\nrepresentation among experts. While previous studies have extensively addressed\nthe problem of imbalance, the challenge of homogeneous representation remains\nunresolved. In this study, we shed light on the homogeneous representation\nproblem, wherein experts in the MoE fail to specialize and lack diversity,\nleading to frustratingly high similarities in their representations (up to 99%\nin a well-performed MoE model). This problem restricts the expressive power of\nthe MoE and, we argue, contradicts its original intention. To tackle this\nissue, we propose a straightforward yet highly effective solution: OMoE, an\northogonal expert optimizer. Additionally, we introduce an alternating training\nstrategy that encourages each expert to update in a direction orthogonal to the\nsubspace spanned by other experts. Our algorithm facilitates MoE training in\ntwo key ways: firstly, it explicitly enhances representation diversity, and\nsecondly, it implicitly fosters interaction between experts during orthogonal\nweights computation. Through extensive experiments, we demonstrate that our\nproposed optimization algorithm significantly improves the performance of\nfine-tuning the MoE model on the GLUE benchmark, SuperGLUE benchmark,\nquestion-answering task, and name entity recognition tasks.", + "authors": "Boan Liu, Liang Ding, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng, Dacheng Tao", + "published": "2023-10-15", + "updated": "2023-10-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2311.05185v1", + "title": "Mixture of Weak & Strong Experts on Graphs", + "abstract": "Realistic graphs contain both rich self-features of nodes and informative\nstructures of neighborhoods, jointly handled by a GNN in the typical setup. We\npropose to decouple the two modalities by mixture of weak and strong experts\n(Mowst), where the weak expert is a light-weight Multi-layer Perceptron (MLP),\nand the strong expert is an off-the-shelf Graph Neural Network (GNN). To adapt\nthe experts' collaboration to different target nodes, we propose a \"confidence\"\nmechanism based on the dispersion of the weak expert's prediction logits. 
The\nstrong expert is conditionally activated when either the node's classification\nrelies on neighborhood information, or the weak expert has low model quality.\nWe reveal interesting training dynamics by analyzing the influence of the\nconfidence function on loss: our training algorithm encourages the\nspecialization of each expert by effectively generating soft splitting of the\ngraph. In addition, our \"confidence\" design imposes a desirable bias toward the\nstrong expert to benefit from GNN's better generalization capability. Mowst is\neasy to optimize and achieves strong expressive power, with a computation cost\ncomparable to a single GNN. Empirically, Mowst shows significant accuracy\nimprovement on 6 standard node classification benchmarks (including both\nhomophilous and heterophilous graphs).", + "authors": "Hanqing Zeng, Hanjia Lyu, Diyi Hu, Yinglong Xia, Jiebo Luo", + "published": "2023-11-09", + "updated": "2023-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1704.00946v4", + "title": "Approximation results regarding the multiple-output mixture of linear experts model", + "abstract": "Mixture of experts (MoE) models are a class of artificial neural networks\nthat can be used for functional approximation and probabilistic modeling. An\nimportant class of MoE models is the class of mixture of linear experts (MoLE)\nmodels, where the expert functions map to real topological output spaces. There\nare a number of powerful approximation results regarding MoLE models, when the\noutput space is univariate. These results guarantee the ability of MoLE mean\nfunctions to approximate arbitrary continuous functions, and MoLE models\nthemselves to approximate arbitrary conditional probability density functions.\nWe utilize and extend upon the univariate approximation results in order to\nprove a pair of useful results for situations where the output spaces are\nmultivariate.", + "authors": "Hien D. Nguyen, Faicel Chamroukhi, Florence Forbes", + "published": "2017-04-04", + "updated": "2019-05-28", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2109.05238v3", + "title": "Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy", + "abstract": "Simultaneous machine translation (SiMT) generates translation before reading\nthe entire source sentence and hence it has to trade off between translation\nquality and latency. To fulfill the requirements of different translation\nquality and latency in practical applications, the previous methods usually\nneed to train multiple SiMT models for different latency levels, resulting in\nlarge computational costs. In this paper, we propose a universal SiMT model\nwith Mixture-of-Experts Wait-k Policy to achieve the best translation quality\nunder arbitrary latency with only one trained model. Specifically, our method\nemploys multi-head attention to accomplish the mixture of experts where each\nhead is treated as a wait-k expert with its own waiting words number, and given\na test latency and source inputs, the weights of the experts are accordingly\nadjusted to produce the best translation. 
Experiments on three datasets show\nthat our method outperforms all the strong baselines under different latency,\nincluding the state-of-the-art adaptive policy.", + "authors": "Shaolei Zhang, Yang Feng", + "published": "2021-09-11", + "updated": "2022-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2204.09179v3", + "title": "On the Representation Collapse of Sparse Mixture of Experts", + "abstract": "Sparse mixture of experts provides larger model capacity while requiring a\nconstant computational overhead. It employs the routing mechanism to distribute\ninput tokens to the best-matched experts according to their hidden\nrepresentations. However, learning such a routing mechanism encourages token\nclustering around expert centroids, implying a trend toward representation\ncollapse. In this work, we propose to estimate the routing scores between\ntokens and experts on a low-dimensional hypersphere. We conduct extensive\nexperiments on cross-lingual language model pre-training and fine-tuning on\ndownstream tasks. Experimental results across seven multilingual benchmarks\nshow that our method achieves consistent gains. We also present a comprehensive\nanalysis on the representation and routing behaviors of our models. Our method\nalleviates the representation collapse issue and achieves more consistent\nrouting than the baseline mixture-of-experts methods.", + "authors": "Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, Furu Wei", + "published": "2022-04-20", + "updated": "2022-10-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2305.03288v2", + "title": "Demystifying Softmax Gating Function in Gaussian Mixture of Experts", + "abstract": "Understanding the parameter estimation of softmax gating Gaussian mixture of\nexperts has remained a long-standing open problem in the literature. It is\nmainly due to three fundamental theoretical challenges associated with the\nsoftmax gating function: (i) the identifiability only up to the translation of\nparameters; (ii) the intrinsic interaction via partial differential equations\nbetween the softmax gating and the expert functions in the Gaussian density;\n(iii) the complex dependence between the numerator and denominator of the\nconditional density of softmax gating Gaussian mixture of experts. We resolve\nthese challenges by proposing novel Voronoi loss functions among parameters and\nestablishing the convergence rates of maximum likelihood estimator (MLE) for\nsolving parameter estimation in these models. When the true number of experts\nis unknown and over-specified, our findings show a connection between the\nconvergence rate of the MLE and a solvability problem of a system of polynomial\nequations.", + "authors": "Huy Nguyen, TrungTin Nguyen, Nhat Ho", + "published": "2023-05-05", + "updated": "2023-10-30", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.ST", + "stat.TH" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1905.12969v1", + "title": "Enriched Mixtures of Gaussian Process Experts", + "abstract": "Mixtures of experts probabilistically divide the input space into regions,\nwhere the assumptions of each expert, or conditional model, need only hold\nlocally. 
Combined with Gaussian process (GP) experts, this results in a\npowerful and highly flexible model. We focus on alternative mixtures of GP\nexperts, which model the joint distribution of the inputs and targets\nexplicitly. We highlight issues of this approach in multi-dimensional input\nspaces, namely, poor scalability and the need for an unnecessarily large number\nof experts, degrading the predictive performance and increasing uncertainty. We\nconstruct a novel model to address these issues through a nested partitioning\nscheme that automatically infers the number of components at both levels.\nMultiple response types are accommodated through a generalised GP framework,\nwhile multiple input types are included through a factorised exponential family\nstructure. We show the effectiveness of our approach in estimating a\nparsimonious probabilistic description of both synthetic data of increasing\ndimension and an Alzheimer's challenge dataset.", + "authors": "Charles W. L. Gadd, Sara Wade, Alexis Boukouvalas", + "published": "2019-05-30", + "updated": "2019-05-30", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2309.05838v1", + "title": "Liu-type Shrinkage Estimators for Mixture of Poisson Regressions with Experts: A Heart Disease Study", + "abstract": "Count data play a critical role in medical research, such as heart disease.\nThe Poisson regression model is a common technique for evaluating the impact of\na set of covariates on the count responses. The mixture of Poisson regression\nmodels with experts is a practical tool to exploit the covariates, not only to\nhandle the heterogeneity in the Poisson regressions but also to learn the\nmixing structure of the population. Multicollinearity is one of the most common\nchallenges with regression models, leading to ill-conditioned design matrices\nof Poisson regression components and expert classes. The maximum likelihood\nmethod produces unreliable and misleading estimates for the effects of the\ncovariates in multicollinearity. In this research, we develop Ridge and\nLiu-type methods as two shrinkage approaches to cope with the ill-conditioned\ndesign matrices of the mixture of Poisson regression models with experts.\nThrough various numerical studies, we demonstrate that the shrinkage methods\noffer more reliable estimates for the coefficients of the mixture model in\nmulticollinearity while maintaining the classification performance of the ML\nmethod. The shrinkage methods are finally applied to a heart study to analyze\nthe heart disease rate stages.", + "authors": "Elsayed Ghanem, Moein Yoosefi, Armin Hatefi", + "published": "2023-09-11", + "updated": "2023-09-11", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "stat.CO", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2204.08753v1", + "title": "Table-based Fact Verification with Self-adaptive Mixture of Experts", + "abstract": "The table-based fact verification task has recently gained widespread\nattention and yet remains to be a very challenging problem. It inherently\nrequires informative reasoning over natural language together with different\nnumerical and logical reasoning on tables (e.g., count, superlative,\ncomparative). 
Considering that, we exploit mixture-of-experts and present in\nthis paper a new method: Self-adaptive Mixture-of-Experts Network (SaMoE).\nSpecifically, we have developed a mixture-of-experts neural network to\nrecognize and execute different types of reasoning -- the network is composed\nof multiple experts, each handling a specific part of the semantics for\nreasoning, whereas a management module is applied to decide the contribution of\neach expert network to the verification result. A self-adaptive method is\ndeveloped to teach the management module combining results of different experts\nmore efficiently without external knowledge. The experimental results\nillustrate that our framework achieves 85.1% accuracy on the benchmark dataset\nTabFact, comparable with the previous state-of-the-art models. We hope our\nframework can serve as a new baseline for table-based verification. Our code is\navailable at https://github.com/THUMLP/SaMoE.", + "authors": "Yuxuan Zhou, Xien Liu, Kaiyin Zhou, Ji Wu", + "published": "2022-04-19", + "updated": "2022-04-19", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1511.06072v1", + "title": "Mediated Experts for Deep Convolutional Networks", + "abstract": "We present a new supervised architecture termed Mediated Mixture-of-Experts\n(MMoE) that allows us to improve classification accuracy of Deep Convolutional\nNetworks (DCN). Our architecture achieves this with the help of expert\nnetworks: A network is trained on a disjoint subset of a given dataset and then\nrun in parallel to other experts during deployment. A mediator is employed if\nexperts contradict each other. This allows our framework to naturally support\nincremental learning, as adding new classes requires (re-)training of the new\nexpert only. We also propose two measures to control computational complexity:\nAn early-stopping mechanism halts experts that have low confidence in their\nprediction. The system allows to trade-off accuracy and complexity without\nfurther retraining. We also suggest to share low-level convolutional layers\nbetween experts in an effort to avoid computation of a near-duplicate feature\nset. We evaluate our system on a popular dataset and report improved accuracy\ncompared to a single model of same configuration.", + "authors": "Sebastian Agethen, Winston H. Hsu", + "published": "2015-11-19", + "updated": "2015-11-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2208.12830v1", + "title": "Mixtures of Gaussian Process Experts with SMC$^2$", + "abstract": "Gaussian processes are a key component of many flexible statistical and\nmachine learning models. However, they exhibit cubic computational complexity\nand high memory constraints due to the need of inverting and storing a full\ncovariance matrix. To circumvent this, mixtures of Gaussian process experts\nhave been considered where data points are assigned to independent experts,\nreducing the complexity by allowing inference based on smaller, local\ncovariance matrices. Moreover, mixtures of Gaussian process experts\nsubstantially enrich the model's flexibility, allowing for behaviors such as\nnon-stationarity, heteroscedasticity, and discontinuities. 
In this work, we\nconstruct a novel inference approach based on nested sequential Monte Carlo\nsamplers to simultaneously infer both the gating network and Gaussian process\nexpert parameters. This greatly improves inference compared to importance\nsampling, particularly in settings when a stationary Gaussian process is\ninappropriate, while still being thoroughly parallelizable.", + "authors": "Teemu H\u00e4rk\u00f6nen, Sara Wade, Kody Law, Lassi Roininen", + "published": "2022-08-26", + "updated": "2022-08-26", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "stat.CO" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2009.07806v1", + "title": "Transformer Based Multi-Source Domain Adaptation", + "abstract": "In practical machine learning settings, the data on which a model must make\npredictions often come from a different distribution than the data it was\ntrained on. Here, we investigate the problem of unsupervised multi-source\ndomain adaptation, where a model is trained on labelled data from multiple\nsource domains and must make predictions on a domain for which no labelled data\nhas been seen. Prior work with CNNs and RNNs has demonstrated the benefit of\nmixture of experts, where the predictions of multiple domain expert classifiers\nare combined; as well as domain adversarial training, to induce a domain\nagnostic representation space. Inspired by this, we investigate how such\nmethods can be effectively applied to large pretrained transformer models. We\nfind that domain adversarial training has an effect on the learned\nrepresentations of these models while having little effect on their\nperformance, suggesting that large transformer-based models are already\nrelatively robust across domains. Additionally, we show that mixture of experts\nleads to significant performance improvements by comparing several variants of\nmixing functions, including one novel mixture based on attention. Finally, we\ndemonstrate that the predictions of large pretrained transformer based domain\nexperts are highly homogenous, making it challenging to learn effective\nfunctions for mixing their predictions.", + "authors": "Dustin Wright, Isabelle Augenstein", + "published": "2020-09-16", + "updated": "2020-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1911.08151v2", + "title": "Retrospective and Prospective Mixture-of-Generators for Task-oriented Dialogue Response Generation", + "abstract": "Dialogue response generation (DRG) is a critical component of task-oriented\ndialogue systems (TDSs). Its purpose is to generate proper natural language\nresponses given some context, e.g., historical utterances, system states, etc.\nState-of-the-art work focuses on how to better tackle DRG in an end-to-end way.\nTypically, such studies assume that each token is drawn from a single\ndistribution over the output vocabulary, which may not always be optimal.\nResponses vary greatly with different intents, e.g., domains, system actions.\n We propose a novel mixture-of-generators network (MoGNet) for DRG, where we\nassume that each token of a response is drawn from a mixture of distributions.\nMoGNet consists of a chair generator and several expert generators. Each expert\nis specialized for DRG w.r.t. a particular intent. The chair coordinates\nmultiple experts and combines the output they have generated to produce more\nappropriate responses. 
We propose two strategies to help the chair make better\ndecisions, namely, a retrospective mixture-of-generators (RMoG) and prospective\nmixture-of-generators (PMoG). The former only considers the historical\nexpert-generated responses until the current time step while the latter also\nconsiders possible expert-generated responses in the future by encouraging\nexploration. In order to differentiate experts, we also devise a\nglobal-and-local (GL) learning scheme that forces each expert to be specialized\ntowards a particular intent using a local loss and trains the chair and all\nexperts to coordinate using a global loss.\n We carry out extensive experiments on the MultiWOZ benchmark dataset. MoGNet\nsignificantly outperforms state-of-the-art methods in terms of both automatic\nand human evaluations, demonstrating its effectiveness for DRG.", + "authors": "Jiahuan Pei, Pengjie Ren, Christof Monz, Maarten de Rijke", + "published": "2019-11-19", + "updated": "2020-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.17404v1", + "title": "Generalization Error Analysis for Sparse Mixture-of-Experts: A Preliminary Study", + "abstract": "Mixture-of-Experts (MoE) represents an ensemble methodology that amalgamates\npredictions from several specialized sub-models (referred to as experts). This\nfusion is accomplished through a router mechanism, dynamically assigning\nweights to each expert's contribution based on the input data. Conventional MoE\nmechanisms select all available experts, incurring substantial computational\ncosts. In contrast, Sparse Mixture-of-Experts (Sparse MoE) selectively engages\nonly a limited number, or even just one expert, significantly reducing\ncomputation overhead while empirically preserving, and sometimes even\nenhancing, performance. Despite its wide-ranging applications and these\nadvantageous characteristics, MoE's theoretical underpinnings have remained\nelusive. In this paper, we embark on an exploration of Sparse MoE's\ngeneralization error concerning various critical factors. Specifically, we\ninvestigate the impact of the number of data samples, the total number of\nexperts, the sparsity in expert selection, the complexity of the routing\nmechanism, and the complexity of individual experts. Our analysis sheds light\non \\textit{how \\textbf{sparsity} contributes to the MoE's generalization},\noffering insights from the perspective of classical learning theory.", + "authors": "Jinze Zhao, Peihao Wang, Zhangyang Wang", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2212.00471v1", + "title": "Implicit Mixture of Interpretable Experts for Global and Local Interpretability", + "abstract": "We investigate the feasibility of using mixtures of interpretable experts\n(MoIE) to build interpretable image classifiers on MNIST10. MoIE uses a\nblack-box router to assign each input to one of many inherently interpretable\nexperts, thereby providing insight into why a particular classification\ndecision was made. 
We find that a naively trained MoIE will learn to 'cheat',\nwhereby the black-box router will solve the classification problem by itself,\nwith each expert simply learning a constant function for one particular class.\nWe propose to solve this problem by introducing interpretable routers and\ntraining the black-box router's decisions to match the interpretable router. In\naddition, we propose a novel implicit parameterization scheme that allows us to\nbuild mixtures of arbitrary numbers of experts, allowing us to study how\nclassification performance, local and global interpretability vary as the\nnumber of experts is increased. Our new model, dubbed Implicit Mixture of\nInterpretable Experts (IMoIE) can match state-of-the-art classification\naccuracy on MNIST10 while providing local interpretability, and can provide\nglobal interpretability albeit at the cost of reduced classification accuracy.",
    "authors": "Nathan Elazar, Kerry Taylor",
    "published": "2022-12-01",
    "updated": "2022-12-01",
    "primary_cat": "cs.LG",
    "cats": [
        "cs.LG",
        "cs.CV"
    ],
    "category": "Mixture AND of AND Experts"
  },
  {
    "url": "http://arxiv.org/abs/2105.11706v1",
    "title": "Mixture of ELM based experts with trainable gating network",
    "abstract": "Mixture of experts method is a neural network based ensemble learning that\nhas great ability to improve the overall classification accuracy. This method\nis based on the divide and conquer principle, in which the problem space is\ndivided between several experts by supervision of gating network. In this\npaper, we propose an ensemble learning method based on mixture of experts which\nis named mixture of ELM based experts with trainable gating network (MEETG) to\nimprove the computing cost and to speed up the learning process of ME. The\nstructure of ME consists of multi layer perceptrons (MLPs) as base experts and\ngating network, in which gradient-based learning algorithm is applied for\ntraining the MLPs which is an iterative and time consuming process. In order to\novercome these problems, we use the advantages of extreme learning machine\n(ELM) for designing the structure of ME. ELM as a learning algorithm for single\nhidden-layer feed forward neural networks provides much faster learning process\nand better generalization ability in comparison with some other traditional\nlearning algorithms. Also, in the proposed method a trainable gating network is\napplied to aggregate the outputs of the experts dynamically according to the\ninput sample. Our experimental results and statistical analysis on 11 benchmark\ndatasets confirm that MEETG has an acceptable performance in classification\nproblems. Furthermore, our experimental results show that the proposed approach\noutperforms the original ELM on prediction stability and classification\naccuracy.",
    "authors": "Laleh Armi, Elham Abbasi, Jamal Zarepour-Ahmadabadi",
    "published": "2021-05-25",
    "updated": "2021-05-25",
    "primary_cat": "cs.LG",
    "cats": [
        "cs.LG"
    ],
    "category": "Mixture AND of AND Experts"
  },
  {
    "url": "http://arxiv.org/abs/2312.16610v1",
    "title": "Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation",
    "abstract": "The Mixture-of-Experts (MoE) approach has demonstrated outstanding\nscalability in multi-task learning including low-level upstream tasks such as\nconcurrent removal of multiple adverse weather effects. 
However, the\nconventional MoE architecture with parallel Feed Forward Network (FFN) experts\nleads to significant parameter and computational overheads that hinder its\nefficient deployment. In addition, the naive MoE linear router is suboptimal in\nassigning task-specific features to multiple experts which limits its further\nscalability. In this work, we propose an efficient MoE architecture with weight\nsharing across the experts. Inspired by the idea of linear feature modulation\n(FM), our architecture implicitly instantiates multiple experts via learnable\nactivation modulations on a single shared expert block. The proposed Feature\nModulated Expert (FME) serves as a building block for the novel\nMixture-of-Feature-Modulation-Experts (MoFME) architecture, which can scale up\nthe number of experts with low overhead. We further propose an\nUncertainty-aware Router (UaR) to assign task-specific features to different FM\nmodules with well-calibrated weights. This enables MoFME to effectively learn\ndiverse expert functions for multiple tasks. The conducted experiments on the\nmulti-deweather task show that our MoFME outperforms the baselines in the image\nrestoration quality by 0.1-0.2 dB and achieves SOTA-compatible performance\nwhile saving more than 72% of parameters and 39% inference time over the\nconventional MoE counterpart. Experiments on the downstream segmentation and\nclassification tasks further demonstrate the generalizability of MoFME to real\nopen-world applications.", + "authors": "Rongyu Zhang, Yulin Luo, Jiaming Liu, Huanrui Yang, Zhen Dong, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Yuan Du, Shanghang Zhang", + "published": "2023-12-27", + "updated": "2023-12-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1809.04853v2", + "title": "Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership", + "abstract": "A method for implicit variable selection in mixture of experts frameworks is\nproposed. We introduce a prior structure where information is taken from a set\nof independent covariates. Robust class membership predictors are identified\nusing a normal gamma prior. The resulting model setup is used in a finite\nmixture of Bernoulli distributions to find homogenous clusters of women in\nMozambique based on their information sources on HIV. Fully Bayesian inference\nis carried out via the implementation of a Gibbs sampler.", + "authors": "Gregor Zens", + "published": "2018-09-13", + "updated": "2019-01-12", + "primary_cat": "econ.EM", + "cats": [ + "econ.EM", + "62F15, 62J07, 62H30, 90-08" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.12550v1", + "title": "Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization", + "abstract": "The Mixture of Experts (MoE) paradigm provides a powerful way to decompose\ninscrutable dense layers into smaller, modular computations often more amenable\nto human interpretation, debugging, and editability. A major problem however\nlies in the computational cost of scaling the number of experts to achieve\nsufficiently fine-grained specialization. In this paper, we propose the\nMultilinear Mixutre of Experts (MMoE) layer to address this, focusing on vision\nmodels. MMoE layers perform an implicit computation on prohibitively large\nweight tensors entirely in factorized form. 
Consequently, MMoEs both (1) avoid\nthe issues incurred through the discrete expert routing in the popular 'sparse'\nMoE models, yet (2) do not incur the restrictively high inference-time costs of\n'soft' MoE alternatives. We present both qualitative and quantitative evidence\n(through visualization and counterfactual interventions respectively) that\nscaling MMoE layers when fine-tuning foundation models for vision tasks leads\nto more specialized experts at the class-level whilst remaining competitive\nwith the performance of parameter-matched linear layer counterparts. Finally,\nwe show that learned expert specialism further facilitates manual correction of\ndemographic bias in CelebA attribute classification. Our MMoE model code is\navailable at https://github.com/james-oldfield/MMoE.",
    "authors": "James Oldfield, Markos Georgopoulos, Grigorios G. Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis A. Nicolaou, Jiankang Deng, Ioannis Patras",
    "published": "2024-02-19",
    "updated": "2024-02-19",
    "primary_cat": "cs.CV",
    "cats": [
        "cs.CV",
        "cs.LG"
    ],
    "category": "Mixture AND of AND Experts"
  },
  {
    "url": "http://arxiv.org/abs/2210.01750v1",
    "title": "Modular Approach to Machine Reading Comprehension: Mixture of Task-Aware Experts",
    "abstract": "In this work we present a Mixture of Task-Aware Experts Network for Machine\nReading Comprehension on a relatively small dataset. We particularly focus on\nthe issue of common-sense learning, enforcing the common ground knowledge by\nspecifically training different expert networks to capture different kinds of\nrelationships between each passage, question and choice triplet. Moreover, we\ntake inspiration on the recent advancements of multitask and transfer learning\nby training each network a relevant focused task. By making the\nmixture-of-networks aware of a specific goal by enforcing a task and a\nrelationship, we achieve state-of-the-art results and reduce over-fitting.",
    "authors": "Anirudha Rayasam, Anusha Kamath, Gabriel Bayomi Tinoco Kalejaiye",
    "published": "2022-10-04",
    "updated": "2022-10-04",
    "primary_cat": "cs.CL",
    "cats": [
        "cs.CL",
        "cs.AI"
    ],
    "category": "Mixture AND of AND Experts"
  },
  {
    "url": "http://arxiv.org/abs/2112.14397v2",
    "title": "EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate",
    "abstract": "Mixture-of-experts (MoE) is becoming popular due to its success in improving\nthe model quality, especially in Transformers. By routing tokens with a sparse\ngate to a few experts (i.e., a small pieces of the full model), MoE can easily\nincrease the model parameters to a very large scale while keeping the\ncomputation cost in a constant level. Most existing works just initialize some\nrandom experts, set a fixed gating strategy (e.g., Top-k), and train the model\nfrom scratch in an ad-hoc way. We identify that these MoE models are suffering\nfrom the immature experts and unstable sparse gate, which are harmful to the\nconvergence performance. In this paper, we propose an efficient end-to-end MoE\ntraining framework called EvoMoE. EvoMoE starts from training one single expert\nand gradually evolves into a large and sparse MoE structure. 
EvoMoE mainly\ncontains two phases: the expert-diversify phase to train the base expert for a\nwhile and spawn multiple diverse experts from it, and the gate-sparsify phase\nto learn an adaptive sparse gate and activate a dynamic number of experts.\nEvoMoE naturally decouples the joint learning of both the experts and the\nsparse gate and focuses on learning the basic knowledge with a single expert at\nthe early training stage. Then it diversifies the experts and continues to\ntrain the MoE with a novel Dense-to-Sparse gate (DTS-Gate). Specifically,\ninstead of using a permanent sparse gate, DTS-Gate begins as a dense gate that\nroutes tokens to all experts, then gradually and adaptively becomes sparser\nwhile routes to fewer experts. Evaluations are conducted on three popular\nmodels and tasks, including RoBERTa for masked language modeling task, GPT for\nlanguage modeling task and Transformer for machine translation task. The\nresults show that EvoMoE outperforms existing baselines, including Switch, BASE\nLayer, Hash Layer and StableMoE.", + "authors": "Xiaonan Nie, Xupeng Miao, Shijie Cao, Lingxiao Ma, Qibin Liu, Jilong Xue, Youshan Miao, Yi Liu, Zhi Yang, Bin Cui", + "published": "2021-12-29", + "updated": "2022-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.11412v1", + "title": "Expert Composer Policy: Scalable Skill Repertoire for Quadruped Robots", + "abstract": "We propose the expert composer policy, a framework to reliably expand the\nskill repertoire of quadruped agents. The composer policy links pair of experts\nvia transitions to a sampled target state, allowing experts to be composed\nsequentially. Each expert specializes in a single skill, such as a locomotion\ngait or a jumping motion. Instead of a hierarchical or mixture-of-experts\narchitecture, we train a single composer policy in an independent process that\nis not conditioned on the other expert policies. By reusing the same composer\npolicy, our approach enables adding new experts without affecting existing\nones, enabling incremental repertoire expansion and preserving original motion\nquality. We measured the transition success rate of 72 transition pairs and\nachieved an average success rate of 99.99\\%, which is over 10\\% higher than the\nbaseline random approach, and outperforms other state-of-the-art methods. Using\ndomain randomization during training we ensure a successful transfer to the\nreal world, where we achieve an average transition success rate of 97.22\\%\n(N=360) in our experiments.", + "authors": "Guilherme Christmann, Ying-Sheng Luo, Wei-Chao Chen", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2210.16710v1", + "title": "Prediction Sets for High-Dimensional Mixture of Experts Models", + "abstract": "Large datasets make it possible to build predictive models that can capture\nheterogenous relationships between the response variable and features. The\nmixture of high-dimensional linear experts model posits that observations come\nfrom a mixture of high-dimensional linear regression models, where the mixture\nweights are themselves feature-dependent. In this paper, we show how to\nconstruct valid prediction sets for an $\\ell_1$-penalized mixture of experts\nmodel in the high-dimensional setting. 
We make use of a debiasing procedure to\naccount for the bias induced by the penalization and propose a novel strategy\nfor combining intervals to form a prediction set with coverage guarantees in\nthe mixture setting. Synthetic examples and an application to the prediction of\ncritical temperatures of superconducting materials show our method to have\nreliable practical performance.", + "authors": "Adel Javanmard, Simeng Shao, Jacob Bien", + "published": "2022-10-30", + "updated": "2022-10-30", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "stat.ME", + "stat.ML", + "stat.TH" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2310.02629v2", + "title": "BA-MoE: Boundary-Aware Mixture-of-Experts Adapter for Code-Switching Speech Recognition", + "abstract": "Mixture-of-experts based models, which use language experts to extract\nlanguage-specific representations effectively, have been well applied in\ncode-switching automatic speech recognition. However, there is still\nsubstantial space to improve as similar pronunciation across languages may\nresult in ineffective multi-language modeling and inaccurate language boundary\nestimation. To eliminate these drawbacks, we propose a cross-layer language\nadapter and a boundary-aware training method, namely Boundary-Aware\nMixture-of-Experts (BA-MoE). Specifically, we introduce language-specific\nadapters to separate language-specific representations and a unified gating\nlayer to fuse representations within each encoder layer. Second, we compute\nlanguage adaptation loss of the mean output of each language-specific adapter\nto improve the adapter module's language-specific representation learning.\nBesides, we utilize a boundary-aware predictor to learn boundary\nrepresentations for dealing with language boundary confusion. Our approach\nachieves significant performance improvement, reducing the mixture error rate\nby 16.55\\% compared to the baseline on the ASRU 2019 Mandarin-English\ncode-switching challenge dataset.", + "authors": "Peikun Chen, Fan Yu, Yuhao Lian, Hongfei Xue, Xucheng Wan, Naijun Zheng, Huan Zhou, Lei Xie", + "published": "2023-10-04", + "updated": "2023-10-08", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "eess.AS" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2208.02813v1", + "title": "Towards Understanding Mixture of Experts in Deep Learning", + "abstract": "The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by\na router, has achieved great success in deep learning. However, the\nunderstanding of such architecture remains elusive. In this paper, we formally\nstudy how the MoE layer improves the performance of neural network learning and\nwhy the mixture model will not collapse into a single model. Our empirical\nresults suggest that the cluster structure of the underlying problem and the\nnon-linearity of the expert are pivotal to the success of MoE. To further\nunderstand this, we consider a challenging classification problem with\nintrinsic cluster structures, which is hard to learn using a single expert. Yet\nwith the MoE layer, by choosing the experts as two-layer nonlinear\nconvolutional neural networks (CNNs), we show that the problem can be learned\nsuccessfully. 
Furthermore, our theory shows that the router can learn the\ncluster-center features, which helps divide the input complex problem into\nsimpler linear classification sub-problems that individual experts can conquer.\nTo our knowledge, this is the first result towards formally understanding the\nmechanism of the MoE layer for deep learning.", + "authors": "Zixiang Chen, Yihe Deng, Yue Wu, Quanquan Gu, Yuanzhi Li", + "published": "2022-08-04", + "updated": "2022-08-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1907.04377v2", + "title": "Convergence Rates for Gaussian Mixtures of Experts", + "abstract": "We provide a theoretical treatment of over-specified Gaussian mixtures of\nexperts with covariate-free gating networks. We establish the convergence rates\nof the maximum likelihood estimation (MLE) for these models. Our proof\ntechnique is based on a novel notion of \\emph{algebraic independence} of the\nexpert functions. Drawing on optimal transport theory, we establish a\nconnection between the algebraic independence and a certain class of partial\ndifferential equations (PDEs). Exploiting this connection allows us to derive\nconvergence rates and minimax lower bounds for parameter estimation.", + "authors": "Nhat Ho, Chiao-Yu Yang, Michael I. Jordan", + "published": "2019-07-09", + "updated": "2022-03-08", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "cs.LG", + "stat.ML", + "stat.TH" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.02952v1", + "title": "On Least Squares Estimation in Softmax Gating Mixture of Experts", + "abstract": "Mixture of experts (MoE) model is a statistical machine learning design that\naggregates multiple expert networks using a softmax gating function in order to\nform a more intricate and expressive model. Despite being commonly used in\nseveral applications owing to their scalability, the mathematical and\nstatistical properties of MoE models are complex and difficult to analyze. As a\nresult, previous theoretical works have primarily focused on probabilistic MoE\nmodels by imposing the impractical assumption that the data are generated from\na Gaussian MoE model. In this work, we investigate the performance of the least\nsquares estimators (LSE) under a deterministic MoE model where the data are\nsampled according to a regression model, a setting that has remained largely\nunexplored. We establish a condition called strong identifiability to\ncharacterize the convergence behavior of various types of expert functions. We\ndemonstrate that the rates for estimating strongly identifiable experts, namely\nthe widely used feed forward networks with activation functions\n$\\mathrm{sigmoid}(\\cdot)$ and $\\tanh(\\cdot)$, are substantially faster than\nthose of polynomial experts, which we show to exhibit a surprising slow\nestimation rate. Our findings have important practical implications for expert\nselection.", + "authors": "Huy Nguyen, Nhat Ho, Alessandro Rinaldo", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.08245v1", + "title": "Scattered Mixture-of-Experts Implementation", + "abstract": "We present ScatterMoE, an implementation of Sparse Mixture-of-Experts (SMoE)\non GPUs. 
ScatterMoE builds upon existing implementations, and overcoming some\nof the limitations to improve inference and training speed, and memory\nfootprint. This implementation achieves this by avoiding padding and making\nexcessive copies of the input. We introduce ParallelLinear, the main component\nwe use to build our implementation and the various kernels used to speed up the\noperation. We benchmark our implementation against Megablocks, and show that it\nenables a higher throughput and lower memory footprint. We also show how\nParallelLinear enables extension of the Mixture-of-Experts concept by\ndemonstrating with an implementation of Mixture of Attention.", + "authors": "Shawn Tan, Yikang Shen, Rameswar Panda, Aaron Courville", + "published": "2024-03-13", + "updated": "2024-03-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2309.05444v1", + "title": "Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning", + "abstract": "The Mixture of Experts (MoE) is a widely known neural architecture where an\nensemble of specialized sub-models optimizes overall performance with a\nconstant computational cost. However, conventional MoEs pose challenges at\nscale due to the need to store all experts in memory. In this paper, we push\nMoE to the limit. We propose extremely parameter-efficient MoE by uniquely\ncombining MoE architecture with lightweight experts.Our MoE architecture\noutperforms standard parameter-efficient fine-tuning (PEFT) methods and is on\npar with full fine-tuning by only updating the lightweight experts -- less than\n1% of an 11B parameters model. Furthermore, our method generalizes to unseen\ntasks as it does not depend on any prior task knowledge. Our research\nunderscores the versatility of the mixture of experts architecture, showcasing\nits ability to deliver robust performance even when subjected to rigorous\nparameter constraints. Our code used in all the experiments is publicly\navailable here: https://github.com/for-ai/parameter-efficient-moe.", + "authors": "Ted Zadouri, Ahmet \u00dcst\u00fcn, Arash Ahmadian, Beyza Ermi\u015f, Acyr Locatelli, Sara Hooker", + "published": "2023-09-11", + "updated": "2023-09-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2310.09832v3", + "title": "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts", + "abstract": "Scaling the size of language models usually leads to remarkable advancements\nin NLP tasks. But it often comes with a price of growing computational cost.\nAlthough a sparse Mixture of Experts (MoE) can reduce the cost by activating a\nsmall subset of parameters (e.g., one expert) for each input, its computation\nescalates significantly if increasing the number of activated experts, limiting\nits practical utility. Can we retain the advantages of adding more experts\nwithout substantially increasing the computational costs? In this paper, we\nfirst demonstrate the superiority of selecting multiple experts and then\npropose a computation-efficient approach called \\textbf{\\texttt{Merging Experts\ninto One}} (MEO), which reduces the computation cost to that of a single\nexpert. Extensive experiments show that MEO significantly improves\ncomputational efficiency, e.g., FLOPS drops from 72.0G of vanilla MoE to 28.6G\n(MEO). 
Moreover, we propose a token-level attention block that further enhances\nthe efficiency and performance of token-level MEO, e.g., 83.3\\% (MEO) vs.\n82.6\\% (vanilla MoE) average score on the GLUE benchmark. Our code will be\nreleased upon acceptance. Code will be released at:\n\\url{https://github.com/Shwai-He/MEO}.", + "authors": "Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao", + "published": "2023-10-15", + "updated": "2023-11-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2311.09179v1", + "title": "SiRA: Sparse Mixture of Low Rank Adaptation", + "abstract": "Parameter Efficient Tuning has been an prominent approach to adapt the Large\nLanguage Model to downstream tasks. Most previous works considers adding the\ndense trainable parameters, where all parameters are used to adapt certain\ntask. We found this less effective empirically using the example of LoRA that\nintroducing more trainable parameters does not help. Motivated by this we\ninvestigate the importance of leveraging \"sparse\" computation and propose SiRA:\nsparse mixture of low rank adaption. SiRA leverages the Sparse Mixture of\nExpert(SMoE) to boost the performance of LoRA. Specifically it enforces the top\n$k$ experts routing with a capacity limit restricting the maximum number of\ntokens each expert can process. We propose a novel and simple expert dropout on\ntop of gating network to reduce the over-fitting issue. Through extensive\nexperiments, we verify SiRA performs better than LoRA and other mixture of\nexpert approaches across different single tasks and multitask settings.", + "authors": "Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng", + "published": "2023-11-15", + "updated": "2023-11-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2102.06034v1", + "title": "Speech enhancement with mixture-of-deep-experts with clean clustering pre-training", + "abstract": "In this study we present a mixture of deep experts (MoDE) neural-network\narchitecture for single microphone speech enhancement. Our architecture\ncomprises a set of deep neural networks (DNNs), each of which is an 'expert' in\na different speech spectral pattern such as phoneme. A gating DNN is\nresponsible for the latent variables which are the weights assigned to each\nexpert's output given a speech segment. The experts estimate a mask from the\nnoisy input and the final mask is then obtained as a weighted average of the\nexperts' estimates, with the weights determined by the gating DNN. A soft\nspectral attenuation, based on the estimated mask, is then applied to enhance\nthe noisy speech signal. As a byproduct, we gain reduction at the complexity in\ntest time. We show that the experts specialization allows better robustness to\nunfamiliar noise types.", + "authors": "Shlomo E. 
Chazan, Jacob Goldberger, Sharon Gannot", + "published": "2021-02-11", + "updated": "2021-02-11", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.LG", + "eess.AS" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2209.13071v1", + "title": "Diversified Dynamic Routing for Vision Tasks", + "abstract": "Deep learning models for vision tasks are trained on large datasets under the\nassumption that there exists a universal representation that can be used to\nmake predictions for all samples. Whereas high complexity models are proven to\nbe capable of learning such representations, a mixture of experts trained on\nspecific subsets of the data can infer the labels more efficiently. However\nusing mixture of experts poses two new problems, namely (i) assigning the\ncorrect expert at inference time when a new unseen sample is presented. (ii)\nFinding the optimal partitioning of the training data, such that the experts\nrely the least on common features. In Dynamic Routing (DR) a novel architecture\nis proposed where each layer is composed of a set of experts, however without\naddressing the two challenges we demonstrate that the model reverts to using\nthe same subset of experts.\n In our method, Diversified Dynamic Routing (DivDR) the model is explicitly\ntrained to solve the challenge of finding relevant partitioning of the data and\nassigning the correct experts in an unsupervised approach. We conduct several\nexperiments on semantic segmentation on Cityscapes and object detection and\ninstance segmentation on MS-COCO showing improved performance over several\nbaselines.", + "authors": "Botos Csaba, Adel Bibi, Yanwei Li, Philip Torr, Ser-Nam Lim", + "published": "2022-09-26", + "updated": "2022-09-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2310.15961v1", + "title": "Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation", + "abstract": "Despite the promise of Mixture of Experts (MoE) models in increasing\nparameter counts of Transformer models while maintaining training and inference\ncosts, their application carries notable drawbacks. The key strategy of these\nmodels is to, for each processed token, activate at most a few experts -\nsubsets of an extensive feed-forward layer. But this approach is not without\nits challenges. The operation of matching experts and tokens is discrete, which\nmakes MoE models prone to issues like training instability and uneven expert\nutilization. Existing techniques designed to address these concerns, such as\nauxiliary losses or balance-aware matching, result either in lower model\nperformance or are more difficult to train. In response to these issues, we\npropose Mixture of Tokens, a fully-differentiable model that retains the\nbenefits of MoE architectures while avoiding the aforementioned difficulties.\nRather than routing tokens to experts, this approach mixes tokens from\ndifferent examples prior to feeding them to experts, enabling the model to\nlearn from all token-expert combinations. Importantly, this mixing can be\ndisabled to avoid mixing of different sequences during inference. 
Crucially,\nthis method is fully compatible with both masked and causal Large Language\nModel training and inference.", + "authors": "Szymon Antoniak, Sebastian Jaszczur, Micha\u0142 Krutul, Maciej Pi\u00f3ro, Jakub Krajewski, Jan Ludziejewski, Tomasz Odrzyg\u00f3\u017ad\u017a, Marek Cygan", + "published": "2023-10-24", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2202.09368v2", + "title": "Mixture-of-Experts with Expert Choice Routing", + "abstract": "Sparsely-activated Mixture-of-experts (MoE) models allow the number of\nparameters to greatly increase while keeping the amount of computation for a\ngiven token or a given sample unchanged. However, a poor expert routing\nstrategy (e.g. one resulting in load imbalance) can cause certain experts to be\nunder-trained, leading to an expert being under or over-specialized. Prior work\nallocates a fixed number of experts to each token using a top-k function\nregardless of the relative importance of different tokens. To address this, we\npropose a heterogeneous mixture-of-experts employing an expert choice method.\nInstead of letting tokens select the top-k experts, we have experts selecting\nthe top-k tokens. As a result, each token can be routed to a variable number of\nexperts and each expert can have a fixed bucket size. We systematically study\npre-training speedups using the same computational resources of the Switch\nTransformer top-1 and GShard top-2 gating of prior work and find that our\nmethod improves training convergence time by more than 2x. For the same\ncomputational cost, our method demonstrates higher performance in fine-tuning\n11 selected tasks in the GLUE and SuperGLUE benchmarks. For a smaller\nactivation cost, our method outperforms the T5 dense model in 7 out of the 11\ntasks.", + "authors": "Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew Dai, Zhifeng Chen, Quoc Le, James Laudon", + "published": "2022-02-18", + "updated": "2022-10-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1409.4698v1", + "title": "A Mixtures-of-Experts Framework for Multi-Label Classification", + "abstract": "We develop a novel probabilistic approach for multi-label classification that\nis based on the mixtures-of-experts architecture combined with recently\nintroduced conditional tree-structured Bayesian networks. Our approach captures\ndifferent input-output relations from multi-label data using the efficient\ntree-structured classifiers, while the mixtures-of-experts architecture aims to\ncompensate for the tree-structured restrictions and build a more accurate\nmodel. We develop and present algorithms for learning the model from data and\nfor performing multi-label predictions on future data instances. 
Experiments on\nmultiple benchmark datasets demonstrate that our approach achieves highly\ncompetitive results and outperforms the existing state-of-the-art multi-label\nclassification methods.", + "authors": "Charmgil Hong, Iyad Batal, Milos Hauskrecht", + "published": "2014-09-16", + "updated": "2014-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1901.10668v2", + "title": "Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference", + "abstract": "Computations for the softmax function are significantly expensive when the\nnumber of output classes is large. In this paper, we present a novel softmax\ninference speedup method, Doubly Sparse Softmax (DS-Softmax), that leverages\nsparse mixture of sparse experts to efficiently retrieve top-k classes.\nDifferent from most existing methods that require and approximate a fixed\nsoftmax, our method is learning-based and can adapt softmax weights for a\nbetter inference speedup. In particular, our method learns a two-level\nhierarchy which divides entire output class space into several partially\noverlapping experts. Each expert is sparse and only contains a subset of output\nclasses. To find top-k classes, a sparse mixture enables us to find the most\nprobable expert quickly, and the sparse expert enables us to search within a\nsmall-scale softmax. We empirically conduct evaluation on several real-world\ntasks, including neural machine translation, language modeling and image\nclassification, and demonstrate that significant computation reductions can be\nachieved at no performance loss.", + "authors": "Shun Liao, Ting Chen, Tian Lin, Denny Zhou, Chong Wang", + "published": "2019-01-30", + "updated": "2019-07-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2006.13309v4", + "title": "Fast Deep Mixtures of Gaussian Process Experts", + "abstract": "Mixtures of experts have become an indispensable tool for flexible modelling\nin a supervised learning context, allowing not only the mean function but the\nentire density of the output to change with the inputs. Sparse Gaussian\nprocesses (GP) have shown promise as a leading candidate for the experts in\nsuch models, and in this article, we propose to design the gating network for\nselecting the experts from such mixtures of sparse GPs using a deep neural\nnetwork (DNN). Furthermore, a fast one pass algorithm called\nCluster-Classify-Regress (CCR) is leveraged to approximate the maximum a\nposteriori (MAP) estimator extremely quickly. This powerful combination of\nmodel and algorithm together delivers a novel method which is flexible, robust,\nand extremely efficient. In particular, the method is able to outperform\ncompeting methods in terms of accuracy and uncertainty quantification. The cost\nis competitive on low-dimensional and small data sets, but is significantly\nlower for higher-dimensional and big data sets. Iteratively maximizing the\ndistribution of experts given allocations and allocations given experts does\nnot provide significant improvement, which indicates that the algorithm\nachieves a good approximation to the local MAP estimator very fast. 
This\ninsight can be useful also in the context of other mixture of experts models.", + "authors": "Clement Etienam, Kody Law, Sara Wade, Vitaly Zankin", + "published": "2020-06-11", + "updated": "2023-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2202.13934v1", + "title": "Functional mixture-of-experts for classification", + "abstract": "We develop a mixtures-of-experts (ME) approach to the multiclass\nclassification where the predictors are univariate functions. It consists of a\nME model in which both the gating network and the experts network are\nconstructed upon multinomial logistic activation functions with functional\ninputs. We perform a regularized maximum likelihood estimation in which the\ncoefficient functions enjoy interpretable sparsity constraints on targeted\nderivatives. We develop an EM-Lasso like algorithm to compute the regularized\nMLE and evaluate the proposed approach on simulated and real data.", + "authors": "Nhat Thien Pham, Faicel Chamroukhi", + "published": "2022-02-28", + "updated": "2022-02-28", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2307.05956v2", + "title": "Language-Routing Mixture of Experts for Multilingual and Code-Switching Speech Recognition", + "abstract": "Multilingual speech recognition for both monolingual and code-switching\nspeech is a challenging task. Recently, based on the Mixture of Experts (MoE),\nmany works have made good progress in multilingual and code-switching ASR, but\npresent huge computational complexity with the increase of supported languages.\nIn this work, we propose a computation-efficient network named Language-Routing\nMixture of Experts (LR-MoE) for multilingual and code-switching ASR. LR-MoE\nextracts language-specific representations through the Mixture of Language\nExperts (MLE), which is guided to learn by a frame-wise language routing\nmechanism. The weight-shared frame-level language identification (LID) network\nis jointly trained as the shared pre-router of each MoE layer. Experiments show\nthat the proposed method significantly improves multilingual and code-switching\nspeech recognition performances over baseline with comparable computational\nefficiency.", + "authors": "Wenxuan Wang, Guodong Ma, Yuke Li, Binbin Du", + "published": "2023-07-12", + "updated": "2023-07-14", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "eess.AS" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2208.07109v3", + "title": "Context-aware Mixture-of-Experts for Unbiased Scene Graph Generation", + "abstract": "Scene graph generation (SGG) has gained tremendous progress in recent years.\nHowever, its underlying long-tailed distribution of predicate classes is a\nchallenging problem. For extremely unbalanced predicate distributions, existing\napproaches usually construct complicated context encoders to extract the\nintrinsic relevance of scene context to predicates and complex networks to\nimprove the learning ability of network models for highly imbalanced predicate\ndistributions. 
To address the unbiased SGG problem, we introduce a simple yet\neffective method dubbed Context-Aware Mixture-of-Experts (CAME) to improve\nmodel diversity and mitigate biased SGG without complicated design.\nSpecifically, we propose to integrate the mixture of experts with a divide and\nensemble strategy to remedy the severely long-tailed distribution of predicate\nclasses, which is applicable to the majority of unbiased scene graph\ngenerators. The biased SGG is thereby reduced, and the model tends to\nanticipate more evenly distributed predicate predictions. To differentiate\nbetween various predicate distribution levels, experts with the same weights\nare not sufficiently diverse. In order to enable the network dynamically\nexploit the rich scene context and further boost the diversity of model, we\nsimply use the built-in module to create a context encoder. The importance of\neach expert to scene context and each predicate to each expert is dynamically\nassociated with expert weighting (EW) and predicate weighting (PW) strategy. We\nhave conducted extensive experiments on three tasks using the Visual Genome\ndataset, showing that CAME outperforms recent methods and achieves\nstate-of-the-art performance. Our code will be available publicly.", + "authors": "Liguang Zhou, Yuhongze Zhou, Tin Lun Lam, Yangsheng Xu", + "published": "2022-08-15", + "updated": "2023-01-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2303.06318v2", + "title": "A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training", + "abstract": "Mixture-of-Experts (MoE) is a neural network architecture that adds sparsely\nactivated expert blocks to a base model, increasing the number of parameters\nwithout impacting computational costs. However, current distributed deep\nlearning frameworks are limited in their ability to train high-quality MoE\nmodels with large base models. In this work, we present DeepSpeed-TED, a novel,\nthree-dimensional, hybrid parallel algorithm that combines data, tensor, and\nexpert parallelism to enable the training of MoE models with 4 to 8x larger\nbase models than the current state-of-the-art. We also describe memory\noptimizations in the optimizer step, and communication optimizations that\neliminate unnecessary data movement. We implement our approach in DeepSpeed and\nachieve speedups of 26% over a baseline (i.e. without our communication\noptimizations) when training a 40 billion parameter MoE model (6.7 billion base\nmodel with 16 experts) on 128 V100 GPUs.", + "authors": "Siddharth Singh, Olatunji Ruwase, Ammar Ahmad Awan, Samyam Rajbhandari, Yuxiong He, Abhinav Bhatele", + "published": "2023-03-11", + "updated": "2023-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC", + "cs.PF" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.14800v1", + "title": "Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models", + "abstract": "A pivotal advancement in the progress of large language models (LLMs) is the\nemergence of the Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs,\nMoE LLMs can achieve higher performance with fewer parameters, but it is still\nhard to deploy them due to their immense parameter sizes. 
Different from\nprevious weight pruning methods that rely on specifically designed hardware,\nthis paper mainly aims to enhance the deployment efficiency of MoE LLMs by\nintroducing plug-and-play expert-level sparsification techniques. Specifically,\nwe propose, for the first time to our best knowledge, post-training approaches\nfor task-agnostic and task-specific expert pruning and skipping of MoE LLMs,\ntailored to improve deployment efficiency while maintaining model performance\nacross a wide range of tasks. Extensive experiments show that our proposed\nmethods can simultaneously reduce model sizes and increase the inference speed,\nwhile maintaining satisfactory performance. Data and code will be available at\nhttps://github.com/Lucky-Lance/Expert_Sparsity.", + "authors": "Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo Zhang, Junchi Yan, Hongsheng Li", + "published": "2024-02-22", + "updated": "2024-02-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2105.01899v1", + "title": "MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering", + "abstract": "We present Mixture of Contrastive Experts (MiCE), a unified probabilistic\nclustering framework that simultaneously exploits the discriminative\nrepresentations learned by contrastive learning and the semantic structures\ncaptured by a latent mixture model. Motivated by the mixture of experts, MiCE\nemploys a gating function to partition an unlabeled dataset into subsets\naccording to the latent semantics and multiple experts to discriminate distinct\nsubsets of instances assigned to them in a contrastive learning manner. To\nsolve the nontrivial inference and learning problems caused by the latent\nvariables, we further develop a scalable variant of the\nExpectation-Maximization (EM) algorithm for MiCE and provide proof of the\nconvergence. Empirically, we evaluate the clustering performance of MiCE on\nfour widely adopted natural image datasets. MiCE achieves significantly better\nresults than various previous methods and a strong contrastive learning\nbaseline.", + "authors": "Tsung Wei Tsai, Chongxuan Li, Jun Zhu", + "published": "2021-05-05", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2011.01613v1", + "title": "Towards a Universal Gating Network for Mixtures of Experts", + "abstract": "The combination and aggregation of knowledge from multiple neural networks\ncan be commonly seen in the form of mixtures of experts. However, such\ncombinations are usually done using networks trained on the same tasks, with\nlittle mention of the combination of heterogeneous pre-trained networks,\nespecially in the data-free regime. This paper proposes multiple data-free\nmethods for the combination of heterogeneous neural networks, ranging from the\nutilization of simple output logit statistics, to training specialized gating\nnetworks. The gating networks decide whether specific inputs belong to specific\nnetworks based on the nature of the expert activations generated. The\nexperiments revealed that the gating networks, including the universal gating\napproach, constituted the most accurate approach, and therefore represent a\npragmatic step towards applications with heterogeneous mixtures of experts in a\ndata-free regime. 
The code for this project is hosted on github at\nhttps://github.com/cwkang1998/network-merging.", + "authors": "Chen Wen Kang, Chua Meng Hong, Tomas Maul", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2302.13750v1", + "title": "MoLE : Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition", + "abstract": "Multi-lingual speech recognition aims to distinguish linguistic expressions\nin different languages and integrate acoustic processing simultaneously. In\ncontrast, current multi-lingual speech recognition research follows a\nlanguage-aware paradigm, mainly targeted to improve recognition performance\nrather than discriminate language characteristics. In this paper, we present a\nmulti-lingual speech recognition network named\nMixture-of-Language-Expert(MoLE), which digests speech in a variety of\nlanguages. Specifically, MoLE analyzes linguistic expression from input speech\nin arbitrary languages, activating a language-specific expert with a\nlightweight language tokenizer. The tokenizer not only activates experts, but\nalso estimates the reliability of the activation. Based on the reliability, the\nactivated expert and the language-agnostic expert are aggregated to represent\nlanguage-conditioned embedding for efficient speech recognition. Our proposed\nmodel is evaluated in 5 languages scenario, and the experimental results show\nthat our structure is advantageous on multi-lingual recognition, especially for\nspeech in low-resource language.", + "authors": "Yoohwan Kwon, Soo-Whan Chung", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.CL", + "cs.SD" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1907.05346v1", + "title": "A Modular Task-oriented Dialogue System Using a Neural Mixture-of-Experts", + "abstract": "End-to-end Task-oriented Dialogue Systems (TDSs) have attracted a lot of\nattention for their superiority (e.g., in terms of global optimization) over\npipeline modularized TDSs. Previous studies on end-to-end TDSs use a\nsingle-module model to generate responses for complex dialogue contexts.\nHowever, no model consistently outperforms the others in all cases. We propose\na neural Modular Task-oriented Dialogue System(MTDS) framework, in which a few\nexpert bots are combined to generate the response for a given dialogue context.\nMTDS consists of a chair bot and several expert bots. Each expert bot is\nspecialized for a particular situation, e.g., one domain, one type of action of\na system, etc. The chair bot coordinates multiple expert bots and adaptively\nselects an expert bot to generate the appropriate response. We further propose\na Token-level Mixture-of-Expert (TokenMoE) model to implement MTDS, where the\nexpert bots predict multiple tokens at each timestamp and the chair bot\ndetermines the final generated token by fully taking into consideration the\noutputs of all expert bots. Both the chair bot and the expert bots are jointly\ntrained in an end-to-end fashion. To verify the effectiveness of TokenMoE, we\ncarry out extensive experiments on a benchmark dataset. 
Compared with the\nbaseline using a single-module model, our TokenMoE improves the performance by\n8.1% of inform rate and 0.8% of success rate.", + "authors": "Jiahuan Pei, Pengjie Ren, Maarten de Rijke", + "published": "2019-07-10", + "updated": "2019-07-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.17749v1", + "title": "Multi-Task Dense Prediction via Mixture of Low-Rank Experts", + "abstract": "Previous multi-task dense prediction methods based on the Mixture of Experts\n(MoE) have received great performance but they neglect the importance of\nexplicitly modeling the global relations among all tasks. In this paper, we\npresent a novel decoder-focused method for multi-task dense prediction, called\nMixture-of-Low-Rank-Experts (MLoRE). To model the global task relationships,\nMLoRE adds a generic convolution path to the original MoE structure, where each\ntask feature can go through this path for explicit parameter sharing.\nFurthermore, to control the parameters and computational cost brought by the\nincrease in the number of experts, we take inspiration from LoRA and propose to\nleverage the low-rank format of a vanilla convolution in the expert network.\nSince the low-rank experts have fewer parameters and can be dynamically\nparameterized into the generic convolution, the parameters and computational\ncost do not change much with the increase of experts. Benefiting from this\ndesign, we increase the number of experts and its reception field to enlarge\nthe representation capacity, facilitating multiple dense tasks learning in a\nunified network. Extensive experiments on the PASCAL-Context and NYUD-v2\nbenchmarks show that our MLoRE achieves superior performance compared to\nprevious state-of-the-art methods on all metrics. Our code is available at\nhttps://github.com/YuqiYang213/MLoRE.", + "authors": "Yuqi Yang, Peng-Tao Jiang, Qibin Hou, Hao Zhang, Jinwei Chen, Bo Li", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2009.06327v1", + "title": "Double-Wing Mixture of Experts for Streaming Recommendations", + "abstract": "Streaming Recommender Systems (SRSs) commonly train recommendation models on\nnewly received data only to address user preference drift, i.e., the changing\nuser preferences towards items. However, this practice overlooks the long-term\nuser preferences embedded in historical data. More importantly, the common\nheterogeneity in data stream greatly reduces the accuracy of streaming\nrecommendations. The reason is that different preferences (or characteristics)\nof different types of users (or items) cannot be well learned by a unified\nmodel. To address these two issues, we propose a Variational and\nReservoir-enhanced Sampling based Double-Wing Mixture of Experts framework,\ncalled VRS-DWMoE, to improve the accuracy of streaming recommendations. In\nVRS-DWMoE, we first devise variational and reservoir-enhanced sampling to\nwisely complement new data with historical data, and thus address the user\npreference drift issue while capturing long-term user preferences. After that,\nwe propose a Double-Wing Mixture of Experts (DWMoE) model to first effectively\nlearn heterogeneous user preferences and item characteristics, and then make\nrecommendations based on them. 
Specifically, DWMoE contains two Mixture of\nExperts (MoE, an effective ensemble learning model) to learn user preferences\nand item characteristics, respectively. Moreover, the multiple experts in each\nMoE learn the preferences (or characteristics) of different types of users (or\nitems) where each expert specializes in one underlying type. Extensive\nexperiments demonstrate that VRS-DWMoE consistently outperforms the\nstate-of-the-art SRSs.", + "authors": "Yan Zhao, Shoujin Wang, Yan Wang, Hongwei Liu, Weizhe Zhang", + "published": "2020-09-14", + "updated": "2020-09-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2302.02043v1", + "title": "mixdistreg: An R Package for Fitting Mixture of Experts Distributional Regression with Adaptive First-order Methods", + "abstract": "This paper presents a high-level description of the R software package\nmixdistreg to fit mixture of experts distributional regression models. The\nproposed framework is implemented in R using the deepregression software\ntemplate, which is based on TensorFlow and follows the neural structured\nadditive learning principle. The software comprises various approaches as\nspecial cases, including mixture density networks and mixture regression\napproaches. Various code examples are given to demonstrate the package's\nfunctionality.", + "authors": "David R\u00fcgamer", + "published": "2023-02-04", + "updated": "2023-02-04", + "primary_cat": "stat.CO", + "cats": [ + "stat.CO" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2008.09662v1", + "title": "Biased Mixtures Of Experts: Enabling Computer Vision Inference Under Data Transfer Limitations", + "abstract": "We propose a novel mixture-of-experts class to optimize computer vision\nmodels in accordance with data transfer limitations at test time. Our approach\npostulates that the minimum acceptable amount of data allowing for\nhighly-accurate results can vary for different input space partitions.\nTherefore, we consider mixtures where experts require different amounts of\ndata, and train a sparse gating function to divide the input space for each\nexpert. By appropriate hyperparameter selection, our approach is able to bias\nmixtures of experts towards selecting specific experts over others. In this\nway, we show that the data transfer optimization between visual sensing and\nprocessing can be solved as a convex optimization problem.To demonstrate the\nrelation between data availability and performance, we evaluate biased mixtures\non a range of mainstream computer vision problems, namely: (i) single shot\ndetection, (ii) image super resolution, and (iii) realtime video action\nclassification. 
For all cases, and when experts constitute modified baselines\nto meet different limits on allowed data utility, biased mixtures significantly\noutperform previous work optimized to meet the same constraints on available\ndata.", + "authors": "Alhabib Abbas, Yiannis Andreopoulos", + "published": "2020-08-21", + "updated": "2020-08-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.IV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2310.02410v1", + "title": "Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness", + "abstract": "Large Mixture of Experts (MoE) models could achieve state-of-the-art quality\non various language tasks, including machine translation task, thanks to the\nefficient model scaling capability with expert parallelism. However, it has\nbrought a fundamental issue of larger memory consumption and increased memory\nbandwidth bottleneck at deployment time. In this paper, we propose Mixture of\nQuantized Experts (MoQE) which is a simple weight-only quantization method\napplying ultra low-bit down to 2-bit quantizations only to expert weights for\nmitigating the increased memory and latency issues of MoE models. We show that\nlow-bit quantization together with the MoE architecture delivers a reliable\nmodel performance while reducing the memory size significantly even without any\nadditional training in most cases. In particular, expert layers in MoE models\nare much more robust to the quantization than conventional feedforward networks\n(FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit\nexpert weights can deliver better model performance than the dense model\ntrained on the same dataset. As a result of low-bit quantization, we show the\nmodel size can be reduced by 79.6% of the original half precision floating\npoint (fp16) MoE model. Combined with an optimized GPU runtime implementation,\nit also achieves 1.24X speed-up on A100 GPUs.", + "authors": "Young Jin Kim, Raffy Fahim, Hany Hassan Awadalla", + "published": "2023-10-03", + "updated": "2023-10-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2205.01848v2", + "title": "Optimizing Mixture of Experts using Dynamic Recompilations", + "abstract": "The Mixture of Experts architecture allows for outrageously large neural\nnetworks by scaling model parameter size independently from computational\ndemand (FLOPs). However, current DNN frameworks cannot effectively support the\ndynamic data flow in Mixture of Experts, and implementations on top of these\nframeworks need to use workarounds that introduce significant overheads. To\naddress the limitation of these frameworks, we present DynaMoE, a DNN library\nthat uses dynamic recompilations to optimize and adapt the use of computational\nresources to the dynamic needs of Mixture of Experts models. Our evaluation\nshows that DynaMoE achieves a 1.8x speedup and supports 2.3x larger model sizes\nwhen compared to existing MoE systems, even when not using recompilations. 
We\nthen present further optimizations enabled by dynamic recompilations that yield\nan additional 1.7x speedup while simultaneously reducing memory pressure and\nimproving model quality.", + "authors": "Ferdinand Kossmann, Zhihao Jia, Alex Aiken", + "published": "2022-05-04", + "updated": "2022-08-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2109.11449v2", + "title": "Dynamic Mixture of Experts Models for Online Prediction", + "abstract": "A mixture of experts models the conditional density of a response variable\nusing a mixture of regression models with covariate-dependent mixture weights.\nWe extend the finite mixture of experts model by allowing the parameters in\nboth the mixture components and the weights to evolve in time by following\nrandom walk processes. Inference for time-varying parameters in richly\nparameterized mixture of experts models is challenging. We propose a sequential\nMonte Carlo algorithm for online inference and based on a tailored proposal\ndistribution built on ideas from linear Bayes methods and the EM algorithm. The\nmethod gives a unified treatment for mixtures with time-varying parameters,\nincluding the special case of static parameters. We assess the properties of\nthe method on simulated data and on industrial data where the aim is to predict\nsoftware faults in a continuously upgraded large-scale software project.", + "authors": "Parfait Munezero, Mattias Villani, Robert Kohn", + "published": "2021-09-23", + "updated": "2022-10-13", + "primary_cat": "stat.CO", + "cats": [ + "stat.CO", + "stat.AP" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2207.09094v1", + "title": "MoEC: Mixture of Expert Clusters", + "abstract": "Sparsely Mixture of Experts (MoE) has received great interest due to its\npromising scaling capability with affordable computational overhead. MoE\nconverts dense layers into sparse experts, and utilizes a gated routing network\nto make experts conditionally activated. However, as the number of experts\ngrows, MoE with outrageous parameters suffers from overfitting and sparse data\nallocation. Such problems are especially severe on tasks with limited data,\nthus hindering the progress for MoE models to improve performance by scaling\nup. In this work, we propose Mixture of Expert Clusters - a general approach to\nenable expert layers to learn more diverse and appropriate knowledge by\nimposing variance-based constraints on the routing stage. We further propose a\ncluster-level expert dropout strategy specifically designed for the expert\ncluster structure. Our experiments reveal that MoEC could improve performance\non machine translation and natural language understanding tasks, and raise the\nperformance upper bound for scaling up experts under limited data. We also\nverify that MoEC plays a positive role in mitigating overfitting and sparse\ndata allocation.", + "authors": "Yuan Xie, Shaohan Huang, Tianyu Chen, Furu Wei", + "published": "2022-07-19", + "updated": "2022-07-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1904.09948v1", + "title": "PLUME: Polyhedral Learning Using Mixture of Experts", + "abstract": "In this paper, we propose a novel mixture of expert architecture for learning\npolyhedral classifiers. 
We learn the parameters of the classifier using an\nexpectation maximization algorithm. We derive the generalization bounds of the\nproposed approach. Through an extensive simulation study, we show that the\nproposed method performs comparably to other state-of-the-art approaches.", + "authors": "Kulin Shah, P. S. Sastry, Naresh Manwani", + "published": "2019-04-22", + "updated": "2019-04-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2404.15045v1", + "title": "Multi-Head Mixture-of-Experts", + "abstract": "Sparse Mixtures of Experts (SMoE) scales model capacity without significant\nincreases in training and inference costs, but exhibits the following two\nissues: (1) Low expert activation, where only a small subset of experts are\nactivated for optimization. (2) Lacking fine-grained analytical capabilities\nfor multiple semantic concepts within individual tokens. We propose Multi-Head\nMixture-of-Experts (MH-MoE), which employs a multi-head mechanism to split each\ntoken into multiple sub-tokens. These sub-tokens are then assigned to and\nprocessed by a diverse set of experts in parallel, and seamlessly reintegrated\ninto the original token form. The multi-head mechanism enables the model to\ncollectively attend to information from various representation spaces within\ndifferent experts, while significantly enhances expert activation, thus deepens\ncontext understanding and alleviate overfitting. Moreover, our MH-MoE is\nstraightforward to implement and decouples from other SMoE optimization\nmethods, making it easy to integrate with other SMoE models for enhanced\nperformance. Extensive experimental results across three tasks: English-focused\nlanguage modeling, Multi-lingual language modeling and Masked multi-modality\nmodeling tasks, demonstrate the effectiveness of MH-MoE.", + "authors": "Xun Wu, Shaohan Huang, Wenhui Wang, Furu Wei", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2004.03751v4", + "title": "Robust Fitting of Mixture Models using Weighted Complete Estimating Equations", + "abstract": "Mixture modeling, which considers the potential heterogeneity in data, is\nwidely adopted for classification and clustering problems. Mixture models can\nbe estimated using the Expectation-Maximization algorithm, which works with the\ncomplete estimating equations conditioned by the latent membership variables of\nthe cluster assignment based on the hierarchical expression of mixture models.\nHowever, when the mixture components have light tails such as a normal\ndistribution, the mixture model can be sensitive to outliers. This study\nproposes a method of weighted complete estimating equations (WCE) for the\nrobust fitting of mixture models. Our WCE introduces weights to complete\nestimating equations such that the weights can automatically downweight the\noutliers. The weights are constructed similarly to the density power divergence\nfor mixture models, but in our WCE, they depend only on the component\ndistributions and not on the whole mixture. A novel\nexpectation-estimating-equation (EEE) algorithm is also developed to solve the\nWCE. 
For illustrative purposes, a multivariate Gaussian mixture, a mixture of\nexperts, and a multivariate skew normal mixture are considered, and how our EEE\nalgorithm can be implemented for these specific models is described. The\nnumerical performance of the proposed robust estimation method was examined\nusing simulated and real datasets.", + "authors": "Shonosuke Sugasawa, Genya Kobayashi", + "published": "2020-04-08", + "updated": "2022-03-17", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2107.04694v1", + "title": "Lifelong Mixture of Variational Autoencoders", + "abstract": "In this paper, we propose an end-to-end lifelong learning mixture of experts.\nEach expert is implemented by a Variational Autoencoder (VAE). The experts in\nthe mixture system are jointly trained by maximizing a mixture of individual\ncomponent evidence lower bounds (MELBO) on the log-likelihood of the given\ntraining samples. The mixing coefficients in the mixture, control the\ncontributions of each expert in the goal representation. These are sampled from\na Dirichlet distribution whose parameters are determined through non-parametric\nestimation during lifelong learning. The model can learn new tasks fast when\nthese are similar to those previously learnt. The proposed Lifelong mixture of\nVAE (L-MVAE) expands its architecture with new components when learning a\ncompletely new task. After the training, our model can automatically determine\nthe relevant expert to be used when fed with new data samples. This mechanism\nbenefits both the memory efficiency and the required computational cost as only\none expert is used during the inference. The L-MVAE inference model is able to\nperform interpolation in the joint latent space across the data domains\nassociated with different tasks and is shown to be efficient for disentangled\nlearning representation.", + "authors": "Fei Ye, Adrian G. Bors", + "published": "2021-07-09", + "updated": "2021-07-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.05220v1", + "title": "On Parameter Estimation in Deviated Gaussian Mixture of Experts", + "abstract": "We consider the parameter estimation problem in the deviated Gaussian mixture\nof experts in which the data are generated from $(1 - \\lambda^{\\ast}) g_0(Y|\nX)+ \\lambda^{\\ast} \\sum_{i = 1}^{k_{\\ast}} p_{i}^{\\ast}\nf(Y|(a_{i}^{\\ast})^{\\top}X+b_i^{\\ast},\\sigma_{i}^{\\ast})$, where $X, Y$ are\nrespectively a covariate vector and a response variable, $g_{0}(Y|X)$ is a\nknown function, $\\lambda^{\\ast} \\in [0, 1]$ is true but unknown mixing\nproportion, and $(p_{i}^{\\ast}, a_{i}^{\\ast}, b_{i}^{\\ast}, \\sigma_{i}^{\\ast})$\nfor $1 \\leq i \\leq k^{\\ast}$ are unknown parameters of the Gaussian mixture of\nexperts. This problem arises from the goodness-of-fit test when we would like\nto test whether the data are generated from $g_{0}(Y|X)$ (null hypothesis) or\nthey are generated from the whole mixture (alternative hypothesis). Based on\nthe algebraic structure of the expert functions and the distinguishability\nbetween $g_0$ and the mixture part, we construct novel Voronoi-based loss\nfunctions to capture the convergence rates of maximum likelihood estimation\n(MLE) for our models. 
We further demonstrate that our proposed loss functions\ncharacterize the local convergence rates of parameter estimation more\naccurately than the generalized Wasserstein, a loss function being commonly\nused for estimating parameters in the Gaussian mixture of experts.", + "authors": "Huy Nguyen, Khai Nguyen, Nhat Ho", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.12656v2", + "title": "HyperMoE: Paying Attention to Unselected Experts in Mixture of Experts via Dynamic Transfer", + "abstract": "The Mixture of Experts (MoE) for language models has been proven effective in\naugmenting the capacity of models by dynamically routing each input token to a\nspecific subset of experts for processing. Despite the success, most existing\nmethods face a challenge for balance between sparsity and the availability of\nexpert knowledge: enhancing performance through increased use of expert\nknowledge often results in diminishing sparsity during expert selection. To\nmitigate this contradiction, we propose HyperMoE, a novel MoE framework built\nupon Hypernetworks. This framework integrates the computational processes of\nMoE with the concept of knowledge transferring in multi-task learning. Specific\nmodules generated based on the information of unselected experts serve as\nsupplementary information, which allows the knowledge of experts not selected\nto be used while maintaining selection sparsity. Our comprehensive empirical\nevaluations across multiple datasets and backbones establish that HyperMoE\nsignificantly outperforms existing MoE methods under identical conditions\nconcerning the number of experts.", + "authors": "Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, Jie Fu", + "published": "2024-02-20", + "updated": "2024-02-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.06966v1", + "title": "Acquiring Diverse Skills using Curriculum Reinforcement Learning with Mixture of Experts", + "abstract": "Reinforcement learning (RL) is a powerful approach for acquiring a\ngood-performing policy. However, learning diverse skills is challenging in RL\ndue to the commonly used Gaussian policy parameterization. We propose\n\\textbf{Di}verse \\textbf{Skil}l \\textbf{L}earning (Di-SkilL), an RL method for\nlearning diverse skills using Mixture of Experts, where each expert formalizes\na skill as a contextual motion primitive. Di-SkilL optimizes each expert and\nits associate context distribution to a maximum entropy objective that\nincentivizes learning diverse skills in similar contexts. The per-expert\ncontext distribution enables automatic curricula learning, allowing each expert\nto focus on its best-performing sub-region of the context space. To overcome\nhard discontinuities and multi-modalities without any prior knowledge of the\nenvironment's unknown context probability space, we leverage energy-based\nmodels to represent the per-expert context distributions and demonstrate how we\ncan efficiently train them using the standard policy gradient objective. 
We\nshow on challenging robot simulation tasks that Di-SkilL can learn diverse and\nperformant skills.", + "authors": "Onur Celik, Aleksandar Taranovic, Gerhard Neumann", + "published": "2024-03-11", + "updated": "2024-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2312.12379v4", + "title": "Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning", + "abstract": "Instruction tuning of Large Vision-language Models (LVLMs) has revolutionized\nthe development of versatile models with zero-shot generalization across a wide\nrange of downstream vision-language tasks. However, the diversity of training\ntasks of different sources and formats would lead to inevitable task conflicts,\nwhere different tasks conflict for the same set of model parameters, resulting\nin sub-optimal instructionfollowing abilities. To address that, we propose the\nMixture of Clusterconditional LoRA Experts (MoCLE), a novel Mixture of Experts\n(MoE) architecture designed to activate the task-customized model parameters\nbased on the instruction clusters. A separate universal expert is further\nincorporated to improve generalization capabilities of MoCLE for novel\ninstructions. Extensive experiments on 11 zero-shot tasks demonstrate the\neffectiveness of MoCLE.", + "authors": "Yunhao Gou, Zhili Liu, Kai Chen, Lanqing Hong, Hang Xu, Aoxue Li, Dit-Yan Yeung, James T. Kwok, Yu Zhang", + "published": "2023-12-19", + "updated": "2024-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2204.08396v1", + "title": "StableMoE: Stable Routing Strategy for Mixture of Experts", + "abstract": "The Mixture-of-Experts (MoE) technique can scale up the model size of\nTransformers with an affordable computational overhead. We point out that\nexisting learning-to-route MoE methods suffer from the routing fluctuation\nissue, i.e., the target expert of the same input may change along with\ntraining, but only one expert will be activated for the input during inference.\nThe routing fluctuation tends to harm sample efficiency because the same input\nupdates different experts but only one is finally used. In this paper, we\npropose StableMoE with two training stages to address the routing fluctuation\nproblem. In the first training stage, we learn a balanced and cohesive routing\nstrategy and distill it into a lightweight router decoupled from the backbone\nmodel. In the second training stage, we utilize the distilled router to\ndetermine the token-to-expert assignment and freeze it for a stable routing\nstrategy. We validate our method on language modeling and multilingual machine\ntranslation. The results show that StableMoE outperforms existing MoE methods\nin terms of both convergence speed and performance.", + "authors": "Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei", + "published": "2022-04-18", + "updated": "2022-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2302.14703v1", + "title": "Improving Expert Specialization in Mixture of Experts", + "abstract": "Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated\nmodular neural network architecture. 
There is renewed interest in MoE because\nthe conditional computation allows only parts of the network to be used during\neach inference, as was recently demonstrated in large scale natural language\nprocessing models. MoE is also of potential interest for continual learning, as\nexperts may be reused for new tasks, and new experts introduced. The gate in\nthe MoE architecture learns task decompositions and individual experts learn\nsimpler functions appropriate to the gate's decomposition. In this paper: (1)\nwe show that the original MoE architecture and its training method do not\nguarantee intuitive task decompositions and good expert utilization, indeed\nthey can fail spectacularly even for simple data such as MNIST and\nFashionMNIST; (2) we introduce a novel gating architecture, similar to\nattention, that improves performance and results in a lower entropy task\ndecomposition; and (3) we introduce a novel data-driven regularization that\nimproves expert specialization. We empirically validate our methods on MNIST,\nFashionMNIST and CIFAR-100 datasets.", + "authors": "Yamuna Krishnamurthy, Chris Watkins, Thomas Gaertner", + "published": "2023-02-28", + "updated": "2023-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1612.06879v1", + "title": "Robust mixture of experts modeling using the skew $t$ distribution", + "abstract": "Mixture of Experts (MoE) is a popular framework in the fields of statistics\nand machine learning for modeling heterogeneity in data for regression,\nclassification and clustering. MoE for continuous data are usually based on the\nnormal distribution. However, it is known that for data with asymmetric\nbehavior, heavy tails and atypical observations, the use of the normal\ndistribution is unsuitable. We introduce a new robust non-normal mixture of\nexperts modeling using the skew $t$ distribution. The proposed skew $t$ mixture\nof experts, named STMoE, handles these issues of the normal mixtures experts\nregarding possibly skewed, heavy-tailed and noisy data. We develop a dedicated\nexpectation conditional maximization (ECM) algorithm to estimate the model\nparameters by monotonically maximizing the observed data log-likelihood. We\ndescribe how the presented model can be used in prediction and in model-based\nclustering of regression data. Numerical experiments carried out on simulated\ndata show the effectiveness and the robustness of the proposed model in fitting\nnon-linear regression functions as well as in model-based clustering. Then, the\nproposed model is applied to the real-world data of tone perception for musical\ndata analysis, and the one of temperature anomalies for the analysis of climate\nchange data. The obtained results confirm the usefulness of the model for\npractical data analysis applications.", + "authors": "Faicel Chamroukhi", + "published": "2016-12-09", + "updated": "2016-12-09", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.LG", + "stat.ML", + "62, 62F, 62H30, 62h", + "G.3; I.2.6; I.5.1" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2304.13833v2", + "title": "Mixtures of Gaussian process experts based on kernel stick-breaking processes", + "abstract": "Mixtures of Gaussian process experts is a class of models that can\nsimultaneously address two of the key limitations inherent in standard Gaussian\nprocesses: scalability and predictive performance. 
In particular, models that\nuse Dirichlet processes as gating functions permit straightforward\ninterpretation and automatic selection of the number of experts in a mixture.\nWhile the existing models are intuitive and capable of capturing\nnon-stationarity, multi-modality and heteroskedasticity, the simplicity of\ntheir gating functions may limit the predictive performance when applied to\ncomplex data-generating processes. Capitalising on the recent advancement in\nthe dependent Dirichlet processes literature, we propose a new mixture model of\nGaussian process experts based on kernel stick-breaking processes. Our model\nmaintains the intuitive appeal yet improve the performance of the existing\nmodels. To make it practical, we design a sampler for posterior computation\nbased on the slice sampling. The model behaviour and improved predictive\nperformance are demonstrated in experiments using six datasets.", + "authors": "Yuji Saikai, Khue-Dung Dang", + "published": "2023-04-26", + "updated": "2023-05-05", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2312.03292v1", + "title": "Enhancing Molecular Property Prediction via Mixture of Collaborative Experts", + "abstract": "Molecular Property Prediction (MPP) task involves predicting biochemical\nproperties based on molecular features, such as molecular graph structures,\ncontributing to the discovery of lead compounds in drug development. To address\ndata scarcity and imbalance in MPP, some studies have adopted Graph Neural\nNetworks (GNN) as an encoder to extract commonalities from molecular graphs.\nHowever, these approaches often use a separate predictor for each task,\nneglecting the shared characteristics among predictors corresponding to\ndifferent tasks. In response to this limitation, we introduce the GNN-MoCE\narchitecture. It employs the Mixture of Collaborative Experts (MoCE) as\npredictors, exploiting task commonalities while confronting the homogeneity\nissue in the expert pool and the decision dominance dilemma within the expert\ngroup. To enhance expert diversity for collaboration among all experts, the\nExpert-Specific Projection method is proposed to assign a unique projection\nperspective to each expert. To balance decision-making influence for\ncollaboration within the expert group, the Expert-Specific Loss is presented to\nintegrate individual expert loss into the weighted decision loss of the group\nfor more equitable training. Benefiting from the enhancements of MoCE in expert\ncreation, dynamic expert group formation, and experts' collaboration, our model\ndemonstrates superior performance over traditional methods on 24 MPP datasets,\nespecially in tasks with limited data or high imbalance.", + "authors": "Xu Yao, Shuang Liang, Songqiao Han, Hailiang Huang", + "published": "2023-12-06", + "updated": "2023-12-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "q-bio.QM" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2401.15969v2", + "title": "Routers in Vision Mixture of Experts: An Empirical Study", + "abstract": "Mixture-of-Experts (MoE) models are a promising way to scale up model\ncapacity without significantly increasing computational cost. A key component\nof MoEs is the router, which decides which subset of parameters (experts)\nprocess which feature embeddings (tokens). 
In this paper, we present a\ncomprehensive study of routers in MoEs for computer vision tasks. We introduce\na unified MoE formulation that subsumes different MoEs with two parametric\nrouting tensors. This formulation covers both sparse MoE, which uses a binary\nor hard assignment between experts and tokens, and soft MoE, which uses a soft\nassignment between experts and weighted combinations of tokens. Routers for\nsparse MoEs can be further grouped into two variants: Token Choice, which\nmatches experts to each token, and Expert Choice, which matches tokens to each\nexpert. We conduct head-to-head experiments with 6 different routers, including\nexisting routers from prior work and new ones we introduce. We show that (i)\nmany routers originally developed for language modeling can be adapted to\nperform strongly in vision tasks, (ii) in sparse MoE, Expert Choice routers\ngenerally outperform Token Choice routers, and (iii) soft MoEs generally\noutperform sparse MoEs with a fixed compute budget. These results provide new\ninsights regarding the crucial role of routers in vision MoE models.", + "authors": "Tianlin Liu, Mathieu Blondel, Carlos Riquelme, Joan Puigcerver", + "published": "2024-01-29", + "updated": "2024-04-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2402.00893v1", + "title": "MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts", + "abstract": "The application of mixture-of-experts (MoE) is gaining popularity due to its\nability to improve model's performance. In an MoE structure, the gate layer\nplays a significant role in distinguishing and routing input features to\ndifferent experts. This enables each expert to specialize in processing their\ncorresponding sub-tasks. However, the gate's routing mechanism also gives rise\nto narrow vision: the individual MoE's expert fails to use more samples in\nlearning the allocated sub-task, which in turn limits the MoE to further\nimprove its generalization ability. To effectively address this, we propose a\nmethod called Mixture-of-Distilled-Expert (MoDE), which applies moderate mutual\ndistillation among experts to enable each expert to pick up more features\nlearned by other experts and gain more accurate perceptions on their original\nallocated sub-tasks. We conduct plenty experiments including tabular, NLP and\nCV datasets, which shows MoDE's effectiveness, universality and robustness.\nFurthermore, we develop a parallel study through innovatively constructing\n\"expert probing\", to experimentally prove why MoDE works: moderate distilling\nknowledge can improve each individual expert's test performances on their\nassigned tasks, leading to MoE's overall performance improvement.", + "authors": "Zhitian Xie, Yinger Zhang, Chenyi Zhuang, Qitao Shi, Zhining Liu, Jinjie Gu, Guannan Zhang", + "published": "2024-01-31", + "updated": "2024-01-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1110.2058v2", + "title": "Convergence Rates for Mixture-of-Experts", + "abstract": "In mixtures-of-experts (ME) model, where a number of submodels (experts) are\ncombined, there have been two longstanding problems: (i) how many experts\nshould be chosen, given the size of the training data? 
(ii) given the total\nnumber of parameters, is it better to use a few very complex experts, or is it\nbetter to combine many simple experts? In this paper, we try to provide some\ninsights to these problems through a theoretic study on a ME structure where\n$m$ experts are mixed, with each expert being related to a polynomial\nregression model of order $k$. We study the convergence rate of the maximum\nlikelihood estimator (MLE), in terms of how fast the Kullback-Leibler\ndivergence of the estimated density converges to the true density, when the\nsample size $n$ increases. The convergence rate is found to be dependent on\nboth $m$ and $k$, and certain choices of $m$ and $k$ are found to produce\noptimal convergence rates. Therefore, these results shed light on the two\naforementioned important problems: on how to choose $m$, and on how $m$ and $k$\nshould be compromised, for achieving good convergence rates.", + "authors": "Eduardo F. Mendes, Wenxin Jiang", + "published": "2011-10-10", + "updated": "2011-11-01", + "primary_cat": "math.ST", + "cats": [ + "math.ST", + "stat.ME", + "stat.ML", + "stat.TH" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1703.09302v1", + "title": "Speech Enhancement using a Deep Mixture of Experts", + "abstract": "In this study we present a Deep Mixture of Experts (DMoE) neural-network\narchitecture for single microphone speech enhancement. By contrast to most\nspeech enhancement algorithms that overlook the speech variability mainly\ncaused by phoneme structure, our framework comprises a set of deep neural\nnetworks (DNNs), each one of which is an 'expert' in enhancing a given speech\ntype corresponding to a phoneme. A gating DNN determines which expert is\nassigned to a given speech segment. A speech presence probability (SPP) is then\nobtained as a weighted average of the expert SPP decisions, with the weights\ndetermined by the gating DNN. A soft spectral attenuation, based on the SPP, is\nthen applied to enhance the noisy speech signal. The experts and the gating\ncomponents of the DMoE network are trained jointly. As part of the training,\nspeech clustering into different subsets is performed in an unsupervised\nmanner. Therefore, unlike previous methods, a phoneme-labeled database is not\nrequired for the training procedure. A series of experiments with different\nnoise types verified the applicability of the new algorithm to the task of\nspeech enhancement. The proposed scheme outperforms other schemes that either\ndo not consider phoneme structure or use a simpler training methodology.", + "authors": "Shlomo E. Chazan, Jacob Goldberger, Sharon Gannot", + "published": "2017-03-27", + "updated": "2017-03-27", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2312.04693v2", + "title": "GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts", + "abstract": "Graph data are inherently complex and heterogeneous, leading to a high\nnatural diversity of distributional shifts. However, it remains unclear how to\nbuild machine learning architectures that generalize to complex non-synthetic\ndistributional shifts naturally occurring in the real world. Here we develop\nGraphMETRO, a Graph Neural Network architecture, that reliably models natural\ndiversity and captures complex distributional shifts. 
GraphMETRO employs a\nMixture-of-Experts (MoE) architecture with a gating model and multiple expert\nmodels, where each expert model targets a specific distributional shift to\nproduce a shift-invariant representation, and the gating model identifies shift\ncomponents. Additionally, we design a novel objective that aligns the\nrepresentations from different expert models to ensure smooth optimization.\nGraphMETRO achieves state-of-the-art results on four datasets from GOOD\nbenchmark comprised of complex and natural real-world distribution shifts,\nimproving by 67% and 4.2% on WebKB and Twitch datasets.", + "authors": "Shirley Wu, Kaidi Cao, Bruno Ribeiro, James Zou, Jure Leskovec", + "published": "2023-12-07", + "updated": "2024-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1702.00372v1", + "title": "Visual Saliency Prediction Using a Mixture of Deep Neural Networks", + "abstract": "Visual saliency models have recently begun to incorporate deep learning to\nachieve predictive capacity much greater than previous unsupervised methods.\nHowever, most existing models predict saliency using local mechanisms limited\nto the receptive field of the network. We propose a model that incorporates\nglobal scene semantic information in addition to local information gathered by\na convolutional neural network. Our model is formulated as a mixture of\nexperts. Each expert network is trained to predict saliency for a set of\nclosely related images. The final saliency map is computed as a weighted\nmixture of the expert networks' output, with weights determined by a separate\ngating network. This gating network is guided by global scene information to\npredict weights. The expert networks and the gating network are trained\nsimultaneously in an end-to-end manner. We show that our mixture formulation\nleads to improvement in performance over an otherwise identical non-mixture\nmodel that does not incorporate global scene information.", + "authors": "Samuel Dodge, Lina Karam", + "published": "2017-02-01", + "updated": "2017-02-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.03994v1", + "title": "Video Relationship Detection Using Mixture of Experts", + "abstract": "Machine comprehension of visual information from images and videos by neural\nnetworks faces two primary challenges. Firstly, there exists a computational\nand inference gap in connecting vision and language, making it difficult to\naccurately determine which object a given agent acts on and represent it\nthrough language. Secondly, classifiers trained by a single, monolithic neural\nnetwork often lack stability and generalization. To overcome these challenges,\nwe introduce MoE-VRD, a novel approach to visual relationship detection\nutilizing a mixture of experts. MoE-VRD identifies language triplets in the\nform of < subject, predicate, object> tuples to extract relationships from\nvisual processing. Leveraging recent advancements in visual relationship\ndetection, MoE-VRD addresses the requirement for action recognition in\nestablishing relationships between subjects (acting) and objects (being acted\nupon). In contrast to single monolithic networks, MoE-VRD employs multiple\nsmall models as experts, whose outputs are aggregated. Each expert in MoE-VRD\nspecializes in visual relationship learning and object tagging. 
By utilizing a\nsparsely-gated mixture of experts, MoE-VRD enables conditional computation and\nsignificantly enhances neural network capacity without increasing computational\ncomplexity. Our experimental results demonstrate that the conditional\ncomputation capabilities and scalability of the mixture-of-experts approach\nlead to superior performance in visual relationship detection compared to\nstate-of-the-art methods.", + "authors": "Ala Shaabana, Zahra Gharaee, Paul Fieguth", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2403.07816v1", + "title": "Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM", + "abstract": "We investigate efficient methods for training Large Language Models (LLMs) to\npossess capabilities in multiple specialized domains, such as coding, math\nreasoning and world knowledge. Our method, named Branch-Train-MiX (BTX), starts\nfrom a seed model, which is branched to train experts in embarrassingly\nparallel fashion with high throughput and reduced communication cost. After\nindividual experts are asynchronously trained, BTX brings together their\nfeedforward parameters as experts in Mixture-of-Expert (MoE) layers and\naverages the remaining parameters, followed by an MoE-finetuning stage to learn\ntoken-level routing. BTX generalizes two special cases, the Branch-Train-Merge\nmethod, which does not have the MoE finetuning stage to learn routing, and\nsparse upcycling, which omits the stage of training experts asynchronously.\nCompared to alternative approaches, BTX achieves the best accuracy-efficiency\ntradeoff.", + "authors": "Sainbayar Sukhbaatar, Olga Golovneva, Vasu Sharma, Hu Xu, Xi Victoria Lin, Baptiste Rozi\u00e8re, Jacob Kahn, Daniel Li, Wen-tau Yih, Jason Weston, Xian Li", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2309.13850v2", + "title": "Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts", + "abstract": "Top-K sparse softmax gating mixture of experts has been widely used for\nscaling up massive deep-learning architectures without increasing the\ncomputational cost. Despite its popularity in real-world applications, the\ntheoretical understanding of that gating function has remained an open problem.\nThe main challenge comes from the structure of the top-K sparse softmax gating\nfunction, which partitions the input space into multiple regions with distinct\nbehaviors. By focusing on a Gaussian mixture of experts, we establish\ntheoretical results on the effects of the top-K sparse softmax gating function\non both density and parameter estimations. Our results hinge upon defining\nnovel loss functions among parameters to capture different behaviors of the\ninput regions. When the true number of experts $k_{\\ast}$ is known, we\ndemonstrate that the convergence rates of density and parameter estimations are\nboth parametric on the sample size. 
However, when $k_{\\ast}$ becomes unknown\nand the true model is over-specified by a Gaussian mixture of $k$ experts where\n$k > k_{\\ast}$, our findings suggest that the number of experts selected from\nthe top-K sparse softmax gating function must exceed the total cardinality of a\ncertain number of Voronoi cells associated with the true parameters to\nguarantee the convergence of the density estimation. Moreover, while the\ndensity estimation rate remains parametric under this setting, the parameter\nestimation rates become substantially slow due to an intrinsic interaction\nbetween the softmax gating and expert functions.", + "authors": "Huy Nguyen, Pedram Akbarian, Fanqi Yan, Nhat Ho", + "published": "2023-09-25", + "updated": "2024-02-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2309.14976v4", + "title": "MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection", + "abstract": "Combining the strengths of many existing predictors to obtain a Mixture of\nExperts which is superior to its individual components is an effective way to\nimprove the performance without having to develop new architectures or train a\nmodel from scratch. However, surprisingly, we find that na\\\"ively combining\nexpert object detectors in a similar way to Deep Ensembles, can often lead to\ndegraded performance. We identify that the primary cause of this issue is that\nthe predictions of the experts do not match their performance, a term referred\nto as miscalibration. Consequently, the most confident detector dominates the\nfinal predictions, preventing the mixture from leveraging all the predictions\nfrom the experts appropriately. To address this, when constructing the Mixture\nof Experts, we propose to combine their predictions in a manner which reflects\nthe individual performance of the experts; an objective we achieve by first\ncalibrating the predictions before filtering and refining them. We term this\napproach the Mixture of Calibrated Experts and demonstrate its effectiveness\nthrough extensive experiments on 5 different detection tasks using a variety of\ndetectors, showing that it: (i) improves object detectors on COCO and instance\nsegmentation methods on LVIS by up to $\\sim 2.5$ AP; (ii) reaches\nstate-of-the-art on COCO test-dev with $65.1$ AP and on DOTA with $82.62$\n$\\mathrm{AP_{50}}$; (iii) outperforms single models consistently on recent\ndetection tasks such as Open Vocabulary Object Detection.", + "authors": "Kemal Oksuz, Selim Kuzucu, Tom Joy, Puneet K. Dokania", + "published": "2023-09-26", + "updated": "2024-02-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1811.10740v2", + "title": "Mixture of Regression Experts in fMRI Encoding", + "abstract": "fMRI semantic category understanding using linguistic encoding models attempt\nto learn a forward mapping that relates stimuli to the corresponding brain\nactivation. Classical encoding models use linear multi-variate methods to\npredict the brain activation (all voxels) given the stimulus. However, these\nmethods essentially assume multiple regions as one large uniform region or\nseveral independent regions, ignoring connections among them. 
In this paper, we\npresent a mixture of experts-based model where a group of experts captures\nbrain activity patterns related to particular regions of interest (ROI) and\nalso show the discrimination across different experts. The model is trained\nword stimuli encoded as 25-dimensional feature vectors as input and the\ncorresponding brain responses as output. Given a new word (25-dimensional\nfeature vector), it predicts the entire brain activation as the linear\ncombination of multiple experts brain activations. We argue that each expert\nlearns a certain region of brain activations corresponding to its category of\nwords, which solves the problem of identifying the regions with a simple\nencoding model. We showcase that proposed mixture of experts-based model indeed\nlearns region-based experts to predict the brain activations with high spatial\naccuracy.", + "authors": "Subba Reddy Oota, Adithya Avvaru, Naresh Manwani, Raju S. Bapi", + "published": "2018-11-26", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2110.04260v3", + "title": "Taming Sparsely Activated Transformer with Stochastic Experts", + "abstract": "Sparsely activated models (SAMs), such as Mixture-of-Experts (MoE), can\neasily scale to have outrageously large amounts of parameters without\nsignificant increase in computational cost. However, SAMs are reported to be\nparameter inefficient such that larger models do not always lead to better\nperformance. While most on-going research focuses on improving SAMs models by\nexploring methods of routing inputs to experts, our analysis reveals that such\nresearch might not lead to the solution we expect, i.e., the commonly-used\nrouting methods based on gating mechanisms do not work better than randomly\nrouting inputs to experts. In this paper, we propose a new expert-based model,\nTHOR (Transformer witH StOchastic ExpeRts). Unlike classic expert-based models,\nsuch as the Switch Transformer, experts in THOR are randomly activated for each\ninput during training and inference. THOR models are trained using a\nconsistency regularized loss, where experts learn not only from training data\nbut also from other experts as teachers, such that all the experts make\nconsistent predictions. We validate the effectiveness of THOR on machine\ntranslation tasks. Results show that THOR models are more parameter efficient\nin that they significantly outperform the Transformer and MoE models across\nvarious settings. For example, in multilingual translation, THOR outperforms\nthe Switch Transformer by 2 BLEU scores, and obtains the same BLEU score as\nthat of a state-of-the-art MoE model that is 18 times larger. Our code is\npublicly available at:\nhttps://github.com/microsoft/Stochastic-Mixture-of-Experts.", + "authors": "Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Tuo Zhao, Jianfeng Gao", + "published": "2021-10-08", + "updated": "2022-02-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1903.07756v1", + "title": "Hierarchical Routing Mixture of Experts", + "abstract": "In regression tasks the distribution of the data is often too complex to be\nfitted by a single model. In contrast, partition-based models are developed\nwhere data is divided and fitted by local models. 
These models partition the\ninput space and do not leverage the input-output dependency of\nmultimodal-distributed data, and strong local models are needed to make good\npredictions. Addressing these problems, we propose a binary tree-structured\nhierarchical routing mixture of experts (HRME) model that has classifiers as\nnon-leaf node experts and simple regression models as leaf node experts. The\nclassifier nodes jointly soft-partition the input-output space based on the\nnatural separateness of multimodal data. This enables simple leaf experts to be\neffective for prediction. Further, we develop a probabilistic framework for the\nHRME model, and propose a recursive Expectation-Maximization (EM) based\nalgorithm to learn both the tree structure and the expert models. Experiments\non a collection of regression tasks validate the effectiveness of our method\ncompared to a variety of other regression models.", + "authors": "Wenbo Zhao, Yang Gao, Shahan Ali Memon, Bhiksha Raj, Rita Singh", + "published": "2019-03-18", + "updated": "2019-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/2310.01334v2", + "title": "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy", + "abstract": "Sparsely activated Mixture-of-Experts (SMoE) has shown promise to scale up\nthe learning capacity of neural networks, however, they have issues like (a)\nHigh Memory Usage, due to duplication of the network layers into multiple\ncopies as experts; and (b) Redundancy in Experts, as common learning-based\nrouting policies suffer from representational collapse. Therefore, vanilla SMoE\nmodels are memory inefficient and non-scalable, especially for\nresource-constrained downstream scenarios. In this paper, we ask: Can we craft\na compact SMoE model by consolidating expert information? What is the best\nrecipe to merge multiple experts into fewer but more knowledgeable experts? Our\npilot investigation reveals that conventional model merging methods fail to be\neffective in such expert merging for SMoE. The potential reasons are: (1)\nredundant information overshadows critical experts; (2) appropriate neuron\npermutation for each expert is missing to bring all of them in alignment. To\naddress this, we propose M-SMoE, which leverages routing statistics to guide\nexpert merging. Specifically, it starts with neuron permutation alignment for\nexperts; then, dominant experts and their \"group members\" are formed; lastly,\nevery expert group is merged into a single expert by utilizing each expert's\nactivation frequency as their weight for merging, thus diminishing the impact\nof insignificant experts. Moreover, we observed that our proposed merging\npromotes a low dimensionality in the merged expert's weight space, naturally\npaving the way for additional compression. Hence, our final method, MC-SMoE\n(i.e., Merge, then Compress SMoE), further decomposes the merged experts into\nlow-rank and structural sparse alternatives. Extensive experiments across 8\nbenchmarks validate the effectiveness of MC-SMoE. 
For instance, our MC-SMoE\nachieves up to 80% memory and a 20% FLOPs reduction, with virtually no loss in\nperformance.", + "authors": "Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen", + "published": "2023-10-02", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Mixture AND of AND Experts" + }, + { + "url": "http://arxiv.org/abs/1806.08200v1", + "title": "Mixtures of Experts Models", + "abstract": "Mixtures of experts models provide a framework in which covariates may be\nincluded in mixture models. This is achieved by modelling the parameters of the\nmixture model as functions of the concomitant covariates. Given their mixture\nmodel foundation, mixtures of experts models possess a diverse range of\nanalytic uses, from clustering observations to capturing parameter\nheterogeneity in cross-sectional data. This chapter focuses on delineating the\nmixture of experts modelling framework and demonstrates the utility and\nflexibility of mixtures of experts models as an analytic tool.", + "authors": "Isobel Claire Gormley, Sylvia Fr\u00fchwirth-Schnatter", + "published": "2018-06-21", + "updated": "2018-06-21", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME" + ], + "category": "Mixture AND of AND Experts" + } +] \ No newline at end of file