AcademicEval / related_34K /test_related_short_2404.17762v1.json
[
{
"url": "http://arxiv.org/abs/2404.17762v1",
"title": "Large Multi-modality Model Assisted AI-Generated Image Quality Assessment",
"abstract": "Traditional deep neural network (DNN)-based image quality assessment (IQA)\nmodels leverage convolutional neural networks (CNN) or Transformer to learn the\nquality-aware feature representation, achieving commendable performance on\nnatural scene images. However, when applied to AI-Generated images (AGIs),\nthese DNN-based IQA models exhibit subpar performance. This situation is\nlargely due to the semantic inaccuracies inherent in certain AGIs caused by\nuncontrollable nature of the generation process. Thus, the capability to\ndiscern semantic content becomes crucial for assessing the quality of AGIs.\nTraditional DNN-based IQA models, constrained by limited parameter complexity\nand training data, struggle to capture complex fine-grained semantic features,\nmaking it challenging to grasp the existence and coherence of semantic content\nof the entire image. To address the shortfall in semantic content perception of\ncurrent IQA models, we introduce a large Multi-modality model Assisted\nAI-Generated Image Quality Assessment (MA-AGIQA) model, which utilizes\nsemantically informed guidance to sense semantic information and extract\nsemantic vectors through carefully designed text prompts. Moreover, it employs\na mixture of experts (MoE) structure to dynamically integrate the semantic\ninformation with the quality-aware features extracted by traditional DNN-based\nIQA models. Comprehensive experiments conducted on two AI-generated content\ndatasets, AIGCQA-20k and AGIQA-3k show that MA-AGIQA achieves state-of-the-art\nperformance, and demonstrate its superior generalization capabilities on\nassessing the quality of AGIs. Code is available at\nhttps://github.com/wangpuyi/MA-AGIQA.",
"authors": "Puyi Wang, Wei Sun, Zicheng Zhang, Jun Jia, Yanwei Jiang, Zhichao Zhang, Xiongkuo Min, Guangtao Zhai",
"published": "2024-04-27",
"updated": "2024-04-27",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Mixture AND of AND Experts",
"gt": "Traditional IQA models. In the field of No-Reference Image Quality Assessment (NR-IQA) [53], traditional models primarily fall into two categories: handcrafted feature-based and DNN-based. Models based on handcrafted features, such as BRISQUE [24], ILNIQE [54], and NIQE [25], primarily utilize natural scene statistics (NSS) [24, 25] derived from natural images. These models are adept at detecting domain variations introduced by synthetic distortions, including spatial [24, 25, 54], gradient [24], discrete cosine transform (DCT) [36], and wavelet-based distortions [26]. However, despite their effectiveness on datasets with type-specific distortions, these handcrafted feature-based approaches exhibit limited capabilities in modeling real-world distortions. With the advent of deep learning, CNNs have revolutionized many tasks in computer vision. [13] is pioneer in applying deep convolutional neural networks to NR-IQA. Its methodology employs CNNs to directly learn representations of image quality from raw image patches, bypassing the need for handcrafted features or a reference image. Following this, DBCNN [55] introduces a deep bilinear CNN for blind image quality assessment (BIQA) [53], innovatively merging two CNN streams to address both synthetic and authentic image distortions separately. Furthermore, HyperIQA [39], a self-adaptive hyper network, evaluates the quality of authentically distorted images through a novel three-stage process: content understanding, perception rule learning, and quality prediction. The success of Vision Transformers (ViT) [5] in various computer vision tasks has led to significant advancements. In the realm of IQA, IQT [52] leverages the combination of reference and distorted image features, extracted by CNNs, as inputs for a Transformerbased quality prediction task. MUSIQ [14] utilizes a Transformer to encode distortion image features across three scales, addressing the challenge of varying input image sizes during training and testing. TReS introduces relative ranking and self-consistency loss to capitalize on the abundant self-supervisory information available, aiming to decrease the network\u2019s sensitivity. What\u2019s more, MANIQA [49] explored multi-dimensional feature interaction, utilizing spatial and channel structural information to calculate a non-local representation of the image, enhancing the model\u2019s ability to assess image quality comprehensively. LMMs for IQA. Recent methodologies employing LMMs for IQA either utilize LMMs in isolation or combine them with DNNs as feature extractors to enhance performance. [30] introduces an innovative image-prompt fusion module, along with a specially designed quality assessment token, aiming to learn comprehensive representations for AGIs, providing insights from image-prompt alignment. However, the evaluation of AGIs in practical scenarios often does not involve prompts and image-prompt alignment is more significant for assessing the capabilities of generative models rather than images quality. CLIPIQA [42] signifies a breakthrough in assessing image quality and perception by harnessing the strengths of CLIP [31] models. This method bridges the divide between measurable image quality attributes and subjective perceptions of quality without necessitating extensive labeling efforts. 
Nonetheless, their [30, 42] dependence on visual-text similarity for quality score prediction often constrains their performance, rendering them marginally less effective than methods that focus exclusively on visual analysis. Moreover, Q-Bench [43] innovates with a softmax strategy, allowing LMMs to deduce quantifiable quality scores. This is achieved by extracting results from softmax pooling on the logits corresponding to five quality-related tokens. Q-Align [45] teaches LMMs with discrete text-defined rating levels rather than direct scores, employing this alignment strategy to improve accuracy. Expanding further, [47] delves into enhancing the assessment of AGIs by optimizing individual text prompts to leverage the intrinsic capabilities of LMMs, aiming to provide a more nuanced understanding and evaluation of the image quality of AGIs. However, these methods, while notable, fall short of achieving satisfactory efficacy, leaving considerable room for improvement.",
"pre_questions": [],
"main_content": "INTRODUCTION The rapid advancement of artificial intelligence (AI) has led to a proliferation of AI-generated images (AGIs) on the Internet. However, current AI-driven image generation systems often produce multiple images, necessitating manual selection by users to identify the best ones. This labor-intensive process is not only time-consuming but also a significant barrier to fully automating image processing pipelines. Visual quality, as an important factor to select attractive AGIs, has gained lots of attention in recent years [17, 20]. In this paper, we focus on how to evaluate the visual quality of AGIs, which on the one hand can be used to filter high-quality images from generation systems and on the other hand, can sever as reward function to optimize image generation models [2], propelling progress in the field of AI-based image generation techniques. arXiv:2404.17762v1 [cs.CV] 27 Apr 2024 Puyi Wang, Wei Sun, Zicheng Zhang, Jun Jia, Yanwei Jiang, Zhichao Zhang, Xiongkuo Min, Guangtao Zhai While a substantial number of deep neural network (DNN)-based image quality assessment (IQA) models, such as HyperIQA [39], MANIQA [49], DBCNN [55], etc., have been developed, these models were specifically designed for and trained on natural scene images. When applied directly to AGIs, these models often exhibit poor performance. This is due to the fact that quality assessment of natural images primarily targets issues such as blur, noise, and other forms of degradation caused by photography equipment or techniques, which are not applicable to AGIs as they do not undergo such degradation during the generation process. Therefore, overemphasizing factors like blur or noise during the evaluation of AGIs is inappropriate. As shown in Figure 1, AI-generated images, derived from advanced image generative models such as generative adversarial networks (GANs) [23], diffision [10] and related variant [4, 6, 11, 29, 32\u2013 34, 48], often exhibit issues not commonly found in naturally captured images. Visual quality of AGIs depends not only on basic visual features such as noise, blur [18, 38, 58], etc., but also on more intricate semantic perception [17], such as existence of reasonable semantic content, scene plausibility, and the coherence among objects [19, 43, 44, 46, 57]. Although re-training existing IQA models on AGIs datasets leads to improved outcomes, it fails to achieve optimal performance. One reason is that traditional DNN models, especially early convolutional neural networks (CNNs), despite their notable achievements in tasks like image recognition and classification [9, 37, 41], still struggle to grasp the fine-grained semantic content of images [56]. What\u2019s more, traditional DNN-based IQA models fail to capture the intrinsic characteristics essential for assessing image quality and thus exhibit poor generalization abilities. Hence, we argue that the quality assessment models of AGIs are still in their infancy and need further exploration. To address the issue of semantic awareness, we resort to large multi-modality models (LMMs). Because LMMs is typically pretrained on large-scale datasets and has already learned a rich set of joint visual and language knowledge, it can effectively capture the fine-grained semantic features relevant to input prompts. 
However, while LMMs perform excellently in high-level visual understanding tasks [1, 16], they do not perform well on tasks that are relatively simple for humans, such as identifying structural and textural distortions, color differences, and geometric transformations [47]. In contrast, traditional deep learning networks excel at perceiving low-dimensional features and can better fit the data distribution of a specific task [12]. Therefore, the idea of combining LMMs with traditional deep learning networks is a natural progression. In this paper, we introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) framework, which enhances the capacity of traditional DNN-based IQA models to understand semantic content by incorporating an LMM. Our approach initially repurposes a DNN, MANIQA [49], as an extractor for quality-aware features and establishes it as the training backbone for the MA-AGIQA framework. Subsequently, we guide an LMM, mPLUG-Owl2 [50], to focus on fine-grained semantic information through meticulously crafted prompts. We then extract and store the last-layer hidden vector from mPLUG-Owl2, merging it with features extracted by MANIQA to infuse the model with rich semantic insights. Finally, we employ a MoE to dynamically integrate quality-aware features with fine-grained semantic features, catering to the unique focal points of different images. Figure 2: For the subset of grainy images (extracted from prompts containing \u201cdigital\u201d and generated by LCM_Pixart in AIGCQA-20k) that include semantic content, MANIQA achieves an SRCC of 0.2545, which is 70.0% lower than the overall SRCC of 0.8507. In contrast, our MA-AGIQA model achieves an SRCC of 0.8364. This demonstrates that our model possesses a significantly enhanced understanding of AGIs, particularly those whose quality is deeply intertwined with semantic elements. As demonstrated in Figure 2, our approach surpasses MANIQA in terms of SRCC, particularly within subsets comprising semantically rich images overflowing with graininess, indicating that our methodology shows remarkable congruence with the human visual system\u2019s (HVS) perceptual capabilities. MA-AGIQA achieves SRCC values of 0.8939 and 0.8644 on the AGIQA-3k and AIGCQA-20k datasets, respectively, exceeding the state-of-the-art models by 2.03% and 1.37%, and also demonstrates superior cross-dataset performance. Our contributions are three-fold: \u2022 We systematically analyze the issue of traditional DNN-based IQA models lacking the ability to understand the semantic content of AGIs, emphasizing the importance of incorporating semantic information into traditional DNN-based IQA models. \u2022 We introduce the MA-AGIQA model, which incorporates an LMM to extract fine-grained semantic features and dynamically integrates these features with traditional DNN-based IQA models. \u2022 We evaluate the MA-AGIQA model on two AI-generated IQA datasets. Experimental results demonstrate that our model surpasses current state-of-the-art methods without extra training data and also showcases superior cross-dataset performance. Extensive ablation studies further validate the effectiveness of each component. As depicted in Figure 3, the MA-AGIQA framework is structured into three parts. 
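As an aside, the subset-versus-whole SRCC comparison reported in Figure 2 can be reproduced with a few lines of Python. The sketch below is illustrative only: the arrays are synthetic stand-ins (not the paper\u2019s data) and the grainy-subset mask is an assumed placeholder.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mos = rng.uniform(0.0, 5.0, size=2800)            # stand-in for ground-truth MOS values in [0, 5]
pred = mos + rng.normal(0.0, 0.5, size=mos.size)  # stand-in for a model's predicted scores
is_grainy = rng.random(mos.size) < 0.05           # placeholder mask for the grainy-image subset

def srcc(gt, p):
    # Spearman rank-order correlation coefficient, the monotonicity metric used throughout the paper.
    rho, _ = spearmanr(gt, p)
    return rho

print("whole-set SRCC:", round(srcc(mos, pred), 4))
print("grainy-subset SRCC:", round(srcc(mos[is_grainy], pred[is_grainy]), 4))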
Section 3.1 introduces our adoption of a DNN, specifically MANIQA [49], tailored for the AGI quality assessment task and serving as our primary training backbone. In Section 3.2, we incorporate the LMM mPLUG-Owl2 [50] as a feature extractor. This component is crucial for acquiring fine-grained semantic features via carefully crafted text prompts. Lastly, Section 3.3 addresses the variability in focal points across different images. To adaptively integrate the feature vectors during training, we utilize a MoE structure for feature fusion. This approach ensures that the most salient features are emphasized. Further details are elaborated below. Figure 3: Overview of our proposed MA-AGIQA framework. Initially, MANIQA is repurposed as the foundational training backbone, whose structure is modified to generate quality-aware features. Second, a parameter-fixed LMM, mPLUG-Owl2, serves as a fine-grained semantic feature extractor. This module utilizes carefully crafted prompts to capture the desired semantic information. Finally, the AFM module acts as an organic feature integrator, dynamically combining these features for enhanced performance. 3.1 Quality-aware Feature Extraction To leverage the capability of DNNs to adapt to the data distribution of specific tasks, we employ MANIQA [49] as a quality-aware feature extractor. MANIQA enhances the evaluation of image quality by applying attention mechanisms across both the channel and spatial dimensions, thereby increasing the interaction among various regions of the image, both globally and locally. This approach generates weight (W) and score (S) projections for a given image, and the final rating of the whole image is determined through the weighted sum of S and W, which can be illustrated as Equation (1): (S, W) = T([image]), rating = sum(S \u00d7 W) / sum(W), (1) where S and W are one-dimensional vectors. However, directly applying MANIQA to the quality assessment of AGIs presents challenges, as illustrated in Figure 4. Image (a) displays a complex, symmetrical pattern, devoid of meaningful semantic content. Image (b) features incoherent areas, such as two grey holes in the sky that are inconsistent with common sense. The blurriness and fuzziness along the edges of the man\u2019s face in image (c) significantly impair human perception. Conversely, image (d), despite its severe graininess, retains its semantic integrity, representing an appealing artistic form. Traditional DNN-based models like MANIQA, lacking the capacity to comprehend semantic content, tend to overestimate the quality of images (a), (b), and (c), resulting in scores much higher than the ground truth. However, these images should be rated as low quality due to the poor viewing experience they offer. For image (d), traditional DNN-based models focus excessively on the graininess, mistaking it for a flaw, and assign a score significantly lower than the ground truth. This highlights the critical need for incorporating semantic information into the quality assessment of AGIs by traditional DNN-based models. Figure 4: Four types of images showing a strong correlation between image quality and semantics: (a) GT 2.684, prediction 3.632; (b) GT 3.145, prediction 3.762; (c) GT 1.823, prediction 3.118; (d) GT 3.606, prediction 2.994. The significant differences between the model predictions and the ground truth indicate that the model\u2019s understanding of semantics is not sufficient. To address this issue, modifications were made so that the generated S and W no longer produce a rating. Instead, they yield a quality-aware feature f1, setting the stage for the subsequent fusion with features extracted by the LMM. f1 is generated as: f1 = S \u00d7 W. (2) During the training phase, the parameters of the modified MANIQA are continuously updated. This refinement process ensures that MANIQA can extract features more relevant to the quality of AGIs. Furthermore, the training process facilitates a more seamless integration between MANIQA and the LMM, leading to superior outcomes. 
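To make Equations (1) and (2) concrete, the following minimal PyTorch sketch shows the weighted pooling and the modified feature head. It is a hedged illustration rather than the official implementation: the MANIQA backbone T is not reproduced here, and the 784-dimensional shape is taken from the feature size quoted in Section 3.3.

import torch

def pool_rating(S: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    # Equation (1): rating = sum(S x W) / sum(W), where S (scores) and W (weights)
    # are the one-dimensional projections produced by the MANIQA backbone T([image]).
    return (S * W).sum() / W.sum()

def quality_feature(S: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    # Equation (2): the modified head keeps the element-wise product as the
    # quality-aware feature f1 instead of pooling it into a scalar rating.
    return S * W

# Illustrative usage with random stand-ins for the two projections.
S, W = torch.rand(784), torch.rand(784)
print(pool_rating(S, W).item())       # scalar rating, Equation (1)
print(quality_feature(S, W).shape)    # torch.Size([784]), feature f1 of Equation (2)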
3.2 Fine-grained Semantic Feature Extraction LMMs are capable of understanding and analyzing the semantic content of images and its relationship with human cognition. They assess whether different parts of an image form a cohesive whole and evaluate whether the elements within the picture are semantically coherent [7, 21, 28]. mPLUG-Owl2 [50] employs a modality-adaptive language decoder to handle different modalities within distinct modules, which mitigates the issue of modality interference. Given the importance of effectively guiding the model through textual prompts to elicit the desired output, we have selected mPLUG-Owl2 as our feature extractor. We consider the application of mPLUG-Owl2 to the following aspects of semantic content: \u2022 Existence of Semantic Content. The importance of semantic content in an image lies in its ability to convey a clear and meaningful message to the viewer. An image lacking semantic content may be difficult to understand and fail to effectively convey its intended message, reducing audience engagement and satisfaction. \u2022 Coherence of Semantic Content. The coherence of semantic content in an image relates to whether the generated image can provide a coherent, logically sound visual experience for human viewers. When the various parts of an image are semantically consistent, it is better able to convey a clear story, emotion, or message. In contrast, any inconsistency in the primary focus of an image will greatly detract from its quality and convey a significantly negative impression. Consequently, we propose a rational design of prompts that leads LMMs to capture this semantic content. mPLUG-Owl2 possesses the ability to understand fine-grained semantic content, but carefully designed input prompts are required. Some prompts, such as \"Please evaluate if the image quality is compromised due to violations of common human sense or logic?\", although they express the desire for the model to assess whether the semantic content of the image contradicts human perception, lead to unsatisfactory results. To better utilize mPLUG-Owl2 for the task of evaluating AGIs, we meticulously designed two prompts, denoted as prompt_a and prompt_b, respectively: \u2022 \"Evaluate the input image to determine if its quality is compromised due to a lack of meaningful semantic content.\" \u2022 \"Evaluate if the image quality is compromised due to violations of coherence.\" corresponding to the existence of semantic content and the coherence of semantic content in images, respectively. Test results obtained with the mPLUG-Owl2 official demo1, as shown in Figure 5, have proven these questions to be effective. However, the textual output from mPLUG-Owl2 is not immediately conducive to being utilized by MANIQA to impart semantic insights. To bridge this gap, it\u2019s essential to convert the information provided by mPLUG-Owl2 into a format that MANIQA can easily leverage. So we extract features from the final layer of mPLUG-Owl2\u2019s hidden layers, obtaining an accessible embedded representation of the LMM\u2019s output. This output is a tensor with dimensions [token_length, hidden_size], where \"token_length\" represents the number of output tokens, and \"hidden_size\" denotes the dimensionality of the hidden-layer representations associated with each token. For mPLUG-Owl2, the hidden_size is 4096. Subsequently, we conduct an averaging operation across the token dimension, yielding a vector with dimensions 1x4096. This vector then serves as the basis for further feature fusion procedures. The process can be represented as Equation (3): (m_i^1, m_i^2, ..., m_i^n) = M([image], [prompt_i])[-1], f_i = Average(m_i^1, m_i^2, ..., m_i^n), where i \u2208 {a, b}, (3) where m_i^k represents the hidden vector of token k corresponding to prompt_i, and M denotes mPLUG-Owl2. 1https://modelscope.cn/studios/iic/mPLUG-Owl2/summary Figure 5: Presentation of mPLUG-Owl2\u2019s answers to the two prompts. For the coherence prompt, the model answers that the image quality is compromised because the blurry, ghostly cityscape makes the actual details difficult to discern and may distract or confuse the viewer; for the existence prompt, it answers that the close-up of an intricate blue, green, and yellow pattern lacks meaningful semantic content, as it does not depict any recognizable objects, people, or scenes and provides no context or information. It is important to note that throughout the entire training and testing process, the parameters of mPLUG-Owl2 are fixed. Because mPLUG-Owl2 is pre-trained on large-scale datasets and has already learned a rich set of joint visual and language knowledge, it can effectively capture the fine-grained semantic information relevant to input prompts, even with fixed parameters. Additionally, fine-tuning the LMM in every training iteration would significantly increase training time, whereas using it solely as a feature extractor significantly reduces computational costs, making the training process more efficient. Therefore, we pre-extract and save the semantic content features of each image in advance. 
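The feature pre-extraction of Equation (3) amounts to one forward pass per prompt with hidden states exposed, followed by token averaging. The sketch below assumes a frozen mPLUG-Owl2 wrapped in a HuggingFace-style interface; the model/processor loading and the exact prompt template are omitted assumptions, not the paper\u2019s released code.

import torch

PROMPTS = {
    "a": "Evaluate the input image to determine if its quality is compromised "
         "due to a lack of meaningful semantic content.",
    "b": "Evaluate if the image quality is compromised due to violations of coherence.",
}

@torch.no_grad()
def extract_semantic_feature(model, inputs) -> torch.Tensor:
    # `model`: frozen mPLUG-Owl2 exposing a standard causal-LM forward pass (assumption).
    # `inputs`: the already tokenized image + prompt batch (assumption).
    out = model(**inputs, output_hidden_states=True)
    last_hidden = out.hidden_states[-1]   # [batch, token_length, hidden_size = 4096]
    return last_hidden.mean(dim=1)        # Equation (3): average over tokens -> [batch, 4096]

# In MA-AGIQA these 1x4096 vectors (one per prompt, i.e. f_a and f_b) are computed
# once per image before training and cached, since the LMM parameters stay fixed.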
3.3 Adaptive Fusion Module Given the complex influence of color, composition, details, semantic content, and other factors on image quality, simply concatenating the extracted features may not always yield the best results. To dynamically fuse a variety of complementary features, we propose the adaptive fusion module (AFM) for organic feature integration. This process can be divided into two main parts. The first part involves transforming the extracted features into a unified vector space of the same dimension, allowing for vector fusion operations. Specifically, for features extracted by MANIQA, the transformation block applies a fully connected (Fc) layer, mapping them to the same dimension as the original features (1x784) to provide a richer combination. For features derived from mPLUG-Owl2, it uses an Fc layer to project them onto a 1x784 dimension, followed by a ReLU activation layer and a dropout layer to enhance the network\u2019s expressive power and generalization. The second part employs a MoE to dynamically fuse the three features. The MoE\u2019s gating network takes the three transformed features as input and outputs dynamic weights \u03b1, corresponding to the three features\u2019 contributions to image quality. Structurally, this gating network comprises an Fc layer and a sigmoid layer. The final image quality representation vector g can be obtained through a weighted sum of the three feature vectors. Denoting the three features as f_1, f_a, and f_b, this process can be represented as Equation (4): f\u2032_i = F_i^trans(f_i), where f\u2032_i \u2208 R^d; \u03b1 = F^gate(Concat(f\u2032_1, f\u2032_a, f\u2032_b)), where \u03b1 \u2208 R^3; g = sum_{i \u2208 {1, a, b}} \u03b1_i \u00b7 f\u2032_i, where g \u2208 R^d, (4) where F_i^trans is the transformation block of feature i, F^gate is the gating network\u2019s mapping function, f_i is the original extracted feature, f\u2032_i is the transformed feature, and R^d is the dimension space of f\u2032_i. Finally, we obtain the final image quality score through a simple regression layer consisting of an Fc layer. 
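A minimal PyTorch sketch of the AFM described above follows. It mirrors the text of Section 3.3 (Fc transformation blocks, an Fc + sigmoid gating network producing \u03b1, the weighted sum of Equation (4), and a single Fc regression layer), but it is an illustrative reconstruction: the dropout rate and any other unlisted hyperparameters are assumptions, not values from the paper.

import torch
import torch.nn as nn

class AdaptiveFusionModule(nn.Module):
    def __init__(self, d: int = 784, lmm_dim: int = 4096, p_drop: float = 0.1):
        super().__init__()
        # Transformation blocks: project f1 (MANIQA, 784-d) and f_a, f_b (mPLUG-Owl2, 4096-d)
        # into a shared d-dimensional space, as described in the first part of Section 3.3.
        self.trans_1 = nn.Linear(d, d)
        self.trans_a = nn.Sequential(nn.Linear(lmm_dim, d), nn.ReLU(), nn.Dropout(p_drop))
        self.trans_b = nn.Sequential(nn.Linear(lmm_dim, d), nn.ReLU(), nn.Dropout(p_drop))
        # Gating network (Fc + sigmoid) producing the three dynamic weights alpha.
        self.gate = nn.Sequential(nn.Linear(3 * d, 3), nn.Sigmoid())
        # Final regression layer mapping the fused vector g to a quality score.
        self.regress = nn.Linear(d, 1)

    def forward(self, f1, fa, fb):
        t1, ta, tb = self.trans_1(f1), self.trans_a(fa), self.trans_b(fb)
        alpha = self.gate(torch.cat([t1, ta, tb], dim=-1))                  # [batch, 3]
        g = alpha[:, 0:1] * t1 + alpha[:, 1:2] * ta + alpha[:, 2:3] * tb    # Equation (4)
        return self.regress(g).squeeze(-1)                                  # predicted score

# Illustrative usage with random stand-ins for the three features.
afm = AdaptiveFusionModule()
print(afm(torch.rand(2, 784), torch.rand(2, 4096), torch.rand(2, 4096)).shape)  # torch.Size([2])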
4 EXPERIMENTS 4.1 Dataset and Evaluation Metrics Dataset. Our model is evaluated on two AI-generated image datasets, AIGCQA-20k [17] and AGIQA-3k [20]. Specifically, AIGCQA-20k contains 20k images, but at the time of writing, only 14k images have been published; our experiments are conducted on these 14k images. The MOSs of AIGCQA-20k images are distributed between 0 and 5, with higher scores indicating better image quality. Images in AIGCQA-20k are generated by 15 models, including DALLE2 [32], DALLE3 [32], Dream [6], IF [4], LCM Pixart [22], LCM SD1.5 [22], LCM SDXL [22], Midjourney [11], Pixart-\u03b1 [3], Playground [29], SD1.4 [34], SD1.5 [34], SDXL [35], and SSD1B [8]. AGIQA-3k includes 2982 images, with MOSs also distributed between 0 and 5, where higher values represent better quality. Images in AGIQA-3k are derived from six models: GLIDE [27], Stable Diffusion V-1.5 [34], Stable Diffusion XL-2.2 [35], Midjourney [11], AttnGAN [48], and DALLE2 [32]. During training, we split each dataset into 70% for training, 10% for validation, and 20% for testing. To ensure the same set of images in each subset when testing across different models, we set the same random seed during the split to control variables and ensure reproducibility. Evaluation Metric. Spearman\u2019s Rank-Order Correlation Coefficient (SRCC), Pearson\u2019s Linear Correlation Coefficient (PLCC), Kendall\u2019s Rank-Order Correlation Coefficient (KRCC), and the Root Mean Square Error (RMSE) are selected as metrics to measure monotonicity and accuracy. SRCC, PLCC, and KRCC range from -1.0 to 1.0, with larger values indicating better results. In our experiments, we employ the sum of SRCC and PLCC as the criterion for selecting the optimal validation checkpoint, and emphasize SRCC for comparing model performance. 4.2 Implementation Details Our method is implemented based on PyTorch, and all experiments are conducted on 4 NVIDIA 3090 GPUs. For all datasets, we compare against the handcrafted feature-based BRISQUE [24], NIQE [25], and ILNIQE [54], the deep learning (DL)-based HyperIQA [39], MANIQA [49], MUSIQ [14], DBCNN [55], StairIQA [40], and BAID [51], and the LMM-based CLIPIQA [42], CLIPIQA+ [42], and Q-Align [45]. 
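Before the training details, the four criteria of Section 4.1 can be computed as in the hedged sketch below (plain scipy/numpy definitions; any nonlinear fitting sometimes applied before PLCC in IQA work is omitted, and the arrays are synthetic stand-ins rather than the paper\u2019s data).

import numpy as np
from scipy.stats import spearmanr, pearsonr, kendalltau

def iqa_metrics(gt: np.ndarray, pred: np.ndarray) -> dict:
    # SRCC and KRCC measure prediction monotonicity, PLCC and RMSE measure accuracy;
    # larger is better for the three correlations, smaller is better for RMSE.
    return {
        "SRCC": spearmanr(gt, pred)[0],
        "PLCC": pearsonr(gt, pred)[0],
        "KRCC": kendalltau(gt, pred)[0],
        "RMSE": float(np.sqrt(np.mean((gt - pred) ** 2))),
    }

# Illustrative usage with synthetic MOS-like values in [0, 5]; model selection in the
# paper keeps the checkpoint with the largest SRCC + PLCC on the validation split.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 5, size=100)
pred = gt + rng.normal(0, 0.3, size=100)
print(iqa_metrics(gt, pred))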
During the training process of deep learning models, we use the Adam optimizer [15] with a weight decay of 1e-5, and the initial learning rate is 1e-5. The batch size is 8 during training, validation, and testing. All DL-based models are trained for 30 epochs using MSE loss and validated after each training process. The checkpoint with the highest sum of SRCC and PLCC during validation is used for testing. Handcrafted feature-based and LMM based models are used directly without training. 4.3 Comparison with SOTA methods Table 1 lists the results of MA-AGIQA and 12 other models on the AGIQA-3k and AIGCQA-20k dataset. It has been observed that LMM-based models significantly outperform those that rely on handcrafted features. This superior performance is attributed to LMMs being trained on extensive datasets, which provides them with a robust understanding of images and enhances their generalizability. However, trained DL-based models generally perform far better than the LMM-based models because DL-based models tend to fit the data distribution of specific tasks better, thereby resulting in improved performance. Among these twelve models, the ViT-based MANIQA outperforms the other eleven models, and our method still significantly surpasses it on the same training and testing split with large margins (+3.72% of SRCC, +1.73% of PLCC and +5.43% of KRCC in AGIQA-3k & +1.61% of SRCC, +2.02% of PLCC and +2.90% of KRCC in AIGCQA-20k). This demonstrates the superiority of integrating features extracted by LMM into traditional DNN, significantly improving the accuracy and consistency of prediction results. To evaluate the generalization capability of our MA-AGIQA model, we conducted cross-dataset evaluations. Table 2 shows that MA-AGIQA significantly outperforms the other two models, HyperIQA and StairIQA, which performed best on single datasets, with large margins. This superior performance can largely be attributed to the robust generalization capability of the LMM and the benefits of the MoE architecture, which excels in dynamically fusing features. 4.4 Ablation Study Necessity of Fine-grained Semantic Features. To assess the benefits of integrating features extracted by mPLUG-Owl2 [50] into MANIQA [49], we carried out comprehensive ablation studies on each component and their various combinations, as detailed in Tables 3 and 4. Our findings indicate that using either the features extracted by the LMM alone or solely relying on a traditional network does not yield the best outcomes. In contrast, integrating one Large Multi-modality Model Assisted AI-Generated Image Quality Assessment Table 1: Comparisons with SOTA (State-Of-The-Art) methods on AGIQA-3k and AIGCQA-20K-Image datasets. The up arrow \"\u2191\" means that a larger value indicates better performance. The best and second best performances are bolded and underlined, respectively. MA-AGIQA outperforms existing SOTA methods on both datasets by large margins. Note: to ensure fair comparisons, we trained and tested all deep learning based models and ours with the same dataset splitting method. 
Type Method AGIQA-3k AIGCQA-20K-Image SRCC\u2191 PLCC\u2191 KRCC\u2191 RMSE\u2193 SRCC\u2191 PLCC\u2191 KRCC\u2191 RMSE\u2193 Handcrafted feature-based BRISQUE [24] 0.4726 0.5612 0.3227 0.8299 0.1663 0.3580 0.1112 0.6813 NIQE [25] 0.5236 0.5668 0.3637 0.8260 0.2085 0.3378 0.1394 0.6868 ILNIQE [54] 0.6097 0.6551 0.4318 0.7576 0.3359 0.4551 0.2290 0.6497 LMM-based CLIPIQA [42] 0.6524 0.6968 0.4632 0.7191 0.4147 0.6459 0.2861 0.5570 CLIPIQA+ [42] 0.6933 0.7493 0.4957 0.664 0.4553 0.6682 0.3169 0.5428 Q-Align [45] 0.6728 0.6910 0.4728 0.7204 0.6743 0.6815 0.4808 0.5199 Traditional DNN-based HyperIQA [39] 0.8509 0.9049 0.6685 0.4134 0.8162 0.8329 0.6207 0.3902 MANIQA [49] 0.8618 0.9115 0.6839 0.4111 0.8507 0.8870 0.6612 0.3273 DBCNN [55] 0.8263 0.8900 0.6393 0.4533 0.8054 0.8483 0.6121 0.3726 StairIQA [40] 0.8343 0.8933 0.6485 0.4510 0.7899 0.8428 0.6053 0.3927 BAID [51] 0.1304 0.2030 0.0854 0.9487 0.1652 0.1483 0.1279 0.7297 MUSIQ [14] 0.8261 0.8657 0.6400 0.4907 0.8329 0.8646 0.6403 0.3634 DL with LMM MA-AGIQA 0.8939 0.9273 0.7211 0.3756 0.8644 0.9050 0.6804 0.3104 Table 2: Cross-dataset performance comparison for M-AIGQQA, HyperIQA, and StairIQA. \u201cDirection\u201d from A to B means training with train subset of dataset A and testing on test subset of dataset B. The best result is bolded. direction SRCC \u2191 PLCC \u2191 KRCC \u2191 RMSE \u2193 MA-AGIQA 20k\u21923k 0.8053 0.8430 0.6083 0.5399 3k\u219220k 0.7722 0.8314 0.5777 0.4055 HyperIQA 20k\u21923k 0.6820 0.6806 0.4806 0.7352 3k\u219220k 0.6374 0.6547 0.4577 0.5414 StairIQA 20k\u21923k 0.4335 0.5234 0.3294 0.8549 3k\u219220k 0.6495 0.6895 0.4644 0.5285 fine-grained semantic feature with the original MANIQA network can enhance the network\u2019s performance. However, the optimal results were achieved by combining two features extracted by the LMM with MANIQA, which led to significant improvements on the AGIQA-3k dataset (increases of 1.57%, 0.83%, and 2.56% in SRCC, PLCC, and KRCC, respectively) and on the AIGCQA-20k dataset (enhancements of 2.72%, 1.94%, and 4.35%). The marked enhancements achieved by incorporating two finegrained semantic features suggest that LMM is adept at capturing nuanced, complex features that traditional models might overlook, fostering a more thorough understanding and assessment of AGIs quality. The results from these ablation experiments highlight the significant contribution of fine-grained semantic features. Contribution of MoE. Table 5 demonstrates that incorporating the MoE structure, rather than simply concatenating three vectors, does indeed improve network performance, albeit marginally. Specifically, on the AGIQA-3k dataset, we observed increases of 0.20%, 0.17%, and 0.16% in SRCC, PLCC, and KRCC, respectively. Table 3: Ablation studies of different component combinations in the MA-AGIQA model on AGIQA-3k. SRCC, PLCC and KRCC are reported. The best result is bolded. Note: \"semantic feature\" and \"coherence feature\" denote features extracted by mPLUG-Owl2 through \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4eand \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4frespectively. 
MANIQA Semantic Feature Coherence Feature SRCC\u2191 PLCC\u2191 KRCC\u2191 \u2713 0.8800 0.9196 0.7031 \u2713 0.8662 0.9082 0.6823 \u2713 0.8661 0.9084 0.6821 \u2713 \u2713 0.8685 0.9108 0.6853 \u2713 \u2713 0.8820 0.9197 0.7090 \u2713 \u2713 0.8699 0.9102 0.6867 \u2713 \u2713 \u2713 0.8939 0.9273 0.7211 For the AIGCQA-20k dataset, the improvements were 0.67%, 0.95%, and 1.37%. The gains, although seemingly modest, highlight the potential of MoE structure in complex systems where integrating diverse expertise can yield better decision-making and predictive outcomes. 4.5 Visualization To vividly demonstrate the efficacy of the MA-AGIQA framework, we selected 300 images from the AIGCQA-20k and AGIQA-3k datasets where MANIQA had the poorest performance. These images primarily exhibit issues in semantic content. We computed the absolute values of the differences between the model scores and the image ground truth, and illustrated these differences in Puyi Wang, Wei Sun, Zicheng Zhang, Jun Jia, Yanwei Jiang, Zhichao Zhang, Xiongkuo Min, Guangtao Zhai Table 4: Ablation studies of different component combinations in the MA-AGIQA model on AIGCQA-20k. SRCC, PLCC and KRCC are reported. The best result is bolded. Note: \"semantic feature\" and \"coherence feature\" denote features extracted by mPLUG-Owl2 through \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4eand \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4frespectively. MANIQA Semantic Feature Coherence Feature SRCC\u2191 PLCC\u2191 KRCC\u2191 \u2713 0.8415 0.8877 0.6520 \u2713 0.8184 0.8345 0.6323 \u2713 0.8181 0.8343 0.6320 \u2713 \u2713 0.8540 0.8975 0.6671 \u2713 \u2713 0.8596 0.9016 0.6738 \u2713 \u2713 0.8180 0.8323 0.6312 \u2713 \u2713 \u2713 0.8644 0.9050 0.6804 Table 5: Ablation studies on the MoE structure in the AFM demonstrate that compositions integrating MoE yield superior results on both AGIQA-3k and AIGCQA-20k datasets.The better result is bolded. dataset MoE SRCC \u2191 PLCC \u2191 KRCC \u2191 RMSE \u2193 3k \u2717 0.8921 0.9257 0.7199 0.3797 \u2713 0.8939 0.9273 0.7211 0.3756 20k \u2717 0.8586 0.8964 0.6712 0.3234 \u2713 0.8644 0.9050 0.6804 0.3104 Absolute Difference Density Absolute Difference Lower Absolute Difference Figure 6: Comparative Density Distributions of Absolute Differences for MANQA and MA-AGIQA on AGIQA-3k and AIGQA-20k Datasets Figure 6, using 0.1 as the bin size for plotting the quality score distribution. The results clearly show that our MA-AGIQA model are more closely aligned with human perception, with a noticeable shift in the difference distribution toward zero and a marked reduction in peak values. Figure 7 presents a collection of images where the assessments from the MANIQA model were mostly off the mark. Scores assigned by MANIQA alongside those given by the proposed MA-AGIQA model and the ground truth are listed, which reveal that the MAAGIQA model markedly enhances alignment with the ground truth in contrast to MANIQA. For instance, in the first image of the top row, MANIQA\u2019s score is 3.50, which diverging substantially from MANIQA MA-AGIQA Ground Truth 3.50 2.97 3.09 3.73 2.98 2.41 2.49 3.43 1.50 1.62 1.45 2.32 MANIQA MA-AGIQA Ground Truth 1.43 2.85 2.13 2.28 1.84 2.65 1.79 2.19 2.83 0.88 1.49 1.12 Figure 7: Comparative Analysis of Image Quality Assessment Models: Evaluating MANIQA versus MA-AGIQA Against Ground Truth Scores the ground truth score of 1.50. 
However, MA-AGIQA\u2019s score is 2.98, demonstrating a much closer approximation to the ground truth. This pattern is consistent across the images shown, with MA-AGIQA consistently producing scores that are closer to the ground truth, reflecting a more accurate assessment of image quality. 5 CONCLUSION To mitigate the shortcomings of traditional DNNs in capturing semantic content in AGIs, this study explored the integration of LMMs with traditional DNNs and introduced the MA-AGIQA network. Leveraging mPLUG-Owl2 [50], our network efficiently extracts semantic features to enhance MANIQA [49] for quality assessment. The MA-AGIQA network\u2019s ability to dynamically integrate fine-grained semantic features with quality-aware features enables it to effectively handle the varied quality aspects of AGIs. Experimental results on two prominent AGI datasets confirm our model\u2019s superior performance. Through thorough ablation studies, the indispensable role of each component within our framework has been validated. This research aspires to catalyze further exploration into the fusion of LMMs within AI-generated content quality assessment and envisions broader application potential for this methodology."
},
{
"url": "http://arxiv.org/abs/2010.11929v2",
"title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale",
"abstract": "While the Transformer architecture has become the de-facto standard for\nnatural language processing tasks, its applications to computer vision remain\nlimited. In vision, attention is either applied in conjunction with\nconvolutional networks, or used to replace certain components of convolutional\nnetworks while keeping their overall structure in place. We show that this\nreliance on CNNs is not necessary and a pure transformer applied directly to\nsequences of image patches can perform very well on image classification tasks.\nWhen pre-trained on large amounts of data and transferred to multiple mid-sized\nor small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision\nTransformer (ViT) attains excellent results compared to state-of-the-art\nconvolutional networks while requiring substantially fewer computational\nresources to train.",
"authors": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby",
"published": "2020-10-22",
"updated": "2021-06-03",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2403.10854v1",
"title": "A Comprehensive Study of Multimodal Large Language Models for Image Quality Assessment",
"abstract": "While Multimodal Large Language Models (MLLMs) have experienced significant\nadvancement on visual understanding and reasoning, their potentials to serve as\npowerful, flexible, interpretable, and text-driven models for Image Quality\nAssessment (IQA) remains largely unexplored. In this paper, we conduct a\ncomprehensive and systematic study of prompting MLLMs for IQA. Specifically, we\nfirst investigate nine prompting systems for MLLMs as the combinations of three\nstandardized testing procedures in psychophysics (i.e., the single-stimulus,\ndouble-stimulus, and multiple-stimulus methods) and three popular prompting\nstrategies in natural language processing (i.e., the standard, in-context, and\nchain-of-thought prompting). We then present a difficult sample selection\nprocedure, taking into account sample diversity and uncertainty, to further\nchallenge MLLMs equipped with the respective optimal prompting systems. We\nassess three open-source and one close-source MLLMs on several visual\nattributes of image quality (e.g., structural and textural distortions, color\ndifferences, and geometric transformations) in both full-reference and\nno-reference scenarios. Experimental results show that only the close-source\nGPT-4V provides a reasonable account for human perception of image quality, but\nis weak at discriminating fine-grained quality variations (e.g., color\ndifferences) and at comparing visual quality of multiple images, tasks humans\ncan perform effortlessly.",
"authors": "Tianhe Wu, Kede Ma, Jie Liang, Yujiu Yang, Lei Zhang",
"published": "2024-03-16",
"updated": "2024-03-16",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2312.17090v1",
"title": "Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels",
"abstract": "The explosion of visual content available online underscores the requirement\nfor an accurate machine assessor to robustly evaluate scores across diverse\ntypes of visual contents. While recent studies have demonstrated the\nexceptional potentials of large multi-modality models (LMMs) on a wide range of\nrelated fields, in this work, we explore how to teach them for visual rating\naligned with human opinions. Observing that human raters only learn and judge\ndiscrete text-defined levels in subjective studies, we propose to emulate this\nsubjective process and teach LMMs with text-defined rating levels instead of\nscores. The proposed Q-Align achieves state-of-the-art performance on image\nquality assessment (IQA), image aesthetic assessment (IAA), as well as video\nquality assessment (VQA) tasks under the original LMM structure. With the\nsyllabus, we further unify the three tasks into one model, termed the OneAlign.\nIn our experiments, we demonstrate the advantage of the discrete-level-based\nsyllabus over direct-score-based variants for LMMs. Our code and the\npre-trained weights are released at https://github.com/Q-Future/Q-Align.",
"authors": "Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, Weisi Lin",
"published": "2023-12-28",
"updated": "2023-12-28",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2403.18714v1",
"title": "Bringing Textual Prompt to AI-Generated Image Quality Assessment",
"abstract": "AI-Generated Images (AGIs) have inherent multimodal nature. Unlike\ntraditional image quality assessment (IQA) on natural scenarios, AGIs quality\nassessment (AGIQA) takes the correspondence of image and its textual prompt\ninto consideration. This is coupled in the ground truth score, which confuses\nthe unimodal IQA methods. To solve this problem, we introduce IP-IQA (AGIs\nQuality Assessment via Image and Prompt), a multimodal framework for AGIQA via\ncorresponding image and prompt incorporation. Specifically, we propose a novel\nincremental pretraining task named Image2Prompt for better understanding of\nAGIs and their corresponding textual prompts. An effective and efficient\nimage-prompt fusion module, along with a novel special [QA] token, are also\napplied. Both are plug-and-play and beneficial for the cooperation of image and\nits corresponding prompt. Experiments demonstrate that our IP-IQA achieves the\nstate-of-the-art on AGIQA-1k and AGIQA-3k datasets. Code will be available.",
"authors": "Bowen Qu, Haohui Li, Wei Gao",
"published": "2024-03-27",
"updated": "2024-03-27",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.MM"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2101.01097v2",
"title": "Transformer for Image Quality Assessment",
"abstract": "Transformer has become the new standard method in natural language processing\n(NLP), and it also attracts research interests in computer vision area. In this\npaper we investigate the application of Transformer in Image Quality (TRIQ)\nassessment. Following the original Transformer encoder employed in Vision\nTransformer (ViT), we propose an architecture of using a shallow Transformer\nencoder on the top of a feature map extracted by convolution neural networks\n(CNN). Adaptive positional embedding is employed in the Transformer encoder to\nhandle images with arbitrary resolutions. Different settings of Transformer\narchitectures have been investigated on publicly available image quality\ndatabases. We have found that the proposed TRIQ architecture achieves\noutstanding performance. The implementation of TRIQ is published on Github\n(https://github.com/junyongyou/triq).",
"authors": "Junyong You, Jari Korhonen",
"published": "2020-12-30",
"updated": "2021-01-08",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.LG",
"eess.IV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2108.05997v1",
"title": "MUSIQ: Multi-scale Image Quality Transformer",
"abstract": "Image quality assessment (IQA) is an important research topic for\nunderstanding and improving visual experience. The current state-of-the-art IQA\nmethods are based on convolutional neural networks (CNNs). The performance of\nCNN-based models is often compromised by the fixed shape constraint in batch\ntraining. To accommodate this, the input images are usually resized and cropped\nto a fixed shape, causing image quality degradation. To address this, we design\na multi-scale image quality Transformer (MUSIQ) to process native resolution\nimages with varying sizes and aspect ratios. With a multi-scale image\nrepresentation, our proposed method can capture image quality at different\ngranularities. Furthermore, a novel hash-based 2D spatial embedding and a scale\nembedding is proposed to support the positional embedding in the multi-scale\nrepresentation. Experimental results verify that our method can achieve\nstate-of-the-art performance on multiple large scale IQA datasets such as\nPaQ-2-PiQ, SPAQ and KonIQ-10k.",
"authors": "Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, Feng Yang",
"published": "2021-08-12",
"updated": "2021-08-12",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2309.14181v3",
"title": "Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision",
"abstract": "The rapid evolution of Multi-modality Large Language Models (MLLMs) has\ncatalyzed a shift in computer vision from specialized models to general-purpose\nfoundation models. Nevertheless, there is still an inadequacy in assessing the\nabilities of MLLMs on low-level visual perception and understanding. To address\nthis gap, we present Q-Bench, a holistic benchmark crafted to systematically\nevaluate potential abilities of MLLMs on three realms: low-level visual\nperception, low-level visual description, and overall visual quality\nassessment. a) To evaluate the low-level perception ability, we construct the\nLLVisionQA dataset, consisting of 2,990 diverse-sourced images, each equipped\nwith a human-asked question focusing on its low-level attributes. We then\nmeasure the correctness of MLLMs on answering these questions. b) To examine\nthe description ability of MLLMs on low-level information, we propose the\nLLDescribe dataset consisting of long expert-labelled golden low-level text\ndescriptions on 499 images, and a GPT-involved comparison pipeline between\noutputs of MLLMs and the golden descriptions. c) Besides these two tasks, we\nfurther measure their visual quality assessment ability to align with human\nopinion scores. Specifically, we design a softmax-based strategy that enables\nMLLMs to predict quantifiable quality scores, and evaluate them on various\nexisting image quality assessment (IQA) datasets. Our evaluation across the\nthree abilities confirms that MLLMs possess preliminary low-level visual\nskills. However, these skills are still unstable and relatively imprecise,\nindicating the need for specific enhancements on MLLMs towards these abilities.\nWe hope that our benchmark can encourage the research community to delve deeper\nto discover and enhance these untapped potentials of MLLMs. Project Page:\nhttps://q-future.github.io/Q-Bench.",
"authors": "Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, Weisi Lin",
"published": "2023-09-25",
"updated": "2024-01-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.MM"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/1907.02665v1",
"title": "Blind Image Quality Assessment Using A Deep Bilinear Convolutional Neural Network",
"abstract": "We propose a deep bilinear model for blind image quality assessment (BIQA)\nthat handles both synthetic and authentic distortions. Our model consists of\ntwo convolutional neural networks (CNN), each of which specializes in one\ndistortion scenario. For synthetic distortions, we pre-train a CNN to classify\nimage distortion type and level, where we enjoy large-scale training data. For\nauthentic distortions, we adopt a pre-trained CNN for image classification. The\nfeatures from the two CNNs are pooled bilinearly into a unified representation\nfor final quality prediction. We then fine-tune the entire model on target\nsubject-rated databases using a variant of stochastic gradient descent.\nExtensive experiments demonstrate that the proposed model achieves superior\nperformance on both synthetic and authentic databases. Furthermore, we verify\nthe generalizability of our method on the Waterloo Exploration Database using\nthe group maximum differentiation competition.",
"authors": "Weixia Zhang, Kede Ma, Jia Yan, Dexiang Deng, Zhou Wang",
"published": "2019-07-05",
"updated": "2019-07-05",
"primary_cat": "eess.IV",
"cats": [
"eess.IV",
"cs.CV",
"cs.MM"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2402.03413v1",
"title": "Perceptual Video Quality Assessment: A Survey",
"abstract": "Perceptual video quality assessment plays a vital role in the field of video\nprocessing due to the existence of quality degradations introduced in various\nstages of video signal acquisition, compression, transmission and display. With\nthe advancement of internet communication and cloud service technology, video\ncontent and traffic are growing exponentially, which further emphasizes the\nrequirement for accurate and rapid assessment of video quality. Therefore,\nnumerous subjective and objective video quality assessment studies have been\nconducted over the past two decades for both generic videos and specific videos\nsuch as streaming, user-generated content (UGC), 3D, virtual and augmented\nreality (VR and AR), high frame rate (HFR), audio-visual, etc. This survey\nprovides an up-to-date and comprehensive review of these video quality\nassessment studies. Specifically, we first review the subjective video quality\nassessment methodologies and databases, which are necessary for validating the\nperformance of video quality metrics. Second, the objective video quality\nassessment algorithms for general purposes are surveyed and concluded according\nto the methodologies utilized in the quality measures. Third, we overview the\nobjective video quality assessment measures for specific applications and\nemerging topics. Finally, the performances of the state-of-the-art video\nquality assessment measures are compared and analyzed. This survey provides a\nsystematic overview of both classical works and recent progresses in the realm\nof video quality assessment, which can help other researchers quickly access\nthe field and conduct relevant research.",
"authors": "Xiongkuo Min, Huiyu Duan, Wei Sun, Yucheng Zhu, Guangtao Zhai",
"published": "2024-02-05",
"updated": "2024-02-05",
"primary_cat": "cs.MM",
"cats": [
"cs.MM",
"cs.CV",
"eess.IV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2207.12396v2",
"title": "Exploring CLIP for Assessing the Look and Feel of Images",
"abstract": "Measuring the perception of visual content is a long-standing problem in\ncomputer vision. Many mathematical models have been developed to evaluate the\nlook or quality of an image. Despite the effectiveness of such tools in\nquantifying degradations such as noise and blurriness levels, such\nquantification is loosely coupled with human language. When it comes to more\nabstract perception about the feel of visual content, existing methods can only\nrely on supervised models that are explicitly trained with labeled data\ncollected via laborious user study. In this paper, we go beyond the\nconventional paradigms by exploring the rich visual language prior encapsulated\nin Contrastive Language-Image Pre-training (CLIP) models for assessing both the\nquality perception (look) and abstract perception (feel) of images in a\nzero-shot manner. In particular, we discuss effective prompt designs and show\nan effective prompt pairing strategy to harness the prior. We also provide\nextensive experiments on controlled datasets and Image Quality Assessment (IQA)\nbenchmarks. Our results show that CLIP captures meaningful priors that\ngeneralize well to different perceptual assessments. Code is avaliable at\nhttps://github.com/IceClear/CLIP-IQA.",
"authors": "Jianyi Wang, Kelvin C. K. Chan, Chen Change Loy",
"published": "2022-07-25",
"updated": "2022-11-23",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2204.08958v2",
"title": "MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment",
"abstract": "No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual\nquality of images in accordance with human subjective perception.\nUnfortunately, existing NR-IQA methods are far from meeting the needs of\npredicting accurate quality scores on GAN-based distortion images. To this end,\nwe propose Multi-dimension Attention Network for no-reference Image Quality\nAssessment (MANIQA) to improve the performance on GAN-based distortion. We\nfirstly extract features via ViT, then to strengthen global and local\ninteractions, we propose the Transposed Attention Block (TAB) and the Scale\nSwin Transformer Block (SSTB). These two modules apply attention mechanisms\nacross the channel and spatial dimension, respectively. In this\nmulti-dimensional manner, the modules cooperatively increase the interaction\namong different regions of images globally and locally. Finally, a dual branch\nstructure for patch-weighted quality prediction is applied to predict the final\nscore depending on the weight of each patch's score. Experimental results\ndemonstrate that MANIQA outperforms state-of-the-art methods on four standard\ndatasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. Besides, our\nmethod ranked first place in the final testing phase of the NTIRE 2022\nPerceptual Image Quality Assessment Challenge Track 2: No-Reference. Codes and\nmodels are available at https://github.com/IIGROUP/MANIQA.",
"authors": "Sidi Yang, Tianhe Wu, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang, Yujiu Yang",
"published": "2022-04-19",
"updated": "2022-04-21",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"eess.IV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2103.00020v1",
"title": "Learning Transferable Visual Models From Natural Language Supervision",
"abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.",
"authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever",
"published": "2021-02-26",
"updated": "2021-02-26",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2405.01778v1",
"title": "Hierarchical mixture of discriminative Generalized Dirichlet classifiers",
"abstract": "This paper presents a discriminative classifier for compositional data. This\nclassifier is based on the posterior distribution of the Generalized Dirichlet\nwhich is the discriminative counterpart of Generalized Dirichlet mixture model.\nMoreover, following the mixture of experts paradigm, we proposed a hierarchical\nmixture of this classifier. In order to learn the models parameters, we use a\nvariational approximation by deriving an upper-bound for the Generalized\nDirichlet mixture. To the best of our knownledge, this is the first time this\nbound is proposed in the literature. Experimental results are presented for\nspam detection and color space identification.",
"authors": "Elvis Togban, Djemel Ziou",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2202.13934v1",
"title": "Functional mixture-of-experts for classification",
"abstract": "We develop a mixtures-of-experts (ME) approach to the multiclass\nclassification where the predictors are univariate functions. It consists of a\nME model in which both the gating network and the experts network are\nconstructed upon multinomial logistic activation functions with functional\ninputs. We perform a regularized maximum likelihood estimation in which the\ncoefficient functions enjoy interpretable sparsity constraints on targeted\nderivatives. We develop an EM-Lasso like algorithm to compute the regularized\nMLE and evaluate the proposed approach on simulated and real data.",
"authors": "Nhat Thien Pham, Faicel Chamroukhi",
"published": "2022-02-28",
"updated": "2022-02-28",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.AI",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2102.06034v1",
"title": "Speech enhancement with mixture-of-deep-experts with clean clustering pre-training",
"abstract": "In this study we present a mixture of deep experts (MoDE) neural-network\narchitecture for single microphone speech enhancement. Our architecture\ncomprises a set of deep neural networks (DNNs), each of which is an 'expert' in\na different speech spectral pattern such as phoneme. A gating DNN is\nresponsible for the latent variables which are the weights assigned to each\nexpert's output given a speech segment. The experts estimate a mask from the\nnoisy input and the final mask is then obtained as a weighted average of the\nexperts' estimates, with the weights determined by the gating DNN. A soft\nspectral attenuation, based on the estimated mask, is then applied to enhance\nthe noisy speech signal. As a byproduct, we gain reduction at the complexity in\ntest time. We show that the experts specialization allows better robustness to\nunfamiliar noise types.",
"authors": "Shlomo E. Chazan, Jacob Goldberger, Sharon Gannot",
"published": "2021-02-11",
"updated": "2021-02-11",
"primary_cat": "cs.SD",
"cats": [
"cs.SD",
"cs.LG",
"eess.AS"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2312.00968v2",
"title": "Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts",
"abstract": "Large multi-modal models (LMMs) exhibit remarkable performance across\nnumerous tasks. However, generalist LMMs often suffer from performance\ndegradation when tuned over a large collection of tasks. Recent research\nsuggests that Mixture of Experts (MoE) architectures are useful for instruction\ntuning, but for LMMs of parameter size around O(50-100B), the prohibitive cost\nof replicating and storing the expert models severely limits the number of\nexperts we can use. We propose Omni-SMoLA, an architecture that uses the Soft\nMoE approach to (softly) mix many multimodal low rank experts, and avoids\nintroducing a significant number of new parameters compared to conventional MoE\nmodels. The core intuition here is that the large model provides a foundational\nbackbone, while different lightweight experts residually learn specialized\nknowledge, either per-modality or multimodally. Extensive experiments\ndemonstrate that the SMoLA approach helps improve the generalist performance\nacross a broad range of generative vision-and-language tasks, achieving new\nSoTA generalist performance that often matches or outperforms single\nspecialized LMM baselines, as well as new SoTA specialist performance.",
"authors": "Jialin Wu, Xia Hu, Yaqing Wang, Bo Pang, Radu Soricut",
"published": "2023-12-01",
"updated": "2024-04-02",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2207.09094v1",
"title": "MoEC: Mixture of Expert Clusters",
"abstract": "Sparsely Mixture of Experts (MoE) has received great interest due to its\npromising scaling capability with affordable computational overhead. MoE\nconverts dense layers into sparse experts, and utilizes a gated routing network\nto make experts conditionally activated. However, as the number of experts\ngrows, MoE with outrageous parameters suffers from overfitting and sparse data\nallocation. Such problems are especially severe on tasks with limited data,\nthus hindering the progress for MoE models to improve performance by scaling\nup. In this work, we propose Mixture of Expert Clusters - a general approach to\nenable expert layers to learn more diverse and appropriate knowledge by\nimposing variance-based constraints on the routing stage. We further propose a\ncluster-level expert dropout strategy specifically designed for the expert\ncluster structure. Our experiments reveal that MoEC could improve performance\non machine translation and natural language understanding tasks, and raise the\nperformance upper bound for scaling up experts under limited data. We also\nverify that MoEC plays a positive role in mitigating overfitting and sparse\ndata allocation.",
"authors": "Yuan Xie, Shaohan Huang, Tianyu Chen, Furu Wei",
"published": "2022-07-19",
"updated": "2022-07-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2212.00471v1",
"title": "Implicit Mixture of Interpretable Experts for Global and Local Interpretability",
"abstract": "We investigate the feasibility of using mixtures of interpretable experts\n(MoIE) to build interpretable image classifiers on MNIST10. MoIE uses a\nblack-box router to assign each input to one of many inherently interpretable\nexperts, thereby providing insight into why a particular classification\ndecision was made. We find that a naively trained MoIE will learn to 'cheat',\nwhereby the black-box router will solve the classification problem by itself,\nwith each expert simply learning a constant function for one particular class.\nWe propose to solve this problem by introducing interpretable routers and\ntraining the black-box router's decisions to match the interpretable router. In\naddition, we propose a novel implicit parameterization scheme that allows us to\nbuild mixtures of arbitrary numbers of experts, allowing us to study how\nclassification performance, local and global interpretability vary as the\nnumber of experts is increased. Our new model, dubbed Implicit Mixture of\nInterpretable Experts (IMoIE) can match state-of-the-art classification\naccuracy on MNIST10 while providing local interpretability, and can provide\nglobal interpretability albeit at the cost of reduced classification accuracy.",
"authors": "Nathan Elazar, Kerry Taylor",
"published": "2022-12-01",
"updated": "2022-12-01",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1704.00946v4",
"title": "Approximation results regarding the multiple-output mixture of linear experts model",
"abstract": "Mixture of experts (MoE) models are a class of artificial neural networks\nthat can be used for functional approximation and probabilistic modeling. An\nimportant class of MoE models is the class of mixture of linear experts (MoLE)\nmodels, where the expert functions map to real topological output spaces. There\nare a number of powerful approximation results regarding MoLE models, when the\noutput space is univariate. These results guarantee the ability of MoLE mean\nfunctions to approximate arbitrary continuous functions, and MoLE models\nthemselves to approximate arbitrary conditional probability density functions.\nWe utilize and extend upon the univariate approximation results in order to\nprove a pair of useful results for situations where the output spaces are\nmultivariate.",
"authors": "Hien D. Nguyen, Faicel Chamroukhi, Florence Forbes",
"published": "2017-04-04",
"updated": "2019-05-28",
"primary_cat": "stat.ME",
"cats": [
"stat.ME"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1905.12969v1",
"title": "Enriched Mixtures of Gaussian Process Experts",
"abstract": "Mixtures of experts probabilistically divide the input space into regions,\nwhere the assumptions of each expert, or conditional model, need only hold\nlocally. Combined with Gaussian process (GP) experts, this results in a\npowerful and highly flexible model. We focus on alternative mixtures of GP\nexperts, which model the joint distribution of the inputs and targets\nexplicitly. We highlight issues of this approach in multi-dimensional input\nspaces, namely, poor scalability and the need for an unnecessarily large number\nof experts, degrading the predictive performance and increasing uncertainty. We\nconstruct a novel model to address these issues through a nested partitioning\nscheme that automatically infers the number of components at both levels.\nMultiple response types are accommodated through a generalised GP framework,\nwhile multiple input types are included through a factorised exponential family\nstructure. We show the effectiveness of our approach in estimating a\nparsimonious probabilistic description of both synthetic data of increasing\ndimension and an Alzheimer's challenge dataset.",
"authors": "Charles W. L. Gadd, Sara Wade, Alexis Boukouvalas",
"published": "2019-05-30",
"updated": "2019-05-30",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2112.14397v2",
"title": "EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate",
"abstract": "Mixture-of-experts (MoE) is becoming popular due to its success in improving\nthe model quality, especially in Transformers. By routing tokens with a sparse\ngate to a few experts (i.e., a small pieces of the full model), MoE can easily\nincrease the model parameters to a very large scale while keeping the\ncomputation cost in a constant level. Most existing works just initialize some\nrandom experts, set a fixed gating strategy (e.g., Top-k), and train the model\nfrom scratch in an ad-hoc way. We identify that these MoE models are suffering\nfrom the immature experts and unstable sparse gate, which are harmful to the\nconvergence performance. In this paper, we propose an efficient end-to-end MoE\ntraining framework called EvoMoE. EvoMoE starts from training one single expert\nand gradually evolves into a large and sparse MoE structure. EvoMoE mainly\ncontains two phases: the expert-diversify phase to train the base expert for a\nwhile and spawn multiple diverse experts from it, and the gate-sparsify phase\nto learn an adaptive sparse gate and activate a dynamic number of experts.\nEvoMoE naturally decouples the joint learning of both the experts and the\nsparse gate and focuses on learning the basic knowledge with a single expert at\nthe early training stage. Then it diversifies the experts and continues to\ntrain the MoE with a novel Dense-to-Sparse gate (DTS-Gate). Specifically,\ninstead of using a permanent sparse gate, DTS-Gate begins as a dense gate that\nroutes tokens to all experts, then gradually and adaptively becomes sparser\nwhile routes to fewer experts. Evaluations are conducted on three popular\nmodels and tasks, including RoBERTa for masked language modeling task, GPT for\nlanguage modeling task and Transformer for machine translation task. The\nresults show that EvoMoE outperforms existing baselines, including Switch, BASE\nLayer, Hash Layer and StableMoE.",
"authors": "Xiaonan Nie, Xupeng Miao, Shijie Cao, Lingxiao Ma, Qibin Liu, Jilong Xue, Youshan Miao, Yi Liu, Zhi Yang, Bin Cui",
"published": "2021-12-29",
"updated": "2022-10-09",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2310.01334v2",
"title": "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy",
"abstract": "Sparsely activated Mixture-of-Experts (SMoE) has shown promise to scale up\nthe learning capacity of neural networks, however, they have issues like (a)\nHigh Memory Usage, due to duplication of the network layers into multiple\ncopies as experts; and (b) Redundancy in Experts, as common learning-based\nrouting policies suffer from representational collapse. Therefore, vanilla SMoE\nmodels are memory inefficient and non-scalable, especially for\nresource-constrained downstream scenarios. In this paper, we ask: Can we craft\na compact SMoE model by consolidating expert information? What is the best\nrecipe to merge multiple experts into fewer but more knowledgeable experts? Our\npilot investigation reveals that conventional model merging methods fail to be\neffective in such expert merging for SMoE. The potential reasons are: (1)\nredundant information overshadows critical experts; (2) appropriate neuron\npermutation for each expert is missing to bring all of them in alignment. To\naddress this, we propose M-SMoE, which leverages routing statistics to guide\nexpert merging. Specifically, it starts with neuron permutation alignment for\nexperts; then, dominant experts and their \"group members\" are formed; lastly,\nevery expert group is merged into a single expert by utilizing each expert's\nactivation frequency as their weight for merging, thus diminishing the impact\nof insignificant experts. Moreover, we observed that our proposed merging\npromotes a low dimensionality in the merged expert's weight space, naturally\npaving the way for additional compression. Hence, our final method, MC-SMoE\n(i.e., Merge, then Compress SMoE), further decomposes the merged experts into\nlow-rank and structural sparse alternatives. Extensive experiments across 8\nbenchmarks validate the effectiveness of MC-SMoE. For instance, our MC-SMoE\nachieves up to 80% memory and a 20% FLOPs reduction, with virtually no loss in\nperformance.",
"authors": "Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen",
"published": "2023-10-02",
"updated": "2024-03-14",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2312.12379v4",
"title": "Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning",
"abstract": "Instruction tuning of Large Vision-language Models (LVLMs) has revolutionized\nthe development of versatile models with zero-shot generalization across a wide\nrange of downstream vision-language tasks. However, the diversity of training\ntasks of different sources and formats would lead to inevitable task conflicts,\nwhere different tasks conflict for the same set of model parameters, resulting\nin sub-optimal instructionfollowing abilities. To address that, we propose the\nMixture of Clusterconditional LoRA Experts (MoCLE), a novel Mixture of Experts\n(MoE) architecture designed to activate the task-customized model parameters\nbased on the instruction clusters. A separate universal expert is further\nincorporated to improve generalization capabilities of MoCLE for novel\ninstructions. Extensive experiments on 11 zero-shot tasks demonstrate the\neffectiveness of MoCLE.",
"authors": "Yunhao Gou, Zhili Liu, Kai Chen, Lanqing Hong, Hang Xu, Aoxue Li, Dit-Yan Yeung, James T. Kwok, Yu Zhang",
"published": "2023-12-19",
"updated": "2024-03-22",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2402.05220v1",
"title": "On Parameter Estimation in Deviated Gaussian Mixture of Experts",
"abstract": "We consider the parameter estimation problem in the deviated Gaussian mixture\nof experts in which the data are generated from $(1 - \\lambda^{\\ast}) g_0(Y|\nX)+ \\lambda^{\\ast} \\sum_{i = 1}^{k_{\\ast}} p_{i}^{\\ast}\nf(Y|(a_{i}^{\\ast})^{\\top}X+b_i^{\\ast},\\sigma_{i}^{\\ast})$, where $X, Y$ are\nrespectively a covariate vector and a response variable, $g_{0}(Y|X)$ is a\nknown function, $\\lambda^{\\ast} \\in [0, 1]$ is true but unknown mixing\nproportion, and $(p_{i}^{\\ast}, a_{i}^{\\ast}, b_{i}^{\\ast}, \\sigma_{i}^{\\ast})$\nfor $1 \\leq i \\leq k^{\\ast}$ are unknown parameters of the Gaussian mixture of\nexperts. This problem arises from the goodness-of-fit test when we would like\nto test whether the data are generated from $g_{0}(Y|X)$ (null hypothesis) or\nthey are generated from the whole mixture (alternative hypothesis). Based on\nthe algebraic structure of the expert functions and the distinguishability\nbetween $g_0$ and the mixture part, we construct novel Voronoi-based loss\nfunctions to capture the convergence rates of maximum likelihood estimation\n(MLE) for our models. We further demonstrate that our proposed loss functions\ncharacterize the local convergence rates of parameter estimation more\naccurately than the generalized Wasserstein, a loss function being commonly\nused for estimating parameters in the Gaussian mixture of experts.",
"authors": "Huy Nguyen, Khai Nguyen, Nhat Ho",
"published": "2024-02-07",
"updated": "2024-02-07",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2204.09179v3",
"title": "On the Representation Collapse of Sparse Mixture of Experts",
"abstract": "Sparse mixture of experts provides larger model capacity while requiring a\nconstant computational overhead. It employs the routing mechanism to distribute\ninput tokens to the best-matched experts according to their hidden\nrepresentations. However, learning such a routing mechanism encourages token\nclustering around expert centroids, implying a trend toward representation\ncollapse. In this work, we propose to estimate the routing scores between\ntokens and experts on a low-dimensional hypersphere. We conduct extensive\nexperiments on cross-lingual language model pre-training and fine-tuning on\ndownstream tasks. Experimental results across seven multilingual benchmarks\nshow that our method achieves consistent gains. We also present a comprehensive\nanalysis on the representation and routing behaviors of our models. Our method\nalleviates the representation collapse issue and achieves more consistent\nrouting than the baseline mixture-of-experts methods.",
"authors": "Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, Furu Wei",
"published": "2022-04-20",
"updated": "2022-10-12",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1809.04853v2",
"title": "Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership",
"abstract": "A method for implicit variable selection in mixture of experts frameworks is\nproposed. We introduce a prior structure where information is taken from a set\nof independent covariates. Robust class membership predictors are identified\nusing a normal gamma prior. The resulting model setup is used in a finite\nmixture of Bernoulli distributions to find homogenous clusters of women in\nMozambique based on their information sources on HIV. Fully Bayesian inference\nis carried out via the implementation of a Gibbs sampler.",
"authors": "Gregor Zens",
"published": "2018-09-13",
"updated": "2019-01-12",
"primary_cat": "econ.EM",
"cats": [
"econ.EM",
"62F15, 62J07, 62H30, 90-08"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2310.02410v1",
"title": "Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness",
"abstract": "Large Mixture of Experts (MoE) models could achieve state-of-the-art quality\non various language tasks, including machine translation task, thanks to the\nefficient model scaling capability with expert parallelism. However, it has\nbrought a fundamental issue of larger memory consumption and increased memory\nbandwidth bottleneck at deployment time. In this paper, we propose Mixture of\nQuantized Experts (MoQE) which is a simple weight-only quantization method\napplying ultra low-bit down to 2-bit quantizations only to expert weights for\nmitigating the increased memory and latency issues of MoE models. We show that\nlow-bit quantization together with the MoE architecture delivers a reliable\nmodel performance while reducing the memory size significantly even without any\nadditional training in most cases. In particular, expert layers in MoE models\nare much more robust to the quantization than conventional feedforward networks\n(FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit\nexpert weights can deliver better model performance than the dense model\ntrained on the same dataset. As a result of low-bit quantization, we show the\nmodel size can be reduced by 79.6% of the original half precision floating\npoint (fp16) MoE model. Combined with an optimized GPU runtime implementation,\nit also achieves 1.24X speed-up on A100 GPUs.",
"authors": "Young Jin Kim, Raffy Fahim, Hany Hassan Awadalla",
"published": "2023-10-03",
"updated": "2023-10-03",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2107.04694v1",
"title": "Lifelong Mixture of Variational Autoencoders",
"abstract": "In this paper, we propose an end-to-end lifelong learning mixture of experts.\nEach expert is implemented by a Variational Autoencoder (VAE). The experts in\nthe mixture system are jointly trained by maximizing a mixture of individual\ncomponent evidence lower bounds (MELBO) on the log-likelihood of the given\ntraining samples. The mixing coefficients in the mixture, control the\ncontributions of each expert in the goal representation. These are sampled from\na Dirichlet distribution whose parameters are determined through non-parametric\nestimation during lifelong learning. The model can learn new tasks fast when\nthese are similar to those previously learnt. The proposed Lifelong mixture of\nVAE (L-MVAE) expands its architecture with new components when learning a\ncompletely new task. After the training, our model can automatically determine\nthe relevant expert to be used when fed with new data samples. This mechanism\nbenefits both the memory efficiency and the required computational cost as only\none expert is used during the inference. The L-MVAE inference model is able to\nperform interpolation in the joint latent space across the data domains\nassociated with different tasks and is shown to be efficient for disentangled\nlearning representation.",
"authors": "Fei Ye, Adrian G. Bors",
"published": "2021-07-09",
"updated": "2021-07-09",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1907.05346v1",
"title": "A Modular Task-oriented Dialogue System Using a Neural Mixture-of-Experts",
"abstract": "End-to-end Task-oriented Dialogue Systems (TDSs) have attracted a lot of\nattention for their superiority (e.g., in terms of global optimization) over\npipeline modularized TDSs. Previous studies on end-to-end TDSs use a\nsingle-module model to generate responses for complex dialogue contexts.\nHowever, no model consistently outperforms the others in all cases. We propose\na neural Modular Task-oriented Dialogue System(MTDS) framework, in which a few\nexpert bots are combined to generate the response for a given dialogue context.\nMTDS consists of a chair bot and several expert bots. Each expert bot is\nspecialized for a particular situation, e.g., one domain, one type of action of\na system, etc. The chair bot coordinates multiple expert bots and adaptively\nselects an expert bot to generate the appropriate response. We further propose\na Token-level Mixture-of-Expert (TokenMoE) model to implement MTDS, where the\nexpert bots predict multiple tokens at each timestamp and the chair bot\ndetermines the final generated token by fully taking into consideration the\noutputs of all expert bots. Both the chair bot and the expert bots are jointly\ntrained in an end-to-end fashion. To verify the effectiveness of TokenMoE, we\ncarry out extensive experiments on a benchmark dataset. Compared with the\nbaseline using a single-module model, our TokenMoE improves the performance by\n8.1% of inform rate and 0.8% of success rate.",
"authors": "Jiahuan Pei, Pengjie Ren, Maarten de Rijke",
"published": "2019-07-10",
"updated": "2019-07-10",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.IR",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1511.06072v1",
"title": "Mediated Experts for Deep Convolutional Networks",
"abstract": "We present a new supervised architecture termed Mediated Mixture-of-Experts\n(MMoE) that allows us to improve classification accuracy of Deep Convolutional\nNetworks (DCN). Our architecture achieves this with the help of expert\nnetworks: A network is trained on a disjoint subset of a given dataset and then\nrun in parallel to other experts during deployment. A mediator is employed if\nexperts contradict each other. This allows our framework to naturally support\nincremental learning, as adding new classes requires (re-)training of the new\nexpert only. We also propose two measures to control computational complexity:\nAn early-stopping mechanism halts experts that have low confidence in their\nprediction. The system allows to trade-off accuracy and complexity without\nfurther retraining. We also suggest to share low-level convolutional layers\nbetween experts in an effort to avoid computation of a near-duplicate feature\nset. We evaluate our system on a popular dataset and report improved accuracy\ncompared to a single model of same configuration.",
"authors": "Sebastian Agethen, Winston H. Hsu",
"published": "2015-11-19",
"updated": "2015-11-19",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.NE"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2109.11449v2",
"title": "Dynamic Mixture of Experts Models for Online Prediction",
"abstract": "A mixture of experts models the conditional density of a response variable\nusing a mixture of regression models with covariate-dependent mixture weights.\nWe extend the finite mixture of experts model by allowing the parameters in\nboth the mixture components and the weights to evolve in time by following\nrandom walk processes. Inference for time-varying parameters in richly\nparameterized mixture of experts models is challenging. We propose a sequential\nMonte Carlo algorithm for online inference and based on a tailored proposal\ndistribution built on ideas from linear Bayes methods and the EM algorithm. The\nmethod gives a unified treatment for mixtures with time-varying parameters,\nincluding the special case of static parameters. We assess the properties of\nthe method on simulated data and on industrial data where the aim is to predict\nsoftware faults in a continuously upgraded large-scale software project.",
"authors": "Parfait Munezero, Mattias Villani, Robert Kohn",
"published": "2021-09-23",
"updated": "2022-10-13",
"primary_cat": "stat.CO",
"cats": [
"stat.CO",
"stat.AP"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2404.15045v1",
"title": "Multi-Head Mixture-of-Experts",
"abstract": "Sparse Mixtures of Experts (SMoE) scales model capacity without significant\nincreases in training and inference costs, but exhibits the following two\nissues: (1) Low expert activation, where only a small subset of experts are\nactivated for optimization. (2) Lacking fine-grained analytical capabilities\nfor multiple semantic concepts within individual tokens. We propose Multi-Head\nMixture-of-Experts (MH-MoE), which employs a multi-head mechanism to split each\ntoken into multiple sub-tokens. These sub-tokens are then assigned to and\nprocessed by a diverse set of experts in parallel, and seamlessly reintegrated\ninto the original token form. The multi-head mechanism enables the model to\ncollectively attend to information from various representation spaces within\ndifferent experts, while significantly enhances expert activation, thus deepens\ncontext understanding and alleviate overfitting. Moreover, our MH-MoE is\nstraightforward to implement and decouples from other SMoE optimization\nmethods, making it easy to integrate with other SMoE models for enhanced\nperformance. Extensive experimental results across three tasks: English-focused\nlanguage modeling, Multi-lingual language modeling and Masked multi-modality\nmodeling tasks, demonstrate the effectiveness of MH-MoE.",
"authors": "Xun Wu, Shaohan Huang, Wenhui Wang, Furu Wei",
"published": "2024-04-23",
"updated": "2024-04-23",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2009.07806v1",
"title": "Transformer Based Multi-Source Domain Adaptation",
"abstract": "In practical machine learning settings, the data on which a model must make\npredictions often come from a different distribution than the data it was\ntrained on. Here, we investigate the problem of unsupervised multi-source\ndomain adaptation, where a model is trained on labelled data from multiple\nsource domains and must make predictions on a domain for which no labelled data\nhas been seen. Prior work with CNNs and RNNs has demonstrated the benefit of\nmixture of experts, where the predictions of multiple domain expert classifiers\nare combined; as well as domain adversarial training, to induce a domain\nagnostic representation space. Inspired by this, we investigate how such\nmethods can be effectively applied to large pretrained transformer models. We\nfind that domain adversarial training has an effect on the learned\nrepresentations of these models while having little effect on their\nperformance, suggesting that large transformer-based models are already\nrelatively robust across domains. Additionally, we show that mixture of experts\nleads to significant performance improvements by comparing several variants of\nmixing functions, including one novel mixture based on attention. Finally, we\ndemonstrate that the predictions of large pretrained transformer based domain\nexperts are highly homogenous, making it challenging to learn effective\nfunctions for mixing their predictions.",
"authors": "Dustin Wright, Isabelle Augenstein",
"published": "2020-09-16",
"updated": "2020-09-16",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2109.05238v3",
"title": "Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy",
"abstract": "Simultaneous machine translation (SiMT) generates translation before reading\nthe entire source sentence and hence it has to trade off between translation\nquality and latency. To fulfill the requirements of different translation\nquality and latency in practical applications, the previous methods usually\nneed to train multiple SiMT models for different latency levels, resulting in\nlarge computational costs. In this paper, we propose a universal SiMT model\nwith Mixture-of-Experts Wait-k Policy to achieve the best translation quality\nunder arbitrary latency with only one trained model. Specifically, our method\nemploys multi-head attention to accomplish the mixture of experts where each\nhead is treated as a wait-k expert with its own waiting words number, and given\na test latency and source inputs, the weights of the experts are accordingly\nadjusted to produce the best translation. Experiments on three datasets show\nthat our method outperforms all the strong baselines under different latency,\nincluding the state-of-the-art adaptive policy.",
"authors": "Shaolei Zhang, Yang Feng",
"published": "2021-09-11",
"updated": "2022-03-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1703.09302v1",
"title": "Speech Enhancement using a Deep Mixture of Experts",
"abstract": "In this study we present a Deep Mixture of Experts (DMoE) neural-network\narchitecture for single microphone speech enhancement. By contrast to most\nspeech enhancement algorithms that overlook the speech variability mainly\ncaused by phoneme structure, our framework comprises a set of deep neural\nnetworks (DNNs), each one of which is an 'expert' in enhancing a given speech\ntype corresponding to a phoneme. A gating DNN determines which expert is\nassigned to a given speech segment. A speech presence probability (SPP) is then\nobtained as a weighted average of the expert SPP decisions, with the weights\ndetermined by the gating DNN. A soft spectral attenuation, based on the SPP, is\nthen applied to enhance the noisy speech signal. The experts and the gating\ncomponents of the DMoE network are trained jointly. As part of the training,\nspeech clustering into different subsets is performed in an unsupervised\nmanner. Therefore, unlike previous methods, a phoneme-labeled database is not\nrequired for the training procedure. A series of experiments with different\nnoise types verified the applicability of the new algorithm to the task of\nspeech enhancement. The proposed scheme outperforms other schemes that either\ndo not consider phoneme structure or use a simpler training methodology.",
"authors": "Shlomo E. Chazan, Jacob Goldberger, Sharon Gannot",
"published": "2017-03-27",
"updated": "2017-03-27",
"primary_cat": "cs.SD",
"cats": [
"cs.SD"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1907.04377v2",
"title": "Convergence Rates for Gaussian Mixtures of Experts",
"abstract": "We provide a theoretical treatment of over-specified Gaussian mixtures of\nexperts with covariate-free gating networks. We establish the convergence rates\nof the maximum likelihood estimation (MLE) for these models. Our proof\ntechnique is based on a novel notion of \\emph{algebraic independence} of the\nexpert functions. Drawing on optimal transport theory, we establish a\nconnection between the algebraic independence and a certain class of partial\ndifferential equations (PDEs). Exploiting this connection allows us to derive\nconvergence rates and minimax lower bounds for parameter estimation.",
"authors": "Nhat Ho, Chiao-Yu Yang, Michael I. Jordan",
"published": "2019-07-09",
"updated": "2022-03-08",
"primary_cat": "math.ST",
"cats": [
"math.ST",
"cs.LG",
"stat.ML",
"stat.TH"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1911.08151v2",
"title": "Retrospective and Prospective Mixture-of-Generators for Task-oriented Dialogue Response Generation",
"abstract": "Dialogue response generation (DRG) is a critical component of task-oriented\ndialogue systems (TDSs). Its purpose is to generate proper natural language\nresponses given some context, e.g., historical utterances, system states, etc.\nState-of-the-art work focuses on how to better tackle DRG in an end-to-end way.\nTypically, such studies assume that each token is drawn from a single\ndistribution over the output vocabulary, which may not always be optimal.\nResponses vary greatly with different intents, e.g., domains, system actions.\n We propose a novel mixture-of-generators network (MoGNet) for DRG, where we\nassume that each token of a response is drawn from a mixture of distributions.\nMoGNet consists of a chair generator and several expert generators. Each expert\nis specialized for DRG w.r.t. a particular intent. The chair coordinates\nmultiple experts and combines the output they have generated to produce more\nappropriate responses. We propose two strategies to help the chair make better\ndecisions, namely, a retrospective mixture-of-generators (RMoG) and prospective\nmixture-of-generators (PMoG). The former only considers the historical\nexpert-generated responses until the current time step while the latter also\nconsiders possible expert-generated responses in the future by encouraging\nexploration. In order to differentiate experts, we also devise a\nglobal-and-local (GL) learning scheme that forces each expert to be specialized\ntowards a particular intent using a local loss and trains the chair and all\nexperts to coordinate using a global loss.\n We carry out extensive experiments on the MultiWOZ benchmark dataset. MoGNet\nsignificantly outperforms state-of-the-art methods in terms of both automatic\nand human evaluations, demonstrating its effectiveness for DRG.",
"authors": "Jiahuan Pei, Pengjie Ren, Christof Monz, Maarten de Rijke",
"published": "2019-11-19",
"updated": "2020-02-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.IR"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2309.05444v1",
"title": "Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning",
"abstract": "The Mixture of Experts (MoE) is a widely known neural architecture where an\nensemble of specialized sub-models optimizes overall performance with a\nconstant computational cost. However, conventional MoEs pose challenges at\nscale due to the need to store all experts in memory. In this paper, we push\nMoE to the limit. We propose extremely parameter-efficient MoE by uniquely\ncombining MoE architecture with lightweight experts.Our MoE architecture\noutperforms standard parameter-efficient fine-tuning (PEFT) methods and is on\npar with full fine-tuning by only updating the lightweight experts -- less than\n1% of an 11B parameters model. Furthermore, our method generalizes to unseen\ntasks as it does not depend on any prior task knowledge. Our research\nunderscores the versatility of the mixture of experts architecture, showcasing\nits ability to deliver robust performance even when subjected to rigorous\nparameter constraints. Our code used in all the experiments is publicly\navailable here: https://github.com/for-ai/parameter-efficient-moe.",
"authors": "Ted Zadouri, Ahmet \u00dcst\u00fcn, Arash Ahmadian, Beyza Ermi\u015f, Acyr Locatelli, Sara Hooker",
"published": "2023-09-11",
"updated": "2023-09-11",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1904.09948v1",
"title": "PLUME: Polyhedral Learning Using Mixture of Experts",
"abstract": "In this paper, we propose a novel mixture of expert architecture for learning\npolyhedral classifiers. We learn the parameters of the classifierusing an\nexpectation maximization algorithm. Wederive the generalization bounds of the\nproposedapproach. Through an extensive simulation study, we show that the\nproposed method performs comparably to other state-of-the-art approaches.",
"authors": "Kulin Shah, P. S. Sastry, Naresh Manwani",
"published": "2019-04-22",
"updated": "2019-04-22",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2208.07109v3",
"title": "Context-aware Mixture-of-Experts for Unbiased Scene Graph Generation",
"abstract": "Scene graph generation (SGG) has gained tremendous progress in recent years.\nHowever, its underlying long-tailed distribution of predicate classes is a\nchallenging problem. For extremely unbalanced predicate distributions, existing\napproaches usually construct complicated context encoders to extract the\nintrinsic relevance of scene context to predicates and complex networks to\nimprove the learning ability of network models for highly imbalanced predicate\ndistributions. To address the unbiased SGG problem, we introduce a simple yet\neffective method dubbed Context-Aware Mixture-of-Experts (CAME) to improve\nmodel diversity and mitigate biased SGG without complicated design.\nSpecifically, we propose to integrate the mixture of experts with a divide and\nensemble strategy to remedy the severely long-tailed distribution of predicate\nclasses, which is applicable to the majority of unbiased scene graph\ngenerators. The biased SGG is thereby reduced, and the model tends to\nanticipate more evenly distributed predicate predictions. To differentiate\nbetween various predicate distribution levels, experts with the same weights\nare not sufficiently diverse. In order to enable the network dynamically\nexploit the rich scene context and further boost the diversity of model, we\nsimply use the built-in module to create a context encoder. The importance of\neach expert to scene context and each predicate to each expert is dynamically\nassociated with expert weighting (EW) and predicate weighting (PW) strategy. We\nhave conducted extensive experiments on three tasks using the Visual Genome\ndataset, showing that CAME outperforms recent methods and achieves\nstate-of-the-art performance. Our code will be available publicly.",
"authors": "Liguang Zhou, Yuhongze Zhou, Tin Lun Lam, Yangsheng Xu",
"published": "2022-08-15",
"updated": "2023-01-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2403.11412v1",
"title": "Expert Composer Policy: Scalable Skill Repertoire for Quadruped Robots",
"abstract": "We propose the expert composer policy, a framework to reliably expand the\nskill repertoire of quadruped agents. The composer policy links pair of experts\nvia transitions to a sampled target state, allowing experts to be composed\nsequentially. Each expert specializes in a single skill, such as a locomotion\ngait or a jumping motion. Instead of a hierarchical or mixture-of-experts\narchitecture, we train a single composer policy in an independent process that\nis not conditioned on the other expert policies. By reusing the same composer\npolicy, our approach enables adding new experts without affecting existing\nones, enabling incremental repertoire expansion and preserving original motion\nquality. We measured the transition success rate of 72 transition pairs and\nachieved an average success rate of 99.99\\%, which is over 10\\% higher than the\nbaseline random approach, and outperforms other state-of-the-art methods. Using\ndomain randomization during training we ensure a successful transfer to the\nreal world, where we achieve an average transition success rate of 97.22\\%\n(N=360) in our experiments.",
"authors": "Guilherme Christmann, Ying-Sheng Luo, Wei-Chao Chen",
"published": "2024-03-18",
"updated": "2024-03-18",
"primary_cat": "cs.RO",
"cats": [
"cs.RO"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2402.02952v1",
"title": "On Least Squares Estimation in Softmax Gating Mixture of Experts",
"abstract": "Mixture of experts (MoE) model is a statistical machine learning design that\naggregates multiple expert networks using a softmax gating function in order to\nform a more intricate and expressive model. Despite being commonly used in\nseveral applications owing to their scalability, the mathematical and\nstatistical properties of MoE models are complex and difficult to analyze. As a\nresult, previous theoretical works have primarily focused on probabilistic MoE\nmodels by imposing the impractical assumption that the data are generated from\na Gaussian MoE model. In this work, we investigate the performance of the least\nsquares estimators (LSE) under a deterministic MoE model where the data are\nsampled according to a regression model, a setting that has remained largely\nunexplored. We establish a condition called strong identifiability to\ncharacterize the convergence behavior of various types of expert functions. We\ndemonstrate that the rates for estimating strongly identifiable experts, namely\nthe widely used feed forward networks with activation functions\n$\\mathrm{sigmoid}(\\cdot)$ and $\\tanh(\\cdot)$, are substantially faster than\nthose of polynomial experts, which we show to exhibit a surprising slow\nestimation rate. Our findings have important practical implications for expert\nselection.",
"authors": "Huy Nguyen, Nhat Ho, Alessandro Rinaldo",
"published": "2024-02-05",
"updated": "2024-02-05",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2310.15961v1",
"title": "Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation",
"abstract": "Despite the promise of Mixture of Experts (MoE) models in increasing\nparameter counts of Transformer models while maintaining training and inference\ncosts, their application carries notable drawbacks. The key strategy of these\nmodels is to, for each processed token, activate at most a few experts -\nsubsets of an extensive feed-forward layer. But this approach is not without\nits challenges. The operation of matching experts and tokens is discrete, which\nmakes MoE models prone to issues like training instability and uneven expert\nutilization. Existing techniques designed to address these concerns, such as\nauxiliary losses or balance-aware matching, result either in lower model\nperformance or are more difficult to train. In response to these issues, we\npropose Mixture of Tokens, a fully-differentiable model that retains the\nbenefits of MoE architectures while avoiding the aforementioned difficulties.\nRather than routing tokens to experts, this approach mixes tokens from\ndifferent examples prior to feeding them to experts, enabling the model to\nlearn from all token-expert combinations. Importantly, this mixing can be\ndisabled to avoid mixing of different sequences during inference. Crucially,\nthis method is fully compatible with both masked and causal Large Language\nModel training and inference.",
"authors": "Szymon Antoniak, Sebastian Jaszczur, Micha\u0142 Krutul, Maciej Pi\u00f3ro, Jakub Krajewski, Jan Ludziejewski, Tomasz Odrzyg\u00f3\u017ad\u017a, Marek Cygan",
"published": "2023-10-24",
"updated": "2023-10-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2208.12830v1",
"title": "Mixtures of Gaussian Process Experts with SMC$^2$",
"abstract": "Gaussian processes are a key component of many flexible statistical and\nmachine learning models. However, they exhibit cubic computational complexity\nand high memory constraints due to the need of inverting and storing a full\ncovariance matrix. To circumvent this, mixtures of Gaussian process experts\nhave been considered where data points are assigned to independent experts,\nreducing the complexity by allowing inference based on smaller, local\ncovariance matrices. Moreover, mixtures of Gaussian process experts\nsubstantially enrich the model's flexibility, allowing for behaviors such as\nnon-stationarity, heteroscedasticity, and discontinuities. In this work, we\nconstruct a novel inference approach based on nested sequential Monte Carlo\nsamplers to simultaneously infer both the gating network and Gaussian process\nexpert parameters. This greatly improves inference compared to importance\nsampling, particularly in settings when a stationary Gaussian process is\ninappropriate, while still being thoroughly parallelizable.",
"authors": "Teemu H\u00e4rk\u00f6nen, Sara Wade, Kody Law, Lassi Roininen",
"published": "2022-08-26",
"updated": "2022-08-26",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG",
"stat.CO"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2401.06066v1",
"title": "DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models",
"abstract": "In the era of large language models, Mixture-of-Experts (MoE) is a promising\narchitecture for managing computational costs when scaling up model parameters.\nHowever, conventional MoE architectures like GShard, which activate the top-$K$\nout of $N$ experts, face challenges in ensuring expert specialization, i.e.\neach expert acquires non-overlapping and focused knowledge. In response, we\npropose the DeepSeekMoE architecture towards ultimate expert specialization. It\ninvolves two principal strategies: (1) finely segmenting the experts into $mN$\nones and activating $mK$ from them, allowing for a more flexible combination of\nactivated experts; (2) isolating $K_s$ experts as shared ones, aiming at\ncapturing common knowledge and mitigating redundancy in routed experts.\nStarting from a modest scale with 2B parameters, we demonstrate that\nDeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5\ntimes the expert parameters and computation. In addition, DeepSeekMoE 2B nearly\napproaches the performance of its dense counterpart with the same number of\ntotal parameters, which set the upper bound of MoE models. Subsequently, we\nscale up DeepSeekMoE to 16B parameters and show that it achieves comparable\nperformance with LLaMA2 7B, with only about 40% of computations. Further, our\npreliminary efforts to scale up DeepSeekMoE to 145B parameters consistently\nvalidate its substantial advantages over the GShard architecture, and show its\nperformance comparable with DeepSeek 67B, using only 28.5% (maybe even 18.2%)\nof computations.",
"authors": "Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, Wenfeng Liang",
"published": "2024-01-11",
"updated": "2024-01-11",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2006.13309v4",
"title": "Fast Deep Mixtures of Gaussian Process Experts",
"abstract": "Mixtures of experts have become an indispensable tool for flexible modelling\nin a supervised learning context, allowing not only the mean function but the\nentire density of the output to change with the inputs. Sparse Gaussian\nprocesses (GP) have shown promise as a leading candidate for the experts in\nsuch models, and in this article, we propose to design the gating network for\nselecting the experts from such mixtures of sparse GPs using a deep neural\nnetwork (DNN). Furthermore, a fast one pass algorithm called\nCluster-Classify-Regress (CCR) is leveraged to approximate the maximum a\nposteriori (MAP) estimator extremely quickly. This powerful combination of\nmodel and algorithm together delivers a novel method which is flexible, robust,\nand extremely efficient. In particular, the method is able to outperform\ncompeting methods in terms of accuracy and uncertainty quantification. The cost\nis competitive on low-dimensional and small data sets, but is significantly\nlower for higher-dimensional and big data sets. Iteratively maximizing the\ndistribution of experts given allocations and allocations given experts does\nnot provide significant improvement, which indicates that the algorithm\nachieves a good approximation to the local MAP estimator very fast. This\ninsight can be useful also in the context of other mixture of experts models.",
"authors": "Clement Etienam, Kody Law, Sara Wade, Vitaly Zankin",
"published": "2020-06-11",
"updated": "2023-12-01",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2304.13833v2",
"title": "Mixtures of Gaussian process experts based on kernel stick-breaking processes",
"abstract": "Mixtures of Gaussian process experts is a class of models that can\nsimultaneously address two of the key limitations inherent in standard Gaussian\nprocesses: scalability and predictive performance. In particular, models that\nuse Dirichlet processes as gating functions permit straightforward\ninterpretation and automatic selection of the number of experts in a mixture.\nWhile the existing models are intuitive and capable of capturing\nnon-stationarity, multi-modality and heteroskedasticity, the simplicity of\ntheir gating functions may limit the predictive performance when applied to\ncomplex data-generating processes. Capitalising on the recent advancement in\nthe dependent Dirichlet processes literature, we propose a new mixture model of\nGaussian process experts based on kernel stick-breaking processes. Our model\nmaintains the intuitive appeal yet improve the performance of the existing\nmodels. To make it practical, we design a sampler for posterior computation\nbased on the slice sampling. The model behaviour and improved predictive\nperformance are demonstrated in experiments using six datasets.",
"authors": "Yuji Saikai, Khue-Dung Dang",
"published": "2023-04-26",
"updated": "2023-05-05",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2205.01848v2",
"title": "Optimizing Mixture of Experts using Dynamic Recompilations",
"abstract": "The Mixture of Experts architecture allows for outrageously large neural\nnetworks by scaling model parameter size independently from computational\ndemand (FLOPs). However, current DNN frameworks cannot effectively support the\ndynamic data flow in Mixture of Experts, and implementations on top of these\nframeworks need to use workarounds that introduce significant overheads. To\naddress the limitation of these frameworks, we present DynaMoE, a DNN library\nthat uses dynamic recompilations to optimize and adapt the use of computational\nresources to the dynamic needs of Mixture of Experts models. Our evaluation\nshows that DynaMoE achieves a 1.8x speedup and supports 2.3x larger model sizes\nwhen compared to existing MoE systems, even when not using recompilations. We\nthen present further optimizations enabled by dynamic recompilations that yield\nan additional 1.7x speedup while simultaneously reducing memory pressure and\nimproving model quality.",
"authors": "Ferdinand Kossmann, Zhihao Jia, Alex Aiken",
"published": "2022-05-04",
"updated": "2022-08-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.DC"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2402.00893v1",
"title": "MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts",
"abstract": "The application of mixture-of-experts (MoE) is gaining popularity due to its\nability to improve model's performance. In an MoE structure, the gate layer\nplays a significant role in distinguishing and routing input features to\ndifferent experts. This enables each expert to specialize in processing their\ncorresponding sub-tasks. However, the gate's routing mechanism also gives rise\nto narrow vision: the individual MoE's expert fails to use more samples in\nlearning the allocated sub-task, which in turn limits the MoE to further\nimprove its generalization ability. To effectively address this, we propose a\nmethod called Mixture-of-Distilled-Expert (MoDE), which applies moderate mutual\ndistillation among experts to enable each expert to pick up more features\nlearned by other experts and gain more accurate perceptions on their original\nallocated sub-tasks. We conduct plenty experiments including tabular, NLP and\nCV datasets, which shows MoDE's effectiveness, universality and robustness.\nFurthermore, we develop a parallel study through innovatively constructing\n\"expert probing\", to experimentally prove why MoDE works: moderate distilling\nknowledge can improve each individual expert's test performances on their\nassigned tasks, leading to MoE's overall performance improvement.",
"authors": "Zhitian Xie, Yinger Zhang, Chenyi Zhuang, Qitao Shi, Zhining Liu, Jinjie Gu, Guannan Zhang",
"published": "2024-01-31",
"updated": "2024-01-31",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2403.17749v1",
"title": "Multi-Task Dense Prediction via Mixture of Low-Rank Experts",
"abstract": "Previous multi-task dense prediction methods based on the Mixture of Experts\n(MoE) have received great performance but they neglect the importance of\nexplicitly modeling the global relations among all tasks. In this paper, we\npresent a novel decoder-focused method for multi-task dense prediction, called\nMixture-of-Low-Rank-Experts (MLoRE). To model the global task relationships,\nMLoRE adds a generic convolution path to the original MoE structure, where each\ntask feature can go through this path for explicit parameter sharing.\nFurthermore, to control the parameters and computational cost brought by the\nincrease in the number of experts, we take inspiration from LoRA and propose to\nleverage the low-rank format of a vanilla convolution in the expert network.\nSince the low-rank experts have fewer parameters and can be dynamically\nparameterized into the generic convolution, the parameters and computational\ncost do not change much with the increase of experts. Benefiting from this\ndesign, we increase the number of experts and its reception field to enlarge\nthe representation capacity, facilitating multiple dense tasks learning in a\nunified network. Extensive experiments on the PASCAL-Context and NYUD-v2\nbenchmarks show that our MLoRE achieves superior performance compared to\nprevious state-of-the-art methods on all metrics. Our code is available at\nhttps://github.com/YuqiYang213/MLoRE.",
"authors": "Yuqi Yang, Peng-Tao Jiang, Qibin Hou, Hao Zhang, Jinwei Chen, Bo Li",
"published": "2024-03-26",
"updated": "2024-03-26",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1605.01652v1",
"title": "LSTM-based Mixture-of-Experts for Knowledge-Aware Dialogues",
"abstract": "We introduce an LSTM-based method for dynamically integrating several\nword-prediction experts to obtain a conditional language model which can be\ngood simultaneously at several subtasks. We illustrate this general approach\nwith an application to dialogue where we integrate a neural chat model, good at\nconversational aspects, with a neural question-answering model, good at\nretrieving precise information from a knowledge-base, and show how the\nintegration combines the strengths of the independent components. We hope that\nthis focused contribution will attract attention on the benefits of using such\nmixtures of experts in NLP.",
"authors": "Phong Le, Marc Dymetman, Jean-Michel Renders",
"published": "2016-05-05",
"updated": "2016-05-05",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2310.02629v2",
"title": "BA-MoE: Boundary-Aware Mixture-of-Experts Adapter for Code-Switching Speech Recognition",
"abstract": "Mixture-of-experts based models, which use language experts to extract\nlanguage-specific representations effectively, have been well applied in\ncode-switching automatic speech recognition. However, there is still\nsubstantial space to improve as similar pronunciation across languages may\nresult in ineffective multi-language modeling and inaccurate language boundary\nestimation. To eliminate these drawbacks, we propose a cross-layer language\nadapter and a boundary-aware training method, namely Boundary-Aware\nMixture-of-Experts (BA-MoE). Specifically, we introduce language-specific\nadapters to separate language-specific representations and a unified gating\nlayer to fuse representations within each encoder layer. Second, we compute\nlanguage adaptation loss of the mean output of each language-specific adapter\nto improve the adapter module's language-specific representation learning.\nBesides, we utilize a boundary-aware predictor to learn boundary\nrepresentations for dealing with language boundary confusion. Our approach\nachieves significant performance improvement, reducing the mixture error rate\nby 16.55\\% compared to the baseline on the ASRU 2019 Mandarin-English\ncode-switching challenge dataset.",
"authors": "Peikun Chen, Fan Yu, Yuhao Lian, Hongfei Xue, Xucheng Wan, Naijun Zheng, Huan Zhou, Lei Xie",
"published": "2023-10-04",
"updated": "2023-10-08",
"primary_cat": "cs.SD",
"cats": [
"cs.SD",
"eess.AS"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2302.13750v1",
"title": "MoLE : Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition",
"abstract": "Multi-lingual speech recognition aims to distinguish linguistic expressions\nin different languages and integrate acoustic processing simultaneously. In\ncontrast, current multi-lingual speech recognition research follows a\nlanguage-aware paradigm, mainly targeted to improve recognition performance\nrather than discriminate language characteristics. In this paper, we present a\nmulti-lingual speech recognition network named\nMixture-of-Language-Expert(MoLE), which digests speech in a variety of\nlanguages. Specifically, MoLE analyzes linguistic expression from input speech\nin arbitrary languages, activating a language-specific expert with a\nlightweight language tokenizer. The tokenizer not only activates experts, but\nalso estimates the reliability of the activation. Based on the reliability, the\nactivated expert and the language-agnostic expert are aggregated to represent\nlanguage-conditioned embedding for efficient speech recognition. Our proposed\nmodel is evaluated in 5 languages scenario, and the experimental results show\nthat our structure is advantageous on multi-lingual recognition, especially for\nspeech in low-resource language.",
"authors": "Yoohwan Kwon, Soo-Whan Chung",
"published": "2023-02-27",
"updated": "2023-02-27",
"primary_cat": "eess.AS",
"cats": [
"eess.AS",
"cs.CL",
"cs.SD"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1806.08200v1",
"title": "Mixtures of Experts Models",
"abstract": "Mixtures of experts models provide a framework in which covariates may be\nincluded in mixture models. This is achieved by modelling the parameters of the\nmixture model as functions of the concomitant covariates. Given their mixture\nmodel foundation, mixtures of experts models possess a diverse range of\nanalytic uses, from clustering observations to capturing parameter\nheterogeneity in cross-sectional data. This chapter focuses on delineating the\nmixture of experts modelling framework and demonstrates the utility and\nflexibility of mixtures of experts models as an analytic tool.",
"authors": "Isobel Claire Gormley, Sylvia Fr\u00fchwirth-Schnatter",
"published": "2018-06-21",
"updated": "2018-06-21",
"primary_cat": "stat.ME",
"cats": [
"stat.ME"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2105.11706v1",
"title": "Mixture of ELM based experts with trainable gating network",
"abstract": "Mixture of experts method is a neural network based ensemble learning that\nhas great ability to improve the overall classification accuracy. This method\nis based on the divide and conquer principle, in which the problem space is\ndivided between several experts by supervisition of gating network. In this\npaper, we propose an ensemble learning method based on mixture of experts which\nis named mixture of ELM based experts with trainable gating network (MEETG) to\nimprove the computing cost and to speed up the learning process of ME. The\nstructure of ME consists of multi layer perceptrons (MLPs) as base experts and\ngating network, in which gradient-based learning algorithm is applied for\ntraining the MLPs which is an iterative and time consuming process. In order to\novercome on these problems, we use the advantages of extreme learning machine\n(ELM) for designing the structure of ME. ELM as a learning algorithm for single\nhidden-layer feed forward neural networks provides much faster learning process\nand better generalization ability in comparision with some other traditional\nlearning algorithms. Also, in the proposed method a trainable gating network is\napplied to aggregate the outputs of the experts dynamically according to the\ninput sample. Our experimental results and statistical analysis on 11 benchmark\ndatasets confirm that MEETG has an acceptable performance in classification\nproblems. Furthermore, our experimental results show that the proposed approach\noutperforms the original ELM on prediction stability and classification\naccuracy.",
"authors": "Laleh Armi, Elham Abbasi, Jamal Zarepour-Ahmadabadi",
"published": "2021-05-25",
"updated": "2021-05-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2403.07816v1",
"title": "Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM",
"abstract": "We investigate efficient methods for training Large Language Models (LLMs) to\npossess capabilities in multiple specialized domains, such as coding, math\nreasoning and world knowledge. Our method, named Branch-Train-MiX (BTX), starts\nfrom a seed model, which is branched to train experts in embarrassingly\nparallel fashion with high throughput and reduced communication cost. After\nindividual experts are asynchronously trained, BTX brings together their\nfeedforward parameters as experts in Mixture-of-Expert (MoE) layers and\naverages the remaining parameters, followed by an MoE-finetuning stage to learn\ntoken-level routing. BTX generalizes two special cases, the Branch-Train-Merge\nmethod, which does not have the MoE finetuning stage to learn routing, and\nsparse upcycling, which omits the stage of training experts asynchronously.\nCompared to alternative approaches, BTX achieves the best accuracy-efficiency\ntradeoff.",
"authors": "Sainbayar Sukhbaatar, Olga Golovneva, Vasu Sharma, Hu Xu, Xi Victoria Lin, Baptiste Rozi\u00e8re, Jacob Kahn, Daniel Li, Wen-tau Yih, Jason Weston, Xian Li",
"published": "2024-03-12",
"updated": "2024-03-12",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2308.00951v1",
"title": "From Sparse to Soft Mixtures of Experts",
"abstract": "Sparse mixture of expert architectures (MoEs) scale model capacity without\nlarge increases in training or inference costs. Despite their success, MoEs\nsuffer from a number of issues: training instability, token dropping, inability\nto scale the number of experts, or ineffective finetuning. In this work, we\nproposeSoft MoE, a fully-differentiable sparse Transformer that addresses these\nchallenges, while maintaining the benefits of MoEs. Soft MoE performs an\nimplicit soft assignment by passing different weighted combinations of all\ninput tokens to each expert. As in other MoE works, experts in Soft MoE only\nprocess a subset of the (combined) tokens, enabling larger model capacity at\nlower inference cost. In the context of visual recognition, Soft MoE greatly\noutperforms standard Transformers (ViTs) and popular MoE variants (Tokens\nChoice and Experts Choice). For example, Soft MoE-Base/16 requires 10.5x lower\ninference cost (5.7x lower wall-clock time) than ViT-Huge/14 while matching its\nperformance after similar training. Soft MoE also scales well: Soft MoE Huge/14\nwith 128 experts in 16 MoE layers has over 40x more parameters than ViT\nHuge/14, while inference time cost grows by only 2%, and it performs\nsubstantially better.",
"authors": "Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Neil Houlsby",
"published": "2023-08-02",
"updated": "2023-08-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2305.03288v2",
"title": "Demystifying Softmax Gating Function in Gaussian Mixture of Experts",
"abstract": "Understanding the parameter estimation of softmax gating Gaussian mixture of\nexperts has remained a long-standing open problem in the literature. It is\nmainly due to three fundamental theoretical challenges associated with the\nsoftmax gating function: (i) the identifiability only up to the translation of\nparameters; (ii) the intrinsic interaction via partial differential equations\nbetween the softmax gating and the expert functions in the Gaussian density;\n(iii) the complex dependence between the numerator and denominator of the\nconditional density of softmax gating Gaussian mixture of experts. We resolve\nthese challenges by proposing novel Voronoi loss functions among parameters and\nestablishing the convergence rates of maximum likelihood estimator (MLE) for\nsolving parameter estimation in these models. When the true number of experts\nis unknown and over-specified, our findings show a connection between the\nconvergence rate of the MLE and a solvability problem of a system of polynomial\nequations.",
"authors": "Huy Nguyen, TrungTin Nguyen, Nhat Ho",
"published": "2023-05-05",
"updated": "2023-10-30",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2009.06327v1",
"title": "Double-Wing Mixture of Experts for Streaming Recommendations",
"abstract": "Streaming Recommender Systems (SRSs) commonly train recommendation models on\nnewly received data only to address user preference drift, i.e., the changing\nuser preferences towards items. However, this practice overlooks the long-term\nuser preferences embedded in historical data. More importantly, the common\nheterogeneity in data stream greatly reduces the accuracy of streaming\nrecommendations. The reason is that different preferences (or characteristics)\nof different types of users (or items) cannot be well learned by a unified\nmodel. To address these two issues, we propose a Variational and\nReservoir-enhanced Sampling based Double-Wing Mixture of Experts framework,\ncalled VRS-DWMoE, to improve the accuracy of streaming recommendations. In\nVRS-DWMoE, we first devise variational and reservoir-enhanced sampling to\nwisely complement new data with historical data, and thus address the user\npreference drift issue while capturing long-term user preferences. After that,\nwe propose a Double-Wing Mixture of Experts (DWMoE) model to first effectively\nlearn heterogeneous user preferences and item characteristics, and then make\nrecommendations based on them. Specifically, DWMoE contains two Mixture of\nExperts (MoE, an effective ensemble learning model) to learn user preferences\nand item characteristics, respectively. Moreover, the multiple experts in each\nMoE learn the preferences (or characteristics) of different types of users (or\nitems) where each expert specializes in one underlying type. Extensive\nexperiments demonstrate that VRS-DWMoE consistently outperforms the\nstate-of-the-art SRSs.",
"authors": "Yan Zhao, Shoujin Wang, Yan Wang, Hongwei Liu, Weizhe Zhang",
"published": "2020-09-14",
"updated": "2020-09-14",
"primary_cat": "cs.IR",
"cats": [
"cs.IR"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2405.00361v1",
"title": "AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts",
"abstract": "We introduce AdaMoLE, a novel method for fine-tuning large language models\n(LLMs) through an Adaptive Mixture of Low-Rank Adaptation (LoRA) Experts.\nMoving beyond conventional methods that employ a static top-k strategy for\nactivating experts, AdaMoLE dynamically adjusts the activation threshold using\na dedicated threshold network, adaptively responding to the varying\ncomplexities of different tasks. By replacing a single LoRA in a layer with\nmultiple LoRA experts and integrating a gating function with the threshold\nmechanism, AdaMoLE effectively selects and activates the most appropriate\nexperts based on the input context. Our extensive evaluations across a variety\nof commonsense reasoning and natural language processing tasks show that\nAdaMoLE exceeds baseline performance. This enhancement highlights the\nadvantages of AdaMoLE's adaptive selection of LoRA experts, improving model\neffectiveness without a corresponding increase in the expert count. The\nexperimental validation not only confirms AdaMoLE as a robust approach for\nenhancing LLMs but also suggests valuable directions for future research in\nadaptive expert selection mechanisms, potentially broadening the scope for\noptimizing model performance across diverse language processing tasks.",
"authors": "Zefang Liu, Jiahua Luo",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2309.05838v1",
"title": "Liu-type Shrinkage Estimators for Mixture of Poisson Regressions with Experts: A Heart Disease Study",
"abstract": "Count data play a critical role in medical research, such as heart disease.\nThe Poisson regression model is a common technique for evaluating the impact of\na set of covariates on the count responses. The mixture of Poisson regression\nmodels with experts is a practical tool to exploit the covariates, not only to\nhandle the heterogeneity in the Poisson regressions but also to learn the\nmixing structure of the population. Multicollinearity is one of the most common\nchallenges with regression models, leading to ill-conditioned design matrices\nof Poisson regression components and expert classes. The maximum likelihood\nmethod produces unreliable and misleading estimates for the effects of the\ncovariates in multicollinearity. In this research, we develop Ridge and\nLiu-type methods as two shrinkage approaches to cope with the ill-conditioned\ndesign matrices of the mixture of Poisson regression models with experts.\nThrough various numerical studies, we demonstrate that the shrinkage methods\noffer more reliable estimates for the coefficients of the mixture model in\nmulticollinearity while maintaining the classification performance of the ML\nmethod. The shrinkage methods are finally applied to a heart study to analyze\nthe heart disease rate stages.",
"authors": "Elsayed Ghanem, Moein Yoosefi, Armin Hatefi",
"published": "2023-09-11",
"updated": "2023-09-11",
"primary_cat": "stat.ME",
"cats": [
"stat.ME",
"stat.CO",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2402.14800v1",
"title": "Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models",
"abstract": "A pivotal advancement in the progress of large language models (LLMs) is the\nemergence of the Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs,\nMoE LLMs can achieve higher performance with fewer parameters, but it is still\nhard to deploy them due to their immense parameter sizes. Different from\nprevious weight pruning methods that rely on specifically designed hardware,\nthis paper mainly aims to enhance the deployment efficiency of MoE LLMs by\nintroducing plug-and-play expert-level sparsification techniques. Specifically,\nwe propose, for the first time to our best knowledge, post-training approaches\nfor task-agnostic and task-specific expert pruning and skipping of MoE LLMs,\ntailored to improve deployment efficiency while maintaining model performance\nacross a wide range of tasks. Extensive experiments show that our proposed\nmethods can simultaneously reduce model sizes and increase the inference speed,\nwhile maintaining satisfactory performance. Data and code will be available at\nhttps://github.com/Lucky-Lance/Expert_Sparsity.",
"authors": "Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo Zhang, Junchi Yan, Hongsheng Li",
"published": "2024-02-22",
"updated": "2024-02-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1612.06879v1",
"title": "Robust mixture of experts modeling using the skew $t$ distribution",
"abstract": "Mixture of Experts (MoE) is a popular framework in the fields of statistics\nand machine learning for modeling heterogeneity in data for regression,\nclassification and clustering. MoE for continuous data are usually based on the\nnormal distribution. However, it is known that for data with asymmetric\nbehavior, heavy tails and atypical observations, the use of the normal\ndistribution is unsuitable. We introduce a new robust non-normal mixture of\nexperts modeling using the skew $t$ distribution. The proposed skew $t$ mixture\nof experts, named STMoE, handles these issues of the normal mixtures experts\nregarding possibly skewed, heavy-tailed and noisy data. We develop a dedicated\nexpectation conditional maximization (ECM) algorithm to estimate the model\nparameters by monotonically maximizing the observed data log-likelihood. We\ndescribe how the presented model can be used in prediction and in model-based\nclustering of regression data. Numerical experiments carried out on simulated\ndata show the effectiveness and the robustness of the proposed model in fitting\nnon-linear regression functions as well as in model-based clustering. Then, the\nproposed model is applied to the real-world data of tone perception for musical\ndata analysis, and the one of temperature anomalies for the analysis of climate\nchange data. The obtained results confirm the usefulness of the model for\npractical data analysis applications.",
"authors": "Faicel Chamroukhi",
"published": "2016-12-09",
"updated": "2016-12-09",
"primary_cat": "stat.ME",
"cats": [
"stat.ME",
"cs.LG",
"stat.ML",
"62, 62F, 62H30, 62h",
"G.3; I.2.6; I.5.1"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2209.13071v1",
"title": "Diversified Dynamic Routing for Vision Tasks",
"abstract": "Deep learning models for vision tasks are trained on large datasets under the\nassumption that there exists a universal representation that can be used to\nmake predictions for all samples. Whereas high complexity models are proven to\nbe capable of learning such representations, a mixture of experts trained on\nspecific subsets of the data can infer the labels more efficiently. However\nusing mixture of experts poses two new problems, namely (i) assigning the\ncorrect expert at inference time when a new unseen sample is presented. (ii)\nFinding the optimal partitioning of the training data, such that the experts\nrely the least on common features. In Dynamic Routing (DR) a novel architecture\nis proposed where each layer is composed of a set of experts, however without\naddressing the two challenges we demonstrate that the model reverts to using\nthe same subset of experts.\n In our method, Diversified Dynamic Routing (DivDR) the model is explicitly\ntrained to solve the challenge of finding relevant partitioning of the data and\nassigning the correct experts in an unsupervised approach. We conduct several\nexperiments on semantic segmentation on Cityscapes and object detection and\ninstance segmentation on MS-COCO showing improved performance over several\nbaselines.",
"authors": "Botos Csaba, Adel Bibi, Yanwei Li, Philip Torr, Ser-Nam Lim",
"published": "2022-09-26",
"updated": "2022-09-26",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2310.09762v1",
"title": "Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer",
"abstract": "The Mixture of Experts (MoE) has emerged as a highly successful technique in\ndeep learning, based on the principle of divide-and-conquer to maximize model\ncapacity without significant additional computational cost. Even in the era of\nlarge-scale language models (LLMs), MoE continues to play a crucial role, as\nsome researchers have indicated that GPT-4 adopts the MoE structure to ensure\ndiverse inference results. However, MoE is susceptible to performance\ndegeneracy, particularly evident in the issues of imbalance and homogeneous\nrepresentation among experts. While previous studies have extensively addressed\nthe problem of imbalance, the challenge of homogeneous representation remains\nunresolved. In this study, we shed light on the homogeneous representation\nproblem, wherein experts in the MoE fail to specialize and lack diversity,\nleading to frustratingly high similarities in their representations (up to 99%\nin a well-performed MoE model). This problem restricts the expressive power of\nthe MoE and, we argue, contradicts its original intention. To tackle this\nissue, we propose a straightforward yet highly effective solution: OMoE, an\northogonal expert optimizer. Additionally, we introduce an alternating training\nstrategy that encourages each expert to update in a direction orthogonal to the\nsubspace spanned by other experts. Our algorithm facilitates MoE training in\ntwo key ways: firstly, it explicitly enhances representation diversity, and\nsecondly, it implicitly fosters interaction between experts during orthogonal\nweights computation. Through extensive experiments, we demonstrate that our\nproposed optimization algorithm significantly improves the performance of\nfine-tuning the MoE model on the GLUE benchmark, SuperGLUE benchmark,\nquestion-answering task, and name entity recognition tasks.",
"authors": "Boan Liu, Liang Ding, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng, Dacheng Tao",
"published": "2023-10-15",
"updated": "2023-10-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2401.15969v2",
"title": "Routers in Vision Mixture of Experts: An Empirical Study",
"abstract": "Mixture-of-Experts (MoE) models are a promising way to scale up model\ncapacity without significantly increasing computational cost. A key component\nof MoEs is the router, which decides which subset of parameters (experts)\nprocess which feature embeddings (tokens). In this paper, we present a\ncomprehensive study of routers in MoEs for computer vision tasks. We introduce\na unified MoE formulation that subsumes different MoEs with two parametric\nrouting tensors. This formulation covers both sparse MoE, which uses a binary\nor hard assignment between experts and tokens, and soft MoE, which uses a soft\nassignment between experts and weighted combinations of tokens. Routers for\nsparse MoEs can be further grouped into two variants: Token Choice, which\nmatches experts to each token, and Expert Choice, which matches tokens to each\nexpert. We conduct head-to-head experiments with 6 different routers, including\nexisting routers from prior work and new ones we introduce. We show that (i)\nmany routers originally developed for language modeling can be adapted to\nperform strongly in vision tasks, (ii) in sparse MoE, Expert Choice routers\ngenerally outperform Token Choice routers, and (iii) soft MoEs generally\noutperform sparse MoEs with a fixed compute budget. These results provide new\ninsights regarding the crucial role of routers in vision MoE models.",
"authors": "Tianlin Liu, Mathieu Blondel, Carlos Riquelme, Joan Puigcerver",
"published": "2024-01-29",
"updated": "2024-04-18",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2208.02813v1",
"title": "Towards Understanding Mixture of Experts in Deep Learning",
"abstract": "The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by\na router, has achieved great success in deep learning. However, the\nunderstanding of such architecture remains elusive. In this paper, we formally\nstudy how the MoE layer improves the performance of neural network learning and\nwhy the mixture model will not collapse into a single model. Our empirical\nresults suggest that the cluster structure of the underlying problem and the\nnon-linearity of the expert are pivotal to the success of MoE. To further\nunderstand this, we consider a challenging classification problem with\nintrinsic cluster structures, which is hard to learn using a single expert. Yet\nwith the MoE layer, by choosing the experts as two-layer nonlinear\nconvolutional neural networks (CNNs), we show that the problem can be learned\nsuccessfully. Furthermore, our theory shows that the router can learn the\ncluster-center features, which helps divide the input complex problem into\nsimpler linear classification sub-problems that individual experts can conquer.\nTo our knowledge, this is the first result towards formally understanding the\nmechanism of the MoE layer for deep learning.",
"authors": "Zixiang Chen, Yihe Deng, Yue Wu, Quanquan Gu, Yuanzhi Li",
"published": "2022-08-04",
"updated": "2022-08-04",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2312.16610v1",
"title": "Efficient Deweather Mixture-of-Experts with Uncertainty-aware Feature-wise Linear Modulation",
"abstract": "The Mixture-of-Experts (MoE) approach has demonstrated outstanding\nscalability in multi-task learning including low-level upstream tasks such as\nconcurrent removal of multiple adverse weather effects. However, the\nconventional MoE architecture with parallel Feed Forward Network (FFN) experts\nleads to significant parameter and computational overheads that hinder its\nefficient deployment. In addition, the naive MoE linear router is suboptimal in\nassigning task-specific features to multiple experts which limits its further\nscalability. In this work, we propose an efficient MoE architecture with weight\nsharing across the experts. Inspired by the idea of linear feature modulation\n(FM), our architecture implicitly instantiates multiple experts via learnable\nactivation modulations on a single shared expert block. The proposed Feature\nModulated Expert (FME) serves as a building block for the novel\nMixture-of-Feature-Modulation-Experts (MoFME) architecture, which can scale up\nthe number of experts with low overhead. We further propose an\nUncertainty-aware Router (UaR) to assign task-specific features to different FM\nmodules with well-calibrated weights. This enables MoFME to effectively learn\ndiverse expert functions for multiple tasks. The conducted experiments on the\nmulti-deweather task show that our MoFME outperforms the baselines in the image\nrestoration quality by 0.1-0.2 dB and achieves SOTA-compatible performance\nwhile saving more than 72% of parameters and 39% inference time over the\nconventional MoE counterpart. Experiments on the downstream segmentation and\nclassification tasks further demonstrate the generalizability of MoFME to real\nopen-world applications.",
"authors": "Rongyu Zhang, Yulin Luo, Jiaming Liu, Huanrui Yang, Zhen Dong, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Yuan Du, Shanghang Zhang",
"published": "2023-12-27",
"updated": "2023-12-27",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2403.08245v1",
"title": "Scattered Mixture-of-Experts Implementation",
"abstract": "We present ScatterMoE, an implementation of Sparse Mixture-of-Experts (SMoE)\non GPUs. ScatterMoE builds upon existing implementations, and overcoming some\nof the limitations to improve inference and training speed, and memory\nfootprint. This implementation achieves this by avoiding padding and making\nexcessive copies of the input. We introduce ParallelLinear, the main component\nwe use to build our implementation and the various kernels used to speed up the\noperation. We benchmark our implementation against Megablocks, and show that it\nenables a higher throughput and lower memory footprint. We also show how\nParallelLinear enables extension of the Mixture-of-Experts concept by\ndemonstrating with an implementation of Mixture of Attention.",
"authors": "Shawn Tan, Yikang Shen, Rameswar Panda, Aaron Courville",
"published": "2024-03-13",
"updated": "2024-03-13",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.DC"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2402.12550v1",
"title": "Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization",
"abstract": "The Mixture of Experts (MoE) paradigm provides a powerful way to decompose\ninscrutable dense layers into smaller, modular computations often more amenable\nto human interpretation, debugging, and editability. A major problem however\nlies in the computational cost of scaling the number of experts to achieve\nsufficiently fine-grained specialization. In this paper, we propose the\nMultilinear Mixutre of Experts (MMoE) layer to address this, focusing on vision\nmodels. MMoE layers perform an implicit computation on prohibitively large\nweight tensors entirely in factorized form. Consequently, MMoEs both (1) avoid\nthe issues incurred through the discrete expert routing in the popular 'sparse'\nMoE models, yet (2) do not incur the restrictively high inference-time costs of\n'soft' MoE alternatives. We present both qualitative and quantitative evidence\n(through visualization and counterfactual interventions respectively) that\nscaling MMoE layers when fine-tuning foundation models for vision tasks leads\nto more specialized experts at the class-level whilst remaining competitive\nwith the performance of parameter-matched linear layer counterparts. Finally,\nwe show that learned expert specialism further facilitates manual correction of\ndemographic bias in CelebA attribute classification. Our MMoE model code is\navailable at https://github.com/james-oldfield/MMoE.",
"authors": "James Oldfield, Markos Georgopoulos, Grigorios G. Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis A. Nicolaou, Jiankang Deng, Ioannis Patras",
"published": "2024-02-19",
"updated": "2024-02-19",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1903.07756v1",
"title": "Hierarchical Routing Mixture of Experts",
"abstract": "In regression tasks the distribution of the data is often too complex to be\nfitted by a single model. In contrast, partition-based models are developed\nwhere data is divided and fitted by local models. These models partition the\ninput space and do not leverage the input-output dependency of\nmultimodal-distributed data, and strong local models are needed to make good\npredictions. Addressing these problems, we propose a binary tree-structured\nhierarchical routing mixture of experts (HRME) model that has classifiers as\nnon-leaf node experts and simple regression models as leaf node experts. The\nclassifier nodes jointly soft-partition the input-output space based on the\nnatural separateness of multimodal data. This enables simple leaf experts to be\neffective for prediction. Further, we develop a probabilistic framework for the\nHRME model, and propose a recursive Expectation-Maximization (EM) based\nalgorithm to learn both the tree structure and the expert models. Experiments\non a collection of regression tasks validate the effectiveness of our method\ncompared to a variety of other regression models.",
"authors": "Wenbo Zhao, Yang Gao, Shahan Ali Memon, Bhiksha Raj, Rita Singh",
"published": "2019-03-18",
"updated": "2019-03-18",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1811.10740v2",
"title": "Mixture of Regression Experts in fMRI Encoding",
"abstract": "fMRI semantic category understanding using linguistic encoding models attempt\nto learn a forward mapping that relates stimuli to the corresponding brain\nactivation. Classical encoding models use linear multi-variate methods to\npredict the brain activation (all voxels) given the stimulus. However, these\nmethods essentially assume multiple regions as one large uniform region or\nseveral independent regions, ignoring connections among them. In this paper, we\npresent a mixture of experts-based model where a group of experts captures\nbrain activity patterns related to particular regions of interest (ROI) and\nalso show the discrimination across different experts. The model is trained\nword stimuli encoded as 25-dimensional feature vectors as input and the\ncorresponding brain responses as output. Given a new word (25-dimensional\nfeature vector), it predicts the entire brain activation as the linear\ncombination of multiple experts brain activations. We argue that each expert\nlearns a certain region of brain activations corresponding to its category of\nwords, which solves the problem of identifying the regions with a simple\nencoding model. We showcase that proposed mixture of experts-based model indeed\nlearns region-based experts to predict the brain activations with high spatial\naccuracy.",
"authors": "Subba Reddy Oota, Adithya Avvaru, Naresh Manwani, Raju S. Bapi",
"published": "2018-11-26",
"updated": "2018-12-01",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.HC",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2008.09662v1",
"title": "Biased Mixtures Of Experts: Enabling Computer Vision Inference Under Data Transfer Limitations",
"abstract": "We propose a novel mixture-of-experts class to optimize computer vision\nmodels in accordance with data transfer limitations at test time. Our approach\npostulates that the minimum acceptable amount of data allowing for\nhighly-accurate results can vary for different input space partitions.\nTherefore, we consider mixtures where experts require different amounts of\ndata, and train a sparse gating function to divide the input space for each\nexpert. By appropriate hyperparameter selection, our approach is able to bias\nmixtures of experts towards selecting specific experts over others. In this\nway, we show that the data transfer optimization between visual sensing and\nprocessing can be solved as a convex optimization problem.To demonstrate the\nrelation between data availability and performance, we evaluate biased mixtures\non a range of mainstream computer vision problems, namely: (i) single shot\ndetection, (ii) image super resolution, and (iii) realtime video action\nclassification. For all cases, and when experts constitute modified baselines\nto meet different limits on allowed data utility, biased mixtures significantly\noutperform previous work optimized to meet the same constraints on available\ndata.",
"authors": "Alhabib Abbas, Yiannis Andreopoulos",
"published": "2020-08-21",
"updated": "2020-08-21",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"eess.IV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2312.03292v1",
"title": "Enhancing Molecular Property Prediction via Mixture of Collaborative Experts",
"abstract": "Molecular Property Prediction (MPP) task involves predicting biochemical\nproperties based on molecular features, such as molecular graph structures,\ncontributing to the discovery of lead compounds in drug development. To address\ndata scarcity and imbalance in MPP, some studies have adopted Graph Neural\nNetworks (GNN) as an encoder to extract commonalities from molecular graphs.\nHowever, these approaches often use a separate predictor for each task,\nneglecting the shared characteristics among predictors corresponding to\ndifferent tasks. In response to this limitation, we introduce the GNN-MoCE\narchitecture. It employs the Mixture of Collaborative Experts (MoCE) as\npredictors, exploiting task commonalities while confronting the homogeneity\nissue in the expert pool and the decision dominance dilemma within the expert\ngroup. To enhance expert diversity for collaboration among all experts, the\nExpert-Specific Projection method is proposed to assign a unique projection\nperspective to each expert. To balance decision-making influence for\ncollaboration within the expert group, the Expert-Specific Loss is presented to\nintegrate individual expert loss into the weighted decision loss of the group\nfor more equitable training. Benefiting from the enhancements of MoCE in expert\ncreation, dynamic expert group formation, and experts' collaboration, our model\ndemonstrates superior performance over traditional methods on 24 MPP datasets,\nespecially in tasks with limited data or high imbalance.",
"authors": "Xu Yao, Shuang Liang, Songqiao Han, Hailiang Huang",
"published": "2023-12-06",
"updated": "2023-12-06",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.MA",
"q-bio.QM"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2004.03751v4",
"title": "Robust Fitting of Mixture Models using Weighted Complete Estimating Equations",
"abstract": "Mixture modeling, which considers the potential heterogeneity in data, is\nwidely adopted for classification and clustering problems. Mixture models can\nbe estimated using the Expectation-Maximization algorithm, which works with the\ncomplete estimating equations conditioned by the latent membership variables of\nthe cluster assignment based on the hierarchical expression of mixture models.\nHowever, when the mixture components have light tails such as a normal\ndistribution, the mixture model can be sensitive to outliers. This study\nproposes a method of weighted complete estimating equations (WCE) for the\nrobust fitting of mixture models. Our WCE introduces weights to complete\nestimating equations such that the weights can automatically downweight the\noutliers. The weights are constructed similarly to the density power divergence\nfor mixture models, but in our WCE, they depend only on the component\ndistributions and not on the whole mixture. A novel\nexpectation-estimating-equation (EEE) algorithm is also developed to solve the\nWCE. For illustrative purposes, a multivariate Gaussian mixture, a mixture of\nexperts, and a multivariate skew normal mixture are considered, and how our EEE\nalgorithm can be implemented for these specific models is described. The\nnumerical performance of the proposed robust estimation method was examined\nusing simulated and real datasets.",
"authors": "Shonosuke Sugasawa, Genya Kobayashi",
"published": "2020-04-08",
"updated": "2022-03-17",
"primary_cat": "stat.ME",
"cats": [
"stat.ME"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2311.09179v1",
"title": "SiRA: Sparse Mixture of Low Rank Adaptation",
"abstract": "Parameter Efficient Tuning has been an prominent approach to adapt the Large\nLanguage Model to downstream tasks. Most previous works considers adding the\ndense trainable parameters, where all parameters are used to adapt certain\ntask. We found this less effective empirically using the example of LoRA that\nintroducing more trainable parameters does not help. Motivated by this we\ninvestigate the importance of leveraging \"sparse\" computation and propose SiRA:\nsparse mixture of low rank adaption. SiRA leverages the Sparse Mixture of\nExpert(SMoE) to boost the performance of LoRA. Specifically it enforces the top\n$k$ experts routing with a capacity limit restricting the maximum number of\ntokens each expert can process. We propose a novel and simple expert dropout on\ntop of gating network to reduce the over-fitting issue. Through extensive\nexperiments, we verify SiRA performs better than LoRA and other mixture of\nexpert approaches across different single tasks and multitask settings.",
"authors": "Yun Zhu, Nevan Wichers, Chu-Cheng Lin, Xinyi Wang, Tianlong Chen, Lei Shu, Han Lu, Canoee Liu, Liangchen Luo, Jindong Chen, Lei Meng",
"published": "2023-11-15",
"updated": "2023-11-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2311.10768v1",
"title": "Memory Augmented Language Models through Mixture of Word Experts",
"abstract": "Scaling up the number of parameters of language models has proven to be an\neffective approach to improve performance. For dense models, increasing model\nsize proportionally increases the model's computation footprint. In this work,\nwe seek to aggressively decouple learning capacity and FLOPs through\nMixture-of-Experts (MoE) style models with large knowledge-rich vocabulary\nbased routing functions and experts. Our proposed approach, dubbed Mixture of\nWord Experts (MoWE), can be seen as a memory augmented model, where a large set\nof word-specific experts play the role of a sparse memory. We demonstrate that\nMoWE performs significantly better than the T5 family of models with similar\nnumber of FLOPs in a variety of NLP tasks. Additionally, MoWE outperforms\nregular MoE models on knowledge intensive tasks and has similar performance to\nmore complex memory augmented approaches that often require to invoke custom\nmechanisms to search the sparse memory.",
"authors": "Cicero Nogueira dos Santos, James Lee-Thorp, Isaac Noble, Chung-Ching Chang, David Uthus",
"published": "2023-11-15",
"updated": "2023-11-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2204.08396v1",
"title": "StableMoE: Stable Routing Strategy for Mixture of Experts",
"abstract": "The Mixture-of-Experts (MoE) technique can scale up the model size of\nTransformers with an affordable computational overhead. We point out that\nexisting learning-to-route MoE methods suffer from the routing fluctuation\nissue, i.e., the target expert of the same input may change along with\ntraining, but only one expert will be activated for the input during inference.\nThe routing fluctuation tends to harm sample efficiency because the same input\nupdates different experts but only one is finally used. In this paper, we\npropose StableMoE with two training stages to address the routing fluctuation\nproblem. In the first training stage, we learn a balanced and cohesive routing\nstrategy and distill it into a lightweight router decoupled from the backbone\nmodel. In the second training stage, we utilize the distilled router to\ndetermine the token-to-expert assignment and freeze it for a stable routing\nstrategy. We validate our method on language modeling and multilingual machine\ntranslation. The results show that StableMoE outperforms existing MoE methods\nin terms of both convergence speed and performance.",
"authors": "Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei",
"published": "2022-04-18",
"updated": "2022-04-18",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1312.4314v3",
"title": "Learning Factored Representations in a Deep Mixture of Experts",
"abstract": "Mixtures of Experts combine the outputs of several \"expert\" networks, each of\nwhich specializes in a different part of the input space. This is achieved by\ntraining a \"gating\" network that maps each input to a distribution over the\nexperts. Such models show promise for building larger networks that are still\ncheap to compute at test time, and more parallelizable at training time. In\nthis this work, we extend the Mixture of Experts to a stacked model, the Deep\nMixture of Experts, with multiple sets of gating and experts. This\nexponentially increases the number of effective experts by associating each\ninput with a combination of experts at each layer, yet maintains a modest model\nsize. On a randomly translated version of the MNIST dataset, we find that the\nDeep Mixture of Experts automatically learns to develop location-dependent\n(\"where\") experts at the first layer, and class-specific (\"what\") experts at\nthe second layer. In addition, we see that the different combinations are in\nuse when the model is applied to a dataset of speech monophones. These\ndemonstrate effective use of all expert combinations.",
"authors": "David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever",
"published": "2013-12-16",
"updated": "2014-03-09",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2012.02130v4",
"title": "A similarity-based Bayesian mixture-of-experts model",
"abstract": "We present a new nonparametric mixture-of-experts model for multivariate\nregression problems, inspired by the probabilistic k-nearest neighbors\nalgorithm. Using a conditionally specified model, predictions for out-of-sample\ninputs are based on similarities to each observed data point, yielding\npredictive distributions represented by Gaussian mixtures. Posterior inference\nis performed on the parameters of the mixture components as well as the\ndistance metric using a mean-field variational Bayes algorithm accompanied with\na stochastic gradient-based optimization procedure. The proposed method is\nespecially advantageous in settings where inputs are of relatively high\ndimension in comparison to the data size, where input-output relationships are\ncomplex, and where predictive distributions may be skewed or multimodal.\nComputational studies on five datasets, of which two are synthetically\ngenerated, illustrate clear advantages of our mixture-of-experts method for\nhigh-dimensional inputs, outperforming competitor models both in terms of\nvalidation metrics and visual inspection.",
"authors": "Tianfang Zhang, Rasmus Bokrantz, Jimmy Olsson",
"published": "2020-12-03",
"updated": "2022-08-03",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG",
"stat.ME"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2311.04894v1",
"title": "DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets",
"abstract": "Construction of a universal detector poses a crucial question: How can we\nmost effectively train a model on a large mixture of datasets? The answer lies\nin learning dataset-specific features and ensembling their knowledge but do all\nthis in a single model. Previous methods achieve this by having separate\ndetection heads on a common backbone but that results in a significant increase\nin parameters. In this work, we present Mixture-of-Experts as a solution,\nhighlighting that MoEs are much more than a scalability tool. We propose\nDataset-Aware Mixture-of-Experts, DAMEX where we train the experts to become an\n`expert' of a dataset by learning to route each dataset tokens to its mapped\nexpert. Experiments on Universal Object-Detection Benchmark show that we\noutperform the existing state-of-the-art by average +10.2 AP score and improve\nover our non-MoE baseline by average +2.0 AP score. We also observe consistent\ngains while mixing datasets with (1) limited availability, (2) disparate\ndomains and (3) divergent label sets. Further, we qualitatively show that DAMEX\nis robust against expert representation collapse.",
"authors": "Yash Jain, Harkirat Behl, Zsolt Kira, Vibhav Vineet",
"published": "2023-11-08",
"updated": "2023-11-08",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2204.10598v3",
"title": "Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability",
"abstract": "Sparsely-gated Mixture of Expert (MoE) layers have been recently successfully\napplied for scaling large transformers, especially for language modeling tasks.\nAn intriguing side effect of sparse MoE layers is that they convey inherent\ninterpretability to a model via natural expert specialization. In this work, we\napply sparse MoE layers to CNNs for computer vision tasks and analyze the\nresulting effect on model interpretability. To stabilize MoE training, we\npresent both soft and hard constraint-based approaches. With hard constraints,\nthe weights of certain experts are allowed to become zero, while soft\nconstraints balance the contribution of experts with an additional auxiliary\nloss. As a result, soft constraints handle expert utilization better and\nsupport the expert specialization process, while hard constraints maintain more\ngeneralized experts and increase overall model performance. Our findings\ndemonstrate that experts can implicitly focus on individual sub-domains of the\ninput space. For example, experts trained for CIFAR-100 image classification\nspecialize in recognizing different domains such as flowers or animals without\nprevious data clustering. Experiments with RetinaNet and the COCO dataset\nfurther indicate that object detection experts can also specialize in detecting\nobjects of distinct sizes.",
"authors": "Svetlana Pavlitska, Christian Hubschneider, Lukas Struppek, J. Marius Z\u00f6llner",
"published": "2022-04-22",
"updated": "2023-04-27",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2302.14703v1",
"title": "Improving Expert Specialization in Mixture of Experts",
"abstract": "Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated\nmodular neural network architecture. There is renewed interest in MoE because\nthe conditional computation allows only parts of the network to be used during\neach inference, as was recently demonstrated in large scale natural language\nprocessing models. MoE is also of potential interest for continual learning, as\nexperts may be reused for new tasks, and new experts introduced. The gate in\nthe MoE architecture learns task decompositions and individual experts learn\nsimpler functions appropriate to the gate's decomposition. In this paper: (1)\nwe show that the original MoE architecture and its training method do not\nguarantee intuitive task decompositions and good expert utilization, indeed\nthey can fail spectacularly even for simple data such as MNIST and\nFashionMNIST; (2) we introduce a novel gating architecture, similar to\nattention, that improves performance and results in a lower entropy task\ndecomposition; and (3) we introduce a novel data-driven regularization that\nimproves expert specialization. We empirically validate our methods on MNIST,\nFashionMNIST and CIFAR-100 datasets.",
"authors": "Yamuna Krishnamurthy, Chris Watkins, Thomas Gaertner",
"published": "2023-02-28",
"updated": "2023-02-28",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.NE"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2302.02043v1",
"title": "mixdistreg: An R Package for Fitting Mixture of Experts Distributional Regression with Adaptive First-order Methods",
"abstract": "This paper presents a high-level description of the R software package\nmixdistreg to fit mixture of experts distributional regression models. The\nproposed framework is implemented in R using the deepregression software\ntemplate, which is based on TensorFlow and follows the neural structured\nadditive learning principle. The software comprises various approaches as\nspecial cases, including mixture density networks and mixture regression\napproaches. Various code examples are given to demonstrate the package's\nfunctionality.",
"authors": "David R\u00fcgamer",
"published": "2023-02-04",
"updated": "2023-02-04",
"primary_cat": "stat.CO",
"cats": [
"stat.CO"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2202.09368v2",
"title": "Mixture-of-Experts with Expert Choice Routing",
"abstract": "Sparsely-activated Mixture-of-experts (MoE) models allow the number of\nparameters to greatly increase while keeping the amount of computation for a\ngiven token or a given sample unchanged. However, a poor expert routing\nstrategy (e.g. one resulting in load imbalance) can cause certain experts to be\nunder-trained, leading to an expert being under or over-specialized. Prior work\nallocates a fixed number of experts to each token using a top-k function\nregardless of the relative importance of different tokens. To address this, we\npropose a heterogeneous mixture-of-experts employing an expert choice method.\nInstead of letting tokens select the top-k experts, we have experts selecting\nthe top-k tokens. As a result, each token can be routed to a variable number of\nexperts and each expert can have a fixed bucket size. We systematically study\npre-training speedups using the same computational resources of the Switch\nTransformer top-1 and GShard top-2 gating of prior work and find that our\nmethod improves training convergence time by more than 2x. For the same\ncomputational cost, our method demonstrates higher performance in fine-tuning\n11 selected tasks in the GLUE and SuperGLUE benchmarks. For a smaller\nactivation cost, our method outperforms the T5 dense model in 7 out of the 11\ntasks.",
"authors": "Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew Dai, Zhifeng Chen, Quoc Le, James Laudon",
"published": "2022-02-18",
"updated": "2022-10-14",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2309.14976v4",
"title": "MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection",
"abstract": "Combining the strengths of many existing predictors to obtain a Mixture of\nExperts which is superior to its individual components is an effective way to\nimprove the performance without having to develop new architectures or train a\nmodel from scratch. However, surprisingly, we find that na\\\"ively combining\nexpert object detectors in a similar way to Deep Ensembles, can often lead to\ndegraded performance. We identify that the primary cause of this issue is that\nthe predictions of the experts do not match their performance, a term referred\nto as miscalibration. Consequently, the most confident detector dominates the\nfinal predictions, preventing the mixture from leveraging all the predictions\nfrom the experts appropriately. To address this, when constructing the Mixture\nof Experts, we propose to combine their predictions in a manner which reflects\nthe individual performance of the experts; an objective we achieve by first\ncalibrating the predictions before filtering and refining them. We term this\napproach the Mixture of Calibrated Experts and demonstrate its effectiveness\nthrough extensive experiments on 5 different detection tasks using a variety of\ndetectors, showing that it: (i) improves object detectors on COCO and instance\nsegmentation methods on LVIS by up to $\\sim 2.5$ AP; (ii) reaches\nstate-of-the-art on COCO test-dev with $65.1$ AP and on DOTA with $82.62$\n$\\mathrm{AP_{50}}$; (iii) outperforms single models consistently on recent\ndetection tasks such as Open Vocabulary Object Detection.",
"authors": "Kemal Oksuz, Selim Kuzucu, Tom Joy, Puneet K. Dokania",
"published": "2023-09-26",
"updated": "2024-02-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2010.14260v2",
"title": "Concentric mixtures of Mallows models for top-$k$ rankings: sampling and identifiability",
"abstract": "In this paper, we consider mixtures of two Mallows models for top-$k$\nrankings, both with the same location parameter but with different scale\nparameters, i.e., a mixture of concentric Mallows models. This situation arises\nwhen we have a heterogeneous population of voters formed by two homogeneous\npopulations, one of which is a subpopulation of expert voters while the other\nincludes the non-expert voters. We propose efficient sampling algorithms for\nMallows top-$k$ rankings. We show the identifiability of both components, and\nthe learnability of their respective parameters in this setting by, first,\nbounding the sample complexity for the Borda algorithm with top-$k$ rankings\nand second, proposing polynomial time algorithm for the separation of the\nrankings in each component. Finally, since the rank aggregation will suffer\nfrom a large amount of noise introduced by the non-expert voters, we adapt the\nBorda algorithm to be able to recover the ground truth consensus ranking which\nis especially consistent with the expert rankings.",
"authors": "Collas Fabien, Irurozki Ekhine",
"published": "2020-10-27",
"updated": "2020-11-05",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.AI",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1901.10668v2",
"title": "Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference",
"abstract": "Computations for the softmax function are significantly expensive when the\nnumber of output classes is large. In this paper, we present a novel softmax\ninference speedup method, Doubly Sparse Softmax (DS-Softmax), that leverages\nsparse mixture of sparse experts to efficiently retrieve top-k classes.\nDifferent from most existing methods that require and approximate a fixed\nsoftmax, our method is learning-based and can adapt softmax weights for a\nbetter inference speedup. In particular, our method learns a two-level\nhierarchy which divides entire output class space into several partially\noverlapping experts. Each expert is sparse and only contains a subset of output\nclasses. To find top-k classes, a sparse mixture enables us to find the most\nprobable expert quickly, and the sparse expert enables us to search within a\nsmall-scale softmax. We empirically conduct evaluation on several real-world\ntasks, including neural machine translation, language modeling and image\nclassification, and demonstrate that significant computation reductions can be\nachieved at no performance loss.",
"authors": "Shun Liao, Ting Chen, Tian Lin, Denny Zhou, Chong Wang",
"published": "2019-01-30",
"updated": "2019-07-03",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2304.02806v2",
"title": "Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling",
"abstract": "Graph neural networks (GNNs) have found extensive applications in learning\nfrom graph data. However, real-world graphs often possess diverse structures\nand comprise nodes and edges of varying types. To bolster the generalization\ncapacity of GNNs, it has become customary to augment training graph structures\nthrough techniques like graph augmentations and large-scale pre-training on a\nwider array of graphs. Balancing this diversity while avoiding increased\ncomputational costs and the notorious trainability issues of GNNs is crucial.\nThis study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the\naim of augmenting their capacity to adapt to a diverse range of training graph\nstructures, without incurring explosive computational overhead. The proposed\nGraph Mixture of Experts (GMoE) model empowers individual nodes in the graph to\ndynamically and adaptively select more general information aggregation experts.\nThese experts are trained to capture distinct subgroups of graph structures and\nto incorporate information with varying hop sizes, where those with larger hop\nsizes specialize in gathering information over longer distances. The\neffectiveness of GMoE is validated through a series of experiments on a diverse\nset of tasks, including graph, node, and link prediction, using the OGB\nbenchmark. Notably, it enhances ROC-AUC by $1.81\\%$ in ogbg-molhiv and by\n$1.40\\%$ in ogbg-molbbbp, when compared to the non-MoE baselines. Our code is\npublicly available at https://github.com/VITA-Group/Graph-Mixture-of-Experts.",
"authors": "Haotao Wang, Ziyu Jiang, Yuning You, Yan Han, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Zhangyang Wang",
"published": "2023-04-06",
"updated": "2023-10-17",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2310.09832v3",
"title": "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts",
"abstract": "Scaling the size of language models usually leads to remarkable advancements\nin NLP tasks. But it often comes with a price of growing computational cost.\nAlthough a sparse Mixture of Experts (MoE) can reduce the cost by activating a\nsmall subset of parameters (e.g., one expert) for each input, its computation\nescalates significantly if increasing the number of activated experts, limiting\nits practical utility. Can we retain the advantages of adding more experts\nwithout substantially increasing the computational costs? In this paper, we\nfirst demonstrate the superiority of selecting multiple experts and then\npropose a computation-efficient approach called \\textbf{\\texttt{Merging Experts\ninto One}} (MEO), which reduces the computation cost to that of a single\nexpert. Extensive experiments show that MEO significantly improves\ncomputational efficiency, e.g., FLOPS drops from 72.0G of vanilla MoE to 28.6G\n(MEO). Moreover, we propose a token-level attention block that further enhances\nthe efficiency and performance of token-level MEO, e.g., 83.3\\% (MEO) vs.\n82.6\\% (vanilla MoE) average score on the GLUE benchmark. Our code will be\nreleased upon acceptance. Code will be released at:\n\\url{https://github.com/Shwai-He/MEO}.",
"authors": "Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao",
"published": "2023-10-15",
"updated": "2023-11-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2204.08753v1",
"title": "Table-based Fact Verification with Self-adaptive Mixture of Experts",
"abstract": "The table-based fact verification task has recently gained widespread\nattention and yet remains to be a very challenging problem. It inherently\nrequires informative reasoning over natural language together with different\nnumerical and logical reasoning on tables (e.g., count, superlative,\ncomparative). Considering that, we exploit mixture-of-experts and present in\nthis paper a new method: Self-adaptive Mixture-of-Experts Network (SaMoE).\nSpecifically, we have developed a mixture-of-experts neural network to\nrecognize and execute different types of reasoning -- the network is composed\nof multiple experts, each handling a specific part of the semantics for\nreasoning, whereas a management module is applied to decide the contribution of\neach expert network to the verification result. A self-adaptive method is\ndeveloped to teach the management module combining results of different experts\nmore efficiently without external knowledge. The experimental results\nillustrate that our framework achieves 85.1% accuracy on the benchmark dataset\nTabFact, comparable with the previous state-of-the-art models. We hope our\nframework can serve as a new baseline for table-based verification. Our code is\navailable at https://github.com/THUMLP/SaMoE.",
"authors": "Yuxuan Zhou, Xien Liu, Kaiyin Zhou, Ji Wu",
"published": "2022-04-19",
"updated": "2022-04-19",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CL",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1702.00372v1",
"title": "Visual Saliency Prediction Using a Mixture of Deep Neural Networks",
"abstract": "Visual saliency models have recently begun to incorporate deep learning to\nachieve predictive capacity much greater than previous unsupervised methods.\nHowever, most existing models predict saliency using local mechanisms limited\nto the receptive field of the network. We propose a model that incorporates\nglobal scene semantic information in addition to local information gathered by\na convolutional neural network. Our model is formulated as a mixture of\nexperts. Each expert network is trained to predict saliency for a set of\nclosely related images. The final saliency map is computed as a weighted\nmixture of the expert networks' output, with weights determined by a separate\ngating network. This gating network is guided by global scene information to\npredict weights. The expert networks and the gating network are trained\nsimultaneously in an end-to-end manner. We show that our mixture formulation\nleads to improvement in performance over an otherwise identical non-mixture\nmodel that does not incorporate global scene information.",
"authors": "Samuel Dodge, Lina Karam",
"published": "2017-02-01",
"updated": "2017-02-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2206.00277v2",
"title": "Task-Specific Expert Pruning for Sparse Mixture-of-Experts",
"abstract": "The sparse Mixture-of-Experts (MoE) model is powerful for large-scale\npre-training and has achieved promising results due to its model capacity.\nHowever, with trillions of parameters, MoE is hard to be deployed on cloud or\nmobile environment. The inference of MoE requires expert parallelism, which is\nnot hardware-friendly and communication expensive. Especially for\nresource-limited downstream tasks, such sparse structure has to sacrifice a lot\nof computing efficiency for limited performance gains. In this work, we observe\nmost experts contribute scarcely little to the MoE fine-tuning and inference.\nWe further propose a general method to progressively drop the non-professional\nexperts for the target downstream task, which preserves the benefits of MoE\nwhile reducing the MoE model into one single-expert dense model. Our\nexperiments reveal that the fine-tuned single-expert model could preserve 99.3%\nbenefits from MoE across six different types of tasks while enjoying 2x\ninference speed with free communication cost.",
"authors": "Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei",
"published": "2022-06-01",
"updated": "2022-06-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2011.01613v1",
"title": "Towards a Universal Gating Network for Mixtures of Experts",
"abstract": "The combination and aggregation of knowledge from multiple neural networks\ncan be commonly seen in the form of mixtures of experts. However, such\ncombinations are usually done using networks trained on the same tasks, with\nlittle mention of the combination of heterogeneous pre-trained networks,\nespecially in the data-free regime. This paper proposes multiple data-free\nmethods for the combination of heterogeneous neural networks, ranging from the\nutilization of simple output logit statistics, to training specialized gating\nnetworks. The gating networks decide whether specific inputs belong to specific\nnetworks based on the nature of the expert activations generated. The\nexperiments revealed that the gating networks, including the universal gating\napproach, constituted the most accurate approach, and therefore represent a\npragmatic step towards applications with heterogeneous mixtures of experts in a\ndata-free regime. The code for this project is hosted on github at\nhttps://github.com/cwkang1998/network-merging.",
"authors": "Chen Wen Kang, Chua Meng Hong, Tomas Maul",
"published": "2020-11-03",
"updated": "2020-11-03",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.NE"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2110.04260v3",
"title": "Taming Sparsely Activated Transformer with Stochastic Experts",
"abstract": "Sparsely activated models (SAMs), such as Mixture-of-Experts (MoE), can\neasily scale to have outrageously large amounts of parameters without\nsignificant increase in computational cost. However, SAMs are reported to be\nparameter inefficient such that larger models do not always lead to better\nperformance. While most on-going research focuses on improving SAMs models by\nexploring methods of routing inputs to experts, our analysis reveals that such\nresearch might not lead to the solution we expect, i.e., the commonly-used\nrouting methods based on gating mechanisms do not work better than randomly\nrouting inputs to experts. In this paper, we propose a new expert-based model,\nTHOR (Transformer witH StOchastic ExpeRts). Unlike classic expert-based models,\nsuch as the Switch Transformer, experts in THOR are randomly activated for each\ninput during training and inference. THOR models are trained using a\nconsistency regularized loss, where experts learn not only from training data\nbut also from other experts as teachers, such that all the experts make\nconsistent predictions. We validate the effectiveness of THOR on machine\ntranslation tasks. Results show that THOR models are more parameter efficient\nin that they significantly outperform the Transformer and MoE models across\nvarious settings. For example, in multilingual translation, THOR outperforms\nthe Switch Transformer by 2 BLEU scores, and obtains the same BLEU score as\nthat of a state-of-the-art MoE model that is 18 times larger. Our code is\npublicly available at:\nhttps://github.com/microsoft/Stochastic-Mixture-of-Experts.",
"authors": "Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Tuo Zhao, Jianfeng Gao",
"published": "2021-10-08",
"updated": "2022-02-03",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2210.01750v1",
"title": "Modular Approach to Machine Reading Comprehension: Mixture of Task-Aware Experts",
"abstract": "In this work we present a Mixture of Task-Aware Experts Network for Machine\nReading Comprehension on a relatively small dataset. We particularly focus on\nthe issue of common-sense learning, enforcing the common ground knowledge by\nspecifically training different expert networks to capture different kinds of\nrelationships between each passage, question and choice triplet. Moreover, we\ntake inspi ration on the recent advancements of multitask and transfer learning\nby training each network a relevant focused task. By making the\nmixture-of-networks aware of a specific goal by enforcing a task and a\nrelationship, we achieve state-of-the-art results and reduce over-fitting.",
"authors": "Anirudha Rayasam, Anusha Kamath, Gabriel Bayomi Tinoco Kalejaiye",
"published": "2022-10-04",
"updated": "2022-10-04",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2307.05956v2",
"title": "Language-Routing Mixture of Experts for Multilingual and Code-Switching Speech Recognition",
"abstract": "Multilingual speech recognition for both monolingual and code-switching\nspeech is a challenging task. Recently, based on the Mixture of Experts (MoE),\nmany works have made good progress in multilingual and code-switching ASR, but\npresent huge computational complexity with the increase of supported languages.\nIn this work, we propose a computation-efficient network named Language-Routing\nMixture of Experts (LR-MoE) for multilingual and code-switching ASR. LR-MoE\nextracts language-specific representations through the Mixture of Language\nExperts (MLE), which is guided to learn by a frame-wise language routing\nmechanism. The weight-shared frame-level language identification (LID) network\nis jointly trained as the shared pre-router of each MoE layer. Experiments show\nthat the proposed method significantly improves multilingual and code-switching\nspeech recognition performances over baseline with comparable computational\nefficiency.",
"authors": "Wenxuan Wang, Guodong Ma, Yuke Li, Binbin Du",
"published": "2023-07-12",
"updated": "2023-07-14",
"primary_cat": "cs.SD",
"cats": [
"cs.SD",
"eess.AS"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1110.2058v2",
"title": "Convergence Rates for Mixture-of-Experts",
"abstract": "In mixtures-of-experts (ME) model, where a number of submodels (experts) are\ncombined, there have been two longstanding problems: (i) how many experts\nshould be chosen, given the size of the training data? (ii) given the total\nnumber of parameters, is it better to use a few very complex experts, or is it\nbetter to combine many simple experts? In this paper, we try to provide some\ninsights to these problems through a theoretic study on a ME structure where\n$m$ experts are mixed, with each expert being related to a polynomial\nregression model of order $k$. We study the convergence rate of the maximum\nlikelihood estimator (MLE), in terms of how fast the Kullback-Leibler\ndivergence of the estimated density converges to the true density, when the\nsample size $n$ increases. The convergence rate is found to be dependent on\nboth $m$ and $k$, and certain choices of $m$ and $k$ are found to produce\noptimal convergence rates. Therefore, these results shed light on the two\naforementioned important problems: on how to choose $m$, and on how $m$ and $k$\nshould be compromised, for achieving good convergence rates.",
"authors": "Eduardo F. Mendes, Wenxin Jiang",
"published": "2011-10-10",
"updated": "2011-11-01",
"primary_cat": "math.ST",
"cats": [
"math.ST",
"stat.ME",
"stat.ML",
"stat.TH"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/1702.04832v1",
"title": "Dynamic Partition Models",
"abstract": "We present a new approach for learning compact and intuitive distributed\nrepresentations with binary encoding. Rather than summing up expert votes as in\nproducts of experts, we employ for each variable the opinion of the most\nreliable expert. Data points are hence explained through a partitioning of the\nvariables into expert supports. The partitions are dynamically adapted based on\nwhich experts are active. During the learning phase we adopt a smoothed version\nof this model that uses separate mixtures for each data dimension. In our\nexperiments we achieve accurate reconstructions of high-dimensional data points\nwith at most a dozen experts.",
"authors": "Marc Goessling, Yali Amit",
"published": "2017-02-16",
"updated": "2017-02-16",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG"
],
"category": "Mixture AND of AND Experts"
},
{
"url": "http://arxiv.org/abs/2210.16710v1",
"title": "Prediction Sets for High-Dimensional Mixture of Experts Models",
"abstract": "Large datasets make it possible to build predictive models that can capture\nheterogenous relationships between the response variable and features. The\nmixture of high-dimensional linear experts model posits that observations come\nfrom a mixture of high-dimensional linear regression models, where the mixture\nweights are themselves feature-dependent. In this paper, we show how to\nconstruct valid prediction sets for an $\\ell_1$-penalized mixture of experts\nmodel in the high-dimensional setting. We make use of a debiasing procedure to\naccount for the bias induced by the penalization and propose a novel strategy\nfor combining intervals to form a prediction set with coverage guarantees in\nthe mixture setting. Synthetic examples and an application to the prediction of\ncritical temperatures of superconducting materials show our method to have\nreliable practical performance.",
"authors": "Adel Javanmard, Simeng Shao, Jacob Bien",
"published": "2022-10-30",
"updated": "2022-10-30",
"primary_cat": "math.ST",
"cats": [
"math.ST",
"stat.ME",
"stat.ML",
"stat.TH"
],
"category": "Mixture AND of AND Experts"
}
]