diff --git "a/related_34K/test_related_short_2404.17858v2.json" "b/related_34K/test_related_short_2404.17858v2.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.17858v2.json" @@ -0,0 +1,1402 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17858v2", + "title": "Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion", + "abstract": "Multi-modal Emotion Recognition in Conversation (MERC) has received\nconsiderable attention in various fields, e.g., human-computer interaction and\nrecommendation systems. Most existing works perform feature disentanglement and\nfusion to extract emotional contextual information from multi-modal features\nand emotion classification. After revisiting the characteristic of MERC, we\nargue that long-range contextual semantic information should be extracted in\nthe feature disentanglement stage and the inter-modal semantic information\nconsistency should be maximized in the feature fusion stage. Inspired by recent\nState Space Models (SSMs), Mamba can efficiently model long-distance\ndependencies. Therefore, in this work, we fully consider the above insights to\nfurther improve the performance of MERC. Specifically, on the one hand, in the\nfeature disentanglement stage, we propose a Broad Mamba, which does not rely on\na self-attention mechanism for sequence modeling, but uses state space models\nto compress emotional representation, and utilizes broad learning systems to\nexplore the potential data distribution in broad space. Different from previous\nSSMs, we design a bidirectional SSM convolution to extract global context\ninformation. On the other hand, we design a multi-modal fusion strategy based\non probability guidance to maximize the consistency of information between\nmodalities. Experimental results show that the proposed method can overcome the\ncomputational and memory limitations of Transformer when modeling long-distance\ncontexts, and has great potential to become a next-generation general\narchitecture in MERC.", + "authors": "Yuntao Shou, Tao Meng, Fuchen Zhang, Nan Yin, Keqin Li", + "published": "2024-04-27", + "updated": "2024-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "2.1 Multi-modal Emotion Recognition in Conversation In the early eras, GRU [7] and LSTM [18] are the de-facto standard network designs for Natural Language Processing (NLP). Many recurrent neural network architectures [15, 16, 22, 29, 37] have been proposed for various Multi-modal Emotion Recognition in Conversation (MERC). The pioneering work, Transformer changed the landscape by enabling efficient parallel computing under the premise of long sequence modeling. Transformer treats text as a series of 1D sequence data and applies an attention architecture to achieve sequence modeling. Transformer\u2019s surprising results on long sequence modeling and its scalability have encouraged considerable follow-up work for MERC [6, 31, 32, 47, 64]. One line of works focus on achieving intra-modal and inter-modal information fusion. For example, CTNet [32] proposes a single Transformer and cross Transformer. CKETF [12] constructs a Context and Knowledge Enriched Transformer. TL-ERC applies the Transformer with the transfer learning. Another pioneering work, Graph Neural Network (GNN) [57\u201359] further improved the performance of ERC. 
The core idea of GNN is to learn the representation of nodes or graphs through the feature information of nodes and the connection relationships in the graph structure [5, 42, 63]. For instance, DialogueGCN [11] proposes to use context information to build dialogue graphs. DER-GCN [1] fuses event relationships into speaker relationship graphs. These dominant follow-up works have demonstrated excellent performance and higher efficiency on various multi-modal conversational emotion recognition data sets by introducing attention mechanisms or GNNs. In this work, we draw inspiration from Mamba and explore the ability to build a state space model (SSM) based model to improve multi-modal emotion representation learning without efficient parallel sequence modeling using attention, while retaining the sequence modeling advantages of Transformer. Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion MM \u201924, October 21\u2013November 01, 2024, Melbourne, Australia 2.2 State Space Models The State Space Models (SSMS) is used to describe the dynamic change process consisting of observed values and unknown internal state variables. Gu et al. [14] proposes a Structured State Space Sequence (S4) model, an alternative to the Transformer architecture that models long-range dependencies without using attention. The property of linear complexity of state space sequence lengths has received considerable research attention. Smith et al. [51] improves S4 by introducing MIMO SSM and efficient parallel scanning into the S4 layer to achieve parallel initialization and state reset of the hidden layer. He et al. [17] proposes introducing dense connection layers into SSM to improve the feature representation ability of shallow hidden layer states. Mehta et al. [38] improves the memory ability of the hidden layer by introducing gated units on S4. Recently, Gu et al. [13] proposes the general language model Mamba, which has better sequence modeling capabilities than Transformers and is linearly complex. Zhu et al. [65] introduces bidirectional SSM based on Mamba to improve the context information representation of the hidden layer. In this work, we are inspired by Mamba to transfer SSM to emotion representation learning without attention computation.", + "pre_questions": [], + "main_content": "INTRODUCTION Emotion recognition in conversation [2, 10, 39, 48, 55] has received considerable research attention and has been widely used in various fields, e.g., emotion analysis [20] and public opinion warning [56], etc. Recently, research on Multi-modal Emotion Recognition in Conversation (MERC) has mainly focused on multimodality, i.e., text, video and audio [4, 8, 35, 40, 46, 50]. As shown in Fig. 1, MERC aims to identify emotion labels in sentences with text, video, and audio information. Unlike previous work [26] that only uses text information for emotion recognition, MERC improves the model\u2019s \u2217Corresponding Author arXiv:2404.17858v2 [cs.CL] 3 May 2024 MM \u201924, October 21\u2013November 01, 2024, Melbourne, Australia Shou et al. emotion understanding capabilities by introducing audio and video information [19, 49]. The introduction of audio and video alleviates the limitation of insufficient semantic information caused by relying solely on text features. Many existing works [33, 52, 62] improve the performance of MERC by effectively extracting contextual semantic information of different modalities and fusing inter-modal complementary semantic information. 
By revisiting the characteristics of MERC, we argue that the core idea of MERC includes a feature disentanglement step and a feature fusion step. Specifically, the goal of feature disentanglement is to extract the contextual semantic information most relevant to emotional features in multi-modal features [54, 60]. Recent work on Transformers [9, 30, 32] has achieved great success in modeling long-range contextual semantic information. Compared with traditional Recurrent Neural Networks (RNNs) [29, 37], the advantage of Transformer is that it can effectively provide global contextual semantic information through the attention mechanism in parallel. However, the quadratic complexity of the self-attention mechanism in Transformers poses challenges in terms of speed and memory when dealing with long-range context dependencies. Inspired by the state space models, Mamba with linear complexity is proposed to achieve efficient training and inference. Mamba\u2019s excellent scaling performance shows that it is a promising Transformer alternative for context modeling. Therefore, to efficiently extract long-distance contextual semantic information, we designed the broad Mamba, which incorporates the SSMs for data-dependent global emotional context modeling, and a broad learning system to explore the potential data distribution in the broad space. Different from previous SSMs, we design a bidirectional SSM convolution to extract global context information. In addition, we also introduce position encoding information to improve SSMs\u2019 ability to understand sequences at different positions. After completing feature disentanglement, the model needs to perform feature fusion to maximize the consistency of information between different modalities. The core idea of feature fusion is to assign different weights by determining the importance of different modal features to downstream tasks. Many cross-modal feature fusions have been proposed in existing MERC research, e.g., tensor fusion network [61], graph fusion network [63], attention fusion [46]. However, the feature fusion process in previous works is relatively coarse-grained and cannot actually determine the contribution of each modal feature to downstream tasks. We argue that label information plays an important role in guiding multi-modal information fusion. Therefore, how to properly fuse multi-modality and determine the contribution of multi-modal features to downstream tasks in a fine-grained manner remains a challenge [24, 41]. To tackle the above problems, we propose an effective probabilityguided fusion mechanism to achieve multi-modal contextual feature fusion, which utilizes the predicted label probability of each modal feature as the weight vectors of the modal features. Compared with other feature fusion models for emotion recognition tasks, the proposed fusion method can utilize the predicted label probability information in a fine-grained manner to actually determine the contribution of different modal features to the emotion prediction task. To evaluate the effectiveness and efficiency of our proposed method, we conduct extensive experiments on two widely used benchmark datasets, IEMOCAP and MELD. In fact, the proposed method achieves state-of-the-art performance with low computational consumption, and experimental results demonstrate its effectiveness and efficiency. 
Overall, our main contributions can be summarized as follows: • We propose a Broad Mamba, which combines a broad learning system for searching abstract emotional features in a broad space and an SSM for data-dependent global emotional context information extraction. Different from previous SSMs, we design a bidirectional SSM convolution to extract global context information. • We propose an effective probability-guided fusion mechanism to achieve multi-modal contextual feature fusion, which utilizes the predicted label probability of each modal feature as the weight vector of that modal feature. • We conduct extensive experiments on the IEMOCAP and MELD datasets. Experimental results show that our proposed method achieves superior performance compared with well-established Transformer and GNN architectures. Word Embedding: Following previous work [28, 36], we use RoBERTa in this paper to obtain context-embedded representations of text. Specifically, we first segment the input text and add the start symbol '[CLS]' and the end symbol '[SEP]'. The processed input data is then passed to the RoBERTa model to obtain contextual representations $\phi_t$ of the text. Visual and Audio Feature Extraction: For video and audio features, following previous work [9, 28], we utilize DenseNet and openSMILE for feature extraction and obtain video embedding features $\phi_v$ and audio embedding features $\phi_a$, respectively. 3.2 State Space Model The State Space Model (SSM) is an efficient sequence model that can capture the dynamic changes of data over time. Owing to its efficient sequence modeling capability, the SSM has received widespread attention in various fields, e.g., video understanding and image segmentation. A typical SSM consists of a state equation and an observation equation, where the state equation describes the dynamic changes within the system and the observation equation describes the connection between the system state and the observations. Given an input $x(t) \in \mathbb{R}$ and a hidden state $h(t) \in \mathbb{R}^N$, the output $y(t)$ is obtained through a linear ordinary differential equation (ODE) as follows: $h'(t) = \mathbf{A}h(t) + \mathbf{B}x(t), \quad y(t) = \mathbf{C}h(t)$ (1) where $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the evolution parameter, $\mathbf{B} \in \mathbb{R}^{N \times 1}$ and $\mathbf{C} \in \mathbb{R}^{1 \times N}$ are the projection parameters, and $N$ is the latent state size. The SSM is a continuous-time model, which is difficult to integrate efficiently into deep learning algorithms. Inspired by the SSM, Mamba discretizes the ODE to achieve computational efficiency.
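To make the state-space view concrete, the toy NumPy sketch below contrasts the step-by-step recurrence with its equivalent global-convolution kernel, using the zero-order-hold discretization spelled out in Eqs. (2)-(5) below. It is an illustration only: the state matrix is diagonal and randomly initialized rather than structured as in Mamba, and this is not the authors' implementation.

```python
import numpy as np

# Toy single-input/single-output SSM with a diagonal state matrix of size N.
N, L = 8, 32
rng = np.random.default_rng(0)
A = -np.exp(rng.normal(size=N))   # diagonal, negative entries for stability
B = rng.normal(size=N)
C = rng.normal(size=N)
D = rng.normal()
delta = 0.1

# Zero-order-hold discretization (Eq. 2), with the first-order approximation of Eq. 3.
A_bar = np.exp(delta * A)         # exp(delta * A) elementwise, since A is diagonal
B_bar = delta * B                 # B_bar ~= delta * B

x = rng.normal(size=L)            # a length-L input sequence

# (a) Sequential recurrence (Eq. 4): h_t = A_bar h_{t-1} + B_bar x_t, y_t = C h_t + D x_t
h = np.zeros(N)
y_rec = np.empty(L)
for t in range(L):
    h = A_bar * h + B_bar * x[t]
    y_rec[t] = C @ h + D * x[t]

# (b) Global convolution (Eq. 5): K = (C B_bar, C A_bar B_bar, C A_bar^2 B_bar, ...)
K = np.array([C @ (A_bar ** k * B_bar) for k in range(L)])
y_conv = np.array([np.sum(K[: t + 1][::-1] * x[: t + 1]) for t in range(L)]) + D * x

assert np.allclose(y_rec, y_conv)  # the recurrent and convolutional views agree
```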
Mamba discretizes the evolution parameter $\mathbf{A}$ and the projection parameter $\mathbf{B}$ by introducing a timescale parameter $\Delta$ to obtain $\bar{\mathbf{A}}$ and $\bar{\mathbf{B}}$. The formula is defined as follows: $\bar{\mathbf{A}} = \exp(\Delta\mathbf{A}), \quad \bar{\mathbf{B}} = (\Delta\mathbf{A})^{-1}(\exp(\Delta\mathbf{A}) - \mathbf{I}) \cdot \Delta\mathbf{B}$ (2) In practice, we use a first-order Taylor series to obtain an approximation of $\bar{\mathbf{B}}$ as follows: $\bar{\mathbf{B}} = (e^{\Delta\mathbf{A}} - \mathbf{I})\mathbf{A}^{-1}\mathbf{B} \approx (\Delta\mathbf{A})(\Delta\mathbf{A})^{-1}\Delta\mathbf{B} = \Delta\mathbf{B}$ (3) After obtaining the discretized $\bar{\mathbf{A}}$ and $\bar{\mathbf{B}}$, we rewrite Eq. 1 as follows: $h_t = \bar{\mathbf{A}}h_{t-1} + \bar{\mathbf{B}}x_t, \quad y_t = \mathbf{C}h_t + \mathbf{D}x_t$ (4) and then the output is computed via a global convolution as follows: $\mathbf{K} = (\mathbf{C}\bar{\mathbf{B}}, \mathbf{C}\bar{\mathbf{A}}\bar{\mathbf{B}}, \ldots, \mathbf{C}\bar{\mathbf{A}}^{M}\bar{\mathbf{B}}), \quad \mathbf{y} = \mathbf{x} * \mathbf{K} + \mathbf{x} * \mathbf{D}$ (5) We adopt Mamba as the sequence modeling method in this work since it can efficiently process sequence data without significant performance degradation. Figure 2: The overall architecture of Broad Learning System (BLS). $\mathbf{Z}_i$ represents the feature nodes, $\mathbf{H}_i$ represents the enhancement nodes, and $\mathbf{Y}$ represents the predicted labels. 3.3 Broad Learning System The Broad Learning System (BLS) differs from traditional deep learning methods in that it mainly focuses on discovering the relationships between features in the input data, rather than extracting features through multi-level nonlinear transformations. The core idea of BLS is to jointly solve the optimization problem by integrating the semantic information of feature nodes and enhancement nodes. Notably, the feature nodes and enhancement nodes contain only one layer of learnable network parameters, so BLS has a faster inference speed than other deep learning architectures. The overall process of the BLS algorithm is shown in Fig. 2. Specifically, for given input data $\mathbf{X} \in \mathbb{R}^{N \times M}$, where $N$ is the number of samples and $M$ is the feature dimension, the generated feature nodes are defined as follows: $\mathbf{Z}_i \triangleq \phi(\mathbf{X}\mathbf{W}_{z_i} + \boldsymbol{\beta}_{z_i}), \quad i = 1, 2, \ldots, n$ (6) where $\mathbf{W}_{z_i} \in \mathbb{R}^{M \times d_z}$ and $\boldsymbol{\beta}_{z_i} \in \mathbb{R}^{1 \times d_z}$ are the learnable parameters, $d_z$ is the embedding dimension of the generated features, and $\phi$ is the activation function. The set of generated feature nodes is represented as $\mathbf{Z}^n \triangleq [\mathbf{Z}_1, \ldots, \mathbf{Z}_n]$, where $n$ is the size of the set of generated feature nodes. Similarly, the enhancement node features are defined as follows:
$\mathbf{H}_j \triangleq \phi(\mathbf{Z}\mathbf{W}_{h_j} + \boldsymbol{\beta}_{h_j}), \quad j = 1, 2, \ldots, m$ (7) where $\mathbf{W}_{h_j} \in \mathbb{R}^{d_z \times d_h}$ and $\boldsymbol{\beta}_{h_j} \in \mathbb{R}^{1 \times d_h}$ are the learnable parameters and $d_h$ is the embedding dimension of the enhancement features. The set of enhancement feature nodes is represented as $\mathbf{H}^m \triangleq [\mathbf{H}_1, \ldots, \mathbf{H}_m]$, where $m$ is the size of the set of enhancement feature nodes. The final model output is obtained by concatenating the feature nodes and enhancement nodes: $\mathbf{Y} = [\mathbf{Z}_1, \ldots, \mathbf{Z}_n | \mathbf{H}_1, \ldots, \mathbf{H}_m]\mathbf{W} = [\mathbf{Z}^n | \mathbf{H}^m]\mathbf{W}$ (8) where $\mathbf{W}$ denotes the learnable parameters. Figure 3: The overall framework of the proposed model. Specifically, we first input the extracted multi-modal features into a 1-D convolutional layer for multi-scale feature extraction and introduce position encoding information to consider the position information of the series in the context. Then we input the obtained multi-modal features with multi-scale information into Broad Mamba to extract contextual semantic information and explore the potential data distribution in the broad space. Finally, we use a probability-guidance fusion model to complete the fusion of multi-modal features and achieve emotion prediction. 4 THE PROPOSED METHOD 4.1 Feature Disentanglement 4.1.1 1D-Conv. To capture features of different scales and abstraction levels in the multi-modal features (e.g., information such as the relationships between words and the importance of each utterance), we input the text features $\phi_t$, video features $\phi_v$, and audio features $\phi_a$ into a 1-D convolutional network (Conv1D) as follows: $\hat{\phi}_t / \hat{\phi}_a / \hat{\phi}_v = \mathrm{Conv1D}_{t/a/v}(\phi_t, \phi_a, \phi_v)$ (9) where $\hat{\phi}_t \in \mathbb{R}^{T_t \times d_m}$, $\hat{\phi}_a \in \mathbb{R}^{T_a \times d_m}$, and $\hat{\phi}_v \in \mathbb{R}^{T_v \times d_m}$; $T_t$, $T_a$, and $T_v$ represent the feature dimensions of text, audio, and video, respectively, and $d_m$ represents the output feature dimension.
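As a rough illustration of Eq. (9), the PyTorch sketch below projects each modality onto a shared $d_m$-dimensional space with a per-modality 1-D convolution. The feature dimensions, kernel size, and batch shapes are assumptions for the example, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only (not the paper's settings).
d_text, d_audio, d_video, d_m = 768, 512, 342, 128
conv_t = nn.Conv1d(d_text,  d_m, kernel_size=3, padding=1)
conv_a = nn.Conv1d(d_audio, d_m, kernel_size=3, padding=1)
conv_v = nn.Conv1d(d_video, d_m, kernel_size=3, padding=1)

def project(x, conv):
    # x: (batch, seq_len, feat_dim); Conv1d expects (batch, channels, seq_len)
    return conv(x.transpose(1, 2)).transpose(1, 2)   # -> (batch, seq_len, d_m)

phi_t = torch.randn(4, 20, d_text)    # e.g., 4 dialogues of 20 utterances each
phi_a = torch.randn(4, 20, d_audio)
phi_v = torch.randn(4, 20, d_video)
x_t, x_a, x_v = project(phi_t, conv_t), project(phi_a, conv_a), project(phi_v, conv_v)
print(x_t.shape, x_a.shape, x_v.shape)  # each: torch.Size([4, 20, 128])
```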
Furthermore, to facilitate the model in capturing dependencies between long-distance positions in the sequence, we introduce sine and cosine position encoding embeddings as follows: $PE_{(pos, 2i)} = \sin\!\left(\frac{pos}{10000^{2i/D}}\right), \quad PE_{(pos, 2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/D}}\right)$ (10) where $pos$ represents the position in the sequence, $i$ represents the dimension index of the position encoding, $i = 0, 1, \ldots, D-1$, and $D$ represents the embedding dimension. We input $\hat{\phi}_t$, $\hat{\phi}_a$, and $\hat{\phi}_v$ (with $\hat{\phi}_t, \hat{\phi}_a, \hat{\phi}_v = \mathrm{Conv1D}_{t/a/v}(\phi_t, \phi_a, \phi_v) + PE$), which encode position information at each time step, into Broad Mamba. 4.1.2 Broad Mamba. The overall architecture of the proposed Broad Mamba is shown in Fig. 4. Figure 4: The overall architecture of Broad Mamba. We use a bidirectional SSM to encode forward and reverse contextual semantic information. In order to aggregate contextual semantic information from the forward and backward directions, we build a bidirectional SSM convolution module. Specifically, the first kernel $\overleftarrow{\kappa}$ performs a 1D convolution operator to obtain forward context information. The second kernel $\overrightarrow{\kappa}$ performs a 1D convolution operator to obtain the mutual information associated with emotional information, and we add the two convolved results. The overall operating process is formally defined as follows: $\bar{\phi}^{t/a/v}_j = \sum_{l \le j} \overleftarrow{\kappa}^{t/a/v}_{j-l} \odot \hat{\phi}^{t/a/v}_l + \sum_{l \ge j} \overrightarrow{\kappa}^{t/a/v}_{l-j} \odot \hat{\phi}^{t/a/v}_l + \mathbf{d}^{t/a/v} \odot \hat{\phi}^{t/a/v}_j = \mathrm{BiSSM}(\hat{\phi}^{t/a/v}_j)$ (11) where $\overleftarrow{\kappa}$ and $\overrightarrow{\kappa}$ are obtained via Eq. 5. To explore the potential data distribution of the multi-modal data in the broad space and improve the performance of Mamba, we use Broad Learning Systems (BLS) to enhance the emotional representation ability of the features.
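Before the broad-space mapping described next, the bidirectional SSM convolution of Eq. (11) can be sketched as follows. This naive O(L^2) loop assumes the forward and backward kernels have already been materialized from the discretized SSM parameters as in Eq. (5); it only illustrates the arithmetic of Eq. (11) and is not the authors' (FFT- or scan-based) implementation.

```python
import torch

def bi_ssm_conv(x, k_fwd, k_bwd, d):
    """Bidirectional SSM convolution in the spirit of Eq. (11).
    x:     (seq_len, dim)  position-encoded modality features
    k_fwd: (seq_len, dim)  forward (causal) kernel, assumed precomputed as in Eq. (5)
    k_bwd: (seq_len, dim)  backward (anti-causal) kernel, assumed precomputed
    d:     (dim,)          skip/residual weights
    """
    L, _ = x.shape
    y = d * x                                # skip term d ⊙ x_j
    for j in range(L):
        # forward branch: sum over l <= j of k_fwd[j - l] ⊙ x[l]
        y[j] += (k_fwd[: j + 1].flip(0) * x[: j + 1]).sum(dim=0)
        # backward branch: sum over l >= j of k_bwd[l - j] ⊙ x[l]
        y[j] += (k_bwd[: L - j] * x[j:]).sum(dim=0)
    return y

x = torch.randn(20, 128)
y = bi_ssm_conv(x, torch.randn(20, 128), torch.randn(20, 128), torch.randn(128))
print(y.shape)  # torch.Size([20, 128])
```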
Specifically, we map the features output by the BiSSM to a random broad space to obtain feature nodes and enhancement nodes, and we concatenate the feature nodes and enhancement nodes as the input of the feature fusion layer. The feature nodes can be formally defined as follows: $\mathbf{Z}^{t/a/v}_i \triangleq \mathrm{BiSSM}(\hat{\phi}^{t/a/v}_j)\mathbf{W}^{t/a/v}_{z_i} + \boldsymbol{\beta}^{t/a/v}_{z_i}, \quad i = 1, 2, \ldots, n,$ (12) and the enhancement nodes can be computed as: $\mathbf{H}^{t/a/v}_j \triangleq \mathrm{ReLU}(\mathbf{Z}^n_{t/a/v}\mathbf{W}^{t/a/v}_{h_j} + \boldsymbol{\beta}^{t/a/v}_{h_j}), \quad j = 1, 2, \ldots, m.$ (13) Furthermore, we introduce $\ell_2$ regularization into the loss function to avoid overfitting of the BLS, which is formally defined as follows: $\mathcal{L}_{norm} = \|[\mathbf{Z}^n_{t/a/v} | \mathbf{H}^m_{t/a/v}]\mathbf{W}^{t/a/v}_b - \mathbf{Y}_{t/a/v}\|^2_2 + \lambda\|\mathbf{W}^{t/a/v}_b\|^2_2$ (14) where $\lambda$ is the weight decay coefficient, $\mathbf{W}^{t/a/v}_b$ are the learnable parameters, and $\mathbf{Y}_{t/a/v} = [\mathbf{Z}^{t/a/v}_1, \ldots, \mathbf{Z}^{t/a/v}_n, \ldots, \mathbf{H}^{t/a/v}_1, \ldots, \mathbf{H}^{t/a/v}_m]$. By deriving and solving Eq. 14, the solution for $\mathbf{W}^{t/a/v}_b$ can be calculated as follows: $\mathbf{W}^{t/a/v}_b = \left([\mathbf{Z}^n_{t/a/v} | \mathbf{H}^m_{t/a/v}]^\top[\mathbf{Z}^n_{t/a/v} | \mathbf{H}^m_{t/a/v}] + \lambda\mathbf{I}\right)^{-1}[\mathbf{Z}^n_{t/a/v} | \mathbf{H}^m_{t/a/v}]^\top\mathbf{Y}_{t/a/v}$ (15) 4.1.3 Computation-Efficiency. The SSM and the self-attention mechanism in the Transformer both play an important role in modeling global contextual semantic information. However, the self-attention mechanism has quadratic complexity and is very time-consuming in training and inference. In contrast, the computational complexity of the SSM is $O(L \log L)$, so it can accelerate model inference when modeling long sequences. 4.2 Feature Fusion 4.2.1 Probability-guided Fusion Model. Many studies have shown that different modalities contribute differently to the prediction of emotion labels, so modal features with higher contributions need to be given greater weight in the multi-modal feature fusion process. Different from previous works that fuse modal features at a coarse-grained level without using label information for guidance, we design a probability-guided fusion model (PFM) that dynamically assigns weights to each modality by using the predicted emotion label probabilities of the modalities.
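One plausible reading of this weighting scheme, which Eqs. (16) and (17) below formalize, is sketched here in PyTorch: each modality passes through its own small MLP and a sigmoid to produce a gate that scales its representation before summation. The MLP widths and the scalar-gate choice are assumptions for illustration, not the paper's exact classifier configuration.

```python
import torch
import torch.nn as nn

class ProbabilityGuidedFusion(nn.Module):
    """Sketch of probability-guided fusion: a per-modality gate (cf. Eq. 16)
    scales each modality before the weighted sum (cf. Eq. 17)."""
    def __init__(self, dim):
        super().__init__()
        # one small MLP per modality (text, audio, video) producing a scalar gate
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, 1))
            for _ in range(3)
        )

    def forward(self, y_t, y_a, y_v):
        feats = [y_t, y_a, y_v]
        weights = [torch.sigmoid(g(f)) for g, f in zip(self.gates, feats)]  # (batch, 1) each
        return sum(w * f for w, f in zip(weights, feats))                   # fused representation

fusion = ProbabilityGuidedFusion(dim=128)
h_f = fusion(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
print(h_f.shape)  # torch.Size([8, 128])
```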
Specifically, we build an emotion classifier for the feature representation of each modality to obtain the predicted label probability, which serves as the weight of the modal features in the fusion process. The fusion process is formally defined as follows: $\omega_{t/a/v} = \mathrm{Sigmoid}\left(\mathrm{MLP}_{t/a/v}\left(\mathbf{Y}_{t/a/v}\right)\right)$ (16) and then we obtain the fused multi-modal feature representation as follows: $h_f = \omega_t\mathbf{Y}_t + \omega_a\mathbf{Y}_a + \omega_v\mathbf{Y}_v$ (17) 4.3 Model Training Finally, we utilize the fused multi-modal emotional context features for emotion classification. The formula is defined as follows: $\hat{y}_i = \arg\max\left(\mathrm{softmax}\left(\mathrm{MLP}(h_f)\right)\right)$ (18) where $\hat{y}_i$ is the predicted emotion label and MLP represents a multi-layer perceptron with multiple layers of learnable parameters. We use the cross-entropy loss as the classification loss of the model: $\mathcal{L}_{emo} = -\sum_{i=1}^{n} y_i \log \hat{y}_i$ (19) where $y_i$ is the true emotion label and $n$ is the number of samples. Therefore, during the optimization phase of the model, the overall training loss function is defined as follows: $\mathcal{L} = \mathcal{L}_{norm} + \mathcal{L}_{emo}$ (20) 5 EXPERIMENTS In this section, we first introduce the experimental datasets and evaluation metrics, and then describe the baseline methods used in the experiments. We then compare the proposed method with the baselines on two benchmark datasets, conduct ablation studies to analyze the effectiveness of the proposed modules and the importance of the multi-modal features, and finally use t-SNE to visualize the learned feature distributions. In the comparative experiments, our experimental results are the average of 10 runs with Table 1: Comparison with other baseline models on the IEMOCAP dataset. The best result in each column is in bold. Methods Parmas. IEMOCAP Happy Sad Neutral Angry Excited Frustrated Average(w) Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc.
F1 bc-LSTM 1.28M 29.1 34.4 57.1 60.8 54.1 51.8 57.0 56.7 51.1 57.9 67.1 58.9 55.2 54.9 LFM 2.34M 25.6 33.1 75.1 78.8 58.5 59.2 64.7 65.2 80.2 71.8 61.1 58.9 63.4 62.7 A-DMN \u2013 43.1 50.6 69.4 76.8 63.0 62.9 63.5 56.5 88.3 77.9 53.3 55.7 64.6 64.3 DialogueGCN 12.92M 40.6 42.7 89.1 84.5 62.0 63.5 67.5 64.1 65.5 63.1 64.1 66.9 65.2 64.1 RGAT 15.28M 60.1 51.6 78.8 77.3 60.1 65.4 70.7 63.0 78.0 68.0 64.3 61.2 65.0 65.2 CoMPM \u2013 59.9 60.7 78.0 82.2 60.4 63.0 70.2 59.9 85.8 78.2 62.9 59.5 67.7 67.2 EmoBERTa 499M 56.9 56.4 79.1 83.0 64.0 61.5 70.6 69.6 86.0 78.0 63.8 68.7 67.3 67.3 COGMEN \u2013 57.4 51.9 81.4 81.7 65.4 68.6 69.5 66.0 83.3 75.3 63.8 68.2 68.2 67.6 CTNet 8.49M 47.9 51.3 78.0 79.9 69.0 65.8 72.9 67.2 85.3 78.7 52.2 58.8 68.0 67.5 LR-GCN 15.77M 54.2 55.5 81.6 79.1 59.1 63.8 69.4 69.0 76.3 74.0 68.2 68.9 68.5 68.3 AdaGIN 6.3M 53.0 \u2013 81.5 \u2013 71.3 \u2013 65.9 \u2013 76.3 \u2013 67.8 \u2013 70.5 70.7 DER-GCN 78.59M 60.7 58.8 75.9 79.8 66.5 61.5 71.3 72.1 71.1 73.3 66.1 67.8 69.7 69.4 Our Model 1.73M 58.1 65.5 86.0 81.6 70.7 73.5 69.1 70.1 85.5 76.3 69.5 69.8 73.1 73.3 Table 2: Comparison with other baseline models on the MELD dataset. The best result in each column is in bold. Methods Parmas. MELD Neutral Surprise Fear Sadness Joy Disgust Anger Average(w) Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 bc-LSTM 1.28M 78.4 73.8 46.8 47.7 3.8 5.4 22.4 25.1 51.6 51.3 4.3 5.2 36.7 38.4 57.5 55.9 DialogueRNN 14.47M 72.1 73.5 54.4 49.4 1.6 1.2 23.9 23.8 52.0 50.7 1.5 1.7 41.0 41.5 56.1 55.9 DialogueGCN 12.92M 70.3 72.1 42.4 41.7 3.0 2.8 20.9 21.8 44.7 44.2 6.5 6.7 39.0 36.5 54.9 54.7 RGAT 15.28M 76.0 78.1 40.1 41.5 3.0 2.4 32.1 30.7 68.1 58.6 4.5 2.2 40.0 44.6 60.3 61.1 CoMPM \u2013 78.3 82.0 48.3 49.2 1.7 2.9 35.9 32.3 71.4 61.5 3.1 2.8 42.2 45.8 64.1 65.3 EmoBERTa 499M 78.9 82.5 50.2 50.2 1.8 1.9 33.3 31.2 72.1 61.7 9.1 2.5 43.3 46.4 64.1 65.2 A-DMN \u2013 76.5 78.9 56.2 55.3 8.2 8.6 22.1 24.9 59.8 57.4 1.2 3.4 41.3 40.9 61.5 60.4 LR-GCN 15.77M 76.7 80.0 53.3 55.2 0.0 0.0 49.6 35.1 68.0 64.4 10.7 2.7 48.0 51.0 65.7 65.6 AdaGIN 6.3M 79.8 \u2013 60.5 \u2013 15.2 \u2013 43.7 \u2013 64.5 \u2013 29.3 \u2013 56.2 \u2013 67.6 66.8 DER-GCN 78.59M 76.8 80.6 50.5 51.0 14.8 10.4 56.7 41.5 69.3 64.3 17.2 10.3 52.5 57.4 66.8 66.1 Our model 1.73M 73.1 79.7 62.5 60.6 25.0 6.9 52.9 38.9 76.1 63.0 57.1 27.0 53.0 57.1 68.0 67.6 different weight initializations. The results of our experiments are statistically significant (all \ud835\udc5d< 0.05) under paired \ud835\udc61-tests. 5.1 Implementation Details In the experiments, we use AdamW as the optimizer to update the parameters of the network. The model learning rate is set to 1e-4. For the experiment, the number of feature nodes \ud835\udc5band the number of enhancement node features \ud835\udc5aare set to 10 and 30 respectively. Following previous work, we use the same split ratio of training, test, and validation sets for model training and inference. 5.2 Datasets and Evaluation Metrics We conduct experiments using two popular MERC datasets, IEMOCAP [3] and MELD [44], which include three modal data: text, audio, and video. IEMOCAP contains 12 hours of conversations, each containing six emotion labels. The MELD dataset contains conversation clips from the TV show Friends and contains seven Table 3: Ablation studies for PE, BLS, PFM on the IEMOCAP and MELD datasets. Methods IEMOCAP MELD W-Acc. W-F1 W-Acc. 
W-F1 Ours 73.1 73.3 68.0 67.6 w/o PE 72.4(\u21930.7) 72.0(\u21931.3) 66.7(\u21931.3) 66.3(\u21931.3) w/o BLS 71.5(\u21931.6) 72.1(\u21931.2) 65.5(\u21932.5) 64.9(\u21932.7) w/o PFM 70.3(\u21932.8) 70.7(\u21932.6) 65.8(\u21932.2) 65.3(\u21932.3) different emotion labels. In addition, in the experiments we report the accuracy (Acc.), and F1 of the proposed method and other baseline methods on each emotion category and the overall weighted average accuracy (W-Acc.), and weighted average F1 (W-F1). Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion MM \u201924, October 21\u2013November 01, 2024, Melbourne, Australia Table 4: The effect of our method on IEMOCAP and MELD datasets using unimodal features and multi-modal features, respectively. Modality IEMOCAP MELD W-Acc. W-F1 W-Acc W-F1 T+A+V 73.1 73.3 68.0 67.6 T 65.5(\u21937.6) 65.7(\u21937.6) 64.6(\u21933.4) 63.9(\u21933.7) A 58.6(\u219314.5) 58.8(\u219314.5) 52.7(\u219315.3) 52.0(\u219315.6) V 49.4(\u219323.7) 49.7(\u219323.6) 40.1(\u219327.9) 41.4(\u219326.2) T+A 71.3(\u21931.8) 70.2(\u21933.1) 65.2(\u21932.8) 65.6(\u21932.0) T+V 68.7(\u21934.4) 67.4(\u21935.9) 65.0(\u21933.0) 63.7(\u21933.9) V+A 62.1(\u219311.0) 62.2(\u219311.1) 51.3(\u219316.7) 51.9(\u219315.7) 5.3 Baselines We compare several baselines on the IEMOCAP and MELD datasets, including bc-LSTM [43], LFM [34], A-DMN [55], DialogueGCN [11], RGAT [21], CoMPM [27], EmoBERTa [25], COGMEN [23], CTNet [32], LR-GCN [45], DER-GCN [1], AdaGIN [53]. 5.4 Overall Results Tables 1 and 2 show the experimental results on the IEMOCAP and MELD data sets. Experimental results show that our method significantly improves the recognition performance of emotion recognition. Specifically, on the MELD dataset, our model improves by 1.40% and 0.80% compared to the best-performing baselines W-Acc and W-F1, respectively. Similarly, on the IEMOCAP data set, our model improves by 2.60% and 2.60% on W-Acc and W-F1 respectively. The performance improvement may be attributed to the effective extraction of contextual semantic information and efficient integration of underlying data distribution. Furthermore, our method is optimal compared with other multimodal fusion methods in experimental results. The results demonstrate the effectiveness of our model in achieving multi-modal semantic information fusion. We also give W-Acc and W-F1 for each emotion. Specifically, on the IEMOCAP data set, our model\u2019s W-Acc is optimal on neutral and frustrated, and W-F1 is optimal on happy, neutral, and frustrated. On the MELD data set, our model\u2019s W-Acc is optimal on neutral and frustrated, and W-F1 is optimal on happy, neutral, and frustrated. We also report the model parameter quantities of the proposed method and the baseline method. The results show that the parameter amount of our model is 1.73M, which is far lower than other methods. The model complexity of other baseline methods is relatively high, but the emotion recognition effect is relatively poor. Experimental results show that the larger the number of parameters, the better the model performance is not necessarily. Experimental results demonstrate that the proposed method is an effective and efficient MERC model. 5.5 Running Time In this section, we report the inference time of different baselines and our proposed method on the IEMOCAP and MELD datasets. 
As shown in Table 5, the inference time of our method is below 10s, which is much lower than some GCN-based and RNN-based methods. Experimental results demonstrate the high efficiency of SSMs. Figure 5: Emotion recognition effects (W-Acc and W-F1, %) of different fusion methods (Add, Concatenate, LFM, Ours) on the IEMOCAP and MELD datasets. The experimental results are statistically significant ($t$-test with $p < 0.05$). Table 5: We tested the running time of the proposed method in this paper and other comparative methods on the IEMOCAP and MELD datasets. Methods Running time (s) IEMOCAP MELD bc-LSTM 8.3 10.4 LMF 4.5 8.4 DialogueRNN 61.7 138.2 RGAT 68.5 146.3 DialogueGCN 58.1 127.5 LR-GCN 87.7 142.3 DER-GCN 125.5 189.7 Ours 3.5 6.6 5.6 Ablation Studies Ablation studies for PE, BLS, PFM. As shown in Table 3, we found that the performance of the model decreases after removing PE, which indicates that positional encoding information is quite important for understanding contextual semantic information. Furthermore, without BLS, the performance of the model also degrades. The performance degradation is attributed to the underlying contextual data distribution, which is also crucial for emotion prediction. Finally, when the PFM module is removed, the performance of the model drops sharply. Experimental results demonstrate the necessity of multi-modal feature fusion. Different modalities contribute differently to the model's understanding of emotional information, and using label probabilities can guide the model to adaptively learn the weights of different modal features to better integrate multi-modal features. Ablation studies for multi-modal features. To show the impact of different modal features on the experimental results, we conducted ablation experiments with different combinations of modal features. From the experimental results in Table 4, it is found that: (1) In the single-modal experimental results of the model, the accuracy of emotion recognition with the text modality is far better than with the other two modalities, indicating that text features play a dominant role in emotion recognition.
The emotion recognition effect of the video modality features is the worst. (2) The emotion recognition effect using bimodal features is better than the corresponding single-modality results. Furthermore, since text features play a dominant role in emotion recognition, the bimodal feature combinations that include the text modality perform better than the combination of the acoustic and visual modalities. (3) The emotion recognition effect using all three modal features is optimal. Experimental results prove the necessity of fusing multi-modal features for emotion recognition. Effect of Different Fusion Strategies. To study the effectiveness of the probability-guided fusion method proposed in this paper, we compare it with some previous multi-modal fusion strategies: (1) Add: multi-modal information fusion is implemented by element-wise addition of the multi-modal features. (2) Concatenate: the multi-modal features are directly concatenated. (3) LFM: feature fusion is achieved by introducing a low-rank tensor fusion network. As shown in Fig. 5, compared with the other fusion methods, the probability-guided fusion strategy we propose achieves better emotion recognition on the two datasets. The results show that directly adding or concatenating multi-modal features to achieve multi-modal information fusion performs relatively poorly. The multi-modal information fusion effect of LFM is better than the adding and concatenating methods. The probabilistic fusion strategy we propose introduces label information to guide the fusion of multi-modal information and further achieves parameter optimization of the model. Interestingly, the fusion effect of the concatenate method on the IEMOCAP dataset is better than that of the add method, but its effect on the MELD dataset is worse than that of the add method. This may be because the dimensionality of the multi-modal features of the MELD dataset is relatively high, so concatenating the multi-modal information may lead to the curse of dimensionality. In contrast, our proposed probability-guided model effectively uses label information to guide the consistency of multi-modal semantic information. Figure 7: An illustrative example of multi-modal emotion recognition in the MELD dataset. We test the emotion recognition effects of DialogueRNN, DialogueGCN and the proposed method. 5.7 Multi-modal Representation Visualization In order to intuitively demonstrate the classification results of our proposed method on the two datasets, we use t-SNE to project the high-dimensional multi-modal feature representations into a two-dimensional space, as shown in Fig. 6. Figure 6: Visualizing feature embeddings for multi-modal emotion on the IEMOCAP and MELD benchmark datasets. Each dot represents an utterance, and its color represents an emotion. (a) Original features on the IEMOCAP dataset. (b) Features learned by our method on the IEMOCAP dataset. (c) Original features on the MELD dataset. (d) Features learned by our method on the MELD dataset. The results show that the proposed method is able to effectively separate different emotion categories from each other. However, there are still a small number of samples that are inseparable. In the future, we will consider constructing more stringent target constraints to optimize the distribution of different emotion categories in the feature space, so as to further improve the emotion recognition performance. 5.8 Error Analysis As shown in Fig. 7, we test the emotion classification results of DialogueRNN, DialogueGCN and the proposed method on the MELD dataset. In the disgust emotion category, the classification results of DialogueRNN and DialogueGCN are very poor, and the samples are all misclassified as neutral.
When the proposed method only uses text features, the emotion classification effect on the disgust Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion MM \u201924, October 21\u2013November 01, 2024, Melbourne, Australia category is unstable, but when multi-modal features are used, it can better classify disgust category emotions. 6 CONCLUSIONS In this work, we introduce a novel MERC method that comprehensively considers both feature disentanglement and multi-modal feature fusion. Specifically, during the feature disentanglement, we designed the broad Mamba, which incorporates the SSMs for datadependent global emotional context modeling, and a broad learning system to explore the potential data distribution in the broad space. Thanks to the proposed bidirectional SSMs, our method can efficiently extract global long-distance contextual semantic information, while only having linear complexity. During the multi-modal feature fusion, we propose an effective probability-guided fusion mechanism to achieve multi-modal contextual feature fusion, which utilizes the predicted label probability of each modal feature as the weight vectors of the modal features. Extensive experiments conducted on two widely used benchmark datasets, IEMOCAP and MELD demonstrate the effectiveness and efficiency of our proposed method." + }, + { + "url": "http://arxiv.org/abs/1811.00405v4", + "title": "DialogueRNN: An Attentive RNN for Emotion Detection in Conversations", + "abstract": "Emotion detection in conversations is a necessary step for a number of\napplications, including opinion mining over chat history, social media threads,\ndebates, argumentation mining, understanding consumer feedback in live\nconversations, etc. Currently, systems do not treat the parties in the\nconversation individually by adapting to the speaker of each utterance. In this\npaper, we describe a new method based on recurrent neural networks that keeps\ntrack of the individual party states throughout the conversation and uses this\ninformation for emotion classification. Our model outperforms the state of the\nart by a significant margin on two different datasets.", + "authors": "Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, Erik Cambria", + "published": "2018-11-01", + "updated": "2019-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.07953v2", + "title": "Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks", + "abstract": "Graph convolutional network (GCN) has been successfully applied to many\ngraph-based applications; however, training a large-scale GCN remains\nchallenging. Current SGD-based algorithms suffer from either a high\ncomputational cost that exponentially grows with number of GCN layers, or a\nlarge space requirement for keeping the entire graph and the embedding of each\nnode in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm\nthat is suitable for SGD-based training by exploiting the graph clustering\nstructure. Cluster-GCN works as the following: at each step, it samples a block\nof nodes that associate with a dense subgraph identified by a graph clustering\nalgorithm, and restricts the neighborhood search within this subgraph. 
This\nsimple but effective strategy leads to significantly improved memory and\ncomputational efficiency while being able to achieve comparable test accuracy\nwith previous algorithms. To test the scalability of our algorithm, we create a\nnew Amazon2M data with 2 million nodes and 61 million edges which is more than\n5 times larger than the previous largest publicly available dataset (Reddit).\nFor training a 3-layer GCN on this data, Cluster-GCN is faster than the\nprevious state-of-the-art VR-GCN (1523 seconds vs 1961 seconds) and using much\nless memory (2.2GB vs 11.2GB). Furthermore, for training 4 layer GCN on this\ndata, our algorithm can finish in around 36 minutes while all the existing GCN\ntraining algorithms fail to train due to the out-of-memory issue. Furthermore,\nCluster-GCN allows us to train much deeper GCN without much time and memory\noverhead, which leads to improved prediction accuracy---using a 5-layer\nCluster-GCN, we achieve state-of-the-art test F1 score 99.36 on the PPI\ndataset, while the previous best result was 98.71 by [16]. Our codes are\npublicly available at\nhttps://github.com/google-research/google-research/tree/master/cluster_gcn.", + "authors": "Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, Cho-Jui Hsieh", + "published": "2019-05-20", + "updated": "2019-08-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.08695v1", + "title": "DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition", + "abstract": "This paper presents our pioneering effort for emotion recognition in\nconversation (ERC) with pre-trained language models. Unlike regular documents,\nconversational utterances appear alternately from different parties and are\nusually organized as hierarchical structures in previous work. Such structures\nare not conducive to the application of pre-trained language models such as\nXLNet. To address this issue, we propose an all-in-one XLNet model, namely\nDialogXL, with enhanced memory to store longer historical context and\ndialog-aware self-attention to deal with the multi-party structures.\nSpecifically, we first modify the recurrence mechanism of XLNet from\nsegment-level to utterance-level in order to better model the conversational\ndata. Second, we introduce dialog-aware self-attention in replacement of the\nvanilla self-attention in XLNet to capture useful intra- and inter-speaker\ndependencies. Extensive experiments are conducted on four ERC benchmarks with\nmainstream models presented for comparison. The experimental results show that\nthe proposed model outperforms the baselines on all the datasets. Several other\nexperiments such as ablation study and error analysis are also conducted and\nthe results confirm the role of the critical modules of DialogXL.", + "authors": "Weizhou Shen, Junqing Chen, Xiaojun Quan, Zhixian Xie", + "published": "2020-12-16", + "updated": "2020-12-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.09075v1", + "title": "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network", + "abstract": "Real-time emotion recognition (RTER) in conversations is significant for\ndeveloping emotionally intelligent chatting machines. 
Without the future\ncontext in RTER, it becomes critical to build the memory bank carefully for\ncapturing historical context and summarize the memories appropriately to\nretrieve relevant information. We propose an Attention Gated Hierarchical\nMemory Network (AGHMN) to address the problems of prior work: (1) Commonly used\nconvolutional neural networks (CNNs) for utterance feature extraction are less\ncompatible in the memory modules; (2) Unidirectional gated recurrent units\n(GRUs) only allow each historical utterance to have context before it,\npreventing information propagation in the opposite direction; (3) The Soft\nAttention for summarizing loses the positional and ordering information of\nmemories, regardless of how the memory bank is built. Particularly, we propose\na Hierarchical Memory Network (HMN) with a bidirectional GRU (BiGRU) as the\nutterance reader and a BiGRU fusion layer for the interaction between\nhistorical utterances. For memory summarizing, we propose an Attention GRU\n(AGRU) where we utilize the attention weights to update the internal state of\nGRU. We further promote the AGRU to a bidirectional variant (BiAGRU) to balance\nthe contextual information from recent memories and that from distant memories.\nWe conduct experiments on two emotion conversation datasets with extensive\nanalysis, demonstrating the efficacy of our AGHMN models.", + "authors": "Wenxiang Jiao, Michael R. Lyu, Irwin King", + "published": "2019-11-20", + "updated": "2019-11-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2208.04933v3", + "title": "Simplified State Space Layers for Sequence Modeling", + "abstract": "Models using structured state space sequence (S4) layers have achieved\nstate-of-the-art performance on long-range sequence modeling tasks. An S4 layer\ncombines linear state space models (SSMs), the HiPPO framework, and deep\nlearning to achieve high performance. We build on the design of the S4 layer\nand introduce a new state space layer, the S5 layer. Whereas an S4 layer uses\nmany independent single-input, single-output SSMs, the S5 layer uses one\nmulti-input, multi-output SSM. We establish a connection between S5 and S4, and\nuse this to develop the initialization and parameterization used by the S5\nmodel. The result is a state space layer that can leverage efficient and widely\nimplemented parallel scans, allowing S5 to match the computational efficiency\nof S4, while also achieving state-of-the-art performance on several long-range\nsequence modeling tasks. S5 averages 87.4% on the long range arena benchmark,\nand 98.5% on the most difficult Path-X task.", + "authors": "Jimmy T. H. Smith, Andrew Warrington, Scott W. Linderman", + "published": "2022-08-09", + "updated": "2023-03-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.02187v1", + "title": "M2FNet: Multi-modal Fusion Network for Emotion Recognition in Conversation", + "abstract": "Emotion Recognition in Conversations (ERC) is crucial in developing\nsympathetic human-machine interaction. In conversational videos, emotion can be\npresent in multiple modalities, i.e., audio, video, and transcript. However,\ndue to the inherent characteristics of these modalities, multi-modal ERC has\nalways been considered a challenging undertaking. Existing ERC research focuses\nmainly on using text information in a discussion, ignoring the other two\nmodalities. 
We anticipate that emotion recognition accuracy can be improved by\nemploying a multi-modal approach. Thus, in this study, we propose a Multi-modal\nFusion Network (M2FNet) that extracts emotion-relevant features from visual,\naudio, and text modality. It employs a multi-head attention-based fusion\nmechanism to combine emotion-rich latent representations of the input data. We\nintroduce a new feature extractor to extract latent features from the audio and\nvisual modality. The proposed feature extractor is trained with a novel\nadaptive margin-based triplet loss function to learn emotion-relevant features\nfrom the audio and visual data. In the domain of ERC, the existing methods\nperform well on one benchmark dataset but not on others. Our results show that\nthe proposed M2FNet architecture outperforms all other methods in terms of\nweighted average F1 score on well-known MELD and IEMOCAP datasets and sets a\nnew state-of-the-art performance in ERC.", + "authors": "Vishal Chudasama, Purbayan Kar, Ashish Gudmalwar, Nirmesh Shah, Pankaj Wasnik, Naoyuki Onoe", + "published": "2022-06-05", + "updated": "2022-06-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1908.11540v1", + "title": "DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation", + "abstract": "Emotion recognition in conversation (ERC) has received much attention,\nlately, from researchers due to its potential widespread applications in\ndiverse areas, such as health-care, education, and human resources. In this\npaper, we present Dialogue Graph Convolutional Network (DialogueGCN), a graph\nneural network based approach to ERC. We leverage self and inter-speaker\ndependency of the interlocutors to model conversational context for emotion\nrecognition. Through the graph network, DialogueGCN addresses context\npropagation issues present in the current RNN-based methods. We empirically\nshow that this method alleviates such issues, while outperforming the current\nstate of the art on a number of benchmark emotion classification datasets.", + "authors": "Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, Alexander Gelbukh", + "published": "2019-08-30", + "updated": "2019-08-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.00492v3", + "title": "BiERU: Bidirectional Emotional Recurrent Unit for Conversational Sentiment Analysis", + "abstract": "Sentiment analysis in conversations has gained increasing attention in recent\nyears for the growing amount of applications it can serve, e.g., sentiment\nanalysis, recommender systems, and human-robot interaction. The main difference\nbetween conversational sentiment analysis and single sentence sentiment\nanalysis is the existence of context information which may influence the\nsentiment of an utterance in a dialogue. How to effectively encode contextual\ninformation in dialogues, however, remains a challenge. Existing approaches\nemploy complicated deep learning structures to distinguish different parties in\na conversation and then model the context information. 
In this paper, we\npropose a fast, compact and parameter-efficient party-ignorant framework named\nbidirectional emotional recurrent unit for conversational sentiment analysis.\nIn our system, a generalized neural tensor block followed by a two-channel\nclassifier is designed to perform context compositionality and sentiment\nclassification, respectively. Extensive experiments on three standard datasets\ndemonstrate that our model outperforms the state of the art in most cases.", + "authors": "Wei Li, Wei Shao, Shaoxiong Ji, Erik Cambria", + "published": "2020-05-31", + "updated": "2021-07-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.00396v3", + "title": "Efficiently Modeling Long Sequences with Structured State Spaces", + "abstract": "A central goal of sequence modeling is designing a single principled model\nthat can address sequence data across a range of modalities and tasks,\nparticularly on long-range dependencies. Although conventional models including\nRNNs, CNNs, and Transformers have specialized variants for capturing long\ndependencies, they still struggle to scale to very long sequences of $10000$ or\nmore steps. A promising recent approach proposed modeling sequences by\nsimulating the fundamental state space model (SSM) \\( x'(t) = Ax(t) + Bu(t),\ny(t) = Cx(t) + Du(t) \\), and showed that for appropriate choices of the state\nmatrix \\( A \\), this system could handle long-range dependencies mathematically\nand empirically. However, this method has prohibitive computation and memory\nrequirements, rendering it infeasible as a general sequence modeling solution.\nWe propose the Structured State Space sequence model (S4) based on a new\nparameterization for the SSM, and show that it can be computed much more\nefficiently than prior approaches while preserving their theoretical strengths.\nOur technique involves conditioning \\( A \\) with a low-rank correction,\nallowing it to be diagonalized stably and reducing the SSM to the well-studied\ncomputation of a Cauchy kernel. S4 achieves strong empirical results across a\ndiverse range of established benchmarks, including (i) 91\\% accuracy on\nsequential CIFAR-10 with no data augmentation or auxiliary losses, on par with\na larger 2-D ResNet, (ii) substantially closing the gap to Transformers on\nimage and language modeling tasks, while performing generation $60\\times$\nfaster (iii) SoTA on every task from the Long Range Arena benchmark, including\nsolving the challenging Path-X task of length 16k that all prior work fails on,\nwhile being as efficient as all competitors.", + "authors": "Albert Gu, Karan Goel, Christopher R\u00e9", + "published": "2021-10-31", + "updated": "2022-08-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.00818v2", + "title": "DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models", + "abstract": "Large language models (LLMs) face a daunting challenge due to the excessive\ncomputational and memory requirements of the commonly used Transformer\narchitecture. While state space model (SSM) is a new type of foundational\nnetwork architecture offering lower computational complexity, their performance\nhas yet to fully rival that of Transformers. This paper introduces DenseSSM, a\nnovel approach to enhance the flow of hidden information between layers in\nSSMs. 
By selectively integrating shallow-layer hidden states into deeper layers,\nDenseSSM retains fine-grained information crucial for the final output. Dense\nconnections enhanced DenseSSM still maintains the training parallelizability\nand inference efficiency. The proposed method can be widely applicable to\nvarious SSM types like RetNet and Mamba. With similar model size, DenseSSM\nachieves significant improvements, exemplified by DenseRetNet outperforming the\noriginal RetNet with up to 5% accuracy improvement on public benchmarks. Code\nis available at https://github.com/WailordHe/DenseSSM", + "authors": "Wei He, Kai Han, Yehui Tang, Chengcheng Wang, Yujie Yang, Tianyu Guo, Yunhe Wang", + "published": "2024-02-26", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.09417v2", + "title": "Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model", + "abstract": "Recently the state space models (SSMs) with efficient hardware-aware designs,\ni.e., the Mamba deep learning model, have shown great potential for long\nsequence modeling. Meanwhile building efficient and generic vision backbones\npurely upon SSMs is an appealing direction. However, representing visual data\nis challenging for SSMs due to the position-sensitivity of visual data and the\nrequirement of global context for visual understanding. In this paper, we show\nthat the reliance on self-attention for visual representation learning is not\nnecessary and propose a new generic vision backbone with bidirectional Mamba\nblocks (Vim), which marks the image sequences with position embeddings and\ncompresses the visual representation with bidirectional state space models. On\nImageNet classification, COCO object detection, and ADE20k semantic\nsegmentation tasks, Vim achieves higher performance compared to\nwell-established vision transformers like DeiT, while also demonstrating\nsignificantly improved computation & memory efficiency. For example, Vim is\n2.8$\\times$ faster than DeiT and saves 86.8% GPU memory when performing batch\ninference to extract features on images with a resolution of 1248$\\times$1248.\nThe results demonstrate that Vim is capable of overcoming the computation &\nmemory constraints on performing Transformer-style understanding for\nhigh-resolution images and it has great potential to be the next-generation\nbackbone for vision foundation models. Code is available at\nhttps://github.com/hustvl/Vim.", + "authors": "Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang", + "published": "2024-01-17", + "updated": "2024-02-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1909.10681v2", + "title": "Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations", + "abstract": "Messages in human conversations inherently convey emotions. The task of\ndetecting emotions in textual conversations leads to a wide range of\napplications such as opinion mining in social networks. 
However, enabling\nmachines to analyze emotions in conversations is challenging, partly because\nhumans often rely on the context and commonsense knowledge to express emotions.\nIn this paper, we address these challenges by proposing a Knowledge-Enriched\nTransformer (KET), where contextual utterances are interpreted using\nhierarchical self-attention and external commonsense knowledge is dynamically\nleveraged using a context-aware affective graph attention mechanism.\nExperiments on multiple textual conversation datasets demonstrate that both\ncontext and commonsense knowledge are consistently beneficial to the emotion\ndetection performance. In addition, the experimental results show that our KET\nmodel outperforms the state-of-the-art models on most of the tested datasets in\nF1 score.", + "authors": "Peixiang Zhong, Di Wang, Chunyan Miao", + "published": "2019-09-24", + "updated": "2019-10-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2003.08414v4", + "title": "Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks", + "abstract": "Graph convolutional networks (GCNs) have shown promising results in\nprocessing graph data by extracting structure-aware features. This gave rise to\nextensive work in geometric deep learning, focusing on designing network\narchitectures that ensure neuron activations conform to regularity patterns\nwithin the input graph. However, in most cases the graph structure is only\naccounted for by considering the similarity of activations between adjacent\nnodes, which limits the capabilities of such methods to discriminate between\nnodes in a graph. Here, we propose to augment conventional GCNs with geometric\nscattering transforms and residual convolutions. The former enables band-pass\nfiltering of graph signals, thus alleviating the so-called oversmoothing often\nencountered in GCNs, while the latter is introduced to clear the resulting\nfeatures of high-frequency noise. We establish the advantages of the presented\nScattering GCN with both theoretical results establishing the complementary\nbenefits of scattering and GCN features, as well as experimental results\nshowing the benefits of our method compared to leading graph neural networks\nfor semi-supervised node classification, including the recently proposed GAT\nnetwork that typically alleviates oversmoothing using graph attention\nmechanisms.", + "authors": "Yimeng Min, Frederik Wenkel, Guy Wolf", + "published": "2020-03-18", + "updated": "2022-01-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.13504v1", + "title": "EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition", + "abstract": "Emotion recognition in conversation (ERC) aims to analyze the speaker's state\nand identify their emotion in the conversation. Recent works in ERC focus on\ncontext modeling but ignore the representation of contextual emotional\ntendency. In order to extract multi-modal information and the emotional\ntendency of the utterance effectively, we propose a new structure named\nEmoformer to extract multi-modal emotion vectors from different modalities and\nfuse them with sentence vector to be an emotion capsule. 
Furthermore, we design\nan end-to-end ERC model called EmoCaps, which extracts emotion vectors through\nthe Emoformer structure and obtain the emotion classification results from a\ncontext analysis model. Through the experiments with two benchmark datasets,\nour model shows better performance than the existing state-of-the-art models.", + "authors": "Zaijing Li, Fengxiao Tang, Ming Zhao, Yusen Zhu", + "published": "2022-03-25", + "updated": "2022-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.10579v1", + "title": "DER-GCN: Dialogue and Event Relation-Aware Graph Convolutional Neural Network for Multimodal Dialogue Emotion Recognition", + "abstract": "With the continuous development of deep learning (DL), the task of multimodal\ndialogue emotion recognition (MDER) has recently received extensive research\nattention, which is also an essential branch of DL. The MDER aims to identify\nthe emotional information contained in different modalities, e.g., text, video,\nand audio, in different dialogue scenes. However, existing research has focused\non modeling contextual semantic information and dialogue relations between\nspeakers while ignoring the impact of event relations on emotion. To tackle the\nabove issues, we propose a novel Dialogue and Event Relation-Aware Graph\nConvolutional Neural Network for Multimodal Emotion Recognition (DER-GCN)\nmethod. It models dialogue relations between speakers and captures latent event\nrelations information. Specifically, we construct a weighted multi-relationship\ngraph to simultaneously capture the dependencies between speakers and event\nrelations in a dialogue. Moreover, we also introduce a Self-Supervised Masked\nGraph Autoencoder (SMGAE) to improve the fusion representation ability of\nfeatures and structures. Next, we design a new Multiple Information Transformer\n(MIT) to capture the correlation between different relations, which can provide\na better fuse of the multivariate information between relations. Finally, we\npropose a loss optimization strategy based on contrastive learning to enhance\nthe representation learning ability of minority class features. We conduct\nextensive experiments on the IEMOCAP and MELD benchmark datasets, which verify\nthe effectiveness of the DER-GCN model. The results demonstrate that our model\nsignificantly improves both the average accuracy and the f1 value of emotion\nrecognition.", + "authors": "Wei Ai, Yuntao Shou, Tao Meng, Keqin Li", + "published": "2023-12-17", + "updated": "2023-12-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.13947v3", + "title": "Long Range Language Modeling via Gated State Spaces", + "abstract": "State space models have shown to be effective at modeling long range\ndependencies, specially on sequence classification tasks. In this work we focus\non autoregressive sequence modeling over English books, Github source code and\nArXiv mathematics articles. Based on recent developments around the\neffectiveness of gated activation functions, we propose a new layer named Gated\nState Space (GSS) and show that it trains significantly faster than the\ndiagonal version of S4 (i.e. DSS) on TPUs, is fairly competitive with several\nwell-tuned Transformer-based baselines and exhibits zero-shot generalization to\nlonger inputs while being straightforward to implement. 
Finally, we show that\nleveraging self-attention to model local dependencies improves the performance\nof GSS even further.", + "authors": "Harsh Mehta, Ankit Gupta, Ashok Cutkosky, Behnam Neyshabur", + "published": "2022-06-27", + "updated": "2022-07-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1412.3555v1", + "title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling", + "abstract": "In this paper we compare different types of recurrent units in recurrent\nneural networks (RNNs). Especially, we focus on more sophisticated units that\nimplement a gating mechanism, such as a long short-term memory (LSTM) unit and\na recently proposed gated recurrent unit (GRU). We evaluate these recurrent\nunits on the tasks of polyphonic music modeling and speech signal modeling. Our\nexperiments revealed that these advanced recurrent units are indeed better than\nmore traditional recurrent units such as tanh units. Also, we found GRU to be\ncomparable to LSTM.", + "authors": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, Yoshua Bengio", + "published": "2014-12-11", + "updated": "2014-12-11", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.00752v1", + "title": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", + "abstract": "Foundation models, now powering most of the exciting applications in deep\nlearning, are almost universally based on the Transformer architecture and its\ncore attention module. Many subquadratic-time architectures such as linear\nattention, gated convolution and recurrent models, and structured state space\nmodels (SSMs) have been developed to address Transformers' computational\ninefficiency on long sequences, but they have not performed as well as\nattention on important modalities such as language. We identify that a key\nweakness of such models is their inability to perform content-based reasoning,\nand make several improvements. First, simply letting the SSM parameters be\nfunctions of the input addresses their weakness with discrete modalities,\nallowing the model to selectively propagate or forget information along the\nsequence length dimension depending on the current token. Second, even though\nthis change prevents the use of efficient convolutions, we design a\nhardware-aware parallel algorithm in recurrent mode. We integrate these\nselective SSMs into a simplified end-to-end neural network architecture without\nattention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\\times$\nhigher throughput than Transformers) and linear scaling in sequence length, and\nits performance improves on real data up to million-length sequences. As a\ngeneral sequence model backbone, Mamba achieves state-of-the-art performance\nacross several modalities such as language, audio, and genomics. 
On language\nmodeling, our Mamba-3B model outperforms Transformers of the same size and\nmatches Transformers twice its size, both in pretraining and downstream\nevaluation.", + "authors": "Albert Gu, Tri Dao", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.09498v2", + "title": "FusionMamba: Dynamic Feature Enhancement for Multimodal Image Fusion with Mamba", + "abstract": "Multi-modal image fusion aims to combine information from different modes to\ncreate a single image with comprehensive information and detailed textures.\nHowever, fusion models based on convolutional neural networks encounter\nlimitations in capturing global image features due to their focus on local\nconvolution operations. Transformer-based models, while excelling in global\nfeature modeling, confront computational challenges stemming from their\nquadratic complexity. Recently, the Selective Structured State Space Model has\nexhibited significant potential for long-range dependency modeling with linear\ncomplexity, offering a promising avenue to address the aforementioned dilemma.\nIn this paper, we propose FusionMamba, a novel dynamic feature enhancement\nmethod for multimodal image fusion with Mamba. Specifically, we devise an\nimproved efficient Mamba model for image fusion, integrating efficient visual\nstate space model with dynamic convolution and channel attention. This refined\nmodel not only upholds the performance of Mamba and global modeling capability\nbut also diminishes channel redundancy while enhancing local enhancement\ncapability. Additionally, we devise a dynamic feature fusion module (DFFM)\ncomprising two dynamic feature enhancement modules (DFEM) and a cross modality\nfusion mamba module (CMFM). The former serves for dynamic texture enhancement\nand dynamic difference perception, whereas the latter enhances correlation\nfeatures between modes and suppresses redundant intermodal information.\nFusionMamba has yielded state-of-the-art (SOTA) performance across various\nmultimodal medical image fusion tasks (CT-MRI, PET-MRI, SPECT-MRI), infrared\nand visible image fusion task (IR-VIS) and multimodal biomedical image fusion\ndataset (GFP-PC), which proves that our model has generalization ability.\nThe code for FusionMamba is available at\nhttps://github.com/millieXie/FusionMamba.", + "authors": "Xinyu Xie, Yawen Cui, Chio-In Ieong, Tao Tan, Xiaozhi Zhang, Xubin Zheng, Zitong Yu", + "published": "2024-04-15", + "updated": "2024-04-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.11144v3", + "title": "Is Mamba Effective for Time Series Forecasting?", + "abstract": "In the realm of time series forecasting (TSF), it is imperative for models to\nadeptly discern and distill hidden patterns within historical time series data\nto forecast future states. Transformer-based models exhibit formidable efficacy\nin TSF, primarily attributed to their advantage in apprehending these patterns.\nHowever, the quadratic complexity of the Transformer leads to low computational\nefficiency and high costs, which somewhat hinders the deployment of the TSF\nmodel in real-world scenarios. Recently, Mamba, a selective state space model,\nhas gained traction due to its ability to process dependencies in sequences\nwhile maintaining near-linear complexity. 
For TSF tasks, these characteristics\nenable Mamba to comprehend hidden patterns as the Transformer and reduce\ncomputational overhead compared to the Transformer. Therefore, we propose a\nMamba-based model named Simple-Mamba (S-Mamba) for TSF. Specifically, we\ntokenize the time points of each variate autonomously via a linear layer. A\nbidirectional Mamba layer is utilized to extract inter-variate correlations and\na Feed-Forward Network is set to learn temporal dependencies. Finally, the\ngeneration of forecast outcomes through a linear mapping layer. Experiments on\nthirteen public datasets prove that S-Mamba maintains low computational\noverhead and achieves leading performance. Furthermore, we conduct extensive\nexperiments to explore Mamba's potential in TSF tasks. Our code is available at\nhttps://github.com/wzhwzhwzh0921/S-D-Mamba.", + "authors": "Zihan Wang, Fanheng Kong, Shi Feng, Ming Wang, Xiaocui Yang, Han Zhao, Daling Wang, Yifei Zhang", + "published": "2024-03-17", + "updated": "2024-04-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/1705.07838v3", + "title": "Dynamics of a liquid plug in a capillary tube under cyclic forcing: memory effects and airway reopening", + "abstract": "In this paper, we investigate both experimentally and theoretically the\ndynamics of a liquid plug driven by a cyclic periodic forcing inside a\ncylindrical rigid capillary tube. First, it is shown that depending on the type\nof forcing (flow rate or pressure cycle), the dynamics of the liquid plug can\neither be stable and periodic, or conversely accelerative and eventually\nleading to the plug rupture. In the latter case, we identify the sources of the\ninstability as: (i) the cyclic diminution of the plug viscous resistance to\nmotion due to the decrease in the plug length and (ii) a cyclic reduction of\nthe plug interfacial resistance due to a lubrication effect. Since the flow is\nquasi-static and the forcing periodic, this cyclic evolution of the resistances\nrelies on the existence of flow memories stored in the length of the plug and\nthe thickness of the trailing film. Second, we show that contrary to\nunidirectional pressure forcing, cyclic forcing enables breaking of large plugs\nin confined space though it requires longer times. All the experimentally\nobserved tendencies are quantitatively recovered from an analytical model. This\nstudy not only reveals the underlying physics but also opens up the prospect\nfor the simulation of \"breathing\" of liquid plugs in complex geometries and the\ndetermination of optimal cycles for obstructed airways reopening.", + "authors": "S Signe Mamba, J C Magniez, F Zoueshtiagh, M Baudoin, S Mamba", + "published": "2017-05-11", + "updated": "2017-10-23", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "physics.class-ph" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.06800v1", + "title": "MambaMIL: Enhancing Long Sequence Modeling with Sequence Reordering in Computational Pathology", + "abstract": "Multiple Instance Learning (MIL) has emerged as a dominant paradigm to\nextract discriminative feature representations within Whole Slide Images (WSIs)\nin computational pathology. Despite driving notable progress, existing MIL\napproaches suffer from limitations in facilitating comprehensive and efficient\ninteractions among instances, as well as challenges related to time-consuming\ncomputations and overfitting. 
In this paper, we incorporate the Selective Scan\nSpace State Sequential Model (Mamba) in Multiple Instance Learning (MIL) for\nlong sequence modeling with linear complexity, termed as MambaMIL. By\ninheriting the capability of vanilla Mamba, MambaMIL demonstrates the ability\nto comprehensively understand and perceive long sequences of instances.\nFurthermore, we propose the Sequence Reordering Mamba (SR-Mamba) aware of the\norder and distribution of instances, which exploits the inherent valuable\ninformation embedded within the long sequences. With the SR-Mamba as the core\ncomponent, MambaMIL can effectively capture more discriminative features and\nmitigate the challenges associated with overfitting and high computational\noverhead. Extensive experiments on two public challenging tasks across nine\ndiverse datasets demonstrate that our proposed framework performs favorably\nagainst state-of-the-art MIL methods. The code is released at\nhttps://github.com/isyangshu/MambaMIL.", + "authors": "Shu Yang, Yihui Wang, Hao Chen", + "published": "2024-03-11", + "updated": "2024-03-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.18276v2", + "title": "RankMamba: Benchmarking Mamba's Document Ranking Performance in the Era of Transformers", + "abstract": "Transformer structure has achieved great success in multiple applied machine\nlearning communities, such as natural language processing (NLP), computer\nvision (CV) and information retrieval (IR). Transformer architecture's core\nmechanism -- attention requires $O(n^2)$ time complexity in training and $O(n)$\ntime complexity in inference. Many works have been proposed to improve the\nattention mechanism's scalability, such as Flash Attention and Multi-query\nAttention. A different line of work aims to design new mechanisms to replace\nattention. Recently, a notable model structure -- Mamba, which is based on\nstate space models, has achieved transformer-equivalent performance in multiple\nsequence modeling tasks.\n In this work, we examine Mamba's efficacy through the lens of a classical IR\ntask -- document ranking. A reranker model takes a query and a document as\ninput, and predicts a scalar relevance score. This task demands the language\nmodel's ability to comprehend lengthy contextual inputs and to capture the\ninteraction between query and document tokens. We find that (1) Mamba models\nachieve competitive performance compared to transformer-based models with the\nsame training recipe; (2) but also have a lower training throughput in\ncomparison to efficient transformer implementations such as flash attention. We\nhope this study can serve as a starting point to explore Mamba models in other\nclassical IR tasks. Our code implementation and trained checkpoints are made\npublic to facilitate reproducibility\n(https://github.com/zhichaoxu-shufe/RankMamba).", + "authors": "Zhichao Xu", + "published": "2024-03-27", + "updated": "2024-04-07", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2008.04532v1", + "title": "A purely mechanical model with asymmetric features for early morphogenesis of rod-shaped bacteria micro-colony", + "abstract": "To model the morphogenesis of rod-shaped bacterial micro-colony, several\nindividual-based models have been proposed in the biophysical literature. 
When\nstudying the shape of micro-colonies, most models present interaction forces\nsuch as attraction or filial link. In this article, we propose a model where\nthe bacteria interact only through non-overlapping constraints. We consider the\nasymmetry of the bacteria, and its influence on the friction with the\nsubstrate. Besides, we consider asymmetry in the mass distribution of the\nbacteria along their length. These two new modelling assumptions allow us to\nretrieve mechanical behaviours of micro-colony growth without the need of\ninteraction such as attraction. We compare our model to various sets of\nexperiments, discuss our results, and propose several quantifiers to compare\nmodel to data in a systematic way.", + "authors": "Marie Doumic, Sophie Hecht, Diane Peurichard", + "published": "2020-08-11", + "updated": "2020-08-11", + "primary_cat": "q-bio.CB", + "cats": [ + "q-bio.CB", + "cond-mat.soft", + "physics.bio-ph" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2401.09923v2", + "title": "MAMBA: Multi-level Aggregation via Memory Bank for Video Object Detection", + "abstract": "State-of-the-art video object detection methods maintain a memory structure,\neither a sliding window or a memory queue, to enhance the current frame using\nattention mechanisms. However, we argue that these memory structures are not\nefficient or sufficient because of two implied operations: (1) concatenating\nall features in memory for enhancement, leading to a heavy computational cost;\n(2) frame-wise memory updating, preventing the memory from capturing more\ntemporal information. In this paper, we propose a multi-level aggregation\narchitecture via memory bank called MAMBA. Specifically, our memory bank\nemploys two novel operations to eliminate the disadvantages of existing\nmethods: (1) light-weight key-set construction which can significantly reduce\nthe computational cost; (2) fine-grained feature-wise updating strategy which\nenables our method to utilize knowledge from the whole video. To better enhance\nfeatures from complementary levels, i.e., feature maps and proposals, we\nfurther propose a generalized enhancement operation (GEO) to aggregate\nmulti-level features in a unified manner. We conduct extensive evaluations on\nthe challenging ImageNetVID dataset. Compared with existing state-of-the-art\nmethods, our method achieves superior performance in terms of both speed and\naccuracy. More remarkably, MAMBA achieves mAP of 83.7/84.6% at 12.6/9.1 FPS\nwith ResNet-101. Code is available at\nhttps://github.com/guanxiongsun/vfe.pytorch.", + "authors": "Guanxiong Sun, Yang Hua, Guosheng Hu, Neil Robertson", + "published": "2024-01-18", + "updated": "2024-02-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.18401v1", + "title": "Spectral-Spatial Mamba for Hyperspectral Image Classification", + "abstract": "Recently, deep learning models have achieved excellent performance in\nhyperspectral image (HSI) classification. Among the many deep models,\nTransformer has gradually attracted interest for its excellence in modeling the\nlong-range dependencies of spatial-spectral features in HSI. However,\nTransformer has the problem of quadratic computational complexity due to the\nself-attention mechanism, which is heavier than other models and thus has\nlimited adoption in HSI processing. 
Fortunately, the recently emerging state\nspace model-based Mamba shows great computational efficiency while achieving\nthe modeling power of Transformers. Therefore, in this paper, we make a\npreliminary attempt to apply the Mamba to HSI classification, leading to the\nproposed spectral-spatial Mamba (SS-Mamba). Specifically, the proposed SS-Mamba\nmainly consists of spectral-spatial token generation module and several stacked\nspectral-spatial Mamba blocks. Firstly, the token generation module converts\nany given HSI cube to spatial and spectral tokens as sequences. And then these\ntokens are sent to stacked spectral-spatial mamba blocks (SS-MB). Each SS-MB\nblock consists of two basic mamba blocks and a spectral-spatial feature\nenhancement module. The spatial and spectral tokens are processed separately by\nthe two basic mamba blocks, respectively. Besides, the feature enhancement\nmodule modulates spatial and spectral tokens using HSI sample's center region\ninformation. In this way, the spectral and spatial tokens cooperate with each\nother and achieve information fusion within each block. The experimental\nresults conducted on widely used HSI datasets reveal that the proposed model\nachieves competitive results compared with the state-of-the-art methods. The\nMamba-based method opens a new window for HSI classification.", + "authors": "Lingbo Huang, Yushi Chen, Xin He", + "published": "2024-04-29", + "updated": "2024-04-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.15956v2", + "title": "A Survey on Visual Mamba", + "abstract": "State space models (SSMs) with selection mechanisms and hardware-aware\narchitectures, namely Mamba, have recently demonstrated significant promise in\nlong-sequence modeling. Since the self-attention mechanism in transformers has\nquadratic complexity with image size and increasing computational demands, the\nresearchers are now exploring how to adapt Mamba for computer vision tasks.\nThis paper is the first comprehensive survey aiming to provide an in-depth\nanalysis of Mamba models in the field of computer vision. It begins by\nexploring the foundational concepts contributing to Mamba's success, including\nthe state space model framework, selection mechanisms, and hardware-aware\ndesign. Next, we review these vision mamba models by categorizing them into\nfoundational ones and enhancing them with techniques such as convolution,\nrecurrence, and attention to improve their sophistication. We further delve\ninto the widespread applications of Mamba in vision tasks, which include their\nuse as a backbone in various levels of vision processing. This encompasses\ngeneral visual tasks, Medical visual tasks (e.g., 2D / 3D segmentation,\nclassification, and image registration, etc.), and Remote Sensing visual tasks.\nWe specially introduce general visual tasks from two levels: High/Mid-level\nvision (e.g., Object detection, Segmentation, Video classification, etc.) and\nLow-level vision (e.g., Image super-resolution, Image restoration, Visual\ngeneration, etc.). 
We hope this endeavor will spark additional interest within\nthe community to address current challenges and further apply Mamba models in\ncomputer vision.", + "authors": "Hanwei Zhang, Ying Zhu, Dan Wang, Lijun Zhang, Tianxiang Chen, Zi Ye", + "published": "2024-04-24", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.18861v1", + "title": "A Survey on Vision Mamba: Models, Applications and Challenges", + "abstract": "Mamba, a recent selective structured state space model, performs excellently\non long sequence modeling tasks. Mamba mitigates the modeling constraints of\nconvolutional neural networks and offers advanced modeling capabilities similar\nto those of Transformers, through global receptive fields and dynamic\nweighting. Crucially, it achieves this without incurring the quadratic\ncomputational complexity typically associated with Transformers. Due to its\nadvantages over the former two mainstream foundation models, Mamba exhibits\ngreat potential to be a visual foundation model. Researchers are actively\napplying Mamba to various computer vision tasks, leading to numerous emerging\nworks. To help keep pace with the rapid advancements in computer vision, this\npaper aims to provide a comprehensive review of visual Mamba approaches. This\npaper begins by delineating the formulation of the original Mamba model.\nSubsequently, our review of visual Mamba delves into several representative\nbackbone networks to elucidate the core insights of the visual Mamba. We then\ncategorize related works using different modalities, including image, video,\npoint cloud, multi-modal, and others. Specifically, for image applications, we\nfurther organize them into distinct tasks to facilitate a more structured\ndiscussion. Finally, we discuss the challenges and future research directions\nfor visual Mamba, providing insights for future research in this quickly\nevolving area. A comprehensive list of visual Mamba models reviewed in this\nwork is available at https://github.com/Ruixxxx/Awesome-Vision-Mamba-Models.", + "authors": "Rui Xu, Shu Yang, Yihui Wang, Bo Du, Hao Chen", + "published": "2024-04-29", + "updated": "2024-04-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2005.11944v1", + "title": "The sterile insect technique used as a barrier control against reinfestation", + "abstract": "The sterile insect technique consists in massive release of sterilized males\nin the aim to reduce the size of mosquitoes population or even eradicate it. In\nthis work, we investigate the feasability of using the sterile insect technique\nas a barrier against reinvasion. More precisely, we provide some numerical\nsimulations and mathematical results showing that performing the sterile insect\ntechnique on a band large enough may stop reinvasion.", + "authors": "Luis Almeida, Jorge Estrada, Nicolas Vauchelet", + "published": "2020-05-25", + "updated": "2020-05-25", + "primary_cat": "math.AP", + "cats": [ + "math.AP", + "q-bio.PE" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.19654v1", + "title": "RSMamba: Remote Sensing Image Classification with State Space Model", + "abstract": "Remote sensing image classification forms the foundation of various\nunderstanding tasks, serving a crucial function in remote sensing image\ninterpretation. 
The recent advancements of Convolutional Neural Networks (CNNs)\nand Transformers have markedly enhanced classification accuracy. Nonetheless,\nremote sensing scene classification remains a significant challenge, especially\ngiven the complexity and diversity of remote sensing scenarios and the\nvariability of spatiotemporal resolutions. The capacity for whole-image\nunderstanding can provide more precise semantic cues for scene discrimination.\nIn this paper, we introduce RSMamba, a novel architecture for remote sensing\nimage classification. RSMamba is based on the State Space Model (SSM) and\nincorporates an efficient, hardware-aware design known as the Mamba. It\nintegrates the advantages of both a global receptive field and linear modeling\ncomplexity. To overcome the limitation of the vanilla Mamba, which can only\nmodel causal sequences and is not adaptable to two-dimensional image data, we\npropose a dynamic multi-path activation mechanism to augment Mamba's capacity\nto model non-causal data. Notably, RSMamba maintains the inherent modeling\nmechanism of the vanilla Mamba, yet exhibits superior performance across\nmultiple remote sensing image classification datasets. This indicates that\nRSMamba holds significant potential to function as the backbone of future\nvisual foundation models. The code will be available at\n\\url{https://github.com/KyanChen/RSMamba}.", + "authors": "Keyan Chen, Bowen Chen, Chenyang Liu, Wenyuan Li, Zhengxia Zou, Zhenwei Shi", + "published": "2024-03-28", + "updated": "2024-03-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.18959v1", + "title": "MambaStock: Selective state space model for stock prediction", + "abstract": "The stock market plays a pivotal role in economic development, yet its\nintricate volatility poses challenges for investors. Consequently, research and\naccurate predictions of stock price movements are crucial for mitigating risks.\nTraditional time series models fall short in capturing nonlinearity, leading to\nunsatisfactory stock predictions. This limitation has spurred the widespread\nadoption of neural networks for stock prediction, owing to their robust\nnonlinear generalization capabilities. Recently, Mamba, a structured state\nspace sequence model with a selection mechanism and scan module (S6), has\nemerged as a powerful tool in sequence modeling tasks. Leveraging this\nframework, this paper proposes a novel Mamba-based model for stock price\nprediction, named MambaStock. The proposed MambaStock model effectively mines\nhistorical stock market data to predict future stock prices without handcrafted\nfeatures or extensive preprocessing procedures. Empirical studies on several\nstocks indicate that the MambaStock model outperforms previous methods,\ndelivering highly accurate predictions. This enhanced accuracy can assist\ninvestors and institutions in making informed decisions, aiming to maximize\nreturns while minimizing risks. This work underscores the value of Mamba in\ntime-series forecasting. 
Source code is available at\nhttps://github.com/zshicode/MambaStock.", + "authors": "Zhuangwei Shi", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE", + "q-fin.ST" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2405.03011v1", + "title": "AC-MAMBASEG: An adaptive convolution and Mamba-based architecture for enhanced skin lesion segmentation", + "abstract": "Skin lesion segmentation is a critical task in computer-aided diagnosis\nsystems for dermatological diseases. Accurate segmentation of skin lesions from\nmedical images is essential for early detection, diagnosis, and treatment\nplanning. In this paper, we propose a new model for skin lesion segmentation\nnamely AC-MambaSeg, an enhanced model that has the hybrid CNN-Mamba backbone,\nand integrates advanced components such as Convolutional Block Attention Module\n(CBAM), Attention Gate, and Selective Kernel Bottleneck. AC-MambaSeg leverages\nthe Vision Mamba framework for efficient feature extraction, while CBAM and\nSelective Kernel Bottleneck enhance its ability to focus on informative regions\nand suppress background noise. We evaluate the performance of AC-MambaSeg on\ndiverse datasets of skin lesion images including ISIC-2018 and PH2; then\ncompare it against existing segmentation methods. Our model shows promising\npotential for improving computer-aided diagnosis systems and facilitating early\ndetection and treatment of dermatological diseases. Our source code will be\nmade available at: https://github.com/vietthanh2710/AC-MambaSeg.", + "authors": "Viet-Thanh Nguyen, Van-Truong Pham, Thi-Thao Tran", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.08506v2", + "title": "P-Mamba: Marrying Perona Malik Diffusion with Mamba for Efficient Pediatric Echocardiographic Left Ventricular Segmentation", + "abstract": "In pediatric cardiology, the accurate and immediate assessment of cardiac\nfunction through echocardiography is important since it can determine whether\nurgent intervention is required in many emergencies. However, echocardiography\nis characterized by ambiguity and heavy background noise interference, bringing\nmore difficulty to accurate segmentation. Present methods lack efficiency and\nare also prone to mistakenly segmenting some background noise areas as the left\nventricular area due to noise disturbance. To relieve the two issues, we\nintroduce P-Mamba for efficient pediatric echocardiographic left ventricular\nsegmentation. Specifically, we turn to the recently proposed vision mamba\nlayers in our vision mamba encoder branch to improve the computing and memory\nefficiency of our model while modeling global dependencies. In the other\nDWT-based PMD encoder branch, we devise DWT-based Perona-Malik Diffusion (PMD)\nBlocks that utilize PMD for noise suppression, while simultaneously preserving\nthe local shape cues of the left ventricle. Leveraging the strengths of both\nthe two encoder branches, P-Mamba achieves superior accuracy and efficiency to\nestablished models, such as vision transformers with quadratic and linear\ncomputational complexity. 
This innovative approach promises significant\nadvancements in pediatric cardiac imaging and beyond.", + "authors": "Zi Ye, Tianxiang Chen, Fangyijie Wang, Hanwei Zhang, Guanxi Li, Lijun Zhang", + "published": "2024-02-13", + "updated": "2024-03-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.07332v1", + "title": "Large Window-based Mamba UNet for Medical Image Segmentation: Beyond Convolution and Self-attention", + "abstract": "In clinical practice, medical image segmentation provides useful information\non the contours and dimensions of target organs or tissues, facilitating\nimproved diagnosis, analysis, and treatment. In the past few years,\nconvolutional neural networks (CNNs) and Transformers have dominated this area,\nbut they still suffer from either limited receptive fields or costly long-range\nmodeling. Mamba, a State Space Sequence Model (SSM), recently emerged as a\npromising paradigm for long-range dependency modeling with linear complexity.\nIn this paper, we introduce a Large Window-based Mamba U-shape Network, or\nLMa-UNet, for 2D and 3D medical image segmentation. A distinguishing feature of\nour LMa-UNet is its utilization of large windows, excelling in locally spatial\nmodeling compared to small kernel-based CNNs and small window-based\nTransformers, while maintaining superior efficiency in global modeling compared\nto self-attention with quadratic complexity. Additionally, we design a novel\nhierarchical and bidirectional Mamba block to further enhance the global and\nneighborhood spatial modeling capability of Mamba. Comprehensive experiments\ndemonstrate the effectiveness and efficiency of our method and the feasibility\nof using large window size to achieve large receptive fields. Codes are\navailable at https://github.com/wjh892521292/LMa-UNet.", + "authors": "Jinhong Wang, Jintai Chen, Danny Chen, Jian Wu", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2405.03025v1", + "title": "Matten: Video Generation with Mamba-Attention", + "abstract": "In this paper, we introduce Matten, a cutting-edge latent diffusion model\nwith Mamba-Attention architecture for video generation. With minimal\ncomputational cost, Matten employs spatial-temporal attention for local video\ncontent modeling and bidirectional Mamba for global video content modeling. Our\ncomprehensive experimental evaluation demonstrates that Matten has competitive\nperformance with the current Transformer-based and GAN-based models in\nbenchmark performance, achieving superior FVD scores and efficiency.\nAdditionally, we observe a direct positive correlation between the complexity\nof our designed model and the improvement in video quality, indicating the\nexcellent scalability of Matten.", + "authors": "Yu Gao, Jiancheng Huang, Xiaopeng Sun, Zequn Jie, Yujie Zhong, Lin Ma", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.09157v1", + "title": "VM-UNET-V2 Rethinking Vision Mamba UNet for Medical Image Segmentation", + "abstract": "In the field of medical image segmentation, models based on both CNN and\nTransformer have been thoroughly investigated. 
However, CNNs have limited\nmodeling capabilities for long-range dependencies, making it challenging to\nexploit the semantic information within images fully. On the other hand, the\nquadratic computational complexity poses a challenge for Transformers.\nRecently, State Space Models (SSMs), such as Mamba, have been recognized as a\npromising method. They not only demonstrate superior performance in modeling\nlong-range interactions, but also preserve a linear computational complexity.\nInspired by the Mamba architecture, we propose Vision Mamba-UNetV2, in which the Visual\nState Space (VSS) Block is introduced to capture extensive contextual\ninformation and the Semantics and Detail Infusion (SDI) is introduced to augment\nthe infusion of low-level and high-level features. We conduct comprehensive\nexperiments on the ISIC17, ISIC18, CVC-300, CVC-ClinicDB, Kvasir, CVC-ColonDB\nand ETIS-LaribPolypDB public datasets. The results indicate that VM-UNetV2\nexhibits competitive performance in medical image segmentation tasks. Our code\nis available at https://github.com/nobodyplayer1/VM-UNetV2.", + "authors": "Mingya Zhang, Yue Yu, Limei Gu, Tingsheng Lin, Xianping Tao", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.17839v1", + "title": "ReMamber: Referring Image Segmentation with Mamba Twister", + "abstract": "Referring Image Segmentation (RIS) leveraging transformers has achieved great\nsuccess on the interpretation of complex visual-language tasks. However, the\nquadratic computation cost makes it resource-consuming in capturing long-range\nvisual-language dependencies. Fortunately, Mamba addresses this with efficient\nlinear complexity in processing. However, directly applying Mamba to\nmulti-modal interactions presents challenges, primarily due to inadequate\nchannel interactions for the effective fusion of multi-modal data. In this\npaper, we propose ReMamber, a novel RIS architecture that integrates the power\nof Mamba with a multi-modal Mamba Twister block. The Mamba Twister explicitly\nmodels image-text interaction, and fuses textual and visual features through\nits unique channel and spatial twisting mechanism. We achieve the\nstate-of-the-art on three challenging benchmarks. Moreover, we conduct thorough\nanalyses of ReMamber and discuss other fusion designs using Mamba. These\nprovide valuable perspectives for future research.", + "authors": "Yuhuan Yang, Chaofan Ma, Jiangchao Yao, Zhun Zhong, Ya Zhang, Yanfeng Wang", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.08678v2", + "title": "Graph Mamba: Towards Learning on Graphs with State Space Models", + "abstract": "Graph Neural Networks (GNNs) have shown promising potential in graph\nrepresentation learning. The majority of GNNs define a local message-passing\nmechanism, propagating information over the graph by stacking multiple layers.\nThese methods, however, are known to suffer from two major limitations:\nover-squashing and poor capturing of long-range dependencies. Recently, Graph\nTransformers (GTs) emerged as a powerful alternative to Message-Passing Neural\nNetworks (MPNNs). GTs, however, have quadratic computational cost, lack\ninductive biases on graph structures, and rely on complex Positional/Structural\nEncodings (SE/PE). 
In this paper, we show that while Transformers, complex\nmessage-passing, and SE/PE are sufficient for good performance in practice,\nneither is necessary. Motivated by the recent success of State Space Models\n(SSMs), such as Mamba, we present Graph Mamba Networks (GMNs), a general\nframework for a new class of GNNs based on selective SSMs. We discuss and\ncategorize the new challenges when adapting SSMs to graph-structured data, and\npresent four required and one optional steps to design GMNs, where we choose\n(1) Neighborhood Tokenization, (2) Token Ordering, (3) Architecture of\nBidirectional Selective SSM Encoder, (4) Local Encoding, and dispensable (5) PE\nand SE. We further provide theoretical justification for the power of GMNs.\nExperiments demonstrate that despite much less computational cost, GMNs attain\nan outstanding performance in long-range, small-scale, large-scale, and\nheterophilic benchmark datasets.", + "authors": "Ali Behrouz, Farnoosh Hashemi", + "published": "2024-02-13", + "updated": "2024-02-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.19394v1", + "title": "CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation", + "abstract": "State space models and Mamba-based models have been increasingly applied\nacross various domains, achieving state-of-the-art performance. This technical\nreport introduces the first attempt to train a transferable Mamba model\nutilizing contrastive language-image pretraining (CLIP). We have trained Mamba\nmodels of varying sizes and undertaken comprehensive evaluations of these\nmodels on 26 zero-shot classification datasets and 16 out-of-distribution (OOD)\ndatasets. Our findings reveal that a Mamba model with 67 million parameters is\non par with a 307 million-parameter Vision Transformer (ViT) model in zero-shot\nclassification tasks, highlighting the parameter efficiency of Mamba models. In\ntests of OOD generalization, Mamba-based models exhibit exceptional performance\nin conditions of OOD image contrast or when subjected to high-pass filtering.\nHowever, a Hessian analysis indicates that Mamba models feature a sharper and\nmore non-convex landscape compared to ViT-based models, making them more\nchallenging to train. The source code is available at\nhttps://github.com/raytrun/mamba-clip.", + "authors": "Weiquan Huang, Yifei Shen, Yifan Yang", + "published": "2024-04-30", + "updated": "2024-04-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.09146v1", + "title": "Fusion-Mamba for Cross-modality Object Detection", + "abstract": "Cross-modality fusing complementary information from different modalities\neffectively improves object detection performance, making it more useful and\nrobust for a wider range of applications. Existing fusion strategies combine\ndifferent types of images or merge different backbone features through\nelaborated neural network modules. However, these methods neglect that modality\ndisparities affect cross-modality fusion performance, as different modalities\nwith different camera focal lengths, placements, and angles are hardly fused.\nIn this paper, we investigate cross-modality fusion by associating cross-modal\nfeatures in a hidden state space based on an improved Mamba with a gating\nmechanism. 
We design a Fusion-Mamba block (FMB) to map cross-modal features\ninto a hidden state space for interaction, thereby reducing disparities between\ncross-modal features and enhancing the representation consistency of fused\nfeatures. FMB contains two modules: the State Space Channel Swapping (SSCS)\nmodule facilitates shallow feature fusion, and the Dual State Space Fusion\n(DSSF) enables deep fusion in a hidden state space. Through extensive\nexperiments on public datasets, our proposed approach outperforms the\nstate-of-the-art methods on $m$AP with 5.9% on $M^3FD$ and 4.9% on FLIR-Aligned\ndatasets, demonstrating superior object detection performance. To the best of\nour knowledge, this is the first work to explore the potential of Mamba for\ncross-modal fusion and establish a new baseline for cross-modality object\ndetection.", + "authors": "Wenhao Dong, Haodong Zhu, Shaohui Lin, Xiaoyan Luo, Yunhang Shen, Xuhui Liu, Juan Zhang, Guodong Guo, Baochang Zhang", + "published": "2024-04-14", + "updated": "2024-04-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.09898v1", + "title": "TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting", + "abstract": "Long-term time-series forecasting remains challenging due to the difficulty\nin capturing long-term dependencies, achieving linear scalability, and\nmaintaining computational efficiency. We introduce TimeMachine, an innovative\nmodel that leverages Mamba, a state-space model, to capture long-term\ndependencies in multivariate time series data while maintaining linear\nscalability and small memory footprints. TimeMachine exploits the unique\nproperties of time series data to produce salient contextual cues at\nmulti-scales and leverage an innovative integrated quadruple-Mamba architecture\nto unify the handling of channel-mixing and channel-independence situations,\nthus enabling effective selection of contents for prediction against global and\nlocal contexts at different scales. Experimentally, TimeMachine achieves\nsuperior performance in prediction accuracy, scalability, and memory\nefficiency, as extensively validated using benchmark datasets. Code\navailability: https://github.com/Atik-Ahamed/TimeMachine", + "authors": "Md Atik Ahamed, Qiang Cheng", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.13660v2", + "title": "ProMamba: Prompt-Mamba for polyp segmentation", + "abstract": "Detecting polyps through colonoscopy is an important task in medical image\nsegmentation, which provides significant assistance and reference value for\nclinical surgery. However, accurate segmentation of polyps is a challenging\ntask due to two main reasons. Firstly, polyps exhibit various shapes and\ncolors. Secondly, the boundaries between polyps and their normal surroundings\nare often unclear. Additionally, significant differences between different\ndatasets lead to limited generalization capabilities of existing methods. To\naddress these issues, we propose a segmentation model based on Prompt-Mamba,\nwhich incorporates the latest Vision-Mamba and prompt technologies. 
Compared to\nprevious models trained on the same dataset, our model not only maintains high\nsegmentation accuracy on the validation part of the same dataset but also\ndemonstrates superior accuracy on unseen datasets, exhibiting excellent\ngeneralization capabilities. Notably, we are the first to apply the\nVision-Mamba architecture to polyp segmentation and the first to utilize prompt\ntechnology in a polyp segmentation model. Our model efficiently accomplishes\nsegmentation tasks, surpassing previous state-of-the-art methods by an average\nof 5% across six datasets. Furthermore, we have developed multiple versions of\nour model with scaled parameter counts, achieving better performance than\nprevious models even with fewer parameters. Our code and trained weights will\nbe released soon.", + "authors": "Jianhao Xie, Ruofan Liao, Ziang Zhang, Sida Yi, Yuesheng Zhu, Guibo Luo", + "published": "2024-03-20", + "updated": "2024-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.18257v2", + "title": "Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation", + "abstract": "Transformers have been the most successful architecture for various speech\nmodeling tasks, including speech separation. However, the self-attention\nmechanism in transformers with quadratic complexity is inefficient in\ncomputation and memory. Recent models incorporate new layers and modules along\nwith transformers for better performance but also introduce extra model\ncomplexity. In this work, we replace transformers with Mamba, a selective state\nspace model, for speech separation. We propose dual-path Mamba, which models\nshort-term and long-term forward and backward dependency of speech signals\nusing selective state spaces. Our experimental results on the WSJ0-2mix data\nshow that our dual-path Mamba models of comparably smaller sizes outperform\nstate-of-the-art RNN model DPRNN, CNN model WaveSplit, and transformer model\nSepformer. Code: https://github.com/xi-j/Mamba-TasNet", + "authors": "Xilin Jiang, Cong Han, Nima Mesgarani", + "published": "2024-03-27", + "updated": "2024-05-01", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.12418v2", + "title": "STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model", + "abstract": "Spatial-Temporal Graph (STG) data is characterized as dynamic, heterogenous,\nand non-stationary, leading to the continuous challenge of spatial-temporal\ngraph learning. In the past few years, various GNN-based methods have been\nproposed to solely focus on mimicking the relationships among node individuals\nof the STG network, ignoring the significance of modeling the intrinsic\nfeatures that exist in STG system over time. In contrast, modern Selective\nState Space Models (SSSMs) present a new approach which treat STG Network as a\nsystem, and meticulously explore the STG system's dynamic state evolution\nacross temporal dimension. In this work, we introduce Spatial-Temporal Graph\nMamba (STG-Mamba) as the first exploration of leveraging the powerful selective\nstate space models for STG learning by treating STG Network as a system, and\nemploying the Graph Selective State Space Block (GS3B) to precisely\ncharacterize the dynamic evolution of STG networks. 
STG-Mamba is formulated as\nan Encoder-Decoder architecture, which takes GS3B as the basic module, for\nefficient sequential data modeling. Furthermore, to strengthen GNN's ability of\nmodeling STG data under the setting of SSSMs, we propose Kalman Filtering Graph\nNeural Networks (KFGN) for adaptive graph structure upgrading. KFGN smoothly\nfits in the context of selective state space evolution, and at the same time\nkeeps linear complexity. Extensive empirical studies are conducted on three\nbenchmark STG forecasting datasets, demonstrating the performance superiority\nand computational efficiency of STG-Mamba. It not only surpasses existing\nstate-of-the-art methods in terms of STG forecasting performance, but also\neffectively alleviate the computational bottleneck of large-scale graph\nnetworks in reducing the computational cost of FLOPs and test inference time.", + "authors": "Lincan Li, Hanchen Wang, Wenjie Zhang, Adelle Coster", + "published": "2024-03-19", + "updated": "2024-03-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/1801.03701v1", + "title": "Oscillatory regimes in a mosquito population model with larval feedback on egg hatching", + "abstract": "Understanding mosquitoes life cycle is of great interest presently because of\nthe increasing impact of vector borne diseases in several countries. There is\nevidence of oscillations in mosquito populations independent of seasonality,\nstill unexplained, based on observations both in laboratories and in nature. We\npropose a simple mathematical model of egg hatching enhancement by larvae which\nproduces such oscillations that conveys a possible explanation. We propose both\na theoretical analysis, based on slow-fast dynamics and Hopf bifurcation, and\nnumerical investigations in order to shed some light on the mechanisms at work\nin this model.", + "authors": "Martin Strugarek, Laetitia Dufour, Nicolas Vauchelet, Luis Almeida, Beno\u00eet Perthame, Daniel Villela", + "published": "2018-01-11", + "updated": "2018-01-11", + "primary_cat": "math.DS", + "cats": [ + "math.DS" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2401.13934v4", + "title": "MambaMorph: a Mamba-based Framework for Medical MR-CT Deformable Registration", + "abstract": "Capturing voxel-wise spatial correspondence across distinct modalities is\ncrucial for medical image analysis. However, current registration approaches\nare not practical enough in terms of registration accuracy and clinical\napplicability. In this paper, we introduce MambaMorph, a novel multi-modality\ndeformable registration framework. Specifically, MambaMorph utilizes a\nMamba-based registration module and a fine-grained, yet simple, feature\nextractor for efficient long-range correspondence modeling and high-dimensional\nfeature learning, respectively. Additionally, we develop a well-annotated brain\nMR-CT registration dataset, SR-Reg, to address the scarcity of data in\nmulti-modality registration. To validate MambaMorph's multi-modality\nregistration capabilities, we conduct quantitative experiments on both our\nSR-Reg dataset and a public T1-T2 dataset. The experimental results on both\ndatasets demonstrate that MambaMorph significantly outperforms the current\nstate-of-the-art learning-based registration methods in terms of registration\naccuracy. 
Further study underscores the efficiency of the Mamba-based\nregistration module and the lightweight feature extractor, which achieve\nnotable registration quality while maintaining reasonable computational costs\nand speeds. We believe that MambaMorph holds significant potential for\npractical applications in medical image registration. The code for MambaMorph\nis available at: https://github.com/Guo-Stone/MambaMorph.", + "authors": "Tao Guo, Yinuo Wang, Shihao Shu, Diansheng Chen, Zhouping Tang, Cai Meng, Xiangzhi Bai", + "published": "2024-01-25", + "updated": "2024-03-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.02148v3", + "title": "MiM-ISTD: Mamba-in-Mamba for Efficient Infrared Small Target Detection", + "abstract": "Recently, infrared small target detection (ISTD) has made significant\nprogress, thanks to the development of basic models. Specifically, the\nstructures combining convolutional networks with transformers can successfully\nextract both local and global features. However, the disadvantage of the\ntransformer is also inherited, i.e., the quadratic computational complexity to\nthe length of the sequence. Inspired by the recent basic model with linear\ncomplexity for long-distance modeling, called Mamba, we explore the potential\nof this state space model for ISTD task in terms of effectiveness and\nefficiency in the paper. However, directly applying Mamba achieves poor\nperformance since local features, which are critical to detecting small\ntargets, cannot be fully exploited. Instead, we tailor a Mamba-in-Mamba\n(MiM-ISTD) structure for efficient ISTD. Specifically, we treat the local\npatches as \"visual sentences\" and use the Outer Mamba to explore the global\ninformation. We then decompose each visual sentence into sub-patches as \"visual\nwords\" and use the Inner Mamba to further explore the local information among\nwords in the visual sentence with negligible computational costs. By\naggregating the word and sentence features, the MiM-ISTD can effectively\nexplore both global and local information. Experiments on NUAA-SIRST and\nIRSTD-1k show the superior accuracy and efficiency of our method. Specifically,\nMiM-ISTD is $10 \\times$ faster than the SOTA method and reduces GPU memory\nusage by 73.4$\\%$ when testing on $2048 \\times 2048$ image, overcoming the\ncomputation and memory constraints on high-resolution infrared images. Source\ncode is available at https://github.com/txchen-USTC/MiM-ISTD.", + "authors": "Tianxiang Chen, Zhentao Tan, Tao Gong, Qi Chu, Yue Wu, Bin Liu, Jieping Ye, Nenghai Yu", + "published": "2024-03-04", + "updated": "2024-03-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2308.03643v1", + "title": "Mamba: Bringing Multi-Dimensional ABR to WebRTC", + "abstract": "Contemporary real-time video communication systems, such as WebRTC, use an\nadaptive bitrate (ABR) algorithm to assure high-quality and low-delay services,\ne.g., promptly adjusting video bitrate according to the instantaneous network\nbandwidth. However, target bitrate decisions in the network and bitrate control\nin the codec are typically incoordinated and simply ignoring the effect of\ninappropriate resolution and frame rate settings also leads to compromised\nresults in bitrate control, thus devastatingly deteriorating the quality of\nexperience (QoE). 
To tackle these challenges, Mamba, an end-to-end\nmulti-dimensional ABR algorithm is proposed, which utilizes multi-agent\nreinforcement learning (MARL) to maximize the user's QoE by adaptively and\ncollaboratively adjusting encoding factors including the quantization\nparameters (QP), resolution, and frame rate based on observed states such as\nnetwork conditions and video complexity information in a video conferencing\nsystem. We also introduce curriculum learning to improve the training\nefficiency of MARL. Both the in-lab and real-world evaluation results\ndemonstrate the remarkable efficacy of Mamba.", + "authors": "Yueheng Li, Zicheng Zhang, Hao Chen, Zhan Ma", + "published": "2023-08-07", + "updated": "2023-08-07", + "primary_cat": "cs.MM", + "cats": [ + "cs.MM" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.09293v1", + "title": "A Novel State Space Model with Local Enhancement and State Sharing for Image Fusion", + "abstract": "In image fusion tasks, images from different sources possess distinct\ncharacteristics. This has driven the development of numerous methods to explore\nbetter ways of fusing them while preserving their respective characteristics.\nMamba, as a state space model, has emerged in the field of natural language\nprocessing. Recently, many studies have attempted to extend Mamba to vision\ntasks. However, due to the nature of images different from casual language\nsequences, the limited state capacity of Mamba weakens its ability to model\nimage information. Additionally, the sequence modeling ability of Mamba is only\ncapable of spatial information and cannot effectively capture the rich spectral\ninformation in images. Motivated by these challenges, we customize and improve\nthe vision Mamba network designed for the image fusion task. Specifically, we\npropose the local-enhanced vision Mamba block, dubbed as LEVM. The LEVM block\ncan improve local information perception of the network and simultaneously\nlearn local and global spatial information. Furthermore, we propose the state\nsharing technique to enhance spatial details and integrate spatial and spectral\ninformation. Finally, the overall network is a multi-scale structure based on\nvision Mamba, called LE-Mamba. Extensive experiments show the proposed methods\nachieve state-of-the-art results on multispectral pansharpening and\nmultispectral and hyperspectral image fusion datasets, and demonstrate the\neffectiveness of the proposed approach. Code will be made available.", + "authors": "Zihan Cao, Xiao Wu, Liang-Jian Deng, Yu Zhong", + "published": "2024-04-14", + "updated": "2024-04-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.03302v2", + "title": "Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining", + "abstract": "Accurate medical image segmentation demands the integration of multi-scale\ninformation, spanning from local features to global dependencies. However, it\nis challenging for existing methods to model long-range global information,\nwhere convolutional neural networks (CNNs) are constrained by their local\nreceptive fields, and vision transformers (ViTs) suffer from high quadratic\ncomplexity of their attention mechanism. 
Recently, Mamba-based models have\ngained great attention for their impressive ability in long sequence modeling.\nSeveral studies have demonstrated that these models can outperform popular\nvision models in various tasks, offering higher accuracy, lower memory\nconsumption, and less computational burden. However, existing Mamba-based\nmodels are mostly trained from scratch and do not explore the power of\npretraining, which has been proven to be quite effective for data-efficient\nmedical image analysis. This paper introduces a novel Mamba-based model,\nSwin-UMamba, designed specifically for medical image segmentation tasks,\nleveraging the advantages of ImageNet-based pretraining. Our experimental\nresults reveal the vital role of ImageNet-based training in enhancing the\nperformance of Mamba-based models. Swin-UMamba demonstrates superior\nperformance with a large margin compared to CNNs, ViTs, and latest Mamba-based\nmodels. Notably, on AbdomenMRI, Encoscopy, and Microscopy datasets, Swin-UMamba\noutperforms its closest counterpart U-Mamba_Enc by an average score of 2.72%.", + "authors": "Jiarun Liu, Hao Yang, Hong-Yu Zhou, Yan Xi, Lequan Yu, Yizhou Yu, Yong Liang, Guangming Shi, Shaoting Zhang, Hairong Zheng, Shanshan Wang", + "published": "2024-02-05", + "updated": "2024-03-06", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.20035v3", + "title": "UltraLight VM-UNet: Parallel Vision Mamba Significantly Reduces Parameters for Skin Lesion Segmentation", + "abstract": "Traditionally for improving the segmentation performance of models, most\napproaches prefer to use adding more complex modules. And this is not suitable\nfor the medical field, especially for mobile medical devices, where\ncomputationally loaded models are not suitable for real clinical environments\ndue to computational resource constraints. Recently, state-space models (SSMs),\nrepresented by Mamba, have become a strong competitor to traditional CNNs and\nTransformers. In this paper, we deeply explore the key elements of parameter\ninfluence in Mamba and propose an UltraLight Vision Mamba UNet (UltraLight\nVM-UNet) based on this. Specifically, we propose a method for processing\nfeatures in parallel Vision Mamba, named PVM Layer, which achieves excellent\nperformance with the lowest computational load while keeping the overall number\nof processing channels constant. We conducted comparisons and ablation\nexperiments with several state-of-the-art lightweight models on three skin\nlesion public datasets and demonstrated that the UltraLight VM-UNet exhibits\nthe same strong performance competitiveness with parameters of only 0.049M and\nGFLOPs of 0.060. 
In addition, this study deeply explores the key elements of\nparameter influence in Mamba, which will lay a theoretical foundation for Mamba\nto possibly become a new mainstream module for lightweighting in the future.\nThe code is available from https://github.com/wurenkai/UltraLight-VM-UNet .", + "authors": "Renkai Wu, Yinghao Liu, Pengchen Liang, Qing Chang", + "published": "2024-03-29", + "updated": "2024-04-24", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2405.04964v1", + "title": "Frequency-Assisted Mamba for Remote Sensing Image Super-Resolution", + "abstract": "Recent progress in remote sensing image (RSI) super-resolution (SR) has\nexhibited remarkable performance using deep neural networks, e.g.,\nConvolutional Neural Networks and Transformers. However, existing SR methods\noften suffer from either a limited receptive field or quadratic computational\noverhead, resulting in sub-optimal global representation and unacceptable\ncomputational costs in large-scale RSI. To alleviate these issues, we develop\nthe first attempt to integrate the Vision State Space Model (Mamba) for RSI-SR,\nwhich specializes in processing large-scale RSI by capturing long-range\ndependency with linear complexity. To achieve better SR reconstruction,\nbuilding upon Mamba, we devise a Frequency-assisted Mamba framework, dubbed\nFMSR, to explore the spatial and frequent correlations. In particular, our FMSR\nfeatures a multi-level fusion architecture equipped with the Frequency\nSelection Module (FSM), Vision State Space Module (VSSM), and Hybrid Gate\nModule (HGM) to grasp their merits for effective spatial-frequency fusion.\nRecognizing that global and local dependencies are complementary and both\nbeneficial for SR, we further recalibrate these multi-level features for\naccurate feature fusion via learnable scaling adaptors. Extensive experiments\non AID, DOTA, and DIOR benchmarks demonstrate that our FMSR outperforms\nstate-of-the-art Transformer-based methods HAT-L in terms of PSNR by 0.11 dB on\naverage, while consuming only 28.05% and 19.08% of its memory consumption and\ncomplexity, respectively.", + "authors": "Yi Xiao, Qiangqiang Yuan, Kui Jiang, Yuzeng Chen, Qiang Zhang, Chia-Wen Lin", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.13600v1", + "title": "VL-Mamba: Exploring State Space Models for Multimodal Learning", + "abstract": "Multimodal large language models (MLLMs) have attracted widespread interest\nand have rich applications. However, the inherent attention mechanism in its\nTransformer structure requires quadratic complexity and results in expensive\ncomputational overhead. Therefore, in this work, we propose VL-Mamba, a\nmultimodal large language model based on state space models, which have been\nshown to have great potential for long-sequence modeling with fast inference\nand linear scaling in sequence length. Specifically, we first replace the\ntransformer-based backbone language model such as LLama or Vicuna with the\npre-trained Mamba language model. Then, we empirically explore how to\neffectively apply the 2D vision selective scan mechanism for multimodal\nlearning and the combinations of different vision encoders and variants of\npretrained Mamba language models. 
The extensive experiments on diverse\nmultimodal benchmarks with competitive performance show the effectiveness of\nour proposed VL-Mamba and demonstrate the great potential of applying state\nspace models for multimodal learning tasks.", + "authors": "Yanyuan Qiao, Zheng Yu, Longteng Guo, Sihan Chen, Zijia Zhao, Mingzhen Sun, Qi Wu, Jing Liu", + "published": "2024-03-20", + "updated": "2024-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.06467v2", + "title": "Point Mamba: A Novel Point Cloud Backbone Based on State Space Model with Octree-Based Ordering Strategy", + "abstract": "Recently, state space model (SSM) has gained great attention due to its\npromising performance, linear complexity, and long sequence modeling ability in\nboth language and image domains. However, it is non-trivial to extend SSM to\nthe point cloud field, because of the causality requirement of SSM and the\ndisorder and irregularity nature of point clouds. In this paper, we propose a\nnovel SSM-based point cloud processing backbone, named Point Mamba, with a\ncausality-aware ordering mechanism. To construct the causal dependency\nrelationship, we design an octree-based ordering strategy on raw irregular\npoints, globally sorting points in a z-order sequence and also retaining their\nspatial proximity. Our method achieves state-of-the-art performance compared\nwith transformer-based counterparts, with 93.4% accuracy and 75.7 mIOU\nrespectively on the ModelNet40 classification dataset and ScanNet semantic\nsegmentation dataset. Furthermore, our Point Mamba has linear complexity, which\nis more efficient than transformer-based methods. Our method demonstrates the\ngreat potential that SSM can serve as a generic backbone in point cloud\nunderstanding. Codes are released at https://github.com/IRMVLab/Point-Mamba.", + "authors": "Jiuming Liu, Ruiji Yu, Yian Wang, Yu Zheng, Tianchen Deng, Weicai Ye, Hesheng Wang", + "published": "2024-03-11", + "updated": "2024-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2302.08304v2", + "title": "PandA(Box) flies on Bluesky: maintainable and user-friendly fly scans with Mamba at HEPS", + "abstract": "Purpose: Fly scans are indispensible in many experiments at the High Energy\nPhoton Source (HEPS). PandABox, the main platform to implement fly scans at\nHEPS, needs to be integrated into Mamba, the experiment control system\ndeveloped at HEPS based on Bluesky. Methods: In less than 600 lines of easily\ncustomisable and extensible backend code, provided are full control of\nPandABox's TCP server in native ophyd, automated configuration (also including\nwiring) of \"PandA blocks\" for constant-speed mapping experiments of various\ndimensions, as well as generation of scans deliberately fragmented to deal with\nhardware limits in numbers of exposure frames or sequencer table entries.\nResults: The upper-level control system for PandABox has been ported to\nBluesky, enabling the combination of both components' flexibility in fly-scan\napplications. 
Based on this backend, a user-friendly Mamba frontend is\ndeveloped for X-ray fluorescence (XRF) mapping experiments, which provides\nfully online visual feedback.", + "authors": "Peng-Cheng Li, Cheng-Long Zhang, Yu-Jun Zhang, Chun Li, Zhi-Ying Guo, Ge Lei, Yi Zhang, Ai-Yu Zhou, Xiao-Xue Bi, Yu Liu", + "published": "2023-02-16", + "updated": "2023-08-09", + "primary_cat": "physics.ins-det", + "cats": [ + "physics.ins-det", + "hep-ex" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.00789v1", + "title": "Graph-Mamba: Towards Long-Range Graph Sequence Modeling with Selective State Spaces", + "abstract": "Attention mechanisms have been widely used to capture long-range dependencies\namong nodes in Graph Transformers. Bottlenecked by the quadratic computational\ncost, attention mechanisms fail to scale in large graphs. Recent improvements\nin computational efficiency are mainly achieved by attention sparsification\nwith random or heuristic-based graph subsampling, which falls short in\ndata-dependent context reasoning. State space models (SSMs), such as Mamba,\nhave gained prominence for their effectiveness and efficiency in modeling\nlong-range dependencies in sequential data. However, adapting SSMs to\nnon-sequential graph data presents a notable challenge. In this work, we\nintroduce Graph-Mamba, the first attempt to enhance long-range context modeling\nin graph networks by integrating a Mamba block with the input-dependent node\nselection mechanism. Specifically, we formulate graph-centric node\nprioritization and permutation strategies to enhance context-aware reasoning,\nleading to a substantial improvement in predictive performance. Extensive\nexperiments on ten benchmark datasets demonstrate that Graph-Mamba outperforms\nstate-of-the-art methods in long-range graph prediction tasks, with a fraction\nof the computational cost in both FLOPs and GPU memory consumption. The code\nand models are publicly available at https://github.com/bowang-lab/Graph-Mamba.", + "authors": "Chloe Wang, Oleksii Tsepa, Jun Ma, Bo Wang", + "published": "2024-02-01", + "updated": "2024-02-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.09476v1", + "title": "FreqMamba: Viewing Mamba from a Frequency Perspective for Image Deraining", + "abstract": "Images corrupted by rain streaks often lose vital frequency information for\nperception, and image deraining aims to solve this issue which relies on global\nand local degradation modeling. Recent studies have witnessed the effectiveness\nand efficiency of Mamba for perceiving global and local information based on\nits exploiting local correlation among patches, however, rarely attempts have\nbeen explored to extend it with frequency analysis for image deraining,\nlimiting its ability to perceive global degradation that is relevant to\nfrequency modeling (e.g. Fourier transform). In this paper, we propose\nFreqMamba, an effective and efficient paradigm that leverages the complementary\nbetween Mamba and frequency analysis for image deraining. 
The core of our\nmethod lies in extending Mamba with frequency analysis from two perspectives:\nextending it with frequency-band for exploiting frequency correlation, and\nconnecting it with Fourier transform for global degradation modeling.\nSpecifically, FreqMamba introduces complementary triple interaction structures\nincluding spatial Mamba, frequency band Mamba, and Fourier global modeling.\nFrequency band Mamba decomposes the image into sub-bands of different\nfrequencies to allow 2D scanning from the frequency dimension. Furthermore,\nleveraging Mamba's unique data-dependent properties, we use rainy images at\ndifferent scales to provide degradation priors to the network, thereby\nfacilitating efficient training. Extensive experiments show that our method\noutperforms state-of-the-art methods both visually and quantitatively.", + "authors": "Zou Zhen, Yu Hu, Zhao Feng", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.16877v1", + "title": "Proprioception Is All You Need: Terrain Classification for Boreal Forests", + "abstract": "Recent works in field robotics highlighted the importance of resiliency\nagainst different types of terrains. Boreal forests, in particular, are home to\nmany mobility-impeding terrains that should be considered for off-road\nautonomous navigation. Also, being one of the largest land biomes on Earth,\nboreal forests are an area where autonomous vehicles are expected to become\nincreasingly common. In this paper, we address this issue by introducing\nBorealTC, a publicly available dataset for proprioceptive-based terrain\nclassification (TC). Recorded with a Husky A200, our dataset contains 116 min\nof Inertial Measurement Unit (IMU), motor current, and wheel odometry data,\nfocusing on typical boreal forest terrains, notably snow, ice, and silty loam.\nCombining our dataset with another dataset from the state-of-the-art, we\nevaluate both a Convolutional Neural Network (CNN) and the novel state space\nmodel (SSM)-based Mamba architecture on a TC task. Interestingly, we show that\nwhile CNN outperforms Mamba on each separate dataset, Mamba achieves greater\naccuracy when trained on a combination of both. In addition, we demonstrate\nthat Mamba's learning capacity is greater than a CNN for increasing amounts of\ndata. We show that the combination of two TC datasets yields a latent space\nthat can be interpreted with the properties of the terrains. We also discuss\nthe implications of merging datasets on classification. Our source code and\ndataset are publicly available online:\nhttps://github.com/norlab-ulaval/BorealTC.", + "authors": "Damien LaRocque, William Guimont-Martin, David-Alexandre Duclos, Philippe Gigu\u00e8re, Fran\u00e7ois Pomerleau", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG", + "eess.SP" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.07645v1", + "title": "Simba: Mamba augmented U-ShiftGCN for Skeletal Action Recognition in Videos", + "abstract": "Skeleton Action Recognition (SAR) involves identifying human actions using\nskeletal joint coordinates and their interconnections. While plain Transformers\nhave been attempted for this task, they still fall short compared to the\ncurrent leading methods, which are rooted in Graph Convolutional Networks\n(GCNs) due to the absence of structural priors. 
Recently, a novel selective\nstate space model, Mamba, has surfaced as a compelling alternative to the\nattention mechanism in Transformers, offering efficient modeling of long\nsequences. In this work, to the utmost extent of our awareness, we present the\nfirst SAR framework incorporating Mamba. Each fundamental block of our model\nadopts a novel U-ShiftGCN architecture with Mamba as its core component. The\nencoder segment of the U-ShiftGCN is devised to extract spatial features from\nthe skeletal data using downsampling vanilla Shift S-GCN blocks. These spatial\nfeatures then undergo intermediate temporal modeling facilitated by the Mamba\nblock before progressing to the encoder section, which comprises vanilla\nupsampling Shift S-GCN blocks. Additionally, a Shift T-GCN (ShiftTCN) temporal\nmodeling unit is employed before the exit of each fundamental block to refine\ntemporal representations. This particular integration of downsampling spatial,\nintermediate temporal, upsampling spatial, and ultimate temporal subunits\nyields promising results for skeleton action recognition. We dub the resulting\nmodel \\textbf{Simba}, which attains state-of-the-art performance across three\nwell-known benchmark skeleton action recognition datasets: NTU RGB+D, NTU RGB+D\n120, and Northwestern-UCLA. Interestingly, U-ShiftGCN (Simba without\nIntermediate Mamba Block) by itself is capable of performing reasonably well\nand surpasses our baseline.", + "authors": "Soumyabrata Chaudhuri, Saumik Bhattacharya", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.07245v2", + "title": "Semi-Mamba-UNet: Pixel-Level Contrastive and Pixel-Level Cross-Supervised Visual Mamba-based UNet for Semi-Supervised Medical Image Segmentation", + "abstract": "Medical image segmentation is essential in diagnostics, treatment planning,\nand healthcare, with deep learning offering promising advancements. Notably,\nConvolutional Neural Network (CNN) excel in capturing local image features,\nwhereas Vision Transformer (ViT) adeptly model long-range dependencies through\nmulti-head self-attention mechanisms. Despite their strengths, both CNN and ViT\nface challenges in efficiently processing long-range dependencies within\nmedical images, often requiring substantial computational resources. This\nissue, combined with the high cost and limited availability of expert\nannotations, poses significant obstacles to achieving precise segmentation. To\naddress these challenges, this paper introduces the Semi-Mamba-UNet, which\nintegrates a visual mamba-based UNet architecture with a conventional UNet into\na semi-supervised learning (SSL) framework. This innovative SSL approach\nleverages dual networks to jointly generate pseudo labels and cross supervise\neach other, drawing inspiration from consistency regularization techniques.\nFurthermore, we introduce a self-supervised pixel-level contrastive learning\nstrategy, employing a projector pair to further enhance feature learning\ncapabilities. Our comprehensive evaluation on a publicly available MRI cardiac\nsegmentation dataset, comparing against various SSL frameworks with different\nUNet-based segmentation networks, highlights the superior performance of\nSemi-Mamba-UNet. 
The source code has been made publicly accessible.", + "authors": "Chao Ma, Ziyang Wang", + "published": "2024-02-11", + "updated": "2024-03-29", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2405.01828v2", + "title": "FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space", + "abstract": "Facial Expression Recognition (FER) plays a pivotal role in understanding\nhuman emotional cues. However, traditional FER methods based on visual\ninformation have some limitations, such as preprocessing, feature extraction,\nand multi-stage classification procedures. These not only increase\ncomputational complexity but also require a significant amount of computing\nresources. Considering Convolutional Neural Network (CNN)-based FER schemes\nfrequently prove inadequate in identifying the deep, long-distance dependencies\nembedded within facial expression images, and the Transformer's inherent\nquadratic computational complexity, this paper presents the FER-YOLO-Mamba\nmodel, which integrates the principles of Mamba and YOLO technologies to\nfacilitate efficient coordination in facial expression image recognition and\nlocalization. Within the FER-YOLO-Mamba model, we further devise a FER-YOLO-VSS\ndual-branch module, which combines the inherent strengths of convolutional\nlayers in local feature extraction with the exceptional capability of State\nSpace Models (SSMs) in revealing long-distance dependencies. To the best of our\nknowledge, this is the first Vision Mamba model designed for facial expression\ndetection and classification. To evaluate the performance of the proposed\nFER-YOLO-Mamba model, we conducted experiments on two benchmark datasets,\nRAF-DB and SFEW. The experimental results indicate that the FER-YOLO-Mamba\nmodel achieved better results compared to other models. The code is available\nfrom https://github.com/SwjtuMa/FER-YOLO-Mamba.", + "authors": "Hui Ma, Sen Lei, Turgay Celik, Heng-Chao Li", + "published": "2024-05-03", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2203.17236v1", + "title": "Mamba: a systematic software solution for beamline experiments at HEPS", + "abstract": "To cater for the diverse experiment requirements at the High Energy Photon\nSource (HEPS) with often limited human resources, Bluesky is chosen as the\nbasis for our software framework, Mamba. In our attempt to address Bluesky's\nlack of integrated GUIs, command injection with feedback is chosen as the main\nway for the GUIs to cooperate with the CLI; a RPC service is provided, which\nalso covers functionalities unsuitable for command injection, as well as\npushing of status updates. In order to fully support high-frequency\napplications like fly scans, Bluesky's support for asynchronous control is\nbeing improved; to support high-throughput experiments, Mamba Data Worker (MDW)\nis being developed to cover the complexity in asynchronous online data\nprocessing for these experiments. 
To systematically simplify the specification\nof metadata, scan parameters and data-processing graphs for each type of\nexperiments, an experiment parameter generator (EPG) will be developed;\nexperiment-specific modules to automate preparation steps will also be made.\nThe integration of off-the-shelf code in Mamba for domain-specific needs is\nunder investigation, and Mamba GUI Studio (MGS) is being developed to simplify\nthe implementation and integration of GUIs.", + "authors": "Yu Liu, Yan-Da Geng, Xiao-Xue Bi, Xiang Li, Ye Tao, Jian-She Cao, Yu-Hui Dong, Yi Zhang", + "published": "2022-03-28", + "updated": "2022-03-28", + "primary_cat": "physics.ins-det", + "cats": [ + "physics.ins-det", + "hep-ex", + "physics.acc-ph" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.01590v2", + "title": "The Hidden Attention of Mamba Models", + "abstract": "The Mamba layer offers an efficient selective state space model (SSM) that is\nhighly effective in modeling multiple domains, including NLP, long-range\nsequence processing, and computer vision. Selective SSMs are viewed as dual\nmodels, in which one trains in parallel on the entire sequence via an IO-aware\nparallel scan, and deploys in an autoregressive manner. We add a third view and\nshow that such models can be viewed as attention-driven models. This new\nperspective enables us to empirically and theoretically compare the underlying\nmechanisms to that of the self-attention layers in transformers and allows us\nto peer inside the inner workings of the Mamba model with explainability\nmethods. Our code is publicly available.", + "authors": "Ameen Ali, Itamar Zimerman, Lior Wolf", + "published": "2024-03-03", + "updated": "2024-03-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "F.2.2, I.2.7", + "F.2.2; I.2.7" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.04256v1", + "title": "Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation", + "abstract": "Multi-modal semantic segmentation significantly enhances AI agents'\nperception and scene understanding, especially under adverse conditions like\nlow-light or overexposed environments. Leveraging additional modalities\n(X-modality) like thermal and depth alongside traditional RGB provides\ncomplementary information, enabling more robust and reliable segmentation. In\nthis work, we introduce Sigma, a Siamese Mamba network for multi-modal semantic\nsegmentation, utilizing the Selective Structured State Space Model, Mamba.\nUnlike conventional methods that rely on CNNs, with their limited local\nreceptive fields, or Vision Transformers (ViTs), which offer global receptive\nfields at the cost of quadratic complexity, our model achieves global receptive\nfields coverage with linear complexity. By employing a Siamese encoder and\ninnovating a Mamba fusion mechanism, we effectively select essential\ninformation from different modalities. A decoder is then developed to enhance\nthe channel-wise modeling ability of the model. Our method, Sigma, is\nrigorously evaluated on both RGB-Thermal and RGB-Depth segmentation tasks,\ndemonstrating its superiority and marking the first successful application of\nState Space Models (SSMs) in multi-modal perception tasks. 
Code is available at\nhttps://github.com/zifuwan/Sigma.", + "authors": "Zifu Wan, Yuhao Wang, Silong Yong, Pingping Zhang, Simon Stepputtis, Katia Sycara, Yaqi Xie", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/1604.04177v1", + "title": "Traveling Pulses for a Two-Species Chemotaxis Model", + "abstract": "Mathematical models have been widely used to describe the collective movement\nof bacteria by chemotaxis. In particular, bacterial concentration waves\ntraveling in a narrow channel have been experimentally observed and can be\nprecisely described thanks to a mathematical model at the macroscopic scale.\nSuch model was derived in [1] using a kinetic model based on an accurate\ndescription of the mesoscopic run-and-tumble process. We extend this approach\nto study the behavior of the interaction between two populations of E. Coli.\nSeparately, each population travels with its own speed in the channel. When put\ntogether, a synchronization of the speed of the traveling pulses can be\nobserved. We show that this synchronization depends on the fraction of the fast\npopulation. Our approach is based on mathematical analysis of a macroscopic\nmodel of partial differential equations. Numerical simulations in comparison\nwith experimental observations show qualitative agreement.", + "authors": "Casimir Emako, Charl\u00e8ne Gayrard, Axel Buguin, Lu\u00eds Neves de Almeida, Nicolas Vauchelet", + "published": "2016-04-14", + "updated": "2016-04-14", + "primary_cat": "math.AP", + "cats": [ + "math.AP", + "math.NA", + "q-bio.CB" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2405.03008v1", + "title": "DVMSR: Distillated Vision Mamba for Efficient Super-Resolution", + "abstract": "Efficient Image Super-Resolution (SR) aims to accelerate SR network inference\nby minimizing computational complexity and network parameters while preserving\nperformance. Existing state-of-the-art Efficient Image Super-Resolution methods\nare based on convolutional neural networks. Few attempts have been made with\nMamba to harness its long-range modeling capability and efficient computational\ncomplexity, which have shown impressive performance on high-level vision tasks.\nIn this paper, we propose DVMSR, a novel lightweight Image SR network that\nincorporates Vision Mamba and a distillation strategy. The network of DVMSR\nconsists of three modules: feature extraction convolution, multiple stacked\nResidual State Space Blocks (RSSBs), and a reconstruction module. Specifically,\nthe deep feature extraction module is composed of several residual state space\nblocks (RSSB), each of which has several Vision Mamba Moudles(ViMM) together\nwith a residual connection. To achieve efficiency improvement while maintaining\ncomparable performance, we employ a distillation strategy to the vision Mamba\nnetwork for superior performance. Specifically, we leverage the rich\nrepresentation knowledge of teacher network as additional supervision for the\noutput of lightweight student networks. Extensive experiments have demonstrated\nthat our proposed DVMSR can outperform state-of-the-art efficient SR methods in\nterms of model parameters while maintaining the performance of both PSNR and\nSSIM. 
The source code is available at https://github.com/nathan66666/DVMSR.git", + "authors": "Xiaoyan Lei, Wenlong ZHang, Weifeng Cao", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.13802v2", + "title": "ZigMa: A DiT-style Zigzag Mamba Diffusion Model", + "abstract": "The diffusion model has long been plagued by scalability and quadratic\ncomplexity issues, especially within transformer-based structures. In this\nstudy, we aim to leverage the long sequence modeling capability of a\nState-Space Model called Mamba to extend its applicability to visual data\ngeneration. Firstly, we identify a critical oversight in most current\nMamba-based vision methods, namely the lack of consideration for spatial\ncontinuity in the scan scheme of Mamba. Secondly, building upon this insight,\nwe introduce a simple, plug-and-play, zero-parameter method named Zigzag Mamba,\nwhich outperforms Mamba-based baselines and demonstrates improved speed and\nmemory utilization compared to transformer-based baselines. Lastly, we\nintegrate Zigzag Mamba with the Stochastic Interpolant framework to investigate\nthe scalability of the model on large-resolution visual datasets, such as\nFacesHQ $1024\times 1024$ and UCF101, MultiModal-CelebA-HQ, and MS COCO\n$256\times 256$ . Code will be released at https://taohu.me/zigma/", + "authors": "Vincent Tao Hu, Stefan Andreas Baumann, Ming Gui, Olga Grebenkova, Pingchuan Ma, Johannes Fischer, Bj\u00f6rn Ommer", + "published": "2024-03-20", + "updated": "2024-04-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.17432v1", + "title": "Integrating Mamba Sequence Model and Hierarchical Upsampling Network for Accurate Semantic Segmentation of Multiple Sclerosis Legion", + "abstract": "Integrating components from convolutional neural networks and state space\nmodels in medical image segmentation presents a compelling approach to enhance\naccuracy and efficiency. We introduce Mamba HUNet, a novel architecture\ntailored for robust and efficient segmentation tasks. Leveraging strengths from\nMamba UNet and the lighter version of Hierarchical Upsampling Network (HUNet),\nMamba HUNet combines convolutional neural networks local feature extraction\npower with state space models long range dependency modeling capabilities. We\nfirst converted HUNet into a lighter version, maintaining performance parity\nand then integrated this lighter HUNet into Mamba HUNet, further enhancing its\nefficiency. The architecture partitions input grayscale images into patches,\ntransforming them into 1D sequences for processing efficiency akin to Vision\nTransformers and Mamba models. Through Visual State Space blocks and patch\nmerging layers, hierarchical features are extracted while preserving spatial\ninformation. Experimental results on publicly available Magnetic Resonance\nImaging scans, notably in Multiple Sclerosis lesion segmentation, demonstrate\nMamba HUNet's effectiveness across diverse segmentation tasks. The model's\nrobustness and flexibility underscore its potential in handling complex\nanatomical structures. These findings establish Mamba HUNet as a promising\nsolution in advancing medical image segmentation, with implications for\nimproving clinical decision making processes.", + "authors": "Kazi Shahriar Sanjid, Md. Tanzim Hossain, Md. Shakib Shahariar Junayed, Dr. Mohammad Monir Uddin", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.11375v1", + "title": "Text-controlled Motion Mamba: Text-Instructed Temporal Grounding of Human Motion", + "abstract": "Human motion understanding is a fundamental task with diverse practical\napplications, facilitated by the availability of large-scale motion capture\ndatasets. Recent studies focus on text-motion tasks, such as text-based motion\ngeneration, editing and question answering. In this study, we introduce the\nnovel task of text-based human motion grounding (THMG), aimed at precisely\nlocalizing temporal segments corresponding to given textual descriptions within\nuntrimmed motion sequences. Capturing global temporal information is crucial\nfor the THMG task. However, transformer-based models that rely on global\ntemporal self-attention face challenges when handling long untrimmed sequences\ndue to the quadratic computational cost. We address these challenges by\nproposing Text-controlled Motion Mamba (TM-Mamba), a unified model that\nintegrates temporal global context, language query control, and spatial graph\ntopology with only linear memory cost. The core of the model is a\ntext-controlled selection mechanism which dynamically incorporates global\ntemporal information based on text query. The model is further enhanced to be\ntopology-aware through the integration of relational embeddings. For\nevaluation, we introduce BABEL-Grounding, the first text-motion dataset that\nprovides detailed textual descriptions of human actions along with their\ncorresponding temporal segments. Extensive evaluations demonstrate the\neffectiveness of TM-Mamba on BABEL-Grounding.", + "authors": "Xinghan Wang, Zixi Kang, Yadong Mu", + "published": "2024-04-17", + "updated": "2024-04-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2007.00795v2", + "title": "Policy Improvement via Imitation of Multiple Oracles", + "abstract": "Despite its promise, reinforcement learning's real-world adoption has been\nhampered by the need for costly exploration to learn a good policy. Imitation\nlearning (IL) mitigates this shortcoming by using an oracle policy during\ntraining as a bootstrap to accelerate the learning process. However, in many\npractical situations, the learner has access to multiple suboptimal oracles,\nwhich may provide conflicting advice in a state. The existing IL literature\nprovides a limited treatment of such scenarios. Whereas in the single-oracle\ncase, the return of the oracle's policy provides an obvious benchmark for the\nlearner to compete against, neither such a benchmark nor principled ways of\noutperforming it are known for the multi-oracle setting. In this paper, we\npropose the state-wise maximum of the oracle policies' values as a natural\nbaseline to resolve conflicting advice from multiple oracles. Using a reduction\nof policy optimization to online learning, we introduce a novel IL algorithm\nMAMBA, which can provably learn a policy competitive with this benchmark. In\nparticular, MAMBA optimizes policies by using a gradient estimator in the style\nof generalized advantage estimation (GAE).
Our theoretical analysis shows that\nthis design makes MAMBA robust and enables it to outperform the oracle policies\nby a larger margin than the IL state of the art, even in the single-oracle\ncase. In an evaluation against standard policy gradient with GAE and\nAggreVaTe(D), we showcase MAMBA's ability to leverage demonstrations both from\na single and from multiple weak oracles, and significantly speed up policy\noptimization.", + "authors": "Ching-An Cheng, Andrey Kolobov, Alekh Agarwal", + "published": "2020-07-01", + "updated": "2020-12-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.05079v2", + "title": "Mamba-UNet: UNet-Like Pure Visual Mamba for Medical Image Segmentation", + "abstract": "In recent advancements in medical image analysis, Convolutional Neural\nNetworks (CNN) and Vision Transformers (ViT) have set significant benchmarks.\nWhile the former excels in capturing local features through its convolution\noperations, the latter achieves remarkable global context understanding by\nleveraging self-attention mechanisms. However, both architectures exhibit\nlimitations in efficiently modeling long-range dependencies within medical\nimages, which is a critical aspect for precise segmentation. Inspired by the\nMamba architecture, known for its proficiency in handling long sequences and\nglobal contextual information with enhanced computational efficiency as a State\nSpace Model (SSM), we propose Mamba-UNet, a novel architecture that synergizes\nthe U-Net in medical image segmentation with Mamba's capability. Mamba-UNet\nadopts a pure Visual Mamba (VMamba)-based encoder-decoder structure, infused\nwith skip connections to preserve spatial information across different scales\nof the network. This design facilitates a comprehensive feature learning\nprocess, capturing intricate details and broader semantic contexts within\nmedical images. We introduce a novel integration mechanism within the VMamba\nblocks to ensure seamless connectivity and information flow between the encoder\nand decoder paths, enhancing the segmentation performance. We conducted\nexperiments on publicly available ACDC MRI Cardiac segmentation dataset, and\nSynapse CT Abdomen segmentation dataset. The results show that Mamba-UNet\noutperforms several types of UNet in medical image segmentation under the same\nhyper-parameter setting. The source code and baseline implementations are\navailable.", + "authors": "Ziyang Wang, Jian-Qing Zheng, Yichi Zhang, Ge Cui, Lei Li", + "published": "2024-02-07", + "updated": "2024-03-30", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.08479v1", + "title": "MD-Dose: A Diffusion Model based on the Mamba for Radiotherapy Dose Prediction", + "abstract": "Radiation therapy is crucial in cancer treatment. Experienced experts\ntypically iteratively generate high-quality dose distribution maps, forming the\nbasis for excellent radiation therapy plans. Therefore, automated prediction of\ndose distribution maps is significant in expediting the treatment process and\nproviding a better starting point for developing radiation therapy plans. With\nthe remarkable results of diffusion models in predicting high-frequency regions\nof dose distribution maps, dose prediction methods based on diffusion models\nhave been extensively studied. 
However, existing methods mainly utilize CNNs or\nTransformers as denoising networks. CNNs lack the capture of global receptive\nfields, resulting in suboptimal prediction performance. Transformers excel in\nglobal modeling but face quadratic complexity with image size, resulting in\nsignificant computational overhead. To tackle these challenges, we introduce a\nnovel diffusion model, MD-Dose, based on the Mamba architecture for predicting\nradiation therapy dose distribution in thoracic cancer patients. In the forward\nprocess, MD-Dose adds Gaussian noise to dose distribution maps to obtain pure\nnoise images. In the backward process, MD-Dose utilizes a noise predictor based\non the Mamba to predict the noise, ultimately outputting the dose distribution\nmaps. Furthermore, We develop a Mamba encoder to extract structural information\nand integrate it into the noise predictor for localizing dose regions in the\nplanning target volume (PTV) and organs at risk (OARs). Through extensive\nexperiments on a dataset of 300 thoracic tumor patients, we showcase the\nsuperiority of MD-Dose in various metrics and time consumption.", + "authors": "Linjie Fu, Xia Li, Xiuding Cai, Yingkai Wang, Xueyao Wang, Yali Shen, Yu Yao", + "published": "2024-03-13", + "updated": "2024-03-13", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV", + "physics.med-ph" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2401.04081v2", + "title": "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts", + "abstract": "State Space Models (SSMs) have become serious contenders in the field of\nsequential modeling, challenging the dominance of Transformers. At the same\ntime, Mixture of Experts (MoE) has significantly improved Transformer-based\nLarge Language Models, including recent state-of-the-art open models. We\npropose that to unlock the potential of SSMs for scaling, they should be\ncombined with MoE. We showcase this on Mamba, a recent SSM-based model that\nachieves remarkable performance. Our model, MoE-Mamba, outperforms both Mamba\nand baseline Transformer-MoE. In particular, MoE-Mamba reaches the same\nperformance as Mamba in $2.35\\times$ fewer training steps while preserving the\ninference performance gains of Mamba against Transformer.", + "authors": "Maciej Pi\u00f3ro, Kamil Ciebiera, Krystian Kr\u00f3l, Jan Ludziejewski, Micha\u0142 Krutul, Jakub Krajewski, Szymon Antoniak, Piotr Mi\u0142o\u015b, Marek Cygan, Sebastian Jaszczur", + "published": "2024-01-08", + "updated": "2024-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.08027v1", + "title": "SurvMamba: State Space Model with Multi-grained Multi-modal Interaction for Survival Prediction", + "abstract": "Multi-modal learning that combines pathological images with genomic data has\nsignificantly enhanced the accuracy of survival prediction. Nevertheless,\nexisting methods have not fully utilized the inherent hierarchical structure\nwithin both whole slide images (WSIs) and transcriptomic data, from which\nbetter intra-modal representations and inter-modal integration could be\nderived. 
Moreover, many existing studies attempt to improve multi-modal\nrepresentations through attention mechanisms, which inevitably lead to high\ncomplexity when processing high-dimensional WSIs and transcriptomic data.\nRecently, a structured state space model named Mamba emerged as a promising\napproach for its superior performance in modeling long sequences with low\ncomplexity. In this study, we propose Mamba with multi-grained multi-modal\ninteraction (SurvMamba) for survival prediction. SurvMamba is implemented with\na Hierarchical Interaction Mamba (HIM) module that facilitates efficient\nintra-modal interactions at different granularities, thereby capturing more\ndetailed local features as well as rich global representations. In addition, an\nInteraction Fusion Mamba (IFM) module is used for cascaded inter-modal\ninteractive fusion, yielding more comprehensive features for survival\nprediction. Comprehensive evaluations on five TCGA datasets demonstrate that\nSurvMamba outperforms other existing methods in terms of performance and\ncomputational cost.", + "authors": "Ying Chen, Jiajing Xie, Yuxiang Lin, Yuhang Song, Wenxian Yang, Rongshan Yu", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "q-bio.QM" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.19925v1", + "title": "Decision Mamba: Reinforcement Learning via Sequence Modeling with Selective State Spaces", + "abstract": "Decision Transformer, a promising approach that applies Transformer\narchitectures to reinforcement learning, relies on causal self-attention to\nmodel sequences of states, actions, and rewards. While this method has shown\ncompetitive results, this paper investigates the integration of the Mamba\nframework, known for its advanced capabilities in efficient and effective\nsequence modeling, into the Decision Transformer architecture, focusing on the\npotential performance enhancements in sequential decision-making tasks. Our\nstudy systematically evaluates this integration by conducting a series of\nexperiments across various decision-making environments, comparing the modified\nDecision Transformer, Decision Mamba, with its traditional counterpart. This\nwork contributes to the advancement of sequential decision-making models,\nsuggesting that the architecture and training methodology of neural networks\ncan significantly impact their performance in complex tasks, and highlighting\nthe potential of Mamba as a valuable tool for improving the efficacy of\nTransformer-based models in reinforcement learning scenarios.", + "authors": "Toshihiro Ota", + "published": "2024-03-29", + "updated": "2024-03-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.14757v1", + "title": "Integrating Mamba and Transformer for Long-Short Range Time Series Forecasting", + "abstract": "Time series forecasting is an important problem and plays a key role in a\nvariety of applications including weather forecasting, stock market, and\nscientific simulations. Although transformers have proven to be effective in\ncapturing dependency, its quadratic complexity of attention mechanism prevents\nits further adoption in long-range time series forecasting, thus limiting them\nattend to short-range range. Recent progress on state space models (SSMs) have\nshown impressive performance on modeling long range dependency due to their\nsubquadratic complexity. 
Mamba, as a representative SSM, enjoys linear time\ncomplexity and has achieved strong scalability on tasks that requires scaling\nto long sequences, such as language, audio, and genomics. In this paper, we\npropose to leverage a hybrid framework Mambaformer that internally combines\nMamba for long-range dependency, and Transformer for short range dependency,\nfor long-short range forecasting. To the best of our knowledge, this is the\nfirst paper to combine Mamba and Transformer architecture in time series data.\nWe investigate possible hybrid architectures to combine Mamba layer and\nattention layer for long-short range time series forecasting. The comparative\nstudy shows that the Mambaformer family can outperform Mamba and Transformer in\nlong-short range time series forecasting problem. The code is available at\nhttps://github.com/XiongxiaoXu/Mambaformerin-Time-Series.", + "authors": "Xiongxiao Xu, Yueqing Liang, Baixiang Huang, Zhiling Lan, Kai Shu", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.05892v4", + "title": "Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data", + "abstract": "In recent years, Transformers have become the de-facto architecture for\nsequence modeling on text and a variety of multi-dimensional data, such as\nimages and video. However, the use of self-attention layers in a Transformer\nincurs prohibitive compute and memory complexity that scales quadratically\nw.r.t. the sequence length. A recent architecture, Mamba, based on state space\nmodels has been shown to achieve comparable performance for modeling text\nsequences, while scaling linearly with the sequence length. In this work, we\npresent Mamba-ND, a generalized design extending the Mamba architecture to\narbitrary multi-dimensional data. Our design alternatively unravels the input\ndata across different dimensions following row-major orderings. We provide a\nsystematic comparison of Mamba-ND with several other alternatives, based on\nprior multi-dimensional extensions such as Bi-directional LSTMs and S4ND.\nEmpirically, we show that Mamba-ND demonstrates performance competitive with\nthe state-of-the-art on a variety of multi-dimensional benchmarks, including\nImageNet-1K classification, HMDB-51 action recognition, and ERA5 weather\nforecasting.", + "authors": "Shufan Li, Harkanwar Singh, Aditya Grover", + "published": "2024-02-08", + "updated": "2024-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.18451v2", + "title": "MambaMIR: An Arbitrary-Masked Mamba for Joint Medical Image Reconstruction and Uncertainty Estimation", + "abstract": "The recent Mamba model has shown remarkable adaptability for visual\nrepresentation learning, including in medical imaging tasks. This study\nintroduces MambaMIR, a Mamba-based model for medical image reconstruction, as\nwell as its Generative Adversarial Network-based variant, MambaMIR-GAN. Our\nproposed MambaMIR inherits several advantages, such as linear complexity,\nglobal receptive fields, and dynamic weights, from the original Mamba model.\nThe innovated arbitrary-mask mechanism effectively adapt Mamba to our image\nreconstruction task, providing randomness for subsequent Monte Carlo-based\nuncertainty estimation. 
Experiments conducted on various medical image\nreconstruction tasks, including fast MRI and SVCT, which cover anatomical\nregions such as the knee, chest, and abdomen, have demonstrated that MambaMIR\nand MambaMIR-GAN achieve comparable or superior reconstruction results relative\nto state-of-the-art methods. Additionally, the estimated uncertainty maps offer\nfurther insights into the reliability of the reconstruction quality. The code\nis publicly available at https://github.com/ayanglab/MambaMIR.", + "authors": "Jiahao Huang, Liutao Yang, Fanwen Wang, Yinzhe Wu, Yang Nan, Angelica I. Aviles-Rivero, Carola-Bibiane Sch\u00f6nlieb, Daoqiang Zhang, Guang Yang", + "published": "2024-02-28", + "updated": "2024-03-19", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.16536v2", + "title": "VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting", + "abstract": "Combining CNNs or ViTs, with RNNs for spatiotemporal forecasting, has yielded\nunparalleled results in predicting temporal and spatial dynamics. However,\nmodeling extensive global information remains a formidable challenge; CNNs are\nlimited by their narrow receptive fields, and ViTs struggle with the intensive\ncomputational demands of their attention mechanisms. The emergence of recent\nMamba-based architectures has been met with enthusiasm for their exceptional\nlong-sequence modeling capabilities, surpassing established vision models in\nefficiency and accuracy, which motivates us to develop an innovative\narchitecture tailored for spatiotemporal forecasting. In this paper, we propose\nthe VMRNN cell, a new recurrent unit that integrates the strengths of Vision\nMamba blocks with LSTM. We construct a network centered on VMRNN cells to\ntackle spatiotemporal prediction tasks effectively. Our extensive evaluations\nshow that our proposed approach secures competitive results on a variety of\ntasks while maintaining a smaller model size. Our code is available at\nhttps://github.com/yyyujintang/VMRNN-PyTorch.", + "authors": "Yujin Tang, Peijie Dong, Zhenheng Tang, Xiaowen Chu, Junwei Liang", + "published": "2024-03-25", + "updated": "2024-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2405.04404v1", + "title": "Vision Mamba: A Comprehensive Survey and Taxonomy", + "abstract": "State Space Model (SSM) is a mathematical model used to describe and analyze\nthe behavior of dynamic systems. This model has witnessed numerous applications\nin several fields, including control theory, signal processing, economics and\nmachine learning. In the field of deep learning, state space models are used to\nprocess sequence data, such as time series analysis, natural language\nprocessing (NLP) and video understanding. By mapping sequence data to state\nspace, long-term dependencies in the data can be better captured. In\nparticular, modern SSMs have shown strong representational capabilities in NLP,\nespecially in long sequence modeling, while maintaining linear time complexity.\nNotably, based on the latest state-space models, Mamba merges time-varying\nparameters into SSMs and formulates a hardware-aware algorithm for efficient\ntraining and inference. Given its impressive efficiency and strong long-range\ndependency modeling capability, Mamba is expected to become a new AI\narchitecture that may outperform Transformer. 
Recently, a number of works have\nattempted to study the potential of Mamba in various fields, such as general\nvision, multi-modal, medical image analysis and remote sensing image analysis,\nby extending Mamba from natural language domain to visual domain. To fully\nunderstand Mamba in the visual domain, we conduct a comprehensive survey and\npresent a taxonomy study. This survey focuses on Mamba's application to a\nvariety of visual tasks and data types, and discusses its predecessors, recent\nadvances and far-reaching impact on a wide range of domains. Since Mamba is now\non an upward trend, please actively notice us if you have new findings, and new\nprogress on Mamba will be included in this survey in a timely manner and\nupdated on the Mamba project at\nhttps://github.com/lx6c78/Vision-Mamba-A-Comprehensive-Survey-and-Taxonomy.", + "authors": "Xiao Liu, Chenxu Zhang, Lei Zhang", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.05246v2", + "title": "LightM-UNet: Mamba Assists in Lightweight UNet for Medical Image Segmentation", + "abstract": "UNet and its variants have been widely used in medical image segmentation.\nHowever, these models, especially those based on Transformer architectures,\npose challenges due to their large number of parameters and computational\nloads, making them unsuitable for mobile health applications. Recently, State\nSpace Models (SSMs), exemplified by Mamba, have emerged as competitive\nalternatives to CNN and Transformer architectures. Building upon this, we\nemploy Mamba as a lightweight substitute for CNN and Transformer within UNet,\naiming at tackling challenges stemming from computational resource limitations\nin real medical settings. To this end, we introduce the Lightweight Mamba UNet\n(LightM-UNet) that integrates Mamba and UNet in a lightweight framework.\nSpecifically, LightM-UNet leverages the Residual Vision Mamba Layer in a pure\nMamba fashion to extract deep semantic features and model long-range spatial\ndependencies, with linear computational complexity. Extensive experiments\nconducted on two real-world 2D/3D datasets demonstrate that LightM-UNet\nsurpasses existing state-of-the-art literature. Notably, when compared to the\nrenowned nnU-Net, LightM-UNet achieves superior segmentation performance while\ndrastically reducing parameter and computation costs by 116x and 21x,\nrespectively. This highlights the potential of Mamba in facilitating model\nlightweighting. Our code implementation is publicly available at\nhttps://github.com/MrBlankness/LightM-UNet.", + "authors": "Weibin Liao, Yinghao Zhu, Xinyuan Wang, Chengwei Pan, Yasha Wang, Liantao Ma", + "published": "2024-03-08", + "updated": "2024-03-11", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.02063v1", + "title": "SPMamba: State-space model is all you need in speech separation", + "abstract": "In speech separation, both CNN- and Transformer-based models have\ndemonstrated robust separation capabilities, garnering significant attention\nwithin the research community. However, CNN-based methods have limited\nmodelling capability for long-sequence audio, leading to suboptimal separation\nperformance. Conversely, Transformer-based methods are limited in practical\napplications due to their high computational complexity. 
Notably, within\ncomputer vision, Mamba-based methods have been celebrated for their formidable\nperformance and reduced computational requirements. In this paper, we propose a\nnetwork architecture for speech separation using a state-space model, namely\nSPMamba. We adopt the TF-GridNet model as the foundational framework and\nsubstitute its Transformer component with a bidirectional Mamba module, aiming\nto capture a broader range of contextual information. Our experimental results\nreveal an important role in the performance aspects of Mamba-based models.\nSPMamba demonstrates superior performance with a significant advantage over\nexisting separation models in a dataset built on Librispeech. Notably, SPMamba\nachieves a substantial improvement in separation quality, with a 2.42 dB\nenhancement in SI-SNRi compared to the TF-GridNet. The source code for SPMamba\nis publicly accessible at https://github.com/JusperLee/SPMamba .", + "authors": "Kai Li, Guo Chen", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "eess.AS" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.08406v1", + "title": "MambaDFuse: A Mamba-based Dual-phase Model for Multi-modality Image Fusion", + "abstract": "Multi-modality image fusion (MMIF) aims to integrate complementary\ninformation from different modalities into a single fused image to represent\nthe imaging scene and facilitate downstream visual tasks comprehensively. In\nrecent years, significant progress has been made in MMIF tasks due to advances\nin deep neural networks. However, existing methods cannot effectively and\nefficiently extract modality-specific and modality-fused features constrained\nby the inherent local reductive bias (CNN) or quadratic computational\ncomplexity (Transformers). To overcome this issue, we propose a Mamba-based\nDual-phase Fusion (MambaDFuse) model. Firstly, a dual-level feature extractor\nis designed to capture long-range features from single-modality images by\nextracting low and high-level features from CNN and Mamba blocks. Then, a\ndual-phase feature fusion module is proposed to obtain fusion features that\ncombine complementary information from different modalities. It uses the\nchannel exchange method for shallow fusion and the enhanced Multi-modal Mamba\n(M3) blocks for deep fusion. Finally, the fused image reconstruction module\nutilizes the inverse transformation of the feature extraction to generate the\nfused result. Through extensive experiments, our approach achieves promising\nfusion results in infrared-visible image fusion and medical image fusion.\nAdditionally, in a unified benchmark, MambaDFuse has also demonstrated improved\nperformance in downstream tasks such as object detection. Code with checkpoints\nwill be available after the peer-review process.", + "authors": "Zhe Li, Haiwei Pan, Kejia Zhang, Yuhua Wang, Fengming Yu", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.18174v1", + "title": "Mamba-FETrack: Frame-Event Tracking via State Space Model", + "abstract": "RGB-Event based tracking is an emerging research topic, focusing on how to\neffectively integrate heterogeneous multi-modal data (synchronized exposure\nvideo frames and asynchronous pulse Event stream). 
Existing works typically\nemploy Transformer based networks to handle these modalities and achieve decent\naccuracy through input-level or feature-level fusion on multiple datasets.\nHowever, these trackers require significant memory consumption and\ncomputational complexity due to the use of self-attention mechanism. This paper\nproposes a novel RGB-Event tracking framework, Mamba-FETrack, based on the\nState Space Model (SSM) to achieve high-performance tracking while effectively\nreducing computational costs and realizing more efficient tracking.\nSpecifically, we adopt two modality-specific Mamba backbone networks to extract\nthe features of RGB frames and Event streams. Then, we also propose to boost\nthe interactive learning between the RGB and Event features using the Mamba\nnetwork. The fused features will be fed into the tracking head for target\nobject localization. Extensive experiments on FELT and FE108 datasets fully\nvalidated the efficiency and effectiveness of our proposed tracker.\nSpecifically, our Mamba-based tracker achieves 43.5/55.6 on the SR/PR metric,\nwhile the ViT-S based tracker (OSTrack) obtains 40.0/50.9. The GPU memory cost\nof ours and ViT-S based tracker is 13.98GB and 15.44GB, which decreased about\n$9.5\\%$. The FLOPs and parameters of ours/ViT-S based OSTrack are 59GB/1076GB\nand 7MB/60MB, which decreased about $94.5\\%$ and $88.3\\%$, respectively. We\nhope this work can bring some new insights to the tracking field and greatly\npromote the application of the Mamba architecture in tracking. The source code\nof this work will be released on\n\\url{https://github.com/Event-AHU/Mamba_FETrack}.", + "authors": "Ju Huang, Shiao Wang, Shuai Wang, Zhe Wu, Xiao Wang, Bo Jiang", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.11778v1", + "title": "CU-Mamba: Selective State Space Models with Channel Learning for Image Restoration", + "abstract": "Reconstructing degraded images is a critical task in image processing.\nAlthough CNN and Transformer-based models are prevalent in this field, they\nexhibit inherent limitations, such as inadequate long-range dependency modeling\nand high computational costs. To overcome these issues, we introduce the\nChannel-Aware U-Shaped Mamba (CU-Mamba) model, which incorporates a dual State\nSpace Model (SSM) framework into the U-Net architecture. CU-Mamba employs a\nSpatial SSM module for global context encoding and a Channel SSM component to\npreserve channel correlation features, both in linear computational complexity\nrelative to the feature map size. Extensive experimental results validate\nCU-Mamba's superiority over existing state-of-the-art methods, underscoring the\nimportance of integrating both spatial and channel contexts in image\nrestoration.", + "authors": "Rui Deng, Tianpei Gu", + "published": "2024-04-17", + "updated": "2024-04-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/1612.04698v1", + "title": "Asymptotic analysis and optimal control of an integro-differential system modelling healthy and cancer cells exposed to chemotherapy", + "abstract": "We consider a system of two coupled integro-differential equations modelling\npopulations of healthy and cancer cells under therapy. Both populations are\nstructured by a phenotypic variable, representing their level of resistance to\nthe treatment. 
We analyse the asymptotic behaviour of the model under constant\ninfusion of drugs. By designing an appropriate Lyapunov function, we prove that\nboth densities converge to Dirac masses. We then define an optimal control\nproblem, by considering all possible infusion protocols and minimising the\nnumber of cancer cells over a prescribed time frame. We provide a quasi-optimal\nstrategy and prove that it solves this problem for large final times. For this\nmodelling framework, we illustrate our results with numerical simulations, and\ncompare our optimal strategy with periodic treatment schedules.", + "authors": "Camille Pouchol, Jean Clairambault, Alexander Lorz, Emmanuel Tr\u00e9lat", + "published": "2016-12-14", + "updated": "2016-12-14", + "primary_cat": "math.OC", + "cats": [ + "math.OC" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.01771v1", + "title": "BlackMamba: Mixture of Experts for State-Space Models", + "abstract": "State-space models (SSMs) have recently demonstrated competitive performance\nto transformers at large-scale language modeling benchmarks while achieving\nlinear time and memory complexity as a function of sequence length. Mamba, a\nrecently released SSM model, shows impressive performance in both language\nmodeling and long sequence processing tasks. Simultaneously, mixture-of-expert\n(MoE) models have shown remarkable performance while significantly reducing the\ncompute and latency costs of inference at the expense of a larger memory\nfootprint. In this paper, we present BlackMamba, a novel architecture that\ncombines the Mamba SSM with MoE to obtain the benefits of both. We demonstrate\nthat BlackMamba performs competitively against both Mamba and transformer\nbaselines, and outperforms in inference and training FLOPs. We fully train and\nopen-source 340M/1.5B and 630M/2.8B BlackMamba models on 300B tokens of a\ncustom dataset. We show that BlackMamba inherits and combines both of the\nbenefits of SSM and MoE architectures, combining linear-complexity generation\nfrom SSM with cheap and fast inference from MoE. We release all weights,\ncheckpoints, and inference code open-source. Inference code at:\nhttps://github.com/Zyphra/BlackMamba", + "authors": "Quentin Anthony, Yury Tokpanov, Paolo Glorioso, Beren Millidge", + "published": "2024-02-01", + "updated": "2024-02-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.DC", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.02491v1", + "title": "VM-UNet: Vision Mamba UNet for Medical Image Segmentation", + "abstract": "In the realm of medical image segmentation, both CNN-based and\nTransformer-based models have been extensively explored. However, CNNs exhibit\nlimitations in long-range modeling capabilities, whereas Transformers are\nhampered by their quadratic computational complexity. Recently, State Space\nModels (SSMs), exemplified by Mamba, have emerged as a promising approach. They\nnot only excel in modeling long-range interactions but also maintain a linear\ncomputational complexity. In this paper, leveraging state space models, we\npropose a U-shape architecture model for medical image segmentation, named\nVision Mamba UNet (VM-UNet). Specifically, the Visual State Space (VSS) block\nis introduced as the foundation block to capture extensive contextual\ninformation, and an asymmetrical encoder-decoder structure is constructed. 
We\nconduct comprehensive experiments on the ISIC17, ISIC18, and Synapse datasets,\nand the results indicate that VM-UNet performs competitively in medical image\nsegmentation tasks. To our best knowledge, this is the first medical image\nsegmentation model constructed based on the pure SSM-based model. We aim to\nestablish a baseline and provide valuable insights for the future development\nof more efficient and effective SSM-based segmentation systems. Our code is\navailable at https://github.com/JCruan519/VM-UNet.", + "authors": "Jiacheng Ruan, Suncheng Xiang", + "published": "2024-02-04", + "updated": "2024-02-04", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2401.13660v2", + "title": "MambaByte: Token-free Selective State Space Model", + "abstract": "Token-free language models learn directly from raw bytes and remove the\ninductive bias of subword tokenization. Operating on bytes, however, results in\nsignificantly longer sequences. In this setting, standard autoregressive\nTransformers scale poorly as the effective memory required grows with sequence\nlength. The recent development of the Mamba state space model (SSM) offers an\nappealing alternative approach with a fixed-sized memory state and efficient\ndecoding. We propose MambaByte, a token-free adaptation of the Mamba SSM\ntrained autoregressively on byte sequences. In terms of modeling, we show\nMambaByte to be competitive with, and even to outperform, state-of-the-art\nsubword Transformers on language modeling tasks while maintaining the benefits\nof token-free language models, such as robustness to noise. In terms of\nefficiency, we develop an adaptation of speculative decoding with tokenized\ndrafting and byte-level verification. This results in a $2.6\\times$ inference\nspeedup to the standard MambaByte implementation, showing similar decoding\nefficiency as the subword Mamba. These findings establish the viability of SSMs\nin enabling token-free language modeling.", + "authors": "Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, Alexander M. Rush", + "published": "2024-01-24", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.16371v1", + "title": "Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation", + "abstract": "Sequential Recommenders have been widely applied in various online services,\naiming to model users' dynamic interests from their sequential interactions.\nWith users increasingly engaging with online platforms, vast amounts of\nlifelong user behavioral sequences have been generated. However, existing\nsequential recommender models often struggle to handle such lifelong sequences.\nThe primary challenges stem from computational complexity and the ability to\ncapture long-range dependencies within the sequence. Recently, a state space\nmodel featuring a selective mechanism (i.e., Mamba) has emerged. In this work,\nwe investigate the performance of Mamba for lifelong sequential recommendation\n(i.e., length>=2k). More specifically, we leverage the Mamba block to model\nlifelong user sequences selectively. We conduct extensive experiments to\nevaluate the performance of representative sequential recommendation models in\nthe setting of lifelong sequences. Experiments on two real-world datasets\ndemonstrate the superiority of Mamba. 
We found that RecMamba achieves\nperformance comparable to the representative model while significantly reducing\ntraining duration by approximately 70% and memory costs by 80%. Codes and data\nare available at \\url{https://github.com/nancheng58/RecMamba}.", + "authors": "Jiyuan Yang, Yuanzi Li, Jingyu Zhao, Hanbing Wang, Muyang Ma, Jun Ma, Zhaochun Ren, Mengqi Zhang, Xin Xin, Zhumin Chen, Pengjie Ren", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.07487v3", + "title": "Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM", + "abstract": "Human motion generation stands as a significant pursuit in generative\ncomputer vision, while achieving long-sequence and efficient motion generation\nremains challenging. Recent advancements in state space models (SSMs), notably\nMamba, have showcased considerable promise in long sequence modeling with an\nefficient hardware-aware design, which appears to be a promising direction to\nbuild motion generation model upon it. Nevertheless, adapting SSMs to motion\ngeneration faces hurdles since the lack of a specialized design architecture to\nmodel motion sequence. To address these challenges, we propose Motion Mamba, a\nsimple and efficient approach that presents the pioneering motion generation\nmodel utilized SSMs. Specifically, we design a Hierarchical Temporal Mamba\n(HTM) block to process temporal data by ensemble varying numbers of isolated\nSSM modules across a symmetric U-Net architecture aimed at preserving motion\nconsistency between frames. We also design a Bidirectional Spatial Mamba (BSM)\nblock to bidirectionally process latent poses, to enhance accurate motion\ngeneration within a temporal frame. Our proposed method achieves up to 50% FID\nimprovement and up to 4 times faster on the HumanML3D and KIT-ML datasets\ncompared to the previous best diffusion-based method, which demonstrates strong\ncapabilities of high-quality long sequence motion modeling and real-time human\nmotion generation. See project website\nhttps://steve-zeyu-zhang.github.io/MotionMamba/", + "authors": "Zeyu Zhang, Akide Liu, Ian Reid, Richard Hartley, Bohan Zhuang, Hao Tang", + "published": "2024-03-12", + "updated": "2024-03-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.17695v1", + "title": "PlainMamba: Improving Non-Hierarchical Mamba in Visual Recognition", + "abstract": "We present PlainMamba: a simple non-hierarchical state space model (SSM)\ndesigned for general visual recognition. The recent Mamba model has shown how\nSSMs can be highly competitive with other architectures on sequential data and\ninitial attempts have been made to apply it to images. In this paper, we\nfurther adapt the selective scanning process of Mamba to the visual domain,\nenhancing its ability to learn features from two-dimensional images by (i) a\ncontinuous 2D scanning process that improves spatial continuity by ensuring\nadjacency of tokens in the scanning sequence, and (ii) direction-aware updating\nwhich enables the model to discern the spatial relations of tokens by encoding\ndirectional information. Our architecture is designed to be easy to use and\neasy to scale, formed by stacking identical PlainMamba blocks, resulting in a\nmodel with constant width throughout all layers. 
The architecture is further\nsimplified by removing the need for special tokens. We evaluate PlainMamba on a\nvariety of visual recognition tasks including image classification, semantic\nsegmentation, object detection, and instance segmentation. Our method achieves\nperformance gains over previous non-hierarchical models and is competitive with\nhierarchical alternatives. For tasks requiring high-resolution inputs, in\nparticular, PlainMamba requires much less computing while maintaining high\nperformance. Code and models are available at\nhttps://github.com/ChenhongyiYang/PlainMamba", + "authors": "Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, Elliot J. Crowley", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.06564v3", + "title": "MambaAD: Exploring State Space Models for Multi-class Unsupervised Anomaly Detection", + "abstract": "Recent advancements in anomaly detection have seen the efficacy of CNN- and\ntransformer-based approaches. However, CNNs struggle with long-range\ndependencies, while transformers are burdened by quadratic computational\ncomplexity. Mamba-based models, with their superior long-range modeling and\nlinear efficiency, have garnered substantial attention. This study pioneers the\napplication of Mamba to multi-class unsupervised anomaly detection, presenting\nMambaAD, which consists of a pre-trained encoder and a Mamba decoder featuring\n(Locality-Enhanced State Space) LSS modules at multi-scales. The proposed LSS\nmodule, integrating parallel cascaded (Hybrid State Space) HSS blocks and\nmulti-kernel convolutions operations, effectively captures both long-range and\nlocal information. The HSS block, utilizing (Hybrid Scanning) HS encoders,\nencodes feature maps into five scanning methods and eight directions, thereby\nstrengthening global connections through the (State Space Model) SSM. The use\nof Hilbert scanning and eight directions significantly improves feature\nsequence modeling. Comprehensive experiments on six diverse anomaly detection\ndatasets and seven metrics demonstrate state-of-the-art performance,\nsubstantiating the method's effectiveness.", + "authors": "Haoyang He, Yuhu Bai, Jiangning Zhang, Qingdong He, Hongxu Chen, Zhenye Gan, Chengjie Wang, Xiangtai Li, Guanzhong Tian, Lei Xie", + "published": "2024-04-09", + "updated": "2024-04-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.15772v1", + "title": "Bi-Mamba4TS: Bidirectional Mamba for Time Series Forecasting", + "abstract": "Long-term time series forecasting (LTSF) provides longer insights into future\ntrends and patterns. In recent years, deep learning models especially\nTransformers have achieved advanced performance in LTSF tasks. However, the\nquadratic complexity of Transformers rises the challenge of balancing\ncomputaional efficiency and predicting performance. Recently, a new state space\nmodel (SSM) named Mamba is proposed. With the selective capability on input\ndata and the hardware-aware parallel computing algorithm, Mamba can well\ncapture long-term dependencies while maintaining linear computational\ncomplexity. Mamba has shown great ability for long sequence modeling and is a\npotential competitor to Transformer-based models in LTSF. In this paper, we\npropose Bi-Mamba4TS, a bidirectional Mamba for time series forecasting. 
To\naddress the sparsity of time series semantics, we adopt the patching technique\nto enrich the local information while capturing the evolutionary patterns of\ntime series in a finer granularity. To select more appropriate modeling method\nbased on the characteristics of the dataset, our model unifies the\nchannel-independent and channel-mixing tokenization strategies and uses a\nseries-relation-aware decider to control the strategy choosing process.\nExtensive experiments on seven real-world datasets show that our model achieves\nmore accurate predictions compared with state-of-the-art methods.", + "authors": "Aobo Liang, Xingguo Jiang, Yan Sun, Chang Lu", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2402.10887v1", + "title": "Weak-Mamba-UNet: Visual Mamba Makes CNN and ViT Work Better for Scribble-based Medical Image Segmentation", + "abstract": "Medical image segmentation is increasingly reliant on deep learning\ntechniques, yet the promising performance often come with high annotation\ncosts. This paper introduces Weak-Mamba-UNet, an innovative weakly-supervised\nlearning (WSL) framework that leverages the capabilities of Convolutional\nNeural Network (CNN), Vision Transformer (ViT), and the cutting-edge Visual\nMamba (VMamba) architecture for medical image segmentation, especially when\ndealing with scribble-based annotations. The proposed WSL strategy incorporates\nthree distinct architecture but same symmetrical encoder-decoder networks: a\nCNN-based UNet for detailed local feature extraction, a Swin Transformer-based\nSwinUNet for comprehensive global context understanding, and a VMamba-based\nMamba-UNet for efficient long-range dependency modeling. The key concept of\nthis framework is a collaborative and cross-supervisory mechanism that employs\npseudo labels to facilitate iterative learning and refinement across the\nnetworks. The effectiveness of Weak-Mamba-UNet is validated on a publicly\navailable MRI cardiac segmentation dataset with processed scribble annotations,\nwhere it surpasses the performance of a similar WSL framework utilizing only\nUNet or SwinUNet. This highlights its potential in scenarios with sparse or\nimprecise annotations. The source code is made publicly accessible.", + "authors": "Ziyang Wang, Chao Ma", + "published": "2024-02-16", + "updated": "2024-02-16", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.07705v1", + "title": "ViM-UNet: Vision Mamba for Biomedical Segmentation", + "abstract": "CNNs, most notably the UNet, are the default architecture for biomedical\nsegmentation. Transformer-based approaches, such as UNETR, have been proposed\nto replace them, benefiting from a global field of view, but suffering from\nlarger runtimes and higher parameter counts. The recent Vision Mamba\narchitecture offers a compelling alternative to transformers, also providing a\nglobal field of view, but at higher efficiency. Here, we introduce ViM-UNet, a\nnovel segmentation architecture based on it and compare it to UNet and UNETR\nfor two challenging microscopy instance segmentation tasks. We find that it\nperforms similarly or better than UNet, depending on the task, and outperforms\nUNETR while being more efficient. 
Our code is open source and documented at\nhttps://github.com/constantinpape/torch-em/blob/main/vimunet.md.", + "authors": "Anwai Archit, Constantin Pape", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.00762v2", + "title": "Point Cloud Mamba: Point Cloud Learning via State Space Model", + "abstract": "In this work, for the first time, we demonstrate that Mamba-based point cloud\nmethods can outperform point-based methods. Mamba exhibits strong global\nmodeling capabilities and linear computational complexity, making it highly\nattractive for point cloud analysis. To enable more effective processing of 3-D\npoint cloud data by Mamba, we propose a novel Consistent Traverse Serialization\nto convert point clouds into 1-D point sequences while ensuring that\nneighboring points in the sequence are also spatially adjacent. Consistent\nTraverse Serialization yields six variants by permuting the order of x, y, and\nz coordinates, and the synergistic use of these variants aids Mamba in\ncomprehensively observing point cloud data. Furthermore, to assist Mamba in\nhandling point sequences with different orders more effectively, we introduce\npoint prompts to inform Mamba of the sequence's arrangement rules. Finally, we\npropose positional encoding based on spatial coordinate mapping to inject\npositional information into point cloud sequences better. Based on these\nimprovements, we construct a point cloud network named Point Cloud Mamba, which\ncombines local and global modeling. Point Cloud Mamba surpasses the SOTA\npoint-based method PointNeXt and achieves new SOTA performance on the\nScanObjectNN, ModelNet40, and ShapeNetPart datasets.", + "authors": "Tao Zhang, Xiangtai Li, Haobo Yuan, Shunping Ji, Shuicheng Yan", + "published": "2024-03-01", + "updated": "2024-03-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.14520v2", + "title": "Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference", + "abstract": "In recent years, the application of multimodal large language models (MLLM)\nin various fields has achieved remarkable success. However, as the foundation\nmodel for many downstream tasks, current MLLMs are composed of the well-known\nTransformer network, which has a less efficient quadratic computation\ncomplexity. To improve the efficiency of such basic models, we propose Cobra, a\nlinear computational complexity MLLM. Specifically, Cobra integrates the\nefficient Mamba language model into the visual modality. Moreover, we explore\nand study various modal fusion schemes to create an effective multi-modal\nMamba. Extensive experiments demonstrate that (1) Cobra achieves extremely\ncompetitive performance with current computationally efficient state-of-the-art\nmethods, e.g., LLaVA-Phi, TinyLLaVA, and MobileVLM v2, and has faster speed due\nto Cobra's linear sequential modeling. (2) Interestingly, the results of\nclosed-set challenging prediction benchmarks show that Cobra performs well in\novercoming visual illusions and spatial relationship judgments. (3) Notably,\nCobra even achieves comparable performance to LLaVA with about 43% of the\nnumber of parameters. We will make all codes of Cobra open-source and hope that\nthe proposed method can facilitate future research on complexity problems in\nMLLM. 
Our project page is available at: https://sites.google.com/view/cobravlm.", + "authors": "Han Zhao, Min Zhang, Wei Zhao, Pengxiang Ding, Siteng Huang, Donglin Wang", + "published": "2024-03-21", + "updated": "2024-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2403.09626v1", + "title": "Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding", + "abstract": "Understanding videos is one of the fundamental directions in computer vision\nresearch, with extensive efforts dedicated to exploring various architectures\nsuch as RNN, 3D CNN, and Transformers. The newly proposed architecture of state\nspace model, e.g., Mamba, shows promising traits to extend its success in long\nsequence modeling to video modeling. To assess whether Mamba can be a viable\nalternative to Transformers in the video understanding domain, in this work, we\nconduct a comprehensive set of studies, probing different roles Mamba can play\nin modeling videos, while investigating diverse tasks where Mamba could exhibit\nsuperiority. We categorize Mamba into four roles for modeling videos, deriving\na Video Mamba Suite composed of 14 models/modules, and evaluating them on 12\nvideo understanding tasks. Our extensive experiments reveal the strong\npotential of Mamba on both video-only and video-language tasks while showing\npromising efficiency-performance trade-offs. We hope this work could provide\nvaluable data points and insights for future research on video understanding.\nCode is public: https://github.com/OpenGVLab/video-mamba-suite.", + "authors": "Guo Chen, Yifei Huang, Jilan Xu, Baoqi Pei, Zhe Chen, Zhiqi Li, Jiahao Wang, Kunchang Li, Tong Lu, Limin Wang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Mamba" + }, + { + "url": "http://arxiv.org/abs/2404.18213v1", + "title": "S$^2$Mamba: A Spatial-spectral State Space Model for Hyperspectral Image Classification", + "abstract": "Land cover analysis using hyperspectral images (HSI) remains an open problem\ndue to their low spatial resolution and complex spectral information. Recent\nstudies are primarily dedicated to designing Transformer-based architectures\nfor spatial-spectral long-range dependencies modeling, which is computationally\nexpensive with quadratic complexity. Selective structured state space model\n(Mamba), which is efficient for modeling long-range dependencies with linear\ncomplexity, has recently shown promising progress. However, its potential in\nhyperspectral image processing that requires handling numerous spectral bands\nhas not yet been explored. In this paper, we innovatively propose S$^2$Mamba, a\nspatial-spectral state space model for hyperspectral image classification, to\nexcavate spatial-spectral contextual features, resulting in more efficient and\naccurate land cover analysis. In S$^2$Mamba, two selective structured state\nspace models through different dimensions are designed for feature extraction,\none for spatial, and the other for spectral, along with a spatial-spectral\nmixture gate for optimal fusion. More specifically, S$^2$Mamba first captures\nspatial contextual relations by interacting each pixel with its adjacent\nthrough a Patch Cross Scanning module and then explores semantic information\nfrom continuous spectral bands through a Bi-directional Spectral Scanning\nmodule. 
Considering the distinct expertise of the two attributes in homogenous\nand complicated texture scenes, we realize the Spatial-spectral Mixture Gate by\na group of learnable matrices, allowing for the adaptive incorporation of\nrepresentations learned across different dimensions. Extensive experiments\nconducted on HSI classification benchmarks demonstrate the superiority and\nprospect of S$^2$Mamba. The code will be available at:\nhttps://github.com/PURE-melo/S2Mamba.", + "authors": "Guanchun Wang, Xiangrong Zhang, Zelin Peng, Tianyang Zhang, Xiuping Jia, Licheng Jiao", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Mamba" + } +] \ No newline at end of file