{ "url": "http://arxiv.org/abs/2404.16302v1", "title": "CFMW: Cross-modality Fusion Mamba for Multispectral Object Detection under Adverse Weather Conditions", "abstract": "Cross-modality images that integrate visible-infrared spectra cues can\nprovide richer complementary information for object detection. Despite this,\nexisting visible-infrared object detection methods severely degrade in severe\nweather conditions. This failure stems from the pronounced sensitivity of\nvisible images to environmental perturbations, such as rain, haze, and snow,\nwhich frequently cause false negatives and false positives in detection. To\naddress this issue, we introduce a novel and challenging task, termed\nvisible-infrared object detection under adverse weather conditions. To foster\nthis task, we have constructed a new Severe Weather Visible-Infrared Dataset\n(SWVID) with diverse severe weather scenes. Furthermore, we introduce the\nCross-modality Fusion Mamba with Weather-removal (CFMW) to augment detection\naccuracy in adverse weather conditions. Thanks to the proposed Weather Removal\nDiffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules, CFMW is\nable to mine more essential information of pedestrian features in\ncross-modality fusion, thus could transfer to other rarer scenarios with high\nefficiency and has adequate availability on those platforms with low computing\npower. To the best of our knowledge, this is the first study that targeted\nimprovement and integrated both Diffusion and Mamba modules in cross-modality\nobject detection, successfully expanding the practical application of this type\nof model with its higher accuracy and more advanced architecture. Extensive\nexperiments on both well-recognized and self-created datasets conclusively\ndemonstrate that our CFMW achieves state-of-the-art detection performance,\nsurpassing existing benchmarks. The dataset and source code will be made\npublicly available at https://github.com/lhy-zjut/CFMW.", "authors": "Haoyuan Li, Qi Hu, You Yao, Kailun Yang, Peng Chen", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.MM", "cs.RO", "eess.IV" ], "label": "Original Paper", "paper_cat": "Mamba", "gt": "In an open and dynamic environment, object detection faces chal- lenging weather conditions such as rain, haze, and snow. The rapid advancement of deep-learning-based object detection methods has significantly improved the ability to identify and classify objects. Benefiting from the advanced feature extraction and fusion strate- gies, cross-modality object detection methods have achieved high accuracy, e.g., CFT [34], GAFF [56], and CFR_3 [54]. However, as shown in Fig. 1, the performance of these methods is often chal- lenged by adverse weather conditions, which can severely impact the visibility and quality of visual data. Although the infrared image \u2217Equal contribution. \u2020Corresponding authors (e-mail: chenpeng@zjut.edu.cn, kailun.yang@kit.edu). Figure 1: The proposed method can achieve high-precision cross-modality object detection under adverse weather condi- tions. The top two examples are results from CFT [34], while the bottom two examples are results from CFMW (ours). could provide complementary cues to some extent, it cannot re- pair the appearance distortion or information loss of visual images. Thus, traditional cross-modality object detection methods still face severe performance degradation under adverse weather. 
Existing methods cannot be directly applied to adverse weather conditions: the color gamut of visible images is weakened by environmental disturbance, existing fusion methods struggle to fully fuse the visible and infrared spectra, and they have not been sufficiently trained on corresponding datasets. To fill this gap, we construct and release a new dataset, the Severe Weather Visible-Infrared Dataset (SWVID), and propose a novel framework, Cross-modality Fusion Mamba with Weather-removal (CFMW). SWVID is designed to encompass diverse severe weather scenarios by mathematically formalizing the impact of various weather phenomena on images. Specifically, SWVID comprises 20,000 aligned visible-infrared image pairs, spanning three weather conditions and two scenes, with each condition and scene evenly distributed. Motivated by the critical research gap highlighted in Fig. 1, where current methods falter in adverse weather, we introduce CFMW for multispectral object detection under adverse weather conditions. Our CFMW leverages a Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) to enhance detection accuracy amid adverse weather while minimizing computational burden. Specifically, WRDM is employed to restore affected visible images before fusion with their infrared counterparts, offering plug-and-play compatibility with image fusion networks. By learning to reverse the process that gradually adds noise to and disrupts data samples, WRDM minimizes the impact of adverse weather conditions. Additionally, CFM can be integrated into the feature extraction backbone, effectively integrating global contextual information from the different modalities. Recent research shows that Mamba [10] achieves higher inference speed and better overall metrics than equivalent-scale transformers. To our knowledge, this study represents the first endeavor to employ Diffusion models and Mamba for multispectral object detection. Extensive experiments on both well-established and self-created datasets demonstrate that our CFMW achieves superior detection performance compared to existing benchmarks. Specifically, we achieve about a 17% performance improvement over current state-of-the-art image restoration methods, and about an 8% accuracy improvement while saving 51.2% GPU memory compared with CFT [34], a state-of-the-art cross-modality object detection method. At a glance, we summarize the main contributions as follows:
• We introduce a novel task focusing on visible-infrared object detection under adverse weather conditions and develop a new dataset, the Severe Weather Visible-Infrared Dataset (SWVID), which simulates real-world conditions.
SWVID comprises 60,000 paired visible-infrared images and labels, encompassing weather conditions such as rain, haze, and snow;
• We propose a novel approach, Cross-modality Fusion Mamba with Weather-removal (CFMW), for multispectral object detection under adverse weather conditions;
• We introduce the novel Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules to tackle image de-weathering and visible-infrared object detection simultaneously;
• Extensive experiments demonstrate that this integration achieves the best task-migration capacity, resulting in state-of-the-art performance on both tasks.", "main_content": "2 RELATED WORK
In this section, we briefly review previous work on cross-modality object detection, state space models, and multi-weather image restoration.
Cross-modality Object Detection. Existing cross-modality object detection methods can be divided into two categories, feature-level and pixel-level fusion, distinguished by the fusion method and the stage at which fusion occurs. Recently, dual-stream object detection models based on convolutional neural networks have made great progress in recognition performance [4, 34, 37, 54, 55], while pixel-level fusion methods have also performed well [5, 44, 59]. Other works that employ methods such as GANs for effective integration have likewise achieved good results [51, 58, 59]. These approaches can be integrated into downstream tasks such as object detection. Traditional convolutional neural networks have limited receptive fields: the convolution operator only integrates information within a local area, whereas the self-attention operator of the transformer can learn long-range dependencies [43]. Thus, a transformer-based method, the Cross-Modality Fusion Transformer (CFT) [34], was presented and achieved state-of-the-art detection performance. Differing from these works, we are the first to introduce Mamba into cross-modality object detection to learn long-range dependencies with gating mechanisms, achieving high accuracy and low computation overhead simultaneously.
State Space Model. The concept of the State Space Model was initially introduced with the S4 model [11], presenting a distinctive architecture capable of effectively modeling global information, compared with traditional convolutional neural networks and transformers. Based on S4, the S5 model [38] reduces complexity to a linear level, and H3 [31] introduces it into language modeling tasks. Mamba [10] introduced an input-dependent selection mechanism to enhance the State Space Model, achieving higher inference speed and better overall metrics than equivalent-scale transformers. With the introduction of Vision Mamba [61] and VMamba [30], the application of the State Space Model has been extended to visual tasks. Existing research, however, has not yet generalized the State Space Model to cross-modality object detection.
Multi-Weather Image Restoration. Recently, some attempts have been made to unify multiple recovery tasks in a single deep learning framework, including generative modeling solutions that recover superimposed noise types [9] and methods that recover superimposed noise or weather damage unknown at test time, especially adverse multi-weather degradation [3, 22, 42]. All in One [23] unified weather restoration with a multi-encoder and decoder architecture.
It is worth noting that diffusion-based conditional generative models have shown state-of-the-art performance in various tasks such as class-conditional data synthesis with classifier guidance [7], image super-resolution [14], and image deblurring [48]. Denoising diffusion restoration models (DDRM) [21] were proposed for general linear inverse image restoration problems, exploiting pre-trained denoising diffusion models for unsupervised posterior sampling. However, diffusion models have so far not been generalized to adverse weather scenes in the cross-modality image fusion field. Unlike existing works, we extend multi-weather restoration to the field of cross-modality fusion.
3 PROPOSED FRAMEWORK
3.1 Overview
As shown in Fig. 2, CFMW comprises two main stages. In the multi-weather image restoration stage, we aim to restore images degraded by three types of adverse weather (rain, snow, and haze) within a unified framework using a single pre-trained weight. In the cross-modality fusion stage, we aim to integrate the unique features of the different modalities. Inspired by CFT [34], and to show the effectiveness of the proposed CFM fusion module, we extend the YOLOv5 framework to enable multispectral object detection. We present the carefully designed loss functions and training procedures for WRDM and CFM in the last subsection.
Figure 2 caption: Framework of the Cross-Modality Fusion Mamba backbone. It has three parts: a Weather Removal Diffusion Model (WRDM), a two-stream feature extraction network (our baseline), and three Cross-Modality Fusion Mamba (CFM) modules. ⊕ denotes element-wise addition, ⊗ denotes element-wise multiplication, and C1 is short for 1-dimensional convolution.
3.2 Weather Removal Diffusion Model (WRDM)
Denoising diffusion models [13, 39] are a class of generative models that learn a Markov chain which gradually transforms a Gaussian noise distribution into the data distribution on which the models are trained. In the original denoising diffusion probabilistic models (DDPMs) [13], both the diffusion process (data to noise) and the generative process (noise to data) follow a Markov chain, resulting in a large number of steps and huge time consumption. Denoising diffusion implicit models (DDIMs) [40] were therefore presented to accelerate sampling, providing a more efficient class of iterative implicit probabilistic models. DDIMs define the generative process via a class of non-Markovian diffusion processes that lead to the same training objective as DDPMs but can produce deterministic generative processes, thus speeding up sample generation. In DDIMs, implicit sampling refers to generating samples from the latent space of the model in a deterministic manner. Implicit sampling with a noise estimator network can be performed by:
X_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \cdot \frac{X_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(X_t, t)}{\sqrt{\bar{\alpha}_t}} + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(X_t, t),   (1)
where X_t and X_{t-1} represent the data X_0 ~ q(X_0) at different diffusion time steps, \alpha_t = 1 - \beta_t, \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i, and \epsilon_\theta(X_t, t) is optimized by minimizing
E_{X_0, t, \epsilon_t \sim \mathcal{N}(0, I)} \left[ \lVert \epsilon_t - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\,X_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon_t, t) \rVert^2 \right].
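To make this concrete, below is a minimal PyTorch sketch of the noise-prediction objective and of one implicit sampling step of Eq. (1); `eps_model` and `alpha_bar` are illustrative names (a generic denoiser and the precomputed cumulative products ᾱ), assumed here rather than taken from the authors' implementation.

```python
import torch

def noise_prediction_loss(eps_model, x0, alpha_bar):
    """Sample t and Gaussian noise, form X_t by forward diffusion,
    and regress eps_model toward the injected noise (MSE objective)."""
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)  # random time step per sample
    eps = torch.randn_like(x0)                                        # target noise
    a_bar = alpha_bar[t].view(b, 1, 1, 1)                             # cumulative product ᾱ_t
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps                # noisy sample at step t
    return torch.mean((eps - eps_model(x_t, t)) ** 2)

@torch.no_grad()
def implicit_sampling_step(eps_model, x_t, t, t_prev, alpha_bar):
    """Deterministic DDIM-style step X_t -> X_{t-1} of Eq. (1)."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
    eps = eps_model(x_t, t_batch)                                     # ε_θ(X_t, t)
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()              # implied clean estimate
    return a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps

The conditional variant used by WRDM (Eq. (2) below) follows the same step but concatenates the weather-degraded observation to the denoiser input.
```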
Conditional diffusion models have shown state-of-the-art image-conditional data synthesis and editing capabilities [6, 7]. The core idea is to learn a conditional reverse process without changing the diffusion process. Our proposed WRDM is a conditional diffusion model that adds reference images (clear images) to the sampling process to guide the reconstructed image toward the reference. As shown in Fig. 3, we introduce a new variable X̃, which represents the weather-degraded observation. A Markov chain is defined as the diffusion process, and Gaussian noise is gradually added to simulate the gradual degradation of data samples until time step T. We ground our model hyper-parameters in a U-Net architecture based on WideResNet [52]. For input-image conditioning, we concatenate the patch X_t and X̃ to obtain a six-channel input image. Conditioning the reverse process on X̃ maintains its compatibility with implicit sampling, so we can expand Eq. (1) as:
X_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \cdot \frac{X_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(X_t, \tilde{X}, t)}{\sqrt{\bar{\alpha}_t}} + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(X_t, \tilde{X}, t).   (2)
The sampling process starts from X_T ~ N(0, I) and follows a deterministic reverse path toward X_0 with high fidelity. See the supplementary material for more derivation details.
Our proposed WRDM is a patch-based conditional diffusion model that guides the reverse sampling process toward smoothness across neighboring patches. During training, we randomly sample p×p patch locations P_i within the image dimensions. At any given time step, we reverse-sample the average estimated noise of each pixel in the overlapping patch regions, as illustrated in Fig. 3, which effectively controls the reverse sampling process so that all adjacent patches are reconstructed with high fidelity. Furthermore, WRDM can be regarded as a plug-in and embedded into other works, such as visible-infrared image fusion, to remove the influence of multi-weather conditions, as demonstrated experimentally in Fig. 5.
Figure 3 caption: Schematic diagram of the WRDM training and inference process. The left side shows the framework of WRDM, where a paired data distribution (X̃, X_t) is split into (X̃^(d), X_t^(d)) for model training. The right side illustrates the patch-based diffusive image restoration pipeline (four patches shown as an example).
3.3 Cross-modality Fusion Mamba (CFM)
The goal of Cross-modality Fusion Mamba (CFM) is to introduce the advanced state space model (SSM), i.e., Mamba [10], to cross-modality object detection. Structured state space sequence models (S4) and Mamba are inspired by continuous systems that map a 1-D function or sequence x(t) ∈ R → y(t) ∈ R through a hidden state h(t) ∈ R^N. Such a system uses A ∈ R^{N×N} as the evolution parameter and B ∈ R^{N×1}, C ∈ R^{1×N} as the projection parameters, so that y(t) evolves as follows:
h'(t) = A h(t) + B x(t),  y(t) = C h(t).   (3)
S4 and Mamba are discrete versions of this continuous system, introducing a timescale parameter Δ to transform the continuous parameters A, B into discrete parameters \bar{A}, \bar{B} as follows:
\bar{A} = \exp(\Delta A),  \bar{B} = (\Delta A)^{-1}(\exp(\Delta A) - I) \cdot \Delta B.   (4)
After that, Eq. (3) can be rewritten as:
h_t = \bar{A} h_{t-1} + \bar{B} x_t,  y_t = C h_t.   (5)
Finally, the models compute the output through a global convolution:
\bar{K} = (C\bar{B}, C\bar{A}\bar{B}, \ldots, C\bar{A}^{M-1}\bar{B}),  y = x * \bar{K},   (6)
where M is the length of the input sequence x and \bar{K} ∈ R^M is a structured convolution kernel.
Standard Mamba is designed for 1-D sequences. As shown in Vision Mamba (Vim) [61], a 2-D multispectral image t ∈ R^{H×W×C} can be transformed into flattened 2-D patches x_p ∈ R^{J×(P^2·C)}, where (H, W) is the size of the input image, C is the number of channels, and P is the size of each image patch. Similarly, we linearly project x_p to vectors of size D and add position embeddings E_pos ∈ R^{(J+1)×D} as follows:
T_0 = [t_{cls}; t_p^1 W; t_p^2 W; \ldots; t_p^J W] + E_{pos},   (7)
where t_p^j is the j-th patch of t and W ∈ R^{(P^2·C)×D} is a learnable projection matrix.
Here are more details of the proposed CFM. As mentioned in the introduction, the RGB and thermal modalities exhibit different characteristics under different lighting and weather conditions; their features are partly complementary and partly redundant. We therefore aim to design a block that suppresses redundant features and fuses complementary ones, so as to efficiently harvest essential cross-modal cues for object detection under adverse weather. Motivated by the concept of Cross-Attention [1], we introduce a new cross-modality Mamba block to fuse features from the different modalities. As shown in Fig. 2, to encourage feature interaction between the RGB and thermal modalities, we use a Channel Swapping Mamba (CS) block [12], which incorporates information from different channels and enhances cross-modality correlations. Given RGB features F_{R_i} and thermal features F_{T_i}, the first half of the channels from F_{R_i} is concatenated with the latter half of F_{T_i} and processed through a Mamba block for feature extraction; the obtained features are added to F_{R_i}, creating a new feature F'_{R_i}. Meanwhile, the first half of F_{T_i} is concatenated with the latter half of F_{R_i} and passed through a Mamba block; the obtained features are added to F_{T_i}, creating a new feature F'_{T_i}.
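A minimal sketch of this channel-swapping step is given below, assuming token features of shape (batch, length, channels) and any callable `mamba_block` over such sequences; the helper name is hypothetical and only mirrors the description above.

```python
import torch

def channel_swap(f_rgb, f_thermal, mamba_block):
    """Channel Swapping (CS): exchange half of the channels between the
    RGB and thermal token sequences, run each mix through a Mamba block,
    and add the result back to the original features (residual)."""
    c = f_rgb.shape[-1] // 2
    mix_rgb = torch.cat([f_rgb[..., :c], f_thermal[..., c:]], dim=-1)      # first half RGB + latter half T
    mix_thermal = torch.cat([f_thermal[..., :c], f_rgb[..., c:]], dim=-1)  # first half T + latter half RGB
    f_rgb_new = f_rgb + mamba_block(mix_rgb)
    f_thermal_new = f_thermal + mamba_block(mix_thermal)
    return f_rgb_new, f_thermal_new
```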
Subsequently, we project the features F'_{R_i} and F'_{T_i} into a shared space during feature fusion, using a gating mechanism to encourage complementary feature learning while restraining redundant features. As shown in Fig. 2, we first normalize every token sequence in F'_{R_i} and F'_{T_i} with a Norm block, which helps to improve the convergence speed and performance of the model. We then project each input sequence through linear layers and apply SiLU as the activation function. \bar{A}_o, \bar{B}_o, and C_o are generated by the parameter function:
\bar{A}_o, \bar{B}_o, C_o = \mathrm{ParametersFunction}(x'_o),   (8)
where x'_o = \mathrm{Linear}^x_o(\mathrm{Norm}(F'_{o_i})) and o \in \{R, T\}. After that, we apply the state space model (SSM):
y_o = \mathrm{SSM}(\bar{A}_o, \bar{B}_o, C_o)(x'_o).   (9)
Then we apply the gating operation, followed by a residual connection:
z = \mathrm{Linear}^z(F'_{T_i}),   (10)
y'_R = y_R \odot \mathrm{SiLU}(z),   (11)
y'_T = y_T \odot \mathrm{SiLU}(z),   (12)
F_i = \mathrm{Reshape}(\mathrm{Linear}^T(y'_R + y'_T) + F'_i).   (13)
Finally, we obtain the fused 2-D feature F_i. Different from CFT [34], our fusion block improves computational efficiency while inheriting the global receptive field and dynamic weighting. Comparing the state space model (SSM) in our CFM block with the self-attention mechanism of the transformer in CFT [34], both provide global context adaptively, but self-attention is quadratic in sequence length while the SSM is linear [61]. To achieve lower memory usage when dealing with long sequences, CFM adopts the same recomputation strategy as Mamba. An experiment on the SWVID and LLVIP datasets, whose resolution is 1080 × 720, shows that CFT requires 21.88GB of GPU memory while CFM only requires 10.72GB, saving 11.16GB under the same configuration.
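The following PyTorch sketch mirrors Eqs. (8)-(13) at a block level, hiding the input-dependent SSM of Eqs. (8)-(9) behind two callables `ssm_r` and `ssm_t`; the layer names, the omitted reshape, and the choice of residual branch are assumptions for illustration rather than the released CFM code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCrossModalFusion(nn.Module):
    """Sketch of Eqs. (8)-(13): per-modality Norm and linear projections,
    SSM branches, a shared SiLU gate from the thermal tokens, and a final
    linear projection with a residual connection."""
    def __init__(self, dim, ssm_r, ssm_t):
        super().__init__()
        self.norm_r, self.norm_t = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.in_r, self.in_t = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)           # z = Linear^z(F'_T), Eq. (10)
        self.out = nn.Linear(dim, dim)            # Linear^T in Eq. (13)
        self.ssm_r, self.ssm_t = ssm_r, ssm_t     # SSM branches producing y_R, y_T (Eq. 9)

    def forward(self, f_r, f_t):                  # token sequences of shape (B, L, dim)
        y_r = self.ssm_r(self.in_r(self.norm_r(f_r)))        # y_R
        y_t = self.ssm_t(self.in_t(self.norm_t(f_t)))        # y_T
        z = F.silu(self.gate(self.norm_t(f_t)))              # SiLU(z), Eqs. (10)-(12)
        return self.out(y_r * z + y_t * z) + f_t             # Eq. (13); residual on the thermal tokens (assumed)
```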
3.4 Loss Functions
As a two-stage pre-trained model, we carefully design the training loss functions to produce enhanced results with minimum blurriness and details closest to the ground-truth images, and to extract the differences between the RGB and thermal modalities. For training WRDM, the goal of the loss function is to maximize the data log-likelihood log p_θ(x_0). Since maximizing this target directly is very challenging, we use variational inference to approximate it: the true posterior distribution p_θ(x_{0:T}) is approximated by introducing a variational distribution q(x_{1:T}|x_0) and minimizing the difference between these two distributions. Defining L_θ = −log p_θ(x_0), we have:
L_\theta = \sum_{t=1}^{T} E_q\big[\log p_\theta(x_0|x_T)\big] - \sum_{t=1}^{T-1} E_{q(x_{t-1}|x_t)}\big[ D_{KL}\big(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t)\big) \big],   (14)
where the second term is the expected Kullback-Leibler divergence between q(x_{t-1}|x_t, x_0) and p_θ(x_{t-1}|x_t).
In alignment with prevalent practice in this field, the overall detection loss L_total is the sum of the bounding-box regression loss L_box, the classification loss L_cls, and the confidence loss L_conf = L_noobj + L_obj:
L_{total} = L_{box} + L_{cls} + L_{noobj} + L_{obj}.   (15)
Details of the loss functions for CFMW are elucidated in the supplementary material.
4 EXPERIMENTS
4.1 Established SWVID Benchmark
Dataset. The color gamut of visible images is weakened by environmental disturbance in dynamic environments, and existing fusion methods struggle to fully fuse the visible and infrared spectra due to a lack of sufficient training on corresponding datasets. As shown in Fig. 4, we established the SWVID benchmark, constructed from public datasets (i.e., LLVIP [18], M3FD [27], MSRS [41]) collected in real scenes. It contains a variety of uniformly distributed scenes (daylight, night, rain, foggy, and snow), simulating real environments through combinations of different scenes. Furthermore, we provide the corresponding ground-truth image for each visible image affected by adverse weather, for training image fusion and image restoration networks. As shown in Table 1, compared with previous visible-infrared datasets, SWVID is the first to consider weather conditions.
Figure 4 caption: Overview of the established SWVID benchmark. The dataset includes three weather conditions (Rain, Foggy, and Snow) and two scenarios (Daylight and Night), providing 60,000 images in total.
Table 1: Comparisons of the SWVID benchmark with existing visible-infrared datasets (✓ means available, ✗ the opposite).
| Dataset | Year | Resolution | Publication | Daylight | Night | Weather |
| KAIST [16] | 2015 | 640×512 | CVPR | ✓ | ✓ | ✗ |
| FLIR [8] | 2018 | 640×512 | - | ✓ | ✓ | ✗ |
| RoadScene [50] | 2020 | 640×512 | AAAI | ✓ | ✓ | ✗ |
| LLVIP [18] | 2021 | 1080×720 | ICCV | ✓ | ✓ | ✗ |
| MSRS [41] | 2022 | 640×480 | Info. Fusion | ✓ | ✓ | ✗ |
| M3FD [27] | 2022 | 640×512 | CVPR | ✓ | ✓ | ✗ |
| VTUAV [32] | 2022 | 1920×1080 | CVPR | ✓ | ✓ | ✗ |
| SWVID | 2024 | 1080×720 | Proposed | ✓ | ✓ | ✓ |
Specifically, we constructed the dataset from public visible-infrared datasets as follows:
D_{rain}(J(x)) = J(x)(1 - M_r(x)) + R(x) M_r(x),   (16)
D_{snow}(J(x)) = J(x)(1 - M_s(x)) + S(x) M_s(x),   (17)
D_{foggy}(J(x)) = J(x)\, e^{-\int_0^{d(x)} \beta\, dl} + \int_0^{d(x)} L_\infty \beta e^{-\beta l}\, dl,   (18)
where x denotes a spatial location in an image; D_rain(J(x)), D_snow(J(x)), and D_foggy(J(x)) map a clear image to one with rain, snow, and fog particle effects, respectively; J(x) is the clear image with no weather effects; M_r(x) and M_s(x) are the rain and snow masks; R(x) is a map of the rain streaks; and S(x) is a chromatic-aberration map of the snow particles. Considering scattering effects, d(x) is the distance from the observer at pixel location x, β is the atmospheric attenuation coefficient, and L_∞ is the radiance of light.
We divide SWVID into a training set (34,280 images), a validation set (17,140 images), and a test set (8,570 images); each folder contains three parts: pairs of visible-infrared images and the corresponding weather-influenced visible images. The weather-influenced visible images cover three weather conditions, classified as SWVID-snow, SWVID-rain, and SWVID-foggy. During training, we use pairs of weather-influenced and ground-truth images to train WRDM in the first stage, then pairs of ground-truth and infrared images with the corresponding labels to train CFM in the second stage. During validation and testing, we use pairs of weather-influenced and infrared images directly, verifying and testing the performance of CFMW under realistic conditions. We use the same protocol when evaluating other networks in the comparative experiments.
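To illustrate Eqs. (16)-(18), here is a minimal NumPy sketch that composites rain, snow, or fog onto a clear image, assuming the masks, particle maps, and depth map are already given as float arrays; it is a toy rendering of the formulation, not the pipeline used to build SWVID.

```python
import numpy as np

def add_rain(J, M_r, R):
    """Eq. (16): blend a rain-streak map R into the clear image J via mask M_r.
    J, M_r, R share the same shape, with values in [0, 1]."""
    return J * (1.0 - M_r) + R * M_r

def add_snow(J, M_s, S):
    """Eq. (17): blend a snow chromatic-aberration map S via mask M_s."""
    return J * (1.0 - M_s) + S * M_s

def add_fog(J, depth, beta=0.05, L_inf=1.0):
    """Eq. (18) with a spatially constant attenuation coefficient beta:
    per-pixel transmission exp(-beta * d(x)) plus the accumulated airlight."""
    t = np.exp(-beta * depth)[..., None]       # depth: (H, W) map of distances d(x)
    return J * t + L_inf * (1.0 - t)
```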
Evaluation metrics. We adopt the conventional peak signal-to-noise ratio (PSNR) [15] and structural similarity (SSIM) [47] for quantitative comparison between ground-truth and restored images. PSNR is mainly used to evaluate the degree of distortion after image processing, while SSIM pays more attention to the structural information and visual quality of the images:
\mathrm{PSNR} = 10 \times \lg\!\left(\frac{(2^n - 1)^2}{\mathrm{MSE}}\right),   (19)
\mathrm{SSIM} = [l(x, y)]^{\alpha} \cdot [c(x, y)]^{\beta} \cdot [s(x, y)]^{\gamma}.   (20)
For the object detection experiments, we adopt three metrics, mean Average Precision (mAP, mAP50, and mAP75), to evaluate the accuracy of the object detection models. For more calculation details, please refer to the supplementary material.
Table 2: Quantitative comparisons in terms of PSNR and SSIM (higher is better) with state-of-the-art image deraining, dehazing, and desnowing methods. For fairness, we uniformly use the visible-light part of the established SWVID dataset for evaluation.
| Image-Deraining (SWVID-rain, RGB) | PSNR↑ | SSIM↑ | Image-Dehazing (SWVID-foggy, RGB) | PSNR↑ | SSIM↑ | Image-Desnowing (SWVID-snow, RGB) | PSNR↑ | SSIM↑ |
| pix2pix [17] | 19.95 | 0.7270 | pix2pix [17] | 25.12 | 0.8359 | SPANet [46] | 29.92 | 0.8260 |
| CycleGAN [60] | 17.65 | 0.6452 | DuRN [29] | 31.44 | 0.9256 | DDMSNet [57] | 34.87 | 0.9462 |
| PCNet [19] | 27.13 | 0.8546 | AttentiveGAN [33] | 32.56 | 0.9331 | DesnowNet [2] | 32.15 | 0.9416 |
| MPRNet [53] | 29.14 | 0.9022 | IDT [49] | 34.14 | 0.9412 | RESCAN [24] | 30.57 | 0.9003 |
| de-rain (ours) | 36.78 | 0.9464 | de-haze (ours) | 36.53 | 0.9795 | de-snow (ours) | 42.23 | 0.9821 |
| All-in-One [23] | 25.13 | 0.8856 | All-in-One [23] | 31.24 | 0.9122 | All-in-One [23] | 28.12 | 0.8815 |
| TransWeather [42] | 29.77 | 0.9107 | TransWeather [42] | 33.85 | 0.9388 | TransWeather [42] | 35.15 | 0.9417 |
| WRDM (ours) | 35.02 | 0.9322 | WRDM (ours) | 35.88 | 0.9602 | WRDM (ours) | 40.98 | 0.9578 |
Table 3: Comparison with other networks on the SWVID-snow dataset.
| Model | Data | Backbone | mAP50↑ | mAP75↑ | mAP↑ |
| mono-modality networks |
| Faster R-CNN [36] | RGB | ResNet50 | 82.3 | 34.6 | 30.7 |
| Faster R-CNN [36] | Thermal | ResNet50 | 90.6 | 63.7 | 55.4 |
| SDD [28] | RGB | VGG16 | 73.6 | 37.8 | 38.6 |
| SDD [28] | Thermal | VGG16 | 88.6 | 55.6 | 50.2 |
| YOLOv3 [35] | RGB | Darknet53 | 78.3 | 29.4 | 24.4 |
| YOLOv3 [35] | Thermal | Darknet53 | 84.6 | 50.7 | 47.4 |
| YOLOv5 [20] | RGB | CSPD53 | 80.7 | 38.2 | 30.7 |
| YOLOv5 [20] | Thermal | CSPD53 | 90.5 | 65.2 | 57.6 |
| YOLOv7 [45] | RGB | CSPD53 | 85.3 | 41.8 | 34.9 |
| YOLOv7 [45] | Thermal | CSPD53 | 91.8 | 67.6 | 60.4 |
| multi-modality networks |
| Baseline | RGB+T | CSPD53 | 92.2 | 68.4 | 59.3 |
| CFT [34] | RGB+T | CFB | 92.4 | 71.1 | 58.4 |
| CFMW (ours) | RGB+T | CFM | 97.2 | 76.9 | 63.4 |
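For reference, a minimal NumPy sketch of the PSNR in Eq. (19) for n-bit images is given below; SSIM in Eq. (20) multiplies luminance, contrast, and structure terms and is typically computed with an existing library routine (e.g., skimage.metrics.structural_similarity), which we assume rather than re-derive here.

```python
import numpy as np

def psnr(gt, restored, n_bits=8):
    """Eq. (19): PSNR = 10 * lg((2^n - 1)^2 / MSE) for n-bit images."""
    mse = np.mean((gt.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return 10.0 * np.log10(((2 ** n_bits - 1) ** 2) / mse)
```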
4.2 Implementation Details
For WRDM, we performed experiments in both specific-weather and multi-weather image restoration settings. We denote the specific-weather restoration models as de-rain, de-snow, and de-foggy to verify the general WRDM model under individual weather conditions. We trained the 128 × 128 patch-size version of all models. All experiments were run on NVIDIA RTX 4090 cards, using Adam as the optimizer for every model we compare. WRDM was trained for 3 × 10^6 iterations. For CFM, we did not perform task-specific parameter tuning or modifications to the network architecture. For better performance, we initialize with the YOLOv5 public weights (yolov5s.pt), pre-trained on the COCO dataset [26].
4.3 Comparative Experiments
In this section, we compare against several state-of-the-art methods in image deweathering and cross-modality object detection separately. In Table 2, we compare with methods for image desnowing (SPANet [46], DDMSNet [57], DesnowNet [2], RESCAN [24]), deraining (pix2pix [17], CycleGAN [60], PCNet [19], MPRNet [53]), and dehazing (pix2pix [17], DuRN [29], AttentiveGAN [33], IDT [49]), as well as two state-of-the-art multi-weather image restoration methods, All in One [23] and TransWeather [42]. In Table 3 and Table 4, to demonstrate the consistent improvements of CFMW, we compare with several single-modality object detection methods (Faster R-CNN [36], SDD [28], YOLOv3 [35], YOLOv5 [20], YOLOv7 [45]) and several multi-modality object detection methods (our baseline, a standard two-stream YOLOv5 object detection network, and CFT [34]).
Table 4: Comparison with other networks on the LLVIP [18] dataset.
| Model | Data | Backbone | mAP50↑ | mAP75↑ | mAP↑ |
| mono-modality networks |
| Faster R-CNN [36] | RGB | ResNet50 | 91.4 | 48.0 | 49.2 |
| Faster R-CNN [36] | Thermal | ResNet50 | 96.1 | 68.5 | 61.1 |
| SDD [28] | RGB | VGG16 | 82.6 | 31.8 | 39.8 |
| SDD [28] | Thermal | VGG16 | 90.2 | 57.9 | 53.5 |
| YOLOv3 [35] | RGB | Darknet53 | 85.9 | 37.9 | 43.3 |
| YOLOv3 [35] | Thermal | Darknet53 | 89.7 | 53.4 | 52.8 |
| YOLOv5 [20] | RGB | CSPD53 | 90.8 | 51.9 | 50.0 |
| YOLOv5 [20] | Thermal | CSPD53 | 94.6 | 72.2 | 61.9 |
| YOLOv7 [45] | RGB | CSPD53 | 91.4 | 58.4 | 53.6 |
| YOLOv7 [45] | Thermal | CSPD53 | 94.6 | 70.6 | 62.4 |
| multi-modality networks |
| Baseline | RGB+T | CSPD53 | 95.2 | 71.4 | 62.3 |
| CFT [34] | RGB+T | CFB | 97.5 | 72.9 | 63.6 |
| CFMW (ours) | RGB+T | CFM | 98.8 | 77.2 | 64.8 |
Table 5: Ablation experiments on the SWVID-snow dataset. To present the general effectiveness of CFMW, we further combine the WRDM and CFM modules with other classical detectors (YOLOv7, YOLOv5, Faster R-CNN).
| Modality | Method | Detector | mAP50↑ | mAP75↑ | mAP↑ |
| RGB | CSPDarknet53 | YOLOv7 [45] | 85.3 | 41.8 | 34.9 |
| Thermal | CSPDarknet53 | YOLOv7 [45] | 95.8 | 72.6 | 60.4 |
| RGB+T | +two stream | YOLOv7 [45] | 95.4 | 68.1 | 60.4 |
| RGB+T | +CFM | YOLOv7 [45] | 95.5 | 68.6 | 63.3 |
| RGB+T | +WRDM | YOLOv7 [45] | 96.5 | 70.9 | 63.1 |
| RGB+T | +CFM&WRDM | YOLOv7 [45] | 96.6 | 75.1 | 64.1 |
| RGB | CSPDarknet53 | YOLOv5 [20] | 80.7 | 38.2 | 30.7 |
| Thermal | CSPDarknet53 | YOLOv5 [20] | 90.5 | 65.2 | 57.6 |
| RGB+T | +two stream | YOLOv5 [20] | 92.2 | 68.4 | 59.3 |
| RGB+T | +CFM | YOLOv5 [20] | 96.5 | 70.6 | 63.3 |
| RGB+T | +WRDM | YOLOv5 [20] | 96.4 | 71.2 | 62.8 |
| RGB+T | +CFM&WRDM | YOLOv5 [20] | 97.2 | 76.9 | 63.4 |
| RGB | ResNet50 | Faster R-CNN [36] | 82.3 | 34.6 | 30.7 |
| Thermal | ResNet50 | Faster R-CNN [36] | 90.6 | 63.7 | 55.4 |
| RGB+T | +two stream | Faster R-CNN [36] | 93.7 | 62.8 | 55.4 |
| RGB+T | +CFM | Faster R-CNN [36] | 96.7 | 69.5 | 61.9 |
| RGB+T | +WRDM | Faster R-CNN [36] | 96.2 | 69.4 | 61.6 |
| RGB+T | +CFM&WRDM | Faster R-CNN [36] | 96.2 | 69.7 | 62.2 |
Comparison of image deweathering. As shown in Table 2, we use the RGB modality of the SWVID dataset (covering rain, foggy, and snow conditions) to measure the performance of different models under different weather conditions. The top of the table contains results from specific-weather image restoration, for which we use S = 50 sampling time steps. For the image-deraining, image-dehazing, and image-desnowing tasks, the proposed solution consistently achieves the best results (36.78/0.9464 on SWVID-rain, 36.53/0.9795 on SWVID-foggy, and 42.23/0.9821 on SWVID-snow). In particular, on the image-deraining task the performance improvement is about 24% compared with the current state-of-the-art method (MPRNet [53]).
For multi-weather image restoration, although the results are not as strong as those of the specific-weather models due to the complexity of the task, the proposed method still achieves the best results (35.02/0.9322 on SWVID-rain, 35.88/0.9602 on SWVID-foggy, and 40.98/0.9578 on SWVID-snow) compared with All in One [23] and TransWeather [42], with about a 17% improvement over TransWeather [42] and about a 25% improvement over All in One [23].
Comparison of cross-modality object detection. As shown in Table 3 and Table 4, we use LLVIP [18] and SWVID-snow as the comparison datasets. Compared with SWVID-rain and SWVID-foggy, the size of pedestrians in these two datasets is more in line with general object detection standards, and they contain more complex cases of pedestrian overlap, which better measure the accuracy of object detection networks. The top of each table contains results from single-modality networks, each using either the RGB or the thermal modality for detection. The bottom of each table shows results from multi-modality networks, including our baseline, CFT [34], and the proposed CFMW. As Table 3 shows, with the integration of WRDM and CFM, CFMW achieves a clear performance improvement on every metric (mAP50: +2.3, mAP75: +4.3, mAP: +3.0) on SWVID-snow compared with the best existing network per metric, indicating preferable adaptability under adverse weather conditions. CFMW also achieves more accurate detection (mAP50: 98.8, mAP75: 77.2, mAP: 64.8) with lower computational consumption, as shown in Table 4, which demonstrates the generality of CFMW.
4.4 Ablation Study
In this section, we analyze the effectiveness of CFMW. We first validate the importance of the WRDM and CFM modules for performance improvement through detailed ablation experiments, then visually show the role of WRDM in cross-modality fusion and object detection tasks to highlight its versatility as a weather-restoration plug-in.
Ablation experiments. To understand the impact of each component in our method, we performed a comprehensive set of ablation experiments. As shown in Table 5, we further combine CFM and WRDM with other classical detectors, i.e., YOLOv7 [45], YOLOv5 [20], and Faster R-CNN [36], to present the general effectiveness of our CFMW. The proposed CFMW improves the performance of cross-modality object detection with either a one-stage or a two-stage detector under complex weather conditions. Specifically, CFM achieves an 11.3% gain on mAP50, an 81.6% gain on mAP75, and a 78.3% gain on mAP (with YOLOv5 [20]). After adding WRDM, we achieve a 12.1% gain on mAP50, an 88.2% gain on mAP75, and an 80.4% gain on mAP. CFM and WRDM thus provide non-negligible gains for all the considered evaluation metrics.
Visual interpretation. To intuitively verify the applicability of WRDM as a plug-in, we visually show its application to visible-infrared image fusion and object detection. As shown in Fig. 5, we compare two visible-infrared image fusion methods (CDDFuse [59] and DeFusion [25]) with and without WRDM. Compared with the original images, the fusion results of the two methods differ considerably before and after applying WRDM: more people at the far end of the images can be detected successfully after deweathering.
Figure 5 caption: Examples of daylight and night scenes for multimodal fusion and object detection visualization, covering three adverse weather conditions (rain, haze, and snow). We embed WRDM into two state-of-the-art visible-infrared fusion methods (CDDFuse [59] and DeFusion [25]) to mitigate the adverse impact of weather conditions.
In cross-modality object detection, rich image details provide great assistance for feature extraction and fusion, whereas direct fusion without removing the weather influence causes loss of and interference with image details.
5 CONCLUSION
In this work, we introduce a novel approach to visible-infrared object detection under severe weather conditions, together with the Severe Weather Visible-Infrared Dataset (SWVID), providing a valuable resource for training and evaluating models in realistic and challenging environments. The Cross-modality Fusion Mamba with Weather-removal (CFMW) model has proven highly effective in enhancing detection accuracy while managing computational cost. Our extensive experiments show that CFMW outperforms existing benchmarks, achieving state-of-the-art results on both tasks: multi-weather image restoration and cross-modality object detection. This work opens up new possibilities for cross-modality object detection in adverse weather." }
Cross-modality Object Detection The existing cross-modality object detection methods can be divided into two categories: feature level and pixel level fusion, distinguished through feature fusion methods and timing. Recently, dual stream object detection models based on convolutional neural networks have made great progress in improving recognition performance [4, 34, 37, 54, 55], while pixel level fusion methods have also achieved good performance [5, 44, 59]. Other works employing methods such as GAN to effective integration also have achieved good results [51, 58, 59]. Those works can be integrated into downstream tasks such as object detection. Traditional convolutional neural networks have limited receptive fields that the information is only integrated into a local area when using the convolution operator, where the self-attention operator of the transformer can learn long-range dependencies [43]. Thus, a transformer-based method, named Cross-Modality Fusion Transformer (CFT) [34], was presented and achieved state-of-theart detection performance. Differing from these works, we first introduce Mamba into cross-modality object detection to learn long-range dependencies with gating mechanisms, achieving high accuracy and low computation overhead simultaneously. State Space Model The concept of the State Space Model was initially introduced in the S4 model [11], presenting a distinctive architecture capable of effectively modeling global information, compared with traditional convolutional neural networks and transformers. Based on S4, the S5 model [38] reduces complexity to a linear level, with H3 [31] introducing it into language model tasks. Mamba [10] introduced an input-activate mechanism to enhance the State Space model, achieving higher inference speed and overall metrics compared with equivalent-scale transformers. With the introduction of Vision Mamba [61] and Vmamba [30], the application of the State Space Model has been extended into visual tasks. Currently, existing research does not consider effectively generalizing the State Space Model to cross-modality object detection. Multi-Weather Image Restoration Recently, some attempts have been made to unity multiple recovery tasks in a single deep learning framework, including generating modeling solutions to recover superimposed noise types [9], recovering superimposed noise or weather damage with unknown test time, or especially unfavorable multi-weather image fading [3, 22, 42]. All in One [23] unified a weather restoration method with a multi-encoder and decoder architecture. It is worth noting that diffusion-based conditional generative models have shown state-of-the-art performance in various tasks such as class-conditional data synthesis with classifier guidance [7], image super-resolution [14], image deblurring [48]. Denosing diffusion restoration models (DDRM) [21] were proposed for general linear inverse image restoration problems, exploiting pro-trained denoising diffusion models for unsupervised posterior sampling. Generally, diffusion models were so far not considered to be generalized to adverse weather scenes in the cross-modality image fusion field. Unlike existing works, we expand the multiweather restoration to the field of cross-modality fusion. 3 PROPOSED FRAMEWORK 3.1 Overview As shown in Fig. 2, CFMW comprises two main stages. 
In the multi-weather image restoration stage, we aim to achieve image restoration of three types of adverse weather conditions (rain, snow, and haze) and implement it using a unified framework with only one pre-trained weight. In the cross-modality fusion stage, we aim to integrate unique features of different modalities. Inspired by CFT [34], to show the effectiveness of our proposed CFM fusion model, we extend the framework of YOLOv5 to enable multispectral object detection. We present our carefully designed loss functions and training procedure for WRDM and CFM in the last subsection. 3.2 Weather Removal Diffusion Model (WRDM) Denoising diffusion models [13, 39] are a class of generative models, that learn a Markov chain that gradually transforms a Gaussian Figure 2: Framework of Cross-Modality Fusion Mamba backbone. It has three parts: a Weather Removal Diffusion Model (WRDM), a two-stream feature extraction network (our baseline), and three Cross-Modality Fusion Mamba (CFM) modules. \u00c9 represents element-wise add, \u00cb represents element-wise multiply, and C1 is short of 1-dimension convolutions. noise distribution into the data distribution trained by the models. The original denoising diffusion probabilistic models (DDPMs)[13] diffusion process (data to noise) and generative process (noise to data) are based on a Markov chain process, resulting in a large number of steps and huge time consumption. Thus, denoising diffusion implicit models (DDIMs) [40] were presented to accelerate sampling, providing a more efficient class of iterative implicit probabilistic models. DDIMs define the generative process via a class of non-Markovian diffusion processes that lead to the same training objective as DDPMs but can produce deterministic generative processes, thus speeding up sample generation. In DDIMs, implicit sampling refers to the generation of samples from the latent space of the model in a deterministic manner. Implicit sampling using a noise estimator network can be performed by: \ud835\udc4b\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 (\ud835\udc4b\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 ) +\u221a1 \u2212 \u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61). (1) where \ud835\udc4b\ud835\udc61and \ud835\udc4b\ud835\udc61\u22121 represent the data \ud835\udc4b0 \u223c\ud835\udc5e(\ud835\udc4b0)) in different diffusion time steps, \ud835\udefc\ud835\udc61= 1 \u2212\ud835\udefd\ud835\udc61, \u00af \ud835\udefc\ud835\udc61= \ud835\udc61 \u00ce \ud835\udc56=1 \ud835\udefc\ud835\udc56, and \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61,\ud835\udc61) can be optimized as: E\ud835\udc4b0,\ud835\udc61,\ud835\udf16\ud835\udc61\u223c\ud835\udc41(0, \ud835\udc70), [\u2225\ud835\udf16\ud835\udc61\u2212\ud835\udf16\ud835\udf03(\u221a\u00af \ud835\udefc\ud835\udc61\ud835\udc4b0+\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udf16\ud835\udc61,\ud835\udc61\u22252]. Conditional diffusion models have shown state-of-the-art imageconditional data synthesis and editing capabilities [6, 7]. The core idea is to learn a conditional reverse process without changing the diffusion process. Our proposed WRDM is a conditional diffusion model, adding reference images (clear images) in the process of sampling to guide the reconstructed image to be similar to reference images. 
As shown in Fig. 3, we introduce a new parameter e \ud835\udc4b, which represents the weather-degraded observation. A Markov chain is defined as a diffusion process, and Gaussian noise is gradually added to simulate the gradual degradation of data samples until reaching time point \ud835\udc47. We ground our model hyper-parameters via a U-Net architecture based on WideResNet [52]. For the input images conditional reflection, we connect patch \ud835\udc65\ud835\udc47and e \ud835\udc65, to obtain the six-dimensional input image channel. Conditioning the reverse process on e \ud835\udc4bcan maintain its compatibility with implicit sampling, so we could expand Eq. (1) as: \ud835\udc4b\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 (\ud835\udc4b\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61, e \ud835\udc4b,\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 ) +\u221a1 \u2212 \u00af \ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc4b\ud835\udc61, e \ud835\udc4b,\ud835\udc61). (2) The sampling process starts from \ud835\udc4b\ud835\udc47\u223c\ud835\udc41(0, \ud835\udc70), following a deterministic reverse path towards \ud835\udc4b0 with fidelity. See more derivation details in the supplementary material. Our proposed WRDM is a patch-based conditional diffusion model, guiding the reverse sampling process toward smoothness across neighboring patches. During training, we randomly sample the \ud835\udc5d\ud835\udc65\ud835\udc5dpatch location for \ud835\udc43\ud835\udc56within the compute of image dimensions. Under any given time step \ud835\udc47, we reverse-sample the average estimated noise of each pixel in the overlapping patch area according to Fig. 3, which effectively controls the reverse sampling process to ensure that all adjacent patches have higher fidelity. Furthermore, WRDM can be regarded as a plug-in, embedded into other works such as visible-infrared image fusion to remove the influence of multi-weather conditions, which is demonstrated experimentally in Fig. 5. 3.3 Cross-modality Fusion Mamba (CFM) The goal of Cross-modality Fusion Mamba (CFM) is to introduce the advanced state space model (SSM), or Mamba [10], to crossmodality object detection. Structured state space sequence models (S4) and Mamba are inspired by the continuous system, mapping a 1-D function or sequence \ud835\udc65(\ud835\udc61) \u2208R \u2192\ud835\udc66(\ud835\udc61) through a hidden Figure 3: Schematic diagram of WRDM training and reasoning process. The left side is the framework of WRDM. We use a paired data distribution (e \ud835\udc4b,\ud835\udc4b\ud835\udc61), splitting into (e \ud835\udc4b(\ud835\udc51),\ud835\udc4b(\ud835\udc51) \ud835\udc61 ) for model-training. The right side is the illustration of the patch-based diffusive image restoration pipeline (4 patches for example here). state \u210e(\ud835\udc61) \u2208R\ud835\udc41. This system uses \ud835\udc68\u2208R\ud835\udc41\u00d7\ud835\udc41as the evolution parameter and \ud835\udc69\u2208R\ud835\udc41\u00d71, \ud835\udc6a\u2208R1\u00d7\ud835\udc41as the projection parameters, so that \ud835\udc66(\ud835\udc61) could evolve as follows: \u210e\u2032(\ud835\udc61) = \ud835\udc68\u210e(\ud835\udc61) + \ud835\udc69\ud835\udc65(\ud835\udc61), \ud835\udc66(\ud835\udc61) = \ud835\udc6a\u210e\u2032(\ud835\udc61). 
(3) Notice that S4 and Mamba are the discrete versions of the continuous system, including a timescale parameter \u0394 to transform the continuous parameters \ud835\udc34, \ud835\udc35to discrete parameters \u00af \ud835\udc68, \u00af \ud835\udc69as follows: \u00af \ud835\udc68= \ud835\udc52\ud835\udc65\ud835\udc5d(\u0394\ud835\udc68), \u00af \ud835\udc69= (\u0394\ud835\udc68)\u22121(\ud835\udc52\ud835\udc65\ud835\udc5d(\u0394\ud835\udc68) \u2212\ud835\udc70) \u00b7 \u0394\ud835\udc69. (4) After that, Eq. (3) could be rewritten as: \u210e\ud835\udc61= \u00af \ud835\udc68\u210e\ud835\udc61\u22121 + \u00af \ud835\udc69\ud835\udc65\ud835\udc61, \ud835\udc66\ud835\udc61= \ud835\udc6a\u210e\ud835\udc61. (5) Finally, the models compute output through a global convolution as follows: \u00af \ud835\udc72= \ud835\udc6a\u00af \ud835\udc69, \ud835\udc6a\u00af \ud835\udc68\u00af \ud835\udc69, ..., \ud835\udc6a\u00af \ud835\udc68\ud835\udc74\u22121 \u00af \ud835\udc69, \ud835\udc66= \ud835\udc65\u2217\u00af \ud835\udc72. (6) where \ud835\udc74is the length of the input sequence x, and \u00af \ud835\udc72\u2208R\ud835\udc40is a structured convolution kernel. Standard Mamba is designed for the 1-D sequence. As shown in Vision Mamba (Vim), 2-D multispectral images \ud835\udc61\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36 could be transformed into the flattened 2-D patches \ud835\udc65\ud835\udc5d\u2208R\ud835\udc3d\u00d7(\ud835\udc432\u00b7\ud835\udc36), where (\ud835\udc3b,\ud835\udc4a) represents the size of input images, \ud835\udc36is the channels, and \ud835\udc43is the size of image patches. Similarly, we linearly project the \ud835\udc65\ud835\udc5dto the vector with size \ud835\udc37and add position embeddings \ud835\udc6c\ud835\udc5d\ud835\udc5c\ud835\udc60\u2208R(\ud835\udc3d+1)\u00d7\ud835\udc37as follows: \ud835\udc7b0 = [\ud835\udc61\ud835\udc50\ud835\udc59\ud835\udc60;\ud835\udc611 \ud835\udc5d\ud835\udc7e;\ud835\udc612 \ud835\udc5d\ud835\udc7e; ...;\ud835\udc61\ud835\udc3d \ud835\udc5d\ud835\udc7e] + \ud835\udc6c\ud835\udc5d\ud835\udc5c\ud835\udc60. (7) where \ud835\udc61\ud835\udc57 \ud835\udc43is the \ud835\udc57\u2212\ud835\udc61\u210epath of \ud835\udc95, \ud835\udc7e\u2208R(\ud835\udc432\u00b7\ud835\udc36)\u00d7\ud835\udc37is the learnable projection matrix. Here are more details of the proposed CFM. As mentioned in the introduction section, the RGB modality and the Thermal modality show different features under different lighting and weather conditions, which are complementary and redundant. Therefore, we aim to design a block to suppress redundant features and fuse complementary to efficiently harvest essential cross-modal cues for object detection against adverse weather conditions. Motivated by the concept of Cross-Attention [1], we introduce a new crossmodality Mamba block to fuse features from different modalities. As shown in Fig. 2, to encourage feature interaction between RGB and Thermal modalities, we use a Channel Swapping Mamba block (CS) [12], which incorporates information from different channels and enhances cross-modality correlations. Given RGB features \ud835\udc39\ud835\udc45\ud835\udc56 and Thermal features \ud835\udc39\ud835\udc47\ud835\udc56, the first half of channels from \ud835\udc39\ud835\udc45\ud835\udc56will be concatenated with the latter half of \ud835\udc39\ud835\udc47\ud835\udc56and processed through the Mamba block for feature extraction. 
Here are more details of the proposed CFM. As mentioned in the introduction, the RGB and thermal modalities exhibit different characteristics under different lighting and weather conditions, which are complementary as well as redundant. Therefore, we design a block that suppresses redundant features and fuses complementary ones to efficiently harvest essential cross-modal cues for object detection under adverse weather conditions. Motivated by the concept of Cross-Attention [1], we introduce a new cross-modality Mamba block to fuse features from different modalities. As shown in Fig. 2, to encourage feature interaction between the RGB and thermal modalities, we use a Channel Swapping Mamba (CS) block [12], which exchanges information across channels and enhances cross-modality correlations. Given RGB features $F_{Ri}$ and thermal features $F_{Ti}$, the first half of the channels of $F_{Ri}$ is concatenated with the latter half of $F_{Ti}$ and processed through a Mamba block for feature extraction. The obtained features are added to $F_{Ri}$, creating a new feature $F'_{Ri}$. Meanwhile, the first half of $F_{Ti}$ is concatenated with the latter half of $F_{Ri}$ and passed through the Mamba block; the obtained features are added to $F_{Ti}$, creating a new feature $F'_{Ti}$. Subsequently, we project the features $F'_{Ri}$ and $F'_{Ti}$ into a shared space during the feature fusion process, using a gating mechanism to encourage complementary feature learning while restraining redundant features. As shown in Fig. 2, we first normalize every token sequence in $F'_{Ri}$ and $F'_{Ti}$ with a Norm block, which helps to improve the convergence speed and performance of the model. Then we project the input sequences through linear layers and apply SiLU as the activation function. $\bar{\mathbf{A}}_o$, $\bar{\mathbf{B}}_o$, and $\mathbf{C}_o$ are generated by the parameter function:
$$\bar{\mathbf{A}}_o, \bar{\mathbf{B}}_o, \mathbf{C}_o = \mathrm{ParametersFunction}(x'_o), \qquad (8)$$
where $x'_o = \mathrm{Linear}^x_o(\mathrm{Norm}(F'_{oi}))$ and $o \in \{R, T\}$. After that, we apply the state space model (SSM):
$$y_o = \mathrm{SSM}(\bar{\mathbf{A}}_o, \bar{\mathbf{B}}_o, \mathbf{C}_o)(x'_o). \qquad (9)$$
Then we apply the gating operation, followed by a residual connection:
$$z = \mathrm{Linear}^z(F'_{Ti}), \qquad (10)$$
$$y'_R = y_R \odot \mathrm{SiLU}(z), \qquad (11)$$
$$y'_T = y_T \odot \mathrm{SiLU}(z), \qquad (12)$$
$$F_i = \mathrm{Reshape}\big(\mathrm{Linear}^T(y'_R + y'_T) + F'_i\big). \qquad (13)$$
Finally, we obtain the fused 2-D feature $F_i$. Different from CFT [34], our fusion block improves computational efficiency while inheriting the global receptive field and dynamic weights.
Figure 4: Overview of the established SWVID benchmark. The dataset includes three weather conditions (i.e., Rain, Foggy, and Snow) and two scenarios (i.e., Daylight and Night), providing 60,000 images in total.
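To make the data flow of Eqs. (8)–(13) easier to follow, the sketch below mirrors the channel swapping, shared-space projection, SSM, gating, and residual steps in PyTorch-style code. The module and argument names are hypothetical, the Mamba/SSM blocks are passed in as opaque callables, and the residual source $F'_i$ in Eq. (13) is taken to be the RGB branch as an assumption; this is a structural sketch rather than the authors' implementation, and the final Reshape back to a 2-D feature map is omitted.

```python
import torch
import torch.nn as nn

class CrossModalityFusionSketch(nn.Module):
    """Illustrative CFM-style fusion: channel swap -> per-modality Mamba/SSM ->
    shared gating -> residual, loosely following Eqs. (8)-(13)."""

    def __init__(self, dim, mamba_r, mamba_t, ssm_r, ssm_t):
        super().__init__()
        self.mamba_r, self.mamba_t = mamba_r, mamba_t      # placeholder Mamba blocks
        self.ssm_r, self.ssm_t = ssm_r, ssm_t              # placeholder SSM callables
        self.norm_r, self.norm_t = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.proj_r, self.proj_t = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.proj_z = nn.Linear(dim, dim)                  # gate branch, Eq. (10)
        self.proj_out = nn.Linear(dim, dim)                # Eq. (13)

    def forward(self, f_r, f_t):
        # f_r, f_t: token sequences of shape (B, L, dim) from the RGB / thermal branch.
        c = f_r.shape[-1] // 2
        # Channel Swapping Mamba: exchange half of the channels between modalities.
        f_r_new = f_r + self.mamba_r(torch.cat([f_r[..., :c], f_t[..., c:]], dim=-1))
        f_t_new = f_t + self.mamba_t(torch.cat([f_t[..., :c], f_r[..., c:]], dim=-1))
        # Project normalized tokens into the shared space, Eq. (8).
        x_r = self.proj_r(self.norm_r(f_r_new))
        x_t = self.proj_t(self.norm_t(f_t_new))
        y_r, y_t = self.ssm_r(x_r), self.ssm_t(x_t)        # Eq. (9)
        gate = nn.functional.silu(self.proj_z(f_t_new))    # Eq. (10) + SiLU
        y_r, y_t = y_r * gate, y_t * gate                  # Eqs. (11)-(12)
        # Residual source assumed to be the swapped RGB feature (F_i' is ambiguous).
        return self.proj_out(y_r + y_t) + f_r_new          # Eq. (13)
```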
Comparing the state space model (SSM) in our CFM block with the self-attention mechanism of the transformers in CFT [34], both play an important role in providing global context adaptively, but self-attention is quadratic in the sequence length while SSM is linear in the sequence length [61]. To achieve lower memory usage when dealing with long sequences, CFM adopts the same recomputation method as Mamba. Experiments on the SWVID and LLVIP datasets, whose resolution is 1080 × 720, show that CFT requires 21.88 GB of GPU memory while CFM only requires 10.72 GB, saving 11.16 GB under the same configuration.
Table 1: Comparisons of the SWVID benchmark with existing visible-infrared datasets. ✓ means available while ✗ denotes the opposite.
Dataset | Year | Resolution | Publication | Daylight | Night | Weather
KAIST [16] | 2015 | 640 × 512 | CVPR | ✓ | ✓ | ✗
FLIR [8] | 2018 | 640 × 512 | – | ✓ | ✓ | ✗
RoadScene [50] | 2020 | 640 × 512 | AAAI | ✓ | ✓ | ✗
LLVIP [18] | 2021 | 1080 × 720 | ICCV | ✓ | ✓ | ✗
MSRS [41] | 2022 | 640 × 480 | Info. Fusion | ✓ | ✓ | ✗
M3FD [27] | 2022 | 640 × 512 | CVPR | ✓ | ✓ | ✗
VTUAV [32] | 2022 | 1920 × 1080 | CVPR | ✓ | ✓ | ✗
SWVID | 2024 | 1080 × 720 | Proposed | ✓ | ✓ | ✓
3.4 Loss Functions
As a two-stage model, CFMW requires carefully designed training loss functions to produce enhanced results with minimum blurriness and details closest to the ground-truth images, and to extract the differences between the RGB and thermal modalities. For training WRDM, the goal of the loss function in this stage is to maximize the data log-likelihood $\log p_\theta(x_0)$. Since maximizing this target directly is very challenging, we use variational inference to approximate it. Variational inference approximates the true distribution $p_\theta(x_{0:T})$ by introducing a variational distribution $q(x_{1:T} \mid x_0)$ and then minimizing the difference between the two. Defining $\mathcal{L}_\theta = -\log p_\theta(x_0)$, we have:
$$\mathcal{L}_\theta = \sum_{t=1}^{T} \mathbb{E}_q\big[\log p_\theta(x_0 \mid x_T)\big] - \sum_{t=1}^{T-1} \mathbb{E}_{q(x_{t-1} \mid x_t)}\big[D_{KL}\big(q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t)\big)\big], \qquad (14)$$
where the second term is the expected value of the Kullback-Leibler divergence between $q(x_{t-1} \mid x_t)$ and $p_\theta(x_{t-1} \mid x_t)$.
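In practice, variational bounds such as Eq. (14) are usually optimized through the standard simplified noise-prediction objective of DDPM-style models. The snippet below sketches one training step under that common simplification, conditioned on the weather-degraded image; it is an assumption about the training recipe, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def wrdm_training_step(x0, x_cond, eps_net, alpha_bar, optimizer):
    """One step of the simplified denoising objective: predict the Gaussian noise
    added at a random time step, conditioned on the degraded observation x_cond."""
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)
    a_t = alpha_bar[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_t.sqrt() * x0 + (1 - a_t).sqrt() * noise           # forward diffusion
    eps_pred = eps_net(torch.cat([x_t, x_cond], dim=1), t)     # 6-channel input
    loss = F.mse_loss(eps_pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```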
In alignment with prevalent practices in this field, the overall loss function ($\mathcal{L}_{total}$) is the sum of the bounding-box regression loss ($\mathcal{L}_{box}$), the classification loss ($\mathcal{L}_{cls}$), and the confidence loss ($\mathcal{L}_{conf} = \mathcal{L}_{noobj} + \mathcal{L}_{obj}$):
$$\mathcal{L}_{total} = \mathcal{L}_{box} + \mathcal{L}_{cls} + \mathcal{L}_{noobj} + \mathcal{L}_{obj}. \qquad (15)$$
Details of the loss function for CFMW are elucidated in the supplementary material.
4 EXPERIMENTS
4.1 Established SWVID benchmark
Dataset. The color gamut of visible images is weakened by environmental disturbance in dynamic environments, and existing fusion methods struggle to fully fuse the visible and infrared spectra owing to the lack of sufficient training on corresponding datasets. As shown in Fig. 4, we establish the SWVID benchmark, which is constructed from public datasets (i.e., LLVIP [18], M3FD [27], MSRS [41]) collected in real scenes. It contains a variety of uniformly distributed scenes (daylight, night, rain, foggy, and snow), simulating real environments through combinations of different scenes. Furthermore, we provide the corresponding ground-truth image for each visible image affected by adverse weather conditions, enabling the training of image fusion and image restoration networks. As shown in Table 1, compared with previous visible-infrared datasets, SWVID is the first one that considers weather conditions.
Figure 5: Examples of daylight and night scenes for multimodal fusion and object detection visualization, including three kinds of adverse weather conditions (rain, haze, and snow). We embed WRDM into two state-of-the-art visible-infrared fusion methods (i.e., CDDFuse [59] and DeFusion [25]) to mitigate the adverse impact of weather conditions.
Specifically, we construct the dataset from public visible-infrared datasets as follows:
$$\mathcal{D}_{rain}(J(x)) = J(x)\big(1 - M_r(x)\big) + R(x)M_r(x), \qquad (16)$$
$$\mathcal{D}_{snow}(J(x)) = J(x)\big(1 - M_s(x)\big) + S(x)M_s(x), \qquad (17)$$
$$\mathcal{D}_{foggy}(J(x)) = J(x)\,e^{-\int_0^{d(x)} \beta \, dl} + \int_0^{d(x)} L_\infty \beta e^{-\beta l} \, dl, \qquad (18)$$
where $x$ represents the spatial location in an image; $\mathcal{D}_{rain}(J(x))$, $\mathcal{D}_{snow}(J(x))$, and $\mathcal{D}_{foggy}(J(x))$ are functions that map a clear image to one with rain, snow, and fog particle effects, respectively; $J(x)$ is the clear image with no weather effects; $M_r(x)$ and $M_s(x)$ are the rain and snow equivalents; $R(x)$ is a map of the rain masks; and $S(x)$ is a chromatic aberration map of the snow particles. Considering scattering effects, $d(x)$ is the distance from the observer at pixel location $x$, $\beta$ is the atmospheric attenuation coefficient, and $L_\infty$ is the radiance of light.
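A minimal sketch of the synthesis functions in Eqs. (16)–(18) is given below, assuming the rain layer R, snow map S, the per-pixel masks, and the depth map d are provided by some external renderer or estimator (how SWVID generates them is not specified here). For the fog model, a spatially constant attenuation coefficient is assumed so the integrals reduce to the classic transmission form.

```python
import numpy as np

def add_rain(J, M_r, R):
    """Eq. (16): composite a rain layer R onto the clear image J using mask M_r.
    J: (H, W, 3); M_r, R broadcastable against J."""
    return J * (1.0 - M_r) + R * M_r

def add_snow(J, M_s, S):
    """Eq. (17): composite a snow chromatic-aberration map S using mask M_s."""
    return J * (1.0 - M_s) + S * M_s

def add_fog(J, d, beta, L_inf):
    """Eq. (18) with constant beta: transmission t = exp(-beta * d), and the
    airlight integral evaluates to L_inf * (1 - t)."""
    t = np.exp(-beta * d)[..., None]      # (H, W, 1) transmission map
    return J * t + L_inf * (1.0 - t)
```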
We divide SWVID into a training set (34,280 images), a validation set (17,140 images), and a test set (8,570 images); each split contains three parts: paired visible-infrared images and the corresponding weather-influenced visible images. Note that the weather-influenced visible images cover three kinds of weather conditions, classified as SWVID-snow, SWVID-rain, and SWVID-foggy. During training, we use the image pairs (weather-influenced and ground-truth) to train WRDM in the first stage, and then use the image pairs (ground-truth and infrared) with the corresponding labels to train CFM in the second stage. During validation and testing, we use the image pairs (weather-influenced and infrared) directly, verifying and testing the performance of CFMW under realistic conditions. We follow the same protocol when evaluating other networks in the comparative experiments.
Table 2: Quantitative comparisons in terms of PSNR and SSIM (higher is better) with state-of-the-art image deraining, dehazing, and desnowing methods. For fairness, we uniformly use the visible-light part of the established SWVID dataset as the evaluation dataset.
Image-Deraining (SWVID-rain, RGB): Method | PSNR↑ | SSIM↑ || Image-Dehazing (SWVID-foggy, RGB): Method | PSNR↑ | SSIM↑ || Image-Desnowing (SWVID-snow, RGB): Method | PSNR↑ | SSIM↑
pix2pix [17] | 19.95 | 0.7270 || pix2pix [17] | 25.12 | 0.8359 || SPANet [46] | 29.92 | 0.8260
CycleGAN [60] | 17.65 | 0.6452 || DuRN [29] | 31.44 | 0.9256 || DDMSNet [57] | 34.87 | 0.9462
PCNet [19] | 27.13 | 0.8546 || AttentiveGAN [33] | 32.56 | 0.9331 || DesnowNet [2] | 32.15 | 0.9416
MPRNet [53] | 29.14 | 0.9022 || IDT [49] | 34.14 | 0.9412 || RESCAN [24] | 30.57 | 0.9003
de-rain (ours) | 36.78 | 0.9464 || de-haze (ours) | 36.53 | 0.9795 || de-snow (ours) | 42.23 | 0.9821
All-in-One [23] | 25.13 | 0.8856 || All-in-One [23] | 31.24 | 0.9122 || All-in-One [23] | 28.12 | 0.8815
TransWeather [42] | 29.77 | 0.9107 || TransWeather [42] | 33.85 | 0.9388 || TransWeather [42] | 35.15 | 0.9417
WRDM (ours) | 35.02 | 0.9322 || WRDM (ours) | 35.88 | 0.9602 || WRDM (ours) | 40.98 | 0.9578
Table 3: Comparison of performance with other networks on the SWVID-snow dataset.
Model | Data | Backbone | mAP50↑ | mAP75↑ | mAP↑
(mono-modality networks)
Faster R-CNN [36] | RGB | ResNet50 | 82.3 | 34.6 | 30.7
Faster R-CNN [36] | Thermal | ResNet50 | 90.6 | 63.7 | 55.4
SDD [28] | RGB | VGG16 | 73.6 | 37.8 | 38.6
SDD [28] | Thermal | VGG16 | 88.6 | 55.6 | 50.2
YOLOv3 [35] | RGB | Darknet53 | 78.3 | 29.4 | 24.4
YOLOv3 [35] | Thermal | Darknet53 | 84.6 | 50.7 | 47.4
YOLOv5 [20] | RGB | CSPD53 | 80.7 | 38.2 | 30.7
YOLOv5 [20] | Thermal | CSPD53 | 90.5 | 65.2 | 57.6
YOLOv7 [45] | RGB | CSPD53 | 85.3 | 41.8 | 34.9
YOLOv7 [45] | Thermal | CSPD53 | 91.8 | 67.6 | 60.4
(multi-modality networks)
Baseline | RGB+T | CSPD53 | 92.2 | 68.4 | 59.3
CFT [34] | RGB+T | CFB | 92.4 | 71.1 | 58.4
CFMW (ours) | RGB+T | CFM | 97.2 | 76.9 | 63.4
Evaluation metrics. We adopt the conventional peak signal-to-noise ratio (PSNR) [15] and structural similarity (SSIM) [47] for quantitative evaluation between the ground-truth and restored images. PSNR is mainly used to evaluate the degree of distortion after image processing, while SSIM pays more attention to the structural information and visual quality of the images.
$$PSNR = 10 \times \lg\!\left(\frac{(2^n - 1)^2}{MSE}\right), \qquad (19)$$
$$SSIM = [l(x, y)]^{\alpha} \cdot [c(x, y)]^{\beta} \cdot [s(x, y)]^{\gamma}. \qquad (20)$$
For the object detection quantitative experiments, we introduce three object detection metrics, mean Average Precision (mAP, mAP50, and mAP75), to evaluate the accuracy of the object detection models. For more calculation details, please refer to the supplementary material.
4.2 Implementation Details
For WRDM, we performed experiments in both specific-weather and multi-weather image restoration settings. We denote our specific-weather restoration models as de-rain, de-snow, and de-foggy to verify the general WRDM model under specific weather conditions. We trained the 128 × 128 patch-size version of all models. We use NVIDIA RTX 4090 cards to perform all the experiments and Adam as the optimizer when training all the compared models. During the training process, we trained WRDM for $3 \times 10^6$ iterations. For CFM, we did not perform task-specific parameter tuning or modifications to the network architecture. For better performance, we initialize with the YOLOv5 model's public weights (yolov5s.pt), which are pre-trained on the COCO dataset [26].
Table 4: Comparison of performance with other networks on the LLVIP [18] dataset.
Model | Data | Backbone | mAP50↑ | mAP75↑ | mAP↑
(mono-modality networks)
Faster R-CNN [36] | RGB | ResNet50 | 91.4 | 48.0 | 49.2
Faster R-CNN [36] | Thermal | ResNet50 | 96.1 | 68.5 | 61.1
SDD [28] | RGB | VGG16 | 82.6 | 31.8 | 39.8
SDD [28] | Thermal | VGG16 | 90.2 | 57.9 | 53.5
YOLOv3 [35] | RGB | Darknet53 | 85.9 | 37.9 | 43.3
YOLOv3 [35] | Thermal | Darknet53 | 89.7 | 53.4 | 52.8
YOLOv5 [20] | RGB | CSPD53 | 90.8 | 51.9 | 50.0
YOLOv5 [20] | Thermal | CSPD53 | 94.6 | 72.2 | 61.9
YOLOv7 [45] | RGB | CSPD53 | 91.4 | 58.4 | 53.6
YOLOv7 [45] | Thermal | CSPD53 | 94.6 | 70.6 | 62.4
(multi-modality networks)
Baseline | RGB+T | CSPD53 | 95.2 | 71.4 | 62.3
CFT [34] | RGB+T | CFB | 97.5 | 72.9 | 63.6
CFMW (ours) | RGB+T | CFM | 98.8 | 77.2 | 64.8
4.3 Comparative Experiments
In this section, we compare against several state-of-the-art methods in image deweathering and cross-modality object detection separately. In Table 2, we compare with methods for image desnowing (i.e., SPANet [46], DDMSNet [57], DesnowNet [2], RESCAN [24]), deraining (i.e., pix2pix [17], CycleGAN [60], PCNet [19], MPRNet [53]), and dehazing (i.e., pix2pix [17], DuRN [29], Attentive-GAN [33], IDT [49]), as well as two state-of-the-art multi-weather image restoration methods: All-in-One [23] and TransWeather [42]. In Table 3 and Table 4, to demonstrate the consistent improvements of CFMW, we compare with several single-modality object detection methods (i.e., Faster R-CNN [36], SDD [28], YOLOv3 [35], YOLOv5 [20], YOLOv7 [45]) and several multi-modality object detection methods (i.e., our baseline, a standard two-stream YOLOv5 object detection network, and CFT [34]).
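For reference, the image-quality metrics of Eqs. (19) and (20), as reported in Table 2, can be computed as in the following sketch. It assumes 8-bit images and uses scikit-image's structural_similarity with the default exponents α = β = γ = 1; the channel_axis argument requires a recent scikit-image release, and no special handling is added for identical images (zero MSE).

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(gt, restored, n_bits=8):
    """Eq. (19): PSNR = 10 * log10((2^n - 1)^2 / MSE)."""
    gt = gt.astype(np.float64)
    restored = restored.astype(np.float64)
    mse = np.mean((gt - restored) ** 2)
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)

def ssim(gt, restored, n_bits=8):
    """Eq. (20) via scikit-image; with alpha = beta = gamma = 1 this reduces to
    the standard single-scale SSIM."""
    return structural_similarity(
        gt, restored, data_range=2 ** n_bits - 1, channel_axis=-1
    )
```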
Table 5: Ablation experiments on the SWVID-snow dataset. To present the general effectiveness of our CFMW, we further combine the WRDM and CFM modules with other classical detectors (i.e., YOLOv7, YOLOv5, Faster R-CNN).
Modality | Method | Detector | mAP50↑ | mAP75↑ | mAP↑
RGB | CSPDarknet53 | YOLOv7 [45] | 85.3 | 41.8 | 34.9
Thermal | CSPDarknet53 | | 95.8 | 72.6 | 60.4
RGB+T | +two stream | | 95.4 | 68.1 | 60.4
RGB+T | +CFM | | 95.5 | 68.6 | 63.3
RGB+T | +WRDM | | 96.5 | 70.9 | 63.1
RGB+T | +CFM&WRDM | | 96.6 | 75.1 | 64.1
RGB | CSPDarknet53 | YOLOv5 [20] | 80.7 | 38.2 | 30.7
Thermal | CSPDarknet53 | | 90.5 | 65.2 | 57.6
RGB+T | +two stream | | 92.2 | 68.4 | 59.3
RGB+T | +CFM | | 96.5 | 70.6 | 63.3
RGB+T | +WRDM | | 96.4 | 71.2 | 62.8
RGB+T | +CFM&WRDM | | 97.2 | 76.9 | 63.4
RGB | Resnet53 | Faster R-CNN [36] | 82.3 | 34.6 | 30.7
Thermal | Resnet53 | | 90.6 | 63.7 | 55.4
RGB+T | +two stream | | 93.7 | 62.8 | 55.4
RGB+T | +CFM | | 96.7 | 69.5 | 61.9
RGB+T | +WRDM | | 96.2 | 69.4 | 61.6
RGB+T | +CFM&WRDM | | 96.2 | 69.7 | 62.2
Comparison of image deweathering. As shown in Table 2, we use the single RGB modality of the SWVID dataset (covering rain, foggy, and snow weather conditions) as the comparative dataset to measure the performance of different models under different weather conditions. The top of the table contains results from specific-weather image restoration, where we use $S = 50$ sampling time steps. For the image-deraining, image-dehazing, and image-desnowing tasks, the proposed solution consistently achieves the best results (36.78/0.9464 on SWVID-rain, 36.53/0.9795 on SWVID-foggy, and 42.23/0.9821 on SWVID-snow). In particular, on the image-deraining task, the performance improvement is about 24% compared with the current state-of-the-art method (MPRNet [53]). For multi-weather image restoration, although the results are not as good as those of the specific-weather models due to the complexity of the task, the proposed method still achieves the best results (35.02/0.9322 on SWVID-rain, 35.88/0.9602 on SWVID-foggy, and 40.98/0.9578 on SWVID-snow) compared with All-in-One [23] and TransWeather [42], with about 17% improvement over TransWeather [42] and about 25% improvement over All-in-One [23].
Comparison of cross-modality object detection. As shown in Table 3 and Table 4, we use LLVIP [18] and SWVID-snow as the comparative datasets. Compared with SWVID-rain and SWVID-foggy, the size of pedestrians in these two datasets is more in line with general object detection standards, and there are more complex cases of pedestrian overlap, which better measure the accuracy of the object detection networks. The top of each table contains results from single-modality networks, where each network uses either the RGB modality or the thermal modality for detection. The bottom of each table shows results from multi-modality networks, including our baseline, CFT [34], and the proposed CFMW. According to Table 3, with the integration of WRDM and CFM, CFMW achieves a substantial performance improvement on every metric (mAP50: 2.3↑, mAP75: 4.3↑, mAP: 3.0↑) on SWVID-snow compared with the best existing network on each metric, which shows its preferable adaptability under adverse weather conditions. Moreover, CFMW achieves more accurate detection (mAP50: 98.8, mAP75: 77.2, mAP: 64.8) with lower computational consumption, as shown in Table 4, which demonstrates the generality of CFMW.
4.4 Ablation Study
In this section, we analyze the effectiveness of CFMW. We first quantitatively validate the importance of the WRDM and CFM modules for performance improvement through detailed ablation experiments, and then visually show the role of WRDM in cross-modality fusion and object detection tasks to highlight its versatility as a weather-restoration plug-in.
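Before the detailed ablations, the plug-in usage pattern referred to above can be summarized in a short sketch: WRDM restores the degraded visible image first, and any downstream fusion or detection network then consumes the restored RGB / infrared pair. The callables below are placeholders, not the actual CFMW interfaces.

```python
import torch

def detect_under_weather(rgb_degraded, thermal, wrdm_restore, detector):
    """Illustrative plug-in pipeline: de-weather the visible image with WRDM,
    then run a cross-modality detector on the restored RGB and thermal pair."""
    with torch.no_grad():
        rgb_clean = wrdm_restore(rgb_degraded)   # stage 1: weather removal
    return detector(rgb_clean, thermal)          # stage 2: fusion + detection

# The same wrapper applies when WRDM is embedded before an image-fusion network
# (e.g., the CDDFuse / DeFusion visualizations in Fig. 5): replace `detector`
# with a fusion model that takes the restored RGB image and the infrared image.
```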
Ablation experiments To understand the impact of each component in our method, we have performed a comprehensive set of ablation experiments. As shown in Table 5, we further combine the CFM and WRDM with other classical detectors, i.e. YOLOv7 [45], YOLOv5 [20] and Faster R-CNN [36] to present the general effectiveness of our CFMW. The proposed CFMW improves the performance of cross-modality object detection using either a one-stage or twostage detector under complex weather conditions. Specifically, CFM achieves an 11.3% gain on mAP50, an 81.6% gain on mAP75, and a 78.3% gain on mAP (on YOLOv5 [20] ). After adding WRDM, we achieved a 12.1% gain on mAP50, an 88.2% gain on mAP75, and an 80.4% gain on mAP. CFM and WRDM provide non-negligible gains for all the considered evaluation metrics. Visual interpretation To verify the applicability of WRDM as a plug-in intuitively, we visually show the application scenario of WRDM in the field of visible-infrared image fusion and object detection. As shown in Fig. 5, we perform comparisons with methods of visible-infrared image fusion methods (i.e. CDDFuse [59], DeFusion [25]). It can be seen from the figure that compared with the original images, the image fusion effects of the two methods before and after using WRDM are quite different, more people at the far end of images could be detected successfully after deweathering. In cross-modality object detection, rich image details can provide great assistance for feature extraction and fusion, with direct fusion without removing the weather influence causing the loss and interference of image details. 5 CONCLUSION In this work, we introduce a novel approach to visible-infrared object detection under severe weather conditions, namely the Severe Weather Visible-Infrared Dataset (SWVID). We have provided a valuable resource for training and evaluating models in realistic and challenging environments. The Cross-modality Fusion Mamba with Weather-removal (CFMW) model, has proven to be highly effective in enhancing detection accuracy while managing computational efficiency. Our extensive experiments have shown that CFMW outperforms existing benchmarks, achieving state-of-the-art on both tasks: multi-weather image restoration and cross-modality object detection. This work opens up new possibilities for cross-modality object detection in adverse weather.", "introduction": "In an open and dynamic environment, object detection faces chal- lenging weather conditions such as rain, haze, and snow. The rapid advancement of deep-learning-based object detection methods has significantly improved the ability to identify and classify objects. Benefiting from the advanced feature extraction and fusion strate- gies, cross-modality object detection methods have achieved high accuracy, e.g., CFT [34], GAFF [56], and CFR_3 [54]. However, as shown in Fig. 1, the performance of these methods is often chal- lenged by adverse weather conditions, which can severely impact the visibility and quality of visual data. Although the infrared image \u2217Equal contribution. \u2020Corresponding authors (e-mail: chenpeng@zjut.edu.cn, kailun.yang@kit.edu). Figure 1: The proposed method can achieve high-precision cross-modality object detection under adverse weather condi- tions. The top two examples are results from CFT [34], while the bottom two examples are results from CFMW (ours). could provide complementary cues to some extent, it cannot re- pair the appearance distortion or information loss of visual images. 
Thus, traditional cross-modality object detection methods still face severe performance degradation under adverse weather. Existing methods cannot be directly applied to adverse weather conditions, since the color gamut of visible images is weakened by environmental disturbance and the existing fusion methods are difficult to fully fuse visible and infrared spectra, nor have they made sufficient training under corresponding datasets. To make up the blank in this research area, we construct and release a new dataset, named Severe Weather Visible-Infrared Dataset (SWVID), as well as propose a novel framework named Cross-modality Fusion Mamba with Weather-removal (CFMW). To facilitate research in this area, we propose a new visible- infrared dataset, named SWVID, which is designed to encompass diverse severe weather scenarios by mathematically formalizing the impact of various weather phenomena on images. Specifically, SWVID comprises 20, 000 aligned visible-infrared image pairs, span- ning three weather conditions and two scenes, with each condition and scene evenly distributed. Motivated by the critical research gap highlighted in Fig. 1, where current methods falter in adverse weather, we introduce CFMW for multispectral object detection under adverse weather conditions. Our CFMW leverages a Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) to enhance detection accuracy amid adverse weather arXiv:2404.16302v1 [cs.CV] 25 Apr 2024 conditions while minimizing computational burden. Specifically, WRDM is employed to restore affected visible images before fusion with infrared counterparts, offering plug-and-play compatibility with image fusion networks. Based on learning reversal to increase the order of noise and disrupt the process of data samples, the WRDM model is advantageous to minimize the impact of adverse weather conditions. Additionally, CFM can be integrated into the feature extraction backbone, effectively integrating global contex- tual information from diverse modalities. Recent research shows that Mamba [10] achieves higher inference speed and overall met- rics than the equivalent-scale transformer. To our knowledge, this study represents the first endeavor to employ Diffusion models and Mamba for multispectral object detection. Extensive experiments on both well-established and self-created datasets demonstrate that our CFMW method achieves superior detection performance compared to existing benchmarks. Specifi- cally, we achieved about 17% performance improvement compared with the current state-of-the-art image restoration methods. The proposed method achieves about 8% accuracy improvement while saving 51.2% GPU memory compared with CFT [34], a state-of-the- art cross-modality object detection method. At a glance, we summarize the main contributions as follows: \u2022 We introduce a novel task focusing on visible-infrared object detection under adverse weather conditions and develop a new dataset called the Severe Weather Visible-Infrared Dataset (SWVID), which simulates real-world conditions. 
SWVID comprises 60, 000 paired visible-infrared images and labels, encompassing weather conditions such as rain, haze, and snow; \u2022 We propose a novel approach, Cross-modality Fusion Mamba with Weather-removal (CFMW) for multispectral object de- tection under adverse weather conditions; \u2022 We introduce a novel Weather Removal Diffusion Model (WRDM) and Cross-modality Fusion Mamba (CFM) modules to tackle image de-weathering and visible-infrared object detection tasks simultaneously; \u2022 Extensive experiments demonstrate that this integration achieves the best task migration capacity, resulting in state- of-the-art performance for both tasks." }, { "url": "http://arxiv.org/abs/2311.06622v2", "title": "TrainerAgent: Customizable and Efficient Model Training through LLM-Powered Multi-Agent System", "abstract": "Training AI models has always been challenging, especially when there is a\nneed for custom models to provide personalized services. Algorithm engineers\noften face a lengthy process to iteratively develop models tailored to specific\nbusiness requirements, making it even more difficult for non-experts. The quest\nfor high-quality and efficient model development, along with the emergence of\nLarge Language Model (LLM) Agents, has become a key focus in the industry.\nLeveraging the powerful analytical, planning, and decision-making capabilities\nof LLM, we propose a TrainerAgent system comprising a multi-agent framework\nincluding Task, Data, Model and Server agents. These agents analyze\nuser-defined tasks, input data, and requirements (e.g., accuracy, speed),\noptimizing them comprehensively from both data and model perspectives to obtain\nsatisfactory models, and finally deploy these models as online service.\nExperimental evaluations on classical discriminative and generative tasks in\ncomputer vision and natural language processing domains demonstrate that our\nsystem consistently produces models that meet the desired criteria.\nFurthermore, the system exhibits the ability to critically identify and reject\nunattainable tasks, such as fantastical scenarios or unethical requests,\nensuring robustness and safety. This research presents a significant\nadvancement in achieving desired models with increased efficiency and quality\nas compared to traditional model development, facilitated by the integration of\nLLM-powered analysis, decision-making, and execution capabilities, as well as\nthe collaboration among four agents. We anticipate that our work will\ncontribute to the advancement of research on TrainerAgent in both academic and\nindustry communities, potentially establishing it as a new paradigm for model\ndevelopment in the field of AI.", "authors": "Haoyuan Li, Hao Jiang, Tianke Zhang, Zhelun Yu, Aoxiong Yin, Hao Cheng, Siming Fu, Yuhao Zhang, Wanggui He", "published": "2023-11-11", "updated": "2023-11-23", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.CL" ], "main_content": "2.1. Framework In Section 1, as we have mentioned, our system can understand user\u2019s intent and ultimately train a model that satisfies the user\u2019s requirements based on four agents. Next, we will introduce how the entire system operates. Firstly, like most LLM-powered agents [2, 6, 7, 14, 16, 20, 24, 33], each agent in our system comprises the following components: profile, memory, perception, planning, action, and response, as illustrated in Figure 1(a). 
Specifically, our agents are initially fed a system prompt as their profile, informing them of the system overview and their responsibilities, and encoding Standard Operating Procedures (SOPs) [1, 4, 19] into prompts. Moreover, during the interaction of agents, the current requirements from the user or other agents, as well as the memory of all past system interactions, are fed into the current agent. It then analyzes the current requirements and enters the planning phase, organizing its thoughts, setting objectives, and determining the steps needed to achieve those objectives. Agents can also modify their plans through introspection to adapt to current circumstances. Next, the agent takes action based on the results of planning and ultimately responds to the agent or user who provided the requirement. Through these operations, an agent can autonomously complete complex subtasks through various tools. However, the journey from business requirement identification to final model deployment in an actual business scenario is not simple, involving numerous complex analyses and optimizations. Based on our preliminary experiments, it is challenging and insufficient for a single agent to meet user requirements efficiently and effectively. Therefore, in our framework, we break down the entire process into four parts: task parsing and planning, data acquisition and analysis, model training and testing, and service deployment. These are implemented collaboratively by the Task, Data, Model, and Server Agents, respectively, as shown in Figure 1(b). Among them, the Task Agent acts as a hub, with all other agents interacting through it. It also interacts with the user, while the other three agents only focus on their specific tasks. Next, we will introduce the specific responsibilities of the four agents.
2.2. Responsibility of Each Agent
Task Agent. The Task agent is the core agent in the TrainerAgent system, responsible for task parsing, global planning, coordination, and user interaction to ensure efficient and effective model development. Firstly, the Task agent conducts task parsing, which involves parsing the user-defined tasks and extracting relevant information. This process includes identifying the specific goals and requirements of the tasks, such as the desired model accuracy, speed, or any other specific criteria. The parsed tasks are then transformed into a structured JSON format, enabling effective communication and collaboration with the other agents for further analysis and processing. Once the tasks are parsed, the Task agent engages in global planning. This step involves developing a comprehensive plan for model development that takes into account the parsed tasks, the available input data, and the capabilities of the other agents. The Task agent assesses the feasibility and potential challenges associated with the tasks, considering factors such as data availability, computational resources, and model complexity. This planning phase aims to optimize the model development process and ensure that the subsequent steps are well informed and aligned with the user's requirements. Furthermore, the Task agent plays a pivotal role in coordinating the activities of the other agents within the system. It acts as a central coordinator, orchestrating the collaboration and communication between the Data, Model, and Server agents. This coordination ensures that the tasks are processed efficiently, and the agents work in tandem towards achieving the desired models.
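The Task agent's hand-off is described above as a structured JSON object, but its exact schema is not given; the following Python dictionary is only a hypothetical illustration of what such a parsed task could contain (the field values echo the text-classification example discussed later in the experiments).

```python
# Hypothetical parsed-task structure produced by the Task agent; the field
# names and values are illustrative assumptions, not the system's actual schema.
parsed_task = {
    "task_type": "text_classification",
    "objective": "detect whether a product promotion mentions benefits",
    "constraints": {
        "min_accuracy": 0.90,          # user-specified quality requirement
        "max_parameters": 10_000_000,  # model-size requirement
    },
    "data": {
        "labeled_pairs": 100,
        "sources": ["user_upload", "internal_database"],
    },
    "deployment": {
        "container_memory_gb": 2,
        "min_qps": 100,
    },
}

# The Task agent would route the "data" block to the Data agent, the model
# constraints to the Model agent, and the "deployment" block to the Server agent.
```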
The Task agent schedules and assigns tasks to the relevant agents, monitors their progress, and resolves any conflicts or dependencies that may arise. In addition to its coordination role, the Task agent also facilitates user interaction. It provides a user-friendly interface that allows users to interact with the TrainerAgent system. Users can provide feedback, refine their requirements, or monitor the progress of model development through this interface. Data Agent The Data agent plays a crucial role in the TrainerAgent system, primarily responsible for processing various types of data. To facilitate effective data processing, we have developed an extensive internal knowledge base within the Data Agent. This knowledge base encompasses a wide range of data modalities, including tabular, image, text, audio, and video data. It equips the agent with the understanding of which tools and techniques to employ for different types of data and specific processing scenarios. In cases where a suitable operation is not readily available in the knowledge base, the Data agent conducts online searches to find appropriate approaches. The Data agent operates in collaboration with the Task agent, receiving data processing requirements and instructions from the Task agent. Based on these requirements, the Data agent autonomously performs planning and action to execute the necessary operations. Specifically, the Data agent is responsible for data collection, which involves gathering relevant data from various sources such as internal databases or web scraping. This ensures a diverse and comprehensive dataset for model development. Furthermore, the Data agent conducts data cleaning, which focuses on removing noise, outliers, and inconsistencies from the collected data as well as correcting the annotation. This step aims to enhance the quality and reliability of the dataset, ensuring that subsequent modeling processes are based on clean and accurate data. Moreover, on scenarios where annotated data is insufficient, the Data agent possesses the capability to perform automatic data labeling. For instance, the Data agent can employ methods based on pre-training large-scale models to generate preliminary labels for various types of data, enabling the model to learn from a larger and more diverse dataset. Additionally, the Data agent performs data augmentation, which involves generating additional training samples by applying various transformations and modifications to the existing data. This technique helps to increase the diversity and generalization capability of the dataset, leading to improved model performance. Also, the Data agent conducts data reduction, which focuses on reducing the dimensionality or size of the dataset while preserving its key information. This step is particularly useful when dealing with large datasets or computationally intensive models, allowing for more efficient model training. Lastly, the Data agent facilitates data visualization, providing visual representations and summaries of the dataset to aid in data exploration and understanding. This enables users to gain insights into the data distribution and patterns, assisting in making informed decisions throughout the model development process. Model Agent The Model agent is responsible for training and validating models. Similar to the Data agent, the Model agent receives task requirements and instructions from the Task agent. It autonomously performs planning and takes action based on these inputs. 
Specifically, the Model agent is responsible for model initialization, which involves the selection of appropriate pre-trained models for specific tasks. The internal model repository, which includes a comprehensive collection of pre-trained models suitable for different tasks, and the Hugging Face model retriever together provide a vast array of pre-trained models, allowing the Model agent to identify the most suitable ones based on the task requirements. Furthermore, the Model agent carries out optimization processes to enhance the performance of the selected models, along with standardized training scripts based on Hugging Face. Leveraging the internal training knowledge base we built, the Model agent automates various optimization techniques such as hyperparameter tuning, learning rate scheduling, and regularization. This ensures that the models are trained effectively and efficiently. The Model agent can also leverage ensemble methods to improve model performance if needed. Moreover, the Model agent performs model compression, aiming to reduce the size and complexity of the models without significant performance degradation. This enables efficient deployment of models in resource-constrained environments and facilitates faster inference. The Model agent also conducts model evaluation to assess the performance and generalization of the trained models. Various evaluation metrics and techniques are employed to ensure the models meet the user-desired criteria and deliver reliable predictions. Furthermore, the Model agent facilitates model visualization, providing visual representations and summaries of the models' architecture, learned representations, and decision boundaries. This aids in model interpretation and understanding, allowing users to gain insights into the model's behavior.
Figure 2. Qualitative Analysis of the Visual Grounding Task. The user presents a task to develop a model for Visual Grounding in live streaming, with specific performance and deployment requirements, and the Task Agent parses these requirements and initiates preliminary planning. The Data Agent retrieves a relevant Product Grounding dataset from internal databases and enhances it with image and text preprocessing techniques. The Model Agent then selects a pre-trained model from an internal library, and trains and evaluates it against the set criteria. The Server Agent converts the model's format for deployment, estimates the online resources required, sets up the service infrastructure on the specified platform, writes the API document, and establishes continuous monitoring mechanisms. The result is a well-trained model capable of providing an online service for product grounding in live streaming.
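As a concrete illustration of the initialization-and-fine-tuning step described above, a minimal sketch using the standardized Hugging Face training utilities might look as follows. The model name, hyper-parameters, and dataset objects are assumptions for illustration (the datasets are expected to be already tokenized); this is not the Model agent's actual training script.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def initialize_and_finetune(train_dataset, eval_dataset,
                            model_name="albert-base-v2", num_labels=2):
    """Select a pre-trained checkpoint, fine-tune it, and report metrics that
    can be checked against the user's accuracy requirement."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels
    )
    args = TrainingArguments(
        output_dir="./checkpoints",
        num_train_epochs=3,
        per_device_train_batch_size=32,
        learning_rate=2e-5,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    metrics = trainer.evaluate()   # compared against the user-desired criteria
    return model, tokenizer, metrics
```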
Server Agent. The Server Agent handles the deployment of models based on user-defined online service requirements. Similar to the Data and Model agents, the Server agent receives requirements from the Task agent and autonomously performs planning and actions. Specifically, the Server agent conducts resource estimation, dynamically assessing the computational and memory resources required for model deployment. This estimation considers factors such as server specifications and expected service concurrency. By accurately estimating resource needs, the Server agent ensures efficient utilization of the available infrastructure and prevents resource bottlenecks during model serving. Furthermore, the Server agent is responsible for model conversion, ensuring compatibility and efficiency during the deployment process. It performs conversions from frameworks like PyTorch or TensorFlow to formats such as ONNX and TensorRT. This enables seamless integration with different runtime environments and optimizes model inference performance. Moreover, the Server agent focuses on interface document preparation to facilitate collaboration between engineering and business teams. It prepares comprehensive and parameterized service invocation interfaces, enabling seamless communication and integration of the deployed models into various applications and systems. These interface documents serve as a reference for both technical implementation and business integration. In summary, the Server agent ensures efficient resource allocation, seamless deployment, and effective integration of the models into real-world applications. Through its contributions, the Server agent strengthens the practicality and usability of the TrainerAgent system.
3. Experiments
To validate the effectiveness of our TrainerAgent, we conducted experiments on real-world business scenarios from Taobao, a popular e-commerce platform, in both the computer vision (CV) and natural language processing (NLP) domains. Specifically, we focused on classical discriminative and generative tasks including Visual Grounding, Image Generation, and Text Classification. Additionally, we tested the system's ability to handle challenging tasks that could lead to failure. In our experiments, we utilize GPT-4 as a standalone agent within the TrainerAgent system. Each agent is individually configured with a profile, also known as a system prompt. Users directly interact with the TrainerAgent system through dialogue, ultimately completing the model training process. Note that although our experiments were conducted specifically within Taobao, the TrainerAgent system can be generalized and applied to various real-world scenarios.
3.1. Visual Grounding
Visual Grounding (VG) [3,5,9,13,18,21,28,30,32] aims to localize objects in an image according to a text query. Similarly, Product Grounding [15] aims to ground products; this task was constructed internally within Taobao previously, making it simpler than a completely new task. Thus, we input all requirements for both the training and deployment processes into the system to test the capabilities of TrainerAgent.
In addition, the other three specialized agents (Data Agent, Model Agent, and Server Agent) each perform their designated roles in a competent manner. The Data Agent retrieves relevant product grounding dataset from internal databases and enhances it through image and text preprocessing techniques. The Model Agent selects a pre-trained model from an internal library, trains and evaluates it against the specified criteria. The Server Agent undertakes various tasks such as model format conversion, resource estimation, service infrastructure setup, API documentation writing, and continuous monitoring. This highlights the system\u2019s capability to delegate specific responsibilities to the specialized agents, ensuring that each agent contributes to the overall success of the task. The qualitative analysis of the Visual Grounding task in the proposed TrainerAgent system demonstrates its ability to effectively handle internally constructed tasks, perform preliminary planning, and facilitate collaboration among different agents. The specialized agents also showcase their competence in fulfilling their assigned responsibilities. These features collectively contribute to the overall functionality and effectiveness of the TrainerAgent system. In addition, we conducted experiments on Image Generation, which are presented in the Appendix. 3.2. Text Classification In this part, we will explore the pure NLP domain, where ChatGPT\u2019s powerful capabilities make handling NLP tasks more convenient, requiring less reliance on external tools compared with vision or audio domain. For instance, ChatGPT can directly analyze textual data and perform tasks such as data generation, augmentation, and error correction. In the following, we take the example of a classic text classification task to illustrate how TrainerAgent deals with the scarcity of annotated data, as shown in Figure 3. In this experiment, we utilize the TrainerAgent to develop a classifier for determining whether a product promotion contains benefits information. Unlike the scenario where the user provides requirements all at once in Visual Grounding, this experiment is conducted in a step-bystep interactive manner involving more human participation, with the system adapting to the user\u2019s requirements and providing assistance throughout the process. The User initially expresses their need for a classifier with an accuracy of at least 90% and a parameter count below 10 million. The Task Agent performs an initial task analysis and conducts preliminary model and data searches. However, no existing model is found that meets the user\u2019s requirements. Instead of providing an unsatisfactory solution, the Task Agent suggests training a specific model using the available data. The Data Agent plays a crucial role in this experiment. It assists the Task Agent in analyzing the data and determines that the input data format, sentence structure, and semantics are messy. Additionally, the Data Agent identifies that the initial dataset of 30 labeled pairs is insufficient for training an accurate model. Based on past experience and data quality assessment, the Data Agent recommends a minimum of 100 labeled pairs for the task. The User responds by providing an updated dataset of 100 laFigure 3. Interaction with TrainerAgent in Text Classification Task. beled pairs, acknowledging that there might be labeling errors present. The Data Agent proceeds to improve the data quality by performing several tasks. 
Firstly, it cleans the input data by removing stopwords to enhance the model\u2019s performance. Secondly, the annotation data of lines 7 and 12 are corrected using the internal ChatGPT corrector tool, ensuring accurate labeling. Thirdly, to expand the dataset, the Data Agent retrieves an additional 1000 input data instances from Taobao Mall. Lastly, the input data is automatically labeled using the internal ChatGPT annotator tool. The Model Agent, responsible for model selection and training, makes a decision based on the user\u2019s requirement for a small parameter count. It chooses the albert-tiny model for training. However, during the evaluation phase, the model\u2019s accuracy is found to be 86%, falling short of the desired 90% accuracy. To address this issue, the Model Agent autonomously selects a hierarchical training mode, optimizing the training process for the final small model. In this mode, the llama2-7b model is employed for pseudolabeling, generating a larger labeled dataset. Subsequently, the albert-tiny model is trained on this expanded dataset. The final evaluation yields an accuracy of 92%, meeting the user\u2019s requirement. During the experiment, the User makes an additional request to deploy the trained model on a specific platform with a 2GB container. The Server Agent swiftly responds by converting the model to TensorRT format using PyTorch model conversion tools. Resource estimation determines that to achieve a minimum QPS of 100, eight 2GB containers are required. The Server Agent sets up the service infrastructure, executes the deployment script provided by the platform, and implements monitoring and logging mechanisms to track the deployed service\u2019s performance, usage, and potential issues. This experiment demonstrates the effectiveness of the TrainerAgent system in developing a text classifier. The iterative and interactive nature of the experiment allows for a smoother and more user-involved process compared to a one-time requirement submission. The Task Agent\u2019s analysis, the Data Agent\u2019s data-related tasks, and the Model Agent\u2019s autonomous training mode selection showcase the system\u2019s capabilities and adaptability. Additionally, the system effortlessly accommodates the User\u2019s request for deployment, demonstrating the ease of integrating sudden deployment requirements into the system\u2019s workflow. In addition to the experiments shown above, our system can be applied to many multimodal tasks [10\u201312,17,26,29]. 3.3. Failed or Refused Tasks In this part, we will introduce tasks that our systems might fail or refuse to do. Our system may fail to solve pretty challenging task. Suppose a user requests a tough task (e.g. Video Question Answering [27]), however, there is no labeled data available for training the model, and the user demands a high accuracy for the task. After conducting an extensive analysis, our Task Agent can autonomously determine that it cannot meet the user\u2019s requirements due to the lack of labeled data and the performance limitations of existing models. Despite conducting extensive data and model searches, the Agents are unable to find suitable resources to meet the user\u2019s requirements. To overcome this limitation, the Agents request user intervention, such as manually annotating more data to improve model performance. 
If the user does not provide the necessary assistance, our system will appropriately conclude that it cannot fulfill the task due to the lack of available resources and training data. Additionally, our TrainerAgent will refuse to implement tasks for ethical reasons. In order to uphold ethical standards and ensure the safety of users, our system will refuse to perform certain tasks. For example, if a user requests the system to generate content that is harmful, offensive, or violates ethical norms, the Task Agent understands the request and its potential consequences. The Agent recognizes the importance of responsible AI usage and the potential harm that such generated content can cause. It prioritizes user well-being and the ethical implications of the task. Therefore, the Agent firmly refuses to comply with the request, ensuring that the system does not contribute to the dissemination of harmful or inappropriate content. The Agent emphasizes the ethical guidelines and ethical responsibility of the system, fostering a safe and supportive environment for users. By incorporating the Agent\u2019s understanding and decision-making process, these detailed explanations showcase how the system assesses tasks, recognizes limitations, and considers ethical implications. This enhances the system\u2019s user-centric approach and responsible deployment of AI models. 4. Conclusion In this paper, we present a pioneering TrainerAgent system that revolutionizes the process of AI model development. This system leverages a multi-agent framework comprising Task, Data, Model, and Server agents, each playing a pivotal role in streamlining the development process. By comprehensively analyzing user-defined tasks, data, and requirements, our TrainerAgent optimizes models from both data and model perspectives, resulting in the creation of highly satisfactory models that can be seamlessly deployed as online services. The proposed TrainerAgent system offers a plethora of advantages over traditional model development approaches. Firstly, it dramatically reduces the time and effort required to develop customized models, opening the doors to AI for non-experts and accelerating the pace of innovation. Secondly, it ensures that the produced models meet the desired criteria, such as accuracy and speed, through a comprehensive optimization process. This not only boosts the quality and effectiveness of the models but also enhances the overall user experience. However, our system still has several limitations. Lower Success Rate: Currently, our TrainerAgent system relies on pre-established local model running scripts, which limits its ability to successfully run on any opensource code available on platforms like GitHub. To address this limitation, we are committed to enhancing the system\u2019s capability to automatically understand documentation, such as readme files, and autonomously execute the code, thereby improving the success rate of model implementation. Dependence on Human Interaction: The TrainerAgent system still requires interaction with humans to ensure optimal performance and customization. However, as the system undergoes iterative improvements, we aim to minimize this dependence and ultimately achieve an end-to-end model training and deployment process. By doing so, we will reduce the need for extensive manual intervention, enhancing the system\u2019s autonomy and usability. 
Limited Generalization: While our system demonstrates effectiveness in various tasks, its generalization across a wide range of domains and applications may be limited. The current version of TrainerAgent focuses on discriminative and generative tasks in computer vision and natural language processing. To address this limitation, future iterations of the system will incorporate additional domains and expand the scope of task applicability, allowing for more diverse and comprehensive model development. Ethical Implications: As with any AI system, our TrainerAgent system raises ethical considerations. While efforts are made to ensure the system adheres to ethical guidelines, there is always a possibility of unintended consequences or biases in decision-making. We are committed to ongoing research and development to address these ethical implications and incorporate safeguards to mitigate potential risks. Despite these limitations, our TrainerAgent system represents a significant step forward in customizable and efficient model training. Through continuous improvements and addressing these limitations, we aim to enhance the system\u2019s performance, adaptability, and overall impact in both academic and industry settings.", "introduction": "The rapid advancement of artificial intelligence (AI) has revolutionized numerous industries, enabling personalized and efficient services that were once unimaginable. How- ever, the process of training AI models to meet specific busi- ness requirements remains a daunting and time-consuming challenge. This is particularly pertinent for non-experts who struggle to navigate the intricacies of model develop- ment and customization. Bridging this gap between user needs and model development has become a pressing con- cern in the AI industry. Nowadays, autonomous agents [2,6,7,14,16,20,24,33] utilizing Large Language Models (LLMs) offer promising opportunities to enhance and replicate human workflows, which seems to able to solve ease the concern above. Spe- cially, HuggingGPT [22], a framework that employs large language models like ChatGPT as controllers to integrate various specialized AI models for complex tasks. It uses natural language as an interface to streamline task execu- tion across different domains and modalities, demonstrating the potential for more advanced AI systems. MetaGPT [8] introduces a meta-programming framework that enhances LLM-based multi-agent systems by incorporating standard- ized workflows to reduce logic errors and increase task effi- ciency. It achieves superior performance by assigning spe- cialized roles to agents for collaborative problem-solving, outperforming existing chat-based solutions in complex benchmarks. AutoGen [25] provides an open-source plat- form for building complex LLM applications, allowing for inter-agent communication and a blend of LLM capabili- ties, human inputs, and additional tools. It enables the cus- tomization of conversational patterns and agent behaviors, demonstrating its versatility and effectiveness across a wide range of fields, from technical domains to creative indus- tries. However, the current agent system is unable to sat- isfactorily accomplish the construction of specific require- ments, from user needs to model training and deployment, particularly in terms of model training. It lacks dedicated mechanisms to ensure the success rate of system opera- arXiv:2311.06622v2 [cs.AI] 23 Nov 2023 tion and the training effectiveness of the final model. 
Al- though there are also some works that specialize in train- ing models using LLMs, they still have significant limita- tions. AutoML-GPT [31] merges the power of LLM with expert system insights to automate AI model training, en- compassing data processing to design and experiment ex- ecution. It simplifies the development of AI solutions by using standardized prompts based on comprehensive model and data descriptions. This unified approach has proven ef- fective across various AI tasks, including those in language and vision, and excels in adapting to and tuning for new datasets as evidenced by rigorous testing. However, it re- quires fixed model inputs, which is rigid, demanding a high understanding of algorithms for users, while our system ac- cepts natural language inputs, automatically comprehends the specific AI models involved, and performs training and optimization. Prompt2Model [23] advances the field by proposing a method that uses natural language task descrip- tions to train specialized models, offering competence with fewer computational resources than LLM. It retrieves ex- isting datasets, generates additional data using LLMs, and fine-tunes models for improved performance. However, Prompt2Model has limitations in scalability, lack of consid- eration for user private databases, and reliance on hugging- face. It is also limited to NLP tasks and lacks flexibility. To build an intelligent system that can directly compre- hend user-customized requirements and efficiently accom- plish model training and deployment with enhanced flexi- bility, we propose TrainerAgent, a cutting-edge, customiz- able, and highly efficient model training system powered by groundbreaking LLM-powered Agents. Leveraging the remarkable analytical, scheduling, and decision-making ca- pabilities of LLM, our system aims to revolutionize the way models are developed and deployed. By introducing a multi-agent framework comprising Task, Data, Model, and Server agents, TrainerAgent offers a comprehensive solu- tion that optimizes models from both data and model per- spectives, resulting in highly satisfactory outcomes. Specif- ically, The Task Agent acts as a hub, coordinating the activi- ties of the other agents and interacting with the user, respon- sible for task parsing, global planning, coordination among agents, and user interaction. It parses user-defined tasks, develops a comprehensive plan for model development, co- ordinates agent activities, and provides a user-friendly in- terface. The Data Agent handles various data processing operations such as collection, cleaning, labeling, augmen- tation, reduction, and visualization. It works in collabora- tion with the Task Agent, receiving data processing require- ments and instructions, and autonomously planning and ex- ecuting these operations. The Model Agent is responsible for model initialization, optimization, ensemble, compres- sion, evaluation, and visualization. It selects appropriate pre-trained models, optimizes their performance, conducts model compression, evaluates their performance, and pro- vides visual representations and summaries of the models. The Server Agent handles model deployment based on user- defined online service requirements. It estimates resource needs, performs model conversion for compatibility and ef- ficiency, and prepares interface documents for seamless in- tegration with various applications and systems. 
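The paper describes this four-agent workflow in prose only; as a rough illustration, the sketch below wires Task, Data, Model, and Server roles around a generic LLM call. The `call_llm` stub, class names, prompts, and sequential hand-off are assumptions for illustration, not the authors' implementation.

```python
"""Minimal sketch of a TrainerAgent-style multi-agent loop (illustrative only).

`call_llm` stands in for any chat-completion backend; the Task/Data/Model/Server
split follows the text, but all prompts, class names, and the plan format are
assumptions, not the authors' code.
"""
from dataclasses import dataclass, field


def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for an LLM backend; replace with a real API call."""
    return f"[{system_prompt.split(':')[0]}] handled: {user_message}"


@dataclass
class Agent:
    name: str
    system_prompt: str            # role description plus standard operating procedure
    history: list = field(default_factory=list)

    def run(self, instruction: str) -> str:
        reply = call_llm(self.system_prompt, instruction)
        self.history.append((instruction, reply))
        return reply


def trainer_agent(user_requirement: str) -> dict:
    # The Task Agent parses the requirement and plans the sub-tasks.
    task = Agent("Task", "Task Agent: parse the requirement and plan data/model/serving steps.")
    plan = task.run(user_requirement)

    # The remaining agents execute their parts of the plan; in the described
    # system this would be iterative and interactive rather than one-shot.
    data = Agent("Data", "Data Agent: collect, clean, label, and augment training data.")
    model = Agent("Model", "Model Agent: select, fine-tune, compress, and evaluate models.")
    server = Agent("Server", "Server Agent: estimate resources and prepare deployment artifacts.")

    return {
        "plan": plan,
        "data_report": data.run(plan),
        "model_report": model.run(plan),
        "service_report": server.run(plan),
    }


if __name__ == "__main__":
    print(trainer_agent("Train a text classifier for customer-support tickets."))
```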
And each agent is composed of several components and is provided with a system prompt and Standard Operating Procedures (SOPs) to guide their actions. The agents analyze require- ments, plan their actions, and autonomously complete com- plex subtasks as Figure 1 shown. To evaluate the effectiveness of TrainerAgent, we con- ducted rigorous experimental evaluations on classical dis- criminative and generative tasks within the domains of com- puter vision (CV) and natural language processing (NLP) as Figure 2 and 3 shown. The results consistently demon- strated that our system produces exceptional models that meet the desired criteria. The qualitative analysis of the Visual Grounding, Image Generation and Text Classifica- tion task in the proposed TrainerAgent system demonstrates its ability to effectively handle internally constructed tasks, perform preliminary planning, and facilitate collaboration among different agents. The specialized agents also show- case their competence in fulfilling their assigned responsi- bilities. These features collectively contribute to the overall functionality and effectiveness of the TrainerAgent system. Moreover, TrainerAgent showcased its remarkable ability to identify and reject unattainable tasks, ensuring the ro- bustness and safety of the model development. Our research makes several significant contributions to the field of AI model development. Firstly, we introduce a novel system that automates the entire process, from re- quirement analysis to model training and deployment. This is the first of its kind and addresses the challenges faced by algorithm engineers in developing custom models for personalized services. Secondly, our approach utilizes a multi-agent framework comprising Task, Data, Model, and Server agents. These agents work collaboratively, each with their specific roles, to optimize user-defined tasks, input data, and requirements. This comprehensive optimization, considering both data and model perspectives, ensures the generation of satisfactory models that meet desired crite- ria such as accuracy and speed. Lastly, our system under- goes extensive experimental evaluations in computer vision and natural language processing domains. These evalua- tions demonstrate the consistent production of high-quality models that meet the desired criteria. Additionally, our sys- tem showcases the remarkable ability to identify and reject unattainable tasks, ensuring robustness and safety. We an- ticipate that our research will have a substantial impact on both academic and industry communities and establish the TrainerAgent system as a new paradigm for model develop- Figure 1. Interaction and Responsibilities of Agents. ment in AI." }, { "url": "http://arxiv.org/abs/2308.10334v1", "title": "Coordinate Transformer: Achieving Single-stage Multi-person Mesh Recovery from Videos", "abstract": "Multi-person 3D mesh recovery from videos is a critical first step towards\nautomatic perception of group behavior in virtual reality, physical therapy and\nbeyond. However, existing approaches rely on multi-stage paradigms, where the\nperson detection and tracking stages are performed in a multi-person setting,\nwhile temporal dynamics are only modeled for one person at a time.\nConsequently, their performance is severely limited by the lack of inter-person\ninteractions in the spatial-temporal mesh recovery, as well as by detection and\ntracking defects. 
To address these challenges, we propose the Coordinate\ntransFormer (CoordFormer) that directly models multi-person spatial-temporal\nrelations and simultaneously performs multi-mesh recovery in an end-to-end\nmanner. Instead of partitioning the feature map into coarse-scale patch-wise\ntokens, CoordFormer leverages a novel Coordinate-Aware Attention to preserve\npixel-level spatial-temporal coordinate information. Additionally, we propose a\nsimple, yet effective Body Center Attention mechanism to fuse position\ninformation. Extensive experiments on the 3DPW dataset demonstrate that\nCoordFormer significantly improves the state-of-the-art, outperforming the\npreviously best results by 4.2%, 8.8% and 4.7% according to the MPJPE, PAMPJPE,\nand PVE metrics, respectively, while being 40% faster than recent video-based\napproaches. The released code can be found at\nhttps://github.com/Li-Hao-yuan/CoordFormer.", "authors": "Haoyuan Li, Haoye Dong, Hanchao Jia, Dong Huang, Michael C. Kampffmeyer, Liang Lin, Xiaodan Liang", "published": "2023-08-20", "updated": "2023-08-20", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Figure 2. The motivation of our Coordinate-Aware Attention (CAA) module in CoordFormer. (Top) The standard Transformer based modules (such as ST-Trans [44, 26, 24]) model patch-level dependency, which results in corruption of pixel-level features. (Bottom) CAA encodes pixel-level spatial-temporal coordinates and preserves pixel-level dependencies in features. inherent properties of the 3D human, supervising the models using 2D keypoints [22], semantic segmentation [13], texture consistency [30], interpenetration and depth [13], body shape [9] and IUV maps [16]. However, they primarily use a multi-stage paradigm that is limited by the first stage. BMP [42] improves upon this by proposing a single-stage model that is more robust to occlusions through inter-instance ordinal relation supervision and taking into account body structure. Concurrently, ROMP [32] adopts a multi-head design which predicts a Body Center heatmap and a Mesh Parameter map. Via parsing the Body Center heatmap and sampling from the Mesh Parameters map, ROMP is able to extract and predict 3D human meshes for multi-person scenarios. BEV [33] extends upon this by further leveraging relative depth information to effectively avoid mesh collision in the single-stage design, as well as age information. Despite these advances in estimating human pose and shape from single images, these above methods are restricted to single images and poorly capture motion relations of spatial interaction. Video-based 3D human pose and shape estimation. The existing video-based methods are similarly built based on SMPL and extract SMPL parameters from frames [34, 18, 3]. However, in these methods, a greater focus is put on modeling temporal consistency and motion coherence. As their image counterparts, video methods follow a two-stage design where people are first detected and features of the bounding-boxes are extracted. In the second stage, tracking is used to capture the motion sequence and refine the pose and shape estimation. More specifically, Sun et. al [34] disentangle skeleton features for improving the learning of spatial features and develop a self-attention temporal network for modelling temporal relations. Additionally, they propose an unsupervised adversarial training strategy for guiding the representation learning of motion dynamics in the video. 
HMMR [18] proposes a temporal encoder that learns to capture 3D human dynamics in a semi-supervised manner, while Arnab et al. [3] presents a bundle-adjustmentbased algorithm for human mesh optimization and a new dataset consisting of in-the-wild videos. Compared to temporal convolutions and optimization across frames, recurrent structures and attention mechanisms provide superior motion information for mesh regression. VIBE [20] first extracts features from each frame and uses a temporal encoder, i.e. bidirectional gated recurrent units (GRU), to model temporal relations and obtain consistent motion sequences. For more realistic mesh results, the discriminator adopts an attention mechanism to weight the contribution of distinct frames. TCMR [42] proposes the PoseForecast approach composed of GRUs, which integrates and refines static features by fusing pose information from past and future frames to ensure motion consistency. MPS-Net [38] further extends the non-local concept to capture motion continuity, as well as temporal similarities and dissimilarities. MPS-Net further develops a hierarchical attentive feature integration to refine temporal features observed from past and future frames. However, these methods only optimize the motion of individual people and ignore the spatial interactions among people, which is crucial in multi-person scenarios. CoordFormer, instead, adopts a single-stage design for multi-person mesh recovery, aiming at modeling spatial-temporal relations and constraints across frames. 3. CoordFormer Overview. We present the CoordFormer framework (see Fig. 3) to advance multi-person temporal-spatial modelling for video-based 3D mesh recovery. We take inspiration from single-stage image-based approaches for mesh recovery [32] and leverage a multi-head design that predicts a Body Center heatmap as well as a Mesh parameter map. To further capture the spatial-temporal relations, we introduce two novel modules: (1) the BCA mechanism (Sec. 3.1), which focuses spatial-temporal feature extraction on persons for better performance and faster convergence, and (2) the CAA module (Sec. 3.2) incorporated in a SpatialTemporal Transformer (Sec. 3.3), which preserves pixellevel spatial-temporal coordinate information. CAA avoids the spatial information degradation which usually occurs in the patch-level tokenization of standard vision transformers. For completeness and notation consistency we briefly present the Body Center heatmap and the Mesh Parameter map which are predicted by the backbone network. They follow [32] and are computed for all the T frames in a video. Body Center Heatmap Cm \u2208 RT \u00d71\u00d7H\u00d7W : Cm (where H=W=64) represents the likelihood of there being a 2D human body center at a given pixel in the image, where each potential body center is characterised by a Gaussian distribution. Following [32], scale information such as body size is encoded in the kernel size of the Gaussian k. More specifically, let dbb be the diagonal length of the person bounding box and W be the width of the Body Center heatmap, then k is computed as: k = kl + \u221a 2W dbb 2 kr, (1) where kl is the minimum kernel size, kr is the range of k. Note, for in-the-wild images of multiple people, Cm not only contains the scale information of every potential target, but also contains strong location information that can be leveraged to reduce redundancy and focus features. This is further explored in Sec. 3.1. 
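To make the Body Center heatmap concrete, the snippet below renders one frame by placing a Gaussian at each body center whose width grows with the bounding-box diagonal. Because the extracted form of Eq. (1) is ambiguous, the linear interpolation between a minimum kernel size and a range, and the constants used, are assumptions for illustration rather than the paper's exact formula.

```python
"""Illustrative rendering of a Body Center heatmap for one frame.

The scaling rule below (linear in the normalized bbox diagonal) is a stand-in
for Eq. (1); treat the constants and the exact formula as assumptions.
"""
import numpy as np


def render_center_heatmap(centers, diagonals, size=64, k_min=2.0, k_range=6.0):
    """centers: list of (x, y) in heatmap coordinates; diagonals: bbox diagonals."""
    H = W = size
    ys, xs = np.mgrid[0:H, 0:W]
    heatmap = np.zeros((H, W), dtype=np.float32)
    for (cx, cy), d_bb in zip(centers, diagonals):
        # Larger people get a wider Gaussian; normalizing by the map diagonal
        # keeps the ratio roughly in [0, 1].
        ratio = min(d_bb / (np.sqrt(2.0) * W), 1.0)
        k = k_min + ratio * k_range
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * k ** 2))
        heatmap = np.maximum(heatmap, g)   # keep the strongest response per pixel
    return heatmap


if __name__ == "__main__":
    hm = render_center_heatmap([(20, 30), (45, 12)], diagonals=[40.0, 15.0])
    print(hm.shape, float(hm.max()))
```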
Mesh Parameter map Pm \u2208RT \u00d7145\u00d7H\u00d7W : Pm (where H=W=64) contains the camera parameters Am \u2208 RT \u00d73\u00d7H\u00d7W and SMPL parameters Sm \u2208RT \u00d7142\u00d7H\u00d7W . \u2022 In terms of camera parameters, Am = (\u03be, tx, ty) describes the 2D scale and translation information for every person in each frame, such that the 2D projection \u02c6 J of the 3D body joints J can be obtained as \u02c6 Jx = \u03beJx + tx, \u02c6 Jy = \u03beJy + ty. \u2022 The SMPL parameters, Sm, describe the 3D pose \u03b8 and shape \u03b2 of the body mesh at each 2D position. For every potential person, \u03b8 \u2208R6\u00d722 describes the 3D rotations in the 6D representation [45] of each body joint apart from the hands, and \u03b2 \u2208R10 are the shape parameters. Combining \u03b8 with \u03b2, SMPL establishes an efficient mapping to a human 3D Mesh M \u2208R6890\u00d73. 3.1. BCA: Body Center Attention The Body Center Attention mechanism is at the core of CoordFormer. It aims to fuse position information and acts as a learnable feature indexer by leveraging the representation pattern of the body center heatmap Cm. Each pixel in Cm represents a potential person and learning relations at this pixel-level through Multi-Head Self-Attention (MHSA) would result in redundant calculations as most pixels do not contain people. Instead, we leverage the fact that Cm contains effective position information which can be used as a natural additional attention map for locating people in the corresponding frame. We thus use the Body Center heatmap as the Attention map, i.e. Body Center Attention, to focus and extract features of all persons. Specifically, given an input video sequence V = {It}T t=1 with T frames, we first use the backbone to extract the feature map Fm \u2208RT \u00d7H\u00d7W \u00d7C. To enhance the perception of the coordinate system, we extend Fm with coordinate channels [25] resulting in Fcoord and predict Cm from it. Finally, we compute the focused features as the Hadamard product between Cm and Fm. Note, here we leverage Fm instead of Fcoord, to avoid altering the coordinate features of Fcoord. Let Ft m \u2208RH\u00d7W \u00d7C, Ft coord \u2208RH\u00d7W \u00d7(C+2) and Ct m \u2208RH\u00d7W \u00d71 be the feature map, coordinate feature map and Body Center heatmap of the tth frame, respectively. The focused feature map of the tth frame Ft focus \u2208 Figure 3. An overview of the CoordFormer. (a) Given a video sequence, CoordFormer first extracts a Feature map from each image and predicts the Body Center heatmap that reflects the probability of each position being a body center. Then CoordFormer leverages our proposed BCA mechanism and Spatial-Temporal Decoder to predict the pixel-level Mesh Parameter map that contains SMPL and camera parameters. Finally, the Body Center heatmap is parsed and the 3D mesh results are sampled. (b) The Coordinate Enhancing Layer that the Spatial Transformer and the Temporal Transformer of CoordFormer are comprised of. Each layer consist of multi-head CAA operations, a feed-forward network (FFN), Layernorm, and skip connections. 
$\mathbb{R}^{H\times W\times C}$ can then be computed as follows: $F^t_{coord} = \mathrm{ACC}(F^t_m)$ (2), $C^t_m = \mathrm{Conv}_c(F^t_{coord})$ (3), $F^t_{focus,c} = C^t_m \odot F^t_{m,c}$ (4), where $\mathrm{ACC}(\cdot)$ denotes adding the coordinate channels, $\odot$ denotes the Hadamard product, $\mathrm{Conv}_c(\cdot)$ is the head convolution layer that produces the Body Center heatmap, and $F^t_{focus,c}$ and $F^t_{m,c}$ denote the $c$-th channel of $F^t_{focus}$ and $F^t_m$, respectively. As obtaining $C_m$ is arguably the simplest learning task in the multi-head framework, it represents a reliable source for the focused features $F_{focus}$ and facilitates the effectiveness of BCA. 3.2. CEL: Coordinate Enhancing Layer After establishing the existence and location of the people in the video, the motion sequence features must be used to determine their temporal relationships. Moreover, in multi-person scenarios, it is imperative to understand the spatial-temporal interactions to facilitate accurate mesh recovery. The spatial-temporal constraints between all known entities must therefore be modeled effectively. Inspired by the progress on Spatial-Temporal Transformers (ST-Trans) with joint coordinates as input [26, 44, 46], we adopt a powerful ST-Trans as the base model for our Spatial-Temporal Decoder. However, directly applying an ST-Trans to $F_{focus}$ does not produce the desired results. This is because the patch-level position information captured by Position-Encoding [36] is not enough to regress the precise joint coordinates required for our single-stage design. Moreover, as illustrated in Fig. 2, vision transformers [11] that split features into patches and extract tokens from them can degrade the pixel-level information, especially for $C_m$. Empirical evidence for this is provided in the supplementary material. To add precise coordinate information across frames and maintain the pixel-level representation of $C_m$ and $P_m$, we introduce the CAA module, which extends the self-attention operation of the Transformer Encoder [36]. Unlike Position-Encoding [36], which provides only rough location information at the patch level, the CAA module captures the coordinate relationships between $(t, x, y)$ of each pixel. As depicted in Fig. 4, we extend $C_m$ with both time and axis coordinates, enabling us to leverage $C_m$ for detection while also leveraging the coordinate features to capture relations. Specifically, we set the pixel coordinate $PC \in \mathbb{R}^{W\times 1} = [1, 2, 3, \ldots, W]$ and the time coordinate $TC \in \mathbb{R}^{T\times 1} = [1, 2, 3, \ldots, T]$, and repeat them to $PC_r \in \mathbb{R}^{T\times W\times 1}$ and $TC_r \in \mathbb{R}^{T\times H\times 1}$. The input feature $F^1_{input} \in \mathbb{R}^{T\times H\times (W+2)}$ of the first CEL is then the concatenation of $F_{focus}$, $PC_r$, and $TC_r$. After adding coordinate information, three linear projections, $f_Q$, $f_K$, $f_V$, are applied to transform $C_m$ and $F^l_{input}$ into three matrices of equal size, namely the query $Q$, the key $K$, and the value $V$, respectively. The CAA operation is then calculated by: $Q = f_Q(F_{input})$, $K = f_K(C_m)$, $V = f_V(F_{input})$ (5), $\mathrm{CAA}(C_m, F_{input}) = \mathrm{Softmax}\!\left(\frac{QK^T}{\sqrt{D}}\right)V$ (6). As shown in Fig. 3, in our proposed CEL, $H$ heads of CAA are applied to $C_m$ and $F^l_{input}$. Therefore, the output of the $l$-th CEL, $F^l_{output} \in \mathbb{R}^{T\times H\times W}$, can be computed as $F' = \mathrm{LN}(F^l_{input} + \mathrm{CAA}(F^l_{input}))$ (7). Figure 4. Network structure of our CAA module.
Given the Centermap and Featuremap as input, the precise coordinate information is encoded in the Centermap by coordinate-encoding and the rough position information is encoded in the Featuremap by Position-Encoding. Then, K, Q and V are computed for scaled dot-product attention. With powerful position information as key, CAA can capture high-quality spatial-temporal correspondence among multiple persons. Fl output = LN(F\u2032 + FFN(F\u2032)), (8) where LN indicates the Layer Normalization [4] and FFN indicates a feed-forward network. The output of layer l, Fl output, is then provided as input to the next layer, i.e. becomes F(l+1) input. Through multiple CELs, our SpatialTemporal Transformers receives sufficient global location information for implicit feature matching. 3.3. Spatial-Temporal Decoder Building on the coordinate-awareness induced by CAA, we leverage the Spatial-Temporal Transformers to learn the spatial and temporal constraints, respectively. As shown in Fig. 3, the Spatial-Temporal Decoder forms a residual structure and first establishes spatial feature relationships, before modelling the temporal connections. Spatial Transformer Module. Since the representation patterns of the Body Center Heatmap bring spatial information to the feature due to its similarity around the body center, we first use the Spatial Transformer to extract the corresponding spatial information. Given the input Finput, the Spatial Transformer performs a CAA operation on each frame, where Q, K, V are \u2208Rkt\u00d7Ekt , and where k and Ekt indicate the number of tokens and the length of the token embedding, respectively. Temporal Transformer Module. After building the spatial relationships, the Temporal Transformer is used to ensure consistency in the temporal relationships. Given the input Finput, the Temporal Transformer performs a CAA operation on all frames, where Q, K, V are \u2208R(T \u00b7kt)\u00d7Ekt . Coordinate Information Fusion. Since the Coordinate encoding only adds spatial coordinates for one dimension at a time in the 2D image, we observe improvements by transposing Cm at alternating layers, thereby infusing coordinate information along both spatial dimensions. More specifically, each Transformer has 2L CELs. At every Lth 2N layer where N = [1, 2, 3..., L], Cm will be transposed to Cmt \u2208RT \u00d7W \u00d7H to add precise coordinate information, resulting in \uf8f1 \uf8f2 \uf8f3 Fl+1 output = CEL(Cmt, Fl input), l = 2N Fl+1 output = CEL(Cm, Fl input), otherwise. (9) Through multiple layers of CEL, the Transformer learns the correspondence along both dimensions. 3.4. Loss Functions The loss function of CoordFormer consists of a set of temporal and spatial loss functions that ensure temporal consistency and spatial accuracy, respectively. Temporal loss Ltem. We add Ltem to maintain the similarity of adjacent frames via Ltem = waccelLaccel + waj3dLaj3d + wsmLsm, (10) where Laccel and Laj3d are the Accel error [20] and the L2 loss of the 3D joints offsets, respectively, and Lsm is a regular L1 loss between consecutive frames, preventing mutation of Cm and Fm. For each loss item, w(\u00b7) indicates the corresponding weight. Spatial losses Lspa. For spatial accuracy, we follow the previous methods [17, 32] to add loss functions on SMPL parameters, 3D body joints, 2D body joints and Center Body heatmap. Specifically, Lcm is the focal loss [32] of the Center Body heatmap. 
L\u03b8 and L\u03b2 are L2 loss of SMPL pose \u2212 \u2192 \u03b8 and shape \u2212 \u2192 \u03b2 parameters respectively. Lprior is the Mixture Gaussian prior loss [5, 27] of the SMPL parameters for supervision of prior knowledge. To supervise the accuracy of the joint prediction, Lj3d and Lpj2d are added. Lj3d consist of Lmpj and Lpmpj, where Lmpj is the L2 loss of predicted 3D joints \u2212 \u2192 J and Lpmpj is the L2 loss of the predicted 3D joints after Procrustes alignment with the ground truth [32, 34]. Lpj2d is the L2 loss of the 2D projection of the 3D joints \u2212 \u2192 J . For each loss item, w(\u00b7) indicates the corresponding weight and Lspa can be computed as, Lspa =wcmLcm + wposeLpose + wshapeLshape +wpriorLprior + wj3dLj3d + wpj2dLpj2d. (11) 4. Experiments 4.1. Implementation Details Network Architecture. To facilitate a fair comparison, we follow prior approaches [32] and leverage the HRNet32 [7] as the backbone, similar to [32, 40]. Datasets. To ensure a fair comparison with previous methods, the training is conducted on well-known datasets. The image dataset that is used to train the spatial branch consists of two 3D pose datasets (MPI-INF-3DHP[29] and MuCo-3DHP [29]) and two in-the-wild 2D pose datasets (MPII [2] and LSP [14, 15]), while the video dataset consists of the 3DPW [37] and Human3.6M [12] datasets. Evaluation. Evaluation is performed on the 3DPW [37] dataset as the Human3.6M [12] and MPI-INF-3DHP[29] datasets only contain one person per frame and can thus not be used to assess the performance in multi-person scenarios. Therefore, 3DPW [37] is employed as the main benchmark for evaluating the 3D mesh/joint error. Moreover, we follow [32] and divide the 3DPW dataset into three subsets, namely 3DPW-PC, 3DPW-OC and 3DPW-NC. These subsets represent subsets containing person-person occlusion, object occlusion and non-occluded/truncated cases, respectively, and are used to evaluate the performance under different occlusion scenarios. Following prior approaches [20, 38], the quantitative performance is evaluated by computing the mean per joint position error (MPJPE), the Procrustes-aligned mean per joint position error (PAMPJPE), and the mean Per Vertex Error (PVE) for each frame. Baselines. We compare CoordFormer to both singleimage-based and video-based baseline methods. For single image-based methods, we include HMR [17], SPIN [22], CRMH [13], EFT [16], BMP [42], ROMP [32] and BEV [33]. For video-based methods, we include HMMR [18], Doersch et al. [10], DSD-SATN [34], VIBE [20], TCMR [8], MEVA [44], MPS-Net [38] and MotionBERT [46]. Note that MotionBERT [46] requires additional 2D skeletons motion information as input. 4.2. Comparisons to the State-of-the-Art In-the-wild multi-person scenarios. To reveal the effectiveness of CoordFormer, we evaluate CoordFormer under the different in-the-wild scenarios of 3DPW. For a fair and comprehensive comparison, we follow [32] to adopt three evaluation protocols and then compare CoordFormer with state-of-the-art methods. As ROMP was originally trained on a considerably larger dataset, which included OH [43], the pseduo 3D labels from [16], and PoseTrack [1], we retrain ROMP on our dataset to ensure fair comparisons. While we attempted the same with BEV, we observed that BEV did not converge due to the missing relative depth and age supervision that is used to learn BEV\u2019s centermap. For completeness, we still report the original results reported in [32] for both ROMP and BEV as reference. 
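Tying back to the objectives in Eqs. (10) and (11) above, the sketch below shows how such a weighted multi-term loss is typically assembled; the weight values and the stand-in loss callables are placeholders, not the settings used by CoordFormer.

```python
"""Sketch of assembling a weighted spatial/temporal objective (cf. Eqs. 10-11).

Only the weighted-sum structure mirrors the text; the weight values and the
dummy loss terms are placeholders for illustration.
"""
from typing import Callable, Dict

import torch

SPATIAL_WEIGHTS = {"cm": 1.0, "pose": 1.0, "shape": 0.1, "prior": 0.01, "j3d": 1.0, "pj2d": 1.0}
TEMPORAL_WEIGHTS = {"accel": 0.1, "aj3d": 1.0, "smooth": 0.1}


def weighted_sum(terms: Dict[str, Callable[[], torch.Tensor]], weights: Dict[str, float]) -> torch.Tensor:
    """Evaluate each loss term and accumulate w_i * L_i."""
    total = torch.zeros(())
    for name, weight in weights.items():
        total = total + weight * terms[name]()
    return total


if __name__ == "__main__":
    # Dummy closures stand in for the real loss computations.
    dummy = {k: (lambda: torch.tensor(0.5)) for k in {**SPATIAL_WEIGHTS, **TEMPORAL_WEIGHTS}}
    loss = weighted_sum(dummy, SPATIAL_WEIGHTS) + weighted_sum(dummy, TEMPORAL_WEIGHTS)
    print(float(loss))
```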
To comprehensively verify the in-the-wild performance, we follow Protocol 1 to evaluate models on the entire 3DPW dataset. Without any ground truth as input, singleperson methods [20, 22] are equipped with a 2D human detector [6, 31]. As shown in Tab. 1, CoordFormer significantly outperforms all the baselines in MPJPE and PAMPJPE, which reveals that CoordFormer can successfully learn the pixel-level feature representation and better model spatial-temporal relations through ST-Trans. Moreover, to evaluate the ability in modeling temporal motion constrains, we follow Protocol 2 on the 3DPW test set without fine-tuning on the 3DPW training set. In Tab. 1, CoordFormer takes the whole image as input and the temporal branch is only trained on the Human3.6 M [12] dataset, while multi-stage baseline methods can use the cropped single-person image as input and train on more video datasets, i.e. Human3.6 M [12], MPI-INF3DHP [29], AMASS [28]. CoordFormer still outperforms all baselines. Finally, we follow Protocol 3 to evaluate the models on the 3DPW test set with 3DPW fine-tuning. As shown in Tab. 2, CoordFormer outperforms all the methods in MPJPE and PAMPJPE, while being only slightly worse than MotionBERT in PVE. Note that MotionBERT requires additional 2D skeleton motion as input, while CoordFormer can directly be applied on in-the-wild images. A qualitative comparisons to state-of-the-art methods is provided in Fig. 5, demonstrating the effectiveness of CoordFormer to precisely recover the mesh. Additional qualitative results are included in the supplementary material. 3DPW upper-bound performance. To show the upperbound performance of the video-based methods on the in-the-wild multi-person video dataset, i.e. 3DPW, we compare CoordFormer with previous state-of-the-art videobased methods regardless of their training dataset and training setting. As shown in Tab. 3, CoordFormer achieves the best results, which demonstrates the effectiveness of CoordFormer for multi-person mesh recovery from videos. Occlusion scenarios. As shown in Tab. 4, CoordFormer achieves superior performance on the 3DPW-NC and 3DPW-OC subset under non-occlusion and object occlusion cases according to PAMPJPE. Further comparisons show that CoordFormer outperforms ROMP [32] in MPJPE on all 3DPW subsets, demonstrating that precise coordinate information improves the performance under occlusion. Runtime comparisons. In Tab. 5, all comparisons are performed on a desktop with a GTX 3090Ti GPU and a Intel(R) Xeon(R) Platinum 8163 CPU. All video-based models are tested on 8-frames video clips. CoordFormer is Table 1. Comparisons to the state-of-the-art methods on 3DPW following Protocol 1 and 2 (evaluate on the entire 3DPW dataset and on the test set only). * means that additional datasets are used for training [32] . Protocol 1 Protocol 2 Methods MPJPE \u2193 PAMPJPE \u2193 Methods MPJPE \u2193 PAMPJPE \u2193 PVE \u2193 ROMP(ResNet-50)* [32] 87.0 62.0 ROMP(ResNet50)* [32] 91.3 54.9 108.3 Openpose + SPIN [22] 95.8 66.4 HMR [17] 130.0 76.7 HMMR [18] 116.5 72.6 139.3 CRMH [13] 105.9 71.8 Arnab et al. [3] 72.2 YOLO + VIBE[20]* 94.7 66.1 GCMR [23] 70.2 DSD-SATN [34] 69.5 BMP [42]* 104.1 63.8 SPIN [22] 96.5 59.2 116.4 ROMP [32] 90.87 61.34 ROMP [32] 96.96 57.48 110.13 CoordFormer(Ours) 88.95 59.86 CoordFormer(Ours) 95.27 54.58 110.35 Figure 5. Qualitative results of ROMP [32], BEV [33], VIBE [20], MPS-Net [38] and CordFormer on 3DPW and the internet videos. 
slightly slower than image-based methods [32, 33] due to the overhead in spatial-temporal modeling, however, CoordFormer is significantly faster than the video-based methods [20, 38]. 4.3. Ablation Study To validate the effectiveness of the BCA and CAA modules in CoordFormer, we train CoordFormer under different settings and conduct ablation studies following Protocol 3 to evaluate on 3DPW. Specifically, we evaluate the BCA module by replacing Ffocus with Fcoord without extra attention mechanism and evaluate the CAA module by skipping the Coordinate encoding in Fig. 4. As shown in Tab. 6, CoordFormer with BCA and CCA achieves the best result in the in-the-wild scenarios, which fully demonstrates the effectiveness of BCA and CCA. Specifically, the results confirm that BCA can effectively enhance the perception of potential people in the multi-person scenario. Second, the ablation experiments strongly reflect the importance of precise coordinate information in videos. In summary, the results from Tab. 6 reveal the importance of capturing position information in the multi-person scenario and the effectiveness of the BCA and CCA modules. We further perform an additional ablation study on the Spatial-Temporal Transformer of CoordFormer. Results in Tab. 7 illustrate the benefit of exploiting temporal and spatial information jointly. Table 2. Comparisons to the state-of-the-art methods on 3DPW following Protocol 3 (fine-tuned on the training set). * means that additional datasets are used for training [32]. Methods MPJPE \u2193 PAMPJPE \u2193 PVE \u2193 ROMP(ResNet-50)* [32] 84.2 51.9 100.4 ROMP(HRNet-32)* [32] 78.8 48.3 94.3 BEV* [33] 78.5 46.9 92.3 EFT [16] 51.6 VIBE [20] 82.9 51.9 99.1 MPS-Net [38] 84.3 52.1 99.7 MotionBERT [46] 80.9 49.1 94.2 ROMP [32] 81.06 49.07 96.74 CoordFormer(Ours) 79.41 46.58 94.44 Table 3. Comparisons of best result to the state-of-the-art videobased methods for in-the-wild scenarios on 3DPW. Methods MPJPE \u2193 PAMPJPE \u2193 PVE \u2193 HMMR [18] 116.5 72.6 139.3 Doersch et al. [10] 74.7 Arnab et al. [3] 72.2 DSD-SATN [34] 69.5 VIBE [20] 82.9 51.9 99.1 MEVA [44] 86.9 54.7 TCMR [8] 86.5 52.7 103.2 GLAMR [40] + SPEC [21] 54.9 GLAMR [40] + KAMA [19] 51.1 MPS-Net [38] 84.3 52.1 99.7 CoordFormer(Ours) 79.41 46.58 94.44 Table 4. Comparisons to state-of-the-art methods on the personoccluded (3DPW-PC), object-occluded (3DPW-OC) and nonoccluded/truncated (3DPW-NC) subsets of 3DPW. * means that additional datasets are used for training. Metric Method 3DPW-PC \u2193 3DPW-NC \u2193 3DPW-OC \u2193 PAMPJPE ROMP* 75.8 57.1 67.1 CRMH [13] 103.55 65.7 78.9 VIBE [20] 103.9 57.3 65.9 ROMP 77.64 56.67 66.6 CoordFormer 79.30 54.13 64.47 MPJPE ROMP 103.70 95.53 100.79 CoordFormer 101.51 93.17 97.25 Table 5. Run-time comparison on a 3090 GPU. Methods Time per frame(s) \u2193 FPS\u2191 Backbbone Using Temporal information ROMP [32] 0.01329 75.26 HRNet-32 \u2715 BEV [33] 0.01448 69.04 HRNet-32 \u2715 VIBE [20] 0.07881 12.68 HRNet-32 \u2713 MPS-Net [38] 0.08013 12.47 HRNet-32 \u2713 CoordFormer 0.01867 53.55 HRNet-32 \u2713 Table 6. Ablation study under 3DPW Protocol 3. Methods MPJPE \u2193 PAMPJPE \u2193 PVE \u2193 CoordFormer w/o CAA 83.19 50.62 99.21 CoordFormer w/o BCA 82.20 48.84 98.23 CoordFormer 79.41 46.58 94.44 Table 7. Ablation study of spatial and temporal Transformer on 3DPW. S means only training the spatial branch, ST means finetuning the temporal branch on Human3.6 M, ST-fine means finetuning on the 3DPW training set. 
Evaluation Methods MPJPE \u2193 PAMPJPE \u2193 PVE \u2193 On entire 3DPW S 95.05 63.22 115.90 ST 88.95 59.86 103.88 On test set only S 103.95 58.03 120.67 ST 95.27 54.58 110.35 ST-fine 79.41 46.58 94.44 The reason for the decline in performance when only leveraging the spatial branch can be attributed to two factors: the inability to utilize temporal information and the fact that CAA lacks temporal coordinate information. 5. Conclusion We proposed CoordFormer to achieve single-stage multi-person mesh recovery from videos. CoordFormer incorporates implicit multi-person detection, tracking, and spatial-temporal modeling. Two critical novelties are the Coordinate-Aware Attention mechanism for pixel-level feature learning and the Body Center Attention for personfocused feature selection. CoordFormer paves the way for various downstream applications related to perceiving group behavior, including but not limited to virtual reality and physical therapy. Despite CoordFormer\u2019s robust performance to recover multi-person meshes, its current version lacks the ability to recover completely occluded meshes. We plan to explore this exciting area by leveraging the continuity along the temporal dimension of the body center heatmap. Acknowledgment: This work was supported in part by National Key R&D Program of China under Grant No. 2020AAA0109700, Guangdong Outstanding Youth Fund (Grant No. 2021B1515020061), Shenzhen Science and Technology Program (Grant No. RCYX20200714114642083), Shenzhen Fundamental Research Program(Grant No. JCYJ20190807154211365), Nansha Key RD Program under Grant No.2022ZD014 and Sun Yat-sen University under Grant No. 22lgqb38 and 76160-12220011. We thank MindSpore for the partial support of this work, which is a new deep learning computing framwork1. 1https://www.mindspore.cn/", "introduction": "Considerable progress has been made on monocular 3D human pose and shape estimation from images [5, 35, 17, 22, 39] due to extensive efforts of computer graph- *Both authors contributed equally to this work as co-first authors. \u2020Corresponding author. Figure 1. Comparison of video-based multi-person mesh recov- ery pipelines. (a) Multi-stage pipelines [20, 8, 44, 38, 40] ex- plicitly generate tracklets and model single-person temporal mesh sequences independently. (b) Our single-stage CoordFormer im- plicitly matches persons across frames and simultaneously models multi-person mesh sequences in an end-to-end manner. ics and augmented/virtual reality researchers. However, while frame-wise body mesh detection is feasible, many applications require direct video-based pipelines to avoid spatial-temporal incoherence and missing frame-based de- tections [20, 8, 38]. Existing video-based methods follow a multi-stage de- sign that involves using a 2D person detector and tracker to obtain the image sequences of a single-person for pose and shape estimation [18, 20, 38, 41, 40]. More specifi- cally, these methods first detect and crop image patches that contain persons, then track these individuals across frames, and associate each cropped image sequence with a per- son. The frame-level or sequence-level features are then extracted and used to regress 3D human mesh sequences under spatial and temporal constraints. However, the accu- racy of the detection and tracking stage greatly affects the performance of these multi-stage approaches, making them particularly sensitive to false, overlapping, and missing de- tections. 
Moreover, these multi-stage approaches have a considerable computation cost and lack real-time perspec- arXiv:2308.10334v1 [cs.CV] 20 Aug 2023 tives since the single-person meshes can only be recovered sequence-by-sequence after detection and tracking. To address the above issues, we introduce CoordFormer, the first single-stage approach for multi-person 3D mesh recovery from videos that can be trained in an end-to-end manner. As shown in Fig. 1, our method differs from current state-of-the-art approaches [20, 8, 44, 38, 40] by being a single-stage pipeline that implicitly performs detection and tracking through the interaction of feature representations, producing multiple mesh sequences simultaneously. In particular, CoordFormer leverages a multi-head framework to predict a body center heatmap, which is en- coded using our proposed Body Center Attention (BCA). BCA serves as a weak/intermediate person detector that fo- cuses the framework-wide feature representations on po- tential body centers. Many-to-many temporal-spatial re- lations among people and across frames are then derived from the BCA-focused features and directly mapped to mesh sequences using our novel Coordinate-Aware Atten- tion (CAA). CAA is integrated into a Spatial-Temporal Transformer (ST-Trans) [44, 26, 24] to capture non-local context relations at the pixel level. See Fig. 2 for an illus- tration of CAAs motivation. Facilitated by BCA and CAA, CoordFormer advances existing video mesh recovery solu- tions beyond explicit detection, tracking and sequence mod- eling. Under various experimental settings on the 3DPW dataset, CoordFormer significantly outperforms the best re- sults of state-of-the-art by 4.2%, 8.8% and 4.7% on MPJPE, PAMPJPE and PVE metrics, respectively. CoordFormer also improves inference speed by 40% compared to the state-of-the-art video-based approaches [20, 38]. More- over, we demonstrate that enhancing and capturing pixel- level coordinate information significantly benefits the per- formance under multi-person scenarios. The main contributions of this work are as follows: \u2022 We propose the first single-stage multi-person video mesh recovery approach, where our BCA mechanism fuses position information and our CAA module en- ables end-to-end multi-person model training. \u2022 We demonstrate that the pixel-level coordinate corre- spondence is the most critical factor for performance. \u2022 Extensive experiments on challenging 3D pose datasets demonstrate that the proposed method achieves significant improvements, outperforming the state-of-the-art methods." }, { "url": "http://arxiv.org/abs/2308.00718v1", "title": "Beam Detection Based on Machine Learning Algorithms", "abstract": "The positions of free electron laser beams on screens are precisely\ndetermined by a sequence of machine learning models. Transfer training is\nconducted in a self-constructed convolutional neural network based on VGG16\nmodel. Output of intermediate layers are passed as features to a support vector\nregression model. With this sequence, 85.8% correct prediction is achieved on\ntest data.", "authors": "Haoyuan Li, Qing Yin", "published": "2023-08-01", "updated": "2023-08-01", "primary_cat": "physics.data-an", "cats": [ "physics.data-an", "cs.LG" ], "main_content": "A bird view of such a simple-looking task reveals the challenges behind it. \u2022 Complicated background noises. The background noises are not static. In contrast, it is coherent in both time and space domain due to quantum effect. 
Thus we have to dynamically change the background when doing background subtraction for different images. • Large variety in the intensity and shape of the beam spot. As indicated above, the maximum intensity of the beam can be incredibly high. However, it can also be 0, which corresponds to the case where no beam spot is present. The shape of the beam can also vary significantly from case to case. • Ground truth is hard to obtain. The signal processing methods sometimes fail to find the beam spot. There are also cases where the photo is so poor that even a human cannot find the beam spot after all the signal processing procedures. The task is similar to face recognition, and the expectation of solving it in one stroke is unrealistic. Thus, we develop the following strategy. First, we develop an algorithm that is capable of dealing with cases where signal processing has been applied to realize the fundamental functions. Second, we extend the application to cases where we gradually drop pre-processing steps, e.g., Gaussian blur. In the end, we extend the application to raw images. Figure 1: Raw photos. In this report, we describe the accomplishment of the first step, which realizes the fundamental functions. 3 Related Work A similar dilemma to the one stated above occurred when researchers tried to pinpoint the electron bunch's trace [2]. It was solved by first feeding the images into a convolutional neural network [1], VGG16 [7], and then performing linear regression on its codewords. This gives us the inspiration for this project. Signal processing algorithms have been developed for locating the beam spot. Thus, we use the results from these algorithms as the ground truth for our training. 4 Dataset The training dataset for this project consists of 162 original VCC screen figures and 16,200 figures generated from the original ones (each original image corresponding to 100 generated images). Each figure contains only one beam spot. To generate new figures, we first cut out the part of the figure that contains the beam spot, then rotate the small patch by a random angle and place it at a random location. Finally, we cut out the newly covered region and paste it back to the original position of the beam spot. The position of the beam in each original figure is determined by the signal processing algorithm. The position of the beam in each generated figure is determined by the generation process. The position of the beam in each figure is represented by the coordinates of two diagonal vertices: $z^{(i)} = (y^{(i)}_{\min}, y^{(i)}_{\max}, x^{(i)}_{\min}, x^{(i)}_{\max})^T$. 5 Method 5.1 Pipeline To obtain an algorithm robust to variations of the beam spot and noise, we utilize a two-step method. First, we use a convolutional neural network (CNN) as a feature extractor; we take the intermediate outputs of the CNN as features and train a Support Vector Regression (SVR) model with them. The pipeline is demonstrated in Figure 2. 5.2 Feature Extraction Since we have a moderate sample size, transfer training seems suitable for this purpose. VGG16 is selected as the pre-trained model. VGG16 is a deep convolutional neural network with 13 convolutional layers, 5 pooling layers, and 3 fully connected layers in total. The structure of this network is demonstrated in Figure 3. Figure 2: Algorithmic flowchart. The outputs of the first and second fully connected layers are both 4096-dimensional vectors. We feed in our samples and use these two vectors as the features of the image.
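The report re-implemented VGG16 in TensorFlow 0.11; as a modern stand-in, the sketch below extracts the same two 4096-dimensional fully connected activations with torchvision's pretrained VGG16 and concatenates them into an 8192-dimensional feature. The pretrained weights, 224x224 preprocessing, and layer indices follow torchvision's conventions and are substitutions for illustration, not the original code.

```python
"""Extracting 4096-d fc1/fc2 activations from VGG16 as beam-image features.

torchvision is used as a modern substitute for the TensorFlow 0.11 model in the
report; the ImageNet weights and the 224x224 preprocessing are assumptions.
"""
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def fc_features(image: Image.Image, model: torch.nn.Module) -> torch.Tensor:
    """Return the concatenated outputs of the first two fully connected layers (8192-d)."""
    x = preprocess(image.convert("RGB")).unsqueeze(0)
    x = torch.flatten(model.avgpool(model.features(x)), 1)
    captured = []
    for i, layer in enumerate(model.classifier):
        x = layer(x)
        if i in (0, 3):          # fc1 and fc2 in torchvision's classifier layout
            captured.append(x)
    return torch.cat(captured, dim=1).squeeze(0)


if __name__ == "__main__":
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
    feat = fc_features(Image.new("RGB", (480, 640)), vgg)
    print(feat.shape)  # torch.Size([8192])
```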
To do transfer training, first we re-construct the VGG16 with Python 2.7 and TensorFlow 0.11. The last layer of the original VGG16 is a classifier over a thousand classes. We modify it to output the position of the center of the beam spot. The loss is defined as the squared distance between the predicted beam center and the true beam center. Figure 3: Structure of VGG16 [3]. An $\ell_2$ regularization term is added, which contains the parameters of the last three fully connected layers: $L_{CNN,loss} = \frac{1}{m}\sum_{i=1}^{m}\|\vec{x}_i - \vec{\tilde{x}}_i\|^2 + \frac{\lambda}{m}\|W\|^2$ (1), where $\vec{x}_i$ and $\vec{\tilde{x}}_i$ denote the predicted center of the beam and the true center of the beam (represented by the center of the box), respectively. $W$ is a vector consisting of all parameters in the last three fully connected layers, i.e., we only tune the parameters in the last three layers. The reason for only tuning the last three layers is that we do not have enough data to tune all parameters. According to the modern viewpoint, the first several layers mainly detect fine-scale structures, while the upper layers capture larger-scale objects. So it seems more reasonable to tune the layers that directly affect our next step. 5.3 Support Vector Regression The feature for the $i$-th figure is denoted by $q^{(i)} \in \mathbb{R}^{8192}$. To avoid overfitting, we use $\ell_2$ regularization when implementing support vector regression (SVR) with a Gaussian kernel to model the situation. The four parameters of the position are independent, so we train an SVR model for each of the four labels by maximizing the following dual function [8]: $L_s = \sum_{i=1}^{m}\alpha^s_i z^{(i)}_s - \frac{1}{2}\sum_{i,j=1}^{m}\alpha^s_i\alpha^s_j K(q^{(i)}, q^{(j)})$ (2), with $\alpha^s_i \in [-t, t]$, $i \in \{1, 2, \cdots, m\}$, $s = 1, 2, 3, 4$. Here $m$ is the number of training samples, the $\alpha^s_i$ are the optimization parameters, and $t$ is the penalty ratio of the difference between the predicted label and the real label. This dual function is obtained from the primal optimization problem [8] in the following equations by eliminating the original variables: $L_s = \frac{1}{2}\|\theta_s\|_2^2 + t\sum_{i=1}^{m}\xi^+_i + t\sum_{i=1}^{m}\xi^-_i$ (3), under the constraints that $\forall i \in \{1, 2, \cdots, m\}$: $\xi^+_i \ge 0$ (4), $\xi^-_i \ge 0$ (5), $z^{(i)}_s - \theta_s^T q^{(i)} \le \xi^+_i$ (6), $\theta_s^T q^{(i)} - z^{(i)}_s \le \xi^-_i$ (7), where $\xi^+_i$ and $\xi^-_i$ are the thresholds for the training error. 6 Training 6.1 Convolutional Neural Network The Nesterov momentum method [4] is utilized to minimize the CNN loss function. We use a vanishing learning rate which decays as $t^{-1}$: $\alpha = 10^{-4} \times (1 + t)^{-1}$ (8). 6.2 Cross Validation To maximize the usage of the limited data, we use 9-fold cross-validation. First we randomly shuffle the 162 original images to remove potential correlations between different images. Then we divide them into 9 batches of 18 images each. For each batch, the other 144 original images together with the related 14,400 generated images form the training data set for the next two steps. This guarantees the independence between the training and testing processes. 6.3 PCA and SVM The sequential minimal optimization (SMO) algorithm is utilized to minimize the SVR loss function. The package scikit-learn is used throughout the process to guarantee the performance. The feature has a huge dimension of 8192, while the prediction only contains 4 numbers.
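A minimal scikit-learn rendering of this Section 5.3/6.3 pipeline (PCA on the 8192-dimensional features followed by one RBF-kernel SVR per box coordinate) is given below. The random data, PCA dimension, and SVR hyperparameters are placeholders rather than the tuned values from the report.

```python
"""PCA + per-coordinate RBF SVR, mirroring the pipeline in Sections 5.3/6.3.

Random data stands in for the real VGG16 features; n_components, C, and
epsilon are illustrative defaults, not the values tuned in the report.
"""
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8192))            # one 8192-d feature vector per image
Y = rng.uniform(0, 64, size=(200, 4))       # (y_min, y_max, x_min, x_max) per image

# One SVR is fit independently for each of the four coordinates, as in the text.
model = make_pipeline(
    PCA(n_components=160),
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.5)),
)
model.fit(X, Y)
print(model.predict(X[:2]).shape)           # (2, 4) predicted box coordinates
```

In the report's setting, such a pipeline would be re-fit once per cross-validation fold and per candidate PCA dimension.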
To prevent over-fitting, we use principal component analysis (PCA) to extract the most influential dimensions and iterate the training and testing procedure for 12 different dimensions equally distributed on a log scale, i.e., {5, 10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 8000}. 7 Results and Discussions 7.1 Performance After re-tuning, the regression algorithm gives superior performance. Beam positions are correctly predicted in 139 of the 162 original images. Figure 4 demonstrates the typical performance of the regression on the test set. When the regression algorithm catches the spot, it gives a very precise prediction, which implies that the algorithm has indeed learned how to pinpoint the position of the spot. Yet, when it cannot catch the spot (picture in the second column of Figure 4), the regression algorithm just guesses a point near the center of the screen. After checking the pictures where the algorithm misses the spot, we found that the missed spots are not dim at all. In fact, the maximum intensity in those figures is even larger than in the figures with perfect predictions. But the beam spots are all very small in those figures. 7.2 Effect of PCA dimension To prevent overfitting, we use PCA to pick out the most influential dimensions of the features before regression. Figure 5 demonstrates the training error of the SVR for different PCA dimensions. The error is defined as the distance between the predicted beam center and the true beam center. Figure 6 demonstrates the performance of the SVR for different PCA dimensions. To quantify the performance, we use the average ratio between the overlapped area and the area of the true spot as the indicator of performance. Figure 4: Test Results. From these two figures, it is clearly shown that when the PCA dimension exceeds 1000, there is a severe overfitting problem, as the training error vanishes and the performance drops in that region. 8 Conclusions and Future Works With the newly generated figures and the re-tuned neural network, the performance of the SVR improves greatly. Yet it is still far from applicable in a real environment. Three approaches are scheduled for further improvements. Figure 5: Test performance (overlap area ratio) for different PCA dimensions. Figure 6: Training error (center deviation, for d = 5, 160, 1280) for different PCA dimensions. Figure 7: Test results for raw pictures. 8.1 Fine-Tuning Neural Network VGG16-like neural networks are huge and hard to tune. It is highly likely that with improved tuning skills and a bit of luck and patience, we can get better performance. 8.2 More and Harder Data Noisy figures are currently beyond the signal processing algorithm's capability. But as we can see, our regression algorithm has sometimes done a better job on the existing samples. And even for raw pictures, our model can occasionally generate good prediction results (Figure 7). So we may try the algorithm on harder samples in the future. In particular, we will try to mark the position of the beam by hand for noisy figures and train the model on those samples as well. 8.3 Building New Neural Network It is possible that a smaller but better-trained neural network can do a better job. Only after trying both methods can we decide which way is better. We have already constructed another smaller neural network.
Training of this one will soon begin.", "introduction": "The free electron laser(FEL) at Stanford Linear Accelerator Center(SLAC) is an ultra-fast X-ray laser. As one of the most advanced X-ray light source [5] [6], it is famous for its high brightness and short pulse dura- tion: it is 10 billion times brighter than the world\u2019s second brightest light source; the pulse duration is several tens femtoseconds.It plays a pivotal role in both fundamental science research and applied research [6]. The mecha- nism behind this laser is very delicate [5]. Thus to keep the laser in optimal working condition is challenging.The positions of the electron beams and the laser beams are of fundamental importance in the control and maintenance of this FEL. Currently, the task of locating beam spots heavily depends on human labor. This is mainly attributed to the wide varieties of beam spots and the presentation of strong noises as demonstrated in Figure 1, where the white square marks the boundary of the beam spot. Each picture requires a long sequence of signal processing methods to mark the beam position. 1 arXiv:2308.00718v1 [physics.data-an] 1 Aug 2023 To make things even worse, different instrument configurations and work- ing conditions require different processing parameters. Within the frame of same configuration, the parameters will also drift away along with time ad- vancement because of the inherent delicacy of the instrument. This makes tuning and maintaining the FEL a tedious and burdensome task for re- searchers.Currently, the data update frequency of the laser is 120 Hz. We can barely handle this. In 2020, after the scheduled update, the frequency will climb to 5000 Hz. Thus the only hope lies in automatic processing methods. Considering that simple signal processing method can not handle such nasty condition, We hope to come up with a general machine learn- ing algorithm which is capable of locating the beams\u2019 positions quickly and automatically. In this report, we demonstrate the consecutive application of neural network and supportive vector machine (SVM) to determine the positions of beam spots on virtual cathode camera (VCC) screen in simple cases." }, { "url": "http://arxiv.org/abs/2304.03669v1", "title": "DATE: Domain Adaptive Product Seeker for E-commerce", "abstract": "Product Retrieval (PR) and Grounding (PG), aiming to seek image and\nobject-level products respectively according to a textual query, have attracted\ngreat interest recently for better shopping experience. Owing to the lack of\nrelevant datasets, we collect two large-scale benchmark datasets from Taobao\nMall and Live domains with about 474k and 101k image-query pairs for PR, and\nmanually annotate the object bounding boxes in each image for PG. As annotating\nboxes is expensive and time-consuming, we attempt to transfer knowledge from\nannotated domain to unannotated for PG to achieve un-supervised Domain\nAdaptation (PG-DA). We propose a {\\bf D}omain {\\bf A}daptive Produc{\\bf t}\nS{\\bf e}eker ({\\bf DATE}) framework, regarding PR and PG as Product Seeking\nproblem at different levels, to assist the query {\\bf date} the product.\nConcretely, we first design a semantics-aggregated feature extractor for each\nmodality to obtain concentrated and comprehensive features for following\nefficient retrieval and fine-grained grounding tasks. Then, we present two\ncooperative seekers to simultaneously search the image for PR and localize the\nproduct for PG. 
Besides, we devise a domain aligner for PG-DA to alleviate\nuni-modal marginal and multi-modal conditional distribution shift between\nsource and target domains, and design a pseudo box generator to dynamically\nselect reliable instances and generate bounding boxes for further knowledge\ntransfer. Extensive experiments show that our DATE achieves satisfactory\nperformance in fully-supervised PR, PG and un-supervised PG-DA. Our\ndesensitized datasets will be publicly available\nhere\\footnote{\\url{https://github.com/Taobao-live/Product-Seeking}}.", "authors": "Haoyuan Li, Hao Jiang, Tao Jin, Mengyan Li, Yan Chen, Zhijie Lin, Yang Zhao, Zhou Zhao", "published": "2023-04-07", "updated": "2023-04-07", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "2.1. Visual Retrieval Given a text query, Visual Retrieval (VR) [1, 3, 20, 38, , 50] aims to find the corresponding image/video in a liGiven a text query, Visual Retrieval (VR) [1, 3, 20, 38, 39, 50] aims to find the corresponding image/video in a library. The common latent space based methods [1,50] have been proven their effectiveness, which first extract the visual and textual features and map them into a common latent space to directly measure vision-language similarity. Representatively, [15] applies CNN and RNN to encode images and sentences respectively, and learn image-caption matching based on ranking loss. [50] proposes a semantic graph to generate multi-level visual embeddings and aggregate results from the hierarchical levels for the overall crossmodal similarity. Recently, transformer [42] exhibits better performance in Natural Language Processing [11, 19], Computer Vision [4, 12, 24, 25, 27] and multi-modal area [22, 23, 26, 31, 44, 46\u201348] than previous architecture, especially for global information understanding. Unsuprisingly, there is an increasing effort on repurposing such powerful models [1, 16, 29, 52] for VR. They apply transformer to learn joint multi-mmodal representations and model detailed cross-modal relation, which achieves satisfactory performance. 2.2. Visual Grounding The paradigm of Visual Grounding (VG) [28,34,37,41], which aims to localize the objects on an image, is similar as Visual Retrieval (VR), they are both to search the best matching part in visual signals according to the text query. Compared to VR, modeling fine-grained internal relations of the image is more significant for VG. In early work, twostage methods [6,21,49] were widely used, which first generate candidate object proposals, then leverage the language Swin-TF block Textual Transformer Object-Seeking Transformer Target: Source: \u8fd9\u4e5d\u829d\u5802\u7ea2\u7cd6\u2026 (This jiuzhitang brown sugar...) rep rep Linear Linear (a) Semantics-Aggregated Feature Extractor (b) Cooperative Seekers (c) Dynamic Knowledge Transfer Box Pred Similarity Unmatched Pairs Ground-truth Box Swin-TF block \ud835\udc49 \ud835\udc46 \ud835\udc49\ud835\udc47 \ud835\udc44\ud835\udc46 \ud835\udc44\ud835\udc47 \ud835\udc49 \ud835\udc46 \ud835\udc44\ud835\udc46 Text Embed Patch Embed Patch Merge rep rep rep rep \u2026. \u2026. Stage 4 \ud835\udc3b 4\u00d7 \ud835\udc4a 4 \ud835\udc3b 32\u00d7 \ud835\udc4a 32 Stage 1 \ud835\udc3b\u00d7 \ud835\udc4a \u7f57 \u6280 \u84dd \u2026 \u8fd9 \u4e5d \u829d \u2026 Patch Tokens Char & Word Tokens \ud835\udc3b\u00d7 \ud835\udc4a \u2026.... 
Aggregate Semantics Image Seeking Object Seeking Coordinates (\ud835\udc65, \ud835\udc66, \u210e, \ud835\udc64) ImgSeek Loss ObjSeek Loss \u7f57\u6280\u65e0\u7ebf\u9f20\u6807\u2026 (Logitech wireless mouse...) Domain Aligner Select top \ud835\udc58 instances \ud835\udc49 \ud835\udc46\ud835\udc49\ud835\udc47\ud835\udc44\ud835\udc46\ud835\udc44\ud835\udc47 ObjectSeeker ObjSeek PseLoss \ud835\udc44\u2032\ud835\udc47 \ud835\udc49\u2032\ud835\udc47 ObjectSeeker Predicted Coordinates (\ud835\udc65, \ud835\udc66, \u210e, \ud835\udc64) Instance Similarity \ud835\udc59\ud835\udc5c\ud835\udc50\ud835\udc46 \ud835\udc59\ud835\udc5c\ud835\udc50\ud835\udc46 Cosine score DA Loss Pseudo Box \ud835\udc59\ud835\udc5c\ud835\udc50\ud835\udc46\ud835\udc59\ud835\udc5c\ud835\udc50\ud835\udc47 gradient \u00d7 \ud835\udc59\ud835\udc5c\ud835\udc50T / : source flow/ source module / : target flow/ target module / : mixed flow/ sharable module / : visual/ textual feature vector Figure 2. Overview of our DATE. (a) is the feature extractor, applying the semantics-aggregated transformers to obtain image and query features. (b) is the cooperative seekers, calculating the similarity to seek the image for PR and predicting coordinates to seek the object for PG. (c) includes a domain aligner to minimize distribution divergence between source and target domains and a pseudo box generator to select reliable instances and generate bounding boxes for knowledge transfer in PG-DA. descriptions to select the most relevant object, by leveraging off-the-shelf detectors or proposal generators to ensure recall. However, the computation-intensive proposal generation is time-consuming and also limits the performance of these methods, one-stage methods [30, 45] concentrate on localizing the referred object directly. Concretely, [45] fuses the linguistic feature into visual feature maps and predict bounding box directly in a sliding-window manner. Recently, [10] re-formulates VG as a coordinates regression problem and applies transformer to solve it. Generally, VR and VG are regarded as two separate problems. In this paper, we mine the commonalities of the two problems and design a uni\ufb01ed architecture based on cooperative seeking to ef\ufb01ciently solve VR and VG effectively. 2.3. Un-supervised Domain Adaptation Unsupervised domain adaptation (UDA) aims to transfer knowledge from the annotated source domain to the unlabelled target domain, and the challenge is how to overcome the in\ufb02uence of domain gap. In uni-modal tasks applications, several UDA techniques have been explored, including aligning the cross-domain feature distribution [17, 32], applying adversarial learning strategy [2,36] or reconstruction method [8] to obtain domain-invariant features. And DisV Loss \ud835\udc44! \ud835\udc44\" \ud835\udc49\" \ud835\udc49 ! \ud835\udc59\ud835\udc5c\ud835\udc50! \ud835\udc59\ud835\udc5c\ud835\udc50\" \ud835\udc43(\ud835\udc49 !) \ud835\udc43(\ud835\udc49 \") RKH Space Minimize Uni-modal Marginal Distribution Divergence (\ud835\udc43(\ud835\udc44!)) (\ud835\udc43(\ud835\udc44\")) \ud835\udc43(\ud835\udc42!|\ud835\udc49 !,\ud835\udc44!) \ud835\udc43(\ud835\udc42\"|\ud835\udc49 \",\ud835\udc44\") Minimize Multi-modal Conditional Distribution Divergence DisO Loss RKH Space DisQ Loss Figure 3. The multi-modal domain aligner. [9] uses optimal transport to estimate the discrepancy between the two distributions and exploits labels from the source domain. 
Different from the works described above, our task is cross-modal in nature, which is more challenging due to the heterogeneous gap between different modalities. In multi-modal area, few works have considered UDA, [5] studies the cross-dataset adaptation for visual question answering, [7] studies the video-text retrieval with pseudolabelling algorithm. To the best of our knowledge, this is the \ufb01rst work to consider un-supervised Visual Grounding in domain adaptation setting. 3. Proposed DATE 3.1. Problem Formulation In this paper, we explore fully-supervised PR and PG, and un-supervised PG-DA in domain adaptation setting. In the next, we will formulate them. PR and PG. We collect a fully-annotated dataset {V, Q, O}, given a textual query Qi in query set Q, PR and PG aim to seek the image-level product VQi from whole image gallery V , and object-level product OQi from an matched image VQi. The O is the bounding box annotation. PG-DA. We have access to a fully-annotated source domain S = \b V S, QS, OS\t , and an unannotated target domain T = \b V T , QT \t without box annotation OT . The goal of PG-DA is to transfer the knowledge from S to T , and seek the object-level product on T . 3.2. Semantics-Aggregated Feature Extractor As Figure 2(a) shown, for both settings, we share the feature extractor, which can aggregate the global semantics of each modality for image seeking as well as capture comprehensive and context-aware features for object seeking. Image Stream. Given a RGB image v, we \ufb01rst split it into non-overlapping patches, then we refer to Swin-TF [35] for hierarchical feature extraction. Swin is mainly through the stack of patch merging module and Swin Transformer block to achieve 4-stage encoding, and the resolution is halved at each stage to acquire hierarchical features. The original Swin-TF utilizes average pooling to obtain image representation vector, ignoring the difference in importance of each token for semantics extraction. For promotion, we append a learnable [REP] token in front of visual token sequence during 4th stage, which is involved in the computation of self-attention and absorbs the weighted global image features. After the 4th stage, we can acquire the semanticsaggregated visual feature, and we name this advanced visual encoder as SA-Swin. Next we apply a linear layer to project them into dimension d to obtain VSA = [Vrep, V ] \u2208 Rd\u00d7(1+Nv), where Nv is the number of visual tokens, Vrep and V are concentrated and comprehensive features respectively. Query Stream. Given a textual query q, we \ufb01rst split it into character-level sequence and convert each character into a one-hot vector. After that, we tokenize each one-hot vector into a dense language vector in the embedding layer. Similar to image stream, we append a [REP] token in front of the tokenized query sequence to aggregate the global semantics. Note that the visual and textual [REP] tokens are independent for respective aggregation. Next we take all tokens into a textual transformer to produce the semanticsaggregated query features. Then we project them into the common space dimension d as image stream, to obtain QSA = [Qrep, Q] \u2208Rd\u00d7(1+Nq), where Nq is the number of textual tokens. 3.3. Cooperative Seekers After acquiring common space image feature VSA = [Vrep, V ] and query feature QSA = [Qrep, Q], as Figure 2(b) shown, we design two cooperative seekers to search the matched image and localize the object on this image. 
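Before the two seekers are described in detail, a minimal sketch (PyTorch is assumed; the module, tensor and function names here are illustrative and not taken from the released DATE code) shows the two ingredients they rely on: prepending a learnable [REP] token whose output state serves as the concentrated feature, and scoring image-query pairs by cosine similarity between the concentrated features.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticsAggregator(nn.Module):
    """Prepend a learnable [REP] token and project tokens to a common space (illustrative sketch)."""
    def __init__(self, in_dim, d_common):
        super().__init__()
        self.rep = nn.Parameter(torch.randn(1, 1, in_dim))   # learnable [REP] token
        self.proj = nn.Linear(in_dim, d_common)               # map to the common dimension d

    def forward(self, tokens):                                 # tokens: (B, N, in_dim)
        rep = self.rep.expand(tokens.size(0), -1, -1)          # (B, 1, in_dim)
        x = torch.cat([rep, tokens], dim=1)                    # prepend [REP] to the token sequence
        x = self.proj(x)                                       # (B, 1+N, d)
        return x[:, 0], x[:, 1:]                               # concentrated [REP] state, comprehensive tokens

def text_to_vision_scores(v_rep, q_rep):                       # (B, d), (B, d)
    """Cosine-similarity matrix between concentrated features; the diagonal holds matched pairs."""
    v = F.normalize(v_rep, dim=-1)
    q = F.normalize(q_rep, dim=-1)
    return q @ v.t()                                           # (B, B)

In the actual extractor the [REP] token also takes part in the encoder's self-attention, so its output state aggregates weighted global semantics rather than the plain concatenation shown here.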
Next we describe the responsibility of our two seekers. Image Seekers for PR. The goal of the image seeker is to search the image corresponds to a query. we can directly compute the cosine distance between concentrated features Vrep and Qrep to measure the simliarity between image and query, which is time-ef\ufb01cient to search the most similar item and ensures seeking inference is of trivial cost. Given a batch B with B image-text pairs during training, we calculate the text-to-vision similarity as pq2v(q) = exp(l \u00b7 s(Vrep, Qrep) \u00b7 mq2v) P v\u2208B exp(l \u00b7 s(Vrep, Qrep) \u00b7 mq2v) (1) mq2v = exp (\u03c4 \u00b7 s (Vrep, Qrep)) P q\u2208B exp (\u03c4 \u00b7 s (Vrep, Qrep)) (2) where pq2v(q) is text-to-vision probability distribution, l is a learnable logit scaling parameter, s(\u00b7, \u00b7) denotes cosine similarity, m denotes the prior matrix to re\ufb01ne the similarity distribution following [13], \u03c4 represents a temperature hyperparameter. For product retrieval on our datasets, the query (title or description of the product) can be also retrieved by the image, and the vision-to-text similarity is pv2q(v). Then, we treat matching pairs in the batch as positives, and all other pairwise combinations are treated as negatives, thus the image seeking loss can act as LImgS = 1 2Ev,q\u223cB[H \u0000pq2v(q), yq2v(q) \u0001 +H(pv2q(v), yv2q(v))], (3) where H(\u00b7, \u00b7) is the cross-entropy formulation, y(\u00b7) is the ground-truth binary label that positive and negative pairs are 1 and 0 respectively. Object Seeker for PG. Different from the image seeker, the ambition of object seeker is to localize the microscopic object-level product on an image, and more suf\ufb01cient image-query interaction and \ufb01ne-grained seeking are required. Thus, we leverage comprehensive image and query features V and Q for object seeking. We consider apply a transformer to fuse cross-modal tokens adequately, in order to learn how to localize the product during interaction, we frist append a learnable [LOC] token with visual and textual features as TO = [Tloc, V , Q] \u2208Rd\u00d7(1+Nv+Nq). Then we apply a cross-modal object-seeking transformer to embed TO into a common space by performing intraand inter-modality semantic interaction. Besides, we add learnable modal-type embedding and position embedding to the input of each transformer encoder layer. We leverage the output state of the [LOC] token floc from the object-seeking transformer and attach a regression module to it to predict 4-dim box coordinates. Further, to eliminate the in\ufb02uence of scale problem, we normalize the coordinates of the ground-truth box by the scale of the image and perform the object seeking loss as LObjS = \u2225b \u2212\u02c6 b\u22251 + G(b,\u02c6 b), (4) where G(\u00b7, \u00b7) is GIoU Loss [40], b = (x, y, w, h) and \u02c6 b = (\u02c6 x, \u02c6 y, \u02c6 w, \u02c6 h) are our prediction the normalized ground-truth box respectively. So far, PR and PG can be solved simultaneously by the cooperation of two seekers, and our cooperative seeking loss is Lcoop = \u03bbcoLImgS + LObjS, (5) where \u03bbco \u2208R are hyperparameters to weigh two losses. 3.4. Dynamic Knowledge Transfer As Figure 2(a) shown, we design a knowledge transfer method for PG-DA, including a domain aligner to alleviate feature distribution shift and a dynamic pseudo box generator to promote transfer. Domain Aligner. 
As in Sec. 3.3, we extract the visual feature $V^S_{SA} = [V^S_{rep}, V^S]$ and the textual feature $Q^S_{SA} = [Q^S_{rep}, Q^S]$ from the source domain $S$, and we acquire $V^T_{SA} = [V^T_{rep}, V^T]$ and $Q^T_{SA} = [Q^T_{rep}, Q^T]$ from the target domain $T$ in the same way. To alleviate the domain discrepancy, we design an alignment approach based on Maximum Mean Discrepancy (MMD), which compares two distributions by embedding each distribution into a Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$ with a kernel function $\phi$, and we utilize multiple Gaussian Radial Basis Function kernels as $\phi$. Given two marginal distributions $P_{X^S}$ and $P_{X^T}$ from the uni-modal source and target domains respectively, MMD can be expressed as $\mathrm{MMD}_{uni}(P_{X^S}, P_{X^T}) = \| \mu_{P_{X^S}} - \mu_{P_{X^T}} \|_{\mathcal{H}}$. (6) In order to compute the inner product of vectors using the kernel function $\phi$ in the RKHS, we square the MMD as $\mathrm{MMD}^2_{uni}(P_{X^S}, P_{X^T}) = \| \mu_{P_{X^S}} - \mu_{P_{X^T}} \|^2_{\mathcal{H}} = \big\| \frac{1}{n_S^2}\sum_{i=1}^{n_S}\sum_{i'=1}^{n_S}\phi(x^S_i, x^S_{i'}) - \frac{2}{n_S n_T}\sum_{i=1}^{n_S}\sum_{j=1}^{n_T}\phi(x^S_i, x^T_j) + \frac{1}{n_T^2}\sum_{j=1}^{n_T}\sum_{j'=1}^{n_T}\phi(x^T_j, x^T_{j'}) \big\|_{\mathcal{H}}$. (7) Then, we can minimize the distance between the visual feature distributions from different domains through $\mathrm{MMD}^2_{uni}$ as $\mathcal{L}_{DisV} = \sum_{v \in \mathcal{B}} [\mathrm{MMD}^2_{uni}(V^S_{rep}, V^T_{rep}) + \mathrm{MMD}^2_{uni}(\mu(V^S), \mu(V^T))]$, (8) where $\mu(\cdot)$ computes the mean value of $V$ over the token dimension. In the same way, we compute $\mathcal{L}_{DisQ}$ for the textual features. After that, we can obtain domain-invariant features. In addition to the discrepancy of the uni-modal marginal distributions, we compute the multi-modal conditional distribution divergence to adjust the output distribution for better adaptation, and the form of the MMD computation becomes $\mathrm{MMD}_{mul}[P(Y^S|X^S_V, X^S_Q), P(Y^T|X^T_V, X^T_Q)]$. (9) Concretely, we take the output states of the [LOC] token, $f^S_{loc}$ and $f^T_{loc}$, from the object-seeking transformers of the two domains and minimize $\mathrm{MMD}^2_{mul}$ to reduce the distance between the output feature distributions as $\mathcal{L}_{DisO} = \sum_{f^S_{loc}, f^T_{loc} \in \mathcal{B}} \mathrm{MMD}^2_{mul}(f^S_{loc}, f^T_{loc})$. (10) The total domain alignment loss function is as follows: $\mathcal{L}_{DA} = \lambda_{Dv}\mathcal{L}_{DisV} + \lambda_{Dq}\mathcal{L}_{DisQ} + \mathcal{L}_{DisO}$, (11) where $\lambda_{Dv}, \lambda_{Dq} \in \mathbb{R}$ are hyperparameters to weigh the losses. Dynamic Pseudo Box Generator. To further transfer knowledge from $S$ to $T$, we attempt to generate pseudo bounding boxes with the model trained on $S$ to supervise the model on $T$. However, it is unlikely that all data can be precisely boxed by the source model, which may result in unsatisfactory performance. Therefore, the instances from $T$ that are close to $S$ are relatively reliable and should be selected. For more precise selection, we compute the instance similarity between the two datasets rather than between batches. Thus, given the datasets $\{V^S, Q^S\}$ and $\{V^T, Q^T\}$, we calculate the cosine score of the features encoded by the semantics-aggregated extractor for every pair $\{V^S, V^T\}$ and $\{Q^S, Q^T\}$ in each modality to obtain the similarity matrices $M_V$ and $M_Q$, and we add them to obtain $M \in [-1, 1]^{N_S \times N_T}$, where $N_S$ and $N_T$ are the lengths of the source and target datasets respectively. Next, we rank the target instances by the number of scores exceeding the similarity threshold $\theta$ and select the top $k$ percent high-score instances $\{V^{T\prime}, Q^{T\prime}\}$. Then, we generate the pseudo box $\tilde{b}'$ with the source object seeker and predict the coordinates $b'$ with the target object seeker. Like Eq. 4, we perform the pseudo object seeking loss as $\mathcal{L}_{PObjS} = \|b' - \tilde{b}'\|_1 + G(b', \tilde{b}')$.
(12) We compute M each epoch after executing box generation, and the selected instances are dynamically updated. Table 1. Performance of Product Retrieval (text-to-vision) on our TMPS and TLPS datasets. Method TMPS R@1 R@5 R@10 R@50 Random 0.00 0.04 0.09 0.43 VSEpp 10.23 29.24 34.42 69.73 ViLT 14.39 38.42 50.74 83.23 DATE 16.32 40.54 51.23 82.58 TLPS Random 0.03 0.14 0.23 1.59 VSEpp 3.41 15.33 29.12 43.24 ViLT 5.38 19.29 35.95 57.48 DATE 6.44 21.71 36.32 59.58 Table 2. Performance of Product Grounding on our TMPS and TLPS datasets. Method TMPS TLPS mIoU Pr@1 mIoU Pr@1 Random 29.51 18.22 23.91 10.09 MAttNet 80.71 85.33 62.12 73.24 FAOA 76.24 83.72 61.31 69.13 TransVG 84.52 89.50 67.11 77.93 DATE 86.67 92.12 70.24 81.43 With the constant knowledge transfer, more instances can be labeled correctly, and hyper-parameter ratio k will be increased. The total knowledge transfer loss function is as follows LKT = LDA + \u03bbP OLP ObjS, (13) where \u03bbP O \u2208R are hyperparameters to weigh losses. 3.5. Training and Testing Fully-supervised PR and PG. We perform Lcoop for training, and we search the image of product by image-seeker for PR, and directly predict the coordinates of product on the image by object-seeker for PG during testing. Un-supervised PG-DA. We train the model in three stages. First, we warm up our model under fully-supervised setting on S domain by Lstage1 = LObjS. Next, we perform Lstage2 = \u03bbOLObjS + LDA on S and T to reduce domain gap. Then, we execute dynamic box generateing and add LP ObjS as Lstage3 = \u03bbOLObjS + LKT to further transfer the knowledge. We test the model on T domain in the same approach as PG. 4. Experiments 4.1. Our Product Seeking Datasets We collect two large-scale Product Seeking datasets from Taobao Mall (TMPS) and Taobao Live (TLPS) with Table 3. Performance of Product Grounding-DA on our datasets. (L\u2192M means we transfer the knowledge from TLPS to TMPS. And F, W, U stand for Fully-, Weakly-, Un-supervised respectively.) Method Mode TMPS TLPS mIoU Pr@1 mIoU Pr@1 Random 29.51 18.22 23.91 10.09 ARN W 70.72 73.32 51.31 53.24 MAF W 72.52 75.09 54.82 59.04 FAOA F 76.24 83.72 61.31 69.13 DATE F 86.67 92.12 70.24 81.43 L\u2192M M\u2192L Source-only U 75.20 83.62 59.64 67.71 MMD-uni U 76.93 84.87 60.74 69.01 Pseudo-label U 77.02 86.23 62.87 71.48 DATE U 79.92 89.35 64.86 74.75 about 474k image-title pairs and 101k frame-description pairs respectively. They are \ufb01rst two benchmark ecommerce datasets involving cross-modal grounding. For TMPS, each product item corresponds to a single title, three levels of categories and multiple displayed images with the manually annotated bounding box. For TLPS, we collect frames and descriptions from the livestreamer in live video streams, and annotate the location of described product. Note that the language in our datasets is mainly Chinese. The basic statistics about our datasets is in Appendix. We can see the categories of our datasets are diverse and the number of images are tens of times larger than existing datasets. After the collection, we split each dataset into training/validation/testing sets in a 8:1:1 ratio, and we make sure each product is isolated within one set. 4.2. Evaluation Metrics Product Grounding. Following [6], we measure the performance by mIoU (mean Intersection over Union) and precision (predicted object is true positive if its IoU with ground-truth box is greater than 0.5). Product Retrieval. 
We use standard retrieval metrics (following [1,52]) to evaluate text-to-vision (t2v) retrieval and vision-to-text (v2t) retrieval. We measure rank-based performance by R@K. 4.3. Performance Comparison and Analysis To evaluate the effectiveness of DATE, we compare it with various related methods (More details of our methods are reported in Appendix). For each task, we apply untrained model to predict results as Random method to perceive the dif\ufb01culty of tasks. Product Retrieval. We re-implement these representative cross-modal retrieval methods to compare with our DATE. Table 4. Ablation study of Product Retrieval and Grounding on TMPS and TLPS datasets. TMPS TLPS Method Grounding T2V Retrieval Grounding T2V Retrieval mIoU Pr@1 R@1 R@5 R@10 R@50 mIoU Pr@1 R@1 R@5 R@10 R@50 Visual Feature Extractor ResNet 80.73 84.13 10.85 29.10 40.82 70.52 64.12 72.25 2.91 13.82 30.94 49.31 DETR 82.29 87.71 12.12 33.52 44.52 74.13 66.13 76.81 4.33 16.39 32.81 54.91 Swin 83.11 89.19 13.21 35.54 46.12 77.59 67.31 78.35 5.01 18.56 34.14 56.25 SA-DETR 84.21 90.03 14.81 36.84 47.21 78.23 68.62 79.11 5.43 19.39 35.81 57.28 SA-Swin (Ours) 86.67 92.12 16.32 40.54 51.23 82.58 70.24 81.43 6.44 21.71 36.32 59.58 Cooperative Seekers w/o Rep 83.11 89.19 13.21 35.54 46.12 76.59 67.31 78.35 5.01 18.56 34.14 55.25 w/o ObjS 82.25 87.59 12.85 36.12 45.24 75.23 65.82 75.47 4.93 18.39 35.33 54.12 w/o Rep&ObjS 80.45 85.31 11.78 31.17 43.23 72.23 63.21 71.91 4.13 16.53 31.82 51.10 Full (Ours) 86.67 92.12 16.32 40.54 51.23 82.58 70.24 81.43 6.44 21.71 36.32 59.58 1) VSEpp [15], a respectively encoding method based on CNN and RNN. 2) ViLT [29], a jointly encoding method based on transformer. Product Grounding. In addition to cross-modal retrieval baselines above, we re-implement these classic visual grounding baselines to compare with our DATE. 1) MAttNet [49], a two-stage model. 2) FAOA [45], a one-stage model. 3) TransVG [10], a regression-based model under transformer architecture. The PR and PG results are presented in Table 1 and Table 2 respectively. We can see that (1) the Random results in both tasks are pretty low, showing our PR and PG are challenging. (2) The proposed DATE outperforms all the baselines by a large margin, indicating the effectiveness of our method for both PR and PG. (3) Although the performance of TransVG and ViLT is little behind ours, they are two separate models, and our method under uni\ufb01ed architecture is more time-ef\ufb01cient and memory-saving. Un-supervised Product Grounding-DA. To validate the effectiveness of our DATE in DA setting, we further reimplement these typical weakly-supervised VG baselines for comparison. 1) ARN [33], a reconstruction-based model. 2) MAF [43], a contrast-based model. For DA setting, we serve these methods as baselines for comparison. 1) Source-only, which applies the model trained on source domain to straightway test on the target dataset. 2) MMD-uni, which only utilizes MMD loss to minimize the uni-modal marginal distribution distance for visual and textual feature. 3) Pseudo-label, which trains the model on target domain entirely based on the pseudo box labels generated by the model trained on source domain. The results are presented in Table 3, and we can distill the following observations: (1) our un-supervised DATE outperforms all weakly-supervised methods and fully-supervised methods FAOA signi\ufb01cantly, demonstrating the knowledge has been transfered to target domain effectively. 
(2) Source-only method degenerates the performance severely due to the huge semantic gap between two domains, and MMD-uni only achieves slight improvement as the cross-domain discrepanciy fails to reduced suf\ufb01ciently. (3) Pseudo-label enhances limited performance since a number of bad instances are incorrectly labeled which misleads the model, while our DATE can dynamically select instances and generate reliable bounding boxes for transfer and boosting performance. 4.4. Ablation Study In this section, we study the effect of different visual feature extractors, text options and cooperative seeking strategies in Table 4. Visual Feature Extractor. We compare our SA-Swin to ResNet, DETR, Swin and SA-DETR methods, where ResNet, DETR and Swin apply ResNet-50 [18], DETR-50 [4] Swinbase [35] to extract image features respectively, and leverage the average pooled feature for PR and feed the \ufb02attened last feature map as tokens into object-seeking transformer for PG. And SA-DETR executes the same way as the former methods for PG, but injects the semantics-aggregated token from beginning for PR as SA-Swin performs. From the results in Table 4, we can \ufb01nd following interesting points: Figure 4. T-SNE visualization of visual and textual features. (1) Swin surpasses ResNet and DETR, illustrating better visual features are extracted by hierarchical transformer. (2) SA-DETR performs better than Swin which has more powerful feature extraction ability during cooperative training, demonstrating our designed semantics-aggregated encoder can extract concentrated and comprehensive features for following cooperative seeking for both PR and PG. Cooperative Seeking Strategies. We conduct ablative experiments as follows: w/o Rep: using the average pooling of two modal features for image seeking (PR) rather than [REP] token. w/o ObjS: removing object-seeking transformer, and applying an MLP to fuse visual and textual [REP] token for object seeking; w/o Rep&ObjS: using the average pooled feature for both image and object seeking. From Table 4, we observe that the performance decreases sharply after removing [REP] or ObjS. To analyse: (1) more discriminative representation of image and query can be extracted by weighted vector (i.e. [REP] token) than average pooling, con\ufb01rming the effectiveness of our semantics-aggregated feature extractor. (2) As w/o Rep result shown, the performance of object seeking (PG) degenerates although [REP] is not involved in it, which demonstrates such disadvantageous image seeking (PR) approach drags down object seeking (PG) during multi-task learning. (3) Image and object levels seeking falls on the shoulder of [REP] tokens in w/o ObjS model, which is detrimental for both levels seeking. The above two points prove the reasonableness of our designed cooperative seeking strategy. 4.5. Feature Visualization To help prove the validity of our DATE, we visualise visual and textual features by T-SNE for TMPS\u2192TLPS in Figure 4, earned by Source-only baseline and our DATE method. We can observe the shift between source and target domains is apparent, meanwhile there are overlaps in two domains, which is reasonable since a few scenes in Taobao Mall and Live are similar. With our proposed method, the discrepancy in feature distribution of two domains becomes narrow signi\ufb01cantly, suggesting our method has effectively aligned two domains. 
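For reference, such a projection can be produced with a short script like the following sketch (scikit-learn and matplotlib assumed; the feature arrays here are random stand-ins for the pooled source and target [REP] features).

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Stand-ins for pooled visual (or textual) features from the two domains, e.g. the [REP] states.
src_feats = np.random.randn(500, 256)   # source domain (TMPS)
tgt_feats = np.random.randn(500, 256)   # target domain (TLPS)

feats = np.concatenate([src_feats, tgt_feats], axis=0)
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(feats)

n_s = len(src_feats)
plt.scatter(emb[:n_s, 0], emb[:n_s, 1], s=4, label="source (TMPS)")
plt.scatter(emb[n_s:, 0], emb[n_s:, 1], s=4, label="target (TLPS)")
plt.legend()
plt.savefig("tsne_domains.png", dpi=200)   # overlap between the two point clouds indicates alignment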
Query: Farmacy\u6728\u74dc\u690d\u8403\u8865\u6c34\u4fdd\u6e7f\u6c28\u57fa\u9178\u4fee\u590d\u808c\u80a4\u7eff\u80d6\u5b50\u6e29\u548c\u6d01\u9762\u4e73 (Farmacy Papaya Plant-Extracted Moisturizing Amino-Acid Repairing Skin Green Fat Mild Cleanser) Rank1 Rank2 Rank3 DATE ViLT Rank4 Figure 5. Qualitative results of Product Retrieval sampled from TMPS dataset (green: correct, red: incorrect). 4.6. Qualitative Analysis To qualitatively investigate the effectiveness of our DETA, we compare ViLT and our DATE for PR as Figure 5 shown. We can \ufb01nd that the image-level product can be sought precisely by our DATE while ViLT fails to \ufb01nd the correct image until Rank3. Further, the whole top4 results retrieved by DATE are more relevant to the text query than the results from ViLT, which illustrates the multi-modal semantic understanding and interaction are suf\ufb01cient through our DATE. 5. Conclusion In this paper, we study the fully-supervised product retrieval (PR) and grounding (PG) and un-supervised PGDA in domain adaptation setting. For research, we collect and manually annotate two large-scale benchmark datasets TMPS and TLPS for both PR and PG. And we propose a DATE framework with the semantics-aggregated feature extractor, ef\ufb01cient cooperative seekers, multi-modal domain aligner and a pseudo bounding box generator to solve the problems effectively on our datasets. We will release the desensitized datasets to promote investigations on product retrieval, product grounding and multi-modal domain adaptation. In the future, we will consider more speci\ufb01c techniques like Optical Character Recognition (OCR) and Human Object Interaction (HOI) to further improve the performance of PR and PG. Acknowledgments This work was supported by National Natural Science Foundation of China under Grant No.62222211, No.61836002, No.62072397, and a research fund supported by Alibaba.", "introduction": "Nowadays, with the rapid development of e-commerce and livestreaming, consumers can enjoy shopping on e-mall or various livestreaming platforms. Although the fact that *Corresponding author. 1https://github.com/Taobao-live/Product-Seeking Query Title Mall domain Source: with box annotation Live domain Target: w/o. box annotation Textual modal Product Gallery Product Gallery Visual modal Domain gap Feature space Query Description Grounding Feature space Retrieval Grounding Retrieval Figure 1. Illustration of Product Retrieval (PR) and Grounding (PG) problems on two datasets collected from Taobao Mall and Live. (1) Given a text query (i.e. Chinese title or description of a product), PR is to seek the corresponding image-level product from gallery while PG is to seek the object-level product from an image. (2) We further explore PG-DA, which aims to transfer knowledge from the annotated source domain to the unannotated target domain under the in\ufb02uence of multi-modal domain gap to achieve un-supervised PG. diverse products can be presented and purchased on screen brings us convenience, we are immersed in this miscel- laneous product world. Therefore, cross-modal Retrieval [1,3,14,20,38,39,50] for Product (PR), aiming to seek the corresponding image based on a text query, is signi\ufb01cant for boosting holistic product search engine and promoting consumers\u2019 shopping experience. 
Besides, provided that the object-level product can be localized on the target product image or live room im- age according to a query, it will help consumers focus on the desired product and also bene\ufb01t the downstream vision-to-vision retrieval. And we name this interesting task as Product Grounding (PG) like Visual Grounding [28, 34, 37, 41, 51]. Generally, PR and PG are seen as two separate tasks, but we consider mining the commonalities of PR and PG and regard them as Product Seeking at image- arXiv:2304.03669v1 [cs.CV] 7 Apr 2023 level and object-level respectively. And we design a uni- \ufb01ed architecture to simultaneously solve PR and PG, which is more time-saving and memory-economical than separate methods. To research the PR and PG with great practical applica- tion value, we collect two large-scale benchmark Product Seeking datasets TMPS and TLPS from Taobao Mall and Taobao Live domains with about 474k image-title pairs and 101k frame-description pairs respectively, and the locations of object-level products in images are manually annotated. As annotating bounding box of product is time-consuming and expensive, we explore how to transfer knowledge from an annotated domain to the unannotated one, and achieve un-supervised PG in domain adaptation setting (PG-DA). Thus, we propose the Domain Adaptive Product Seeker (DATE) to solve the following aspects of the challenging PR, PG and PG-DA problems. Firstly, due to the complexity of the mall and live scenar- ios, discriminative representations of the image and query are prerequisite to accurately localize the object. Consid- ering conventional CNNs are hard to achieve long-distance relation reasoning and full-scale understanding, we utilize and improve the Swin-TF [35] to extract hierarchical and comprehensive features. As large-scale image seeking is demanding for PR, it is vital to ensure seeking inference is of trivial cost. Thus, we inject [REP] token into Swin- TF to absorb the weighted global semantics, and condense them into a single vector, which will be discriminative and concentrated for following ef\ufb01cient image seeking. And we perform the same semantics-aggregated technique for query feature extraction. Secondly, the capacity of both macroscopic image seek- ing and microcosmic \ufb01ne-grained object seeking is neces- sary for PR and PG. Therefore, we present two cooperative seekers, where image seeker calculates the cosine similar- ity between visual and textual concentrated features for PR, and object seeker based on cross-modal interaction trans- former directly predicts the coordinates of the product by comprehensive features for PG. We validate the reasonable- ness of such cooperative strategy through experiments. Thirdly, due to the domain gap between two datasets as Figure 1 shown, applying the model straightway to test on target domain will cause performance degeneration severely for PG-DA. To the best of our knowledge, this is the \ufb01rst work to consider un-supervised Visual Grounding in do- main adaptation setting, and most uni-modal DA [8,32,36] and multi-modal DA [5,7] methods are not directly applica- ble in our complicated object seeking. 
Therefore, we devise a domain aligner based on Maximum Mean Discrepancy to align the domain by minimizing uni-modal marginal distri- bution and multi-modal conditional distribution divergence between source and target domains, and design a dynamic pseudo bounding box generator to select similar instances in target domain and generate reliable boxes for knowledge transfer. To summarize, the contributions of this paper are as fol- lows: \u2022 We collect and manually annotate two large-scale benchmark datasets for PR and PG with great practi- cal application value. \u2022 We propose a uni\ufb01ed framework with semantics- aggregated feature extractor and cooperative seekers to simultaneously solve fully-supervised PR and PG. \u2022 We explore un-supervised PG in domain adaptation setting and design the multi-modal domain aligner and dynamic box generator to transfer knowledge. \u2022 We conduct extensive experiments which shows that our methods achieve satisfactory performance in fully- supervised PR, PG and un-supervised PG-DA." } ], "Zhijie Lin": [ { "url": "http://arxiv.org/abs/2108.13630v1", "title": "SimulLR: Simultaneous Lip Reading Transducer with Attention-Guided Adaptive Memory", "abstract": "Lip reading, aiming to recognize spoken sentences according to the given\nvideo of lip movements without relying on the audio stream, has attracted great\ninterest due to its application in many scenarios. Although prior works that\nexplore lip reading have obtained salient achievements, they are all trained in\na non-simultaneous manner where the predictions are generated requiring access\nto the full video. To breakthrough this constraint, we study the task of\nsimultaneous lip reading and devise SimulLR, a simultaneous lip Reading\ntransducer with attention-guided adaptive memory from three aspects: (1) To\naddress the challenge of monotonic alignments while considering the syntactic\nstructure of the generated sentences under simultaneous setting, we build a\ntransducer-based model and design several effective training strategies\nincluding CTC pre-training, model warm-up and curriculum learning to promote\nthe training of the lip reading transducer. (2) To learn better spatio-temporal\nrepresentations for simultaneous encoder, we construct a truncated 3D\nconvolution and time-restricted self-attention layer to perform the\nframe-to-frame interaction within a video segment containing fixed number of\nframes. (3) The history information is always limited due to the storage in\nreal-time scenarios, especially for massive video data. Therefore, we devise a\nnovel attention-guided adaptive memory to organize semantic information of\nhistory segments and enhance the visual representations with acceptable\ncomputation-aware latency. The experiments show that the SimulLR achieves the\ntranslation speedup 9.10$\\times$ compared with the state-of-the-art\nnon-simultaneous methods, and also obtains competitive results, which indicates\nthe effectiveness of our proposed methods.", "authors": "Zhijie Lin, Zhou Zhao, Haoyuan Li, Jinglin Liu, Meng Zhang, Xingshan Zeng, Xiaofei He", "published": "2021-08-31", "updated": "2021-08-31", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.CL" ], "main_content": "Lip reading aims to recognize spoken sentences according to the given video of lip movements without relying on the audio stream. Early works focus on single word classification [9, 38] and then switched to full sentences prediction [1, 3, 8, 30, 34, 39, 41]. 
These works mainly study lip reading in a non-simultaneous manner with CTC-based model [3, 5, 30, 34, 39] and autoregressive model [1, 8, 26, 41, 42]. Among them, LipNet [3] takes advantage of spatiotemporal convolutional features and context modeling of RNNs. Chen et al. [5] design a system that leverages the task duality of lip reading and lip generation to improve both tasks. Afouras et al. [1] first introduce Transformer self-attention architecture into lip reading. Zhao et al. [42] enhance the training of lip reading model by distilling multi-granularity knowledge from speech recognition. Besides, instead of CTC decoder, Liu et al. [24] further study nonautoregressive lip reading by leveraging integrate-and-fire module to estimate the length of output sequence and alleviate the problem of time correlation. However, these methods explore lip reading in a non-simultaneous manner, where the sentence prediction relies on the entire video of talking face during inference. In this paper, we further study the task of simultaneous lip reading that recognizes sentences based on partial input, which owns more application scenarios. 2.2 Simultaneous Decoding Due to lower latency and broader scenarios, simultaneous decoding has attracted a lot interest in many fields such as neural machine translation (NMT) [15, 16], automatic speech recognition (ASR) [22, 31, 33, 40], speech to text translation [6, 11, 29, 32], speech to speech translation [35] and so on. In real-time scenarios, the simultaneous decoding aims to generate the predictions based on the given partial input instead of the whole sequence, and the history context could be limited due to the rapid increase in the length of input. Some widely used approaches for simultaneous decoding includes reinforcement learning (RL) [15, 16], connectionist temporal classification (CTC) [2], transducers [6, 31, 33] and attention-based encoder-decoder [27, 32]. Among them, at each time step, transducers generate the next target token, or an empty transfer to read next source input. In this paper, we concentrate on vision-text cross-modal simultaneous decoding and propose a novel lip reading transducer with an adaptive memory where the history frames are limited. 2.3 Memory Memory module introduces external memory to store the past context and absorb new information, which is proposed to improve the learning capability and boost the performance. The neural turing machine (NTM) [13] and differentiable neural computer (DNC) [14] are the typical memory for memorization and reasoning. For fewshot learning, memory module mainly stores the information contained in the support set [28, 37] and attempts to learn the common access mechanism across tasks. Memory module has also been incorporated into generative models [4, 23] and sequence modeling [21] that conditions on the global contextual information provided in external memory. The recurrent neural networks such as GRU [7] are also commonly-used differentiable memory module for sequence modeling, although they still suffer from gradual forgetting of early contents after memorizing long sequences. In this paper, considering the limited storage and computational cost, we devise a novel attention-guided adaptive memory module to compress the history semantic information and absorb upcoming video segments. 3 PROBLEM FORMULATION In this section, we first introduce the problem formulation of simultaneous lip reading. 
Given a sequence of video segments \ud835\udc94= {\ud835\udc941, \ud835\udc942, ..., \ud835\udc94\ud835\udc5b} without the audio stream, lip reading aims to predict the words sequence \ud835\udc64= {\ud835\udc641,\ud835\udc642, ...,\ud835\udc64\ud835\udc62} that the lip is speaking, where \ud835\udc94\ud835\udc61is the \ud835\udc61-th video segments containing several frames, \ud835\udc5bis the number of video segments, \ud835\udc5b\ud835\udc53is the number of frames in a segment, \ud835\udc64\ud835\udc56is the \ud835\udc56-th token and \ud835\udc62is the length of target sequence. Under the simultaneous setting, the lip reading model is required to generate the \ud835\udc56-th token \ud835\udc64\ud835\udc56with only partial input \ud835\udc94\ud835\udc5d\ud835\udc56= {\ud835\udc941, \ud835\udc942, ..., \ud835\udc94\ud835\udc5b(\ud835\udc64\ud835\udc56)}, where \ud835\udc5b(\ud835\udc64\ud835\udc56) is the number of segments needed to predict the\ud835\udc56-th token\ud835\udc64\ud835\udc56and\ud835\udc5b(\ud835\udc64\ud835\udc56) >= \ud835\udc5b(\ud835\udc64\ud835\udc56\u22121) for monotonic alignments. Also, in our paper, only the adjacent segments are available due to the limited storage, making the partial input \ud835\udc94\ud835\udc5d\ud835\udc56= {\ud835\udc94\ud835\udc5b(\ud835\udc64\ud835\udc56)\u2212\ud835\udc4e+1, \ud835\udc94\ud835\udc5b(\ud835\udc64\ud835\udc56)\u2212\ud835\udc4e+2, ..., \ud835\udc94\ud835\udc5b(\ud835\udc64\ud835\udc56)}, where \ud835\udc4eis the number of available segments for the \ud835\udc56-th token prediction. For simultaneous lip reading model, the monotonic alignments to predict the target sequence \ud835\udc64are not given explicitly, which means that the decoding segment path \ud835\udc51= {\ud835\udc94\ud835\udc5d1, \ud835\udc94\ud835\udc5d2, ..., \ud835\udc94\ud835\udc5d\ud835\udc62} is not unique. Therefore, the optimize object can be computed as follows: \ud835\udc43(\ud835\udc64|\ud835\udc94) = \u2211\ufe01 \ud835\udc51\u2208\ud835\udf19(\ud835\udc64) \ud835\udc62 \u00d6 \ud835\udc56=1 \ud835\udc43(\ud835\udc64\ud835\udc56|\ud835\udc94\ud835\udc5d\ud835\udc56) (1) where \ud835\udc43(\ud835\udc64|\ud835\udc94) is the probability of generating the target sequence \ud835\udc64, which is the sum over all possible decoding segment paths \ud835\udc51\u2208 \ud835\udf19(\ud835\udc64). 4 APPROACHES In this section, we describe the SimulLR approach thoroughly. As shown in Figure 2(a), the proposed model is composed of a truncated 3D spatio-temporal convolutional network to extract the visual features, a transformer-based sequence encoder, a transducer-based cross-modal decoder for language modeling and token prediction, an attention-guided adaptive memory to organize semantic information of history segments and enhance the visual representations with acceptable computation-aware latency. We also design several effective training strategies including CTC pre-training, model warm-up and curriculum learning to promote the training of the lip reading transducer. The details of our method are described in the following subsections. 4.1 Visual Encoder Truncated C3D. To learn better spatio-temporal representations for cross-modal decoding, prior non-simultaneous methods [24, 41] employ multiple 3D convolution in the visual encoder, which cannot be transferred to our simultaneous model directly due to their expanding receptive field on the whole video. 
To address this challenge, in our paper, we truncate the 3D convolutional network in the temporal dimension and perform spatio-temporal convolution only within one single segment $\mathbf{s}_t$, as shown in Figure 2(a), which introduces sufficient spatio-temporal context for representation learning while maintaining a simultaneous manner without absorbing the information of the entire video. Sequence Encoder. The sequence modeling of video segments is based on stacked multi-head self-attention layers and feed-forward layers, as proposed in Transformer [36] and transformer-based lip reading models (TM-seq2seq) [1]. Moreover, to enable simultaneous decoding, we employ time-restricted self-attention, where the unavailable and future frames are masked and each video frame can only see its previous several segments $\{\mathbf{s}_{t-a+1}, \mathbf{s}_{t-a+2}, ..., \mathbf{s}_t\}$, to simulate the streaming inputs and limited history storage. We denote the encoded visual representations of the $t$-th video segment $\mathbf{s}_t$ as $\mathbf{h}^v_t$, as shown in Figure 2. 4.2 Simultaneous Cross-Modal Decoder The simultaneous cross-modal decoder is built based on the neural transducer [12, 19]. Concretely, at each time step, the decoder (joint network) chooses either to predict the next token $\mathbf{w}_i$ based on the partial input $\mathbf{s}_{p_i}$, or to generate an empty transfer $\epsilon$ to read the next video segment $\mathbf{s}_{n(w_i)+1}$, making $n(w_i) = n(w_i) + 1$. Also, the syntactic structure of the generated sentence $\{\mathbf{w}_1, \mathbf{w}_2, ..., \mathbf{w}_{i-1}\}$ is taken into consideration with a language model $\mathrm{LM}(\cdot)$. With the reading of video segments, the tokens are generated frame-by-frame and then merged into the ultimate predictions. Language Model. Rather than a recurrent neural network (RNN), we build a uni-directional transformer-based language model that comprises multi-head self-attention and feed-forward layers to leverage the history context of generated sentences.
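To make the decoding procedure of the simultaneous cross-modal decoder concrete, the following sketch shows a greedy transducer-style loop; it is only an illustration under our own assumptions, where encode_segment, lm_step and joint are hypothetical stand-ins for the visual encoder, the uni-directional language model, and the joint network rather than the released implementation.

import torch

BLANK = 0  # index of the empty transfer (epsilon)

@torch.no_grad()
def greedy_simul_decode(segments, encode_segment, lm_step, joint, bos_id, max_tokens=100):
    """At each step, emit a token or (on blank) read the next video segment."""
    tokens = [bos_id]
    lm_state = None
    t = 0                                        # index of the current video segment
    h_v = encode_segment(segments[t])            # visual feature of the first segment
    h_w, lm_state = lm_step(tokens[-1], lm_state)
    while t < len(segments) and len(tokens) < max_tokens:
        logits = joint(h_v, h_w)                 # distribution over vocabulary + blank
        k = int(logits.argmax(-1))
        if k == BLANK:                           # empty transfer: consume the next segment
            t += 1
            if t < len(segments):
                h_v = encode_segment(segments[t])
        else:                                    # emit a token and advance the language model
            tokens.append(k)
            h_w, lm_state = lm_step(k, lm_state)
    return tokens[1:]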
Figure 2: (a) The overall framework of SimulLR: a truncated 3D spatio-temporal convolutional network to extract the visual features, a transformer-based sequence encoder, a transducer-based cross-modal decoder for language modeling and token prediction, an attention-guided adaptive memory to organize semantic information of history segments and enhance the visual representations. (b) The update and access of memory: absorb new segments by momentum update and discard obsolete features using the least frequently used (LFU) algorithm guided by attention scores. Specially, the semantic representations of different words are denoted as $\{\mathbf{h}^w_1, \mathbf{h}^w_2, ..., \mathbf{h}^w_m\}$. Joint Network. Based on the visual representations given by the simultaneous visual encoder and the semantic representations given by the uni-directional language model, we employ a fully-connected layer with softmax to compute the joint matrix $R$, where $R_{t,i}$ is the distribution over the token vocabulary with $\mathbf{h}^v_t$ and $\mathbf{h}^w_i$. A possible decoding path $d = \{\mathbf{s}_{p_1}, \mathbf{s}_{p_2}, ..., \mathbf{s}_{p_u}\}$ can be simply represented as a path from the start $(0, 0)$ to the end $(n, m)$ in the joint matrix. Therefore, the prior optimize object is further denoted as: $P_{td}(w|\mathbf{s}) = \sum_{d \in \phi(w)} P(d|R)$ (2) 4.3 Attention-guided Adaptive Memory In real scenarios, the storage is always limited by the extremely long input sequence (e.g. massive video data). Therefore, for simultaneous decoding, history segments may be unavailable, making it more difficult to predict a new token with limited visual context. To achieve a good storage-accuracy trade-off, we introduce a novel attention-guided adaptive memory to organize semantic information of history segments and enhance the visual representations with acceptable computation-aware latency. Specially, the attention-guided memory, containing $k$ memory banks, is constructed to absorb new segments by momentum update and discard obsolete features using the least frequently used (LFU) algorithm guided by attention scores. Enhanced Visual Feature.
As shown in Figure 2(b), given the adaptive memory $\mathbf{M} = \{\mathbf{m}_1, \mathbf{m}_2, ..., \mathbf{m}_k\}$, we compute an encoder-memory inter-attention for $\mathbf{h}^v_t$ to enhance the visual representations, given by $\tilde{\mathbf{h}}^v_t = \mathbf{h}^v_t + \sum_{i=1}^{k} \tilde{\alpha}_i \mathbf{m}_i$, $\tilde{\alpha}_i = \frac{\exp(\alpha_i)}{\sum_{j=1}^{k} \exp(\alpha_j)}$ (3) where $\alpha_i$ is the attention score of the $i$-th memory bank $\mathbf{m}_i$ and video segment $\mathbf{s}_t$, and $\tilde{\mathbf{h}}^v_t$ is the enhanced visual feature that absorbs earlier segments. The enhanced visual feature is actually used for the computation of the joint matrix $R$. Note that we employ the dot-product attention [36] to obtain scores over all the memory banks. Absorb New Segment. Since the attention distribution $\tilde{\alpha}$ reflects the similarity between the current video segment and the existing segments in the memory banks, some replacements in the memory bank seem to be redundant if the segment is close enough to some existing one. To enable higher memory efficiency and avoid storing redundant information, we adaptively absorb the new segment based on the information entropy $I_t$ guided by the attention distribution $\tilde{\alpha}$, given by $I_t = -\sum_{i=1}^{k} \tilde{\alpha}_i \cdot \log(\tilde{\alpha}_i)$ (4) High information entropy $I_t$ represents a more smoothed attention distribution and indicates that more information different from the memory is contained in video segment $\mathbf{s}_t$, while low information entropy indicates redundancy. To enable higher memory efficiency, we absorb these redundant visual features having $I_t < \gamma_e$ by momentum updating, as shown in Figure 2(b), given by $\mathbf{m}_j = \gamma_m \cdot \mathbf{m}_j + (1 - \gamma_m) \cdot \mathrm{Summarize}(\mathbf{s}_t)$, $j = \arg\max_j \tilde{\alpha}_j$ (5) where $\gamma_e$ is the information entropy threshold, $\gamma_m$ is the parameter to control the impact of the moving average, and $\mathrm{Summarize}(\cdot)$ is the operation (e.g. max-pooling) to aggregate features from different frames within a segment. Discard Obsolete Segment. For those video segments that are distinct from the existing ones in the memory bank, we simply replace the least frequently used segment in the adaptive memory.
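Putting the access-and-update rule above together, a minimal sketch (PyTorch assumed; the function name, attention scaling and variable names are ours rather than the released implementation) enhances a segment's features with attention over the memory banks, measures the entropy of that attention, and either momentum-updates the closest bank or signals that the least frequently used bank should be replaced; the LFU bookkeeping itself is formalized next.

import torch
import torch.nn.functional as F

def memory_access_update(h_v, memory, gamma_e, gamma_m):
    """h_v: (n_f, d) frame features of one segment; memory: (k, d) memory banks (detached in practice)."""
    seg = h_v.max(dim=0).values                       # Summarize(s_t): max-pool frames into one vector
    scores = memory @ seg / seg.size(0) ** 0.5        # dot-product attention scores over the k banks
    alpha = F.softmax(scores, dim=0)                  # (k,)
    h_v_tilde = h_v + (alpha.unsqueeze(0) @ memory)   # enhanced features: add the attention-weighted readout
    entropy = -(alpha * alpha.clamp_min(1e-8).log()).sum()
    j = int(alpha.argmax())
    if entropy < gamma_e:                             # redundant w.r.t. bank j: momentum update, Eq. (5)
        memory[j] = gamma_m * memory[j] + (1.0 - gamma_m) * seg
        replace_lfu = False
    else:                                             # novel segment: caller replaces the LFU bank with seg
        replace_lfu = True
    return h_v_tilde, alpha, replace_lfu, seg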
Also, the counting index is updated based on the soft attention distribution, given by \ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61(\ud835\udc5a\ud835\udc56) = \ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61(\ud835\udc5a\ud835\udc56) + \u02dc \ud835\udefc\ud835\udc56 (6) And the LFU index is computed as follows: \ud835\udc3f\ud835\udc39\ud835\udc48(\ud835\udc5a\ud835\udc56) = \ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61(\ud835\udc5a\ud835\udc56) \ud835\udc59\ud835\udc56\ud835\udc53\ud835\udc52(\ud835\udc5a\ud835\udc56) (7) where \ud835\udc50\ud835\udc5c\ud835\udc62\ud835\udc5b\ud835\udc61(\ud835\udc5a\ud835\udc56) and \ud835\udc59\ud835\udc56\ud835\udc53\ud835\udc52(\ud835\udc5a\ud835\udc56) are separately the counting index of \ud835\udc8e\ud835\udc56and timespan that \ud835\udc8e\ud835\udc56stays in the memory bank. 4.4 Training Pre-training with CTC Loss. To stable the training of the lip reading transducer, we first pre-train the model with an ordinary CTC loss without considering the syntactic structure of target sequences. The CTC also works in frame-synchronized mode and introduces a set of intermediate CTC path \ud835\udf11(\ud835\udc64) where each path is composed of target tokens and blanks that can be reduced to the target sequence \ud835\udc64. The CTC loss can be computed as follows: L\ud835\udc36\ud835\udc47\ud835\udc36= \u2212log \ud835\udc43\ud835\udc50\ud835\udc61\ud835\udc50(\ud835\udc64|\ud835\udc60) = \u2212log \u2211\ufe01 \ud835\udc50\u2208\ud835\udf11(\ud835\udc64) \ud835\udc43(\ud835\udc50|\ud835\udc60) (8) With the pre-trained model, we can train the lip reading transducer with the simultaneous lip readig loss as follows: L\ud835\udc46\ud835\udc56\ud835\udc5a\ud835\udc62\ud835\udc59\ud835\udc3f\ud835\udc45= \u2212log \ud835\udc43\ud835\udc61\ud835\udc51(\ud835\udc64|\ud835\udc94) = \u2212log \u2211\ufe01 \ud835\udc51\u2208\ud835\udf19(\ud835\udc64) \ud835\udc43(\ud835\udc51|\ud835\udc45) (9) Model Warm-up. Although a better visual encoder (stacked selfattention and feed-forward layers) can effectively promote the prediction, it also becomes difficult to train especially with deeper structure for transducer-based methods [18]. In this paper, we devise a strategy called model warm-up for the training of lip reading transducer with deeper structure. Specially, (1) We first apply a shallower sequence encoder (e.g. less self-attention and feed-forward layers) and focus on the training of truncated C3D layer, which warms up the C3D encoder. (2) We then freeze the parameters of truncated C3D and employ a deeper network structure, which warms up the sequence encoder. (3) We train both the visual encoder that has been warmed up and simultaneous decoder with the proposed loss. Curriculum Learning. To further make the training procedure stable, we exploit the novel training paradigm based on the curriculum learning that starts with short videos, learns the easier aspects of lip reading and then gradually increase the length of training videos. 5 EXPERIMENTS 5.1 Datasets GRID. The GRID [10] dataset contains 34,000 sentences uttered by 34 speakers. This dataset is easy to learn since the spoken sentences are in a restricted grammar and composed of 6\u223c10 words. The vocabulary of GRID is also small, comprising 51 different words including 4 commands, 4 color, 4 prepositions, 25 letters, 10 digits and 4 adverbs. All the videos of lip movements have the same length of 75 frames with a frame rate of 25fps. 
Following prior works [3, 5, 24], we randomly select 255 sentences for evaluation. TCD-TIMIT. The TCD-TIMIT [17] dataset contains 59 speakers that utters approximately 100 phonetically rich sentences, making this dataset more challenging but closer to the natural scene. Also, the video length and sentence in the TCD-TIMIT dataset are longer than GRID and variable. Following prior work [17], we use the recommended train-test splits for training and evaluation. 5.2 Implementation Details Data Preprocessing. For the videos, to extract lip movements, we first obtain a 256 \u00d7 256 aligned face with Dlib detector [20], crop the 160 \u00d7 80 mouth-centered region from the aligned face and then resize the region to 100 \u00d7 60 as the video input. To improve the recognition accuracy, we use the strategy of data augmentation that involves horizontal flips with 40% probability, crop 0\u223c5% of horizontal or vertical pixels with 40% probability. In particular, we convert the video frames to grey scale for the easier GRID dataset to reduce computation cost. For the sentences, we build a vocabulary at word-level for the GRID dataset while phoneme-level for the TCD-TIMIT dataset following previous works [17]. Model Setting. For simultaneous decoding, we set the number of available segments \ud835\udc4eto 2, and the number of frames in a video segment \ud835\udc5b\ud835\udc53to 3 for GRID dataset and to 5 for TCD-TIMIT dataset. The number of memory banks \ud835\udc58is set to 20. The information entropy threshold \ud835\udefe\ud835\udc52is set to 0.6 \u00d7 log2 \ud835\udc58and moving step \ud835\udefe\ud835\udc5ais set to 0.7. For the truncated C3D to extract spatial-temporal representations, we stack six 3D convolutional layers with 3D max pooling, RELU activation, and two fully connected layers. The kernel size of 3D convolution and pooling is set to 3 \u00d7 3. For both the segment sequence encoder and language model, we stack four self-attention layers with feed-forward network. We set \ud835\udc51\u210e\ud835\udc56\ud835\udc51\ud835\udc51\ud835\udc52\ud835\udc5b= 256 for GRID dataset and \ud835\udc51\u210e\ud835\udc56\ud835\udc51\ud835\udc51\ud835\udc52\ud835\udc5b= 512 for TCD-TIMIT dataset respectively. The joint network is simply a two-layer non-linear transformation. Training Setup. For GRID dataset, We pretrain the model using the CTC loss with 10 epochs, warmup the visual encoder using two sequence encoder layers with 20 epochs and then train the whole model using four encoder layers with 100 epochs. For TCD-TIMIT dataset, We pretrain the model using the CTC loss with 50 epochs, Table 1: The word error rate (WER) and character error rate (CER) on the GRID dataset, and the phoneme error rate (PER) on the TCD-TIMIT dataset. GRID TCD-TIMIT Method WER(%) CER(%) PER(%) Non-Simultaneous Methods LSTM [38] 20.4 / / LipNet [3] 4.8 1.9 / FastLR [24] 4.5 2.4 / LCANet [39] 4.215 1.532 / DualLip [5] 2.71 1.16 46.2 Simultaneous Methods LR-RNN-CTC 28.884 19.912 67.021 LR-TM-CTC 20.691 15.223 63.428 LR-RNN-TD 11.570 7.263 64.213 LR-TM-TD 3.125 1.588 62.831 SimulLR(Ours) 2.738 1.201 56.029 warmup the visual encoder using two sequence encoder layers with 50 epochs and then train the whole model using four encoder layers with 150 epochs. To train the SimulLR model, we employ the Adam optimizer with a initial learning rate 0.0005 for GRID dataset and 0.0003 for TCD-TIMIT dataset, and with a shrink rate of 0.99 according to the updating step. 
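One plausible reading of this schedule, written as a sketch (PyTorch assumed; the model and loss are stand-ins, and the released code may apply the decay at a different granularity), is an Adam optimizer whose learning rate is multiplied by 0.99 after every update:

import torch
import torch.nn as nn

model = nn.Linear(8, 8)                                          # stand-in for the SimulLR network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)        # 3e-4 is used for TCD-TIMIT
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

for step in range(1000):                                         # one parameter update per step
    loss = model(torch.randn(4, 8)).pow(2).mean()                # stand-in loss; the real objective is Eq. (9)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                                             # shrink the learning rate by 0.99 per update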
5.3 Evaluation Metrics
During the inference stage, the SimulLR model performs simultaneous decoding with the adaptive memory. Following prior works [5], to evaluate the recognition quality, we use the metrics of character error rate (CER) and word error rate (WER) on the GRID dataset, and phoneme error rate (PER) on the TCD-TIMIT dataset, since the output of this dataset is a phoneme sequence. The different types of error rate can be computed as follows:
$ErrorRate = \frac{S + D + I}{M}$, (10)
where $S$, $D$, $I$, and $M$ are the number of substitutions, deletions, insertions, and reference tokens (characters, words, or phonemes), respectively.
To compute the latency of simultaneous decoding, we consider the non computation-aware (NCA) latency, as proposed in [25]. Specifically, the NCA latency for $w_i$, $d_{NCA}(w_i)$, equals $n(w_i) \cdot n_f \cdot T_s$, where $n(w_i)$ is the number of segments read when $w_i$ is generated and $T_s$ (ms) is the frame sampling interval. The average NCA latency $AL_{NCA}$ is defined as:
$AL_{NCA} = \frac{1}{\tau(w)} \sum_{i=1}^{\tau(w)} \left( d_{NCA}(w_i) - r \cdot (i-1) \cdot T_s \right)$, (11)
where $\tau(w)$ denotes the index of the first generated token when the model has read the entire video, and $r = (n \cdot n_f)/u$ is the length ratio between the source and target sequences.

Table 2: The comparison of NCA latency and the corresponding recognition accuracy with different segment sizes $n_f$ on the TCD-TIMIT dataset. The evaluation is conducted with 1 Nvidia 2080Ti GPU.
Method | PER(%) | Latency(ms) | Speedup
DualLip [5] | 46.200 | 4580.0 | 1.00×
SimulLR ($n_f$ = 3) | 58.182 | 384.93 | 11.91×
SimulLR ($n_f$ = 5) | 56.029 | 502.83 | 9.10×
SimulLR ($n_f$ = 20) | 49.743 | 973.62 | 4.70×

Figure 3: The NCA latency of target sentences with different lengths for DualLip and SimulLR on the TCD-TIMIT dataset (x-axis: length of target sentences; y-axis: NCA latency $L_{NCA}$ in ms).

5.4 Main Results
Since prior methods are trained in a non-simultaneous setting, to verify the effectiveness of our proposed methods, we first build several simultaneous lip reading baselines as follows:
LR-RNN-CTC. Using a convolutional network and a uni-directional recurrent neural network as the visual encoder, we train the simultaneous model with the mentioned CTC loss. Note that an RNN is already a natural memory network to organize the history information.
LR-RNN-TD. Further considering the syntactic structure of generated sequences, we introduce a language model and train the simultaneous model with the transducer loss.
LR-TM-CTC. By replacing the RNN sequence encoder with the popular Transformer architecture, we train the model with the mentioned CTC loss.
LR-TM-TD. With the Transformer architecture, we introduce the language model and train the network with the transducer loss.
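For reference, the error rate of Eq. (10) and the average NCA latency of Eq. (11) reported in the following comparisons can be computed as in the sketch below. The Levenshtein routine returns the substitution, deletion, and insertion counts; the per-token NCA latencies, the length ratio r, and the sampling interval T_s are assumed to be supplied as defined above, and the function names are ours.

def edit_ops(ref, hyp):
    # Levenshtein alignment returning (S, D, I) so that Eq. (10) can be
    # evaluated on words, characters, or phonemes.
    m, n = len(ref), len(hyp)
    dp = [[(0, 0, 0, 0)] * (n + 1) for _ in range(m + 1)]   # (cost, S, D, I)
    for i in range(1, m + 1):
        dp[i][0] = (i, 0, i, 0)                             # deletions only
    for j in range(1, n + 1):
        dp[0][j] = (j, 0, 0, j)                             # insertions only
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                c, s, d, ins = dp[i - 1][j - 1]
                sub = (c + 1, s + 1, d, ins)
                c, s, d, ins = dp[i - 1][j]
                dele = (c + 1, s, d + 1, ins)
                c, s, d, ins = dp[i][j - 1]
                insert = (c + 1, s, d, ins + 1)
                dp[i][j] = min(sub, dele, insert)
    return dp[m][n][1:]

def error_rate(ref, hyp):
    s, d, i = edit_ops(ref, hyp)
    return (s + d + i) / max(len(ref), 1)                   # Eq. (10)

def average_nca_latency(d_nca, r, t_s):
    # Eq. (11): d_nca[i] is the NCA latency of the i-th generated token (ms),
    # averaged over the provided tokens, i.e., up to tau(w).
    tau = len(d_nca)
    return sum(d - r * i * t_s for i, d in enumerate(d_nca)) / max(tau, 1)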
We compare our methods with several mainstream state-of-the-art non-simultaneous models and the constructed baselines. The overall evaluation results on the two datasets are presented in Table 1. We can see that: (1) The proposed SimulLR outperforms all the simultaneous baselines by a large margin, indicating the effectiveness of our method for simultaneous lip reading. (2) SimulLR also achieves results comparable to the state-of-the-art non-simultaneous method DualLip [5], especially on the GRID dataset, demonstrating the potential of our method. (3) With the same visual encoder, the transducer-based models obtain better performance than the CTC-based models, verifying the effectiveness of modeling the syntactic structure.

Figure 4: The recognition accuracy (phoneme error rate, %) against the NCA latency $L_{NCA}$ (ms) with different segment sizes $n_f \in \{3, 5, 7, 10, 15, 20, \mathrm{inf}\}$ on the TCD-TIMIT dataset.

5.5 Latency Analysis
In this section, to further explore the time-efficiency of the proposed SimulLR method, we record the prediction latency of both simultaneous and non-simultaneous models to make a comparison. We first measure the inference NCA latency and the corresponding recognition accuracy of DualLip [5] and SimulLR with $n_f = 5$ and $n_f = 20$, which are listed in Table 2. As the results show, compared with the non-simultaneous method DualLip, SimulLR speeds up the prediction by 9.10× with $n_f = 5$, and by 4.70× with $n_f = 20$. Also, SimulLR ($n_f = 20$) even achieves competitive results (PER 49.743%) with less waiting time, indicating the great ability of the adaptive memory to incorporate history information. The speedup increases rapidly especially for longer sentences, as shown in Figure 3. During inference, the non-simultaneous models wait for the entire video to be processed, making the NCA latency increase with respect to the length of the target sequence, while the NCA latency of SimulLR remains nearly constant and small. Further, considering the computation-aware (CA) latency, i.e., the time elapsing from the processing of the corresponding input to the prediction of a token, SimulLR achieves a speedup of 1.6× on GPU and 13.3× on CPU compared with the attention-based TM-Seq2Seq [1], indicating the effectiveness of the memory in reducing the computation-aware cost.
To explore the performance of simultaneous decoding, we also measure the NCA latency and the phoneme error rate with different segment sizes $n_f$ on the TCD-TIMIT dataset, as shown in Figure 4. Note that for "$n_f$ = inf", we remove the memory and all the history segments are available. The recognition accuracy increases as the segment size increases, at the cost of higher NCA latency. Notice that SimulLR ($n_f = 20$) even obtains better performance than the model with all the history segments, which indicates that, compared with direct interaction with all the history segments, the proposed memory can better organize history information, discard obsolete segments, and extract useful context for prediction.

5.6 Ablation Analysis
In this section, to explore the effectiveness of the proposed techniques in SimulLR, we first conduct ablation experiments on the GRID and TCD-TIMIT datasets. The evaluation results are presented in Table 3.
Table 3: The ablation results on the GRID and TCD-TIMIT datasets. We add the proposed techniques and evaluate their effectiveness progressively.
Models | GRID WER(%) | GRID CER(%) | TCD-TIMIT PER(%)
Naive LR Transducer | 3.125 | 1.588 | 62.831
+CTC | 3.029 | 1.503 | 62.032
+CTC+TC3D | 2.978 | 1.339 | 60.397
+CTC+TC3D+WARM | 2.963 | 1.302 | 59.428
+CTC+TC3D+WARM+MEM (SimulLR) | 2.738 | 1.201 | 56.029

Table 4: The effect of different memory strategies on the GRID and TCD-TIMIT datasets.
Memory strategy | GRID WER(%) | GRID CER(%) | TCD-TIMIT PER(%)
FIFO Queue | 2.894 | 1.313 | 57.731
LFU | 2.881 | 1.292 | 57.384
Ours (LFU + Momentum) | 2.738 | 1.201 | 56.029

Naive LR Transducer (Base). We construct the base model with only the convolutional network and the Transformer architecture as the visual encoder and the frame-synchronized simultaneous transducer-based decoder.
Base+CTC. To stabilize the training and promote the performance, we first employ CTC pre-training for the base model, and the results demonstrate that CTC pre-training is helpful for the cross-modal alignment between visual frames and textual tokens.
Base+CTC+TC3D. To enhance the visual representations while maintaining the simultaneous manner, we replace the 2D convolutional network with the truncated C3D layers in the visual encoder, and the results show that the truncated C3D layers can effectively boost the feature representation ability of the visual encoder.
Base+CTC+TC3D+WARM. To further improve the performance, we apply the proposed model warm-up strategy where we train a deeper network step by step. As shown by the results, the warm-up technique can further facilitate the feature learning of the visual encoder and improve the performance.
Base+CTC+TC3D+WARM+MEM. With limited history and to reduce the computational cost, we further add the proposed attention-guided adaptive memory to organize the semantic information of history segments and enhance the visual representations. Table 3 shows that using the adaptive memory can boost the performance significantly, showing that the proposed memory can effectively organize history information and incorporate global context for visual representation enhancement.
Besides, we further study the effectiveness of the proposed adaptive memory from the perspective of the memory strategy, the memory size, and the way to summarize semantic information from new segments. As the results in Table 4 show, we first devise different memory strategies to organize history segments, including a first-in-first-out (FIFO) queue and the attention-guided least frequently used (LFU) algorithm. The memory with the FIFO queue achieves the worst performance, demonstrating that LFU can effectively extract useful history information, while the adaptive memory with momentum updating obtains the best performance, indicating that the attention-guided strategy with entropy can avoid storing redundant information and enable higher memory efficiency.

Table 5: The effect of memory size $k$ on the GRID and TCD-TIMIT datasets.
Memory size | GRID WER(%) | GRID CER(%) | TCD-TIMIT PER(%)
k = 5 | 2.881 | 1.289 | 56.896
k = 10 | 2.796 | 1.259 | 56.528
k = 20 | 2.738 | 1.201 | 56.029
k = inf | 2.760 | 1.236 | 56.173

Table 6: The effect of different ways of summarization on the GRID and TCD-TIMIT datasets.
Summarization | GRID WER(%) | GRID CER(%) | TCD-TIMIT PER(%)
conv | 2.872 | 1.286 | 57.461
max-pooling | 2.763 | 1.267 | 57.116
avg-pooling | 2.738 | 1.201 | 56.029

We then explore the effect of memory size on the recognition accuracy.
As shown in Table 5, increasing the memory size \ud835\udc58can firstly absorb more history information for better recognition, while there is an error rate increase for \u201c\ud835\udc58= inf\u201d where relatively unimportant contexts are introduced as noise. We also conduct different ways to summarize semantic information from new segments including convolution, max-pooling and avg-pooling, and the evaluation results are presented in Table 6. Compared with \u201cconv\u201d and \u201cmax-pooling\u201d, the \u201cavg-pooling\u201d can better summarize the semantic information of a video segment. 5.7 Qualitative Results As shown in Figure 5, by normalizing the distribution of target token over all frames, we visualize the monotonic alignment learned by SimulLR between target sequence and source video on TCD-TIMIT dataset. The brightness of the color represents the matching degree between tokens and frames. The approximate monotonic alignment in the figure indicates the effectiveness of our proposed methods to learn the cross-modal alignment under simultaneous setting and with limited history. 6 CONCLUSIONS In this paper, we study the task of simultaneous lip reading and devise SimulLR, a simultaneous lip Reading transducer with attentionguided adaptive memory. To address the challenging of monotonic alignments while considering the syntactic structure of the generated sentences, we build a transducer-based model with adaptive Figure 5: The visualization of monotonic alignment between target sequence and source video on TCD-TIMIT dataset (memory size \ud835\udc58= 10). The second row represents a longer video with more frames. The brightness of the color represents the degree of alignment between tokens and frames. memory and design several effective training strategies including CTC pre-training, model warm-up and curriculum learning to promote the training of the lip reading transducer. Also, to learn better spatio-temporal representations for simultaneous encoder, we construct a truncated 3D convolution and time-restricted selfattention layer to perform the frame-to-frame interaction within a video segment. Further, the history information is always limited due to the storage in real-time scenarios. To achieve a good trade-off, we devise a novel attention-guided adaptive memory to organize semantic information of history segments and enhance the visual representations with acceptable computation-aware latency. Experiments on GRID and TCD-TIMIT datasets shows that the SimulLR outperforms the baselines and has great time-efficiency, demonstrating the effectiveness of our methods for simultaneous lip reading. ACKNOWLEDGMENTS This work was supported in part by the National Key R&D Program of China under Grant No.2018AAA0100603, National Natural Science Foundation of China under Grant No.61836002, No.62072397 and Zhejiang Natural Science Foundation under Grant LR19F020006.", "introduction": "Lip reading, aiming to recognize spoken sentences according to the given video of lip movements without relying on the audio stream, has attracted great interest [1, 3, 8, 26, 30, 34, 39, 41] due to the application in many scenarios including dictating instructions in public areas or a noisy environment, and providing help for hard- of-hearing people. It remains a challenging task even for excellent lip readers [3]. 
Although prior works that explore lip reading have obtained salient achievements, they are all trained in a non-simultaneous manner where the predictions are generated requiring access to the full video. Therefore, simultaneous lip reading, where a video segment containing fix number of frames is processed while spoken sentence is generated concurrently, is a more difficult but necessary extension for real-time understanding (e.g. live video streaming). Due to the low latency of simultaneous decoding, simultaneous lip reading are able to deal with massive video data (e.g. long films) without \u201cwatching\u201d the entire video first. In this paper, we study the task of simultaneous lip reading that recognizes sentences based on partial input. However, it is very challenging to decode simultane- ously for vision-text cross-modal translation in following aspects: Firstly, for simultaneous decoding, the model is required to learn the monotonic alignments between video segments and target to- kens, and pick a suitable moment that achieves a good trade-off between latency and accuracy to predict the next token. Due to the arXiv:2108.13630v1 [cs.CV] 31 Aug 2021 I honor my mom watch write Predicted Text \u2026\u2026 Lip Movements Figure 1: The frame-synchronized simultaneous decoding of the proposed lip reading transducer. At each time step, an empty transfer (watch) is allowed to read next video seg- ment or a context-aware token can be generated (write). significant discrepancy of length of same token in different videos, it is difficult to estimate the duration of tokens and learn such mono- tonic alignments. Prior autoregressive methods [1, 8, 26, 41, 42] leverage the semantic information of entire videos and work in a word-synchronized mode without considering monotonic align- ments, making it non-simultaneous in nature. A naive method is to scale the CTC-based model [3, 5, 30, 34, 39] to simultaneous decoding by limiting each frame to see only its previous frames. However, the target sentences always show a strong correlation across time [24], but the CTC-based model generate different tokens conditionally independent of each other, ignoring the syntactic in- formation. In our paper, inspired by neural transducer [12, 19], we devise a lip reading transducer that generates tokens in a frame- synchronized mode where an empty transfer is allowed to read next video segment at each time step (See Figure 1), and also con- siders the syntactic structure of the generated sentences. With the reading of video segments, the tokens generate frame-by-frame and then are merged to the ultimate predictions. We also design several effective training strategies including CTC pre-training, model warm-up and curriculum learning to promote the training of lip reading transducer. Secondly, to learn better spatio-temporal representations for cross-modal decoding, prior non-simultaneous methods [24, 41] employ multiple 3D convolution and self-attention layers in the visual encoder, which cannot be transferred to our simultaneous model due to their expanding receptive field on the whole video. To obtain a better simultaneous encoder and reduce the gap be- tween our method and non-simultaneous methods, we construct a truncated 3D convolution for spatio-temporal representations learning and time-restricted self-attention layer to perform the frame-to-frame interaction among available video segments. Thirdly, in real scenarios, the storage is always limited by the extremely long input sequence (e.g. 
massive video data). Therefore, for simultaneous decoding, history segments may also be unavail- able, making it more difficult to predict a new token with limited visual context. To achieve a good storge-accuracy trade-off, inspired by memory networks [13, 14], we devise a novel attention-guided adaptive memory to organize semantic information of history seg- ments and enhance the visual representations using limited context. Also, given the memory, the computation of the commonly-used self-attention mechanism is no longer conducted over all the history segments, which reduces the computation-aware latency for simul- taneous decoding [25]. Specially, the attention-guided memory is constructed to absorb new segments by momentum update and discard obsolete features using the least frequently used (LFU) algo- rithm guided by attention scores. Based on the proposed adaptive memory, the simultaneous model incorporates both global context and adjacent semantic information with acceptable computation- aware latency. In summary, we study the task of simultaneous lip reading with limited history and devise a vision-text cross-modal transducer SimulLR and devise several effective training strategies to promote the performance. For the simultaneous encoder, we construct a truncated 3D convolution and time-restricted self-attention layer to learn better spatio-temporal representations for video segments. Further, considering the limited storage and computational cost, we further devise a novel attention-guided adaptive memory to or- ganize semantic information of history segments for simultaneous decoding with acceptable computation-aware latency. The experiments show that the SimulLR achieves the translation speedup 9.10\u00d7 compared with the state-of-the-art non-simultaneous methods, and also obtains competitive results, which indicates the effectiveness of our proposed methods." }, { "url": "http://arxiv.org/abs/1911.08199v3", "title": "Weakly-Supervised Video Moment Retrieval via Semantic Completion Network", "abstract": "Video moment retrieval is to search the moment that is most relevant to the\ngiven natural language query. Existing methods are mostly trained in a\nfully-supervised setting, which requires the full annotations of temporal\nboundary for each query. However, manually labeling the annotations is actually\ntime-consuming and expensive. In this paper, we propose a novel\nweakly-supervised moment retrieval framework requiring only coarse video-level\nannotations for training. Specifically, we devise a proposal generation module\nthat aggregates the context information to generate and score all candidate\nproposals in one single pass. We then devise an algorithm that considers both\nexploitation and exploration to select top-K proposals. Next, we build a\nsemantic completion module to measure the semantic similarity between the\nselected proposals and query, compute reward and provide feedbacks to the\nproposal generation module for scoring refinement. Experiments on the\nActivityCaptions and Charades-STA demonstrate the effectiveness of our proposed\nmethod.", "authors": "Zhijie Lin, Zhou Zhao, Zhu Zhang, Qi Wang, Huasheng Liu", "published": "2019-11-19", "updated": "2020-01-15", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.LG", "cs.MM" ], "main_content": "Otani et al. 2016) all propose to learn a joint visual-semantic space for cross-modal representations. In such space, the similarity of cross-modal representations reflects the closeness between their original inputs. 
In moment retrieval, however, we focus on retrieving a target moment in video based on the given query, rather than simply selecting a target image/video from pre-defined candidate sets. Temporal Action Detection: Temporal Action Detection aims at identifying the temporal boundary as well as the category for each action instance in untrimmed videos. The approaches of action detection can be also summarized into supervised settings and weakly-supervised settings. These methods in (Shou, Wang, and Chang 2016; Escorcia et al. 2016; Buch et al. 2017; Shou et al. 2017; Zhao et al. 2017) are trained in two-stage supervised learning manner, which first generate temporal action proposals through a proposal network, and then predict the action category for each proposal through a classification network. In the weakly-supervised settings, however, only the coarse video-level labels instead of the exact temporal boundary is available. The UntrimmedNet in (Wang et al. 2017) make use of the principle of multiple instance learning and the generated attention weights to select proposals that most probably contain action instances. The method presented in (Nguyen et al. 2018) combines temporal class activation maps and class agnostic attentions for localizing the boundary of action instances. Further than action detection that is limited to a pre-defined set of categories, moment retrieval according to natural language query is much more challenging but general. Video Moment Retrieval: Video Moment Retrieval is to address the target moment that is semantically aligned with the given natural language query. Prior works (Gao et al. 2017; Hendricks et al. 2017; 2018; Liu et al. 2018; Chen et al. 2018; Xu et al. 2019; Zhang et al. 2019b; 2019a; Wang, Huang, and Wang 2019) mainly focus on localizing the most relevant moment in a fully-supervised settings. Among them, methods proposed in (Gao et al. 2017; Hendricks et al. 2017; 2018) sample candidate moments by sliding windows with various length, and perform coarse fusion to estimate the correlation between the queries and moments in a multi-modal space. Further, the Temporal GroundNet (TGN) (Chen et al. 2018) proposes an interactor to exploit the evolving fine-grained frame-by-word interactions and simultaneously score a set of candidate moments in one single pass. The Cross-Modal Interaction Network (CMIN) (Zhang et al. 2019b) advises a multi-head self-attention mechanism to capture the long-range dependencies in videos and a syntactic GCN to obtain the finegrained queries representations. The Semantic Matching Reinforcement Learning (SM-RL) (Wang, Huang, and Wang 2019) proposes a recurrent neural network based reinforcement learning model and introduce mid-level semantic concepts to bridge the semantic gap between visual and semantic information. Though those methods achieve good performance, they still suffer from collecting a large amount of manually labelled temporal annotations. Some works (Bojanowski et al. 2015; Duan et al. 2018; Mithun, Paul, and Roy-Chowdhury The man takes his shirt and shoes off and dances on the beach and grass. 
2019) also study this task in a weakly-supervised setting. The method proposed in (Bojanowski et al. 2015) considers the task of aligning a video with a set of temporally ordered sentences, in which the temporal ordering can be seen as an additional constraint and supervision. The method proposed in (Duan et al. 2018) decomposes the problem of weakly-supervised dense event captioning in videos (WS-DEC) into a cycle of dual problems, caption generation and moment retrieval, explores the one-to-one correspondence between the temporal segment and the event caption, and has a complex training pipeline involving pre-training and alternating training. The Text-Guided Attention (TGA) method (Mithun, Paul, and Roy-Chowdhury 2019) proposes to learn a joint visual-semantic representation and utilizes the attention score as the alignment between video frames and the query.

Figure 2: The Framework of our Semantic Completion Network for Video Moment Retrieval. (a) The proposal generation module leverages the cross-modal fusion representations of video and query to score all the candidate proposals at each time step, and then selects the top-K proposals considering both exploitation and exploration. (b) The semantic completion module reconstructs the query in which the important words are masked according to the visual representations of the proposal, computes the rewards based on the reconstruction loss, and provides feedback to the proposal generation module for scoring refinement.

Approach
Problem Formulation
In this paper, we consider the task of video moment retrieval in a weakly-supervised setting. Given an untrimmed video $v = \{v_i\}_{i=1}^{n_v}$, where $n_v$ is the number of frames in the video and $v_i$ is the $i$-th feature vector, and a corresponding query $q = \{q_i\}_{i=1}^{n_q}$, where $n_q$ is the number of words in the query and $q_i$ is the $i$-th feature vector, the goal is to localize the most relevant moment $\hat{\tau} = (\hat{s}, \hat{e})$ during inference, where $\hat{s}$ and $\hat{e}$ are the indices of the start and end frames, respectively.
Proposal Generation Module
In this section, we introduce the proposal generation module. As mentioned above, the attention weights usually focus on the most discriminative but small regions and thus fail to cover the entire temporal extent of the target moment. As Figure 2(a) shows, this module instead scores the candidate proposals according to the cross-modal representations of video and query. Moreover, going further than the methods in (Gao et al. 2017; Hendricks et al. 2018) that handle different proposals separately in a sliding-window fashion, our method scores all the candidate moments in a single pass, which makes full use of the context information.
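Before the detailed formulation below, the following sketch illustrates this one-pass design: all multi-scale candidate proposals are enumerated once for every time step and scored together from the cross-modal representations. The feature dimensionality, the randomly initialized stand-in weights, and the helper names are illustrative assumptions; the ratio values are the ones reported later in the implementation details for Charades-STA.

import numpy as np

def candidate_proposals(n_v, ratios):
    # Enumerate the multi-scale candidates {(t - r_k * n_v, t)} for every
    # time step t; returns an array of (start, end) frame indices.
    props = []
    for t in range(1, n_v + 1):
        for r in ratios:
            start = max(0, int(round(t - r * n_v)))
            props.append((start, t))
    return np.array(props)

def score_proposals(c, w_s, b_s):
    # Score all candidates in one pass: c is the cross-modal representation
    # of shape (n_v, d); w_s, b_s map each time step to n_k sigmoid scores.
    logits = c @ w_s + b_s                      # (n_v, n_k)
    return 1.0 / (1.0 + np.exp(-logits))        # confidence scores in (0, 1)

# Toy usage with the Charades-STA ratios and random stand-in weights.
ratios = [0.167, 0.250, 0.333, 0.500]
props = candidate_proposals(200, ratios)        # 200 time steps, 4 scales
scores = score_proposals(np.random.randn(200, 256),
                         np.random.randn(256, len(ratios)),
                         np.zeros(len(ratios)))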
In detail, the feature vector qi of each word can be extracted using a pre-trained word2vec embedding. Then we develop a textual encoder Encq to obtain the textual representations for the query q. After that, we input the textual representations and the video features vi to the visual decoder Decv to obtain the \ufb01nal cross-modal representations c = {ci}nv i=1 of video and query, given by c = Decv(v, Encq(q)), (1) To generate con\ufb01dence score in a single pass, we \ufb01rst prede\ufb01ne a set of candidate proposals at each time step, denoted by Ct = {(t \u2212rk \u2217nv, t)}nk k=1, where t \u2212rk \u2217nv, t are the start and end boundaries of the k-th candidate proposal at the t-th time step, rk \u2208(0, 1) is the k-th ratio and nk is the number of candidate proposals. Note that rk is a \ufb01x ratio for each time step. Then based on the cross-modal representations c, we can simultaneously give the con\ufb01dence scores for these proposals at all time steps by a fully connected layer with sigmoid nonlinearity, denoted by SCt = \u03c3(Wsct + bs), (2) where SCt \u2208Rnk represents the vector of con\ufb01dence scores for the nk candidate proposals at the t-th time step. Given the candidate proposals {Ct}nv t=1, we apply the selection algorithm that considers both exploitation and exploration to select the top-K proposals G = {Gk}K k=1 and give the corresponding con\ufb01dence scores S = {Sk}K k=1, where Gk = (sk, ek) represents the k-th proposal in top-K proposals and Sk is its con\ufb01dence score. Concretely, we rank the proposals according to their corresponding con\ufb01dence scores. At each step, we choose a proposal randomly with a possibility of p or choose the proposal with the highest score with a possibility of 1 \u2212p, and use Non Maximum Suppression (NMS) to remove those proposals that have high overlap with the chosen one. We de\ufb01ne the sampling possibility p with a decay function dependent on the times of parameter updates nupdate, given by p = \u03bb1 \u2217exp(\u2212nupdate/\u03bb2), (3) where \u03bb1, \u03bb2 are the hyper-parameters to control the decay rate. As the training proceeds, the possibility of choosing next proposal randomly decreases gradually. Semantic Completion Module In this section, we introduce the semantic completion module to measure the semantic similarity between proposals and query, compute rewards and provide feedbacks to previous module for scoring re\ufb01nement. As shown in Figure 2(b), the important words (e.g. noun, verb) are masked and predicted according to the given visual context. The most semantically matching proposal can provide enough useful information to predict the key words and also contains less noise. First, we extract video features for the k-th proposal Gk = (sk, ek), denoted by \u02c6 vk = {vi}ek i=sk, and obtain the visual representations through the visual encoder Encv. We denote the original words sequence as w = {wi}nq i=1, where wi is the i-th word of the query. Then given the words sequence w and a set of masked position X, we denote \u02c6 w as a modi\ufb01ed version of w where those words wi, i \u2208X are replaced by a special symbol. We can extract word features for \u02c6 w, denoted as \u02c6 q = {\u02c6 qi}nq i=1. 
Next, through a bi-directional textual decoder $Dec_q$, we can obtain the final cross-modal semantic representations $f^k = \{f^k_i\}_{i=1}^{n_q}$ for the proposal $G_k$, given by
$f^k = Dec_q(\hat{q}, Enc_v(\hat{v}^k))$. (4)
To predict the masked words, we can compute the energy distribution $e^k = \{e^k_i\}_{i=1}^{n_q}$ over the vocabulary by a fully connected layer, denoted by
$e^k_i = W_v f^k_i + b_v$, (5)
where $e^k_i \in \mathbb{R}^{n_w}$ is the energy distribution at the $i$-th time step and $n_w$ is the number of words in the vocabulary.

Training of Semantic Completion Network
In this section, we describe the loss functions we optimize to train the Semantic Completion Network.
Reconstruction Loss. With the energy distribution $e^k$ for the proposal $G_k$, we first adopt a reconstruction loss to train the semantic completion module and make it able to extract key information from the visual context to predict the masked words. Formally, we compute the negative log-likelihood of each masked word and add them up, denoted by
$\mathcal{L}^k_{rec} = -\sum_{i=1}^{n_q-1} \log p(w_{i+1} | \hat{w}_{1:i}, \hat{v}^k)$ (6)
$= -\sum_{i=1}^{n_q-1} \log p(w_{i+1} | e^k_i)$, (7)
where $\mathcal{L}^k_{rec}$ represents the reconstruction loss based on the visual context of the proposal $G_k$.
Rank Loss. As Figure 2 shows, in order to correct the confidence scores given by the proposal generation module, we further apply a rank loss to train this module. Note that we correct the confidence scores based on rewards rather than one-hot labels. Specifically, we define the reward $R_k$ for the proposal $G_k$ with a reward function that encourages proposals with lower reconstruction loss: the reward is reduced from one to zero in steps of $1/(K-1)$. Then the strategy of policy gradient is used to correct the scores. Note that the confidence scores are normalized by a softmax layer, which is an extremely important operation to highlight the semantically matching proposals and weaken the mismatched ones. The rank loss $\mathcal{L}^k_{ran}$ for the proposal $G_k$ is computed by
$\mathcal{L}^k_{ran} = -R_k \log \frac{\exp(S_k)}{\sum_{i=1}^{K} \exp(S_i)}$. (8)
Multi-Task Loss. With the reconstruction loss and the rank loss for each proposal, we average the losses over all proposals and compute a multi-task loss to train the semantic completion network in an end-to-end manner, denoted by
$\mathcal{L} = \frac{1}{K} \sum_{k=1}^{K} (\mathcal{L}^k_{rec} + \beta \mathcal{L}^k_{ran})$, (9)
where $\beta$ is a hyper-parameter to control the balance of the two losses.

Network Design
In this section, we introduce the details of the semantic completion network, including the components of the visual/textual encoder and the visual/textual decoder.
Encoder and Decoder. It has been indicated in (Tang et al. 2018) that the Transformer (Vaswani et al. 2017) is a strong feature extractor. In this paper, we build our visual/textual encoder and visual/textual decoder based on the bi-directional Transformer, as Figure 2(c)(d) shows. The encoder/decoder is composed of a stack of layers, each containing a multi-head attention sub-layer and a fully connected feed-forward network.
Parameter Sharing. We share the parameters between the visual/textual encoder and the visual/textual decoder. As Figure 2(c)(d) shows, an encoder can also be regarded as a decoder without computing attention over another input of a different modality. Parameter sharing greatly reduces the number of parameters and saves memory. This is also a kind of model-level dual learning (Xia et al. 2018) that shares parameters across tasks, which promotes knowledge sharing.
Also, we mask one-third of words in a sentence and replace with a special token for semantic completion. Note that noun and verb are more likely to be masked. Moreover, for TransformerEncoder as well as TransformerDecoder, the dimension of hidden state is set to 256 and the number of layers is set to 3. During training, we adopt the Adam optimizer with learning rate 0.0002 to minimize the multi-task loss. The learning rate increases linearly to the maximum with a warm-up step of 400 and then decreases itself based on the number of updates (Vaswani et al. 2017). Compared Methods Random. We simply select a candidate moment randomly. VSA-RNN and VSA-STV. (Gao et al. 2017) This two methods both simply project the visual feature of all candidate proposals and the textual feature of the query into a common space, and computes the con\ufb01dence scores based on cosine similarity. CTRL. (Gao et al. 2017) The CTRL method introduces a cross-modal temporal localizer to estimate the alignment scores and uses clip location regression to further adjust the the boundary. QSPN. (Xu et al. 2019) The QSPN method devises a multilevel approach for integrating vision and language features using attention mechanisms, and also leverages video captioning as an auxiliary task. WS-DEC. (Duan et al. 2018) The WS-DEC method decomposes the problem of weakly-supervised dense event captioning in videos into a cycle of dual problems: caption generation and moment retrieval, and explores the one-to-one correspondence between the temporal segment and event caption. TGA. (Mithun, Paul, and Roy-Chowdhury 2019) The TGA method proposes a weakly-supervised joint visual-semantic embedding framework for moment retrieval, and utilizes the latent alignment for localization during inference. Quantitative Results and Analysis The overall performance results of our SCN and baselines on ActivityCaptions and Charades-STA datasets are presented in Table 1 and Table 2 respectively. We consider the evaluation metric \u201cR@n, IoU=m\u201d, where n \u2208{1, 5}, m \u2208 {0.1, 0.3, 0.5} for ActivityCaptions, and n \u2208{1, 5}, m \u2208 {0.3, 0.5, 0.7} for Charades-STA. By observing the evaluation results, we can discover some facts: \u2022 Compared with Random method, the overall performance results of SCN have a huge improvements on both two Table 1: Performance Evaluation Results on the ActivityCaptions Dataset (n \u2208{1, 5} and m \u2208{0.1, 0.3, 0.5}). Method R@1 R@5 IoU=0.1 IoU=0.3 IoU=0.5 IoU=0.1 IoU=0.3 IoU=0.5 Random 38.23 18.64 7.63 75.74 52.78 29.49 VSA-RNN 39.28 23.43 70.84 55.52 VSA-STV 41.71 24.01 71.05 56.62 CTRL 47.43 29.01 75.32 59.17 QSPN 52.12 33.26 77.72 62.39 WS-DEC 62.71 41.98 23.34 SCN 71.48 47.23 29.22 90.88 71.45 55.69 Table 2: Performance Evaluation Results on the CharadesSTA Dataset (n \u2208{1, 5} and m \u2208{0.3, 0.5, 0.7}). Method R@1 R@5 IoU=0.3 IoU=0.5 IoU=0.7 IoU=0.3 IoU=0.5 IoU=0.7 Random 20.12 8.61 3.39 68.42 37.57 14.98 VSA-RNN 10.50 4.32 48.43 20.21 VSA-STV 16.91 5.81 53.89 23.58 CTRL 23.63 8.89 58.92 29.52 QSPN 54.70 35.60 15.80 95.60 79.40 45.40 TGA 32.14 19.94 8.84 86.58 65.52 33.51 SCN 42.96 23.58 9.97 95.56 71.80 38.87 datasets, which demonstrates that optimizing the multitask loss instead of explicitly optimizing the localization loss can reach the goal of predicting the target moment and also indicates the feasibility of our SCN method. 
\u2022 As the results show, the proposed SCN method outperforms the supervised visual-embedding approaches VSARNN and VSA-STV signi\ufb01cantly, and obtains results comparable to the other fully-supervised methods on two datasets, indicating that even without the full annotations of temporal boundary, our SCN method can still effectively exploit the alignment relationship between video and query and \ufb01nd the most semantically relevant moment. \u2022 The coarse methods VSA-RNN and VSA-STV achieve the worst performance on two datasets, even compared with the weakly-supervised SCN method, demonstrating the key role of visual and textual modeling in moment retrieval and indicating the limitation of learning a common visual-semantic space in high-quality retrieval. \u2022 Also, compared with the weakly-supervised methods WS-DEC and TGA, our method achieves tremendous improvements on both ActivityCaptions and Charades-STA datasets. These results verify the effectiveness of the proposal generation module, the semantic completion module, the algorithm of proposals selection and the multitask loss. Ablation Study To prove the validity of different parts of our method, we simplify the algorithm to generate different ablation models as follows: \u2022 SCN(w/o. rand). During proposals selection, we assign the sample possibility p to zero, which means we select next proposal completely based on the con\ufb01dence scores without random selection at each step. 0 5 10 15 Epoch 23 24 25 26 27 28 29 30 31 32 33 mIoU full w/o. rand w/o. reward (a) ActivityCaptions 0 5 10 15 Epoch 20 21 22 23 24 25 26 27 28 mIoU full w/o. rand w/o. reward (b) Charades-STA Figure 3: Training Process of Different Models on ActivityCaptions and Charades-STA Datasets 0.1 0.3 0.5 R@1 IoU 0 10 20 30 40 50 60 70 80 Recall w/o. rand w/o. reward w/o. share w/o. mask full (a) ActivityCaptions 0.3 0.5 0.7 R@1 IoU 0 5 10 15 20 25 30 35 40 45 Recall w.o. rand w/o. reward w/o. share w/o. mask full (b) Charades-STA Figure 4: Evaluation Results of Different Models on ActivityCaptions and Charades-STA Datasets \u2022 SCN(w/o. reward). With feedbacks given by the semantic completion module, we modify the rank loss by using one-hot label instead of computing rewards for scoring re\ufb01nement. Concretely, we simply assign a reward of one to the best proposal, and zero to the other ones. The rank loss is equivalent to the cross entropy loss. \u2022 SCN(w/o. mask). To validate the effectiveness of the semantic completion module and the reconstruction loss, we replace this module with a ordinary captioning generator (Duan et al. 2018) without masking words. \u2022 SCN(w/o. share). Instead of parameter sharing between the proposal generation module and the semantic completion module, we use two separate sets of parameters for this two modules. The training process of different models on ActivityCaptions and Charades-STA is presented in Figure 3. By analyzing the results, we can \ufb01nd some interesting points: \u2022 The simpli\ufb01ed models SCN(w/o. rand) and SCN(w/o. reward) still achieve results comparable to the fullysupervised methods and outperform the existing weaklysupervised methods, which further demonstrates the effectiveness of our framework including proposal generation and selection, semantic completion for semantic similarity estimation and scoring re\ufb01nement. \u2022 The SCN(full) achieves better results than the SCN(w/o. rand) and the evaluation results of SCN(w/o. 
rand) grow more gently as the model converges, which proves the ability of the random selection to find potentially good proposals during proposal selection. When the model has not converged, selecting proposals randomly provides opportunities for those potentially good proposals and speeds up training.
• The SCN(full) achieves the best results faster than SCN(w/o. reward), which demonstrates the effectiveness of employing rewards as feedback to train the proposal generation module. In the early training stage, the semantic completion module cannot provide accurate feedback, yet the one-hot label forces the previous module to accept only one proposal and reject other proposals that are actually reasonable.
• As shown in Figure 4, the SCN(full) achieves better results than the SCN(w/o. mask), indicating the effectiveness of the semantic completion module and the masking operation. By masking the important words of the query, we force the decoder to absorb the cross-modal visual information on the decoder side.
• The SCN(full) also performs slightly better than the SCN(w/o. share), demonstrating the effectiveness of parameter sharing in promoting knowledge sharing. Also, the number of parameters is greatly reduced by parameter sharing.

Figure 5: Qualitative examples on the ActivityCaptions dataset.
Figure 6: Qualitative examples on the Charades-STA dataset.

Qualitative Results
To qualitatively validate the performance of our SCN method, several examples of video moment retrieval from ActivityCaptions and Charades-STA are provided in Figure 5 and Figure 6, respectively. Each example provides the ground-truth temporal boundaries and the first two proposals with the highest confidence scores given by the proposal generation module. The bold words in the sentence are considered the important words associated with the video context and are masked for semantic completion. The corresponding reconstruction loss $\mathcal{L}_{rec}$ is also computed and presented in each example. It can be observed that both of the first two proposals with the highest confidence scores cover the most discriminative video contents relevant to the query, which qualitatively verifies that the proposal generation module can locate those semantically important proposals and that the rank loss is helpful for scoring refinement during training. Additionally, the proposal with higher IoU has a lower reconstruction loss, also indicating that the proposal that is more semantically consistent with the query can be recognized by the semantic completion module.
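For completeness, the "R@n, IoU=m" metric used in the quantitative tables above can be computed as in this small sketch; the function name and the input layout are our own convention.

def recall_at_n(predictions, ground_truths, n, m):
    # Fraction of queries for which at least one of the top-n predicted
    # moments has temporal IoU > m with the ground-truth moment.
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0

    hits = sum(
        any(iou(p, gt) > m for p in preds[:n])
        for preds, gt in zip(predictions, ground_truths)
    )
    return hits / len(ground_truths)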
Therefore, due to the effectiveness of two submodules and the training algorithm, our method is successful in localizing the moment that has high IoU with the target moment. Conclusion In this paper, we study the task of video moment retrieval from the perspective of weak-supervised learning without manually-labelled temporal boundaries of start time and end time, which makes this task more realistic but more challenging. We propose a novel semantic completion network (SCN) including the proposal generation module to score all candidate proposals in a single pass, an ef\ufb01cient algorithm for proposals selection considering both exploitation and exploration, the semantic completion module for semantic similarity estimation and a multi-task loss for training. The experiments on the ActivityCaptions and Charades-STA datasets also demonstrate the effectiveness of our method to exploit the alignment relationship between video and query, and the ef\ufb01ciency of the proposal selection algorithm and the rank loss. Acknowledgements This work is supported by the National Natural Science Foundation of China under Grant No.61602405, No.U1611461, No.61751209 and No.61836002, China Knowledge Centre for Engineering Sciences and Technology, and Alibaba Innovative Research.", "introduction": "Video moment retrieval, a key topic in information retrieval and computer vision, has attracted more and more interests in recent years (Gao et al. 2017; Hendricks et al. 2017). As two examples in Figure 1 show, according to a given natural language query, moment retrieval aims to locate the tempo- ral boundary of the most related moment in the video, which can help us quickly \ufb01lter out useless contents in the video. More accurate moment retrieval requires suf\ufb01cient under- standing of both the video and the query, which makes it a challenging task. Although recent works (Chen et al. 2018; Zhang et al. 2019b; 2019a) has achieved good results, they are mostly trained in a fully-supervised setting, which re- quires the full annotations of temporal boundary for each video. However, manually labeling the ground truth tem- poral boundaries is time-consuming and expensive, requir- ing a large amount of human labor. Moreover, considering an untrimmed video contains multiple consecutive temporal activities, it can be dif\ufb01cult to mark the boundaries accu- rately, which produces ambiguity and noise in training data. \u2217Corresponding author. Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Query: The man takes his shirt and shoes off and dances on the beach and grass. 123.91s 183.69s GT: Query: A man kicks a piece of wood and breaks it in half. 140.33s 152.31s GT: Figure 1: Examples of video moment retrieval: search the temporal boundary of the most relevant moment in video according to the given natural language query. Relatively, it is much easier to obtain coarse descriptions of a video without marking the temporal boundaries, such as the captions of videos in YouTube. This motivates us to develop a weakly-supervised method for moment retrieval that needs only coarse video-level annotations for training. Existing weakly-supervised method in (Mithun, Paul, and Roy-Chowdhury 2019) proposes to learn a joint visual-text embedding, and utilizes the latent alignment produced by in- termediate Text-Guided Attention (TGA) to localize the tar- get moment. 
However, the latent attention weights without extra supervision usually focus on the most discriminative but small regions (Singh and Lee 2017) instead of covering complete regions. To deal with these issues, in this paper, we devise a novel weakly-supervised Semantic Completion Network (SCN) including proposal generation and selection, semantic completion for semantic similarity estimation and scoring re\ufb01nement. Firstly, rather than localizing the most relevant moment relying on the ambiguous attention weights, we extract the semantically important proposals through a proposal gen- eration module. Further than method in (Gao et al. 2017) that treats each candidate proposal separately, we leverage the cross-modal fusion representations of video and query to score all the candidate proposals sampled at different scales in a single pass, which makes full use of context information for scoring other proposals. arXiv:1911.08199v3 [cs.CV] 15 Jan 2020 With a large set of densely sampled proposals, we then devise an algorithm that considers both exploitation and ex- ploration to select top-K proposals. Concretely, we \ufb01rst rank all candidate proposals based on their corresponding con\ufb01- dence scores. Further than just selecting the proposals with high con\ufb01dence score based on Non Maximum Suppres- sion (NMS), to encourage full exploration, we select next proposal randomly with a decay possibility, which is help- ful for \ufb01nding potentially good proposals and giving more accurate con\ufb01dence scores for proposals. At the beginning of training, we tend to select next proposal randomly for ex- ploration. As the model converges gradually, proposals with high con\ufb01dence score are chosen more often for exploita- tion. To explicitly model the scoring of proposal generation module rather than rely on attention weights without extra supervision (Mithun, Paul, and Roy-Chowdhury 2019), we are supposed to further measure the semantic similarity be- tween the selected proposals and query for scoring re\ufb01ne- ment. Inspired by the success of recent works about masked language model (Devlin et al. 2019; Song et al. 2019; Wang, Li, and Smola 2019), we design a novel semantic completion module that predicts the important words (e.g. noun, verb) that are masked according to the given visual context. In detail, by masking the important words in the de- coder side, SCN forces the decoder rely on the visual context to reconstruct the query and the most semantically matching proposal can provide enough information to predict the key words. Then with the evaluation results given by semantic completion module, we compute reward for each proposal based on the reconstruction loss and formulate a rank loss to encourage the proposal generation module to give higher con\ufb01dence score for those proposals with greater rewards. In total, the main contributions of our work are listed as follows: \u2022 We propose a novel weakly-supervised moment re- trieval framework requiring only coarse annotations for training, and experiments on two datasets: ActivityCap- tions (Caba Heilbron et al. 2015) and Charades-STA (Gao et al. 2017) demonstrate the effectiveness of our method. \u2022 We build a proposal generation module to score all candi- date proposals in a single pass and formulate a rank loss for scoring re\ufb01nement. \u2022 We devise an algorithm for top-K proposals selection that encourages both exploitation and exploration. 
\u2022 We design a novel semantic completion module that pre- dicts the important words that are masked according to the given visual context for semantic similarity estimation." } ], "Jinglin Liu": [ { "url": "http://arxiv.org/abs/2212.07000v3", "title": "DopplerBAS: Binaural Audio Synthesis Addressing Doppler Effect", "abstract": "Recently, binaural audio synthesis (BAS) has emerged as a promising research\nfield for its applications in augmented and virtual realities. Binaural audio\nhelps users orient themselves and establish immersion by providing the brain\nwith interaural time differences reflecting spatial information. However,\nexisting BAS methods are limited in terms of phase estimation, which is crucial\nfor spatial hearing. In this paper, we propose the \\textbf{DopplerBAS} method\nto explicitly address the Doppler effect of the moving sound source.\nSpecifically, we calculate the radial relative velocity of the moving speaker\nin spherical coordinates, which further guides the synthesis of binaural audio.\nThis simple method introduces no additional hyper-parameters and does not\nmodify the loss functions, and is plug-and-play: it scales well to different\ntypes of backbones. DopperBAS distinctly improves the representative WarpNet\nand BinauralGrad backbones in the phase error metric and reaches a new state of\nthe art (SOTA): 0.780 (versus the current SOTA 0.807). Experiments and ablation\nstudies demonstrate the effectiveness of our method.", "authors": "Jinglin Liu, Zhenhui Ye, Qian Chen, Siqi Zheng, Wen Wang, Qinglin Zhang, Zhou Zhao", "published": "2022-12-14", "updated": "2023-06-01", "primary_cat": "eess.AS", "cats": [ "eess.AS" ], "main_content": "In this work, we focus on the most basic BAS scenario where only the monaural audio, the series of positions and head orientations are provided (Richard et al., 2022; Leng et al., 2022), rather than other scenarios where extra modalities (Xu et al., 2021) are present. Note that scenarios with extra modalities present are different tasks. Also, as demonstrated in this paper, our proposed DopplerBAS is plug-and-play and can be easily integrated into other more complex scenarios. In this section, we will introduce the Doppler Effect as the preliminary knowledge, and then introduce the proposed method DopplerBAS. We will describe how to calculate and decompose the velocity vector, and how to apply this vector to two different backbones. 2.1 Doppler Effect The Doppler effect (Gill, 1965) is the change in frequency of a wave to an observer, when the wave source is moving relative to it. This effect is originally used in radar systems to reveal the characteristics of interest for the target moving objects (Chen et al., 2006). It can be formulated as: f = \ufffd c c \u00b1 c \u00b1 vr \ufffd f0, (1) where c, vr, f0 and f are the propagation speed of waves, the radial relative velocity of the moving sound source, the original frequency of waves and the received frequency of waves, respectively. \ud835\udc63! \ud835\udc63\" \ud835\udc5f \ud835\udc5f \ud835\udc52 \ud835\udc63#$ \ud835\udc65 \ud835\udc66 right ear Figure 1: We illustrate the top view where the height dimension is omitted for simplicity. The sound source is moving in the x-y plane with the velocity vxy. This velocity is decomposed into the radial velocity vr relative to one ear (e.g., the right ear). 2.2 DopplerBAS We do not directly apply Eq. 
(1) in the frequency domain of audio, because some previous works (Lee and Lee, 2022) show that modeling the binaural audio in the frequency domain degrades the accuracy although it could benefit the generalization ability. Different from modeling the Doppler effect in the frequency domain, we calculate the velocity of interest and use it as a condition to guide the neural network to synthesize binaural audio consistent with the moving event. In the receiver-centric Cartesian coordinates, we define \vec{p}_s and \vec{p}_e as the 3D position of the moving sound source s and one ear of the receiver e respectively (e.g., the right ear, as shown in Figure 1). The position vector \vec{p} = (p_x, p_y, p_z) of s relative to e is: \vec{p} = (p_x, p_y, p_z) = \vec{p}_s - \vec{p}_e. Then s\u2019s velocity (this velocity is the same in all the Cartesian coordinate systems relatively stationary to the receiver) can be calculated as: \vec{v} = (v_x, v_y, v_z) = (dp_x/dt, dp_y/dt, dp_z/dt). Next, we build the spherical coordinate system using the ear as the origin, and decompose \vec{v} into the radial relative velocity \vec{v}_r by: \vec{v}_r = \frac{\vec{p} \cdot \vec{v}}{\|\vec{p}\|} \hat{r}, (2) where \hat{r} \in R^1 is the radial unit vector. Finally, we add \vec{v}_r as the additional condition to the network: the original conditions in monaural-to-binaural speech synthesis are Co \u2208 R^7 = (x, y, z, qx, qy, qz, qw), of which the first 3 represent the positions and the last 4 represent the head orientations. We define the new condition C \u2208 R^9 = (x, y, z, qx, qy, qz, qw, vr-left, vr-right), where vr-left and vr-right represent the radial velocity of the source relative to the left and right ear respectively, which are derived from Eq. (2). We then apply C to WarpNet and BinauralGrad backbones, as follows.
Table 1: The comparison regarding binaural audio synthesis quality. For WarpNet\u2217 and BinauralGrad\u2217, we reproduced the results using their official codes (Section 3.1).
Model | Wave L2 (\u00d710^-3) \u2193 | Amplitude L2 \u2193 | Phase L2 \u2193 | PESQ \u2191 | MRSTFT \u2193
DSP (Leng et al., 2022) | 1.543 | 0.097 | 1.596 | 1.610 | 2.750
WaveNet (Leng et al., 2022) | 0.179 | 0.037 | 0.968 | 2.305 | 1.915
NFS (Lee and Lee, 2022) | 0.172 | 0.035 | 0.999 | 1.656 | 1.241
WarpNet\u2217 (Richard et al., 2021) | 0.164 | 0.040 | 0.805 | 1.935 | 2.051
WarpNet\u2217 + DopplerBAS | 0.154 | 0.036 | 0.780 | 2.161 | 2.039
BinauralGrad\u2217 (Leng et al., 2022) | 0.133 | 0.031 | 0.889 | 2.659 | 1.207
BinauralGrad\u2217 + DopplerBAS | 0.131 | 0.030 | 0.869 | 2.699 | 1.202
2.2.1 WarpNet WarpNet consists of two blocks: 1) The Neural Time Warping block to learn a warp from the source position to the listener\u2019s left ear and right ear while respecting physical properties (Richard et al., 2021). This block is composed of a geometric warp and a parameterized neural warp. 2) The Temporal ConvNet block to model subtle effects such as room reverberations and output the final binaural audio. This block is composed of a stack of hyper-convolution layers. We replace the original Co with C for the input of the parameterized neural warp and for the condition of the hyper-convolution layers. 2.2.2 BinauralGrad BinauralGrad consists of two stages: 1) The \u201cCommon Stage\u201d generates the average of the binaural audio. The conditions for this stage include the monaural audio, the average of the binaural audio produced by the geometric warp in WarpNet (Richard et al., 2021), and Co. 2) The \u201cSpecific Stage\u201d generates the final binaural audio.
The conditions for this stage include the binaural audio produced by the geometric warp, the output of the \u201cCommon Stage\u201d, and Co. BinauralGrad adopts diffusion model for both stages, which is based on non-causal WaveNet blocks (Oord et al., 2016) with a conditioner block composed of a series of 1D-convolutional layers. We replace Co with C as the input of the conditioner block for both stages. 3 Experiments In this section, we first introduce the commonly used binaural dataset, and then introduce the training details for WarpNet-based and BinauralGradbased models. After that, we describe the evaluation metrics that we use to evaluate baselines and our methods. Finally, we provide the main results with analytical experiments on BAS. 3.1 Setup Dataset We evaluate our methods on the standard binaural dataset released by Richard et al. (2021). It contains 2 hours of paired monaural and binaural audio at 48kHz from eight different speakers. Speakers were asked to walk around a listener equipped with binaural microphones. An OptiTrack system track the positions and orientations of the speaker and listener at 120Hz, which are aligned with the audio. We follow the original train-validation-test splits as Richard et al. (2021) and Leng et al. (2022) for a fair comparison. Training Details We apply DopplerBAS on two open-source BAS systems WarpNet and BinauralGrad. We train 1) WarpNet and WarNet+DopplerBAS on 2 NVIDIA V100 GPUs with batch size 32 for 300K steps, and 2) BinauralGrad and BinauralGrad+DopplerBAS on 8 NVIDIA A100 GPUs with batch size 48 for 300K steps 3. Evaluation Metrics Following the previous works (Leng et al., 2022; Lee and Lee, 2022), we adopt 5 metrics to evaluate baselines and our methods: 1) Wave L2: the mean squared error between waveforms; 2) Amplitude L2: the mean squared errors between the synthesized speech and the ground truth in amplitude; 3) Phase L2: the mean squared errors between the synthesized speech and the ground truth in phase; 4) PESQ: the perceptual evaluation of speech quality; 5) MRSTFT: the multi-resolution spectral loss. 3.2 Main Results and Analysis Main Results We compare the following systems: 1) DSP, which utilizes the room impulse response (Lin and Lee, 2006) to model the room reverberance and the head-related transfer functions (Cheng and Wakefield, 2001) to model the acoustic influence of the human head; 2) WaveNet (Richard et al., 2021; Leng et al., 2022), which utilizes the WaveNet (Oord et al., 2016) model to generate binaural speech; 3) NFS, which proposes to model the binaural audio in the Fourier space; 4) WarpNet (Richard et al., 2021), which proposes a combination of geometry warp and neural warp to produce coarse binaural audio from the monaural audio and a stack of hyper-convolution layers to refine coarse binaural audio; 5) WarpNet + DopplerBAS, which applies DopplerBAS to WarpNet; 6) BinauralGrad (Leng et al., 2022), which proposes to use diffusion model to improve the audio naturalness; 7) BinauralGrad + DopplerBAS, which applies DopplerBAS to BinauralGrad. The results are shown in Table 1. \u201c+ DopplerBAS\u201d could improve both WarpNet and BinauralGrad in all the metrics, especially in the Phase L2 metric. WarpNet + DopplerBAS performs best in the Phase L2 metric and reaches a new state of the 3Following the recommended training steps in their official repository. No. Model W. L2 Amp. 
L2 Phase L2 1 WarpNet 0.164 0.040 0.805 2 +Spherical \u20d7 v\u2020 0.154 0.036 0.780 3 +Cartesian \u20d7 v 0.164 0.038 0.790 4 +Zeros 0.159 0.038 0.806 5 +Time series 0.163 0.039 0.822 Table 2: Analysis Experiments. \u201cW. L2\u201d means Wave L2 \u00b7103; \u201cAmp. L2\u201d means Amplitude L2; \u2020 means our method: DopplerBAS. Best scores over the corresponding baseline are marked in bold. art 0.780. BinauralGrad + DopplerBAS obtains the best Wave L2, Amplitude L2, PESQ and MRSTFT score among all the systems. These results show the effectiveness of DopplerBAS. Analysis We conduct analytical experiments for the following four velocity conditions. \u201cSpherical \u20d7 v \u201d: the velocity conditions introduced in Section 2.2 are calculated in the spherical coordinate system; \u201cCartesian \u20d7 v \u201d: the velocity conditions are calculated in the Cartesian coordinate system; \u201cZeros\u201d: the provided conditions are two sequences of zeros; \u201cTime series\u201d: the provided conditions are two sequences of time. The results are shown in Table 2, where we place WarpNet in the first row as the reference. We discover that: 1) Radial relative velocity is the practical velocity component, which obeys the theory of the Doppler effect (row 2 vs. row 1); 2) The velocity condition is beneficial to binaural audio synthesis, even for the absolute velocity in the Cartesian coordinates (row 3 vs. row 1); 3) Just increasing the channel number of the condition Co (Section 2.2) by increasing the parameters in neural networks without providing meaningful information could not change the results (row 4 vs. row 1); 4) The neural networks do not explicitly learn the derivative of position to time (row 5 vs. row 1). These points verify the rationality of our proposed method. 4 Conclusion In this work, we proposed DopplerBAS to address the Doppler effect of the moving sound source in binaural audio synthesis, which is not explicitly considered in previous neural BAS methods. We calculate the radial relative velocity of the moving source in the spherical coordinate system as the additional conditions for BAS. Experimental results show that DopplerBAS scales well to different types of backbones and reaches a new SOTA. Analyses further verify rationality of DopplerBAS. Limitations The major limitation is that we test our method only on a binaural speech dataset, in which there is a person moving slowly while speaking. Because this person moves slowly, the Doppler effect is not so obvious. We will try to find or collect a sound dataset of a source moving at high speed, such as a running man, flying objects, or vehicles, and further, analyze the experimental phenomena at different speeds of the moving source. Ethics Statement The immersive experience brought by space audio may make people indulge in the virtual world. Acknowledgements This work was supported in part by the National Key R&D Program of China under Grant No.2022ZD0162000,National Natural Science Foundation of China under Grant No.62222211, Grant No.61836002 and Grant No.62072397. This work was also supported by Speech Lab of DAMO Academy, Alibaba Group.", "introduction": "Binaural audio synthesis (BAS), which aims to render binaural audio from the monaural counter- part, has become a prominent technology in ar- tificial spaces (e.g. augmented and virtual real- ity) (Richard et al., 2021, 2022; Leng et al., 2022; Lee and Lee, 2022; Parida et al., 2022; Zhu et al., 2022; Park and Kim, 2022). 
Binaural rendering provides users with an immersive spatial and so- cial presence (Hendrix and Barfield, 1996; Gao and Grauman, 2019; Huang et al., 2022; Zheng et al., 2022), by producing stereophonic sounds with ac- curate spatial information. Unlike traditional single channel audio synthesis (van den Oord et al., 2016; Chen et al., 2021), BAS places more emphasis on \u2217Equal contribution. accuracy over sound quality, since humans need to interpret accurate spatial clues to locate objects and sense their movements consistent with visual input (Richard et al., 2021; Lee et al., 2022). Currently, there are three types of neural net- works (NN) to synthesize binaural audio. Firstly, Richard et al. (2021) collects a paired monaural- binaural speech dataset and provides an end-to-end baseline with geometric and neural warping tech- nologies. Secondly, to simplify the task, Leng et al. (2022) decompose the synthesis into a two-stage paradigm: the common information of the binau- ral audio is generated in the first stage, based on which the binaural audio is generated in the sec- ond stage. They also propose to use the generative model DDPM (Ho et al., 2020) to improve the audio naturalness. Thirdly, to increase the gener- alization capability for the out-of-distribution au- dio, Lee and Lee (2022) renders the speech in the Fourier space. These non-linear NN-based meth- ods outperform the traditional digital signal pro- cessing systems based on a linear time-invariant system (Savioja et al., 1999; Zotkin et al., 2004; Sunder et al., 2015). However, these NN methods still have room for improvement in accuracy, especially phase accu- racy. Richard et al. (2022) claims that the correct phase estimation is crucial for binaural rendering 1. Actually, the previous works tend to view the scene \u201cstatically\u201d, and only take into account the series of positions and head orientations. This motivates us to propose DopplerBAS, which facilitates phase estimation by explicitly introducing the Doppler effect (Gill, 1965; Giordano, 2009) into neural net- works. Specifically, 1) we calculate the 3D velocity vector of the moving sound source in the Cartesian coordinates and then decompose this 3D velocity vector into a velocity vector in the spherical coor- 1Our ears can discriminate interaural time differences as short as 10\u00b5s (Brown and Duda, 1998; Richard et al., 2021; johansson et al., 2022). arXiv:2212.07000v3 [eess.AS] 1 Jun 2023 dinates relative to the listener; 2) According to the Doppler effect, we use the radial relative velocity as an additional condition of the neural network, to incentivize the model to sense the moving objects. We also analyze the efficacy of different types of velocity conditions through extensive experiments. Naturally, DopplerBAS can be applied to differ- ent neural binaural renderers without tuning hyper- parameters. We pick two typical recent backbones to demonstrate the effectiveness of our method: 1) WarpNet (Richard et al., 2021), a traditional neu- ral network optimized by reconstruction losses; 2) BinauralGrad (Leng et al., 2022), a novel diffu- sion model optimized by maximizing the evidence bound of the data likelihood. Experiments on Warp- Net and BinauralGrad are representative and could show the generalizability of our proposed Doppler- BAS on other conditions based on gains on these two models. 
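To make the conditioning described above concrete, the following is a minimal NumPy sketch of how the radial relative velocities of Eq. (2) in the method section could be computed from tracked positions and appended to the original 7-dimensional position/orientation condition. It is illustrative only: the function and argument names are ours (not from the released DopplerBAS code), and it assumes the source and ear positions are sampled at a fixed tracking rate.

```python
import numpy as np

def radial_velocity_condition(src_pos, ear_left, ear_right, rate_hz=120.0):
    """Sketch of the DopplerBAS velocity condition (v_r-left, v_r-right).

    src_pos, ear_left, ear_right: float arrays of shape (T, 3) with tracked
    3D positions over time. Returns an array of shape (T, 2) that would be
    concatenated to the 7-dim condition to form the 9-dim condition C.
    """
    dt = 1.0 / rate_hz

    def radial_component(ear):
        p = src_pos - ear                       # position of the source relative to this ear
        v = np.gradient(p, dt, axis=0)          # finite-difference velocity of the relative position
        r_hat = p / (np.linalg.norm(p, axis=-1, keepdims=True) + 1e-8)  # radial unit vector
        return np.sum(v * r_hat, axis=-1)       # projection of v onto the radial direction (Eq. 2)

    return np.stack([radial_component(ear_left), radial_component(ear_right)], axis=-1)
```

In this sketch the radial component is obtained as a simple dot product with the radial unit vector; any smoother differentiation scheme could be substituted without changing the idea.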
The contributions of this work can be summarized as follows: \u2022 We propose DopplerBAS, which distinctly improves WarpNet and BinauralGrad in the phase error metric and produces a new state of the art performance: 0.780 (vs. the current state of the art 0.807). \u2022 We conduct analytical experiments under var- ious velocity conditions and discover that: 1) NN does not explicitly learn the derivative of position to time (velocity); 2) The veloc- ity condition is beneficial to binaural audio synthesis, even the absolute velocity in the Cartesian coordinates; 3) The radial relative velocity is the practical velocity component, which obeys the theory of the Doppler effect." }, { "url": "http://arxiv.org/abs/2202.13277v2", "title": "Learning the Beauty in Songs: Neural Singing Voice Beautifier", "abstract": "We are interested in a novel task, singing voice beautifying (SVB). Given the\nsinging voice of an amateur singer, SVB aims to improve the intonation and\nvocal tone of the voice, while keeping the content and vocal timbre. Current\nautomatic pitch correction techniques are immature, and most of them are\nrestricted to intonation but ignore the overall aesthetic quality. Hence, we\nintroduce Neural Singing Voice Beautifier (NSVB), the first generative model to\nsolve the SVB task, which adopts a conditional variational autoencoder as the\nbackbone and learns the latent representations of vocal tone. In NSVB, we\npropose a novel time-warping approach for pitch correction: Shape-Aware Dynamic\nTime Warping (SADTW), which ameliorates the robustness of existing time-warping\napproaches, to synchronize the amateur recording with the template pitch curve.\nFurthermore, we propose a latent-mapping algorithm in the latent space to\nconvert the amateur vocal tone to the professional one. To achieve this, we\nalso propose a new dataset containing parallel singing recordings of both\namateur and professional versions. Extensive experiments on both Chinese and\nEnglish songs demonstrate the effectiveness of our methods in terms of both\nobjective and subjective metrics. Audio samples are available\nat~\\url{https://neuralsvb.github.io}. Codes:\n\\url{https://github.com/MoonInTheRiver/NeuralSVB}.", "authors": "Jinglin Liu, Chengxi Li, Yi Ren, Zhiying Zhu, Zhou Zhao", "published": "2022-02-27", "updated": "2022-03-02", "primary_cat": "eess.AS", "cats": [ "eess.AS", "cs.CL", "cs.LG", "cs.MM", "cs.SD" ], "main_content": "2.1 Singing Voice Conversion Singing Voice Conversion (SVC) is a sub-task of Voice Conversion (VC) (Berg-Kirkpatrick and Klein, 2015; Serr\u00e0 et al., 2019; Popov et al., 2021; Liu et al., 2021b), which transforms the vocal timbre (or singer identity) of one singer to that of another singer, while preserving the linguistic content and pitch/melody information (Li et al., 2021). 
Mainstream SVC models can be grouped into three categories (Zhao et al., 2020): 1) parallel spectral feature mapping models, which learn the conversion function between source and target singers relying on parallel singing data (Villavicencio and Bonada, 2010; Kobayashi et al., 2015; Sisman et al., 2019); 2) Cycle-consistent Generative Adversarial Networks (CycleGAN) (Zhu et al., 2017; Kaneko et al., 2019), where an adversarial loss and a cycle-consistency loss are concurrently used to learn the forward and inverse mappings simultaneously (Sisman and Li, 2020); 3) encoder-decoder models, such as PPG-SVC (Li et al., 2021), which leverage a singing voice synthesis (SVS) system for SVC (Zhang et al., 2020), and auto-encoder (Qian et al., 2019a; Wang et al., 2021b; Yuan et al., 2020) based SVC (Wang et al., 2021a). The models of the latter two categories can be utilized with non-parallel data. In our work, we aim to convert the intonation and the vocal tone while keeping the content and the vocal timbre, which is quite different from the SVC task. 2.2 Automatic Pitch Correction Automatic Pitch Correction (APC) works attempt to minimize the manual effort in modifying the flawed singing voice (Yong and Nam, 2018). Luo et al. (2018) propose Canonical Time Warping (CTW) (Zhou and Torre, 2009; Zhou and De la Torre, 2012), which aligns amateur singing recordings to professional ones according to the pitch curves only. Wager et al. (2020) propose a data-driven approach to predict pitch shifts depending on both the amateur recording and its accompaniment. Rosenzweig et al. (2021) propose a pitch shift method for a cappella recordings. Zhuang et al. (2021) propose a pitch-controllable SVS system to resynthesize the audio with correctly predicted pitch curves. Besides modifying pitch, Yong and Nam (2018) propose to modify pitch and energy information to improve the singing expressions of an amateur singing recording. However, this method heavily relies on a reference recording, causing the tuned recording and the reference recording to be homogeneous in singing style (Zhuang et al., 2021). Our work adopts a non-parametric and data-free pitch correction method like Luo et al. (2018), but improves the accuracy of alignment. Figure 1: The overview of NSVB. The training process consists of 2 stages, and the second stage shares the same pipeline with the inference stage. \u201cVAE Enc\u201d means the encoder of CVAE; \u201cVAE Dec\u201d means the decoder of CVAE; \u201cMel\u201d means the mel-spectrogram; \u201cz\u201d means the latent variable of the vocal tone; the \u201ca\u201d/\u201cp\u201d subscript means the amateur/professional version. 3 Methodology In this section, we describe the overview of NSVB, which is shown in Figure 1.
At Stage 1 in the figure, we reconstruct the input mel-spectrogram through the CVAE backbone (Section 3.1) based on the pitch, content and vocal timbre conditions extracted from the input by the pitch encoder, content encoder and timbre encoder, and optimize the CVAE by maximizing the evidence lower bound and by adversarial learning. At Stage 2/Inference in the figure, firstly we infer the latent variable za based on the amateur conditions; secondly we prepare the amateur content vectors aligned with the professional pitch by the SADTW algorithm (Section 3.2); thirdly we map za to zp by the latent-mapping algorithm (Section 3.3); finally, we mix the professional pitch, the aligned amateur content vectors, and the amateur vocal timbre to obtain a new condition, which is leveraged along with the mapped zp by the decoder of CVAE to generate a new beautified mel-spectrogram. The training/inference details and model structure of each component in NSVB are described in Section 3.4 and Section 3.5. 3.1 Conditional Variational Generator with Adversarial Learning As shown in Figure 2, to generate audio with high quality and learn the latent representations of vocal tone, we introduce a Conditional Variational AutoEncoder (CVAE) (Kingma and Welling, 2014; Sohn et al., 2015) as the mel-spectrogram generator, with the optimizing objective of maximizing the evidence lower bound (ELBO) of the intractable marginal log-likelihood of the mel-spectrogram log p_\theta(x|c): \log p_\theta(x|c) \geq \mathrm{ELBO}(\phi, \theta) \equiv E_{z \sim q_\phi(z|x,c)} [ \log p_\theta(x|z,c) - \log \frac{q_\phi(z|x,c)}{p(z)} ], where x, c, z denote the input/output mel-spectrogram, the mix of content, vocal timbre and pitch conditions, and the latent variable representing the vocal tone respectively; \phi and \theta denote the model parameters of the CVAE encoder and CVAE decoder; q_\phi(z|x,c) is the posterior distribution approximated by the CVAE encoder; p_\theta(x|z,c) is the likelihood function that generates mel-spectrograms given latent variable z and condition c; p(z) is the prior distribution of the latent variables z, and we choose the standard normal distribution as p(z) for simplification. Furthermore, to address the over-smoothing problem (Qian et al., 2019b) in CVAE, we utilize an adversarial discriminator (D) (Mao et al., 2017) to refine the output mel-spectrogram: L_{adv}(\phi, \theta) = E[(D(\tilde{x}) - 1)^2], L_{adv}(D) = E[(D(x) - 1)^2] + E[D(\tilde{x})^2], (1) where x is the ground truth and \tilde{x} is the output of CVAE. The descriptions for the model structure of each component are in Section 3.5. Figure 2: The CVAE backbone in NSVB. \u201cEnc/Dec Cond\u201d means the conditions for the encoder/decoder; \u201cConv1d\u201d means the 1-D convolutional layer; \u201cPooling\u201d means the average pooling layer; \u00b5 and \u03c3 represent the approximated mean and log scale standard deviation parameters in the posterior Gaussian distribution; z is the sampled latent variable. 3.2 Shape-Aware Dynamic Time Warping To implement the pitch correction, a straightforward method is aligning the amateur recording with the template pitch curve, and then concatenating them to resynthesize a new singing sample with improved intonation.
Since the source pitch curve of amateur recordings and the template one show a high degree of natural correlation along the time axis, applying a proper time-warping algorithm on them is crucial. However, original DTW (M\u00fcller, 2007) could result in a poor alignment when certain parts of the axis move to higher frequencies, and other parts to lower ones, or vice versa (Sundermann and Ney, 2003). Luo et al. (2018) adopt an advanced algorithm, CTW (Zhou and Torre, 2009), which combines canonical correlation analysis (CCA) and DTW to extract the feature sequences of two pitch curves, and then applies DTW on them. However, the alignment accuracy of CTW leaves much to be desired. We elaborate a non-parametric and data-free algorithm, Shape-Aware DTW (SADTW), based on the prior knowledge that the source pitch curve and the template one have analogous local shape contours. Specifically, we replace the Euclidean distance in the original DTW distance matrix with the shape context descriptor distance. The shape context descriptor of a time point f_i in one pitch curve is illustrated in Figure 3. Inspired by (Mori et al., 2005), we divide the data points around f_i into m \times n bins by m time windows and n angles. We calculate the number of all points falling in the k-th bin. Then the descriptor for f_i is defined as the histogram h_i \in R^{m \times n}: h_i(k) = |\{ f_j \neq f_i, f_j \in \mathrm{bin}(k) \}|, where | \cdot | means the cardinality of a set. This histogram represents the distribution over relative positions, which is a robust, compact and discriminative descriptor. Then, it is natural to use the \chi^2-test statistic on this distribution descriptor as the \u201cdistance\u201d of two points f_a and f_p: C(a, p) = \frac{1}{2} \sum_{k=1}^{m \times n} \frac{[h_a(k) - h_p(k)]^2}{h_a(k) + h_p(k)}, where h_a and h_p are the normalized histograms corresponding to the point f_a from the amateur pitch curve and the point f_p from the template pitch curve. C(a, p) ranges from 0 to 1. Finally, we run DTW on the distance matrix C to obtain the alignment with the least distance cost between the two curves. Figure 3: The shape descriptor in SADTW. The blue curve represents pitch; the horizontal axis means time; the vertical axis means F0-frequency. There are m = 4 windows, n = 6 angles to divide neighbor points of f_i. 3.3 Latent-mapping Algorithm Define a pair of mel-spectrograms (xa, xp): the contents of xa and xp are the same sentence of a song from the same singer, who sings these two recordings using the amateur tone and the professional tone respectively. Given the CVAE model,
To make sure the converted latent variable could work well together with \u02c6 cp to generate a high-quality audio sample (with the correct pitch and improved vocal tone), we send M(za) to the CVAE decoder to generate \u02c6 x, and propose an additional loss: Lmap2(M) = \u2225\u02c6 x \u2212xp\u22251 + \u03bb(D(\u02c6 x) \u22121)2, where D has been optimized by Eq. (1); \u03bb is a hyper-parameter. 3.4 Training and Inference There are two training stages for NSVB: in the \ufb01rst training stage, we optimize CVAE by minimizing the following loss function L(\u03c6, \u03b8) = \u2212ELBO(\u03c6, \u03b8) + \u03bbLadv(\u03c6, \u03b8), and optimize the discriminator (D) by minimizing Eq. (1). Note that, the \ufb01rst stage is the reconstruction process of mel-spectrograms, where any unpaired, unlabeled singing data beyond PopBuTFy could be leveraged to facilitate the learning of the latent representations. In the second training stage, we optimize M on the parallel dataset PopBuTFy by minimizing the following loss function L(M) = Lmap1(M) + Lmap2(M). \u03c6, \u03b8, and D are not optimized in this stage. In inference, the encoder of CVAE encodes xa with the condition ca to predict za. Secondly, we map za to M(za), and run SADTW to align the 7During training, template pitch is extracted from the waveform corresponding to xp. amateur recordings with the template pitch curve. The template pitch curve can be derived from a reference recording with good intonation or a pitch predictor with the input of music notes. Then, we obtain \u02c6 cp de\ufb01ned in Section 3.3 and send M(za) together with \u02c6 cp in the decoder of CVAE to generate \u02c6 x. Finally, by running a pre-trained vocoder conditioned on \u02c6 x, a new beauti\ufb01ed recording is produced. 3.5 Model Structure The encoder of CVAE consists of a 1-D convolutional layer (stride=4), an 8-layer WaveNet structure (Oord et al., 2016; Rethage et al., 2018) and 3 1-D convolutional layers (stride=2) with ReLU activation function and batch normalization followed by a mean pooling, which outputs the mean and log scale standard deviation parameters in the posterior distribution of z. The decoder of CVAE consists of a 4-layer WaveNet structure and a 1-D convolutional layer, which outputs the mel-spectrogram with 80 channels. The discriminator adopts the same structure as (Wu and Luan, 2020), which consists of multiple random window discriminators. The latent-mapping function is composed of 2 linear layers to encode the vocal timbre as the mapping condition, and 3 linear layers to map za. The pitch encoder is composed of 3 convolutional layers. In addition, given a singing recording, 1) to obtain its content vectors, we train an Automatic Speech Recognition (ASR) model based on Conformer (Gulati et al., 2020) with both speech and singing data, and extract the hidden states from the ASR encoder (viewed as the content encoder) output as the linguistic content information, which are also called phonetic posterior-grams (PPG); 2) to obtain the vocal timbre, we leverage the open-source API resemblyzer8 as the timbre encoder, which is a deep learning model designed for speaker veri\ufb01cation (Wan et al., 2018), to extract the identity information of a singer. More details of model structure can be found in Appendix A. 4 Experiments 4.1 Experimental Setup In this section, we \ufb01rst introduce PopBuTFy, the dataset for SVB, and then describe the implementation details in our work. Finally, we explain the evaluation method we adopt in this paper. 
8https://github.com/resemble-ai/ Resemblyzer Dataset Since there is no publicly available highquality, unaccompanied and parallel singing dataset for the SVB task, we collect and annotate a dataset containing both Chinese Mandarin and English pop songs: PopBuTFy. To collect PopBuTFy for SVB, the quali\ufb01ed singers majoring in vocal music are asked to sing a song twice, using the amateur vocal tone for one time and the professional vocal tone for another. Note that some of the amateur recordings are sung off-key by one or more semi-tones for the pitch correction sub-task. The parallel setting could make sure that the personal vocal timbre will keep still during the beautifying process. In all, PopBuTFy consists of 99 Chinese pop songs (\u223c10.4 hours in total) from 12 singers and 443 English pop songs (\u223c40.4 hours in total) from 22 singers. All the audio \ufb01les are recorded in a professional recording studio by quali\ufb01ed singers, male and female. Every song is sampled at 22050 Hz with 16-bit quantization. We randomly choose 274 pieces in Chinese and 617 pieces in English for validation and test. For subjective evaluations, we choose 60 samples in the test set from different singers, half in Chinese and English. All testing samples are included for objective evaluations. Implementation Details We train the Neural Singing Beauti\ufb01er on a single 32G Nividia V100 GPU with the batch size of 64 sentences for both 100k steps in Stage 1 and Stage 2 respectively. Besides PopBuTFy, we pre-train the ASR model (used for PPG extraction) leveraging the extra speech datasets: AISHELL-3 (Yao Shi et al., 2020) for Chinese and LibriTTS (Zen et al., 2019) for English. For the semi-supervised learning mentioned in Section 1 and Section 3.4, we leverage an internal Chinese singing dataset (\u223c30 hours without labeled vocal tone) in the \ufb01rst training stage described in Section 3.4 for Chinese experiments. The output melspectrograms of our model are transformed into audio samples using a HiFi-GAN vocoder (Kong et al., 2020) trained with singing data in advance. We set the \u03bb metioned in Section 3.3 to 0.1. We transform the raw waveform with the sampling rate 22050 Hz into mel-spectrograms with the frame size 1024 and the hop size 128. We extract F0 (fundamental frequency) as pitch information from the raw waveform using Parselmouth9, following Wu and Luan (2020); Blaauw and Bonada (2020); Ren et al. (2020). To obtain the ground truth pitch 9https://github.com/YannickJadoul/ Parselmouth alignment between the amateur recordings and the professional ones for evaluating the accuracy of pitch alignment algorithm, we run the Montreal Forced Aligner tool (McAuliffe et al., 2017) on all the singing recordings to obtain their alignments to lyrics. Then the ground-truth pitch alignment can be derived since the lyrics are shared in a pair of data in PopBuTFy. Performance Evaluation We employ both subjective metrics: Mean Opinion Score (MOS), Comparison Mean Opinion Score (CMOS), and an objective metric: Mean Cepstral Distortion (MCD) to evaluate the audio quality on the test-set. Besides, we use F0 Root Mean Square Error (F0 RMSE) and Pitch Alignment Accuracy (PAA) to estimate the pitch correction performance. For audio, we analyze the MOS and CMOS in two aspects: audio quality (naturalness, pronunciation and sound quality) and vocal tone quality. MOS-Q/CMOS-Q and MOS-V/CMOS-V correspond to the MOS/CMOS of audio quality and vocal tone quality respectively. 
More details about subjective evaluations are placed in Appendix C. 4.2 Main Results In this section, we conduct extensive experiments to present our proposed model in regard to 1) the performance of pitch conversion; 2) the audio quality and vocal tone quality. 4.2.1 Pitch Correction Firstly, we provide the comparison among timewarping algorithms in terms of PAA in Table 1. Normed DTW means two pitch curves will be normalized before running DTW (M\u00fcller, 2007); CTW means the Canonical Time Warping (Zhou and Torre, 2009), which is used for pitch correction in Luo et al. (2018). It can be seen that, SADTW surpasses existing methods by a large margin. We also visualize an alignment example of DTW, CTW, and SADTW in Figure 4. Secondly, to check whether the amateur recordings are corrected to the good intonation after being beauti\ufb01ed by NSVB, we calculate the F0 RMSE metric of the amateur recordings and the audio generated by NSVB, and list the results in Table 2. We can see that F0 RMSE has been improved significantly, which means NSVB successfully achieve pitch correction. DTW SADTW professional amateur aligned Mel A Mel P DTW CTW SADTW CTW f0 (HZ) f0 (HZ) f0 (HZ) time (frame) Figure 4: The behavior of DTW, CTW and SADTW. 1) In the left panel of the \ufb01gure, we align the pitch curve of the amateur recording to the professional one\u2019s. It can be seen that DTW perform terribly; CTW fails at many parts; SADTW perform well as expectation. 2) In the right panel of the \ufb01gure, we use the alignments obtained from these time-warping algorithm on pitch curves to align the amateur mel-spectrogram to the professional one. It shows that only SADTW could provide an alignment which preserves the content information in the amateur recording well and make the aligned result match the professional recording along the time axis. Table 1: The Pitch Alignment Accuracy of different algorithms on Chinese and English songs. Algorithm PAA (%) Chinese English DTW 66.94 63.90 Normed DTW 65.19 62.86 CTW 71.35 69.28 SADTW 79.45 78.64 Table 2: The F0 RMSE of the original amateur audio and the beauti\ufb01ed audio on Chinese and English datasets. \u201cGT Amateur\u201d means the ground-truth amateur recordings. Algorithm F0 RMSE (Hz) Chinese English GT Amateur 25.11 23.75 NVSB 6.96 7.29 4.2.2 Audio Quality and Vocal Tone Quality To thoroughly evaluate our proposed model in audio quality and vocal tone quality, we compare subjective metric MOS-Q, MOS-V and objective metric MCD of audio samples generated by NVSB with the systems including: 1) GT Mel, amateur (A) and professional (P) version, where we \ufb01rst convert ground truth audio into mel-spectrograms, and then convert the mel-spectrograms back to audio using HiFi-GAN introduced in Section 4.1; 2) Baseline: the baseline model for SVB based on WaveNet with the number of parameters similar to NSVB, which adopts the same pitch correction method (SADTW) as NSVB does, and takes in the condition \u02c6 cp de\ufb01ned in Section 3.3 to generate the mel-spectrogram optimized by the L1 distance to xp. MCD is calculated using the audio samples of GT Mel P as references. The subjective and objective results on both Chinese and English datasets are shown in Table 3. 
We can see that 1) NSVB achieves promising results, with MOS-Q being less than those for ground truth professional recordings by only 0.1 and 0.12 on both datasets; 2) NSVB surpasses the GT Mel A in terms of MOS-V by a large margin, which indicates that NSVB successfully accomplishes the vocal tone improvement. 3) NSVB surpasses the baseline model on all the metrics distinctly, which proves the superiority of our proposed model; 4) GT Mel P, NSVB and Baseline all outperform GT Mel A in terms of MOS-V, which demonstrates that the proposed dataset PopBuTFy is reasonably labeled in respect of vocal tone. 4.3 Ablation Studies We conduct some ablation studies to demonstrate the effectiveness of our proposed methods and some designs in our model, including latentmapping, additional loss Lmap2 in the second training stage, and semi-supervised learning with extra unpaired, unlabeled data on Chinese songs. Table 3: The Mean Opinion Score in audio quality (MOS-Q), vocal tone (MOS-V) with 95% con\ufb01dence intervals and the Mean Cepstral Distortion (MCD) comparisons with ground-truth singing recordings and baseline model. Method MOS-Q MOS-V MCD Chinese GT Mel P 4.21 \u00b1 0.06 4.27 \u00b1 0.10 GT Mel A 4.11 \u00b1 0.07 3.51 \u00b1 0.13 Baseline 3.90 \u00b1 0.09 3.58 \u00b1 0.18 7.609 NVSB 4.11 \u00b1 0.07 3.69 \u00b1 0.17 7.068 English GT Mel P 3.96 \u00b1 0.11 3.96 \u00b1 0.18 GT Mel A 3.67 \u00b1 0.11 3.36 \u00b1 0.19 Baseline 3.65 \u00b1 0.12 3.37 \u00b1 0.19 8.166 NVSB 3.84 \u00b1 0.06 3.63 \u00b1 0.18 7.992 4.3.1 Latent Mapping We compare audio samples from NSVB with and without latent-mapping in terms of CMOS-V and MCD. From Table 4, we can see that the latentmapping brings CMOS-V and MCD gains, which demonstrates the improvements in vocal tone from latent-mapping in our model. We visualize linearspectrograms of GT Mel A, GT Mel P, NSVB, NSVB w/o mapping in Appendix B. The patterns of highfrequency parts in NVSB samples are comparatively similar to those in GT Mel P samples while NSVB w/o mapping sample resembles GT Mel A samples. Table 4: The Comparison Mean Opinion Score in vocal tone (CMOS-V) and the Mean Ceptral Distortion (MCD) results of singing audio samples for latent mapping. Method CMOS-V MCD Chinese NVSB 0.000 7.068 NVSB w/o mapping -0.100 7.069 English NVSB 0.000 7.992 NVSB w/o mapping -0.330 8.115 4.3.2 Additional Loss Lmap2 As shown in Table 5, all the compared metrics show the effectiveness of Lmap2, which means that the additional loss Lmap2 is bene\ufb01cial to optimizing the latent mapping function M, working as a complement to the basic loss Lmap1. Table 5: The Comparison Mean Opinion Score in audio quality (CMOS-Q), vocal tone (CMOS-V) and the Mean Ceptral Distortion (MCD) of singing audio samples. Method CMOS-Q CMOS-V MCD Chinese NVSB 0.000 0.000 7.068 NVSB w/o Lmap2 -0.213 -0.760 7.237 English NVSB 0.000 0.000 7.992 NVSB w/o Lmap2 -0.060 -0.090 8.040 4.3.3 Semi-supervised Learning To illustrate the advantage of the CVAE architecture that allows semi-supervised training, we compare NSVB trained with and without extra unpaired, unlabeled data on Chinese songs. The corresponding results are shown in Table 6. The compared metrics indicate the advantage of semi-supervised learning, which facilitates the learning of the latent representations for better sample reconstruction (audio quality) and better latent conversion (vocal tone quality). 
Table 6: The Comparison Mean Opinion Score in audio quality (CMOS-Q), vocal tone (CMOS-V) and the Mean Ceptral Distortion (MCD) of singing audio samples. Method CMOS-Q CMOS-V MCD NVSB 0.000 0.000 7.068 NVSB w/o extra data -0.420 -0.070 7.283 5 Conclusion In this work, we propose Neural Singing Voice Beauti\ufb01er, the \ufb01rst generative model for the SVB task, which is based on a CVAE model allowing semi-supervised learning. For pitch correction, we propose a robust alignment algorithm: ShapeAware Dynamic Time Warping (SADTW). For vocal tone improvement, we propose a latent mapping algorithm. To retain the vocal timbre during the vocal tone mapping, we also propose a new specialized SVB dataset named PopBuTFy containing parallel singing recordings of both amateur and professional versions. The experiments conducted on the dataset of Chinese and English songs show that NSVB accomplishes the SVB task (pitch correction and vocal tone improvement), and extensional ablation studies demonstrate the effectiveness of the proposed methods mentioned above.", "introduction": "The major successes of the arti\ufb01cial intelligent singing voice research are primarily in Singing Voice Synthesis (SVS) (Lee et al., 2019; Blaauw and Bonada, 2020; Ren et al., 2020; Lu et al., 2020; Liu et al., 2021a) and Singing Voice Conversion (SVC) (Sisman and Li, 2020; Li et al., 2021; Wang et al., 2021a). However, the Singing Voice Beauti- fying (SVB) remains an important and challenging endeavor for researchers. SVB aims to improve the intonation1 and the vocal tone of the voice, while keeping the content and vocal timbre2. SVB is ex- tensively required both in the professional record- ing studios and the entertainment industries in our daily life, since it is impractical to record \ufb02awless singing audio. Nowadays in real-life scenarios, SVB is usually performed by professional sound engineers with adequate domain knowledge, who manipulate com- mercial vocal correction tools such as Melodyne3 and Autotune4 (Yong and Nam, 2018). Most cur- rent automatic pitch correction works are shown to be an attractive alternative, but they may 1) show weak alignment accuracy (Luo et al., 2018) or pitch accuracy (Wager et al., 2020); 2) cause the tuned recording and the reference recording to be homo- geneous in singing style (Yong and Nam, 2018). Besides, they typically focus on the intonation but ignore the overall aesthetic quality (audio quality and vocal tone) (Rosenzweig et al., 2021; Zhuang et al., 2021). To tackle these challenges, we introduce Neu- ral Singing Voice Beauti\ufb01er (NSVB), the \ufb01rst generative model to solve the SVB task, which adopts a Conditional Variational AutoEncoder (CVAE) (Kingma and Welling, 2014; Sohn et al., 2015) as the backbone to generate high-quality au- dio and learns the latent representation of vocal tone. In NSVB, we dichotomize the SVB task into pitch correction and vocal tone improvement: 1) To correct the intonation, a straightforward method is aligning the amateur recording with the tem- plate pitch curve, and then putting them together to resynthesize a new singing sample. Previous 1Intonation refers to the accuracy of pitch in singing. 2The differences between the vocal tone and vocal timbre is that: the former represents one\u2019s skills of singing, such as air\ufb02ow controlling ability, muscle strength of vocal folds and vocal placement; the latter represents the identical, overall sound of one\u2019s vocal. 
3https://www.celemony.com/en/start 4https://www.antarestech.com/ arXiv:2202.13277v2 [eess.AS] 2 Mar 2022 works (Wada et al., 2017; Luo et al., 2018) imple- mented this by \ufb01guring out the alignment through Dynamic Time Warping (DTW) (M\u00fcller, 2007) or Canonical Time Warping (CTW) (Zhou and Torre, 2009). We propose a novel Shape-Aware DTW algorithm, which ameliorates the robustness of ex- isting time-warping approaches by considering the shape of the pitch curve rather than low-level fea- tures when calculating the optimal alignment path. 2) To improve the vocal tone, we propose a latent- mapping algorithm in the latent space, which con- verts the latent variables of the amateur vocal tone to those of the professional ones. This process is optimized by maximizing the log-likelihood of the converted latent variables. To retain the vocal timbre during the vocal tone mapping, we also pro- pose a new dataset named PopBuTFy containing parallel singing recordings of both amateur and pro- fessional versions. Besides, thanks to the autoen- coder structure, NSVB inherently supports semi- supervised learning, where the additional unpaired, unlabeled5 singing data could be leveraged to fa- cilitate the learning of the latent representations. Extensive experiments on both Chinese and En- glish songs show that NSVB outperforms previous methods by a notable margin, and each component in NSVB is effective, in terms of both objective and subjective metrics. The main contributions of this work are summarized as follows: \u2022 We propose the \ufb01rst generative model NSVB to solve the SVB task. NSVB not only corrects the pitch of amateur recordings, but also generates the audio with high audio quality and improved vocal tone, to which previous works typically pay little attention. \u2022 We propose Shape-Aware Dynamic Time Warp- ing (SADTW) algorithm to synchronize the am- ateur recording with the template pitch curve, which ameliorates the robustness of the previous time-warping algorithm. \u2022 We propose a latent-mapping algorithm to con- vert the latent variable of the amateur vocal tone to the professional one\u2019s, and contribute a new dataset PopBuTFyto train the latent-mapping function. \u2022 We design NSVB as a CVAE model, which sup- ports the semi-supervised learning to leverage 5\u201cunpaired, unlabeled\u201d means the recordings sung by any people, in any vocal tone without label. unpaired, unlabeled singing data for better per- formance." }, { "url": "http://arxiv.org/abs/2107.06831v2", "title": "Parallel and High-Fidelity Text-to-Lip Generation", "abstract": "As a key component of talking face generation, lip movements generation\ndetermines the naturalness and coherence of the generated talking face video.\nPrior literature mainly focuses on speech-to-lip generation while there is a\npaucity in text-to-lip (T2L) generation. T2L is a challenging task and existing\nend-to-end works depend on the attention mechanism and autoregressive (AR)\ndecoding manner. However, the AR decoding manner generates current lip frame\nconditioned on frames generated previously, which inherently hinders the\ninference speed, and also has a detrimental effect on the quality of generated\nlip frames due to error propagation. This encourages the research of parallel\nT2L generation. In this work, we propose a parallel decoding model for fast and\nhigh-fidelity text-to-lip generation (ParaLip). 
Specifically, we predict the\nduration of the encoded linguistic features and model the target lip frames\nconditioned on the encoded linguistic features with their duration in a\nnon-autoregressive manner. Furthermore, we incorporate the structural\nsimilarity index loss and adversarial learning to improve perceptual quality of\ngenerated lip frames and alleviate the blurry prediction problem. Extensive\nexperiments conducted on GRID and TCD-TIMIT datasets demonstrate the\nsuperiority of proposed methods. Video samples are available via\n\\url{https://paralip.github.io/}.", "authors": "Jinglin Liu, Zhiying Zhu, Yi Ren, Wencan Huang, Baoxing Huai, Nicholas Yuan, Zhou Zhao", "published": "2021-07-14", "updated": "2021-12-20", "primary_cat": "cs.MM", "cats": [ "cs.MM", "cs.CV" ], "main_content": "Speech-to-Lip Generation Previous speech-driven works e.g.Chung, Jamaludin, and Zisserman (2017) simply generate the talking face images conditioned on the encoded speech and the encoded face image carrying the identity information. To synthesize more accurate and distinct lip movements, Chen et al. (2018) introduce the task of speechto-lip generation using lip image as the identity information. Further, Song et al. (2019) add a lip-reading discriminator to focus on the mouth region, and Zhu et al. (2020) add the dynamic attention on lip area to synthesize talking face while keeping the lip movements realistic. Prajwal et al. (2020) propose a pre-trained lip-syncing discriminator to synthesize talking face with speech-consistent lip movements. Text-to-Lip Generation The literature of direct text-tolip generation is rare. Some text-driven approaches either cascade the text-to-speech and speech-to-lip generation model(KR et al. 2019; Kumar et al. 2017), or combine the text feature with speech feature together to synthesize lip movements (Yu, Yu, and Ling 2019). Fried et al. (2019) edit a given video based on pure speech-aligned text sequence. Unlike the scenario where source speech or video is given, the sequence length of target lip frames is uncertain with only text input. Existing work (Chen et al. 2020) depends on the attention mechanism and AR decoding method to generate the target lip frames until the stop token is predicted. 2.2 Non-Autoregressive Sequence Generation In sequence-to-sequence tasks, an autoregressive (AR) model takes in a source sequence and then generates tokens of the target sentence one by one with the causal structure at inference (Sutskever, Vinyals, and Le 2014; Vaswani et al. 2017). Since the AR decoding manner causes the high inference latency, many non-autoregressive (NAR) models, which generate target tokens conditionally independent of Image Decoder DeConv1 DeConv2 DeConvK \u2026 Identity Information Motion Information at \u03c4 time Image at \u03c4 time Video Decoder T\u00d7 Video frames Motion Information Linguistic information N\u00d7 Positional Encoding Text Tokens Text Encoder Motion Decoder Video Decoder \u2026 Conv1 Conv2 ConvK Identity Image Identity Encoder \u2026 L1 Loss & SSIM Loss & Adversarial Loss Text Encoder Duration Predictor N\u00d7 TM Block Positional Encoding Layer Norm Linear Layer Motion Decoder Linguistic information Text Tokens Text Embedding TM Block Length Expansion Layer Norm b i n | w h i t e | \u2026 (a) ParaLip. 
Image Decoder DeConv1 DeConv2 DeConvK \u2026 Identity Information Motion Information at \u03c4 time Image at \u03c4 time Video Decoder T\u00d7 Video frames Motion Information Linguistic information N\u00d7 Positional Encoding Text Tokens Text Encoder Motion Decoder Video Decoder \u2026 Conv1 Conv2 ConvK Identity Image Identity Encoder \u2026 L1 Loss & SSIM Loss & Adversarial Loss Text Encoder Duration Predictor N\u00d7 TM Block Positional Encoding Layer Norm Linear Layer Motion Decoder Linguistic information Text Tokens Text Embedding TM Block Length Expansion Layer Norm b i n | w h i t e | \u2026 (b) Text Encoder with Length Regulator. Image Decoder DeConv1 DeConv2 DeConvK \u2026 Identity Information Motion Information at \u03c4 time Image at \u03c4 time Video Decoder T\u00d7 Video frames Motion Information Linguistic information N\u00d7 Positional Encoding Text Tokens Text Encoder Motion Decoder Video Decoder \u2026 Conv1 Conv2 ConvK Identity Image Identity Encoder \u2026 L1 Loss & SSIM Loss & Adversarial Loss Text Encoder Duration Predictor N\u00d7 TM Block Positional Encoding Layer Norm Linear Layer Motion Decoder Linguistic information Text Tokens Text Embedding TM Block Length Expansion Layer Norm b i n | w h i t e | \u2026 (c) Motion Decoder. Image Decoder DeConv1 DeConv2 DeConvK \u2026 Identity Information Motion Information at \u03c4 time Image at \u03c4 time Video Decoder T\u00d7 Video frames Motion Information Linguistic information N\u00d7 Positional Encoding Text Tokens Text Encoder Motion Decoder Video Decoder \u2026 Conv1 Conv2 ConvK Identity Image Identity Encoder \u2026 L1 Loss & SSIM Loss & Adversarial Loss Text Encoder Duration Predictor N\u00d7 TM Block Positional Encoding Layer Norm Linear Layer Motion Decoder Linguistic information Text Tokens Text Embedding TM Block Length Expansion Layer Norm b i n | w h i t e | \u2026 (d) Video Decoder with multiple Image Decoders. Figure 2: The overall architecture for ParaLip. In sub\ufb01gure (a), Identity Encoder sends out residual information at every convolutional layer. In sub\ufb01gure (b), Length Regulator expands the text sequence according to ground truth duration in training or predicted duration in inference. In sub\ufb01gure (c), Motion Decoder models lip movement information sequence from linguistic information sequence. In sub\ufb01gure (d), there are T Image Decoders placed parallel in Video Decoder. The \u03c4-th Image Decoder takes in motion information at \u03c4 time and generates lip image at \u03c4 time. T means total number of lip frames. each other, have been proposed recently. Earliest in the NAR machine translation \ufb01eld, many works use the fertility module or length predictor (Gu et al. 2018; Lee, Mansimov, and Cho 2018; Ghazvininejad et al. 2019; Ma et al. 2019) to predict the length correspondence (fertility) between source and target sequences, and then generate the target sequence depending on the source sequence and predicted fertility. Shortly afterward, researchers bring NAR decoding manner into heterogeneous tasks. In the speech \ufb01eld, NAR-based TTS (Ren et al. 2019; Peng et al. 2020; Miao et al. 2020) synthesize speech from text with high speed and slightly quality drop; NAR-based ASR (Chen et al. 2019; Higuchi et al. 2020) recognize speech to corresponding transcription faster. In the computer vision \ufb01eld, Liu et al. (2020) propose an NAR model for lipreading; Deng et al. 
(2020) present an NAR image caption model, not only improving the decoding efficiency but also making the generated captions more controllable and diverse. 3 Method 3.1 Preliminary Knowledge The text-to-lip generation aims to generate the sequence of lip movement video frames L = {l_1, l_2, ..., l_T}, given the source text sequence S = {s_1, s_2, ..., s_m} and a single identity lip image l_I as condition. Generally, there is a considerable discrepancy between the sequence lengths of L and S, with an uncertain mapping relationship. Previous work views this as a sequence-to-sequence problem, utilizing the attention mechanism and the AR decoding manner, where the conditional probability of L can be formulated as: P(L|S, l_I) = \prod_{\tau=0}^{T} P(l_{\tau+1} | l_{<\tau+1}, S, l_I; \theta), (1) where \theta denotes the parameters of the model. To remedy the error propagation and high latency problems brought by AR decoding, ParaLip models the target sequence in an NAR manner, where the conditional probability becomes: P(L|S, l_I) = \prod_{\tau=1}^{T} P(l_\tau | S, l_I; \theta). (2) 3.2 Model Architecture of ParaLip The overall model architecture and training losses are shown in Figure 2a. We explain each component in ParaLip in the following paragraphs. Identity Encoder As shown in the right panel of Figure 2a, the identity encoder consists of stacked 2D convolutional layers with batch normalization, which down-samples the identity image multiple times to extract features. The identity image is selected randomly from the target lip frames, providing the appearance information of a speaker. It is worth noting that the identity encoder sends out the final encoded hidden feature together with the intermediate hidden features of the convolutional layers at every level, which provides the fine-grained image information. Text Encoder As shown in Figure 2b, the text encoder consists of a text embedding layer, stacked feed-forward Transformer layers (TM) (Ren et al. 2019), a duration predictor and a length regulator. The TM layer contains a self-attention layer and a 1D convolutional layer with layer normalization and residual connection (Vaswani et al. 2017; Gehring et al. 2017). The duration predictor contains two 1D convolutional layers with layer normalization and one linear layer, which takes in the hidden text embedding sequence and predicts the duration sequence D* = {d*_1, d*_2, ..., d*_m}, where d*_i means how many video frames the i-th text token corresponds to. The length regulator expands the hidden text embedding sequence according to the ground truth duration D at the training stage or the predicted duration D* at the inference stage. For example, when the given source text and duration sequence are {s_1, s_2, s_3} and {2, 1, 3} respectively, denoting the hidden text embedding as {h_1, h_2, h_3}, the expanded sequence is {h_1, h_1, h_2, h_3, h_3, h_3}, which carries the linguistic information corresponding to the lip movement video at frame level. Collectively, the text encoder encodes the source text sequence S to the linguistic information sequence \tilde{S} = {\tilde{s}_1, \tilde{s}_2, ..., \tilde{s}_{T*}}, where T* = \sum_{i=1}^{m} d_i at the training stage, or T* = \sum_{i=1}^{m} d*_i at the inference stage. Motion Decoder The motion decoder (Figure 2c) aims to model the lip movement information sequence \tilde{L} = {\tilde{l}_1, \tilde{l}_2, ..., \tilde{l}_{T*}} from the linguistic information sequence \tilde{S}. It utilizes the positional encoding and self-attention mechanism in stacked TM blocks to enforce the temporal correlation on the hidden sequence.
Video Decoder. The video decoder generates the target lip movement video $L^*$ conditioned on the motion information sequence and the identity information. As shown in Figure 2d, the video decoder consists of multiple parallel image decoders with all parameters shared, each of which contains stacked 2D deconvolutional layers; there are skip connections at every level between the identity encoder and each image decoder, implemented by concatenation. Two extra 2D convolutional layers are then added at the end of each decoder for spatial coherence. Finally, the τ-th image decoder takes in the lip motion information $\tilde{l}_\tau$ at time τ and generates the lip image $l^*_\tau$ at time τ in the corresponding shape.

3.3 Training Methods
In this section, we describe the loss functions and training strategy used to supervise ParaLip. The reconstruction loss and duration prediction loss endow the model with the fundamental ability to generate lip movement video. To generate lips with better perceptual quality and to alleviate the "blurry predictions" problem (Mathieu, Couprie, and LeCun 2016), the structural similarity index loss and adversarial learning are introduced. We also explore a source-target alignment method for the case where audio is absent even in the training set, which is introduced in Section 6.

Reconstruction Loss. We optimize the whole network with an L1 reconstruction loss on the generated lip sequence $L^*$:
$$\mathcal{L}_{rec} = \sum_{\tau=1}^{T} \| l_\tau - l^*_\tau \|_1. \quad (3)$$

Duration Prediction Loss. In the training stage, we add an L1 loss on the predicted duration sequence $D^*$ at the token level² and the sequence level, which supervises the duration predictor to make precise fine-grained and coarse-grained predictions. The duration prediction loss $\mathcal{L}_{dur}$ can be written as:
$$\mathcal{L}_{dur} = \sum_{i=1}^{m} \| d_i - d^*_i \|_1 + \Big\| \sum_{i=1}^{m} d_i - \sum_{i=1}^{m} d^*_i \Big\|_1. \quad (4)$$

Structural Similarity Index Loss. The Structural Similarity Index (SSIM) (Wang 2004) is adopted to measure perceptual image quality; it takes luminance, contrast, and structure into account and is close to human perception. The SSIM value for two pixels at position $(i, j)$ in the τ-th images $l^*_\tau$ and $l_\tau$ can be formulated as:
$$\mathrm{SSIM}_{i,j,\tau} = \frac{2\mu_{l^*_\tau}\mu_{l_\tau} + C_1}{\mu^2_{l^*_\tau} + \mu^2_{l_\tau} + C_1} \cdot \frac{2\sigma_{l^*_\tau l_\tau} + C_2}{\sigma^2_{l^*_\tau} + \sigma^2_{l_\tau} + C_2},$$
where $\mu_{l^*_\tau}$ and $\mu_{l_\tau}$ denote the means over regions of images $l^*_\tau$ and $l_\tau$ within a 2D window surrounding $(i, j)$; similarly, $\sigma_{l^*_\tau}$ and $\sigma_{l_\tau}$ are the standard deviations, $\sigma_{l^*_\tau l_\tau}$ is the covariance, and $C_1$ and $C_2$ are constants. To improve the perceptual quality of the generated lip frames, we leverage an SSIM loss in ParaLip. Assuming the size of each lip frame is $A \times B$, the SSIM loss between the generated $L^*$ and the ground truth $L$ becomes:
$$\mathcal{L}_{ssim} = \frac{1}{T \cdot A \cdot B} \sum_{\tau=1}^{T} \sum_{i}^{A} \sum_{j}^{B} \big(1 - \mathrm{SSIM}_{i,j,\tau}\big). \quad (5)$$
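To make the SSIM term in Eq. (5) concrete, here is a hedged sketch using a uniform local window computed with average pooling; it is an illustrative approximation rather than the authors' implementation (border handling and the window shape may differ), and it assumes single-channel frames scaled to [0, 1]:

```python
# Approximate SSIM loss (Eq. 5) with a uniform window; illustrative only.
import torch
import torch.nn.functional as F

def ssim_loss(pred: torch.Tensor, target: torch.Tensor,
              window: int = 7, c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """pred, target: [T, 1, A, B] lip frames in [0, 1]. Returns mean (1 - SSIM)."""
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window, stride=1, padding=pad)
    # local (biased) variances and covariance
    var_p = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return (1.0 - ssim).mean()
```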
Adversarial Learning. Through experiments, we found that using only the above losses is insufficient to generate distinct lip images with realistic texture and local details (e.g., wrinkles, beard, and teeth). Thus, we adopt adversarial learning to mitigate this problem and train a quality discriminator Disc along with ParaLip. The Disc contains stacked 2D convolutional layers with LeakyReLU activations, which down-sample each image to 1 × 1 × H (H is the hidden size), followed by a 1 × 1 convolutional layer that projects the hidden states to a probability value for judging real or fake. We use the loss function of LSGAN (Mao et al. 2017) to train ParaLip and Disc:
$$\mathcal{L}^{G}_{adv} = \mathbb{E}_{x \sim l^*}\,(Disc(x) - 1)^2, \quad (6)$$
$$\mathcal{L}^{D}_{adv} = \mathbb{E}_{x \sim l}\,(Disc(x) - 1)^2 + \mathbb{E}_{x \sim l^*}\,Disc(x)^2, \quad (7)$$
where $l^*$ denotes lip images generated by ParaLip and $l$ denotes ground-truth lip images. To summarize, we optimize Disc by minimizing Equation (7), and optimize ParaLip by minimizing $\mathcal{L}_{total}$:
$$\mathcal{L}_{total} = \lambda_1 \mathcal{L}_{rec} + \lambda_2 \mathcal{L}_{dur} + \lambda_3 \mathcal{L}_{ssim} + \lambda_4 \mathcal{L}^{G}_{adv}, \quad (8)$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are hyperparameters that trade off the four losses.

²Character level for GRID and phoneme level for TCD-TIMIT, following previous works.
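As an illustration of how Eqs. (3)-(8) combine into one alternating optimization step, the sketch below assumes a `paralip` generator that returns predicted frames and durations, a `disc` image discriminator, and the `ssim_loss` helper sketched earlier; these names, the return format, and the λ values are placeholders, not part of the paper:

```python
# Hedged sketch of one alternating training step with the LSGAN objectives (Eqs. 6-8).
import torch

def train_step(paralip, disc, opt_g, opt_d, text, identity, gt_frames, gt_dur,
               lambdas=(1.0, 1.0, 1.0, 1.0)):
    l1, l2, l3, l4 = lambdas

    # --- discriminator update: minimize Eq. (7) ---
    with torch.no_grad():
        fake = paralip(text, identity, gt_dur)["frames"]
    d_loss = ((disc(gt_frames) - 1) ** 2).mean() + (disc(fake) ** 2).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: minimize Eq. (8) ---
    out = paralip(text, identity, gt_dur)               # assumed to return frames + durations
    rec = (out["frames"] - gt_frames).abs().mean()      # L1 reconstruction, cf. Eq. (3)
    dur = (out["dur"] - gt_dur).abs().sum() + (out["dur"].sum() - gt_dur.sum()).abs()  # cf. Eq. (4)
    ssim = ssim_loss(out["frames"], gt_frames)          # Eq. (5), see the SSIM sketch above
    adv = ((disc(out["frames"]) - 1) ** 2).mean()       # Eq. (6)
    g_loss = l1 * rec + l2 * dur + l3 * ssim + l4 * adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```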
4 Experimental Settings

4.1 Datasets
GRID. The GRID dataset (Cooke et al. 2006) consists of 33 video-available speakers, each uttering 1,000 phrases. The phrases follow a fixed six-category grammar: command (4) + color (4) + preposition (4) + letter (25) + digit (10) + adverb (4), where the number in parentheses denotes how many choices each category has. Thus, the total vocabulary size is 51, composing 64,000 possible phrases. All videos last 3 seconds at a frame rate of 25 fps, for a total duration of 27.5 hours. It is a typical talking-face dataset, and a considerable number of lip-related works (Assael et al. 2016; Chung et al. 2017; Afouras, Chung, and Zisserman 2018; Chen et al. 2018; Zhu et al. 2020; Lin et al. 2021) conduct experiments on it. Following previous works, we select 255 random samples from each speaker to form the test set.

TCD-TIMIT. The TCD-TIMIT dataset (Harte and Gillen 2015) is closer to real cases and more challenging than GRID, since 1) the vocabulary is not limited; and 2) the video lengths are not fixed and are longer than those in GRID. We use the 'volunteers' subset of TCD-TIMIT following previous works, which consists of 59 speakers uttering about 98 sentences each. The frame rate is 29.97 fps and each video lasts 2.5~8.1 seconds, for a total duration of about 7.5 hours. We set aside 30% of the data from each speaker for testing, following the recommended speaker-dependent train-test splits (Harte and Gillen 2015).

4.2 Data Pre-processing
For video pre-processing, we utilize Dlib (King 2009) to detect 68 facial landmarks (including 20 mouth landmarks) and extract face images from the video frames. We resize the face images to 256 × 256 and further crop each face to a fixed 160 × 80 region centered on the lips. For text pre-processing, we encode the text sequence at the character level for the GRID dataset and at the phoneme level for the TCD-TIMIT dataset. For ground-truth duration extraction, we first extract the speech audio from the video files and then utilize the "Penn Phonetics Lab Forced Aligner" (P2FA) (Yuan and Liberman 2008) to obtain speech-to-text alignments, from which we obtain the duration of each text token for training the duration predictor in ParaLip.

5 Results and Analysis
In this section, we present extensive experimental results to evaluate the performance of ParaLip in terms of lip movement quality and inference speedup. We then conduct ablation experiments to verify the significance of all proposed methods in ParaLip.

5.1 Quality Comparison
We compare our model with 1) DualLip (Chen et al. 2020), the state-of-the-art (SOTA) autoregressive text-to-lip model based on an RNN and location-sensitive attention (Shen et al. 2018); and 2) TransformerT2L, an autoregressive baseline model based on the Transformer (Vaswani et al. 2017) implemented by us, which uses the same model settings as ParaLip³. The quantitative results on GRID and TCD-TIMIT are listed in Table 1 and Table 2, respectively⁴. Note that we do not add adversarial learning to any model in Table 1 or Table 2, since there is no adversarial learning in DualLip (Chen et al. 2020).

Table 1: Comparison with autoregressive benchmarks on the GRID dataset. † denotes our reproduction under the case w/o GT duration at inference.
Methods              | PSNR ↑ | SSIM ↑ | LMD ↓
DualLip (AR)         | 29.13† | 0.872† | 1.809†
TransformerT2L (AR)  | 26.85  | 0.829  | 1.980
ParaLip (ours)       | 28.74  | 0.875  | 1.675

Table 2: Comparison with autoregressive benchmarks on the TCD-TIMIT dataset. † denotes our reproduction under the case w/o GT duration at inference.
Methods              | PSNR ↑ | SSIM ↑ | LMD ↓
DualLip (AR)         | 27.38† | 0.809† | 2.351†
TransformerT2L (AR)  | 26.89  | 0.794  | 2.763
ParaLip (ours)       | 27.64  | 0.816  | 2.084

³Most modules and the total number of model parameters in ParaLip and TransformerT2L are similar.
⁴Note that the reported results are all under the case where the ground-truth (GT) duration is not provided at inference (denoted as w/o duration in (Chen et al. 2020)), since there is no GT duration available in real cases.

Quantitative Comparison. We can see that: 1) on the GRID dataset (Table 1), ParaLip outperforms DualLip on the LMD metric and keeps comparable performance in terms of PSNR and SSIM. On the TCD-TIMIT dataset (Table 2), however, ParaLip surpasses DualLip by a notable margin overall, since autoregressive models perform badly on long-sequence datasets due to accumulated prediction error; 2) ParaLip shows absolute superiority over the AR baseline TransformerT2L on all three quantitative metrics on both datasets; 3) although DualLip outperforms the AR baseline by incorporating location-sensitive attention, which can alleviate error propagation, it is still vulnerable on the long-sequence dataset.
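For reference, the PSNR column in Tables 1-2 corresponds to the standard peak signal-to-noise ratio; a generic computation (assuming frames scaled to [0, 1], not the authors' exact evaluation script) is:

```python
# Generic PSNR computation for the image-quality numbers above.
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """pred, target: tensors of identical shape, e.g. [T, C, A, B]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```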
Qualitative Comparison. We further visualize the qualitative comparison between DualLip, TransformerT2L, and ParaLip in Figure 3. It can be seen that the quality of the lip frames generated by DualLip and TransformerT2L becomes increasingly worse as time goes on. Concretely, the lip images become fuzzy and fall out of synchronization with the linguistic content. We attribute this phenomenon to the fact that the error propagation problem is severe in AR T2L: a wrong prediction can take place in many more dimensions (every pixel with three channels in the generated image), and there is information loss during down-sampling when the last generated lip frame is fed back to predict the current one. Worse still, on TCD-TIMIT, a long-sequence dataset, DualLip and TransformerT2L often generate entirely unsatisfying results that look like speechless video. By contrast, the lip frames generated by the NAR model ParaLip maintain high fidelity to the ground truth throughout, which demonstrates the effectiveness and robustness of NAR decoding.

[Figure 3 shows two qualitative cases, one from GRID ("sil bin blue at y nine now sil") and one from TCD-TIMIT, each with rows for Ground Truth, DualLip, TransformerT2L, and ParaLip.] Figure 3: The qualitative comparison among the AR SOTA (DualLip), the AR baseline (TransformerT2L), and our NAR method (ParaLip). We visualize two cases from the GRID and TCD-TIMIT datasets to illustrate the error propagation problem in AR generation and to verify the robustness of ParaLip. In the first case, the lip sequence generated by the AR baseline contains a wrong lip image (the 6-th frame, red box); as a result, the subsequent lip images conditioned on that image fall out of synchronization with the linguistic information and end in chaos, while DualLip alleviates the error propagation to some degree. In the second case, both AR models perform poorly on the long-sequence dataset and generate frames that look speechless as time goes on.

5.2 Speed Comparison
In this section, we evaluate and compare the average inference latency of DualLip, TransformerT2L, and ParaLip on both datasets. Furthermore, we study the relationship between inference latency and target video length.

Comparison of Average Inference Latency. The average inference latency is the average time consumed to generate one video sample on the test set, measured in seconds. Table 3 exhibits the inference latency of all systems. It can be found that: 1) compared with DualLip, ParaLip speeds up inference by 13.09× and 19.12× on average on the two datasets; 2) TransformerT2L has the same structure as ParaLip but runs about 50% slower than DualLip, which indicates that it is the NAR decoding manner in ParaLip that speeds up inference, rather than the modification of the model structure; 3) for the AR models, the time consumption for a single sentence rises to 0.5-1.5 seconds even on a GPU, which is unacceptable for real-world applications. By contrast, ParaLip addresses the inference latency problem satisfactorily.

Table 3: The comparison of inference latency on the GRID and TCD-TIMIT datasets. The computations are conducted on a server with 1 NVIDIA 2080Ti GPU.
Dataset   | Method          | Latency (s) | Speedup
GRID      | DualLip         | 0.299       | 1.00×
GRID      | TransformerT2L  | 0.689       | 0.43×
GRID      | ParaLip         | 0.022       | 13.09×
TCD-TIMIT | DualLip         | 0.650       | 1.00×
TCD-TIMIT | TransformerT2L  | 1.278       | 0.51×
TCD-TIMIT | ParaLip         | 0.034       | 19.12×

[Figure 4 plots inference time (s) against predicted lip video length for DualLip, TransformerT2L, and ParaLip.] Figure 4: Relationship between inference latency (seconds) and predicted video length for DualLip, TransformerT2L, and ParaLip.

Relationship between Inference Latency and Video Length. In this section, we study the speedup as the sequence length increases. The experiment is conducted on TCD-TIMIT, since its videos are not of fixed length. From Figure 4, it can be seen that: 1) ParaLip speeds up inference substantially due to its high parallelization compared with the AR models; 2) ParaLip is insensitive to sequence length and holds an almost constant inference latency, whereas the inference latency of DualLip and TransformerT2L increases linearly as the sequence length increases. As a result, the speedup of ParaLip relative to DualLip or TransformerT2L also increases linearly with the sequence length.
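The average inference latency reported in Table 3 amounts to timing one generation pass per test sample; a hedged sketch of such a measurement (with `model` and `test_samples` as placeholders) could look like:

```python
# Hedged sketch of measuring average per-sample inference latency, in the spirit of Table 3.
import time
import torch

@torch.no_grad()
def average_latency(model, test_samples, device="cuda"):
    model.eval().to(device)
    total = 0.0
    for text, identity in test_samples:
        if device == "cuda":
            torch.cuda.synchronize()          # make sure prior GPU work has finished
        start = time.perf_counter()
        _ = model(text.to(device), identity.to(device))
        if device == "cuda":
            torch.cuda.synchronize()          # wait for the asynchronous GPU kernels
        total += time.perf_counter() - start
    return total / len(test_samples)
```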
5.3 Ablation Study
We conduct ablation experiments on the GRID dataset to analyze the effectiveness of the proposed methods in our work. All results are shown in Table 4.

Table 4: Ablation studies on the GRID dataset. The base model is trained only with the L1 loss; "+SSIM" means adding the structural similarity index loss and "+ADV" means adding adversarial learning to the base model. FID denotes the Fréchet Inception Distance metric. To focus on frame quality, we provide the GT duration to eliminate the interference caused by discrepancies in the predicted length.
Model      | PSNR ↑ | SSIM ↑ | LMD ↓ | FID ↓
Base model | 30.24  | 0.896  | 0.998 | 56.36
+SSIM      | 30.51  | 0.906  | 0.978 | 55.05
+ADV       | 25.70  | 0.736  | 2.460 | 65.88
+SSIM+ADV  | 28.36  | 0.873  | 1.077 | 39.74

The experiments show that:
• Adding only the SSIM loss obtains the best scores on PSNR/SSIM/LMD ("+SSIM");
• Adding only adversarial training causes a performance drop on PSNR/SSIM/LMD, which is consistent with previous works (Song et al. 2019) ("+ADV");
• Adding SSIM to the model with adversarial training greatly alleviates the detriment to PSNR/SSIM/LMD brought by adversarial training, makes the GAN-based model more stable, and obtains the best FID score, which means the generated lips look more realistic ("+SSIM+ADV").
Previous works (Song et al. 2019) claim that 1) PSNR and SSIM cannot fully reflect visual quality; and 2) adversarial learning encourages the generated face to pronounce in diverse ways, leading to diverse lip movements and thus a worse LMD score. Although "+SSIM+ADV" causes marginal losses on the PSNR/SSIM/LMD scores, it obtains the best FID score and tends to generate distinct lip images with more realistic texture and local details (e.g., wrinkles, beard, and teeth). The qualitative results are shown in Figure 5.

[Figure 5 shows three example pairs comparing "+SSIM" and "+SSIM+ADV" outputs.] Figure 5: The qualitative evaluation of adversarial learning. "+ADV" tends to generate more realistic lip images.

6 Further Discussions
In the foregoing sections, we train the duration predictor using the "GT" duration extracted by P2FA, but this is not applicable to the case where audio is absent even in the training set. Thus, to obtain the "GT" duration in this case, we tried a lipreading model with monotonic alignment search (MAS) (Tillmann et al. 1997; Kim et al. 2020) to find the alignment between text and lip frames. Specifically, we 1) first trained a lipreading model with the CTC loss on the training set; 2) traversed the training set and, for each $(L, S)$ pair, extracted $O_{ctc} \in \mathbb{R}^{m}$ (the CTC outputs corresponding to the label tokens) from $O_{ctc} \in \mathbb{R}^{V}$ ($O_{ctc}$ is the original CTC output; $V$ is the vocabulary size); 3) applied softmax on the extracted outputs to obtain the probability matrix $P(A(s_i, l_j)) = \frac{e^{O^{i}_{ctc, l_j}}}{\sum_{m} e^{O^{m}_{ctc, l_j}}}$; and 4) conducted MAS by dynamic programming to find the best alignment solution. This method achieves results similar to P2FA on the GRID dataset but causes deterioration on the TCD-TIMIT dataset: PSNR 27.09, SSIM 0.816, LMD 2.313. Theoretically, obtaining the alignment between lip frames and their transcript directly from themselves has more potential than obtaining this alignment indirectly from an audio sample and its transcript, since there are many cases where the mouth is moving but the sound has not come out yet (e.g., the first milliseconds of a spoken sentence). This part of the video frames should be aligned to some words, but the corresponding audio is silent and will not be aligned to any word, causing contradictions. We think it is valuable to try more and better methods in this direction.
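The monotonic alignment search in step 4) above can be sketched as a small dynamic program; the probability-matrix layout and backtracking details below are our assumptions in the spirit of Kim et al. (2020), not the authors' exact procedure:

```python
# Sketch of monotonic alignment search (MAS) by dynamic programming.
# Assumes log_p has shape [m tokens, T frames] with T >= m.
import numpy as np

def monotonic_alignment_search(log_p: np.ndarray) -> np.ndarray:
    """Return durations d[i] = number of frames assigned to token i under the
    monotonic, non-skipping alignment maximizing the total log-probability."""
    m, T = log_p.shape
    neg_inf = -1e9
    Q = np.full((m, T), neg_inf)
    Q[0, 0] = log_p[0, 0]
    for j in range(1, T):
        for i in range(min(j + 1, m)):          # token i needs at least i frames before it
            stay = Q[i, j - 1]
            move = Q[i - 1, j - 1] if i > 0 else neg_inf
            Q[i, j] = log_p[i, j] + max(stay, move)
    # backtrack from the last token/frame, choosing the better predecessor
    durations = np.zeros(m, dtype=np.int64)
    i = m - 1
    for j in range(T - 1, -1, -1):
        durations[i] += 1
        if j > 0 and i > 0 and Q[i - 1, j - 1] >= Q[i, j - 1]:
            i -= 1
    return durations

# Example: align 3 tokens to 6 frames.
rng = np.random.default_rng(0)
print(monotonic_alignment_search(np.log(rng.dirichlet(np.ones(3), size=6).T)))
```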
7 Conclusion In this work, we point out and analyze the unacceptable inference latency and intractable error propagation existing in AR T2L generation, and propose a parallel decoding model ParaLip to circumvent these problems. Extensive experiments show that ParaLip generates lip movements with competitive quality compared with the state-of-the-art AR T2L model, exceeds the baseline AR model TransformerT2L by a notable margin and exhibits distinct superiority in inference speed, which provides the possibility to bring T2L generation from laboratory to industrial applications. 8 Acknowledgments This work was supported in part by the National Key R&D Program of China under Grant No.2020YFC0832505, No.62072397, Zhejiang Natural Science Foundation under Grant LR19F020006.", "introduction": "In the modern service industries, talking face generation has broad application prospects such as avatar, virtual assistant, movie animation, teleconferencing, etc. (Zhu et al. 2020). As a key component of talking face generation, lip movements generation (a.k.a. lip generation) determines the naturalness and coherence of the generated talking face video. Lip gen- eration aims to synthesize accurate mouth movements video corresponding to the linguistic content information carried in speech or pure text. Mainstream literature focuses on speech-to-lip (S2L) gen- eration while there is a paucity in text-to-lip (T2L) genera- tion. Even so, T2L generation is very crucial and has con- siderable merits compared to S2L since 1) text data can be obtained or edited more easily than speech, which makes T2L generation more convenient; and 2) T2L extremely *Equal contribution. \u2020Corresponding author Copyright \u00a9 2022, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. place white by u three again sil place white by u three again sil ground truth generated lip video ( by AR T2L model ) identity lip arbitrary text random selection \u2026 Figure 1: The task description of T2L generation. The model takes in an arbitrary source text sequence and a single iden- tity lip image to synthesize the target lip movements video. And in this \ufb01gure we can see that the generated video loses the linguistic information gradually and \ufb01nally becomes fuzzy and motionless (lip frames in the red box), which is the intractable problem existing in AR T2L models due to error propagation. preserves privacy especially in the society where the deep learning techniques are so developed that a single sentence speech could expose an unimaginable amount of personal information. However, end-to-end T2L task (shown in Figure 1) is challenging. Unlike S2L task where the mapping relation- ship between the sequence length of source speech and tar- get video is certain (according to audio sample rate and fps), there is an uncertain sequence length discrepancy be- tween source and target in T2L task. The traditional tempo- ral convolutional networks become impractical. Hence, ex- isting works view T2L as a sequence-to-sequence task and tackle it by leveraging the attention mechanism and autore- gressive (AR) decoding manner. The AR decoding manner brings two drawbacks: 1) it inherently hinders the inference speed since its decoder generates target lips one by one au- toregressively with the causal structure. 
Consequently, gen- erating a single sentence of short video consumes about 0.5- 1.5 seconds even on GPU, which is not acceptable for indus- trial applications such as real-time interactions with avatar or virtual assistant, real-time teleconferencing and document- level audio-visual speech synthesis, etc. 2) It has a detrimen- tal effect on the quality of generated lips due to error prop- arXiv:2107.06831v2 [cs.MM] 20 Dec 2021 agation1, which is frequently discussed in neural machine translation and image caption \ufb01eld (Bengio et al. 2015; Wu et al. 2018). Worse still, error propagation is more obvious in AR lip generation than in other tasks, because the mistakes could take place at more dimensions (every pixel with three channels in generated image) and there is information loss during the down-sampling when sending the last generated lip frame to predict current one. Although prior works al- leviate the error propagation by incorporating the technique of location-sensitive attention, it still has an unsatisfying per- formance on long-sequence datasets due to accumulated pre- diction error. To address such limitations, we turn to non-autoregressive (NAR) approaches. NAR decoding manner generates all the target tokens in parallel, which has already pervaded multi- ple research \ufb01elds such as neural machine translation (Gu et al. 2018; Lee, Mansimov, and Cho 2018; Ghazvinine- jad et al. 2019; Ma et al. 2019), speech recognition (Chen et al. 2019; Higuchi et al. 2020), speech synthesis (Ren et al. 2019; Peng et al. 2020; Miao et al. 2020), image captioning (Deng et al. 2020) and lip reading (Liu et al. 2020). These works utilize the NAR decoding in sequence- to-sequence tasks to reduce the inference latency or generate length-controllable sequence. In this work, we propose an NAR model for parallel and high-\ufb01delity T2L generation (ParaLip). ParaLip predicts the duration of the encoded linguistic features and models the target lip frames conditioned on the encoded linguistic fea- tures with their duration in a non-autoregressive manner. Furthermore, we leverage structural similarity index (SSIM) loss to supervise ParaLip generating lips with better per- ceptual quality. Finally, using only reconstruction loss and SSIM loss is insuf\ufb01cient to generate distinct lip images with more realistic texture and local details (e.g.wrinkles, beard and teeth), and therefore we adopt adversarial learning to mitigate this problem. Our main contributions can be summarized as follows: 1) We point out and analyze the unacceptable inference latency and intractable error propagation existing in AR T2L gen- eration. 2) To circumvent these problems, we propose Par- aLip to generate high-quality lips with low inference latency. And as a byproduct of ParaLip, the duration predictor in Par- aLip could be leveraged in an NAR text-to-speech model, which naturally enables the synchronization in audio-visual speech synthesis task. 3) We explore the source-target align- ment method when the audio is absent even in the train- ing set. Extensive experiments demonstrate that ParaLip generates the competitive lip movements quality compared with state-of-the-art AR T2L model and exceeds the base- line AR model TransformerT2L by a notable margin. In the meanwhile, ParaLip exhibits distinct superiority in inference speed, which truly provides the possibility to bring T2L gen- eration from laboratory to industrial applications. 
1Error propagation means if a token is mistakenly predicted at inference stage, the error will be propagated and the future tokens conditioned on this one will be in\ufb02uenced (Bengio et al. 2015; Wu et al. 2018)." }, { "url": "http://arxiv.org/abs/2105.02446v6", "title": "DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism", "abstract": "Singing voice synthesis (SVS) systems are built to synthesize high-quality\nand expressive singing voice, in which the acoustic model generates the\nacoustic features (e.g., mel-spectrogram) given a music score. Previous singing\nacoustic models adopt a simple loss (e.g., L1 and L2) or generative adversarial\nnetwork (GAN) to reconstruct the acoustic features, while they suffer from\nover-smoothing and unstable training issues respectively, which hinder the\nnaturalness of synthesized singing. In this work, we propose DiffSinger, an\nacoustic model for SVS based on the diffusion probabilistic model. DiffSinger\nis a parameterized Markov chain that iteratively converts the noise into\nmel-spectrogram conditioned on the music score. By implicitly optimizing\nvariational bound, DiffSinger can be stably trained and generate realistic\noutputs. To further improve the voice quality and speed up inference, we\nintroduce a shallow diffusion mechanism to make better use of the prior\nknowledge learned by the simple loss. Specifically, DiffSinger starts\ngeneration at a shallow step smaller than the total number of diffusion steps,\naccording to the intersection of the diffusion trajectories of the ground-truth\nmel-spectrogram and the one predicted by a simple mel-spectrogram decoder.\nBesides, we propose boundary prediction methods to locate the intersection and\ndetermine the shallow step adaptively. The evaluations conducted on a Chinese\nsinging dataset demonstrate that DiffSinger outperforms state-of-the-art SVS\nwork. Extensional experiments also prove the generalization of our methods on\ntext-to-speech task (DiffSpeech). Audio samples: https://diffsinger.github.io.\nCodes: https://github.com/MoonInTheRiver/DiffSinger. The old title of this\nwork: \"Diffsinger: Diffusion acoustic model for singing voice synthesis\".", "authors": "Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, Zhou Zhao", "published": "2021-05-06", "updated": "2022-03-22", "primary_cat": "eess.AS", "cats": [ "eess.AS", "cs.LG", "cs.SD" ], "main_content": "In this section, we introduce the theory of diffusion probabilistic model (Sohl-Dickstein et al. 2015; Ho, Jain, and Abbeel 2020). The full proof can be found in previous works (Ho, Jain, and Abbeel 2020; Kong et al. 2021; Song, Meng, and Ermon 2021). A diffusion probabilistic model converts the raw data into Gaussian distribution gradually by a diffusion process, and then learns the reverse process to restore the data from Gaussian white noise (Sohl-Dickstein et al. 2015). These processes are shown in Figure 1. 2Here we use a traditional acoustic model based on feedforward Transformer (Ren et al. 2021; Blaauw and Bonada 2020), which is trained by L1 loss to reconstruct mel-spectrogram. 3K < T, where T is the total number of diffusion steps. \ufffd Mk can be calculated in closed form time (Ho, Jain, and Abbeel 2020). 
[Figure 1 depicts the directed graphical model of diffusion: the diffusion process q(y_t | y_{t-1}) maps y_0 toward y_T, and the reverse process p_θ(y_{t-1} | y_t) maps y_T back toward y_0.] Figure 1: The directed graph for the diffusion model.

Diffusion Process. Define the data distribution as $q(y_0)$ and sample $y_0 \sim q(y_0)$. The diffusion process is a Markov chain with fixed parameters (Ho, Jain, and Abbeel 2020), which converts $y_0$ into the latent $y_T$ in $T$ steps:
$$q(y_{1:T} \mid y_0) := \prod_{t=1}^{T} q(y_t \mid y_{t-1}).$$
At each diffusion step $t \in [1, T]$, a tiny Gaussian noise is added to $y_{t-1}$ to obtain $y_t$, according to a variance schedule $\beta = \{\beta_1, \ldots, \beta_T\}$:
$$q(y_t \mid y_{t-1}) := \mathcal{N}\big(y_t; \sqrt{1 - \beta_t}\, y_{t-1}, \beta_t I\big).$$
If $\beta$ is well designed and $T$ is sufficiently large, then $q(y_T)$ is nearly an isotropic Gaussian distribution (Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021). Besides, a special property of the diffusion process is that $q(y_t \mid y_0)$ can be calculated in closed form in $O(1)$ time (Ho, Jain, and Abbeel 2020):
$$q(y_t \mid y_0) = \mathcal{N}\big(y_t; \sqrt{\bar{\alpha}_t}\, y_0, (1 - \bar{\alpha}_t) I\big), \quad (1)$$
where $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$ and $\alpha_t := 1 - \beta_t$.
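Eq. (1) is what later makes training efficient: $y_t$ can be drawn directly from $y_0$ in one step instead of iterating the chain. A minimal sketch, using the linear β schedule specified later in the paper (β_1 = 10^-4 to β_T = 0.06 with T = 100):

```python
# Closed-form forward diffusion q(y_t | y_0) from Eq. (1).
import torch

T = 100
betas = torch.linspace(1e-4, 0.06, T)           # beta_1 ... beta_T
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)        # \bar{alpha}_t

def q_sample(y0: torch.Tensor, t: int, noise=None) -> torch.Tensor:
    """Draw y_t ~ q(y_t | y_0) directly, without iterating t steps (t is 1-indexed)."""
    if noise is None:
        noise = torch.randn_like(y0)
    a_bar = alpha_bars[t - 1]
    return a_bar.sqrt() * y0 + (1.0 - a_bar).sqrt() * noise
```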
Reverse Process. The reverse process is a Markov chain with learnable parameters $\theta$ from $y_T$ to $y_0$. Since the exact reverse transition distribution $q(y_{t-1} \mid y_t)$ is intractable, we approximate it with a neural network with parameters $\theta$ ($\theta$ is shared at every $t$-th step):
$$p_\theta(y_{t-1} \mid y_t) := \mathcal{N}\big(y_{t-1}; \mu_\theta(y_t, t), \sigma_t^2 I\big). \quad (2)$$
Thus the whole reverse process can be defined as:
$$p_\theta(y_{0:T}) := p(y_T) \prod_{t=1}^{T} p_\theta(y_{t-1} \mid y_t).$$

Training. To learn the parameters $\theta$, we minimize a variational bound on the negative log-likelihood:
$$\mathbb{E}_{q(y_0)}[-\log p_\theta(y_0)] \le \mathbb{E}_{q(y_0, y_1, \ldots, y_T)}\big[\log q(y_{1:T} \mid y_0) - \log p_\theta(y_{0:T})\big] =: L.$$
Efficient training is achieved by optimizing a random term of $L$ with stochastic gradient descent (Ho, Jain, and Abbeel 2020):
$$L_{t-1} = D_{KL}\big(q(y_{t-1} \mid y_t, y_0) \,\|\, p_\theta(y_{t-1} \mid y_t)\big), \quad (3)$$
where
$$q(y_{t-1} \mid y_t, y_0) = \mathcal{N}\big(y_{t-1}; \tilde{\mu}_t(y_t, y_0), \tilde{\beta}_t I\big), \qquad \tilde{\mu}_t(y_t, y_0) := \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t}\, y_0 + \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\, y_t,$$
and $\tilde{\beta}_t := \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$. Eq. (3) is equivalent to:
$$L_{t-1} - C = \mathbb{E}_q\left[\frac{1}{2\sigma_t^2} \big\| \tilde{\mu}_t(y_t, y_0) - \mu_\theta(y_t, t) \big\|^2\right], \quad (4)$$
where $C$ is a constant. By reparameterizing Eq. (1) as $y_t(y_0, \epsilon) = \sqrt{\bar{\alpha}_t}\, y_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon$ and choosing the parameterization
$$\mu_\theta(y_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( y_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(y_t, t) \right), \quad (5)$$
Eq. (4) can be simplified to:
$$\mathbb{E}_{y_0, \epsilon}\left[\frac{\beta_t^2}{2\sigma_t^2\, \alpha_t (1 - \bar{\alpha}_t)} \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, y_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, t\big) \big\|^2\right]. \quad (6)$$
Finally, we set $\sigma_t^2$ to $\tilde{\beta}_t$, sample $\epsilon \sim \mathcal{N}(0, I)$, and take $\epsilon_\theta(\cdot)$ to be the output of the neural network.

Sampling. Sample $y_T$ from $p(y_T) \sim \mathcal{N}(0, I)$ and run the reverse process to obtain a data sample.
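Dropping the weighting factor in Eq. (6), as Algorithm 1 later does, training reduces to predicting the injected noise at a uniformly sampled step. A hedged sketch (with `denoiser` standing in for the conditional network ε_θ(M_t, x, t); the shallow variant would sample t only up to the boundary k):

```python
# Simplified epsilon-prediction objective in the spirit of Eq. (6) / Algorithm 1.
import torch

def diffusion_training_loss(denoiser, M0, cond, alpha_bars):
    """M0: [B, 80, frames] ground-truth mel; cond: music-score condition;
    alpha_bars: [T] cumulative products of (1 - beta_t)."""
    B = M0.shape[0]
    T = alpha_bars.shape[0]
    t = torch.randint(1, T + 1, (B,), device=M0.device)   # uniform diffusion step
    a_bar = alpha_bars[t - 1].view(B, 1, 1)
    eps = torch.randn_like(M0)
    Mt = a_bar.sqrt() * M0 + (1.0 - a_bar).sqrt() * eps    # q(M_t | M_0), Eq. (1)
    eps_hat = denoiser(Mt, cond, t)                        # predict the injected noise
    return torch.mean((eps - eps_hat) ** 2)
```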
[Figure 2 architecture diagram. Panels: (a) the training procedure of DiffSinger (encoder producing E_m, step embedding producing E_t, auxiliary decoder trained with an L1 loss against M, and denoiser ε_θ(·, t) trained with an L1 loss against ε); (b) the inference procedure of DiffSinger (loop the denoiser k/T times, cf. Algorithm 2, line 9).] Figure 2: The overview of DiffSinger (with the shallow diffusion mechanism in the dotted-line boxes). In subfigure (a), x is the music score, t is the step number, M is the ground-truth mel-spectrogram, $\tilde{M}$ is the blurry mel-spectrogram generated by the auxiliary decoder trained with the L1 loss, and $M_t$ is M at the t-th step of the diffusion process. In subfigure (b), $M_T$ is M at the T-th diffusion step (Gaussian white noise), k is the predicted intersection boundary, and a switch selects $M_T$ (naive version) or $\tilde{M}_k$ (with shallow diffusion) as the start point of the inference procedure.

3 DiffSinger
As illustrated in Figure 2, DiffSinger is built on the diffusion model. Since the SVS task models the conditional distribution $p_\theta(M_0 \mid x)$, where $M$ is the mel-spectrogram and $x$ is the music score corresponding to $M$, we add $x$ to the diffusion denoiser as a condition in the reverse process. In this section, we first describe a naive version of DiffSinger (Section 3.1); we then introduce a novel shallow diffusion mechanism to improve the model performance and efficiency (Section 3.2); finally, we describe the boundary prediction network, which adaptively finds the intersection boundary required by the shallow diffusion mechanism (Section 3.3).

3.1 Naive Version of DiffSinger
In the naive version of DiffSinger (without the dotted-line boxes in Figure 2): in the training procedure (Figure 2a), DiffSinger takes in the mel-spectrogram $M_t$ at the $t$-th diffusion step and predicts the random noise $\epsilon_\theta(\cdot)$ in Eq. (6), conditioned on $t$ and the music score $x$. The inference procedure (Figure 2b) starts from Gaussian white noise sampled from $\mathcal{N}(0, I)$, as previous diffusion models do (Ho, Jain, and Abbeel 2020; Kong et al. 2021). The procedure then iterates $T$ times, repeatedly denoising the intermediate samples in two steps: 1) predict $\epsilon_\theta(\cdot)$ using the denoiser; 2) obtain $M_{t-1}$ from $M_t$ using the predicted $\epsilon_\theta(\cdot)$, according to Eq. (2) and Eq. (5):
$$M_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( M_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(M_t, x, t) \right) + \sigma_t z,$$
where $z \sim \mathcal{N}(0, I)$ when $t > 1$ and $z = 0$ when $t = 1$. Finally, a mel-spectrogram $M$ corresponding to $x$ is generated.

3.2 Shallow Diffusion Mechanism
Although the previous acoustic model trained with the simple loss has intractable drawbacks, it still generates samples that show a strong connection⁴ to the ground-truth data distribution, which can provide plenty of prior knowledge to DiffSinger. To explore this connection and find a way to make better use of this prior knowledge, we make an empirical observation leveraging the diffusion process (shown in Figure 3): 1) when $t = 0$, $M$ has rich details between the neighboring harmonics, which influence the naturalness of the synthesized singing voice, whereas $\tilde{M}$ is over-smoothed, as introduced in Section 1; 2) as $t$ increases, the samples of the two processes become indistinguishable. We illustrate this observation in Figure 4: the trajectory from the $\tilde{M}$ manifold to the Gaussian noise manifold and the trajectory from the $M$ manifold to the Gaussian noise manifold intersect when the diffusion step is big enough.

⁴The samples fail to maintain the variable aperiodic parameters, but they usually have a clear "skeleton" (harmonics) matching the ground truth.

[Figure 3 shows mel-spectrograms of $\tilde{M}$ (top row) and $M$ (bottom row) at diffusion steps t = 0, 25, 50, 75.] Figure 3: The mel-spectrograms at different steps of the diffusion process. The first row shows the diffusion process of the mel-spectrograms $\tilde{M}$ generated by a simple decoder trained with the L1 loss; the second row shows that of the ground-truth mel-spectrograms.
Inspired by this observation, we propose the shallow diffusion mechanism: instead of starting from Gaussian white noise, the reverse process starts at the intersection of the two trajectories shown in Figure 4. Thus the burden of the reverse process is distinctly alleviated⁵. Specifically, in the inference stage we 1) leverage an auxiliary decoder to generate $\tilde{M}$, which is trained with an L1 loss conditioned on the music-score encoder outputs, as shown in the dotted-line box in Figure 2a; 2) generate the intermediate sample at a shallow step $k$ through the diffusion process, as shown in the dotted-line box in Figure 2b, according to Eq. (1):
$$\tilde{M}_k(\tilde{M}, \epsilon) = \sqrt{\bar{\alpha}_k}\, \tilde{M} + \sqrt{1 - \bar{\alpha}_k}\, \epsilon,$$
where $\epsilon \sim \mathcal{N}(0, I)$, $\bar{\alpha}_k := \prod_{s=1}^{k} \alpha_s$, and $\alpha_k := 1 - \beta_k$. If the intersection boundary $k$ is properly chosen, $\tilde{M}_k$ and $M_k$ can be considered to come from the same distribution; 3) start the reverse process from $\tilde{M}_k$ and complete it with $k$ denoising iterations. The training and inference procedures with the shallow diffusion mechanism are described in Algorithm 1 and Algorithm 2, respectively. The theoretical proof of the intersection of the two trajectories can be found in the supplement.

⁵Converting $M_k$ into $M_0$ is easier than converting $M_T$ (Gaussian white noise) into $M_0$ ($k < T$). Thus the former improves the quality of the synthesized audio and accelerates inference.

[Figure 4 sketches the M and $\tilde{M}$ manifolds at t = 0, the Gaussian noise manifold at t = T, and the intersection of the two diffusion trajectories at t = k.] Figure 4: The diffusion trajectories of $M$ and $\tilde{M}$. The two distributions $q(M_t \mid M_0)$ and $q(\tilde{M}_t \mid \tilde{M}_0)$ become closer as $t$ increases.

3.3 Boundary Prediction
We propose a boundary predictor (BP) to locate the intersection in Figure 4 and determine $k$ adaptively. Concretely, BP consists of a classifier and a module that adds noise to mel-spectrograms according to Eq. (1). Given a step number $t \in [0, T]$, we label $M_t$ as 1 and $\tilde{M}_t$ as 0, and use a cross-entropy loss to train the boundary predictor to judge whether an input mel-spectrogram at step $t$ of the diffusion process comes from $M$ or $\tilde{M}$. The training loss $\mathcal{L}_{BP}$ can be written as:
$$\mathcal{L}_{BP} = -\mathbb{E}_{M \in Y,\, t \in [0, T]}\big[\log BP(M_t, t) + \log(1 - BP(\tilde{M}_t, t))\big],$$
where $Y$ is the training set of mel-spectrograms. Once BP has been trained, we determine $k$ using the predicted value of BP, which indicates the probability of a sample being classified as 1. For all $M \in Y$, we find the earliest step $k'$ such that for 95% of the steps $t \in [k', T]$, the margin between $BP(M_t, t)$ and $BP(\tilde{M}_t, t)$ is under a threshold. We then choose the average of $k'$ as the intersection boundary $k$.

Algorithm 1: Training procedure of DiffSinger.
Input: the denoiser $\epsilon_\theta$; the intersection boundary $k$; the training set $(X, Y)$.
1: repeat
2:   Sample $(x, M)$ from $(X, Y)$;
3:   $\epsilon \sim \mathcal{N}(0, I)$;
4:   $t \sim \mathrm{Uniform}(\{1, \ldots, k\})$;
5:   Take a gradient descent step on
6:     $\nabla_\theta \,\big\|\epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, M + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, x, t\big)\big\|^2$
7: until convergence;

We also propose an easier trick for boundary prediction in the supplement, based on comparing KL-divergences. Note that boundary prediction can be considered a dataset pre-processing step that chooses the hyperparameter $k$ for the whole dataset; $k$ can also be chosen manually by brute-force search on the validation set.
3.4 Model Structures
Encoder. The encoder encodes the music score into the condition sequence. It consists of 1) a lyrics encoder that maps phoneme IDs into an embedding sequence, and a series of Transformer blocks (Vaswani et al. 2017) that convert this sequence into a linguistic hidden sequence; 2) a length regulator that expands the linguistic hidden sequence to the length of the mel-spectrogram according to the duration information; and 3) a pitch encoder that maps pitch IDs into a pitch embedding sequence. Finally, the encoder adds the linguistic sequence and the pitch sequence together as the music condition sequence $E_m$, following (Ren et al. 2020).

Algorithm 2: Inference procedure of DiffSinger.
Input: the denoiser $\epsilon_\theta$; the auxiliary decoder; the intersection boundary $k$; the source testing set $X$.
1: Sample $x$ from $X$ as condition;
2: Generate $\tilde{M}$ with the auxiliary decoder;
3: $\epsilon \sim \mathcal{N}(0, I)$;
4: $\tilde{M}_k(\tilde{M}, \epsilon) = \sqrt{\bar{\alpha}_k}\, \tilde{M} + \sqrt{1 - \bar{\alpha}_k}\, \epsilon$;
5: $M_k = \tilde{M}_k$;
6: for $t = k, k-1, \ldots, 1$ do
7:   if $t = 1$ then $z = 0$;
8:   else sample $z \sim \mathcal{N}(0, I)$;
9:   $M_{t-1} = \frac{1}{\sqrt{\alpha_t}} \big( M_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(M_t, x, t) \big) + \sigma_t z$
10: end

Step Embedding. The diffusion step $t$ is another conditional input of the denoiser $\epsilon_\theta$, as shown in Eq. (6). To convert the discrete step $t$ into a continuous hidden representation, we use the sinusoidal positional embedding (Vaswani et al. 2017) followed by two linear layers to obtain the step embedding $E_t$ with $C$ channels.

Auxiliary Decoder. We introduce a simple mel-spectrogram decoder called the auxiliary decoder, which is composed of stacked feed-forward Transformer (FFT) blocks and generates $\tilde{M}$ as its final output, the same as the mel-spectrogram decoder in FastSpeech 2 (Ren et al. 2021).

Denoiser. The denoiser $\epsilon_\theta$ takes $M_t$ as input to predict the $\epsilon$ added in the diffusion process, conditioned on the step embedding $E_t$ and the music condition sequence $E_m$. Since the diffusion model imposes no architectural constraints (Sohl-Dickstein et al. 2015; Kong et al. 2021), the design of the denoiser has multiple choices. We adopt a non-causal WaveNet (Oord et al. 2016) architecture proposed by (Rethage, Pons, and Serra 2018; Kong et al. 2021) as our denoiser. The denoiser is composed of a 1 × 1 convolution layer that projects $M_t$ with $H_m$ channels to the input hidden sequence $H$ with $C$ channels, and $N$ convolution blocks with residual connections. Each convolution block consists of 1) an element-wise addition that adds $E_t$ to $H$; 2) a non-causal convolution network that converts $H$ from $C$ to $2C$ channels; 3) a 1 × 1 convolution layer that converts $E_m$ to $2C$ channels; 4) a gate unit that merges the information of the input and the conditions; and 5) a residual block that splits the merged hidden states into two branches with $C$ channels (the residual serving as the following $H$, and the "skip hidden" collected for the final results), which enables the denoiser to incorporate features at several hierarchical levels for the final prediction.

Boundary Predictor. The classifier in the boundary predictor is composed of 1) a step embedding that provides $E_t$; and 2) a ResNet (He et al. 2016) with stacked convolutional layers and a linear layer, which takes in the mel-spectrogram at the $t$-th step together with $E_t$ to classify $M_t$ versus $\tilde{M}_t$. More details of the model structure and configurations are provided in the supplement.
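Putting Algorithm 2 and the choice $\sigma_t^2 = \tilde{\beta}_t$ together, a minimal sketch of the shallow-diffusion inference loop might look as follows; `denoiser` and `aux_decoder` are placeholders for the networks described above, and the tensor shapes are illustrative:

```python
# Hedged sketch of the shallow-diffusion inference loop (Algorithm 2): start from the
# noised auxiliary prediction at step k and denoise for k iterations.
import torch

@torch.no_grad()
def shallow_diffusion_inference(denoiser, aux_decoder, x, k, betas):
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    M_aux = aux_decoder(x)                                   # blurry mel, \tilde{M}
    eps = torch.randn_like(M_aux)
    a_bar_k = alpha_bars[k - 1]
    M = a_bar_k.sqrt() * M_aux + (1.0 - a_bar_k).sqrt() * eps  # \tilde{M}_k via Eq. (1)

    for t in range(k, 0, -1):                                # t = k, k-1, ..., 1
        a_t, a_bar_t = alphas[t - 1], alpha_bars[t - 1]
        a_bar_prev = alpha_bars[t - 2] if t > 1 else torch.tensor(1.0)
        sigma_t = (((1.0 - a_bar_prev) / (1.0 - a_bar_t)) * betas[t - 1]).sqrt()
        z = torch.randn_like(M) if t > 1 else torch.zeros_like(M)
        eps_hat = denoiser(M, x, torch.tensor([t]))
        # reverse update from Algorithm 2, line 9
        M = (M - (1.0 - a_t) / (1.0 - a_bar_t).sqrt() * eps_hat) / a_t.sqrt() + sigma_t * z
    return M
```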
4 Experiments In this section, we \ufb01rst describe the experimental setup, and then provide the main results on SVS with analysis. Finally, we conduct the extensional experiments on TTS. 4.1 Experimental Setup Dataset Since there is no publicly available high-quality unaccompanied singing dataset, we collect and annotate a Chinese Mandarin pop songs dataset: PopCS, to evaluate our methods. PopCS contains 117 Chinese pop songs (total \u223c5.89 hours with lyrics) collected from a quali\ufb01ed female vocalist. All the audio \ufb01les are recorded in a recording studio. Every song is sampled at 24kHz with 16-bit quantization. To obtain more accurate music scores corresponding to the songs (Lee et al. 2019), we 1) split each whole song into sentence pieces following DeepSinger (Ren et al. 2020) and train a Montreal Forced Aligner tool (MFA) (McAuliffe et al. 2017) model on those sentence-level pairs to obtain the phoneme-level alignments between song piece and its corresponding lyrics; 2) extract F0 (fundamental frequency) as pitch information from the raw waveform using Parselmouth, following (Wu and Luan 2020; Blaauw and Bonada 2020; Ren et al. 2020). We randomly choose 2 songs for validation and testing. To release a high-quality dataset, after the paper is accepted, we clean and re-segment these songs, resulting in 1,651 song pieces, which mostly last 10\u223c13 seconds. The codes accompanied with the access to PopCS are in https://github.com/MoonInTheRiver/DiffSinger. In this repository, we also add the extra codes out of interest, for MIDI-to-Mel, including the MIDI-to-Mel without F0 prediction/condition. Implementation Details We convert Chinese lyrics into phonemes by pypinyin following (Ren et al. 2020); and extract the mel-spectrogram (Shen et al. 2018) from the raw waveform; and set the hop size and frame size to 128 and 512 in respect of the sample rate 24kHz. The size of phoneme vocabulary is 61. The number of mel bins Hm is 80. The mel-spectrograms are linearly scaled to the range [1, 1], and F0 is normalized to have zero mean and unit variance. In the lyrics encoder, the dimension of phoneme embeddings is 256 and the Transformer blocks have the same setting as that in FastSpeech 2 (Ren et al. 2021). In the pitch encoder, the size of the lookup table and encoded pitch embedding are set to 300 and 256. The channel size C mentioned before is set to 256. In the denoiser, the number of convolution layers N is 20 with the kernel size 3, and we set the dilation to 1 (without dilation) at each layer6. We set T to 100 and \u03b2 to constants increasing linearly from \u03b21 = 10\u22124 to \u03b2T = 0.06. The auxiliary decoder has the same setting as the mel-spectrogram decoder in FastSpeech 2. In the boundary predictor, the number of convolutional layers is 5, and the threshold is set to 0.4 empirically. 6You can consider setting a bigger dilation number to increase the receptive \ufb01eld of the denoiser. See our Github repository. (a) GT (b) Diffsinger (c) GAN-singer (d) FFT-Singer Figure 5: Visualizations of mel-spectrograms in four systems: GT, DiffSinger, GAN-Singer and FFT-Singer. Training and Inference The training has two stages: 1) warmup stage: separately train the auxiliary decoder for 160k steps with the music score encoder, and then leverage the auxiliary decoder to train the boundary predictor for 30k steps to obtain k; 2) main stage: training DiffSinger as Algorithm 1 describes for 160k steps until convergence. 
In the inference stage, for all SVS experiments, we uniformly use a pretrained Parallel WaveGAN (PWG) (Yamamoto, Song, and Kim 2020)7 as vocoder to transform the generated melspectrograms into waveforms (audio samples). 4.2 Main Results and Analysis Audio Performance To evaluate the perceptual audio quality, we conduct the MOS (mean opinion score) evaluation on the test set. Eighteen quali\ufb01ed listeners are asked to make judgments about the synthesized song samples. We compare MOS of the song samples generated by DiffSinger with the following systems: 1) GT, the ground truth singing audio; 2) GT (Mel + PWG), where we \ufb01rst convert the ground truth singing audio to the ground truth mel-spectrograms, and then convert these mel-spectrograms back to audio using PWG vocoder described in Section 4.1; 3) FFT-NPSS (Blaauw and Bonada 2020) (WORLD), the SVS system which generates WORLD vocoder features (Morise, Yokomori, and Ozawa 2016) through feedforward Transformer (FFT) and uses WORLD vocoder to synthesize audio; 4) FFT-Singer (Mel + PWG) the SVS system which generates mel-spectrograms through FFT network and uses PWG vocoder to synthesize audio; 5) GANSinger (Wu and Luan 2020) (Mel + PWG), the SVS system with adversarial training using multiple random window discriminators. The results are shown in Table 1. The quality of GT (MEL + PWG) (4.04 \u00b1 0.11) is the upper limit of the acoustic model for SVS. DiffSinger outperforms the baseline system with simple training loss (FFT-Singer) by a large margin, and shows the superiority compared with the state-of-theart GAN-based method (GAN-Singer (Wu and Luan 2020)), which demonstrate the effectiveness of our method. As shown in Figure 5, we compare the ground truth, the generated mel-spectrograms from Diffsinger, GAN-singer and FFT-Singer with the same music score. It can be seen that both Figure 5c and Figure 5b contain more delicate details between harmonics than Figure 5d does. Moreover, the 7We adjust PWG to take in F0 driven source excitation (Wang and Yamagishi 2020) as additional condition, similar to that in (Chen et al. 2020). Method MOS GT 4.30 \u00b1 0.09 GT (Mel + PWG) 4.04 \u00b1 0.11 FFT-NPSS (WORLD) 1.75 \u00b1 0.17 FFT-Singer (Mel + PWG) 3.67 \u00b1 0.11 GAN-Singer (Mel + PWG) 3.74 \u00b1 0.12 DiffSinger Naive (Mel + PWG) 3.71 \u00b1 0.10 DiffSinger (Mel + PWG) 3.85 \u00b1 0.11 Table 1: The MOS with 95% con\ufb01dence intervals of song samples. DiffSinger Naive means the naive version of DiffSinger without shallow diffusion mechanism. performance of Diffsinger in the region of mid or low frequency is more competitive than that of GAN-singer while maintaining similar quality of the high-frequency region. In the meanwhile, the shallow diffusion mechanism accelerates inference of naive diffusion model by 45.1% (RTF 0.191 vs. 0.348, RTF is the real-time factor, that is the seconds it takes to generate one second of audio). Ablation Studies We conduct ablation studies to demonstrate the effectiveness of our proposed methods and some hyper-parameters studies to seek the best model con\ufb01gurations. We conduct CMOS evaluation for these experiments. The results of variations on DiffSinger are listed in Table 2. It can be seen that: 1) removing the shallow diffusion mechanism results in quality drop (-0.500 CMOS), which is consistent with the MOS test results and veri\ufb01es the effectiveness of our shallow diffusion mechanism (row 1 vs. row 2); 2) adopting other k (row 1 vs. 
row 3) rather than the one predicted by our boundary predictor causes quality drop, which veri\ufb01es that our boundary prediction network can predict a proper k for shallow diffusion mechanism; and 3) the model with con\ufb01gurations C = 256 and L = 20 produces the best results (row 1 vs. row 4,5,6,7), indicating that our model capacity is suf\ufb01cient. 4.3 Extensional Experiments on TTS To verify the generalization of our methods on TTS task, we conduct the extensional experiments on LJSpeech dataset (Ito and Johnson 2017), which contains 13,100 English audio clips (total \u223c24 hours) with corresponding transcripts. We follow the train-val-test dataset splits, the pre-processing of mel-spectrograms, and the grapheme-tophoneme tool in FastSpeech 2. To build DiffSpeech, we 1) No. C L w/ shallow k CMOS 1 256 20 \u2713 54 0.000 2 256 20 \u00d7 -0.500 3 256 20 \u2713 25 -0.053 4 128 20 \u2713 54 -0.071 5 512 20 \u2713 54 -0.044 6 256 10 \u2713 54 -0.293 7 256 30 \u2713 54 -0.445 Table 2: Variations on the DiffSinger. T in all the experiments is set to 100. C is channel size; L is the number of layers in denoiser; w/ shallow means \u201cwith shallow diffusion mechanism\u201d; k = 54 is our predicted intersection boundary. Method MOS GT 4.22 \u00b1 0.07 GT (Mel + HiFi-GAN) 4.15 \u00b1 0.07 Tacotron 2 (Mel + HiFi-GAN) 3.54 \u00b1 0.05 BVAE-TTS (Mel + HiFi-GAN) 3.48 \u00b1 0.06 FastSpeech 2 (Mel + HiFi-GAN) 3.68 \u00b1 0.06 Glow-TTS (Mel + HiFi-GAN) 3.69 \u00b1 0.07 DiffSpeech Naive (Mel + HiFi-GAN) 3.69 \u00b1 0.05 DiffSpeech (Mel + HiFi-GAN) 3.92 \u00b1 0.06 Table 3: The MOS of speech samples with 95% con\ufb01dence intervals. add a pitch predictor and a duration predictor to DiffSinger as those in FastSpeech 2; 2) adopt k = 70 for shallow diffusion mechanism. We use Amazon Mechanical Turk (ten testers) to make subjective evaluation and the results are shown in Table 3. All the systems adopt HiFi-GAN (Kong, Kim, and Bae 2020) as vocoder. DiffSpeech outperforms FastSpeech 2 and Glow-TTS, which demonstrates the generalization. Besides, the last two rows in Table 3 also show the effectiveness of shallow diffusion mechanism (with 29.2% speedup, RTF 0.121 vs. 0.171). 5 Related Work 5.1 Singing Voice Synthesis Initial works of singing voice synthesis generate the sounds using concatenated (Macon et al. 1997; Kenmochi and Ohshita 2007) or HMM-based parametric (Saino et al. 2006; Oura et al. 2010) methods, which are kind of cumbersome and lack \ufb02exibility and harmony. Thanks to the rapid evolution of deep learning, several SVS systems based on deep neural networks have been proposed in the past few years. Nishimura et al. (2016); Blaauw and Bonada (2017); Kim et al. (2018); Nakamura et al. (2019); Gu et al. (2020) utilize neural networks to map the contextual features to acoustic features. Ren et al. (2020) build the SVS system from scratch using singing data mined from music websites. Blaauw and Bonada (2020) propose a feed-forward Transformer SVS model for fast inference and avoiding exposure bias issues caused by autoregressive models. Besides, with the help of adversarial training, Lee et al. (2019) propose an end-to-end framework which directly generates linearspectrograms. Wu and Luan (2020) present a multi-singer SVS system with limited available recordings and improve the voice quality by adding multiple random window discriminators. Chen et al. (2020) introduce multi-scale adversarial training to synthesize singing with a high sampling rate (48kHz). 
The voice naturalness and diversity of SVS system have been continuously improved in recent years. 5.2 Denoising Diffusion Probabilistic Models A diffusion probabilistic model is a parameterized Markov chain trained by optimizing variational lower bound, which generates samples matching the data distribution in constant steps (Ho, Jain, and Abbeel 2020). Diffusion model is \ufb01rst proposed by Sohl-Dickstein et al. (2015). Ho, Jain, and Abbeel (2020) make progress of diffusion model to generate high-quality images using a certain parameterization and reveal an equivalence between diffusion model and denoising score matching (Song and Ermon 2019; Song et al. 2021). Recently, Kong et al. (2021) and Chen et al. (2021) apply the diffusion model to neural vocoders, which generate high-\ufb01delity waveform conditioned on mel-spectrogram. Chen et al. (2021) also propose a continuous noise schedule to reduce the inference iterations while maintaining synthesis quality. Song, Meng, and Ermon (2021) extend diffusion model by providing a faster sampling mechanism, and a way to interpolate between samples meaningfully. Diffusion model is a fresh and developing technique, which has been applied in the \ufb01elds of unconditional image generation, conditional spectrogram-to-waveform generation (neural vocoder). And in our work, we propose a diffusion model for the acoustic model which generates melspectrogram given music scores (or text). There is a concurrent work (Jeong et al. 2021) at the submission time of our preprint which adopts a diffusion model as the acoustic model for TTS task. 6 Conclusion In this work, we proposed DiffSinger, an acoustic model for SVS based on diffusion probabilistic model. To improve the voice quality and speed up inference, we proposed a shallow diffusion mechanism. Speci\ufb01cally, we found that the diffusion trajectories of M and f M converge together when the diffusion step is big enough. Inspired by this, we started the reverse process at the intersection (step k) of two trajectories rather than at the very deep diffusion step T. Thus the burden of the reverse process could be distinctly alleviated, which improves the quality of synthesized audio and accelerates inference. The experiments conducted on PopCS demonstrate the superiority of DiffSinger compared with previous works, and the effectiveness of our novel shallow diffusion mechanism. The extensional experiments conducted on LJSpeech dataset prove the effectiveness of DiffSpeech on TTS task. The directly synthesis without vocoder will be future work. Acknowledgments This work was supported in part by the National Key R&D Program of China under Grant No.2020YFC0832505, No.62072397, Zhejiang Natural Science Foundation under Grant LR19F020006. Thanks participants of the listening test for the valuable evaluations.", "introduction": "Singing voice synthesis (SVS) which aims to synthesize nat- ural and expressive singing voice from musical score (Wu and Luan 2020), increasingly draws attention from the research community and entertainment industries (Zhang et al. 2020). The pipeline of SVS usually consists of an acoustic model to generate the acoustic features (e.g., mel- spectrogram) conditioned on a music score, and a vocoder to convert the acoustic features to waveform (Nakamura et al. *Equal contribution. \u2020Corresponding author Copyright \u00a9 2022, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. 2019; Lee et al. 2019; Blaauw and Bonada 2020; Ren et al. 
2020; Chen et al. 2020)1. Previous singing acoustic models mainly utilize simple loss (e.g., L1 or L2) to reconstruct the acoustic features. However, this optimization is based on the incorrect uni- modal distribution assumptions, leading to blurry and over- smoothing outputs. Although existing methods endeavor to solve this problem by generative adversarial network (GAN) (Lee et al. 2019; Chen et al. 2020), training an ef- fective GAN may occasionally fail due to the unstable dis- criminator. These issues hinder the naturalness of synthe- sized singing. Recently, a highly \ufb02exible and tractable generative model, diffusion probabilistic model (a.k.a. diffusion model) (Sohl- Dickstein et al. 2015; Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2021) emerges. Diffusion model consists of two processes: diffusion process and reverse process (also called denoising process). The diffusion process is a Markov chain with \ufb01xed parameters (when using the certain param- eterization in (Ho, Jain, and Abbeel 2020)), which converts the complicated data into isotropic Gaussian distribution by adding the Gaussian noise gradually; while the reverse pro- cess is a Markov chain implemented by a neural network, which learns to restore the origin data from Gaussian white noise iteratively. Diffusion model can be stably trained by implicitly optimizing variational lower bound (ELBO) on the data likelihood. It has been demonstrated that diffu- sion model can produce promising results in image genera- tion (Ho, Jain, and Abbeel 2020; Song, Meng, and Ermon 2021) and neural vocoder (Chen et al. 2021; Kong et al. 2021) \ufb01elds. In this work, we propose DiffSinger, an acoustic model for SVS based on diffusion model, which converts the noise into mel-spectrogram conditioned on the music score. DiffSinger can be ef\ufb01ciently trained by optimizing ELBO, without adversarial feedback, and generates realistic mel- spectrograms strongly matching the ground truth distribu- tion. To further improve the voice quality and speed up infer- ence, we introduce a shallow diffusion mechanism to make better use of the prior knowledge learned by the simple loss. Speci\ufb01cally, we \ufb01nd that there is an intersection of the diffu- 1A music score consists of lyrics, pitch and duration. arXiv:2105.02446v6 [eess.AS] 22 Mar 2022 sion trajectories of the ground-truth mel-spectrogram M and the one predicted by a simple mel-spectrogram decoder f M 2: sending M and f M into the diffusion process could result in similar distorted mel-spectrograms, when the diffusion step is big enough (but not reaches the deep step where the distorted mel-spectrograms become Gaussian white noise). Thus, in the inference stage we 1) leverage the simple mel- spectrogram decoder to generate f M; 2) calculate the sam- ple at a shallow step k through the diffusion process: f Mk3; and 3) start reverse process from f Mk rather than Gaussian white noise, and complete the process by k iteration denois- ing steps (Vincent 2011; Song and Ermon 2019; Ho, Jain, and Abbeel 2020). Besides, we train a boundary prediction network to locate this intersection and determine the k adap- tively. The shallow diffusion mechanism provides a better start point than Gaussian white noise and alleviates the bur- den of the reverse process, which improves the quality of synthesized audio and accelerates inference. Finally, since the pipeline of SVS resembles that of text- to-speech (TTS) task, we also build DiffSpeech adjusting from DiffSinger for generalization. 
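For illustration, the following is a minimal NumPy sketch of the shallow diffusion inference described above, not the released DiffSinger code: the noise schedule `betas`, the `denoiser` callable, and the boundary step `k` are placeholders, and the reverse update follows the standard DDPM parameterization with sigma_t^2 = beta_t.

```python
import numpy as np

def shallow_diffusion_inference(m_tilde, denoiser, k, betas, rng=np.random.default_rng(0)):
    """Sketch of shallow-diffusion inference.

    m_tilde : mel-spectrogram predicted by the simple (L1-trained) decoder.
    denoiser: callable (x_t, t) -> predicted noise eps_theta(x_t, t); a placeholder.
    k       : shallow step where the two diffusion trajectories intersect.
    betas   : noise schedule beta_1..beta_T (only the first k entries are used here).
    """
    betas = np.asarray(betas, dtype=float)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    # Forward process in closed form: q(x_k | x_0) = N(sqrt(abar_k) x_0, (1 - abar_k) I).
    eps = rng.standard_normal(m_tilde.shape)
    x = np.sqrt(alpha_bar[k - 1]) * m_tilde + np.sqrt(1.0 - alpha_bar[k - 1]) * eps

    # Reverse (denoising) process started from step k instead of the deep step T.
    for t in range(k, 0, -1):
        abar_t = alpha_bar[t - 1]
        eps_hat = denoiser(x, t)
        mean = (x - betas[t - 1] / np.sqrt(1.0 - abar_t) * eps_hat) / np.sqrt(alphas[t - 1])
        noise = rng.standard_normal(x.shape) if t > 1 else 0.0
        x = mean + np.sqrt(betas[t - 1]) * noise  # simple choice sigma_t^2 = beta_t
    return x
```

In DiffSinger itself, k is produced by the boundary prediction network mentioned above rather than being fixed by hand.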
The evaluations con- ducted on a Chinese singing dataset demonstrate the superi- ority of DiffSinger (0.11 MOS gains compared with a state- of-the-art acoustic model for SVS (Wu and Luan 2020)), and the effectiveness of our novel mechanism (0.14 MOS gains, 0.5 CMOS gains and 45.1% speedup with shallow diffusion mechanism). The extensional experiments of DiffSpeech on TTS task prove the generalization of our methods (0.24/0.23 MOS gains compared with FastSpeech 2 (Ren et al. 2021) and Glow-TTS (Kim et al. 2020) respectively). The contri- butions of this work can be summarized as follows: \u2022 We propose DiffSinger, which is the \ufb01rst acoustic model for SVS based on diffusion probabilistic model. Diff- Singer addresses the over-smoothing and unstable train- ing issues in previous works. \u2022 We propose a shallow diffusion mechanism to further im- prove the voice quality, and accelerate the inference. \u2022 The extensional experiments on TTS task (DiffSpeech) prove the generalization of our methods." }, { "url": "http://arxiv.org/abs/2008.02516v4", "title": "FastLR: Non-Autoregressive Lipreading Model with Integrate-and-Fire", "abstract": "Lipreading is an impressive technique and there has been a definite\nimprovement of accuracy in recent years. However, existing methods for\nlipreading mainly build on autoregressive (AR) model, which generate target\ntokens one by one and suffer from high inference latency. To breakthrough this\nconstraint, we propose FastLR, a non-autoregressive (NAR) lipreading model\nwhich generates all target tokens simultaneously. NAR lipreading is a\nchallenging task that has many difficulties: 1) the discrepancy of sequence\nlengths between source and target makes it difficult to estimate the length of\nthe output sequence; 2) the conditionally independent behavior of NAR\ngeneration lacks the correlation across time which leads to a poor\napproximation of target distribution; 3) the feature representation ability of\nencoder can be weak due to lack of effective alignment mechanism; and 4) the\nremoval of AR language model exacerbates the inherent ambiguity problem of\nlipreading. Thus, in this paper, we introduce three methods to reduce the gap\nbetween FastLR and AR model: 1) to address challenges 1 and 2, we leverage\nintegrate-and-fire (I\\&F) module to model the correspondence between source\nvideo frames and output text sequence. 2) To tackle challenge 3, we add an\nauxiliary connectionist temporal classification (CTC) decoder to the top of the\nencoder and optimize it with extra CTC loss. We also add an auxiliary\nautoregressive decoder to help the feature extraction of encoder. 3) To\novercome challenge 4, we propose a novel Noisy Parallel Decoding (NPD) for I\\&F\nand bring Byte-Pair Encoding (BPE) into lipreading. Our experiments exhibit\nthat FastLR achieves the speedup up to 10.97$\\times$ comparing with\nstate-of-the-art lipreading model with slight WER absolute increase of 1.5\\%\nand 5.5\\% on GRID and LRS2 lipreading datasets respectively, which demonstrates\nthe effectiveness of our proposed method.", "authors": "Jinglin Liu, Yi Ren, Zhou Zhao, Chen Zhang, Baoxing Huai, Nicholas Jing Yuan", "published": "2020-08-06", "updated": "2021-03-15", "primary_cat": "eess.AS", "cats": [ "eess.AS", "cs.CL", "cs.CV", "cs.LG", "cs.SD" ], "main_content": "Prior works utilize deep learning for lipreading. 
The first typical approach is LipNet [3] based on CTC [12], which takes the advantage of the spatio-temporal convolutional front-end feature generator and GRU [6]. Further, Stafylakis and Tzimiropoulos [25] propose a network combining the modified 3D/2D-ResNet architecture with LSTM. Afouras et al. [1] introduce the Transformer self-attention architecture into lipreading, and build TM-seq2seq and TM-CTC. The former surpasses the performance of all previous work on LRS2-BBC dataset by a large margin. To boost the performance of lipreading, Petridis et al. [19] present a hybrid CTC/Attention architecture aiming to obtain the better alignment than attentiononly mechanism, Zhao et al. [34] provide the idea that transferring knowledge from audio-speech recognition model to lipreading model by distillation. However, state-of-the-art methods, either based on recurrent neural network [33, 34] or Transformer [1, 2], take in the input video sequence and generates the tokens of target sentence \ud835\udc66in a recurrent structure during the inference process. And they all suffer from the high latency. 2.2 Non-Autoregressive Decoding An autoregressive model takes in a source sequence \ud835\udc65= (\ud835\udc651,\ud835\udc652, ..., \ud835\udc65\ud835\udc47\ud835\udc65) and then generates words in target sentence\ud835\udc66= (\ud835\udc661,\ud835\udc662, ...,\ud835\udc66\ud835\udc47\ud835\udc66) one by one with the causal structure during the inference process [26, 29]. To reduce the inference latency, Gu et al. [13] introduce non-autoregressive model based on Transformer into the machine translation field, which generates all target words in parallel. The conditional probability can be defined as \ud835\udc43(\ud835\udc66|\ud835\udc65) = \ud835\udc43(\ud835\udc47\ud835\udc66|\ud835\udc65) \ud835\udc47\ud835\udc66 \ufffd \ud835\udc61=1 \ufffd \ud835\udc61=1 \ud835\udc43(\ud835\udc66\ud835\udc61|\ud835\udc65), (1) where \ud835\udc47\ud835\udc66is the length of the target sequence gained from the fertility prediction function conditioned on the source sentence. Due to the multimodality problem [13], the performance of NAR model is usually inferior to AR model. Recently, a line of works aiming to bridge the performance gap between NAR and AR model for translation task has been presented [11, 14]. Besides the study of NAR translation, many works bring NAR model into other sequence-to-sequence tasks, such as video caption [32], speech recognition [5] and speech synthesis [18, 22]. 2.3 Spike Neural Network The integrate-and-fire neuron model describes the membrane potential of a neuron according to the synaptic inputs and the injected current [4]. It is bio-logical and widely used in spiking neural networks. Concretely, the neuron integrates the input signal forwardly and increases the membrane potential. Once the membrane potential reaches a threshold, a spike signal is generated, which means an event takes place. Henceforth, the membrane potential is reset and then grows in response to the subsequent input signal again. It enables the encoding from continuous signal sequences to discrete signal sequences, while retaining the timing information. 
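As a toy illustration of the integrate-and-fire behavior described in Section 2.3 (a sketch under simplifying assumptions, not code from any cited spiking-network implementation):

```python
import numpy as np

def integrate_and_fire_events(signal, threshold=1.0):
    """Toy integrate-and-fire neuron: accumulate the input signal, emit a spike event
    when the membrane potential reaches the threshold, then reset and continue.
    Returns the (discrete) spike times for a continuous input sequence."""
    potential, spike_times = 0.0, []
    for t, x in enumerate(np.asarray(signal, dtype=float)):
        potential += x                 # integrate the synaptic input
        if potential >= threshold:     # an event (e.g. a boundary) is located
            spike_times.append(t)
            potential = 0.0            # reset the membrane potential
    return spike_times

# e.g. integrate_and_fire_events([0.3, 0.4, 0.5, 0.2, 0.9]) -> [2, 4]
```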
Recently, Dong and Xu [10] introduced the integrate-and-fire model into the speech recognition task. They use continuous functions that support back-propagation to simulate the process of integrate-and-fire. In this work, a fired spike represents the event of locating an acoustic boundary. Figure 1: The overview of the model architecture for FastLR (visual front-end, enhanced encoder with an auxiliary CTC decoder, I&F module with a length controller, non-autoregressive decoder, and auxiliary autoregressive decoder). 3 METHODS In this section, we introduce FastLR and describe our methods thoroughly. As shown in Figure 1, FastLR is composed of a spatio-temporal convolutional neural network for video feature extraction (the visual front-end) and a Transformer-based sequence processing model (the main model) with an enhanced encoder, a non-autoregressive decoder, and an I&F module. To further tackle the challenges of non-autoregressive lipreading, we propose the NPD-for-I&F method and bring byte-pair encoding into our method. The details of our model and methods are described in the following subsections (the visual front-end is introduced in Section 4.2, since it varies from one dataset to another): 3.1 Enhanced Encoder The encoder of FastLR is composed of stacked self-attention and feed-forward layers, the same as those in the Transformer [29] and the autoregressive lipreading model (TM-seq2seq [1]). Thus, we add an auxiliary autoregressive decoder, shown in the left panel of Figure 1, so that the AR lipreading task can be optimized together with FastLR using one shared encoder during the training stage. This transfers knowledge from the AR model to FastLR and facilitates optimization. Besides, we add a connectionist temporal classification (CTC) decoder with a CTC loss on top of the encoder to force monotonic alignments, which is a widely used technique in the speech recognition field. Both adjustments improve the feature representation ability of our encoder. 3.2 Integrate-and-fire module To estimate the length of the output sequence and alleviate the problem of time correlation in the target sequence, we adopt a continuous integrate-and-fire (I&F) [10] module for FastLR. This is a soft and monotonic alignment that can be employed in an encoder-decoder sequence processing model. First, the encoder output hidden sequence h = (h_1, h_2, ..., h_m) is fed to a 1-dimensional convolutional layer followed by a fully connected layer with a sigmoid activation function. This yields the weight embedding sequence w = (w_1, w_2, ..., w_m), which represents the weight of the information carried in h. Second, the I&F module scans w and accumulates the weights from left to right until the sum reaches a threshold (set to 1.0), which means an acoustic boundary is detected. Third, I&F divides the weight w_i at this point into two parts: w_{i,1} and w_{i,2}.
w_{i,1} is used to complete the integration of the current embedding f_j to be fired, while w_{i,2} is carried over to the next integration for f_{j+1}. Then, I&F resets the accumulation and continues to scan the rest of w, beginning with w_{i,2}, for the next integration. This procedure is denoted "accumulate and detect". Finally, I&F multiplies each w_k (or w_{k,1}, w_{k,2}) in w by the corresponding h_k and integrates them according to the detected boundaries. An example is shown in Figure 2. 3.3 Non-autoregressive Decoder Different from the Transformer decoder, the self-attention of FastLR's decoder can attend to the entire sequence because of the conditionally independent property of the NAR model. We also remove the inter-attention mechanism, since FastLR already has an alignment mechanism (I&F) between source and target. The decoder takes in the fired embedding sequence of I&F, f = (f_1, f_2, ..., f_n), and generates the text tokens y = (y_1, y_2, ..., y_n) in parallel during both training and inference. 3.4 Noisy parallel decoding (NPD) for I&F The absence of an AR decoding procedure makes it much more difficult for the model to tackle the inherent ambiguity problem in lipreading. Therefore, we design a novel NPD-for-I&F method to leverage the language information in a well-trained AR lipreading model. From Section 3.2, it is easy to see that ⌊S⌋ represents the length of the predicted sequence f (or y), where S is the total sum of w. Dong and Xu [10] propose a scaling strategy that multiplies w by the scalar S̃ / ∑_{i=1}^{m} w_i to generate w' = (w'_1, w'_2, ..., w'_m), where S̃ is the length of the target label ỹ. By doing so, the total sum of w' equals S̃, which teacher-forces I&F to predict f with the true length S̃ and benefits the cross-entropy training. However, we do not stop at this point. Besides training, we also scale w during the inference stage to generate multiple candidate weight embeddings with different length biases b̃. With the beam size set to B = 4, w'_{b̃} = ((∑_{i=1}^{m} w_i + b̃) / ∑_{i=1}^{m} w_i) · w, where b̃ ∈ [−4, 4] ∩ Z, (2) where w = (w_1, w_2, ..., w_m) is the output of the I&F module during inference and the length bias b̃ is provided by the "Length Controller" module in Figure 1.
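For concreteness, here is a small NumPy sketch of the "accumulate and detect" procedure of Section 3.2, including the splitting of a boundary weight into w_{i,1} and w_{i,2}; it is an illustrative reading of the text rather than the authors' implementation, and the optional `scale` argument only hints at the length scaling used in training and in Eq. (2).

```python
import numpy as np

def integrate_and_fire(h, w, threshold=1.0, scale=1.0):
    """Continuous integrate-and-fire encoding (a sketch).

    h: (m, d) encoder hidden states.
    w: (m,) non-negative information weights from the conv + sigmoid layer.
    scale: optional factor on w, mimicking the S~/sum(w) or length-bias scaling.
    Returns the fired embeddings f of shape (n, d), one per detected boundary.
    """
    h = np.asarray(h, dtype=float)
    w = np.asarray(w, dtype=float) * scale
    fired, acc_vec, acc_w = [], np.zeros(h.shape[1]), 0.0
    for h_i, w_i in zip(h, w):
        remaining = w_i
        while acc_w + remaining >= threshold:      # a boundary is detected inside this frame
            w_used = threshold - acc_w             # the w_{i,1} part completes the current embedding
            fired.append(acc_vec + w_used * h_i)
            remaining -= w_used                    # the w_{i,2} part is carried to the next embedding
            acc_vec, acc_w = np.zeros(h.shape[1]), 0.0
        acc_vec += remaining * h_i                 # keep integrating below the threshold
        acc_w += remaining
    return np.stack(fired) if fired else np.zeros((0, h.shape[1]))
```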
Then, we utilize the re-scoring method used in Noisy Parallel Decoding (NPD), a common practice in non-autoregressive neural machine translation, to select the best sequence from these 2B + 1 candidates via an AR lipreading teacher: w_NPD = argmax_{w'_{b̃}} p_AR(G(x, w'_{b̃}; θ) | x; θ), (3) where p_AR(A) is the probability of the sequence A under the autoregressive model, G(x, w; θ) denotes the optimal generation of FastLR given a source sequence x and weight embedding w, and θ represents the model parameters. The selection process leverages the information in the language model (decoder) of the well-trained autoregressive lipreading teacher, which alleviates the ambiguity problem and gives a chance to adjust the weight embedding generated by the I&F module so as to predict a better sequence length. Note that these candidates can be computed independently, which does not hurt parallelizability (it only doubles the latency due to the selection process). The experiments demonstrate that the re-scored sequence is more accurate. 3.5 Byte-Pair Encoding Byte-Pair Encoding [23] is widely used in the NMT [29] and ASR [10] fields, but rarely in lipreading tasks. BPE makes each token carry more language information and reduces the dependency among tokens compared with character-level encoding, which alleviates the problems of non-autoregressive generation discussed before. In this work, we tokenize the sentence with the Moses tokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) and then use the BPE algorithm to segment each target word into sub-words. Figure 2: An example to illustrate how the I&F module works, with the weight embedding sequence w on top and the fired embeddings f below; h represents the encoder output hidden sequence. In this case f_1 = w_1 · h_1 + w_{2,1} · h_2 and f_2 = w_{2,2} · h_2 + w_3 · h_3 + w_4 · h_4 + w_5 · h_5 + w_6 · h_6 + w_{7,1} · h_7. 3.6 Training of FastLR We optimize the CTC decoder with the CTC loss. CTC introduces a set of intermediate representation paths φ(y), termed CTC paths, for a target text sequence y. Each CTC path is composed of scattered target text tokens and blanks, and reduces to the target text sequence by removing the repeated words and blanks.
The likelihood of y can be calculated as the sum of the probabilities of all CTC paths corresponding to it: P_ctc(y|x) = ∑_{c ∈ φ(y)} P_ctc(c|x). (4) Thus, the CTC loss can be formulated as: L_ctc = − ∑_{(x,y) ∈ (X×Y)} log P_ctc(y|x), (5) where (X×Y) denotes the set of source video and target text sequence pairs in one batch. We optimize the auxiliary autoregressive task with the cross-entropy loss, which can be formulated as: L_AR = − ∑_{(x,y) ∈ (X×Y)} log P_AR(y|x). (6) Most importantly, we optimize the main FastLR task with the cross-entropy loss and a sequence length loss: L_FLR = ∑_{(x,y) ∈ (X×Y)} [ −log P_FLR(y|x) + (S̃_x − S_x)^2 ], (7) where S̃ and S are defined in Section 3.4. The total loss function for training our model is then: L = λ_1 L_ctc + λ_2 L_AR + λ_3 L_FLR, (8) where λ_1, λ_2 and λ_3 are hyperparameters that trade off the three losses. 4 EXPERIMENTS AND RESULTS 4.1 Datasets GRID. The GRID dataset [9] consists of 34 subjects, each of whom utters 1,000 phrases. It is a clean dataset and easy to learn. We adopt the same split as Assael et al. [3], where 255 random sentences from each speaker are selected for evaluation. In order to better recognize lip movements, we transform the images into gray scale and crop the video frames to a fixed 100 × 50 region containing the mouth, using the Dlib face detector. Since the vocabulary of GRID is quite small and most words are simple, we do not apply Byte-Pair Encoding [23] on GRID and simply encode the target sequence at the character level. LRS2. The LRS2 dataset contains sentences of up to 100 characters from BBC videos [2], with viewpoints ranging from frontal to profile. We adopt the original split of LRS2 for the train/dev/test sets, which contain 46k, 1,082 and 1,243 sentences respectively. We also make use of the pre-train set provided by LRS2, which contains 96k sentences, for pretraining. Following previous works [1, 2, 34], the input video frames are converted to grey scale and centrally cropped into 114 × 114 images. As for the text, we split each word token into subwords using BPE [23] and set the vocabulary size to 1k, considering the vocabulary size of LRS2. The statistics of both datasets are listed in Table 1. Table 1: The statistics on GRID and LRS2 lip reading datasets. Utt: Utterance. Dataset Utt. Word inst.
Vocab hours GRID 33k 165k 51 27.5 LRS2 (Train-dev) 47k 337k 18k 29 4.2 Visual feature extraction For GRID datasets, we use spatio-temporal CNN to extract visual features follow Torfi et al. [27]. The visual front-end network is composed of four 3D convolution layers with 3D max pooling and RELU, and two fully connected layers. The kernel size of 3D convolution and pooling is 3\u00d73, the hidden sizes of fully connected layer as well as output dense layer are both 256. We directly train this visual front-end together with our main model end-to-end on GRID on the implementation4 by Torfi et al. [27]. For LRS2 datasets, we adopt the same structure as Afouras et al. [2], which uses a 3D convolution on the input frame sequence with a filter width of 5 frames, and a 2D ResNet decreasing the spatial dimensions progressively with depth. The network convert the \ud835\udc47\u00d7 \ud835\udc3b\u00d7\ud835\udc4aframe sequence into\ud835\udc47\u00d7 \ud835\udc3b 32 \u00d7\ud835\udc4a 32 \u00d7512 feature sequence, where \ud835\udc47, \ud835\udc3b,\ud835\udc4ais frame number, frame height, frame width respectively. It is worth noting that, training the visual front-end together with the main model could obtain poor results on LRS2, which is observed in previous works [1]. Thus, as Zhao et al. [34] do, we utilize the frozen visual front-end provided by Afouras et al. [1], which is pretrained on a non-public datasets MV-LRS [8], to exact the visual 4https://github.com/astorfi/lip-reading-deeplearning features. And then, we train FastLR on these features end-to-end. The pre-trained model can be found in http://www.robots.ox.ac.uk/ ~vgg/research/deep_lip_reading/models/lrs2_lip_model.zip. 4.3 Model Configuration We adopt the Transformer [29] as the basic model structure for FastLR because it is parallelizable and achieves state-of-the-art accuracy in lipreading [1]. The model hidden size, number of encoderlayers, number of decoder-layers, and number of heads are set to \ud835\udc51\u210e\ud835\udc56\ud835\udc51\ud835\udc51\ud835\udc52\ud835\udc5b= 512,\ud835\udc5b\ud835\udc52\ud835\udc5b\ud835\udc50= 6,\ud835\udc5b\ud835\udc51\ud835\udc52\ud835\udc50= 6,\ud835\udc5b\u210e\ud835\udc52\ud835\udc4e\ud835\udc51= 8 for LRS2 dataset and \ud835\udc51\u210e\ud835\udc56\ud835\udc51\ud835\udc51\ud835\udc52\ud835\udc5b= 256,\ud835\udc5b\ud835\udc52\ud835\udc5b\ud835\udc50= 4,\ud835\udc5b\ud835\udc51\ud835\udc52\ud835\udc50= 4,\ud835\udc5b\u210e\ud835\udc52\ud835\udc4e\ud835\udc51= 8 for GRID dataset respectively. We replace the fully-connected network in origin Transformer with 2-layer 1D convolution network with ReLU activation which is commonly used in speech task and the same with TM-seq2seq [1] for lipreading. The kernel size and filter size of 1D convolution are set to 4 \u2217\ud835\udc51\u210e\ud835\udc56\ud835\udc51\ud835\udc51\ud835\udc52\ud835\udc5band 9 respectively. The CTC decoder consists of two fully-connected layers with ReLU activation function and one fully-connected layer without activation function. The hidden sizes of these fully-connected layers equal to \ud835\udc51\u210e\ud835\udc56\ud835\udc51\ud835\udc51\ud835\udc52\ud835\udc5b. The auxiliary decoder is an ordinary Transformer decoder with the same configuration as FastLR, which takes in the target text sequence shifted right one sequence step for teacher-forcing. 
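A rough PyTorch sketch of the 2-layer 1D-convolution feed-forward block described above (assuming the intended mapping is a channel/filter size of 4·d_hidden and a kernel size of 9 with same-padding, plus a residual connection and layer normalization as in standard Transformer blocks; the dropout rate is an arbitrary placeholder, and this is an illustrative reading of the text, not the authors' released module):

```python
import torch
from torch import nn

class ConvFeedForward(nn.Module):
    """2-layer 1D-conv feed-forward block with ReLU, as a sketch of Sec. 4.3."""
    def __init__(self, d_hidden=512, kernel_size=9, dropout=0.1):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(d_hidden, 4 * d_hidden, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(4 * d_hidden, d_hidden, kernel_size, padding=pad)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(d_hidden)

    def forward(self, x):                      # x: (batch, time, d_hidden)
        y = x.transpose(1, 2)                  # Conv1d expects (batch, channels, time)
        y = self.conv2(self.act(self.conv1(y)))
        y = self.dropout(y.transpose(1, 2))
        return self.norm(x + y)                # residual + layer norm
```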
4.4 Training setup As mentioned in section 3.1, to boost the feature representation ability of encoder, we add an auxiliary connectionist temporal classification (CTC) decoder and an autoregressive decoder to FastLR and optimize them together. We set \ud835\udf061 to 0.5, \ud835\udf062, \ud835\udf063 to 1, 0 during warm-up training stage, and set \ud835\udf062, \ud835\udf063 to 0, 1 during main training stage for simplicity. The training steps of each training stage are listed in details in Table 2. Note that experiment on GRID dataset needs more training steps, since it is trained with its visual frontend together from scratch, different from experiments on LRS2 dataset. Moreover, the first 45k steps in warm-up stage for LRS2 are trained on LRS2-pretrain sub-dataset and all the left steps are trained on LRS2-main sub-dataset [1, 2, 34]. We train our model FastLR using Adam following the optimizer settings and learning rate schedule in Transformer [29]. The training procedure runs on 2 NVIDIA 1080Ti GPUs. Our code is based on tensor2tensor [28]. Table 2: The training steps of FastLR for different datasets for each training stage. Stage GRID LRS2 Warm-up 300k 55k Main 160k 120k 4.5 Inference and Evaluation During the inference stage, the auxiliary CTC decoder as well as autoregressive decoder will be thrown away. Given the beam size \ud835\udc35= 4, FastLR generates 2 \u2217\ud835\udc35+ 1 candidates of weight embedding sequence which correspond to 2\u2217\ud835\udc35+1 text sequences, and these text sequences will be sent to the decoder of a well-trained autoregressive lipreading model (TM-seq2seq) for selection as described in section 3.4. The result of selected best text sequence is marked with \"NPD9\". We conduct the experiments on both \"NPD9\" and \"without NPD\". To be specific, the result of \"without NPD\" means directly using the candidate with zero-length bias without a selection process, which has a lower latency. The recognition quality is evaluated by Word Error Rate (WER) and Character Error Rate (CER). Both error rate can be defined as: \ud835\udc38\ud835\udc5f\ud835\udc5f\ud835\udc5c\ud835\udc5f\ud835\udc45\ud835\udc4e\ud835\udc61\ud835\udc52= (\ud835\udc46+ \ud835\udc37+ \ud835\udc3c)/\ud835\udc41, (9) where S, D, I and N are the number of substitutions, deletions, insertions and reference tokens (word or character) respectively. When evaluating the latency, we run FastLR on 1 NVIDIA 1080Ti GPU in inference. Table 3: The word error rate (WER) and character error rate (CER) on GRID GRID Method WER CER Autoregressive Models LSTM [30] 20.4% / LipNet [3] 4.8% 1.9% WAS [7] 3.0% / Non-Autoregressive Models NAR-LR (base) 25.8% 13.6% FastLR (Ours) 4.5% 2.4% Table 4: The word error rate (WER) and character error rate (CER) on LRS2. \u2020 denotes baselines from our reproduction. LRS2 Method WER CER Autoregressive Models WAS [7] 70.4% / BLSTM+CTC [2] 76.5% 40.6% LIBS [34] 65.3% 45.5% TM-seq2seq [1] 61.7%\u2020 43.5%\u2020 Non-Autoregressive Models NAR-LR (base) 81.3% 57.9% FastLR (Ours) 67.2% 46.9% 4.6 Main Results We conduct experiments of FastLR, and compare them with lipreading baseline and some mainstream state-of-the-art of AR lipreading models on the GRID and LRS2 datasets respectively. As for TMseq2seq [1], it has the same Transformer settings with FastLR and works as the AR teacher for NPD selection. We also apply CTC loss and BPE technique to TM-seq2seq for a fair comparison. 5 The results on two datasets are listed in Table 3 and 4. 
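For reference, the error rate defined above ((S + D + I)/N) is simply a length-normalized edit distance; a minimal sketch follows (a generic implementation with a toy example, not the paper's scoring script):

```python
def error_rate(ref, hyp):
    """(S + D + I) / N via Levenshtein alignment; ref/hyp are token lists
    (words for WER, characters for CER)."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                      # deletions
    for j in range(m + 1):
        d[0][j] = j                      # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] / max(n, 1)

# toy example: one substitution over six words -> 1/6
# error_rate("bin blue at f two now".split(), "bin blue at t two now".split())
```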
We can see that 1) WAS [7] and TM-seq2seq [1, 2] obtain the best results of autoregressive lipreading model on GRID and LRS2. Compared with them, FastLR only has a slight WER absolute increase of 1.5% and 5.5% respectively. 2) Moreover, on GRID dataset, FastLR outperforms LipNet [3] for 0.3% WER, and exceeds LSTM [30] with a notable margin; On LRS2 dataset, FastLR achieves better WER scores than WAS and BLSTM+CTC [2] and keeps comparable performance with LIBS [34]. In addition, compared with LIBS, we do not introduce any distillation method in training stage, and compared with WAS and TM-seq2seq, we do not leverage information from other datasets beyond GRID and LRS2. We also propose a baseline non-autoregressive lipreading model without Integrate-andFire module termed as NAR-LR (base), and conduct experiments for comparison. As the result shows, FastLR outperforms this NAR baseline distinctly. The overview of the design for NAR-LR (base) is shown in Figure 3. Positional Encoding Source Video Encoder Visual Front-end N \u00d7 Decoder Block Output Text Inter-Attention Add & Norm Feed Forward Add & Norm Positional Encoding Self-Attention Add & Norm Encoder Hidden m m \u00d7 \u2026 : A trainable tensor m : Predicted length Auxiliary AR Decoder CTC Decoder : A trainable tensor m : Predicted length Figure 3: The NAR-LR (base) model. It is also based on Transformer [29], but generates outputs in the nonautoregressive manner [13]. It sends a series of duplicated trainable tensor into the decoder to generates target tokens. The repeat count of this trainable tensor is denoted as \"m\". For training, \"m\" is set to ground truth length, but for inference, we estimate it by a linear function of input length, and the parameters are obtained using the least square method on the train set. The auxiliary AR decoder is the same as FastLR\u2019s. The CTC decoder contains FC layers and CTC loss. 4.7 Speedup In this section, we compare the average inference latency of FastLR with that of the autoregressive Transformer lipreading model. And 5Our reproduction has a weaker performance compared with the results reported in [1, 2]. Because we do not have the resource of MV-LRS, a non-public dataset which contains individual word excerpts of frequent words used by [1, 2]. Thus, we do not adopt curriculum learning strategy as Afouras et al. [2]. then, we analyze the relationship between speedup and the length of the predicted sequence. 4.7.1 Average Latency Comparison. The average latency is measured in average time in seconds required to decode one sentence on the test set of LRS2 dataset. We record the inference latency and corresponding recognition accuracy of TM-seq2seq [1, 2], FastLR without NPD and FastLR with NPD9, which is listed in Table 5. The result shows that FastLR speeds up the inference by 11.94\u00d7 without NPD, and by 5.81\u00d7 with NPD9 on average, compared with the TM-seq2seq which has similar number of model parameters. Note that the latency is calculated excluding the computation cost of data pre-processing and the visual front-end. Table 5: The comparison of average inference latency and corresponding recognition accuracy. The evaluation is conducted on a server with 1 NVIDIA 1080Ti GPU, 12 Intel Xeon CPU. The batch size is set to 1. The average length of the generated sub-word sequence are all about 14. 
Method / WER / Latency (s) / Speedup — TM-seq2seq [1]: 61.7%, 0.215 s, 1.00×; FastLR (no NPD): 73.2%, 0.018 s, 11.94×; FastLR (NPD 9): 67.2%, 0.037 s, 5.81×. 4.7.2 Relationship between Speedup and Length. During inference, the autoregressive model generates the target tokens one by one, whereas the non-autoregressive model speeds up inference by increasing the parallelization of the generation process. Thus, the longer the target sequence is, the larger the speedup becomes. We visualize the relationship between inference latency and the length of the predicted sub-word sequence in Figure 4. It can be seen that the inference latency increases distinctly with the predicted text length for TM-seq2seq, while it stays nearly constant and small for FastLR. We then bucket the test sequences with lengths within [30, 35] and calculate their average inference latency for TM-seq2seq and FastLR to obtain the maximum speedup on the LRS2 test set. The results are 0.494 s and 0.045 s for TM-seq2seq and FastLR (NPD9) respectively, which shows that FastLR (NPD9) achieves a speedup of up to 10.97× on the LRS2 test set, thanks to the parallel generation being insensitive to sequence length. Figure 4: Relationship between inference time (seconds) and predicted text length for (a) TM-seq2seq [1] and (b) FastLR (NPD9). 5 ANALYSIS In this section, we first conduct ablation experiments on LRS2 to verify the significance of all proposed methods in FastLR; the experiments are listed in Table 6. Then we visualize the encoder-decoder attention map of the well-trained AR model (TM-seq2seq) and the acoustic boundaries detected by the I&F module in FastLR to check whether the I&F module works well. Table 6: The ablation studies on the LRS2 dataset. "Naive Model with I&F" is the naive lipreading model with only Integrate-and-Fire; "+Aux" means adding the auxiliary autoregressive task. We add our methods and evaluate their effectiveness progressively. Model / WER / CER — Naive Model with I&F: >1, 75.2%; +Aux: 93.1%, 64.9%; +Aux+BPE: 75.7%, 52.7%; +Aux+BPE+CTC: 73.2%, 51.4%; +Aux+BPE+CTC+NPD (FastLR): 67.2%, 46.9%. 5.1 The Effectiveness of Auxiliary AR Task As shown in Table 6, the naive lipreading model with Integrate-and-Fire is not able to converge well, due to the difficulty of learning the weight embedding in the I&F module from meaningless encoder hidden states. Thus, the autoregressive lipreading model works as the auxiliary model to enhance the feature representation ability of the encoder and guides the non-autoregressive model with Integrate-and-Fire to learn the right alignments (weight embedding). From this point, the model with I&F begins to generate meaningful target sequences, and the CER drops below 65% (Row 3). 5.2 The Effectiveness of Byte-Pair Encoding BPE makes each token contain more language information and reduces the dependency among tokens compared with character-level encoding. In addition, we observe that the speech in the BBC videos is fairly fast, so one target token (a character without BPE) corresponds to only a few video frames. BPE compresses the target sequence, which helps the Integrate-and-Fire module find the acoustic-level alignments more easily.
From the table 6 (Row 4), it can be seen that BPE reduces the word error rate and character error rate to 75.7% and 52.7% respectively, which means BPE helps the model gains the ability to generates understandable sentence. 5.3 The Effectiveness of CTC The result shows that (Row 5), adding auxiliary connectionist temporal classification(CTC) decoder with CTC loss will further boost the feature representation ability of encoder, and cause 2.5% absolute decrease in WER. At this point, the model gains considerable recognition accuracy compared with the traditional autoregressive method. 5.4 The Effectiveness of NPD for I&F Table 6 (Row 6) shows that using NPD for I&F can boost the performance effectively. We also study the effect of increasing the candidates number for FastLR on LRS2 dataset, as shown in Figure 5. It can be seen that, when setting the candidates number to 9, the accuracy peaks. Finally, FastLR achieves considerable accuracy compared with state-of-the-art autoregressive lipreading model. 1 2 3 4 5 6 7 8 9 10 11 Candidates Number 50 55 60 65 70 Error Rate % WER CER Figure 5: The effect of cadidates number on WER and CER for FastLR model. 5.5 The Visualization of Boundary Detection We visualize the encoder-decoder attention map in Figure 6, which is obtained from the well-trained AR TM-seq2seq. The attention map illustrates the alignment between source video frames and the corresponding target sub-word sequence. The figure shows that the video frames between two horizontal red lines are roughly just what the corresponding target token attends to. It means that the \"accumulate and detect\" part in I&F module tells the acoustic boundary well and makes a right prediction of sequence length. Figure 6: An example of the visualization for encoderdecoder attention map and the acoustic boundary. The horizontal red lines represent the acoustic boundaries detected by I&F module in FastLR, which split the video frames to discrete segments. 6 CONCLUSION In this work, we developed FastLR, a non-autoregressive lipreading system with Integrate-and-Fire module, that recognizes source silent video and generates all the target text tokens in parallel. FastLR consists of a visual front-end, a visual feature encoder and a text decoder for simultaneous generation. To bridge the accuracy gap between FastLR and state-of-the-art autoregressive lipreading model, we introduce I&F module to encode the continuous visual features into discrete token embedding by locating the acoustic boundary. In addition, we propose several methods including auxiliary AR task and CTC loss to boost the feature representation ability of encoder. At last, we design NPD for I&F and bring Byte-Pair Encoding into lipreading, and both methods alleviate the problem caused by the removal of AR language model. Experiments on GRID and LRS2 lipreading datasets show that FastLR outperforms the NAR-LR baseline and has a slight WER increase compared with state-of-the-art AR model, which demonstrates the effectiveness of our method for NAR lipreading. In the future, we will continue to work on how to make a better approximation to the true target distribution for NAR lipreading task, and design more flexible policies to bridge the gap between AR and NAR model as well as keeping the fast speed of NAR generation. 
ACKNOWLEDGMENTS This work was supported in part by the National Key R&D Program of China (Grant No.2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), National Natural Science Foundation of China (Grant No.61836002, No.U1611461 and No.61751209) and the Fundamental Research Funds for the Central Universities (2020QNA5024). This work was also partially supported by the Language and Speech Innovation Lab of HUAWEI Cloud.", "introduction": "Lipreading aims to recognize sentences being spoken by a talk- ing face, which is widely used now in many scenarios including dictating instructions or messages in a noisy environment, tran- scribing archival silent films, resolving multi-talker speech [1] and understanding dialogue from surveillance videos. However, it is widely considered a challenging task and even experienced human lipreaders cannot master it perfectly [3, 24]. Thanks to the rapid development of deep learning in recent years, there has been a line of works studying lipreading and salient achievements have been made. Existing state-of-the-art methods mainly adopt autoregressive (AR) model, either based on RNN [33, 34], or Transformer [1, 2]. Those systems generate each target token conditioned on the se- quence of tokens generated previously, which hinders the paralleliz- ability. Thus, they all without exception suffer from high inference latency, especially when dealing with the massive videos data con- taining hundreds of hours (like long films and surveillance videos) or real-time applications such as dictating messages in a noisy environment. To tackle the low parallelizability problem due to AR generation, many non-autoregressive (NAR) models [13\u201317, 21, 31] have been proposed in the machine translation field. The most typical one is NAT-FT [13], which modifies the Transformer [29] by adding a fertility module to predict the number of words in the target se- quence aligned to each source word. Besides NAR translation, many researchers bring NAR generation into other sequence-to-sequence tasks, such as video caption [20, 22], speech recognition [5] and speech synthesis[18, 22]. These works focus on generating the tar- get sequence in parallel and mostly achieve more than an order of magnitude lower inference latency than their corresponding AR models. However, it is very challenging to generate the whole target sequence simultaneously in lipreading task in following aspects: arXiv:2008.02516v4 [eess.AS] 15 Mar 2021 \u2022 The considerable discrepancy of sequence length between the input video frames and the target text tokens makes it difficult to estimate the length of the output sequence or to define a proper decoder input during the inference stage. This is different from machine translation model, which can even simply adopt the way of uniformly mapping the source word embedding as the decoder input [31] due to the analogous text sequence length. \u2022 The true target sequence distributions show a strong cor- relation across time, but the NAR model usually generates target tokens conditionally independent of each other. This is a poor approximation and may generate repeated words. Gu et al. [13] terms the problem as \"multimodal-problem\". \u2022 The feature representation ability of encoder could be weak when just training the raw NAR model due to lack of effective alignment mechanism. 
\u2022 The removal of the autoregressive decoder, which usually acts as a language model, makes the model much more diffi- cult to tackle the inherent ambiguity problem in lipreading. In our work, we propose FastLR, a non-autoregressive lipreading model based on Transformer. To handle the challenges mentioned above and reduce the gap between FastLR and AR model, we intro- duce three methods as follows: \u2022 To estimate the length of the output sequence and allevi- ates the problem of time correlation in target sequence, we leverage integrate-and-fire (I&F) module to encoding the continuous video signal into discrete token embeddings by locating the acoustic boundary, which is inspired by Dong and Xu [10]. These discrete embeddings retain the timing information and correspond to the target tokens directly. \u2022 To enhance the feature representation ability of encoder, we add the connectionist temporal classification (CTC) decoder on the top of encoder and optimize it with CTC loss, which could force monotonic alignments. Besides, we add an aux- iliary AR decoder during training to facilitate the feature extraction ability of encoder. \u2022 To tackle the inherent ambiguity problem and reduce the spelling errors in NAR inference, we first propose a novel Noisy Parallel Decoding (NPD) for I&F method. The rescor- ing method in NPD takes advantages of the language model in the well-trained AR lipreading teacher without harming the parallelizability. Then we bring Byte-Pair Encoding (BPE) into lipreading, which compresses the target sequence and makes each token contain more language information to reduce the dependency among tokens compared with char- acter level encoding. The core contribution of this work is that, we propose a non- autoregressive lipreading system, and present several elaborate methods metioned above to bridge the gap between FastLR and state-of-the-art autoregressive lipreading models. The experimental results show that FastLR achieves the speedup up to 10.97\u00d7 comparing with state-of-the-art lipreading model with slight WER increase of 1.5% and 5.5% on GRID and LRS2 lipreading datasets respectively, which demonstrates the effectiveness of our proposed method. We also conduct ablation experiments to verify the significance of all proposed methods in FastLR." }, { "url": "http://arxiv.org/abs/2007.08772v1", "title": "Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation", "abstract": "Non-autoregressive translation (NAT) achieves faster inference speed but at\nthe cost of worse accuracy compared with autoregressive translation (AT). Since\nAT and NAT can share model structure and AT is an easier task than NAT due to\nthe explicit dependency on previous target-side tokens, a natural idea is to\ngradually shift the model training from the easier AT task to the harder NAT\ntask. To smooth the shift from AT training to NAT training, in this paper, we\nintroduce semi-autoregressive translation (SAT) as intermediate tasks. SAT\ncontains a hyperparameter k, and each k value defines a SAT task with different\ndegrees of parallelism. Specially, SAT covers AT and NAT as its special cases:\nit reduces to AT when k = 1 and to NAT when k = N (N is the length of target\nsentence). 
We design curriculum schedules to gradually shift k from 1 to N,\nwith different pacing functions and number of tasks trained at the same time.\nWe called our method as task-level curriculum learning for NAT (TCL-NAT).\nExperiments on IWSLT14 De-En, IWSLT16 En-De, WMT14 En-De and De-En datasets\nshow that TCL-NAT achieves significant accuracy improvements over previous NAT\nbaselines and reduces the performance gap between NAT and AT models to 1-2 BLEU\npoints, demonstrating the effectiveness of our proposed method.", "authors": "Jinglin Liu, Yi Ren, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, Tie-Yan Liu", "published": "2020-07-17", "updated": "2020-07-17", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.LG" ], "main_content": "In this section, we first introduce the related works on neural machine translation, including autoregressive translation (AT), non-autoregressive translation (NAT) and semiautoregressive translation (SAT), and then describe three learning paradigms: transfer learning, multitask learning and curriculum learning, which are related to our method. 2.1 Neural Machine Translation (AT/NAT/SAT) An autoregressive translation (AT) model takes source sentence s as input and then generates the tokens of target sentence y one by one during the inference process [Bahdanau et al., 2015; Sutskever et al., 2014; Vaswani et al., 2017], which causes much inference latency. To improve the inference speed of AT models, a series of works develop non-autoregressive translation (NAT) models based on Transformer [Gu et al., 2018; Lee et al., 2018; Li et al., 2019; Wang et al., 2019; Guo et al., 2019a], which generate all the target tokens in parallel. Several works introduce auxiliary components or losses to improve the accuracy of NAT models: Wang et al. [2019] and Li et al. [2019] propose auxiliary loss functions to solve the problem that NAT models tend to translate missing and duplicating tokens; Guo et al. [2019a] try to enhance the decoder input with target-side information by leveraging auxiliary information; Ma et al. [2019] introduce generative flow to directly model the joint distribution of all target tokens simultaneously. While NAT models achieve faster inference speed, the translation accuracy is still worse than AT model. Some works aim to balance the translation accuracy and inference latency between AT and NAT by introducing semi-autoregressive translation (SAT) [Wang et al., 2018], which generates multiple adjacent tokens in parallel during the autoregressive generation. Different from the above works, we leverage AT, SAT and NAT together and schedule the training in a curriculum way to achieve better translation accuracy for NAT. 2.2 Transfer/Multitask/Curriculum Learning Our proposed TCL-NAT actually leverages the knowledge from easier and more accurate tasks k < N to help the task k = N, but uses a curriculum schedule and trains multiple tasks at a stage. In general, our work is related to three different learning paradigms: transfer learning, multitask learning and curriculum learning. Transfer learning has been a common approach for NLP tasks. Pre-trained models such as BERT [Devlin et al., 2019] and MASS [Song et al., 2019] are fine-tuned on many language understanding and generation tasks for better accuracy. Many NAT works [Gu et al., 2018; Lee et al., 2018; Guo et al., 2019a; Guo et al., 2019b] employ sequence level data distillation to transfer the knowledge from AT teacher model to NAT student model which has proved to be effective. 
Multitask learning [Caruana, 1997] has found extensive usage in NLP tasks. Dong et al. [2015] use multitask learning for multiple language translation. Anastasopoulos and Chiang [2018] explore multitask models for neural translation of speech and find that jointly trained models improve performance on the tasks of low-resource speech transcription and translation. Garg et al. [2019] leverage extracted discrete alignments in a multi-task framework to optimize towards translation and alignment objectives. Inspired by the human learning process, curriculum learning [Bengio et al., 2009] is proposed as a machine learning training strategy that feeds training instances to the model from easy to hard. Most works on curriculum learning focus on determining the order of the data [Lee and Grauman, 2011; Sachan and Xing, 2016]. Later, some works explore curriculum learning strategies at the task level. Previous work [Sarafianos et al., 2017] in the computer vision domain splits tasks into groups according to their correlation and transfers the acquired knowledge from strongly correlated tasks to weakly correlated ones. Guo et al. [2019b] propose a fine-tuning method to transfer a well-trained AT model to a NAT model by designing a curriculum for the shift process between the two kinds of models, which is perhaps the most similar work to ours. However, the training strategy during their curriculum learning process is not a natural task but a set of hand-crafted training strategies, which could affect the final transfer accuracy and the total training time. In contrast, each intermediate task during our curriculum learning process is a standard translation task and is empirically verified to be helpful to the subsequent tasks. 3 Task-Level Curriculum Learning For NAT In this section, we introduce our proposed task-level curriculum learning for NAT (TCL-NAT) in detail. First, we propose a unified perspective to represent the different tasks, including AT, SAT and NAT, with a parameter k. Second, we empirically demonstrate that a task with a smaller k can help a task with a bigger k. Third, we introduce the task-level curriculum learning mechanism based on the unified perspective. Finally, we describe the design of our model architecture for TCL-NAT. 3.1 A Unified Perspective for AT/SAT/NAT We propose a new perspective that views AT, SAT and NAT as generating target tokens in an autoregressive manner over the whole sentence, but generating k adjacent tokens in parallel at a time. Specifically, given a source and target sentence pair (x, y) ∈ (X, Y), we factorize the conditional probability P(y|x) according to the chain rule: P(y|x) = ∏_{t=0}^{⌊N/k⌋} ∏_{j=1}^{k} P(y_{tk+j} | y_{≤tk}, x), (1) where any token with index greater than N represents an invalid (padding) token, which is introduced to make our formulation simple. Under this perspective, we regard each k as an individual task. As special cases, when k = 1 the equation becomes P(y|x) = ∏_{t=0}^{N} P(y_{t+1} | y_{≤t}, x), i.e., the standard AT factorization, and when k = N it reduces to NAT, where all target tokens are generated in parallel. The curriculum then gradually shifts training from small k to large k, and training the neighboring tasks within a task window w > 1 is smoother for the task shift than w = 1 intuitively. 3.4 Model Structure for TCL-NAT Figure 1: The overview of the model structure for TCL-NAT; the figure shows the case with k = 2.
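The grouping in Eq. (1) can be realized as a block-wise decoder self-attention mask, anticipating the causal-k mask shown in Figure 2 below; a minimal NumPy sketch (one common construction, assumed rather than taken from the released code):

```python
import numpy as np

def causal_k_mask(n, k):
    """0/1 decoder self-attention mask for the causal-k factorization in Eq. (1).

    Position i may attend to position j iff j belongs to the same group of k tokens
    or to an earlier group, i.e. floor(j / k) <= floor(i / k).  k = 1 recovers the
    usual causal (AT) mask; k >= n lets every position attend everywhere (NAT).
    """
    groups = np.arange(n) // k
    return (groups[None, :] <= groups[:, None]).astype(np.int64)

# e.g. causal_k_mask(6, 2) gives 2x2 blocks of ones on and below the block diagonal,
# matching the k = 2 case sketched in Figure 2.
```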
To support different values of k in the same model, we leverage a Transformer model with a causal-k self-attention mechanism in the decoder [Wang et al., 2018]. Note that although we only apply the task-level curriculum learning technique to a Transformer-based model, it can also be easily applied to other non-autoregressive architectures such as CNNs. The whole model architecture of TCL-NAT is shown in Figure 1. The encoder of TCL-NAT is exactly the same as the basic structure of the Transformer [Vaswani et al., 2017], which uses stacked self-attention and fully connected layers, as shown in the left panel of Figure 1. For the decoder, we introduce a causal-k self-attention mechanism that can generate k successive tokens in parallel. As shown in the right panel of Figure 1, our decoder is similar to the decoder in SAT [Wang et al., 2018], except that we feed the model with the first k source tokens (x_1, ..., x_k) rather than special tokens to predict (y_1, ..., y_k) in parallel at the beginning of decoding, in order to keep consistent with the NAT model when k = N. Then (y_1, ..., y_k) are fed to the model to predict (y_{k+1}, ..., y_{2k}) in parallel. As a result, the decoder input can be denoted as (x_1, ..., x_k, y_1, ..., y_{N−k}). We also adopt a causal-k mask in the decoder self-attention following Wang et al. [2018], as shown in Figure 2. Figure 2: Causal-k self-attention mask for an intuitive understanding. Yellow grids denote elements 1 and white grids denote elements 0 in the decoder self-attention mask; the left subfigure shows the case with k = 2 and the right one the case with k = 4. Under this model structure, we can utilize the first k source tokens (x_1, ..., x_k) to predict the sub-sentence (y_1, ..., y_k) in parallel, then utilize those to predict the sub-sentence (y_{k+1}, ..., y_{2k}) in parallel, and so on. This realizes the conditional probability in Equation 1. As k increases, the model's dependency on target tokens decreases. Specially, when k = 1 the decoder is an autoregressive decoder, and when k is large enough, the decoder becomes a non-autoregressive decoder that generates all outputs simultaneously, depending on the source tokens only. In the inference stage, we set k to N and make the decoder run in the NAT mode. 4 Experiments and Results 4.1 Experiments Settings Datasets. We evaluate our method on three standard translation datasets: the IWSLT14 German-to-English (De-En) dataset (https://wit3.fbk.eu/mt.php?release=2014-01), the IWSLT16 English-to-German (En-De) dataset (https://wit3.fbk.eu/mt.php?release=2016-01), and the WMT14 English-to-German (En-De) dataset (https://www.statmt.org/wmt14/translation-task.html). Following Li et al. [2019], we reverse WMT14 English-to-German to get the WMT14 German-to-English (De-En) dataset.
In detail, the IWSLT14 dataset contains 153k/7k/7k parallel bilingual sentences for the training/dev/test sets respectively; the IWSLT16 dataset contains 195k/1k/1k parallel bilingual sentences for the training/dev/test sets; and the WMT14 dataset contains 4.5M parallel sentence pairs for training, where newstest2014 and newstest2013 are used as the test and validation sets respectively, following previous works [Gu et al., 2018; Guo et al., 2019b]. We split each token into subwords using Byte-Pair Encoding (BPE) [Sennrich et al., 2016] and set 10k, 10k and 40k as the vocabulary sizes for IWSLT14, IWSLT16 and WMT14 respectively. The vocabulary is shared by the source and target languages in these datasets. Model Configuration. We adopt the basic NAT model configuration [Gu et al., 2018; Guo et al., 2019b] based on Transformer [Vaswani et al., 2017], which is composed of multi-head attention modules and feed-forward networks. We follow Guo et al. [2019b] for the configuration hyperparameters: for the WMT14 datasets, we use the hyperparameters of a base Transformer (d_model = d_hidden = 512, n_layer = 6, n_head = 8); for the IWSLT14 and IWSLT16 datasets, we utilize a small Transformer (d_model = d_hidden = 256, n_layer = 6, n_head = 4). Training. Following previous works [Gu et al., 2018; Guo et al., 2019a; Wang et al., 2019], we employ sequence-level knowledge distillation [Kim and Rush, 2016] during training to reduce the difficulty of training and boost accuracy by constructing a more deterministic and less noisy training set: first we train an AT teacher model with the same architecture as the TCL-NAT model, and then we use the translation of each source sentence generated by the teacher model as the new ground truth to construct a new training set for further training. We design the three different pacing functions mentioned in Section 3.3 and detail their definitions in Table 3. Table 3: The proposed curriculum pacing functions and their definitions, where S_SAT denotes the total number of steps in the SAT training phase and the constants are chosen empirically to match the actual training situation. Linear: f_linear(i) = min(2^{⌊4i/S_SAT⌋+1}, 16); Logarithmic: f_log(i) = min(2^{⌊log_{1.5}(4i/S_SAT+1)⌋+1}, 16); Exponential: f_exp(i) = min(2^{⌊1.5^{4i/S_SAT}⌋}, 16). We set the task window w to 2 by default, which is determined by the model performance on the validation sets. We train all models using Adam, following the optimizer settings and learning rate schedule in Transformer [Vaswani et al., 2017]. We run the training procedure on 8 NVIDIA Tesla P100 GPUs for WMT and 2 NVIDIA 2080Ti GPUs for the IWSLT datasets respectively. The training steps of each phase are listed in Table 2. We implement our model on Tensor2Tensor [Vaswani et al., 2018]. Table 2: The training steps of TCL-NAT for each phase on the different datasets. AT phase: 0.08M (IWSLT14 De-En), 0.08M (IWSLT16 En-De), 0.16M (WMT14 De-En), 0.16M (WMT14 En-De); SAT phase: 0.24M, 0.24M, 0.32M, 0.32M; NAT phase: 0.32M, 0.32M, 0.64M, 0.64M. Inference and Evaluation. For inference, we adopt the common method of noisy parallel decoding (NPD) [Gu et al., 2018], which generates a number of decoding candidates in parallel and selects the best translation by AT teacher model re-scoring. In our work, we generate multiple translation candidates by predicting different target lengths N ∈ [M − B, M + B] (M is the length of the source sentence), which results in 2B + 1 candidates.
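A schematic Python sketch of this candidate generation and re-scoring step; `nat_decode_with_length` and `at_teacher_score` are hypothetical placeholders for the NAT decoder run with a fixed target length and the AT teacher's scoring function, not APIs from the released code:

```python
def npd_decode(src_tokens, nat_decode_with_length, at_teacher_score, b=4):
    """Enumerate 2B + 1 target lengths around the source length M and
    let the AT teacher re-score the parallel NAT decodes (a sketch)."""
    m = len(src_tokens)
    candidates = []
    for n in range(m - b, m + b + 1):                 # N in [M - B, M + B]
        if n <= 0:
            continue
        hyp = nat_decode_with_length(src_tokens, n)   # one parallel NAT pass per length
        candidates.append(hyp)
    # All candidates can be decoded independently and in parallel;
    # only the teacher re-scoring below adds extra latency.
    return max(candidates, key=lambda hyp: at_teacher_score(src_tokens, hyp))
```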
We test with B = 0 and B = 4 (denoted as NPD 9) to stay consistent with our baselines [Wang et al., 2019; Guo et al., 2019a; Guo et al., 2019b]. We evaluate the translation quality by tokenized case-sensitive BLEU [Papineni et al., 2002] with multi-bleu.pl (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl). Inference and evaluation are run on 1 NVIDIA P100 GPU for the WMT14 En-De dataset in order to keep in line with previous works when testing latency.

4.2 Results

We compare TCL-NAT with non-autoregressive baselines including NAT-FT [Gu et al., 2018], NAT-IR [Lee et al., 2018], ENAT [Guo et al., 2019a], NAT-Reg [Wang et al., 2019], FlowSeq [Ma et al., 2019] and FCL-NAT [Guo et al., 2019b]. For NAT-IR, we report their best results when the number of refinement steps is 10. For ENAT, NAT-Reg and FCL-NAT, we report their best results with B = 0 and B = 4 correspondingly. For FlowSeq, we report their results without NPD. It is worth noting that we mainly compare our method with existing methods that have similar speed-ups, so Mask-Predict [Ghazvininejad et al., 2019], LevT [Gu et al., 2019] and FlowSeq-large are not included in the discussion.

We list the main results of our work in Table 4.

Models | IWSLT14 De-En | IWSLT16 En-De | WMT14 De-En | WMT14 En-De | Latency | Speedup
Autoregressive Models (AT Teachers)
Transformer [Vaswani et al., 2017] | 33.90 | 30.32 | 31.38 | 27.30 | 607 ms | 1.00×
Non-Autoregressive Models
NAT-FT [Gu et al., 2018] | / | 26.52 | 21.47 | 17.69 | 39 ms | 15.6×
NAT-FT (NPD 10) | / | 27.44 | 22.41 | 18.66 | 79 ms | 7.68×
NAT-IR [Lee et al., 2018] | / | 27.11 | 25.48 | 21.61 | 404 ms | 1.50×
ENAT [Guo et al., 2019a] | 25.09 | / | 23.23 | 20.65 | 24 ms | 25.3×
ENAT (NPD 9) | 28.60 | / | 26.67 | 24.28 | 49 ms | 12.4×
NAT-Reg [Wang et al., 2019] | 23.89 | 23.14 | 24.77 | 20.65 | 22 ms | 27.6×
NAT-Reg (NPD 9) | 28.04 | 27.02 | 28.90 | 24.61 | 40 ms | 15.1×
FlowSeq-base [Ma et al., 2019] | 27.55 | / | 26.16 | 21.45 | / | 5.94×
FCL-NAT [Guo et al., 2019b] | 26.62 | / | 25.32 | 21.70 | 21 ms | 28.9×
FCL-NAT (NPD 9) | 29.91 | / | 29.50 | 25.75 | 38 ms | 16.0×
TCL-NAT | 28.16 | 26.01 | 25.62 | 21.94 | 22 ms | 27.6×
TCL-NAT (NPD 9) | 31.79 | 29.30 | 29.60 | 25.37 | 38 ms | 16.0×
Table 4: The BLEU scores of our proposed TCL-NAT and the baseline methods on the IWSLT14 De-En, IWSLT16 En-De, WMT14 De-En and WMT14 En-De tasks. NPD 9 indicates results of noisy parallel decoding with 9 candidates, i.e., B = 4; otherwise B = 0.

We can see that TCL-NAT achieves significant improvements over all NAT baselines on different datasets. Specifically, we outperform ENAT and NAT-Reg by a notable margin. In addition, compared with NAT-Reg, we do not introduce any auxiliary loss functions in the training stage, and compared with ENAT, we simply copy the source sentence as the decoder input, which does not add extra workload in the inference stage. Compared with FlowSeq, our method (without NPD) achieves better scores on most datasets with a much larger speedup. We also outperform FCL-NAT on most datasets with fewer training steps. As for the inference efficiency, we achieve a 16.0× speedup (NPD 9), which is comparable with state-of-the-art methods (FCL-NAT and ENAT).

4.3 Analyses

Comparison with Direct Transfer. We take Direct Transfer (DT) as another baseline, where we omit the SAT stage in Section 3.3 and train the model in a non-autoregressive manner for the same number of steps as TCL-NAT to ensure a fair comparison. We test the DT model on the test set of the IWSLT14 De-En task and obtain a BLEU score of 27.00, while our method achieves a BLEU score of 28.16.
We can see that compared with DT, TCL-NAT gains a large improvement in translation accuracy, demonstrating the importance of the progressive transfer between the two tasks with curriculum learning.

Analysis on Pacing Functions. We compare the accuracy of models trained with the different pacing functions shown in Table 3; the results are reported in Table 5. From Table 5, we can see that the model trained with the exponential function slightly outperforms those trained with the other functions, and the logarithmic function performs the worst. As we mentioned in Section 3.3, the exponential function shows more preference for easier stages while the logarithmic function focuses more on harder stages; therefore, we can conclude that showing more preference for easier tasks is beneficial to NAT model training and thus yields a better score.

Pacing Functions | Linear | Logarithmic | Exponential
TCL-NAT | 27.89 | 27.76 | 27.96
TCL-NAT (NPD 9) | 31.51 | 31.45 | 31.71
Table 5: The comparison of BLEU scores on the test set of the IWSLT14 De-En task among different pacing functions.

Analysis on Task Window. We compare the accuracy of models trained with different task windows, as mentioned in Section 3.3. The results are listed in Table 6. From the table, we can see that the model trained with w = 2 achieves the best score on the IWSLT14 De-En task, which shows that an appropriate task window w can help reduce the gap between neighboring stages and thus help model training.

Task Window | w = 1 | w = 2 | w = 3 | w = 4
TCL-NAT | 27.89 | 28.16 | 28.00 | 27.96
TCL-NAT (NPD 9) | 31.51 | 31.79 | 31.44 | 31.40
Table 6: The comparison of BLEU scores on the test set of the IWSLT14 De-En task among different task windows.

5 Conclusion

In this work, we proposed a novel task-level curriculum learning method to improve the accuracy of non-autoregressive neural machine translation. We first view autoregressive, semi-autoregressive and non-autoregressive translation as individual tasks with different k, and propose a task-level curriculum mechanism to shift the training process from k = 1 to N, where N is the length of the target sentence. Experiments on several benchmark translation datasets demonstrate the effectiveness of our method for NAT. In the future, we will extend the task-level curriculum learning method to other sequence generation tasks such as non-autoregressive speech synthesis, automatic speech recognition and image captioning, where there exists a smooth transformation between autoregressive and non-autoregressive generation using semi-autoregressive generation as a bridge. We expect task-level curriculum learning to become a general training paradigm for a broader range of tasks.

Acknowledgments

This work was supported in part by the National Key R&D Program of China (Grant No. 2018AAA0100603), the Zhejiang Natural Science Foundation (LR19F020006), the National Natural Science Foundation of China (Grant No. 61836002), the National Natural Science Foundation of China (Grant No. U1611461), the National Natural Science Foundation of China (Grant No. 61751209), and Microsoft Research Asia.", "introduction": "Neural Machine Translation (NMT) has witnessed rapid progress in recent years [Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017]. Typically, NMT models adopt the encoder-decoder framework [Bahdanau et al., 2015], and the decoder generates a target sentence in an autoregressive manner [Bahdanau et al., 2015; Vaswani et al., 2017], where the generation of the current token depends on previous tokens and the source context from the encoder.
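To make this factorization concrete, the schematic below contrasts token-by-token autoregressive decoding with the fully parallel alternative discussed in the rest of this section; `predict_next` and `predict_all` are hypothetical model interfaces used only for illustration, not code from any released system:

```python
def autoregressive_decode(model, src_tokens, max_len, bos_id, eos_id):
    """Greedy autoregressive decoding: one decoder call per target token,
    so the number of sequential steps grows with the target length."""
    ys = [bos_id]
    for _ in range(max_len):
        next_token = model.predict_next(src_tokens, ys)  # conditioned on all previous tokens
        ys.append(next_token)
        if next_token == eos_id:
            break
    return ys[1:]

def non_autoregressive_decode(model, src_tokens, target_length):
    """Non-autoregressive decoding: every target position is predicted in a
    single parallel pass, conditioned on the source tokens only."""
    return model.predict_all(src_tokens, target_length)
```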
While the accuracy of NMT models achieves human parity, they usually suffer from high inference latency due to autoregressive generation. Therefore, non-autoregressive translation (NAT) [Gu et al., 2018; Guo et al., 2019a; Wang et al., 2019; Ma et al., 2019; Ren et al., 2019] has been proposed to generate target tokens in parallel, which can greatly speed up the inference process.

However, the accuracy of NAT models still lags behind that of autoregressive translation (AT) models, because the previous target tokens are removed from the conditional dependency. A variety of works have tried to improve the accuracy of NAT, including enhanced decoder input with embedding mapping [Guo et al., 2019a], generative flow [Ma et al., 2019], and iterative refinement [Ghazvininejad et al., 2019; Lee et al., 2018], etc. However, none of these works leverage the task relationship between AT and NAT when designing their methods. As AT models are more accurate and easier to train than NAT models due to the explicit dependency on previous tokens, a natural idea is to first train the model with easier AT, and then continue to train it with harder NAT. AT and NAT can be regarded as two tasks that are far different from each other, which makes it less beneficial to directly shift to NAT training right after AT training. How to smoothly shift the model training from AT to NAT is critical for the final accuracy.

In this paper, we introduce semi-autoregressive translation (SAT) [Wang et al., 2018], which only generates a part of the tokens in parallel at each decoding step, as intermediate tasks to bridge the shift process from AT to NAT. Specifically, we define a parameter k to represent the degree of parallelism for each task, and view different tasks under a unified perspective: k = 1 represents AT, k = N represents NAT where N is the length of the target sentence, and 1 < k < N represents SAT. Intuitively, a task with smaller k is easier to train and achieves higher accuracy, while that with larger k is harder to train and results in worse accuracy [Wang et al., 2018], which forms a good curriculum to train the model from easy to hard.

Inspired by this, we propose task-level curriculum learning for non-autoregressive translation (TCL-NAT), which trains the model with sequentially increased k. We divide the training procedure into three phases: AT training (k = 1), SAT training (1 < k < N) and NAT training (k = N). SAT training consists of multiple stages, where we shift k gradually and exponentially as k = 2, 4, 8, ..., 16. To find the best schedule strategy for shifting k, we design different pacing functions to control the training steps for each k, including linear, logarithmic and exponential functions. On the other hand, to smooth the shift process and reduce the gap between different stages, we further introduce a parameter called the task window w, which represents the number of tasks trained at the same time in each stage. For example, when w = 2, we train the model with k = 1, 2 for the first stage, k = 2, 4 for the second stage, and so on. We implement TCL-NAT on the Transformer model [Vaswani et al., 2017]. In order to support different k in the same model, we introduce a causal-k self-attention mechanism in the Transformer decoder.
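The training schedule sketched above (AT, then SAT with k = 2, 4, 8, 16, then NAT) can be summarized as a small helper that maps a SAT-phase step to the tasks trained at that step. This is a minimal sketch under our own naming; the pacing functions follow the definitions listed in Table 3 above, and the task window simply adds the next w − 1 larger values of k:

```python
import math

K_SEQUENCE = [1, 2, 4, 8, 16]   # AT (k = 1) followed by the SAT stages

def pacing_k(step: int, s_sat: int, schedule: str = "exponential", k_max: int = 16) -> int:
    """Parallelism degree k at a given SAT-phase step (Table 3 definitions)."""
    x = 4 * step / s_sat
    if schedule == "linear":
        exponent = math.floor(x) + 1
    elif schedule == "logarithmic":
        exponent = math.floor(math.log(x + 1, 1.5)) + 1
    elif schedule == "exponential":
        exponent = math.floor(1.5 ** x)
    else:
        raise ValueError(f"unknown pacing schedule: {schedule}")
    return min(2 ** exponent, k_max)

def tasks_for_stage(base_k: int, window: int = 2) -> list:
    """Task window: train `window` neighbouring tasks at the same time,
    e.g. base_k = 1, window = 2 -> [1, 2]; base_k = 2, window = 2 -> [2, 4]."""
    i = K_SEQUENCE.index(base_k)
    return K_SEQUENCE[i:i + window]
```

At inference time, and in the final NAT phase, k is simply set to the target length N so that all tokens are generated in one pass.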
We conduct experiments on four translation datasets including IWSLT14 German-English (De-En), IWSLT16 English-German (En-De), WMT14 English-German (En-De) and WMT14 German-English (De-En) to demonstrate the effectiveness of our method. The experiment results show that our method can achieve significant improvement over NAT baselines and also outperform state-of-the-art NAT models, without sacrificing the inference speed. Specifically, we outperform the state-of-the-art NAT model [Guo et al., 2019b] by 1.88 BLEU on the IWSLT14 De-En task, and reduce the accuracy gap between AT and NAT models to nearly 1 BLEU point on the IWSLT16 En-De and WMT14 En-De tasks." } ], "Xingshan Zeng": [ { "url": "http://arxiv.org/abs/2212.08911v1", "title": "AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation", "abstract": "To alleviate the data scarcity problem in End-to-end speech translation (ST),\npre-training on data for speech recognition and machine translation is\nconsidered as an important technique. However, the modality gap between speech\nand text prevents the ST model from efficiently inheriting knowledge from the\npre-trained models. In this work, we propose AdaTranS for end-to-end ST. It\nadapts the speech features with a new shrinking mechanism to mitigate the\nlength mismatch between speech and text features by predicting word boundaries.\nExperiments on the MUST-C dataset demonstrate that AdaTranS achieves better\nperformance than the other shrinking-based methods, with higher inference speed\nand lower memory usage. Further experiments also show that AdaTranS can be\nequipped with additional alignment losses to further improve performance.", "authors": "Xingshan Zeng, Liangyou Li, Qun Liu", "published": "2022-12-17", "updated": "2022-12-17", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.SD", "eess.AS" ], "main_content": "2.1 Architecture

Following previous studies (Liu et al., 2020; Xu et al., 2021), AdaTranS decouples the ST encoder into an acoustic encoder and a semantic encoder. To bridge the modality gap between speech and text, an adaptor is usually needed before the semantic encoder. We choose the shrinking operation (Liu et al., 2020; Zeng et al., 2021) as our adaptor, where the long speech sequences are shrunk to lengths similar to those of the transcriptions based on designed mechanisms (details will be introduced in the next subsection). The shrunk representations are sent to the semantic encoder to derive the encoder output. Finally, the semantic output is fed into the ST decoder for computing the cross-entropy loss: L_ST = − Σ_{|D_ST|} Σ_{t=1}^{T_y} log p(yt|y