diff --git "a/abs_29K_G/test_abstract_long_2405.03121v1.json" "b/abs_29K_G/test_abstract_long_2405.03121v1.json"
new file mode 100644--- /dev/null
+++ "b/abs_29K_G/test_abstract_long_2405.03121v1.json"
@@ -0,0 +1,444 @@
+{
+ "url": "http://arxiv.org/abs/2405.03121v1",
+ "title": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding",
+ "abstract": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.",
+ "authors": "Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu",
+ "published": "2024-05-06",
+ "updated": "2024-05-06",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.",
+ "main_content": "INTRODUCTION Integrating speech signals with single portraits [13, 18, 33, 45, 47, 59\u2013 61] to generate talking avatars has greatly enhanced both the entertainment and education sectors, providing innovative avenues for interactive digital experiences. While current methodologies [36, 47, 57, 61, 62] have made notable strides in achieving synchronicity between speech signals and lip movements, thus enhancing verbal communication, they often neglect the critical aspect of nonverbal communication. Nonverbal communication encompasses the transmission of information without the use of words, including but not limited to specific head movements, facial expressions, and blinking. Research [35] indicates that these nonverbal cues are pivotal in communicating. The primary challenge lies in the inadequacy of existing models to encapsulate the complex dynamics associated with facial motion representation. Existing approaches predominantly employ explicit structural representations such as blendshapes [3, 13, 34], landmark coefficients [18, 48, 60], or 3D Morphable Models (3DMM) [7, 14, 27] to animate faces. Designed initially for single-image processing, these methods offer a constrained approximation of facial dynamics, failing to capture the full breadth of human expressiveness. Recent advancements [11, 25] have introduced trainable facial motion encoders as alternatives to conventional explicit features, showing \u2217The Corresponding author. significant progress in capturing detailed facial movements. However, their deployment is often tailored for specific speakers [11] or limited to the mouth region [25], highlighting a gap in fine-grained motion representation that captures all varieties of facial dynamics. A universal and fine-grained motion representation that is applicable across different characters remains absent. Such a representation should fulfill three key criteria: capturing minute details, such as minor mouth movements, eye blinks, or slight facial muscle twitching; ensuring universality, making it applicable to any speaker while removing identity-specific information to maintain a clear separation between appearance and motion; and incorporating a wide range of nonverbal cues, such as expressions, head movements, and posture. In this paper, we introduce AniTalker. Our approach hinges on a universal motion encoder designed to grasp the intricacies of facial dynamics. By adopting the self-supervised learning paradigm, we mitigate the reliance on labeled data, enabling our motion encoder to learn robust motion representations. This learning process operates on dual levels: one entails understanding motion dynamics through the transformation of a source image into a target image, capturing a spectrum of facial movements, from subtle changes to significant alterations. Concurrently, the use of identity labels within the dataset facilitates the joint optimization of an identity recognition network in a self-supervised manner, further aiming to disentangle identity from motion information through mutual information minimization. This ensures that the motion representation retains minimal identity information, upholding its universal applicability. To authenticate the versatility of our motion space, we integrate a diffusion model and a variance adapter to enable varied generation and manipulation of facial animations. Thanks to our sophisticated representation and the diffusion motion generator, AniTalker is capable of producing diverse and controllable talking faces. In summary, our contributions are threefold: (1) We have developed universal facial motion encoders using a self-supervised approach that effectively captures facial dynamics across various individuals. These encoders feature an identity decoupling mechanism to minimize identity information in the motion data and prevent identity leakage. (2) Our framework includes a motion generation system that combines a diffusion-based motion generator with a variance adapter. This system allows for the production of diverse and controllable facial animations, showcasing the flexibility of our motion space. (3) Extensive evaluations affirm our framework\u2019s contribution to enhancing the realism and dynamism of digital human representations, while simultaneously preserving identity. 2 RELATED WORKS Speech-driven Talking Face Generation refers to creating talking faces driven by speech, We categorize the models based on whether they are single-stage or two-stage. Single-stage models [36, 58, 61] generate images directly from speech, performing end-toend rendering. Due to the size constraints of rendering networks, this method struggles with processing longer videos, generally managing hundreds of milliseconds. The two-stage type [3, 11, 13, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 18, 25, 33, 60] decouples motion information from facial appearance and consists of a speech-to-motion generator followed by a motion-to-video rendering stage. As the first stage solely generates motion information and does not involve the texture information of the frames, it requires less model size and can handle long sequences, up to several seconds or even minutes. This two-stage method is known to reduce jitter [3, 11, 25], enhance speech-tomotion synchronization [11, 13, 33, 60], reduce the need for aligned audio-visual training data [3, 25], and enable the creation of longer videos [18]. Our framework also employs a two-stage structure but with a redesigned motion representation and generation process. Motion Representation serves as an essential bridge between the driving features and the final rendered output in creating talking faces. Current methods predominantly utilize explicit structural representations, such as blendshapes [3, 13, 32], 3D Morphable Models (3DMMs) [27], or landmarks [48, 60]. These formats offer high interpretability and facilitate the separation of facial actions from textures, making them favored as intermediary representations in facial generation tasks. However, due to the wide range of variability in real-world facial movements, they often fail to capture the subtle nuances of facial expressions fully, thus limiting the diversity and expressiveness of methods dependent on these representations. Our research is dedicated to expanding the spectrum of motion representation by developing a learned implicit representation that is not constrained by the limitations of explicit parametric models. Self-supervised motion transfer approaches [31, 41, 44, 48, 49, 51, 54] aim to reconstruct the target image from a source image by learning robust motion representations from a large amount of unlabeled data. This significantly reduces the need for labeled data. A key challenge in these methods is separating motion from identity information. They primarily warp the source image using predicted dense optical flow fields. This approach attempts to disentangle motion from identity by predicting distortions and transformations of the source image. However, information leakage occurs in practice, causing the target image to contain not just motion but also identity information. Building on this observation, we explicitly introduce identity modeling and employ the Mutual Information Neural Estimation (MINE) [1, 4] method to achieve a motion representation independent of identity. Diffusion Models [19] have demonstrated outstanding performance across various generative tasks [12, 17, 21, 39]. Recent research has utilized diffusion models as a rendering module [2, 11, 25, 29, 40, 43, 45]. Although diffusion models often produce higher-quality images, they require extensive model parameters and substantial training data to converge. To enhance the generation process, several approaches [18, 27, 28, 32, 55] employ diffusion models for generating motion representations. Diffusion models excel at addressing the one-to-many mapping challenge, which is crucial for speech-driven generation tasks. Given that the same audio clip can lead to different actions (e.g., lip movements and head poses) across different individuals or even within the same person, diffusion models provide a robust solution for managing this variability. Additionally, the training and inference phases of diffusion models, which systematically introduce and then remove noise, allow for the incorporation of noise during generation to foster diversity. We also use diffusion in conjunction with our motion representation to further explore diversity in talking face generation. 3 ANITALKER FRAMEWORK 3.1 Model Overview AniTalker contains two critical components: (1) Training a motion representation that can capture universal face dynamics, and (2) Based on the well-trained motion encoder from the previous step, the generation or manipulation of the motion representation using the user-controlled driving signal to produce the synthesised talking face video. 3.2 Universal Motion Representation Our approach utilizes a self-supervised image animation framework, employing two RGB images from a video clip: a source image \ud835\udc3c\ud835\udc60and a target image \ud835\udc3c\ud835\udc61(\ud835\udc3c\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73), to serve distinct functions: \ud835\udc3c\ud835\udc60provides identity information, whereas \ud835\udc3c\ud835\udc61delivers motion details. The primary aim is to reconstruct \ud835\udc3c\ud835\udc61. Due to the random selection of frames, occasionally adjacent frames are chosen, enabling the network to learn representations of subtle movements. As depicted in Figure 2 (a), both the source and target images originate from the same video clip. Through this self-supervised learning method, the target image\u2019s encoder is intended to exclusively capture motion information. By learning from frame-to-frame transfer, we can acquire a more universal representation of facial motion. This representation includes verbal actions such as lip movements, as well as nonverbal actions, including expressions, posture, and movement. To explicitly decouple motion and identity in the aforementioned processes, we strengthen the self-supervised learning approach by incorporating Metric Learning (ML) and Mutual Information Disentanglement (MID). Specifically: Metric Learning. Drawing inspiration from face recognition [8, 46] and speaker identification [9], metric learning facilitates the generation of robust identity information. This technique employs a strategy involving pairs of positive and negative samples, aiming to minimize the distance between similar samples and maximize it between dissimilar ones, thereby enhancing the network\u2019s ability to discriminate between different identities. This process can also proceed in a self-supervised fashion, with each iteration randomly selecting distinct identities from the dataset. Specifically, the approach establishes an anchor (\ud835\udc4e) and selects a positive sample (\ud835\udc5d) and a negative sample (\ud835\udc5b)\u2014corresponding to faces of different identities\u2014with the goal of reducing the distance (\ud835\udc51) between the anchor and the positive sample while increasing the distance between the anchor and the negative samples. This optimization, depicted in Figure 2 (b), involves randomly selecting a different identity from a list of candidates not belonging to the current person as the negative sample. The optimization goal for this process is as follows: L\ud835\udc40\ud835\udc3f= max (0, \ud835\udc51(\ud835\udc4e, \ud835\udc5d) \u2212\ud835\udc51(\ud835\udc4e,\ud835\udc5b) + margin) Here, the margin is a positive threshold introduced to further separate the positive and negative samples, thus improving the model\u2019s ability to distinguish between different identities. Mutual Information Disentanglement. Although metric learning effectively constrains the identity encoder, focusing solely on this encoder does not adequately minimize the identity information \f, 2024, Tao Liu, et al. Motion Encoder t t HAL Identity Encoder Motion Encoder s s HAL Identity Encoder Pull Push Target Image Source Image AvgPool \ud835\udc5a! \u2026 \u2026 \u2026 Weighted Sum Target Image Wrap Layer Feature Maps (d) HAL Image Renderer o t s Positive Speech Encoder Image Renderer \u2026 \u2026 Speech Variance Adapter Diffusion Motion Generator Motion Encoder ( Conformer \u00d7 N ) ( Conformer \u00d7 N ) Other Images Motion Latent Motion Latent Identity Latent Noisy Latent \ud835\udc74!~\ud835\udc41(0,1) Motion Encoder Image Encoder \ud835\udc5a\" \ud835\udc5a# \ud835\udc5a (a) Details of Training Universal Motion Representation Flow Fields (c) MID (b) ML MLP MLP \u2026 Candidates (e) Motion Generator \u2026 \u2026 Positional Embedding Audio-driven Video-driven Frozen Layers Image Encoder \u2026 Denoising Iteration Anchor Negative (\ud835\udc74) \u2026 Random Pick Figure 2: The AniTalker framework comprises two main components: learning a universal motion representation and then generating and manipulating this representation through a sequence model. Specifically, the first part aims to learn a robust motion representation by employing metric learning (ML), mutual information disentanglement (MID), and Hierarchical Aggregation Layer (HAL). Subsequently, this motion representation can be used for further generation and manipulation. within the motion encoder. To tackle this issue, we utilize Mutual Information (MI), a statistical measure that evaluates the dependency between the outputs of the identity and motion encoders. Given the challenge of directly computing MI between two variables, we adopt a parametric method to approximate MI estimation among random variables. Specifically, we use CLUB [4], which estimates an upper bound for MI. Assuming the output of the identity encoder is the identity latent \ud835\udc67\ud835\udc56\ud835\udc51and the motion encoder\u2019s output is the motion latent \ud835\udc67\ud835\udc5a, our goal is to optimize the mutual information \ud835\udc3c(E(\ud835\udc67\ud835\udc56\ud835\udc51); E(\ud835\udc67\ud835\udc5a)), where E denotes the learnable Multi-Layer Perceptron (MLP) within CLUB. This optimization ensures that the motion encoder primarily captures motion, thereby preventing identity information from contaminating the motion space. This strategy is depicted in Figure 2 (c). In summary, by leveraging Metric Learning and Mutual Information Disentanglement, we enhance the model\u2019s capacity to accurately differentiate between identity and motion while reducing reliance on labeled data. Hierarchical Aggregation Layer (HAL). To enhance the motion encoder\u2019s capability to understand motion variance across different scales, we introduce the Hierarchical Aggregation Layer (HAL). This layer aims to integrate information from various stages of the image encoder, each providing different receptive fields [24]. HAL processes inputs from all intermediate layers of the image encoder and passes them through an Average Pooling (AvgPool) layer to capture scale-specific information. A Weighted Sum [53] layer follows, assigning learnable weights to effectively merge information from these diverse layers. This soft fusion approach enables the motion encoder to capture and depict movements across a broad range of scales. Such a strategy allows our representations to adapt to faces of different sizes without the need for prior face alignment or normalization. Specifically, the features following the AvgPool layer are denoted as [\ud835\udc5a1,\ud835\udc5a2, . . . ,\ud835\udc5a\ud835\udc5b], representing the set of averaged features, with [\ud835\udc641,\ud835\udc642, . . . ,\ud835\udc64\ud835\udc5b] as the corresponding set of weights, where \ud835\udc5bsymbolizes the number of intermediate layers in the image encoder. These weights undergo normalization through the softmax function to guarantee a cumulative weight of 1. The equation for the weighted sum of tensors, indicating the layer\u2019s output, is formulated as m = \u00cd\ud835\udc5b \ud835\udc56=1 \ud835\udc64\ud835\udc56\u00b7 \ud835\udc5a\ud835\udc56. The softmax normalization process is mathematically articulated as \ud835\udc64\ud835\udc56= \ud835\udc52\ud835\udc4a\ud835\udc56 \u00cd\ud835\udc5b \ud835\udc57=1 \ud835\udc52\ud835\udc4a\ud835\udc57, ensuring the proportional distribution of weights across the various layers. Subsequently, m is fed into the motion encoder for further encoding. Learning Objective. The main goal of learning is to reconstruct the target image by inputting two images: the source and the target within the current identity index. Several loss functions are utilized during the training process, including reconstruction loss \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b, perceptual loss \ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d, adversarial loss \ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63, mutual information loss \ud835\udc3f\ud835\udc40\ud835\udc3c, and identity metric learning loss \ud835\udc3f\ud835\udc40\ud835\udc3f. The total loss is formulated as follows: \ud835\udc3f\ud835\udc5a\ud835\udc5c\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b= \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b+ \ud835\udf061\ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d+ \ud835\udf062\ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63+ \ud835\udf063\ud835\udc3f\ud835\udc40\ud835\udc3c+ \ud835\udf064\ud835\udc3f\ud835\udc40\ud835\udc3f \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 3.3 Motion Generation Once the motion encoder and image renderer are trained, at the second stage, we can freeze these models. The motion encoder is used to generate images, then video-driven or speech-driven methods are employed to produce motion, and finally, the image renderer carries out the final frame-by-frame rendering. 3.3.1 Video-Driven Pipeline. Video driving, also referred to face reenactment, leverages a driven speaker\u2019s video sequence I\ud835\udc51= [\ud835\udc3c\ud835\udc51 1 , \ud835\udc3c\ud835\udc51 2 , . . . , \ud835\udc3c\ud835\udc51 \ud835\udc47] to animate a source image \ud835\udc3c\ud835\udc60, resulting in a video that accurately replicates the driven poses and facial expressions. In this process, the video sequence I\ud835\udc51is input into the motion encoder, previously trained in the first phase, to extract the motion latent. This latent, along with \ud835\udc3c\ud835\udc60, is then directly fed, frame by frame, into the image renderer for rendering. No additional training is required. The detailed inference process, where the orange lines represent the data flow during video-driven inference, is depicted in Figure 2 (e). 3.3.2 Speech-Driven Pipeline. Unlike video-driven methods that use images, the speech-driven approach generates videos consistent with the speech signal or other control signals to animate a source image \ud835\udc3c\ud835\udc60. Specifically, we utilize a combination of diffusion and variance adapters: the former learns a better distribution of motion data, while the latter mainly introduces attribute manipulation. Diffusion Models. For generating motion latent sequences, we utilize a multi-layer Conformer [16]. During training, we incorporate the training process of diffusion, which includes both adding noise and denoising steps. The noising process gradually converts clean Motion Latent M into Gaussian noise M\ud835\udc47, where\ud835\udc47represents the number of total denoising steps in the diffusion process. Conversely, the denoising process systematically eliminates noise from the Gaussian noise, resulting in clean Motion Latents. This iterative process better captures the distribution of motion, enhancing the diversity of the generated results. During the training phase, we adhere to the methodology described in [19] for the DDPM\u2019s training stage, applying the specified simplified loss objective, as illustrated in Equation 1, where \ud835\udc61represents a specific time step and C represents the control signal, which refers to either speech or speech perturbed by a Variance Adapter (to be discussed in the following section). For inference, considering the numerous iteration steps required by diffusion, we select the Denoising Diffusion Implicit Model (DDIM) [42]\u2014an alternate non-Markovian noising process\u2014as the solver to quicken the sampling process. \ud835\udc3fdiff = E\ud835\udc61,M,\ud835\udf16 \u0002 \u2225\ud835\udf16\u2212\u02c6 \ud835\udf16\ud835\udc61(M\ud835\udc61,\ud835\udc61, C)\u22252\u0003 (1) Variance Adapter. The Variance Adapter [38] is a residual branch connected to audio features, allowing optional control over the speech signal. Originally proposed to mitigate the one-to-many problem in Text-to-Speech (TTS) tasks, its architecture includes a predictor and an encoder that use speech signals to predict attribute representations. A residual connection is then applied between the encoder output and the speech signals. During the Training Stage, the encoder processes speech features in collaboration with the predictor to minimize the L2 loss against a ground truth control signal. This includes incorporating an attribute extractor for targeting specific attributes, such as employing a pose extractor (yaw, pitch, roll) to control head posture during the audio generation process. In Predictor \u2295 L2 Loss Encoder Speech Feature Attribute Extractor (a) Training Stage (b) Inference Stage Predictor \u2295 Speech Feature Attribute Extractor or Encoder Audio-driven only w. Attribute Control ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) \u2026 \u2026 GT images Any images Attribute Latent \u00d7 N Figure 3: Variance Adapter Block. Each block models a single attribute and can be iterated multiple times, where \ud835\udc41represents the number of attributes. the Inference Stage, the trained encoder and predictor can flexibly synthesize speech with controlled attributes or operate based on speech-driven inputs. The detailed structure is depicted in Figure 3. Our approach extends previous works [11, 18] by incorporating LSTM [15] for improved temporal modeling and introducing additional cues such as head position and head scale, which we refer to as camera parameters. The architecture is detailed in Figure 3. Learning Objective. The total loss comprises diffusion loss and variance adapter loss, where \ud835\udc3erepresents the number of attributes: \ud835\udc3fgen = \ud835\udc3fdiff + \ud835\udf06 \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc3fvar\ud835\udc58 4 EXPERIMENTS 4.1 Experimental Settings We utilizes three datasets: VoxCeleb [30], HDTF [59], and VFHQ [52]. Due to different processing approaches across these datasets, we re-downloaded the original videos and processed them in a unified way. Specifically, our processing pipeline included filtering out blurred faces and faces at extreme angles. It is noted that we did not align faces but instead used a fixed detection box for each video clip, allowing for natural head movement. This effort resulted in a dataset containing 4,242 unique speaker IDs, encompassing 17,108 video clips with a cumulative duration of 55 hours. Details of this filtering process are provided in the supplementary material. Each video in these datasets carries a unique facial ID tag, which we used as labels for training our identity encoder. We also reserved some videos from HDTF for testing, following the test split in [58]. Scenario Setting We evaluate methods under two scenarios: video-driven and speech-driven, both operating on a one-shot basis with only a single portrait required. The primary distinction lies in the source of animation: image sequences for video-driven and audio signals for speech-driven scenarios. The detailed data flow for inference is illustrated in Figure 2. Additionally, each scenario is divided into two types: self-driven, where the source and target \f, 2024, Tao Liu, et al. share the same identity, and cross-driven, involving different identities. In speech-driven tasks, if posture information is needed, it is provided from the ground truth. Moreover, for our motion generator, unless specified otherwise, we use a consistent seed to generate all outcomes. To ensure a fair comparison, the output resolution for all algorithms is standardized to 256 \u00d7 256. Implementation Details In training the motion representation, our self-supervised training paradigm is primarily based on LIA [49]. Both the identity and motion encoders employ MLPs. Our training targets use the CLUB 1 for mutual information loss, in conjunction with AAM-Softmax [46]. This robust metric learning method utilizes angular distance and incorporates an increased number of negative samples to enhance the metric learning loss. In the second phase, the speech encoder and the Motion Generator utilize a four-layer and a two-layer conformer architecture, respectively, inspired by [11, 25]. This architecture integrates the conformer structure [16] and relative positional encoding [6]. A pre-trained HuBERT-large model [20] serves as the audio feature encoder, incorporating a downsampling layer to adjust the audio sampling rate from 50 Hz to 25 Hz to synchronize with the video frame rate. The training of the audio generation process spans 125 frames (5 seconds). Detailed implementation specifics and model structure are further elaborated in the supplementary materials. Evaluation Metric For objective metrics, we utilize Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [50], and Learned Perceptual Image Patch Similarity (LPIPS) [56] to quantify the similarity between generated and ground truth images. Cosine Similarity (CSIM) 2 measures facial similarity using a pretrained face recognition. Lip-sync Error Distance (LSE-D) [5] assesses the alignment between generated lip movements and the corresponding audio. Regarding subjective metrics, we employ the Mean Opinion Score (MOS) as our metric, with 10 participants rating our method based on Fidelity (F), Lip-sync (LS), Naturalness (N), and Motion Jittering (MJ). 4.2 Video Driven Methods Table 1: Quantitative comparisons with previous Face Reenactment methods. Method Self-Reenactment Cross-Reenactment PSNR\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 FOMM [41] 23.944 0.775 0.178 0.830 0.411 0.423 0.494 DPE [31] 27.239 0.861 0.151 0.912 0.445 0.410 0.567 MTIA [44] 28.435 0.870 0.122 0.929 0.393 0.456 0.448 Vid2Vid [48] 27.659 0.870 0.115 0.924 0.410 0.401 0.553 LIA [49] 25.854 0.831 0.137 0.916 0.421 0.406 0.522 FADM [54] 26.169 0.849 0.147 0.916 0.445 0.399 0.574 AniTalker 29.071 0.905 0.079 0.927 0.494 0.347 0.586 Quantitative Results We benchmarked our approach against several leading face reenactment methods [31, 41, 44, 48, 49, 54], all employing variations of self-supervised learning. The results are presented in Table 1. Due to the inherent challenges and the absence 1https://github.com/Linear95/CLUB/ 2https://github.com/dc3ea9f/vico_challenge_baseline of frame-by-frame ground truth in Cross-Reenactment (using another person\u2019s video for driving), the overall results tend to be lower compared to Self-Reenactment (using the current person\u2019s video). In Self-Reenactment, our algorithm achieved superior results for image structural metrics such as PSNR, SSIM, and LPIPS, validating the effectiveness of our motion representation in reconstructing images. Additionally, using the CSIM metric to measure face similarity, we observed that the similarity between the reconstructed face and the original portrait was the second highest, slightly behind MTIA [44], illustrating our model\u2019s identity preservation capabilities. For Cross-Reenactment, where the portrait serves as ground truth and considering cross-driven deformations, we focused on high-level metrics: SSIM and LPIPS. Our method demonstrated commendable performance. We also evaluated CSIM, which, unlike self-reenactment, showed a significant improvement, achieving the best results among these datasets. This highlights our algorithm\u2019s outstanding ability to disentangle identity and motion when driving with different individuals. Qualitative Results To highlight comparative results, we conducted a cross-reenactment scenario analysis with different algorithms, as presented in Figure 4. The objective was to deform the source portrait using the actions of the target. Each row in the figure represents a driving case. We observed that baseline methods exhibited varying degrees of identity leakage, where the identity information from the target contaminated the source portrait\u2019s identity. For example, as demonstrated in the fourth row, the slim facial structure of the driving portrait led to slimmer outcomes, which was unintended. However, our results consistently preserved the facial identity. Additionally, in terms of expression recovery, as evident in the first and third rows, our approach replicated the action of opening the eyes in the source portrait accurately, creating a natural set of eyes. In contrast, other algorithms either produced slight eye-opening or unnatural eyes. These qualitative findings highlight the advantage of decoupling ability. 4.3 Speech-driven Methods Table 2: Quantitative comparisons with previous speechdriven methods. The subjective evaluation is the mean option score (MOS) rated at five grades (1-5) in terms of Fidelity (F), Lip-Sync (LS), Naturalness (N), and Motion Jittering (MJ). Method Subjective Evaluation Objective Evaluation (Self) MOS-F\u2191 MOS-LS\u2191 MOS-N\u2191 MOS-MJ\u2191 SSIM\u2191 CSIM\u2191 Sync-D\u2193 MakeItTalk [62] 3.434 1.922 2.823 3.129 0.580 0.719 8.933 PC-AVS [61] 3.322 3.785 2.582 2.573 0.305 0.703 7.597 Audio2Head [47] 3.127 3.650 2.891 2.467 0.597 0.719 8.197 SadTalker [57] 3.772 3.963 2.733 3.883 0.504 0.723 7.967 AniTalker 3.832 3.978 3.832 3.976 0.671 0.725 8.298 We compare our method against existing state-of-the-art speechdriven approaches, including MakeItTalk [62], PC-AVS [61], Audio2Head [47], and SadTalker [57]. Quantitative results are presented in Table 2. From the subjective evaluation, our method consistently shows improvements in fidelity, lip-sync accuracy, naturalness, and a reduction in motion jittering, particularly noted for the enhanced naturalness of movements. These advancements can \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, Portrait (Source) FOMM Portrait (Target) DPE MTIA Vid2Vid LIA FADM AniTalker Figure 4: Cross-Reenactment Visualization: This task involves transferring actions from a target portrait to a source portrait to evaluate each algorithm\u2019s ability to separate motion and appearance. Starting from the third column, each column represents the output from a different algorithm. The results highlight our method\u2019s superior ability to preserve fidelity in both motion transfer and appearance retention. I /a\u026a/ State /\u02c8ste\u026at/ Believe / b\u026a\u02c8li\u02d0v / Climate /\u02c8kla\u026am\u0259t/ Self Driven Cross Driven Portrait MakeItTalk Audio Source: Audio2Head SadTalker AniTalker Figure 5: Visual comparison of the speech-driven method in selfand cross-driven scenarios. Phonetic sounds are highlighted in red. be attributed to our sophisticated universal motion representation. The objective evaluation involves driving the image with its audio. Compared to these methods, our approach shows significant improvements in SSIM and CSIM. However, our Sync-D metric shows a decrease, which we believe is due to two main reasons: (1) we do not use this metric as a supervisory signal, and (2) the Sync-D metric focuses on short-term alignment and does not adequately represent long-term information that is more crucial for the comprehensibility of generated videos. This is also corroborated by the qualitative results shown in Figure 5, highlighting our model\u2019s ability to produce convincingly synchronized lip movements to the given phonetic sounds. 4.4 Ablation Study Table 3: Quantitative comparisons of disentanglement methods and the HAL module in Self-Reenactment setting Method ML MID HAL PNSR \u2191 SSIM \u2191 CSIM \u2191 Baseline 25.854 0.849 0.916 Triplet [10] \u2713 26.455 0.860 0.911 AAM-Softmax [46] \u2713 27.922 0.894 0.923 AAM-Softmax + CLUB [4] \u2713 \u2713 28.728 0.900 0.924 AniTalker \u2713 \u2713 \u2713 29.071 0.905 0.927 4.4.1 Ablations on Disentanglement. To further validate the effectiveness of our disentanglement between motion and identity, we \f, 2024, Tao Liu, et al. conducted tests using various methods. Initially, to evaluate the performance of developing a reliable identity encoder using only Metric Learning (ML) without Mutual Information Disentanglement (MID), we assessed both Triplet loss [10] and AAM-Softmax [46]. Our results indicate that AAM-Softmax, an angle-based metric, achieves superior outcomes in our experiments. Additionally, by incorporating a mutual information decoupling module alongside AAM-Softmax, we noted further improvements in results. This enhancement encouraged the motion encoder to focus exclusively on motion-related information. These findings are comprehensively detailed in Table 3. Table 4: Different intermediate representations under the Self-Reenactment setting. \u2018Face Repr.\u2019 is short for face representation, and \u2018Dim.\u2019 represents the corresponding dimension. Method Face Repr. Dim. PSNR \u2191 SSIM \u2191 CSIM\u2191 EMOCA [7] 3DMM 50 20.911 0.670 0.768 PIPNet [22] Landmark 136 22.360 0.725 0.830 AniTalker Motion Latent 20 29.071 0.905 0.927 4.4.2 Ablation Study on Motion Representation. To compare our motion representation with commonly used landmark and 3D Morphable Model (3DMM) representations, we utilized 68 2D coordinates [22] (136 dimensions) for the landmark representation and expression parameters (50 dimensions) from EMOCA [7] for the 3DMM representation. In self-reenactment scenarios, all rendering methods were kept consistent, and different features were used to generate driven images. We observed several key points: (1) As shown in Table 4, our learned representation exhibits a more compact dimensionality, indicating a more succinct encoding of facial dynamics. (2) Our video comparisons show that, unlike these explicit representations, our implicit motion representation maintains frame stability without the need for additional smoothing. This can be attributed to our self-supervised training strategy of sampling adjacent frames, which effectively captures subtle dynamic changes while inherently ensuring temporal stability. 0 0.1 0.2 0.3 0.4 0.5 1 2 3 4 5 6 7 8 \u2026 \u2026 Image Encoder Layers Weights Figure 6: The weights of motion representation from different layers of the Image Encoder. 4.4.3 Ablations on HAL. To explore the significance of the Hierarchical Aggregation Layer (HAL) in dynamic representations, we conducted a series of ablation experiments focusing on the HAL layer. The results showed that models incorporating the HAL layer exhibited performance improvements, as detailed in the final row of Table 3. To analyze the impact and importance of different HAL layers on motion representation, we extracted and examined the softmax-normalized weights of each layer (a total of 8 layers in our experiment) in our Image Encoder as shown in Figure 6. It was found that the weights of the last layer contributed most significantly, likely because it represents global features that can effectively recover most motion information. Notably, the fourth layer\u2014situated in the middle of the image encoder feature map\u2014demonstrated a local maximum. Considering the receptive field size of this layer\u2019s patch is similar to the size of eyes and approximately half the size of the mouth, this finding suggests that the layer plays a potential role in simulating areas such as the mouth and eyes. These results not only confirm the pivotal role of the HAL layer in dynamic representation but also reveal the deep mechanisms of the model\u2019s ability to capture facial movements of different scales. Motion Manifold Turn Head Left Eye Closed Diversity Perturbation Speak with Homophones Figure 7: Motion Manifold of the continuous motion space. 5 DISCUSSION Discussion on Universal Motion Representation Our investigations into the model\u2019s ability to encode facial dynamics have highlighted a universal representation of human facial movements. As depicted in Figure 7, we observed that different individuals maintain consistent postures and expressions (such as turning the head left, speaking with homophones, and closing eyes) at each point within our motion space, demonstrating that our motion space forms a Motion Manifold. This manifold facilitates the representation of a continuous motion space, enabling the precise modeling of subtle facial feature variations and allowing for smooth transitions. Additionally, by integrating perturbations through diffusion noise, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, our model can simulate random, minute motion changes that align with fundamental movement patterns, thus enhancing the diversity of generated expressions. These findings demonstrate that our motion representation has a robust capacity to capture and represent a wide array of human facial movements. Discussion on Generalization Ability Although our model is trained on real human faces, it demonstrates the ability to generalize to other images with facial structures, such as cartoons, sculptures, reliefs, and game characters. This underscores the model\u2019s excellent scalability. We primarily attribute this capability to the complete decoupling of identity and motion, which ensures that the model grasps the intrinsic nature of facial movements, thereby enhancing its generalization capability. 6",
+ "additional_graph_info": {
+ "graph": [
+ [
+ "Tao Liu",
+ "Feilong Chen"
+ ],
+ [
+ "Tao Liu",
+ "Shuai Fan"
+ ],
+ [
+ "Tao Liu",
+ "Chenpeng Du"
+ ],
+ [
+ "Feilong Chen",
+ "Shuang Xu"
+ ],
+ [
+ "Feilong Chen",
+ "Fandong Meng"
+ ],
+ [
+ "Shuai Fan",
+ "Zhuo Chen"
+ ],
+ [
+ "Chenpeng Du",
+ "Kai Yu"
+ ],
+ [
+ "Chenpeng Du",
+ "Yiwei Guo"
+ ]
+ ],
+ "node_feat": {
+ "Tao Liu": [
+ {
+ "url": "http://arxiv.org/abs/2405.03121v1",
+ "title": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding",
+ "abstract": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.",
+ "authors": "Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu",
+ "published": "2024-05-06",
+ "updated": "2024-05-06",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI"
+ ],
+ "main_content": "INTRODUCTION Integrating speech signals with single portraits [13, 18, 33, 45, 47, 59\u2013 61] to generate talking avatars has greatly enhanced both the entertainment and education sectors, providing innovative avenues for interactive digital experiences. While current methodologies [36, 47, 57, 61, 62] have made notable strides in achieving synchronicity between speech signals and lip movements, thus enhancing verbal communication, they often neglect the critical aspect of nonverbal communication. Nonverbal communication encompasses the transmission of information without the use of words, including but not limited to specific head movements, facial expressions, and blinking. Research [35] indicates that these nonverbal cues are pivotal in communicating. The primary challenge lies in the inadequacy of existing models to encapsulate the complex dynamics associated with facial motion representation. Existing approaches predominantly employ explicit structural representations such as blendshapes [3, 13, 34], landmark coefficients [18, 48, 60], or 3D Morphable Models (3DMM) [7, 14, 27] to animate faces. Designed initially for single-image processing, these methods offer a constrained approximation of facial dynamics, failing to capture the full breadth of human expressiveness. Recent advancements [11, 25] have introduced trainable facial motion encoders as alternatives to conventional explicit features, showing \u2217The Corresponding author. significant progress in capturing detailed facial movements. However, their deployment is often tailored for specific speakers [11] or limited to the mouth region [25], highlighting a gap in fine-grained motion representation that captures all varieties of facial dynamics. A universal and fine-grained motion representation that is applicable across different characters remains absent. Such a representation should fulfill three key criteria: capturing minute details, such as minor mouth movements, eye blinks, or slight facial muscle twitching; ensuring universality, making it applicable to any speaker while removing identity-specific information to maintain a clear separation between appearance and motion; and incorporating a wide range of nonverbal cues, such as expressions, head movements, and posture. In this paper, we introduce AniTalker. Our approach hinges on a universal motion encoder designed to grasp the intricacies of facial dynamics. By adopting the self-supervised learning paradigm, we mitigate the reliance on labeled data, enabling our motion encoder to learn robust motion representations. This learning process operates on dual levels: one entails understanding motion dynamics through the transformation of a source image into a target image, capturing a spectrum of facial movements, from subtle changes to significant alterations. Concurrently, the use of identity labels within the dataset facilitates the joint optimization of an identity recognition network in a self-supervised manner, further aiming to disentangle identity from motion information through mutual information minimization. This ensures that the motion representation retains minimal identity information, upholding its universal applicability. To authenticate the versatility of our motion space, we integrate a diffusion model and a variance adapter to enable varied generation and manipulation of facial animations. Thanks to our sophisticated representation and the diffusion motion generator, AniTalker is capable of producing diverse and controllable talking faces. In summary, our contributions are threefold: (1) We have developed universal facial motion encoders using a self-supervised approach that effectively captures facial dynamics across various individuals. These encoders feature an identity decoupling mechanism to minimize identity information in the motion data and prevent identity leakage. (2) Our framework includes a motion generation system that combines a diffusion-based motion generator with a variance adapter. This system allows for the production of diverse and controllable facial animations, showcasing the flexibility of our motion space. (3) Extensive evaluations affirm our framework\u2019s contribution to enhancing the realism and dynamism of digital human representations, while simultaneously preserving identity. 2 RELATED WORKS Speech-driven Talking Face Generation refers to creating talking faces driven by speech, We categorize the models based on whether they are single-stage or two-stage. Single-stage models [36, 58, 61] generate images directly from speech, performing end-toend rendering. Due to the size constraints of rendering networks, this method struggles with processing longer videos, generally managing hundreds of milliseconds. The two-stage type [3, 11, 13, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 18, 25, 33, 60] decouples motion information from facial appearance and consists of a speech-to-motion generator followed by a motion-to-video rendering stage. As the first stage solely generates motion information and does not involve the texture information of the frames, it requires less model size and can handle long sequences, up to several seconds or even minutes. This two-stage method is known to reduce jitter [3, 11, 25], enhance speech-tomotion synchronization [11, 13, 33, 60], reduce the need for aligned audio-visual training data [3, 25], and enable the creation of longer videos [18]. Our framework also employs a two-stage structure but with a redesigned motion representation and generation process. Motion Representation serves as an essential bridge between the driving features and the final rendered output in creating talking faces. Current methods predominantly utilize explicit structural representations, such as blendshapes [3, 13, 32], 3D Morphable Models (3DMMs) [27], or landmarks [48, 60]. These formats offer high interpretability and facilitate the separation of facial actions from textures, making them favored as intermediary representations in facial generation tasks. However, due to the wide range of variability in real-world facial movements, they often fail to capture the subtle nuances of facial expressions fully, thus limiting the diversity and expressiveness of methods dependent on these representations. Our research is dedicated to expanding the spectrum of motion representation by developing a learned implicit representation that is not constrained by the limitations of explicit parametric models. Self-supervised motion transfer approaches [31, 41, 44, 48, 49, 51, 54] aim to reconstruct the target image from a source image by learning robust motion representations from a large amount of unlabeled data. This significantly reduces the need for labeled data. A key challenge in these methods is separating motion from identity information. They primarily warp the source image using predicted dense optical flow fields. This approach attempts to disentangle motion from identity by predicting distortions and transformations of the source image. However, information leakage occurs in practice, causing the target image to contain not just motion but also identity information. Building on this observation, we explicitly introduce identity modeling and employ the Mutual Information Neural Estimation (MINE) [1, 4] method to achieve a motion representation independent of identity. Diffusion Models [19] have demonstrated outstanding performance across various generative tasks [12, 17, 21, 39]. Recent research has utilized diffusion models as a rendering module [2, 11, 25, 29, 40, 43, 45]. Although diffusion models often produce higher-quality images, they require extensive model parameters and substantial training data to converge. To enhance the generation process, several approaches [18, 27, 28, 32, 55] employ diffusion models for generating motion representations. Diffusion models excel at addressing the one-to-many mapping challenge, which is crucial for speech-driven generation tasks. Given that the same audio clip can lead to different actions (e.g., lip movements and head poses) across different individuals or even within the same person, diffusion models provide a robust solution for managing this variability. Additionally, the training and inference phases of diffusion models, which systematically introduce and then remove noise, allow for the incorporation of noise during generation to foster diversity. We also use diffusion in conjunction with our motion representation to further explore diversity in talking face generation. 3 ANITALKER FRAMEWORK 3.1 Model Overview AniTalker contains two critical components: (1) Training a motion representation that can capture universal face dynamics, and (2) Based on the well-trained motion encoder from the previous step, the generation or manipulation of the motion representation using the user-controlled driving signal to produce the synthesised talking face video. 3.2 Universal Motion Representation Our approach utilizes a self-supervised image animation framework, employing two RGB images from a video clip: a source image \ud835\udc3c\ud835\udc60and a target image \ud835\udc3c\ud835\udc61(\ud835\udc3c\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73), to serve distinct functions: \ud835\udc3c\ud835\udc60provides identity information, whereas \ud835\udc3c\ud835\udc61delivers motion details. The primary aim is to reconstruct \ud835\udc3c\ud835\udc61. Due to the random selection of frames, occasionally adjacent frames are chosen, enabling the network to learn representations of subtle movements. As depicted in Figure 2 (a), both the source and target images originate from the same video clip. Through this self-supervised learning method, the target image\u2019s encoder is intended to exclusively capture motion information. By learning from frame-to-frame transfer, we can acquire a more universal representation of facial motion. This representation includes verbal actions such as lip movements, as well as nonverbal actions, including expressions, posture, and movement. To explicitly decouple motion and identity in the aforementioned processes, we strengthen the self-supervised learning approach by incorporating Metric Learning (ML) and Mutual Information Disentanglement (MID). Specifically: Metric Learning. Drawing inspiration from face recognition [8, 46] and speaker identification [9], metric learning facilitates the generation of robust identity information. This technique employs a strategy involving pairs of positive and negative samples, aiming to minimize the distance between similar samples and maximize it between dissimilar ones, thereby enhancing the network\u2019s ability to discriminate between different identities. This process can also proceed in a self-supervised fashion, with each iteration randomly selecting distinct identities from the dataset. Specifically, the approach establishes an anchor (\ud835\udc4e) and selects a positive sample (\ud835\udc5d) and a negative sample (\ud835\udc5b)\u2014corresponding to faces of different identities\u2014with the goal of reducing the distance (\ud835\udc51) between the anchor and the positive sample while increasing the distance between the anchor and the negative samples. This optimization, depicted in Figure 2 (b), involves randomly selecting a different identity from a list of candidates not belonging to the current person as the negative sample. The optimization goal for this process is as follows: L\ud835\udc40\ud835\udc3f= max (0, \ud835\udc51(\ud835\udc4e, \ud835\udc5d) \u2212\ud835\udc51(\ud835\udc4e,\ud835\udc5b) + margin) Here, the margin is a positive threshold introduced to further separate the positive and negative samples, thus improving the model\u2019s ability to distinguish between different identities. Mutual Information Disentanglement. Although metric learning effectively constrains the identity encoder, focusing solely on this encoder does not adequately minimize the identity information \f, 2024, Tao Liu, et al. Motion Encoder t t HAL Identity Encoder Motion Encoder s s HAL Identity Encoder Pull Push Target Image Source Image AvgPool \ud835\udc5a! \u2026 \u2026 \u2026 Weighted Sum Target Image Wrap Layer Feature Maps (d) HAL Image Renderer o t s Positive Speech Encoder Image Renderer \u2026 \u2026 Speech Variance Adapter Diffusion Motion Generator Motion Encoder ( Conformer \u00d7 N ) ( Conformer \u00d7 N ) Other Images Motion Latent Motion Latent Identity Latent Noisy Latent \ud835\udc74!~\ud835\udc41(0,1) Motion Encoder Image Encoder \ud835\udc5a\" \ud835\udc5a# \ud835\udc5a (a) Details of Training Universal Motion Representation Flow Fields (c) MID (b) ML MLP MLP \u2026 Candidates (e) Motion Generator \u2026 \u2026 Positional Embedding Audio-driven Video-driven Frozen Layers Image Encoder \u2026 Denoising Iteration Anchor Negative (\ud835\udc74) \u2026 Random Pick Figure 2: The AniTalker framework comprises two main components: learning a universal motion representation and then generating and manipulating this representation through a sequence model. Specifically, the first part aims to learn a robust motion representation by employing metric learning (ML), mutual information disentanglement (MID), and Hierarchical Aggregation Layer (HAL). Subsequently, this motion representation can be used for further generation and manipulation. within the motion encoder. To tackle this issue, we utilize Mutual Information (MI), a statistical measure that evaluates the dependency between the outputs of the identity and motion encoders. Given the challenge of directly computing MI between two variables, we adopt a parametric method to approximate MI estimation among random variables. Specifically, we use CLUB [4], which estimates an upper bound for MI. Assuming the output of the identity encoder is the identity latent \ud835\udc67\ud835\udc56\ud835\udc51and the motion encoder\u2019s output is the motion latent \ud835\udc67\ud835\udc5a, our goal is to optimize the mutual information \ud835\udc3c(E(\ud835\udc67\ud835\udc56\ud835\udc51); E(\ud835\udc67\ud835\udc5a)), where E denotes the learnable Multi-Layer Perceptron (MLP) within CLUB. This optimization ensures that the motion encoder primarily captures motion, thereby preventing identity information from contaminating the motion space. This strategy is depicted in Figure 2 (c). In summary, by leveraging Metric Learning and Mutual Information Disentanglement, we enhance the model\u2019s capacity to accurately differentiate between identity and motion while reducing reliance on labeled data. Hierarchical Aggregation Layer (HAL). To enhance the motion encoder\u2019s capability to understand motion variance across different scales, we introduce the Hierarchical Aggregation Layer (HAL). This layer aims to integrate information from various stages of the image encoder, each providing different receptive fields [24]. HAL processes inputs from all intermediate layers of the image encoder and passes them through an Average Pooling (AvgPool) layer to capture scale-specific information. A Weighted Sum [53] layer follows, assigning learnable weights to effectively merge information from these diverse layers. This soft fusion approach enables the motion encoder to capture and depict movements across a broad range of scales. Such a strategy allows our representations to adapt to faces of different sizes without the need for prior face alignment or normalization. Specifically, the features following the AvgPool layer are denoted as [\ud835\udc5a1,\ud835\udc5a2, . . . ,\ud835\udc5a\ud835\udc5b], representing the set of averaged features, with [\ud835\udc641,\ud835\udc642, . . . ,\ud835\udc64\ud835\udc5b] as the corresponding set of weights, where \ud835\udc5bsymbolizes the number of intermediate layers in the image encoder. These weights undergo normalization through the softmax function to guarantee a cumulative weight of 1. The equation for the weighted sum of tensors, indicating the layer\u2019s output, is formulated as m = \u00cd\ud835\udc5b \ud835\udc56=1 \ud835\udc64\ud835\udc56\u00b7 \ud835\udc5a\ud835\udc56. The softmax normalization process is mathematically articulated as \ud835\udc64\ud835\udc56= \ud835\udc52\ud835\udc4a\ud835\udc56 \u00cd\ud835\udc5b \ud835\udc57=1 \ud835\udc52\ud835\udc4a\ud835\udc57, ensuring the proportional distribution of weights across the various layers. Subsequently, m is fed into the motion encoder for further encoding. Learning Objective. The main goal of learning is to reconstruct the target image by inputting two images: the source and the target within the current identity index. Several loss functions are utilized during the training process, including reconstruction loss \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b, perceptual loss \ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d, adversarial loss \ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63, mutual information loss \ud835\udc3f\ud835\udc40\ud835\udc3c, and identity metric learning loss \ud835\udc3f\ud835\udc40\ud835\udc3f. The total loss is formulated as follows: \ud835\udc3f\ud835\udc5a\ud835\udc5c\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b= \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b+ \ud835\udf061\ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d+ \ud835\udf062\ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63+ \ud835\udf063\ud835\udc3f\ud835\udc40\ud835\udc3c+ \ud835\udf064\ud835\udc3f\ud835\udc40\ud835\udc3f \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 3.3 Motion Generation Once the motion encoder and image renderer are trained, at the second stage, we can freeze these models. The motion encoder is used to generate images, then video-driven or speech-driven methods are employed to produce motion, and finally, the image renderer carries out the final frame-by-frame rendering. 3.3.1 Video-Driven Pipeline. Video driving, also referred to face reenactment, leverages a driven speaker\u2019s video sequence I\ud835\udc51= [\ud835\udc3c\ud835\udc51 1 , \ud835\udc3c\ud835\udc51 2 , . . . , \ud835\udc3c\ud835\udc51 \ud835\udc47] to animate a source image \ud835\udc3c\ud835\udc60, resulting in a video that accurately replicates the driven poses and facial expressions. In this process, the video sequence I\ud835\udc51is input into the motion encoder, previously trained in the first phase, to extract the motion latent. This latent, along with \ud835\udc3c\ud835\udc60, is then directly fed, frame by frame, into the image renderer for rendering. No additional training is required. The detailed inference process, where the orange lines represent the data flow during video-driven inference, is depicted in Figure 2 (e). 3.3.2 Speech-Driven Pipeline. Unlike video-driven methods that use images, the speech-driven approach generates videos consistent with the speech signal or other control signals to animate a source image \ud835\udc3c\ud835\udc60. Specifically, we utilize a combination of diffusion and variance adapters: the former learns a better distribution of motion data, while the latter mainly introduces attribute manipulation. Diffusion Models. For generating motion latent sequences, we utilize a multi-layer Conformer [16]. During training, we incorporate the training process of diffusion, which includes both adding noise and denoising steps. The noising process gradually converts clean Motion Latent M into Gaussian noise M\ud835\udc47, where\ud835\udc47represents the number of total denoising steps in the diffusion process. Conversely, the denoising process systematically eliminates noise from the Gaussian noise, resulting in clean Motion Latents. This iterative process better captures the distribution of motion, enhancing the diversity of the generated results. During the training phase, we adhere to the methodology described in [19] for the DDPM\u2019s training stage, applying the specified simplified loss objective, as illustrated in Equation 1, where \ud835\udc61represents a specific time step and C represents the control signal, which refers to either speech or speech perturbed by a Variance Adapter (to be discussed in the following section). For inference, considering the numerous iteration steps required by diffusion, we select the Denoising Diffusion Implicit Model (DDIM) [42]\u2014an alternate non-Markovian noising process\u2014as the solver to quicken the sampling process. \ud835\udc3fdiff = E\ud835\udc61,M,\ud835\udf16 \u0002 \u2225\ud835\udf16\u2212\u02c6 \ud835\udf16\ud835\udc61(M\ud835\udc61,\ud835\udc61, C)\u22252\u0003 (1) Variance Adapter. The Variance Adapter [38] is a residual branch connected to audio features, allowing optional control over the speech signal. Originally proposed to mitigate the one-to-many problem in Text-to-Speech (TTS) tasks, its architecture includes a predictor and an encoder that use speech signals to predict attribute representations. A residual connection is then applied between the encoder output and the speech signals. During the Training Stage, the encoder processes speech features in collaboration with the predictor to minimize the L2 loss against a ground truth control signal. This includes incorporating an attribute extractor for targeting specific attributes, such as employing a pose extractor (yaw, pitch, roll) to control head posture during the audio generation process. In Predictor \u2295 L2 Loss Encoder Speech Feature Attribute Extractor (a) Training Stage (b) Inference Stage Predictor \u2295 Speech Feature Attribute Extractor or Encoder Audio-driven only w. Attribute Control ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) \u2026 \u2026 GT images Any images Attribute Latent \u00d7 N Figure 3: Variance Adapter Block. Each block models a single attribute and can be iterated multiple times, where \ud835\udc41represents the number of attributes. the Inference Stage, the trained encoder and predictor can flexibly synthesize speech with controlled attributes or operate based on speech-driven inputs. The detailed structure is depicted in Figure 3. Our approach extends previous works [11, 18] by incorporating LSTM [15] for improved temporal modeling and introducing additional cues such as head position and head scale, which we refer to as camera parameters. The architecture is detailed in Figure 3. Learning Objective. The total loss comprises diffusion loss and variance adapter loss, where \ud835\udc3erepresents the number of attributes: \ud835\udc3fgen = \ud835\udc3fdiff + \ud835\udf06 \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc3fvar\ud835\udc58 4 EXPERIMENTS 4.1 Experimental Settings We utilizes three datasets: VoxCeleb [30], HDTF [59], and VFHQ [52]. Due to different processing approaches across these datasets, we re-downloaded the original videos and processed them in a unified way. Specifically, our processing pipeline included filtering out blurred faces and faces at extreme angles. It is noted that we did not align faces but instead used a fixed detection box for each video clip, allowing for natural head movement. This effort resulted in a dataset containing 4,242 unique speaker IDs, encompassing 17,108 video clips with a cumulative duration of 55 hours. Details of this filtering process are provided in the supplementary material. Each video in these datasets carries a unique facial ID tag, which we used as labels for training our identity encoder. We also reserved some videos from HDTF for testing, following the test split in [58]. Scenario Setting We evaluate methods under two scenarios: video-driven and speech-driven, both operating on a one-shot basis with only a single portrait required. The primary distinction lies in the source of animation: image sequences for video-driven and audio signals for speech-driven scenarios. The detailed data flow for inference is illustrated in Figure 2. Additionally, each scenario is divided into two types: self-driven, where the source and target \f, 2024, Tao Liu, et al. share the same identity, and cross-driven, involving different identities. In speech-driven tasks, if posture information is needed, it is provided from the ground truth. Moreover, for our motion generator, unless specified otherwise, we use a consistent seed to generate all outcomes. To ensure a fair comparison, the output resolution for all algorithms is standardized to 256 \u00d7 256. Implementation Details In training the motion representation, our self-supervised training paradigm is primarily based on LIA [49]. Both the identity and motion encoders employ MLPs. Our training targets use the CLUB 1 for mutual information loss, in conjunction with AAM-Softmax [46]. This robust metric learning method utilizes angular distance and incorporates an increased number of negative samples to enhance the metric learning loss. In the second phase, the speech encoder and the Motion Generator utilize a four-layer and a two-layer conformer architecture, respectively, inspired by [11, 25]. This architecture integrates the conformer structure [16] and relative positional encoding [6]. A pre-trained HuBERT-large model [20] serves as the audio feature encoder, incorporating a downsampling layer to adjust the audio sampling rate from 50 Hz to 25 Hz to synchronize with the video frame rate. The training of the audio generation process spans 125 frames (5 seconds). Detailed implementation specifics and model structure are further elaborated in the supplementary materials. Evaluation Metric For objective metrics, we utilize Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [50], and Learned Perceptual Image Patch Similarity (LPIPS) [56] to quantify the similarity between generated and ground truth images. Cosine Similarity (CSIM) 2 measures facial similarity using a pretrained face recognition. Lip-sync Error Distance (LSE-D) [5] assesses the alignment between generated lip movements and the corresponding audio. Regarding subjective metrics, we employ the Mean Opinion Score (MOS) as our metric, with 10 participants rating our method based on Fidelity (F), Lip-sync (LS), Naturalness (N), and Motion Jittering (MJ). 4.2 Video Driven Methods Table 1: Quantitative comparisons with previous Face Reenactment methods. Method Self-Reenactment Cross-Reenactment PSNR\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 FOMM [41] 23.944 0.775 0.178 0.830 0.411 0.423 0.494 DPE [31] 27.239 0.861 0.151 0.912 0.445 0.410 0.567 MTIA [44] 28.435 0.870 0.122 0.929 0.393 0.456 0.448 Vid2Vid [48] 27.659 0.870 0.115 0.924 0.410 0.401 0.553 LIA [49] 25.854 0.831 0.137 0.916 0.421 0.406 0.522 FADM [54] 26.169 0.849 0.147 0.916 0.445 0.399 0.574 AniTalker 29.071 0.905 0.079 0.927 0.494 0.347 0.586 Quantitative Results We benchmarked our approach against several leading face reenactment methods [31, 41, 44, 48, 49, 54], all employing variations of self-supervised learning. The results are presented in Table 1. Due to the inherent challenges and the absence 1https://github.com/Linear95/CLUB/ 2https://github.com/dc3ea9f/vico_challenge_baseline of frame-by-frame ground truth in Cross-Reenactment (using another person\u2019s video for driving), the overall results tend to be lower compared to Self-Reenactment (using the current person\u2019s video). In Self-Reenactment, our algorithm achieved superior results for image structural metrics such as PSNR, SSIM, and LPIPS, validating the effectiveness of our motion representation in reconstructing images. Additionally, using the CSIM metric to measure face similarity, we observed that the similarity between the reconstructed face and the original portrait was the second highest, slightly behind MTIA [44], illustrating our model\u2019s identity preservation capabilities. For Cross-Reenactment, where the portrait serves as ground truth and considering cross-driven deformations, we focused on high-level metrics: SSIM and LPIPS. Our method demonstrated commendable performance. We also evaluated CSIM, which, unlike self-reenactment, showed a significant improvement, achieving the best results among these datasets. This highlights our algorithm\u2019s outstanding ability to disentangle identity and motion when driving with different individuals. Qualitative Results To highlight comparative results, we conducted a cross-reenactment scenario analysis with different algorithms, as presented in Figure 4. The objective was to deform the source portrait using the actions of the target. Each row in the figure represents a driving case. We observed that baseline methods exhibited varying degrees of identity leakage, where the identity information from the target contaminated the source portrait\u2019s identity. For example, as demonstrated in the fourth row, the slim facial structure of the driving portrait led to slimmer outcomes, which was unintended. However, our results consistently preserved the facial identity. Additionally, in terms of expression recovery, as evident in the first and third rows, our approach replicated the action of opening the eyes in the source portrait accurately, creating a natural set of eyes. In contrast, other algorithms either produced slight eye-opening or unnatural eyes. These qualitative findings highlight the advantage of decoupling ability. 4.3 Speech-driven Methods Table 2: Quantitative comparisons with previous speechdriven methods. The subjective evaluation is the mean option score (MOS) rated at five grades (1-5) in terms of Fidelity (F), Lip-Sync (LS), Naturalness (N), and Motion Jittering (MJ). Method Subjective Evaluation Objective Evaluation (Self) MOS-F\u2191 MOS-LS\u2191 MOS-N\u2191 MOS-MJ\u2191 SSIM\u2191 CSIM\u2191 Sync-D\u2193 MakeItTalk [62] 3.434 1.922 2.823 3.129 0.580 0.719 8.933 PC-AVS [61] 3.322 3.785 2.582 2.573 0.305 0.703 7.597 Audio2Head [47] 3.127 3.650 2.891 2.467 0.597 0.719 8.197 SadTalker [57] 3.772 3.963 2.733 3.883 0.504 0.723 7.967 AniTalker 3.832 3.978 3.832 3.976 0.671 0.725 8.298 We compare our method against existing state-of-the-art speechdriven approaches, including MakeItTalk [62], PC-AVS [61], Audio2Head [47], and SadTalker [57]. Quantitative results are presented in Table 2. From the subjective evaluation, our method consistently shows improvements in fidelity, lip-sync accuracy, naturalness, and a reduction in motion jittering, particularly noted for the enhanced naturalness of movements. These advancements can \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, Portrait (Source) FOMM Portrait (Target) DPE MTIA Vid2Vid LIA FADM AniTalker Figure 4: Cross-Reenactment Visualization: This task involves transferring actions from a target portrait to a source portrait to evaluate each algorithm\u2019s ability to separate motion and appearance. Starting from the third column, each column represents the output from a different algorithm. The results highlight our method\u2019s superior ability to preserve fidelity in both motion transfer and appearance retention. I /a\u026a/ State /\u02c8ste\u026at/ Believe / b\u026a\u02c8li\u02d0v / Climate /\u02c8kla\u026am\u0259t/ Self Driven Cross Driven Portrait MakeItTalk Audio Source: Audio2Head SadTalker AniTalker Figure 5: Visual comparison of the speech-driven method in selfand cross-driven scenarios. Phonetic sounds are highlighted in red. be attributed to our sophisticated universal motion representation. The objective evaluation involves driving the image with its audio. Compared to these methods, our approach shows significant improvements in SSIM and CSIM. However, our Sync-D metric shows a decrease, which we believe is due to two main reasons: (1) we do not use this metric as a supervisory signal, and (2) the Sync-D metric focuses on short-term alignment and does not adequately represent long-term information that is more crucial for the comprehensibility of generated videos. This is also corroborated by the qualitative results shown in Figure 5, highlighting our model\u2019s ability to produce convincingly synchronized lip movements to the given phonetic sounds. 4.4 Ablation Study Table 3: Quantitative comparisons of disentanglement methods and the HAL module in Self-Reenactment setting Method ML MID HAL PNSR \u2191 SSIM \u2191 CSIM \u2191 Baseline 25.854 0.849 0.916 Triplet [10] \u2713 26.455 0.860 0.911 AAM-Softmax [46] \u2713 27.922 0.894 0.923 AAM-Softmax + CLUB [4] \u2713 \u2713 28.728 0.900 0.924 AniTalker \u2713 \u2713 \u2713 29.071 0.905 0.927 4.4.1 Ablations on Disentanglement. To further validate the effectiveness of our disentanglement between motion and identity, we \f, 2024, Tao Liu, et al. conducted tests using various methods. Initially, to evaluate the performance of developing a reliable identity encoder using only Metric Learning (ML) without Mutual Information Disentanglement (MID), we assessed both Triplet loss [10] and AAM-Softmax [46]. Our results indicate that AAM-Softmax, an angle-based metric, achieves superior outcomes in our experiments. Additionally, by incorporating a mutual information decoupling module alongside AAM-Softmax, we noted further improvements in results. This enhancement encouraged the motion encoder to focus exclusively on motion-related information. These findings are comprehensively detailed in Table 3. Table 4: Different intermediate representations under the Self-Reenactment setting. \u2018Face Repr.\u2019 is short for face representation, and \u2018Dim.\u2019 represents the corresponding dimension. Method Face Repr. Dim. PSNR \u2191 SSIM \u2191 CSIM\u2191 EMOCA [7] 3DMM 50 20.911 0.670 0.768 PIPNet [22] Landmark 136 22.360 0.725 0.830 AniTalker Motion Latent 20 29.071 0.905 0.927 4.4.2 Ablation Study on Motion Representation. To compare our motion representation with commonly used landmark and 3D Morphable Model (3DMM) representations, we utilized 68 2D coordinates [22] (136 dimensions) for the landmark representation and expression parameters (50 dimensions) from EMOCA [7] for the 3DMM representation. In self-reenactment scenarios, all rendering methods were kept consistent, and different features were used to generate driven images. We observed several key points: (1) As shown in Table 4, our learned representation exhibits a more compact dimensionality, indicating a more succinct encoding of facial dynamics. (2) Our video comparisons show that, unlike these explicit representations, our implicit motion representation maintains frame stability without the need for additional smoothing. This can be attributed to our self-supervised training strategy of sampling adjacent frames, which effectively captures subtle dynamic changes while inherently ensuring temporal stability. 0 0.1 0.2 0.3 0.4 0.5 1 2 3 4 5 6 7 8 \u2026 \u2026 Image Encoder Layers Weights Figure 6: The weights of motion representation from different layers of the Image Encoder. 4.4.3 Ablations on HAL. To explore the significance of the Hierarchical Aggregation Layer (HAL) in dynamic representations, we conducted a series of ablation experiments focusing on the HAL layer. The results showed that models incorporating the HAL layer exhibited performance improvements, as detailed in the final row of Table 3. To analyze the impact and importance of different HAL layers on motion representation, we extracted and examined the softmax-normalized weights of each layer (a total of 8 layers in our experiment) in our Image Encoder as shown in Figure 6. It was found that the weights of the last layer contributed most significantly, likely because it represents global features that can effectively recover most motion information. Notably, the fourth layer\u2014situated in the middle of the image encoder feature map\u2014demonstrated a local maximum. Considering the receptive field size of this layer\u2019s patch is similar to the size of eyes and approximately half the size of the mouth, this finding suggests that the layer plays a potential role in simulating areas such as the mouth and eyes. These results not only confirm the pivotal role of the HAL layer in dynamic representation but also reveal the deep mechanisms of the model\u2019s ability to capture facial movements of different scales. Motion Manifold Turn Head Left Eye Closed Diversity Perturbation Speak with Homophones Figure 7: Motion Manifold of the continuous motion space. 5 DISCUSSION Discussion on Universal Motion Representation Our investigations into the model\u2019s ability to encode facial dynamics have highlighted a universal representation of human facial movements. As depicted in Figure 7, we observed that different individuals maintain consistent postures and expressions (such as turning the head left, speaking with homophones, and closing eyes) at each point within our motion space, demonstrating that our motion space forms a Motion Manifold. This manifold facilitates the representation of a continuous motion space, enabling the precise modeling of subtle facial feature variations and allowing for smooth transitions. Additionally, by integrating perturbations through diffusion noise, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, our model can simulate random, minute motion changes that align with fundamental movement patterns, thus enhancing the diversity of generated expressions. These findings demonstrate that our motion representation has a robust capacity to capture and represent a wide array of human facial movements. Discussion on Generalization Ability Although our model is trained on real human faces, it demonstrates the ability to generalize to other images with facial structures, such as cartoons, sculptures, reliefs, and game characters. This underscores the model\u2019s excellent scalability. We primarily attribute this capability to the complete decoupling of identity and motion, which ensures that the model grasps the intrinsic nature of facial movements, thereby enhancing its generalization capability. 6"
+ }
+ ],
+ "Feilong Chen": [
+ {
+ "url": "http://arxiv.org/abs/2305.04160v3",
+ "title": "X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages",
+ "abstract": "Large language models (LLMs) have demonstrated remarkable language abilities.\nGPT-4, based on advanced LLMs, exhibits extraordinary multimodal capabilities\nbeyond previous visual language models. We attribute this to the use of more\nadvanced LLMs compared with previous multimodal models. Unfortunately, the\nmodel architecture and training strategies of GPT-4 are unknown. To endow LLMs\nwith multimodal capabilities, we propose X-LLM, which converts Multi-modalities\n(images, speech, videos) into foreign languages using X2L interfaces and inputs\nthem into a large Language model (ChatGLM). Specifically, X-LLM aligns multiple\nfrozen single-modal encoders and a frozen LLM using X2L interfaces, where ``X''\ndenotes multi-modalities such as image, speech, and videos, and ``L'' denotes\nlanguages. X-LLM's training consists of three stages: (1) Converting Multimodal\nInformation: The first stage trains each X2L interface to align with its\nrespective single-modal encoder separately to convert multimodal information\ninto languages. (2) Aligning X2L representations with the LLM: single-modal\nencoders are aligned with the LLM through X2L interfaces independently. (3)\nIntegrating multiple modalities: all single-modal encoders are aligned with the\nLLM through X2L interfaces to integrate multimodal capabilities into the LLM.\nOur experiments show that X-LLM demonstrates impressive multimodel chat\nabilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen\nimages/instructions, and yields a 84.5\\% relative score compared with GPT-4 on\na synthetic multimodal instruction-following dataset. And we also conduct\nquantitative tests on using LLM for ASR and multimodal ASR, hoping to promote\nthe era of LLM-based speech recognition.",
+ "authors": "Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, Bo Xu",
+ "published": "2023-05-07",
+ "updated": "2023-05-22",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.CV",
+ "eess.AS"
+ ],
+ "main_content": "Introduction In recent years, multimodal language models [31, 29, 24] have undergone rapid development. These models possess excellent abilities in multimodal understanding and response generation and can perform well in tasks such as image captioning [50], visual question answering [1], visual dialog [9], video captioning [18], and spoken dialogue [52]. It is worth noting that a large-scale multimodal model, GPT-4 [37], has recently been introduced, demonstrating many impressive capabilities. For example, GPT-4 can follow various instructions to complete language tasks, and can also answer various questions about images. For instance, GPT-4 can give detailed and accurate descriptions of images, understand and explain the humor in visual content, and even provide correct website-building code based on handwritten code images. Although GPT-4 demonstrates remarkable capabilities, Preprint. Work in progress arXiv:2305.04160v3 [cs.CL] 22 May 2023 \funfortunately, we do not know the details of its model structure and training methods. We believe that this is due to the fact that GPT-4 uses a more advanced and larger language model compared to previous multimodal models. With the support of powerful language abilities, GPT-4 can express understood visual content in the form of language. To validate this hypothesis and endow LLM with multimodal capabilities, we propose X-LLM. It converts multimodal information, such as images, speech, and videos, into foreign languages using X2L interfaces, and then feeds converted multimodal information into a large language model (ChatGLM). Speci\ufb01cally, X-LLM aligns multiple frozen single-modal encoders and a frozen LLM using X2L interfaces. X2L interfaces consist of an image I2L interface, a video V2L interface, and a speech S2L interface, where \u201cX\u201d denotes the multi-modalities and \u201cL\u201d denotes languages. The image interface and video interface have the same structure, and we adopt the Q-Former from BLIP-2 [29] to convert visual information into foreign language representations. For ef\ufb01ciency, the video interface reuses the parameters of the image interface with image-text data but is further trained with video-text data to align the encoded video features with the LLM. The speech interface utilizes the continuous integrate-and-\ufb01re (CIF) mechanism [12, 23] and transformer structure to convert speech utterance into foreign language representations. The training of X-LLM consists of three stages. (1) Converting Multimodal Information: the \ufb01rst stage trains each X2L interface to align with its respective single-modal encoder separately to convert multimodal information into languages. (2) Aligning X2L representations with the LLM: single-modal encoders are aligned with the LLM through X2L interfaces. (3) Integrating multiple modalities: all single-modal encoders are aligned with the LLM through X2L interfaces to integrate multimodal capabilities into the LLM. In the \ufb01rst two stages, we use image caption data, video caption data and automatic speech recognition (ASR) data to train the X2L interfaces. To better equip LLM with multimodal capabilities, we construct a multimodal instruction dataset (\u223c10K) based on open-source datasets to further improve the proposed model. Although without the third training stage, X-LLM already has the ability to accomplish multimodal tasks such as visual spoken question answering, we \ufb01nd that with only rare additional multimodal instruction data, LLM can further unify the capabilities of multiple modalities. In our experiments, we \ufb01nd that X-LLM has abilities similar to those of GPT-4. For example, X-LLM can generate complex image descriptions and explain unusual visual phenomena. In our research, when using input images, X-LLM can recognize the location in the image, such as identifying the Forbidden City and providing relevant information about it, observe the food in the image and provide detailed recipes; create stories for pictures, and come up with textual meanings for logos. We also \ufb01nd that X-LLM\u2019s image-related abilities can be extended to videos, such as introducing the content of a video, retrieving movie names, or art-related facts directly from the video. Moreover, X-LLM can answer questions based on spoken questions and can combine images or videos to answer spoken questions. These abilities are previously not present in previous multimodal models but are now made possible by the powerful language modeling capabilities of X-LLM. Our contributions are summarised as follows: \u2022 Multimodal LLM framework. We propose X-LLM, a Multimodal LLM which injects multiple modalities (such as images, speech, and videos) into LLM through X2L interfaces, giving LLM the ability to process multimodal data. This framework has good scalability and can be extended to more modalities. \u201cX\u201d in X2L interfaces can be any modality. We compare our X-LLM with LLaVA and MiniGPT-4 in terms of the ability to handle visual inputs with Chinese elements, and \ufb01nd that X-LLM outperformed them signi\ufb01cantly. We also conduct quantitative tests on using LLM for ASR and multimodal ASR, hoping to promote the era of LLM-based speech recognition. \u2022 Transferability of parameters in English image-text alignment modules. We \ufb01nd that the Qformer module trained on English image-text data can be transferred to other languages. In our experiments, we have successfully transferred the model parameters from Indo-European English to Sino-Tibetan Chinese. The transferability of language greatly increases the possibility of using English image-text data and its trained model parameters, and improves the ef\ufb01ciency of training multimodal LLMs in other languages. \u2022 Open-source. We construct a concise and high-quality Chinese multimodal instruction dataset. By training X-LLM on this multimodal instruction data, X-LLM can better integrate the multimodal capabilities acquired through multiple encoders and corresponding X2L interfaces. And We release the following assets to the public: the generated multimodal instruction data, the codebase for model training, the model checkpoint, and a multimodal chat demo. 2 \fLLM Image Encoder Video Encoder Speech Encoder Q-Former I-Adapter\u00a0 Q-Former V-Adapter\u00a0 CIF S-Adapter\u00a0 X2L interfaces \u8fd9\u662f\u4ec0\u4e48\u5730\u65b9\uff1f (Where is this?) \u4ecb\u7ecd\u4e00\u4e0b\u8fd9\u6bb5\u89c6\u9891\u7684\u5185\u5bb9 (Describe this video) \u8bc6\u522b\u8bed\u97f3\u4e2d\u7684\u5185\u5bb9 (Recognize text in this speech) \u8fd9\u662f\u6545\u5bab\uff0c\u53c8\u79f0\u7d2b\u7981\u57ce\uff0c\u662f\u4e2d\u56fd\u660e\u6e05 \u4e24\u671d\u7684\u7687\u5bab\uff0c\u4f4d\u4e8e\u5317\u4eac\u5e02\u4e2d\u5fc3\uff0c\u5360\u5730 \u9762\u79ef\u7ea672\u4e07\u5e73\u65b9\u7c73\uff0c\u662f\u4e16\u754c\u4e0a\u6700\u5927\u3001 \u4fdd\u5b58\u6700\u4e3a\u5b8c\u6574\u7684\u53e4\u4ee3\u5bab\u6bbf\u5efa\u7b51\u7fa4\u4e4b \u4e00... (This is the Palace Museum, also known as the Forbidden City, which served as the imperial palace during the Ming and Qing dynasties in China. Located in the center of Beijing, it covers an area of about 720,000 square meters and is one of the largest and best-preserved ancient palace complexes in the world...) \u4e00\u7fa4\u72ee\u5b50\u5728\u8349\u539f\u4e0a\u6652\u592a\u9633... (A group of lions sunbathing on the savannah ...) \u63a8\u8350\u4e00\u4e0b\u5317\u4eac\u7684\u65c5\u6e38\u666f\u70b9 (Recommend some tourist attractions in Beijing) Q-Former I-Adapter\u00a0 V-Adapter\u00a0 Q-Former C-Former Figure 1: X-LLM network architecture. 2 Related Work Vision-Language Models. As summarized in many surveys [5, 16], visual language models [58, 30] have made great strides with the development of pre-training techniques [11, 2, 60]. In the early days, researchers used Faster-RCNN [19] to extract image features and concatenated them with language models such as BERT [11] to perform vision-language pre-training. VisualBERT [31], for example, combines image regions and language using a Transformer [49] to allow self-attention to discover implicit alignments between language and vision. It is pre-trained with masked language modeling [11] and a sentence-image prediction task [31]. With the introduction of ViLT [26], researchers use vision transformers [13, 25] to process images, textual transformers (such as BERT [11], GPT-2 [42], T5 [43]) to process text, and pre-training objectives such as masked language modeling, image-text matching, and image-text contrast to train visual language models. CLIP [41] uses a text encoder and an image encoder to encode text and images separately and then performs unsupervised contrastive learning to obtain good representations of vision-language alignment. BLIP [30] is a new VLP framework that transfers \ufb02exibly to both vision-language understanding and generation tasks. In the \ufb01eld of visual dialogue [9, 6, 4], researchers design pre-training objectives related to visual dialogue based on vision-language pre-training models [11, 31] and \ufb01netune vison-language models on visual dialogue data [9] to achieve better dialogue performance. VisDial-BERT [36] and VDBERT [51], for example, use pre-trained ViLBERT [35] and BERT to \ufb01netune models on visual dialogue data using masked language modeling and image-text matching. AlignVD [7] proposes two methods for visual-language alignment based on pre-trained ViT [41] and BERT to achieve better performance in visual dialogue. Enhancing Vision-language Understanding with Advanced LLMs. Although the aforementioned vision-language models have achieved some success, there is still signi\ufb01cant room for improvement in terms of language generation [5, 34, 29]. A recent method [29, 14, 37] for enhancing visual language understanding using advanced large-scale language models [48, 8] has been proposed. For example, BLIP2 [29] uses a Q-Former to connect a visual encoder with an LLM, aligning the learned queries of the Q-Former with language-related visual features extracted by the visual encoder. The Q-Former then connects the visual encoder with the language model, allowing the learned query representations to adapt to the LLM. PaLM-E [14] combines ViT-22B [10] with PaLM560B [2] to inject multimodal information into the embedding space of the pre-trained language model, establishing a connection between perception and language and greatly enhancing the model\u2019s visual language understanding ability. In addition, Visual ChatGPT [53] and HuggingGPT [46] use ChatGPT as the core logic controller, which understands user intent and then call upon speci\ufb01c domain visual language models. Finally, the recently proposed GPT-4 [37] demonstrates powerful multimodal capabilities: building on its strong language understanding abilities, it can generate complex image descriptions, create websites based on handwritten text instructions, and explain unusual visual phenomena. However, the model structure and training strategies of GPT-4 remain a mystery. MiniGPT-4 [59] and LLaVA [33] align text and image data to the large-scale language 3 \fmodel Vicuna [8] and ViT [57] to complete image-based language tasks. In contrast, X-LLM is a universal framework for multimodal LLMs that bootstraps advanced large language models by treating multi-modalities as foreign languages. In this paper, we implement X-LLM that supports images, videos, and speech. Based on the X-LLM framework, we can extend the model to more modalities, such as injecting continuous space robot states, terminal information, or audio rather than speech into the LLM. 3 Approach X-LLM aims to align multiple pre-trained single-modal encoders with advanced large-scale language models (LLMs), as shown in Figure 1. Speci\ufb01cally, we use ChatGLM 1 as the language decoder, which is built on top of GLM [17, 56] and can perform various complex language tasks. For visual perception, we adopt ViT-g [57], as the image encoder and video encoder. For speech perception, we use a speech encoder comprised of convolution layers and conformer structure [21]. We design a module that aligns multimodal information with LLM, collectively referred to as the X2L interfaces, which includes an image interface, a video interface, and a speech interface. The image interface and the video interface have the same structure which consists of Q-Formers [29] and Adapter modules. The speech interface includes the C-Former and an Adapter module. The C-Former could compress the frame-level speech feature sequence from the speech encoder into the token-level speech embedding sequence with continuous integrate-and-\ufb01re (CIF) mechanism [12, 23, 22]. As the token-level speech embedding sequence is strictly aligned with the token sequence of the transcription corresponding to the speech utterance, representing speech using token-level speech embeddings can effectively reduce the GPU memory usage when incorporating speech into LLMs. 3.1 X2L Interfaces X2L interfaces aim to convert multimodal information into foreign languages, which includes an image interface, a video interface, and a speech interface. The Image Interface. Inspired by [29], the image interface consists of a Q-Formers [29] and an I-Adapter module. The Q-Formers aims to convert images into languages, where image features obtained from the image encoder are converted into a sequence with Li quasi-linguistic embeddings. The I-Adapter module aims to align the dimensions of the quasi-linguistic embeddings and the embedding dimension of the LLM. The Video Interface. The video interface has the same structure as the image interface, which also consists of Q-Formers [29] and a V-Adapter module. We use uniform sampling and represent each video with T frames. We then treat each frame as an image. The video interface converts each frame features into a sequence with Li quasi-linguistic embeddings. Then the video interface concatenates all the sequences to obtain the \ufb01nal quasi-linguistic embeddings, which have a length of T \u00d7 Li. The Speech Interface. To transform the speech features from the speech encoder into more semantic representations, we introduce a speech-to-language interface called the speech interface. The speech interface consists of two parts, namely the C-Former and the S-Adaptor. The C-Former is the combination of a CIF module and a 12-layer transformer structure [11]. First, the CIF module compresses the speech feature sequence from the speech encoder into a token-level speech embedding sequence with the same length as the corresponding transcription via variable-length down-sampling. Assuming the length of the feature sequence emitted by the speech encoder for the input speech is U, and the length of the token sequence of the transcription of the speech utterance is Ls, the length of the token-level speech embedding sequence should be Ls (U is usually several times longer than Ls). Then, the transformer structure provides contextual modeling for the token-level speech embeddings from the CIF module. Finally, the S-Adaptor is used to project the outputs of the transformer structure to the input vector space of the LLM, further narrowing down the semantic gap between speech and language. 1https://github.com/THUDM/ChatGLM-6B 4 \f3.2 Training Strategy To ef\ufb01ciently implement X-LLM, we propose a three-stage training strategy. (1) Converting Multimodal Information: we align the Image Encoder with the Q-Former of the image (green part), and the Speech Encoder with the CIF module. (2) Aligning X2L representations with the LLM: in the second stage, we align the Image Encoder with the LLM through the image interface, align the Video Encoder with the LLM through the video interface, and align the Speech Encoder with LLM through the speech interface. In the third stage, we integrate training of the image, video, and speech, and align the overall single-modal encoders with the LLM using a smaller but high-quality multimodal instruction dataset (such as instructions containing visual spoken dialogue, i.e., responding to spoken dialogue inputs based on images). 3.2.1 First Training Stage: Converting Multimodal Information In the \ufb01rst stage, the traditional approach is to align the Image Encoder with the image Q-Former using a large amount of image-text data, similar to the \ufb01rst stage of BLIP2 [29] which utilized around 500 million image-text pairs. However, we \ufb01nd that while BLIP2 used English data, we can still leverage the pretrained parameters of the Q-Former in BLIP2 to implement a Chinese Multimodal LLM. Therefore, in the \ufb01rst stage, to ef\ufb01ciently implement X-LLM, we only convert the representation of the speech encoder to a quasi-linguistic representation through the speech interface. For the speech-related structures, we train a CIF-based ASR model with multiple ASR datasets containing to obtain the speech encoder and CIF module in the C-Former. The CIF-based ASR model consists of a speech encoder, a CIF module, and a decoder [12]. We employ the speech encoder of this ASR model as the speech encoder and employ the CIF module of this ASR model as that in the C-Former of the speech interface. Note that the parameters of the speech encoder and CIF module are kept frozen during all subsequent training stages. Please refer to the appendix for more details about the structure and training of the CIF-based ASR model. 3.2.2 Second Training Stage: Aligning X2L Representations with the LLM As mentioned above, despite the difference in language, we are still able to reuse the parameters of the Q-Former in BLIP2. Speci\ufb01cally, we used the Q-Former trained in the second stage of BLIP2 to initialize the image interface\u2019s Q-Former in X-LLM. To adapt the Q-Former to Chinese LLM, we use a combined dataset, totaling approximately 14 million Chinese image-text pairs for training. Next, we use the trained image interface to initialize the video interface (the Q-Former and the V-Adapter) and train the video interface on the translated video-text data. Finally, we train the speech interface using ASR data to align the output of the speech interface with the LLM. It should be noted that throughout the entire second training stage, all the encoders and the LLM remain frozen, with only the interfaces being trained. 3.2.3 Third Training stage: Integrating Multiple Modalities After the \ufb01rst two stages of training, our X-LLM has demonstrated a remarkable ability to provide reasonable answers to human queries based on multimodal information and has gained a vast amount of knowledge. We have observed that, even without the instruction for joint training on multiple modalities, such as \"answer questions based on images using voice input,\" X-LLM is capable of performing tasks that require multiple modalities, such as visual spoken dialogue, multimodal speech recognition, and multimodal machine translation. This remarkable ability is likely due to X-LLM\u2019s integration of LLM\u2019s excellent instruction generalization capability, which has been extended to the multimodal domain. This ability enables us to train more modalities independently in the \ufb01rst two stages and integrate them into the model without the need for joint training with existing modalities. To explore the potential of multimodal joint instruction data in further enhancing X-LLM\u2019s ability to perform multimodal tasks, such as visual spoken question answering, we have constructed a concise but high-quality multimodal instruction dataset. Different from MiniGPT-4 [59] and LLaVA [33]\u2019s datasets, which only contain image-text instruction data and other textual instruction datasets for instruction \ufb01netuning and conversations, our dataset supports multimodal joint instructions and includes (1) image-text instruction data, (2) speech-text instruction data, (3) video-text instruction data, and (4) image-text-speech instruction data. 5 \fConstructing a High-quality Alignment Dataset for Multimodal LLM. We use ChatGPT to translate 3.5K image-text instructions built by MiniGPT-4. Then, we manually select 2k data from AISHELL-2 [15] and write 5 different instructions for speech recognition tasks. We use ChatGPT to translate the ActivityNet dataset [27], followed by manually selecting 1k data and writing 5 different instructions for corresponding video-text tasks. We manually select and rewrite 1k data from self-constructed VSDial-CN data, aiming to enable the model to perform dialogue generation tasks based on images and speech. More details of the data can be found in the appendix, including the details of the training data for the \ufb01rst two stages and the multimodal instruction data. The Third Training Stage. During this stage, we use the constructed compact yet high-quality data to \ufb01netune our model. During \ufb01netuning, we use the prede\ufb01ned prompts in the following template: Question: \\n Answer: In this prompt, represents a randomly sampled instruction from our prede\ufb01ned instruction set, including different forms such as \u201cdescribe this image in detail\u201d, \u201ccan you describe what you notice in the video\u201d, or \u201canswer the question in the speech based on the image\u201d. It should be noted that we do not calculate regression loss speci\ufb01cally for this particular instruction prompt. Therefore, X-LLM can integrate multiple modalities and generate more natural and reliable responses based on various combinations of instructions as needed. 4 Experiments 4.1 Multimodal Chat We have developed a Chatbot demo to show multimodal understanding and conversation abilities of X-LLM. For comparisons, query LLaVA [33]2 and MiniGPT-4 [59]3 from their online demos to get their response. As shown in Table 2 and 3, although LLaVA and MiniGPT-4 also exhibit the characteristic of generating answers based on the given prompt, their answers regarding visual content with Chinese elements are not as satisfactory. In the \ufb01rst example about the Forbidden City shown in Table 2, X-LLM recognizes that the place is the Forbidden City and provides a detailed introduction to its history, architecture, and style. LLaVA describes Chinese palaces and \ufb02ags, but it does not recognize that the famous palace is the Forbidden City and therefore cannot provide relevant information about it. MiniGPT-4 exhibits the same problem and tends to describe the image more. In the second example about the game \u201cHonor of Kings\u201d shown in Table 3, X-LLM identi\ufb01es it as a multiplayer online battle arena game, \u201cHonor of Kings\u201d, developed by Tencent and provides accurate release time. LLaVA, on the other hand, gives multiple incorrect answers, as there are no elements of popular games such as snakes and pocket monsters in the image, and the game is not played with a mouse. MiniGPT-4 fails to recognize the game and provides a more generic description. For video input and speech input, we provide some examples as shown in Appendix B. Quantitative Evaluation. In order to systematically evaluate the performance of the X-LLM model on visual input, we aim to use quantitative metrics to measure the model\u2019s ability to follow instructions. We adopt an evaluation method similar to that proposed by LLaVA [33] and use ChatGPT to measure the quality of the answers generated by our model. Speci\ufb01cally, we use the LLaVA-test dataset [33] provided by LLaVA, which contains 30 randomly selected images from the COCO validation set, each with three types of questions (conversation, detailed description, and complex reasoning). We \ufb01rst translate the questions into Chinese, and X-LLM predicts the answers based on the translated Chinese questions and visual input images. Then we translate the responses given by X-LLM into English for comparison with GPT-4. GPT-4 makes reference predictions based on the question, ground truth bounding boxes, and captions, marking the upper limit. After obtaining the responses from the two models, we provide the question, visual information (in the form of captions and bounding boxes), and generated responses from both assistants to ChatGPT. ChatGPT evaluates the usefulness, 2https://llava-vl.github.io/ 3https://minigpt-4.github.io/ 6 \fModel Conversation Detail description Complex reasoning All LLaVA 83.1 75.3 96.5 85.1 X-LLM 85.4 83.5 84.6 84.5 w/ 4M 74.8 83.7 86.5 81.9 w/ 4M no init 64.7 71.9 85.0 73.8 Table 1: Relative scores for different settings w.r.t. GPT-4 (text-only) on 30 randomly sampled images from COCO Val 2014. Each image is associated one short question, one detailed question, and one complex reasoning question, resulting in a total of 90 questions. We prompt ChatGPT with the answers from our model outputs and the answers by GPT-4 (text-only), and let it compare between both responses and give a rating with an explanation. \u201cw/ 4M\u201d denotes that we train the image interface only using 4M image-text pairs. \u201cw/ 4M no init\u201d denotes that we train the image interface only using 4M image-text pairs and without using the parameters of pretrained BLIP2. relevance, accuracy, and level of detail of the assistants\u2019 responses and gives an overall score from 1 to 10, with higher scores indicating better overall performance. ChatGPT is also required to provide a comprehensive evaluation explanation for a better understanding of the model. LLaVA used GPT-4 as a teacher to evaluate the quality of the responses generated by LLaVA and GPT-4, while we believe that using a non-GPT-4 evaluation model (i.e. using ChatGPT) will be more objective (Also because we do not have GPT-4 API.). Examples of test questions can be found in Appendix A.2. We show the results in Table 1. Although different evaluation models are used (LLaVA uses GPT-4, X-LLM uses ChatGPT), we are able to make rough comparisons. The results show that X-LLM yields a performance of 84.5% nearly GPT-4. X-LLM outperforms LLaVA in terms of conversation and detail description but is inferior in complex reasoning. There are two reasons for this. One reason is that X-LLM do not use the 150k visual instruction dataset proposed by LLaVA, which has the same format as the test set. The second reason is that X-LLM has fewer language model parameters. It is based on ChatGLM with 6B parameters, while LLaVA is based on Vicuna with 13B parameters. And we do not \ufb01netune the LLM while LLaVA \ufb01netune the LLM Vicuna. Furthermore, comparing \u201cX-LLM w/ 4M\u201d and \u201cX-LLM w/ 4M no init\u201d, we can observe that using the BLIP2 pre-trained Q-Former parameters signi\ufb01cantly improves the model\u2019s performance, which This veri\ufb01es the transferability of parameters in the English image text alignment module. The transferability of language greatly increases the possibility of using English image-text data and its trained model parameters, and improves the ef\ufb01ciency of training multimodal LLMs in other languages. Comparing X-LLM and \u201cX-LLM w/ 4M\u201d, we can see that increasing the number of image-text pairs used during training can enhance the model\u2019s performance. However, we also notice that X-LLM performs worse than \u201cX-LLM w/ 4M\u201d in complex reasoning, which may be attributed to the additional use of the Wukong dataset [20], whose quality is inferior to that of the dataset comprising the 4M data. Note that all three variants of X-LLM achieve high performances on complex reasoning which is because of the powerful ability of language models and complex reasoning questions can also be answered without images to some extent. 4.2 ASR and Multimodal ASR We evaluate the ASR and multmodal ASR (MASR) on AISHELL-2 and VSDial-CN datasets. Please refer to Appendix A for details. As shown in Table 4, we compare X-LLM with the CIF-based model mentioned in Section 3.2.1. From the results in the table, overall, the speech recognition performance of X-LLM is weaker than that of the CIF-based Model. This is because the CIF-based model is trained with full parameters, while X-LLM only trains the BERT in the C-Former and the S-Adapter. In the results of multimodal speech recognition on the VSDial-CN dataset, the introduction of images caused a decrease in the speech recognition performance of X-LLM (from 3.4 to 4.7 on test-orig and from 4.7 to 6.8 on test-art). This may be because we provided X-LLM with a \ufb01xed single instruction \u201cPlease faithfully recognize the speech\u201d during the second training stage for speech recognition. However, when conducting multimodal speech recognition, we replace the instruction with \u201cPlease faithfully recognize the speech based on the image.\u201d. After \ufb01netuning X-LLM with 2k speech recognition-related instructions in the third stage, the error rate of X-LLM\u2019s multimodal speech recognition decreased from 4.7 to 4.3 on test-orig and from 6.8 to 6.3 on test-art. In the future, we will supplement the training process by using various speech recognition instructions in the second stage and increasing the amount of 7 \fVisual input example, The Forbidden City: User \u8be6\u7ec6\u4ecb\u7ecd\u4e00\u4e0b\u8fd9\u4e2a\u5730\u65b9(Introduce the place in details) X-LLM \u5317\u4eac\u7684\u6545\u5bab\u662f\u4e2d\u56fd\u5c01\u5efa\u738b\u671d\u7687\u5bb6\u5bab\u6bbf\u7684\u4ee3\u8868\uff0c\u4e5f\u662f\u4e16\u754c\u4e0a\u4fdd\u5b58\u6700\u5b8c\u6574\u7684\u53e4\u4ee3\u5bab\u6bbf \u5efa\u7b51\u7fa4\u4e4b\u4e00\uff0c\u4f4d\u4e8e\u5317\u4eac\u5e02\u4e2d\u5fc3\u7d2b\u7981\u57ce\u5185\uff0c\u662f\u4e2d\u56fd\u6700\u91cd\u8981\u7684\u6587\u5316\u9057\u4ea7\u4e4b\u4e00\u3002 \u6545\u5bab\u59cb\u5efa\u4e8e\u660e\u671d\u6c38\u4e50\u5e74\u95f4\uff0c\u662f\u4e2d\u56fd\u53e4\u4ee3\u5bab\u5ef7\u5efa\u7b51\u4e4b\u7cbe\u534e\uff0c\u4e5f\u662f\u4e2d\u534e\u6c11\u65cf\u6587\u5316\u7684\u7470 \u5b9d\u4e4b\u4e00\u3002\u6545\u5bab\u5360\u5730\u9762\u79ef\u7ea672\u4e07\u5e73\u65b9\u7c73\uff0c\u5efa\u7b51\u9762\u79ef\u7ea615\u4e07\u5e73\u65b9\u7c73\uff0c\u662f\u4e16\u754c\u4e0a\u73b0\u5b58\u6700 \u5927\u3001\u4fdd\u5b58\u6700\u5b8c\u6574\u7684\u53e4\u4ee3\u5bab\u6bbf\u5efa\u7b51\u7fa4\u4e4b\u4e00\u3002 \u6545\u5bab\u5efa\u7b51\u7fa4\u5e03\u5c40\u4e25\u8c28\u3001\u7cbe\u7f8e\uff0c\u5305\u62ec\u4e7e\u6e05\u5bab\u3001\u4ea4\u6cf0\u6bbf\u3001\u4fdd\u548c\u6bbf\u3001\u5348\u95e8\u3001\u4e09\u5927\u6bbf\u3001\u592a \u548c\u6bbf\u3001\u4e7e\u6e05\u5bab\u3001\u517b\u5fc3\u6bbf\u3001\u7d2b\u7981\u57ce\u7b49108\u4e2a\u5efa\u7b51\u548c\u90e8\u95e8\uff0c\u67099000\u591a\u95f4\u623f\u95f4\uff0c\u62e5\u6709\u73cd\u8d35 \u7684\u6587\u7269\u548c\u827a\u672f\u54c1\uff0c\u88ab\u8a89\u4e3a\u4e16\u754c\u7470\u5b9d\u3002 \u6545\u5bab\u7684\u5efa\u7b51\u98ce\u683c\u4ee5\u9ec4\u8272\u7409\u7483\u74e6\u9876\u3001\u7ea2\u5899\u9ec4\u74e6\u3001\u6a90\u89d2\u98de\u6a90\u4e3a\u4e3b\uff0c\u8272\u5f69\u9c9c\u8273\uff0c\u5bcc\u4e3d\u5802 \u7687\u3002\u6545\u5bab\u7684\u6587\u7269\u4e30\u5bcc\u591a\u5f69\uff0c\u5305\u62ec\u5bab\u5ef7\u7528\u54c1\u3001\u827a\u672f\u54c1\u3001\u56fe\u4e66\u3001\u5178\u7ae0\u5236\u5ea6\u3001\u5386\u53f2\u6587\u732e \u7b49\uff0c\u88ab\u8a89\u4e3a\u4e2d\u56fd\u53e4\u4ee3\u6587\u5316\u4e4b\u5b9d\u5e93\u3002 \u6545\u5bab\u662f\u4e2d\u56fd\u5386\u53f2\u548c\u6587\u5316\u7684\u91cd\u8981\u8c61\u5f81\uff0c\u4e5f\u662f\u4e2d\u56fd\u4eba\u6c11\u7684\u73cd\u8d35\u9057\u4ea7\uff0c\u5438\u5f15\u7740\u4f17\u591a\u56fd\u5185 \u5916\u6e38\u5ba2\u524d\u6765\u53c2\u89c2\u548c\u63a2\u7d22\u3002 (Translated in English:) The Forbidden City, located in the center of Beijing within the Imperial Palace complex, is a representation of the royal palaces of feudal dynasties in China and one of the most complete ancient palace architectural complexes in the world. It is one of the most important cultural heritages in China. The construction of the Forbidden City began in the Yongle reign of the Ming Dynasty, and it represents the essence of ancient Chinese court architecture and one of the treasures of Chinese culture. The Forbidden City covers an area of about 720,000 square meters, with a building area of about 150,000 square meters, making it one of the largest and most complete ancient palace architectural complexes in the world. The layout of the Forbidden City is rigorous and exquisite, including 108 buildings and departments such as the Qianqing Palace, Jiaotai Hall, Baohe Hall, Meridian Gate, Three Great Halls, Taihe Palace, Qianqing Palace, Yangxin Hall, and the Purple Forbidden City. It has more than 9,000 rooms and houses precious cultural relics and artworks, making it a world treasure. The architectural style of the Forbidden City is characterized by yellow glazed tile roofs, red walls, yellow tiles, and \ufb02ying eaves, with bright colors and magni\ufb01cent grandeur. The cultural relics in the Forbidden City are rich and diverse, including court articles, artworks, books, systems of laws and regulations, and historical documents, making it a treasure trove of ancient Chinese culture. The Forbidden City is an important symbol of Chinese history and culture, as well as a precious heritage of the Chinese people, attracting numerous domestic and foreign tourists to visit and explore. User Introduce the place in details LLaVA [33] The image features a large and beautiful red palace, which appears to be a Chinese temple or government building, situated on a grassy hill. The main building is surrounded by several smaller buildings, all of which are adorned with red \ufb02ags. These \ufb02ags display the Chinese \ufb02ag design, emphasizing the cultural signi\ufb01cance of the location. In the foreground, there are various vehicles, including a few cars and trucks, parked close to the palace grounds. Additionally, a person can be seen standing further away from the main building, possibly admiring the architecture or waiting for someone. With its grand architecture and serene setting, the palace serves as an impressive landmark in the area. User Introduce the place in details MiniGPT-4 [59] The image shows a red building with a large gate in the front. The building has a large red roof and white walls. There are several \ufb02ags on the roof of the building, including the Chinese \ufb02ag. The building is surrounded by green grass and bushes. The sky is clear and blue. Table 2: Example prompt demonstrating X-LLM, LLaVA and MiniGPT-4\u2019s Chinese characteristic visual input capability. The prompt requires image understanding. 8 \fVisual input example, Honor of Kings: User \u8fd9\u662f\u4ec0\u4e48\u6e38\u620f\uff1f\u4ecb\u7ecd\u4e00\u4e0b\u3002(What game is this? Introduce to me.) X-LLM \u738b\u8005\u8363\u8000\u662f\u7531\u817e\u8baf\u6e38\u620f\u5f00\u53d1\u5e76\u53d1\u884c\u7684\u4e00\u6b3e\u591a\u4eba\u5728\u7ebf\u6218\u6597\u7ade\u6280\u624b\u6e38\uff0c \u4e8e2015\u5e7411\u670826\u65e5\u5728\u4e2d\u56fd\u5185\u5730\u6b63\u5f0f\u53d1\u884c\uff0c\u662f\u4e00\u6b3e\u7ade\u6280\u6e38\u620f\uff0c\u73a9\u5bb6\u53ef\u4ee5\u9009\u62e9\u4e0d\u540c \u7684\u82f1\u96c4\u8fdb\u884c\u6218\u6597\uff0c\u5e76\u4e0e\u5176\u4ed6\u73a9\u5bb6\u7ec4\u961f\u7ade\u6280\u3002\u738b\u8005\u8363\u8000\u5728\u4e2d\u56fd\u53d7\u5230\u4e86\u5e7f\u6cdb\u6b22\u8fce\uff0c\u5e76 \u4e14\u5728\u5168\u7403\u8303\u56f4\u5185\u4e5f\u62e5\u6709\u5927\u91cf\u7684\u73a9\u5bb6\u3002 (Translated in English:) Honor of Kings is a multiplayer online battle arena (MOBA) mobile game developed and published by Tencent Games. It was of\ufb01cially released in mainland China on November 26, 2015. As a competitive game, players can select different heroes to \ufb01ght and compete with other players in teams. Honor of Kings has gained widespread popularity in China and has a large player base globally. User What game is this? Introduce to me. LLaVA [33] The image shows a computer screen with a game that appears to involve a combination of elements from popular games like snake and pokemon. The game is being played with a mouse, and the screen is \ufb01lled with many small images, likely representing different characters or items within the game. The game seems to be set in a grassy \ufb01eld, where the player encounters and engages with various creatures or obstacles. As an AI, I am unable to provide further information on the speci\ufb01c game or its rules, but it is clear that the user is actively engaged in playing this unique game on their computer. User What game is this? Introduce to me. MiniGPT-4 [59] This is a screenshot of a game with a character selection screen. The game appears to be a multiplayer game with various characters to choose from. The game appears to be a role-playing game with different characters and abilities. Table 3: Example prompt demonstrating X-LLM, LLaVA and MiniGPT-4\u2019s Chinese characteristic visual input capability. The prompt requires image understanding. Model AISHELL-2 VSDial-CN test-ios test-android test-mic test-orig test-art S S + V S S + V CIF-based Model 6.0 6.4 6.4 2.8 1.7 6.0 5.3 X-LLM 8.7 9.0 9.4 3.4 4.7 / 4.3 4.7 6.8 / 6.3 Table 4: Performance of X-LLM on AISHELL-2 and VSDial-CN (CER%). \u201cS\u201d denotes only using speech for ASR. \u201cS+V\u201d denotes using speech and images for MASR. data for \ufb01netuning instructions in the third stage to observe the changes in the multimodal speech recognition ability of X-LLM. Additionally, a more powerful LLM may have stronger instruction generalization, which could improve the performance of multimodal speech recognition. We observe that although the addition of images to X-LLM\u2019s speech recognition task results in a slight decrease in performance, X-LLM is able to comprehend spoken questions in speech without \ufb01netuning, and provide appropriate responses. It can also incorporate images to provide suitable answers to spoken questions. After a small amount of data \ufb01netuning in the third phase, X-LLM\u2019s ability in this regard is further improved. 9 \f5 Discussions This paper demonstrates the effectiveness of X-LLM, which injects multiple modalities as foreign languages into a large language model through the X2L interface, endowing LLM with powerful multimodal capabilities. We design a three-stage training method to train X-LLM, where each modality interface has high independence in the \ufb01rst two stages, facilitating simultaneous training. Through the \ufb01rst two stages of training, X-LLM can interact with each modality through language. Furthermore, X-LLM can complete tasks involving multiple modalities (such as visual spoken question answering) without further \ufb01netuning on joint instruction datasets, thanks to its integration of the instruction generalization ability of large language models and its adaptation to the multimodal domain. The integration of multiple modalities without training greatly facilitates the modality expansion of X-LLM. To further explore the impact of joint multimodal instruction data on X-LLM\u2019s ability to integrate multiple modalities, we construct a streamlined but high-quality multimodal instruction dataset, and X-LLM\u2019s performance is further improved after \ufb01ne-tuning on this data. This project is still ongoing and currently has several limitations: (1) Limitations of the language model. X-LLM is built on top of ChatGLM with only 6B parameters and inherits its limitations, including but not limited to unreliable reasoning ability and fabrication of non-existent facts. (2) Insuf\ufb01cient training for modal connections. X-LLM\u2019s multi-modal perception ability is somewhat limited. We only used a small amount of multi-modal data sets to connect the multi-modal encoder and a large language model. There are several directions for further exploration: (1) Data scale. Compared to BLIP2, we only used a small amount of Chinese multimodal data. We believe that using larger Chinese data for training can signi\ufb01cantly improve the model\u2019s performance by increasing concept coverage. (2) Connecting more modalities. We can connect audio to enable LLM to understand and interact with non-verbal audio. We can also connect the status information of various terminals to LLM, so that LLM can control the terminals based on their status information. (3) Using better LLM. Due to the limitation of computing resources, we only used a 6B language model for experimentation. It can be expected that using a stronger language model, X-LLM will gain more powerful capabilities."
+ },
+ {
+ "url": "http://arxiv.org/abs/2204.07302v1",
+ "title": "Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning",
+ "abstract": "Visual Dialog is a challenging vision-language task since the visual dialog\nagent needs to answer a series of questions after reasoning over both the image\ncontent and dialog history. Though existing methods try to deal with the\ncross-modal understanding in visual dialog, they are still not enough in\nranking candidate answers based on their understanding of visual and textual\ncontexts. In this paper, we analyze the cross-modal understanding in visual\ndialog based on the vision-language pre-training model VD-BERT and propose a\nnovel approach to improve the cross-modal understanding for visual dialog,\nnamed ICMU. ICMU enhances cross-modal understanding by distinguishing different\npulled inputs (i.e. pulled images, questions or answers) based on four-way\ncontrastive learning. In addition, ICMU exploits the single-turn visual\nquestion answering to enhance the visual dialog model's cross-modal\nunderstanding to handle a multi-turn visually-grounded conversation.\nExperiments show that the proposed approach improves the visual dialog model's\ncross-modal understanding and brings satisfactory gain to the VisDial dataset.",
+ "authors": "Feilong Chen, Xiuyi Chen, Shuang Xu, Bo Xu",
+ "published": "2022-04-15",
+ "updated": "2022-04-15",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "main_content": "INTRODUCTION Recently, with the rise of pre-trained models [2], researchers have begun to explore vision-and-language task [3, 4, 5] with pre-trained models [1]. Speci\ufb01cally, visual dialog [6, 7, 8, 9], which aims to hold a meaningful conversation with a human about a given image, is a challenging task that requires models have suf\ufb01cient cross-modal understanding based on both visual and textual context to answer the current question. One way to gain suf\ufb01cient cross-modal understanding is through utilizing kinds of attention mechanism [10, 11, 12]. ReDAN [13] and DMAM [14] use multi-step reasoning based on dual attention to learn cross-modal understanding. DAN [15], MCAN [7] and LTMI [16] utilize multi-head attention mechanisms to manage multimodal intersection. Moreover, there are some approaches [17, 18, 19, 20, 21] using graph-based structures to learn cross-modal understanding. However, the approaches mentioned above do not utilize pretrained models, which have a strong power to deal with vision-andlanguage tasks. Visdial-BERT [22] and VD-BERT [1] take advantage of the pre-trained model to greatly improve the performance of the visual dialog task. As shown in Figure 1, the SOTA model VDBERT often makes mistakes and usually ranks the wrong answers \ufb01rst. VD-BERT does not have enough cross-modal understanding Fig. 1. A motivating example of cross-modal understanding of VDBERT [1]. We show the candidates ranking results of VD-VBERT based on its cross-modal understanding. It can be seen that in the \ufb01rst 8 candidates, wrong answers account for most of them, and the ranking results of correct answers are not so good. capabilities, so that it often scores unrelated wrong answers very high, such as the top 1 candidate answer \u201cno\u201d to the question Q4 \u201cis the food in his mouth ?\u201d shown in Figure 1. In this paper, we propose a novel approach to improve the crossmodal understanding for visual dialog, named ICMU. ICMU enhances cross-modal understanding by distinguishing different pulled inputs (i.e. pulled images, questions or answers) based on four-way contrastive learning. What\u2019s more, ICMU exploits the single-turn visual question answering to enhance the visual dialog model\u2019s crossmodal understanding to handle a multi-turn visually-grounded conversation. Experiments show that the proposed approach improves the visual dialog model\u2019s cross-modal understanding and brings satisfactory gain on the VisDial dataset [5]. The contributions of this work are summarized as follows: \u2022 We propose a novel approach ICMU, including 4-way contrastive learning and enhancing by utilizing VQA, to improve the cross-modal understanding based on vision-and-language pre-trained models for visual dialog. \u2022 We conduct extensive experiments and ablation studies on the large-scale datasets VisDial v1.0. Experimental results show that our approach improves the visual dialog model\u2019s crossmodal understanding and brings satisfactory gain. arXiv:2204.07302v1 [cs.CV] 15 Apr 2022 \fFig. 2. The Framework of our ICMU. * indicates the pulled inputs. 2. METHODOLOGY In this section, we \ufb01rst formally describe the visual dialog task. Given a current question Qt with an image I at t-th turn, as well as its dialog history Ht = {C, (Q1, A1), ..., (Qt\u22121, At\u22121)} (where C denotes the image caption), the dialog model is required to predict its answer At by ranking a list of 100 answer candidates { \u02c6 A1 t, \u02c6 A2 t, ..., \u02c6 A100 t }. Figure 2 shows the overview of our approach. First, we employ a uni\ufb01ed vision-dialog Transformer to encode both the image and dialog history, where we append an answer candidate \u02c6 At in the input to model their interactions in an early fusion manner. Next, we adopt cross-modal masked token loss and cross-modal contrastive loss to train the model for effective cross-modal understanding in visual dialog. In addition, we exploit the single-turn visual question answering to enhance the visual dialog model\u2019s cross-modal understanding to handle a multi-turn visually-grounded conversation. 2.1. Vision-Dialog Transformer 2.1.1. Visual Features. Given an image I, we employ Faster R-CNN [23] pre-trained on Visual Genome [24] to extract the object-level vision features RI = {o1, ..., ok}, where each object feature oi is a 2048-d Region-ofInterest (RoI) feature. k is \ufb01xed to 36 in our setting. In addition, we adopt normalized bounding box coordinates as the spatial location due to disorder of visual objects. Speci\ufb01cally, we de\ufb01ne the location information by constructing a 5-d vector: pi = ( x1 W , y1 H , x2 W , y2 H , (x2\u2212x1)(y2\u2212y1) W H ), where (x1, y1) and (x2, y2) are the coordinates of the bottom-left and top-right corner of the i-th object, W and H respectively denote the width and height of the input image, and the last element is the relative area of the object. We also extend pi with its class id and con\ufb01dence score for a richer representation to 7-d vector. 2.1.2. Textual Features. For the textual features, we pack all the textual elements (the history, question and answer candidate) into a long sequence and employ WordPiece tokenizer [25] to split it into a word sequence w, where each word is embedded with an absolute positional code following [26]. 2.1.3. Cross-Modality Encoding. Like a most vision-and-language transformers, we integrate the image objects with language elements into a whole input sequence. As shown in Figure 2, we use some special tokens to segment different elements in the input sequence. We use [CLS] to denote the beginning of the sequence, and [SEP] to separate the two modalities. Moreover, we utilize a special token [HIS] to denote end of turn [27], which informs the model when the dialog turn ends. And we use [Ques] and [Ans] to segment the current question and the answer candidate. As such, we prepare the input sequence into the format as x = ([CLS], o1, ..., ok, [SEP], C, [His], Q1A1, [His], ..., [Ques], Qt, [Ans], \u02c6 At, [SEP]). Finally, We combine each input token embedding with its position embedding and segment embedding (0 or 1, indicating whether it is image or text) and then perform layer normalization [28]. 2.1.4. Transformer Backbone. We utilize transformer encoder as the Transformer backbone to handle cross-modal understanding. Formally, we denote the embedded vision-language inputs as H0 = [e1, ..., e|x|] and then encode them into multiple levels of cross-modal representations Hl = [hl 1, ..., hl |x|] using L-stacked Transformer blocks, where the l-th Transformer block is denoted as Hl = Transformer(Hl\u22121), l \u2208 [1, L]. Speci\ufb01cally, the cross-modal representations Hl is calculated by using the multi-head self-attention [29] as follows: Q = Hl\u22121WQ l , K = Hl\u22121WK l , V = Hl\u22121WV l , (1) Mij = ( 0, allow to attend, \u2212\u221e, prevent from attending, (2) Al = softmax(QKT \u221adk + M)V, (3) where WQ l , WK l , WV l \u2208Rdh\u00d7dk are learnable weights for computing the queries, keys, and values respectively, and M \u2208R|x|\u00d7|x| is the self-attention mask that determines whether tokens from two \fModel NDCG MRR R@1 R@5 R@10 Mean ReDAN 57.63 64.75 51.10 81.73 90.90 3.89 GNN-EM 52.82 61.37 47.33 77.98 87.83 4.57 DualVD 56.32 63.23 49.25 80.23 89.70 4.11 FGA 56.90 66.20 52.75 82.92 91.07 3.80 CAG 56.64 63.49 49.85 80.63 90.15 4.11 KBGN 57.60 64.13 50.47 80.70 90.16 4.08 LG 58.55 64.00 50.63 80.58 90.20 4.12 GoG 60.38 63.13 49.88 79.65 89.05 4.39 VD-BERT 59.96 65.44 51.63 82.23 90.68 3.90 ICMU (Ours) 61.30 66.82 53.50 83.05 92.05 3.59 Table 1. Main comparisons on VisDial v1.0 test datasets (online). Our approach improves the strong baseline signi\ufb01cantly. (t-test, pvalue<0.01) sources can attend each other. Then Al is passed into a feedforward layer to compute Hl for the next layer: Hl = FFN(Al) (4) 2.2. Cross-Modal Training Objectives To make the model learn cross-modal understanding, we use two cross-modal training losses\u2014cross-modal masked token loss and cross-modal contrastive loss: L = LCMT L + LCCL4, (5) where LCMT L is the cross-modal masked token loss and LCCL4 is a novel 4-way contrastive loss. 2.2.1. Cross-modal Masked Token Loss At each iteration, we randomly mask each input token with probability 15% and replace the masked one with a special token [MASK]. The model is then required to recover them based not only on the surrounding tokens w\\m but also on the image I by minimizing the negative log-likelihood: LCMT L = \u2212E(I,w)\u223cD log P(wm|w\\m, I), (6) where wm refers to the masked token and D denotes the training set. 2.2.2. Cross-modal Contrastive Loss As shown in Figure 2, to compute contrastive losses, for each input quartette X = (I, H, Q, A), we construct three types of negative (unmatched) quartettes, where I denotes the image, H denotes the history, Q denotes the question, A denotes the answer. The \ufb01rst one is the polluted image (I\u2217, H, Q, A), the second is the polluted question (I, H, Q\u2217, A) and the \ufb01nal one is the polluted answer (I, H, Q, A\u2217), where \u2217denotes the polluted input. Since the encoding of [CLS] can be viewed as a representation of the quartette X = (I, H, Q, A), we apply a fully-connected (FC) layer on top of it as a 4-way classi\ufb01er f(\u00b7) to predict whether the quartette is matched (c = 0), contains a polluted I\u2217(c = 1), or contains a polluted Q\u2217(c = 2) or contains a polluted A\u2217(c = 3). The 4-way contrastive loss is de\ufb01ned as LCCL4 = \u2212E(I,H,Q,A;c)\u223cD log P(c|f(I, H, Q, A), (7) where the datasets I, H, Q, A \u2208D contains 50% matched quartettes, and the three negatives evenly divide the remaining 50% in the training set. Model NDCG MRR R@1 R@5 R@10 Mean MN 60.29 46.14 77.68 87.57 4.84 HCIAE 61.96 48.25 78.97 88.43 4.56 CoAtt 62.77 49.38 78.99 88.49 4.56 ReDAN 64.29 50.65 81.29 90.17 4.10 KBGN 59.08 64.86 51.37 81.71 90.54 4.00 LG 59.67 65.03 51.69 81.49 90.32 4.02 GoG 63.15 62.68 49.46 78.77 87.87 4.81 VisDial-BERT 62.64 67.86 54.54 84.34 92.36 3.44 VD-BERT 63.22 67.44 54.02 83.96 92.33 3.53 ICMU (Ours) 64.30 69.14 56.80 85.09 93.42 3.37 Table 2. Main comparisons on VisDial v1.0 val datasets. Our approach improves the strong baseline signi\ufb01cantly. (t-test, pvalue<0.01) Model NDCG MRR R@1 R@5 R@10 Mean ICMU 64.30 69.14 56.80 85.09 93.42 3.37 VQA 63.32 67.62 54.50 84.10 92.90 3.44 CL 63.34 67.90 54.82 84.35 92.43 3.52 Table 3. Ablation study on VisDial v1.0 val datasets. \u201cVQA\u201d denotes enhancing by utilizing VQA. \u201cCL\u201d denotes the 4-way contrastive learning. 2.3. Using VQA to Enhance Visual Dialog Although VQA is single-turn, VQA models and visual dialog models require similar cross-modal understanding capabilities. We use VQA to enhance visual dialogue. We exploit the training and val split of VQA v2.0 dataset, which contains the same images as VisDial v1.0 train split. As there is no caption for the image in VQA v2.0, we use VisDial v1.0 to construct a caption for each image in the VQA v2.0. Thus each input from VQA v2.0 can be de\ufb01ned as (I, C, Q, A), where I denotes the image, C denotes the constructed caption, Q denotes the question, A denotes the answer. We let the history H be null. 3. EXPERIMENTS 3.1. Experiment Setup 3.1.1. Datasets and Implementation Details. We evaluate our model on the VisDial v1.0 datasets [30]. Specifically, v1.0 contains a training set of 123287 images, a validation set of 2048 images and a testing set (hosted blindly in the task organizers\u2019 server) of 8,000 images. Each image is associated with one caption and 10 question-answer pairs. For each question, it is paired with a list of 100 answer candidates, one of which is regarded as the correct answer. VQA v2.0 contains the same 123287 images as VisDial v.10 but different question-answer pairs. We use BERTBASE as the backbone, which consists of 12 Transformer blocks, each with 12 attention heads and a hidden state dimensions of 768. We use Adam [31] with an initial learning rate of 3e\u22125 and a batch size of 80 to train our model. A linear learning rate decay schedule with a warmup of 0.1 is employed. We \ufb01rst train our model for 20 epochs on a cluster of 4 A100 GPUs with 40G memory using CMTL and CCL4 losses (with equal coef\ufb01cients). Here we only utilize one previous dialog turn for training ef\ufb01ciency. After that, we train for another 15 epochs only using CCL4 losses. Dur\fFig. 3. Case study. ing inference, we rank the answer candidates according to the class score c = 0 of the CCL4 loss. 3.1.2. Automatic Evaluation We use a retrieval setting to evaluate individual responses at each round of a dialog, following [5]. Speci\ufb01cally, at test time, apart from the image, ground truth dialog history and the question, a list of 100candidate answers is also given. The model is evaluated on retrieval metrics: (1) Mean Rank of human response (Mean \u2193), (2) Existence of the human response in top \u2212k ranked responses, i.e., R@k \u2191(3) Mean Reciprocal Rank (MRR \u2191) of the human response and (4) Normalized Discounted Cumulative Gain (NDCG \u2191) for VisDial v1.0. 3.2. Main Results 3.2.1. Baseline Methods We compare our method with the following baseline methods: (1) Attention-based models: HCIAE [10], CoAtt [11], ReDAN [13], LG [32]. (2) The pretraining model: VD-BERT [1] and VisDialBERT [22]. (4) Graph-based models: GNN-EM [17], DualVD [19], FGA [18], GoG [6], KBGN [21]. 3.2.2. Results Performance on the benchmarks VisDial is shown in Table 1 and Table 2. From the results on VisDial v1.0 test shown in Table 1, we can observe that: (1) ICMU outperforms previous works on all metrics and obtains R@1 at 53.50%, beating the previous method VD-BERT by 1.47%, which shows that ICMU can select the standard groundtruth more accurate. (2) Comparing the performance of ICMU and model VD-BERT on NDCG, ICMU beats the pre-trained model VDBERT by 1.34%. This shows the superiority of our proposed method to understand cross-modal information at a \ufb01ne-grained level. Note that NGCG is invariant to the order of options with identical relevance and to the order of options outside of the top K, where K is the number of answers marked as correct by at least one annotator. (3) Our approach is not only more accurate (R@1, Mean), but also better than previous models on multi-modal semantic understanding (NDCG). From the results on VisDial v1.0 val shown in Table 2, we can get the same observations. From the ablation study on VisDial v1.0 val shown in Table 3, we can observe that: (1) Both cross-modal contrastive learning and enhancement by VQA bring satisfactory improvements. (2) cross-modal contrastive learning and enhancement by VQA can get along with each other and further improve the performance of the model. 3.2.3. Case Study As shown in Figure 3, we provide two samples to analyze the crossmodal understanding of VD-BERT and ICMU. As shown in the left half of Figure 3, for Q4 \u201cDoes he have food in his mouth?\u201d, there are many reasonable answers to this question. VD-BERT ranks the opposite answer \u2018\u2018no\u2019\u2019 \ufb01rst, and many reasonable answers \u201cyes, it is, it is\u201d are ranked lower. As shown in the right half of Figure 3, for Q4 \u201care there people on bus?\u201d, ICMU outperforms the VD-BERT. This shows that ICMU learns better cross-modal understanding than VD-BERT due to CCL4 and enhancing by VQA. 4."
+ },
+ {
+ "url": "http://arxiv.org/abs/2202.09061v4",
+ "title": "VLP: A Survey on Vision-Language Pre-training",
+ "abstract": "In the past few years, the emergence of pre-training models has brought\nuni-modal fields such as computer vision (CV) and natural language processing\n(NLP) to a new era. Substantial works have shown they are beneficial for\ndownstream uni-modal tasks and avoid training a new model from scratch. So can\nsuch pre-trained models be applied to multi-modal tasks? Researchers have\nexplored this problem and made significant progress. This paper surveys recent\nadvances and new frontiers in vision-language pre-training (VLP), including\nimage-text and video-text pre-training. To give readers a better overall grasp\nof VLP, we first review its recent advances from five aspects: feature\nextraction, model architecture, pre-training objectives, pre-training datasets,\nand downstream tasks. Then, we summarize the specific VLP models in detail.\nFinally, we discuss the new frontiers in VLP. To the best of our knowledge,\nthis is the first survey focused on VLP. We hope that this survey can shed\nlight on future research in the VLP field.",
+ "authors": "Feilong Chen, Duzhen Zhang, Minglun Han, Xiuyi Chen, Jing Shi, Shuang Xu, Bo Xu",
+ "published": "2022-02-18",
+ "updated": "2022-07-30",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL"
+ ],
+ "main_content": "Introduction Making machines respond in ways similar to humans has been a relentless goal of AI researchers. To enable machines to perceive and think, researchers propose a series of related tasks, such as face recognition, reading comprehension, and human-machine dialogue, to train and evaluate the intelligence of machines in a particular aspect. Speci\ufb01cally, domain experts manually construct standard datasets and then train and evaluate relevant models on them. However, due to the limitations of related technologies, it is often necessary to train on a large amount of labelled data to obtain a better and more capable model. The recent emergence of pre-training models based on the Transformer structure [1] has alleviated this problem. They are \ufb01rst pre-trained via self-supervised learning that typically exploits auxiliary tasks (pre-training objectives) to mine supervision signals from large-scale unlabelled data to train the model, thereby learning universal representations. Then they can achieve surprising e\ufb00ectiveness by \ufb01ne-tuning with only a tiny amount of manually-labelled data on downstream tasks. Since the advent of BERT [2] in natural language processing (NLP), various pre-training models have sprung up in the uni-modal \ufb01eld, such as Vision Transformer (ViT) [3] in computer vision (CV) and Wave2Vec [4] in speech. Substantial works have shown they are bene\ufb01cial for downstream uni-modal tasks and avoid training a new model from scratch. Similar to the uni-modal \ufb01eld, there is also a problem of less highquality labelled data in the multi-modal \ufb01eld. The natural question is, can the above pre-training method be applied to multi-modal tasks? Researchers have explored this problem and made signi\ufb01cant progress. In this paper, we focus on mainstream vision-language pre-training (VLP), including image-text and video-text pre-training. VLP mainly learns the semantic correspondence between di\ufb00erent modalities by pre-training on large-scale data. For example, in image-text pre-training, we expect the model to associate \u201cdog\u201d in text with what \u201cdog\u201d looks like in images. In video-text pre-training, we expect the model to map objects/actions in the text to objects/actions in the video. To achieve this goal, the VLP objects and model architecture need to be cleverly designed to allow the model to mine the associations between di\ufb00erent modalities. To give readers a better global grasp of VLP, we \ufb01rst comprehensively review its recent advances and focus on \ufb01ve signi\ufb01cant aspects: \u2022 Feature extraction. This section includes the preprocessing and representation methods of image, video, and text in VLP models (see Section 2). \u2022 Model architecture. We introduce the architecture of the VLP models from two di\ufb00erent perspectives: Single-stream versus Dual-stream from multi-modal fusion perspective, and Encoder-only versus Encoder-decoder from the overall architectural design perspective (see Section 3). \fVLP: A Survey on Vision-Language Pre-training 3 \u2022 Pre-training objectives. Pre-training objectives are the core of VLP, mainly used to guide the model to learn vision-language associated information. We summarize typical and characteristic pre-training objectives divided into completion, matching, temporal, and particular types (see Section 4). \u2022 Pre-training datasets. Data is critical for VLP. We brie\ufb02y introduce mainstream corpora for VLP and their speci\ufb01c sizes (see Section 5). \u2022 Downstream tasks. Various tasks requires a cooperative knowledge of both vision and language. We discuss the basic details and goals of these tasks (see Section 6). Then we summarize the speci\ufb01c state-of-the-art (SOTA) VLP models in detail (see Section 7). Finally, We conclude the paper and have broad discussions on new frontiers in VLP (see Section 8). Although there are many surveys on pretrained language models [5, 6] and pretrained vision models [7], to the best of our knowledge, this is the \ufb01rst survey focused on VLP. We hope that our survey can help researchers better understand this \ufb01eld and inspire them to design better models. 2 Feature Extraction This section describes how VLP models preprocess and represent an image, video and text to obtain counterpart features. 2.1 Feature Extraction 2.1.1 Image Feature Extraction (1) OD-based Region Features (OD-RFs). Most previous work [8, 9, 10] on VLP utilizes pre-trained object detectors to extract visual features. The most commonly used object detection model is Faster R-CNN [11] with bottom-up attention [12]. It is designed to identify objects belonging to certain classes and localize them with bounding boxes. By using the Faster R-CNN, VLP models obtain the OD-based Region feature embedding V = [o1, o2, . . . , ok] of an image with k selected regions. Each region feature oi is a 2048-d Region-of-Interest (RoI) feature with its bounding box. The bounding box is de\ufb01ned by the coordinates of the bottom-left and top-right corners of the region. VLP models use bounding boxes to construct 5-d vectors, and the vector is embedded into a high-dimensional representation (2048-d) named visual geometry embedding. The OD-RFs are obtained by adding the OD-based Region feature embedding with its visual geometry embedding. Although ODFs have brought impressive performance, extracting region features can be time-consuming. To relieve this problem, the pre-trained object detectors are usually frozen during pre-training, which can limit the capacity of VLP models. \f4 VLP: A Survey on Vision-Language Pre-training (2) CNN-based Grid Features (CNN-GFs). VLP models [13, 14] extract visual features by utilizing convolutional neural networks (CNNs) to obtain the grid features. On the one hand, VLP models can train the CNNs end-to-end by using the grid features [15] directly. On the other hand, VLP models can also \ufb01rst discretize grid features using a learned vision dictionary, then feed them into the cross-modal module. (3) ViT-based Patch Features (ViT-PFs). Inspired by ViT [3, 16], VLP models reshape the image Ii \u2208RH\u00d7W \u00d7C into a sequence of \ufb02attened 2D patches Ip \u2208RN\u00d7(P 2\u00b7C), where (H, W) is the resolution of the original image, C is the number of channels, (P, P) is the resolution of each image patch, and N = HW/P 2 is the resulting number of patches, which also serves as the e\ufb00ective input sequence length for the Transformer. An input image Ii is encoded into a sequence of embeddings: {vcls, v1, ..., vN}, where vcls is the embedding of the [CLS] token. 2.1.2 Video Feature Extraction A video clip is denoted as M frames (images). VLP models [17, 18] extract the frame features by using the method mentioned above. The two most commonly used features are CNN-GFs and ViT-PFs. For CNN-GFs, VLP models \ufb01rst use ResNet [19] pre-trained on ImageNet [20] or SlowFast [21] and I3D [22] pre-trained on Kinetics [23] to extract 2D and 3D visual features for each video frame. These features are concatenated as visual features and fed through a fully-connected (FC) layer to be projected into the same lower-dimensional space as token embeddings. For ViT-PFs, a video clip Vi \u2208RM\u00d7H\u00d7W \u00d7C consisting of M frames of resolution H\u00d7W, where M = 1 for images. Following the protocol in ViT and Timesformer, the input video clip is divided into M \u00d7 N non-overlapping spatio-temporal patches of size P \u00d7 P, where N = HW/P 2. 2.1.3 Text Feature Extraction For the textual features, following pretrained language model such as BERT [2], RoBERTa [24], AlBERT [25], and XLNet [26], VLP models [9, 27, 28] \ufb01rst segment the input sentence into a sequence of subwords. And then, insert a start-of-sequence token and an end-of-sequence token at the beginning and the end of the sequence to generate the input text sequence. Text input representations are computed via summing the corresponding word embedding, text position embedding, and text type embedding. 2.2 Feature Representation To make full use of uni-modal pre-trained models, VLP models can send the visual or text features to a transformer encoder [1]. Speci\ufb01cally, VLP models utilize the standard transformer encoder with random initialization to generate the visual or textual representation. In addition, VLP models can utilize a pretrained visual transformer to encode the ViT-PFs, such as ViT and DeiT [29]. \fVLP: A Survey on Vision-Language Pre-training 5 Cross-Attn Self-Attn Feedforward Visual Features Cross-Attn Self-Attn Feedforward Textual Features (b) Dual-Stream\u00a0Architecture Self-Attn Feedforward Visual Features Textual Features (a) Single-Stream\u00a0Architecture Fig. 1 Illustration of two types of model architectures for VLP. VLP models can use a pre-trained textual transformer to encode the textual features, such as BERT. For simplicity, we name these transformer Xformer. 3 Model Architecture In this section, we introduce the architecture of the VLP models from two di\ufb00erent perspectives: (1) Single-stream versus Dual-stream from multi-modal fusion perspective, and (2) Encoder-only versus Encoder-decoder from the overall architectural design perspective. 3.1 Single-stream versus Dual-stream Single-stream Architecture. The single-stream architecture [9, 30, 31] refers to that the text and visual features are concatenated together, then fed into a single transformer block as shown in Firgue 1 (a). The single-stream structure utilizes merged attention to fuse multimodal inputs. The single-stream architecture is more parametere\ufb03cient, as the same set of parameters is used for both modalities. Dual-stream Architecture. The dual-stream architecture [32, 33] refers to that the text and visual features are not concatenated together but sent to two di\ufb00erent transformer blocks independently, as shown in Firgue 1 (b). These two transformer blocks do not share parameters. To achieve higher performance, cross-attention (as shown by the dotted line in Firgue 1 (b)) are used to enable cross-modal interaction. To achieve higher e\ufb03ciency, there can also be no cross-attention between the visual transformer and textual transformer blocks. 3.2 Encoder-only versus Encoder-decoder Many VLP models adopt the encoder-only architecture, where the cross-modal representations are directly fed into an output layer to generate the \ufb01nal \f6 VLP: A Survey on Vision-Language Pre-training outputs. In contrast, other VLP models advocate using a transformer encoderdecoder architecture, where the cross-modal representations are \ufb01rst fed into a decoder and then to an output layer. 4 Pre-training Objectives This section introduces how we pre-train VLP models by using di\ufb00erent pre-training objectives, which are crucial for learning the universal representation of vision-language. We summarize the pre-training objectives into four categories: completion, matching, temporal, and particular types. \u2022 Completion is to reconstruct the masked element by leverage the unmasked remainders to understand the modality. (see section 4.1, 4.2 and 4.3). \u2022 Matching is to unify the vision and language into a shared hidden space to generate universal vision-language representation (see Section 4.4, 4.5 and 4.6). \u2022 Temporal is to learn good representation by reorder the disrupted input sequence (see Section 4.7) \u2022 Particular types consists of other pre-training objects, such as visual question answering and visual captioning (see Section 4.8). Now we introduce the most used pre-training objectives. 4.1 Masked Language Modeling Masked language modeling (MLM), which was \ufb01rst proposed by Talylor [34] in the literature, is widely known because the BERT model adapted it as a novel pre-training task. To model language conditioned on vision, MLM in VLP models is similar to MLM in pre-training language models (PLMs) but predicts the masked textual tokens not only by the rest of the textual tokens but also by the visual tokens. Empirically, VLP models following BERT randomly mask each textual input token with probability 15% and replace the masked one by using a special token [MASK] 80% of the time, a random textual token 10% of the time and the original token 10% of the time to perform masking. The formal de\ufb01nition is as follows: LMLM = \u2212E(v,w)\u223cD log P(wm|w\\m, v), (1) where v denotes the vision, w denotes the textual tokens, wm denotes the masked textual tokens, w\\m denotes the remained textual tokens and D denotes the training dataset. 4.2 Pre\ufb01x Language Modeling Pre\ufb01x Language Modeling (Pre\ufb01xLM) [14] is uni\ufb01ed of MLM and language modeling (LM). To make the model simultaneously has good understanding and generation ability, Pre\ufb01xLM is proposed to facilitate the model with \fVLP: A Survey on Vision-Language Pre-training 7 solid generation capability that enables text-induced zero-shot generalization without \ufb01netuning. Pre\ufb01xLM di\ufb00ers from the standard LM such that it enables bi-directional attention on the pre\ufb01x sequence and only conducts autoregressive factorization on the remaining tokens. Pre\ufb01xLM under the sequence-to-sequence (seq2seq) framework not only enjoys the bidirectional contextualized representation as in MLM but also can perform text generation similar to LM. The formal de\ufb01nition is as follows: LPre\ufb01xLM = \u2212E(v,w)\u223cD log P(w\u2265Tp|w\u2264TP , v), (2) where TP denotes the length of the pre\ufb01x sequence. 4.3 Masked Vision Modeling To have good understanding on vision or generate images/videos given text, like MLM, masked vision modeling (MVM) [30] samples vision (image or video) regions or patches and usually masks their visual features with a probability of 15%. VLP models need to reconstruct the masked visual features given the remaining visual features and all the textual features. The masked visual features are set to zeros. Because visual features are high-dimensional and continuous, VLP models propose two variants for MVM. (1) Masked Features Regression learns to regress the model output of masked features to its original visual features. VLP models convert the model output of the masked features to a vector of the same dimension as the original visual features \ufb01rst and apply L2 regression between the original visual features and the vector. The formal de\ufb01nition is as follows: LMVM = E(v,w)\u223cDf(vm|v\\m, w), (3) f(vm|v\\m, w) = K X i=1 \u2225h(vi m) \u2212O(vi m))\u22252 2, (4) where h(vi m) denotes the predicted vision representation and O(vi m) denotes the original vision representation. (2) Masked Feature Classi\ufb01cation learns to predict the object semantic class for the masked features. VLP models \ufb01rst feed the output of the masked features into an FC layer to predict the scores of object class, which further goes through a softmax function to be transformed into a prediction normalized distribution. Note that there is no ground-truth label. There are two kinds of methods to train VLP models. One is that VLP models take the most likely object class from the object detection model as the hard label (w.p. 0 or 1), assuming the detected object class is the ground-truth label for the masked features and apply cross-entropy loss to \f8 VLP: A Survey on Vision-Language Pre-training minimize the gap between the prediction and pseudo class. The other is that VLP models utilize soft label as supervision signal, which is the raw output from the detector (i.e., a distribution of object classes) and minimize the KL divergence between two distributions. The formal de\ufb01nition is as follows: LMVM = E(v,w)\u223cDf(vm|v\\m, w). (5) We use the object detection output from Faster R-CNN, and take the detected object category as the label of the masked region: f1(vm|v\\m, w) = K X i=1 CE(c(vi m) \u2212g1(vi m))), (6) where g1(vi m) the detected detected object category and K denotes the number of vision regions. We avoid this assumption by using soft label as supervision signal, which is the raw output from the detector: f2(vm|v\\m, w) = K X i=1 DKL(\u02c6 c(vi m) \u2212g2(vi m))). (7) where g1(vi m) the detected detected object category distribution. 4.4 Vision-Language Matching Vision-Language Matching (VLM) [35] is the most commonly used pre-training objective to align vision and language, which aims to project vision and language into the same space. In the single-stream VLP models, they use the representation of the special token [CLS] as the fused representation of both modalities. In the dual-stream VLP models, they concatenate the visual representation of the special visual token [CLSV ] and the textual representation of the special textual token [CLST ] as the fused representation of both modalities. VLP models feed the fused representation of both modalities to an FC layer and a sigmoid function to predict a score between 0 and 1, where 0 indicates the vision and language are mismatched, and 1 indicates the vision and language are matched. During training, VLP models sample positive or negative pairs from the dataset at each step. The negative pair is created by replacing the vision or text in a paired sample with randomly selected from other samples. 4.5 Vision-Language Contrastive Learning Vision-Language Contrastive Learning (VLC) [35] also aims to align vision and language. Di\ufb00erent VLM, VLC predicts the matched vision-language pairs from N \u00d7 N possible vision-language pairs given a batch of N vision-language \fVLP: A Survey on Vision-Language Pre-training 9 pairs. Note that there are N 2\u2212N negative vision-language pairs within a training batch. VLP models use the visual representation of the special visual token [CLSV ] and the textual representation of the special textual token [CLST ] to denote the aggregated representation of the vision and language, respectively. VLP models compute the softmax-normalized vision (image or video)-to-text similarity and text-to-vision similarity and leverage cross-entropy losses over vision-to-text and text-to-vision similarities to update themselves. The similarity is often implemented by dot products. The formal de\ufb01nitions are as follows: pv2t m (I) = exp(s(I, Tm)/\u03c4) PM m=1 exp(s(I, Tm)/\u03c4) , (8) pt2v m (T) = exp(s(T, Im)/\u03c4) PM m=1 exp(s(T, Im)/\u03c4) , (9) LVLC = 1 2E(I,T )\u223cD[CE(yv2t, pv2t(I)) + CE(yt2v, pt2v(T)], (10) where I. T denotes the images and texts, s(cot) denotes the similarity function and \u03c4 denotes temperature coe\ufb03cient. yv2t and yt2v denote the labels of vision2text retrieval and text2vision retrieval. 4.6 Word-Region Alignment Word-Region Alignment (WRA) [30] is an unsupervised pre-training objective to align vision regions (vision patches) and words. VLP models utilize Optimal Transport to learn the alignment between vision and language. Empirically, VLP models use the IPOT algorithm to approximate the OT distance since the exact minimization is computationally intractable. After solving minimization, the OT distance serves as the WRA loss to train VLP models. The formal de\ufb01nition is as follows: LWRA = min T\u2208II(a,b) T X i=1 K X j=1 Tij \u00b7 c(wi, vj), (11) where c(wi, vj) is the cost function evaluating the distance between wi and vj, T \u2208II(a, b) = {T \u2208RT \u00d7K|T1m = a, T\u22a41n = b}, a and b Dirac function coe\ufb03cients centered on wi and vj. 4.7 Frame Order Modeling To better model the timing of the video, VLP models randomly disrupt the order of some input frames and then predict the actual position of each frame. Frame Order Modeling (FOM) [36] is modeled as a classi\ufb01cation task in practice. \f10 VLP: A Survey on Vision-Language Pre-training Table 1 Details of some popular pre-training datasets for VLP. Names of some datasets are abbreviated for the convenience of subsequent description. FLKR represents Flickr30k, and HT100M represents HowTo100M. Dataset # Images # Image-text Pairs Duration (hrs) # Clips # Videos SBU [44] 875K 875K FLKR [45] 29K 145K COCO [46] 113K 567K VG [47] 108K 5.4M VGQA [47] 108K 1.8M VQA [48] 83K 444K Matterport3D [49] 104K 104K FashionGen [50] 260K 260K CC3M [51] 3M 3M GQA [52] 82K 1M LAIT [53] 10M 10M CC12M [54] 12M 12M ALIGN [55] 1.8B 1.8B Kinetics400 [23] 817 306K 306K TVQA [38] 461 22K 925 HT100M [56] 134K 136M 1.2M WebVid2M [57] 13K 2.5M 2.5M 4.8 Particular Pre-training Objects To better adapt to downstream tasks, VLP models sometimes use the training objects of some downstream tasks, such as visual question answering (VQA) [37, 38, 12], and visual captioning (VC) [39, 40], as pre-training objectives. As for VQA, VLP models take the fused representation mentioned above, apply an FC layer, and use the transformed representation to predict the classi\ufb01cation over prede\ufb01ned answer candidates. In addition to VLP models tackling the task as classi\ufb01cation over prede\ufb01ned answer candidates, VLP models also can directly generate answers in their original text format. As for VC, to reconstruct the input sentence to endow VLP models with the generation capability, VLP models employ an auto-regressive decoder to generate a corresponding textual description of the image or video. Note that due to space limitations, we only introduce some popular pretraining objectives. We omit some speci\ufb01c pre-training objectives such as grounding referring expression (GRE), image-conditioned denoising autoencoding (IDA) [41], text-conditioned image feature generation (TIFG) [41], object detection (OD) [42] and aligned Kaleido patch modeling (AKPM) [43]. Moreover, we put masked action prediction into the category of MVM. 5 Pre-training Datasets Pre-training datasets are signi\ufb01cant for the success of cross-modal representation learning. The quality and the size of pre-training datasets sometimes overwhelm the importance of training strategies and algorithms. Hence, a detailed description of several widely used pre-training datasets is necessary. Table 1 shows statistics of some popular pre-training datasets for VLP. \fVLP: A Survey on Vision-Language Pre-training 11 Since VLP includes image-language pre-training and video-language pretraining, we roughly divide pre-training datasets into two main categories. In later sections, we provide more details about representative pre-training datasets for each category. It is worth noting that no matter which category pre-training datasets belong, they di\ufb00er in size and sources across di\ufb00erent researches. In most works, the pre-training datasets for VLP are constructed by combining public datasets across di\ufb00erent cross-modal tasks or scenarios. However, other works, such as VideoBERT [58], ImageBERT [53], ALIGN [55], and CLIP [16], conduct pre-training with self-constructed datasets. These selfconstructed datasets are usually larger than most public datasets but might contain more noise. 5.1 Datasets for Image-language Pre-training For image-language pre-training, the most widely used data form is image-text pairs. Most image-language pre-training datasets consist of a large number of image-caption pairs. SBU [44] and Flickr30k [45] are collected from Flickr and labelled with human-generated annotations. COCO [46] consists of images with \ufb01ve human-generated captions, \ufb01ltered with special procedures to guarantee the quality of images and annotations. CC3M [51] and CC12M [54] are constructed by crawling images and their alt-text HTML attributes from the Internet and annotating these pictures with \ufb01ltered descriptions. Due to looser \ufb01ltering strategies, CC12M contains more noise than CC3M. Another data source is the visual question answering task. Many image-language datasets are organized as structured data in the context of visual question answering. The representative large-scale dataset is Visual Genome (VG) [47]. VG contains rich information in its structured data form. Its region-level descriptions and question-answer pairs are frequently used in the study of image-language pre-training. Besides VG, VQA [48] and GQA [52] are also popular datasets of visual question-answer pairs. Compared with VGA, GQA further alleviates the systematic biases. Datasets mentioned above are suitable for most common scenarios. There are also some datasets designed for special cases. Matterport3D [49] consists of RGB-D images of building-scale scenes, annotated with labels for classi\ufb01cation and segmentation. Fashion-Gen [50] contains fashion images paired with item descriptions generated by professional stylists. 5.2 Datasets for Video-language Pre-training Compared to image-language pre-training datasets, video-language pretraining datasets are usually more time-consuming and more di\ufb03cult to collect and process. These inconveniences restrict the development of the community and the scale of pre-training. Datasets for video-language pre-training cover di\ufb00erent scenarios and sources. Most of them, such as Kinetics-400 [23], HowTo100M [56] and WebVid-2M [57], are collected from the Internet and processed with di\ufb00erent procedures. These kinds of videos are usually accompanied \f12 VLP: A Survey on Vision-Language Pre-training Downstream Tasks Classification Regression Retrieval Generation Other tasks Visual Question Answering\uff08VQA\uff09 Visual Reasoning and Compositional Question Answering (GQA) Video-Language Inference (VLI) Natural Language for Visual Reasoning (NLVR) Visual Entailment (VE) Visual Commonsense Reasoning (VCR) Grounding Referring Expressions (GRE) Category Recognition (CR) Multi-modal Sentiment Analysis (MSA) Vision-Language Retrieval (VLR) Visual Captioning (VC) Novel Object Captioning at Scale (NoCaps) Visual Dialogue (VD) Multi-modal Machine Translation (MMT) Vision-Language Navigation (VLN) Optical Character Recognition (OCR) Fig. 2 Illustration of downstream tasks in VLP. by subtitles, thus providing weak or strong alignments between video clips and text. Although those subtitles sometimes might be too weak to align modalities, they still provide useful information, especially for the pre-training on large-scale datasets. Another source of video-text pairs is television programs. TVQA [38] is a video-language pre-training dataset generated from television shows. These television shows are collected and converted to a dataset comprised of many dialogues for understanding the videos and recognizing semantic concepts in videos. Considering the diversity of the sources and formation of these datasets, researchers apply di\ufb00erent annotation and processing procedures. For example, Kinetics-400 [23] consists of many action-related videos annotated with action classes. For other datasets [38, 56, 57], the accompanying captions/subtitles of video clips or the class of concepts in videos are usually processed and used as annotations. 6 Downstream Tasks As shown in Figure 2, a diverse range of tasks requires a cooperative knowledge of vision and language. In this section, we introduce the fundamental details and goals of these tasks. Visual Question Answering (VQA) [37, 59, 60, 61] . Giving a visual input (image or video), VQA represents the task of correctly providing an answer to a question. It is usually regarded as a classi\ufb01cation task where the model predicts the most suitable answer from a pool of choices. To obtain accurate performance, it is important to infer logical entailments from images (or videos) based on the question posed. Visual Reasoning and Compositional Question Answering (GQA) [52, 62, 63] . GQA is an upgraded version of VQA and aims to advance research on the visual reasoning of natural scenes. The images, questions, and answers in its dataset have matching semantic representations. The advantage of this structured representation is that the distribution of answers can be more uniform, \fVLP: A Survey on Vision-Language Pre-training 13 and we can analyze the model\u2019s performance from more dimensions. Compared with the single evaluation metric (e.g., accuracy) of traditional VQA, GQA includes multi-dimensional evaluation metrics: consistency, validity, plausibility, distribution, and grounding. Video-Language Inference (VLI) [36, 64, 65] . Given a video clip with aligned subtitles as a premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. Visual Entailment (VE) [66, 67, 68] . In the VE task, image is the premise, and text is the hypothesis. Its goal is to predict whether the text is \u201cEntailment Image\u201d. There are three labels, Entailment, Neutral, and Contradiction. Visual Commonsense Reasoning (VCR) [69, 70, 71] . VCR is the task of inferring commonsense information and cognitive understanding by a machine when it sees an image. It exists in the form of multiple-choice questions. For a question posed about the image, there are several alternative answers. The model must choose an answer from several answers and then select the reason for choosing this answer from several alternative reasons. Thus, VCR can be divided into two tasks, including question answering (selecting the best answer from a pool of expected answers to the question) and answer justi\ufb01cation (providing the rationale behind the given answer). You can follow VCR\u2019s leaderboard1 to track VLP\u2019s latest ideas. Natural Language for Visual Reasoning (NLVR) [72, 73] . NLVR is a subtask of the broader VCR category, limited to the classi\ufb01cation paradigm. The input of the NLVR task is two images and a text description, and the output is whether the corresponding relationship between the images and the text description is consistent (two labels: true or false). It is typically di\ufb00erent from VQA due to longer text sequences covering various linguistic phenomena. Grounding Referring Expressions (GRE) [74, 75, 76] . The GRE task aims to localize certain regions (e.g., objects and persons) in an image given a referring expression, where the main challenge is to comprehend and align various types of information from visual and textual domain, such as visual attributes, locations and interactions with surrounding regions. Speci\ufb01cally, the model can output a score for each region, and the region with the highest score is used as the prediction region. 1https://visualcommonsense.com/leaderboard/ \f14 VLP: A Survey on Vision-Language Pre-training Category Recognition (CR) [43]. CR refers to identifying the category and sub-category of a product, such as {HOODIES, SWEATERS}, {TROUSERS, PANTS}, which are vital attributes for describing a product, and are useful in lots of real-life applications. Multi-modal Sentiment Analysis. (MSA) [77, 78, 79, 80]. MSA is aimed to detect sentiments in videos by leveraging multi-modal signals (e.g., vision, language, etc.). It is to predict the a\ufb00ective orientation of an utterance as a continuous intensity variable. Vision-Language Retrieval (VLR) [81, 82, 83, 84]. VLR involves understanding both vision (image or video) and language domains with appropriate matching strategies. It includes two subtasks, visionto-text, and text-to-vision retrieval, where vision-to-text retrieval is to fetch the top-most relevant text description from a larger pool of descriptions as per the vision and vice versa. VLR is widely used in domain-speci\ufb01c searches, multiple search engines, and context-based vision retrieval design systems. Visual Captioning (VC) [85, 40, 86]. VC aims to generate semantically and syntactically appropriate text descriptions for a given visual (image or video) input. Generating relevant and explanatory captions for a visual input requires not only a rich knowledge of language, but also a consistent understanding of scenes, entities, and their interactions appreare in the visual input. Novel Object Captioning at Scale (NoCaps) [87, 88] . NoCaps extends the VC task to test a model\u2019s capability of describing novel objects from the Open Images dataset, which are unseen in the training corpus. Visual Dialogue (VD) [89, 90, 91, 92]. The speci\ufb01c task in VD is the following: given an image, a dialog history consisting of a sequence of question-answer pairs, and a natural language follow-up question, the goal for the task is to response the question in free-form natural language (e.g., generate an answer). VD is the visual analogue of the Turing Test. Multi-modal Machine Translation (MMT) [93, 94, 95]. MMT is a two-fold task of translation and text generation, translating text from one language to another with additional information from other modalities, e.g., image. The additional visual features aim to remove ambiguities that may arise in straightforward text machine translation and help retain the context of the text descriptions. The multi-modal representation space facilitates robust latent representations to complement the inherent semantic information preserved by visual and linguistic embeddings, respectively. \fVLP: A Survey on Vision-Language Pre-training 15 Vision-Language Navigation (VLN) [96, 97, 98]. VLN is a grounding language task of an agent\u2019s locomotion as it sees and explores the real-world dynamics based on linguistic instructions. Like generation tasks, it is typically seen as the task of sequence-to-sequence transcoding. However, VLN has unique characteristics. It usually has longer sequences, and the dynamics of the problem are quite di\ufb00erent since it is a real-time evolving task. Its main challenge lies in understanding the environment and making con\ufb01dent decisions during exploring. Optical Character Recognition (OCR) [99, 100]. OCR generally refers to extract handwritten or printed text from images (such as street signs and photos of products) as well as documents (articles, bills, invoices, \ufb01nancial reports, etc.), which includes two parts: text detection (similar to regression) and text recognition (similar to classi\ufb01cation). In addition, there are some iamge-related downstream tasks for evaluating the image-text pre-training models, including semantic segmentation [101, 102], and object detection [103, 104]. There are also some video-related downstream tasks for evaluating the video-text pre-training models, including action classi\ufb01cation (AC) [58], action segmentation (AS) [105], and action step Localization (ASL) [106]. Recently, Changpinyo et.al [54] scale up pre-training data for VLP tasks and benchmark its e\ufb00ectiveness against Conceptual Captions 3M on multiple downstream tasks with an emphasis on long-tail visual recognition. Rethmeier et.al [107] study the performance of pretrained model on a challenging long-tail task and analyze the resulting long-tail learning capabilities under zero-shot, few-shot and full supervision conditions to explore the performance in\ufb02uence of model size and self-supervision signal amount. 7 SOTA VLP models Image-Text VLP models. VisualBERT [9], known as the \ufb01rst image-text pre-training model, uses the visual features extracted by Faster R-CNN, concatenates the visual features and textual embeddings, and then fed the concatenated features to a single transformer initialed by BERT. Many VLP models [13, 110, 30, 53] follow the similar feature extraction and architecture as VisualBERT while adjusting the pre-training objectives and pre-training datasets. Recently, VDBERT [134] models the common implicit vision-language alignment in vision and language by pretraining on large-scale image-text pairs via transfer learning [135, 136]. VLMO [129] leverages patch embeddings for image and word embeddings for text and feeds the concatenated embeddings into a single transformer with modality experts and achieves an impressive performance. METER [33] explores how to use a uni-modal pre-trained model and proposes a dualstream architecture model to handle the multimodel fusion, which achieves the \f16 VLP: A Survey on Vision-Language Pre-training Table 2 The summary of mainstream image-text VLP models. The number of downstream tasks determines whether the model is generic or domain-speci\ufb01c VLP. FE: Feature Extraction. PT: Pre-training. Emb: Embedding. SC in Datatsets column: self-constructed or self-collected. MTL in Datatsets column: all datasets for multi-task learning in corresponding work. See other abbreviations in Datatsets column in Table 1. Model Domain Vision FE Language FE Multimodal Fusion Decoder PT Objectives PT Datasets Downstream Tasks VisualBERT [9] Image OD-RFs Emb Single-stream No MLM+VLM COCO GRE+NLVR+VCR+VQA ViLBERT [8] Image OD-RFs Emb Dual-stream No MLM+VLM+MVM COCO+VG VLR+NLVR+VE+VQA LXMERT [108] Image OD-RFs+Xformer Xformer Dual-stream No MLM+VLM+MVM+VQA COCO+VG+VQA+GQA+VGQA GQA+NLVR+VQA B2T2 [109] Image CNN-GFs Emb Single-stream No MLM+VLM CC3M VCR Unicoder-VL [13] Image OD-RFs Emb Single-stream No MLM+VLM+MVM CC3M+SBU VLR+VCR VL-BERT [110] Image OD-RFs Emb Single-stream No MLM+MVM CC3M GRE+VCR+VQA VLP [111] Image OD-RFs Emb Dual-stream Yes MLM+LM CC3M VC+VQA UNITER [30] Image OD-RFs Emb Single-stream No MLM+VLM+MVM+WRA COCO+VG+SBU+CC3M GRE+VLR+NLVR+VCR+VE+VQA 12-IN-1 [112] Image OD-RFs Emb Single-stream No MLM+MVM MTL GQA+GRE+VC+NLVR+VE+VQA VisDial-BERT [113] Image OD-RFs Emb Dual-stream No MLM+VLM+MVM CC3M+VQA VD ImageBERT [53] Image OD-RFs Emb Single-stream No MLM+VLM+MVM LAIT+CC3M+SBU VLR PREVALENT [114] Image CNN-GFs+Xformer Xformer Single-stream No MLM+MVM Matterport3D VLN XGPT [41] Image OD-RFs Emb Dual-stream Yes MLM+IDA+VC+TIFG CC3M VC+VLR InterBER [115] Image OD-RFs Emb Single-stream No MLM+VLM+MVM COCO+CC3M+SBU VLR+VCR PixelBERT [116] Image CNN-GFs Emb Single-stream No MLM+VLM COCO+VG VLR+NLVR+VQA OSCAR [10] Image OD-RFs Emb Single-stream No MLM+VLM COCO+SBU+CC3M+FLKR+VQA+GQA+VGQA GQA+VC+VLR+NLVR+NoCaps+VQA VLN-BERT [117] Image OD-RFs Emb Dual-stream No MLM+VLM+MVM CC3M VLN FashionBERT [118] Image Xformer Emb Single-stream No MLM+VLM+MVM FashionGen VLR VILLA [119] Image OD-RFs+Xformer Xformer Single-stream No MLM+VLM+MVM COCO+VG+CC3M+SBU GRE+VLR+NLVR+VCR+VE+VQA ERNIE-ViL [120] Image OD-RFs Emb Single-stream No MLM+MVM CC3M+SBU GRE+VLR+VCR+VQA RVL-BERT [121] Image OD-RFs Emb Single-stream No MLM+VLM+MVM CC3M VC+VQA VinVL [27] Image OD-RFs Emb Single-stream No MLM+VLM COCO+CC3M+SBU+FLKR+VQA+GQA+VGQA GQA+VC+VLR+NLVR+NoCaps+VQA VL-T5 [122] Image OD-RFs Emb Single-stream Yes MLM+VLM+VQA+GRE+VC COCO+VG+VQA+GQA+VGQA GQA+GRE+VC+MMT+NLVR+VCR+VQA ViLT [123] Image ViT-PFs Emb Single-stream No MLM+VLM COCO+VG+SBU+CC3M VLR+NLVR+VQA ALIGN [55] Image CNN-GFs Xformer Dual-stream No VLC ALIGN VLR Kaleido-BERT [43] Image CNN-GFs Emb Single-stream No MLM+VLM+AKPM FashionGen CR+VC+VLR MDETR [42] Image Xformer Xformer Single-stream Yes OD+MLM+VLC COCO+VG+FLKR+GQA GQA+VQA SOHO [124] Image CNN-GFs Emb Single-stream No MLM+VLM+MVM COCO+VG VLR+NLVR+VE+VQA E2E-VLP [125] Image CNN-GFs Emb Single-stream Yes OD+MLM+VLM COCO+VG VC+VLR+NLVR+VQA Visual Parsing [126] Image Xformer Emb Single-stream No MLM+VLM+MVM COCO+VG VLR+VCR+VE+VQA CLIP-ViL [127] Image CNN-GFs Emb Single-stream Yes MLM+VLM+VQA COCO+VG+VQA+GQA+VGQA VE+VLN+VQA ALBEF [35] Image Xformer Xformer Dual-stream No MLM+VLM+VLC COCO+VG+CC3M+SBU VLR+NLVR+VQA SimVLM [14] Image CNN-GFs Emb Single-stream Yes Pre\ufb01xLM ALIGN VC+NLVR+VE+VQA MURAL [128] Image CNN-GFs Xformer Dual-stream No VLC CC12M+ALIGN VC+VLR VLMO [129] Image ViT-PFs Emb Single-stream No MLM+VLC+VLM COCO+VG+CC3M+SBU VQA+NLVR+VLR METER [33] Image Xformer Xformer Dual-stream No MLM+VLM COCO+VG+CC3M+SBU VLR+NLVR+VE+VQA X-VLM [28] Image Xformer Xformer Single-stream No MLM+VLM+VG COCO+VG+CC3M+SBU VLR+NLVR+VE+VQA TCL [130] Image Xformer Xformer Single-stream No MLM+VLM+TCL COCO+VG+CC3M+SBU VLR+NLVR+VE+VQA \fVLP: A Survey on Vision-Language Pre-training 17 Table 3 The summary of mainstream video-text VLP models. The number of downstream tasks determines whether the model is generic or domain-speci\ufb01c VLP. FE: Feature Extraction. PT: Pre-training. Emb: Embedding. SC in Datatsets column: self-constructed or self-collected. MTL in Datatsets column: all datasets for multi-task learning in corresponding work. See other abbreviations in Datatsets column in Table 1. Model Domain Vision FE Language FE Multimodal Fusion Decoder PT Objectives PT Datasets Downstream Tasks VideoBERT [58] Video CNN-GFs Emb Single-stream No MLM+VLM+MVM SC AC+VC CBT [105] Video CNN-GFs+Xformer Xformer Single-stream No VLC Kinetics AC+AS+VC UniVL [106] Video CNN-GFs Xformer Dual-stream Yes MLM+VLM+VC HT100M AS+ASL+MSA+VC+VLR HERO [36] Video CNN-GFs+Xformer Xformer Single-stream No MLM+VLM+MVM+FOM HT100M+TV VC+VLI+VQA+VLR MMFT-BERT [131] Video OD-RFs+Xformer Xformer Single-stream No VQA TV VQA ActBERT [132] Video OD-RFs+CNN Emb Single-stream No MLM+VLM+MVM HT100M AS+ASL+VC+VQA+VLR CLIP [16] Image / Video CNN/Xformer Xformer Dual-stream No VLC SC OCR +AC etc. Frozen [57] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR Region-Learner [133] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR CLIP4Clip [17] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR CLIP2Video [18] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR \f18 VLP: A Survey on Vision-Language Pre-training SOTA performance on many downstream tasks. The summary of mainstream image-text VLP models is shown in Table 2. Video-Text VLP models. VideoBERT [58], known as the \ufb01rst video-text pre-training model, extends the BERT model to process videos and texts simultaneously. VideoBERT uses the pre-trained ConvNet and S3D [137] to extract video features and concatenate them with textual word embeddings to feed into a transformer initialed with BERT. ConvNet and S3D are frozen when training the VideoBERT, which indicates the approach is not end-to-end. Recently, inspired by ViT, CLIP4Clip [17] and CLIP2Video [18] \ufb01rst process video clips into frames and get patch embeddings according to the method of ViT processing images for each frame. CLIP4clip and CLIP2Video optimize themselves in an end-toend manner and achieve SOTA performance. The summary of mainstream video-text VLP models is shown in Table 3. 8"
+ },
+ {
+ "url": "http://arxiv.org/abs/2109.08478v1",
+ "title": "Multimodal Incremental Transformer with Visual Grounding for Visual Dialogue Generation",
+ "abstract": "Visual dialogue is a challenging task since it needs to answer a series of\ncoherent questions on the basis of understanding the visual environment.\nPrevious studies focus on the implicit exploration of multimodal co-reference\nby implicitly attending to spatial image features or object-level image\nfeatures but neglect the importance of locating the objects explicitly in the\nvisual content, which is associated with entities in the textual content.\nTherefore, in this paper we propose a {\\bf M}ultimodal {\\bf I}ncremental {\\bf\nT}ransformer with {\\bf V}isual {\\bf G}rounding, named MITVG, which consists of\ntwo key parts: visual grounding and multimodal incremental transformer. Visual\ngrounding aims to explicitly locate related objects in the image guided by\ntextual entities, which helps the model exclude the visual content that does\nnot need attention. On the basis of visual grounding, the multimodal\nincremental transformer encodes the multi-turn dialogue history combined with\nvisual scene step by step according to the order of the dialogue and then\ngenerates a contextually and visually coherent response. Experimental results\non the VisDial v0.9 and v1.0 datasets demonstrate the superiority of the\nproposed model, which achieves comparable performance.",
+ "authors": "Feilong Chen, Fandong Meng, Xiuyi Chen, Peng Li, Jie Zhou",
+ "published": "2021-09-17",
+ "updated": "2021-09-17",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.CV",
+ "cs.MM"
+ ],
+ "main_content": "Introduction Recently, there is increasing interest in visionlanguage tasks, such as image caption (Xu et al., 2015; Anderson et al., 2016, 2018; Cornia et al., 2020) and visual question answering (Ren et al., 2015a; Gao et al., 2015; Lu et al., 2016; Anderson et al., 2018). In the real world, our conversations (Chen et al., 2020b, 2019) usually have multiple turns. As an extension of conventional single-turn visual question answering, Das et al. (2017) introduce a multi-turn visual question answering task named visual dialogue, which aims to Q1: how many people ? Q2: is anyone holding a frisbee ? Q3: is the coach on the right ? Q4: are they wearing matching uniforms ? Caption: there is a frisbee team with their coach taking a team photo A1: 7 people A2: yes A3: yes, on the far right A4: all except the coach Figure 1: An example of visual dialogue. The color in text background corresponds to the same color box in the image, which indicates the same entity. Our model \ufb01rstly associates textual entities with objects explicitly and then gives contextually and visually coherent answers to contextual questions. explore the ability of an AI agent to hold a meaningful multi-turn dialogue with humans in natural language about visual content. Visual dialogue (Agarwal et al., 2020; Wang et al., 2020; Qi et al., 2020; Murahari et al., 2020) requires agents to give a response on the basis of understanding both visual and textual content. One of the key challenges in visual dialogue is how to solve multimodal co-reference (Das et al., 2017; Kottur et al., 2018). Therefore, some fusion-based models (Das et al., 2017) are proposed to fuse spatial image features and textual features in order to obtain a joint representation. Then attention-based models (Lu et al., 2017; Wu et al., 2018; Kottur et al., 2018) are proposed to dynamically attend to spatial image features in order to \ufb01nd related visual content. Furthermore, models based on object-level image features (Niu et al., 2019; Gan et al., 2019; Chen et al., 2020a; Jiang et al., 2020a; Nguyen arXiv:2109.08478v1 [cs.CL] 17 Sep 2021 \fet al., 2020; Jiang et al., 2020b) are proposed to effectively leverage the visual content for multimodal co-reference. However, as implicit exploration of multimodal co-reference, these methods implicitly attend to spatial or object-level image features, which is trained with the whole model and is inevitably distracted by unnecessary visual content. Intuitively, speci\ufb01c mapping of objects and textual entities can reduce the noise of attention. As shown in Figure 1, the related objects can help the agent to understand the entities (e.g., Q1: \u201cpeople\u201d, Q2: \u201cfrisbee\u201d, Q3: \u201ccoach\u201d) for the generation of correct answers. Then when it answers the question Q4 \u201care they wearing matching uniforms ?\u201d, the agent has already comprehended \u201cpeople\u201d and \u201ccoach\u201d from the previous conversation. On this basis, it can learn the entity \u201cuniforms\u201d with the corresponding object in the image, and generate the answer \u201call except the coach\u201d. To this end, we need to 1) explicitly locate related objects guided by textual entities to exclude undesired visual content, and 2) incrementally model the multi-turn structure of the dialogue to develop a uni\ufb01ed representation combining multi-turn utterances with the corresponding related objects. However, previous work overlooks these two important aspects. In this paper, we thus propose a novel and effective Multimodal Incremental Transformer with Visual Grounding, named MITVG, which contains two key parts: visual grounding and multimodal incremental transformer. Visual grounding aims to establish speci\ufb01c mapping of objects and textual entities by explicitly locating related objects in the image with the textual entities. By doing so, our model can exclude undesired visual content and reduce attention noise. On the basis of visual grounding, the multimodal incremental transformer is used to model the multi-turn dialogue history combined with the speci\ufb01c visual content to generate visually and contextually coherent responses. As an encoder-decoder framework, MITVG contains a Multimodal Incremental Transformer Encoder (MITE) and a Gated Cross-Attention Decoder (GCAD). We test the effectiveness of our proposed model on large-scale datasets: VisDial v0.9 and v1.0 (Das et al., 2017). Both automatic and manual evaluations show that our model substantially outperforms the competitive baselines and achieves the new state-of-the-art results on substantial metrics. Our main contributions are as follows: \u2022 To the best of our knowledge, we are the \ufb01rst to leverage visual grounding to explicitly locate related objects in the image guided by textual entities for visual dialogue. \u2022 We propose a novel multimodal incremental transformer to encode the multi-turn dialogue history step by step combined with the visual content and then generate a contextually and visually coherent response. \u2022 We achieve comparable performance on VisDial v0.9 and v1.0 datasets. 2 Approach 2.1 Overview In this section, we formally describe the visual dialogue task and then proceed to our proposed Multimodal Incremental Transformer with Visual Grounding (MITVG). Following Das et al.(2017), a visual dialogue agent is given three inputs, i.e., an image I, a dialogue history (the caption and question-answer pairs) till round t \u22121: H = (Cap |{z} H0 , (Q1, A1) | {z } H1 , \u00b7 \u00b7 \u00b7 , (Qt\u22121, At\u22121) | {z } Ht\u22121 ) and the current question Qt at round t, where Cap is the caption describing the image taken as H0 and H1, . . . , Ht\u22121 are concatenations of questionanswer pairs. The goal of the visual dialogue agent is to generate a response (or answer) At to the question Qt. Cap, Q\u2217and A\u2217are sentences. Figure 2 shows the framework of MITVG, which aims to explicitly model multi-turn dialogue history step by step based on the explicit modeling relationship between multiple modalities. MITVG \ufb01rstly locates related objects in the image explicitly guided by the textual entities via visual grounding, then encodes multi-turn dialogue history in the order of the dialogue utterance based on visual grounding via Multimodal Incremental Encoder (MITE), and \ufb01nally utilizes the outputs of both encoder and visual grounding to generate the response word by word via Gated Cross-Attention Decoder (GCAD). 2.2 Input Representation Before describing our method, we introduce the input representation. Image Features. We use a pre-trained Faster RCNN model (Ren et al., 2015b) to extract object\fMITE MITE MITE Self-Attention Gated CrossAttention FFN Target Inputs Softmax Layer Target Outputs Current Ques. Input 1-st Round History Inputs Embedding Layer VG Model i-th Round Ques. Input VG Model i-th Round History Inputs VG Model Caption Input \u30fb\u30fb\u30fb \u30fb\u30fb\u30fb \ud835\udc41\"\u00d7 1-st Round Ques. Input Current Ques. Input Image Input Encoder Decoder Figure 2: The framework of Multimodal Incremental Transformer with Visual Grounding (MITVG). \u201cVG Model\u201d indicates visual grounding model (Yang et al., 2019b) (Details are described in Sec. 2.3). \u201cMITE\u201d denotes the multimodal incremental transformer encoder (Details are described in Sec. 2.4.1). MITVG \ufb01rstly uses the VG model to explicitly model the relationship between the textual content and the visual content, and encodes multiturn dialogue history in the order of the dialogue based on visual grounding, and \ufb01nally utilizes the outputs of both encoder and visual grounding to generate the response word by word in the decoding process. level image features. Speci\ufb01cally, the image features v for an image I are represented by: v = Faster R \u2212CNN(I) \u2208RK\u00d7V , (1) where K denotes the total number of the detected objects per image and V denotes the dimension of features for each object. Language Features. The current (at the t-th round) L-word question features are a sequence of M-dimension word embedding with positional encoding added (Vaswani et al., 2017), as follows: qt = [st,1, st,2, . . . , st,L] \u2208RL\u00d7M, (2) st,j = wj + PE(j), (3) where wj is the word embedding of the j-th word in the question Qt, and PE(\u00b7) denotes positional encoding function (Vaswani et al., 2017). For the dialogue history H = {H0, H1, . . . , Ht\u22121} and the answer At, the dialogue history features u = {u0, u1, . . . , ut\u22121} and the answer features at are obtained in the same way as the question Qt. 2.3 Visual Grounding To exclude the needless visual content, we introduce visual grounding, which is de\ufb01ned to ground a natural language query (phrase or sentence) about an image onto a correct region of the image. First of all, we use NeuralCoref1 for reference resolution. For example, when it processes the question Q4 \u201care they wearing matching uniforms ?\u201d shown in Figure 1, NeuralCoref takes the question Q4 and its history as inputs, and then generates a new question \u201care the people wearing matching uniforms ?\u201d as a new Q4. As shown in Figure 3 (a), visual grounding model (Yang et al., 2019b) takes the i-th question Qi and the image I as inputs and generates initial visual grounding features, as follows: v(0) gi = VGM(Qi, I), (4) where VGM(\u00b7) denotes visual grounding model2. Then v(0) gi is sent to the multi-head self-attention 1Introduction and code of NeuralCoref are available at https://github.com/huggingface/neuralcoref. NeuralCoref is only used for visual grounding. 2Introduction and code are available at https://github.com/zyang-ur/onestage grounding. \fImage Input i-th Round History Inputs Cross-modal Attention Embedding Layer Self-Attention FFN History Attention Self-Attention FFN \ud835\udc41\"\u00d7 VG Model i-th Round Ques. Input \ud835\udc41$\u00d7 \ud835\udc63&' \ud835\udc50)*+ \ud835\udc50) (\ud835\udc4e) (\ud835\udc4f) \ud835\udc63&' \ud835\udc62) Figure 3: Framework of (a) Visual Grounding and (b) Multimodal Incremental Transformer Encoder (MITE). layer followed by a position wise feed-forward network (FFN) layer (stacked Nv times) to generate the i-th visual grounding features as follows3: \u02c6 vn gi = MultiHead \u0010 v(n\u22121) gi , v(n\u22121) gi , v(n\u22121) gi \u0011 , (5) where n = 1, . . . , Nv and MultiHead(\u00b7) denotes the multi-head self-attention layer (Vaswani et al., 2017), then v(n) gi = FFN \u0000\u02c6 vn gi \u0001 , (6) where n = 1, . . . , Nv and FFN(\u00b7) denotes the position wise feed-forward networks (Vaswani et al., 2017). After Nv layers computation, we obtain the \ufb01nal visual grounding features vgi by: vgi = v(Nv) gi , (7) Actually, there are some questions that do not contain any entities in the visual dialogue, such as \u201canything else ?\u201d. For such questions, we use the features of the whole image instead, i.e. vgi = v. 2.4 Multimodal Incremental Transformer Inspired by the idea of incremental transformer (Li et al., 2019) which is originally designed for the single-modal dialogue task, we make an extension and propose a multimodal incremental transformer, which is composed of a Multimodal Incremental Transformer Encoder (MITE) and a Gated CrossAttention Decoder (GCAD). The MITE uses an incremental encoding scheme to encode multi-turn 3For simplicity, we omit the descriptions of layer normalization and residual connection. dialogue history with an understanding of the image. The GCAD leverages the outputs from both the encoder and visual grounding via the gated cross-attention layer to fuse the two modal information in order to generate a contextually and visually coherent response word by word. 2.4.1 MITE To effectively encode multi-turn utterances grounded in visual content, we design the Multimodal Incremental Transformer Encoder (MITE). As shown in Figure 3 (b), at the i-th round, where i = 1, 2, ..., t\u22121, the MITE takes the visual grounding features vgi, the dialogue history features ui and the context state ci\u22121 as inputs, and utilizes attention mechanism to incrementally build up the representation of the relevant dialogue history and the associated image regions, and then outputs the new context state ci. This process can be stated recursively as follows: ci = MITE (vgi, ui, ci\u22121) , (8) where MITE(\u00b7) denotes the encoding function, ci denotes the context state after the dialogue history features ui and the visual grounding features vgi being encoded, and c0 is the dialogue history features u0. As shown in Figure 3 (b), we use a stack of Nh identical layers to encode vgi, ui and ci\u22121, and to generate ci. Each layer consists of four sub-layers. The \ufb01rst sub-layer is a multi-head self-attention for the dialogue history: A(n) = MultiHead \u0010 C(n\u22121), C(n\u22121), C(n\u22121)\u0011 , (9) where n = 1, . . . , Nh, C(n\u22121) is the output of the last layer Nn\u22121, and C(0) is the dialog history features ui. The second sub-layer is a multi-head cross-modal attention: B(n) = MultiHead (An, vgi, vgi) , (10) where vgi is the visual grounding features. The third sub-layer is a multi-head history attention: F(n) = MultiHead \u0010 B(n), ci\u22121, ci\u22121 \u0011 , (11) where ci\u22121 is the context state after the previous dialogue history features ui\u22121 being encoded. That\u2019s why we call this encoder \u201cMultimodal Incremental Transformer\u201d. The fourth sub-layer is a position wise feed-forward network (FFN): C(n) = FFN \u0010 F(n)\u0011 . (12) \fWe use ci to denote the \ufb01nal representation at Nh-th layer: ci = C(Nh). (13) The mulitmodal incremental transformer encoder at the current turn t, i.e., the bottom one in Figure 2, has the same structure as all the other MITEs but takes the visual grounding features vgt, the current question features qt and the context state ct\u22121 as inputs and generates the \ufb01nal context state ct. 2.4.2 GCAD Motivated by the real-world human cognitive process, we design a Gated Cross-Attention Decoder (GCAD) shown in Figure 2, which takes the masked answer features a, Chenpeng Du . Copyright 2024 by the author(s). so the output sequence is significantly longer than its corresponding input phoneme sequence. Mainstream neural text-to-speech models, such as FastSpeech 2 (Ren et al., 2020), GradTTS (Popov et al., 2021) and VoiceFlow (Guo et al., 2024), integrate a duration prediction module. Prior to training, the target duration is conventionally derived using the Viterbi forced alignment algorithm. During training, this module is optimized by minimizing the mean square error (MSE) between predicted and target durations. In the inference phase, the duration predictor module predicts the duration for each input phoneme, establishing the alignment between the input and output sequences accordingly. The encoded input phoneme sequence is then expanded to the frame level based on the predicted duration and is subsequently passed to the speech decoder. This mechanism enforces monotonic alignment constraints on the sequenceto-sequence process, ensuring robustness in the synthesis of speech. Over the past two years, utilizing discrete speech tokens for speech generation is proposed in GSLM (Lakhotia et al., 2021) and VQTTS (Du et al., 2022), paving the way for integrating cutting-edge language modeling techniques into TTS systems. Inspired by exceptional strides in natural language processing driven by decoder-only large Transformer models like GPT 3 (Brown et al., 2020) and the LLAMA 2 (Touvron et al., 2023), Tortoise-TTS (Betker, 2023), SPEAR-TTS (Kharitonov et al., 2023), VALL-E (Wang et al., 2023a) and LauraGPT (Wang et al., 2023b) adopted the decoder-only architecture for TTS, achieving remarkable naturalness. SPEAR-TTS and VALL-E also have the ability to perform zero-shot speaker adaptation through auto-regressive (AR) continuation from a given speech prompt. Furthermore, these decoder-only TTS models, unlike traditional neural TTS model, circumvent explicit duration modeling and the requirement for phoneme durations obtained prior to training. This characteristic offers convenience and simplifies training process, especially when training on large scale datasets. However, the implicit duration modeling within these systems lacks the monotonic alignment constraints, often leading to hallucination issues like mispronunciation, word skipping and repeating. 1 arXiv:2401.14321v4 [eess.AS] 30 Jan 2024 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech In fact, we do have a training scheme named Transducer (Graves, 2012) designed specifically for monotonic sequence-to-sequence task and has demonstrated success in automatic speech recognition (ASR) (He et al., 2019). It adopts a modularized architecture, composed of an encoder, a prediction network and a joint network. However, such modularized architecture of Transducer is specifically designed for ASR as a classification task, making it less suited for TTS as a generation task. Further insights into this matter will be discussed in Chapter 3. To achieve the best of both worlds, we propose VALL-T, a generative Transducer model that utilizes the decoder-only Transformer architecture. Specifically, alongside the conventional absolute position embedding, we incorporate additional relative position embeddings into the input phoneme sequence. Here, a relative position of 0 specifies the current phoneme under synthesis, allowing us to explicitly guide the monotonic generation process through shifting the relative positions from left to right. To the best of our knowledge, this is the first work that implements Transducer with a decoder-only Transformer architecture. VALL-T presents several advantages compared to previous TTS models: \u2022 VALL-T introduces monotonic alignment constraints without altering the decoder-only architecture, leading to a better robustness against hallucination. \u2022 VALL-T utilizes implicit duration modeling, removing the necessity for acquiring phoneme durations before training. \u2022 The alignment controllability of VALL-T during decoding enables the utilization of untranscribed speech prompts, even in unknown languages. 2. Related work 2.1. Decoder-only zero-shot TTS with speech prompts Zero-shot TTS refers to the ability to generate speech in the voice of an unseen speaker given only a short sample of that speaker\u2019s speech. Decoder-only TTS models, such as VALLE (Wang et al., 2023a), are able to perform zero-shot speaker adaptation through auto-regressive continuation from the target speaker\u2019s sample. Therefore, the speech sample of the target speaker is also named speech prompt. Specifically, in the training process, illustrated in Figure 1(a), the phoneme and speech sequences are concatenated along the time axis and fed into a decoder-only Transformer model. It is assumed that the speaker\u2019s voice remains constant within each training utterance. In the inference phase, as shown in Figure 1(b), a speech prompt yp is required to determine the voice of the generated speech. The phoneme transcription of the speech prompt xp and the speech prompt 0 1 2 3 0 1 2 3 4 5 6 : Speech token : Phoneme token Pos emb: : Predicted distribution Decoder-only Transformer Cross-entropy loss (a) Training. Decoder-only Transformer 2 3 4 5 0 1 2 3 4 5 6 7 0 1 Speech prompt Output speech through continuation Prompt transcription Input phonemes 8 Pos emb: Sampling: 9 (b) Auto-regressive zero-shot inference. Figure 1. The training and auto-regressive zero-shot inference process of previous decoder-only TTS models, such as VALL-E. itself yp are positioned at the beginning of the input and output sequences respectively, followed by the input phonemes to be generated and their corresponding output speech tokens. The process of auto-regressive continuation from the speech prompt is believed to preserve the speaker\u2019s voice in the generated output. 2.2. Transducer The Transducer model (Graves, 2012), also known as RNNT, is designed for monotonic sequence-to-sequence tasks and comprises three components: an encoder, a prediction network, and a joint network. Here, the prediction network is an auto-regressive network, such as RNN and LSTM. Transducer model also introduces a special output token called blank, denoted as \u2205, which signifies the alignment boundary between output and input sequence. We define Y as the vocabulary of output tokens and \u00af Y = Y \u222a{\u2205} as the extended vocabulary. Also, we denote the lengths of the input sequence x and output sequence y as T and U and the size of the extended vocabulary \u00af Y as \u00af V . In the training phase, as shown in Figure 2(a), the encoder and prediction network encode the two sequences x and y respectively, yielding encoded hidden sequences f and g. Subsequently, we slice the hidden vectors ft and gu at positions t and u respectively, then send them to the joint net2 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech Encoder Prediction Network Joint Network (a) Model architecture. (b) Monotonic alignment path \u00af y of length T + U. Figure 2. The model architecture and monotonic alignment path of Transducer. work to calculate the probability pt,u = Pr(\u00af yt+u|ft, gu) for the next token prediction, where \u00af yt+u \u2208\u00af Y. We iterate over all possible sliced hidden vectors of the two sequences, from f0 to fT \u22121 and from g0 to gU, generating a matrix p of shape T \u00d7(U +1) whose entry at (t, u) is pt,u. Each path \u00af y from the bottom left corner to the top right corner represents an alignment between x and y, with a length of T +U. Figure 2(b) demonstrates an example of the alignment path where \u00af y = [y1, y2, \u2205, y3, \u2205, y4, y5, \u2205, y6, \u2205]. The training criterion of Transducer model is to maximize the probability of Pr(y|x), which is the summation of the probabilities of all possible alignment paths \u00af y, that is Pr(y|x) = X \u00af y Pr(\u00af y|x) = X \u00af y T +U\u22121 Y i=0 Pr(\u00af yi|fti, gui) (1) where fti and gui are sliced hidden vectors at corresponding positions specified by the alignment path \u00af y. In practice, this probability can be effectively calculated with dynamic programming. In the inference phase, the prediction network autoregressively predicts the next token, conditioning on the sliced input hidden vectors that slide from f0 to fT \u22121 whenever the blank token \u2205emerges. The Transducer model has demonstrated remarkable success in ASR. However, its modularized architecture is not suitable enough for generation tasks. Recently, some literatures have explored the application of Transducer to TTS (Chen et al., 2021; Kim et al., 2023), but they still rely on the typical modularized architecture and consequently result in limited performance. Different from the previous works, we propose for the first time to implement Transducer with a decoder-only architecture that achieves better performance. 3. VALL-T: Decoder-Only Generative Transducer Current modularized Transducer model has demonstrated significant success in ASR. Nevertheless, its suitability for generation tasks is limited. Typically, the joint network is a small network, comprising only one or a few linear projection layers, and the prediction network is LSTM or Transformer blocks. This architecture introduces a limitation wherein the input condition x is not incorporated into the generation process until it reaches the joint network. Worse still, the joint network is too small to effectively integrate input conditions into the generation process. Moreover, the modularized Transducer model utilizes slicing to denote specific positions. Consequently, the joint network is unable to explicitly perceive the input context, further making difficulties in achieving satisfactory performance for conditional generation tasks. To address the above issues, we propose VALL-T that integrates the encoder, the prediction network and the joint network into one single decoder-only Transformer architecture and leverages relative position embedding to denote the corresponding positions. We discuss the training and inference details below. 3.1. Training We use a decoder-only architecture for VALL-T. Similar to the approach in the previous work VALL-E, we concatenate the input phoneme and output speech tokens along the time axis and present them to the model as a unified sequence. Unlike traditional RNN and LSTM architectures, the Transformer lacks a specific time order for input tokens, relying instead on position embeddings to indicate their positions. The position indices for the input sequence range from 0 to T \u22121 and are converted into position embeddings through a sinusoidal function (Vaswani et al., 2017). Similarly, the output sequence adopts position indices from 0 to U, including an additional token at the beginning. Following 3 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech Training Auto-regressive zero-shot inference \u2026\u2026 Transducer loss Figure 3. The training and monotonic auto-regressive inference of VALL-T. VALL-E, we utilize a triangular attention mask for the output sequence, facilitating auto-regressive generation. This mask ensures that each speech token attends to only previously generated tokens, maintaining a proper sequential order in the output. Beyond the typical absolute position indices starting from 0, we introduce additional relative position indices in VALL-T for input tokens. The relative position index 0 specifies the current phoneme under synthesis. The phonemes to its left are assigned negative position indices starting from \u22121, while those to its right are assigned positive position indices starting from 1. These relative position indices are converted to relative position embeddings with a same sinusoidal function as the absolute position indices. The resulting absolute and relative position embeddings are added to the input phoneme embeddings and subsequently presented to the decoder-only Transformer. In adopting this approach, the model gains awareness of the phoneme presently undergoing synthesis, specifically the one assigned a relative position of 0, and the phonemes serving as its preceding and subsequent contexts. To eliminate the need for explicit duration modeling, we introduce a special output token called blank, which serves as a marker denoting the end of each phoneme\u2019s generation. Consequently, the output projection following the decoderonly Transformer projects the hidden sequence into a size of \u00af V . The projected hidden sequence, with a length of U + 1, undergoes a Softmax function to yield a sequence representing the output distribution. Illustrated in Figure 3, we iteratively assign relative position 0 to each of the T phonemes and subsequently stack every output sequence, each of length U + 1. This stacking process results in 4 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech a matrix p of shape T \u00d7 (U + 1). The optimization of VALL-T utilizes the Transducer loss, calculated using this matrix and the ground-truth speech tokens, to maximize the probability of p(y|x) following Equation (1). 3.2. Monotonic auto-regressive inference Let us first consider the auto-regressive inference process without a speech prompt. Initially, the relative position 0 is designated to the first phoneme, starting the speech generation from the token. The model then autoregressively produces speech tokens based on the input phoneme tokens and previously generated speech tokens until the blank token \u2205emerges. The emergence of \u2205denotes the completion of the first phoneme\u2019s generation and triggers a shift in relative positions. We iteratively conduct the above process until the appearance of \u2205for the last phoneme, indicating the conclusion of the entire generation process for the input phoneme sequence. Since the model is encouraged to generate speech tokens for the phoneme assigned relative position 0 by Transducer loss during traning, the step-by-step shifting operation during decoding facilitates the monotonic generation process and consequently enhance the robustness against hallucination. Next, we consider the integration of the speech prompt for zero-shot speaker adaptation. Following the approach used in VALL-E, the phoneme transcription of the speech prompt is placed at the start of the input sequence, while the speech prompt itself is positioned at the beginning of the output sequence. The two sequences are followed by the input phonemes to be generated and their corresponding output speech tokens respectively. Given that the speech prompt are provided, we assign the relative position 0 to the first phoneme right after the prompt transcription, as shown in Figure 3, and perform speech continuation. Likewise, the relative positions undergo a shift each time \u2205emerges, repeating until the generation for the final phoneme is completed. 3.3. Pseudo prompt transcription for untranscribed speech prompt In previous decoder-only TTS models, the alignment is learned implicitly with self-attentions. These models have to discern which phoneme is currently being synthesized at each time step solely based on the self-attentions between the input tokens and the preceding output tokens. Therefore, they rely on correct transcription of the speech prompt to get correct alignment and start the generation accordingly. However, in practice, it is inconvenient to obtain transcribed speech prompt, so we hope to leverage speech prompt directly and eliminate the need of its transcription. In VALL-T, it is evident that the alignment is controllable during inference, allowing us to manipulate the generation process by assigning position 0 to the phoneme we intend to synthesize without relying on a paired speech prompt and its transcription. Accordingly, we can perform zeroshot adaptation with untranscribed speech prompts. Specifically, given an untranscribed speech prompt, we use the phoneme sequence of a random utterance, referred to as pseudo prompt transcription, as its transcription and place it at the beginning of the input sequence. Then the generation can start correctly by leveraging exactly the same algorithm as described in section 3.2. The reason for using a pseudo prompt transcription rather than no prompt transcription lies in the presence of absolute position embeddings in the input sequence. We need to avoid unseen alignment pattern in the view of absolute position embeddings. Moreover, since there is no necessity for transcribing the speech prompt, the utilization of untranscribed speech prompts can be expanded to include prompts in unknown languages. This enables cross-lingual zero-shot adaptive speech synthesis. 3.4. Aligned context window for lengthy speech synthesis Decoder-only Transformer models have very limited ability to generalize to unseen position embeddings. That means if we are synthesizing lengthy speech that exceeds the maximum length encountered during training, the performance would be degraded. Fortunately, in VALL-T, the alignment is available during inference, allowing us to employ aligned context window that constrains both the input and output sequence length simultaneously. Specifically, at each decoding step, we retain only n phonemes that precede the current phoneme and m phonemes that follow it, creating a constrained sliding context window on input phonemes. Also, we preserve only the speech tokens corresponding to the n preceding phonemes given the alignment and discard more distant history, forming a context window on the output sequence as well. Hence, by leveraging aligned context window, VALLT consistently maintains a limited context on both input and output sequence, allowing it to generate speech of any lengths. 4. Experiments and Results 4.1. Setup In our experiments, we leverage our Encodec (D\u00b4 efossez et al., 2022) speech tokenizer whose frame shift is 20ms and the sampling rate of output waveforms is 16k. It comprises 8 residual vector quantization (RVQ) indices for each frame. To ensure a fair comparison between VALL-E and our proposed model VALL-T, we follow the approach introduced in VALL-E that predicts the sequence of the first RVQ index 5 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech (a) Forward variable map. (b) Backward variable map. (c) Posterior probability map.(d) Monotonic alignment path. Figure 4. Alignment analysis with VALL-T given both speech and its transcription. with the auto-regressive models and then predicts the remaining 7 RVQ indices conditioned on the first RVQ index with a separate non-auto-regressive (NAR) model. Both the input and output sequences are encoded with BPE (Sennrich et al., 2016) algorithm to shorten sequence lengths and diminish GPU memory consumption. VALL-T adopts an identical architecture to VALL-E, containing 12 layers of Transformer blocks. Each block comprises 12 attention heads and has a hidden dimension of 1024. We use LibriTTS (Zen et al., 2019) dataset in our experiments, which is a multi-speaker transcribed English speech dataset. Its training set consists of approximately 580 hours of speech data from 2,306 speakers. We train our model for 40 epochs using a ScaledAdam (Yao et al., 2023) optimizer. The learning rate scheduler is Eden (Yao et al., 2023) with a base learning rate of 0.05, an epoch scheduling factor of 4 and a step scheduling factor of 5000. 4.2. Alignment analysis We first do alignment analysis to check if relative position embedding in VALL-T indicates the alignment as expected. Given the speech y and its transcription x, we iterate over all relative positions and calculate the matrix p of output distributions in the shape of T \u00d7(U +1). Then we calculate the forward variables, backward variables and posterior probabilities accordingly. The concepts of forward variable, backward variables, and posterior probabilities were initially introduced in Hidden Markov Models (Young et al., 2002) and were also introduced in Transducer (Graves, 2012). The definitions and calculation for these values are elaborated in Appendix A. In Figure 4, we illustrate an example of the forward variable, backward variable, and posterior probability for VALL-T, with darker colors indicating lower values. The values are plotted on a logarithmic scale. In Figure 4(a) and 4(b), we can see a faint bright line on the diagonal of the two graphs. Pixel-wise summing the values from Figure 4(a) and Figure 4(b) produces Figure 4(c), which represents the posterior probability. The diagonal line becomes much clearer in this composite figure, indicating that VALL-T correctly models the alignment between the input and output sequences with relative position embeddings. Accordingly, VALL-T is capable of forced alignment, where the most probable path from the bottom-left corner to the top-right corner in the posterior probability map serves as the alignment path. The alignment path for this example is depicted in Figure 4(d). Since ground-truth labels for alignment are unavailable, our alignment analysis here only focuses on qualitative aspects. 4.3. Evaluation on zero-shot TTS In this section, we conduct an evaluation of our models on zero-shot TTS task. The task refers to synthesizing speech in the voices of unseen speakers given speech prompts and their corresponding transcriptions. Our test set uses a same test set as in (Du et al., 2024), containing 500 utterances and involving 37 speakers from the LibriTTS test set. Each speaker is assigned a specific speech prompt. Before assessing the performance of our models, we conduct speech resynthesis using our Encodec to evaluate the speech tokenizer. We also do an experiment named \u201cNAR resynthesis\u201d. In this experiment, we send the ground-truth first RVQ index to the NAR model for predicting the remaining 7 RVQ indices. Then, we convert all the 8 RVQ indices to waveform using the Encodec decoder. The purpose of the NAR resynthesis experiment is to demonstrate the performance degradation introduced by the NAR model, so we can better analyze the results of the entire pipelines, where the AR models are the primary focus of our paper. 6 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech Table 1. The performance of zero-shot TTS. Method WER(%) \u2193 MCD \u2193 Naturalness MOS \u2191 Similarity MOS \u2191 SECS \u2191 Ground-truth 1.92 0 4.63 \u00b1 0.07 4.23 \u00b1 0.10 0.837 Encodec resynthesis 2.08 2.50 4.55 \u00b1 0.07 4.19 \u00b1 0.11 0.835 NAR resynthesis 3.75 2.95 4.44 \u00b1 0.07 4.24 \u00b1 0.10 0.846 Transduce and Speak 6.14 4.38 4.07 \u00b1 0.10 4.02 \u00b1 0.11 0.838 VALL-E 5.80 4.00 4.25 \u00b1 0.08 4.12 \u00b1 0.10 0.857 VALL-T (ours) 4.16 3.98 4.26 \u00b1 0.08 4.21 \u00b1 0.09 0.849 Table 2. Zero-shot TTS with untranscribed speech prompt. The three baselines fail to generate the speech correctly, resulting in much higher WERs. We do not conduct MOS tests on the three baselines since it makes no sense to assess the naturalness and similarity for entirely incorrect generated audio samples. Method Pseudo Prompt WER(%) \u2193 MCD \u2193 Naturalness MOS \u2191 Similarity MOS \u2191 SECS \u2191 Transcription VALL-E \u00d7 68.22 4.97 0.795 \u221a 21.01 4.28 0.836 VALL-T \u00d7 30.86 4.43 0.836 \u221a 3.48 3.97 4.29 \u00b1 0.09 4.14 \u00b1 0.10 0.848 The baselines of this experiment include two models. One is the popular decoder-only TTS model VALL-E and another is the recently proposed TTS model with a modularized Transducer achitecture called \u201cTransduce and Speak\u201d (Kim et al., 2023). The main evaluation metric in this paper is the word error rate (WER). In our evaluation process, we first synthesize speech for the test set, and then perform speech recognition using a well-known ASR model, Whisper1 (Radford et al., 2023). The transcriptions obtained from the ASR model are then compared to the ground-truth input text to calculate the word error rate. Table 1 shows that VALL-T attains significant lower WER than baselines, which is a 28.3% relative reduction when compared to VALL-E and is only 0.41 higher than NAR resynthesis, suggesting the robustness of VALL-T. Additionally, we present the mel-cepstral distortion (MCD) in the table, serving as a metric for quantifying the distance between the generated speech and the corresponding ground-truth recordings. VALL-T also achieves the lowest MCD across all models. Further evaluations extend to Mean Opinion Score (MOS) listening tests for naturalness and speaker similarity. 15 listeners were tasked with rating each utterance on a scale from 1 to 5, with higher scores indicating better naturalness and similarity. Note that the speaker similarity is evaluated between the generated speech and the provided speech prompt, not the corresponding groundtruth speech. This distinction arises from the variability in a speaker\u2019s timbre across different utterances, and the goal is to emulate solely the timbre of the given prompt. In the listening tests, VALL-T achieves a naturalness score comparable to VALL-E, with a slightly better speaker similarity. Finally, the evaluation extends to the calculation of Speaker Embedding Cosine Similarity (SECS), measured 1https://huggingface.co/openai/whisper-medium using a pretrained speaker verification model2. This metric measures the speaker similarity by assessing the cosine similarity between the speaker embeddings of the generated speech and the provided speech prompt. While VALL-T exhibits a marginally lower SECS value than VALL-E, it still surpasses other models and does not detrimentally affect human perception according to the results of subjective listening tests on similarity. 4.4. Leveraging untranscribed speech prompt The alignment controllability of VALL-T allow us to leverage untranscribed speech prompts for zero-shot TTS. In this experiment, we still use a same test set as in the previous section, excluding the transcription of the speech prompts to simulate a scenario where prompt transcriptions are unavailable. One utterance is randomly chosen from the LibriTTS test set, and its phoneme transcription serves as the pseudo prompt transcription for generating all utterances in the test set. We compare the proposed approach with three baselines. The first baseline is generating with VALL-T but do not use any prompt transcription. The remaining two baselines use VALL-E, one utilizing pseudo prompt transcriptions and the other using no prompt transcription. The results are presented in Table 2. We find VALL-E consistently fails to perform continuation in the absence of the correct prompt transcription, regardless of whether pseudo prompt transcriptions are provided or not. Although VALLT exhibits improved robustness, it still fails in continuation tasks when no prompt transcription is used. This failure is caused by the unseen alignment pattern in the view of absolute position embeddings. When provided with pseudo prompt transcriptions, VALL-T successfully accomplishes 2https://github.com/resemble-ai/Resemblyzer 7 \fVALL-T: Decoder-Only Generative Transducer for Robust and Decoding-Controllable Text-to-Speech Table 3. Zero-shot TTS with untranscribed speech prompt in unknown languages. Method WER(%) \u2193 Naturalness MOS \u2191 Similarity MOS \u2191 SECS \u2191 VALL-E 39.83 0.779 VALL-T 4.22 4.25 \u00b1 0.08 4.36 \u00b1 0.06 0.782 Table 4. Evaluation on generating lengthy speech that exceeds the maximum length encountered during training. Method Aligned Context WER(%) \u2193 MCD \u2193 Naturalness MOS \u2191 Similarity MOS \u2191 SECS \u2191 Window Ground-truth 1.68 2.50 4.81 \u00b1 0.05 4.48 \u00b1 0.09 0.877 VALL-E \u00d7 50.82 4.37 0.875 VALL-T \u00d7 14.63 4.31 4.21 \u00b1 0.08 4.31 \u00b1 0.07 0.828 VALL-T \u221a 5.50 4.26 4.39 \u00b1 0.06 4.37 \u00b1 0.07 0.847 the continuation from the speech prompt. The WER is significantly lower than the three baselines and even lower than both the results obtained using real prompt transcription and using NAR resynthesis in Table 1. This improvement may be attributed to the reduced noise in the fixed pseudo prompt transcription compared to the diverse real prompt transcriptions. This result further demonstrate the robustness of VALL-T. Similarly, we observe a lower MCD compared with other baselines with the proposed approach. We do not conduct listening tests on the three baselines since it makes no sense to assess the naturalness and similarity for entirely incorrect generated audio samples. The naturalness of the proposed approach is almost the same as that observed when using real prompt transcriptions while its speaker similarity is slightly lower. We can also observe that in SECS evaluation. Next, we extend the utilization of untranscribed speech prompts to those spoken in unknown languages. Specifically, we continue to use the same test set as in the previous experiments, but leverage speech prompts from 10 German and 10 Spanish speakers randomly selected from the Multilingual Librispeech dataset (Pratap et al., 2020), simulating the speech prompt in unknown languages. Employing the same English pseudo prompt transcription as in the previous experiment for both VALL-T and the baseline VALL-E, we generate continuations from the speech prompts in German and Spanish. The results are posted in Table 3. VALL-E continues to fail in the generation due to the unknown prompt transcription. On the contrary, VALL-T still successfully performs the zero-shot TTS from the speech prompts in German and Spanish, achieving a WER of 4.22. Note that the similarity MOS and SECS in this experiment cannot be directly compared with the corresponding results in Table 1 and 2 since the speakers of the speech prompts differ. We do not have corresponding ground-truth speech that speaks the utterances in the test set in the voice of German and Spanish speakers, so we also do not calculate the MCD in this experiment. 4.5. Evaluate on lengthy speech generation We also evaluate our model on lengthy speech synthesis that exceeds the maximum length encountered during training. Due to the limitation of GPU memory, the maximum duration of training utterances is approximately 15 seconds. The test set for this experiment consists of 85 utterances, each formed by concatenating five utterances from the previous test set to simulate lengthy utterance. The generated speech in this test set exceeds 20 seconds. We use n = 50 and m = 15 as the context window size. Examining the results in Table 4, we observe that VALL-T exhibits superior generalization to long speech compared to VALL-E, attributed to its utilization of relative position embedding, even in the absence of an aligned context window. In contrast, VALL-E often starts mumbling after generating approximately 20 seconds of speech and frequently terminates prematurely without completing the generation. Upon applying the aligned context window, the WER of VALL-T further decreases and approaches the result of generating normal utterances. Additionally, the gap in MOS scores for naturalness and speaker similarity between generated speech and ground-truth is also comparable to the result of synthesizing normal utterances. 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/2306.07547v6",
+ "title": "UniCATS: A Unified Context-Aware Text-to-Speech Framework with Contextual VQ-Diffusion and Vocoding",
+ "abstract": "The utilization of discrete speech tokens, divided into semantic tokens and\nacoustic tokens, has been proven superior to traditional acoustic feature\nmel-spectrograms in terms of naturalness and robustness for text-to-speech\n(TTS) synthesis. Recent popular models, such as VALL-E and SPEAR-TTS, allow\nzero-shot speaker adaptation through auto-regressive (AR) continuation of\nacoustic tokens extracted from a short speech prompt. However, these AR models\nare restricted to generate speech only in a left-to-right direction, making\nthem unsuitable for speech editing where both preceding and following contexts\nare provided. Furthermore, these models rely on acoustic tokens, which have\naudio quality limitations imposed by the performance of audio codec models. In\nthis study, we propose a unified context-aware TTS framework called UniCATS,\nwhich is capable of both speech continuation and editing. UniCATS comprises two\ncomponents, an acoustic model CTX-txt2vec and a vocoder CTX-vec2wav.\nCTX-txt2vec employs contextual VQ-diffusion to predict semantic tokens from the\ninput text, enabling it to incorporate the semantic context and maintain\nseamless concatenation with the surrounding context. Following that,\nCTX-vec2wav utilizes contextual vocoding to convert these semantic tokens into\nwaveforms, taking into consideration the acoustic context. Our experimental\nresults demonstrate that CTX-vec2wav outperforms HifiGAN and AudioLM in terms\nof speech resynthesis from semantic tokens. Moreover, we show that UniCATS\nachieves state-of-the-art performance in both speech continuation and editing.",
+ "authors": "Chenpeng Du, Yiwei Guo, Feiyu Shen, Zhijun Liu, Zheng Liang, Xie Chen, Shuai Wang, Hui Zhang, Kai Yu",
+ "published": "2023-06-13",
+ "updated": "2024-03-28",
+ "primary_cat": "cs.SD",
+ "cats": [
+ "cs.SD",
+ "eess.AS"
+ ],
+ "main_content": "Introduction Recently, two types of discrete speech tokens have been proposed, which are known as semantic tokens and acoustic tokens (Borsos et al. 2022). Semantic tokens, such as vq-wav2vec (Baevski, Schneider, and Auli 2019), wav2vec 2.0 (Baevski et al. 2020) and HuBERT (Hsu et al. 2021), are trained for discrimination or masking prediction. Consequently, they primarily capture articulation information while providing limited acoustic details. On the other hand, acoustic tokens, which have been introduced by audio codec *Corresponding author. Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Original speech: His brother likes travelling so much. Speech editing: His brother likes Shakespeare's works so much. Context A Context B Edited speech Speech continuation (for zero-shot speaker adaptation): His brother likes likes travelling so much. To be or not to be, that is the question. Continuation speech Context A (a.k.a. 'prompt') Figure 1: Definitions of context-aware TTS tasks, including speech continuation and speech editing. models like Soundstream (Zeghidour et al. 2022) and Encodec (D\u00b4 efossez et al. 2022), are trained specifically for speech reconstruction. As a result, they capture acoustic details, especially speaker identity. The typical neural text-to-speech (TTS) pipeline, such as Tacotron 2 (Shen et al. 2018) and FastSpeech 2 (Ren et al. 2020), consists of two stages: predicting the melspectrogram from text and then vocoding it into waveform. Additional techniques, such as normalizing flow (Valle et al. 2020; Kim et al. 2020) and diffusion models (Liu et al. 2022; Popov et al. 2021; Liu, Guo, and Yu 2023), have been introduced to generate the mel-spectrogram. Recently, VQTTS (Du et al. 2022) proposes a novel approach by utilizing discrete speech tokens as the intermediate representation for text-to-speech synthesis. The discrete tokens have been found to exhibit superior naturalness and robustness compared to mel-spectrograms. Textless NLP (Lakhotia et al. 2021) and AudioLM (Borsos et al. 2022) propose to leverage wav2vec 2.0 and w2v-BERT (Chung et al. 2021) respectively for language model training and consequently are able to generate speech unconditionally via auto-regressive inference. InstructTTS (Yang et al. 2023) uses VQ-diffusion to generate acoustic tokens whose speaking style is guided by arXiv:2306.07547v6 [cs.SD] 28 Mar 2024 \fCTX-txt2vec CTX-vec2wav Continuation speech Text / phoneme tokens Context A Context A Semantic context Acoustic context Semantic tokens Semantic tokens Melspectrogram (a) Speech continuation CTX-txt2vec CTX-vec2wav Semantic tokens Context A Context B Semantic context Acoustic context Semantic tokens Melspectrogram Context A Context B Edited speech Text / phoneme tokens (b) Speech editing Figure 2: The unified context-aware framework UniCATS for speech continuation and editing. Both the two tasks share the same model, with the only distinction being the presence of context B. a natural language prompt. VALL-E (Wang et al. 2023) and SPEAR-TTS (Kharitonov et al. 2023) further extend the use of discrete tokens to zero-shot speaker adaptation. Specifically, they generate acoustic tokens based on the input text using a decoder-only auto-regressive (AR) model. During inference, they conduct AR continuation from the acoustic tokens of a short speech prompt provided by the target speaker. As a result, these models are capable of generating speech in the target speaker\u2019s voice. NaturalSpeech 2 (Shen et al. 2023) employs a typical diffusion model to generate discrete acoustic tokens as continuous features. In addition to speech continuation, there is another context-aware TTS task called speech editing (Tae, Kim, and Kim 2022; Yin et al. 2022). Illustrated in Figure 1, speech editing means synthesizing speech based on input text while ensuring smooth concatenation with its surrounding context. Unlike speech continuation, speech editing takes into account both the preceding context A and the following context B. However, current TTS models that based on discrete speech tokens face three limitations. Firstly, most of these models are autoregressive (AR) models, which restricts them to generate speech only in a left-to-right direction. This limitation makes them unsuitable for speech editing, where both preceding and following contexts are provided. Secondly, the construction of acoustic tokens involves residual vector quantization (RVQ), resulting in multiple indices for each frame. This approach introduces prediction challenges and complexity into text-to-speech. For instance, VALL-E incorporates a non-auto-regressive (NAR) module to generate the residual indices, while SPEAR-TTS addresses this issue by padding the RVQ indices into a longer sequence, which further complicates modeling. Lastly, the audio quality of these TTS systems is constrained by the performance of audio codec models. To tackle all the three limitations, we propose a unified context-aware TTS framework called UniCATS in this study, designed to handle both speech continuation and editing tasks. UniCATS comprises two components: an acoustic model CTX-txt2vec and a vocoder CTX-vec2wav. Figure 2 illustrates the pipelines of UniCATS for speech continuation and editing. In these pipelines, CTX-txt2vec employs contextual VQ-diffusion to predict semantic tokens from the input text, enabling it to incorporate the semantic context and maintain seamless concatenation with the surrounding context. Following that, CTX-vec2wav utilizes contextual vocoding to convert these semantic tokens into waveforms, taking into consideration the acoustic context, especially speaker identity. Both speech continuation and editing tasks in UniCATS share the same model, with the only distinction being the presence of context B. Our experiments conducted on the LibriTTS dataset (Zen et al. 2019) demonstrate that CTX-vec2wav outperforms HifiGAN and AudioLM in terms of speech resynthesis from semantic tokens. Furthermore, we show that the overall UniCATS framework achieves state-of-the-art performance in both speech continuation for zero-shot speaker adaptation and speech editing. The main contributions of this work are as follows: \u2022 We propose a unified context-aware TTS framework called UniCATS to address both speech continuation and editing, which achieves state-of-the-art performance on both the two tasks. \u2022 We introduce contextual VQ-diffusion within CTXtxt2vec, enabling the generation of sequence data that seamlessly concatenates with its surrounding context. \u2022 We introduce contextual vocoding within CTX-vec2wav to take into consideration the acoustic context when converting the semantic tokens into waveforms. \fVQ-Diffusion Decoder Discard Discard Text / phoneme tokens Text Encoder (Transformer blocks) Duration Predictor Length Regulator Text encoding (a) Overall architecture Positional Encoding Embedding Table Context Indicator Linear AdaLN Multi-head Self-attention Text encoding Linear Layer Norm & MLP Layer Norm & Linear & Softmax (b) VQ-diffusion decoder Figure 3: The model architecture of CTX-txt2vec with contextual VQ-diffusion. 2 UniCATS In this study, we propose a unified context-aware TTS framework called UniCATS, designed to address both speech continuation and editing tasks. UniCATS comprises two components: an acoustic model CTX-txt2vec and a vocoder CTX-vec2wav. In the following sections, we describe these two components respectively. CTX-txt2vec with Contextual VQ-Diffusion CTX-txt2vec employs contextual VQ-diffusion to predict semantic tokens from the input text, enabling it to incorporate the semantic context and maintain seamless concatenation with the surrounding context. We leverage vq-wav2vec tokens as the semantic tokens in this work. In this section, we begin by a brief review of VQ-diffusion (Gu et al. 2022) and then introduce contextual VQ-diffusion. After that, we describe the model architecture, training and inference algorithm of CTX-txt2vec. VQ-Diffusion. Inspired by diffusion model that has been widely employed in continuous data generation, VQdiffusion uses a Markovian process for discrete data. Let us consider a data sample consisting of a sequence of discrete indices x0 = [x(1) 0 , x(2) 0 , ..., x(l) 0 ] where x(i) 0 \u2208{1, 2, ..., K}. During each forward diffusion step, the indices in x0 undergo masking, substitution, or remain unchanged. Following t steps of corruption, the resulting sequence is denoted as xt. For simplicity, we omit the superscript i in the following description. Formally, the equation representing the forward process is q(xt|xt\u22121) = v\u22a4(xt)Qtv(xt\u22121) (1) where v(xt) \u2208R(K+1) represents a one-hot vector where xt = k, indicating that only the k-th value is 1 while the remaining values are 0. The index value K + 1 corresponds to the special [mask] token. Qt \u2208R(K+1)\u00d7(K+1) denotes the transition matrix for the t-th step. By integrating multiple forward steps, we obtain q(xt|x0) = v\u22a4(xt)Qtv(x0) (2) where Qt = Qt \u00b7 \u00b7 \u00b7 Q1. Applying Bayesian\u2019s rule, we have q(xt\u22121|xt, x0) = q(xt|xt\u22121, x0)q(xt\u22121|x0) q(xt|x0) = \u0000v\u22a4(xt)Qtv(xt\u22121) \u0001 \u0000v\u22a4(xt\u22121)Qt\u22121v(x0) \u0001 v\u22a4(xt)Qtv(x0) . (3) The VQ-diffusion model is constructed using a stack of Transformer blocks and is trained to estimate the distribution of x0 from xt conditioned on y, denoted as p\u03b8(\u02dc x0|xt, y). As a result, during the backward process, we can sample xt\u22121 given xt and y from the following equation p\u03b8(xt\u22121|xt, y) = X \u02dc x0 q(xt\u22121|xt, \u02dc x0)p\u03b8(\u02dc x0|xt, y). (4) Contextual VQ-Diffusion. This study focuses on speech editing and continuation tasks, where the input text serves as the condition y, and the semantic tokens to be generated are represented by the data x0. In contrast to the standard VQdiffusion approach mentioned above, our generation process also takes into account additional context tokens cA and cB associated with the data x0. Consequently, we need to model the probability of p\u03b8(\u02dc x0|xt, y, cA, cB). (5) To facilitate contextual VQ-diffusion, we propose concatenating the corrupted semantic tokens xt at diffusion step t with their clean preceding and following context tokens cA and cB in chronological order. This combined sequence, denoted as [cA, xt, cB], is then fed into the Transformer-based \fVQ-diffusion model. By doing so, our model can effectively integrate the contextual information using the self-attention layers of the Transformer-based blocks. Similar to Equation 4, we can now calculate the posterior using p\u03b8(xt\u22121|xt, y, cA, cB) = X \u02dc x0 q(xt\u22121|xt, \u02dc x0)p\u03b8(\u02dc x0|xt, y, cA, cB). (6) Model Architecture. The architecture of CTX-txt2vec is depicted in Figure 3(a), consisting of a text encoder, a duration predictor, a length regulator, and a VQ-diffusion decoder. The sequence of text or phoneme tokens are first encoded by the text encoder, which comprises Transformer blocks, and then employed for duration prediction. Subsequently, the output of the text encoder is expanded based on the corresponding duration values, resulting in the text encoding h that matches the length of semantic tokens. This process follows the idea introduced in FastSpeech 2 (Ren et al. 2020). Figure 3(b) illustrates the architecture of the VQ-diffusion decoder. The corrupted data xt, resulting from t diffusion steps, is concatenated with its preceding and following context cA and cB, forming the input sequence [cA, xt, cB] for the decoder. To distinguish between the data and context, we utilize a binary indicator sequence of the same length as the input. After converting the indicator sequence into embeddings using an embedding table, it is added to the input and then projected and combined with positional encoding. Our VQ-diffusion blocks, based on Transformer, largely follow the architecture in (Gu et al. 2022). However, we incorporate the text encoding h differently. Instead of using crossattention, we directly add h to the output of self-attention layers after applying linear projections. This adjustment is made to accommodate the strict alignment between h and semantic tokens. After passing through N such blocks, the output is layer-normed, projected, and regularized with Softmax to predict the distribution of p\u03b8(\u02dc x0|xt, y, cA, cB). As the Transformer-based VQ-diffusion decoder generates an output sequence of the same length as its input, only the output segment corresponding to xt is considered as \u02dc x0, while the remaining segments are discarded. Training Scheme. During training, each utterance is randomly utilized in one of three different configurations: with both context A and B, with only context A, or with no context. In the first configuration, the utterance is randomly divided into three segments: context A, x0, and context B. To be specific, we first randomly determine the length of x0, which must be longer than 100 frames yet shorter than the total length of the utterance itself. Next, we randomly determine the starting position of x0. The segments on the left and right sides of x0 are considered as context A and B, respectively. In the second configuration, we randomly determine the length of context A to be 2-3 seconds. We consider the initial segment of this determined length as context A, while the remaining segment on the right side of context A is assigned as x0. In the third configuration, the entire utterance is treated as x0 without any context. The proportion Algorithm 1: Inference of CTX-txt2vec for speech editing. Input: The phonemes, durations and semantic tokens of Context A and B, referred to as yA, yB, dA, dB, cA, cB. The phonemes of speech to be generated yD. Parameter: The number of diffusion steps T, fully corrupted tokens xT . Output: Edited semantic tokens. 1: y = [yA, yD, yB] 2: e = TextEncoder(y) 3: [\u02dc d A, \u02dc d D, \u02dc d B] = DurationPredictor(e) 4: \u03b1 = (dA + dB)/(\u02dc d A + \u02dc d B) 5: h = LengthRegulator(e, [dA, \u03b1\u02dc d D, dB]) 6: for t = T, T \u22121, ..., 1 do 7: p\u03b8(\u02dc x0|xt, y, cA, cB) = VQDiffusionDecoder([cA, xt, cB], t, h) 8: xt\u22121 \u223cp\u03b8(xt\u22121|xt, y, cA, cB) calculated by Equation (6) 9: end for 10: return [cA, x0, cB] of the three configurations is set to 0.6, 0.3, and 0.1, respectively. Once the division of the context and the data to be generated is determined, we proceed to corrupt x0 using Equation 2, resulting in xt. Subsequently, this corrupted segment, along with its associated context if applicable, is concatenated and utilized as the input for the VQ-diffusion decoder. The training criterion for CTX-txt2vec, denoted as LCTX-txt2vec, is determined by the weighted summation of the mean square error of duration prediction Lduration and the VQ-diffusion loss LVQ-diffusion as introduced in (Gu et al. 2022), that is LCTX-txt2vec = Lduration + \u03b3LVQ-diffusion (7) where \u03b3 is a hyper-parameter. Inference Algorithm. The inference process for speech editing is outlined in Algorithm 1. We first concatenate the phonemes of the speech to be generated, denoted as yD, with the provided context phonemes. This combined sequence is then fed into the text encoder for duration prediction. The predicted duration \u02dc d D, corresponding to yD, is rescaled using a factor \u03b1 to maintain a similar speech speed to that of the context. Then, we iteratively refine the data starting from fully corrupted xT with its context semantic tokens, following the backward procedure of VQ-diffusion. Finally, we obtain the edited semantic tokens [cA, x0, cB]. CTX-vec2wav with Contextual Vocoding We introduce contextual vocoding within CTX-vec2wav to take into consideration the acoustic context, especially speaker identity, when converting the semantic tokens into waveforms. Consequently, we eliminate the use of speaker embedding and acoustic tokens. In this section, we delve into the architecture of CTX-vec2wav and outline its training scheme. \fSemantic Encoder Auxiliary Feature Adaptor Linear Semantic Encoder Generator (Same to HifiGAN Generator) Semantic tokens Mel-spectrogram (No positional encoding) Cross-attention Data preparation for training Mel Encoder Simulating context speech Used for vocoding (a) Overall architecture and data preparation for training Feed Forward Layer Norm Convolution Multi-head Cross-attention Multi-head Self-attention Feed Forward (b) Semantic encoder blocks based on Conformer Figure 4: The model architecture of CTX-vec2wav with contextual vocoding. Model Architecture. The architecture of CTX-vec2wav is illustrated in Figure 4(a). The semantic tokens are first projected and encoded through two semantic encoders. Then, the results are passed through convolution and upsampling layers, which are identical to the generator in HifiGAN (Kong, Kim, and Bae 2020), to generate the waveforms. An optional auxiliary feature adaptor is set between the two semantic encoders. This module, akin to the variance adaptor in FastSpeech 2, facilitates conditioning the generation on three-dimensional auxiliary features: pitch, energy, and probability of voice (POV) (Ghahremani et al. 2014). Through preliminary experiments, we have observed improvement in audio quality by utilizing this module. As a result, we incorporate it in our model throughout this paper. During training, the model uses ground-truth auxiliary features as conditions and learns to predict them from the output of the first semantic encoder using a projection layer. During inference, the predicted auxiliary features are utilized as conditions. The literature (Borsos et al. 2022; Polyak et al. 2021) reveals that semantic tokens primarily capture articulation information while lacking sufficient acoustic details, particularly in relation to speaker identity. Therefore, CTXvec2wav proposes a novel approach of leveraging the melspectrogram m to prompt the acoustic contexts, as opposed to conventional methods such as x-vectors (Snyder et al. 2018) or acoustic tokens from audio codec models. The semantic encoders in CTX-vec2wav consist of M Conformerbased blocks (Gulati et al. 2020), each of which incorporates an additional cross-attention layer compared to the vanilla Conformer block, enabling the integration of acoustic contexts from the mel-spectrogram. We depict its architecture in Figure 4(b). Before entering the cross-attention layer, the mel-spectrogram m is encoded by a mel encoder into m\u2032 using a simple 1D convolution layer in order to integrate consecutive frames. Note that the mel-spectrogram has no position encoding, resulting in m\u2032 being a collection of unordered features. This characteristic allows us to utilize mel-spectrograms of varying lengths to prompt acoustic contexts with crossattention during inference, even though we only utilize a short segment of the mel-spectrogram during training. Increasing the length of the mel-spectrogram has the potential to improve speaker similarity. However, we do not involve this issue in this paper and leave it to be discussed in the future works. Training Scheme. To effectively utilize speech datasets with inaccurate or absent speaker labels during the training of CTX-vec2wav, we make an assumption that the speaker identity remains consistent within each training utterance. Based on this assumption, we divide each utterance into two segments, as illustrated in Figure 4(a). The first segment, which varies randomly in length between 2 to 3 seconds, is utilized for extracting mel-spectrograms and prompting acoustic contexts. The second segment comprises the remaining portion and is used for extracting semantic tokens and performing vocoding. The training process of CTXvec2wav follows the same criterion as HifiGAN with an additional L1 loss for auxiliary features prediction. Furthermore, we adopt the multi-task warmup technique proposed in (Du et al. 2022). Unified Framework for Context-Aware TTS UniCATS prompts semantic and acoustic contexts through their respective semantic tokens and mel-spectrograms. Following Algorithm 1, the edited semantic tokens [cA, x0, cB] \fMethod Feature for Resynthesis Speaker Control Naturalness MOS Similarity MOS SECS Ground-truth 4.91 \u00b1 0.04 4.51 \u00b1 0.08 0.837 Encodec Acoustic token 4.39 \u00b1 0.07 4.00 \u00b1 0.08 0.829 HifiGAN Semantic token X-vector 4.30 \u00b1 0.08 3.96 \u00b1 0.08 0.776 AudioLM Semantic token AR continuation 3.99 \u00b1 0.07 3.96 \u00b1 0.08 0.801 CTX-vec2wav Semantic token Contextual vocoding 4.75 \u00b1 0.06 4.50 \u00b1 0.07 0.845 Table 1: The performance of speech resynthesis from semantic tokens. are obtained. These tokens are then vocoded into waveforms, with the speaker information indicated by the melspectrogram of the contexts [mA, mB]. Since the only distinction between speech continuation and editing lies in the presence or absence of context B, all the aforementioned algorithms for speech editing can be readily generalized to speech continuation by excluding context B. Consequently, UniCATS demonstrates the capability to handle both the two context-aware TTS tasks. 3 Experiments and Results Dataset LibriTTS is a multi-speaker transcribed English speech dataset. Its training set consists of approximately 580 hours of speech data from 2,306 speakers. For evaluation purposes, we exclude 500 utterances from the official LibriTTS training set, which will serve as one of our test sets referred to as \u201ctest set A\u201d. Test set A comprises 369 speakers out of the 2,306 training speakers. In addition, we utilize 500 utterances from the \u201ctest-clean\u201d set of LibriTTS, designated as \u201ctest set B\u201d, to assess the zero-shot adaptation capability for new and unseen speakers. Test set B contains 37 speakers. Each speaker in test sets A and B is associated with a brief speech prompt lasting approximately 3 seconds. The utterance list for both test sets A and B, along with their corresponding prompts, is available on our demo page. Lastly, for evaluating speech editing, we employ the same test set as utilized in (Yin et al. 2022). The utterances for this evaluation are also derived from the \u201ctest-clean\u201d set of LibriTTS and are denoted as \u201ctest set C\u201d. Training Setup In CTX-txt2vec, the text encoder consists of 6 layers of Transformer blocks. The VQ-diffusion decoder employs N = 12 Transformer-based blocks with attention layers comprising 8 heads and a dimension of 512. In Equation 7, the value of \u03b3 is set to 1. The semantic tokens are extracted using a pretrained kmeans-based vq-wav2vec model1. CTX-txt2vec is trained for 50 epochs using an AdamW (Loshchilov and Hutter 2017) optimizer with a weight decay of 4.5 \u00d7 10\u22122. The number of diffusion steps is set to T = 100. In CTX-vec2wav, both semantic encoders consist of M = 2 Conformer-based blocks. The attention layers within these blocks have 2 heads and a dimension of 184. The mel encoder employs a 1D convolution with a kernel size of 5 and an output channel of 184. CTX-vec2wav is 1https://github.com/facebookresearch/fairseq/tree/main/examples /wav2vec trained using an Adam (Kingma and Ba 2014) optimizer for 800k steps. The initial learning rate is set to 2 \u00d7 10\u22124 and is halved every 200k steps. Speech Resynthesis from Semantic Tokens We begin by examining the performance of CTX-vec2wav in speech resynthesis from semantic tokens on test set B. Two common methods for vocoding semantic tokens, namely HifiGAN and AudioLM, are used as baselines in our evaluation. We utilize the open-source implementation of AudioLM2, as we do not have access to its official internal implementation. Each speaker in the test set is associated with a brief speech prompt that indicates the speaker\u2019s identity. In HifiGAN vocoding, we employ a pretrained speakerverification model3 to extract x-vectors from the prompts, enabling us to control the speaker information, following the idea presented in (Polyak et al. 2021). In AudioLM decoding, we use acoustic tokens from the prompts for AR continuation. In CTX-vec2wav, as previously discussed, we use the mel-spectrogram of the prompt to control the speaker identity by contextual vocoding. All these models are trained to resynthesize speech from the same semantic tokens extracted by vq-wav2vec. We also evaluate the official Encodec model4 for resynthesizing speech from the acoustic tokens, which is theoretically an easier task. We evaluate the generated results using MOS listening tests, where 15 listeners rate the presented utterances on a scale of 1 to 5 in terms of naturalness and speaker similarity to the prompt. Additionally, we compute the Speaker Encoder Cosine Similarity (SECS) (Casanova et al. 2022) as an auxiliary metric to assess speaker similarity. The SECS scores are calculated using the speaker encoder in Resemblyzer5. The results are shown in Table 1. Our proposed CTXvec2wav demonstrates the best performance in speech resynthesis from semantic tokens in terms of both naturalness and speaker similarity. In contrast, when vocoding semantic tokens with HifiGAN, we observe the lowest SECS score. This can be attributed to the information compression inherent in x-vectors as a bottleneck feature of the speaker-verification model. Although x-vectors effectively distinguish between speakers, they are not ideal for accurately reconstructing the speaker\u2019s voice. Remarkably, CTXvec2wav even outperforms Encodec in subjective evalua2https://github.com/lucidrains/audiolm-pytorch 3https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb 4https://github.com/facebookresearch/encodec 5https://github.com/resemble-ai/Resemblyzer \fMethod Seen Speakers Unseen Speakers Naturalness MOS Similarity MOS SECS Naturalness MOS Similarity MOS SECS Ground-truth 4.89 \u00b1 0.04 4.54 \u00b1 0.08 0.833 4.91 \u00b1 0.04 4.50 \u00b1 0.08 0.837 FastSpeech 2 3.81 \u00b1 0.08 3.94 \u00b1 0.06 0.820 3.65 \u00b1 0.08 3.65 \u00b1 0.07 0.770 VALL-E 4.23 \u00b1 0.07 3.98 \u00b1 0.06 0.796 4.17 \u00b1 0.09 3.83 \u00b1 0.07 0.786 UniCATS 4.54 \u00b1 0.07 4.56 \u00b1 0.07 0.831 4.43 \u00b1 0.08 4.25 \u00b1 0.08 0.836 Table 2: The performance of zero-shot speaker adaptative text-to-speech. tions and achieves a SECS score comparable to the ground truth. We notice that Encodec resynthesis occasionally introduces artifacts that negatively impact the subjective scores. Speech Continuation for Zero-Shot Speaker Adaptation In this section, we assess the performance of UniCATS in zero-shot speaker adaptation with speech continuation. We utilize test sets A and B for evaluating seen and unseen speakers respectively. Our baselines include x-vector-based multi-speaker TTS model FastSpeech 2 from ESPnet toolkit (Watanabe et al. 2018) and the state-of-the-art zero-shot speaker adaptive TTS model VALL-E. As the official implementation of VALL-E is not publicly available, we employ the open-source VALL-E model6 trained on LibriTTS for our evaluation. To evaluate the generated results, we conduct MOS listening tests following the same methodology as described in the previous section. 15 listeners are asked to rate the presented utterances on a scale of 1 to 5, considering naturalness and speaker similarity to the prompt. Similarly, we introduce SECS as another metric to assess speaker similarity. We demonstrate the results in Table 2. For seen speakers, UniCATS achieves a much better naturalness compared with the FastSpeech 2 and VALL-E baselines. FastSpeech 2 has a relatively limited naturalness due to the use of melspectrogram, while VALL-E\u2019s performance is limited by the performance of Encodec. The speaker similarity of UniCATS is close to the ground-truth and outperforms the two baselines in both subjective and objective evaluations. For unseen speakers, UniCATS also achieves the best performance in terms of both naturalness and speaker similarity. However, all systems perform slightly worse for unseen speakers than for seen speakers in the subjective scores. The results are reported in Table 2. In the case of seen speakers, UniCATS achieves significantly better naturalness compared to the FastSpeech 2 and VALL-E baselines. FastSpeech 2\u2019s naturalness is relatively limited due to its reliance on mel-spectrograms, while VALL-E\u2019s performance is constrained by the capabilities of Encodec. UniCATS also achieves speaker similarity scores that is quite close to the ground truth, outperforming both baselines in both subjective and objective metrics. All systems demonstrate slightly diminished subjective scores for unseen speakers when compared to seen speakers. It is worth noting that MOS scores for naturalness and speaker similarity achieved by UniCATS are even higher 6https://github.com/lifeiteng/vall-e than those of Encodec resynthesis, indicating that our model breaks the upper bound of a series of other works that uses acoustic tokens. Speech Editing We utilize test set C to evaluate speech editing, where each utterance is divided into three segments: context A, the segment x to be generated, and context B. This division allows us to simulate speech editing and compare the generated results with the ground truth. To evaluate short and long segment editing separately, we employ two different segment division approaches. For short editing, x consists of randomly chosen 1 to 3 words. For long editing, x contains as many words as possible while remaining within a 2-second duration. In our experiments, we compare UniCATS with the state-of-the-art speech editing model RetrieverTTS (Yin et al. 2022). In the MOS listening test, participants are requested to rate the naturalness of the generated segments x and their contextual coherence. Method MOS@short MOS@long Ground-truth 4.77 \u00b1 0.06 4.90 \u00b1 0.04 RetrieverTTS 4.43 \u00b1 0.08 4.37 \u00b1 0.08 UniCATS 4.62 \u00b1 0.06 4.63 \u00b1 0.06 Table 3: The performance of speech editing. The results in Table 3 demonstrate that UniCATS outperforms RetrieverTTS in both scenarios. Moreover, as the length of the editing segment increases, the performance of RetrieverTTS declines. Conversely, UniCATS exhibits consistent performance across varying segment lengths. 4"
+ },
+ {
+ "url": "http://arxiv.org/abs/2304.13121v1",
+ "title": "Multi-Speaker Multi-Lingual VQTTS System for LIMMITS 2023 Challenge",
+ "abstract": "In this paper, we describe the systems developed by the SJTU X-LANCE team for\nLIMMITS 2023 Challenge, and we mainly focus on the winning system on\nnaturalness for track 1. The aim of this challenge is to build a multi-speaker\nmulti-lingual text-to-speech (TTS) system for Marathi, Hindi and Telugu. Each\nof the languages has a male and a female speaker in the given dataset. In track\n1, only 5 hours data from each speaker can be selected to train the TTS model.\nOur system is based on the recently proposed VQTTS that utilizes VQ acoustic\nfeature rather than mel-spectrogram. We introduce additional speaker embeddings\nand language embeddings to VQTTS for controlling the speaker and language\ninformation. In the cross-lingual evaluations where we need to synthesize\nspeech in a cross-lingual speaker's voice, we provide a native speaker's\nembedding to the acoustic model and the target speaker's embedding to the\nvocoder. In the subjective MOS listening test on naturalness, our system\nachieves 4.77 which ranks first.",
+ "authors": "Chenpeng Du, Yiwei Guo, Feiyu Shen, Kai Yu",
+ "published": "2023-04-25",
+ "updated": "2023-04-25",
+ "primary_cat": "cs.SD",
+ "cats": [
+ "cs.SD",
+ "eess.AS"
+ ],
+ "main_content": "INTRODUCTION The LIMMITS 2023 Challenge is organized as part of ICASSP 2023 which aims at the development of a lightweight multi-speaker multilingual Indic text-to-speech (TTS) model using datasets in Marathi, Hindi and Telugu. Each of the 3 languages has a male and a female speaker in the given dataset, so there are a total of 6 speakers. Each speaker has 40 hours speech data and the corresponding transcript. In track 1, participants may only use at most 5 hours data from each speaker for training the TTS model. Track 2 focuses on the model size. It speci\ufb01es mel-spectrogram as the acoustic feature and restricts the number of parameters in text-to-mel model to 5 million. In track 3, both the constraints on the amount of training data and the number of model parameters are imposed. For track 2 and track 3 where the model size is limited, we use GradTTS [1] that leverages diffusion model for mel-spectrogram prediction. The denoising process has a large number of in\ufb01nitesimal steps but shares a same U-net architecture, so it can achieve high voice quality with limited number of parameters. We reduce the parameter size from original 15 million to less than 5 million by using less layers in text encoder and lower channels in the U-net. EMA is applied on model parameters. We also add speaker embedding and language embedding to the encoder output for controlling the speaker and language information. In the objective evaluations \u2020Equal contribution. \u2021Corresponding author. on naturalness, our system ranks \ufb01rst in track 3 with a MOS score of 4.44, and third in track 2 with MOS score 4.40. In track 1, we choose VQTTS [2]-based system that utilizes VQ acoustic feature rather than mel-spectrogram for robust and high\ufb01delity TTS training. In the subjective MOS listening test on naturalness, our system achieves 4.77 which also ranks \ufb01rst. Note that this MOS score is already very close to human-level naturalness. Next, we describe the system in detail in the rest of this paper. 2. SYSTEM DESCRIPTION 2.1. Data selection In track 1, only 5 hours data from each speaker can be used to train the TTS model, so we need to select the training data from the given total 40 hours data per speaker. First, we recognize the all speech data with the ASR model provided by the challenge organizers and compute the CER for each utterance between the transcript and the recognition result. We also train an alignment search model [3] and computes the likelihood of the alignment for each utterance. Accordingly, we select 10 hours data that has the lowest CER and the highest likelihood of its alignment. Then we calculate the focus rate of the alignment for the selected 10 hours data and further select the best 5 hours data for TTS training. 2.2. Text preprocessing The three Indian languages can all be regarded as ideal phonemic orthography, so it suf\ufb01ces to use raw characters as TTS inputs. We \ufb01nd explicit schwa deletion is also not necessary in preliminary experiments. Note that we substitute digits with their pronunciations which are surrounded by braces in the provided transcripts. Additional to raw characters, we also introduce an additional token that indicates a short silence. In order to detect the position of automatically in the training data, we train an HMM-GMM ASR model with Kaldi toolkit [4] and compute the alignment with token-passing algorithm on the WFST decoding graph. There is an optional silence between all the two consecutive words, so the algorithm automatically decides whether to go through the silence nodes for maximizing the likelihood. Given the training data with auto-detected silence position, we train a Transformerbased model to predict whether there are tokens between all the two consecutive words from raw text. In inference, we insert the tokens at the predicted positions into the character sequence and send the result to the TTS model. From the Kaldi alignment, we also obtain the durations of all input tokens, which is used as the target for training the duration predictor of the TTS model. arXiv:2304.13121v1 [cs.SD] 25 Apr 2023 \f(a) txt2vec (b) vec2wav Fig. 1. Cross-lingual speech synthesis in multi-speaker multi-lingual VQTTS. 2.3. Multi-speaker multi-lingual VQTTS Our TTS model is based on the recently proposed VQTTS [2], consisting of an acoustic model txt2vec and a vocoder vec2wav. It uses VQ acoustic feature extracted by vq-wav2vec [5] rather than mel-spectrogram as the acoustic feature. In this way, our acoustic model txt2vec basically becomes a classi\ufb01cation model rather than a traditional regression model. Instead of predicting the complicated mel-spectrogram with high correlation along both time and frequency axes, txt2vec only needs to consider the correlation along time axis in feature prediction, which narrows the gap between GT and predicted acoustic feature dramatically. Besides the VQ feature, VQTTS also introduces three dimensional auxiliary features including pitch, energy and POV. The vocoder vec2wav uses an additional feature encoder before Hi\ufb01GAN generator for smoothing the discontinuous quantized feature. To enable the model to control the speaker identity and language, we add speaker embeddings and language embeddings to the text encoder output. The speaker embeddings are also provided to the vocoder to control the generated voice. Our speaker embeddings are extracted by a pretrained x-vector extractor1 for all utterances. Then we average the speaker embeddings over all utterances within each speaker and take the result as the input to the TTS model. The language embeddings are stored in a trainable lookup table and are jointly trained with the TTS model. 2.4. Cross-lingual speech synthesis There are evaluations on both mono-lingual and cross-lingual speech synthesis in this challenge. For mono-lingual speech synthesis, we just provide the speci\ufb01ed speaker embedding and language embedding to the VQTTS. As for cross-lingual speech synthesis, we notice that giving unmatched speaker embedding and language embedding to the acoustic model txt2vec sometimes generates unstable results and mispronunciation in our preliminary experiments. Therefore, we provide a native speaker\u2019s embedding to txt2vec for generating the proper articulation \ufb01rst. Then we provide the speci\ufb01ed cross-lingual speaker\u2019s embedding to the vec2wav for generating the voice of the correct speaker. We \ufb01nd providing different speaker embeddings as such helps improve the pronunciation to a large extent, indicat1https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb ing the potential of VQTTS to automatically disentangle acoustic and linguistic information. Note that the predicted pitch of the native speaker in the auxiliary feature is also shifted and rescaled to the speci\ufb01ed speaker and then sent to the vocoder. The pipeline is demonstrated in Figure 1. 3. EVALUATION RESULT In the evaluation, we need to synthesize speech given speci\ufb01ed content, speaker and language, including both mono-lingual and crosslingual combinations of speaker and language. The MOS scores in the subjective listening tests are shown in Table 1. Table 1. Mean MOS scores of multi-speaker multi-lingual VQTTS. All Mono-lingual Cross-lingual Nautralness 4.77 4.80 4.74 Speaker Similarity 3.86 4.25 3.45 Our proposed VQTTS system achieves the best naturalness among all the systems in track 1. We notice that the speaker similarity of cross-lingual synthesis is still worse than that of mono-lingual synthesis. We leave this issue to be addressed in the future work. 4."
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.17550v5",
+ "title": "DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder",
+ "abstract": "While recent research has made significant progress in speech-driven talking\nface generation, the quality of the generated video still lags behind that of\nreal recordings. One reason for this is the use of handcrafted intermediate\nrepresentations like facial landmarks and 3DMM coefficients, which are designed\nbased on human knowledge and are insufficient to precisely describe facial\nmovements. Additionally, these methods require an external pretrained model for\nextracting these representations, whose performance sets an upper bound on\ntalking face generation. To address these limitations, we propose a novel\nmethod called DAE-Talker that leverages data-driven latent representations\nobtained from a diffusion autoencoder (DAE). DAE contains an image encoder that\nencodes an image into a latent vector and a DDIM image decoder that\nreconstructs the image from it. We train our DAE on talking face video frames\nand then extract their latent representations as the training target for a\nConformer-based speech2latent model. This allows DAE-Talker to synthesize full\nvideo frames and produce natural head movements that align with the content of\nspeech, rather than relying on a predetermined head pose from a template video.\nWe also introduce pose modelling in speech2latent for pose controllability.\nAdditionally, we propose a novel method for generating continuous video frames\nwith the DDIM image decoder trained on individual frames, eliminating the need\nfor modelling the joint distribution of consecutive frames directly. Our\nexperiments show that DAE-Talker outperforms existing popular methods in\nlip-sync, video fidelity, and pose naturalness. We also conduct ablation\nstudies to analyze the effectiveness of the proposed techniques and demonstrate\nthe pose controllability of DAE-Talker.",
+ "authors": "Chenpeng Du, Qi Chen, Xie Chen, Kai Yu",
+ "published": "2023-03-30",
+ "updated": "2024-03-01",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.MM"
+ ],
+ "main_content": "INTRODUCTION Generating high-fidelity videos of talking faces that synchronize with input speech is a challenging and crucial task. Researchers have proposed various approaches to tackle this problem, broadly categorized as 2D-based and 3D-based methods. The former typically uses facial landmarks as an intermediate representation. The system first predicts these landmarks from the speech input and then generates corresponding video frame. For instance, [25] predicts only mouth-related landmarks and textures, while [35] predicts all facial landmarks and uses them to warp a static image. Some methods, such as [1, 18], directly optimize their models using a generative adversarial network (GAN), without any intermediate representation. In contrast, 3D-based methods use 3D morphable models (3DMM) coefficients [17, 27, 31], which are obtained from a pretrained 3D face reconstruction model, as their intermediate representation. Recently, some researchers have applied the neural radiance field (NeRF) to this task, as reported in [8, 20]. NeRF generates the final image by volume rendering. Despite recent advancements in speech-driven talking face generation, the fidelity of the generated video still lags behind that of real recordings. One reason for this is the use of handcrafted intermediate representations, such as facial landmarks and 3DMM coefficients, which are designed based on human knowledge and are insufficient to precisely describe facial movement. Additionally, these methods necessitate an external pretrained model to extract these representations, which may introduce additional errors. The arXiv:2303.17550v5 [cs.CV] 1 Mar 2024 \fMM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada Chenpeng Du et al. performance of the representation extractor sets an upper bound on talking face generation. In this paper, we introduce a novel system named DAE-Talker for generating speech-driven talking face. Our method employs datadriven latent representations from a diffusion autoencoder (DAE) [19]. DAE contains an image encoder that encodes an image into a latent vector and an image decoder based on denoising diffusion implicit model (DDIM) that reconstructs the image from the latent vector. We train the DAE on talking face video frames and then extract their intermediate latent representations as the training target for a Conformer-based speech2latent model. Different from using a pretrained model for extracting the traditional handcrafted representations, the image encoder in DAE is jointly optimized with the DDIM image decoder in a data-driven manner to achieve a better reconstruction performance. During inference, DAE-Talker first predicts the latents from speech and then generates the video frames with the image decoder in DAE from the predicted latents. This allows DAE-Talker to synthesize full video frames and produce natural head movements that align with the content of speech, rather than relying on a predetermined head pose from a template video. Moreover, we model the head pose explicitly in speech2latent for the pose controllability. Hence, DAE-Talker can either generate natural head movements based on the input speech or control the head pose with a specified one. It is worth noting that the DAE is trained on individual video frames, without taking into account the correlation between consecutive frames. In this work, we introduces a novel method that employs the DDIM image decoder to generate continuous video frames. The denoising process of DDIM behaves as an ODE solver [22], resulting in a continuous deterministic trajectory. Rather than sampling various Gaussian noises to produce different frames, we propose to utilize a shared Gaussian noise \ud835\udc65\ud835\udc47to generate all video frames. This allows the DDIM\u2019s denoising process to initiate from a fixed point. Additionally, our speech2latent model is trained on sequence-level, with a global and local context-aware architecture, leading to a continuous predicted latent sequence. This continuity enables the ODE trajectory to change smoothly along different frames. As a result, we can produce continuous video frames with the DDIM image decoder, obviating the need to directly model the joint distribution of consecutive video frames. Our experiments show that DAE-Talker outperforms existing popular methods in lip-sync, video fidelity, and pose naturalness. We also conduct ablation studies to analyze the effectiveness of the proposed techniques and demonstrate the pose controllability of DAE-Talker. The main contributions of this work are as follows: \u2022 We propose a novel approach for generating talking faces, which involves utilizing data-driven latent representations of a diffusion autoencoder. Our method outperforms existing popular methods in terms of lip-sync accuracy, video fidelity, and pose naturalness. \u2022 DAE-Talker is able to synthesize full video frames and produce natural head movements that align with the content of speech, rather than relying on a predetermined head pose from a template video. \u2022 To generate continuous video frames, we propose to start the denoising process of the DDIM image decoder from a shared Gaussian noise \ud835\udc65\ud835\udc47for all video frames and produce smoothly changed ODE trajectory by continuous latent sequence predicted by the speech2latent model. The remainder of this paper is structured as follows. In Section 2, we conduct a review of the related work on the diffusion model and speech-driven talking face generation. Section 3 presents the proposed DAE-Talker and the theoretical validity of our approach for generating continuous video frames. In Section 4, we demonstrate and analyze the results of our experiments. Finally, we conclude this paper in Section 5. 2 RELATED WORK 2.1 Diffusion model Recent advancements in image generation have been made by denoising diffusion probabilistic models (DDPM) [12, 21] and scorebased models [23], surpassing GAN-based models [5, 13] in both fidelity and diversity. DDPM is a parameterized Markov chain that incrementally adds Gaussian noise to images and uses a deep neural network to reconstruct the original image from the noise step by step. DDIM [22] utilizes a class of non-Markovian diffusion processes, leading to accelerated sampling. An autoencoder based on DDIM was proposed by [19] for extracting meaningful representations in face generation. Additionally, [16] disentangled face identity and motion using a diffusion autoencoder. The diffusion model has also been applied to video generation. For instance, [14] utilizes DDPM to model the joint distribution of multiple consecutive video frames. To improve the generation speed, [34] proposes to model the video in a low dimensional latent space. [10] further extends this approach to generate video of arbitrary length with the latent video diffusion model. Another successful method for high-resolution video generation is proposed in [11], where the authors cascade diffusion models and super-resolution models. Despite these advancements, modeling the joint distribution of consecutive video frames often results in large model sizes and slow convergence speed. Rather than relying on the above video diffusion model, DAE-Talker generates continuous video frames using a DDIM image decoder trained on independent video frames. The correlation between consecutive frames is instead modeled by speech2latent, which is trained at sequence-level. 2.2 Speech-driven talking face generation Speech-driven talking face generation has been extensively explored in recent literature. [26] uses a sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to mouth movements in a way that accurately captures natural motion and visual coarticulation effects. [15] predicts the 3D vertex coordinates of a face model from speech. GAN is one of the most popular tools for talking face generation. [28] introduces temporal GAN that uses 3 discriminators focused on achieving detailed frames, audio-visual synchronization, and realistic expressions. Wav2lip [18] proposes to use a pretrained lip-sync expert model as the discriminator. SPACE [9] leverages latent representation of a pretrained face-vid2vid model but still relies on facial landmark prediction. Neural radiance field (NeRF) is another type of talking face model. AD-NeRF [8] models the head and torso with two separate NeRFs, \fDAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder MM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada DDIM Image Decoder Image Encoder \ud835\udc65\ud835\udc47 \ud835\udc650 \ud835\udc50 (a) Training Stage 1: Diffusion autoencoder. Image Encoder (frozen) \u2026\u2026 Wav2vec 2.0 Feature Extractor (frozen) Convolution Downsampling Speech Encoder Speech2latent Latent Decoder (Conformer block \u00d7 \ud835\udc41) Linear Projection (Conformer block \u00d7 \ud835\udc41) Pose Adaptor Pose Predictor Predicted pose Ground-truth pose Pose Projection \ud835\udcdb\ud835\udc5d\ud835\udc5c\ud835\udc60\ud835\udc52 (b) Training Stage 2: Speech2latent. Figure 1: Two-stage training process of DAE-Talker. In stage 1, we first train a diffusion autoencoder on video frames for extracting latent representation. In stage 2, we train a speech2latent model for predicting latent from speech. conditioning the prediction on the input speech. [30] disentangle head poses from lip movements and sample them from a probabilistic model. DFRF [20] proposes dynamic facial radiance fields for few-shot talking head synthesis, generalizing to an unseen identity with few training data. However, NeRF-based methods have shown limitations in modeling the dynamic radiance field with uncertainty, sometimes resulting in blurry torso. Our concurrent work Diffused Head [24] proposes a similar idea to us, using diffusion models for speech-driven talking face generation, but they model the correlation between video frames in an auto-regressive manner, which limits the inference speed. In contrast, DAE-Talker proposes a novel approach to generate all video frames in parallel while maintaining continuity of consecutive frames. 3 DAE-TALKER In this paper, we propose DAE-Talker, a novel system for generating speech-driven talking faces that utilizes data-driven latent representations from a diffusion autoencoder (DAE). The training process involves two stages, as shown in Figure 1. We begin by training a DAE on video frames of talking faces, then we extract the latent representations and use them as the training target for a Conformerbased speech2latent model. During inference, we first predicts the latents from speech and then generates the video frames with the image decoder in DAE from the predicted latents. In this section, we discuss the training stages and the inference process. 3.1 Latent extraction from diffusion autoencoder As illustrated in Figure 1(a), DAE contains an image encoder that encodes an image into a latent vector \ud835\udc50and a DDIM image decoder that reconstructs the image from it. Different from DDPM, DDIM [22] defines a group of non-Markovian random processes by \ud835\udc5e(\ud835\udc651:\ud835\udc47|\ud835\udc650) := \ud835\udc5e(\ud835\udc65\ud835\udc47|\ud835\udc650) \ud835\udc47 \u00d6 \ud835\udc61=2 \ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61,\ud835\udc650) (1) where \ud835\udc5e(\ud835\udc65\ud835\udc47|\ud835\udc650) = N (\u221a\ud835\udefc\ud835\udc47\ud835\udc650, (1 \u2212\ud835\udefc\ud835\udc47)I) (2) and \ud835\udc5e(\ud835\udc65\ud835\udc61\u22121|\ud835\udc65\ud835\udc61,\ud835\udc650) =N \u0012\u221a\ud835\udefc\ud835\udc61\u22121\ud835\udc650 + \u221a\ufe03 1 \u2212\ud835\udefc\ud835\udc61\u22121 \u2212\ud835\udf0e2 \ud835\udc61\u00b7 \ud835\udc65\ud835\udc61\u2212\u221a\ud835\udefc\ud835\udc61\ud835\udc650 \u221a1 \u2212\ud835\udefc\ud835\udc61 , \ud835\udf0e2 \ud835\udc61I \u0013 . (3) Based on the definitions given above, we can derive that N (\ud835\udc65\ud835\udc61|\ud835\udc650) in DDIM takes on the same form as it does in DDPM. As a result, the training process for DDIM is identical to that of DDPM. The key difference lies in the decoding process where the value of \ud835\udf0e\ud835\udc61is set to 0 in DDIM, where we have \ud835\udc65\ud835\udc61\u22121 =\u221a\ud835\udefc\ud835\udc61\u22121\ud835\udc650 + \u221a1 \u2212\ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udc65\ud835\udc61\u2212\u221a\ud835\udefc\ud835\udc61\ud835\udc650 \u221a1 \u2212\ud835\udefc\ud835\udc61 =\u221a\ud835\udefc\ud835\udc61\u22121 \u00a9 \u00ad \u00ab \ud835\udc65\ud835\udc61\u2212\u221a1 \u2212\ud835\udefc\ud835\udc61\u00b7 \ud835\udf16(\ud835\udc61) \ud835\udf03 (\ud835\udc65\ud835\udc61,\ud835\udc50) \u221a\ud835\udefc\ud835\udc61 \u00aa \u00ae \u00ac + \u221a1 \u2212\ud835\udefc\ud835\udc61\u22121 \u00b7 \ud835\udf16(\ud835\udc61) \ud835\udf03 (\ud835\udc65\ud835\udc61,\ud835\udc50). (4) Therefore, the decoding process of DDIM is a deterministic process rather than a stochastic process, which allows us to control the generated image with only the initial point \ud835\udc65\ud835\udc47and the latent representation \ud835\udc50. We train the DAE on talking face video frames. After that, we extract the latent representations \ud835\udc50for all video frames, which are then used as the training target for the speech2latent model. \fMM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada Chenpeng Du et al. 3.2 Speech2latent: Latent prediction from speech In order to generate lip-sync video from speech, we train a model named speech2latent to predict the corresponding latent representations from the speech, as shown in Figure 1(b). 3.2.1 Deep acoustic feature. Deep acoustic features extracted by neural networks have been found to be superior to traditional acoustic features such as MFCC and mel-spectrogram in both speech recognition [2, 3] and speech synthesis [6]. These types of neural networks are typically trained with only speech data in a selfsupervised manner. Recently, they are also leveraged in talking face generation models [7]. Therefore, we leverage the deep acoustic feature extracted by pretrained wav2vec 2.0 [3] in this work. 3.2.2 Local and global context-aware architecture. Latent prediction from speech is a sequence-to-sequence task, where the two sequences are monotonically aligned. Accordingly, we build our speech2latent model based on the Conformer architecture, which utilizes a convolution layer, a speech encoder, a pose adaptor, a latent decoder and a linear projection layer. Because speech has a higher frame rate than video, we set the stride size of the first convolution layer to the ratio of the two frame rates to ensure appropriate downsampling. Both the speech encoder and latent decoder consist of \ud835\udc41Conformer blocks, which is a variant of Transformer block that incorporates a convolution layer after each self-attention layer. This combination of the two layers allows for capturing both local and global context information. During training, we calculate the mean square loss L\ud835\udc59\ud835\udc4e\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61between the predicted latent and the target one extracted from the ground-truth video frames. 3.2.3 Pose modelling. Note that latent prediction from speech is a one-to-many mapping. When speaking a sentence, there are multiple possible poses that can be considered correct. To address this, we propose to condition the latent prediction on the extracted head pose during training. This approach enhances the latent prediction by incorporating additional condition information and, as a result, alleviate the problem. Furthermore, it provides us with the ability to control the head pose in talking-face generation. During training, we utilize a pose predictor to estimate the head poses based on the speech input. Also, the ground-truth head poses are projected and added to the speech encoder output to predict the latent sequence. During inference, we can use either the predicted head poses or user-specified ones to predict the latent sequence. The overall training criterion of speech2latent can be written as L\ud835\udc60\ud835\udc5d\ud835\udc52\ud835\udc52\ud835\udc50\u210e2\ud835\udc59\ud835\udc4e\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61= L\ud835\udc59\ud835\udc4e\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61+ \ud835\udefcL\ud835\udc5d\ud835\udc5c\ud835\udc60\ud835\udc52 (5) where L\ud835\udc5d\ud835\udc5c\ud835\udc60\ud835\udc52is the mean square loss of pose prediction and \ud835\udefcis the relative weight between the two terms. 3.2.4 Pseudo-sentence sampling for data augmentation. There are two main methods for processing data in talking face systems. The first method predicts the target solely based on the corresponding speech frame and its consecutive frames, such as in AD-NERF, without considering long-term context. The second method divides the speech into complete sentences, such as in FaceFormer [7], and trains the model on the sequence level. However, when the available data is limited to just a few minutes, the number of sentences DDIM Image Decoder (shared across all frames) \u2026\u2026 \u2026\u2026 Speech2latent \ud835\udc65\ud835\udc47 \ud835\udc650 (1) \ud835\udc650 (2) \ud835\udc650 (\ud835\udc5b) \ud835\udc50(1) \ud835\udc50(2) \ud835\udc50(\ud835\udc5b) Figure 2: The inference process of DAE-Talker. The Gaussian noise \ud835\udc65\ud835\udc47is shared across all frames for the continuity of video frames. available for training is quite small. This makes the second method more susceptible to overfitting compared to the first method that has more training samples available. In this work, we introduces a novel data augmentation technique called pseudo-sentence sampling, which offers the benefits of both worlds. Instead of dividing the data into fixed real sentences, we randomly select a starting point and generate a pseudo-sentence by cutting it at a random length between 5 to 20 seconds for each training sample. This approach enables us to obtain sufficient training samples while maintaining sequence-level training. 3.3 Frame-wise conditioned video generation with DDIM image decoder Different from generating a large number of images independently, video generation requires considering the correlation between consecutive frames to ensure that the changes among frames are smooth. In this section, we discuss how to generate continuous video frames by using the DDIM image decoder. According to Equation 4, the denoising process of DDIM is a deterministic process, which is equivalent to an Euler solver to an ordinary differential equation (ODE). Euler method is a numerical method for solving an ODE given an initial point by taking a large amount of discrete infinitesimal steps along the direction of the gradient. Note that the direction field of the ODE is continuous, the inference process simulates the trajectory for solving the ODE which is a continuous curve. Therefore, we can approximate \ud835\udc650, namely the ending point of the curve, as a continuous function of \ud835\udc65\ud835\udc47and \ud835\udc50, that is \ud835\udc650 = \ud835\udc53(\ud835\udc65\ud835\udc47,\ud835\udc50). (6) Rather than sampling various Gaussian noises to produce different frames, we propose to utilize a shared Gaussian noise \ud835\udc65\ud835\udc47to generate all video frames. This provides a consistent starting point for \fDAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder MM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada Ground-truth MakeItTalk Wav2Lip AD-NeRF DAE-Talker Figure 3: Samples of synthetic results from DAE-Talker and the baseline methods. the denoising process. Additionally, by the definition of continuous function, continuous variations in the input result in continuous changes in its output. As a result, as the latent representation \ud835\udc50 changes continuously, the generated data \ud835\udc650 also changes continuously. This statement can be formalized as lim \u0394\ud835\udc50\u21920 \ud835\udc53(\ud835\udc65\ud835\udc47,\ud835\udc50+ \u0394\ud835\udc50) = \ud835\udc650. (7) Therefore, we can use a shared Gaussian noise \ud835\udc65\ud835\udc47and a continuous latent sequence to generate continuous video frames with the DDIM decoder, as illustrated in Figure 2. The global and local context-aware architecture of speech2latent, as explained in Section 3.2.2, ensures the continuity of the latent sequence. This approach can also be applied to any frame-wise conditioned video generation task. Instead of relying on a large video diffusion model that explicitly models the correlation between consecutive frames, it suffices to use a DDIM image decoder to generate such videos. 4 EXPERIMENTS 4.1 Setup 4.1.1 Dataset. Our dataset consists of a 15-minute video of Obama\u2019s address, which is the same one used in AD-NeRF [8]. The video has a resolution of 256 \u00d7 256 and is recorded at a frame rate of 25 fps. The audio is sampled at a rate of 16000Hz. We divide the video into two parts, using the first 12 minutes for training and the remaining 3 minutes for testing (referred to as test set A). In addition, we use another Obama\u2019s address from the Synthesizing Obama [25] dataset as test set B. To assess our model\u2019s performance on external speakers, we also introduce LibriTTS [32] dataset that comprises audiobook readings as the test set C. 4.1.2 Training details. We use Unet-based architecture for our diffusion autoencoder following [19]. The dimension of the latent code is 512. The number of training diffusion steps is 1000 and we use linear scheduler for adding noise. The model is trained on L1 loss with noise prediction. The optimizer is Adam with a learning rate of 1e-4 and the batch size is set to 32. We take about 3 days to train the autoencoder for 80k steps on 8 Nvidia V100 GPUs. During inference, we use \ud835\udc47= 100 steps for generation. The input of speech2latent module is the acoustic feature extracted by a pretrained wav2vec 2.0 model XLSR1. The ground-truth head pose is extracted by a python package2 and is represented with 3 dimensional vectors including roll, pitch and yaw. Both the speech encoder and the latent decoder consists of 4 Conformer blocks with 2 attention heads. The dimension of the self attention is 256 and the kernel size of each CNN layer is 13. The pose predictor is composed of two 1D convolutional layers with kernel size of 3 and channel size of 384, each of which followed by ReLU activation, layer normalization and dropout. The optimizer of speech2latent is also Adam with a learning rate of 1e-4 and the batch size is set to 16. 1https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec 2https://pypi.org/project/headpose \fMM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada Chenpeng Du et al. Table 1: Evaluations on lip-sync with different talking face models. Model LSE-D \u2193 LSE-C \u2191 LMD \u2193 Test set A Test set B Test set C Test set A Test set B Test set C Test set A Test set B GT 6.59 7.12 8.43 8.27 0 0 MakeItTalk 8.51 9.08 9.19 5.37 5.88 5.34 3.48 3.40 Wav2Lip 8.89 7.83 7.58 6.49 7.81 7.58 2.34 2.60 AD-NeRF 7.80 8.06 8.13 6.90 7.51 7.03 1.06 2.44 DAE-Talker 7.25 6.98 7.63 8.05 9.25 8.12 1.55 2.25 (a) Lip-sync. (b) Pose naturalness. (c) Video fidelity. Figure 4: Subjective evaluations on talking face models with user studies. 4.2 Experimental results In this section, we evaluate DAE-Talker\u2019s performance and compare it with several baselines using both objective and subjective measures. We introduce three baselines used in our experiments. The first one is MakeItTalk [35], which utilizes facial landmarks as an intermediate representation. The system predicts the positions of the landmarks from speech and generates an animated image by translating it using an image-to-image translation network. Although it controls the head pose implicitly through facial landmarks, it does not explicitly model it. Wav2Lip [18], on the other hand, uses a GAN-based model that takes a sequence of upper-half face images and speech segments as input and generates lower-half face images. It does not use an intermediate representation but relies on an external lip-sync expert. However, the head pose in Wav2Lip may be inconsistent with the speech content as it is controlled by a template video that provides upper-half face motion. Both of the two methods require extensive training data, and we use their released checkpoints in our evaluations. In contrast, AD-NeRF [8] requires only a short video for training but can still generate natural results. It feeds speech into a conditional function and generates a dynamic neural radiance field, which is then rendered into video frames using volume rendering algorithm. In our experiments, we train the AD-NeRF model on our 12-minute training set using the official implementation3. We evaluate lip-sync performance use three metrics: LSE-D, LSEC [18], and LMD [4]. LSE-D and LSE-C utilize a publicly available 3https://github.com/YudongGuo/AD-NeRF pre-trained SyncNet4 to measure the lip-sync error between the generated frames and the corresponding speech segment. On the other hand, LMD calculates the L2 distance between the landmarks of the generated video and the ground-truth one after normalization. It is worth noting that the audiobook dataset, test set C, does not have corresponding ground-truth videos, so LMD is only calculated on test sets A and B. The results are presented in Table 1, where it shows that DAE-Talker outperforms all other methods with the lowest LSE-D and LMD and the highest LSE-C scores in most cases. Therefore, DAE-Talker has the best lip-sync performance among all the methods. As we mentioned earlier, talking face generation is a one-tomany problem. For example, when speaking a sentence, there are multiple possible poses that can be considered correct. It is not reasonable to evaluate the pose naturalness and video fidelity by calculating the discrepancy between the generated and the groundtruth video. Therefore, we conduct user studies to evaluate lip-sync, video fidelity, and head movement naturalness. 15 participants rate each video on the above three aspects using a scale of 1 to 10, where a higher score indicates better results. The average scores are presented in Figure 4. We found that the poses in Wav2Lip and AD-NeRF, which are cloned from a video template, have better naturalness than MakeItTalk, which only warps a static image. However, these poses could still be inconsistent with the speech content and are inferior to those of DAE-Talker. Video snapshots of DAE-Talker 4https://github.com/joonson/syncnet \fDAE-Talker: High Fidelity Speech-Driven Talking Face Generation with Diffusion Autoencoder MM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada Table 2: Evaluations on lip-sync for ablation study. Model LSE-D \u2193 LSE-C \u2191 LMD \u2193 Test set A Test set B Test set C Test set A Test set B Test set C Test set A Test set B DAE-Talker 7.25 6.98 7.63 8.05 9.25 8.12 1.55 2.25 w/o shared \ud835\udc65\ud835\udc47 7.40 6.98 7.63 7.99 9.25 8.14 1.55 2.25 w/o data aug 7.82 7.55 8.17 7.42 8.61 7.47 1.88 2.33 and the baseline methods are presented in Figure 3. Moreover, ADNeRF models the head and torso separately and has better video fidelity than MakeItTalk and Wav2Lip in the head part. However, the torso part of AD-NeRF sometimes appears blurry, which is also mentioned in its paper. In contrast, DAE-Talker generates the full video frames simultaneously with the DDIM image decoder and achieves the highest scores in video fidelity. It also outperforms other methods in terms of lip-sync, which is consistent with the results of objective evaluations. 75.6% 24.4% 60.9% 39.1% DAE-Talker DAE-Talker w/o shared \ud835\udc65\ud835\udc47 w/o data aug Figure 5: AB preference test for ablation study. 4.3 Ablation study We propose to generate continuous video frames with the DDIM image decoder from a fixed noise across different frames. Also, in order to model the long-term context with limited training data, we introduce pseudo-sentence sampling for data augmentation. In this section, we perform ablation studies on the two techniques. Ablation experiment 1: w/o shared \ud835\udc65\ud835\udc47. We use the same model as DAE-Talker but sample different noise \ud835\udc65\ud835\udc47from N (0, I) for generating different frames with the DDIM. Ablation experiment 2: w/o data aug. We train a new speech2latent model without data augmentation with pseudo-sentence sampling, where we only cut the speech into 20 seconds segments at fixed timestamps. It can be observed that discontinuous frames and jiggling talking face often occur in the ablation experiment 1, while the lip-sync performance is similar to DAE-Talker. In the ablation experiment 2, the generated video sometimes has unnatural head movement. The lip-sync performance also degrades compared with DAE-Talker. Here, we present the results of our evaluation of two ablation experiments in Table 2 using the same objective metrics as in the previous section. The results indicate that experiment 1 has lip-sync performance similar to that of DAE-Talker, while experiment 2 is slightly worse. In addition, we conducted an AB preference test, where 15 raters are asked to choose between the videos generated by the ablation experiments and that of DAE-Talker based on video fidelity for experiment 1 and pose naturalness for experiment 2. The results are shown in Figure 5, indicating that both ablation approaches lead to worse performance compared to DAE-Talker. These results are in line with our preliminary observations. 4.4 Analysis and discussion 4.4.1 Reconstruction quality from intermediate representation. DAETalker leverages the diffusion autoencoder to provide latent representations of video frames. In this section, we analyze the performance of reconstructing from the latent representation. Our DAE is trained on the 12 minutes video of Obama\u2019s address and is evaluated on the test set A. The noise \ud835\udc65\ud835\udc47is randomly sampled from N (0, I) and the number of steps for denoising is set to\ud835\udc47= 100. The metrics for the evaluation are the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) [29] and the learned perceptual image patch similarity (LPIPS) [33]. The baseline in this experiment is the image generator in MakeItTalk that reconstructs the image from the facial landmarks. We use the pretrained model released by the authors in testing. Table 3: Reconstruction quality from intermediate representation. \u201cInter. Repr\u201d is the abbreviation of intermediate representation. Model Inter. Repr. PSNR SSIM LPIPS MakeItTalk Landmark 20.070 0.692 0.171 DAE-Talker DAE latent 33.702 0.965 0.012 Table 3 presents the results of our study, which demonstrate that the image reconstruction from diffusion latent outperforms the landmark-based method. 4.4.2 Alleviating one-to-many mapping with pose modelling. In Section 3.2.3, we discuss the challenge of the one-to-many mapping from speech to latent, as multiple videos can correspond to the same speech input. To address this issue, we incorporate head pose modeling into our speech2latent model, conditioning the latent prediction on the head pose. In this section, we investigate the necessity of pose modeling by comparing the performance of our speech2latent model with and without the pose adaptor. Specifically, we train the models and plot their mean square error curves for latent prediction during training on the test set A. Figure 6 displays the results. We can find that the mean square error of latent prediction decreases much faster and achieves a lower value when using the pose adaptor, which shows the necessity of pose modelling for alleviating the one-to-many mapping issue. 4.4.3 Pose controllability. DAE-Talker leverages head poses to condition the latent prediction, allowing us to utilize either the natural \fMM \u201923, October 29-November 3, 2023, Ottawa, ON, Canada Chenpeng Du et al. Figure 6: The curves of latent prediction loss during training. head movements predicted from the input speech or manually specified head poses. In this section, we assess the controllability of DAE-Talker in terms of head poses. Specifically, we synthesize lip-sync videos based on the speech data in three test sets. In the first experiment, we generate talking faces with natural head poses predicted from the input speech. In the second experiment, we manually specify three different head poses, namely towards left, right, and straight ahead. We then measure the mean square distance between the desired specified head poses and the poses extracted from the generated videos. The results are presented in Table 4. (a) Natural head movement predicted from speech. (b) Heading straight forward. (c) Heading towards the left. (d) Heading towards the right. Figure 7: Samples of pose controlled synthesis. Table 4: The mean square distance between the specified head poses and the poses extracted from the generated video. Pose Condition Test set A Test set B Test set C Natural 113.93 90.84 76.94 Specified 7.19 7.11 6.00 The prediction of head poses from speech is a one-to-many mapping, which can result in a large distance between the predicted natural head poses and the specified head poses. However, the use of specified head poses can significantly reduces this distance, demonstrating the effectiveness of pose controllability. Several snapshots of the generated videos are shown in Figure 7, showing the generated results with controlled head poses. 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/2204.00768v3",
+ "title": "VQTTS: High-Fidelity Text-to-Speech Synthesis with Self-Supervised VQ Acoustic Feature",
+ "abstract": "The mainstream neural text-to-speech(TTS) pipeline is a cascade system,\nincluding an acoustic model(AM) that predicts acoustic feature from the input\ntranscript and a vocoder that generates waveform according to the given\nacoustic feature. However, the acoustic feature in current TTS systems is\ntypically mel-spectrogram, which is highly correlated along both time and\nfrequency axes in a complicated way, leading to a great difficulty for the AM\nto predict. Although high-fidelity audio can be generated by recent neural\nvocoders from ground-truth(GT) mel-spectrogram, the gap between the GT and the\npredicted mel-spectrogram from AM degrades the performance of the entire TTS\nsystem. In this work, we propose VQTTS, consisting of an AM txt2vec and a\nvocoder vec2wav, which uses self-supervised vector-quantized(VQ) acoustic\nfeature rather than mel-spectrogram. We redesign both the AM and the vocoder\naccordingly. In particular, txt2vec basically becomes a classification model\ninstead of a traditional regression model while vec2wav uses an additional\nfeature encoder before HifiGAN generator for smoothing the discontinuous\nquantized feature. Our experiments show that vec2wav achieves better\nreconstruction performance than HifiGAN when using self-supervised VQ acoustic\nfeature. Moreover, our entire TTS system VQTTS achieves state-of-the-art\nperformance in terms of naturalness among all current publicly available TTS\nsystems.",
+ "authors": "Chenpeng Du, Yiwei Guo, Xie Chen, Kai Yu",
+ "published": "2022-04-02",
+ "updated": "2022-06-30",
+ "primary_cat": "eess.AS",
+ "cats": [
+ "eess.AS",
+ "cs.SD"
+ ],
+ "main_content": "Introduction Text-to-speech (TTS) synthesis is a process that transforms a transcript into its corresponding speech. Compared with traditional statistical parametric speech synthesis [1], neural TTS model [2, 3] based on deep neural network shows a better performance. The mainstream neural text-to-speech(TTS) pipeline is a cascade system, including an acoustic model(AM) that predicts acoustic feature from the input transcript and a vocoder that generates waveform according to the given acoustic feature. Two well-known AMs are Tacotron 2 [4] based on encoderattention-decoder architecture and FastSpeech 2 [5] based on Transformer blocks. As for the vocoders, generative adversarial network (GAN) [6] based vocoders, such as multi-band MelGAN [7] and Hi\ufb01GAN [8], are widely used for their high quality of speech and fast generation speed. Another important type of vocoders is neural source-\ufb01lter model [9, 10] based on the mechanism of human voice production. However, the acoustic feature in current models is typically mel-spectrogram, which is highly correlated along both time and frequency axes in a complicated way, leading to a great * Corresponding author. dif\ufb01culty for the AM to predict. Although high-\ufb01delity audio can be generated by the neural vocoders from ground-truth(GT) mel-spectrogram, the gap between the GT mel-spectrogram and the predicted one from AM degrades the performance of the entire TTS system. Generally, there are two approaches to alleviate this problem. The \ufb01rst one is to provide stronger conditions for the AM in addition to the input transcript, such as prosody and linguistic feature. In this way, the AM could be more certain in acoustic feature prediction and often provides a better speech quality. The prosodies in different granularities have been provided to TTS in the literature. For example, [11, 12, 13] uses phoneme-level prosodies, [14] uses word-level prosodies and [15, 16] uses hierarchical prosodies. The effectiveness of VQVAE is also explored in the related literature [17, 18]. In addition, [19, 20] introduces syntactic graph and [21, 22] introduces word embeddings to TTS models. Another approach to address the problem is to exploit better training criterion. The most common training criterion for the AM is the L1 or L2 loss, which assumes the distribution of the acoustic feature is unimodal. However, the real distribution is much more complicated. Hence, some research uses normalizing \ufb02ow [23] in AM, such as FlowTTS [24] and GlowTTS [25]. The normalizing \ufb02ow transforms the data distribution into a known simple distribution and is optimized via maximum log likelihood. However, the \ufb02ow models should be carefully designed to ensure invertibility, which greatly restricts the capabilities of such models. In this work, we propose VQTTS, consisting of an AM txt2vec and a vocoder vec2wav, which uses selfsupervised vector-quantized(VQ) acoustic feature rather than mel-spectrogram. In particular, txt2vec basically becomes a classi\ufb01cation model rather than a traditional regression model while vec2wav uses an additional feature encoder before Hi\ufb01GAN generator for smoothing the discontinuous quantized feature. Instead of predicting the complicated mel-spectrogram which is highly correlated along both time and frequency axes, txt2vec only needs to consider the correlation along time axis in feature prediction, which narrows the gap between GT and predicted acoustic feature dramatically. Our experiments show that vec2wav achieves better reconstruction performance than Hi\ufb01GAN when using self-supervised VQ acoustic feature. Moreover, our entire TTS system VQTTS achieves state-of-the-art performance in terms of naturalness among all current publicly available TTS systems. 2. Self-Supervised VQ Acoustic Feature Recently, the acoustic features extracted by deep neural networks have been found to be superior to traditional acoustic arXiv:2204.00768v3 [eess.AS] 30 Jun 2022 \f(a) txt2vec (b) vec2wav Figure 1: Model architecture of VQTTS, consisting of txt2vec and vec2wav. The two parts are connected with VQ acoustic feature together with prosody feature. features in automatic speech recognition (ASR). These types of neural networks are typically trained with only speech data in a self-supervised manner. They take the raw speech waveform X as input and generate the features Z that represents the characteristics of the speech segments. For example, wav2vec [26] trains a multi-layer convolutional network optimized via a contrastive loss. Speci\ufb01cally, it tries to extract the features where we can predict several successive frames from the current and several previous frames. Later, vector quantization is applied to self-supervised feature extraction. Vq-wav2vec [27] quantizes the acoustic feature Z to VQ acoustic feature \u02c6 Z with gumbel-softmax or k-means. Then \u02c6 Z is used to train a BERT [28] model for ASR initialization. In avoid of mode collapse where only a little amount of vectors in the codebook are actually used, vq-wav2vec divides the dimension of \u02c6 Z into 2 groups and quantizes them separately. Based on that, wav2vec 2.0 [29] jointly trains the feature extractor and the BERT via a contrastive loss and a diversity loss that encourages the model to utilize more vectors in the codebook. HuBERT [30] introduces another quantization strategy. Instead of training the codebook jointly with the feature extractor, HuBERT clusters the features with k-means in advance. In addition to the ASR task, self-supervised VQ acoustic feature has been also proven to be effective in a range of other tasks, such as voice conversion [31, 32], speech translation [33] and speech separation [34]. In this paper, we further investigate the use of VQ acoustic feature in the TTS task. 3. VQTTS VQTTS uses self-supervised VQ acoustic feature for speech synthesis. It has been found in the literature [32, 31] that waveform reconstruction from VQ acoustic feature needs additional prosody feature. Therefore, in this work, we use 3 dimensional prosody feature, including log pitch, energy and probability of voice(POV)[35]. The prosody features are then normalized to zero means and unit variances. For simplicity, we abbreviate the combination of the VQ acoustic feature and the 3 dimensional prosody feature to VQ&pros in the following sections. VQTTS contains two parts, the acoustic model txt2vec which predicts VQ&pros from input phoneme sequence and the vocoder vec2wav which generates waveform from VQ&pros. Here we elaborate the two parts in the following two subsections respectively. 3.1. txt2vec 3.1.1. Model architecture Before training txt2vec, we label the phoneme-level(PL) prosodies for all phonemes in advance, which will be detailed in the next subsection. Here we \ufb01rst demonstrate the overall model architecture of txt2vec in Figure 1(a). The text encoder consists of 6 Conformer blocks [36], which encodes the input phonemes into hidden states h. The hidden states are then sent to a PL prosody controller which predicts the PL prosody labels and a duration predictor that predicts the duration for each phoneme. After that, we repeat the hidden states according to the corresponding phoneme durations as in FastSpeech 2. The decoder is composed of 3 Conformer blocks and its output is passed through an LSTM layer followed by a Softmax activation function for VQ acoustic feature classi\ufb01cation. Then the decoder output and the VQ acoustic feature are further concatenated and passed into 4 convolution layers, each followed by layer normalization and a dropout layer, for prosody feature prediction. The phoneme duration and prosody feature are trained with L2 and L1 loss respectively while the PL prosody label and the VQ acoustic feature are trained with cross entropy loss. The overall training criterion is Ltxt2vec = LPL lab + Ldur + LVQ + Lpros. (1) 3.1.2. Phoneme-level prosody labelling As mentioned earlier, we have 3 dimensional normalized prosody features p and then we calculate their dynamic features \u2206p and \u22062p. The total 9 dimensional prosody features [p, \u2206p, \u22062p] are averaged over the frames within each \fphoneme, so that we can represent the prosody of each phoneme with one vector. Then we cluster all the PL prosody representations into n classes with k-means and take the cluster index as the PL prosody label. Figure 2: The detail of phoneme-level prosody controller. The PL prosodies are quantized with the k-means. The architecture of the PL prosody controller is illustrated in Figure 2, which is trained to predict the PL prosody labels from the text encoder output h with an LSTM. Then the quantized PL prosodies, i.e. the centers of the corresponding kmeans clusters, are then projected and added to h for controlling the following acoustic feature generation. Note that we use GT quantized PL prosodies in training and predicted ones in inference. 3.1.3. Beam search decoding There are two LSTMs in txt2vec, which are used for the autoregressive predictions of PL prosody label and VQ acoustic feature respectively. During training, both the LSTMs are conditioned on their inputs and the ground-truth previous outputs. During inference, we apply beam search decoding. In particular, the decoding starts with an all-zero vector . We denote the beam size as k here. At each decoding step, we consider the top k classes for all current hypotheses and take the results with the top k probabilities as the new k hypotheses. Compared with greedy search decoding that always selects the best result at each step based on the history, beam search decoding considers both the history and the future. 3.2. vec2wav 3.2.1. Model architecture The model architecture of vec2wav is demonstrated in Figure 1(b). Both the VQ acoustic feature and the prosody feature are transformed with a convolution layer whose channels are 92 and 32 respectively and the kernel sizes are 5. The two outputs are then concatenated and passed to a convolution layer, a feature encoder, and a Hi\ufb01GAN generator successively. The feature encoder here is designed for smoothing the discontinuous quantized acoustic feature. It contains 4 Conformer blocks, each of which uses 2 attention heads and 384 dimensional selfattention. The output of the Hi\ufb01GAN generator is the corresponding waveform. The training criterion for Hi\ufb01GAN is used for optimizing the vec2wav model. 3.2.2. Multi-task warmup We \ufb01nd that vec2wav is hard to converge when we train the model from scratch with only Hi\ufb01GAN loss. Therefore, we propose a multi-task warmup trick, which additionally uses a linear projection layer to predict the mel-spectrogram from the feature encoder output. Formally, we can write the training criterion during the warmup as Lvec2wav = LHifiGAN + \u03b1Lmel. (2) After the warmup, we remove the mel-spectrogram prediction task, which means the \u03b1 is set to 0. 4. Experiments and Results 4.1. Experimental setup We use LJSpeech [37] dataset in our experiments, which is an English dataset containing about 24 hours speech recorded by a female speaker. We leave out 100 utterances for validation and 150 utterances for testing. All the speech data in this work is resampled to 16kHz for simplicity. We use a publicly available pretrained k-means-based vq-wav2vec model1 for VQ acoustic feature extraction. The frame shift of vq-wav2vec is 10ms and the number of possible VQ acoustic vectors is 21.5k. The 3dimensional prosody feature is extracted by Kaldi [35]. Audio samples are available online2. 4.2. Speech reconstruction with vocoders We train both the vec2wav and Hi\ufb01GAN on the training set with VQ&pros using Adam optimizer for 1 million steps. In vec2wav training, we set \u03b1 to 60 at the \ufb01rst 200k training steps for warmup. A Hi\ufb01GAN with mel-spectrogram is also trained for comparison. Then we evaluate the performance of speech reconstruction on the test set given GT acoustic feature in both subjective and objective ways. In particular, we perform a mean opinion score(MOS) listening test where 15 listeners are asked to rate each utterance from 1 to 5 in terms of speech quality. Each listener is presented with 20 utterances randomly selected from the test set. For objective evaluations, we compute PESQ [38] which measures the similarity between synthetic speech and the corresponding recordings. We also analyze the Gross Pitch Error (GPE) [39] which calculates the proportion of frames whose pitch discrepancy in the recording and synthetic speech is less than 20% among voiced frames. The results are listed in Table 1. Table 1: Speech reconstruction performance of vocoders on the test set. Feature Method MOS PESQ GPE(%) Recording 4.86\u00b10.04 Mel Hi\ufb01GAN 4.68\u00b10.04 3.60 0.79 VQ&pros Hi\ufb01GAN 4.53\u00b10.06 2.38 0.98 VQ&pros vec2wav 4.79\u00b10.06 2.54 0.76 In the objective evaluations, it can be found that vec2wav can better reconstruct the recordings than Hi\ufb01GAN with VQ&pros. Also, we can see that the PESQ value of vec2wav is worse than that of the Hi\ufb01GAN with mel-spectrogram. This is largely due to the information loss brought by quantization. However, a closer reconstruction does not indicate a better speech quality. Actually, the differences between the 1https://github.com/pytorch/fairseq/tree/ main/examples/wav2vec. 2https://cpdu.github.io/vqtts. \fgenerated speech from vec2wav and the Hi\ufb01GAN with melspectrogram are almost imperceptible. In the subjective listening test, vec2wav performs better than the Hi\ufb01GAN with VQ&pros and achieves comparable quality to the Hi\ufb01GAN with mel-spectrogram. As for the Hi\ufb01GAN with VQ&pros, we can sometimes hear some undesired artifacts, which could be caused by the discontinuous quantized input feature. 4.3. Naturalness of text-to-speech synthesis We train the entire text-to-speech system VQTTS in which txt2vec is optimized with Adam optimizer for 1000 epochs. The number of PL prosody clusters n is set to 128 and the beam sizes in beam search decoding are set to 5 and 10 in PL prosody and VQ acoustic feature prediction respectively. Then we compare VQTTS with other current popular methods, including Tacotron 2, GlowTTS, FastSpeech 2 and the fully end-to-end TTS model VITS [40]. In the \ufb01rst three baseline systems, we use 80 dimensional mel-spectrogram as the acoustic feature and Hi\ufb01GAN as the vocoder. Again, MOS listening test is conducted in the same way as the previous section for evaluating the naturalness of the synthetic speech. The results are shown in Table 2 with 95% con\ufb01dence interval. Table 2: Evaluation for the text-to-speech synthesis systems. Setup Feature MOS Recording 4.86\u00b10.04 GT Mel + Hi\ufb01GAN Mel 4.68\u00b10.04 GT VQ&pros + vec2wav VQ&pros 4.79\u00b10.06 Tacotron 2 + Hi\ufb01GAN Mel 3.67\u00b10.05 GlowTTS + Hi\ufb01GAN Mel 3.72\u00b10.05 FastSpeech 2 + Hi\ufb01GAN Mel 3.79\u00b10.05 VITS 4.62\u00b10.04 VQTTS (txt2vec + vec2wav) VQ&pros 4.71\u00b10.05 As is expected, quality degradation can be observed in all the cascade baseline TTS systems compared with speech reconstruction from GT mel-spectrogram. Although the fully end-toend model VITS has a similar quality to VQTTS, it sometimes suffers from unnatural prosodies. The proposed VQTTS, however, generates high-\ufb01delity and natural speech and shows little quality degradation compared with speech reconstruction from GT VQ&pros. Moreover, VQTTS is a cascade TTS system, which is more \ufb02exible than the fully end-to-end TTS system VITS. 4.4. Prosody Diversity in PL prosody hypotheses Text-to-speech is a one-to-many mapping, since diverse prosodies are contained in speech in addition to the transcripts. VQTTS models the diversities with the PL prosody controller, which enables us to control the speech synthesis with different PL prosody hypotheses in beam search. Here we synthesize a sentence in the test set with 3 different prosody hypotheses and demonstrate their pitch tracks in Figure 3 where obviously we can see their differences. 4.5. Decoding algorithm We explore the effectiveness of beam search decoding for both PL prosody label and VQ acoustic feature predictions. To this end, we exploit greedy search and beam search with a beam size of 5 and 10 in the two tasks separately. Here, VQ acoustic feature prediction is conditioned on the GT durations and PL Figure 3: Pitch tracks of the synthetic speech with different prosodies. prosody labels in order to make sure that the predicted feature is precisely aligned with the GT feature so that we can calculate the prediction accuracy. The results are presented in Table 3 and 4. We can \ufb01nd that the accuracy in all setups is not so high because we have demonstrated the diversity of speech in the previous section. Despite that, the accuracy of beam search decoding is still slightly better than greedy search in both inference tasks. Also, the beam size of 5 is better in PL prosody label prediction while the size of 10 is better in VQ acoustic feature prediction. Table 3: Prediction accuracy for phone-level prosody label. Decoding Algorithm Accuracy(%) greedy search 12.12 beam search (beam size = 5) 12.60 beam search (beam size = 10) 12.46 Table 4: Prediction accuracy for VQ acoustic feature. Decoding Algorithm Accuracy(%) greedy search 13.96 beam search (beam size = 5) 14.09 beam search (beam size = 10) 14.20 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/2102.00851v3",
+ "title": "Rich Prosody Diversity Modelling with Phone-level Mixture Density Network",
+ "abstract": "Generating natural speech with diverse and smooth prosody pattern is a\nchallenging task. Although random sampling with phone-level prosody\ndistribution has been investigated to generate different prosody patterns, the\ndiversity of the generated speech is still very limited and far from what can\nbe achieved by human. This is largely due to the use of uni-modal distribution,\nsuch as single Gaussian, in the prior works of phone-level prosody modelling.\nIn this work, we propose a novel approach that models phone-level prosodies\nwith GMM based mixture density network (GMM-MDN). Experiments on the LJSpeech\ndataset demonstrate that phone-level prosodies can precisely control the\nsynthetic speech and GMM-MDN can generate more natural and smooth prosody\npattern than a single Gaussian. Subjective evaluations further show that the\nproposed approach not only achieves better naturalness, but also significantly\nimproves the prosody diversity in synthetic speech without the need of manual\ncontrol.",
+ "authors": "Chenpeng Du, Kai Yu",
+ "published": "2021-02-01",
+ "updated": "2023-10-01",
+ "primary_cat": "cs.SD",
+ "cats": [
+ "cs.SD"
+ ],
+ "main_content": "Introduction Neural text-to-speech(TTS) synthesis models with sequence-tosequence architecture [1, 2, 3] can be applied to generate naturally sounding speech. Recently, non-autoregressive TTS models such as FastSpeech [4] and FastSpeech2 [5] are proposed for fast generation speed without frame-by-frame generation. Besides the progress of acoustic modelling, prosody modelling is also widely investigated. Utterance level prosody modelling in TTS is proposed in [6], in which an utterancelevel prosody embedding is extracted from a reference speech for controlling the prosody of TTS output. [7] factorizes the prosody embedding with several global style tokens(GST). Variational auto-encoder(VAE) is used for prosody modelling in [8], which enables us to sample various prosody embeddings from the standard Gaussian prior in inference stage. Utterance-level prosody is hard to precisely control the synthetic speech due to its coarse granularity. Therefore, phonelevel prosody modelling is also analyzed in recent works. [9] extracts prosody information from acoustic features and uses an attention module to align them with each phoneme encoding. [10] directly models phone-level prosody with a VAE, thus improving the stability compared with [9]. Hierarchical and quantized versions of VAE for phone-level prosody modelling is also investigated in [11, 12, 13, 14], which improves the interpretability and naturalness in synthetic speech. However, most the prior works for phone-level prosody modelling assumes that the distribution of phone-level prosodies is uni-modal distribution, such as a single Gaussian, which is not reasonable enough and hence can only provide limited diversity. Actually, phone-level prosodies are highly diverse even for the same context, hence it is natural to apply multi-modal distribution. In traditional ASR systems, one of the most dominant techniques is HMM-GMM [15, 16, 17], in which the distribution of acoustic features for each HMM state is modeled with a GMM. Similarly, GMM is also used to model acoustic features in traditional statistical parametric speech synthesis(SPSS) [18, 19], thus improving the voice quality. Inspired by these prior works, we propose a novel approach that models phone-level prosodies with GMM based mixture density network (GMM-MDN) [20]. We use a prosody extractor to extract phone-level prosody embeddings from groundtruth mel-spectrograms and use a prosody predictor as the MDN to predict the GMM distribution of the embeddings. In inference stage, the prosody of each phoneme is randomly sampled from the predicted GMM distribution for generating speech with diverse prosodies. Our experiments on the LJSpeech [21] dataset demonstrate that phone-level prosodies can precisely control the synthetic speech and GMM-MDN can generate more natural and smooth prosody pattern than a single Gaussian. The subjective evaluations suggest that our method not only achieves a better naturalness, but also significantly improves the prosody diversity in synthetic speech without the need of manual control. In the rest of this paper, we first review the MDN in Section 2 and introduce the proposed model in Section 3. Section 4 gives experiments comparison and results analysis, and Section 5 concludes the paper. 2. Mixture Density Network Mixture density network [20] is defined as the combined structure of a neural network and a mixture model. We focus on GMM based MDN in this work to predict the parameters of the GMM distribution, including the means \u00b5i, variances \u03c32 i , and mixture weights wi. It should be noted that the sum of the mixture weights is constrained to 1, which can be achieved by applying a Softmax function, formalized as wi = exp (\u03b1i) PM j=1 exp (\u03b1j) (1) where M is the number of Gaussian components and \u03b1i is the corresponding neural network output. The mean and variance of Gaussian components are presented as \u00b5i = mi (2) \u03c32 i = exp (vi) (3) where mi and vi are the neural network outputs corresponding to the mean and variance of the i-th Gaussian component. Equation (3) constrains the \u03c32 i to be positive. arXiv:2102.00851v3 [cs.SD] 1 Oct 2023 \f(a) Overal architecture based on FastSpeech2 (b) Prosody extractor (c) Prosody predictor Figure 1: GMM-based phone-level prosody modelling. \u201cSG\u201d represents the stop gradient operation. \u201cOR\u201d selects the extracted \u201cground-truth\u201d e in training and the sampled \u02c6 e in inference. We use red lines for loss calculation and dash lines for sampling. The criterion for training the MDN is the negative loglikelihood of the observation y given its input x. Here we can formulate the loss function as LMDN = \u2212log p (y; x) = \u2212log M X i=1 wi \u00b7 N \u0000y; \u00b5i, \u03c32 i \u0001 ! (4) Therefore, given the input x, the mixture density network is optimized to predict GMM parameters wi, \u00b5i and \u03c3i that maximize the likelihood of y. 3. GMM-MDN for Phone-Level Prosody Modelling The TTS model in this paper is based on the recent proposed FastSpeech2[5], where the input phoneme sequence is first converted into a hidden state sequence h by the encoder and then passed through a variance adaptor and a decoder for predicting the output mel-spectrogram. Compared with the original FastSpeech[4], FastSpeech2 is optimized to minimize the mean square error(MSE) LMEL between the predicted and the groundtruth mel-spectrograms, instead of applying a teacher-student training. Moreover, the duration target is not extracted from the attention map of an autoregressive teacher model, but from the forced alignment of speech and text. Additionally, [5] condition the prediction of mel-spectrogram on the variance information such as pitch and energy with a variance adaptor. The adaptor is trained to predict the variance information with an MSE loss LVAR. In this work, we propose a novel approach that models phone-level prosodies with a GMM-MDN in the FastSpeech2based TTS system. Accordingly, we introduce a prosody extractor and a prosody predictor, whose architecture and training strategy are described below. 3.1. Overall architecture The overall architecure of the proposed system is shown in Figure 1(a). In the training stage, the prosody embeddings e = [e1, e2, ..., eK] (5) are extracted for all the K phonemes by the prosody extractor from the corresponding mel-spectrogram segment. It is then projected and added to the corresponding hidden state sequence h in order to better reconstruct the mel-spectrogram. We use ek to represent the prosody embedding for the k-th phoneme. The distribution of ek is assumed to be a GMM whose parameters are predicted by an MDN. Here, the GMM-MDN is the prosody predictor. The prosody predictor autoregressively predicts the GMM distributions of the prosody embeddings. In inference stage, we sample the \u02c6 ek from the predicted distribution for each phoneme, so that we can generate speech with diverse prosodies. 3.2. Prosody extractor The detailed architecture of the prosody extractor is shown in Figure 1(b). It contains 2 layers of 2D convolution, each followed by a batch normalization layer and a ReLU activation function. A bidirectional GRU is designed after the above modules. The concatenated forward and backward states from the GRU layer is the output of the prosody extractor, which is referred to as the prosody embedding of the phoneme. 3.3. Prosody predictor The GMM-MDN in this work is the prosody predictor whose detailed architecture is demonstrated in Figure 1(c). The hidden state h of the input phoneme sequence is passed through 2 layers of 1D convolution, each followed by a ReLU, layer normalization and dropout layer. The output of the above modules is then concatenated with the previous prosody embedding and \f(a) LJSpeech training set (b) LJSpeech test set Figure 2: Log-likelihood curves of the extracted phone-level prosodies with different numbers of Gaussian components on LJSpeech and LibriTTS. sent to a GRU. The GRU is designed to condition the prediction of the current prosody distribution on the previous prosodies. Then we project the GRU output to obtain wk,i, mk,i and vk,i, which is then transformed to the GMM parameters according to Equation (1) (3). Equation (4) formulates the training criterion for an MDN, which is the negative log-likelihood of the observations. Here, the observations are the prosody embeddings e, so we obtain the loss function for training the prosody predictor LPP = K X k=1 \u2212log p (ek; e 4. To strike the balance of performance and parameters, T = 4 is a good choice. 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/2103.04584v1",
+ "title": "Deep Gradient Projection Networks for Pan-sharpening",
+ "abstract": "Pan-sharpening is an important technique for remote sensing imaging systems\nto obtain high resolution multispectral images. Recently, deep learning has\nbecome the most popular tool for pan-sharpening. This paper develops a\nmodel-based deep pan-sharpening approach. Specifically, two optimization\nproblems regularized by the deep prior are formulated, and they are separately\nresponsible for the generative models for panchromatic images and low\nresolution multispectral images. Then, the two problems are solved by a\ngradient projection algorithm, and the iterative steps are generalized into two\nnetwork blocks. By alternatively stacking the two blocks, a novel network,\ncalled gradient projection based pan-sharpening neural network, is constructed.\nThe experimental results on different kinds of satellite datasets demonstrate\nthat the new network outperforms state-of-the-art methods both visually and\nquantitatively. The codes are available at https://github.com/xsxjtu/GPPNN.",
+ "authors": "Shuang Xu, Jiangshe Zhang, Zixiang Zhao, Kai Sun, Junmin Liu, Chunxia Zhang",
+ "published": "2021-03-08",
+ "updated": "2021-03-08",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "eess.IV"
+ ],
+ "main_content": "Introduction Multispectral images store multiple images corresponding to each band (or say, channel) in an optical spectrum, and they are widely utilized in literature of remote sensing. With the limitation of imaging devices, satellites however often measure the low spatial resolution multispectral (LRMS) images [4, 21, 29]. Compared with the multispectral image, the panchromatic (PAN) image is characterized by the high spatial resolution but only one band. Lots of satellites carry both multispectral and panchromatic sensors to simultaneously measure the complementary images, such as Landsat8, GaoFen2 and QuickBird. To obtain the high resolution multispectral (HRMS) image, a promising way is to fuse the complementary information of the LRMS image and the PAN image. This technique is called as pansharpening [4, 21]. Pan-sharpening can be cast as a typical image fusion on *Corresponding author. (c) GPPNN Blurring&Downsampling + \ud835\udc6b \ud835\udc72 HRMS Band Reduction PAN LRMS \ud835\udc7a \ud835\udc6f\u2208\ud835\udc45\ud835\udc40\ud835\udc41\u00d7\ud835\udc35 \ud835\udc77\u2208\ud835\udc45\ud835\udc40\ud835\udc41\u00d7\ud835\udc4f \ud835\udc73\u2208\ud835\udc45\ud835\udc5a\ud835\udc5b\u00d7\ud835\udc35 (a) Observation model PAN Block MS Block \ud835\udc73= \ud835\udc6b\ud835\udc72\ud835\udc6f \ud835\udc77= \ud835\udc6f\ud835\udc7a min \ud835\udc6f \ud835\udc77\u2212\ud835\udc6f\ud835\udc7a\ud835\udfd0 \ud835\udfd0+ DeepPrior(\ud835\udc6f) min \ud835\udc6f \ud835\udc73\u2212\ud835\udc6b\ud835\udc72\ud835\udc6f\ud835\udfd0 \ud835\udfd0+ DeepPrior(\ud835\udc6f) \u2022 \u0de1 \ud835\udc77\ud835\udc61= \ud835\udc6f\ud835\udc61\u22121 \ud835\udc7a \u2022 \ud835\udc79\ud835\udc5d \ud835\udc61= \ud835\udc77\u2212\u0de1 \ud835\udc77\ud835\udc61 \u2022 \ud835\udc79\u210e \ud835\udc61= \ud835\udf0c\ud835\udc79\ud835\udc5d \ud835\udc61\ud835\udc7a\ud835\udc47 \u2022 \ud835\udc6f\ud835\udc61= prox\u210e\ud835\udc5d\ud835\udc6f\ud835\udc61\u22121 + \ud835\udc79\u210e \ud835\udc61 \u2022 \u0de0 \ud835\udc73\ud835\udc61= \ud835\udc6b\ud835\udc72\ud835\udc6f\ud835\udc61\u22121 \u2022 \ud835\udc79\ud835\udc59 \ud835\udc61= \ud835\udc73\u2212\u0de0 \ud835\udc73\ud835\udc61 \u2022 \ud835\udc79\u210e \ud835\udc61= \ud835\udf0c\ud835\udc6b\ud835\udc72\ud835\udc47\ud835\udc79\ud835\udc59 \ud835\udc61 \u2022 \ud835\udc6f\ud835\udc61= prox\u210e\ud835\udc59\ud835\udc6f\ud835\udc61\u22121 + \ud835\udc79\u210e \ud835\udc61 (b) Optimization \u00a9 Convolutional Operator \u229dElement-wise subtraction \u2a01Element-wise addition Input \ud835\udc6f\ud835\udc61\u22121 \u0de0 \ud835\udc73\ud835\udc61\u22121 \ud835\udc73 \ud835\udc79\ud835\udc59 \ud835\udc61 \u2a01 \u229d \ud835\udc79\u210e \ud835\udc61 \u00a9 \u00a9 \u00a9 Output \ud835\udc6f\ud835\udc61 \u2a01 \u0de1 \ud835\udc77\ud835\udc61\u22121 \ud835\udc79\ud835\udc5d \ud835\udc61 \ud835\udc79\u210e \ud835\udc61 \ud835\udc77 \u229d \u00a9 \u00a9 \u00a9 Input \ud835\udc6f\ud835\udc61\u22121 Output \ud835\udc6f\ud835\udc61 Figure 1. (a) The observation models for LRMS and PAN images. (b) Two formulated optimization problems and iterative steps of the gradient projection algorithm. (c) The main blocks in our proposed GPPNN. super-resolution problems. The past decades witnessed the development of pan-sharpening. The classic algorithms including component substitution (CS) [7, 12], multiresolution analysis (MRA) [1, 23] and other techniques. In the era of deep learning, convolutional neural networks have emerged as a signi\ufb01cant tool for pan-sharpening [19]. One of the seminal work is the pan-sharpening neural network (PNN) proposed by Masi et al. [20]. Borrowing the idea of the \ufb01rst super-resolution network [2], PNN is fed with the concatenation of a PAN image and an upsampled LRMS 1 arXiv:2103.04584v1 [cs.CV] 8 Mar 2021 \fimage to regress the HRMS image. In fact, there are only three convolutional units in PNN, so it is a relatively shallow network. Recently, numerous models have been proposed to improve the PNN. Owing to the triumphs of residual networks [8], several papers utilize the shortcut or residual convolutional units to build deep networks, including MIPSM [16], DRPNN [34] and PanNet [38]. They generally contain 10 or more convolutional units. Besides these networks, to make the best of advantages of neural networks, some researchers build deeper networks. For example, Wang et al. employ the densely connected convolutional unit [10] to design a 44-layer network [31] for pan-sharpening. It is well-known that deepening the layers of networks does not necessarily improve the performance, since it is dif\ufb01cult to train deeper networks and redundant parameters make them easily over-\ufb01t. Very recently, the remote sensing community begins to rethink how to make the full use of PAN images\u2019 information [9, 24]. It is worthy noting that most the pan-sharpening networks regard the PAN image as a channel of the input. This manner ignores different characteristics between PAN and LRMS images. A growing number of researchers attempt to propose the two-branch networks [18, 40]. In the \ufb01rst stage, the two branches separately extract the features for PAN and LRMS images. In the second stage, the features are fused to reconstruct the HRMS image. Although convolutional neural networks exhibit promising performance in pan-sharpening, they require a large amount of training samples [22, 33], and they do not account for the observation progress of PAN and LRMS images, i.e., lacking the interpretability. Therefore, there still leaves the room for improvement. The research on modelbased deep learning is the trend in image processing \ufb01eld to close the gap between classic models and neural networks, and it is found that model-based deep networks usually outperform the intuitively designed networks [22, 33]. Xie et al. present a multispectral and hyperspectral (HS) image fusion network (MHNet) for the hyperspectral pansharpening task [35]. There is no doubt that MHNet can be naturally adapted to pan-sharpening [36]. Nonetheless, MHNet is designed to describe the low-rank property for hyperspectral images, and our experiments show that MHNet may perform badly in the pan-sharpening scenario. In this paper, we develop a novel model-based deep network for pan-sharpening. Our contributions are summarized as follows: Firstly, this paper considers the generative models for PAN and LRMS images. That is, as shown in Fig. 1(a), PAN images are the linear combination of the bands in HRMS images, and LRMS images are generated by blurring and downsampling HRMS images. Combining the observation models and the deep prior, we propose two optimization problems, and they can be effectively solved by the gradient projection method as illustrated in Fig. 1(b). Secondly, inspired by the idea of algorithm unrolling techniques, the iterative steps are generalized as two neural blocks separately justifying the generative models for PAN and LRMS images. The computational \ufb02ows in the proposed neural blocks are interpretable. As show in Fig. 1(c), for the MS Block, given a current estimation of the HRMS image, it generates corresponding LRMS image and computes the residual between the generated LRMS image and the real one. This residual then is upsampled and is added into the current estimation to reconstruct the next HRMS image. The PAN block can be interpreted similarly. We build a new network by alternatively stacking the two blocks. In what follows, it calls the gradient projection based pan-sharpening neural network (GPPNN). To the best of our knowledge, it is the \ufb01rst model-driven deep network for pan-sharpening. Thirdly, the proposed GPPNN is compared with the 13 state-of-the-art (SOTA) and classic pan-sharpening methods. The extensive experiments conducted on three popular satellites (i.e., Landsat8, QuickBird, GF2) demonstrate that our networks outperform other counterparts both quantitatively and visually. 2. Related work 2.1. Classic pan-sharpening methods The classic pan-sharpening methods mainly consists of CS based algorithms, MRA based algorithms and other algorithms. CS methods assume that the spatial and spectral information of a multispectral image can be decomposed. Therefore, an HRMS image is reconstructed by combining the spatial information of a PAN image and the spectral information of an LRMS image. In the past decades, researchers have designed various decomposition algorithms. For example, intensity-hue-saturation (IHS) fusion [7] employs the IHS transformation, Brovey method [5] uses a multiplicative injection scheme, and Gram-Schmidt (GS) method [12] exploits the Gram-Schmidt orthogonalization procedure. The main drawback of CS methods is that the image contains artifacts if the spectral and spatial information is not appropriately decomposed. The MRA methods apply the multi-scale transformation to PAN images to extract spatial details which then are injected into the upsampled LRMS images. Typical algorithms include highpass \ufb01lter (HPF) fusion [26], and Indusion method [11], smoothing \ufb01lter-based intensity modulation (SFIM) [15]. The performance of the MRA method strongly depends on the multi-scale transformation. 2 \f2.2. Deep learning based methods Recently, convolutional neural networks have been one of the most effective tools for remote sensing. Given a parameterized network, it is fed with an LRMS image and a PAN image to regress an HRMS image, and its parameters (or say, weights) are learned from data in the endto-end fashion. The \ufb01rst attempt is the PNN with three convolutional units [20]. Recently, thanks to the rapid development of computer vision [8, 10], it is able to train very deep networks. Researchers propose the deep pansharpening networks with dozens of layers and the performance has been greatly improved [27, 31, 38, 40]. At the same time, researchers also explore the two-branch networks to separately extract the features from MS and PAN images [18, 40]. Recently, one of the research trends of the pan-sharpening community is to combine the classic methods with deep neural networks to improve the interpretability of the deep learning based methods. For example, inspired by the idea of MRA algorithms, MIPSM [16] designs a spatial detail extraction network for the PAN images and injects the details into the LRMS images. Liu et al. propose an adaptive weight network for integrating the advantages of different classic methods [14]. It overcomes the shortcomings of the CS and MRA algorithms, and outperforms some SOTA deep learning based methods. 2.3. Model-driven deep networks Most of the deep neural networks are designed intuitively. Recently, a growing number of researchers focus on model-based neural networks for image processing tasks [22, 33]. The basic idea of model-driven deep learning is to formulate an observation model or optimization problem by integrating the prior knowledge for a speci\ufb01c task and to translate each iteration of the algorithm step into a layer of deep neural networks [22, 33]. Passing through the stacked layers corresponds to execute the algorithm with a certain number of times. Model-based deep learning builds the bridge between classic models and deep neural networks. This idea has been successfully applied in various tasks, including sparse coding [6], compressive sensing [39], image deblurring [13], image dehazing [37] and image deraining [32]. It is worth mentioning the MHNet, a model-driven network for the hyperspectral pan-sharpening task [35] to super-resolve HS images with the guidance of MS images. It can be naturally adapted to pan-sharpening, but MHNet mainly focuses on the low-rank property for HS images, i.e., its rank rHS is far lower than the number of bands BHS. In practice, there are dozens or hundreds of bands in an HS image, while there are no more than 10 bands in an MS image. So, the low-rank property is not evident for MS images, and MHNet may break down in pan-sharpening task. 3. GPPNN In this section, we develop a model-driven network for pan-sharpening. For convenience, we summarize the notations in this paper before presentation of the GPPNN. L \u2208Rmn\u00d7B is an LRMS image with a height of m, a width of n and the number of bands of B. H \u2208RMN\u00d7B is an HRMS image with a height of M, a width of N and the number of bands of B. P \u2208RMN\u00d7b is a PAN image whose spatial resolution is the same with that of H, but there is only one band (i.e., b = 1). r = M/m = N/n is the spatial resolution ratio. With abuse of notations, we use their tensor versions in the context of deep learning (namely, L \u2208Rm\u00d7n\u00d7B, H \u2208RM\u00d7N\u00d7B, P \u2208RM\u00d7N\u00d7b). Notation Conv(\u00b7; cin, cout) is the convolutional operator whose input and output are with cin and cout channels, respectively. In what follows, the function Conv(\u00b7; cin, cmid, cout) denotes the cascaded convolutional operator, that is, Conv \u0000\u00b7; cin, cmid, cout\u0001 =Conv \u0000ReLU \u0000Conv \u0000\u00b7; cin, cmid\u0001\u0001 ; cmid, cout\u0001 . (1) 3.1. Model formulation Our network starts with the observation model for the LRMS, HRMS and PAN images. It is assumed that an LRMS image is obtained by downsampling and blurring an HRMS image, while a PAN image is the result of spectral response for an HRMS image. In formula, we have L = DKH, P = HS, where D \u2208Rmn\u00d7MN denotes a downsampling matrix and K is the (low-passing) circular convolution matrix, and S \u2208RB\u00d7b is the so-called spectral response function. It is well-known that inferring the HRMS image is an ill-posed problem. Hence, it often formulates the following penalized optimization, min H f(L, H) + g(P , H) + \u03bbh(H), (2) where h(\u00b7) is the prior term, and f(L, H) = \u2225L \u2212 DKH\u22252 2/2 and g(P , H) = \u2225P \u2212HS\u22252 2/2 are data \ufb01delity terms which are responsible to LRMS and PAN images, respectively. In the classic methods, h(\u00b7) is usually designed as a hand-craft function, such as the total variation or nuclear norm [17]. However, in the era of deep learning, it is suggested to set h(\u00b7) as a deep prior [28, 41]. In other words, it is better to set an implicit prior captured by the neural network parametrization. Additionally, the deep prior is learned from data and can adapt to different tasks and observation models. To make the best of deep prior, instead of the above issue, we consider an LRMS-aware problem and a PAN-aware problem: min H 1 2 \u2225L \u2212DKH\u22252 2 + \u03bbhl(H), (3a) min H 1 2 \u2225P \u2212HS\u22252 2 + \u03bbhp(H). (3b) 3 \fHere, hl(\u00b7) and hp(\u00b7) are two deep priors accounting for the observations of LRMS and PAN images, respectively. The ablation experiment in section 4.4 veri\ufb01es that Eq. (3) achieves better results than Eq. (2). In the next, we describe how to solve the two problems. Moreover, the solutions are generalized into an LRMS-aware block (MS Block) and a PAN-aware block (PAN block) that can be embedded into neural networks. 3.2. MS Block We employ the gradient projection method [25] to solve Eq. (3a) and the updating rule is H(t) = proxhl \u0010 H(t\u22121) \u2212\u03c1\u2207f(H(t\u22121)) \u0011 , (4) where \u03c1 is the step size, proxhl(\u00b7) is a proximal operator corresponding to penalty hl(\u00b7) and \u2207f(H(t\u22121)) = \u2212(DK)T (L \u2212DKH) denotes the gradient of the data \ufb01delity term. Inspired by the principle of model-driven deep learning [22], we generalize Eq. (4) as a network block. To begin with, Eq. (4) is split into four steps as follows, \u02c6 L(t) = DKH(t\u22121), (5a) R(t) l = L \u2212\u02c6 L(t), (5b) R(t) h = \u03c1 (DK)T R(t) l , (5c) H(t) = proxhl \u0010 H(t\u22121) + R(t) h \u0011 . (5d) Then, each step is translated with deep learning terminologies. For convenience, we use the tensor versions to represent the variables in the context of deep learning. In Eq. (5a), given a current HRMS image H(t\u22121), it generates an LRMS image \u02c6 L(t) by applying a low-passing \ufb01lter and downsampling. In neural networks, this step is implemented by \u02c6 L(t) = Conv \u0010 H(t\u22121); B, C, B \u0011 \u2193, (6) where downsampling is conducted with a bicubic interpolation \u2193and the \ufb01lter K is replaced by a cascaded convolutional operator Conv(\u00b7; B, C, B) to obtain more expressive features. C is the number of channels for the feature maps, and we set it to 64 in this paper. B is the number of channels for MS images, and it depends on the input data. Afterwards, Eq. (5b) computes residuals between the real LRMS image L and the generated LRMS image \u02c6 L(t), and the translation is trivial as shown in following equation, R(t) l = L \u2212\u02c6 L(t). (7) In the next, Eq. (5c) obtains the high-resolution residuals. Analogous to Eqs. (5a) and (6), this step is rewritten as R(t) h = \u03c1Conv \u0010 R(t) l ; B, C, B \u0011 \u2191. (8) Remark that the \ufb01lters in Eqs. (5a) and (5c) transpose to each other, but we do not force the convolutional kernels in Eqs. (6) and (8) to satisfy this requirement for \ufb02exibility. The ablation experiment in section 4.4 shows that it slightly improves GPPNN\u2019s performance. At last, Eq. (5d) outputs the HRMS image by taking the residual into account with a proximal operator. As illustrated before, proximal operators regarding the deep prior are modeled by the deep networks [28, 41]. In this manner, the deep prior can be learned implicitly from data. So, we have H(t) = Conv \u0010 H(t\u22121) + R(t) h ; B, C, B \u0011 . (9) In what follows, Eqs. (6), (7), (8) and (9) are named as an MS Block. For better understanding, the computational \ufb02ow for an MS Block is displayed in Fig. 1(c). 3.3. PAN block In this subsection, we consider the observation model for PAN (i.e., Eq. (3b)). With the gradient projection method, the updating rule is H(t) = proxhp \u0010 H(t\u22121) \u2212\u03c1\u2207g(H(t\u22121)) \u0011 , (10) where \u2207g(H(t\u22121)) = \u2212(P \u2212HS)ST . With the similar techniques, it is able to translate Eq. (10) into a block of neural networks. At \ufb01rst, Eq. (10) is split into four steps as follows, \u02c6 P (t) = H(t\u22121)S, (11a) R(t) p = P \u2212\u02c6 P (t), (11b) R(t) h = \u03c1R(t) p ST , (11c) H(t) = proxhp \u0010 H(t\u22121) + R(t) h \u0011 . (11d) In the context of deep learning, as shown in Fig. 1(c), these steps are rewritten as, \u02c6 P(t) = Conv \u0010 H(t\u22121); B, C, b \u0011 , (12a) R(t) p = P \u2212\u02c6 P(t), (12b) R(t) h = \u03c1Conv \u0010 R(t) p ; b, C, B \u0011 , (12c) H(t) = Conv \u0010 H(t\u22121) + R(t) h ; B, C, B \u0011 . (12d) Here, b = 1 is the number of channel for PAN images. Remark that the underlying assumption of Eq. (3b) is that the PAN image is a linear combination of the HRMS image. S/ST is regarded as a band reduction/expansion operator. With this assumption, convolutional units in Eqs. (12a) and (12c) should be with the kernel size of 1. 4 \f\ud835\udcd7\ud835\udfce Input Backbone Layer 1 Layer 2 Layer K \u2026\u2026 MS Block PAN Block MS Block PAN Block MS Block PAN Block \u2026\u2026 Bicubic Interpolation Output \u0de2 \ud835\udcd7 L1 Loss Label \ud835\udcd7 Input Layer Output Layer Figure 2. The structure of GPPNN. 3.4. GPPNN Now, with the MS Block and the PAN block, we are ready to construct the gradient projection based pansharpening neural network (GPPNN). The structure of our GPPNN is shown in Fig. 2. The network starts with an input layer, and it requires an initial value of the HRMS image. We initialize H(0) \u2208RM\u00d7N\u00d7B by applying the bicubic interpolation to the input LRMS image L \u2208Rm\u00d7n\u00d7B. The network is followed by a backbone subnetwork. There are K layers, each of which consists of an MS Block and a PAN block. In formula, there are H(t+0.5) = MS Block \u0010 H(t), L \u0011 (13) and H(t+1) = PAN Block \u0010 H(t+0.5), P \u0011 . (14) The output of the last layer, denoted by \u02c6 H \u2208RM\u00d7N\u00d7B, is the \ufb01nal reconstructed HRMS. 3.5. Training details Our GPPNN is supervised by the \u21131 loss between \u02c6 H and the ground truth H, \u2225\u02c6 H\u2212H\u22251. The paired training samples are unavailable in practice. When we construct the training set, the Wald protocol [30] is employed to generate the paired samples. For example, given the multispectral image H \u2208RM\u00d7N\u00d7B and the PAN image P \u2208RrM\u00d7rN\u00d7b, both of them are downsampled with ratio r, and the downsampled versions are denoted by L \u2208RM/r\u00d7N/r\u00d7B and \u02dc P \u2208RM\u00d7N\u00d7b. In the training set, L and \u02dc P are regarded as the inputs, while H is the ground-truth. GPPNN is implemented with Pytorch framework. They are optimized by Adam over 100 epochs with a learning rate of 5 \u00d7 10\u22124 and a batch size of 16. In our experiments, kLR = 3 and kPAN = 1. In section 4.2, we report the performance of GPPNN with different C and K. As a balance, C and K are set to 64 and 8, respectively. Table 1. The information of datasets. B is the number of bands for multispectral images. Landsat8 GaoFen2 QuickBird B 10 4 4 Resolution-MS 256 256 256 Resolution-PAN 512 1024 1024 # Train/Val/Test 350/50/100 350/50/100 474/103/100 4. Experiments A series of experiments are carried out to evaluate the performance of GPPNN. SOTA deep learning based methods are selected for comparison, namely, MIPSM [16], DRPNN [34], MSDCNN [40], RSIFNN [27], PanNet [38], and MHNet [35]. Our method is also compared with seven classic methods, including BDSD [3], Brovey [5], GS [12], HPF [26], IHS fusion [7], Indusion [11], SFIM [15]. The experiments are conducted on a computer with an Intel i79700K CPU at 3.60GHz and an NVIDIA GeForce RTX 2080ti GPU. 4.1. Datasets and metrics Remote sensing images acquired by three satellites are used in our experiments, including Landsat8, QuickBird and GaoFen2, the basic information of which is listed in Table 1. For each satellite, we have hundreds of image pairs, and they are divided into three parts for training, validation and test. Note that we determine K and C on the validation dataset. In the training set, the multispectral images are cropped into patches with the size of 32 \u00d7 32, and the corresponding PAN patches are with the size of 64 \u00d7 64 (for Landsat8) or 128 \u00d7 128 (for GaoFen2 and QuickBird). For numerical stability, each patch is normalized by dividing the maximum value to make the pixels range from 0 to 1. Four popular metrics are used to evaluate the algorithms\u2019 performances, including peak signal-to-noise ra5 \f(a) LRMS (b) PAN (c) GT(PSNR) (d) BDSD(17.23) (e) GS(14.21) (f) MIPSM(19.80) (g) DRPNN(21.45) (h) MSDCNN(21.94) (i) RSIFNN(17.14) (j) PANNET(20.09) (k) MHNet(19.58) (l) GPPNN(24.40) Figure 3. Visual inspection on Landsat8 dataset. The caption of each subimage displays the corresponding PSNR value. Table 2. The PSNR values on validation datasets for GPPNN with different K and C. The best value is highlighted by the bold. Satellites K layers C Filters 2 4 6 8 10 12 14 8 16 32 64 128 Landsat8 39.0648 39.5878 39.9876 40.0368 40.1336 39.9531 40.0509 36.6455 39.6156 39.6702 40.0368 39.0841 QuickBird 30.4994 30.4392 30.6370 30.5636 30.4803 30.4773 30.5560 30.2962 30.4681 30.4592 30.5636 30.5979 GaoFen2 36.7583 36.9740 36.2181 37.5606 37.0589 36.7835 36.6840 35.8116 36.9061 36.2810 37.5606 36.5873 tio (PSNR), structural similarity (SSIM) and erreur relative globale adimensionnelle de synthese (ERGAS) and spectral angle mapper (SAM). The \ufb01rst three metrics measure the spatial distortion, and the last one measures the spectral distortion. An image is better if its PSNR and SSIM are higher, and ERGAS and SAM are lower. 4.2. The effect of depth and width The network\u2019s depth K and width C play signi\ufb01cant roles. Table 2 lists the PSNR values on validation datasets for GPPNN with different K and C. At \ufb01rst, C is \ufb01xed to 64, and K is set to 2, 4, \u00b7 \u00b7 \u00b7 , 14. It is shown that more layers do not necessarily increase the PSNR value, and K = 8 strikes the balance between performance and the number of weights. The reason may be that it is not easy to train a GPPNN with more layers. Then, we \ufb01x K to 8 and set C to 8, 16, 32, 64 and 128. The similar conclusion can be drawn, and the best choice for C is 64. In summary, our GPPNN is con\ufb01gured with K = 8 layers and C = 64 \ufb01lters in the next experiments. 4.3. Comparison with SOTA methods The evaluation metrics on three datasets are reported in Table 3. It is found that GPPNN outperforms other methods regarding all metrics on three satellites. Figs. 3, 4 and 5 show the RGB bands of the three satellites for some representative methods. Our GPPNN is the closest to the ground truth. From the ampli\ufb01ed local regions in Fig. 3, it found that BDSD, GS, MIPSM, DRPNN, PANNET suffer from spatial distortion, and GS, MSDCNN, RSIFNN and MHNet suffer from spectral distortion. However, our GPPNN has the smallest spatial and spectral distortions. As for Fig. 4, it is a dif\ufb01cult case. It is shown that the most methods have obvious artifacts or noise, and their images are blurring or spectrally distorted. Our GPPNN is without artifacts, noise or spectral distortion. As shown in Fig. 5, it is observed that compared with other methods, our GPPNN has \ufb01ner-grained textures and coarser-grained structures. 4.4. Ablation experiments To further investigate the role of some modules in the proposed GPPNN, a series of ablation experiments are car6 \fTable 3. The four metrics on test datasets. The best and the second best values are highlighted by the bold and underline, respectively. The up or down arrow indicates higher or lower metric corresponds to better images. Landsat8 QuickBird GaoFen2 PSNR\u2191 SSIM\u2191 SAM\u2193 ERGAS\u2193 PSNR\u2191 SSIM\u2191 SAM\u2193 ERGAS\u2193 PSNR\u2191 SSIM\u2191 SAM\u2193 ERGAS\u2193 BDSD 33.8065 0.9128 0.0255 1.9128 23.5540 0.7156 0.0765 4.8874 30.2114 0.8732 0.0126 2.3963 Brovey 32.4030 0.8533 0.0206 1.9806 25.2744 0.7370 0.0640 4.2085 31.5901 0.9033 0.0110 2.2088 GS 32.0163 0.8687 0.0304 2.2119 26.0305 0.6829 0.0586 3.9498 30.4357 0.8836 0.0101 2.3075 HPF 32.6691 0.8712 0.0250 2.0669 25.9977 0.7378 0.0588 3.9452 30.4812 0.8848 0.0113 2.3311 IHS 32.8772 0.8615 0.0245 2.3128 24.3826 0.6742 0.0647 4.6208 30.4754 0.8639 0.0108 2.3546 Indusion 30.8476 0.8168 0.0359 2.4216 25.7623 0.6377 0.0674 4.2514 30.5359 0.8849 0.0113 2.3457 SFIM 32.7207 0.8714 0.0248 2.0775 24.0351 0.6409 0.0739 4.8282 30.4021 0.8501 0.0129 2.3688 MIPSM 35.4891 0.9389 0.0209 1.5769 27.7323 0.8411 0.0522 3.1550 32.1761 0.9392 0.0104 1.8830 DRPNN 37.3639 0.9613 0.0173 1.3303 31.0415 0.8993 0.0378 2.2250 35.1182 0.9663 0.0098 1.3078 MSDCNN 36.2536 0.9581 0.0176 1.4160 30.1245 0.8728 0.0434 2.5649 33.6715 0.9685 0.0090 1.4720 RSIFNN 37.0782 0.9547 0.0172 1.3273 30.5769 0.8898 0.0405 2.3530 33.0588 0.9588 0.0112 1.5658 PANNET 38.0910 0.9647 0.0152 1.3021 30.9631 0.8988 0.0368 2.2648 34.5774 0.9635 0.0089 1.4750 MHNet 37.0049 0.9566 0.0189 1.3509 31.1557 0.8947 0.0368 2.1931 33.8930 0.9291 0.0176 1.3697 GPPNN 38.9939 0.9727 0.0138 1.2483 31.4973 0.9075 0.0351 2.1058 35.9680 0.9725 0.0084 1.2798 (a) LRMS (b) PAN (c) GT(PSNR) (d) BDSD(16.23) (e) GS(15.80) (f) MIPSM(16.84) (g) DRPNN(18.85) (h) MSDCNN(18.15) (i) RSIFNN(17.33) (j) PANNET(18.22) (k) MHNet(19.50) (l) GPPNN(21.13) Figure 4. Visual inspection on QuickBird dataset. The caption of each subimage displays the corresponding PSNR value. ried out. There are 5 different con\ufb01gurations and the results of ablation experiments are shown in Table 4. (I) The proximal operators make the current HRMS image restricted to deep priors. In the \ufb01rst experiment, we delete proximal modules (namely, the convolutional units in Eqs. (9)&(12d)) to verify the necessity of deep priors. Table 4 shows that deleting proximal modules make all metrics dramatically get worse. Therefore, the deep prior plays a signi\ufb01cant role in our network. (II) In the second experiment, we share the weights of all layers. In other words, the network contains only an MS Block and a PAN block, and the network is repeatedly fed with the current HRMS image K times. The results in Table 4 demonstrate that sharing the weights will weaken our network\u2019s performance. (III) As illustrated in Section 3.1, the original problem Eq. (2) is split into an LRMS-aware subproblem and a PANaware subproblem. Now, to verify the rationality, we generalize Eq. (2) as a neural network with the same techniques for GPPNN. We exploit this block to build a neural network corresponding to Eq. (2). From Table 4, we learn that the network for Eq. (2) is worse than GPPNN. It is necessary to 7 \f(a) LRMS (b) PAN (c) GT(PSNR) (d) BDSD(21.86) (e) GS(21.43) (f) MIPSM(22.11) (g) DRPNN(26.41) (h) MSDCNN(26.90) (i) RSIFNN(24.74) (j) PANNET(26.72) (k) MHNet(25.17) (l) GPPNN(28.70) Figure 5. Visual inspection on GaoFen2 dataset. The caption of each subimage displays the corresponding PSNR value. Table 4. The results of ablation experiments on the Landsat8 dataset. Con\ufb01gurations Proximal Sharing Block for TransposePSNR\u2191 SSIM\u2191 SAM\u2193 ERGAS\u2193 Module Weights Eq. (2) ment I \u00d7 \u00d7 \u00d7 \u00d7 37.0404 0.9498 0.0180 1.4246 II \u2713 \u2713 \u00d7 \u00d7 38.1650 0.9669 0.0164 1.2943 III \u2713 \u00d7 \u2713 \u00d7 38.3213 0.9682 0.0155 1.3215 IV \u2713 \u00d7 \u00d7 \u2713 38.5487 0.9700 0.0150 1.2746 GPPNN \u2713 \u00d7 \u00d7 \u00d7 38.9939 0.9727 0.0138 1.2483 consider two deep priors to separately account for the generative models of LRMS and PAN images. (IV) In the last experiment, the convolutional kernel in Eq. (8)/(12c) is replaced by the kernel in Eq. (6)/(12a) with the rotation of 180\u00b0 to force them to satisfy the transposing requirement. It is found that, if the two kernels transpose to each other, the metrics will slightly become worse. The reason may be that the model with transposed kernels has fewer degrees of freedom weakening network\u2019s performance. 5."
+ }
+ ],
+ "Fandong Meng": [
+ {
+ "url": "http://arxiv.org/abs/2010.00247v2",
+ "title": "WeChat Neural Machine Translation Systems for WMT20",
+ "abstract": "We participate in the WMT 2020 shared news translation task on Chinese to\nEnglish. Our system is based on the Transformer (Vaswani et al., 2017a) with\neffective variants and the DTMT (Meng and Zhang, 2019) architecture. In our\nexperiments, we employ data selection, several synthetic data generation\napproaches (i.e., back-translation, knowledge distillation, and iterative\nin-domain knowledge transfer), advanced finetuning approaches and self-bleu\nbased model ensemble. Our constrained Chinese to English system achieves 36.9\ncase-sensitive BLEU score, which is the highest among all submissions.",
+ "authors": "Fandong Meng, Jianhao Yan, Yijin Liu, Yuan Gao, Xianfeng Zeng, Qinsong Zeng, Peng Li, Ming Chen, Jie Zhou, Sifan Liu, Hao Zhou",
+ "published": "2020-10-01",
+ "updated": "2020-10-05",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Our WeChat AI team participates in the WMT 2020 shared news translation task on Chinese\u2192English. In this year\u2019s translation task, we mainly focus on exploiting several effective model architectures, better data augmentation, training and model ensemble strategies. For model architectures, we mainly exploit two different architectures in our approaches, namely Transformers and RNMT. For Transformers, we implement the Deeper transformer with Pre-Norm, the Wider Transformer with larger \ufb01lter-size and the average attention based transformer (Zhang et al., 2018). For the RNMT, we use the deep transition based DTMT (Meng and Zhang, 2019) model. We \ufb01nally ensemble four kinds of models in our system. For synthetic data generation, we explore various methods for out-of-domain and in-domain data generation. For out-of-domain data generation, we explore the back-translation method (Sennrich et al., 2016a) to leverage the target side monolingual data and the knowledge distillation method (Kim and Rush, 2016) to leverage source side of golden parallel data. For in-domain data generation, we employ iterative in-domain knowledge transfer to leverage the source side monolingual data and golden parallel data. Furthermore, data augmentation methods, including noisy fake data (Wu et al., 2019) and sampling (Edunov et al., 2018), are used for training more robust NMT models. For training strategies, we mainly focus on the parallel scheduled sampling (Mihaylova and Martins, 2019; Duckworth et al., 2019), the target denoising and minimum risk training (Shen et al., 2016; Wang and Sennrich, 2020) algorithm for indomain \ufb01netuning. We also exploit a self-bleu (Zhu et al., 2018) based model ensemble approach to enhance our system. As a result, our constrained Chinese\u2192English system achieves the highest case-sensitive BLEU score among all submitted systems. In the remainder of this paper, we start with an overview of model architectures in Section 2. Section 3 describes the details of our systems and training strategies. Then Section 4 shows the experimental settings and results. Finally, we conclude our work in Section 5. 2 Model Architectures In this section, we \ufb01rst describe the model architectures we use in the Chinese\u2212 \u2192English Shared Task, including the Transformer-based (Vaswani et al., 2017) models and RNN-based (Bahdanau et al., 2014; Meng and Zhang, 2019) models. 2.1 Deeper Transformer As shown in previous studies (Wang et al., 2019; Sun et al., 2019), deeper Transformers with prenorm outperform its shallow counterparts on various machine translation benchmarks. In their work, increasing the encoder depth signi\ufb01cantly improves the model performance, while they only introduces mild overhead in terms of speed in training and arXiv:2010.00247v2 [cs.CL] 5 Oct 2020 \finference, compared with increasing the decoder side depth. Hence, we train deeper Transformers with a deep encoder aiming for a better encoding representation. In our experiments, we mainly adopt two settings, with the hidden size 512 (Base) and 1024 (Large). We adopt a 30-layer encoder for Base models, and 20/24-layer encoders for Large models. Further increasing the encoder depth does not lead to a signi\ufb01cant BLEU improvement. To keep the total trainable parameters the same among models, the \ufb01lter sizes of Base and Large models are 16384 and 4096, respectively. For training, the batch size is 4,096 tokens per GPU, and we train each model using 8 NVIDIA V100 GPUs for about 7 days. 2.2 Wider Transformer Inspired by last year\u2019s Baidu system (Sun et al., 2019), we also train Wider Transformers with larger inner dimension of the Feed-Forward Network than the standard Transformer Large system. Speci\ufb01cally, two settings are used in our experiments. With \ufb01lter size as 15,000, we set the number of encoder layers to 10, and with \ufb01lter size 12,288, we set the number of encoder layers to 12. The number of total trainable parameters of Wider Transformer is kept approximately the same as our Deeper Transformers. In our experiments, we also set the batch size to be 4,096 and train the Wider Transformers with 8 NVIDIA V100 GPUs for about 7 days. 2.3 Average Attention Transformer To introduce more diversity in our Transformer models, we use Average Attention Transformer (AAN) (Zhang et al., 2018) as one of our candidate architectures. The Average Attention Transformer replaces the decoder self-attention module in auto-regressive order with a simple average attention, and introduces almost no loss in model performance. We believe that even though the performance of AAN does not drop in terms of BLEU, the output distributions of AAN networks should be different from the output distributions of original Transformers, which brings diversity for \ufb01nal ensemble. This also complies with our \ufb01ndings in self-bleu experiments (Section 3.6). In practice, AAN models are trained for both the Wider Transformer and Deeper Transformer. The batch size and other hyper-parameters are kept the same its non-AAN counterpart. 2.4 DTMT DTMT (Meng and Zhang, 2019) is the recently proposed deep transition RNN-based model for Neural Machine Translation, whose encoder and decoder are composed of the well designed transition blocks, each of which consists of a linear transformation enhanced GRU (L-GRU) followed by several transition GRUs (T-GRUs). DTMT enhances the hidden-to-hidden transition with multiple non-linear transformations, as well as maintains a linear transformation path throughout this deep transition by the well-designed linear transformation mechanism to alleviate the vanishing gradient problem. This architecture has demonstrated its superiority over the conventional Transformer model and stacked RNN-based models in NMT (Meng and Zhang, 2019), and also achieves surprising performances on other NLP tasks, such as sequence labeling (Liu et al., 2019) and aspect-based sentiment analysis (Liang et al., 2019). In our experiments, we use the bidirectional deep transition encoder, where each directional deep transition block consists of 1 L-GRU and 4 T-GRU. The decoder contains a query transition block and the decoder transition block, each of which consists of 1 L-GRU and 4 T-GRU. Therefore the DTMT consists of a 5 layer encoder and a 10 layer decoder, with hidden size 1,024. We use 8 NVIDIA V100 GPUs to train each model for about three weeks and the batch size is set to 4,096 tokens per GPU. 3 System Overview In this section, we describe our system used in the WMT 2020 news shared task. Figure 1 depicts the overview of our Wechat NMT. Our system can be divided into four parts, namely data \ufb01ltering, synthetic data generation, indomain \ufb01netuning, and ensemble. The synthetic generation part further includes the generation of out-of-domain and in-domain data. Next, we will illustrate these four parts. 3.1 Data Filter Following previous work (Li et al., 2019), we \ufb01lter the training bilingual corpus with the following rules: \u2022 Normalize punctuation with Moses scripts. \u2022 Filter out the sentences longer than 100 words, or exceed 40 characters in a single word. \fIn-Domain Synthetic Training Finetuning Out-of-Domain Synthetic Ensemble In-domain Knowledge Transfer Model N Model 1 Model 2 \u2026\u2026. Normal Forward Translation Data Bilingual Data + Out synthetic + In Synthetic Back Translation Knowledge Distillation N x Finetuning Normal Target Denoise Scheduled Sampling MRT Models Deeper Transformer Wider Transformer AveAtt Transformer DTMT Figure 1: Architecture of WeChat NMT system. For simplicity, the data \ufb01ltering module is ignored in the overview. NUM Bilingual Data 20.7M Chinese Monolingual Data 153.5M English Monolingual Data 121.2M Table 1: Statistics of all training data. \u2022 Filter out the duplicated sentence pairs. \u2022 The word ratio between the source and the target must not exceed 1:4 or 4:1. We also \ufb01lter the monolingual corpus with the language model trained by the corresponding data of bilingual training corpus. In our experiments, the bilingual training data is a combination of News Commentary v15, Wiki Titles v2, WikiMatrix, CCMT and the UN corpus. The Chinese monolingual data includes News crawl, News Commentary, Common Crawl and Gigaword corpus. The English monolingual data includes News crawl, News discussions, Europarl v10, News Commentary, Common Crawl, Wiki dumps and the Gigaword corpus. After data \ufb01ltering, statistics of all training data are shown in Table 1. 3.2 Out-of-Domain Synthetic Data Generation Now, we describe our techniques for constructing both out-of-domain and in-domain synthetic data. The out-of-domain synthetic corpus is generated via both large-scale back-translation and knowledge distillation to enhance the models\u2019 performance for all domains. Then, we propose iterative in-domain knowledge transfer, which transfers in-domain knowledge to the huge monolingual corpus (i.e., Chinese), and builds our in-domain synthetic corpus. In the following sections, we elaborate above techniques in detail. \f3.2.1 Large-scale Back-Translation Back-translation is shown to be very effective to boost the performance of NMT models in both academic research (Hoang et al., 2018; Edunov et al., 2018) and previous years\u2019 WMT competitions (Deng et al., 2018; Sun et al., 2019; Ng et al., 2019; Xia et al., 2019). Following their work, we also train baseline English-to-Chinese models with the parallel data provided by WMT2020. Both the Left-to-Right Transformer (L2R) and the Right-toLeft Transformer (R2L) are used to translate the \ufb01ltered monolingual English corpus combined with the English side of golden parallel bitext to Chinese. Then the generated Chinese text and the original English text are regarded as the source side and target side, respectively. In practice, it costs us 7 days on 5 NVIDIA V100 GPU machines to generating all back-translated data. 3.2.2 Knowledge Distillation Knowledge distillation (KD) is proven to be a powerful technique for NMT (Kim and Rush, 2016) to transfer knowledge from teacher model to student models. In particular, we \ufb01rst use the teacher models to generate synthetic corpus in the forward direction (i.e., Chinese\u2192English). Then, we use the generated corpus to train our student models. In this work, with baseline Chinese\u2192English models (i.e., L2R and R2L) as teacher models, we translate the Chinese sentences of the parallel corpus to English to form our synthetic KD dataset. The knowledge distillation costs about 2 days on 2 NVIDIA V100 GPU machines to generate all synthetic data. 3.3 Iterative In-domain Knowledge Transfer Since in-domain \ufb01netuning demonstrates substantial BLEU improvements (Sun et al., 2019; Li et al., 2019), we speculate that the parallel data and the dev/test sets fall in different domains. Therefore, adapting our models to the target domain in advance will provide gains over the dev/test sets and give a better initialization point for in-domain \ufb01netuning. To this end, we use knowledge transfer to inject more in-domain information into our synthetic data. In particular, we \ufb01rst use normal \ufb01netuning (see Section 3.5) to equip our models with in-domain knowledge. Then, we ensemble these models and use the ensemble model to translate the Chinese monolingual corpus into English. For our ensemble translator, we use 4 models with different architectures. Next, we pair original Chinese sentences with generated in-domain pseudo English sentences to form a pseudo parallel corpus. So far, the in-domain knowledge from ensembled models is transferred to the generated pseudo-parallel corpus. Finally, we retrain our model with both the in-domain pseudoparallel and out-of-domain parallel data. We refer to the above process as the in-domain knowledge transfer. In our experiments, we \ufb01nd that iteratively performing the in-domain knowledge transfer can further provide improvements (see Table 2). For each iteration, we replace the indomain synthetic data and retrain our models, and it costs about 10 days on 8 NVIDIA V100 GPU machines. For the \ufb01nal submission, the knowledge transfer is conducted twice. 3.4 Data Augmentation Aside from synthetic data generation, we also apply two data augmentation methods over our synthetic corpus. Firstly, adding synthetic/natural noises to training data is widely applied in the NLP \ufb01elds (Li et al., 2017; Belinkov and Bisk, 2017; Cheng et al., 2019) to improve model robustness and enhance model performance. Therefore, we proposed to add token-level synthetic noises. Concretely, we perform random replace, random delete, and random permutation over our data. The probability for enabling each of the three operations is 0.1. We refer to this corrupted corpus as Noisy data. Secondly, as illustrated in (Edunov et al., 2018), sampling generation over back-translation shows its potential in building robust NMT systems. Consequently, we investigate the performance of sampled synthetic data. For back-translated data, we replace beam search with sampling in its generation. For in-domain synthetic data, we replace the golden Chinese with the back sampled pseudo Chinese sentences. We refer to the data with sampling generation as Sample data. As a special case, we refer to the without augmentation data as Clean data. 3.5 In-domain Finetuning We train the model on large-scale out-of-domain data until convergence and then \ufb01netune it on smallscale in-domain data, which is widely used for domain adaption (Luong and Manning, 2015; Li et al., 2019). Speci\ufb01cally, we take Chinese\u2212 \u2192English test sets of WMT 17 and 18 as in-domain data, and \ufb01lter out documents that are originally created in \fEnglish (Sun et al., 2019). We name above \ufb01netuning approach as normal \ufb01netuning. In all our \ufb01netuning experiments, we set the batch size to 4096, and \ufb01netune the model for around 400 steps1 on the in-domain data. Furthermore, the well-known problem of exposure bias in sequence-to-sequence generation becomes more serious under domain shift (Wang and Sennrich, 2020). To solve this issue, we further explore some advanced \ufb01netuning approaches and describe details in the following paragraphs. Parallel Scheduled Sampling. We apply a twopass decoding strategy for the Transformer decoder when \ufb01netuning, which is named as parallel scheduled sampling (Mihaylova and Martins, 2019; Duckworth et al., 2019). In the \ufb01rst pass, we obtain model predictions as a standard Transformer, and then mix the predicted sequence with the golden target sequence. In the second pass, we feed above mixture of both golden and predicted tokens as decoder inputs for the \ufb01nal prediction. Thus the problem of training-generation discrepancy is alleviated in the \ufb01netuning stage. According to our preliminary experiments, we set the proportion of predicted tokens in mixed tokens to 50%. Target Denoising. In the training stage, the model never sees its own errors. Thus the model trained with teacher-forcing is prone to accumulated errors in testing (Ranzato et al., 2015). To mitigate this training-generation discrepancy, we add noisy perturbations into decoder inputs when \ufb01netuning. Thus the model becomes more robust to prediction errors by target denoising. Speci\ufb01cally, the \ufb01netuning data generator chooses 30% of sentence pairs to add noise, and keeps remaining 70% of sentence pairs unchanged. For a chosen pair, we keep the source sentence unchanged, and replace the i-th token of target sentence with (1) a random token of current target sentence 15% of the time (2) the unchanged i-th token 85% of the time. Minimum Risk Training. To further avoid the problem of exposure bias, we propose to use minimum risk training (Shen et al., 2016) in the \ufb01netuning stage, which directly optimizes the expected BLEU score instead of the Cross-Entropy loss, and naturally avoids exposure bias. Speci\ufb01cally, the ob1According our experiments, \ufb01netuing with more steps will make the model easy to over\ufb01t on the small in-domain data. jective is computed by, R(\u03b8) = S X s=1 X y\u2208S(x(s)) Q(y|x(s); \u03b8, \u03b1)\u2206(y, y(s)), (1) where x(s) and y(s) are two paired sentences. \u2206denotes a risk function and S(x(s)) \u2208Y is a sampled subset of full search space. Then, the distribution Q is de\ufb01ned over space S(x(s)), Q(y|x(s); \u03b8, \u03b1) = P(y|x(s; \u03b8)\u03b1 P y\u2032\u2208S(x(s)) P(y\u2032|x(s; \u03b8)\u03b1 . (2) In practice, we use 4 candidates for each source sentence x(s). Although the paper claimed that sampling generates better candidates, we \ufb01nd that beam search performs better in our extremely large Transformer model. The risk function we used is the 4-gram sentence-level BLEU (Chen and Cherry, 2014) and we tune the optimal \u03b1 via grid search within {0.005, 0.05, 0.5, 1, 1.5, 2}. Each model is \ufb01ne-tuned for a max of 1000 steps. 3.6 Ensemble We split each training data into three shards among Clean, Noisy and Sample data respectively, which yields a total number of 9 shards. For each shard, we train seven varieties (two Deeper transformers, two Wider transformers, two AANs and one DTMT) with different model architecture. Then we apply four \ufb01netuning approaches on each model, thus the total number of models are quadrupled (about 200 models). For ensemble, it is dif\ufb01cult and inef\ufb01cient to enumerate over all combinations of candidate models (e.g., grid search). Therefore a pruning strategy for model selection is necessary when ensemble. We try to greedily select the top performing models for ensemble. However, only a slight improvement is obtained (less than 0.1 BLEU), as our models are too similar to each other after \ufb01netuning. To further promote diversity among candidate models, we propose the self-bleu driven pruning strategy for advanced ensemble. Speci\ufb01cally, we take the translation of one model as hypothesis and translations of other models as references. Then we calculate BLEU score for each model to evaluate its diversity among other models. Models with small BLEU scores are selected for ensemble, and vice versa. According to our experiments, we observe that (1) AAN and DTMT show a clear difference \fSETTINGS DEEPER WIDER AVEATT DTMT Baseline 26.24 26.35 26.17 26.08 + Back Translation 29.64 29.70 29.48 28.88 + Finetune 35.71 35.89 35.80 35.03 + 1st In-domain Knowledge Transfer 38.14 38.22 38.21 37.98 + Finetune 38.36 38.25 38.13 37.85 + 2nd In-domain Knowledge Transfer 38.32 38.29 38.34 38.05 + Finetune 38.49 38.31 38.38 38.12 + Advanced Finetune 39.08 39.12 38.93 38.66 + Normal Ensemble 39.19 + Advanced Ensemble\u22c6 39.89 Table 2: Case-sensitive BLEU scores (%) on the Chinese\u2192English newstest2019, where \u2018\u22c6\u2019 denotes the submitted system. For each model architecture, we report the highest score among three shards of clean data. FINETUNING APPROACH DEEPER WIDER AVEATT DTMT Normal 38.49 38.31 38.38 38.12 Parallel Scheduled Sampling 38.76 38.84 38.93 \u2013 Target Denoising 38.88 38.92 38.63 38.66 Minimum Risk Training 39.08 39.12 38.78 38.45 Table 3: Case-sensitive BLEU scores (%) on the Chinese\u2192English newstest2019 for different \ufb01netuning approaches after the 2nd in-domain knowledge transfer. For each model architecture, we report the highest score among three shards of clean data and bold the best result among different \ufb01netuning approaches. with other architectures; (2) data sharding is effective to promote diversity, especially for models trained with Clean data; (3) different \ufb01netuning approaches cannot bring diversity for the same model. Under the guidance of self-bleu scores, our advanced ensemble models consists of 20 single models with differences in model architectures, data types, shards and \ufb01netuning approaches. As shown in Table 2, the advanced ensemble achieves absolute improvements over the normal ensemble (up to 0.7 BLEU improvements). 4 Experiments 4.1 Settings All of our experiments are carried out on 15 machines with 8 NVIDIA V100 GPUs each of which have 32 GB memory. We use cased BLEU scores calculated with Moses2 mteval-v13a.pl script as evaluation metric. newstest2019 is used as the development set. For all experiments, we use LazyAdam optimizer with \u03b21 = 0.9, \u03b22 = 0.998 and \u03f5 = 10\u22129. The learning rate is set to 2.0 and decay with training steps. We use warmup step = 8000. We set beam size to 4 and alpha to 0.6 during decoding. 2http://www.statmt.org/moses/ 4.2 Pre-processing and Post-processing We segment the Chinese sentences with an in-house word segmentation tool. For English sentences, we successively apply punctuation normalization, tokenization and truecasing with the scripts provided in Moses. To enable open-vocabulary, we use byte pair encoding BPE (Sennrich et al., 2016b) with 32K operations for both Chinese and English sides. For the post-processing, we apply de-truecaseing and de-tokenizing on the English translations with the scripts provided in Moses. 4.3 Main Results Table 2 shows that the translation quality is largely improved with proposed techniques. We observe a solid improvement of 2.8\u223c3.4 BLEU for the baseline system after back translation. In-domain \ufb01netuning yields substantial improvements among all model architectures, which are 6.07\u223c6.32 BLEU. The \ufb01netuned Transformer models achieve about 35.89 BLEU scores, and the DTMT achieves a 35.03 BLEU score. These \ufb01ndings demonstrate that the domain of training corpus is apart from the target domain, and hence domain adaptation has great potential in improving model performance in the target domain. As described in Section 3.3, we inject the indomain knowledge into our monolingual corpus. \fTwo In-domain knowledge transfers provide another up to 3.02 BLEU score gain (i.e., from about 35.03 to 38.05). The in-domain knowledge transfer brings more improvement compared with the normal \ufb01netuned models. Besides, we \ufb01nd that models further \ufb01netuned after in-domain transfer performs slightly better (about 0.1 BLEU). The improvement suggests that although in-domain transfer has already provided plenty of in-domain knowledge, it still has room for in-domain \ufb01netuning. We further apply advanced \ufb01netuning techniques to our models, as described in Section 3.5. The advanced \ufb01netuning further brings about 0.81 BLEU score gains, and we obtain our best single model with 39.12 BLEU scores. In our preliminary ensemble experiments, we combine some top performing models at each decoding step, but only achieve slight improvement over single models (about 0.1 BLEU). With our advanced ensemble strategies in section 3.6, further improvements are achieved over the normal ensemble (0.7 BLEU). As a result, our WMT 2020 Chinese\u2192English submission achieves a cased BLEU score of 36.9 on newstest2020, which is the highest among all submissions. 4.4 Effects of Advanced Finetuning Approaches In this section, we describe our experiments on advanced \ufb01netuning. Here we take clean models as examples, but models trained with noisy data and sampled data show similar trends. As shown in Table 3, all three advanced \ufb01netuning methods signi\ufb01cantly outperform normal \ufb01netuning. For Wider and Deeper Transformers, Minimum Risk Training provides the highest BLEU gain, which is 0.81. For Average Attention Transformer, Parallel Schedule Sampling improves the model performance from 38.38 to 38.93. For the DTMT model, Target Denoising performs the best, improving from 38.12 to 38.66. These \ufb01ndings are in line with the conclusion of Wang and Sennrich (2020) that links exposure bias with domain shift. For each type of model, we only keep the bestperforming \ufb01netuned one for the \ufb01nal model ensemble. 5"
+ }
+ ],
+ "Zhuo Chen": [
+ {
+ "url": "http://arxiv.org/abs/2403.06832v2",
+ "title": "The Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework",
+ "abstract": "The advancement of Multi-modal Pre-training highlights the necessity for a\nrobust Multi-Modal Knowledge Graph (MMKG) representation learning framework.\nThis framework is crucial for integrating structured knowledge into multi-modal\nLarge Language Models (LLMs) at scale, aiming to alleviate issues like\nknowledge misconceptions and multi-modal hallucinations. In this work, to\nevaluate models' ability to accurately embed entities within MMKGs, we focus on\ntwo widely researched tasks: Multi-modal Knowledge Graph Completion (MKGC) and\nMulti-modal Entity Alignment (MMEA). Building on this foundation, we propose a\nnovel SNAG method that utilizes a Transformer-based architecture equipped with\nmodality-level noise masking for the robust integration of multi-modal entity\nfeatures in KGs. By incorporating specific training objectives for both MKGC\nand MMEA, our approach achieves SOTA performance across a total of ten datasets\n(three for MKGC and seven for MEMA), demonstrating its robustness and\nversatility. Besides, SNAG can not only function as a standalone model but also\nenhance other existing methods, providing stable performance improvements. Our\ncode and data are available at: https://github.com/zjukg/SNAG.",
+ "authors": "Zhuo Chen, Yin Fang, Yichi Zhang, Lingbing Guo, Jiaoyan Chen, Huajun Chen, Wen Zhang",
+ "published": "2024-03-11",
+ "updated": "2024-03-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "main_content": "INTRODUCTION The exploration of multi-modal dimensions within Knowledge Graphs (KGs) has become a pivotal force in the semantic web domain, catalyzing advancements in various artificial intelligence applications. With the evolution of Large language Models (LLMs) and Multi-modal Pre-training, the imperative for a robust and comprehensive Multi-Modal Knowledge Graph (MMKG) representation learning framework has become apparent. Such a framework is essential for the effective integration of structured knowledge into multi-modal LLMs at scale, addressing prevalent challenges like knowledge misconceptions and multi-modal hallucination. Current efforts to integrate MMKG with pre-training are scarce. Triple-level methods [38] treat triples as standalone knowledge units, embedding the (head entity, relationship, tail entity) structure \u2020Corresponding author. Figure 1: While existing works design models to refuse and combat noise in MMKGs, our SnAg accepts and deliberately incorporates noise to adapt to the noisy real-world scenarios. into Visual Language Model\u2019s space. On the other hand, Graphlevel methods [18, 26] capitalize on the structural connections among entities in a global MMKG. By selectively gathering multimodal neighbor nodes around each entity featured in the training corpus, they apply techniques such as Graph Neural Networks (GNNs) or concatenation to effectively incorporate knowledge during the pre-training process. However, these approaches predominantly view MMKG from a traditional KG perspective, not fully separating the MMKG representation process from downstream or pre-training tasks. In this work, we revisit MMKG representation learning uniquely from the MMKG perspective itself, employing two tasks: Multimodal Knowledge Graph Completion (MKGC) and Multi-modal Entity Alignment (MMEA) to validate our method. Specifically, we introduce a unified Transformer-based framework (SnAg) that achieves SOTA results across an array of ten datasets by simply aligning our framework with Task-Specific Training targets. SnAg stands out for its lightweight design, efficiency, and adaptability, incorporating components like Entity-Level Modality Interaction that can be seamlessly upgraded with advanced technologies. A key aspect of our method is the Gauss Modality Noise Masking module, whose design sharply contrasts with previous MMKGrelated efforts that primarily focus on designing methods to refuse and combat noise in MMKGs. In contrast, as shown in Figure 1, our SnAg accepts and deliberately incorporates noise, adapting to the noisy real-world scenarios. This strategy can significantly boost performance across various MKGC and MMEA approaches. arXiv:2403.06832v2 [cs.CL] 20 Mar 2024 \fConference\u201917, July 2017, Washington, DC, USA Zhuo Chen et al. Importantly, as the first MMKG effort to concurrently support both MKGC and MMEA tasks, this work demonstrates its adaptability of our strategy, highlighting its potential to interface with more training tasks in the future and paving the way for further research in MMKG Pre-training and Multi-modal Knowledge Injection. 2 RELATED WORK Typically, a KG is considered multi-modal when it contains knowledge symbols expressed across various modalities, including, but not limited to, text, images, sound, or video [12]. Current research primarily concentrates on the visual modality, assuming that other modalities can be processed similarly. 2.1 MMKG Representation The current mainstream approaches to MMKG representation learning, which focus on integrating entity modalities within MMKGs, can broadly be classified into two distinct categories: (i) Late Fusion methods focus on the interactions and weighting of different modalities, typically employing techniques like Summation, Concatenation, Multi-Layer Perceptrons (MLPs), or Gating Mechanisms to aggregate features just before generating outputs. For example, MKGRL-MS [52] crafts distinct single-modal embeddings, using multi-head self-attention to evaluate the contribution of each modality to the semantic composition and summing the weighted multi-modal features for MMKG entity representation. MMKRL [36] learns cross-modal embeddings in a unified translational semantic space, merging modality embeddings for each entity through concatenation. DuMF [29] adopts a dual-track strategy, utilizing a bilinear layer for feature projection and an attention block for modality preference learning in each track, with a gate network to synthesize these features into a unified representation. (ii) Early Fusion methods integrate multi-modal feature at an initial stage, fostering deeper interaction between modalities that\u2019s essential for complex reasoning. This fosters a unified and potent entity representation, enhancing their compatibility in the process of integrating with other models. For example, CMGNN [16] first normalizes entity modalities into a unified embedding using an MLP, then refines them by contrasting with perturbed negative samples. MMRotatH [56] utilizes a gated encoder to merge textual and structural data, filtering irrelevant information within a rotational dynamics-based KGE framework. Recent studies [8, 23, 31] utilize Pre-trained Language Models (PLMs) like BERT and Vision Transformers like ViT for multi-modal data integration. They format graph structures, text, and images into sequences or dense embeddings compatible with PLMs, thereby utilizing the PLMs\u2019 reasoning capabilities and the knowledge embedded in their parameters to support downstream tasks. In this paper, we propose a Transformer-based method SnAg that introduce fine-grained, entity-level modality preference to enhance entity representation. This strategy combines the benefits of Early Fusion, with its effective modality interaction, while also aligning with the Late Fusion modality integration paradigm. Furthermore, our model is lightweight, boasting a significantly lower parameter count compared to traditional PLM-based methods, which offers increased flexibility and wider applicability. 2.2 Multi-Modal Knowledge Graph Completion Multi-modal Knowledge Graph Completion (MKGC) is crucial for inferring missing triples in existing MMKGs, involving three subtasks: Entity Prediction, Relation Prediction, and Triple Classification. Currently, most research in MKGC focuses on Entity Prediction, also widely recognized as Link Prediction, with two main methods emerging: Embedding-based Approaches build on conventional Knowledge Graph Embedding (KGE) methods [2, 45], adapted to integrate multi-modal data, enhancing entity embeddings. (i) Modality Fusion methods [21, 23, 32, 52, 57] integrate multi-modal and structural embeddings to assess triple plausibility. Early efforts, like IKRL [58], utilize multiple TransE-based scoring functions [2] for modal interaction. RSME [53] employs gates for selective modal information integration. OTKGE [3] leverages optimal transport for fusion, while CMGNN [17] implements a multi-modal GNN with cross-modal contrastive learning. (ii) Modality Ensemble methods train distinct models per modality, merging outputs for predictions. For example, MoSE [67] utilizes structural, textual, and visual data to train three KGC models and employs, using ensemble strategies for joint predictions. Similarly, IMF [27] proposes an interactive model to achieve modal disentanglement and entanglement to make robust predictions. (iii) Modality-aware Negative Sampling methods boost differentiation between correct and erroneous triples by incorporating multi-modal context for superior negative sample selection. MMKRL [36] introduces adversarial training to MKGC, adding perturbations to modal embeddings. Following this, VBKGC [66] and MANS [62] develop fine-grained visual negative sampling to better align visual with structural embeddings for more nuanced comparison training. MMRNS [59] enhances this with relation-based sample selection. Finetune-based Approaches exploit the world understanding capabilities of pre-trained Transformer models like BERT [15] and VisualBERT [25] for MKGC. These approaches reformat MMKG triples as token sequences for PLM processing [30], often framing KGC as a classification task. For example, MKGformer [8] integrates multi-modal fusion at multiple levels, treating MKGC as a Masked Language Modeling (MLM) task, while SGMPT [31] extends this by incorporating structural data and a dual-strategy fusion module. 2.3 Multi-Modal Entity Alignment Entity Alignment (EA) is pivotal for KG integration, aiming to identify identical entities across different KGs by leveraging relational, attributive, and literal (surface) features. Multi-Modal Entity Alignment (MMEA) enhances this process by incorporating visual data, thereby improving alignment accuracy accuracy [5, 35]. EVA [34] applies an attention mechanism to modulate the importance of each modality and introduces an unsupervised approach that utilizes visual similarities for alignment, reducing reliance on goldstandard labels. MSNEA [6] leverages visual cues to guide relational feature learning. MCLEA [33] employs KL divergence to mitigate the modality distribution gap between uni-modal and joint embeddings. PathFusion [68] and ASGEA [37] combine information from different modalities using the modality similarity or alignment path as an information carrier. MEAformer [9] adjusts mutual modality preferences dynamically for entity-level modality fusion, addressing inconsistencies in entities\u2019 surrounding modalities. \fThe Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework Conference\u201917, July 2017, Washington, DC, USA Figure 2: The overall framework of SnAg. Despite nearly five years of development, tasks like MMEA and MKGC have evolved independently within the MMKG community without a unified representation learning framework to address both. With the advancement of multi-modal LLMs, it\u2019s timely to reconsider these challenges from a broader perspective, aiming for a holistic framework that addresses both tasks and delivers meaningful multi-modal entity representations. 3 METHOD 3.1 Preliminaries Drawing on the categorization proposed in [69], we distinguish between two types of MMKGs: A-MMKG and N-MMKG. In AMMKGs, images are attached to entities as attributes, while in NMMKGs, images are treated as standalone entities interconnected with others. A-MMKGs are more prevalent in current research and applications within the semantic web community due to their accessibility and similarity to traditional KGs [12]. Therefore, this paper will focus exclusively on A-MMKG, unless stated otherwise. Definition 1. Multi-modal Knowledge Graph. A KG is defined as G = {E, R, A, T, V} where T = {T A, T R} with T R = E \u00d7 R \u00d7 E and T A = E \u00d7A \u00d7V. MMKG utilizes multi-modal data (e.g., images) as specific attribute values for entities or concepts, with T A = E\u00d7A\u00d7 (V \ud835\udc3e\ud835\udc3a\u222aV \ud835\udc40\ud835\udc40), where V \ud835\udc3e\ud835\udc3aand V \ud835\udc40\ud835\udc40are values of KG and multimodal data, respectively. For instance, in an MMKG, an attribute triple (\ud835\udc52,\ud835\udc4e, \ud835\udc63) in T A might associates an image as \ud835\udc63to an entity \ud835\udc52via an attribute \ud835\udc4e, typically denoted as hasImage. Definition 2. MMKG Completion. The objective of MKGC is to augment the set of relational triples T \ud835\udc45within MMKGs by identifying and adding missing relational triples among existing entities and relations, potentially utilizing attribute triples T A. Specifically, our focus is on Entity Prediction, which involves determining the missing head or tail entities in queries of the form (\u210e\ud835\udc52\ud835\udc4e\ud835\udc51,\ud835\udc5f, ?) or (?,\ud835\udc5f,\ud835\udc61\ud835\udc4e\ud835\udc56\ud835\udc59). Definition 3. Multi-modal Entity Alignment. Given two aligned MMKGs G1 and G2, the objective of MMEA is to identify entity pairs (\ud835\udc521 \ud835\udc56, \ud835\udc522 \ud835\udc56) from E1 and E2, respectively, that correspond to the same real-world entity \ud835\udc52\ud835\udc56. This process utilizes a set of pre-aligned entity pairs, divided into a training set (seed alignments S) and a testing set S\ud835\udc61\ud835\udc52, following a pre-defined seed alignment ratio \ud835\udc45\ud835\udc60\ud835\udc4e= |S|/|S \u222aS\ud835\udc61\ud835\udc52|. The modalities associated with an entity are denoted by M = {\ud835\udc54,\ud835\udc5f,\ud835\udc4e, \ud835\udc63,\ud835\udc60}, signifying graph structure, relation, attribute, vision, and surface (i.e., entity names) modalities, respectively. 3.2 Multi-Modal Knowledge Embedding 3.2.1 Graph Structure Embedding. Let \ud835\udc65\ud835\udc54 \ud835\udc56\u2208R\ud835\udc51represents the graph embedding of entity \ud835\udc52\ud835\udc56, which is randomly initialized and learnable, with \ud835\udc51representing the predetermined hidden dimension. In MKGC, we follow prior work [64] to set \u210e\ud835\udc54 \ud835\udc56= \ud835\udc39\ud835\udc36\ud835\udc54(\ud835\udc4a\ud835\udc54,\ud835\udc65\ud835\udc54 \ud835\udc56), where \ud835\udc39\ud835\udc36\ud835\udc54is a KG-specific fully connected layer applied to \ud835\udc65\ud835\udc54 \ud835\udc56with weights \ud835\udc4a\ud835\udc54. For MMEA, we follow [9, 10] to utilize the Graph Attention Network (GAT) [50], configured with two attention heads and two layers, to capture the structural information of G. This is facilitated by a diagonal weight matrix [60] \ud835\udc4a\ud835\udc54\u2208R\ud835\udc51\u00d7\ud835\udc51for linear transformation. The structure embedding is thus defined as \u210e\ud835\udc54 \ud835\udc56= \ud835\udc3a\ud835\udc34\ud835\udc47(\ud835\udc4a\ud835\udc54, \ud835\udc40\ud835\udc54;\ud835\udc65\ud835\udc54 \ud835\udc56), where \ud835\udc40\ud835\udc54refers to the graph\u2019s adjacency matrix. 3.2.2 Relation and Attribute Embedding. Our study for MKGC, consistent with the domain practices [8, 27, 53, 56, 67], focuses exclusively on relation triples. These are represented by learnable embeddings \ud835\udc65\ud835\udc5f \ud835\udc57\u2208R\ud835\udc51/2, where \ud835\udc57uniquely identifies each relation \ud835\udc5f\ud835\udc57, distinguishing it from entity indices. We exclude attribute triples to maintain consistency with methodological practices in the field. The choice of dimensionality \ud835\udc51/2 is informed by our use of the RotatE model [45] as the scoring function for assessing triple plausibility. RotatE models relations as rotations in a complex space, requiring the relation embedding\u2019s dimension to be half that of the entity embedding to account for the real and imaginary components of complex numbers. For MMEA, following Yang et al. [61], we use bag-of-words features for relation (\ud835\udc65\ud835\udc5f) and attribute (\ud835\udc65\ud835\udc4e) representations of entities (detailed in \u00a7 4.1.3) . Separate FC layers, parameterized by \ud835\udc4a\ud835\udc5a\u2208R\ud835\udc51\ud835\udc5a\u00d7\ud835\udc51, are employed for embedding space harmonization: \u210e\ud835\udc5a \ud835\udc56= \ud835\udc39\ud835\udc36\ud835\udc5a(\ud835\udc4a\ud835\udc5a,\ud835\udc65\ud835\udc5a \ud835\udc56), where \ud835\udc5a\u2208{\ud835\udc5f,\ud835\udc4e} and \ud835\udc65\ud835\udc5a \ud835\udc56 \u2208R\ud835\udc51\ud835\udc5arepresents the input feature of entity \ud835\udc52\ud835\udc56for modality \ud835\udc5a. 3.2.3 Visual and Surface Embedding. For visual embeddings, a pre-trained (and thereafter frozen) visual encoder, denoted as \ud835\udc38\ud835\udc5b\ud835\udc50\ud835\udc63, is used to extract visual features \ud835\udc65\ud835\udc63 \ud835\udc56for each entity \ud835\udc52\ud835\udc56with associated image data. In cases where entities lack corresponding image data, we synthesize random image features adhering to a normal distribution, parameterized by the mean and standard deviation \fConference\u201917, July 2017, Washington, DC, USA Zhuo Chen et al. observed across other entities\u2019 images [9, 10, 64]. Regarding surface embeddings, we leverage Sentence-BERT [40], a pre-trained textual encoder, to derive textual features from each entity\u2019s description. The [CLS] token serves to aggregate sentence-level textual features \ud835\udc65\ud835\udc60 \ud835\udc56. Consistent with the approach applied to other modalities, we utilize \ud835\udc39\ud835\udc36\ud835\udc5aparameterized by \ud835\udc4a\ud835\udc5a\u2208R\ud835\udc51\ud835\udc5a\u00d7\ud835\udc51to integrate the extracted features \ud835\udc65\ud835\udc63 \ud835\udc56and \ud835\udc65\ud835\udc60 \ud835\udc56into the embedding space, yielding the embeddings \u210e\ud835\udc63 \ud835\udc56and \u210e\ud835\udc60 \ud835\udc56. 3.3 Gauss Modality Noise Masking Recent research in MMKG [10, 19, 64] suggests that models can tolerate certain noise levels without a noticeable decline in the expressive capability of multi-modal entity representations, a finding echoed across various machine learning domains [4, 22, 43]. Additionally, Cuconasu et al. [13] observe that in the RetrievalAugmented Generation (RAG) process of LLMs, filling up the retrieved context with irrelevant documents consistently improves model performance in realistic scenarios. Similarly, Chen et al. [11] demonstrate that cross-modal masking and reconstruction can improve a model\u2019s cross-modal alignment capabilities. Inspired by evidence of model noise resilience, we hypothesize that introducing noise during MMKG modality fusion training could enhance both modal feature robustness and real-world performance. In light of these observations, we propose a new mechanism termed Gauss Modality Noise Masking (GMNM), aimed at enhancing modality feature representations through controlled noise injection at the training stage for MMKG. This stochastic mechanism introduces a probabilistic transformation to each modality feature \ud835\udc65\ud835\udc5a \ud835\udc56 at the beginning of every training epoch, described as follows: c \ud835\udc65\ud835\udc5a \ud835\udc56 = ( \ud835\udc65\ud835\udc5a \ud835\udc56, if \ud835\udc5d> \ud835\udf0c, (1 \u2212\ud835\udf16)\ud835\udc65\ud835\udc5a \ud835\udc56+ \ud835\udf16f \ud835\udc65\ud835\udc5a \ud835\udc56, otherwise, (1) where \ud835\udc5d\u223c\ud835\udc48(0, 1) denotes a uniformly distributed random variable that determines whether noise is applied, with \ud835\udf0cbeing the threshold probability for noise application to each \ud835\udc65\ud835\udc5a \ud835\udc56. Here, \ud835\udf16signifies the noise (mask) ratio. We define the generation of noise vector f \ud835\udc65\ud835\udc5a \ud835\udc56 as: f \ud835\udc65\ud835\udc5a \ud835\udc56 = \ud835\udf11\ud835\udc5a\u2299\ud835\udc67+ \ud835\udf07\ud835\udc5a, \ud835\udc67\u223cN (0, \ud835\udc3c), (2) where \ud835\udf11\ud835\udc5aand \ud835\udf07\ud835\udc5arepresent the standard deviation and mean of the modality-specific non-noisy data for \ud835\udc5a, respectively, and \ud835\udc67denotes a sample drawn from a Gaussian distribution N (0, \ud835\udc3c) with mean vector with mean 0 and identity covariance matrix \ud835\udc3c, ensuring that the introduced noise is statistically coherent with the intrinsic data variability of the respective modality. Additionally, the intensity of noise (\ud835\udf16) can be dynamically adjusted to simulate realworld data imperfections. This adaptive noise injection strategy is designed to foster a model resilient to data variability, capable of capturing and representing complex multi-modal interactions with enhanced fidelity in practical applications. Note that after the transformation from \ud835\udc65\ud835\udc5ato c \ud835\udc65\ud835\udc5a, these modified features are still subject to further processing through \ud835\udc39\ud835\udc36\ud835\udc5aas detailed in \u00a7 3.2. This critical step secures the generation of the ultimate modal representation, symbolized as c \u210e\ud835\udc5a. For clarity in subsequent sections, we will treat \u210e\ud835\udc5aand \u210e\ud835\udc5a \ud835\udc56as representing their final states, c \u210e\ud835\udc5aand c \u210e\ud835\udc5a \ud835\udc56, unless specified otherwise. 3.4 Entity-Level Modality Interaction This phase is designed for instance-level modality weighting and fusion, enabling dynamic adjustment of training weights based on modality information\u2019s signal strength and noise-induced uncertainty. We utilize a Transformer architecture [49] for this purpose, noted for its efficacy in modality fusion and its ability to derive confidence-based weighting for modalitieswhich improves interpretability and adaptability. The Transformer\u2019s self-attention mechanism is crucial for ensuring the model evaluates and prioritizes accurate and relevant modal inputs. Specifically, we adapt the vanilla Transformer through integrating three key components: Multi-Head Cross-Modal Attention (MHCA), Fully Connected Feed-Forward Networks (FFN), and Instance-level Confidence (ILC). (i) MHCA operates its attention function across \ud835\udc41\u210eparallel heads. Each head, indexed by \ud835\udc56, employs shared matrices\ud835\udc4a(\ud835\udc56) \ud835\udc5e ,\ud835\udc4a(\ud835\udc56) \ud835\udc58 ,\ud835\udc4a(\ud835\udc56) \ud835\udc63 \u2208R\ud835\udc51\u00d7\ud835\udc51\u210e(where \ud835\udc51\u210e= \ud835\udc51/\ud835\udc41\u210e), to transform input \u210e\ud835\udc5ainto queries \ud835\udc44(\ud835\udc56) \ud835\udc5a, keys \ud835\udc3e(\ud835\udc56) \ud835\udc5a, and values \ud835\udc49(\ud835\udc56) \ud835\udc5a: \ud835\udc44(\ud835\udc56) \ud835\udc5a, \ud835\udc3e(\ud835\udc56) \ud835\udc5a,\ud835\udc49(\ud835\udc56) \ud835\udc5a = \u210e\ud835\udc5a\ud835\udc4a(\ud835\udc56) \ud835\udc5e ,\u210e\ud835\udc5a\ud835\udc4a(\ud835\udc56) \ud835\udc58 ,\u210e\ud835\udc5a\ud835\udc4a(\ud835\udc56) \ud835\udc63 . (3) The output for modality\ud835\udc5a\u2019s feature is then generated by combining the outputs from all heads and applying a linear transformation: \ud835\udc40\ud835\udc3b\ud835\udc36\ud835\udc34(\u210e\ud835\udc5a) = \u00ca\ud835\udc41\u210e \ud835\udc56=1 \u210e\ud835\udc52\ud835\udc4e\ud835\udc51\ud835\udc5a \ud835\udc56\u00b7\ud835\udc4a0 , (4) \u210e\ud835\udc52\ud835\udc4e\ud835\udc51\ud835\udc5a \ud835\udc56= \u2211\ufe01 \ud835\udc57\u2208M \ud835\udefd(\ud835\udc56) \ud835\udc5a\ud835\udc57\ud835\udc49(\ud835\udc56) \ud835\udc57 , (5) where \ud835\udc4a0 \u2208R\ud835\udc51\u00d7\ud835\udc51. The attention weight \ud835\udefd\ud835\udc5a\ud835\udc57calculates the relevance between modalities \ud835\udc5aand \ud835\udc57: \ud835\udefd\ud835\udc5a\ud835\udc57= exp(\ud835\udc44\u22a4 \ud835\udc5a\ud835\udc3e\ud835\udc57/ \u221a\ufe01 \ud835\udc51\u210e) \u00cd \ud835\udc56\u2208M exp(\ud835\udc44\u22a4 \ud835\udc5a\ud835\udc3e\ud835\udc56/ \u221a\ufe01 \ud835\udc51\u210e) . (6) Besides, layer normalization (LN) and residual connection (RC) are incorporated to stabilize training: \u00af \u210e\ud835\udc5a= \ud835\udc3f\ud835\udc4e\ud835\udc66\ud835\udc52\ud835\udc5f\ud835\udc41\ud835\udc5c\ud835\udc5f\ud835\udc5a(\ud835\udc40\ud835\udc3b\ud835\udc36\ud835\udc34(\u210e\ud835\udc5a) + \u210e\ud835\udc5a) . (7) (ii) FFN: This network, consisting of two linear transformations and a ReLU activation, further processes the MHCA output: \ud835\udc39\ud835\udc39\ud835\udc41( \u00af \u210e\ud835\udc5a) = \ud835\udc45\ud835\udc52\ud835\udc3f\ud835\udc48( \u00af \u210e\ud835\udc5a\ud835\udc4a1 + \ud835\udc4f1)\ud835\udc4a2 + \ud835\udc4f2 , (8) \u00af \u210e\ud835\udc5a\u2190\ud835\udc3f\ud835\udc4e\ud835\udc66\ud835\udc52\ud835\udc5f\ud835\udc41\ud835\udc5c\ud835\udc5f\ud835\udc5a(\ud835\udc39\ud835\udc39\ud835\udc41( \u00af \u210e\ud835\udc5a) + \u00af \u210e\ud835\udc5a) , (9) where \ud835\udc4a1 \u2208R\ud835\udc51\u00d7\ud835\udc51\ud835\udc56\ud835\udc5band \ud835\udc4a2 \u2208R\ud835\udc51\ud835\udc56\ud835\udc5b\u00d7\ud835\udc51. (iii) ILC: We calculate the confidence \u02dc \ud835\udc64\ud835\udc5afor each modality via: \u02dc \ud835\udc64\ud835\udc5a= exp(\u00cd \ud835\udc57\u2208M \u00cd\ud835\udc41\u210e \ud835\udc56=0 \ud835\udefd(\ud835\udc56) \ud835\udc5a\ud835\udc57/ \u221a\ufe01 |M| \u00d7 \ud835\udc41\u210e) \u00cd \ud835\udc58\u2208M exp(\u00cd \ud835\udc57\u2208M \u00cd\ud835\udc41\u210e \ud835\udc56=0 \ud835\udefd(\ud835\udc56) \ud835\udc58\ud835\udc57 \u221a\ufe01 |M| \u00d7 \ud835\udc41\u210e) , (10) which captures crucial inter-modal interactions and tailors the model\u2019s confidence for each entity\u2019s modality. 3.5 Task-Specific Training Building upon the foundational processes detailed in previous sections, we have derived multi-modal KG representations denoted as \u210e\ud835\udc5a(discussed in \u00a7 3.3) and \u00af \u210e\ud835\udc5a(elaborated in \u00a7 3.4), along with confidence scores \u02dc \ud835\udc64\ud835\udc5afor each modality \ud835\udc5awithin the MMKG (introduced in \u00a7 3.4). \fThe Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework Conference\u201917, July 2017, Washington, DC, USA 3.5.1 MMKG Completion. Within MKGC, we consider two methods for entity representation as candidates: (i) \u00af \u210e\ud835\udc54: Reflecting insights from previous research [9, 64], graph structure embedding emerges as crucial for model performance. After being processed by the Transformer layer, \u00af \u210e\ud835\udc54not only maintains its structural essence but also blends in other modal insights (refer to Equation (4) and (5)), offering a comprehensive multi-modal entity representation. (ii) \u00af \u210e\ud835\udc4e\ud835\udc63\ud835\udc54: For an equitable multi-modal representation, we average all modality-specific representations via \u00af \u210e\ud835\udc4e\ud835\udc63\ud835\udc54= 1 |M| \u00cd \ud835\udc5a\u2208M \u00af \u210e\ud835\udc5a, where M is the set of all modalities. This averaging ensures equal modality contribution, leveraging the rich, diverse information within MMKGs. For consistency in the following descriptions, we will refer to both using the notation \u00af \u210e. We apply the RotatE model [45] as our score function to assess the plausibility of triples. It is defined as: F (\ud835\udc52\u210e,\ud835\udc5f,\ud835\udc52\ud835\udc61) = || \u00af \u210e\u210e\ud835\udc52\ud835\udc4e\ud835\udc51\u25e6\ud835\udc65\ud835\udc5f\u2212\u00af \u210e\ud835\udc61\ud835\udc4e\ud835\udc56\ud835\udc59|| , (11) where \u25e6represents the rotation operation in complex space, which transforms the head entity\u2019s embedding by the relation to approximate the tail entity\u2019s embedding. To prioritize positive triples with higher scores, we optimize the embeddings using a sigmoid-based loss function [45]. The loss function is given by: L\ud835\udc58\ud835\udc54\ud835\udc50= 1 |T R| \u2211\ufe01 (\ud835\udc52\u210e,\ud835\udc5f,\ud835\udc52\ud835\udc61)\u2208T R \u0010 \u2212log\ud835\udf0e(\ud835\udf06\u2212F (\ud835\udc52\u210e,\ud835\udc5f,\ud835\udc52\ud835\udc61)) \u2212 \u2211\ufe01\ud835\udc3e \ud835\udc56=1 \ud835\udf10\ud835\udc56log\ud835\udf0e(F (\ud835\udc52\u210e\u2032,\ud835\udc5f\u2032,\ud835\udc52\ud835\udc61\u2032) \u2212\ud835\udf06) \u0011 , (12) where \ud835\udf0edenotes the sigmoid function, \ud835\udf06is the margin, \ud835\udc3eis the number of negative samples per positive triple, and\ud835\udf10\ud835\udc56represents the selfadversarial weight for each negatively sampled triple (\ud835\udc52\u210e\u2032,\ud835\udc5f\u2032,\ud835\udc52\ud835\udc61\u2032). Concretely, \ud835\udf10\ud835\udc56is calculated as: \ud835\udf10\ud835\udc56= exp(\ud835\udf0f\ud835\udc58\ud835\udc54\ud835\udc50F (\ud835\udc52\u210e\u2032 \ud835\udc56,\ud835\udc5f\u2032 \ud835\udc56,\ud835\udc52\ud835\udc61\u2032 \ud835\udc56)) \u00cd\ud835\udc3e \ud835\udc57=1 exp(\ud835\udf0f\ud835\udc58\ud835\udc54\ud835\udc50F (\ud835\udc52\u210e\u2032 \ud835\udc57,\ud835\udc5f\u2032 \ud835\udc57,\ud835\udc52\ud835\udc61\u2032 \ud835\udc57)) , (13) with \ud835\udf0f\ud835\udc58\ud835\udc54\ud835\udc50being the temperature parameter. Our primary objective is to minimize L\ud835\udc58\ud835\udc54\ud835\udc50, thereby refining the embeddings to accurately capture MMKG\u2019s underlying relationships. 3.5.2 Multi-modal Entity Alignment. In MMEA, following [9, 10], we adopt the Global Modality Integration (GMI) derived multimodal features as the representations for entities. GMI emphasizes global alignment by concatenating and aligning multi-modal embeddings with a learnable global weight, enabling adaptive learning of each modality\u2019s quality across two MMKGs. The GMI joint embedding \u210e\ud835\udc3a\ud835\udc40\ud835\udc3c \ud835\udc56 for entity \ud835\udc52\ud835\udc56is defined as: \u210e\ud835\udc3a\ud835\udc40\ud835\udc3c \ud835\udc56 = \u00ca \ud835\udc5a\u2208M [\ud835\udc64\ud835\udc5a\u210e\ud835\udc5a \ud835\udc56] , (14) where \u00c9 signifies vector concatenation and \ud835\udc64\ud835\udc5ais the global weight for modality\ud835\udc5a, distinct from the entity-level dynamic modality weights \u02dc \ud835\udc64\ud835\udc5ain Equation (10). The distinction between MMEA and MKGC lies in their focus: MMEA emphasizes aligning modal features between entities and distinguishing non-aligned entities, prioritizing original feature retention. In contrast, MKGC emphasizes the inferential benefits of modality fusion across different multi-modal entities. As demonstrated by Chen et al. [10], the modality feature is often smoothed by the Transformer Layer in MMEA, potentially reducing entity distinction. GMI addresses this by preserving essential information, aiding alignment stability. Moreover, as a unified MMKG representation framework, modal features extracted earlier are optimized through MMEA-specific training objectives [33]. Specifically, for each aligned entity pair (\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) in training set (seed alignments S), we define a negative entity set N\ud835\udc5b\ud835\udc54 \ud835\udc56 = {\ud835\udc521 \ud835\udc57|\u2200\ud835\udc521 \ud835\udc57\u2208E1, \ud835\udc57\u2260\ud835\udc56} \u222a{\ud835\udc522 \ud835\udc57|\u2200\ud835\udc522 \ud835\udc57\u2208E2, \ud835\udc57\u2260\ud835\udc56} and utilize in-batch (B) negative sampling [7] to enhance efficiency. The alignment probability distribution is: \ud835\udc5d\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) = \ud835\udefe\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) \ud835\udefe\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) + \u00cd \ud835\udc52\ud835\udc57\u2208N\ud835\udc5b\ud835\udc54 \ud835\udc56 \ud835\udefe\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc52\ud835\udc57) , (15) where \ud835\udefe\ud835\udc5a(\ud835\udc52\ud835\udc56,\ud835\udc52\ud835\udc57) = exp(\u210e\ud835\udc5a\u22a4 \ud835\udc56 \u210e\ud835\udc5a \ud835\udc57/\ud835\udf0f\ud835\udc52\ud835\udc4e) and \ud835\udf0f\ud835\udc52\ud835\udc4eis the temperature hyper-parameter. We establish a bi-directional alignment objective to account for MMEA directions: L\ud835\udc5a= \u2212E\ud835\udc56\u2208B log[ \ud835\udc5d\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) + \ud835\udc5d\ud835\udc5a(\ud835\udc522 \ud835\udc56,\ud835\udc521 \ud835\udc56) ]/2, (16) (i) The training objective denoted as L\ud835\udc3a\ud835\udc40\ud835\udc3cwhen using GMI joint embeddings, i.e., \ud835\udefe\ud835\udc3a\ud835\udc40\ud835\udc3c(\ud835\udc52\ud835\udc56,\ud835\udc52\ud835\udc57) is set to exp(\u210e\ud835\udc3a\ud835\udc40\ud835\udc3c\u22a4 \ud835\udc56 \u210e\ud835\udc3a\ud835\udc40\ud835\udc3c \ud835\udc57 /\ud835\udf0f\ud835\udc52\ud835\udc4e). To integrate dynamic confidences into the training process and enhance multi-modal entity alignment, we adopt two specialized training objectives from UMAEA [10]: (ii) Explicit Confidenceaugmented Intra-modal Alignment (ECIA): This objective modifies Equation (16) to incorporate explicit confidence levels within the same modality, defined as: L\ud835\udc38\ud835\udc36\ud835\udc3c\ud835\udc34= \u00cd \ud835\udc5a\u2208M e L\ud835\udc5a, where: e L\ud835\udc5a= \u2212E\ud835\udc56\u2208B log[ \ud835\udf19\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) \u2217(\ud835\udc5d\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) +\ud835\udc5d\ud835\udc5a(\ud835\udc522 \ud835\udc56,\ud835\udc521 \ud835\udc56)) ]/2 . (17) Here,\ud835\udf19\ud835\udc5a(\ud835\udc521 \ud835\udc56,\ud835\udc522 \ud835\udc56) represents the minimum confidence value between entities \ud835\udc521 \ud835\udc56and \ud835\udc522 \ud835\udc56in modality\ud835\udc5a, i.e., \ud835\udf19\ud835\udc5a(\ud835\udc52\ud835\udc56,\ud835\udc52\ud835\udc57) = \ud835\udc40\ud835\udc56\ud835\udc5b( \u02dc \ud835\udc64\ud835\udc5a \ud835\udc56, \u02dc \ud835\udc64\ud835\udc5a \ud835\udc57), addressing the issue of aligning high-quality features with potentially lower-quality ones or noise. (iii) Implicit Inter-modal Refinement (IIR) refines entity-level modality alignment by leveraging the transformer layer outputs \u00af \u210e\ud835\udc5a, aiming to align output hidden states directly and adjust attention scores adaptively. The corresponding loss function is: L\ud835\udc3c\ud835\udc3c\ud835\udc45= \u00cd \ud835\udc5a\u2208M \u00af L\ud835\udc5a, where \u00af L\ud835\udc5ais also a variant of L\ud835\udc5a(Equation (16)) with \u00af \ud835\udefe\ud835\udc5a(\ud835\udc52\ud835\udc56,\ud835\udc52\ud835\udc57) = exp( \u00af \u210e\ud835\udc5a\u22a4 \ud835\udc56 \u00af \u210e\ud835\udc5a \ud835\udc57/\ud835\udf0f\ud835\udc52\ud835\udc4e). The comprehensive training objective is formulated as: L\ud835\udc52\ud835\udc4e= L\ud835\udc3a\ud835\udc40\ud835\udc3c+ L\ud835\udc38\ud835\udc36\ud835\udc3c\ud835\udc34+ L\ud835\udc3c\ud835\udc3c\ud835\udc45. Note that our SnAg framework can not only function as a standalone model but also enhance other existing methods, providing stable performance improvements in MMEA, as demonstrated in Table 4 from \u00a7 4.2.2. 4 EXPERIMENTS 4.1 Experiment Setup In MMKG datasets like DBP15KJA-EN, where 67.58% of entities have images, the image association ratio (\ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54) varies due to the data collection process [12]. 4.1.1 Datasets. MKGC: (i) DB15K [35] is constructed from DBPedia [24], enriched with images obtained via a search engine. (ii) MKG-W and MKG-Y [59] are subsets of Wikidata [51] and YAGO [44] respectively. Text descriptions are aligned with the corresponding entities using the additional sameAs links provided by the \fConference\u201917, July 2017, Washington, DC, USA Zhuo Chen et al. Table 1: MKGC performance on DB15K [35], MKG-W and MKG-Y [59] datasets. The best results are highlighted in bold, and the third-best results are underlined for each column. Models DB15K [35] MKG-W [59] MKG-Y [59] MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 IKRL (IJCAI \u201917) [58] .268 .141 .349 .491 .324 .261 .348 .441 .332 .304 .343 .383 TBKGC (NAACL \u201918) [41] .284 .156 .370 .499 .315 .253 .340 .432 .340 .305 .353 .401 TransAE (IJCNN \u201919) [55] .281 .213 .312 .412 .300 .212 .349 .447 .281 .253 .291 .330 RSME (ACM MM \u201921) [53] .298 .242 .321 .403 .292 .234 .320 .404 .344 .318 .361 .391 VBKGC (KDD \u201922) [66] .306 .198 .372 .494 .306 .249 .330 .409 .370 .338 .388 .423 OTKGE (NeurIPS \u201922) [3] .239 .185 .259 .342 .344 .289 .363 .449 .355 .320 .372 .414 IMF (WWW \u201923) [27] .323 .242 .360 .482 .345 .288 .366 .454 .358 .330 .371 .406 QEB (ACM MM \u201923) [54] .282 .148 .367 .516 .324 .255 .351 .453 .344 .295 .370 .423 VISTA (EMNLP \u201923) [23] .304 .225 .336 .459 .329 .261 .354 .456 .305 .249 .324 .415 MANS (IJCNN \u201923) [62] .288 .169 .366 .493 .309 .249 .336 .418 .290 .253 .314 .345 MMRNS (ACM MM \u201922) [59] .297 .179 .367 .510 .341 .274 .375 .468 .359 .306 .391 .455 AdaMF (COLING \u201924) [64] .325 .213 .397 .517 .343 .272 .379 .472 .381 .335 .404 .455 SnAg (Ours) .363 .274 .411 .530 .373 .302 .405 .503 .395 .354 .411 .471 w/o GMNM .357 .269 .406 .523 .365 .296 .398 .490 .387 .345 .407 .457 Table 2: Statistics for the MKGC datasets, where the symbol definitions in the table header align with Definition 1. Dataset |E| |R| |T R (Train)| |T R (Valid)| |T R (Test)| DB15K 12842 279 79222 9902 9904 MKG-W 15000 169 34196 4276 4274 MKG-Y 15000 28 21310 2665 2663 Table 3: Statistics for the MMEA datasets. Each dataset contains 15,000 pre-aligned entity pairs (|S| = 15000). Note that not every entity is paired with associated images or equivalent counterparts in the other KG. Additional abbreviations include: DB (DBpedia), WD (Wikidata), ZH (Chinese), JA (Japanese), FR (French), EN (English), DE (German). Dataset G |E| |R| |A| |T R| |T A| |V \ud835\udc40\ud835\udc40| DBP15KZH-EN ZH 19,388 1,701 8,111 70,414 248,035 15,912 EN 19,572 1,323 7,173 95,142 343,218 14,125 DBP15KJA-EN JA 19,814 1,299 5,882 77,214 248,991 12,739 EN 19,780 1,153 6,066 93,484 320,616 13,741 DBP15KFR-EN FR 19,661 903 4,547 105,998 273,825 14,174 EN 19,993 1,208 6,422 115,722 351,094 13,858 OpenEAEN-FR EN 15,000 267 308 47,334 73,121 15,000 FR 15,000 210 404 40,864 67,167 15,000 OpenEAEN-DE EN 15,000 215 286 47,676 83,755 15,000 DE 15,000 131 194 50,419 156,150 15,000 OpenEAD-W-V1 DB 15,000 248 342 38,265 68,258 15,000 WD 15,000 169 649 42,746 138,246 15,000 OpenEAD-W-V2 DB 15,000 167 175 73,983 66,813 15,000 WD 15,000 121 457 83,365 175,686 15,000 OpenEA benchmarks [48]. Detailed statistics are available in the Appendix. MMEA: (i) Multi-modal DBP15K [34] extends DBP15K [46] by adding images from DBpedia and Wikipedia [14], covering three bilingual settings (DBP15KZH-EN, DBP15KJA-EN, DBP15KFR-EN) and featuring around 400K triples and 15K aligned entity pairs per setting. (ii) MMEA-UMVM [10] includes two bilingual datasets (ENFR-15K, EN-DE-15K) and two monolingual datasets (D-W-15K-V1, D-W-15K-V2) derived from Multi-OpenEA datasets (\ud835\udc45\ud835\udc60\ud835\udc4e= 0.2) [28] and all three bilingual datasets from DBP15K [34]. It offers variability in visual information by randomly removing images, resulting in 97 distinct dataset splits with different \ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54. For this study, we focus on representative \ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54values of {0.4, 0.6, \ud835\udc5a\ud835\udc4e\ud835\udc65\ud835\udc56\ud835\udc5a\ud835\udc62\ud835\udc5a} to validate our experiments. When \ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54= \ud835\udc5a\ud835\udc4e\ud835\udc65\ud835\udc56\ud835\udc5a\ud835\udc62\ud835\udc5a, the dataset corresponds to the original Standard dataset (as shown in Table 4). Note that for the Multi-modal DBP15K dataset, the \u201c\ud835\udc5a\ud835\udc4e\ud835\udc65\ud835\udc56\ud835\udc5a\ud835\udc62\ud835\udc5a\u201d value is not 1.0. 4.1.2 Iterative Training for MMEA. We employ a probation technique for iterative training, which acts as a buffering mechanism, temporarily storing a cache of mutual nearest entity pairs across KGs from the testing set [33]. Specifically, at every \ud835\udc3e\ud835\udc52(where \ud835\udc3e\ud835\udc52= 5) epochs, models identify and add mutual nearest neighbor entity pairs from different KGs to a candidate list N\ud835\udc50\ud835\udc51. An entity pair in N\ud835\udc50\ud835\udc51is then added to the training set if it continues to be mutual nearest neighbors for \ud835\udc3e\ud835\udc60(= 10) consecutive iterations. This iterative expansion of the training dataset serves as data augmentation in the EA domain, enabling further evaluation of the model\u2019s robustness across various scenarios. 4.1.3 Implementation Details. MKGC: (i) Following Zhang et al. [64], vision encoders \ud835\udc38\ud835\udc5b\ud835\udc50\ud835\udc63are configured with VGG [42] for DBP15K, and BEiT [1] for MKG-W and MKG-Y. For entities associated with multiple images, the feature vectors of these images are averaged to obtain a singular representation. (ii) The head number \ud835\udc41\u210ein MHCA is set to 2. For entity representation in DBP15K, graph structure embedding \u00af \u210e\ud835\udc54is used, while for MKG-W and MKG-Y, mean pooling across modality-specific representations \u00af \u210e\ud835\udc4e\ud835\udc63\ud835\udc54is employed. This distinction is made due to DBP15K\u2019s denser KG and greater absence of modality information compared to MKG-W and MKG-Y. (iii) We simply selected a set of candidate parameters in AdaMF [64]. Specifically, the number of negative samples \ud835\udc3eper positive triple is 32, the hidden dimension \ud835\udc51is 256, the training \fThe Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework Conference\u201917, July 2017, Washington, DC, USA Table 4: Non-iterative MMEA results across three degrees of visual modality missing. Results are underlined when the baseline, equipped with the Gauss Modality Noise Masking (GMNM) module, surpasses its own original performance, and highlighted in bold when achieving SOTA performance. Models \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.4 \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.6 Standard H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR DBP15KZH-EN EVA [34] .623 .876 .715 .625 .877 .717 .683 .906 .762 w/ GMNM .629 .883 .724 .625 .881 .717 .680 .907 .760 MCLEA [33] .627 .880 .715 .670 .899 .751 .732 .926 .801 w/ GMNM .652 .895 .740 .699 .912 .775 .754 .933 .819 MEAformer [9] .678 .924 .766 .720 .938 .798 .776 .953 .840 w/ GMNM .680 .925 .767 .719 .939 .798 .777 .955 .841 SnAg (Ours) .735 .945 .812 .757 .953 .830 .798 .963 .858 DBP15KJA-EN EVA [34] .546 .829 .644 .552 .829 .647 .587 .851 .678 w/ GMNM .618 .876 .709 .625 .874 .714 .664 .902 .748 MCLEA [33] .568 .848 .665 .639 .882 .723 .678 .897 .755 w/ GMNM .659 .901 .745 .723 .924 .795 .752 .935 .818 MEAformer [9] .677 .933 .768 .736 .953 .815 .767 .959 .837 w/ GMNM .678 .937 .770 .738 .953 .816 .767 .958 .837 SnAg (Ours) .735 .952 .814 .771 .961 .841 .795 .963 .857 DBP15KFR-EN EVA [34] .622 .895 .719 .634 .899 .728 .686 .926 .771 w/ GMNM .628 .897 .725 .634 .900 .728 .686 .929 .772 MCLEA [33] .622 .892 .722 .694 .915 .774 .734 .926 .805 w/ GMNM .663 .916 .756 .726 .934 .802 .759 .942 .827 MEAformer [9] .676 .944 .774 .734 .958 .816 .776 .967 .846 w/ GMNM .678 .946 .776 .735 .965 .819 .779 .969 .849 SnAg (Ours) .757 .963 .835 .790 .970 .858 .814 .974 .875 OpenEAEN-FR EVA [34] .532 .830 .635 .553 .835 .652 .784 .931 .836 w/ GMNM .537 .829 .638 .554 .833 .652 .787 .935 .839 MCLEA [33] .535 .842 .641 .607 .858 .696 .821 .945 .866 w/ GMNM .554 .848 .658 .624 .873 .714 .830 .950 .874 MEAformer [9] .582 .891 .690 .645 .904 .737 .846 .862 .889 w/ GMNM .588 .895 .696 .647 .905 .738 .847 .963 .890 SnAg (Ours) .621 .905 .721 .667 .922 .757 .848 .964 .891 OpenEAEN-DE EVA [34] .718 .918 .789 .734 .921 .800 .922 .982 .945 w/ GMNM .728 .919 .794 .740 .921 .803 .923 .983 .946 MCLEA [33] .702 .910 .774 .748 .912 .805 .940 .988 .957 w/ GMNM .711 .912 .782 .762 .928 .821 .942 .990 .960 MEAformer [9] .749 .938 .816 .789 .951 .847 .955 .994 .971 w/ GMNM .753 .939 .817 .791 .952 .848 .957 .995 .971 SnAg (Ours) .776 .948 .837 .810 .958 .862 .958 .995 .972 OpenEAD-W-V1 EVA [34] .567 .796 .651 .592 .810 .671 .859 .945 .890 w/ GMNM .597 .826 .678 .611 .826 .688 .870 .953 .900 MCLEA [33] .586 .821 .672 .663 .854 .732 .882 .955 .909 w/ GMNM .604 .841 .689 .678 .869 .748 .889 .960 .915 MEAformer [9] .640 .877 .725 .706 .898 .776 .902 .969 .927 w/ GMNM .656 .884 .738 .718 .905 .786 .904 .971 .929 SnAg (Ours) .678 .897 .758 .728 .915 .796 .905 .971 .930 OpenEAD-W-V2 EVA [34] .774 .949 .838 .789 .953 .848 .889 .981 .922 w/ GMNM .787 .956 .848 .799 .958 .856 .892 .983 .924 MCLEA [33] .751 .941 .822 .801 .950 .856 .929 .984 .950 w/ GMNM .766 .956 .836 .811 .965 .868 .938 .990 .957 MEAformer [9] .807 .976 .869 .834 .980 .886 .939 .994 .960 w/ GMNM .833 .980 .886 .857 .983 .903 .942 .995 .962 SnAg (Ours) .852 .986 .901 .870 .988 .913 .946 .996 .965 batch size is 1024, the margin \ud835\udf06is 12, the temperature\ud835\udf0f\ud835\udc58\ud835\udc54\ud835\udc50is 2.0, and the learning rate is set to 1\ud835\udc52\u22124. No extensive parameter tuning was conducted; theoretically, SnAg could achieve better performance with parameter optimization. (iv) The probability \ud835\udf0cof applying noise in GMNM is set at 0.2, with a noise ratio \ud835\udf16of 0.7. MMEA: (i) Following Yang et al. [61], Bag-of-Words (BoW) is employed for encoding relations (\ud835\udc65\ud835\udc5f) and attributes (\ud835\udc65\ud835\udc4e) into fixed-length vectors (\ud835\udc51\ud835\udc5f= \ud835\udc51\ud835\udc4e= 1000). This process entails sorting relations and attributes by frequency, followed by truncation or padding to Table 5: Iterative MMEA results. Models \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.4 \ud835\udc79\ud835\udc8a\ud835\udc8e\ud835\udc88=0.6 Standard H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR DBP15KZH-EN EVA [34] .696 .902 .773 .699 .903 .775 .749 .914 .810 w/ GMNM .708 .906 .780 .705 .911 .778 .752 .919 .813 MCLEA [33] .719 .921 .796 .764 .941 .831 .818 .956 .871 w/ GMNM .741 .945 .818 .782 .954 .846 .830 .968 .882 MEAformer [9] .754 .953 .829 .788 .958 .853 .843 .966 .890 w/ GMNM .763 .947 .832 .799 .959 .860 .845 .970 .891 SnAg (Ours) .798 .957 .859 .821 .963 .876 .857 .972 .900 DBP15KJA-EN EVA [34] .646 .888 .733 .657 .892 .743 .695 .904 .770 w/ GMNM .696 .910 .773 .700 .912 .776 .745 .916 .807 MCLEA [33] .690 .922 .778 .756 .948 .828 .788 .955 .851 w/ GMNM .739 .937 .815 .796 .959 .858 .820 .969 .877 MEAformer [9] .759 .957 .833 .808 .969 .868 .831 .972 .882 w/ GMNM .769 .953 .838 .817 .967 .872 .842 .974 .890 SnAg (Ours) .808 .959 .864 .839 .975 .890 .861 .976 .904 DBP15KFR-EN EVA [34] .710 .931 .792 .716 .935 .797 .769 .946 .834 w/ GMNM .714 .929 .794 .720 .932 .798 .777 .950 .841 MCLEA [33] .731 .943 .814 .789 .958 .854 .814 .967 .873 w/ GMNM .759 .964 .840 .806 .974 .871 .837 .980 .893 MEAformer [9] .763 .963 .842 .811 .976 .874 .844 .980 .897 w/ GMNM .779 .968 .847 .817 .974 .876 .852 .981 .899 SnAg (Ours) .826 .976 .885 .852 .983 .904 .875 .987 .919 OpenEAEN-FR EVA [34] .605 .869 .700 .619 .870 .710 .848 .973 .896 w/ GMNM .606 .870 .701 .621 .874 .713 .856 .971 .898 MCLEA [33] .613 .889 .714 .702 .928 .785 .893 .983 .928 w/ GMNM .625 .902 .726 .707 .934 .790 .893 .983 .928 MEAformer [9] .660 .913 .751 .729 .947 .810 .895 .984 .930 w/ GMNM .666 .916 .755 .741 .943 .815 .905 .984 .937 SnAg (Ours) .692 .927 .778 .743 .945 .817 .907 .986 .939 OpenEAEN-DE EVA [34] .776 .935 .833 .784 .937 .839 .954 .984 .965 w/ GMNM .779 .936 .837 .789 .938 .843 .955 .984 .966 MCLEA [33] .766 .942 .829 .821 .956 .871 .969 .994 .979 w/ GMNM .779 .948 .840 .829 .959 .876 .971 .995 .980 MEAformer [9] .803 .950 .854 .835 .958 .878 .963 .994 .976 w/ GMNM .807 .949 .856 .841 .961 .882 .975 .995 .982 SnAg (Ours) .826 .962 .874 .859 .970 .899 .977 .998 .984 OpenEAD-W-V1 EVA [34] .647 .856 .727 .669 .860 .741 .916 .984 .943 w/ GMNM .663 .859 .735 .673 .862 .743 .927 .986 .950 MCLEA [33] .686 .896 .766 .770 .941 .836 .947 .991 .965 w/ GMNM .699 .907 .778 .776 .946 .840 .949 .991 .966 MEAformer [9] .718 .901 .787 .785 .934 .841 .943 .990 .962 w/ GMNM .728 .901 .793 .803 .942 .855 .956 .991 .970 SnAg (Ours) .753 .930 .820 .808 .953 .864 .958 .993 .972 OpenEAD-W-V2 EVA [34] .854 .980 .904 .859 .983 .908 .925 .996 .951 w/ GMNM .866 .980 .909 .872 .981 .913 .948 .997 .969 MCLEA [33] .841 .984 .899 .877 .990 .923 .971 .998 .983 w/ GMNM .845 .987 .902 .882 .992 .926 .973 .999 .984 MEAformer [9] .886 .990 .926 .904 .992 .938 .965 .999 .979 w/ GMNM .902 .990 .936 .918 .993 .948 .975 .999 .985 SnAg (Ours) .904 .994 .939 .924 .994 .952 .980 .999 .988 standardize vector lengths, thus streamlining representation and prioritizing significant features. For any entity \ud835\udc52\ud835\udc56, vector positions correspond to the presence or frequency of top-ranked attributes and relations, respectively. (ii) Following [5, 33], vision encoders \ud835\udc38\ud835\udc5b\ud835\udc50\ud835\udc63are selected as ResNet-152 [20] for DBP15K, and CLIP [39] for Multi-OpenEA. (iii) An alignment editing method is applied to minimize error accumulation [47]. (iv) The head number \ud835\udc41\u210ein MHCA is set to 1. The hidden layer dimensions \ud835\udc51for all networks are unified into 300. The total epochs for baselines are set to 500 with an option for an additional 500 epochs of iterative training [33]. Our training strategies incorporates a cosine warm-up schedule (15% of steps for LR warm-up), early stopping, and gradient accumulation, using the AdamW optimizer (\ud835\udefd1 = 0.9, \ud835\udefd2 = 0.999) \fConference\u201917, July 2017, Washington, DC, USA Zhuo Chen et al. Table 6: Component Analysis for SnAg on MKGC datasets. The icon v indicates the activation of the Gauss Modality Noise Masking (GMNM) module; u denotes its deactivation. By default, GMNM\u2019s noise application probability \ud835\udf0cis set to 0.2, with a noise ratio \ud835\udf16of 0.7. Our Transformer-based structure serves as the default fusion method for SnAg. Alternatives include: \u201cFC\u201d (concatenating features from various modalities followed by a fully connected layer); \u201cWS\u201d (summing features weighted by a global learnable weight per modality); \u201cAT\u201d (leveraging an Attention network for entitylevel weighting); \u201cTS\u201d (using a Transformer for weighting to obtain confidence scores \u02dc \ud835\udc64\ud835\udc5afor weighted summing); \u201cw/ Only \u210e\ud835\udc54\u201d (using Graph Structure embedding for uni-modal KGC). \u201cDropout\u201d is an experimental adjustment where Equation (1) is replaced with the Dropout function to randomly zero modal input features, based on a defined probability. Variants DB15K [35] MKG-W [59] MKG-Y [59] MRR H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 v SnAg (Full) .363 .274 .530 .373 .302 .503 .395 .354 .471 v \ud835\udf0c= 0.3, \ud835\udf16= 0.6 .361 .272 .528 .373 .302 .502 .393 .353 .468 v \ud835\udf0c= 0.1, \ud835\udf16= 0.8 .360 .272 .525 .371 .299 .496 .391 .348 .463 v \ud835\udf0c= 0.4, \ud835\udf16= 0.4 .358 .268 .526 .365 .296 .492 .388 .346 .458 v \ud835\udf0c= 0.5, \ud835\udf16= 0.2 .360 .270 .528 .368 .299 .493 .389 .348 .457 v \ud835\udf0c= 0.7, \ud835\udf16= 0.2 .359 .270 .526 .367 .299 .490 .387 .345 .456 u SnAg .357 .269 .523 .365 .296 .490 .387 .345 .457 u FC Fusion .327 .210 .522 .350 .287 .467 .378 .340 .442 u WS Fusion .334 .218 .529 .361 .298 .480 .384 .345 .449 u AT Fusion .336 .225 .528 .361 .296 .481 .379 .343 .445 u TS Fusion .335 .221 .529 .358 .292 .472 .378 .344 .437 u w/ Only \u210e\ud835\udc54 .293 .179 .497 .337 .268 .467 .350 .291 .453 u Dropout (0.1) .349 .252 .527 .361 .297 .479 .382 .344 .446 u Dropout (0.2) .346 .249 .526 .359 .294 .478 .381 .343 .446 u Dropout (0.3) .343 .242 .524 .356 .290 .477 .381 .343 .445 u Dropout (0.4) .341 .238 .521 .356 .295 .467 .379 .341 .442 with a consistent batch size of 3500. (v) The total learnable parameters of our model are comparable to those of baseline models. For instance, under the DBP15KJA-EN dataset: EVA has 13.27M, MCLEA has 13.22M, and our SnAg has 13.82M learnable parameters. 4.2 Overall Results 4.2.1 MKGC Results. As shown in Table 1, SnAg achieves SOTA performance across all metrics on three MKGC datasets, especially notable when compared with recent works like MANS [62] and MMRNS [59] which all have refined the Negative Sampling techniques. Our Entity-level Modality Interaction approach for MMKG representation learning not only demonstrates a significant advantage but also benefits from the consistent performance enhancement provided by our Gauss Modality Noise Masking (GMNM) module, maintaining superior performance even in its absence. 4.2.2 MMEA Results. As illustrated in the third segment of Table 4, our SnAg achieves SOTA performance across all metrics on seven standard MMEA datasets. Notably, in the latter four datasets of the OpenEA series (EN-FR-15K, EN-DE-15K, D-W-15K-V1, D-W-15KV2) under the Standard setting where \ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54= 1.0 indicating full image representation for each entity, our GMNM module maintains or even boosts performance. This suggests that strategic noise integration can lead to beneficial results, demonstrating the module\u2019s effectiveness even in scenarios where visual data is abundant and complete. This aligns with findings from related work [10, 12], which suggest that image ambiguities and multi-aspect visual information can sometimes misguide the use of MMKGs. Unlike these studies that typically design models to refuse and combat noise, our SnAg accepts and intentionally integrates noise to better align with the inherently noisy conditions of real-world scenarios. Most importantly, as a versatile MMKG representation learning approach, it is compatible with both MMEA and MKGC tasks, illustrating its robust adaptability in diverse operational contexts. 4.3 Uncertainly Missing Modality. The first two segments from Table 4 present entity alignment performance with \ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54= 0.4, 0.6, where 60%/40% of entities lack image data. These missing images are substituted with random image features following a normal distribution based on the observed mean and standard deviation across other entities\u2019 images (details in 3.2.3). This simulates uncertain modality absence in real-world scenarios. Our method outperforms baselines more significantly when the modality absence is greater (i.e., \ud835\udc45\ud835\udc56\ud835\udc5a\ud835\udc54= 0.4), with the GMNM module providing notable benefits. This demonstrates that intentionally introducing noise can increase training challenges while enhancing model robustness in realistic settings. 4.4 Ablation studies. In Table 6, we dissect the influence of various components on our model\u2019s performance, focusing on three key aspects: (i) Noise Parameters: The noise application probability \ud835\udf0cand noise ratio \ud835\udf16are pivotal. Optimal values of \ud835\udf0c= 0.2 and \ud835\udf16= 0.7 were determined empirically, suggesting that the model tolerates up to 20% of entities missing images and that a modality-mask ratio of 0.7 acts as a soft mask. For optimal performance, we recommend empirically adjusting these parameters to suit other specific scenario. Generally, conducting a grid search on a smaller dataset subset can quickly identify suitable parameter combinations. (ii) Entity-Level Modality Interaction: Our exploration shows that absence of image information (w/ Only \u210e\ud835\udc54) markedly reduces performance, emphasizing MKGC\u2019s importance. Weighted summing methods (WS, AT, TS) surpass simple FC-based approaches, indicating the superiority of nuanced modality integration. Purely using Transformer modality weights \u02dc \ud835\udc64\ud835\udc5afor weighting does not show a clear advantage over Attention-based or globally learnable weight methods in MKGC. In contrast, our approach using \u00af \u210e\ud835\udc54(for DBP15K) and \u00af \u210e\ud835\udc4e\ud835\udc63\ud835\udc54(for MKG-W and MKG-Y) which significantly outperforms others, demonstrating their efficacy. (iii) Modality-Mask vs. Dropout: In assessing their differential impacts, we observe that even minimal dropout (0.1) adversely affects performance, likely because dropout to some extent distorts the original modal feature distribution, thereby hindering model optimization toward the alignment objective. Conversely, our modality-mask\u2019s noise is inherent, replicating the feature distribution seen when modality is absent, and consequently enhancing model robustness more effectively. \fThe Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework Conference\u201917, July 2017, Washington, DC, USA 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.05686v2",
+ "title": "Bridging the gap between luminous red novae and common envelope evolution: the role of recombination energy and radiation force",
+ "abstract": "Luminous red novae (LRNe) and their connection to common envelope evolution\n(CEE) remain elusive in astrophysics. Here, we present a radiation hydrodynamic\nmodel capable of simulating the light curves of material ejected during a CEE.\nFor the first time, the radiation hydrodynamic model incorporates complete\nrecombination physics for hydrogen and helium. The radiation hydrodynamic\nequations are solved with Guangqi. With time-independent ejecta simulations, we\nshow that the peaks in the light curves are attributed to radiation-dominated\nejecta, while the extended plateaus are produced by matter-dominated ejecta. To\nshowcase our model's capability, we fit the light curve of AT2019zhd. The\ncentral mass object of $6M_{\\odot}$ is assumed based on observations and\nscaling relations. Our model demonstrates that the ejecta mass of AT2019zhd\nfalls within the range of $0.04M_{\\odot}$ to $0.1M_{\\odot}$. Additionally, we\ndemonstrate that recombination energy and radiation force acceleration\nsignificantly impact the light curves, whereas dust formation has a limited\neffect during the peak and plateau phases.",
+ "authors": "Zhuo Chen, Natalia Ivanova",
+ "published": "2024-02-08",
+ "updated": "2024-02-17",
+ "primary_cat": "astro-ph.SR",
+ "cats": [
+ "astro-ph.SR",
+ "astro-ph.HE"
+ ],
+ "main_content": "INTRODUCTION Since the discovery of V1309 Sco (Tylenda et al. 2011), a luminous red nova (LRN), also a confirmed binary merger, many more new LRNe have been discovered (Kurtenkov et al. 2015; Blagorodnova et al. 2017; Cai et al. 2019; Pastorello et al. 2019; Blagorodnova et al. 2021; Pastorello et al. 2021a,b; Cai et al. 2022a,b), to name a few. It is argued that, at least for some of the LRNe, the progenitors are binary stars that have undergone common envelope evolution (CEE) (Ivanova et al. 2013a) one of the most mysterious events in binary evolution. Bridging the gap between the observables of LRNe and theoretical models of CEE could be crucial to getting a better understanding of binary evolution (Chen et al. 2024), including cataclysmic variables (Warner 2003), X-ray binaries (Reig 2011), gravitational wave sources (Renzo et al. 2021), and type Ia supernovae (Wang & Han 2012; Liu et al. 2023). Corresponding author: Zhuo Chen (\u9648\u5353) chenzhuo astro@tsinghua.edu.cn Observationally, LRNe exhibit three prominent characteristic features. Firstly, they all have extended plateaus with decreasing effective temperatures in the light curves. This phenomenon suggests a substantial mass ejection from the central object, which cools as it expands outward. During this phase, hydrogen and helium may recombine and release their latent heat, contributing to the thermal energy of the plateau (Ivanova et al. 2013b, 2015; Matsumoto & Metzger 2022). Secondly, a notable number of LRNe present a peak preceding the plateau in their light curves. Modeling this peak is challenging due to its exponential increasing and decreasing phases. Pejcha et al. (2017) employed a hydrodynamic model with post-processing to fit the slowly increasing luminosity before the peak, attributing the peak to an eruption-like event. Although Matsumoto & Metzger (2022) successfully fitted the rapidly decreasing luminosity after the peak with their hot ejecta model in 1D, the rapidly increasing phase remains a puzzle. Thirdly, LRNe frequently exhibit strong H\u03b1 emission (Munari et al. 2002; Stritzinger et al. 2020; Blagorodnova et al. 2021; Pastorello et al. 2021a), indicating a arXiv:2402.05686v2 [astro-ph.SR] 17 Feb 2024 \f2 possible collision between the ejecta and circumstellar matter (Blagorodnova et al. 2020). A prevailing theory on the origin of LRNe implicates the progenitor binary undergoes CEE and experiences a plunge-in phase accompanied with ejection of some envelope material. Numerous 3D simulations have been conducted to study the CEE dynamics (Nandez et al. 2014, 2015; Ivanova & Nandez 2016; Nandez & Ivanova 2016; Ohlmann et al. 2016; Chamandy et al. 2018; Iaconi et al. 2019; Prust & Chang 2019; Reichardt et al. 2019; Sand et al. 2020; Gonz\u00b4 alez-Bol\u00b4 \u0131var et al. 2023; R\u00a8 opke & De Marco 2023; Chamandy et al. 2024). However, due to the complexity of physical processes and the multi-scale nature of the CEE problem, there is still no consensus on how much mass is ejected during a CEE, even if 3D simulations start with the same initial conditions. Furthermore, predicting the observational appearance of these simulations is inherently difficult (Hatfull et al. 2021). Notably, radiation hydrodynamics, a crucial factor in CEE, is often absent from these 3D models due to its difficulty and is easier to consider in 1D models (Soker et al. 2018; Bronner et al. 2023; O\u2019Connor et al. 2023). Recent work by Matsumoto & Metzger (2022) demonstrates the effectiveness of a cooling shell model, devoid of radiation transport and radiation hydrodynamics, in producing LRNe-like light curves. However, there is still a lack of first principle models that can relate CEE to LRNe. In CEE, the plunge-in phase marks the rapid conversion of the gravitational potential energy into the kinetic energy. Some kinetic energy would be converted to thermal energy via a shock between the plunge-in star and the envelope (MacLeod et al. 2017). We would expect the shocked gas to be close to adiabatic because of the high optical depth inside a common envelope (CE). The shocked gas may be accelerated in the radial direction due to high radiation pressure and radiation force acceleration \u2013 this has been overlooked previously \u2013 and become an eruptionlike event, eventually appearing as an LRN. CEE is a 3D problem by nature. However, solving 3D radiation hydrodynamic equations with complex physics is computationally demanding. Therefore, as the first step, and hoping to resolve the microphysics better, we start by approximating the radiation hydrodynamic problem with 1D spherical symmetry. In a follow-up work, we will revisit this problem with 2D axisymmetric models. This Letter introduces the first radiation hydrodynamic model that comprehensively incorporates recombination energies of H and He, radiation transport, and radiation force acceleration. The model can successfully produce both the peak and plateau phases of the light curve, allowing for the estimation of ejecta mass through curve fitting. The organization of the Letter is as follows: Section 2 presents the physical model, governing equations, initial and boundary conditions, and numerical setups, including Adaptive Mesh Refinement (AMR) criteria. Simulation results are presented in Section 3. The Letter concludes in Section 4. 2. METHODS 2.1. Physical model We adopt the following radiation hydrodynamic equations to model the evolution of the LRNe in a 1D spherical coordinate, \u2202\u03c1 \u2202t + 1 r2 \u2202 \u2202r \u0000r2\u03c1v \u0001 = 0, (1) \u2202\u03c1v \u2202t + 1 r2 \u2202 \u2202r \u0000r2\u03c1v2\u0001 = \u2212\u2202p \u2202r + \u03c1(arad \u2212g), (2) \u2202E \u2202t + 1 r2 \u2202 \u2202r[r2(E + p)v] = \u03c1v(arad \u2212g) + G, (3) \u2202Er \u2202t + 1 r2 \u2202 \u2202r[r2(Fr + Erv)] = \u2212G, (4) where \u03c1, v, p, and E are the density, radial velocity, pressure, and total energy of the gas. In addition, arad (to be explained later) and g = GM\u22c6/r2 are the radiation and gravitational force acceleration, respectively, where G is the gravitational constant and M\u22c6is the mass of the central object. The time and coordinate are denoted by t and r. The radiation-related variables Er, Fr, and G are explained later in this section. We adopt a simple hydrogen and helium mixture equation of state (EoS, see Appendix A), E = \u03c1v2 2 + eg(\u03c1, Tg) (5) p = X i nikbTg (6) where i is the species index, ni is the number density of species i, eg and Tg are the internal energy and temperature of the gas, kb is the Boltzmann constant. Throughout this Letter, we assume the hydrogen mass ratio X = 0.74 and the helium mass ratio Y = 0.26 for simplicity, because metal does not contribute much to gas thermodynamics. The hydrodynamics, together with the complex EoS, is solved by the approximate HLLC Riemann solver in Chen et al. (2019). To solve the radiation transport problem, we use the flux-limited diffusion (FLD) approximation, which relates the radiation flux Fr to the gradient of radiation energy \u2202Er/\u2202r with a flux limiter \u03bb (Levermore & Pom\f3 raning 1981), i.e., Fr = \u2212c\u03bb(R) \u03baR\u03c1 \u2202Er \u2202r , (7) \u03bb(R) = 2 + R 6 + 3R + R2 , (8) R = |\u2202Er/\u2202r| \u03baR\u03c1Er , (9) where \u03baR is the Rosseland mean opacity (see Appendix B). The flux limiter has the property that c\u03bb \u03baR\u03c1 \u2202Er \u2202r \u2192 \uf8f1 \uf8f2 \uf8f3 c 3\u03baR\u03c1 \u2202Er \u2202r optically thick, cEr optically thin. (10) The radiation and gas energy coupling is modeled by solving the following equations implicitly with subtimesteps, \u2202eg \u2202t = G, (11) \u2202Er \u2202t + 1 r2 \u2202 \u2202r[r2(Fr + Erv)] = \u2212G, (12) where G = \u03baP\u03c1c(Er \u2212arT 4 g ) is the energy coupling strength, \u03baP is the Planck mean opacity, and ar is the radiation constant. Meanwhile, the radiation and gravitational acceleration are integrated explicitly through, \u2202\u03c1v \u2202t = \u03c1(arad \u2212g), (13) where arad = \u03baRFr/c is the radiation force acceleration and c is the speed of the light. We use Guangqi(Chen & Bai in prep) to solve Equation 1-4. Guangqi is a second-order in time and space accurate and finite volume radiation hydrodynamic code. It has HLLC Riemann solvers, realistic EoS (Chen et al. 2019), and adaptive mesh refinement (AMR). Guangqi solves the radiation transport problem with FLD approximation implicitly (similar to Kolb et al. (2013)), using iterative solvers from Petsc (Balay et al. 1997, 2019). Currently, Guangqi has spherical and Cartesian geometry in 1D and 2D. 2.2. Initial and boundary conditions We set our initial condition to be an outflow at escape velocity with a constant mass loss rate, \u03c1init(r) = \u03c10(rin/r)3/2, (14) vinit(r) = p 2GM\u22c6/r (15) Tinit(r) = T0(rin/r) (16) where \u03c10 = 10\u221213g\u00b7cm\u22123, T0 = 1000K, and rin = 10R\u2299 is the inner radius of the computational domain. The constant mass loss rate is \u02d9 M = 2.30 \u00d7 10\u22127M\u2299\u00b7yr\u22121, which is low compared to the mass loss rate of the ejecta of the CEE. Radiation\u2019s initial condition is assumed to be in local thermal equilibrium (LTE) with gas. The initial mass and energy in the computational domain are 8.52\u00d710\u22127M\u2299and 3.68\u00d71035erg, respectively. They are significantly smaller than the ejecta\u2019s mass and energy, and our light curve results are insensitive to the initial condition. The outer boundary is free, i.e., the gas and radiation can leave the computational domain freely. In our simulations, we set the out boundary at rout =4000R\u2299. The free boundary for the radiation is, \u2202(r2Er) \u2202r = 0. (17) This outer boundary condition means that the radiation flux is optically thin and outward. We can confirm that the outer region of our computational domain is indeed optically thin (see Section 3.2) and always outward. We calculated the luminosity at the outer boundary by, L = 4\u03c0r2 outFr(rout) (18) The inner boundary is time-dependent. We specify the time-dependent density, velocity, and temperature of the ejecta at the inner boundary, i.e., [\u03c1(t), vej(t), Tg(t)]. vej = fej(t) p 2GM\u22c6/rin, (19) \u03c1= \u02d9 M/(4\u03c0r2 invej), (20) Tg = \u03b1mHv2 ej/2kb, (21) where fej(t) can be a time-dependent factor, \u02d9 M is the mass loss rate, and we introduce \u03b1 to characterize the temperature of the ejecta. When the ejection of the CEE stops, the inner boundary of the gas is changed to free to let the fallback gas pass through the inner boundary, therefore, we do not model any fallback shocks and fallback accretion disks. Meanwhile, radiation transport is turned off at the inner boundary. Mathematically, we use the zero-gradient (\u2202Er/\u2202r = 0) radiation inner boundary condition. We assume the gas and radiation are in LTE at the inner boundary because the density and opacity are very high inside the CE, and the thermal timescale is short. 2.3. Numerical setups We adopt a uniform base grid with spherical geometry. The computational domain is r \u2208[10, 4000]R\u2299, and the base resolution is N = 1536. We add 5 levels of static mesh refinement where r \u2208[10, 15]R\u2299to resolve the strong gradient of the gravitational potential. Each level of mesh refinement doubles the resolution. We also \f4 adaptively refine zones with temperature gradients up to 5 levels. The mesh refine and derefine criterion are as follows, refine |Tg,i+1 \u2212Tg,i\u22121| 2Tg,i > 0.03, (22) derefine |Tg,i+1 \u2212Tg,i\u22121| 2Tg,i < 0.001, (23) where i\u00b11 in the subscript represents the cell index (not to be confused with species index). We can capture the shock, radiative layer, and radiation-dominated zones with photon trapping with AMR. The finest cell has a length of 8.12\u00d710\u22122R\u2299. The Courant\u2013Friedrichs\u2013Lewy number is 0.95 in our simulations. The simulation time is 4.5 \u00d7 106s=52.08 days. 3. RESULTS We first show some simple simulations and get a sense of the correspondence between the properties of the ejecta and the light curves. After that, we fit the light curve of AT2019zhd with a more complex ejecta. 3.1. Simple ejecta For simplicity, in this subsection, we set M\u22c6= 6M\u2299, rin = 10R\u2299. We consider seven models and list the physical properties of the seven models in Table 1. In what follows, we refer to a model as radiation-dominated if Er/eg \u226b1, and as matter-dominated if Er/eg < 1. In particular, we calculate the total energy Etotal of the ejecta by, Etotal = 4\u03c0r2 in(Er + eg)fej p 2GM\u22c6/rintej, (24) where tej is the duration of the ejection. Figure 1 shows the light curves of the simple models. We can see that the light curves of m02a045v2 and m01a18 have pronounced peaks, i.e., rapid, exponential increases and decreases, while m25a18 and m25a09 have long plateaus. The light curves of other models are in between. Because the physical system is highly nonlinear, we provide a qualitative discussion here. The optical depth of an ejecta decreases as it expands, and the ejecta cools faster as the optical depth decreases. The more mass in an ejecta, the higher the optical depth; the faster the ejecta, the faster the optical depth decreases. Therefore, m25a18 and m25a09 have long plateaus because their ejecta are the most massive ones among the seven simple models; m25a18 has a longer plateau than m25a09 because m25a18 has a higher energy budget and thus a longer cooling timescale. On the other hand, m02a045v2 has a sharper peak than m01a18 because its expansion speed is faster, resulting in a more 5 10 15 20 25 30 t [day] 1035 1036 1037 1038 1039 L [erg/s] m01a18 m01a09 m05a18 m05a09 m25a18 m25a09 m02a045v2 Figure 1. The light curves of simple simulations with different mass and temperature of the ejecta. The physical parameters of the ejecta are listed in Table1. rapid decrease in the optical depth and a shorter cooling timescale. Because m02a045v2 and m01a18 have the same amount of energy budget, m02a045v2 has a higher luminosity peak. Overall, we can also relate that we observe the luminosity peak formation (exponential increase and decrease) in the case of radiation-dominating ejecta. In contrast, the plateau is the feature observed in the case of matter-dominated ejecta. 3.2. Fitting AT2019zhd We choose AT2019zhd as a fitting example because it is a recently well-observed LRN and its light curve resembles V1309 Sco, whose progenitor is a confirmed binary (Tylenda et al. 2011; Pastorello et al. 2021a). The mass of the progenitor of AT2019zhd is unclear (Pastorello et al. 2021a). We take the following steps to estimate the total mass of the merger product. 1. The total mass of V1309 Sco is estimated to be 1 \u2212 2M\u2299(Tylenda et al. 2011). 2. The peak luminosity of AT2019zhd is roughly LAT2019zhd = 2.08 \u00d7 1039erg\u00b7s\u22121, and the peak captured luminosity of V1309 Sco is LV1309Sco = 1.26 \u00d7 1038erg\u00b7s\u22121. Therefore, LAT2019zhd/LV1309Sco = 16.5. 3. The total radiation energy released during the peak and the plateau of AT2019zhd is roughly 10 times more than V1309 Sco. If we assume that the luminosity energy mainly comes from the release of the gravitational potential energy, it \f5 model fej tej \u02d9 M \u2206M \u03c1 \u03b1 Tg Er/eg Etotal [days] [M\u2299\u00b7 yr\u22121] [M\u2299] [g\u00b7cm\u22123] [K] [erg] m01a18 1 1 0.1 2.74 \u00d7 10\u22124 2.17 \u00d7 10\u22128 0.018 2.50 \u00d7 105 19.9 7.76 \u00d7 1044 m02a045v2 2 0.5 0.2 2.74 \u00d7 10\u22124 2.17 \u00d7 10\u22128 0.0045 2.50 \u00d7 105 19.9 7.76 \u00d7 1044 m01a09 1 1 0.1 2.74 \u00d7 10\u22124 2.17 \u00d7 10\u22128 0.009 1.25 \u00d7 105 2.01 6.91 \u00d7 1043 m05a18 1 1 0.5 1.37 \u00d7 10\u22123 1.08 \u00d7 10\u22127 0.018 2.50 \u00d7 105 3.98 9.24 \u00d7 1044 m05a09 1 1 0.5 1.37 \u00d7 10\u22123 1.08 \u00d7 10\u22127 0.009 1.25 \u00d7 105 0.40 1.61 \u00d7 1043 m25a18 1 1 2.5 6.85 \u00d7 10\u22123 5.42 \u00d7 10\u22127 0.018 2.50 \u00d7 105 0.80 1.66 \u00d7 1045 m25a09 1 1 2.5 6.85 \u00d7 10\u22123 5.42 \u00d7 10\u22127 0.009 1.25 \u00d7 105 0.08 6.20 \u00d7 1044 Table 1. From left to right, model name, ejecta\u2019s velocity factor, ejecta duration tej, mass loss rate \u02d9 M, cumulative mass loss \u2206M, ejecta\u2019s density, \u03b1, the temperature of the ejecta, the ratio of Er and eg, and the total energy of the ejecta of the seven simple models. may scale as, L \u223cM 2. (25) This scaling relation motivates us to set the mass of the central object to be the same as was adopted for the simple models, 6M\u2299, and we set \u03b1 = 0.018 in this subsection. Unlike the case of simple models, the inner boundary conditions here are time-dependent, see Figure 2. As a result, the time-dependent ejecta is initially radiationdominated (denoted by the orange color in Figure 2) and then transits to matter-dominated. We anticipate that the initial high-temperature ejecta is produced by a shock between the rapidly plunge-in companion star and the envelope (MacLeod et al. 2017). The plungein speed vp is comparable to the Keplerian speed at the plunge-in radius rp, but much larger than the envelope\u2019s speed venv. On the other hand, the kinetic energy required to produce Er and eg in the ejecta, converted to speed, can be calculated by, vsource = s 2(Er + eg) \u03c1 , (26) where \u03c1 is the ejecta\u2019s density. Due to energy conversion, v2 p \u2248GM\u22c6 rp > v2 env + v2 source. (27) The maximum vsource in Figure 2 is 448km\u00b7s\u22121, which means that rp < 5.69R\u2299. More sophisticated stellar and binary evolution analysis should be considered to further narrow down rp and M\u22c6(Ge et al. 2010, 2015, 2020). We carry out simulations of two sub-models: the shock model and the shock-free model. The shock model has an ejecta with a slightly increasing speed during the late stage and the shock-free model has an ejecta with a slightly decreasing speed during the late stage. Figure 2 shows the time-dependent variables of the two submodels. The functions that generate these variables can be found in Appendix C. Figure 3 shows the light curves of the shock and shockfree models. We have shifted the time axis to let the peak be located at t = 0. We can see that the light curves of the shock and shock-free models both resemble AT2019zhd. Consequently, the time evolution of the shock-free model is similar to the shock model, therefore, we just show the time evolution of the shock model in Figure 4 to save space. In particular, we calculate the optical depth of the radiation flux by, \u03c4R(r) = Z rout r \u03c1\u03baRdr. (28) The time axis is adjusted to be consistent with the light curves. The time evolution of H and He species can be found in Appendix D, since they are determined by \u03c1 and Tg. There are several prominent results in the time evolution plots, see Figure 5. 1. In panel 2, the temperature profile has a spike near the peak of the luminosity. This is when the radiation-dominated ejecta becomes optically thin and the radiation heats the surrounding gas. 2. The observationally important ejecta material is the one that has a temperature of about 5000K. The ejecta\u2019s matter that is colder than 5000K is closer to the observer but is mostly transparent, revealing behind it the ejecta\u2019s matter with a temperature of about 5000K. The time-evolution of the surface log10 Tg = 3.7 (see panel 2) resembles the shape of the light curve in Figure 3. 3. Panel 3 shows that the arad exceeds the g at small radii during the first 30 days, and at large radii during the late stage. This phenomenon is consistent with the evolution of \u03baR. When 104 \u2272Tg \u2272105K, most of the gas is ionized and \u03baR is large, radiation flux can be significant. When Tg < 1400K, dust may form and provide the opacity, and the arad becomes \f6 400 450 500 vej [km/s] vesc 0.5 1.0 1.5 M [M yr 1] 200 300 400 vsource [km/s] shock shock-free 0.00 0.05 0.10 M [M ] 0 5 10 15 20 t [day] 1.5 2.0 2.5 3.0 T [\u00d7105K] 0 5 10 15 20 t [day] 100 101 Er/eg Figure 2. The solid and dashed lines are the time-dependent inner boundary conditions of the shock and shock-free models, respectively. The red line indicates the escape velocity at the inner boundary. The orange region denotes the radiation-dominated ejecta. On the left panel, from top to bottom, each plot shows the velocity of the ejecta, vsource calculated from Equation 26, and the temperature of the ejecta. On the right panel, from the top to the bottom, they are the mass loss rate, cumulative mass loss, and Er/eg, respectively. The shock model and shock-free model also differ in the duration of the ejecta. 0 10 20 30 40 t [day] 1038 1039 L [erg/s] shock shock-free AT2019zhd Figure 3. The solid line with black dots is the observed light curve of AT 2019zhd (Pastorello et al. 2021a), and the red and blue solid lines are the light curves of the shock and the shock-free models, respectively. significant again. At this stage, the LRNe is similar to the asymptotic-giant-branch (AGB) stars (H\u00a8 ofner & Olofsson 2018). 4. In panel 4, the radius of \u03c4R = 1 expands rapidly when the inner region of the ejecta cools off because dust can form at low temperatures and obscure the object. 5. The inner region (r < 2000R\u2299) transits from radiation-dominated to matter-dominated as the ejecta expands into the ambient and cools off. It can be seen in panel 5 of Figure 4. Our model is comprehensive in the sense that we incorporate many physical processes. To address the impact of each specific physics on the light curve, we run some companion simulations by turning off the corresponding physics in Appendix E, F, and G. We show that our calculations are converged in Appendix H. 4."
+ }
+ ],
+ "Kai Yu": [
+ {
+ "url": "http://arxiv.org/abs/2311.13617v1",
+ "title": "Boosting3D: High-Fidelity Image-to-3D by Boosting 2D Diffusion Prior to 3D Prior with Progressive Learning",
+ "abstract": "We present Boosting3D, a multi-stage single image-to-3D generation method\nthat can robustly generate reasonable 3D objects in different data domains. The\npoint of this work is to solve the view consistency problem in single\nimage-guided 3D generation by modeling a reasonable geometric structure. For\nthis purpose, we propose to utilize better 3D prior to training the NeRF. More\nspecifically, we train an object-level LoRA for the target object using\noriginal image and the rendering output of NeRF. And then we train the LoRA and\nNeRF using a progressive training strategy. The LoRA and NeRF will boost each\nother while training. After the progressive training, the LoRA learns the 3D\ninformation of the generated object and eventually turns to an object-level 3D\nprior. In the final stage, we extract the mesh from the trained NeRF and use\nthe trained LoRA to optimize the structure and appearance of the mesh. The\nexperiments demonstrate the effectiveness of the proposed method. Boosting3D\nlearns object-specific 3D prior which is beyond the ability of pre-trained\ndiffusion priors and achieves state-of-the-art performance in the single\nimage-to-3d generation task.",
+ "authors": "Kai Yu, Jinlin Liu, Mengyang Feng, Miaomiao Cui, Xuansong Xie",
+ "published": "2023-11-22",
+ "updated": "2023-11-22",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "Introduction Estimating 3D model from only one input image is challenging for the ambiguity and the complexity of real world objects. Many previous works [11, 28, 29, 38, 42] focus on only some particular category, such as human, due to the promising application. 3D human dataset are collected first to train a 3D network. Whereas, these kinds of methods are not applicable to open-vocabulary image-to-3D task due to the lack of diverse 3D datasets. To solve the dataset problem, some previous works try to learn 3D structure from only 2D image collections [30, 33]. 2D image collections such as ImageNet [3] contain diverse images with different view angles. And thus, 3D structures can be learned from these 2D images. Recent amazing progress in diffusion models makes it possible to generate diverse images with only text prompt. 2D diffusion models are trained using billions of images LAION 5B [31], which contain object photos taken from different views. Welltrained 2D diffusion models can thus be used to learn 3D structures for open-vocabulary objects. Many recent works [2, 21, 34, 37] use 2D diffusion prior for text-to-3D generation. 3D representation networks such as NeRF or DMtet [32] are trained using pretrained 2D diffusion models with SDS [21] or VDS loss [37]. Furthermore, some recent works use diffusion prior to solve openvocabulary image-to-3D task. Image-to-3D aims at estimating 3D structure given an input image. As the input image may be different from typical generated images from 2D diffusion model, it becomes more difficult to train than text-to-3D task. Zero-1-to-3 [15] trains a diffusion model using multiple views of images rendered from 3D dataset. This trained diffusion model using 3D dataset is more powerful than normal pretrained diffusion models in respect to 3D capability and is referred as 3D diffusion prior. Lots of recent methods [14, 17, 22, 35, 39] manage to train a 3D representation network with only one given image using 2D diffusion prior or 3D diffusion prior. Although amazing progress has been achieved by recent methods, we notice that it may fail when the input image containing uncommon objects with asymmetry structure, such as objects from video games. These kinds of irregular object are beyond the ability of normal 2D diffusion prior and 3D diffusion prior. Because of this, we propose Boosting3D to boost normal 2D diffusion prior to 3D diffusion prior with progressive learning. First, we optimize a coarse NeRF using the pretrained diffusion models. Simutaneously, we train a LoRA for the specific input object. Next we train the LoRA and NeRF in a progressive way. The LoRA and NeRF will boost each other while training. After this step, we obtain a refined NeRF and a well trained LoRA with object-level 3D prior. Finally, we extract a coarse surface mesh from the trained NeRF and finetune both surface geometry and appearance using the trained LoRA. Our method is able to obtain highquality and stable 3D object from one input image as shown in Fig.1. In summary, we make the following three main contributions: \u2022 We present Boosting3D, a novel image-to-3D pipeline that uses three-stage optimization process, i.e. coarse NeRF, fine NeRF and mesh refinement, to generate a high-quality textured mesh. \u2022 We propose a novel 3D mesh optimization method that can explicitly optimize 3D model representation and texture using T2I model. The proposed method outperforms explicit 3D representation method DMtet in terms of mesh and texture quality. \u2022 We boost 2D diffusion prior to 3D prior in a bootstrap way by training object-level LoRA . Our method achieves state-of-the-art results in 3D reconstruction of single objects for both real-world photos and synthetic images. 2. Related work 2.1. Diffusion models Recently, large-scale diffusion models have shown great performance in text-to-image synthesis [7], which provides an opportunity to utilize it for zero-shot text-to-3D generation [6, 13, 21, 37]. LoRA[8] propose to use the low rank matrix to learn the generation information of a category or object, reducing the amount of trained parameters. Dreambooth[27] propose a training method that uses a fixed prompt and a small number of samples to finetune the whole model. Both methods enable learning the specific objectlevel information at a low cost. To acquire different views of the input image, Zero-1-to3 [15] and syncdreamer[16] train a diffusion model using multiple views of images rendered from 3D dataset. The trained diffusion model can then be used to generated multiple views of the given image. For the capability of generating multiple views, this diffusion model is treated as 3D diffusion prior 2.2. Text-to-3D generation The goal of text-to-3D task is to generate a 3D model that is consistent with the semantics of the input prompt. Dreamfusion[21] proposes score decomposition sampling (SDS) loss to generate 3D models, which aims to minimize the distribution difference between NeRF[19] rendering and pre-trained text-to-image (T2I) models. Latentnerf[18] improves the performance of 3D generation by optimizing NeRF in latent space. In addition to generating 3D objects, SDS loss can also work in scene generation[43]. Some works[2, 13, 34, 36] use other 3D representation methods but also used SDS loss for optimization. Prolificdreamer[37] propose variable score decomposition (VSD) loss, which can generate high-quality and \fhigh-fidelity results. Text-to-3d method[1, 5, 10, 24, 26] uses prompt to control views when generating 3D views, which may lead to multi-face problem. Dreamtime[9] controls the change of noise sampling level during the generation process to mitigate multi-face problem. As text prompt is not accurate enough to describe 3D model, some other methods using image guidance to generate 3D model. 2.3. Image-to-3D generation The image-to-3d task can be regarded as a task of 3D reconstruction from a single image[4, 41]. Previous single image reconstruction works focus on fixed class reconstruction tasks[28, 29, 38], which often require a large-scale 3D training data. The difficulty of obtaining 3D data makes it not applicable to open-vocabulary objects. The text-toimage model trained with large amount of images contains 3D related information, which is the key of single image zero-shot reconstruction. Make-it-3D[35] introduces SDS loss into image-to-3d task, and uses pre-trained diffusion model and clip[23] model to complete 3D generation. Models such as zero123[15] and syncdreamer[16] can directly generate multi-view of the input image for multiview reconstruction. Limited by the training data, the multiview generated can not guarantee the complete 3D consistency for open-vocabulary inputs. Magic123[22] uses zero123 and pre-trained diffusion model as priors, which can achieve high-quality single image guided 3D generation. Dreamgaussion[34] and one-2-3-45[14] uses the new 3D representation combined with diffusion model to achieve rapid 3D generation. The above methods[22, 34, 35] are optimized by SDS loss using pre-trained diffusion priors. We notice that these methods may fail when the input image containing uncommon objects with asymmetry structure, such as objects from video games. These kinds of irregular object are beyond the ability of normal 2D diffusion prior and 3D diffusion prior. To solve this, we introduce object-specific LoRA to boost 2D diffusion prior to 3D prior. Moreover, we optimize the texture and structure of the extracted mesh using the trained LoRA, generating high-quality 3d model. 3. Pipeline In this section, we introduce Boosting3D, a three-stage pipeline for Image-to-3D task as illustrated in Fig.2 and present preliminaries on score distillation sampling, variational score distillation and multi-views generation (Section 3.1). Firstly, we optimze a NeRF using pretrained model, and train a LoRA initialize the object-level information (Section 3.2). Next we train the LoRA and NeRF in a progressive way. The LoRA and NeRF boost each other during training. After this step, we obtain a refined NeRF and a well trained LoRA with object-level 3D prior. (Section 3.3). Finally, we extract a coarse surface mesh from trained NeRF and fine-tune both surface geometry and appearance using trained LoRA (Section 3.4). 3.1. Preliminaries Many text-to-3D and image-to-3D methods use largescale diffusion models as an optimization foundation. Dreamfusion[21] uses pretrained diffusion model \u03f5\u03d5 to realize the conversion from text to 3D model, which proposes score distillation sampling (SDS) loss to use prompt y to guide 3D model \u03b8 generation. SDS loss encourages the trained 3D model to sample image information from the pretrained diffusion models, so that the 3D rendering results x are consistent with the diffusion models distribution mode. Specifically, the SDS loss computes the gradient: \u2207\u03b8LSDS = Et,\u03f5,p \u0014 wt (\u03f5\u03d5 (xp t ; t, y) \u2212\u03f5) \u2202xp \u2202\u03b8 \u0015 (1) where \u03f5\u03d5(\u00b7) is the predicted noise by the 2D diffusion prior \u03d5, xp t is the render image xp t in view p add noise at the noise level t, wt is a weight about t. SDS loss can realize the conversion of text to 3D, but suffers from over-saturation, low-diversity, and smoothing problems. ProlificDreamer[37] proposed variational score distillation (VSD) loss to solve these problem, which can obtain more refined 3D representation and texture. Different from SDS in minimizing the image distribution, VSD uses LoRA to sample distribution in the pre-trained space, which can produce results with photorealistic rendering. The VSD loss computes the gradient: \u2207\u03b8LV SD = Et,\u03f5,p \u0014 wt (\u03f5\u03d5 (xt; t, y) \u2212\u03f5lora (xp t ; t, y, c)) \u2202xp \u2202\u03b8 \u0015 (2) where \u03f5lora estimates the score of the rendered images using a LoRA (Low-rank adaptation) model. In addition to the text-to-image model, there are also some models specially trained to generate multi-views. Such models contain more accurate 3D information of objects, such as Zero123XL [15] used in this paper. For Zero123XL, input an image x0 and the viewing angle difference with the input image to generate an image corresponding to the viewing angle. For Zero123XL, the gradient of SDS loss can be changed to the following form; \u2207\u03b8L3D SDS = Et,\u03f5,p \u0014 wt \u0000\u03f5\u03d5 \u0000xp t ; t, x0, \u2206p \u0001 \u2212\u03f5 \u0001 \u2202xp \u2202\u03b8 \u0015 (3) where \u2206p is the camera pose difference between the current view xp and the input view x0. \fTraining: Fixed: Camera View: Loss path: Render Pretrained Lora \u2207\ud835\udc49\ud835\udc52\ud835\udc50 \u2207\ud835\udc48\ud835\udc49 + NeRF from Stage2 Stage3\uff1a Refine 3D NeRF \ud835\udc3f!\"# Lora Zero123 + Diffusion Prior \ud835\udc3f$!\"% \ud835\udc3f&\"#!\" A fired horse model A fired horse model Zero123 Diffusion Prior Lora \ud835\udc3f!\"# \ud835\udc3f'(' )( \ud835\udc3f*'( Stage1: Coarse Nerf Stage2: Refine Nerf Figure 2. The pipeline of Boosting3D. Boosting3D is a three-stage framework for high quality 3D generation from a reference image. In stage 1, we optimized a course NeRF and a object-level LoRA. In stage 2, we refined the NeRF using the pre-trained model and the LoRA trained in stage 1. In stage 3, we extracted the 3D mesh from the trained NeRF and refined the 3D model using the pre-trained LoRA. 3.2. Stage1: Coarse NeRF Generation In the first stage, we obtain a coarse NeRF model that can correspond to the objects in the input image. In the process of training the NeRF model, we divide the training views into two modes: the original view of input image, using the original image as supervision; the new views of the object, using pre-trained text-to-image model and pre-trained 3D priors (Zero123XL) as supervision. For the original view of the input image I0, we obtain image I and corresponding mask M through NeRF rendering. Here we use the original image to calculate L1 loss for I, use MSE loss to calculate the loss of the original image corresponding to mask M0 and M, and add corresponding weights to the two losses to obtain Loss: Lori = \u03bbrgb\u2225I0 \u2212I\u22251 + \u03bbmask\u2225M0 \u2212M\u22252 2 (4) For new view of the object, we render the current image through NeRF to obtain the image In and normal map Nn. We add noise to In and then input it into the pre-trained 3D prior model and the pre-trained T2I model to obtain the SDS loss of both, and add the corresponding weights to the two losses. The gradient consists of Eq.1 and Eq.3: \u2207\u03b8Lprior = \u03bbsds\u2207\u03b8LSDS + \u03bb3d\u2207\u03b8L3D SDS (5) The model corresponding to NeRF at this stage will have a lot of noise, so we added 2D normal map smooth loss to make the overall NeRF smoother: Lnormal = \u03bbnormal\u2225Nn \u2212\u03b4(Nn)\u22252 2 (6) where \u03b4(\u00b7) represents the result of moving the normal map by 1 pixels to random direction. In the first stage, we will train a LoRA in the process of training NeRF based on the original image and the render image of NeRF, which will use a higher noise level tlora when training LoRA, as shown in Fig.3. Llora =\u2225\u03f5lora (xp t ; tlora, y, c) \u2212\u03f5\u22252 (7) In practice, we parameterize \u03f5lora by a LoRA of the pretrained model \u03f5\u03d5, and use camera parameter c as the class embeddings. The LoRA will serve as the initialization of LoRA in the second stage. \f\u039d \u039c 1000 500 0 Iteration step i Lora Noise level \ud835\udc61!\"#$ Refine NeRF & Refine 3D Coarse NeRF Figure 3. The proposed noise level for training. We use a higher noise level to train in stage 1 and use a lower noise level in stage 2&3. N represents the training steps in stage 1, and M represents the training steps in total. Overall, the stage 1 is optimized by Ls1: Ls1 = Lori + Lprior + Lnormal + Llora (8) In process of training, We alternately train NeRF using the original input image and the new view, while training LoRA using the rendering results of the NeRF. And we find that using a specific range of noise level can make the results more refined and fit the input image. 3.3. Stage2: ReFine NeRF In this stage, we continued to optimize based on the coarse NeRF. After the first stage, we get a coarse NeRF and a pretrained LoRA. We used the pre-trained LoRA to initialize LoRA in this stage. The training process is also divided into original view training and new view training. The original view training part is consistent with the stage 1, Eq.4 is used as the loss function for optimization. In the new view training, we obtain the image I and normal map Nn through NeRF rendering. We use the noisy latent of the image I as the input of the LoRA model and the original pre-trained T2I model to obtain the corresponding view results respectively, and calculate the Variational Score Distillation loss using Eq.2. In this stage, LoRA is still trained through the images by NeRF rendering using Eq.7. Different from stage 1, the noise level sampling range used by the LoRA model needs to be reduced as shown in Fig.3. Therefore, in this stage, the loss function Ls2 we use to optimization is: Lp2 = \u03bbvsdLV SD + \u03bb3dL3D SDS (9) Ls2 = Lori + Lp2 + Lnormal + Llora (10) The reason for training LoRA in advance in stage 1 is to make LoRA conform to the current object as much as possible. In the original VSD[37], only using prompt to sample 3D information from the T2I model makes it difficult to control the details of 3D generation. On the other hand, it will cause the model generated in the image-to-3D task to be too different from the original image. Therefore, we pre-train LoRA using object-level rendering data in stage 1 and control the optimization range of LoRA from promptlevel to object-level. After stage 2 training, LoRA will be able to generate multi-view image corresponding to the input image using image-to-image method as shown in Fig.4, which shows that the trained LoRA already has object-level 3D prior. 3.4. Stage3: Refine 3D model After stage 2, we get a refined NeRF and a object-level LoRA model. NeRF can render high-quality image results, but the extracted mesh is coarse. In this stage, we will optimize the extracted mesh to achieve the same high-quality as NeRF rendering. When extracting a model from NeRF, we usually need to use a threshold to determine the position of the mesh extraction surface. After determining the vertices to extract the mesh, we can get the color of vertices through the vertices positions, and then we unwrap the UV coordinates of the mesh using Xatlas[40]. In this way, we get a 3D model with UVmap, mesh M = {V ec, F, UV }. We will optimize the UV-corresponding to mesh vertices V ec and UV , in order to obtain a high-quality mesh. During the 3D mesh rendering process, the camera intrinsics are aligned with the stage 2 to ensure that images of same views as the previous two stages can be obtained. We assign a trainable offset \u2206vi to each vertex vi, and assign a texture offset \u2206UV to the UVmap. During the rendering process: Ic 3d = f(V ec + \u2206V ec, UV + MLP(\u2206UV \u2032), F, c) (11) where f represents the differentiable renderer, F represents the faces of the mesh, c represents the camera extrinsics of rendering and MLP represents a multi-layer perceptron, which will calculate the real \u2206UV . When using \u2206UV directly without using MLP for mapping, the optimization effect is not ideal. During the optimization process, we will also divide it into the original view and the new view. The original view uses the original image to calculate the loss like Eq.4. In the new view, we use the LoRA model trained in previous two stages as our pre-trained model to optimize the parameters, the gradient of rendering image Ic 3d can be computed as follow: \u2207LI3d = Et,\u03f5,c [wt (\u03f5lora (Ic 3d; t, y, c) \u2212\u03f5)] (12) \fInput image LoRA results Input image LoRA results Figure 4. Results from LoRA after stage 2. Different images are obtained using different camera parameters as class embeddings and using no-texture rendering as base image. The LoRA model is able to generate an image with better similarity to our current object than the original T2I model. In this stage, the LoRA model is no longer trained. To prevent abrupt geometry, we apply a normal smoothing loss Eq.6 on the rendering image and add an L2 loss to \u2206vi. Loffset = X i (\u2206vi)2 (13) These loss will prevent our vertex optimization from being too far away from the original position while ensuring the smoothness of the mesh. 4. Experiments 4.1. Implementation Details In all experiments, the basic model and optimizer used by all methods are same. We adopt the stable diffusion[7] v2.1-base version as pre-trained text-to-image model, and Zero123XL[15] as 3d prior diffusion model. We use Blip2[12] to generate the prompt corresponding to the input image. During the training phase, Adam is used for optimization, and the learning rate is set to 0.0001. We use multi-scale hash encoding in Instant-NGP [20] as the basic model for NeRF in stages 1 and 2, and use pytorch3d[25] as differentiable renderer in stage 3.In stage 1, we trained 1500 steps. The rendering resolution was set to 64 in the first 500 steps and 128 in the last 1000 steps. In stage 2, the resolution of novel view is set to 256, the resolution of original view is set to 512, and 3500 steps are trained. In stage 3, the resolution is set to 800 for mesh optimization, and trained 2000 steps. At stage 3, the mesh is extracted at a resolution of 5123 with a density threshold of 10 by marching cubes from NeRF trained in stage 2. \u03bbSDS and \u03bb3d are set to 0.2 and 1 for stage 1 and \u03bbvsd is set to 1 in stage 2, which reduces the oversaturation of the texture. The loss weights \u03bbrgb for color are linearly increased from 100 to 1000 during training, \u03bbmask linearly increased from 50 to 500 during training, and the \u03bbnormal is increased from 0 to 100 in the first two stages and reduced from 100 to 10 in stage 3. In the training process of NeRF, we use pure white as the background. In the training process, we assume that the input image is shot from the front view, that is, the initial polar angle is 90\u00b0 and the azimuth angle is 0\u00b0. During the new view training, we will randomly sample the azimuth angle within 360\u00b0 and the camera polar angle between 60 and 150, but keep the distance from the camera to the center of the object unchanged throughout the training process. At the same time, the intrinsics parameters of camera are all fixed during the training process. In the training process, it is only necessary to ensure that the rendering range of NeRF is within the range of the camera, and the intrinsics parameters of camera does not need to use a specific value. 4.2. Results and Comparisons Qualitative Comparisons. Our method will be compared with the state-of-the-art Zero123XL and Magic123. For Zero123XL, we use 3D-SDS loss to optimize a NeRF with the same parameters as our method. For Magic123, we use the original code, but replace the pre-trained diffusion model from v1.5 to v2.1-base version, and replace 3d prior from Zero123 to Zero123XL with higher performance, which yields better quality than the original implementation. In Fig.5, we show the comparison results of our method with Zero123XL and Magic123. Our method achieves the best effect in texture performance and 3D structure. It is worth noting that our method can still generate very reasonable structure and fine texture in the case of rare objects, such as the monster related images in the last two lines, which also shows the robustness of our method. Quantitative Evaluation. We used the indicators \finput image Zero123XL Magic123 Ours Our mesh Figure 5. Qualitative comparisons of different methods. Compared with Magic123[22] and Zero123XL[15], our method performs better on both texture and 3D structure. The last column is the no-texture rendering results of the mesh obtained by our pipeline. input image (b) w/o pretrained (a) w pretrained Figure 6. Ablation study of LoRA pre-trained in stage 1. (a) Trained by our presented method. (b) Without the pretraining of LoRA in stage 1. \f(a) Input image (b) Mesh (c) DMtet (d) Stage3 Figure 7. Effect of stage 3. The rendering result (d) using our stage 3 refinement strategy is more consistent with the original input image than using DMtet (c). employed in previous studies[22]: PSNR and CLIPSimilarity[23]. We used a self-built dataset for evaluation, which contains real images similar to the input image shown in Fig.5. PSNR is measured in the original view of results to measure the reconstruction quality. Clip-similarity calculates the average clip distance between the rendered image and the input image, and measures the 3D consistency through the appearance similarity between the new view and the original view. As shown in Table.1, compared with previous methods, our method achieves first-class performance in all metrics. Among them, ZeroXL-DMtet represents the result of refinement using DMtet, Ours-DMtet represents the result of optimization using DMtet and original diffusion in stage 3, and Ours-mesh represents the result of the final mesh of our method. PSNR results show that our method can restore the input better than other methods. The improvement of CLIP-Similarity reflects that our results have better 3D consistency. 4.3. Ablation Study The effect of pre-training LoRA in the stage 1. In Fig.6, we study the impact of LoRA training process on the results. It is obvious that without the pre-train of LoRA, the directly combination of VSD loss and 3D SDS will not generate a reasonable structure, and there may be a multi-face effect, as shown in the last line. Therefore, in our method, LoRA pre training in stage 1 is a necessary process. Table 1. We show the quantitative results based on CLIPSimilarity/PSNR. The bold is the best. ZeroXL-DMtet represents the result of refinement using DMtet, Ours-DMtet represents the result of optimization using DMtet and original diffusion in stage 3, and Ours-mesh represents the result of the final mesh of our method. Algorithms PSNR\u2191 CLIP-Similarity\u2191 Magic123 25.56 0.74 Zero123XL 21.32 0.61 Zero123XL-DMtet 22.91 0.69 Ours 26.13 0.78 Ours-DMtet 22.15 0.75 Ours-mesh 24.45 0.81 Effect of stage 3. We show the effect of stage 3 on the results in Fig.7. (a) is the input image, (b) is the mesh extracted from the trained NeRF, and (c) is the effect of using Deep Marching Tetrahedra (DMTet) [32] and original SDS loss to replace stage 3. It can be seen that the texture of (c) is relatively fuzzy. The texture generated after stage 3 (d) is more detailed, and the rendering result will be more consistent with the original input image. It can be observed an intuitive improvement in the quality of the final mesh using the proposed method. 4.4. Limitations Although our method can achieve precise and robust 3D content generation, the overall time consumption is relatively high, requiring about than an hour of training time. We will optimize the speed using faster 3D representation in future work. 5."
+ }
+ ],
+ "Yiwei Guo": [
+ {
+ "url": "http://arxiv.org/abs/2309.05027v2",
+ "title": "VoiceFlow: Efficient Text-to-Speech with Rectified Flow Matching",
+ "abstract": "Although diffusion models in text-to-speech have become a popular choice due\nto their strong generative ability, the intrinsic complexity of sampling from\ndiffusion models harms their efficiency. Alternatively, we propose VoiceFlow,\nan acoustic model that utilizes a rectified flow matching algorithm to achieve\nhigh synthesis quality with a limited number of sampling steps. VoiceFlow\nformulates the process of generating mel-spectrograms into an ordinary\ndifferential equation conditional on text inputs, whose vector field is then\nestimated. The rectified flow technique then effectively straightens its\nsampling trajectory for efficient synthesis. Subjective and objective\nevaluations on both single and multi-speaker corpora showed the superior\nsynthesis quality of VoiceFlow compared to the diffusion counterpart. Ablation\nstudies further verified the validity of the rectified flow technique in\nVoiceFlow.",
+ "authors": "Yiwei Guo, Chenpeng Du, Ziyang Ma, Xie Chen, Kai Yu",
+ "published": "2023-09-10",
+ "updated": "2024-01-16",
+ "primary_cat": "eess.AS",
+ "cats": [
+ "eess.AS",
+ "cs.AI",
+ "cs.HC",
+ "cs.SD"
+ ],
+ "main_content": "INTRODUCTION Modern text-to-speech (TTS) has witnessed tremendous progress by adopting different types of advanced generative algorithms, such as TTS models with GANs [1, 2], normalizing flows [2\u20134], self-supervised features [5, 6] or denoising diffusion models [7\u2013 10]. Among them, diffusion-based TTS models recently received growing attention because of their high synthesis quality, such as GradTTS [7] and DiffVoice [9]. They also show versatile functionalities such as conditional generation [11, 12], speech editing [9, 10, 13] and speaker adaptation [9, 10]. By estimating the score function \u2207log pt(x) of a stochastic differential equation (SDE), diffusion models are stable to train [14]. They generate realistic samples by numerically solving the reverse-time SDE or the associated probability-flow ordinary differential equation (ODE). However, a major drawback of diffusion models lies in their efficiency. Regardless of SDE or ODE sampling methods, diffusion models typically require numerous steps to generate a satisfying sample, causing a large latency in inference. Some efforts have been made to mitigate this issue and improve the speed-quality tradeoff in diffusion-based TTS models, usually by extra mathematical tools or knowledge distillation. Fast GradTTS[15] adopts maximum likelihood SDE solver [16], progressive distillation [17] and denoising diffusion GAN [18] to accelerate diffusion sampling. FastDiff [19] optimizes diffusion noise schedules inspired by BDDM [20]. ProDiff [21] also uses a progressive distillation technique to halve the sampling steps from DDIM [22] teacher \u2217Corresponding author iteratively. LightGrad [23] adopts DPM-Solver [24] to explicitly derive a solution of probability-flow ODE. A concurrent work, CoMoSpeech [25], integrates the consistency model [26] as a special type of diffusion distillation. These models successfully decrease the necessary number of sampling steps in diffusion models to some extent. However, due to the intricate nature of the diffusion process, the speed-quality tradeoff still exists and is hard to overcome. Despite denoising diffusion, another branch in the family of differential-equation-based generative models began to arise recently, namely the flow matching generative models [27\u201329]. While diffusion models learn the score function of a specific SDE, flow matching aims to model the vector field implied by an arbitrary ODE directly. A neural network is used for approximating the vector field, and the ODE can also be numerically solved to obtain data samples. The design of such ODE and vector field often considers linearizing the sampling trajectory and minimizing the transport cost [29]. As a result, flow matching models have simpler formulations and fewer constraints but better quality. VoiceBox [30] shows the potential of flow matching in fitting large-scale speech data, and LinDiff [31] shares a similar concept in the study of vocoders. More importantly, the rectified flow [28] technique in flow matching models further straightens the ODE trajectory in a concise way. By training a flow matching model again but with its own generated samples, the sampling trajectory of rectified flow theoretically approaches a straightforward line, which improves the efficiency of sampling. In essence, rectified flow matching achieves good sample quality even with a very limited number of sampling steps. As a side note, its ODE nature also makes flow matching extensible for knowledge distillation similar in previous diffusion-based works [28]. Inspired by these, we propose to utilize rectified flow matching in the TTS acoustic model for the first time in literature. We construct an ODE to flow between noise distribution and melspectrogram while conditioning it with phones and duration. An estimator learns to model the underlying vector field. Then, a flow rectification process is applied, where we generate samples from the trained flow matching model to train itself again. In this way, our model is able to generate decent mel-spectrograms with much fewer steps. We name our model VoiceFlow. To fully investigate its ability, we experiment both on the single-speaker benchmark LJSpeech and the larger multi-speaker dataset LibriTTS. The results show that VoiceFlow outperforms the diffusion baseline in a sufficient number of sampling steps. In a highly limited budget such as two steps, VoiceFlow still maintains a similar performance while the diffusion model cannot generate reasonable speech. Therefore, VoiceFlow achieves better efficiency and speed-quality tradeoff while sampling. The code and audio samples are available online1. 1https://cantabile-kwok.github.io/VoiceFlow arXiv:2309.05027v2 [eess.AS] 16 Jan 2024 \f2. FLOW MATCHING AND RECTIFIED FLOW 2.1. Flow Matching Generative Models Denote the data distribution as p1(x1) and some tractable prior distribution as p0(x0). Most generative models work by finding a way to map samples x0 \u223cp0(x0) to data x1. Particularly, diffusion models manually construct a special SDE, and then estimate the score function of the probability path pt(xt) yielded by it. Sampling is tackled by solving either the reverse-time SDE or probabilityflow ODE alongside this probability path. Flow matching generative models, on the other hand, model the probability path pt(xt) directly [27]. Consider an arbitrary ODE dxt = vt(xt)dt (1) with vt(\u00b7) named the vector field and t \u2208[0, 1]. This ODE is associated with a probability path pt(xt) by the continuity equation d dt log pt(x) + div(pt(x)vt(x)) = 0. It is sufficient to generate realistic data if a neural network can accurately estimate the vector field vt(\u00b7), since the ODE in Eq.(1) can be solved numerically then. However, the design of the vector field needs to be instantiated before practically applied. [27] proposes the method of constructing a conditional probability path with a data sample x1. Suppose this probability path is pt(x | x1), with boundary condition pt=0(x | x1) = p0(x) and pt=1(x | x1) = N(x | x1, \u03c32I) for sufficiently small \u03c3. By the continuity equation, there is an associated vector field vt(x | x1). It is proven that estimating the conditional vector field by neural network u\u03b8 is equivalent, in the sense of expectation, to estimating the unconditional vector field, i.e. min \u03b8 Et,pt(x)\u2225u\u03b8(x, t) \u2212vt(x)\u22252 (2) \u2261min \u03b8 Et,p1(x1),pt(x|x1)\u2225u\u03b8(x, t) \u2212vt(x | x1)\u22252. (3) Then, by designing a simple conditional probability path pt(x | x1) and the corresponding vt(x | x1), one can easily draw samples from pt(x | x1) and minimize Eq.(3). For example, [27] uses the Gaussian path pt(x | x1) = N(x | \u00b5t(x1), \u03c3t(x1)2I) and linear vector field vt(x | x1) = \u03c3\u2032 t(x1) \u03c3t(x1)(x \u2212\u00b5t(x1)) + \u00b5\u2032 t(x1). Meanwhile, this conditioning technique can be further generalized, i.e. any condition z for pt(x | z) can lead to the same form of optimization target like Eq.(3). Thus, [29] proposes to additionally condition on a noise sample x0 to form a probability path pt(x | x0, x1) = N(x | tx1 + (1 \u2212t)x0, \u03c32I). The conditional vector field therefore becomes vt(x | x0, x1) = x1 \u2212x0, which is a constant straight line towards x1. In this formulation, training the generative model only requires the following steps: 1. Sample x1 from data and x0 from any noise distribution p0(x0); 2. Sample a time t \u2208[0, 1] and then xt \u223cN(tx1+(1\u2212t)x0, \u03c32I); 3. Apply gradient descent on loss \u2225u\u03b8(x, t) \u2212(x1 \u2212x0)\u22252. This is often referred to as the \u201cconditional flow matching\u201d algorithm, which is proven to outperform diffusion-based models with deep correlation to the optimal transport theory [29]. 2.2. Improved Sampling Efficiency with Rectified Flow The notion of rectified flow is proposed in [28]. It is a simple but mathematically solid approach to improve the sampling efficiency of flow matching models. The flow matching model here has the same formulation as that of [29], which is conditioned on both x1 and x0. Suppose a flow matching model is trained to generate data \u02c6 x1 from Fig. 1. Working diagram of the VoiceFlow model noise x0 by the ODE in Eq.(1). In other words, x0 and \u02c6 x1 are a pair of the starting and ending points of the ODE trajectory. Then, this flow matching model is trained again, but conditions vt(x | x0, x1) and pt(x | x0, x1) on the given pair (x0, \u02c6 x1) instead of independently sampling x0, x1. This flow rectification step can be iterated multiple times, denoted by the recursion \u0000zk+1 0 , zk+1 1 \u0001 = FM \u0000zk 0, zk 1 \u0001 , with FM the flow matching model and (z0 0, z0 1) = (x0, x1) the independently drawn noise and data samples. Intuitively, rectified flow \u201crewires\u201d the sampling trajectory of flow matching models to become more straight. Because the ODE trajectories cannot intersect when being solved, most likely the trajectory cannot be as straight as the conditional vector field in training. However, by training the flow matching model again on the endpoints of the same trajectory, the model learns to find a shorter path to connect these noise and data. This straightening tendency is theoretically guaranteed in [28]. By rectifying the trajectories, flow matching models will be able to sample data more efficiently with fewer steps of ODE simulation. 3. VOICEFLOW 3.1. Flow Matching-Based Acoustic Model To utilize flow matching models in TTS, we cast it as a nonautoregressive conditional generation problem with mel-spectrogram x1 \u2208Rd as the target data and noise x0 \u2208Rd from standard Gaussian distribution N(0, I). We consider using an explicit duration learning module from forced alignments like in [8]. Denote the duplicated latent phone representation as y, where each phone\u2019s latent embedding is repeated according to its duration. Then, y is regarded as the condition of the generation process. Specifically, suppose vt(xt | y) \u2208Rd is the underlying vector field for the ODE dxt = vt(xt | y)dt. Suppose this ODE connects the noise distribution p0(x0 | y) = N(0, I) with mel distribution given text p1(x1 | y) = pmel(x1 | y). Our goal is to accurately estimate the vector field vt given condition y, as we can then generate a mel-spectrogram by solving this ODE from t = 0 to t = 1. Inspired by [28, 29], we opt to use both a noise sample x0 and a data sample x1 to construct conditional probability paths as pt(x | x0, x1, y) = N(x | tx1 + (1 \u2212t)x0, \u03c32I) (4) where \u03c3 is a sufficiently small constant. In this formulation, the endpoints of these paths are N(x0, \u03c32I) for t = 0 and N(x1, \u03c32I) for t = 1 respectively. These paths also determine a probability path pt(x | y) marginal w.r.t x0, x1, whose boundaries approximate the noise distribution p0(x0 | y) and mel distribution p1(x1 | y). \fIntuitively, Eq.(4) specifies a family of Gaussians moving in a linear path. The related vector field can be simply vt(x | x0, x1, y) = x1 \u2212x0, also a constant linear line. Then, we use a neural network u\u03b8 to estimate the vector field. Similar to Eq.(3), the objective here is min \u03b8 Et,p1(x1|y),p0(x0|y),pt(xt|x0,x1,y)\u2225u\u03b8(xt, y, t)\u2212(x1 \u2212x0)\u22252 (5) The corresponding flow matching loss is denoted by LFM. The total loss function to train VoiceFlow will be L = LFM + Ldur, where Ldur is the mean squared loss for duration predictor. So, the whole acoustic model of VoiceFlow consists of the text encoder, duration predictor, duration adaptor and vector field estimator, as is shown in Fig. 1. The text encoder transforms the input phones into a latent space, upon which the duration per phone is predicted and fed to the duration adaptor. The repeated frame-level sequence y is then fed to the vector field estimator as a condition. The other two inputs to the vector field estimator are the sampled time t and the sampled xt from the conditional probability path in Eq.(4). We adopt the same U-Net architecture in the vector field estimator as in GradTTS2. The condition y is concatenated with xt before entering the estimator, and the time t is passed through some fully connected layers before being added to the hidden variable in residual blocks each time. In multi-speaker scenarios, the condition will become both the text y and some speaker representation s. But for simplicity, we will still use the notation of y as the condition in the following sections. 3.2. Sampling and Flow Rectification Step By Eq.(5), the vector field estimator u\u03b8 is able to approximate vt in the expectation sense. Then, the ODE dxt = u\u03b8(xt, y, t)dt can be discretized for sampling a synthetic mel-spectrogram x1 given text y. Off-the-shelf ODE solvers like Euler, Runge-Kutta, DormandPrince method, etc. can be directly applied for sampling. In the example of the Euler method with N steps, each sampling step is \u02c6 x k+1 N = \u02c6 x k N + 1 N u\u03b8 \u0012 \u02c6 x k N , y, k N \u0013 , k = 0, 1, ..., N \u22121 (6) with \u02c6 x0 \u223cp0(x0 | y) being the initial point and \u02c6 x1 being the generated sample. Regardless of the discretization method, the solvers will produce a sequence of samples {\u02c6 xk/N} along the ODE trajectory, which gradually approximates a realistic spectrogram. Then we apply the rectified flow technique to further straighten the ODE trajectory. For every utterance in the training set, we draw a noise sample x\u2032 0 and run the ODE solver to obtain \u02c6 x1 given text y. The sample pair (x\u2032 0, \u02c6 x1) is then fed to the VoiceFlow again for rectifying the vector field estimator. In this flow rectification step, the new training criterion will be min \u03b8 Et,p(x\u2032 0,\u02c6 x1|y),pt(xt|x\u2032 0,\u02c6 x1,y)\u2225u\u03b8(xt, y, t) \u2212(\u02c6 x1 \u2212x\u2032 0)\u22252 (7) where the only difference with Eq.(5) is paired (x\u2032 0, \u02c6 x1) are used instead of independently sampled. In Eq.(7), every spectrogram sample \u02c6 x1 is associated with a noise sample in the same trajectory. In this way, the vector field estimator is asked to find a more straightforward sampling trajectory connecting (x\u2032 0, \u02c6 x1), which improves the sampling efficiency to a large extent. Note that we provide the model with the ground truth duration sequence while generating data for rectified flow. This ensures that the model is fed with more natural speech, reducing the risk that inaccurate duration prediction degrades the model performance. 2Two down and up-samples with 2D convolution as residual blocks Algorithm 1 summarizes the whole process of training VoiceFlow, including flow rectification. Algorithm 1: Training VoiceFlow with flow rectification Input: Paired text, duration and mel-spectrogram x1 with optional speaker information Result: Trained VoiceFlow model, which contains vector field estimator u\u03b8(x, y, t) 1 Function TrainStep(u\u03b8, x0, x1): 2 Compute y, Ldur using text and duration; 3 Sample t \u223cUniform[0, 1]; 4 Sample xt \u223cN(tx1 + (1 \u2212t)x0, \u03c32I); 5 LFM \u2190\u2225u\u03b8(xt, y, t) \u2212(x1 \u2212x0)\u22252; 6 Gradient descent on LFM + Ldur; 7 while perform flow matching do 8 Take batch and sample x0 from N(0, I); 9 TrainStep(u\u03b8, x0, x1); 10 for every utterance in training set do 11 Sample a x\u2032 0 and solve the ODE by Eq.(6) to obtain \u02c6 x1, using trained u\u03b8 and ground truth duration; 12 while perform flow rectification do 13 Take batch with associated x\u2032 0; 14 TrainStep(u\u03b8, x\u2032 0, \u02c6 x1); 4. EXPERIMENTS AND RESULTS 4.1. Experimental Setup We evaluated VoiceFlow both on the single-speaker and multispeaker benchmarks, so as to obtain a comprehensive observation of the proposed TTS system. For single-speaker evaluations, we used the LJSpeech [32] dataset, which contains approximately 24 hours of high-quality female voice recordings. For multi-speaker experiments, we included all the training partitions of the LibriTTS [33] dataset, which amountedt to 585 hours and over 2300 speakers. We downsampled all the training data to 16kHz for simplicity. Melspectrogram and forced alignments were extracted with 12.5ms frame shift and 50ms frame length on each corpus by Kaldi [34]. We compared VoiceFlow with the diffusion-based acoustic model GradTTS. To only focus on the algorithmic differences, we used the official implementation of GradTTS3 and trained it with the same data configurations. We also used ground truth duration instead of the monotonic alignment search algorithm in GradTTS to mitigate the impact of different durations. Notably, we used exactly the same model architecture to build VoiceFlow, so the two compared models have nearly identical inference costs when the ODEs are both solved using Euler method with the same number of steps. As the acoustic models generate mel-spectrograms as targets, HifiGAN [35] was adopted as the vocoder and trained separately on the two datasets. 4.2. Subjective Evaluations We first evaluated the system performance of VoiceFlow compared to GradTTS using subjective listening tests. In this test, listeners were asked to rate the mean opinion score (MOS) of the provided speech clips based on the audio quality and naturalness. For both the acoustic models, we used 2, 10 and 100 steps representing low, 3https://github.com/huawei-noah/ Speech-Backbones/tree/main/Grad-TTS \fTable 1. MOS evaluation in low, medium and large sampling steps Model Sampling Steps LJSpeech LibriTTS GradTTS 2 2.98\u00b10.06 2.52\u00b10.12 VoiceFlow (\u22483605frames/s) 3.92\u00b10.07 3.81\u00b10.07 GradTTS 10 3.97\u00b10.07 3.43\u00b10.09 VoiceFlow (\u2248985frames/s) 4.10\u00b10.06 3.84\u00b10.07 GradTTS 100 4.03\u00b10.09 3.45\u00b10.12 VoiceFlow (\u2248102frames/s) 4.17\u00b10.07 3.85\u00b10.12 GT (voc.) 4.52\u00b10.07 4.42\u00b10.06 Fig. 2. MOSnet evaluations in multiple choices of sampling steps medium and large number of sampling steps for synthesis. The results are presented in Table 1, where \u201cGT (voc.)\u201d means the vocoded ground truth speech. It can be seen that in both the two datasets and three sampling scenarios, VoiceFlow achieves a consistently higher MOS score than GradTTS. Also, when the sampling steps are decreased, the performance of GradTTS drops significantly while VoiceFlow does not suffer from such huge degeneration. Specifically, in 2-step sampling situations, samples from GradTTS become heavily degraded, but that from VoiceFlow remains to be satisfying. Note that the 10-step GradTTS was already reported to be competitive against other baselines [7]. In LibriTTS, the corpus with large speaker and environment variability, the difference of compared systems becomes more obvious. This suggests the stronger potential of flow-matching-based models in fitting complex speech data. 4.3. Objective Evaluations We also objectively evaluated the performance of VoiceFlow. Two metrics were included for comparison: MOSnet [36] and melcepstral distortion (MCD). MOSnet is a neural network designed to fit human perception of speech signals, and we found it correctly reflects speech quality to a reasonable extent. We use the officially trained MOSnet model to evaluate synthetic speech on more choices of sampling steps. The results are plotted in Fig. 2, where the shadowed region stands for the mean and 95% confidence interval of MOSnet score on ground truth speech. It can be seen to mainly Fig. 3. MCD evaluations in multiple choices of sampling steps Table 2. CMOS evaluation in 2 sampling steps Model LJSpeech LibriTTS VoiceFlow -ReFlow -0.78\u00b10.13 -1.21\u00b10.19 Fig. 4. Visualization of sampling trajectories conform with the MOS results, as the change of VoiceFlow\u2019s scores among different sampling steps is much lower than that of GradTTS. MCD is another objective tool to measure the distortion of the cepstrum against ground truth. The cepstrum order here is set to be 13. Similarly, the MCD values on different numbers of sampling steps are shown in Fig. 3, also verifying the better speed-quality tradeoff of VoiceFlow compared to the diffusion counterpart. 4.4. Ablation Study We also conducted an ablation study to verify the effectiveness of the rectified flow technique in VoiceFlow. A comparative MOS (CMOS) test was performed where raters were asked to rate the score of a given sentence compared to the reference, ranging from -3 to 3. Table 2 shows the results with 2 sampling steps, where \u201c-ReFlow\u201d means VoiceFlow without rectified flow. It is noticeable that rectified flow makes a remarkable effect in such limited sampling steps, and LibriTTS exhibits an even more significant difference than LJSpeech. To provide an intuition on the impact of rectified flow, we visualized some sampling trajectories of VoiceFlow both with and without rectified flow on two out of the 80 mel dimensions in Figure 4. The trajectory of GradTTS is also shown here. Then, the visual contrast between the straight and curving trajectories leaves no doubt on the efficacy of using rectified flow in TTS models. 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/2211.09496v2",
+ "title": "EmoDiff: Intensity Controllable Emotional Text-to-Speech with Soft-Label Guidance",
+ "abstract": "Although current neural text-to-speech (TTS) models are able to generate\nhigh-quality speech, intensity controllable emotional TTS is still a\nchallenging task. Most existing methods need external optimizations for\nintensity calculation, leading to suboptimal results or degraded quality. In\nthis paper, we propose EmoDiff, a diffusion-based TTS model where emotion\nintensity can be manipulated by a proposed soft-label guidance technique\nderived from classifier guidance. Specifically, instead of being guided with a\none-hot vector for the specified emotion, EmoDiff is guided with a soft label\nwhere the value of the specified emotion and \\textit{Neutral} is set to\n$\\alpha$ and $1-\\alpha$ respectively. The $\\alpha$ here represents the emotion\nintensity and can be chosen from 0 to 1. Our experiments show that EmoDiff can\nprecisely control the emotion intensity while maintaining high voice quality.\nMoreover, diverse speech with specified emotion intensity can be generated by\nsampling in the reverse denoising process.",
+ "authors": "Yiwei Guo, Chenpeng Du, Xie Chen, Kai Yu",
+ "published": "2022-11-17",
+ "updated": "2023-02-16",
+ "primary_cat": "eess.AS",
+ "cats": [
+ "eess.AS",
+ "cs.AI",
+ "cs.HC",
+ "cs.LG",
+ "cs.SD"
+ ],
+ "main_content": "INTRODUCTION Although current neural text-to-speech (TTS) models are able to generate high-quality speech, such as Grad-TTS [1], VITS [2] and VQTTS [3], intensity controllable emotional TTS is still a challenging task. Unlike prosody modelling in recent literatures [4\u20136] that no speci\ufb01c label is provided in advance, emotional TTS typically utilizes dataset with categorical emotion labels. Mainstream emotional TTS models [7, 8] can only synthesize emotional speech given the emotion label without intensity controllability. In intensity controllable TTS models, efforts have been made to properly de\ufb01ne and calculate emotion intensity values for training. The most preferred method to de\ufb01ne and obtain emotion intensity is the relative attributes rank (RAR)[9], which is used in [10\u201314]. RAR seeks a ranking matrix by a max-margin optimization problem, which is solved by support vector machines. The solution is then fed to the model for training. As this is a manually constructed and separated stage, it might result in suboptimal results that bring bias into training. In addition to RAR, the operation on emotion embedding space is also explored. [15] designs an algorithm to maximize distance between emotion embeddings, and interpolates the embedding space to control emotion intensity. [16] quantizes the distance of emotion embeddings to obtain emotion intensities. However, the structure of the embedding space also greatly in\ufb02uences the \u2217Corresponding author performance of these models, resulting in the need for careful extra constraints. Intensity control for emotion conversion is investigated in [17, 18], with similar methods. Some of the mentioned works also have degraded speech quality. As an example, [14] (which we refer to as \u201cMixedEmotion\u201d later) is an autoregressive model with intensity values from RAR to weight the emotion embeddings. It adopts pretraining to improve synthetic quality, but still with obvious quality degradation. To overcome these issues, we need a conditional sampling method that can directly control emotions weighted with intensity. In this work, we propose a soft-label guidance technique, based on the classi\ufb01er guidance technique [19, 20] in denoising diffusion models [21, 22]. Classi\ufb01er guidance is an ef\ufb01cient sampling technique that uses the gradient of a classi\ufb01er to guide the sampling trajectory given a one-hot class label. In this paper, based on the extended soft-label guidance, we propose EmoDiff which is an emotional TTS model with suf\ufb01cient intensity controllability. Speci\ufb01cally, we \ufb01rst train an emotionunconditional acoustic model. Then an emotion classi\ufb01er is trained on any xt on the diffusion process trajectory where t is the diffusion timestamp. In inference, we guide the reverse denoising process with the classi\ufb01er and a soft emotion label where the value of the speci\ufb01ed emotion and Neutral is set to \u03b1 and 1 \u2212\u03b1 respectively, instead of a one-hot distribution where only the speci\ufb01ed emotion is 1 while all others are 0. \u03b1 \u2208[0, 1] here represents the emotion intensity. Our experiments show that EmoDiff can precisely control the emotion intensity while maintaining high voice quality. Moreover, it also generates diverse speech samples even with the same emotion as a strength of diffusion models [19]. In short words, the main advantages of EmoDiff are: 1. We de\ufb01ne the emotion intensity as the weight for classi\ufb01er guidance when using soft-labels. This achieves precise intensity control in terms of classi\ufb01er probability, needless for extra optimizations. Thus it enables us to generate speech with arbitrary speci\ufb01ed emotion intensity effectively. 2. It poses no harm to the synthesized speech. The generated samples have good quality and naturalness. 3. It also generates diverse samples even in the same emotion. 2. DIFFUSION MODELS WITH CLASSIFIER GUIDANCE 2.1. Denoising Diffusion Models and TTS Applications Denoising diffusion probabilistic models [21, 22] have proven successful in many generative tasks. In the score-based interpretation arXiv:2211.09496v2 [eess.AS] 16 Feb 2023 \f[22, 23], diffusion models construct a forward stochastic differential equation (SDE) to transform the data distribution p0(x0) into a known distribution pT (xT ), and use a corresponding reverse-time SDE to generate realistic samples starting from noises. Thus, the reverse process is also called \u201cdenoising\u201d process. Neural networks are then to estimate the score function \u2207x log pt(xt) for any t \u2208 [0, T] on the SDE trajectory, with score-matching objectives [22, 23]. In applications, diffusion models bypass the training instability and mode collapse problem in GANs, and outperform previous methods on sample quality and diversity [19]. Denoising diffusion models have also been used in TTS [1, 24\u2013 27] and vocoding [28, 29] tasks, with remarkable results. In this paper, we build EmoDiff on the design of GradTTS [1]. Denote x \u2208Rd a frame of mel-spectrogram, it constructs a forward SDE: dxt = 1 2\u03a3\u22121(\u00b5 \u2212xt)\u03b2tdt + p \u03b2tdBt (1) where Bt is a standard Brownian motion and t \u2208[0, 1] is the SDE time index. \u03b2t is referred to as noise schedule such that \u03b2t is increasing and exp n \u2212 R 1 0 \u03b2sds o \u22480. Then we have p1(x1) \u2248 N(x; \u00b5, \u03a3). This SDE also indicates the conditional distribution xt | x0 \u223cN(\u03c1(x0, \u03a3, \u00b5, t), \u03bb(\u03a3, t)), where \u03c1(\u00b7), \u03bb(\u00b7) both has closed forms. Thus we can directly sample xt from x0. In practice, we set \u03a3 to identity matrix and \u03bb(\u03a3, t) therefore becomes \u03bbtI where \u03bbt is a scalar with known closed form. Meanwhile, we condition the terminal distribution p1(x1) on text, i.e. let \u00b5 = \u00b5\u03b8(y), where y is the aligned phoneme representation of that frame. The SDE of Eq.(1) has a reverse-time counterpart: dxt = \u00121 2\u03a3\u22121(\u00b5 \u2212xt) \u2212\u2207x log pt(xt) \u0013 \u03b2tdt+ p \u03b2td e Bt (2) where \u2207log pt(xt) is the score function that is to be estimated, and e Bt is a reverse-time Brownian motion. It shares the trajectory of distribution pt(xt) with forward SDE in Eq.(1). So, solving it from x1 \u223cN(\u00b5, \u03a3), we can end up with a realistic sample x0 \u223cp(x0 | y). A neural network s\u03b8(xt, y, t) is trained to estimate the score function, in the following score-matching [23] objective: min \u03b8 L = Ex0,y,t[\u03bbt\u2225s\u03b8(xt, y, t) \u2212\u2207xt log p(xt | x0)\u22252]. (3) 2.2. Conditional Sampling Based on Classi\ufb01er Guidance Denoising diffusion models provide a new way of modeling conditional probabilities p(x | c) where c is a class label. Suppose we now have an unconditional generative model p(x), and a classi\ufb01er p(c | x). By Bayes formula, we have \u2207x log p(x | c) = \u2207x log p(c | x) + \u2207x log p(x). (4) In the diffusion framework, to sample from conditional distribution p(x | c), we need to estimate score function \u2207x log p(xt | c). By Eq.(4), we only need to add the gradient from a classi\ufb01er to the unconditional model. This conditional sampling method is named classi\ufb01er guidance [19, 20], and is also used in unsupervised TTS [30]. In practice, classi\ufb01er gradients are often scaled [19, 30] to control the strength of guidance. Instead of original \u2207x log p(c | x) in Eq.(4), we now use \u03b3\u2207x log p(c | x), where \u03b3 \u22650 is called guidance level. Larger \u03b3 will result in highly class-correlated samples while smaller one will encourage sample variability [19]. Different from ordinary classi\ufb01ers, the input to the classi\ufb01er used here is all the xt along the trajectory of SDE in Eq.(1), instead of clean x0 only. The time index t can be anything in [0, 1]. Thus, the classi\ufb01er can also be denoted as p(c | xt, t). While Eq.(6) can effectively control sampling on class label c, it cannot be directly applied to soft-labels, i.e. labels weighted with intensity, as the guidance p(c | x) is not well-de\ufb01ned now. Therefore, we extend this technique for emotion intensity control in Section 3.2. 3. EMODIFF 3.1. Unconditional Acoustic Model and Classi\ufb01er Training The training of EmoDiff mainly includes the training of the unconditional acoustic model and emotion classi\ufb01er. We \ufb01rst train a diffusion-based acoustic model on emotional data, but don\u2019t provide it with emotion conditions. This is referred to as \u201cunconditional acoustic model training\u201d as in Figure 1(a). This model is based on GradTTS [1], except that we provide explicit duration sequence by forced aligners to ease duration modeling. In this stage, the training objective is Ldur + Ldiff, where Ldur is the \u21132 loss of logarithmic duration, and Ldiff is the diffusion loss as Eq.(3). In practice, following GradTTS, we also adopt prior loss Lprior = \u2212log N(x0; \u00b5, I) to encourage converging. For notation simplicity, we use Ldiff to denote diffusion and prior loss together in Figure 1(a). After training, the acoustic model can estimate score function of noisy mel-spectrogram xt given input phoneme sequence y, i.e. \u2207log p(xt | y), which is unconditonal of emotion labels. Following Section 2.2, we then need an emotion classi\ufb01er to distinguish emotion categories e from noisy mel-spectrograms xt. Meanwhile, as we always have a text condition y, the classi\ufb01er is formulated as p(e | xt, y, t). As is shown in Figure 1(b), the input to the classi\ufb01er consists of three components: SDE timestamp t, noisy melspectrogram xt and phoneme-dependent Gaussian mean \u00b5. This classi\ufb01er is trained with the standard cross-entropy loss LCE. Note that we freeze the acoustic model parameters in this stage, and only update the weights in emotion classi\ufb01er. As we always need text y as condition along through the paper, we omit it and denote this classi\ufb01er as p(e | x) in later sections to simplify the notation, if no ambiguity is caused. 3.2. Intensity Controllable Sampling with Soft-Label Guidance In this section, we extend the classi\ufb01er guidance to soft-label guidance which can control emotion weighted with intensity. Suppose the number of basic emotions is m, and every basic emotion ei has a one-hot vector form ei \u2208Rm, i \u2208{0, 1, ..., m \u22121}. For each ei, only the i-th dimension is 1. We specially use e0 to denote Neutral. For an emotion weighted with intensity \u03b1 on ei, we de\ufb01ne it to be d = \u03b1ei + (1 \u2212\u03b1)e0. Then the gradient of log-probability of clasi\ufb01er p(d | x) w.r.t x can be de\ufb01ned as \u2207x log p(d | x) \u225c\u03b1\u2207x log p(ei | x) + (1 \u2212\u03b1)\u2207x log p(e0 | x). (5) The intuition of this de\ufb01nition is that, intensity \u03b1 stands for the contribution of emotion ei on the sampling trajectory of x. Larger \u03b1 means we sample x along a trajectory with large \u201cforce\u201d towards emotion ei, otherwise e0. Thus we can extend Eq.(4) to \u2207x log p(x | d) = \u03b1\u2207x log p(ei | x) + (1 \u2212\u03b1)\u2207x log p(e0 | x) + \u2207x log p(x). (6) When the intensity \u03b1 is 1.0 (100% emotion ei) or 0.0 (100% Neutral), the above operation reduces to the standard classi\ufb01er guidance form Eq.(4). Hence we can use the soft-label guidance Eq.(5) in the sampling process, and generate a realistic sample with speci\ufb01ed emotion d = \u03b1ei + (1 \u2212\u03b1)e0 with intensity \u03b1. \fFig. 1: Training and sampling diagrams of EmoDiff. In training, xt is directly sampled from known distribution p(xt | x0). When sampling with a certain emotion intensity, the score function \u2207x log pt(xt) is estimated by score estimator. \u201cSG\u201d means stop gradient operation. Figure 1(c) illustrates the intensity controllable sampling process. After feeding the acoustic model and obtaining phonemedependent \u00b5 sequence, we sample x1 \u223cN(\u00b5, I) and simulate reverse-time SDE from t = 1 to t = 0 through a numerical simulator. In each simulator update, we feed the classi\ufb01er with current xt and get the output probabilities pt(\u00b7 | xt). Eq.(6) is then used to calculate the guidance term. Similar as Section 2.2, we also scale the guidance term with guidance level \u03b3. At the end, we obtain \u02c6 x0 which is not only intelligible with input text, but also corresponding to the target emotion d with intensity \u03b1. This lead to precise intensity that correlates well to classi\ufb01er probability. Generally, in addition to intensity control, our soft-label guidance is capable for more complicated control on mixed emotions [14]. Denote d = Pm\u22121 i=0 wiei a combination of all emotions where wi \u2208[0, 1], Pm\u22121 i=0 wi = 1, Eq.(5) can be generalized to \u2207x log p(d | x) \u225c m\u22121 X i=0 wi\u2207x log p(ei | x). (7) Then Eq.(6) can also be expressed in such generalized form. This extension can also be interpreted from the probabilistic view. As the combination weights {wi} can be viewed as a categorical distribution pe(\u00b7) over basic emotions {ei}, Eq.(7) is equivalent to \u2207x log p(d | x) \u225cEe\u223cpe\u2207x log p(e | x) (8) = \u2212\u2207x CE [pe(\u00b7), p(\u00b7 | x)] (9) where CE is the cross-entropy function. Eq.(9) implies the fact that we are actually decreasing the cross-entropy of target emotion distribution pe and classi\ufb01er output p(\u00b7 | x), when sampling along the gradient \u2207x log p(d | x). The gradient of cross-entropy w.r.t x can guide the sampling process. Hence, this soft-label guidance technique can generally be used to control any arbitrary complex emotion as a weighted combination of several basic emotions. In Figure 1(c), we use cross-entropy as a concise notation for soft-label guidance term. In our intensity control scenario, it reduces to Eq.(5) mentioned before. 4. EXPERIMENTS AND RESULTS 4.1. Experimental Setup We used the English part of the Emotional Speech Dataset (ESD) [31] to perform all the experiments. It has 10 speakers, each with Table 1: MOS and MCD comparisons. MOS is presented with 95% con\ufb01dence interval. Note that \u201cGradTTS w/ emo label\u201d cannot control emotion intensity. MOS MCD25 GT 4.73\u00b10.09 GT (voc.) 4.69\u00b10.10 2.96 MixedEmotion [14] 3.43\u00b10.12 6.62 GradTTS w/ emo label 4.16\u00b10.10 5.75 EmoDiff (ours) 4.13\u00b10.10 5.98 4 emotional categories Angry, Happy, Sad, Surprise together with a Neutral category. There are 350 parallel utterances per speaker and emotion category, amounting to about 1.2 hours each speaker. Melspectrogram and forced alignments were extracted by Kaldi [32] in 12.5ms frame shift and 50ms frame length, followed by cepstral normalization. Audio samples in these experiments are available 1. In this paper, we only consider single-speaker emotional TTS problem. Throughout the following sections, we trained an unconditional GradTTS acoustic model on all 10 English speakers for a reasonable data coverage, and a classi\ufb01er on a female speaker (ID:0015) only. The unconditional GradTTS model was trained with Adam optimizer at 10\u22124 learning rate for 11M steps. We used exponential moving average on model weights as it is reported to improve diffusion model\u2019s performance [22]. The structure of the classi\ufb01er is a 4-layer 1D CNN, with BatchNorm and Dropout in each block. In the inference stage, guidance level \u03b3 was \ufb01xed to 100. We chose Hi\ufb01GAN [33] trained on all the English speakers here as a vocoder for all the following experiments. 4.2. Emotional TTS Quality We \ufb01rst measure the speech quality, which contains audio quality and speech naturalness. We did comparisons of the proposed EmoDiff with the following systems: 1. GT and GT (voc.): ground truth recording and analysis synthesis result (vocoded with GT mel-spectrogram). 1https://cantabile-kwok.github.io/EmoDiff-intensity-ctrl/ \fFig. 2: Classi\ufb01cation probabilities when controlling on intensity \u03b1 \u2208{0.0, 0.2, 0.4, 0.6, 0.8, 1.0}. Errorbars represent standard deviation. 2. MixedEmotion2: proposed in [14]. It is an autoregressive model based on relative attributes rank to pre-calculate intensity values for training. It much resembles Emovox [18] for intensity controllable emotion conversion. 3. GradTTS w/ emo label: a conditional GradTTS model with hard emotion labels as input. It therefore does not have intensity controllability, but should have good sample quality, as a certi\ufb01ed acoustic model. Note that in this experiment, samples from EmoDiff and MixedEmotion were controlled with \u03b1 = 1.0 intensity weight, so that they are directly comparable with others. Table 1 presents the mean opinion score (MOS) and mel cepstral distortion (MCD) evaluations. It is shown that the vocoder causes little deterioration on sample quality, and our EmoDiff outperforms MixedEmotion baseline with a large margin. Meanwhile, EmoDiff and the hard-conditioned GradTTS both have decent and very close MOS results. The MCD results of them only have a small difference. This means EmoDiff does not harm sample quality for intensity controllability, unlike MixedEmotion. 4.3. Controllability of Emotion Intensity To evaluate the controllability of emotion intensity, we used our trained classi\ufb01er to classify the synthesized samples under a certain intensity that was being controlled. The t input to the classi\ufb01er was now set to 0. The average classi\ufb01cation probability on the target emotion class was used as the evaluation metric. Larger values indicate large discriminative con\ufb01dence. For both EmoDiff and MixedEmotion on each emotion, we varied the intensity from \u03b1 = 0.0 to 1.0. When intensity is 0.0, it equivalents to synthesize 100% Neutral samples. Larger intensity should result in larger probability. Figure 2 presents the results. To demonstrate the capability of this classi\ufb01er, we plotted the classi\ufb01cation probability on ground truth data. To show the performance of hard-conditioned GradTTS model, we also plotted the probability on its synthesized samples. As it doesn\u2019t have intensity controllability, we only plotted the values when intensity was 1.0. Standard deviations are presented as an errobar here as well for each experiment. It can be found from the \ufb01gure that the trained classi\ufb01er has a reasonable performance on ground truth data at \ufb01rst. As a remark, the classi\ufb01cation accuracy on validation set is 93.1%. Samples from GradTTS w/ emo label have some lower classi\ufb01cation probabilities. Most importantly, the proposed EmoDiff always covers a larger range from intensity \u03b1 = 0.0 to 1.0 than the baseline. The error range of EmoDiff is also always lower than the baseline, meaning that our control is more stable. This proves the effectiveness of our proposed soft-label guidance technique. We also notice that sometimes EmoDiff reaches higher classi\ufb01cation probability than 2We used the of\ufb01cial implementation https://github.com/ KunZhou9646/Mixed_Emotions Fig. 3: Diversity preference test of each emotion. hard-conditioned GradTTS at intensity 1.0. This is also reasonable, as conditioning on emotion labels when training is not guaranteed to achieve better class-correlation than classi\ufb01er guidance, with a strong classi\ufb01er and suf\ufb01cient guidance level. 4.4. Diversity of Emotional Samples Despite genearating high-quality and intensity controllable emotional samples, EmoDiff also has good sample diversity even in the same emotion, bene\ufb01ting from the powerful generative ability of diffusion models. To evaluate the diversity of emotional samples, we conducted a subjective preference test for each emotion between our EmoDiff and MixedEmotion. Listeners were asked to choose the more diverse one, or \u201cCannot Decide\u201d. Note that the test was done for each emotion in \u03b1 = 1.0 weight. Figure 3 shows the preference result. It is clear that for each of the three emotion categories Angry, Happy and Surprise, EmoDiff owns a large advantage of being preferred in diversity. Only for Sad, EmoDiff outperforms the baseline with a little margin. This is mainly because MixedEmotion is autoregressive, and we found its variation on duration accounts much especially for Sad samples. 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/2202.07200v2",
+ "title": "Unsupervised word-level prosody tagging for controllable speech synthesis",
+ "abstract": "Although word-level prosody modeling in neural text-to-speech (TTS) has been\ninvestigated in recent research for diverse speech synthesis, it is still\nchallenging to control speech synthesis manually without a specific reference.\nThis is largely due to lack of word-level prosody tags. In this work, we\npropose a novel approach for unsupervised word-level prosody tagging with two\nstages, where we first group the words into different types with a decision\ntree according to their phonetic content and then cluster the prosodies using\nGMM within each type of words separately. This design is based on the\nassumption that the prosodies of different type of words, such as long or short\nwords, should be tagged with different label sets. Furthermore, a TTS system\nwith the derived word-level prosody tags is trained for controllable speech\nsynthesis. Experiments on LJSpeech show that the TTS model trained with\nword-level prosody tags not only achieves better naturalness than a typical\nFastSpeech2 model, but also gains the ability to manipulate word-level prosody.",
+ "authors": "Yiwei Guo, Chenpeng Du, Kai Yu",
+ "published": "2022-02-15",
+ "updated": "2022-02-16",
+ "primary_cat": "eess.AS",
+ "cats": [
+ "eess.AS",
+ "cs.AI",
+ "cs.LG",
+ "cs.SD"
+ ],
+ "main_content": "INTRODUCTION Prosody modeling in neural speech synthesis has been extensively explored in recent research, aiming for natural, diverse, and controllable synthesis. The naturalness of synthetic speech is improved with prosody modeling taken into account [1\u20133]. Recently, more attention has been attracted by rich prosody modeling and control. Explicit prosodic features, which have clear linguistic or phonological interpretation, are \ufb01rst investigated. [4, 5] both provide solutions to control speci\ufb01c acoustic aspects of phone-level speech. [4] introduces temporal structures in the embedding networks that can control pitch and amplitude either on speech side or text side. [5] proposes a generative model that controls affect and speaking rate with semi-supervised latent variables. [6] effectively controls wordlevel pitch accent by multiplying optional bias to pitch encoder\u2019s output. [7, 8] presents F0, duration and energy control with variational auto-encoders (VAE). They disentangle these prosody features and provide more independent control. [9, 10] model these features with clustering, which is a purely data-driven method that have more interpretability. In contrast to explicit representation, implicit prosody representation is more complete and richer when modelling prosody diversity, yet uninterpretable. Prosody embeddings sampled from prior distribution with VAE are widely investigated in many linguistic levels. [11] models the global characteristics for an utterance. [12] improves the performance by incorporating GMM prior \u2217Corresponding author in VAE. [13] enhances phone-level prosody latent representations by VAE in prosody transfer. [14] uses vector quantization and trains an autoregressive prior model to generate synthetic speech with better sound quality. [15\u201317] models prosody hierarchically, by conditioning phone and word-level latent variables on coarser ones. These works incorporate more semantic information, thus improve the naturalness of synthetic speech to a great extent. Recently, unsupervised prosody clustering with mixture density network is also proposed in [18, 19], enabling richer prosody diversity. However, all the prior works control the prosodies manually by providing a reference speech or specifying the values of explicit prosodic features, such as pitch, which is hard to be practically applied. For example, it is expensive to collect reference speech with the prosodies that one needs. Also, hand-written values of explicit features may not correspond to a natural speech, and these explicit features do not represent the entire prosody space. As for implicit prosody representations, there are few known methods that can control prosody in inference stage. This is mainly because of the continuous prosody distributions they use. Therefore, few of the existing works achieve good and interpretable controllability with diverse prosody in natural speech. In this work, we propose an unsupervised word-level prosody tagging system that can be directly used for prosody control. We extract prosody embeddings from the mel-spectrogram of reference speech. Then, we obtain the word-level prosody tags in two stages. First, we construct a decision tree that recursively clusters all the words into different text-dependent sets, with a set of questions regarding their phonetic contents. Then, for each text-dependent leaf node, we cluster the prosody embeddings using Gaussian mixture models. The obtained prosody tags represent word-level prosody types and are further embedded to train a TTS system with a prosody tag predictor. The prosody tag predictor is capable of controlling the prosody of synthetic speech by manually specifying the prosody tag of any word. Our approach has several advantages besides the improved naturalness and controllability. First, the prosody tags are obtained in an unsupervised manner, without the need for expensive manual annotations like emotional labels. Second, the decision tree design makes it easy and robust to generalize to unseen words in inference, in terms of identifying a word into its phonetic cluster. Furthermore, as most of the questions in decision tree are language-agnostic, this design can be easily extended to different languages. By selecting the questions, the tree can also be used for multiple tasks. The rest of the paper is organized as follows. Section 2 illustrates the overall system. Experiments and results analysis are given in Section 3, and Section 4 draws a conclusion. arXiv:2202.07200v2 [eess.AS] 16 Feb 2022 \fFig. 1: Prosody extracting and tagging system architecture 2. WORD-LEVEL PROSODY TAGGING AND CONTROL Our system is built in three steps: word-level prosody embedding extracting, two-stage word-level prosody tagging, and TTS training with the prosody tags. Note that the TTS models in our system are based on FastSpeech2 [20]. 2.1. Word-level prosody extracting In order to obtain word-level prosody embeddings, we \ufb01rst build a typical FastSpeech2-based TTS model together with a prosody extractor following [19]. As is shown in Fig.1(a), the prosody extractor generates a hidden vector (named as prosody embedding e) for each word from the corresponding mel-spectrogram segment. The generated prosody embeddings are then aligned with the phoneme sequence and concatenated to the encoder output. Accordingly, the extractor is optimized to extract useful information for better reconstructing the output speech, including both prosody information and phonetic contents of the words. 2.2. Prosody tagging with two stages It is an intuitive idea that words with greatly different phonetic contents, such as the long word \u2019congratulation\u2019 and the short word \u2019cat\u2019, are uttered in a completely different ways and consequently should not be tagged with the same set of prosody tags. Therefore, in this work, we design a two-stage prosody tagging strategy, where we \ufb01rst group the words into different types with a decision tree according to their phonetic contents and then cluster the prosodies using GMM within each type of words separately. 2.2.1. Stage one: decision tree clustering Following the HMM state-tying in ASR [21], we construct a binary decision tree for word clustering with a set of questions Q on its phonetic contents, where all the words in the root are clustered into l leaves. These questions are designed based on our expert knowledge, such as \u201dWhether the phonemes of the word are more than 4 or not?\u201d and \u201dWhether the word ends with a closed syllable?\u201d. We reference the phonetic questions in HTS[22], which is a direct product of [21]. Each node in the decision tree contains a set of words whose prosody embeddings can be modeled with a Gaussian distribution and the log-likelihood can be formulated as LL(i) = X e\u2208E(i) log N \u0010 e | \u00b5(i), \u03a3(i)\u0011 (1) where i is the node index and E(i) is the set of all prosody embeddings corresponding to the words in the node i. Each non-leaf node i is related to a question q that partitions the words in the node into its left or right child, leading to an increase in log-likelihood of the prosody embeddings \u2206qLL(i) = LL(i\u2019s left child under q) + LL(i\u2019s right child under q) \u2212LL(i). (2) The initial tree contains only a root node, which is also a leaf node. Then we recursively perform the following step: \ufb01nd the question that maximizes the increase in log-likelihood for all the leaf nodes, and select a leaf node j whose increase is the maximum over all the leaf nodes, which is j = arg max i\u2208leaf nodes \u0012 max q\u2208Q \u2206qLL(i) \u0013 , (3) and split the selected node with the corresponding question. This process continues until the increase in log-likelihood is smaller than a threshold. Consequently, the topology of the decision tree is obtained. In this work, the number of leaves l is 10 as shown in Fig. 1(b), whose indices are denoted as letters from a to j. 2.2.2. Stage two: Gaussian mixture clustering The word-level prosody embeddings e extracted by neural networks contain both prosody information and phonetic content of the words. However, the decision tree clusters the words into l leaves according to the questions only on their phonetic contents, so we assume that the prosody embeddings of the words in a leaf node differ only in prosodies and are similar in phonetic contents. Therefore, clustering within a leaf node is dominated by the prosodies instead of phonetic contents. We perform GMM-based clustering for the prosody embeddings within each leaf node i separately, which is e(i) \u223c m X k=1 \u03c9(i) k N \u0010 e(i)|\u00b5(i) k , \u03a3(i) k \u0011 (4) \fFig. 2: Prosody control model architecture in training and inference stage where k is the Gaussian component index and m is the number of components. The prosody of each word is tagged with the index of the Gaussian component that maximizes the posterior probability of its prosody embedding e t = arg max k n log N \u0010 e | \u00b5(i) k , \u03a3(i) k \u0011 + log \u03c9(i) k o . (5) In this work, m is set to 5, so the Gaussian component ids range from 0 to 4. Accordingly, all the words in the training set are labelled with the m \u00d7 l = 5 \u00d7 10 = 50 prosody tags, which is the combination of 10 leaf ids and 5 Gaussian component ids. As shown in Fig. 1(b), the prosody tags are from a0 to j4. Note that our prosody extracting and tagging system is fully unsupervised in which only audio information is utilized. Also, the tagging system is driven by both data and knowledge. 2.3. Prosody control with prosody tags Finally, we train a TTS model with the derived word-level prosody tags as shown in Fig.2. In the training stage, the TTS model is guided by prosody embeddings retrieved from a trainable embedding table given the ground-truth prosody tags. In the inference stage, the prosody tags can be either predicted from input text by a prosody predictor or be manually speci\ufb01ed. The prosody predictor in this work is similar to [19]. It predicts the prosody tag for each word given its corresponding phoneme hidden states, i.e. the encoder output sequence h. The prosody predictor contains a bi-GRU that transforms the phoneme hidden states to a vector for each word, two convolutional blocks and a softmax layer. The convolutional blocks here consist of a 1D convolutional layer followed by a ReLU activation layer, layer normalization, and a dropout layer. The predictor is optimized by the cross-entropy loss LPP with the ground-truth prosody tags. Hence, the overall loss for the model training is de\ufb01ned as L = \u03b1LPP + LFastSpeech2, (6) where LFastSpeech2 is the loss of FastSpeech2[20] and \u03b1 is the relative weight between the two terms. 3. EXPERIMENTS AND RESULTS 3.1. Experimental setup We use LJSpeech [23], a single-speaker dataset containing about 24 hours of recordings for our experiments. 242 utterances are left out as a test set. All utterances are down-sampled to 16kHz. We use 800point window length, 200-point hop size, 1024 FFT points, and 320 mel-bins for feature extraction. The phoneme alignment is obtained from an HMM-GMM ASR model trained on Librispeech [24]. The vocoder used in this work is MelGAN [25]. The coef\ufb01cient \u03b1 in Eq.(6) is set to 1.0. The prosody embedding e is 128 dimensional. 3.2. The performance of decision tree in prosody tagging Fig. 3: Curve of overall log-likelihood of leaves and average number of leaf samples We demonstrate the curve of the average number of prosody embeddings in each leaf node and the overall log-likelihood of prosody embeddings over all leaf nodes P i\u2208leaf nodes LL(i) in Fig.3 when the tree grows. With the increase of the number of leaf nodes, the average number of prosody embeddings in each leaf node decreases whilst the overall log-likelihood of prosody embeddings increases. We stop the growth of the tree when the number of leaves reaches 10, in consideration of both the performance and the complexity. 3.3. Naturalness of predicted prosodies The TTS model with a prosody predictor is trained with the derived word-level prosody tags. In the inference stage, the word-level prosodies can be either predicted from the input text by the prosody predictor or be manually speci\ufb01ed. In this section, we synthesize the test set whose prosodies are predicted and sampled. Then we evaluate the naturalness with a MUSHRA test in which 30 listeners are asked to rate each utterance in a range from 0 to 100. We compare our model with two baselines: the typical FastSpeech2 model [20] Raw FSP and a TTS model in which phone-level prosodies are modeled with a mixture density network [19] PLP MDN. Also, the ground-truth mel-spectrograms of the recordings are reconstructed by the vocoder and then provided as GT in the listening test. The \fFig. 4: Subjective evaluation of naturalness results are reported in Fig.4. It can be observed that our proposed word-level prosody prediction system with predicted prosody tags (WLP predict) outperforms both other models in terms of naturalness, due to our word-level prosody modelling, although it is still slightly worse than GT. 3.4. Prosody controllability In order to evaluate the word-level prosody controllability of our TTS model, we \ufb01rst label the ground-truth word prosodies for the test set with the proposed prosody tagging system. Then we synthesize the test set 5 times where the prosody tags of the words in leaf d are manually speci\ufb01ed as d0 to d4 respectively while the prosody tags of other words are predicted and sampled. 1 Fig. 5 shows an example in which the word \u201cresponsibilities\u201d between the yellow dash lines are manually controlled with d0 to d4 respectively. It can be observed that all the 5 prosodies of the word are different, showing the controllability of the prosody tags. Fig. 5: An example of synthetic speech with manually speci\ufb01ed prosodies. The word between the yellow dash lines is \u201cresponsibilities\u201d whose prosody tags are speci\ufb01ed as d0 to d4 respectively. In addition, we need to con\ufb01rm that same prosody tags lead to similar prosodies. Therefore, we evaluate the prosody similarity between the recordings and the synthetic speech with different speci\ufb01ed prosody tags for all the words in the leaf d in the test set. Theoretically, when the speci\ufb01ed prosody tag is equal to the ground-truth prosody tag, the word prosody in the synthetic speech should be most similar to the recordings. We perform the evaluation of prosody similarity in objective and subjective ways respectively. We \ufb01rst compute the average Mel cepstral distortion (MCD) over all the words with ground-truth prosody 1The audio examples are available at https://cantabile-kwok. github.io/word-level-prosody-tagging-control/ Ctrl Tag GT Tag 0 1 2 3 4 0 5.389 5.434 5.371 5.490 5.420 1 5.410 5.348 5.379 5.796 5.420 2 5.612 5.670 5.356 5.548 5.517 3 5.828 6.023 5.578 5.442 5.714 4 5.507 5.507 5.362 5.562 5.309 Table 1: Mel cepstral distortion between the recordings and the synthetic speech with different speci\ufb01ed prosody tags for all the words in the leaf d in the test set. tag dt where t ranges from 0 to 4 between the recordings and the synthetic speech with a certain speci\ufb01ed prosody tag. The results are reported in Table 1. As expected, we can \ufb01nd that all the diagonal values are the lowest among the values on their columns, showing that same prosody tags lead to similar prosodies in synthetic speech. Also, we evaluate the prosody similarity with a subjective listening test where 30 listeners are provided with the recording and 5 synthetic speech with different prosody tags for each group and are asked to select the synthetic speech whose prosody of the corresponding word is the most similar to the recording. The proportions of the selections are depicted as a confusion matrix in Fig. 6. Similar to the results of objective evaluation, the proportion of the synthetic speech with the same prosody tags to the ground-truth ones, i.e. the diagonal values, achieves the highest among their columns, which further con\ufb01rms the controllability of prosody tags. Fig. 6: Subjective evaluation of controllability 4."
+ }
+ ]
+ },
+ "edge_feat": {}
+ }
+}
\ No newline at end of file