diff --git "a/abs_29K_G/test_abstract_long_2405.03121v1.json" "b/abs_29K_G/test_abstract_long_2405.03121v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.03121v1.json" @@ -0,0 +1,444 @@ +{ + "url": "http://arxiv.org/abs/2405.03121v1", + "title": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding", + "abstract": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.", + "authors": "Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. 
Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.", + "main_content": "INTRODUCTION Integrating speech signals with single portraits [13, 18, 33, 45, 47, 59\u2013 61] to generate talking avatars has greatly enhanced both the entertainment and education sectors, providing innovative avenues for interactive digital experiences. While current methodologies [36, 47, 57, 61, 62] have made notable strides in achieving synchronicity between speech signals and lip movements, thus enhancing verbal communication, they often neglect the critical aspect of nonverbal communication. Nonverbal communication encompasses the transmission of information without the use of words, including but not limited to specific head movements, facial expressions, and blinking. Research [35] indicates that these nonverbal cues are pivotal in communicating. The primary challenge lies in the inadequacy of existing models to encapsulate the complex dynamics associated with facial motion representation. Existing approaches predominantly employ explicit structural representations such as blendshapes [3, 13, 34], landmark coefficients [18, 48, 60], or 3D Morphable Models (3DMM) [7, 14, 27] to animate faces. Designed initially for single-image processing, these methods offer a constrained approximation of facial dynamics, failing to capture the full breadth of human expressiveness. Recent advancements [11, 25] have introduced trainable facial motion encoders as alternatives to conventional explicit features, showing \u2217The Corresponding author. significant progress in capturing detailed facial movements. However, their deployment is often tailored for specific speakers [11] or limited to the mouth region [25], highlighting a gap in fine-grained motion representation that captures all varieties of facial dynamics. A universal and fine-grained motion representation that is applicable across different characters remains absent. Such a representation should fulfill three key criteria: capturing minute details, such as minor mouth movements, eye blinks, or slight facial muscle twitching; ensuring universality, making it applicable to any speaker while removing identity-specific information to maintain a clear separation between appearance and motion; and incorporating a wide range of nonverbal cues, such as expressions, head movements, and posture. In this paper, we introduce AniTalker. Our approach hinges on a universal motion encoder designed to grasp the intricacies of facial dynamics. By adopting the self-supervised learning paradigm, we mitigate the reliance on labeled data, enabling our motion encoder to learn robust motion representations. This learning process operates on dual levels: one entails understanding motion dynamics through the transformation of a source image into a target image, capturing a spectrum of facial movements, from subtle changes to significant alterations. Concurrently, the use of identity labels within the dataset facilitates the joint optimization of an identity recognition network in a self-supervised manner, further aiming to disentangle identity from motion information through mutual information minimization. This ensures that the motion representation retains minimal identity information, upholding its universal applicability. To authenticate the versatility of our motion space, we integrate a diffusion model and a variance adapter to enable varied generation and manipulation of facial animations. 
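To make the first self-supervised strategy concrete, below is a minimal sketch of same-identity source/target frame sampling for the reconstruction objective; the dataset layout and all names are illustrative assumptions, not the authors' code.

```python
import random

def sample_training_pair(video_clips_by_identity):
    """Sample a (source, target) frame pair from one clip of one identity (sketch).

    video_clips_by_identity: dict mapping identity id -> list of clips,
    where each clip is a list of frame paths. Because both frames come from
    the same clip, the target differs from the source mainly by motion, which
    is what the motion encoder is trained to capture.
    """
    identity = random.choice(list(video_clips_by_identity))
    clip = random.choice(video_clips_by_identity[identity])
    src_idx, tgt_idx = random.sample(range(len(clip)), 2)  # occasionally adjacent frames -> subtle motion
    return clip[src_idx], clip[tgt_idx], identity          # identity label also feeds the identity encoder
```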
Thanks to our sophisticated representation and the diffusion motion generator, AniTalker is capable of producing diverse and controllable talking faces. In summary, our contributions are threefold: (1) We have developed universal facial motion encoders using a self-supervised approach that effectively captures facial dynamics across various individuals. These encoders feature an identity decoupling mechanism to minimize identity information in the motion data and prevent identity leakage. (2) Our framework includes a motion generation system that combines a diffusion-based motion generator with a variance adapter. This system allows for the production of diverse and controllable facial animations, showcasing the flexibility of our motion space. (3) Extensive evaluations affirm our framework\u2019s contribution to enhancing the realism and dynamism of digital human representations, while simultaneously preserving identity. 2 RELATED WORKS Speech-driven Talking Face Generation refers to creating talking faces driven by speech, We categorize the models based on whether they are single-stage or two-stage. Single-stage models [36, 58, 61] generate images directly from speech, performing end-toend rendering. Due to the size constraints of rendering networks, this method struggles with processing longer videos, generally managing hundreds of milliseconds. The two-stage type [3, 11, 13, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 18, 25, 33, 60] decouples motion information from facial appearance and consists of a speech-to-motion generator followed by a motion-to-video rendering stage. As the first stage solely generates motion information and does not involve the texture information of the frames, it requires less model size and can handle long sequences, up to several seconds or even minutes. This two-stage method is known to reduce jitter [3, 11, 25], enhance speech-tomotion synchronization [11, 13, 33, 60], reduce the need for aligned audio-visual training data [3, 25], and enable the creation of longer videos [18]. Our framework also employs a two-stage structure but with a redesigned motion representation and generation process. Motion Representation serves as an essential bridge between the driving features and the final rendered output in creating talking faces. Current methods predominantly utilize explicit structural representations, such as blendshapes [3, 13, 32], 3D Morphable Models (3DMMs) [27], or landmarks [48, 60]. These formats offer high interpretability and facilitate the separation of facial actions from textures, making them favored as intermediary representations in facial generation tasks. However, due to the wide range of variability in real-world facial movements, they often fail to capture the subtle nuances of facial expressions fully, thus limiting the diversity and expressiveness of methods dependent on these representations. Our research is dedicated to expanding the spectrum of motion representation by developing a learned implicit representation that is not constrained by the limitations of explicit parametric models. Self-supervised motion transfer approaches [31, 41, 44, 48, 49, 51, 54] aim to reconstruct the target image from a source image by learning robust motion representations from a large amount of unlabeled data. This significantly reduces the need for labeled data. A key challenge in these methods is separating motion from identity information. 
They primarily warp the source image using predicted dense optical flow fields. This approach attempts to disentangle motion from identity by predicting distortions and transformations of the source image. However, information leakage occurs in practice, causing the target image to contain not just motion but also identity information. Building on this observation, we explicitly introduce identity modeling and employ the Mutual Information Neural Estimation (MINE) [1, 4] method to achieve a motion representation independent of identity. Diffusion Models [19] have demonstrated outstanding performance across various generative tasks [12, 17, 21, 39]. Recent research has utilized diffusion models as a rendering module [2, 11, 25, 29, 40, 43, 45]. Although diffusion models often produce higher-quality images, they require extensive model parameters and substantial training data to converge. To enhance the generation process, several approaches [18, 27, 28, 32, 55] employ diffusion models for generating motion representations. Diffusion models excel at addressing the one-to-many mapping challenge, which is crucial for speech-driven generation tasks. Given that the same audio clip can lead to different actions (e.g., lip movements and head poses) across different individuals or even within the same person, diffusion models provide a robust solution for managing this variability. Additionally, the training and inference phases of diffusion models, which systematically introduce and then remove noise, allow for the incorporation of noise during generation to foster diversity. We also use diffusion in conjunction with our motion representation to further explore diversity in talking face generation. 3 ANITALKER FRAMEWORK 3.1 Model Overview AniTalker contains two critical components: (1) Training a motion representation that can capture universal face dynamics, and (2) Based on the well-trained motion encoder from the previous step, the generation or manipulation of the motion representation using the user-controlled driving signal to produce the synthesised talking face video. 3.2 Universal Motion Representation Our approach utilizes a self-supervised image animation framework, employing two RGB images from a video clip: a source image \ud835\udc3c\ud835\udc60and a target image \ud835\udc3c\ud835\udc61(\ud835\udc3c\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73), to serve distinct functions: \ud835\udc3c\ud835\udc60provides identity information, whereas \ud835\udc3c\ud835\udc61delivers motion details. The primary aim is to reconstruct \ud835\udc3c\ud835\udc61. Due to the random selection of frames, occasionally adjacent frames are chosen, enabling the network to learn representations of subtle movements. As depicted in Figure 2 (a), both the source and target images originate from the same video clip. Through this self-supervised learning method, the target image\u2019s encoder is intended to exclusively capture motion information. By learning from frame-to-frame transfer, we can acquire a more universal representation of facial motion. This representation includes verbal actions such as lip movements, as well as nonverbal actions, including expressions, posture, and movement. To explicitly decouple motion and identity in the aforementioned processes, we strengthen the self-supervised learning approach by incorporating Metric Learning (ML) and Mutual Information Disentanglement (MID). Specifically: Metric Learning. 
Drawing inspiration from face recognition [8, 46] and speaker identification [9], metric learning facilitates the learning of robust identity information. This technique employs pairs of positive and negative samples, aiming to minimize the distance between similar samples and maximize it between dissimilar ones, thereby enhancing the network's ability to discriminate between different identities. This process can also proceed in a self-supervised fashion, with each iteration randomly selecting distinct identities from the dataset. Specifically, the approach establishes an anchor ($a$) and selects a positive sample ($p$) and a negative sample ($n$), a face of a different identity, with the goal of reducing the distance $d$ between the anchor and the positive sample while increasing the distance between the anchor and the negative sample. This optimization, depicted in Figure 2 (b), involves randomly selecting a different identity from a list of candidates not belonging to the current person as the negative sample. The optimization goal is: $\mathcal{L}_{ML} = \max\left(0,\; d(a, p) - d(a, n) + \text{margin}\right)$. Here, the margin is a positive threshold introduced to further separate the positive and negative samples, thus improving the model's ability to distinguish between different identities. (Figure 2: The AniTalker framework comprises two main components: learning a universal motion representation, and then generating and manipulating this representation through a sequence model. The first part learns a robust motion representation by employing metric learning (ML), mutual information disentanglement (MID), and a Hierarchical Aggregation Layer (HAL); the representation is subsequently used for generation and manipulation.) Mutual Information Disentanglement. Although metric learning effectively constrains the identity encoder, focusing solely on this encoder does not adequately minimize the identity information within the motion encoder. To tackle this issue, we utilize Mutual Information (MI), a statistical measure that evaluates the dependency between the outputs of the identity and motion encoders. Given the challenge of directly computing MI between two variables, we adopt a parametric method to approximate the MI between random variables. Specifically, we use CLUB [4], which estimates an upper bound on MI. Assuming the output of the identity encoder is the identity latent $z_{id}$ and the motion encoder's output is the motion latent $z_m$, our goal is to minimize the mutual information $I(\mathcal{E}(z_{id}); \mathcal{E}(z_m))$, where $\mathcal{E}$ denotes the learnable Multi-Layer Perceptron (MLP) within CLUB. This ensures that the motion encoder primarily captures motion, preventing identity information from contaminating the motion space. This strategy is depicted in Figure 2 (c).
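As a concrete illustration of this disentanglement term, the following is a minimal PyTorch sketch of a CLUB-style upper-bound estimator used as the mutual information loss; the class, hidden size, and variable names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CLUBEstimator(nn.Module):
    """CLUB-style variational estimator of an MI upper bound between two latents (sketch).

    q(z_m | z_id) is modeled as a diagonal Gaussian whose mean and log-variance
    are predicted by small MLPs; constant terms of the log-density are omitted.
    """
    def __init__(self, id_dim: int, motion_dim: int, hidden: int = 256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(id_dim, hidden), nn.ReLU(), nn.Linear(hidden, motion_dim))
        self.logvar = nn.Sequential(nn.Linear(id_dim, hidden), nn.ReLU(), nn.Linear(hidden, motion_dim))

    def log_likelihood(self, z_id, z_m):
        mu, logvar = self.mu(z_id), self.logvar(z_id)
        return (-0.5 * (z_m - mu) ** 2 / logvar.exp() - 0.5 * logvar).sum(dim=-1)

    def learning_loss(self, z_id, z_m):
        # Fit q(z_m | z_id) by maximum likelihood; used to update the estimator itself.
        return -self.log_likelihood(z_id, z_m).mean()

    def mi_upper_bound(self, z_id, z_m):
        # CLUB bound: joint (paired) likelihood minus marginal (shuffled) likelihood.
        positive = self.log_likelihood(z_id, z_m)
        negative = self.log_likelihood(z_id, z_m[torch.randperm(z_m.size(0))])
        return (positive - negative).mean()
```

In this kind of setup, the estimator would typically be updated with `learning_loss` on detached latents, while the identity and motion encoders receive `mi_upper_bound` as the mutual information penalty to be minimized.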
In summary, by leveraging Metric Learning and Mutual Information Disentanglement, we enhance the model's capacity to accurately differentiate between identity and motion while reducing reliance on labeled data. Hierarchical Aggregation Layer (HAL). To enhance the motion encoder's ability to capture motion variance across different scales, we introduce the Hierarchical Aggregation Layer (HAL). This layer integrates information from various stages of the image encoder, each providing a different receptive field [24]. HAL takes the outputs of all intermediate layers of the image encoder and passes them through an Average Pooling (AvgPool) layer to capture scale-specific information. A Weighted Sum [53] layer follows, assigning learnable weights to merge information from these diverse layers. This soft fusion enables the motion encoder to capture and depict movements across a broad range of scales, allowing our representation to adapt to faces of different sizes without prior face alignment or normalization. Specifically, the features after the AvgPool layer are denoted as $[m_1, m_2, \ldots, m_n]$, with $[w_1, w_2, \ldots, w_n]$ as the corresponding weights, where $n$ is the number of intermediate layers of the image encoder. The weights are normalized with the softmax function, $w_i = \frac{e^{W_i}}{\sum_{j=1}^{n} e^{W_j}}$, so that they sum to 1, and the layer's output is the weighted sum $m = \sum_{i=1}^{n} w_i \cdot m_i$. Subsequently, $m$ is fed into the motion encoder for further encoding.
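A minimal sketch of this weighted-sum fusion, assuming the intermediate feature maps are already produced by the image encoder; the 1x1 projections are an assumption added here to make channel widths compatible, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAggregationLayer(nn.Module):
    """Softmax-weighted fusion of multi-scale image-encoder features (sketch)."""
    def __init__(self, num_layers: int, channels: int):
        super().__init__()
        # One learnable scalar weight W_i per intermediate layer.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        # 1x1 projections (an assumption, not specified in the paper) so all maps share `channels`.
        self.proj = nn.ModuleList([nn.LazyConv2d(channels, kernel_size=1) for _ in range(num_layers)])

    def forward(self, feature_maps):
        # feature_maps: list of n tensors [B, C_i, H_i, W_i] from the image encoder.
        pooled = [F.adaptive_avg_pool2d(self.proj[i](f), 1).flatten(1)  # AvgPool -> [B, channels]
                  for i, f in enumerate(feature_maps)]
        m = torch.stack(pooled, dim=1)                # [B, n, channels]
        w = torch.softmax(self.layer_weights, dim=0)  # w_i = e^{W_i} / sum_j e^{W_j}
        return (w.view(1, -1, 1) * m).sum(dim=1)      # m = sum_i w_i * m_i -> [B, channels]
```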
Learning Objective. The main goal of learning is to reconstruct the target image by inputting two images, the source and the target, drawn from the same identity. Several loss functions are utilized during training, including reconstruction loss $\mathcal{L}_{recon}$, perceptual loss $\mathcal{L}_{percep}$, adversarial loss $\mathcal{L}_{adv}$, mutual information loss $\mathcal{L}_{MI}$, and identity metric learning loss $\mathcal{L}_{ML}$. The total loss is formulated as follows: $\mathcal{L}_{motion} = \mathcal{L}_{recon} + \lambda_1 \mathcal{L}_{percep} + \lambda_2 \mathcal{L}_{adv} + \lambda_3 \mathcal{L}_{MI} + \lambda_4 \mathcal{L}_{ML}$. 3.3 Motion Generation Once the motion encoder and image renderer are trained, we freeze them in the second stage. The motion encoder extracts motion latents from images, a video-driven or speech-driven pipeline produces the motion sequence, and the image renderer carries out the final frame-by-frame rendering. 3.3.1 Video-Driven Pipeline. Video driving, also referred to as face reenactment, leverages a driving speaker's video sequence $I^d = [I^d_1, I^d_2, \ldots, I^d_T]$ to animate a source image $I^s$, resulting in a video that accurately replicates the driving poses and facial expressions. In this process, the video sequence $I^d$ is fed into the motion encoder trained in the first phase to extract the motion latents. These latents, along with $I^s$, are then fed frame by frame into the image renderer; no additional training is required. The inference process is depicted in Figure 2 (e), where the orange lines represent the data flow during video-driven inference. 3.3.2 Speech-Driven Pipeline. Unlike video-driven methods that use images, the speech-driven approach generates videos consistent with the speech signal or other control signals to animate a source image $I^s$. Specifically, we utilize a combination of a diffusion model and a variance adapter: the former learns a better distribution of the motion data, while the latter introduces attribute manipulation. Diffusion Models. For generating motion latent sequences, we utilize a multi-layer Conformer [16]. Training follows the diffusion process, which includes both noising and denoising steps. The noising process gradually converts the clean motion latent $M$ into Gaussian noise $M_T$, where $T$ is the total number of diffusion steps; conversely, the denoising process systematically removes noise to recover clean motion latents. This iterative process better captures the distribution of motion, enhancing the diversity of the generated results. During training, we adhere to the DDPM methodology of [19] and apply its simplified loss objective, $\mathcal{L}_{diff} = \mathbb{E}_{t, M, \epsilon}\left[\left\lVert \epsilon - \hat{\epsilon}(M_t, t, C)\right\rVert^2\right]$ (1), where $t$ is a specific time step and $C$ is the control signal, i.e., either the speech features or speech features perturbed by the Variance Adapter (discussed in the following section). For inference, considering the numerous iteration steps required by diffusion, we adopt the Denoising Diffusion Implicit Model (DDIM) [42], an alternative non-Markovian noising process, as the solver to speed up sampling.
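For concreteness, a minimal sketch of the simplified objective in Equation 1 written as a single training step; the denoiser signature and the `alphas_cumprod` schedule name are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, motion_latents, speech_cond, alphas_cumprod):
    """One DDPM-style training step for the motion generator (illustrative sketch).

    denoiser:       a sequence model (e.g. a Conformer) predicting the added noise
                    from (noisy latents, timestep, condition C).
    motion_latents: clean motion latent sequence M, shape [B, L, D].
    speech_cond:    condition C (speech features, possibly perturbed by the
                    variance adapter), shape [B, L, D_c].
    alphas_cumprod: cumulative noise schedule \bar{alpha}_t, shape [T].
    """
    B, T = motion_latents.size(0), alphas_cumprod.size(0)
    t = torch.randint(0, T, (B,), device=motion_latents.device)        # random timestep per sample
    noise = torch.randn_like(motion_latents)                            # epsilon ~ N(0, I)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noisy = a_bar.sqrt() * motion_latents + (1 - a_bar).sqrt() * noise  # forward (noising) process
    pred = denoiser(noisy, t, speech_cond)                              # \hat{epsilon}(M_t, t, C)
    return F.mse_loss(pred, noise)                                      # Equation (1), simplified objective
```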
Variance Adapter. The Variance Adapter [38] is a residual branch attached to the audio features that allows optional control over the speech signal. Originally proposed to mitigate the one-to-many problem in Text-to-Speech (TTS), its architecture includes a predictor and an encoder that use the speech signal to predict attribute representations, with a residual connection applied between the encoder output and the speech features. During the training stage, the encoder processes the speech features in collaboration with the predictor to minimize an L2 loss against a ground-truth control signal obtained from an attribute extractor targeting a specific attribute, for example a pose extractor (yaw, pitch, roll) to control head posture during audio-driven generation. During the inference stage, the trained encoder and predictor can flexibly synthesize outputs with controlled attributes or operate purely from speech-driven inputs. The detailed structure is depicted in Figure 3 (Figure 3: Variance Adapter Block. Each block models a single attribute and can be iterated multiple times, where $N$ represents the number of attributes.). Our approach extends previous works [11, 18] by incorporating LSTMs [15] for improved temporal modeling and by introducing additional cues such as head position and head scale, which we refer to as camera parameters. Learning Objective. The total loss comprises the diffusion loss and the variance adapter losses, where $K$ represents the number of attributes: $\mathcal{L}_{gen} = \mathcal{L}_{diff} + \lambda \sum_{k=1}^{K} \mathcal{L}_{var_k}$. 4 EXPERIMENTS 4.1 Experimental Settings We utilize three datasets: VoxCeleb [30], HDTF [59], and VFHQ [52]. Because these datasets are processed differently, we re-downloaded the original videos and processed them in a unified way. Specifically, our processing pipeline filters out blurred faces and faces at extreme angles. Note that we did not align faces but instead used a fixed detection box for each video clip, allowing for natural head movement. This effort resulted in a dataset containing 4,242 unique speaker IDs, encompassing 17,108 video clips with a cumulative duration of 55 hours. Details of this filtering process are provided in the supplementary material. Each video in these datasets carries a unique facial ID tag, which we used as the label for training our identity encoder. We also reserved some videos from HDTF for testing, following the test split in [58]. Scenario Setting. We evaluate methods under two scenarios, video-driven and speech-driven, both operating on a one-shot basis with only a single portrait required. The primary distinction lies in the source of animation: image sequences for video-driven and audio signals for speech-driven scenarios. The detailed data flow for inference is illustrated in Figure 2.
Additionally, each scenario is divided into two types: self-driven, where the source and target \f, 2024, Tao Liu, et al. share the same identity, and cross-driven, involving different identities. In speech-driven tasks, if posture information is needed, it is provided from the ground truth. Moreover, for our motion generator, unless specified otherwise, we use a consistent seed to generate all outcomes. To ensure a fair comparison, the output resolution for all algorithms is standardized to 256 \u00d7 256. Implementation Details In training the motion representation, our self-supervised training paradigm is primarily based on LIA [49]. Both the identity and motion encoders employ MLPs. Our training targets use the CLUB 1 for mutual information loss, in conjunction with AAM-Softmax [46]. This robust metric learning method utilizes angular distance and incorporates an increased number of negative samples to enhance the metric learning loss. In the second phase, the speech encoder and the Motion Generator utilize a four-layer and a two-layer conformer architecture, respectively, inspired by [11, 25]. This architecture integrates the conformer structure [16] and relative positional encoding [6]. A pre-trained HuBERT-large model [20] serves as the audio feature encoder, incorporating a downsampling layer to adjust the audio sampling rate from 50 Hz to 25 Hz to synchronize with the video frame rate. The training of the audio generation process spans 125 frames (5 seconds). Detailed implementation specifics and model structure are further elaborated in the supplementary materials. Evaluation Metric For objective metrics, we utilize Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [50], and Learned Perceptual Image Patch Similarity (LPIPS) [56] to quantify the similarity between generated and ground truth images. Cosine Similarity (CSIM) 2 measures facial similarity using a pretrained face recognition. Lip-sync Error Distance (LSE-D) [5] assesses the alignment between generated lip movements and the corresponding audio. Regarding subjective metrics, we employ the Mean Opinion Score (MOS) as our metric, with 10 participants rating our method based on Fidelity (F), Lip-sync (LS), Naturalness (N), and Motion Jittering (MJ). 4.2 Video Driven Methods Table 1: Quantitative comparisons with previous Face Reenactment methods. Method Self-Reenactment Cross-Reenactment PSNR\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 FOMM [41] 23.944 0.775 0.178 0.830 0.411 0.423 0.494 DPE [31] 27.239 0.861 0.151 0.912 0.445 0.410 0.567 MTIA [44] 28.435 0.870 0.122 0.929 0.393 0.456 0.448 Vid2Vid [48] 27.659 0.870 0.115 0.924 0.410 0.401 0.553 LIA [49] 25.854 0.831 0.137 0.916 0.421 0.406 0.522 FADM [54] 26.169 0.849 0.147 0.916 0.445 0.399 0.574 AniTalker 29.071 0.905 0.079 0.927 0.494 0.347 0.586 Quantitative Results We benchmarked our approach against several leading face reenactment methods [31, 41, 44, 48, 49, 54], all employing variations of self-supervised learning. The results are presented in Table 1. Due to the inherent challenges and the absence 1https://github.com/Linear95/CLUB/ 2https://github.com/dc3ea9f/vico_challenge_baseline of frame-by-frame ground truth in Cross-Reenactment (using another person\u2019s video for driving), the overall results tend to be lower compared to Self-Reenactment (using the current person\u2019s video). 
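For reference, the full-reference image metrics reported in these comparisons (PSNR, SSIM, LPIPS) can be computed per frame with standard packages; a minimal sketch, assuming `scikit-image` and the `lpips` package are available (not the evaluation scripts used in the paper):

```python
import torch
import lpips                                   # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")             # perceptual distance network

def image_metrics(pred, gt):
    """pred, gt: uint8 RGB arrays of shape [H, W, 3] for one frame (sketch)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    # LPIPS expects NCHW tensors scaled to [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return psnr, ssim, lp
```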
In Self-Reenactment, our algorithm achieved superior results for image structural metrics such as PSNR, SSIM, and LPIPS, validating the effectiveness of our motion representation in reconstructing images. Additionally, using the CSIM metric to measure face similarity, we observed that the similarity between the reconstructed face and the original portrait was the second highest, slightly behind MTIA [44], illustrating our model\u2019s identity preservation capabilities. For Cross-Reenactment, where the portrait serves as ground truth and considering cross-driven deformations, we focused on high-level metrics: SSIM and LPIPS. Our method demonstrated commendable performance. We also evaluated CSIM, which, unlike self-reenactment, showed a significant improvement, achieving the best results among these datasets. This highlights our algorithm\u2019s outstanding ability to disentangle identity and motion when driving with different individuals. Qualitative Results To highlight comparative results, we conducted a cross-reenactment scenario analysis with different algorithms, as presented in Figure 4. The objective was to deform the source portrait using the actions of the target. Each row in the figure represents a driving case. We observed that baseline methods exhibited varying degrees of identity leakage, where the identity information from the target contaminated the source portrait\u2019s identity. For example, as demonstrated in the fourth row, the slim facial structure of the driving portrait led to slimmer outcomes, which was unintended. However, our results consistently preserved the facial identity. Additionally, in terms of expression recovery, as evident in the first and third rows, our approach replicated the action of opening the eyes in the source portrait accurately, creating a natural set of eyes. In contrast, other algorithms either produced slight eye-opening or unnatural eyes. These qualitative findings highlight the advantage of decoupling ability. 4.3 Speech-driven Methods Table 2: Quantitative comparisons with previous speechdriven methods. The subjective evaluation is the mean option score (MOS) rated at five grades (1-5) in terms of Fidelity (F), Lip-Sync (LS), Naturalness (N), and Motion Jittering (MJ). Method Subjective Evaluation Objective Evaluation (Self) MOS-F\u2191 MOS-LS\u2191 MOS-N\u2191 MOS-MJ\u2191 SSIM\u2191 CSIM\u2191 Sync-D\u2193 MakeItTalk [62] 3.434 1.922 2.823 3.129 0.580 0.719 8.933 PC-AVS [61] 3.322 3.785 2.582 2.573 0.305 0.703 7.597 Audio2Head [47] 3.127 3.650 2.891 2.467 0.597 0.719 8.197 SadTalker [57] 3.772 3.963 2.733 3.883 0.504 0.723 7.967 AniTalker 3.832 3.978 3.832 3.976 0.671 0.725 8.298 We compare our method against existing state-of-the-art speechdriven approaches, including MakeItTalk [62], PC-AVS [61], Audio2Head [47], and SadTalker [57]. Quantitative results are presented in Table 2. From the subjective evaluation, our method consistently shows improvements in fidelity, lip-sync accuracy, naturalness, and a reduction in motion jittering, particularly noted for the enhanced naturalness of movements. These advancements can \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, Portrait (Source) FOMM Portrait (Target) DPE MTIA Vid2Vid LIA FADM AniTalker Figure 4: Cross-Reenactment Visualization: This task involves transferring actions from a target portrait to a source portrait to evaluate each algorithm\u2019s ability to separate motion and appearance. 
Starting from the third column, each column represents the output from a different algorithm. The results highlight our method\u2019s superior ability to preserve fidelity in both motion transfer and appearance retention. I /a\u026a/ State /\u02c8ste\u026at/ Believe / b\u026a\u02c8li\u02d0v / Climate /\u02c8kla\u026am\u0259t/ Self Driven Cross Driven Portrait MakeItTalk Audio Source: Audio2Head SadTalker AniTalker Figure 5: Visual comparison of the speech-driven method in selfand cross-driven scenarios. Phonetic sounds are highlighted in red. be attributed to our sophisticated universal motion representation. The objective evaluation involves driving the image with its audio. Compared to these methods, our approach shows significant improvements in SSIM and CSIM. However, our Sync-D metric shows a decrease, which we believe is due to two main reasons: (1) we do not use this metric as a supervisory signal, and (2) the Sync-D metric focuses on short-term alignment and does not adequately represent long-term information that is more crucial for the comprehensibility of generated videos. This is also corroborated by the qualitative results shown in Figure 5, highlighting our model\u2019s ability to produce convincingly synchronized lip movements to the given phonetic sounds. 4.4 Ablation Study Table 3: Quantitative comparisons of disentanglement methods and the HAL module in Self-Reenactment setting Method ML MID HAL PNSR \u2191 SSIM \u2191 CSIM \u2191 Baseline 25.854 0.849 0.916 Triplet [10] \u2713 26.455 0.860 0.911 AAM-Softmax [46] \u2713 27.922 0.894 0.923 AAM-Softmax + CLUB [4] \u2713 \u2713 28.728 0.900 0.924 AniTalker \u2713 \u2713 \u2713 29.071 0.905 0.927 4.4.1 Ablations on Disentanglement. To further validate the effectiveness of our disentanglement between motion and identity, we \f, 2024, Tao Liu, et al. conducted tests using various methods. Initially, to evaluate the performance of developing a reliable identity encoder using only Metric Learning (ML) without Mutual Information Disentanglement (MID), we assessed both Triplet loss [10] and AAM-Softmax [46]. Our results indicate that AAM-Softmax, an angle-based metric, achieves superior outcomes in our experiments. Additionally, by incorporating a mutual information decoupling module alongside AAM-Softmax, we noted further improvements in results. This enhancement encouraged the motion encoder to focus exclusively on motion-related information. These findings are comprehensively detailed in Table 3. Table 4: Different intermediate representations under the Self-Reenactment setting. \u2018Face Repr.\u2019 is short for face representation, and \u2018Dim.\u2019 represents the corresponding dimension. Method Face Repr. Dim. PSNR \u2191 SSIM \u2191 CSIM\u2191 EMOCA [7] 3DMM 50 20.911 0.670 0.768 PIPNet [22] Landmark 136 22.360 0.725 0.830 AniTalker Motion Latent 20 29.071 0.905 0.927 4.4.2 Ablation Study on Motion Representation. To compare our motion representation with commonly used landmark and 3D Morphable Model (3DMM) representations, we utilized 68 2D coordinates [22] (136 dimensions) for the landmark representation and expression parameters (50 dimensions) from EMOCA [7] for the 3DMM representation. In self-reenactment scenarios, all rendering methods were kept consistent, and different features were used to generate driven images. We observed several key points: (1) As shown in Table 4, our learned representation exhibits a more compact dimensionality, indicating a more succinct encoding of facial dynamics. 
(2) Our video comparisons show that, unlike these explicit representations, our implicit motion representation maintains frame stability without the need for additional smoothing. This can be attributed to our self-supervised training strategy of sampling adjacent frames, which effectively captures subtle dynamic changes while inherently ensuring temporal stability. 0 0.1 0.2 0.3 0.4 0.5 1 2 3 4 5 6 7 8 \u2026 \u2026 Image Encoder Layers Weights Figure 6: The weights of motion representation from different layers of the Image Encoder. 4.4.3 Ablations on HAL. To explore the significance of the Hierarchical Aggregation Layer (HAL) in dynamic representations, we conducted a series of ablation experiments focusing on the HAL layer. The results showed that models incorporating the HAL layer exhibited performance improvements, as detailed in the final row of Table 3. To analyze the impact and importance of different HAL layers on motion representation, we extracted and examined the softmax-normalized weights of each layer (a total of 8 layers in our experiment) in our Image Encoder as shown in Figure 6. It was found that the weights of the last layer contributed most significantly, likely because it represents global features that can effectively recover most motion information. Notably, the fourth layer\u2014situated in the middle of the image encoder feature map\u2014demonstrated a local maximum. Considering the receptive field size of this layer\u2019s patch is similar to the size of eyes and approximately half the size of the mouth, this finding suggests that the layer plays a potential role in simulating areas such as the mouth and eyes. These results not only confirm the pivotal role of the HAL layer in dynamic representation but also reveal the deep mechanisms of the model\u2019s ability to capture facial movements of different scales. Motion Manifold Turn Head Left Eye Closed Diversity Perturbation Speak with Homophones Figure 7: Motion Manifold of the continuous motion space. 5 DISCUSSION Discussion on Universal Motion Representation Our investigations into the model\u2019s ability to encode facial dynamics have highlighted a universal representation of human facial movements. As depicted in Figure 7, we observed that different individuals maintain consistent postures and expressions (such as turning the head left, speaking with homophones, and closing eyes) at each point within our motion space, demonstrating that our motion space forms a Motion Manifold. This manifold facilitates the representation of a continuous motion space, enabling the precise modeling of subtle facial feature variations and allowing for smooth transitions. Additionally, by integrating perturbations through diffusion noise, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, our model can simulate random, minute motion changes that align with fundamental movement patterns, thus enhancing the diversity of generated expressions. These findings demonstrate that our motion representation has a robust capacity to capture and represent a wide array of human facial movements. Discussion on Generalization Ability Although our model is trained on real human faces, it demonstrates the ability to generalize to other images with facial structures, such as cartoons, sculptures, reliefs, and game characters. This underscores the model\u2019s excellent scalability. 
We primarily attribute this capability to the complete decoupling of identity and motion, which ensures that the model grasps the intrinsic nature of facial movements, thereby enhancing its generalization capability. 6", + "additional_graph_info": { + "graph": [ + [ + "Tao Liu", + "Feilong Chen" + ], + [ + "Tao Liu", + "Shuai Fan" + ], + [ + "Tao Liu", + "Chenpeng Du" + ], + [ + "Feilong Chen", + "Shuang Xu" + ], + [ + "Feilong Chen", + "Fandong Meng" + ], + [ + "Shuai Fan", + "Zhuo Chen" + ], + [ + "Chenpeng Du", + "Kai Yu" + ], + [ + "Chenpeng Du", + "Yiwei Guo" + ] + ], + "node_feat": { + "Tao Liu": [ + { + "url": "http://arxiv.org/abs/2405.03121v1", + "title": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding", + "abstract": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.", + "authors": "Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "INTRODUCTION Integrating speech signals with single portraits [13, 18, 33, 45, 47, 59\u2013 61] to generate talking avatars has greatly enhanced both the entertainment and education sectors, providing innovative avenues for interactive digital experiences. While current methodologies [36, 47, 57, 61, 62] have made notable strides in achieving synchronicity between speech signals and lip movements, thus enhancing verbal communication, they often neglect the critical aspect of nonverbal communication. Nonverbal communication encompasses the transmission of information without the use of words, including but not limited to specific head movements, facial expressions, and blinking. Research [35] indicates that these nonverbal cues are pivotal in communicating. The primary challenge lies in the inadequacy of existing models to encapsulate the complex dynamics associated with facial motion representation. 
Existing approaches predominantly employ explicit structural representations such as blendshapes [3, 13, 34], landmark coefficients [18, 48, 60], or 3D Morphable Models (3DMM) [7, 14, 27] to animate faces. Designed initially for single-image processing, these methods offer a constrained approximation of facial dynamics, failing to capture the full breadth of human expressiveness. Recent advancements [11, 25] have introduced trainable facial motion encoders as alternatives to conventional explicit features, showing \u2217The Corresponding author. significant progress in capturing detailed facial movements. However, their deployment is often tailored for specific speakers [11] or limited to the mouth region [25], highlighting a gap in fine-grained motion representation that captures all varieties of facial dynamics. A universal and fine-grained motion representation that is applicable across different characters remains absent. Such a representation should fulfill three key criteria: capturing minute details, such as minor mouth movements, eye blinks, or slight facial muscle twitching; ensuring universality, making it applicable to any speaker while removing identity-specific information to maintain a clear separation between appearance and motion; and incorporating a wide range of nonverbal cues, such as expressions, head movements, and posture. In this paper, we introduce AniTalker. Our approach hinges on a universal motion encoder designed to grasp the intricacies of facial dynamics. By adopting the self-supervised learning paradigm, we mitigate the reliance on labeled data, enabling our motion encoder to learn robust motion representations. This learning process operates on dual levels: one entails understanding motion dynamics through the transformation of a source image into a target image, capturing a spectrum of facial movements, from subtle changes to significant alterations. Concurrently, the use of identity labels within the dataset facilitates the joint optimization of an identity recognition network in a self-supervised manner, further aiming to disentangle identity from motion information through mutual information minimization. This ensures that the motion representation retains minimal identity information, upholding its universal applicability. To authenticate the versatility of our motion space, we integrate a diffusion model and a variance adapter to enable varied generation and manipulation of facial animations. Thanks to our sophisticated representation and the diffusion motion generator, AniTalker is capable of producing diverse and controllable talking faces. In summary, our contributions are threefold: (1) We have developed universal facial motion encoders using a self-supervised approach that effectively captures facial dynamics across various individuals. These encoders feature an identity decoupling mechanism to minimize identity information in the motion data and prevent identity leakage. (2) Our framework includes a motion generation system that combines a diffusion-based motion generator with a variance adapter. This system allows for the production of diverse and controllable facial animations, showcasing the flexibility of our motion space. (3) Extensive evaluations affirm our framework\u2019s contribution to enhancing the realism and dynamism of digital human representations, while simultaneously preserving identity. 
2 RELATED WORKS Speech-driven Talking Face Generation refers to creating talking faces driven by speech, We categorize the models based on whether they are single-stage or two-stage. Single-stage models [36, 58, 61] generate images directly from speech, performing end-toend rendering. Due to the size constraints of rendering networks, this method struggles with processing longer videos, generally managing hundreds of milliseconds. The two-stage type [3, 11, 13, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 18, 25, 33, 60] decouples motion information from facial appearance and consists of a speech-to-motion generator followed by a motion-to-video rendering stage. As the first stage solely generates motion information and does not involve the texture information of the frames, it requires less model size and can handle long sequences, up to several seconds or even minutes. This two-stage method is known to reduce jitter [3, 11, 25], enhance speech-tomotion synchronization [11, 13, 33, 60], reduce the need for aligned audio-visual training data [3, 25], and enable the creation of longer videos [18]. Our framework also employs a two-stage structure but with a redesigned motion representation and generation process. Motion Representation serves as an essential bridge between the driving features and the final rendered output in creating talking faces. Current methods predominantly utilize explicit structural representations, such as blendshapes [3, 13, 32], 3D Morphable Models (3DMMs) [27], or landmarks [48, 60]. These formats offer high interpretability and facilitate the separation of facial actions from textures, making them favored as intermediary representations in facial generation tasks. However, due to the wide range of variability in real-world facial movements, they often fail to capture the subtle nuances of facial expressions fully, thus limiting the diversity and expressiveness of methods dependent on these representations. Our research is dedicated to expanding the spectrum of motion representation by developing a learned implicit representation that is not constrained by the limitations of explicit parametric models. Self-supervised motion transfer approaches [31, 41, 44, 48, 49, 51, 54] aim to reconstruct the target image from a source image by learning robust motion representations from a large amount of unlabeled data. This significantly reduces the need for labeled data. A key challenge in these methods is separating motion from identity information. They primarily warp the source image using predicted dense optical flow fields. This approach attempts to disentangle motion from identity by predicting distortions and transformations of the source image. However, information leakage occurs in practice, causing the target image to contain not just motion but also identity information. Building on this observation, we explicitly introduce identity modeling and employ the Mutual Information Neural Estimation (MINE) [1, 4] method to achieve a motion representation independent of identity. Diffusion Models [19] have demonstrated outstanding performance across various generative tasks [12, 17, 21, 39]. Recent research has utilized diffusion models as a rendering module [2, 11, 25, 29, 40, 43, 45]. Although diffusion models often produce higher-quality images, they require extensive model parameters and substantial training data to converge. 
To enhance the generation process, several approaches [18, 27, 28, 32, 55] employ diffusion models for generating motion representations. Diffusion models excel at addressing the one-to-many mapping challenge, which is crucial for speech-driven generation tasks. Given that the same audio clip can lead to different actions (e.g., lip movements and head poses) across different individuals or even within the same person, diffusion models provide a robust solution for managing this variability. Additionally, the training and inference phases of diffusion models, which systematically introduce and then remove noise, allow for the incorporation of noise during generation to foster diversity. We also use diffusion in conjunction with our motion representation to further explore diversity in talking face generation. 3 ANITALKER FRAMEWORK 3.1 Model Overview AniTalker contains two critical components: (1) Training a motion representation that can capture universal face dynamics, and (2) Based on the well-trained motion encoder from the previous step, the generation or manipulation of the motion representation using the user-controlled driving signal to produce the synthesised talking face video. 3.2 Universal Motion Representation Our approach utilizes a self-supervised image animation framework, employing two RGB images from a video clip: a source image \ud835\udc3c\ud835\udc60and a target image \ud835\udc3c\ud835\udc61(\ud835\udc3c\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73), to serve distinct functions: \ud835\udc3c\ud835\udc60provides identity information, whereas \ud835\udc3c\ud835\udc61delivers motion details. The primary aim is to reconstruct \ud835\udc3c\ud835\udc61. Due to the random selection of frames, occasionally adjacent frames are chosen, enabling the network to learn representations of subtle movements. As depicted in Figure 2 (a), both the source and target images originate from the same video clip. Through this self-supervised learning method, the target image\u2019s encoder is intended to exclusively capture motion information. By learning from frame-to-frame transfer, we can acquire a more universal representation of facial motion. This representation includes verbal actions such as lip movements, as well as nonverbal actions, including expressions, posture, and movement. To explicitly decouple motion and identity in the aforementioned processes, we strengthen the self-supervised learning approach by incorporating Metric Learning (ML) and Mutual Information Disentanglement (MID). Specifically: Metric Learning. Drawing inspiration from face recognition [8, 46] and speaker identification [9], metric learning facilitates the generation of robust identity information. This technique employs a strategy involving pairs of positive and negative samples, aiming to minimize the distance between similar samples and maximize it between dissimilar ones, thereby enhancing the network\u2019s ability to discriminate between different identities. This process can also proceed in a self-supervised fashion, with each iteration randomly selecting distinct identities from the dataset. Specifically, the approach establishes an anchor (\ud835\udc4e) and selects a positive sample (\ud835\udc5d) and a negative sample (\ud835\udc5b)\u2014corresponding to faces of different identities\u2014with the goal of reducing the distance (\ud835\udc51) between the anchor and the positive sample while increasing the distance between the anchor and the negative samples. 
This optimization, depicted in Figure 2 (b), involves randomly selecting a different identity from a list of candidates not belonging to the current person as the negative sample. The optimization goal for this process is as follows: L\ud835\udc40\ud835\udc3f= max (0, \ud835\udc51(\ud835\udc4e, \ud835\udc5d) \u2212\ud835\udc51(\ud835\udc4e,\ud835\udc5b) + margin) Here, the margin is a positive threshold introduced to further separate the positive and negative samples, thus improving the model\u2019s ability to distinguish between different identities. Mutual Information Disentanglement. Although metric learning effectively constrains the identity encoder, focusing solely on this encoder does not adequately minimize the identity information \f, 2024, Tao Liu, et al. Motion Encoder t t HAL Identity Encoder Motion Encoder s s HAL Identity Encoder Pull Push Target Image Source Image AvgPool \ud835\udc5a! \u2026 \u2026 \u2026 Weighted Sum Target Image Wrap Layer Feature Maps (d) HAL Image Renderer o t s Positive Speech Encoder Image Renderer \u2026 \u2026 Speech Variance Adapter Diffusion Motion Generator Motion Encoder ( Conformer \u00d7 N ) ( Conformer \u00d7 N ) Other Images Motion Latent Motion Latent Identity Latent Noisy Latent \ud835\udc74!~\ud835\udc41(0,1) Motion Encoder Image Encoder \ud835\udc5a\" \ud835\udc5a# \ud835\udc5a (a) Details of Training Universal Motion Representation Flow Fields (c) MID (b) ML MLP MLP \u2026 Candidates (e) Motion Generator \u2026 \u2026 Positional Embedding Audio-driven Video-driven Frozen Layers Image Encoder \u2026 Denoising Iteration Anchor Negative (\ud835\udc74) \u2026 Random Pick Figure 2: The AniTalker framework comprises two main components: learning a universal motion representation and then generating and manipulating this representation through a sequence model. Specifically, the first part aims to learn a robust motion representation by employing metric learning (ML), mutual information disentanglement (MID), and Hierarchical Aggregation Layer (HAL). Subsequently, this motion representation can be used for further generation and manipulation. within the motion encoder. To tackle this issue, we utilize Mutual Information (MI), a statistical measure that evaluates the dependency between the outputs of the identity and motion encoders. Given the challenge of directly computing MI between two variables, we adopt a parametric method to approximate MI estimation among random variables. Specifically, we use CLUB [4], which estimates an upper bound for MI. Assuming the output of the identity encoder is the identity latent \ud835\udc67\ud835\udc56\ud835\udc51and the motion encoder\u2019s output is the motion latent \ud835\udc67\ud835\udc5a, our goal is to optimize the mutual information \ud835\udc3c(E(\ud835\udc67\ud835\udc56\ud835\udc51); E(\ud835\udc67\ud835\udc5a)), where E denotes the learnable Multi-Layer Perceptron (MLP) within CLUB. This optimization ensures that the motion encoder primarily captures motion, thereby preventing identity information from contaminating the motion space. This strategy is depicted in Figure 2 (c). In summary, by leveraging Metric Learning and Mutual Information Disentanglement, we enhance the model\u2019s capacity to accurately differentiate between identity and motion while reducing reliance on labeled data. Hierarchical Aggregation Layer (HAL). To enhance the motion encoder\u2019s capability to understand motion variance across different scales, we introduce the Hierarchical Aggregation Layer (HAL). 
This layer aims to integrate information from various stages of the image encoder, each providing different receptive fields [24]. HAL processes inputs from all intermediate layers of the image encoder and passes them through an Average Pooling (AvgPool) layer to capture scale-specific information. A Weighted Sum [53] layer follows, assigning learnable weights to effectively merge information from these diverse layers. This soft fusion approach enables the motion encoder to capture and depict movements across a broad range of scales. Such a strategy allows our representations to adapt to faces of different sizes without the need for prior face alignment or normalization. Specifically, the features following the AvgPool layer are denoted as [\ud835\udc5a1,\ud835\udc5a2, . . . ,\ud835\udc5a\ud835\udc5b], representing the set of averaged features, with [\ud835\udc641,\ud835\udc642, . . . ,\ud835\udc64\ud835\udc5b] as the corresponding set of weights, where \ud835\udc5bsymbolizes the number of intermediate layers in the image encoder. These weights undergo normalization through the softmax function to guarantee a cumulative weight of 1. The equation for the weighted sum of tensors, indicating the layer\u2019s output, is formulated as m = \u00cd\ud835\udc5b \ud835\udc56=1 \ud835\udc64\ud835\udc56\u00b7 \ud835\udc5a\ud835\udc56. The softmax normalization process is mathematically articulated as \ud835\udc64\ud835\udc56= \ud835\udc52\ud835\udc4a\ud835\udc56 \u00cd\ud835\udc5b \ud835\udc57=1 \ud835\udc52\ud835\udc4a\ud835\udc57, ensuring the proportional distribution of weights across the various layers. Subsequently, m is fed into the motion encoder for further encoding. Learning Objective. The main goal of learning is to reconstruct the target image by inputting two images: the source and the target within the current identity index. Several loss functions are utilized during the training process, including reconstruction loss \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b, perceptual loss \ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d, adversarial loss \ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63, mutual information loss \ud835\udc3f\ud835\udc40\ud835\udc3c, and identity metric learning loss \ud835\udc3f\ud835\udc40\ud835\udc3f. The total loss is formulated as follows: \ud835\udc3f\ud835\udc5a\ud835\udc5c\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b= \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b+ \ud835\udf061\ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d+ \ud835\udf062\ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63+ \ud835\udf063\ud835\udc3f\ud835\udc40\ud835\udc3c+ \ud835\udf064\ud835\udc3f\ud835\udc40\ud835\udc3f \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 3.3 Motion Generation Once the motion encoder and image renderer are trained, at the second stage, we can freeze these models. The motion encoder is used to generate images, then video-driven or speech-driven methods are employed to produce motion, and finally, the image renderer carries out the final frame-by-frame rendering. 3.3.1 Video-Driven Pipeline. Video driving, also referred to face reenactment, leverages a driven speaker\u2019s video sequence I\ud835\udc51= [\ud835\udc3c\ud835\udc51 1 , \ud835\udc3c\ud835\udc51 2 , . . . 
, \ud835\udc3c\ud835\udc51 \ud835\udc47] to animate a source image \ud835\udc3c\ud835\udc60, resulting in a video that accurately replicates the driven poses and facial expressions. In this process, the video sequence I\ud835\udc51is input into the motion encoder, previously trained in the first phase, to extract the motion latent. This latent, along with \ud835\udc3c\ud835\udc60, is then directly fed, frame by frame, into the image renderer for rendering. No additional training is required. The detailed inference process, where the orange lines represent the data flow during video-driven inference, is depicted in Figure 2 (e). 3.3.2 Speech-Driven Pipeline. Unlike video-driven methods that use images, the speech-driven approach generates videos consistent with the speech signal or other control signals to animate a source image \ud835\udc3c\ud835\udc60. Specifically, we utilize a combination of diffusion and variance adapters: the former learns a better distribution of motion data, while the latter mainly introduces attribute manipulation. Diffusion Models. For generating motion latent sequences, we utilize a multi-layer Conformer [16]. During training, we incorporate the training process of diffusion, which includes both adding noise and denoising steps. The noising process gradually converts clean Motion Latent M into Gaussian noise M\ud835\udc47, where\ud835\udc47represents the number of total denoising steps in the diffusion process. Conversely, the denoising process systematically eliminates noise from the Gaussian noise, resulting in clean Motion Latents. This iterative process better captures the distribution of motion, enhancing the diversity of the generated results. During the training phase, we adhere to the methodology described in [19] for the DDPM\u2019s training stage, applying the specified simplified loss objective, as illustrated in Equation 1, where \ud835\udc61represents a specific time step and C represents the control signal, which refers to either speech or speech perturbed by a Variance Adapter (to be discussed in the following section). For inference, considering the numerous iteration steps required by diffusion, we select the Denoising Diffusion Implicit Model (DDIM) [42]\u2014an alternate non-Markovian noising process\u2014as the solver to quicken the sampling process. \ud835\udc3fdiff = E\ud835\udc61,M,\ud835\udf16 \u0002 \u2225\ud835\udf16\u2212\u02c6 \ud835\udf16\ud835\udc61(M\ud835\udc61,\ud835\udc61, C)\u22252\u0003 (1) Variance Adapter. The Variance Adapter [38] is a residual branch connected to audio features, allowing optional control over the speech signal. Originally proposed to mitigate the one-to-many problem in Text-to-Speech (TTS) tasks, its architecture includes a predictor and an encoder that use speech signals to predict attribute representations. A residual connection is then applied between the encoder output and the speech signals. During the Training Stage, the encoder processes speech features in collaboration with the predictor to minimize the L2 loss against a ground truth control signal. This includes incorporating an attribute extractor for targeting specific attributes, such as employing a pose extractor (yaw, pitch, roll) to control head posture during the audio generation process. In Predictor \u2295 L2 Loss Encoder Speech Feature Attribute Extractor (a) Training Stage (b) Inference Stage Predictor \u2295 Speech Feature Attribute Extractor or Encoder Audio-driven only w. 
Attribute Control ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) \u2026 \u2026 GT images Any images Attribute Latent \u00d7 N Figure 3: Variance Adapter Block. Each block models a single attribute and can be iterated multiple times, where \ud835\udc41represents the number of attributes. the Inference Stage, the trained encoder and predictor can flexibly synthesize speech with controlled attributes or operate based on speech-driven inputs. The detailed structure is depicted in Figure 3. Our approach extends previous works [11, 18] by incorporating LSTM [15] for improved temporal modeling and introducing additional cues such as head position and head scale, which we refer to as camera parameters. The architecture is detailed in Figure 3. Learning Objective. The total loss comprises diffusion loss and variance adapter loss, where \ud835\udc3erepresents the number of attributes: \ud835\udc3fgen = \ud835\udc3fdiff + \ud835\udf06 \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc3fvar\ud835\udc58 4 EXPERIMENTS 4.1 Experimental Settings We utilizes three datasets: VoxCeleb [30], HDTF [59], and VFHQ [52]. Due to different processing approaches across these datasets, we re-downloaded the original videos and processed them in a unified way. Specifically, our processing pipeline included filtering out blurred faces and faces at extreme angles. It is noted that we did not align faces but instead used a fixed detection box for each video clip, allowing for natural head movement. This effort resulted in a dataset containing 4,242 unique speaker IDs, encompassing 17,108 video clips with a cumulative duration of 55 hours. Details of this filtering process are provided in the supplementary material. Each video in these datasets carries a unique facial ID tag, which we used as labels for training our identity encoder. We also reserved some videos from HDTF for testing, following the test split in [58]. Scenario Setting We evaluate methods under two scenarios: video-driven and speech-driven, both operating on a one-shot basis with only a single portrait required. The primary distinction lies in the source of animation: image sequences for video-driven and audio signals for speech-driven scenarios. The detailed data flow for inference is illustrated in Figure 2. Additionally, each scenario is divided into two types: self-driven, where the source and target \f, 2024, Tao Liu, et al. share the same identity, and cross-driven, involving different identities. In speech-driven tasks, if posture information is needed, it is provided from the ground truth. Moreover, for our motion generator, unless specified otherwise, we use a consistent seed to generate all outcomes. To ensure a fair comparison, the output resolution for all algorithms is standardized to 256 \u00d7 256. Implementation Details In training the motion representation, our self-supervised training paradigm is primarily based on LIA [49]. Both the identity and motion encoders employ MLPs. Our training targets use the CLUB 1 for mutual information loss, in conjunction with AAM-Softmax [46]. This robust metric learning method utilizes angular distance and incorporates an increased number of negative samples to enhance the metric learning loss. In the second phase, the speech encoder and the Motion Generator utilize a four-layer and a two-layer conformer architecture, respectively, inspired by [11, 25]. This architecture integrates the conformer structure [16] and relative positional encoding [6]. 
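Returning briefly to the diffusion objective before the remaining implementation details: the sketch below illustrates one training step of the simplified noise-prediction loss in Equation 1 on a batch of motion-latent sequences. The linear noise schedule, the denoiser argument (standing in for the conformer-based motion generator), and the conditioning tensor C (speech features, optionally perturbed by the variance adapter) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

T_STEPS = 1000
betas = torch.linspace(1e-4, 0.02, T_STEPS)            # assumed linear beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(denoiser, m0, cond):
    """L_diff = E_{t, M, eps} || eps - eps_hat(M_t, t, C) ||^2  (Equation 1).

    m0:   clean motion latents, shape (batch, frames, latent_dim)
    cond: control signal C, e.g. speech features aligned to the frames
    """
    b = m0.size(0)
    t = torch.randint(0, T_STEPS, (b,), device=m0.device)        # random timestep per sample
    eps = torch.randn_like(m0)                                    # Gaussian noise
    a_bar = alphas_cumprod.to(m0.device)[t].view(b, 1, 1)
    m_t = a_bar.sqrt() * m0 + (1.0 - a_bar).sqrt() * eps          # forward (noising) process
    eps_hat = denoiser(m_t, t, cond)                              # network predicts the added noise
    return F.mse_loss(eps_hat, eps)
```

At inference the paper reports switching to DDIM to shorten sampling; the same noise-prediction network would be reused there.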
A pre-trained HuBERT-large model [20] serves as the audio feature encoder, incorporating a downsampling layer to adjust the audio sampling rate from 50 Hz to 25 Hz to synchronize with the video frame rate. The training of the audio generation process spans 125 frames (5 seconds). Detailed implementation specifics and model structure are further elaborated in the supplementary materials. Evaluation Metric For objective metrics, we utilize Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [50], and Learned Perceptual Image Patch Similarity (LPIPS) [56] to quantify the similarity between generated and ground truth images. Cosine Similarity (CSIM) 2 measures facial similarity using a pretrained face recognition. Lip-sync Error Distance (LSE-D) [5] assesses the alignment between generated lip movements and the corresponding audio. Regarding subjective metrics, we employ the Mean Opinion Score (MOS) as our metric, with 10 participants rating our method based on Fidelity (F), Lip-sync (LS), Naturalness (N), and Motion Jittering (MJ). 4.2 Video Driven Methods Table 1: Quantitative comparisons with previous Face Reenactment methods. Method Self-Reenactment Cross-Reenactment PSNR\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 FOMM [41] 23.944 0.775 0.178 0.830 0.411 0.423 0.494 DPE [31] 27.239 0.861 0.151 0.912 0.445 0.410 0.567 MTIA [44] 28.435 0.870 0.122 0.929 0.393 0.456 0.448 Vid2Vid [48] 27.659 0.870 0.115 0.924 0.410 0.401 0.553 LIA [49] 25.854 0.831 0.137 0.916 0.421 0.406 0.522 FADM [54] 26.169 0.849 0.147 0.916 0.445 0.399 0.574 AniTalker 29.071 0.905 0.079 0.927 0.494 0.347 0.586 Quantitative Results We benchmarked our approach against several leading face reenactment methods [31, 41, 44, 48, 49, 54], all employing variations of self-supervised learning. The results are presented in Table 1. Due to the inherent challenges and the absence 1https://github.com/Linear95/CLUB/ 2https://github.com/dc3ea9f/vico_challenge_baseline of frame-by-frame ground truth in Cross-Reenactment (using another person\u2019s video for driving), the overall results tend to be lower compared to Self-Reenactment (using the current person\u2019s video). In Self-Reenactment, our algorithm achieved superior results for image structural metrics such as PSNR, SSIM, and LPIPS, validating the effectiveness of our motion representation in reconstructing images. Additionally, using the CSIM metric to measure face similarity, we observed that the similarity between the reconstructed face and the original portrait was the second highest, slightly behind MTIA [44], illustrating our model\u2019s identity preservation capabilities. For Cross-Reenactment, where the portrait serves as ground truth and considering cross-driven deformations, we focused on high-level metrics: SSIM and LPIPS. Our method demonstrated commendable performance. We also evaluated CSIM, which, unlike self-reenactment, showed a significant improvement, achieving the best results among these datasets. This highlights our algorithm\u2019s outstanding ability to disentangle identity and motion when driving with different individuals. Qualitative Results To highlight comparative results, we conducted a cross-reenactment scenario analysis with different algorithms, as presented in Figure 4. The objective was to deform the source portrait using the actions of the target. Each row in the figure represents a driving case. 
We observed that baseline methods exhibited varying degrees of identity leakage, where the identity information from the target contaminated the source portrait\u2019s identity. For example, as demonstrated in the fourth row, the slim facial structure of the driving portrait led to slimmer outcomes, which was unintended. However, our results consistently preserved the facial identity. Additionally, in terms of expression recovery, as evident in the first and third rows, our approach replicated the action of opening the eyes in the source portrait accurately, creating a natural set of eyes. In contrast, other algorithms either produced slight eye-opening or unnatural eyes. These qualitative findings highlight the advantage of decoupling ability. 4.3 Speech-driven Methods Table 2: Quantitative comparisons with previous speechdriven methods. The subjective evaluation is the mean option score (MOS) rated at five grades (1-5) in terms of Fidelity (F), Lip-Sync (LS), Naturalness (N), and Motion Jittering (MJ). Method Subjective Evaluation Objective Evaluation (Self) MOS-F\u2191 MOS-LS\u2191 MOS-N\u2191 MOS-MJ\u2191 SSIM\u2191 CSIM\u2191 Sync-D\u2193 MakeItTalk [62] 3.434 1.922 2.823 3.129 0.580 0.719 8.933 PC-AVS [61] 3.322 3.785 2.582 2.573 0.305 0.703 7.597 Audio2Head [47] 3.127 3.650 2.891 2.467 0.597 0.719 8.197 SadTalker [57] 3.772 3.963 2.733 3.883 0.504 0.723 7.967 AniTalker 3.832 3.978 3.832 3.976 0.671 0.725 8.298 We compare our method against existing state-of-the-art speechdriven approaches, including MakeItTalk [62], PC-AVS [61], Audio2Head [47], and SadTalker [57]. Quantitative results are presented in Table 2. From the subjective evaluation, our method consistently shows improvements in fidelity, lip-sync accuracy, naturalness, and a reduction in motion jittering, particularly noted for the enhanced naturalness of movements. These advancements can \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, Portrait (Source) FOMM Portrait (Target) DPE MTIA Vid2Vid LIA FADM AniTalker Figure 4: Cross-Reenactment Visualization: This task involves transferring actions from a target portrait to a source portrait to evaluate each algorithm\u2019s ability to separate motion and appearance. Starting from the third column, each column represents the output from a different algorithm. The results highlight our method\u2019s superior ability to preserve fidelity in both motion transfer and appearance retention. I /a\u026a/ State /\u02c8ste\u026at/ Believe / b\u026a\u02c8li\u02d0v / Climate /\u02c8kla\u026am\u0259t/ Self Driven Cross Driven Portrait MakeItTalk Audio Source: Audio2Head SadTalker AniTalker Figure 5: Visual comparison of the speech-driven method in selfand cross-driven scenarios. Phonetic sounds are highlighted in red. be attributed to our sophisticated universal motion representation. The objective evaluation involves driving the image with its audio. Compared to these methods, our approach shows significant improvements in SSIM and CSIM. However, our Sync-D metric shows a decrease, which we believe is due to two main reasons: (1) we do not use this metric as a supervisory signal, and (2) the Sync-D metric focuses on short-term alignment and does not adequately represent long-term information that is more crucial for the comprehensibility of generated videos. 
This is also corroborated by the qualitative results shown in Figure 5, highlighting our model\u2019s ability to produce convincingly synchronized lip movements to the given phonetic sounds. 4.4 Ablation Study Table 3: Quantitative comparisons of disentanglement methods and the HAL module in Self-Reenactment setting Method ML MID HAL PNSR \u2191 SSIM \u2191 CSIM \u2191 Baseline 25.854 0.849 0.916 Triplet [10] \u2713 26.455 0.860 0.911 AAM-Softmax [46] \u2713 27.922 0.894 0.923 AAM-Softmax + CLUB [4] \u2713 \u2713 28.728 0.900 0.924 AniTalker \u2713 \u2713 \u2713 29.071 0.905 0.927 4.4.1 Ablations on Disentanglement. To further validate the effectiveness of our disentanglement between motion and identity, we \f, 2024, Tao Liu, et al. conducted tests using various methods. Initially, to evaluate the performance of developing a reliable identity encoder using only Metric Learning (ML) without Mutual Information Disentanglement (MID), we assessed both Triplet loss [10] and AAM-Softmax [46]. Our results indicate that AAM-Softmax, an angle-based metric, achieves superior outcomes in our experiments. Additionally, by incorporating a mutual information decoupling module alongside AAM-Softmax, we noted further improvements in results. This enhancement encouraged the motion encoder to focus exclusively on motion-related information. These findings are comprehensively detailed in Table 3. Table 4: Different intermediate representations under the Self-Reenactment setting. \u2018Face Repr.\u2019 is short for face representation, and \u2018Dim.\u2019 represents the corresponding dimension. Method Face Repr. Dim. PSNR \u2191 SSIM \u2191 CSIM\u2191 EMOCA [7] 3DMM 50 20.911 0.670 0.768 PIPNet [22] Landmark 136 22.360 0.725 0.830 AniTalker Motion Latent 20 29.071 0.905 0.927 4.4.2 Ablation Study on Motion Representation. To compare our motion representation with commonly used landmark and 3D Morphable Model (3DMM) representations, we utilized 68 2D coordinates [22] (136 dimensions) for the landmark representation and expression parameters (50 dimensions) from EMOCA [7] for the 3DMM representation. In self-reenactment scenarios, all rendering methods were kept consistent, and different features were used to generate driven images. We observed several key points: (1) As shown in Table 4, our learned representation exhibits a more compact dimensionality, indicating a more succinct encoding of facial dynamics. (2) Our video comparisons show that, unlike these explicit representations, our implicit motion representation maintains frame stability without the need for additional smoothing. This can be attributed to our self-supervised training strategy of sampling adjacent frames, which effectively captures subtle dynamic changes while inherently ensuring temporal stability. 0 0.1 0.2 0.3 0.4 0.5 1 2 3 4 5 6 7 8 \u2026 \u2026 Image Encoder Layers Weights Figure 6: The weights of motion representation from different layers of the Image Encoder. 4.4.3 Ablations on HAL. To explore the significance of the Hierarchical Aggregation Layer (HAL) in dynamic representations, we conducted a series of ablation experiments focusing on the HAL layer. The results showed that models incorporating the HAL layer exhibited performance improvements, as detailed in the final row of Table 3. To analyze the impact and importance of different HAL layers on motion representation, we extracted and examined the softmax-normalized weights of each layer (a total of 8 layers in our experiment) in our Image Encoder as shown in Figure 6. 
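Before turning to what these normalized weights reveal, the following is a minimal sketch of the softmax-weighted fusion that HAL performs (m = Σ_i w_i · m_i with w_i obtained by softmax over learnable scalars), together with a helper for reading out the per-layer weights for this kind of analysis. The layer count and the assumption that pooled features share a channel dimension are illustrative, not taken from the released implementation.

```python
import torch
import torch.nn as nn

class HierarchicalAggregation(nn.Module):
    """Weighted sum of average-pooled features from n intermediate encoder layers."""
    def __init__(self, n_layers=8):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(n_layers))   # learnable W_i before softmax

    def forward(self, feats):
        # feats: list of n tensors, each (batch, channels, h, w); this sketch assumes the
        # stages have already been projected to a shared channel dimension.
        pooled = [f.mean(dim=(2, 3)) for f in feats]          # AvgPool -> (batch, channels)
        m = torch.stack(pooled, dim=0)                        # (n_layers, batch, channels)
        w = torch.softmax(self.scores, dim=0)                 # w_i = e^{W_i} / sum_j e^{W_j}
        return (w.view(-1, 1, 1) * m).sum(dim=0)              # m = sum_i w_i * m_i

    def layer_weights(self):
        # Normalized weights, e.g. for the per-layer inspection reported around Figure 6.
        return torch.softmax(self.scores, dim=0).detach()
```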
It was found that the weights of the last layer contributed most significantly, likely because it represents global features that can effectively recover most motion information. Notably, the fourth layer\u2014situated in the middle of the image encoder feature map\u2014demonstrated a local maximum. Considering the receptive field size of this layer\u2019s patch is similar to the size of eyes and approximately half the size of the mouth, this finding suggests that the layer plays a potential role in simulating areas such as the mouth and eyes. These results not only confirm the pivotal role of the HAL layer in dynamic representation but also reveal the deep mechanisms of the model\u2019s ability to capture facial movements of different scales. Motion Manifold Turn Head Left Eye Closed Diversity Perturbation Speak with Homophones Figure 7: Motion Manifold of the continuous motion space. 5 DISCUSSION Discussion on Universal Motion Representation Our investigations into the model\u2019s ability to encode facial dynamics have highlighted a universal representation of human facial movements. As depicted in Figure 7, we observed that different individuals maintain consistent postures and expressions (such as turning the head left, speaking with homophones, and closing eyes) at each point within our motion space, demonstrating that our motion space forms a Motion Manifold. This manifold facilitates the representation of a continuous motion space, enabling the precise modeling of subtle facial feature variations and allowing for smooth transitions. Additionally, by integrating perturbations through diffusion noise, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, our model can simulate random, minute motion changes that align with fundamental movement patterns, thus enhancing the diversity of generated expressions. These findings demonstrate that our motion representation has a robust capacity to capture and represent a wide array of human facial movements. Discussion on Generalization Ability Although our model is trained on real human faces, it demonstrates the ability to generalize to other images with facial structures, such as cartoons, sculptures, reliefs, and game characters. This underscores the model\u2019s excellent scalability. We primarily attribute this capability to the complete decoupling of identity and motion, which ensures that the model grasps the intrinsic nature of facial movements, thereby enhancing its generalization capability. 6" + } + ], + "Feilong Chen": [ + { + "url": "http://arxiv.org/abs/2305.04160v3", + "title": "X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages", + "abstract": "Large language models (LLMs) have demonstrated remarkable language abilities.\nGPT-4, based on advanced LLMs, exhibits extraordinary multimodal capabilities\nbeyond previous visual language models. We attribute this to the use of more\nadvanced LLMs compared with previous multimodal models. Unfortunately, the\nmodel architecture and training strategies of GPT-4 are unknown. To endow LLMs\nwith multimodal capabilities, we propose X-LLM, which converts Multi-modalities\n(images, speech, videos) into foreign languages using X2L interfaces and inputs\nthem into a large Language model (ChatGLM). 
Specifically, X-LLM aligns multiple\nfrozen single-modal encoders and a frozen LLM using X2L interfaces, where ``X''\ndenotes multi-modalities such as image, speech, and videos, and ``L'' denotes\nlanguages. X-LLM's training consists of three stages: (1) Converting Multimodal\nInformation: The first stage trains each X2L interface to align with its\nrespective single-modal encoder separately to convert multimodal information\ninto languages. (2) Aligning X2L representations with the LLM: single-modal\nencoders are aligned with the LLM through X2L interfaces independently. (3)\nIntegrating multiple modalities: all single-modal encoders are aligned with the\nLLM through X2L interfaces to integrate multimodal capabilities into the LLM.\nOur experiments show that X-LLM demonstrates impressive multimodel chat\nabilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen\nimages/instructions, and yields a 84.5\\% relative score compared with GPT-4 on\na synthetic multimodal instruction-following dataset. And we also conduct\nquantitative tests on using LLM for ASR and multimodal ASR, hoping to promote\nthe era of LLM-based speech recognition.", + "authors": "Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, Bo Xu", + "published": "2023-05-07", + "updated": "2023-05-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "eess.AS" + ], + "main_content": "Introduction In recent years, multimodal language models [31, 29, 24] have undergone rapid development. These models possess excellent abilities in multimodal understanding and response generation and can perform well in tasks such as image captioning [50], visual question answering [1], visual dialog [9], video captioning [18], and spoken dialogue [52]. It is worth noting that a large-scale multimodal model, GPT-4 [37], has recently been introduced, demonstrating many impressive capabilities. For example, GPT-4 can follow various instructions to complete language tasks, and can also answer various questions about images. For instance, GPT-4 can give detailed and accurate descriptions of images, understand and explain the humor in visual content, and even provide correct website-building code based on handwritten code images. Although GPT-4 demonstrates remarkable capabilities, Preprint. Work in progress arXiv:2305.04160v3 [cs.CL] 22 May 2023 \funfortunately, we do not know the details of its model structure and training methods. We believe that this is due to the fact that GPT-4 uses a more advanced and larger language model compared to previous multimodal models. With the support of powerful language abilities, GPT-4 can express understood visual content in the form of language. To validate this hypothesis and endow LLM with multimodal capabilities, we propose X-LLM. It converts multimodal information, such as images, speech, and videos, into foreign languages using X2L interfaces, and then feeds converted multimodal information into a large language model (ChatGLM). Speci\ufb01cally, X-LLM aligns multiple frozen single-modal encoders and a frozen LLM using X2L interfaces. X2L interfaces consist of an image I2L interface, a video V2L interface, and a speech S2L interface, where \u201cX\u201d denotes the multi-modalities and \u201cL\u201d denotes languages. The image interface and video interface have the same structure, and we adopt the Q-Former from BLIP-2 [29] to convert visual information into foreign language representations. 
For ef\ufb01ciency, the video interface reuses the parameters of the image interface with image-text data but is further trained with video-text data to align the encoded video features with the LLM. The speech interface utilizes the continuous integrate-and-\ufb01re (CIF) mechanism [12, 23] and transformer structure to convert speech utterance into foreign language representations. The training of X-LLM consists of three stages. (1) Converting Multimodal Information: the \ufb01rst stage trains each X2L interface to align with its respective single-modal encoder separately to convert multimodal information into languages. (2) Aligning X2L representations with the LLM: single-modal encoders are aligned with the LLM through X2L interfaces. (3) Integrating multiple modalities: all single-modal encoders are aligned with the LLM through X2L interfaces to integrate multimodal capabilities into the LLM. In the \ufb01rst two stages, we use image caption data, video caption data and automatic speech recognition (ASR) data to train the X2L interfaces. To better equip LLM with multimodal capabilities, we construct a multimodal instruction dataset (\u223c10K) based on open-source datasets to further improve the proposed model. Although without the third training stage, X-LLM already has the ability to accomplish multimodal tasks such as visual spoken question answering, we \ufb01nd that with only rare additional multimodal instruction data, LLM can further unify the capabilities of multiple modalities. In our experiments, we \ufb01nd that X-LLM has abilities similar to those of GPT-4. For example, X-LLM can generate complex image descriptions and explain unusual visual phenomena. In our research, when using input images, X-LLM can recognize the location in the image, such as identifying the Forbidden City and providing relevant information about it, observe the food in the image and provide detailed recipes; create stories for pictures, and come up with textual meanings for logos. We also \ufb01nd that X-LLM\u2019s image-related abilities can be extended to videos, such as introducing the content of a video, retrieving movie names, or art-related facts directly from the video. Moreover, X-LLM can answer questions based on spoken questions and can combine images or videos to answer spoken questions. These abilities are previously not present in previous multimodal models but are now made possible by the powerful language modeling capabilities of X-LLM. Our contributions are summarised as follows: \u2022 Multimodal LLM framework. We propose X-LLM, a Multimodal LLM which injects multiple modalities (such as images, speech, and videos) into LLM through X2L interfaces, giving LLM the ability to process multimodal data. This framework has good scalability and can be extended to more modalities. \u201cX\u201d in X2L interfaces can be any modality. We compare our X-LLM with LLaVA and MiniGPT-4 in terms of the ability to handle visual inputs with Chinese elements, and \ufb01nd that X-LLM outperformed them signi\ufb01cantly. We also conduct quantitative tests on using LLM for ASR and multimodal ASR, hoping to promote the era of LLM-based speech recognition. \u2022 Transferability of parameters in English image-text alignment modules. We \ufb01nd that the Qformer module trained on English image-text data can be transferred to other languages. In our experiments, we have successfully transferred the model parameters from Indo-European English to Sino-Tibetan Chinese. 
The transferability of language greatly increases the possibility of using English image-text data and its trained model parameters, and improves the ef\ufb01ciency of training multimodal LLMs in other languages. \u2022 Open-source. We construct a concise and high-quality Chinese multimodal instruction dataset. By training X-LLM on this multimodal instruction data, X-LLM can better integrate the multimodal capabilities acquired through multiple encoders and corresponding X2L interfaces. And We release the following assets to the public: the generated multimodal instruction data, the codebase for model training, the model checkpoint, and a multimodal chat demo. 2 \fLLM Image Encoder Video Encoder Speech Encoder Q-Former I-Adapter\u00a0 Q-Former V-Adapter\u00a0 CIF S-Adapter\u00a0 X2L interfaces \u8fd9\u662f\u4ec0\u4e48\u5730\u65b9\uff1f (Where is this?) \u4ecb\u7ecd\u4e00\u4e0b\u8fd9\u6bb5\u89c6\u9891\u7684\u5185\u5bb9 (Describe this video) \u8bc6\u522b\u8bed\u97f3\u4e2d\u7684\u5185\u5bb9 (Recognize text in this speech) \u8fd9\u662f\u6545\u5bab\uff0c\u53c8\u79f0\u7d2b\u7981\u57ce\uff0c\u662f\u4e2d\u56fd\u660e\u6e05 \u4e24\u671d\u7684\u7687\u5bab\uff0c\u4f4d\u4e8e\u5317\u4eac\u5e02\u4e2d\u5fc3\uff0c\u5360\u5730 \u9762\u79ef\u7ea672\u4e07\u5e73\u65b9\u7c73\uff0c\u662f\u4e16\u754c\u4e0a\u6700\u5927\u3001 \u4fdd\u5b58\u6700\u4e3a\u5b8c\u6574\u7684\u53e4\u4ee3\u5bab\u6bbf\u5efa\u7b51\u7fa4\u4e4b \u4e00... (This is the Palace Museum, also known as the Forbidden City, which served as the imperial palace during the Ming and Qing dynasties in China. Located in the center of Beijing, it covers an area of about 720,000 square meters and is one of the largest and best-preserved ancient palace complexes in the world...) \u4e00\u7fa4\u72ee\u5b50\u5728\u8349\u539f\u4e0a\u6652\u592a\u9633... (A group of lions sunbathing on the savannah ...) \u63a8\u8350\u4e00\u4e0b\u5317\u4eac\u7684\u65c5\u6e38\u666f\u70b9 (Recommend some tourist attractions in Beijing) Q-Former I-Adapter\u00a0 V-Adapter\u00a0 Q-Former C-Former Figure 1: X-LLM network architecture. 2 Related Work Vision-Language Models. As summarized in many surveys [5, 16], visual language models [58, 30] have made great strides with the development of pre-training techniques [11, 2, 60]. In the early days, researchers used Faster-RCNN [19] to extract image features and concatenated them with language models such as BERT [11] to perform vision-language pre-training. VisualBERT [31], for example, combines image regions and language using a Transformer [49] to allow self-attention to discover implicit alignments between language and vision. It is pre-trained with masked language modeling [11] and a sentence-image prediction task [31]. With the introduction of ViLT [26], researchers use vision transformers [13, 25] to process images, textual transformers (such as BERT [11], GPT-2 [42], T5 [43]) to process text, and pre-training objectives such as masked language modeling, image-text matching, and image-text contrast to train visual language models. CLIP [41] uses a text encoder and an image encoder to encode text and images separately and then performs unsupervised contrastive learning to obtain good representations of vision-language alignment. BLIP [30] is a new VLP framework that transfers \ufb02exibly to both vision-language understanding and generation tasks. 
In the \ufb01eld of visual dialogue [9, 6, 4], researchers design pre-training objectives related to visual dialogue based on vision-language pre-training models [11, 31] and \ufb01netune vison-language models on visual dialogue data [9] to achieve better dialogue performance. VisDial-BERT [36] and VDBERT [51], for example, use pre-trained ViLBERT [35] and BERT to \ufb01netune models on visual dialogue data using masked language modeling and image-text matching. AlignVD [7] proposes two methods for visual-language alignment based on pre-trained ViT [41] and BERT to achieve better performance in visual dialogue. Enhancing Vision-language Understanding with Advanced LLMs. Although the aforementioned vision-language models have achieved some success, there is still signi\ufb01cant room for improvement in terms of language generation [5, 34, 29]. A recent method [29, 14, 37] for enhancing visual language understanding using advanced large-scale language models [48, 8] has been proposed. For example, BLIP2 [29] uses a Q-Former to connect a visual encoder with an LLM, aligning the learned queries of the Q-Former with language-related visual features extracted by the visual encoder. The Q-Former then connects the visual encoder with the language model, allowing the learned query representations to adapt to the LLM. PaLM-E [14] combines ViT-22B [10] with PaLM560B [2] to inject multimodal information into the embedding space of the pre-trained language model, establishing a connection between perception and language and greatly enhancing the model\u2019s visual language understanding ability. In addition, Visual ChatGPT [53] and HuggingGPT [46] use ChatGPT as the core logic controller, which understands user intent and then call upon speci\ufb01c domain visual language models. Finally, the recently proposed GPT-4 [37] demonstrates powerful multimodal capabilities: building on its strong language understanding abilities, it can generate complex image descriptions, create websites based on handwritten text instructions, and explain unusual visual phenomena. However, the model structure and training strategies of GPT-4 remain a mystery. MiniGPT-4 [59] and LLaVA [33] align text and image data to the large-scale language 3 \fmodel Vicuna [8] and ViT [57] to complete image-based language tasks. In contrast, X-LLM is a universal framework for multimodal LLMs that bootstraps advanced large language models by treating multi-modalities as foreign languages. In this paper, we implement X-LLM that supports images, videos, and speech. Based on the X-LLM framework, we can extend the model to more modalities, such as injecting continuous space robot states, terminal information, or audio rather than speech into the LLM. 3 Approach X-LLM aims to align multiple pre-trained single-modal encoders with advanced large-scale language models (LLMs), as shown in Figure 1. Speci\ufb01cally, we use ChatGLM 1 as the language decoder, which is built on top of GLM [17, 56] and can perform various complex language tasks. For visual perception, we adopt ViT-g [57], as the image encoder and video encoder. For speech perception, we use a speech encoder comprised of convolution layers and conformer structure [21]. We design a module that aligns multimodal information with LLM, collectively referred to as the X2L interfaces, which includes an image interface, a video interface, and a speech interface. The image interface and the video interface have the same structure which consists of Q-Formers [29] and Adapter modules. 
The speech interface includes the C-Former and an Adapter module. The C-Former could compress the frame-level speech feature sequence from the speech encoder into the token-level speech embedding sequence with continuous integrate-and-\ufb01re (CIF) mechanism [12, 23, 22]. As the token-level speech embedding sequence is strictly aligned with the token sequence of the transcription corresponding to the speech utterance, representing speech using token-level speech embeddings can effectively reduce the GPU memory usage when incorporating speech into LLMs. 3.1 X2L Interfaces X2L interfaces aim to convert multimodal information into foreign languages, which includes an image interface, a video interface, and a speech interface. The Image Interface. Inspired by [29], the image interface consists of a Q-Formers [29] and an I-Adapter module. The Q-Formers aims to convert images into languages, where image features obtained from the image encoder are converted into a sequence with Li quasi-linguistic embeddings. The I-Adapter module aims to align the dimensions of the quasi-linguistic embeddings and the embedding dimension of the LLM. The Video Interface. The video interface has the same structure as the image interface, which also consists of Q-Formers [29] and a V-Adapter module. We use uniform sampling and represent each video with T frames. We then treat each frame as an image. The video interface converts each frame features into a sequence with Li quasi-linguistic embeddings. Then the video interface concatenates all the sequences to obtain the \ufb01nal quasi-linguistic embeddings, which have a length of T \u00d7 Li. The Speech Interface. To transform the speech features from the speech encoder into more semantic representations, we introduce a speech-to-language interface called the speech interface. The speech interface consists of two parts, namely the C-Former and the S-Adaptor. The C-Former is the combination of a CIF module and a 12-layer transformer structure [11]. First, the CIF module compresses the speech feature sequence from the speech encoder into a token-level speech embedding sequence with the same length as the corresponding transcription via variable-length down-sampling. Assuming the length of the feature sequence emitted by the speech encoder for the input speech is U, and the length of the token sequence of the transcription of the speech utterance is Ls, the length of the token-level speech embedding sequence should be Ls (U is usually several times longer than Ls). Then, the transformer structure provides contextual modeling for the token-level speech embeddings from the CIF module. Finally, the S-Adaptor is used to project the outputs of the transformer structure to the input vector space of the LLM, further narrowing down the semantic gap between speech and language. 1https://github.com/THUDM/ChatGLM-6B 4 \f3.2 Training Strategy To ef\ufb01ciently implement X-LLM, we propose a three-stage training strategy. (1) Converting Multimodal Information: we align the Image Encoder with the Q-Former of the image (green part), and the Speech Encoder with the CIF module. (2) Aligning X2L representations with the LLM: in the second stage, we align the Image Encoder with the LLM through the image interface, align the Video Encoder with the LLM through the video interface, and align the Speech Encoder with LLM through the speech interface. 
In the third stage, we integrate training of the image, video, and speech, and align the overall single-modal encoders with the LLM using a smaller but high-quality multimodal instruction dataset (such as instructions containing visual spoken dialogue, i.e., responding to spoken dialogue inputs based on images). 3.2.1 First Training Stage: Converting Multimodal Information In the \ufb01rst stage, the traditional approach is to align the Image Encoder with the image Q-Former using a large amount of image-text data, similar to the \ufb01rst stage of BLIP2 [29] which utilized around 500 million image-text pairs. However, we \ufb01nd that while BLIP2 used English data, we can still leverage the pretrained parameters of the Q-Former in BLIP2 to implement a Chinese Multimodal LLM. Therefore, in the \ufb01rst stage, to ef\ufb01ciently implement X-LLM, we only convert the representation of the speech encoder to a quasi-linguistic representation through the speech interface. For the speech-related structures, we train a CIF-based ASR model with multiple ASR datasets containing to obtain the speech encoder and CIF module in the C-Former. The CIF-based ASR model consists of a speech encoder, a CIF module, and a decoder [12]. We employ the speech encoder of this ASR model as the speech encoder and employ the CIF module of this ASR model as that in the C-Former of the speech interface. Note that the parameters of the speech encoder and CIF module are kept frozen during all subsequent training stages. Please refer to the appendix for more details about the structure and training of the CIF-based ASR model. 3.2.2 Second Training Stage: Aligning X2L Representations with the LLM As mentioned above, despite the difference in language, we are still able to reuse the parameters of the Q-Former in BLIP2. Speci\ufb01cally, we used the Q-Former trained in the second stage of BLIP2 to initialize the image interface\u2019s Q-Former in X-LLM. To adapt the Q-Former to Chinese LLM, we use a combined dataset, totaling approximately 14 million Chinese image-text pairs for training. Next, we use the trained image interface to initialize the video interface (the Q-Former and the V-Adapter) and train the video interface on the translated video-text data. Finally, we train the speech interface using ASR data to align the output of the speech interface with the LLM. It should be noted that throughout the entire second training stage, all the encoders and the LLM remain frozen, with only the interfaces being trained. 3.2.3 Third Training stage: Integrating Multiple Modalities After the \ufb01rst two stages of training, our X-LLM has demonstrated a remarkable ability to provide reasonable answers to human queries based on multimodal information and has gained a vast amount of knowledge. We have observed that, even without the instruction for joint training on multiple modalities, such as \"answer questions based on images using voice input,\" X-LLM is capable of performing tasks that require multiple modalities, such as visual spoken dialogue, multimodal speech recognition, and multimodal machine translation. This remarkable ability is likely due to X-LLM\u2019s integration of LLM\u2019s excellent instruction generalization capability, which has been extended to the multimodal domain. This ability enables us to train more modalities independently in the \ufb01rst two stages and integrate them into the model without the need for joint training with existing modalities. 
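As a concrete illustration of the variable-length down-sampling that the C-Former in the speech interface relies on, the following simplified, non-batched sketch shows the continuous integrate-and-fire step: per-frame weights are accumulated until a firing threshold of 1.0 is reached, at which point one token-level embedding is emitted, so a U-frame feature sequence is compressed to roughly Ls token-level embeddings. This is an illustrative reconstruction of the cited mechanism, not the authors' CIF implementation (which, among other things, also rescales the weights to match the target length during training).

```python
import torch

def cif_compress(frames, alphas, threshold=1.0):
    """Continuous integrate-and-fire (simplified, single utterance).

    frames: (U, d) frame-level speech features from the speech encoder
    alphas: (U,)   per-frame weights in [0, 1] predicted by the CIF module
    returns: (L, d) token-level embeddings, L roughly equal to the transcription length
    """
    tokens, acc_w, acc_v = [], 0.0, torch.zeros(frames.size(1))
    for h, a in zip(frames, alphas):
        a = float(a)
        if acc_w + a < threshold:                 # keep integrating this frame
            acc_w += a
            acc_v = acc_v + a * h
        else:                                     # fire: emit one token-level embedding
            used = threshold - acc_w              # portion of this frame that completes the token
            tokens.append(acc_v + used * h)
            acc_w = a - used                      # remainder starts accumulating the next token
            acc_v = acc_w * h
    # Any trailing residue below the threshold is dropped in this sketch.
    return torch.stack(tokens) if tokens else frames.new_zeros(0, frames.size(1))
```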
To explore the potential of multimodal joint instruction data in further enhancing X-LLM\u2019s ability to perform multimodal tasks, such as visual spoken question answering, we have constructed a concise but high-quality multimodal instruction dataset. Different from MiniGPT-4 [59] and LLaVA [33]\u2019s datasets, which only contain image-text instruction data and other textual instruction datasets for instruction \ufb01netuning and conversations, our dataset supports multimodal joint instructions and includes (1) image-text instruction data, (2) speech-text instruction data, (3) video-text instruction data, and (4) image-text-speech instruction data. 5 \fConstructing a High-quality Alignment Dataset for Multimodal LLM. We use ChatGPT to translate 3.5K image-text instructions built by MiniGPT-4. Then, we manually select 2k data from AISHELL-2 [15] and write 5 different instructions for speech recognition tasks. We use ChatGPT to translate the ActivityNet dataset [27], followed by manually selecting 1k data and writing 5 different instructions for corresponding video-text tasks. We manually select and rewrite 1k data from self-constructed VSDial-CN data, aiming to enable the model to perform dialogue generation tasks based on images and speech. More details of the data can be found in the appendix, including the details of the training data for the \ufb01rst two stages and the multimodal instruction data. The Third Training Stage. During this stage, we use the constructed compact yet high-quality data to \ufb01netune our model. During \ufb01netuning, we use the prede\ufb01ned prompts in the following template: Question: \\n Answer: In this prompt, represents a randomly sampled instruction from our prede\ufb01ned instruction set, including different forms such as \u201cdescribe this image in detail\u201d, \u201ccan you describe what you notice in the video\u201d, or \u201canswer the question in the speech based on the image\u201d. It should be noted that we do not calculate regression loss speci\ufb01cally for this particular instruction prompt. Therefore, X-LLM can integrate multiple modalities and generate more natural and reliable responses based on various combinations of instructions as needed. 4 Experiments 4.1 Multimodal Chat We have developed a Chatbot demo to show multimodal understanding and conversation abilities of X-LLM. For comparisons, query LLaVA [33]2 and MiniGPT-4 [59]3 from their online demos to get their response. As shown in Table 2 and 3, although LLaVA and MiniGPT-4 also exhibit the characteristic of generating answers based on the given prompt, their answers regarding visual content with Chinese elements are not as satisfactory. In the \ufb01rst example about the Forbidden City shown in Table 2, X-LLM recognizes that the place is the Forbidden City and provides a detailed introduction to its history, architecture, and style. LLaVA describes Chinese palaces and \ufb02ags, but it does not recognize that the famous palace is the Forbidden City and therefore cannot provide relevant information about it. MiniGPT-4 exhibits the same problem and tends to describe the image more. In the second example about the game \u201cHonor of Kings\u201d shown in Table 3, X-LLM identi\ufb01es it as a multiplayer online battle arena game, \u201cHonor of Kings\u201d, developed by Tencent and provides accurate release time. 
LLaVA, on the other hand, gives multiple incorrect answers, as there are no elements of popular games such as snakes and pocket monsters in the image, and the game is not played with a mouse. MiniGPT-4 fails to recognize the game and provides a more generic description. For video input and speech input, we provide some examples as shown in Appendix B. Quantitative Evaluation. In order to systematically evaluate the performance of the X-LLM model on visual input, we aim to use quantitative metrics to measure the model\u2019s ability to follow instructions. We adopt an evaluation method similar to that proposed by LLaVA [33] and use ChatGPT to measure the quality of the answers generated by our model. Speci\ufb01cally, we use the LLaVA-test dataset [33] provided by LLaVA, which contains 30 randomly selected images from the COCO validation set, each with three types of questions (conversation, detailed description, and complex reasoning). We \ufb01rst translate the questions into Chinese, and X-LLM predicts the answers based on the translated Chinese questions and visual input images. Then we translate the responses given by X-LLM into English for comparison with GPT-4. GPT-4 makes reference predictions based on the question, ground truth bounding boxes, and captions, marking the upper limit. After obtaining the responses from the two models, we provide the question, visual information (in the form of captions and bounding boxes), and generated responses from both assistants to ChatGPT. ChatGPT evaluates the usefulness, 2https://llava-vl.github.io/ 3https://minigpt-4.github.io/ 6 \fModel Conversation Detail description Complex reasoning All LLaVA 83.1 75.3 96.5 85.1 X-LLM 85.4 83.5 84.6 84.5 w/ 4M 74.8 83.7 86.5 81.9 w/ 4M no init 64.7 71.9 85.0 73.8 Table 1: Relative scores for different settings w.r.t. GPT-4 (text-only) on 30 randomly sampled images from COCO Val 2014. Each image is associated one short question, one detailed question, and one complex reasoning question, resulting in a total of 90 questions. We prompt ChatGPT with the answers from our model outputs and the answers by GPT-4 (text-only), and let it compare between both responses and give a rating with an explanation. \u201cw/ 4M\u201d denotes that we train the image interface only using 4M image-text pairs. \u201cw/ 4M no init\u201d denotes that we train the image interface only using 4M image-text pairs and without using the parameters of pretrained BLIP2. relevance, accuracy, and level of detail of the assistants\u2019 responses and gives an overall score from 1 to 10, with higher scores indicating better overall performance. ChatGPT is also required to provide a comprehensive evaluation explanation for a better understanding of the model. LLaVA used GPT-4 as a teacher to evaluate the quality of the responses generated by LLaVA and GPT-4, while we believe that using a non-GPT-4 evaluation model (i.e. using ChatGPT) will be more objective (Also because we do not have GPT-4 API.). Examples of test questions can be found in Appendix A.2. We show the results in Table 1. Although different evaluation models are used (LLaVA uses GPT-4, X-LLM uses ChatGPT), we are able to make rough comparisons. The results show that X-LLM yields a performance of 84.5% nearly GPT-4. X-LLM outperforms LLaVA in terms of conversation and detail description but is inferior in complex reasoning. There are two reasons for this. 
One reason is that X-LLM do not use the 150k visual instruction dataset proposed by LLaVA, which has the same format as the test set. The second reason is that X-LLM has fewer language model parameters. It is based on ChatGLM with 6B parameters, while LLaVA is based on Vicuna with 13B parameters. And we do not \ufb01netune the LLM while LLaVA \ufb01netune the LLM Vicuna. Furthermore, comparing \u201cX-LLM w/ 4M\u201d and \u201cX-LLM w/ 4M no init\u201d, we can observe that using the BLIP2 pre-trained Q-Former parameters signi\ufb01cantly improves the model\u2019s performance, which This veri\ufb01es the transferability of parameters in the English image text alignment module. The transferability of language greatly increases the possibility of using English image-text data and its trained model parameters, and improves the ef\ufb01ciency of training multimodal LLMs in other languages. Comparing X-LLM and \u201cX-LLM w/ 4M\u201d, we can see that increasing the number of image-text pairs used during training can enhance the model\u2019s performance. However, we also notice that X-LLM performs worse than \u201cX-LLM w/ 4M\u201d in complex reasoning, which may be attributed to the additional use of the Wukong dataset [20], whose quality is inferior to that of the dataset comprising the 4M data. Note that all three variants of X-LLM achieve high performances on complex reasoning which is because of the powerful ability of language models and complex reasoning questions can also be answered without images to some extent. 4.2 ASR and Multimodal ASR We evaluate the ASR and multmodal ASR (MASR) on AISHELL-2 and VSDial-CN datasets. Please refer to Appendix A for details. As shown in Table 4, we compare X-LLM with the CIF-based model mentioned in Section 3.2.1. From the results in the table, overall, the speech recognition performance of X-LLM is weaker than that of the CIF-based Model. This is because the CIF-based model is trained with full parameters, while X-LLM only trains the BERT in the C-Former and the S-Adapter. In the results of multimodal speech recognition on the VSDial-CN dataset, the introduction of images caused a decrease in the speech recognition performance of X-LLM (from 3.4 to 4.7 on test-orig and from 4.7 to 6.8 on test-art). This may be because we provided X-LLM with a \ufb01xed single instruction \u201cPlease faithfully recognize the speech\u201d during the second training stage for speech recognition. However, when conducting multimodal speech recognition, we replace the instruction with \u201cPlease faithfully recognize the speech based on the image.\u201d. After \ufb01netuning X-LLM with 2k speech recognition-related instructions in the third stage, the error rate of X-LLM\u2019s multimodal speech recognition decreased from 4.7 to 4.3 on test-orig and from 6.8 to 6.3 on test-art. 
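The scores above and in Table 4 are character error rates (CER, %). As a reference for how such numbers are typically computed, the sketch below implements the standard edit-distance-based CER; it is a generic utility for illustration, not the evaluation script used in the paper.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: (substitutions + deletions + insertions) / len(reference)."""
    r, h = list(reference), list(hypothesis)
    # Levenshtein distance via dynamic programming.
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(r)][len(h)] / max(len(r), 1)

# Example: cer("今天天气很好", "今天天气根好") = 1/6 ≈ 0.167, i.e. 16.7% when reported as a percentage.
```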
In the future, we will supplement the training process by using various speech recognition instructions in the second stage and increasing the amount of 7 \fVisual input example, The Forbidden City: User \u8be6\u7ec6\u4ecb\u7ecd\u4e00\u4e0b\u8fd9\u4e2a\u5730\u65b9(Introduce the place in details) X-LLM \u5317\u4eac\u7684\u6545\u5bab\u662f\u4e2d\u56fd\u5c01\u5efa\u738b\u671d\u7687\u5bb6\u5bab\u6bbf\u7684\u4ee3\u8868\uff0c\u4e5f\u662f\u4e16\u754c\u4e0a\u4fdd\u5b58\u6700\u5b8c\u6574\u7684\u53e4\u4ee3\u5bab\u6bbf \u5efa\u7b51\u7fa4\u4e4b\u4e00\uff0c\u4f4d\u4e8e\u5317\u4eac\u5e02\u4e2d\u5fc3\u7d2b\u7981\u57ce\u5185\uff0c\u662f\u4e2d\u56fd\u6700\u91cd\u8981\u7684\u6587\u5316\u9057\u4ea7\u4e4b\u4e00\u3002 \u6545\u5bab\u59cb\u5efa\u4e8e\u660e\u671d\u6c38\u4e50\u5e74\u95f4\uff0c\u662f\u4e2d\u56fd\u53e4\u4ee3\u5bab\u5ef7\u5efa\u7b51\u4e4b\u7cbe\u534e\uff0c\u4e5f\u662f\u4e2d\u534e\u6c11\u65cf\u6587\u5316\u7684\u7470 \u5b9d\u4e4b\u4e00\u3002\u6545\u5bab\u5360\u5730\u9762\u79ef\u7ea672\u4e07\u5e73\u65b9\u7c73\uff0c\u5efa\u7b51\u9762\u79ef\u7ea615\u4e07\u5e73\u65b9\u7c73\uff0c\u662f\u4e16\u754c\u4e0a\u73b0\u5b58\u6700 \u5927\u3001\u4fdd\u5b58\u6700\u5b8c\u6574\u7684\u53e4\u4ee3\u5bab\u6bbf\u5efa\u7b51\u7fa4\u4e4b\u4e00\u3002 \u6545\u5bab\u5efa\u7b51\u7fa4\u5e03\u5c40\u4e25\u8c28\u3001\u7cbe\u7f8e\uff0c\u5305\u62ec\u4e7e\u6e05\u5bab\u3001\u4ea4\u6cf0\u6bbf\u3001\u4fdd\u548c\u6bbf\u3001\u5348\u95e8\u3001\u4e09\u5927\u6bbf\u3001\u592a \u548c\u6bbf\u3001\u4e7e\u6e05\u5bab\u3001\u517b\u5fc3\u6bbf\u3001\u7d2b\u7981\u57ce\u7b49108\u4e2a\u5efa\u7b51\u548c\u90e8\u95e8\uff0c\u67099000\u591a\u95f4\u623f\u95f4\uff0c\u62e5\u6709\u73cd\u8d35 \u7684\u6587\u7269\u548c\u827a\u672f\u54c1\uff0c\u88ab\u8a89\u4e3a\u4e16\u754c\u7470\u5b9d\u3002 \u6545\u5bab\u7684\u5efa\u7b51\u98ce\u683c\u4ee5\u9ec4\u8272\u7409\u7483\u74e6\u9876\u3001\u7ea2\u5899\u9ec4\u74e6\u3001\u6a90\u89d2\u98de\u6a90\u4e3a\u4e3b\uff0c\u8272\u5f69\u9c9c\u8273\uff0c\u5bcc\u4e3d\u5802 \u7687\u3002\u6545\u5bab\u7684\u6587\u7269\u4e30\u5bcc\u591a\u5f69\uff0c\u5305\u62ec\u5bab\u5ef7\u7528\u54c1\u3001\u827a\u672f\u54c1\u3001\u56fe\u4e66\u3001\u5178\u7ae0\u5236\u5ea6\u3001\u5386\u53f2\u6587\u732e \u7b49\uff0c\u88ab\u8a89\u4e3a\u4e2d\u56fd\u53e4\u4ee3\u6587\u5316\u4e4b\u5b9d\u5e93\u3002 \u6545\u5bab\u662f\u4e2d\u56fd\u5386\u53f2\u548c\u6587\u5316\u7684\u91cd\u8981\u8c61\u5f81\uff0c\u4e5f\u662f\u4e2d\u56fd\u4eba\u6c11\u7684\u73cd\u8d35\u9057\u4ea7\uff0c\u5438\u5f15\u7740\u4f17\u591a\u56fd\u5185 \u5916\u6e38\u5ba2\u524d\u6765\u53c2\u89c2\u548c\u63a2\u7d22\u3002 (Translated in English:) The Forbidden City, located in the center of Beijing within the Imperial Palace complex, is a representation of the royal palaces of feudal dynasties in China and one of the most complete ancient palace architectural complexes in the world. It is one of the most important cultural heritages in China. The construction of the Forbidden City began in the Yongle reign of the Ming Dynasty, and it represents the essence of ancient Chinese court architecture and one of the treasures of Chinese culture. The Forbidden City covers an area of about 720,000 square meters, with a building area of about 150,000 square meters, making it one of the largest and most complete ancient palace architectural complexes in the world. The layout of the Forbidden City is rigorous and exquisite, including 108 buildings and departments such as the Qianqing Palace, Jiaotai Hall, Baohe Hall, Meridian Gate, Three Great Halls, Taihe Palace, Qianqing Palace, Yangxin Hall, and the Purple Forbidden City. 
It has more than 9,000 rooms and houses precious cultural relics and artworks, making it a world treasure. The architectural style of the Forbidden City is characterized by yellow glazed tile roofs, red walls, yellow tiles, and \ufb02ying eaves, with bright colors and magni\ufb01cent grandeur. The cultural relics in the Forbidden City are rich and diverse, including court articles, artworks, books, systems of laws and regulations, and historical documents, making it a treasure trove of ancient Chinese culture. The Forbidden City is an important symbol of Chinese history and culture, as well as a precious heritage of the Chinese people, attracting numerous domestic and foreign tourists to visit and explore. User Introduce the place in details LLaVA [33] The image features a large and beautiful red palace, which appears to be a Chinese temple or government building, situated on a grassy hill. The main building is surrounded by several smaller buildings, all of which are adorned with red \ufb02ags. These \ufb02ags display the Chinese \ufb02ag design, emphasizing the cultural signi\ufb01cance of the location. In the foreground, there are various vehicles, including a few cars and trucks, parked close to the palace grounds. Additionally, a person can be seen standing further away from the main building, possibly admiring the architecture or waiting for someone. With its grand architecture and serene setting, the palace serves as an impressive landmark in the area. User Introduce the place in details MiniGPT-4 [59] The image shows a red building with a large gate in the front. The building has a large red roof and white walls. There are several \ufb02ags on the roof of the building, including the Chinese \ufb02ag. The building is surrounded by green grass and bushes. The sky is clear and blue. Table 2: Example prompt demonstrating X-LLM, LLaVA and MiniGPT-4\u2019s Chinese characteristic visual input capability. The prompt requires image understanding. 8 \fVisual input example, Honor of Kings: User \u8fd9\u662f\u4ec0\u4e48\u6e38\u620f\uff1f\u4ecb\u7ecd\u4e00\u4e0b\u3002(What game is this? Introduce to me.) X-LLM \u738b\u8005\u8363\u8000\u662f\u7531\u817e\u8baf\u6e38\u620f\u5f00\u53d1\u5e76\u53d1\u884c\u7684\u4e00\u6b3e\u591a\u4eba\u5728\u7ebf\u6218\u6597\u7ade\u6280\u624b\u6e38\uff0c \u4e8e2015\u5e7411\u670826\u65e5\u5728\u4e2d\u56fd\u5185\u5730\u6b63\u5f0f\u53d1\u884c\uff0c\u662f\u4e00\u6b3e\u7ade\u6280\u6e38\u620f\uff0c\u73a9\u5bb6\u53ef\u4ee5\u9009\u62e9\u4e0d\u540c \u7684\u82f1\u96c4\u8fdb\u884c\u6218\u6597\uff0c\u5e76\u4e0e\u5176\u4ed6\u73a9\u5bb6\u7ec4\u961f\u7ade\u6280\u3002\u738b\u8005\u8363\u8000\u5728\u4e2d\u56fd\u53d7\u5230\u4e86\u5e7f\u6cdb\u6b22\u8fce\uff0c\u5e76 \u4e14\u5728\u5168\u7403\u8303\u56f4\u5185\u4e5f\u62e5\u6709\u5927\u91cf\u7684\u73a9\u5bb6\u3002 (Translated in English:) Honor of Kings is a multiplayer online battle arena (MOBA) mobile game developed and published by Tencent Games. It was of\ufb01cially released in mainland China on November 26, 2015. As a competitive game, players can select different heroes to \ufb01ght and compete with other players in teams. Honor of Kings has gained widespread popularity in China and has a large player base globally. User What game is this? Introduce to me. LLaVA [33] The image shows a computer screen with a game that appears to involve a combination of elements from popular games like snake and pokemon. 
The game is being played with a mouse, and the screen is \ufb01lled with many small images, likely representing different characters or items within the game. The game seems to be set in a grassy \ufb01eld, where the player encounters and engages with various creatures or obstacles. As an AI, I am unable to provide further information on the speci\ufb01c game or its rules, but it is clear that the user is actively engaged in playing this unique game on their computer. User What game is this? Introduce to me. MiniGPT-4 [59] This is a screenshot of a game with a character selection screen. The game appears to be a multiplayer game with various characters to choose from. The game appears to be a role-playing game with different characters and abilities. Table 3: Example prompt demonstrating X-LLM, LLaVA and MiniGPT-4\u2019s Chinese characteristic visual input capability. The prompt requires image understanding. Model AISHELL-2 VSDial-CN test-ios test-android test-mic test-orig test-art S S + V S S + V CIF-based Model 6.0 6.4 6.4 2.8 1.7 6.0 5.3 X-LLM 8.7 9.0 9.4 3.4 4.7 / 4.3 4.7 6.8 / 6.3 Table 4: Performance of X-LLM on AISHELL-2 and VSDial-CN (CER%). \u201cS\u201d denotes only using speech for ASR. \u201cS+V\u201d denotes using speech and images for MASR. data for \ufb01netuning instructions in the third stage to observe the changes in the multimodal speech recognition ability of X-LLM. Additionally, a more powerful LLM may have stronger instruction generalization, which could improve the performance of multimodal speech recognition. We observe that although the addition of images to X-LLM\u2019s speech recognition task results in a slight decrease in performance, X-LLM is able to comprehend spoken questions in speech without \ufb01netuning, and provide appropriate responses. It can also incorporate images to provide suitable answers to spoken questions. After a small amount of data \ufb01netuning in the third phase, X-LLM\u2019s ability in this regard is further improved. 9 \f5 Discussions This paper demonstrates the effectiveness of X-LLM, which injects multiple modalities as foreign languages into a large language model through the X2L interface, endowing LLM with powerful multimodal capabilities. We design a three-stage training method to train X-LLM, where each modality interface has high independence in the \ufb01rst two stages, facilitating simultaneous training. Through the \ufb01rst two stages of training, X-LLM can interact with each modality through language. Furthermore, X-LLM can complete tasks involving multiple modalities (such as visual spoken question answering) without further \ufb01netuning on joint instruction datasets, thanks to its integration of the instruction generalization ability of large language models and its adaptation to the multimodal domain. The integration of multiple modalities without training greatly facilitates the modality expansion of X-LLM. To further explore the impact of joint multimodal instruction data on X-LLM\u2019s ability to integrate multiple modalities, we construct a streamlined but high-quality multimodal instruction dataset, and X-LLM\u2019s performance is further improved after \ufb01ne-tuning on this data. This project is still ongoing and currently has several limitations: (1) Limitations of the language model. X-LLM is built on top of ChatGLM with only 6B parameters and inherits its limitations, including but not limited to unreliable reasoning ability and fabrication of non-existent facts. 
(2) Insuf\ufb01cient training for modal connections. X-LLM\u2019s multi-modal perception ability is somewhat limited. We only used a small amount of multi-modal data sets to connect the multi-modal encoder and a large language model. There are several directions for further exploration: (1) Data scale. Compared to BLIP2, we only used a small amount of Chinese multimodal data. We believe that using larger Chinese data for training can signi\ufb01cantly improve the model\u2019s performance by increasing concept coverage. (2) Connecting more modalities. We can connect audio to enable LLM to understand and interact with non-verbal audio. We can also connect the status information of various terminals to LLM, so that LLM can control the terminals based on their status information. (3) Using better LLM. Due to the limitation of computing resources, we only used a 6B language model for experimentation. It can be expected that using a stronger language model, X-LLM will gain more powerful capabilities." + }, + { + "url": "http://arxiv.org/abs/2204.07302v1", + "title": "Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning", + "abstract": "Visual Dialog is a challenging vision-language task since the visual dialog\nagent needs to answer a series of questions after reasoning over both the image\ncontent and dialog history. Though existing methods try to deal with the\ncross-modal understanding in visual dialog, they are still not enough in\nranking candidate answers based on their understanding of visual and textual\ncontexts. In this paper, we analyze the cross-modal understanding in visual\ndialog based on the vision-language pre-training model VD-BERT and propose a\nnovel approach to improve the cross-modal understanding for visual dialog,\nnamed ICMU. ICMU enhances cross-modal understanding by distinguishing different\npulled inputs (i.e. pulled images, questions or answers) based on four-way\ncontrastive learning. In addition, ICMU exploits the single-turn visual\nquestion answering to enhance the visual dialog model's cross-modal\nunderstanding to handle a multi-turn visually-grounded conversation.\nExperiments show that the proposed approach improves the visual dialog model's\ncross-modal understanding and brings satisfactory gain to the VisDial dataset.", + "authors": "Feilong Chen, Xiuyi Chen, Shuang Xu, Bo Xu", + "published": "2022-04-15", + "updated": "2022-04-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "INTRODUCTION Recently, with the rise of pre-trained models [2], researchers have begun to explore vision-and-language task [3, 4, 5] with pre-trained models [1]. Speci\ufb01cally, visual dialog [6, 7, 8, 9], which aims to hold a meaningful conversation with a human about a given image, is a challenging task that requires models have suf\ufb01cient cross-modal understanding based on both visual and textual context to answer the current question. One way to gain suf\ufb01cient cross-modal understanding is through utilizing kinds of attention mechanism [10, 11, 12]. ReDAN [13] and DMAM [14] use multi-step reasoning based on dual attention to learn cross-modal understanding. DAN [15], MCAN [7] and LTMI [16] utilize multi-head attention mechanisms to manage multimodal intersection. Moreover, there are some approaches [17, 18, 19, 20, 21] using graph-based structures to learn cross-modal understanding. 
However, the approaches mentioned above do not utilize pretrained models, which have a strong power to deal with vision-andlanguage tasks. Visdial-BERT [22] and VD-BERT [1] take advantage of the pre-trained model to greatly improve the performance of the visual dialog task. As shown in Figure 1, the SOTA model VDBERT often makes mistakes and usually ranks the wrong answers \ufb01rst. VD-BERT does not have enough cross-modal understanding Fig. 1. A motivating example of cross-modal understanding of VDBERT [1]. We show the candidates ranking results of VD-VBERT based on its cross-modal understanding. It can be seen that in the \ufb01rst 8 candidates, wrong answers account for most of them, and the ranking results of correct answers are not so good. capabilities, so that it often scores unrelated wrong answers very high, such as the top 1 candidate answer \u201cno\u201d to the question Q4 \u201cis the food in his mouth ?\u201d shown in Figure 1. In this paper, we propose a novel approach to improve the crossmodal understanding for visual dialog, named ICMU. ICMU enhances cross-modal understanding by distinguishing different pulled inputs (i.e. pulled images, questions or answers) based on four-way contrastive learning. What\u2019s more, ICMU exploits the single-turn visual question answering to enhance the visual dialog model\u2019s crossmodal understanding to handle a multi-turn visually-grounded conversation. Experiments show that the proposed approach improves the visual dialog model\u2019s cross-modal understanding and brings satisfactory gain on the VisDial dataset [5]. The contributions of this work are summarized as follows: \u2022 We propose a novel approach ICMU, including 4-way contrastive learning and enhancing by utilizing VQA, to improve the cross-modal understanding based on vision-and-language pre-trained models for visual dialog. \u2022 We conduct extensive experiments and ablation studies on the large-scale datasets VisDial v1.0. Experimental results show that our approach improves the visual dialog model\u2019s crossmodal understanding and brings satisfactory gain. arXiv:2204.07302v1 [cs.CV] 15 Apr 2022 \fFig. 2. The Framework of our ICMU. * indicates the pulled inputs. 2. METHODOLOGY In this section, we \ufb01rst formally describe the visual dialog task. Given a current question Qt with an image I at t-th turn, as well as its dialog history Ht = {C, (Q1, A1), ..., (Qt\u22121, At\u22121)} (where C denotes the image caption), the dialog model is required to predict its answer At by ranking a list of 100 answer candidates { \u02c6 A1 t, \u02c6 A2 t, ..., \u02c6 A100 t }. Figure 2 shows the overview of our approach. First, we employ a uni\ufb01ed vision-dialog Transformer to encode both the image and dialog history, where we append an answer candidate \u02c6 At in the input to model their interactions in an early fusion manner. Next, we adopt cross-modal masked token loss and cross-modal contrastive loss to train the model for effective cross-modal understanding in visual dialog. In addition, we exploit the single-turn visual question answering to enhance the visual dialog model\u2019s cross-modal understanding to handle a multi-turn visually-grounded conversation. 2.1. Vision-Dialog Transformer 2.1.1. Visual Features. Given an image I, we employ Faster R-CNN [23] pre-trained on Visual Genome [24] to extract the object-level vision features RI = {o1, ..., ok}, where each object feature oi is a 2048-d Region-ofInterest (RoI) feature. k is \ufb01xed to 36 in our setting. 
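To make the visual input pipeline concrete, the following is a minimal sketch of how the k = 36 region features (2048-d each) could be combined with a low-dimensional spatial encoding (detailed in the next paragraph) and projected to the transformer hidden size. The module name, the hidden size of 768, and the use of a simple sum followed by layer normalization are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: packs k=36 RoI features (2048-d) with a 7-d geometry
# vector (normalized box, class id, confidence; see next paragraph) and projects
# them to the transformer hidden size. Names and dims are assumptions, not ICMU code.
class VisualEmbedder(nn.Module):
    def __init__(self, roi_dim: int = 2048, geo_dim: int = 7, hidden: int = 768):
        super().__init__()
        self.roi_proj = nn.Linear(roi_dim, hidden)
        self.geo_proj = nn.Linear(geo_dim, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, roi_feats: torch.Tensor, geo_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (batch, 36, 2048); geo_feats: (batch, 36, 7)
        return self.norm(self.roi_proj(roi_feats) + self.geo_proj(geo_feats))

# Usage with random placeholders standing in for Faster R-CNN outputs.
embedder = VisualEmbedder()
tokens = embedder(torch.randn(2, 36, 2048), torch.rand(2, 36, 7))
print(tokens.shape)  # torch.Size([2, 36, 768])
```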
In addition, we adopt normalized bounding box coordinates as the spatial location due to disorder of visual objects. Speci\ufb01cally, we de\ufb01ne the location information by constructing a 5-d vector: pi = ( x1 W , y1 H , x2 W , y2 H , (x2\u2212x1)(y2\u2212y1) W H ), where (x1, y1) and (x2, y2) are the coordinates of the bottom-left and top-right corner of the i-th object, W and H respectively denote the width and height of the input image, and the last element is the relative area of the object. We also extend pi with its class id and con\ufb01dence score for a richer representation to 7-d vector. 2.1.2. Textual Features. For the textual features, we pack all the textual elements (the history, question and answer candidate) into a long sequence and employ WordPiece tokenizer [25] to split it into a word sequence w, where each word is embedded with an absolute positional code following [26]. 2.1.3. Cross-Modality Encoding. Like a most vision-and-language transformers, we integrate the image objects with language elements into a whole input sequence. As shown in Figure 2, we use some special tokens to segment different elements in the input sequence. We use [CLS] to denote the beginning of the sequence, and [SEP] to separate the two modalities. Moreover, we utilize a special token [HIS] to denote end of turn [27], which informs the model when the dialog turn ends. And we use [Ques] and [Ans] to segment the current question and the answer candidate. As such, we prepare the input sequence into the format as x = ([CLS], o1, ..., ok, [SEP], C, [His], Q1A1, [His], ..., [Ques], Qt, [Ans], \u02c6 At, [SEP]). Finally, We combine each input token embedding with its position embedding and segment embedding (0 or 1, indicating whether it is image or text) and then perform layer normalization [28]. 2.1.4. Transformer Backbone. We utilize transformer encoder as the Transformer backbone to handle cross-modal understanding. Formally, we denote the embedded vision-language inputs as H0 = [e1, ..., e|x|] and then encode them into multiple levels of cross-modal representations Hl = [hl 1, ..., hl |x|] using L-stacked Transformer blocks, where the l-th Transformer block is denoted as Hl = Transformer(Hl\u22121), l \u2208 [1, L]. Speci\ufb01cally, the cross-modal representations Hl is calculated by using the multi-head self-attention [29] as follows: Q = Hl\u22121WQ l , K = Hl\u22121WK l , V = Hl\u22121WV l , (1) Mij = ( 0, allow to attend, \u2212\u221e, prevent from attending, (2) Al = softmax(QKT \u221adk + M)V, (3) where WQ l , WK l , WV l \u2208Rdh\u00d7dk are learnable weights for computing the queries, keys, and values respectively, and M \u2208R|x|\u00d7|x| is the self-attention mask that determines whether tokens from two \fModel NDCG MRR R@1 R@5 R@10 Mean ReDAN 57.63 64.75 51.10 81.73 90.90 3.89 GNN-EM 52.82 61.37 47.33 77.98 87.83 4.57 DualVD 56.32 63.23 49.25 80.23 89.70 4.11 FGA 56.90 66.20 52.75 82.92 91.07 3.80 CAG 56.64 63.49 49.85 80.63 90.15 4.11 KBGN 57.60 64.13 50.47 80.70 90.16 4.08 LG 58.55 64.00 50.63 80.58 90.20 4.12 GoG 60.38 63.13 49.88 79.65 89.05 4.39 VD-BERT 59.96 65.44 51.63 82.23 90.68 3.90 ICMU (Ours) 61.30 66.82 53.50 83.05 92.05 3.59 Table 1. Main comparisons on VisDial v1.0 test datasets (online). Our approach improves the strong baseline signi\ufb01cantly. (t-test, pvalue<0.01) sources can attend each other. Then Al is passed into a feedforward layer to compute Hl for the next layer: Hl = FFN(Al) (4) 2.2. 
Cross-Modal Training Objectives To make the model learn cross-modal understanding, we use two cross-modal training losses\u2014cross-modal masked token loss and cross-modal contrastive loss: L = LCMT L + LCCL4, (5) where LCMT L is the cross-modal masked token loss and LCCL4 is a novel 4-way contrastive loss. 2.2.1. Cross-modal Masked Token Loss At each iteration, we randomly mask each input token with probability 15% and replace the masked one with a special token [MASK]. The model is then required to recover them based not only on the surrounding tokens w\\m but also on the image I by minimizing the negative log-likelihood: LCMT L = \u2212E(I,w)\u223cD log P(wm|w\\m, I), (6) where wm refers to the masked token and D denotes the training set. 2.2.2. Cross-modal Contrastive Loss As shown in Figure 2, to compute contrastive losses, for each input quartette X = (I, H, Q, A), we construct three types of negative (unmatched) quartettes, where I denotes the image, H denotes the history, Q denotes the question, A denotes the answer. The \ufb01rst one is the polluted image (I\u2217, H, Q, A), the second is the polluted question (I, H, Q\u2217, A) and the \ufb01nal one is the polluted answer (I, H, Q, A\u2217), where \u2217denotes the polluted input. Since the encoding of [CLS] can be viewed as a representation of the quartette X = (I, H, Q, A), we apply a fully-connected (FC) layer on top of it as a 4-way classi\ufb01er f(\u00b7) to predict whether the quartette is matched (c = 0), contains a polluted I\u2217(c = 1), or contains a polluted Q\u2217(c = 2) or contains a polluted A\u2217(c = 3). The 4-way contrastive loss is de\ufb01ned as LCCL4 = \u2212E(I,H,Q,A;c)\u223cD log P(c|f(I, H, Q, A), (7) where the datasets I, H, Q, A \u2208D contains 50% matched quartettes, and the three negatives evenly divide the remaining 50% in the training set. Model NDCG MRR R@1 R@5 R@10 Mean MN 60.29 46.14 77.68 87.57 4.84 HCIAE 61.96 48.25 78.97 88.43 4.56 CoAtt 62.77 49.38 78.99 88.49 4.56 ReDAN 64.29 50.65 81.29 90.17 4.10 KBGN 59.08 64.86 51.37 81.71 90.54 4.00 LG 59.67 65.03 51.69 81.49 90.32 4.02 GoG 63.15 62.68 49.46 78.77 87.87 4.81 VisDial-BERT 62.64 67.86 54.54 84.34 92.36 3.44 VD-BERT 63.22 67.44 54.02 83.96 92.33 3.53 ICMU (Ours) 64.30 69.14 56.80 85.09 93.42 3.37 Table 2. Main comparisons on VisDial v1.0 val datasets. Our approach improves the strong baseline signi\ufb01cantly. (t-test, pvalue<0.01) Model NDCG MRR R@1 R@5 R@10 Mean ICMU 64.30 69.14 56.80 85.09 93.42 3.37 VQA 63.32 67.62 54.50 84.10 92.90 3.44 CL 63.34 67.90 54.82 84.35 92.43 3.52 Table 3. Ablation study on VisDial v1.0 val datasets. \u201cVQA\u201d denotes enhancing by utilizing VQA. \u201cCL\u201d denotes the 4-way contrastive learning. 2.3. Using VQA to Enhance Visual Dialog Although VQA is single-turn, VQA models and visual dialog models require similar cross-modal understanding capabilities. We use VQA to enhance visual dialogue. We exploit the training and val split of VQA v2.0 dataset, which contains the same images as VisDial v1.0 train split. As there is no caption for the image in VQA v2.0, we use VisDial v1.0 to construct a caption for each image in the VQA v2.0. Thus each input from VQA v2.0 can be de\ufb01ned as (I, C, Q, A), where I denotes the image, C denotes the constructed caption, Q denotes the question, A denotes the answer. We let the history H be null. 3. EXPERIMENTS 3.1. Experiment Setup 3.1.1. Datasets and Implementation Details. We evaluate our model on the VisDial v1.0 datasets [30]. 
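Before turning to the dataset details, the sketch below illustrates the 4-way contrastive objective of Eq. (7): the encoding of [CLS] for a quartette (I, H, Q, A) is fed to a 4-way classifier that predicts whether the quartette is matched or which element was polluted, and at inference candidates are ranked by the probability of the "matched" class, as described in the implementation details below. The head architecture and tensor shapes are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of the 4-way contrastive loss (Eq. 7). The label convention
# (0 = matched, 1 = polluted image, 2 = polluted question, 3 = polluted answer)
# follows the text; the head and shapes are assumptions.
class FourWayContrastiveHead(nn.Module):
    def __init__(self, hidden: int = 768, num_classes: int = 4):
        super().__init__()
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, cls_repr: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # cls_repr: (batch, hidden), encoding of [CLS] for each (I, H, Q, A) quartette
        # labels:   (batch,), integers in 0..3 as described above
        logits = self.classifier(cls_repr)
        return F.cross_entropy(logits, labels)

# At inference, answer candidates are ranked by the probability of class 0 ("matched").
head = FourWayContrastiveHead()
cls_repr = torch.randn(100, 768)                      # one [CLS] encoding per answer candidate
match_scores = head.classifier(cls_repr).softmax(-1)[:, 0]
ranking = match_scores.argsort(descending=True)       # best candidate first
```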
Specifically, v1.0 contains a training set of 123,287 images, a validation set of 2,048 images, and a test set (hosted blindly on the task organizers' server) of 8,000 images. Each image is associated with one caption and 10 question-answer pairs. Each question is paired with a list of 100 answer candidates, one of which is regarded as the correct answer. VQA v2.0 contains the same 123,287 images as VisDial v1.0 but different question-answer pairs. We use BERT-BASE as the backbone, which consists of 12 Transformer blocks, each with 12 attention heads and a hidden state dimension of 768. We use Adam [31] with an initial learning rate of 3e-5 and a batch size of 80 to train our model. A linear learning rate decay schedule with a warmup ratio of 0.1 is employed. We first train our model for 20 epochs on a cluster of 4 A100 GPUs with 40GB memory using the CMTL and CCL4 losses (with equal coefficients). Here we only utilize one previous dialog turn for training efficiency. After that, we train for another 15 epochs using only the CCL4 loss. Fig. 3. Case study. During inference, we rank the answer candidates according to the class score c = 0 of the CCL4 loss. 3.1.2. Automatic Evaluation We use a retrieval setting to evaluate individual responses at each round of a dialog, following [5]. Specifically, at test time, apart from the image, the ground-truth dialog history, and the question, a list of 100 candidate answers is also given. The model is evaluated on retrieval metrics: (1) Mean Rank of the human response (Mean ↓), (2) existence of the human response in the top-k ranked responses, i.e., R@k ↑, (3) Mean Reciprocal Rank (MRR ↑) of the human response, and (4) Normalized Discounted Cumulative Gain (NDCG ↑) for VisDial v1.0. 3.2 Main Results 3.2.1. Baseline Methods We compare our method with the following baselines: (1) attention-based models: HCIAE [10], CoAtt [11], ReDAN [13], LG [32]; (2) pre-training models: VD-BERT [1] and VisDial-BERT [22]; (3) graph-based models: GNN-EM [17], DualVD [19], FGA [18], GoG [6], KBGN [21]. 3.2.2. Results Performance on the VisDial benchmark is shown in Table 1 and Table 2. From the results on VisDial v1.0 test shown in Table 1, we can observe that: (1) ICMU outperforms previous works on all metrics and obtains an R@1 of 53.50%, beating the previous method VD-BERT by 1.47%, which shows that ICMU selects the ground-truth answer more accurately. (2) Comparing ICMU and VD-BERT on NDCG, ICMU beats the pre-trained VD-BERT by 1.34%. This shows the superiority of our proposed method in understanding cross-modal information at a fine-grained level. Note that NDCG is invariant to the order of options with identical relevance and to the order of options outside of the top K, where K is the number of answers marked as correct by at least one annotator. (3) Our approach is not only more accurate (R@1, Mean) but also better than previous models at multi-modal semantic understanding (NDCG). From the results on VisDial v1.0 val shown in Table 2, we obtain the same observations. From the ablation study on VisDial v1.0 val shown in Table 3, we can observe that: (1) both cross-modal contrastive learning and enhancement by VQA bring satisfactory improvements; (2) cross-modal contrastive learning and enhancement by VQA are complementary and further improve the performance of the model. 3.2.3.
Case Study As shown in Figure 3, we provide two samples to analyze the crossmodal understanding of VD-BERT and ICMU. As shown in the left half of Figure 3, for Q4 \u201cDoes he have food in his mouth?\u201d, there are many reasonable answers to this question. VD-BERT ranks the opposite answer \u2018\u2018no\u2019\u2019 \ufb01rst, and many reasonable answers \u201cyes, it is, it is\u201d are ranked lower. As shown in the right half of Figure 3, for Q4 \u201care there people on bus?\u201d, ICMU outperforms the VD-BERT. This shows that ICMU learns better cross-modal understanding than VD-BERT due to CCL4 and enhancing by VQA. 4." + }, + { + "url": "http://arxiv.org/abs/2202.09061v4", + "title": "VLP: A Survey on Vision-Language Pre-training", + "abstract": "In the past few years, the emergence of pre-training models has brought\nuni-modal fields such as computer vision (CV) and natural language processing\n(NLP) to a new era. Substantial works have shown they are beneficial for\ndownstream uni-modal tasks and avoid training a new model from scratch. So can\nsuch pre-trained models be applied to multi-modal tasks? Researchers have\nexplored this problem and made significant progress. This paper surveys recent\nadvances and new frontiers in vision-language pre-training (VLP), including\nimage-text and video-text pre-training. To give readers a better overall grasp\nof VLP, we first review its recent advances from five aspects: feature\nextraction, model architecture, pre-training objectives, pre-training datasets,\nand downstream tasks. Then, we summarize the specific VLP models in detail.\nFinally, we discuss the new frontiers in VLP. To the best of our knowledge,\nthis is the first survey focused on VLP. We hope that this survey can shed\nlight on future research in the VLP field.", + "authors": "Feilong Chen, Duzhen Zhang, Minglun Han, Xiuyi Chen, Jing Shi, Shuang Xu, Bo Xu", + "published": "2022-02-18", + "updated": "2022-07-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "Introduction Making machines respond in ways similar to humans has been a relentless goal of AI researchers. To enable machines to perceive and think, researchers propose a series of related tasks, such as face recognition, reading comprehension, and human-machine dialogue, to train and evaluate the intelligence of machines in a particular aspect. Speci\ufb01cally, domain experts manually construct standard datasets and then train and evaluate relevant models on them. However, due to the limitations of related technologies, it is often necessary to train on a large amount of labelled data to obtain a better and more capable model. The recent emergence of pre-training models based on the Transformer structure [1] has alleviated this problem. They are \ufb01rst pre-trained via self-supervised learning that typically exploits auxiliary tasks (pre-training objectives) to mine supervision signals from large-scale unlabelled data to train the model, thereby learning universal representations. Then they can achieve surprising e\ufb00ectiveness by \ufb01ne-tuning with only a tiny amount of manually-labelled data on downstream tasks. Since the advent of BERT [2] in natural language processing (NLP), various pre-training models have sprung up in the uni-modal \ufb01eld, such as Vision Transformer (ViT) [3] in computer vision (CV) and Wave2Vec [4] in speech. Substantial works have shown they are bene\ufb01cial for downstream uni-modal tasks and avoid training a new model from scratch. 
Similar to the uni-modal \ufb01eld, there is also a problem of less highquality labelled data in the multi-modal \ufb01eld. The natural question is, can the above pre-training method be applied to multi-modal tasks? Researchers have explored this problem and made signi\ufb01cant progress. In this paper, we focus on mainstream vision-language pre-training (VLP), including image-text and video-text pre-training. VLP mainly learns the semantic correspondence between di\ufb00erent modalities by pre-training on large-scale data. For example, in image-text pre-training, we expect the model to associate \u201cdog\u201d in text with what \u201cdog\u201d looks like in images. In video-text pre-training, we expect the model to map objects/actions in the text to objects/actions in the video. To achieve this goal, the VLP objects and model architecture need to be cleverly designed to allow the model to mine the associations between di\ufb00erent modalities. To give readers a better global grasp of VLP, we \ufb01rst comprehensively review its recent advances and focus on \ufb01ve signi\ufb01cant aspects: \u2022 Feature extraction. This section includes the preprocessing and representation methods of image, video, and text in VLP models (see Section 2). \u2022 Model architecture. We introduce the architecture of the VLP models from two di\ufb00erent perspectives: Single-stream versus Dual-stream from multi-modal fusion perspective, and Encoder-only versus Encoder-decoder from the overall architectural design perspective (see Section 3). \fVLP: A Survey on Vision-Language Pre-training 3 \u2022 Pre-training objectives. Pre-training objectives are the core of VLP, mainly used to guide the model to learn vision-language associated information. We summarize typical and characteristic pre-training objectives divided into completion, matching, temporal, and particular types (see Section 4). \u2022 Pre-training datasets. Data is critical for VLP. We brie\ufb02y introduce mainstream corpora for VLP and their speci\ufb01c sizes (see Section 5). \u2022 Downstream tasks. Various tasks requires a cooperative knowledge of both vision and language. We discuss the basic details and goals of these tasks (see Section 6). Then we summarize the speci\ufb01c state-of-the-art (SOTA) VLP models in detail (see Section 7). Finally, We conclude the paper and have broad discussions on new frontiers in VLP (see Section 8). Although there are many surveys on pretrained language models [5, 6] and pretrained vision models [7], to the best of our knowledge, this is the \ufb01rst survey focused on VLP. We hope that our survey can help researchers better understand this \ufb01eld and inspire them to design better models. 2 Feature Extraction This section describes how VLP models preprocess and represent an image, video and text to obtain counterpart features. 2.1 Feature Extraction 2.1.1 Image Feature Extraction (1) OD-based Region Features (OD-RFs). Most previous work [8, 9, 10] on VLP utilizes pre-trained object detectors to extract visual features. The most commonly used object detection model is Faster R-CNN [11] with bottom-up attention [12]. It is designed to identify objects belonging to certain classes and localize them with bounding boxes. By using the Faster R-CNN, VLP models obtain the OD-based Region feature embedding V = [o1, o2, . . . , ok] of an image with k selected regions. Each region feature oi is a 2048-d Region-of-Interest (RoI) feature with its bounding box. 
The bounding box is de\ufb01ned by the coordinates of the bottom-left and top-right corners of the region. VLP models use bounding boxes to construct 5-d vectors, and the vector is embedded into a high-dimensional representation (2048-d) named visual geometry embedding. The OD-RFs are obtained by adding the OD-based Region feature embedding with its visual geometry embedding. Although ODFs have brought impressive performance, extracting region features can be time-consuming. To relieve this problem, the pre-trained object detectors are usually frozen during pre-training, which can limit the capacity of VLP models. \f4 VLP: A Survey on Vision-Language Pre-training (2) CNN-based Grid Features (CNN-GFs). VLP models [13, 14] extract visual features by utilizing convolutional neural networks (CNNs) to obtain the grid features. On the one hand, VLP models can train the CNNs end-to-end by using the grid features [15] directly. On the other hand, VLP models can also \ufb01rst discretize grid features using a learned vision dictionary, then feed them into the cross-modal module. (3) ViT-based Patch Features (ViT-PFs). Inspired by ViT [3, 16], VLP models reshape the image Ii \u2208RH\u00d7W \u00d7C into a sequence of \ufb02attened 2D patches Ip \u2208RN\u00d7(P 2\u00b7C), where (H, W) is the resolution of the original image, C is the number of channels, (P, P) is the resolution of each image patch, and N = HW/P 2 is the resulting number of patches, which also serves as the e\ufb00ective input sequence length for the Transformer. An input image Ii is encoded into a sequence of embeddings: {vcls, v1, ..., vN}, where vcls is the embedding of the [CLS] token. 2.1.2 Video Feature Extraction A video clip is denoted as M frames (images). VLP models [17, 18] extract the frame features by using the method mentioned above. The two most commonly used features are CNN-GFs and ViT-PFs. For CNN-GFs, VLP models \ufb01rst use ResNet [19] pre-trained on ImageNet [20] or SlowFast [21] and I3D [22] pre-trained on Kinetics [23] to extract 2D and 3D visual features for each video frame. These features are concatenated as visual features and fed through a fully-connected (FC) layer to be projected into the same lower-dimensional space as token embeddings. For ViT-PFs, a video clip Vi \u2208RM\u00d7H\u00d7W \u00d7C consisting of M frames of resolution H\u00d7W, where M = 1 for images. Following the protocol in ViT and Timesformer, the input video clip is divided into M \u00d7 N non-overlapping spatio-temporal patches of size P \u00d7 P, where N = HW/P 2. 2.1.3 Text Feature Extraction For the textual features, following pretrained language model such as BERT [2], RoBERTa [24], AlBERT [25], and XLNet [26], VLP models [9, 27, 28] \ufb01rst segment the input sentence into a sequence of subwords. And then, insert a start-of-sequence token and an end-of-sequence token at the beginning and the end of the sequence to generate the input text sequence. Text input representations are computed via summing the corresponding word embedding, text position embedding, and text type embedding. 2.2 Feature Representation To make full use of uni-modal pre-trained models, VLP models can send the visual or text features to a transformer encoder [1]. Speci\ufb01cally, VLP models utilize the standard transformer encoder with random initialization to generate the visual or textual representation. In addition, VLP models can utilize a pretrained visual transformer to encode the ViT-PFs, such as ViT and DeiT [29]. 
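As a concrete illustration of the ViT-based patch features (ViT-PFs) described above, the sketch below reshapes an H x W x C image into N = HW/P^2 flattened patches of dimension P^2*C, projects them linearly, and prepends a [CLS] embedding. It is a generic example under assumed dimensions (224 x 224 input, 16 x 16 patches, hidden size 768) rather than the code of any particular VLP model.

```python
import torch
import torch.nn as nn

# Generic sketch of ViT-based patch features (ViT-PFs): an H x W x C image is cut
# into N = HW / P^2 non-overlapping P x P patches, flattened to P^2*C, linearly
# projected, and prepended with a [CLS] embedding. All dims are illustrative.
class PatchEmbed(nn.Module):
    def __init__(self, patch: int = 16, channels: int = 3, hidden: int = 768):
        super().__init__()
        self.patch = patch
        self.proj = nn.Linear(patch * patch * channels, hidden)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, hidden))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        b, c, h, w = images.shape                       # (B, C, H, W)
        p = self.patch
        x = images.unfold(2, p, p).unfold(3, p, p)      # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)  # (B, N, p*p*C)
        x = self.proj(x)                                # (B, N, hidden)
        cls = self.cls_token.expand(b, -1, -1)
        return torch.cat([cls, x], dim=1)               # (B, N+1, hidden): {v_cls, v_1, ..., v_N}

patches = PatchEmbed()(torch.randn(2, 3, 224, 224))
print(patches.shape)  # torch.Size([2, 197, 768]) since N = (224/16)^2 = 196
```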
\fVLP: A Survey on Vision-Language Pre-training 5 Cross-Attn Self-Attn Feedforward Visual Features Cross-Attn Self-Attn Feedforward Textual Features (b) Dual-Stream\u00a0Architecture Self-Attn Feedforward Visual Features Textual Features (a) Single-Stream\u00a0Architecture Fig. 1 Illustration of two types of model architectures for VLP. VLP models can use a pre-trained textual transformer to encode the textual features, such as BERT. For simplicity, we name these transformer Xformer. 3 Model Architecture In this section, we introduce the architecture of the VLP models from two di\ufb00erent perspectives: (1) Single-stream versus Dual-stream from multi-modal fusion perspective, and (2) Encoder-only versus Encoder-decoder from the overall architectural design perspective. 3.1 Single-stream versus Dual-stream Single-stream Architecture. The single-stream architecture [9, 30, 31] refers to that the text and visual features are concatenated together, then fed into a single transformer block as shown in Firgue 1 (a). The single-stream structure utilizes merged attention to fuse multimodal inputs. The single-stream architecture is more parametere\ufb03cient, as the same set of parameters is used for both modalities. Dual-stream Architecture. The dual-stream architecture [32, 33] refers to that the text and visual features are not concatenated together but sent to two di\ufb00erent transformer blocks independently, as shown in Firgue 1 (b). These two transformer blocks do not share parameters. To achieve higher performance, cross-attention (as shown by the dotted line in Firgue 1 (b)) are used to enable cross-modal interaction. To achieve higher e\ufb03ciency, there can also be no cross-attention between the visual transformer and textual transformer blocks. 3.2 Encoder-only versus Encoder-decoder Many VLP models adopt the encoder-only architecture, where the cross-modal representations are directly fed into an output layer to generate the \ufb01nal \f6 VLP: A Survey on Vision-Language Pre-training outputs. In contrast, other VLP models advocate using a transformer encoderdecoder architecture, where the cross-modal representations are \ufb01rst fed into a decoder and then to an output layer. 4 Pre-training Objectives This section introduces how we pre-train VLP models by using di\ufb00erent pre-training objectives, which are crucial for learning the universal representation of vision-language. We summarize the pre-training objectives into four categories: completion, matching, temporal, and particular types. \u2022 Completion is to reconstruct the masked element by leverage the unmasked remainders to understand the modality. (see section 4.1, 4.2 and 4.3). \u2022 Matching is to unify the vision and language into a shared hidden space to generate universal vision-language representation (see Section 4.4, 4.5 and 4.6). \u2022 Temporal is to learn good representation by reorder the disrupted input sequence (see Section 4.7) \u2022 Particular types consists of other pre-training objects, such as visual question answering and visual captioning (see Section 4.8). Now we introduce the most used pre-training objectives. 4.1 Masked Language Modeling Masked language modeling (MLM), which was \ufb01rst proposed by Talylor [34] in the literature, is widely known because the BERT model adapted it as a novel pre-training task. 
To model language conditioned on vision, MLM in VLP models is similar to MLM in pre-training language models (PLMs) but predicts the masked textual tokens not only by the rest of the textual tokens but also by the visual tokens. Empirically, VLP models following BERT randomly mask each textual input token with probability 15% and replace the masked one by using a special token [MASK] 80% of the time, a random textual token 10% of the time and the original token 10% of the time to perform masking. The formal de\ufb01nition is as follows: LMLM = \u2212E(v,w)\u223cD log P(wm|w\\m, v), (1) where v denotes the vision, w denotes the textual tokens, wm denotes the masked textual tokens, w\\m denotes the remained textual tokens and D denotes the training dataset. 4.2 Pre\ufb01x Language Modeling Pre\ufb01x Language Modeling (Pre\ufb01xLM) [14] is uni\ufb01ed of MLM and language modeling (LM). To make the model simultaneously has good understanding and generation ability, Pre\ufb01xLM is proposed to facilitate the model with \fVLP: A Survey on Vision-Language Pre-training 7 solid generation capability that enables text-induced zero-shot generalization without \ufb01netuning. Pre\ufb01xLM di\ufb00ers from the standard LM such that it enables bi-directional attention on the pre\ufb01x sequence and only conducts autoregressive factorization on the remaining tokens. Pre\ufb01xLM under the sequence-to-sequence (seq2seq) framework not only enjoys the bidirectional contextualized representation as in MLM but also can perform text generation similar to LM. The formal de\ufb01nition is as follows: LPre\ufb01xLM = \u2212E(v,w)\u223cD log P(w\u2265Tp|w\u2264TP , v), (2) where TP denotes the length of the pre\ufb01x sequence. 4.3 Masked Vision Modeling To have good understanding on vision or generate images/videos given text, like MLM, masked vision modeling (MVM) [30] samples vision (image or video) regions or patches and usually masks their visual features with a probability of 15%. VLP models need to reconstruct the masked visual features given the remaining visual features and all the textual features. The masked visual features are set to zeros. Because visual features are high-dimensional and continuous, VLP models propose two variants for MVM. (1) Masked Features Regression learns to regress the model output of masked features to its original visual features. VLP models convert the model output of the masked features to a vector of the same dimension as the original visual features \ufb01rst and apply L2 regression between the original visual features and the vector. The formal de\ufb01nition is as follows: LMVM = E(v,w)\u223cDf(vm|v\\m, w), (3) f(vm|v\\m, w) = K X i=1 \u2225h(vi m) \u2212O(vi m))\u22252 2, (4) where h(vi m) denotes the predicted vision representation and O(vi m) denotes the original vision representation. (2) Masked Feature Classi\ufb01cation learns to predict the object semantic class for the masked features. VLP models \ufb01rst feed the output of the masked features into an FC layer to predict the scores of object class, which further goes through a softmax function to be transformed into a prediction normalized distribution. Note that there is no ground-truth label. There are two kinds of methods to train VLP models. One is that VLP models take the most likely object class from the object detection model as the hard label (w.p. 
0 or 1), assuming the detected object class is the ground-truth label for the masked features and apply cross-entropy loss to \f8 VLP: A Survey on Vision-Language Pre-training minimize the gap between the prediction and pseudo class. The other is that VLP models utilize soft label as supervision signal, which is the raw output from the detector (i.e., a distribution of object classes) and minimize the KL divergence between two distributions. The formal de\ufb01nition is as follows: LMVM = E(v,w)\u223cDf(vm|v\\m, w). (5) We use the object detection output from Faster R-CNN, and take the detected object category as the label of the masked region: f1(vm|v\\m, w) = K X i=1 CE(c(vi m) \u2212g1(vi m))), (6) where g1(vi m) the detected detected object category and K denotes the number of vision regions. We avoid this assumption by using soft label as supervision signal, which is the raw output from the detector: f2(vm|v\\m, w) = K X i=1 DKL(\u02c6 c(vi m) \u2212g2(vi m))). (7) where g1(vi m) the detected detected object category distribution. 4.4 Vision-Language Matching Vision-Language Matching (VLM) [35] is the most commonly used pre-training objective to align vision and language, which aims to project vision and language into the same space. In the single-stream VLP models, they use the representation of the special token [CLS] as the fused representation of both modalities. In the dual-stream VLP models, they concatenate the visual representation of the special visual token [CLSV ] and the textual representation of the special textual token [CLST ] as the fused representation of both modalities. VLP models feed the fused representation of both modalities to an FC layer and a sigmoid function to predict a score between 0 and 1, where 0 indicates the vision and language are mismatched, and 1 indicates the vision and language are matched. During training, VLP models sample positive or negative pairs from the dataset at each step. The negative pair is created by replacing the vision or text in a paired sample with randomly selected from other samples. 4.5 Vision-Language Contrastive Learning Vision-Language Contrastive Learning (VLC) [35] also aims to align vision and language. Di\ufb00erent VLM, VLC predicts the matched vision-language pairs from N \u00d7 N possible vision-language pairs given a batch of N vision-language \fVLP: A Survey on Vision-Language Pre-training 9 pairs. Note that there are N 2\u2212N negative vision-language pairs within a training batch. VLP models use the visual representation of the special visual token [CLSV ] and the textual representation of the special textual token [CLST ] to denote the aggregated representation of the vision and language, respectively. VLP models compute the softmax-normalized vision (image or video)-to-text similarity and text-to-vision similarity and leverage cross-entropy losses over vision-to-text and text-to-vision similarities to update themselves. The similarity is often implemented by dot products. The formal de\ufb01nitions are as follows: pv2t m (I) = exp(s(I, Tm)/\u03c4) PM m=1 exp(s(I, Tm)/\u03c4) , (8) pt2v m (T) = exp(s(T, Im)/\u03c4) PM m=1 exp(s(T, Im)/\u03c4) , (9) LVLC = 1 2E(I,T )\u223cD[CE(yv2t, pv2t(I)) + CE(yt2v, pt2v(T)], (10) where I. T denotes the images and texts, s(cot) denotes the similarity function and \u03c4 denotes temperature coe\ufb03cient. yv2t and yt2v denote the labels of vision2text retrieval and text2vision retrieval. 
4.6 Word-Region Alignment Word-Region Alignment (WRA) [30] is an unsupervised pre-training objective to align vision regions (vision patches) and words. VLP models utilize Optimal Transport to learn the alignment between vision and language. Empirically, VLP models use the IPOT algorithm to approximate the OT distance since the exact minimization is computationally intractable. After solving minimization, the OT distance serves as the WRA loss to train VLP models. The formal de\ufb01nition is as follows: LWRA = min T\u2208II(a,b) T X i=1 K X j=1 Tij \u00b7 c(wi, vj), (11) where c(wi, vj) is the cost function evaluating the distance between wi and vj, T \u2208II(a, b) = {T \u2208RT \u00d7K|T1m = a, T\u22a41n = b}, a and b Dirac function coe\ufb03cients centered on wi and vj. 4.7 Frame Order Modeling To better model the timing of the video, VLP models randomly disrupt the order of some input frames and then predict the actual position of each frame. Frame Order Modeling (FOM) [36] is modeled as a classi\ufb01cation task in practice. \f10 VLP: A Survey on Vision-Language Pre-training Table 1 Details of some popular pre-training datasets for VLP. Names of some datasets are abbreviated for the convenience of subsequent description. FLKR represents Flickr30k, and HT100M represents HowTo100M. Dataset # Images # Image-text Pairs Duration (hrs) # Clips # Videos SBU [44] 875K 875K FLKR [45] 29K 145K COCO [46] 113K 567K VG [47] 108K 5.4M VGQA [47] 108K 1.8M VQA [48] 83K 444K Matterport3D [49] 104K 104K FashionGen [50] 260K 260K CC3M [51] 3M 3M GQA [52] 82K 1M LAIT [53] 10M 10M CC12M [54] 12M 12M ALIGN [55] 1.8B 1.8B Kinetics400 [23] 817 306K 306K TVQA [38] 461 22K 925 HT100M [56] 134K 136M 1.2M WebVid2M [57] 13K 2.5M 2.5M 4.8 Particular Pre-training Objects To better adapt to downstream tasks, VLP models sometimes use the training objects of some downstream tasks, such as visual question answering (VQA) [37, 38, 12], and visual captioning (VC) [39, 40], as pre-training objectives. As for VQA, VLP models take the fused representation mentioned above, apply an FC layer, and use the transformed representation to predict the classi\ufb01cation over prede\ufb01ned answer candidates. In addition to VLP models tackling the task as classi\ufb01cation over prede\ufb01ned answer candidates, VLP models also can directly generate answers in their original text format. As for VC, to reconstruct the input sentence to endow VLP models with the generation capability, VLP models employ an auto-regressive decoder to generate a corresponding textual description of the image or video. Note that due to space limitations, we only introduce some popular pretraining objectives. We omit some speci\ufb01c pre-training objectives such as grounding referring expression (GRE), image-conditioned denoising autoencoding (IDA) [41], text-conditioned image feature generation (TIFG) [41], object detection (OD) [42] and aligned Kaleido patch modeling (AKPM) [43]. Moreover, we put masked action prediction into the category of MVM. 5 Pre-training Datasets Pre-training datasets are signi\ufb01cant for the success of cross-modal representation learning. The quality and the size of pre-training datasets sometimes overwhelm the importance of training strategies and algorithms. Hence, a detailed description of several widely used pre-training datasets is necessary. Table 1 shows statistics of some popular pre-training datasets for VLP. 
\fVLP: A Survey on Vision-Language Pre-training 11 Since VLP includes image-language pre-training and video-language pretraining, we roughly divide pre-training datasets into two main categories. In later sections, we provide more details about representative pre-training datasets for each category. It is worth noting that no matter which category pre-training datasets belong, they di\ufb00er in size and sources across di\ufb00erent researches. In most works, the pre-training datasets for VLP are constructed by combining public datasets across di\ufb00erent cross-modal tasks or scenarios. However, other works, such as VideoBERT [58], ImageBERT [53], ALIGN [55], and CLIP [16], conduct pre-training with self-constructed datasets. These selfconstructed datasets are usually larger than most public datasets but might contain more noise. 5.1 Datasets for Image-language Pre-training For image-language pre-training, the most widely used data form is image-text pairs. Most image-language pre-training datasets consist of a large number of image-caption pairs. SBU [44] and Flickr30k [45] are collected from Flickr and labelled with human-generated annotations. COCO [46] consists of images with \ufb01ve human-generated captions, \ufb01ltered with special procedures to guarantee the quality of images and annotations. CC3M [51] and CC12M [54] are constructed by crawling images and their alt-text HTML attributes from the Internet and annotating these pictures with \ufb01ltered descriptions. Due to looser \ufb01ltering strategies, CC12M contains more noise than CC3M. Another data source is the visual question answering task. Many image-language datasets are organized as structured data in the context of visual question answering. The representative large-scale dataset is Visual Genome (VG) [47]. VG contains rich information in its structured data form. Its region-level descriptions and question-answer pairs are frequently used in the study of image-language pre-training. Besides VG, VQA [48] and GQA [52] are also popular datasets of visual question-answer pairs. Compared with VGA, GQA further alleviates the systematic biases. Datasets mentioned above are suitable for most common scenarios. There are also some datasets designed for special cases. Matterport3D [49] consists of RGB-D images of building-scale scenes, annotated with labels for classi\ufb01cation and segmentation. Fashion-Gen [50] contains fashion images paired with item descriptions generated by professional stylists. 5.2 Datasets for Video-language Pre-training Compared to image-language pre-training datasets, video-language pretraining datasets are usually more time-consuming and more di\ufb03cult to collect and process. These inconveniences restrict the development of the community and the scale of pre-training. Datasets for video-language pre-training cover di\ufb00erent scenarios and sources. Most of them, such as Kinetics-400 [23], HowTo100M [56] and WebVid-2M [57], are collected from the Internet and processed with di\ufb00erent procedures. 
These kinds of videos are usually accompanied \f12 VLP: A Survey on Vision-Language Pre-training Downstream Tasks Classification Regression Retrieval Generation Other tasks Visual Question Answering\uff08VQA\uff09 Visual Reasoning and Compositional Question Answering (GQA) Video-Language Inference (VLI) Natural Language for Visual Reasoning (NLVR) Visual Entailment (VE) Visual Commonsense Reasoning (VCR) Grounding Referring Expressions (GRE) Category Recognition (CR) Multi-modal Sentiment Analysis (MSA) Vision-Language Retrieval (VLR) Visual Captioning (VC) Novel Object Captioning at Scale (NoCaps) Visual Dialogue (VD) Multi-modal Machine Translation (MMT) Vision-Language Navigation (VLN) Optical Character Recognition (OCR) Fig. 2 Illustration of downstream tasks in VLP. by subtitles, thus providing weak or strong alignments between video clips and text. Although those subtitles sometimes might be too weak to align modalities, they still provide useful information, especially for the pre-training on large-scale datasets. Another source of video-text pairs is television programs. TVQA [38] is a video-language pre-training dataset generated from television shows. These television shows are collected and converted to a dataset comprised of many dialogues for understanding the videos and recognizing semantic concepts in videos. Considering the diversity of the sources and formation of these datasets, researchers apply di\ufb00erent annotation and processing procedures. For example, Kinetics-400 [23] consists of many action-related videos annotated with action classes. For other datasets [38, 56, 57], the accompanying captions/subtitles of video clips or the class of concepts in videos are usually processed and used as annotations. 6 Downstream Tasks As shown in Figure 2, a diverse range of tasks requires a cooperative knowledge of vision and language. In this section, we introduce the fundamental details and goals of these tasks. Visual Question Answering (VQA) [37, 59, 60, 61] . Giving a visual input (image or video), VQA represents the task of correctly providing an answer to a question. It is usually regarded as a classi\ufb01cation task where the model predicts the most suitable answer from a pool of choices. To obtain accurate performance, it is important to infer logical entailments from images (or videos) based on the question posed. Visual Reasoning and Compositional Question Answering (GQA) [52, 62, 63] . GQA is an upgraded version of VQA and aims to advance research on the visual reasoning of natural scenes. The images, questions, and answers in its dataset have matching semantic representations. The advantage of this structured representation is that the distribution of answers can be more uniform, \fVLP: A Survey on Vision-Language Pre-training 13 and we can analyze the model\u2019s performance from more dimensions. Compared with the single evaluation metric (e.g., accuracy) of traditional VQA, GQA includes multi-dimensional evaluation metrics: consistency, validity, plausibility, distribution, and grounding. Video-Language Inference (VLI) [36, 64, 65] . Given a video clip with aligned subtitles as a premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. Visual Entailment (VE) [66, 67, 68] . In the VE task, image is the premise, and text is the hypothesis. Its goal is to predict whether the text is \u201cEntailment Image\u201d. 
There are three labels, Entailment, Neutral, and Contradiction. Visual Commonsense Reasoning (VCR) [69, 70, 71] . VCR is the task of inferring commonsense information and cognitive understanding by a machine when it sees an image. It exists in the form of multiple-choice questions. For a question posed about the image, there are several alternative answers. The model must choose an answer from several answers and then select the reason for choosing this answer from several alternative reasons. Thus, VCR can be divided into two tasks, including question answering (selecting the best answer from a pool of expected answers to the question) and answer justi\ufb01cation (providing the rationale behind the given answer). You can follow VCR\u2019s leaderboard1 to track VLP\u2019s latest ideas. Natural Language for Visual Reasoning (NLVR) [72, 73] . NLVR is a subtask of the broader VCR category, limited to the classi\ufb01cation paradigm. The input of the NLVR task is two images and a text description, and the output is whether the corresponding relationship between the images and the text description is consistent (two labels: true or false). It is typically di\ufb00erent from VQA due to longer text sequences covering various linguistic phenomena. Grounding Referring Expressions (GRE) [74, 75, 76] . The GRE task aims to localize certain regions (e.g., objects and persons) in an image given a referring expression, where the main challenge is to comprehend and align various types of information from visual and textual domain, such as visual attributes, locations and interactions with surrounding regions. Speci\ufb01cally, the model can output a score for each region, and the region with the highest score is used as the prediction region. 1https://visualcommonsense.com/leaderboard/ \f14 VLP: A Survey on Vision-Language Pre-training Category Recognition (CR) [43]. CR refers to identifying the category and sub-category of a product, such as {HOODIES, SWEATERS}, {TROUSERS, PANTS}, which are vital attributes for describing a product, and are useful in lots of real-life applications. Multi-modal Sentiment Analysis. (MSA) [77, 78, 79, 80]. MSA is aimed to detect sentiments in videos by leveraging multi-modal signals (e.g., vision, language, etc.). It is to predict the a\ufb00ective orientation of an utterance as a continuous intensity variable. Vision-Language Retrieval (VLR) [81, 82, 83, 84]. VLR involves understanding both vision (image or video) and language domains with appropriate matching strategies. It includes two subtasks, visionto-text, and text-to-vision retrieval, where vision-to-text retrieval is to fetch the top-most relevant text description from a larger pool of descriptions as per the vision and vice versa. VLR is widely used in domain-speci\ufb01c searches, multiple search engines, and context-based vision retrieval design systems. Visual Captioning (VC) [85, 40, 86]. VC aims to generate semantically and syntactically appropriate text descriptions for a given visual (image or video) input. Generating relevant and explanatory captions for a visual input requires not only a rich knowledge of language, but also a consistent understanding of scenes, entities, and their interactions appreare in the visual input. Novel Object Captioning at Scale (NoCaps) [87, 88] . NoCaps extends the VC task to test a model\u2019s capability of describing novel objects from the Open Images dataset, which are unseen in the training corpus. Visual Dialogue (VD) [89, 90, 91, 92]. 
The speci\ufb01c task in VD is the following: given an image, a dialog history consisting of a sequence of question-answer pairs, and a natural language follow-up question, the goal for the task is to response the question in free-form natural language (e.g., generate an answer). VD is the visual analogue of the Turing Test. Multi-modal Machine Translation (MMT) [93, 94, 95]. MMT is a two-fold task of translation and text generation, translating text from one language to another with additional information from other modalities, e.g., image. The additional visual features aim to remove ambiguities that may arise in straightforward text machine translation and help retain the context of the text descriptions. The multi-modal representation space facilitates robust latent representations to complement the inherent semantic information preserved by visual and linguistic embeddings, respectively. \fVLP: A Survey on Vision-Language Pre-training 15 Vision-Language Navigation (VLN) [96, 97, 98]. VLN is a grounding language task of an agent\u2019s locomotion as it sees and explores the real-world dynamics based on linguistic instructions. Like generation tasks, it is typically seen as the task of sequence-to-sequence transcoding. However, VLN has unique characteristics. It usually has longer sequences, and the dynamics of the problem are quite di\ufb00erent since it is a real-time evolving task. Its main challenge lies in understanding the environment and making con\ufb01dent decisions during exploring. Optical Character Recognition (OCR) [99, 100]. OCR generally refers to extract handwritten or printed text from images (such as street signs and photos of products) as well as documents (articles, bills, invoices, \ufb01nancial reports, etc.), which includes two parts: text detection (similar to regression) and text recognition (similar to classi\ufb01cation). In addition, there are some iamge-related downstream tasks for evaluating the image-text pre-training models, including semantic segmentation [101, 102], and object detection [103, 104]. There are also some video-related downstream tasks for evaluating the video-text pre-training models, including action classi\ufb01cation (AC) [58], action segmentation (AS) [105], and action step Localization (ASL) [106]. Recently, Changpinyo et.al [54] scale up pre-training data for VLP tasks and benchmark its e\ufb00ectiveness against Conceptual Captions 3M on multiple downstream tasks with an emphasis on long-tail visual recognition. Rethmeier et.al [107] study the performance of pretrained model on a challenging long-tail task and analyze the resulting long-tail learning capabilities under zero-shot, few-shot and full supervision conditions to explore the performance in\ufb02uence of model size and self-supervision signal amount. 7 SOTA VLP models Image-Text VLP models. VisualBERT [9], known as the \ufb01rst image-text pre-training model, uses the visual features extracted by Faster R-CNN, concatenates the visual features and textual embeddings, and then fed the concatenated features to a single transformer initialed by BERT. Many VLP models [13, 110, 30, 53] follow the similar feature extraction and architecture as VisualBERT while adjusting the pre-training objectives and pre-training datasets. Recently, VDBERT [134] models the common implicit vision-language alignment in vision and language by pretraining on large-scale image-text pairs via transfer learning [135, 136]. 
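As a rough illustration of the single-stream design used by the VisualBERT-style models described above, the sketch below concatenates detected-region features with token embeddings and feeds them into one shared transformer encoder. This is not the code of any cited model; the dimensions, segment embeddings, and module choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SingleStreamFusion(nn.Module):
    """Minimal VisualBERT-style single-stream fusion (illustrative only)."""
    def __init__(self, vocab_size=30522, region_dim=2048, d_model=768,
                 n_heads=12, n_layers=6):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)   # word embeddings
        self.vis_proj = nn.Linear(region_dim, d_model)     # project detector region features
        self.type_emb = nn.Embedding(2, d_model)           # 0 = text segment, 1 = vision segment
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # one shared transformer

    def forward(self, token_ids, region_feats):
        # token_ids: (B, L) word ids; region_feats: (B, K, region_dim) from an object detector
        txt = self.tok_emb(token_ids) + self.type_emb(torch.zeros_like(token_ids))
        vis = self.vis_proj(region_feats)
        vis = vis + self.type_emb(torch.ones(vis.shape[:2], dtype=torch.long,
                                             device=vis.device))
        fused = torch.cat([txt, vis], dim=1)   # single stream: concatenate, then encode jointly
        return self.encoder(fused)             # (B, L + K, d_model) joint representation

# usage with dummy inputs standing in for real text/regions:
# out = SingleStreamFusion()(torch.randint(0, 30522, (2, 16)), torch.randn(2, 36, 2048))
```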
VLMO [129] leverages patch embeddings for image and word embeddings for text and feeds the concatenated embeddings into a single transformer with modality experts and achieves an impressive performance. METER [33] explores how to use a uni-modal pre-trained model and proposes a dualstream architecture model to handle the multimodel fusion, which achieves the \f16 VLP: A Survey on Vision-Language Pre-training Table 2 The summary of mainstream image-text VLP models. The number of downstream tasks determines whether the model is generic or domain-speci\ufb01c VLP. FE: Feature Extraction. PT: Pre-training. Emb: Embedding. SC in Datatsets column: self-constructed or self-collected. MTL in Datatsets column: all datasets for multi-task learning in corresponding work. See other abbreviations in Datatsets column in Table 1. Model Domain Vision FE Language FE Multimodal Fusion Decoder PT Objectives PT Datasets Downstream Tasks VisualBERT [9] Image OD-RFs Emb Single-stream No MLM+VLM COCO GRE+NLVR+VCR+VQA ViLBERT [8] Image OD-RFs Emb Dual-stream No MLM+VLM+MVM COCO+VG VLR+NLVR+VE+VQA LXMERT [108] Image OD-RFs+Xformer Xformer Dual-stream No MLM+VLM+MVM+VQA COCO+VG+VQA+GQA+VGQA GQA+NLVR+VQA B2T2 [109] Image CNN-GFs Emb Single-stream No MLM+VLM CC3M VCR Unicoder-VL [13] Image OD-RFs Emb Single-stream No MLM+VLM+MVM CC3M+SBU VLR+VCR VL-BERT [110] Image OD-RFs Emb Single-stream No MLM+MVM CC3M GRE+VCR+VQA VLP [111] Image OD-RFs Emb Dual-stream Yes MLM+LM CC3M VC+VQA UNITER [30] Image OD-RFs Emb Single-stream No MLM+VLM+MVM+WRA COCO+VG+SBU+CC3M GRE+VLR+NLVR+VCR+VE+VQA 12-IN-1 [112] Image OD-RFs Emb Single-stream No MLM+MVM MTL GQA+GRE+VC+NLVR+VE+VQA VisDial-BERT [113] Image OD-RFs Emb Dual-stream No MLM+VLM+MVM CC3M+VQA VD ImageBERT [53] Image OD-RFs Emb Single-stream No MLM+VLM+MVM LAIT+CC3M+SBU VLR PREVALENT [114] Image CNN-GFs+Xformer Xformer Single-stream No MLM+MVM Matterport3D VLN XGPT [41] Image OD-RFs Emb Dual-stream Yes MLM+IDA+VC+TIFG CC3M VC+VLR InterBER [115] Image OD-RFs Emb Single-stream No MLM+VLM+MVM COCO+CC3M+SBU VLR+VCR PixelBERT [116] Image CNN-GFs Emb Single-stream No MLM+VLM COCO+VG VLR+NLVR+VQA OSCAR [10] Image OD-RFs Emb Single-stream No MLM+VLM COCO+SBU+CC3M+FLKR+VQA+GQA+VGQA GQA+VC+VLR+NLVR+NoCaps+VQA VLN-BERT [117] Image OD-RFs Emb Dual-stream No MLM+VLM+MVM CC3M VLN FashionBERT [118] Image Xformer Emb Single-stream No MLM+VLM+MVM FashionGen VLR VILLA [119] Image OD-RFs+Xformer Xformer Single-stream No MLM+VLM+MVM COCO+VG+CC3M+SBU GRE+VLR+NLVR+VCR+VE+VQA ERNIE-ViL [120] Image OD-RFs Emb Single-stream No MLM+MVM CC3M+SBU GRE+VLR+VCR+VQA RVL-BERT [121] Image OD-RFs Emb Single-stream No MLM+VLM+MVM CC3M VC+VQA VinVL [27] Image OD-RFs Emb Single-stream No MLM+VLM COCO+CC3M+SBU+FLKR+VQA+GQA+VGQA GQA+VC+VLR+NLVR+NoCaps+VQA VL-T5 [122] Image OD-RFs Emb Single-stream Yes MLM+VLM+VQA+GRE+VC COCO+VG+VQA+GQA+VGQA GQA+GRE+VC+MMT+NLVR+VCR+VQA ViLT [123] Image ViT-PFs Emb Single-stream No MLM+VLM COCO+VG+SBU+CC3M VLR+NLVR+VQA ALIGN [55] Image CNN-GFs Xformer Dual-stream No VLC ALIGN VLR Kaleido-BERT [43] Image CNN-GFs Emb Single-stream No MLM+VLM+AKPM FashionGen CR+VC+VLR MDETR [42] Image Xformer Xformer Single-stream Yes OD+MLM+VLC COCO+VG+FLKR+GQA GQA+VQA SOHO [124] Image CNN-GFs Emb Single-stream No MLM+VLM+MVM COCO+VG VLR+NLVR+VE+VQA E2E-VLP [125] Image CNN-GFs Emb Single-stream Yes OD+MLM+VLM COCO+VG VC+VLR+NLVR+VQA Visual Parsing [126] Image Xformer Emb Single-stream No MLM+VLM+MVM COCO+VG VLR+VCR+VE+VQA CLIP-ViL [127] Image CNN-GFs Emb Single-stream Yes MLM+VLM+VQA COCO+VG+VQA+GQA+VGQA 
VE+VLN+VQA ALBEF [35] Image Xformer Xformer Dual-stream No MLM+VLM+VLC COCO+VG+CC3M+SBU VLR+NLVR+VQA SimVLM [14] Image CNN-GFs Emb Single-stream Yes Pre\ufb01xLM ALIGN VC+NLVR+VE+VQA MURAL [128] Image CNN-GFs Xformer Dual-stream No VLC CC12M+ALIGN VC+VLR VLMO [129] Image ViT-PFs Emb Single-stream No MLM+VLC+VLM COCO+VG+CC3M+SBU VQA+NLVR+VLR METER [33] Image Xformer Xformer Dual-stream No MLM+VLM COCO+VG+CC3M+SBU VLR+NLVR+VE+VQA X-VLM [28] Image Xformer Xformer Single-stream No MLM+VLM+VG COCO+VG+CC3M+SBU VLR+NLVR+VE+VQA TCL [130] Image Xformer Xformer Single-stream No MLM+VLM+TCL COCO+VG+CC3M+SBU VLR+NLVR+VE+VQA \fVLP: A Survey on Vision-Language Pre-training 17 Table 3 The summary of mainstream video-text VLP models. The number of downstream tasks determines whether the model is generic or domain-speci\ufb01c VLP. FE: Feature Extraction. PT: Pre-training. Emb: Embedding. SC in Datatsets column: self-constructed or self-collected. MTL in Datatsets column: all datasets for multi-task learning in corresponding work. See other abbreviations in Datatsets column in Table 1. Model Domain Vision FE Language FE Multimodal Fusion Decoder PT Objectives PT Datasets Downstream Tasks VideoBERT [58] Video CNN-GFs Emb Single-stream No MLM+VLM+MVM SC AC+VC CBT [105] Video CNN-GFs+Xformer Xformer Single-stream No VLC Kinetics AC+AS+VC UniVL [106] Video CNN-GFs Xformer Dual-stream Yes MLM+VLM+VC HT100M AS+ASL+MSA+VC+VLR HERO [36] Video CNN-GFs+Xformer Xformer Single-stream No MLM+VLM+MVM+FOM HT100M+TV VC+VLI+VQA+VLR MMFT-BERT [131] Video OD-RFs+Xformer Xformer Single-stream No VQA TV VQA ActBERT [132] Video OD-RFs+CNN Emb Single-stream No MLM+VLM+MVM HT100M AS+ASL+VC+VQA+VLR CLIP [16] Image / Video CNN/Xformer Xformer Dual-stream No VLC SC OCR +AC etc. Frozen [57] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR Region-Learner [133] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR CLIP4Clip [17] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR CLIP2Video [18] Video ViT-PFs Emb Dual-Stream No VLC WebVid2M+CC3M VLR \f18 VLP: A Survey on Vision-Language Pre-training SOTA performance on many downstream tasks. The summary of mainstream image-text VLP models is shown in Table 2. Video-Text VLP models. VideoBERT [58], known as the \ufb01rst video-text pre-training model, extends the BERT model to process videos and texts simultaneously. VideoBERT uses the pre-trained ConvNet and S3D [137] to extract video features and concatenate them with textual word embeddings to feed into a transformer initialed with BERT. ConvNet and S3D are frozen when training the VideoBERT, which indicates the approach is not end-to-end. Recently, inspired by ViT, CLIP4Clip [17] and CLIP2Video [18] \ufb01rst process video clips into frames and get patch embeddings according to the method of ViT processing images for each frame. CLIP4clip and CLIP2Video optimize themselves in an end-toend manner and achieve SOTA performance. The summary of mainstream video-text VLP models is shown in Table 3. 
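To make the dual-stream contrastive (VLC-style) training used by CLIP4Clip/CLIP2Video-type models more concrete, here is a minimal sketch of per-frame encoding followed by temporal mean pooling and a symmetric contrastive loss. The encoders are stubbed out with placeholder features, and the pooling and temperature choices are assumptions rather than the exact recipes of those papers.

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(frame_emb, text_emb, temperature=0.07):
    """frame_emb: (B, T, D) per-frame embeddings from a ViT-style image encoder.
       text_emb:  (B, D)   sentence embeddings from a text encoder.
       Mean-pools frames into a video embedding and applies a symmetric
       InfoNCE loss over the batch (illustrative only)."""
    video_emb = F.normalize(frame_emb.mean(dim=1), dim=-1)   # temporal mean pooling
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_v2t = F.cross_entropy(logits, targets)              # video -> text retrieval
    loss_t2v = F.cross_entropy(logits.t(), targets)          # text -> video retrieval
    return 0.5 * (loss_v2t + loss_t2v)

# usage with random features standing in for encoder outputs:
# loss = video_text_contrastive_loss(torch.randn(8, 12, 512), torch.randn(8, 512))
```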
8" + }, + { + "url": "http://arxiv.org/abs/2109.08478v1", + "title": "Multimodal Incremental Transformer with Visual Grounding for Visual Dialogue Generation", + "abstract": "Visual dialogue is a challenging task since it needs to answer a series of\ncoherent questions on the basis of understanding the visual environment.\nPrevious studies focus on the implicit exploration of multimodal co-reference\nby implicitly attending to spatial image features or object-level image\nfeatures but neglect the importance of locating the objects explicitly in the\nvisual content, which is associated with entities in the textual content.\nTherefore, in this paper we propose a {\\bf M}ultimodal {\\bf I}ncremental {\\bf\nT}ransformer with {\\bf V}isual {\\bf G}rounding, named MITVG, which consists of\ntwo key parts: visual grounding and multimodal incremental transformer. Visual\ngrounding aims to explicitly locate related objects in the image guided by\ntextual entities, which helps the model exclude the visual content that does\nnot need attention. On the basis of visual grounding, the multimodal\nincremental transformer encodes the multi-turn dialogue history combined with\nvisual scene step by step according to the order of the dialogue and then\ngenerates a contextually and visually coherent response. Experimental results\non the VisDial v0.9 and v1.0 datasets demonstrate the superiority of the\nproposed model, which achieves comparable performance.", + "authors": "Feilong Chen, Fandong Meng, Xiuyi Chen, Peng Li, Jie Zhou", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CV", + "cs.MM" + ], + "main_content": "Introduction Recently, there is increasing interest in visionlanguage tasks, such as image caption (Xu et al., 2015; Anderson et al., 2016, 2018; Cornia et al., 2020) and visual question answering (Ren et al., 2015a; Gao et al., 2015; Lu et al., 2016; Anderson et al., 2018). In the real world, our conversations (Chen et al., 2020b, 2019) usually have multiple turns. As an extension of conventional single-turn visual question answering, Das et al. (2017) introduce a multi-turn visual question answering task named visual dialogue, which aims to Q1: how many people ? Q2: is anyone holding a frisbee ? Q3: is the coach on the right ? Q4: are they wearing matching uniforms ? Caption: there is a frisbee team with their coach taking a team photo A1: 7 people A2: yes A3: yes, on the far right A4: all except the coach Figure 1: An example of visual dialogue. The color in text background corresponds to the same color box in the image, which indicates the same entity. Our model \ufb01rstly associates textual entities with objects explicitly and then gives contextually and visually coherent answers to contextual questions. explore the ability of an AI agent to hold a meaningful multi-turn dialogue with humans in natural language about visual content. Visual dialogue (Agarwal et al., 2020; Wang et al., 2020; Qi et al., 2020; Murahari et al., 2020) requires agents to give a response on the basis of understanding both visual and textual content. One of the key challenges in visual dialogue is how to solve multimodal co-reference (Das et al., 2017; Kottur et al., 2018). Therefore, some fusion-based models (Das et al., 2017) are proposed to fuse spatial image features and textual features in order to obtain a joint representation. 
Then attention-based models (Lu et al., 2017; Wu et al., 2018; Kottur et al., 2018) were proposed to dynamically attend to spatial image features in order to find related visual content. Furthermore, models based on object-level image features (Niu et al., 2019; Gan et al., 2019; Chen et al., 2020a; Jiang et al., 2020a; Nguyen et al., 2020; Jiang et al., 2020b) were proposed to effectively leverage the visual content for multimodal co-reference. However, as implicit explorations of multimodal co-reference, these methods implicitly attend to spatial or object-level image features; the attention is trained together with the whole model and is inevitably distracted by unnecessary visual content. Intuitively, an explicit mapping between objects and textual entities can reduce attention noise. As shown in Figure 1, the related objects can help the agent understand the entities (e.g., Q1: "people", Q2: "frisbee", Q3: "coach") and generate correct answers. When it answers the question Q4 "are they wearing matching uniforms ?", the agent has already comprehended "people" and "coach" from the previous conversation. On this basis, it can associate the entity "uniforms" with the corresponding object in the image and generate the answer "all except the coach". To this end, we need to 1) explicitly locate related objects guided by textual entities to exclude undesired visual content, and 2) incrementally model the multi-turn structure of the dialogue to develop a unified representation combining multi-turn utterances with the corresponding related objects. However, previous work overlooks these two important aspects. In this paper, we thus propose a novel and effective Multimodal Incremental Transformer with Visual Grounding, named MITVG, which contains two key parts: visual grounding and a multimodal incremental transformer. Visual grounding aims to establish a specific mapping between objects and textual entities by explicitly locating related objects in the image with the textual entities. By doing so, our model can exclude undesired visual content and reduce attention noise. On the basis of visual grounding, the multimodal incremental transformer is used to model the multi-turn dialogue history combined with the specific visual content to generate visually and contextually coherent responses. As an encoder-decoder framework, MITVG contains a Multimodal Incremental Transformer Encoder (MITE) and a Gated Cross-Attention Decoder (GCAD). We test the effectiveness of our proposed model on large-scale datasets: VisDial v0.9 and v1.0 (Das et al., 2017). Both automatic and manual evaluations show that our model substantially outperforms competitive baselines and achieves new state-of-the-art results on a substantial number of metrics. Our main contributions are as follows: • To the best of our knowledge, we are the first to leverage visual grounding to explicitly locate related objects in the image guided by textual entities for visual dialogue. • We propose a novel multimodal incremental transformer to encode the multi-turn dialogue history step by step combined with the visual content and then generate a contextually and visually coherent response. • We achieve comparable performance on the VisDial v0.9 and v1.0 datasets.
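The overall pipeline described above can be summarized with the following high-level sketch. It is only a paraphrase of the stated design (per-round visual grounding, incremental history encoding, gated decoding); the module interfaces and names are hypothetical, not the authors' implementation.

```python
import torch.nn as nn

class MITVGSketch(nn.Module):
    """High-level flow of the described pipeline (a paraphrase, not the authors' code):
    per-round visual grounding -> incremental history encoding -> gated decoding."""
    def __init__(self, vg_model, mite, decoder):
        super().__init__()
        self.vg_model = vg_model    # grounds a question onto image regions
        self.mite = mite            # multimodal incremental transformer encoder
        self.decoder = decoder      # gated cross-attention decoder

    def forward(self, image, caption_feats, history, question):
        # history: list of (question_i, utterance_feats_i) for rounds 1..t-1
        context = caption_feats                       # c_0: the caption representation
        for q_i, u_i in history:
            v_gi = self.vg_model(q_i, image)          # grounded regions for round i
            context = self.mite(v_gi, u_i, context)   # incremental update of the context state
        v_gt = self.vg_model(question, image)         # ground the current question
        context = self.mite(v_gt, question, context)  # final context state
        return self.decoder(context, v_gt)            # generate the answer tokens
```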
2 Approach 2.1 Overview In this section, we formally describe the visual dialogue task and then proceed to our proposed Multimodal Incremental Transformer with Visual Grounding (MITVG). Following Das et al. (2017), a visual dialogue agent is given three inputs, i.e., an image $I$, a dialogue history (the caption and question-answer pairs) till round $t-1$: $H = (\underbrace{Cap}_{H_0}, \underbrace{(Q_1, A_1)}_{H_1}, \cdots, \underbrace{(Q_{t-1}, A_{t-1})}_{H_{t-1}})$, and the current question $Q_t$ at round $t$, where $Cap$ is the caption describing the image, taken as $H_0$, and $H_1, \ldots, H_{t-1}$ are concatenations of question-answer pairs. The goal of the visual dialogue agent is to generate a response (or answer) $A_t$ to the question $Q_t$. $Cap$, $Q_*$ and $A_*$ are sentences. Figure 2 shows the framework of MITVG, which aims to explicitly model the multi-turn dialogue history step by step on the basis of explicitly modeling the relationship between the two modalities. MITVG first locates related objects in the image explicitly, guided by the textual entities, via visual grounding; it then encodes the multi-turn dialogue history in the order of the dialogue utterances based on visual grounding via the Multimodal Incremental Transformer Encoder (MITE), and finally utilizes the outputs of both the encoder and visual grounding to generate the response word by word via the Gated Cross-Attention Decoder (GCAD). [Figure 2: The framework of Multimodal Incremental Transformer with Visual Grounding (MITVG). "VG Model" indicates the visual grounding model (Yang et al., 2019b) (described in Sec. 2.3). "MITE" denotes the multimodal incremental transformer encoder (described in Sec. 2.4.1). MITVG first uses the VG model to explicitly model the relationship between the textual content and the visual content, then encodes the multi-turn dialogue history in the order of the dialogue based on visual grounding, and finally utilizes the outputs of both the encoder and visual grounding to generate the response word by word in the decoding process.] 2.2 Input Representation Before describing our method, we introduce the input representation. Image Features. We use a pre-trained Faster R-CNN model (Ren et al., 2015b) to extract object-level image features. Specifically, the image features $v$ for an image $I$ are represented by: $v = \text{Faster R-CNN}(I) \in \mathbb{R}^{K \times V}$, (1) where $K$ denotes the total number of detected objects per image and $V$ denotes the dimension of the features for each object. Language Features. The features of the current ($t$-th round) $L$-word question are a sequence of $M$-dimensional word embeddings with positional encoding added (Vaswani et al., 2017), as follows: $q_t = [s_{t,1}, s_{t,2}, \ldots, s_{t,L}] \in \mathbb{R}^{L \times M}$, (2) $s_{t,j} = w_j + \mathrm{PE}(j)$, (3) where $w_j$ is the word embedding of the $j$-th word in the question $Q_t$, and $\mathrm{PE}(\cdot)$ denotes the positional encoding function (Vaswani et al., 2017). For the dialogue history $H = \{H_0, H_1, \ldots, H_{t-1}\}$ and the answer $A_t$, the dialogue history features $u = \{u_0, u_1, \ldots, u_{t-1}\}$ and the answer features $a_t$ are obtained in the same way as for the question $Q_t$.
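A minimal sketch of the language-feature construction in Eqs. (2)-(3) is given below, assuming the standard sinusoidal positional encoding of Vaswani et al. (2017); the paper does not specify the exact PE variant, and all hyperparameters here are placeholders.

```python
import math
import torch
import torch.nn as nn

class QuestionEmbedding(nn.Module):
    """Sketch of Eqs. (2)-(3): s_{t,j} = w_j + PE(j). Sinusoidal PE is assumed."""
    def __init__(self, vocab_size, d_model=512, max_len=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        pe = torch.zeros(max_len, d_model)                       # precomputed PE table
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, token_ids):                                # (B, L) word ids
        L = token_ids.size(1)
        return self.word_emb(token_ids) + self.pe[:L]            # (B, L, d_model) = w_j + PE(j)

# The object-level image features v in Eq. (1) would come from a frozen Faster R-CNN
# detector, giving a (K, V) matrix of K region features per image; for a quick test a
# random tensor can stand in: v = torch.randn(K, V)
```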
2.3 Visual Grounding To exclude needless visual content, we introduce visual grounding, which is defined as grounding a natural language query (phrase or sentence) about an image onto the correct region of the image. First of all, we use NeuralCoref for reference resolution (introduction and code are available at https://github.com/huggingface/neuralcoref; NeuralCoref is only used for visual grounding). For example, when it processes the question Q4 "are they wearing matching uniforms ?" shown in Figure 1, NeuralCoref takes the question Q4 and its history as inputs, and then generates a new question "are the people wearing matching uniforms ?" as the new Q4. As shown in Figure 3 (a), the visual grounding model (Yang et al., 2019b) takes the $i$-th question $Q_i$ and the image $I$ as inputs and generates the initial visual grounding features as follows: $v^{(0)}_{g_i} = \mathrm{VGM}(Q_i, I)$, (4) where $\mathrm{VGM}(\cdot)$ denotes the visual grounding model (introduction and code are available at https://github.com/zyang-ur/onestage_grounding). [Figure 3: Framework of (a) Visual Grounding and (b) Multimodal Incremental Transformer Encoder (MITE).] Then $v^{(0)}_{g_i}$ is sent to a multi-head self-attention layer followed by a position-wise feed-forward network (FFN) layer (stacked $N_v$ times) to generate the $i$-th visual grounding features as follows (for simplicity, we omit the descriptions of layer normalization and residual connections): $\hat{v}^{(n)}_{g_i} = \mathrm{MultiHead}(v^{(n-1)}_{g_i}, v^{(n-1)}_{g_i}, v^{(n-1)}_{g_i})$, (5) where $n = 1, \ldots, N_v$ and $\mathrm{MultiHead}(\cdot)$ denotes the multi-head self-attention layer (Vaswani et al., 2017); then $v^{(n)}_{g_i} = \mathrm{FFN}(\hat{v}^{(n)}_{g_i})$, (6) where $n = 1, \ldots, N_v$ and $\mathrm{FFN}(\cdot)$ denotes the position-wise feed-forward network (Vaswani et al., 2017). After $N_v$ layers of computation, we obtain the final visual grounding features $v_{g_i}$ by: $v_{g_i} = v^{(N_v)}_{g_i}$. (7) In practice, some questions in visual dialogue do not contain any entities, such as "anything else ?". For such questions, we use the features of the whole image instead, i.e., $v_{g_i} = v$. 2.4 Multimodal Incremental Transformer Inspired by the idea of the incremental transformer (Li et al., 2019), which was originally designed for the single-modal dialogue task, we extend it and propose a multimodal incremental transformer, which is composed of a Multimodal Incremental Transformer Encoder (MITE) and a Gated Cross-Attention Decoder (GCAD). The MITE uses an incremental encoding scheme to encode the multi-turn dialogue history with an understanding of the image. The GCAD leverages the outputs from both the encoder and visual grounding via the gated cross-attention layer to fuse information from the two modalities in order to generate a contextually and visually coherent response word by word. 2.4.1 MITE To effectively encode multi-turn utterances grounded in visual content, we design the Multimodal Incremental Transformer Encoder (MITE).
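Before detailing the MITE, the self-attention-plus-FFN refinement applied to the grounded features in Eqs. (5)-(7) above can be sketched as follows. Residual connections and layer normalization are included since the paper notes they are only omitted from the equations for brevity; the hyperparameters are assumptions.

```python
import torch.nn as nn

class VGRefiner(nn.Module):
    """Self-attention + FFN stack over visual-grounding features, per Eqs. (5)-(7)."""
    def __init__(self, d_model=512, n_heads=8, n_layers=2, d_ff=2048):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "attn": nn.MultiheadAttention(d_model, n_heads, batch_first=True),
                "norm1": nn.LayerNorm(d_model),
                "ffn": nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                     nn.Linear(d_ff, d_model)),
                "norm2": nn.LayerNorm(d_model),
            }) for _ in range(n_layers)
        ])

    def forward(self, v_g0):                         # (B, R, d_model) initial VGM output
        v = v_g0
        for layer in self.layers:
            attn_out, _ = layer["attn"](v, v, v)     # Eq. (5): multi-head self-attention
            v = layer["norm1"](v + attn_out)         # residual + layer norm
            v = layer["norm2"](v + layer["ffn"](v))  # Eq. (6): position-wise FFN
        return v                                     # Eq. (7): final grounding features v_{g_i}
```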
As shown in Figure 3 (b), at the $i$-th round, where $i = 1, 2, \ldots, t-1$, the MITE takes the visual grounding features $v_{g_i}$, the dialogue history features $u_i$ and the context state $c_{i-1}$ as inputs, utilizes the attention mechanism to incrementally build up a representation of the relevant dialogue history and the associated image regions, and then outputs the new context state $c_i$. This process can be stated recursively as follows: $c_i = \mathrm{MITE}(v_{g_i}, u_i, c_{i-1})$, (8) where $\mathrm{MITE}(\cdot)$ denotes the encoding function, $c_i$ denotes the context state after the dialogue history features $u_i$ and the visual grounding features $v_{g_i}$ have been encoded, and $c_0$ is the dialogue history features $u_0$. As shown in Figure 3 (b), we use a stack of $N_h$ identical layers to encode $v_{g_i}$, $u_i$ and $c_{i-1}$, and to generate $c_i$. Each layer consists of four sub-layers. The first sub-layer is a multi-head self-attention over the dialogue history: $A^{(n)} = \mathrm{MultiHead}(C^{(n-1)}, C^{(n-1)}, C^{(n-1)})$, (9) where $n = 1, \ldots, N_h$, $C^{(n-1)}$ is the output of the $(n-1)$-th layer, and $C^{(0)}$ is the dialogue history features $u_i$. The second sub-layer is a multi-head cross-modal attention: $B^{(n)} = \mathrm{MultiHead}(A^{(n)}, v_{g_i}, v_{g_i})$, (10) where $v_{g_i}$ denotes the visual grounding features. The third sub-layer is a multi-head history attention: $F^{(n)} = \mathrm{MultiHead}(B^{(n)}, c_{i-1}, c_{i-1})$, (11) where $c_{i-1}$ is the context state after the previous dialogue history features $u_{i-1}$ have been encoded. That is why we call this encoder the "Multimodal Incremental Transformer". The fourth sub-layer is a position-wise feed-forward network (FFN): $C^{(n)} = \mathrm{FFN}(F^{(n)})$. (12) We use $c_i$ to denote the final representation at the $N_h$-th layer: $c_i = C^{(N_h)}$. (13) The multimodal incremental transformer encoder at the current turn $t$, i.e., the bottom one in Figure 2, has the same structure as all the other MITEs but takes the visual grounding features $v_{g_t}$, the current question features $q_t$ and the context state $c_{t-1}$ as inputs and generates the final context state $c_t$. 2.4.2 GCAD Motivated by the real-world human cognitive process, we design a Gated Cross-Attention Decoder (GCAD), shown in Figure 2, which takes the masked answer features a