diff --git "a/abs_29K_G/test_abstract_long_2405.00954v1.json" "b/abs_29K_G/test_abstract_long_2405.00954v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00954v1.json" @@ -0,0 +1,206 @@ +{ + "url": "http://arxiv.org/abs/2405.00954v1", + "title": "X-Oscar: A Progressive Framework for High-quality Text-guided 3D Animatable Avatar Generation", + "abstract": "Recent advancements in automatic 3D avatar generation guided by text have\nmade significant progress. However, existing methods have limitations such as\noversaturation and low-quality output. To address these challenges, we propose\nX-Oscar, a progressive framework for generating high-quality animatable avatars\nfrom text prompts. It follows a sequential Geometry->Texture->Animation\nparadigm, simplifying optimization through step-by-step generation. To tackle\noversaturation, we introduce Adaptive Variational Parameter (AVP), representing\navatars as an adaptive distribution during training. Additionally, we present\nAvatar-aware Score Distillation Sampling (ASDS), a novel technique that\nincorporates avatar-aware noise into rendered images for improved generation\nquality during optimization. Extensive evaluations confirm the superiority of\nX-Oscar over existing text-to-3D and text-to-avatar approaches. Our anonymous\nproject page: https://xmu-xiaoma666.github.io/Projects/X-Oscar/.", + "authors": "Yiwei Ma, Zhekai Lin, Jiayi Ji, Yijun Fan, Xiaoshuai Sun, Rongrong Ji", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Recent advancements in automatic 3D avatar generation guided by text have\nmade significant progress. However, existing methods have limitations such as\noversaturation and low-quality output. To address these challenges, we propose\nX-Oscar, a progressive framework for generating high-quality animatable avatars\nfrom text prompts. It follows a sequential Geometry->Texture->Animation\nparadigm, simplifying optimization through step-by-step generation. To tackle\noversaturation, we introduce Adaptive Variational Parameter (AVP), representing\navatars as an adaptive distribution during training. Additionally, we present\nAvatar-aware Score Distillation Sampling (ASDS), a novel technique that\nincorporates avatar-aware noise into rendered images for improved generation\nquality during optimization. Extensive evaluations confirm the superiority of\nX-Oscar over existing text-to-3D and text-to-avatar approaches. Our anonymous\nproject page: https://xmu-xiaoma666.github.io/Projects/X-Oscar/.", + "main_content": "Introduction The creation of high-quality avatars holds paramount importance in a wide range of applications, including cartoon production (Li et al., 2022b; Zhang et al., 2022), virtual try-on (Santesteban et al., 2021; 2022), immersive telepresence (Li et al., 2020a;b; Xiu et al., 2023), and video game design (Zheng et al., 2021; Zhu et al., 2020). Conventional methods for avatar creation are notorious for being timeconsuming and labor-intensive, often demanding thousands of hours of manual work, specialized design tools, and expertise in aesthetics and 3D modeling. In this research, we propose an innovative solution that revolutionizes the generation of high-quality 3D avatars with intricate geometry, refined appearance, and realistic animation, solely based on a text prompt. 
Our approach eliminates the need for manual sculpting, professional software, or extensive artistic skills, thus democratizing avatar creation and making it accessible to a broader audience.

The emergence of deep learning has brought forth a new era in 3D human body reconstruction, showcasing promising methods for automatic reconstruction from photos (Liao et al., 2023b; Han et al., 2023; Men et al., 2024; Zhang et al., 2023d) and videos (Weng et al., 2022; Jiang et al., 2022). However, these approaches primarily focus on reconstructing human bodies from visual cues, limiting their applicability to real-world scenarios and posing challenges when it comes to incorporating creativity, editing, and control. Recent advancements in large-scale vision-language models (VLM) (Radford et al., 2021; Li et al., 2022a; 2023a; Xu et al., 2023a; Ma et al., 2023b) and diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015; Welling & Teh, 2011; Kulikov et al., 2023) have opened up exciting possibilities for generating 3D objects and avatars from text prompts. These methods effectively combine pretrained VLMs and diffusion models with 3D representations such as DeepSDF (Park et al., 2019), NeRF (Mildenhall et al., 2021), DMTET (Shen et al., 2021), and 3D Gaussian Splatting (Kerbl et al., 2023). Despite these promising developments, current approaches still face several limitations. Some methods (Ma et al., 2023c; Chen et al., 2023a; Wang et al., 2023b) focus solely on generating static everyday objects, lacking animation ability. Other methods that aim to generate avatars based on human prior knowledge often suffer from poor geometry and appearance quality (Liao et al., 2023a; Hong et al., 2022; Zhang et al., 2023b) or are incompatible with conventional computer graphics workflows (Liu et al., 2023; Huang et al., 2023b; Cao et al., 2023).

This paper presents X-Oscar, an innovative and advanced framework that leverages text prompts to generate high-quality animatable 3D avatars. Specifically, X-Oscar builds upon the SMPL-X body model (Pavlakos et al., 2019a) as prior knowledge and employs a strategic optimization sequence of "Geometry → Texture → Animation". To overcome the common challenge of oversaturation during avatar generation, we propose Adaptive Variational Parameter (AVP), a novel technique that utilizes a trainable adaptive distribution to represent the geometry and appearance of the avatars. By optimizing the distribution as a whole instead of focusing on specific parameters, X-Oscar effectively mitigates oversaturation, resulting in visually appealing avatars. Furthermore, we introduce Avatar-aware Score Distillation Sampling (ASDS), an innovative module that incorporates geometry-aware and appearance-aware noise into the rendered image during the optimization process. This strategic approach significantly enhances the visual attributes of the avatars and improves their geometry and appearance quality. Extensive experimentation demonstrates the superiority of X-Oscar over existing methods, showcasing improvements in both geometry and appearance quality. Moreover, the avatars generated by X-Oscar are fully animatable, unlocking exciting possibilities for applications in gaming, animation, and virtual reality.

To summarize, our main contributions are three-fold:
• We present X-Oscar, an innovative and progressive framework that enables the creation of delicate animatable 3D avatars from text prompts.
• To overcome the persistent challenge of oversaturation, we propose Adaptive Variational Parameter (AVP), which represents avatars as adaptive distributions instead of specific parameters.
• We introduce Avatar-aware Score Distillation Sampling (ASDS), an advanced module that incorporates geometry-aware and appearance-aware noise into the rendered image during the optimization process, resulting in high-quality outputs.

2. Related Work

Text-to-3D Generation. The emergence of vision-language models (VLMs) (Radford et al., 2021; Ma et al., 2022) and diffusion models has brought about a revolutionary impact on text-to-3D content generation. Pioneering studies like CLIP-Forge (Sanghi et al., 2022), DreamFields (Jain et al., 2022), CLIP-Mesh (Mohammad Khalid et al., 2022), and X-Mesh (Ma et al., 2023c) have showcased the potential of utilizing CLIP scores (Radford et al., 2021) to align 3D representations with textual prompts, enabling the generation of 3D assets based on textual descriptions. Subsequently, DreamFusion (Poole et al., 2022) introduced Score Distillation Sampling (SDS), a groundbreaking technique that leverages pretrained diffusion models (Saharia et al., 2022) to supervise text-to-3D generation. This approach has significantly elevated the quality of generated 3D content. Building on these foundations, researchers have explored various strategies to further enhance text-to-3D generation, including coarse-to-fine optimization (Lin et al., 2023), conditional control (Li et al., 2023c; Chen et al., 2023b), bridging the gap between 2D and 3D (Ma et al., 2023a), introducing variational score distillation (Wang et al., 2023b), and utilizing 3D Gaussian Splatting (Chen et al., 2023c; Li et al., 2023b; Yi et al., 2023; Tang et al., 2023). Nevertheless, despite these advancements, existing methodologies primarily concentrate on generating common static objects. When applied to avatar generation, they face challenges such as poor quality and the inability to animate the generated avatars. In contrast, our proposed framework, X-Oscar, specifically aims to generate high-quality 3D animatable avatars from text prompts. X-Oscar caters to the unique requirements of avatar generation, including intricate geometry, realistic textures, and fluid animations, to produce visually appealing avatars suitable for animation.

Text-to-Avatar Generation. The domain of text-to-avatar generation (Kolotouros et al., 2024; Zhang et al., 2024; Huang et al., 2023a; Xu et al., 2023b; Zhou et al., 2024) has emerged as a prominent and vital research area to cater to the demands of animated avatar creation. This field incorporates human priors such as the SMPL (Loper et al., 2015), SMPL-X (Pavlakos et al., 2019b), and imGHUM (Alldieck et al., 2021) models. AvatarCLIP (Hong et al., 2022) utilizes the SMPL and NeuS (Wang et al., 2021) models to generate 3D avatars guided by the supervision of CLIP scores. DreamWaltz (Huang et al., 2023b) introduces NeRF (Mildenhall et al., 2021) to generate 3D avatars based on 3D-consistent occlusion-aware SDS and 3D-aware skeleton conditioning. AvatarBooth (Zeng et al., 2023) leverages dual fine-tuned diffusion models to achieve customizable 3D human avatar generation. AvatarVerse (Zhang et al., 2023a) utilizes ControlNet (Zhang et al., 2023c) and DensePose (Güler et al., 2018) to enhance view consistency. TADA (Liao et al., 2023a) employs a displacement layer and a texture map to predict the geometry and appearance of avatars.
HumanNorm (Huang et al., 2023a) proposes a normal diffusion model for improved geometry. HumanGaussian (Liu et al., 2023) uses 3D Gaussian Splatting as the human representation for text-to-avatar generation. Despite these advancements, existing methods often produce low-quality and over-saturated results. To overcome these limitations, we introduce a progressive framework that incorporates two key modules, namely Adaptive Variational Parameter and Avatar-aware Score Distillation Sampling. Our framework effectively generates high-fidelity avatars that are visually appealing and realistic.

3. Preliminaries

Score Distillation Sampling (SDS) (Poole et al., 2022), also known as Score Jacobian Chaining (SJC) (Wang et al., 2023a), is a powerful optimization method that adapts pretrained text-to-image diffusion models for text-to-3D generation. Given a pretrained diffusion model p_φ(z_t | y, t), where φ represents the model's parameters, y is the input text prompt, and z_t denotes the noised image at timestep t, SDS aims to optimize a 3D representation to align with the text prompt. The forward diffusion process in SDS is formulated as q(z_t | g(θ, c), y, t), where θ represents the trainable parameters of the 3D representation, c denotes the camera, and g(·) is the rendering function. The objective of SDS can be expressed as follows:

\min_{\theta} \mathcal{L}_{\mathrm{SDS}}(\theta) = \mathbb{E}_{t,c}\left[ \sqrt{\tfrac{1-\gamma_t}{\gamma_t}}\, \omega(t)\, D_{\mathrm{KL}}\big( q(z_t \mid g(\theta, c), y, t) \,\|\, p_\phi(z_t \mid y, t) \big) \right],   (1)

where ω(t) is a weighting function dependent on the timestep t, z_t = \sqrt{\gamma_t}\, g(\theta, c) + \sqrt{1-\gamma_t}\, \epsilon is the noised image, and D_KL(·) represents the Kullback-Leibler divergence (Kullback & Leibler, 1951). To approximate the gradient of the SDS objective, the following equation is leveraged:

\nabla_{\theta} \mathcal{L}_{\mathrm{SDS}}(\theta) \triangleq \mathbb{E}_{t,\epsilon,c}\left[ \omega(t)\, \big( \hat{\epsilon}_\phi(z_t; y, t) - \epsilon \big)\, \frac{\partial g(\theta, c)}{\partial \theta} \right],   (2)

where ε ∼ N(0, I) is Gaussian noise sampled from a normal distribution, and ε̂_φ(z_t; y, t) denotes the noise predicted by the pretrained diffusion model at timestep t.

SMPL-X (Pavlakos et al., 2019b) is a widely adopted parametric 3D human body model in the fields of computer graphics and animation. It offers a comprehensive representation of the human body, consisting of 10,475 vertices and 54 joints, facilitating detailed and realistic character rendering. By specifying shape s, pose p, and expression e parameters, the SMPL-X model generates a human body using the following equation:

T(s, p, e) = \bar{T} + B_s(s) + B_p(p) + B_e(e),   (3)

where \bar{T} denotes a standard human template, while B_s(·), B_p(·), B_e(·) represent the shape, pose, and expression blend shapes, respectively. These blend shapes deform the template to generate a wide range of body shapes, poses, and expressions. To transition the human body from the standard pose to a target pose, linear blend skinning (LBS) is employed:

M(s, p, e) = W_{\mathrm{LBS}}\big( T(s, p, e), J(s), p, W \big),   (4)

where W_LBS(·) represents the LBS function, J(s) corresponds to the skeleton joints, and W represents the skinning weights. The LBS function calculates the final vertex positions by interpolating between the deformed template vertices based on the assigned skinning weights. This process ensures a smooth and natural deformation of the body mesh.
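For reference in the later sections, the following is a minimal PyTorch-style sketch of the SDS gradient of Eq. (2). The `predict_noise` callable, the `alphas_cumprod` schedule tensor, and the choice w(t) = 1 - ᾱ_t are placeholder assumptions for whatever frozen text-to-image diffusion model is used, not DreamFusion's exact implementation.

import torch

def sds_grad(x0, predict_noise, alphas_cumprod, text_emb):
    """Return the SDS gradient direction w(t) * (eps_hat - eps) for a rendered image x0 (Eq. 2)."""
    t = torch.randint(20, 980, (1,))                 # random timestep, roughly t ~ U(0.02, 0.98) of 1000 steps
    a_bar = alphas_cumprod[t].view(1, 1, 1)          # \bar{alpha}_t from the noise schedule
    eps = torch.randn_like(x0)                       # Gaussian noise
    z_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    with torch.no_grad():                            # the diffusion model stays frozen
        eps_hat = predict_noise(z_t, t, text_emb)    # \hat{eps}_phi(z_t; y, t)
    return (1.0 - a_bar) * (eps_hat - eps)           # one common choice of w(t)

# Usage: x0 = g(theta, c) must come from a differentiable renderer; then
# x0.backward(gradient=sds_grad(x0, ...)) accumulates dL_SDS/dtheta into theta.grad.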
4. Approach

The overview of X-Oscar is depicted in Fig. 2, and the workflow is illustrated in Fig. 3. In the upcoming sections, we present a comprehensive description of the X-Oscar framework. In Sec. 4.1, we delve into the progressive modeling pipeline of X-Oscar. This pipeline breaks down the complex task of avatar generation into three manageable subtasks, with each subtask focusing on a specific aspect of avatar creation. In Sec. 4.2, we introduce Adaptive Variational Parameter (AVP). This component employs a trainable adaptive distribution to represent the avatar, addressing the issue of oversaturation that is commonly encountered in avatar generation. In Sec. 4.3, we present Avatar-aware Score Distillation Sampling (ASDS). This module incorporates geometry-aware and appearance-aware noise into the denoising process, enabling the pretrained diffusion model to perceive the current state of the generated avatar, resulting in the production of high-quality outputs.

Figure 2: Overview of the proposed X-Oscar, which consists of three generation stages: (a) geometry modeling, (b) appearance modeling, and (c) animation refinement. (The example prompt shown in the figure is "Flash from DC".)

4.1. Progressive Modeling

Geometry Modeling. During this phase, our objective is to optimize the geometry of the avatars, represented by the SMPL-X model, to align with the input text prompt y. Formally, we aim to optimize the trainable vertex offsets ψ_v ∈ R^{N×3}, initialized as a matrix of zeros, to align the modified vertex coordinates ν′ = ν + ψ_v with the text prompt y, where ν represents the vertex coordinates of the template avatar body and N is the number of vertices of the SMPL-X model. To achieve this, we utilize a differentiable rendering pipeline. Taking the original mesh M of SMPL-X and the predicted vertex offsets ψ_v as inputs, we render a normal image N of the modified mesh using a differentiable renderer (Laine et al., 2020):

N = g(M, \psi_v, c),   (5)

where g(·) denotes the rendering function and c represents a randomly sampled camera parameter. In each iteration, we introduce Gaussian noise ε to the normal map N and apply a pretrained Stable Diffusion (SD) model (Rombach et al., 2022) to denoise it. The gradient with respect to the trainable vertex offsets ψ_v during denoising is then calculated as follows:

\nabla_{\psi_v} \mathcal{L}_{\mathrm{geo}}(\psi_v, N) = \mathbb{E}_{t,\epsilon}\left[ w(t)\, \big( \hat{\epsilon}_\phi(z^N_t; y, t) - \epsilon \big)\, \frac{\partial N}{\partial \psi_v} \right],   (6)

where ε̂_φ(z^N_t; y, t) represents the noise predicted by SD based on the timestep t, the input text embedding y, and the noisy normal image z^N_t.

Appearance Modeling. After completing the geometry modeling phase, we obtain a mesh that aligns with the prompt in terms of shape, with vertex coordinates ν′ = ν + ψ_v. In this stage, our objective is to optimize an albedo map ψ_a ∈ R^{h×w×3} to represent the appearance of the resulting avatar, where h and w represent the height and width of the albedo map.
To achieve this, we start by rendering a colored image I from a randomly sampled camera parameter c based on the vertex offsets ψ_v and the albedo map ψ_a using a differentiable renderer (Laine et al., 2020):

I = g(M, \psi_v, \psi_a, c).   (7)

To optimize the albedo map ψ_a, we employ a loss function similar to Eq. (6) used in the geometry modeling phase:

\nabla_{\psi_a} \mathcal{L}_{\mathrm{app}}(\psi_a, I) = \mathbb{E}_{t,\epsilon}\left[ w(t)\, \big( \hat{\epsilon}_\phi(z^I_t; y, t) - \epsilon \big)\, \frac{\partial I}{\partial \psi_a} \right],   (8)

where ε̂_φ(z^I_t; y, t) represents the noise predicted by the SD model. This loss function encourages the rendered image I to align with the text prompt y by minimizing the discrepancy between the predicted noise ε̂_φ and the added Gaussian noise ε. By optimizing the albedo map ψ_a with this loss, we generate appearances for the avatars that are consistent with the provided text prompts.

Animation Refinement. Given that both the geometry modeling and appearance modeling stages optimize the avatar in a canonical pose, it is inevitable that certain parts of the avatar are occluded, leading to lower-quality results in those areas. To overcome this challenge, we introduce an animation refinement stage in which we adjust the pose of the avatar and simultaneously optimize both the geometry and appearance. Specifically, we sample viable pose parameters p from a pretrained model such as VPoser (Pavlakos et al., 2019a). For each sampled pose, we render the normal image N_p and the colored image I_p of the animated avatar using a differentiable renderer (Laine et al., 2020):

N_p = g(M, \psi_v, c, p), \quad I_p = g(M, \psi_v, \psi_a, c, p),   (9)

where the pose parameters p and camera parameters c vary in each iteration. To optimize the geometry and appearance of the avatar in the animated pose, we define an animation loss L_ani as follows:

\mathcal{L}_{\mathrm{ani}}(\psi_v, \psi_a, N_p, I_p) = \mathcal{L}_{\mathrm{geo}}(\psi_v, N_p) + \mathcal{L}_{\mathrm{app}}(\psi_v, \psi_a, I_p),   (10)

where L_geo and L_app are the geometry loss and appearance loss, respectively. The gradients of the animation loss with respect to the vertex offsets ψ_v and the albedo map ψ_a are calculated as follows:

\nabla_{\psi_v} \mathcal{L}_{\mathrm{ani}}(\psi_v, N_p, I_p) = \mathbb{E}_{t,\epsilon}\left[ w(t)\, \big( \hat{\epsilon}_\phi(z^{N_p}_t; y, t) - \epsilon \big)\, \frac{\partial N_p}{\partial \psi_v} + w(t)\, \big( \hat{\epsilon}_\phi(z^{I_p}_t; y, t) - \epsilon \big)\, \frac{\partial I_p}{\partial \psi_v} \right],   (11)

\nabla_{\psi_a} \mathcal{L}_{\mathrm{ani}}(\psi_a, I_p) = \mathbb{E}_{t,\epsilon}\left[ w(t)\, \big( \hat{\epsilon}_\phi(z^{I_p}_t; y, t) - \epsilon \big)\, \frac{\partial I_p}{\partial \psi_a} \right].   (12)

The notations used here are similar to those defined in Eq. (2). By minimizing the animation loss using these gradients, we refine the geometry and appearance of the avatar in various poses, resulting in improved quality in the final output.
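The three stages reuse the same SDS-style update and differ only in which parameters are trainable and which rendering is scored. Below is a minimal PyTorch-style sketch of this progressive schedule under stated assumptions: the renderer callables (`render_normal`, `render_rgb`), the camera and pose samplers, and the `sds_grad` helper (in the spirit of the sketch after Sec. 3) are stand-ins rather than the released implementation, while the learning rates and iteration counts follow Sec. 5.1.

import torch

def progressive_optimize(psi_v, psi_a, render_normal, render_rgb,
                         sample_camera, sample_pose, sds_grad,
                         iters=(5000, 10000, 5000)):
    """Geometry -> Texture -> Animation; each stage backpropagates an SDS-style image gradient."""
    opt_v = torch.optim.Adam([psi_v], lr=1e-4)   # vertex offsets, all zeros at init (Sec. 5.1)
    opt_a = torch.optim.Adam([psi_a], lr=5e-3)   # albedo map, all 0.5 at init (Sec. 5.1)

    for _ in range(iters[0]):                    # (a) geometry modeling, Eq. (6): normal image only
        n = render_normal(psi_v, sample_camera(), None)
        opt_v.zero_grad(); n.backward(gradient=sds_grad(n)); opt_v.step()

    for _ in range(iters[1]):                    # (b) appearance modeling, Eq. (8): colored image only
        img = render_rgb(psi_v.detach(), psi_a, sample_camera(), None)
        opt_a.zero_grad(); img.backward(gradient=sds_grad(img)); opt_a.step()

    for _ in range(iters[2]):                    # (c) animation refinement, Eq. (10)-(12): both terms
        c, p = sample_camera(), sample_pose()
        n, img = render_normal(psi_v, c, p), render_rgb(psi_v, psi_a, c, p)
        opt_v.zero_grad(); opt_a.zero_grad()
        n.backward(gradient=sds_grad(n))         # gradient w.r.t. psi_v from the normal image
        img.backward(gradient=sds_grad(img))     # gradients w.r.t. psi_v and psi_a from the colored image
        opt_v.step(); opt_a.step()

Passing `None` as the pose stands for the canonical pose used in stages (a) and (b); in stage (c) the sampled VPoser pose drives both renderings.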
4.2. Adaptive Variational Parameter

As formulated in Eq. (1) and Eq. (2), SDS aims to optimize a precise 3D representation so that images rendered from arbitrary viewpoints align with the input prompt as evaluated by 2D diffusion models. However, there exists a fundamental contradiction between achieving an accurate 3D representation and the inherent multi-view inconsistency of 2D diffusion models. Specifically, it is often unreasonable to expect a 2D diffusion model to assign high similarity scores between all multi-view images of a specific 3D representation and the text prompt. Consequently, when SDS is employed to enforce similarity between each perspective of a specific 3D representation and the text prompt, it can lead to the undesirable issue of oversaturation.

To address this concern, we propose formulating the 3D representation as a distribution of vertex offsets, denoted as the offset distribution, and a distribution of albedo maps, referred to as the appearance distribution. Specifically, we perturb ψ_v and ψ_a of the 3D human representation with Gaussian noise to improve the robustness of the model and alleviate the oversaturation problem. This perturbation process can be expressed as:

\psi'_v \sim \psi_v + \lambda_v\, \mathcal{N}(0, I), \quad \psi'_a \sim \psi_a + \lambda_a\, \mathcal{N}(0, I),   (13)

where λ_v and λ_a serve as weights that control the magnitude of the perturbations. The means of the offset distribution and the appearance distribution can be learned by optimizing ψ_v and ψ_a, while their standard deviations are determined by λ_v and λ_a. Thus, choosing appropriate values for λ_v and λ_a is crucial and challenging. If these values are too small, the model may not fully benefit from learning the distributions; in the extreme case λ_v = λ_a = 0, the model essentially learns specific parameters instead of distributions. Conversely, when λ_v and λ_a are excessively large, the learning process becomes challenging due to highly unstable perturbations; in the extreme case λ_v = λ_a = +∞, the generated results become independent of the underlying ψ_v and ψ_a.

Figure 3: The workflow of the proposed X-Oscar. First, we incorporate the adaptive perturbation into the 3D parameters, forming the avatar distribution. Next, we sample a set of parameters from the avatar distribution and render a 2D image. Finally, we apply avatar-aware noise to the rendered image for denoising to optimize the 3D parameters.

To overcome the above challenges and facilitate a learning process that progresses from easy to difficult without manual weight assignment, we propose Adaptive Variational Parameter (AVP) for the 3D representation. Specifically, we leverage the standard deviations of ψ_v and ψ_a as the weights for the perturbations, which can be formulated as follows:

\psi'_v \sim \psi_v + \sigma(\psi_v)\, \mathcal{N}(0, I) = \mathcal{N}\big(\psi_v, \sigma(\psi_v)^2\big),   (14)

\psi'_a \sim \psi_a + \sigma(\psi_a)\, \mathcal{N}(0, I) = \mathcal{N}\big(\psi_a, \sigma(\psi_a)^2\big),   (15)

where σ(·) represents the standard deviation. This adaptive approach has several advantages. First, it enables the model to learn progressively from easy to difficult scenarios. Initially, ψ_v and ψ_a are initialized as matrices of all zeros and all 0.5, respectively, resulting in a standard deviation of 0. Consequently, during the early stages of training, the model focuses on optimizing the means of ψ'_v and ψ'_a to reasonable values. As training progresses, the standard deviations gradually increase, promoting the model's ability to maintain high similarity between the 3D representation and the text even in the presence of noise interference. Second, this approach is fully automatic. The model adapts the perturbation weights based on the current state of the 3D representation, eliminating the need for manual intervention or hyperparameter tuning. During the inference phase, we utilize the mean values of ψ'_v and ψ'_a to represent the avatar.
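To make Eq. (14)-(15) concrete, here is a minimal PyTorch sketch of AVP sampling. Treating σ(·) as the detached standard deviation of the current parameter tensor is our reading of the formulation, since the text does not spell out whether gradients flow through the perturbation scale.

import torch

def avp_sample(param: torch.Tensor) -> torch.Tensor:
    """Draw psi' ~ N(psi, sigma(psi)^2) as in Eq. (14)/(15); only the mean (param) receives gradients."""
    sigma = param.std().detach()              # assumption: no gradient through sigma(.)
    return param + sigma * torch.randn_like(param)

# Example: the perturbation is zero right after initialization and grows as training spreads the values.
psi_v = torch.zeros(10475, 3, requires_grad=True)             # SMPL-X vertex offsets
psi_a = torch.full((2048, 2048, 3), 0.5, requires_grad=True)  # albedo map
psi_v_sample, psi_a_sample = avp_sample(psi_v), avp_sample(psi_a)  # feed these samples to the renderer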
4.3. Avatar-aware Score Distillation Sampling

In previous work on SDS (Poole et al., 2022), Gaussian noise related to the timestep t is added to the rendered image, and a pretrained diffusion model is utilized to denoise the noisy image in order to optimize the 3D representation. The process of adding noise can be formulated as follows:

z_t = \sqrt{\alpha_t}\, z_{t-1} + \sqrt{1-\alpha_t}\, \epsilon_{t-1} = \sqrt{\alpha_t \alpha_{t-1}}\, z_{t-2} + \sqrt{1-\alpha_t \alpha_{t-1}}\, \bar{\epsilon}_{t-2} = \cdots = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \bar{\epsilon}_0,   (16)

where z_t represents the noised image at timestep t, \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i, and ε_i, ε̄_i ∼ N(0, I). Since t ∼ U(0.02, 0.98) is randomly sampled, the noise added to the rendered image is independent of the avatar's current state. To establish a correlation between the denoising process and the avatar's current state, and to facilitate a learning process from easy to difficult, we propose Avatar-aware Score Distillation Sampling (ASDS). Specifically, the noised image with avatar-aware noise can be formulated as follows:

z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \big( \lambda_n \epsilon_n + \lambda_v \sigma(\psi_v)\, \epsilon_v + \lambda_a \sigma(\psi_a)\, \epsilon_a \big) = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \sqrt{\lambda_n^2 + (\lambda_v \sigma(\psi_v))^2 + (\lambda_a \sigma(\psi_a))^2}\; \epsilon = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta,   (17)

where ε_n, ε_v, ε_a, and ε are i.i.d. Gaussian random variables with zero mean and unit variance, i.e., ε_n, ε_v, ε_a, ε ∼ N(0, I), and ε_θ ∼ N(0, (λ_n² + (λ_v σ(ψ_v))² + (λ_a σ(ψ_a))²) I). At the initial stage, when σ(ψ_v) = σ(ψ_a) = 0, the variance of the noise is relatively small, resulting in an easier denoising task for the diffusion model. As training progresses, σ(ψ_v) and σ(ψ_a) gradually increase, leading to an increase in the noise variance and, consequently, in the difficulty of denoising. By incorporating avatar-aware noise, the model undergoes a learning process from easy to difficult. The gradient of ASDS is then formulated as follows:

\nabla_{\theta} \mathcal{L}_{\mathrm{ASDS}}(\theta) \triangleq \mathbb{E}_{t,\epsilon,c}\left[ \omega(t)\, \big( \hat{\epsilon}_\phi(z_t; y, t) - \epsilon_\theta \big)\, \frac{\partial g(\theta, c)}{\partial \theta} \right],   (18)

where z_t = \sqrt{\bar{\alpha}_t}\, g(\theta, c) + \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta represents the noised image, ε̂_φ(z_t; y, t) is the predicted noise, and ε_θ is the avatar-aware noise that encourages the paradigm of learning from easy to difficult.
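A minimal PyTorch-style sketch of the avatar-aware noising of Eq. (17) and the gradient direction of Eq. (18) follows. `predict_noise` stands in for the frozen Stable Diffusion noise predictor, the weighting w(t) = 1 - ᾱ_t is an assumed choice, and the default λ values follow Sec. 5.1 (λ_n = 0.8, λ_v = λ_a = 0.1).

import torch

def asds_grad(x0, psi_v, psi_a, predict_noise, alphas_cumprod, text_emb,
              lam_n=0.8, lam_v=0.1, lam_a=0.1):
    """Return the ASDS gradient direction w(t) * (eps_hat - eps_theta) for a rendered image x0."""
    t = torch.randint(20, 980, (1,))                          # random timestep
    a_bar = alphas_cumprod[t].view(1, 1, 1)                   # \bar{alpha}_t
    # Avatar-aware std: sqrt(lam_n^2 + (lam_v * sigma(psi_v))^2 + (lam_a * sigma(psi_a))^2), Eq. (17)
    std = torch.sqrt(lam_n ** 2 + (lam_v * psi_v.std()) ** 2 + (lam_a * psi_a.std()) ** 2).detach()
    eps_theta = std * torch.randn_like(x0)                    # composed avatar-aware noise
    z_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps_theta
    with torch.no_grad():                                     # frozen diffusion model
        eps_hat = predict_noise(z_t, t, text_emb)
    return (1.0 - a_bar) * (eps_hat - eps_theta)              # Eq. (18); apply via x0.backward(gradient=...)

At initialization σ(ψ_v) = σ(ψ_a) = 0, so the noise level reduces to λ_n and then grows with the spread of the parameters, matching the easy-to-difficult schedule described above.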
5. Experiments

5.1. Implementation Details

Our experiments are conducted on a single Nvidia RTX 3090 GPU with 24GB of memory using the PyTorch library (Paszke et al., 2019). The diffusion model employed in our implementation is the Stable Diffusion model provided by HuggingFace Diffusers (von Platen et al., 2022). During the training phase, we set the resolution of the rendered images to 800 × 800 pixels. The resolution of the albedo map is 2048 × 2048 pixels. The geometry modeling, appearance modeling, and animation refinement stages consist of 5000, 10000, and 5000 iterations, respectively. We set the learning rates for the vertex offsets ψ_v and the albedo map ψ_a to 1e-4 and 5e-3, respectively. Furthermore, we set the values of λ_n, λ_v, and λ_a to 0.8, 0.1, and 0.1, respectively. To enhance facial details, during training we render facial images for optimization with a probability of 0.2 and full-body images with a probability of 0.8.

5.2. Comparison

Qualitative Comparison with Text-to-Avatar Methods. We present a comparative analysis of our methodology against five state-of-the-art (SOTA) baselines: TADA (Liao et al., 2023a), DreamWaltz (Huang et al., 2023b), HumanGaussian (Liu et al., 2023), AvatarCLIP (Hong et al., 2022), and AvatarCraft (Jiang et al., 2023), as illustrated in Fig. 4. We observe certain limitations in the geometry and texture of avatars generated by TADA, which we emphasize by enclosing them within a red box. Furthermore, the outcomes produced by the other baselines exhibit issues such as blurriness and inconsistencies with the provided text. In contrast, our proposed X-Oscar consistently generates high-quality avatars with intricate details. Moreover, in addition to static avatars, X-Oscar is also capable of generating animatable avatars, as demonstrated in Fig. 1.

Qualitative Comparison with Text-to-3D Methods. We also conduct a comparative analysis of X-Oscar with SOTA text-to-3D methods, namely DreamFusion (Poole et al., 2022), Magic3D (Lin et al., 2023), Fantasia3D (Chen et al., 2023a), and ProlificDreamer (Wang et al., 2023b). As shown in Fig. 5, we observe evident limitations in the avatars generated by text-to-3D methods, including poor geometry and noisy texture. Furthermore, owing to the absence of human prior knowledge, the avatars generated by text-to-3D methods lack flexibility and pose challenges in terms of animation. In contrast, our proposed method excels in generating high-quality, animatable avatars.

Quantitative Comparison. To assess X-Oscar quantitatively, we conduct user studies comparing its performance with SOTA text-to-3D and text-to-avatar methods using the same prompts. We randomly selected 40 prompts generated by ChatGPT for avatar creation, and the user studies involved 52 participants who provided subjective evaluations. Participants rated the generated avatars on three specific aspects: geometry quality (Geo. Qua.), texture quality (Tex. Qua.), and text consistency (Tex. Con.). Scores range from 1 to 10, with higher scores indicating better quality. As shown in Tab. 1, our method consistently outperforms all other methods across all evaluated aspects.

Table 1: Quantitative comparison with SOTA methods. The top-performing and second-best results are highlighted in bold and underlined, respectively. As AvatarCLIP employs the CLIP score as its training supervision signal, it is inappropriate to gauge its performance using the CLIP score; therefore, the CLIP scores of AvatarCLIP are shown in gray.
Method          | Geo. Qua. | Tex. Qua. | Tex. Con. | CLIP ViT-B/32 | CLIP ViT-B/16 | CLIP ViT-L/14 | OpenCLIP ViT-B/32 | OpenCLIP ViT-B/16 | OpenCLIP ViT-L/14
DreamFusion     | 2.66 | 4.18 | 3.29 | 29.29 | 29.29 | 25.30 | 31.57 | 28.22 | 30.17
Magic3D         | 4.21 | 3.12 | 1.61 | 28.52 | 30.92 | 27.02 | 31.14 | 28.21 | 30.21
Fantasia3D      | 2.14 | 2.42 | 2.53 | 30.34 | 30.42 | 26.12 | 29.68 | 28.46 | 31.46
ProlificDreamer | 2.11 | 3.72 | 6.29 | 30.30 | 30.28 | 25.00 | 30.81 | 28.59 | 30.75
AvatarCLIP      | 3.28 | 2.64 | 2.09 | 34.49 | 32.45 | 28.20 | 32.77 | 31.20 | 31.98
AvatarCraft     | 4.39 | 4.55 | 3.37 | 27.59 | 29.70 | 25.23 | 26.19 | 24.60 | 25.55
DreamWaltz      | 6.38 | 6.09 | 6.99 | 30.86 | 31.20 | 27.32 | 30.65 | 29.09 | 29.83
HumanGaussian   | 6.03 | 4.51 | 6.08 | 28.46 | 29.18 | 26.26 | 26.37 | 26.82 | 29.09
TADA            | 5.03 | 6.95 | 7.62 | 31.09 | 30.48 | 27.72 | 30.67 | 30.05 | 30.17
X-Oscar         | 8.85 | 8.91 | 9.22 | 31.70 | 31.97 | 28.10 | 30.91 | 30.28 | 30.42
(The first three columns are the user study; the next three are CLIP scores; the last three are OpenCLIP scores.)

Figure 4: Qualitative comparisons with SOTA text-to-avatar methods (columns: Ours, TADA, DreamWaltz, HumanGaussian, AvatarCLIP, AvatarCraft). The prompts (top → down) are "Gandalf from The Lord of the Rings", "Aladdin in Aladdin", and "Captain Jack Sparrow from Pirates of the Caribbean".

Figure 5: Qualitative comparisons with SOTA text-to-3D methods (columns: DreamFusion, Fantasia3D, Magic3D, ProlificDreamer, Ours). The prompts (top → down) are "Anna in Frozen", "Hilary Clinton", and "Knight".

Figure 6: Ablation study on the Adaptive Variational Parameter and Avatar-aware Score Distillation Sampling (columns: w/o AVP, w/o ASDS, X-Oscar). The prompts (top → down) are "Batman" and "Mulan".

Figure 7: Ablation study on progressive modeling ("w/o PM" vs. "w PM") for the prompts "Warren Buffett", "Jeff Bezos", and "Albert Einstein". "PM" is short for "progressive modeling"; "w/o PM" means that geometry, appearance, and animation are optimized together.

Additionally, we calculate similarity scores between the generated results and the text prompts using CLIP (Radford et al., 2021) and OpenCLIP (Cherti et al., 2023) with different backbones. Our method consistently achieves either the best or second-best results, demonstrating its ability to generate 3D avatars that are semantically consistent with the provided text prompts.

5.3. Ablation Studies

Progressive Modeling. To evaluate the effectiveness of the progressive modeling paradigm in X-Oscar, we performed additional experiments by coupling the three training stages together. The results shown in Fig. 7 reveal a significant enhancement in the quality of geometry and appearance of the generated avatars when using the progressive modeling paradigm. For example, consider the prompt "Albert Einstein". Without the progressive modeling approach, the generated avatar is limited to a rudimentary shape and color, lacking the intricate details necessary for recognizing Albert Einstein. With the progressive modeling paradigm, we observe a remarkable improvement in the generated avatars.

Adaptive Variational Parameter. To provide robust evidence of the impact of AVP, we conducted comprehensive ablation studies by using specific parameters instead of distributions to represent avatars. As depicted in Fig. 6, our observations strongly indicate that the omission of AVP in X-Oscar can lead to an excessive optimization of geometry and appearance in an effort to align the generated outputs with the text, which subsequently leads to the problem of oversaturation. Geometry oversaturation leads to topological overlay problems in the generated meshes, while appearance oversaturation results in avatars with exaggerated color contrast.
By integrating AVP, we successfully tackle these issues, significantly improving the realism of both the geometry and appearance of the generated avatars.

Avatar-aware Score Distillation Sampling. To investigate the impact of ASDS, we conducted additional experiments by adding random Gaussian noise instead of avatar-aware noise to the rendered image for optimization. As demonstrated in Fig. 6, the absence of ASDS directly results in a noticeable decline in the overall quality of both the geometry and appearance of the generated avatars. For instance, without ASDS, the two ears on Batman's head exhibit a geometric merging phenomenon. In the case of Mulan, the facial details become blurred and the colors on the front and back of the pants are inconsistent.

6.", + "additional_graph_info": { + "graph": [ + [ + "Yiwei Ma", + "Jiayi Ji" + ], + [ + "Yiwei Ma", + "Haowei Wang" + ], + [ + "Jiayi Ji", + "Haowei Wang" + ], + [ + "Jiayi Ji", + "Changli Wu" + ] + ], + "node_feat": { + "Yiwei Ma": [
The emergence of deep learning has brought forth a new era in 3D human body reconstruction, showcasing promising methods for automatic reconstruction from photos (Liao et al., 2023b; Han et al., 2023; Men et al., 2024; Zhang et al., 2023d) and videos (Weng et al., 2022; Jiang et al., 2022). However, these approaches primarily focus on reconstructing human bodies from visual cues, limiting their applicability to real-world scenarios and posing challenges when it comes to incorporating creativity, editing, and control. Recent advancements in large-scale vision-language 1 arXiv:2405.00954v1 [cs.CV] 2 May 2024 \fX-Oscar models (VLM) (Radford et al., 2021; Li et al., 2022a; 2023a; Xu et al., 2023a; Ma et al., 2023b) and diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015; Welling & Teh, 2011; Kulikov et al., 2023) have opened up exciting possibilities for generating 3D objects and avatars from text prompts. These methods effectively combine pretrained VLMs and diffusion models with 3D representations such as DeepSDF (Park et al., 2019), NeRF (Mildenhall et al., 2021), DMTET (Shen et al., 2021), and 3D Gaussian Splatting (Kerbl et al., 2023). Despite these promising developments, current approaches still face several limitations. Some methods (Ma et al., 2023c; Chen et al., 2023a; Wang et al., 2023b) focus solely on generating static everyday objects, lacking animation ability. Other methods that aim to generate avatars based on human prior knowledge often suffer from poor geometry and appearance quality (Liao et al., 2023a; Hong et al., 2022; Zhang et al., 2023b) or are incompatible with conventional computer graphics workflows (Liu et al., 2023; Huang et al., 2023b; Cao et al., 2023). This paper presents X-Oscar, an innovative and advanced framework that leverages text prompts to generate highquality animatable 3D avatars. Specifically, X-Oscar builds upon the SMPL-X body model (Pavlakos et al., 2019a) as prior knowledge and employs a strategic optimization sequence of \u201cGeometry \u2192Texture \u2192Animation\u201d. To overcome the common challenge of oversaturation during avatar generation, we propose Adaptive Variational Parameter (AVP), a novel technique that utilizes a trainable adaptive distribution to represent the geometry and appearance of the avatars. By optimizing the distribution as a whole instead of focusing on specific parameters, X-Oscar effectively mitigates oversaturation, resulting in visually appealing avatars. Furthermore, we introduce Avatar-aware Score Distillation Sampling (ASDS), an innovative module that incorporates geometry-aware and appearance-aware noise into the rendered image during the optimization process. This strategic approach significantly enhances the visual attributes of the avatars and improves their geometry and appearance quality. Extensive experimentation demonstrates the superiority of X-Oscar over existing methods, showcasing improvements in both geometry and appearance quality. Moreover, the avatars generated by X-Oscar are fully animatable, unlocking exciting possibilities for applications in gaming, animation, and virtual reality. To summarize, our main contributions are three-fold: \u2022 We present X-Oscar, an innovative and progressive framework that enables the creation of delicate animatable 3D avatars from text prompts. \u2022 To overcome the persistent challenge of oversaturation, we propose Adaptive Variational Parameter (AVP), which represents avatars as adaptive distributions instead of specific parameters. 
\u2022 We introduce Avatar-aware Score Distillation Sampling (ASDS), an advanced module that incorporates geometry-aware and appearance-aware noise into the rendered image during the optimization process, resulting in high-quality outputs. 2. Related Work Text-to-3D Generation. The emergence of vision-language models (VLMs) (Radford et al., 2021; Ma et al., 2022) and diffusion models has brought about a revolutionary impact on text-to-3D content generation. Pioneering studies like CLIP-forge (Sanghi et al., 2022), DreamFields (Jain et al., 2022), CLIP-Mesh (Mohammad Khalid et al., 2022), and XMesh (Ma et al., 2023c) have showcased the potential of utilizing CLIP scores (Radford et al., 2021) to align 3D representations with textual prompts, enabling the generation of 3D assets based on textual descriptions. Subsequently, DreamFusion (Poole et al., 2022) introduced Score Distillation Sampling (SDS), a groundbreaking technique that leverages pretrained diffusion models (Saharia et al., 2022) to supervise text-to-3D generation. This approach has significantly elevated the quality of generated 3D content. Building on these foundations, researchers have explored various strategies to further enhance text-to-3D generation. These strategies encompass coarse-to-fine optimization (Lin et al., 2023), conditional control (Li et al., 2023c; Chen et al., 2023b), bridging the gap between 2D and 3D (Ma et al., 2023a), introducing variational score distillation (Wang et al., 2023b), and utilizing 3D Gaussian Splatting (Chen et al., 2023c; Li et al., 2023b; Yi et al., 2023; Tang et al., 2023). Nevertheless, despite these advancements, existing methodologies primarily concentrate on generating common static objects. When applied to avatar generation, they face challenges such as poor quality and the inability to animate the generated avatars. In contrast, our proposed framework, X-Oscar, specifically aims to generate high-quality 3D animatable avatars from text prompts. X-Oscar caters to the unique requirements of avatar generation, including intricate geometry, realistic textures, and fluid animations, to produce visually appealing avatars suitable for animation. Text-to-Avatar Generation. The domain of text-to-avatar generation (Kolotouros et al., 2024; Zhang et al., 2024; Huang et al., 2023a; Xu et al., 2023b; Zhou et al., 2024) has emerged as a prominent and vital research area to cater to the demands of animated avatar creation. This field incorporates human priors such as SMPL (Loper et al., 2015), SMPL-X (Pavlakos et al., 2019b), and imGHUM (Alldieck et al., 2021) models. AvatarCLIP (Hong et al., 2022) utilizes SMPL and Neus (Wang et al., 2021) models to generate 3D avatars guided by the supervision of CLIP scores. Dreamwaltz (Huang et al., 2023b) introduces NeRF (Mildenhall et al., 2021) to generate 3D avatars based on 3D2 \fX-Oscar consistent occlusion-aware SDS and 3D-aware skeleton conditioning. AvatarBooth (Zeng et al., 2023) leverages dual fine-tuned diffusion models to achieve customizable 3D human avatar generation. AvatarVerse (Zhang et al., 2023a) utilizes ControlNet (Zhang et al., 2023c) and DensePose (G\u00a8 uler et al., 2018) to enhance view consistency. TADA (Liao et al., 2023a) employs a displacement layer and a texture map to predict the geometry and appearance of avatars. HumanNorm (Huang et al., 2023a) proposes a normal diffusion model for improved geometry. HumanGaussian (Liu et al., 2023) uses 3D Gaussian Splatting as human representation for text-to-avatar generation. 
Despite these advancements, existing methods often produce low-quality and over-saturated results. To overcome these limitations, we introduce a progressive framework that incorporates two key modules, namely Adaptive Variational Parameter and Avatar-aware Score Distillation Sampling. Our framework effectively generates high-fidelity avatars that are visually appealing and realistic. 3. Preliminaries Score Distillation Sampling (SDS) (Poole et al., 2022), also known as Score Jacobian Chaining (SJC) (Wang et al., 2023a), is a powerful optimization method that adapts pretrained text-to-image diffusion models for text-to-3D generation. Given a pretrained diffusion model p\u03d5(zt|y, t), where \u03d5 represents the model\u2019s parameters, y is the input text prompt, and zt denotes the noised image at timestep t, SDS aims to optimize a 3D representation to align with the text prompt. The forward diffusion process in SDS is formulated as q(zt|g(\u03b8, c), y, t), where \u03b8 represents the trainable parameters of the 3D representation, c denotes the camera, and g(\u00b7) is the rendering function. The objective of SDS can be expressed as follows: min LSDS(\u03b8) = E(t,c) \u0014r1 \u2212\u03b3t \u03b3t \u03c9(t)DKL(q(zt|g(\u03b8, c), y, t) \u2225p\u03d5(zt|y, t)) \u0015 , (1) where \u03c9(t) is a weighting function dependent on the timestep t, zt = \u221a\u03b3tg(\u03b8, c) + \u221a1 \u2212\u03b3t\u03f5 is the noised image, and DKL(\u00b7) represents the Kullback-Leibler Divergence (Kullback & Leibler, 1951). To approximate the gradient of the SDS objective, the following equation is leveraged: \u2207\u03b8LSDS(\u03b8) \u225cEt,\u03f5,c \uf8ee \uf8ef \uf8f0\u03c9(t)(\u02c6 \u03f5\u03d5(zt; y, t) | {z } predicted noise \u2212 \u03f5 |{z} Guassian noise )\u2202g(\u03b8, c) \u2202\u03b8 \uf8f9 \uf8fa \uf8fb, (2) where \u03f5 \u223cN (0, I) represents sampled noise from a normal distribution, and \u02c6 \u03f5\u03d5(zt; y, t) denotes the predicted noise of the pretrained diffusion model at timestep t. SMPL-X (Pavlakos et al., 2019b) is a widely adopted parametric 3D human body model in the fields of computer graphics and animation. It offers a comprehensive representation of the human body, consisting of 10, 475 vertices and 54 joints, facilitating detailed and realistic character rendering. By specifying shape s, pose p, and expression e parameters, the SMPL-X model generates a human body using the following equation: T(s, p, e) = T + Bs(s) + Bp(p) + Be(e), (3) where T denotes a standard human template, while Bs(\u00b7), Bp(\u00b7), Be(\u00b7) represent shape, expression, and pose blend shapes, respectively. These blend shapes deform the template to generate a wide range of body shapes, poses, and expressions. To transition the human body from a standard pose to a target pose, linear blend skinning (LBS) is employed: M(s, p, e) = WLBS(T(s, p, e), J(s), p, W), (4) where WLBS(\u00b7) represents the LBS function, J(s) corresponds to the skeleton joints, and W represents the skinning weight. The LBS function calculates the final vertex positions by interpolating between the deformed template vertices based on the assigned skinning weights. This process ensures a smooth and natural deformation of the body mesh. 4. Approach The overview of X-Oscar is depicted in Fig. 2, and the workflow is illustrated in Fig. 3. In the upcoming sections, we present a comprehensive description of the X-Oscar framework: In Sec. 4.1, we delve into the progressive modeling pipeline of X-Oscar. 
This pipeline breaks down the complex task of avatar generation into three manageable subtasks, with each subtask focusing on a specific aspect of avatar creation. In Sec. 4.2, we introduce Adaptive Variational Parameter (AVP). This component employs a trainable adaptive distribution to represent the avatar, addressing the issue of oversaturation that is commonly encountered in avatar generation. In Sec. 4.3, we present Avatar-aware Score Distillation Sampling (ASDS). This module incorporates geometry-aware and appearance-aware noise into the denoising process, enabling the pretrained diffusion model to perceive the current state of the generated avatar, resulting in the production of high-quality outputs. 4.1. Progressive Modeling Geomotry Modeling. During this phase, our objective is to optimize the geometry of the avatars, represented by the SMPL-X model, to align with the input text prompt y. Formally, we aim to optimize the trainable vertex offsets \u03c8v \u2208RN\u00d73, initialized as a matrix of zeros, to align the modified vertex coordinates \u03bd\u2032 = \u03bd + \u03c8v with the text 3 \fX-Oscar Stable Diffusion Vertex Coordinates Stable Diffusion \u201cFlash from DC\u201d Stable Diffusion \u201cFlash from DC\u201d \u201cFlash from DC\u201d Frozen Trainable Random Sampling Motified SMPL-X Motified SMPL-X Offset Distribution Appearance Distribution Offset Distribution Appearance Distribution Pose Prior (a) Geometry Modeling (b) Appearance Modeling (c) Animation Refinement Update Update Update Avatar-aware Noise Avatar-aware Noise Avatar-aware Noise Pose Prior Camera Figure 2: Overview of the proposed X-Oscar, which consists of three generation stages: (a) geometry modeling, (b) appearance modeling, and (c) animation refinement. prompt y, where \u03bd represents the vertex coordinates of the template avatar body, and N is the number of vertices of the SMPL-X model. To achieve this, we utilize a differentiable rendering pipeline. By taking the original mesh M of SMPL-X and the predicted vertex offsets \u03c8v as inputs, we render a normal image N of the modified mesh using a differentiable renderer (Laine et al., 2020): N = g(M, \u03c8v, c), (5) where g(\u00b7) denotes the rendering function, and c represents a randomly sampled camera parameter. In each iteration, we introduce Gaussian noise \u03f5 to the normal map N and apply a pretrained Stable Diffusion (SD) model (Rombach et al., 2022) to denoise it. The gradient of the trainable vertex offsets \u03c8v during denoising is then calculated as follows: \u2207\u03c8vLgeo(\u03c8v, N) = Et,\u03f5 \u0014 w(t) \u0010 \u02c6 \u03f5\u03d5(zN t ; y, t) \u2212\u03f5 \u0011 \u2202N \u2202\u03c8v \u0015 , (6) where \u02c6 \u03f5\u03d5(zN t ; y, t) represents the predicted noise by SD based on the timestep t, input text embedding y, and the noisy normal image zN t . Appearance Modeling. After completing the geometry modeling phase, we obtain a mesh that aligns with the prompt in terms of shape, with vertex coordinates \u03bd\u2032 = \u03bd + \u03c8v. In this stage, our objective is to optimize an albedo map \u03c8a \u2208Rh\u00d7w\u00d73 to represent the appearance of the resulting avatar, where h and w represent the height and width of the albedo map. To achieve this, we start by rendering a colored image I from a randomly sampled camera parameter c based on the vertex offsets \u03c8v and the albedo map \u03c8a using a differentiable renderer (Laine et al., 2020): I = g(M, \u03c8v, \u03c8a, c). 
(7) To optimize the albedo map \u03c8a, we employ a loss function similar to Eq. (6) used in the geometry modeling phase: \u2207\u03c8aLapp(\u03c8a, I) = Et,\u03f5 \u0014 w(t) \u0010 \u02c6 \u03f5\u03d5(zI t ; y, t) \u2212\u03f5 \u0011 \u2202I \u2202\u03c8a \u0015 , (8) where \u02c6 \u03f5\u03d5(zI t ; y, t) represents the predicted noise by the SD model. This loss function encourages the rendered image I to align with the text prompt y by minimizing the discrepancy between the predicted noise \u02c6 \u03f5\u03d5 and the added Gaussian noise \u03f5. By optimizing the albedo map \u03c8a using this loss function, we can generate appearances for the avatars that are consistent with the provided text prompts. Animation Refinement. Given that both the geometry modeling and appearance modeling stages optimize the avatar in a canonical pose, it is inevitable that certain parts of the avatar may be obstructed, leading to lower-quality results in those areas. To overcome this challenge, we introduce an animation refinement stage where we adjust the pose of the avatar and simultaneously optimize both the geometry and appearance. Specifically, we sample viable pose parameters p from a pre-trained model such as VPoser (Pavlakos et al., 2019a). For each sampled pose, we render the normal image Np and colored image Ip of the animated avatar using a differentiable renderer (Laine et al., 2020): Np = g(M, \u03c8v, c, p), Ip = g(M, \u03c8v, \u03c8a, c, p), (9) where pose parameters p and camera parameters c vary in each iteration. To optimize the geometry and appearance of the avatar in the animated pose, we define an animation loss 4 \fX-Oscar Lani as follows: Lani(\u03c8v, \u03c8a, Np, Ip) = Lgeo(\u03c8v, Np)+Lapp(\u03c8v, \u03c8a, Ip), (10) where Lgeo and Lapp are the geometry loss and appearance loss, respectively. The gradients of the animation loss for the vertex offsets \u03c8v and the albedo maps \u03c8a are calculated as follows: \u2207\u03c8v Lani(\u03c8v, Np, Ip) =Et,\u03f5 \u0014 w(t) \u0010 \u02c6 \u03f5\u03d5(z Np t ; y, t) \u2212\u03f5 \u0011 \u2202Np \u2202\u03c8v + w(t) \u0010 \u02c6 \u03f5\u03d5(z Ip t ; y, t) \u2212\u03f5 \u0011 \u2202Ip \u2202\u03c8v \u0015 , (11) \u2207\u03c8aLani(\u03c8a, Ip) = E(t,\u03f5) \u0014 w(t) \u0010 \u02c6 \u03f5\u03d5(z Ip t ; y, t) \u2212\u03f5 \u0011 \u2202Ip \u2202\u03c8a \u0015 , (12) The notations used here are similar to those defined in Eq. (2). By minimizing the animation loss using these gradients, we refine the geometry and appearance of the avatar in various poses, resulting in improved quality in the final output. 4.2. Adaptive Variational Parameter As formulated in Eq. (1) and Eq. (2), SDS aims to optimize a precise 3D representation to align all images rendered from arbitrary viewpoints with the input prompt evaluated by 2D diffusion models. However, there exists a fundamental contradiction between achieving an accurate 3D representation and the inherent multi-view inconsistency associated with 2D diffusion models. Specifically, it is often unreasonable to expect high similarity scores of a 2D diffusion model between all multi-view images of a specific 3D representation and text prompts. Consequently, when SDS is employed to enforce similarity between each perspective of a specific 3D representation and the text prompt, it can lead to the undesirable issue of oversaturation. 
To address this concern, we propose formulating the 3D representation as a distribution of vertex offsets, denoted as offset distribution, and a distribution of albedo maps, referred to as appearance distribution. Specifically, we perturb \u03c8v and \u03c8a of the 3D human representation with Gaussian noises to improve the robustness of the model and alleviate the oversaturation problem. This perturbation process can be expressed as: \u03c8\u2032 v \u223c\u03c8v + \u03bbvN (0, I) , \u03c8\u2032 a \u223c\u03c8a + \u03bbaN (0, I) , (13) where \u03bbv and \u03bba serve as weights to control the magnitude of the perturbations. The mean of the offset distribution and appearance distribution can be learned by optimizing \u03c8v and \u03c8a, while their standard deviations are determined by \u03bbv and \u03bba. Thus, choosing appropriate values for \u03bbv and \u03bba is crucial and challenging. If these values are too small, the model may not fully benefit from learning the distributions. In extreme cases, when \u03bbv = \u03bba = 0, the model essentially learns specific parameters instead of distributions. Conversely, when \u03bbv and \u03bba are excessively large, the learning 3D Paramaters Adaptive Variational Parameter 2D Image Sample &Render Add Perturbation Add Noise Update Avatar-aware Diffusion Model Figure 3: The workflow of the proposed X-Oscar. First, we incorporate the adaptive perturbation into the 3D parameters, forming the avatar distribution. Next, we sample a set of parameters from the avatar distribution and render a 2D image. Finally, we apply avatar-aware noise to the rendered image for denoising to optimize 3D parameters. process becomes challenging due to highly unstable perturbations. In extreme cases, when \u03bbv = \u03bba = +\u221e, the generated results become independent of the underlying \u03c8v and \u03c8a. To overcome the above challenges and facilitate a learning process that progresses from easy to difficult without manual weight assignment, we propose Adaptive Variational Parameter (AVP) for 3D representation. Specifically, we leverage the standard deviations of \u03c8v and \u03c8a as weights for perturbations, which can be formulated as follows: \u03c8\u2032 v \u223c\u03c8v + \u03c3(\u03c8v)N (0, I) = N \u0000\u03c8v, \u03c3(\u03c8v)2\u0001 , (14) \u03c8\u2032 a \u223c\u03c8a + \u03c3(\u03c8a)N (0, I) = N \u0000\u03c8a, \u03c3(\u03c8a)2\u0001 , (15) where \u03c3(\u00b7) represents the standard deviation. This adaptive approach has several advantages. Firstly, it enables the model to learn progressively from easy to difficult scenarios. Initially, \u03c8v and \u03c8a are initialized as matrices of all zeros and all 0.5, respectively, resulting in a standard deviation of 0. Consequently, during the early stages of training, the model focuses on optimizing the means of \u03c8\u2032 v and \u03c8\u2032 a to reasonable values. As training progresses, the standard deviations gradually increase, promoting the model\u2019s ability to maintain high similarity between the 3D representation and the text even in the presence of noise interference. Secondly, this approach is fully automatic. The model learns to adapt the perturbation weights based on the current state of the 3D representation, eliminating the need for manual intervention or hyperparameter tuning. During the inference phase, we utilize the mean values of \u03c8\u2032 v and \u03c8\u2032 a to represent the avatar. 4.3. 
Avatar-aware Score Distillation Sampling In previous work on SDS (Poole et al., 2022), a Gaussian noise related to timestep t was introduced to the rendered 5 \fX-Oscar image, and a pretrained diffusion model was utilized to denoise the noisy image for optimizing the 3D representation. The process of adding noise can be formulated as follows: zt =\u221a\u03b1tzt\u22121 + \u221a 1 \u2212\u03b1t\u03f5t\u22121 =\u221a\u03b1t\u03b1t\u22121zt\u22122 + p 1 \u2212\u03b1t\u03b1t\u22121\u00af \u03f5t\u22122 = \u00b7 \u00b7 \u00b7 =\u221a\u00af \u03b1tz0 + \u221a 1 \u2212\u00af \u03b1t\u00af \u03f50, (16) where zt represents the noised image at timestep t, \u00af \u03b1t = Qt i=1 \u03b1i, and \u03f5i, \u00af \u03f5i \u223cN (0, I). Since t \u223cU(0.02, 0.98) is randomly sampled, the noise added to the rendered image is independent of the avatar\u2019s current state. To establish a correlation between the denoising process and the avatar\u2019s current state, and to facilitate a learning process from easy to difficult, we propose Avatar-aware Score Distillation Sampling (ASDS). Specifically, the noised image with avatar-aware noise can be formulated as follows: zt = \u221a \u00af \u03b1z0 + \u221a 1 \u2212\u00af \u03b1t(\u03bbn\u03f5n + \u03bbv\u03c3(\u03c8v)\u03f5v + \u03bba\u03c3(\u03c8a)\u03f5a) = \u221a \u00af \u03b1z0 + \u221a 1 \u2212\u00af \u03b1t p (\u03bbn)2 + (\u03bbv\u03c3(\u03c8v))2 + (\u03bba\u03c3(\u03c8a))2\u03f5 = \u221a \u00af \u03b1z0 + \u221a 1 \u2212\u00af \u03b1t\u03f5\u03b8, (17) where \u03f5n, \u03f5v, \u03f5a, and \u03f5 are i.i.d. Gaussian random variables with zero mean and unit variance, i.e., \u03f5n, \u03f5v, \u03f5a, \u03f5 \u223c N(0, I), and \u03f5\u03b8 \u223c N(0, (\u03bbn)2 + (\u03bbv\u03c3(\u03c8v))2 + (\u03bba\u03c3(\u03c8a))2). At the initial stage, when \u03c3(\u03c8v) = \u03c3(\u03c8a) = 0, the initial variance of the noise is relatively small, resulting in an easier denoising process for diffusion models. As the training progresses, \u03c3(\u03c8v) and \u03c3(\u03c8a) gradually increase, leading to an increase in the noise variance. Consequently, this increases the difficulty of denoising. By incorporating avatar-aware noise, the model can undergo a learning process from easy to difficult. The gradient of ASDS is then formulated as follows: \u2207\u03b8LASDS(\u03b8) \u225c E(t,\u03f5,c) \uf8ee \uf8ef \uf8f0\u03c9(t) \u0000\u02c6 \u03f5\u03d5(zt; y, t) | {z } precited noise \u2212 \u03f5\u03b8 |{z} avatar-aware noise \u0001\u2202g(\u03b8, c) \u2202\u03b8 \uf8f9 \uf8fa \uf8fb, (18) where zt = \u221a\u00af \u03b1g(\u03b8, c) + \u221a1 \u2212\u00af \u03b1\u03f5\u03b8 represents the noised image, and \u03f5\u03b8 is an avatar-aware noise that encourages the paradigm of learning from easy to difficult. 5. Experiments 5.1. Implementation Details Our experiments are conducted using a single Nvidia RTX 3090 GPU with 24GB of memory and the PyTorch library (Paszke et al., 2019). The diffusion model employed in our implementation is the Stable Diffusion provided by HuggingFace Diffusers (von Platen et al., 2022). During the training phase, we set the resolution of the rendered images to 800 \u00d7 800 pixels. The resolution of the albedo map is 2048 \u00d7 2048 pixels. The geometry modeling, appearance modeling, and animation refinement stages consist of 5000, 10000, and 5000 iterations, respectively. We set the learning rates for the vertex offset \u03c8v and albedo map \u03c8a to 1e-4 and 5e-3, respectively. 
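To connect Eq. (17) with these settings, the following is a minimal sketch of the avatar-aware noising step. `alphas_bar` stands in for the diffusion schedule, `sigma_v` and `sigma_a` are the current standard deviations of the vertex offsets and albedo map, and the default noise weights are the values given in the next sentence.

```python
import torch

def add_avatar_aware_noise(z0, t, sigma_v, sigma_a, alphas_bar,
                           lam_n=0.8, lam_v=0.1, lam_a=0.1):
    """Noise the latent z0 with avatar-aware noise (sketch of Eq. (17))."""
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    # Combined std: sqrt(lam_n^2 + (lam_v * sigma(psi_v))^2 + (lam_a * sigma(psi_a))^2).
    total_std = (lam_n ** 2 + (lam_v * sigma_v) ** 2 + (lam_a * sigma_a) ** 2) ** 0.5
    eps_theta = total_std * torch.randn_like(z0)
    zt = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps_theta
    return zt, eps_theta
```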
Furthermore, we set the values of \u03bbn, \u03bbv, and \u03bba to 0.8, 0.1, and 0.1, respectively. To enhance facial details, we employ a strategy where there is a 0.2 probability of rendering facial images for optimization during the training process, and a 0.8 probability of rendering full-body images for optimization. 5.2. Comparison Qualitative Comparison with Text-to-Avatar Methods. We present a comparative analysis of our methodology against five state-of-the-art (SOTA) baselines: TADA (Liao et al., 2023a), DreamWaltz (Huang et al., 2023b), HumanGaussian (Liu et al., 2023), AvatarCLIP (Hong et al., 2022), and AvatarCraft (Jiang et al., 2023), as illustrated in Fig. 4. We observe certain limitations in the geometry and texture of avatars generated by TADA, which we emphasize by enclosing them within a red box. Furthermore, the outcomes produced by the other baselines exhibit issues such as blurriness and inconsistencies with the provided text. In contrast, our proposed X-Oscar consistently generates high-quality avatars with intricate details. Moreover, in addition to static avatars, X-Oscar is also capable of generating animatable avatars, as demonstrated in Fig. 1. Qualitative Comparison with Text-to-3D Methods. We also conduct a comparative analysis of X-Oscar with SOTA text-to-3D methods, namely DreamFusion (Poole et al., 2022), Magic3D (Lin et al., 2023), Fantasia3D (Chen et al., 2023a), and ProlificDreamer (Wang et al., 2023b). As shown in Fig. 5, we observe evident limitations in the avatars generated by text-to-3D methods, including poor geometry and noisy texture. Furthermore, owing to the absence of human prior knowledge, the avatars generated by text-to-3D methods lack flexibility and pose challenges in terms of animation. In contrast, our proposed method excels in generating high-quality, animatable avatars. Quantitative Comparison. To assess X-Oscar quantitatively, we conduct user studies comparing its performance with SOTA text-to-3D content and text-to-avatar methods using the same prompts. We randomly selected 40 prompts generated by ChatGPT for avatar creation, and the user studies involved 52 participants who provided subjective evaluations. Participants rated the generated avatars based on three specific aspects: texture quality (Geo. Qua.), geometry quality (Tex. Qua.), and text consistency (Tex. Con.). Scores range from 1 to 10, with higher scores indicating better quality. As shown in Tab. 1, our method consistently outperforms all other methods across all evaluated aspects. 6 \fX-Oscar Table 1: Quantitative comparison of SOTA Methods: The top-performing and second-best results are highlighted in bolded and underlined, respectively. As AvatarCLIP employs the CLIP score as its training supervision signal, it is inappropriate to gauge its performance using the CLIP score. Therefore, we set the CLIP score of AvatarCLIP to gray. User Study CLIP Score OpenCLIP Score Method Geo. Qua. Tex. Qua. Tex. Con. 
ViT-B/32 ViT-B/16 ViT-L/14 ViT-B/32 ViT-B/16 ViT-L/14 DreamFusion 2.66 4.18 3.29 29.29 29.29 25.30 31.57 28.22 30.17 Magic3D 4.21 3.12 1.61 28.52 30.92 27.02 31.14 28.21 30.21 Fantasia3D 2.14 2.42 2.53 30.34 30.42 26.12 29.68 28.46 31.46 ProlificDreamer 2.11 3.72 6.29 30.30 30.28 25.00 30.81 28.59 30.75 AvatarCLIP 3.28 2.64 2.09 34.49 32.45 28.20 32.77 31.20 31.98 AvatarCraft 4.39 4.55 3.37 27.59 29.70 25.23 26.19 24.60 25.55 DreamWaltz 6.38 6.09 6.99 30.86 31.20 27.32 30.65 29.09 29.83 HumanGuassian 6.03 4.51 6.08 28.46 29.18 26.26 26.37 26.82 29.09 TADA 5.03 6.95 7.62 31.09 30.48 27.72 30.67 30.05 30.17 X-Oscar 8.85 8.91 9.22 31.70 31.97 28.10 30.91 30.28 30.42 Ours TADA DreamWaltz HumanGaussian AvatarCLIP AvatarCraft Figure 4: Qualitative comparisons with SOTA text-to-avatar methods. The prompts (top \u2192down) are \u201cGandalf from The Lord of the Rings\u201d, \u201cAladdin in Aladdin\u201d, and \u201cCaptain Jack Sparrow from Pirates of the Caribbean\u201d. DreamFusion Fantasia3D Magic3D ProlificDreamer Ours Figure 5: Qualitative comparisons with SOTA text-to-3D methods. The prompts (top \u2192down) are \u201cAnna in Frozen\u201d, \u201cHilary Clinton\u201d, and \u201cKnight\u201d. 7 \fX-Oscar w/o AVP w/o ASDS X-Oscar Figure 6: Ablation study on the Adaptive Variational Parameter and Avatar-aware Score Distillation Sampling. The prompts (top \u2192down) are \u201cBatman\u201d, and \u201cMulan\u201d. w/o PM w PM \u201cWarren Buffett\u201d \u201cJeff Bezos\u201d \u201cAlbert Einstein\u201d w/o PM w PM w/o PM w PM Figure 7: Ablation study on progressive modeling. \u201cPM\u201d is short for \u201cprogressive modeling\u201d. \u201cw/o PM\u201d means that geometry, appearance, and animation are optimized together. Additionally, we calculate similarity scores between the generated results and text prompts using CLIP (Radford et al., 2021) and OpenCLIP (Cherti et al., 2023) with different backbones. Our method consistently achieves either the best or second-best results, demonstrating its ability to generate 3D avatars that are semantically consistent with the provided text prompts. 5.3. Ablation Studies Progressive Modeling. To evaluate the effectiveness of the progressive modeling paradigm in X-Oscar, we performed additional experiments by coupling the three training stages together. The results shown in Fig. 7 reveal a significant enhancement in the quality of geometry and appearance in the generated avatars when using the progressive modeling paradigm. For example, consider the prompt \u201cAlbert Einstein\u201d. Without employing the progressive modeling approach, the generated avatar is limited to a rudimentary shape and color, lacking the intricate details necessary for recognizing Albert Einstein. However, when employing the progressive modeling paradigm, we observe a remarkable improvement in the generated avatars. Adaptive Variational Parameter. To provide robust evidence of the impact of AVP, we conducted comprehensive ablation studies by using specific parameters instead of distributions to represent avatars. As depicted in Fig. 6, our observations strongly indicate that the omission of AVP in X-Oscar can lead to an excessive optimization of geometry and appearance, as an effort to align the generated outputs with the text. This subsequently leads to the problem of oversaturation. Geometry oversaturation leads to topological overlay problems in the generated meshes, while appearance oversaturation results in avatars with exaggerated color contrast. 
By integrating AVP, we successfully tackle these issues, significantly improving the realism of both the geometry and appearance in the generated avatars. Avatar-aware Score Distillation Sampling. To investigate the impact of ASDS, we conducted additional experiments by adding random Gaussian noise instead of avatar-aware noise to the rendered image for optimization. As demonstrated in Fig. 6, the absence of ASDS directly results in a noticeable decline in the overall quality of both the geometry and appearance of the generated avatars. For instance, without ASDS, two ears on Batman\u2019s head exhibit a geometric merging phenomenon. In the case of Mulan, the facial details become blurred and the colors on the front and back of the pants are inconsistent. 6." + }, + { + "url": "http://arxiv.org/abs/2312.00085v2", + "title": "X-Dreamer: Creating High-quality 3D Content by Bridging the Domain Gap Between Text-to-2D and Text-to-3D Generation", + "abstract": "In recent times, automatic text-to-3D content creation has made significant\nprogress, driven by the development of pretrained 2D diffusion models. Existing\ntext-to-3D methods typically optimize the 3D representation to ensure that the\nrendered image aligns well with the given text, as evaluated by the pretrained\n2D diffusion model. Nevertheless, a substantial domain gap exists between 2D\nimages and 3D assets, primarily attributed to variations in camera-related\nattributes and the exclusive presence of foreground objects. Consequently,\nemploying 2D diffusion models directly for optimizing 3D representations may\nlead to suboptimal outcomes. To address this issue, we present X-Dreamer, a\nnovel approach for high-quality text-to-3D content creation that effectively\nbridges the gap between text-to-2D and text-to-3D synthesis. The key components\nof X-Dreamer are two innovative designs: Camera-Guided Low-Rank Adaptation\n(CG-LoRA) and Attention-Mask Alignment (AMA) Loss. CG-LoRA dynamically\nincorporates camera information into the pretrained diffusion models by\nemploying camera-dependent generation for trainable parameters. This\nintegration enhances the alignment between the generated 3D assets and the\ncamera's perspective. AMA loss guides the attention map of the pretrained\ndiffusion model using the binary mask of the 3D object, prioritizing the\ncreation of the foreground object. This module ensures that the model focuses\non generating accurate and detailed foreground objects. Extensive evaluations\ndemonstrate the effectiveness of our proposed method compared to existing\ntext-to-3D approaches. Our project webpage:\nhttps://xmu-xiaoma666.github.io/Projects/X-Dreamer/ .", + "authors": "Yiwei Ma, Yijun Fan, Jiayi Ji, Haowei Wang, Xiaoshuai Sun, Guannan Jiang, Annan Shu, Rongrong Ji", + "published": "2023-11-30", + "updated": "2023-12-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The field of text-to-3D synthesis, which seeks to generate superior 3D content predicated on input textual descriptions, has shown significant potential to impact a diverse range of applications. These applications extend beyond traditional areas such as architecture, animation, and gaming, and encompass contemporary domains like virtual and augmented reality. In recent years, extensive research has demonstrated significant performance improvement in the text-to-2D generation task [2, 24\u201326] by leveraging pretrained diffusion models [6, 30, 31] on a large-scale text-image dataset [27]. 
Building on these advancements, DreamFusion [22] introduces an effective approach that utilizes a pretrained 2D diffusion model [26] to autonomously generate 3D assets from text, eliminating the need for a dedicated 3D asset dataset. A key innovation introduced by DreamFusion is the Score Distillation Sampling (SDS) algorithm. This algorithm aims to optimize a single 3D representation, such as arXiv:2312.00085v2 [cs.CV] 25 Dec 2023 \fNeRF [18], to ensure that rendered images from any camera perspective maintain a high likelihood with the given text, as evaluated by the pretrained 2D diffusion model. Inspired by the groundbreaking SDS algorithm, several recent works [4, 13, 16, 35, 36] have emerged, envisioning the textto-3D generation task through the application of pretrained 2D diffusion models. While text-to-3D generation has made significant strides through the utilization of pretrained text-to-2D diffusion models, it is crucial to recognize and address the persistent and substantial domain gap that remains between text-to-2D and text-to-3D generation. This distinction is clearly illustrated in Fig. 1. To begin with, the text-to-2D model produces camera-independent generation results, focusing on generating high-quality images from specific angles while disregarding other angles. In contrast, 3D content creation is intricately tied to camera parameters such as position, shooting angle, and field of view. As a result, a text-to-3D model must generate high-quality results across all possible camera parameters. This fundamental difference emphasizes the necessity for innovative approaches that enable the pretrained diffusion model to consider camera parameters. Furthermore, a text-to-2D generation model must simultaneously generate both foreground and background elements while maintaining the overall coherence of the image. Conversely, a text-to-3D generation model only needs to concentrate on creating the foreground object. This distinction allows text-to-3D models to allocate more resources and attention to precisely represent and generate the foreground object. Consequently, the domain gap between text-to-2D and text-to-3D generation poses a significant performance obstacle when directly employing pretrained 2D diffusion models for 3D asset creation. In this study, we present a pioneering framework, XDreamer, designed to address the domain gap between textto-2D and text-to-3D generation, thereby facilitating the creation of high-quality text-to-3D content. Our framework incorporates two innovative designs that are specifically tailored to address the aforementioned challenges. Firstly, existing approaches [4, 13, 16, 35] commonly employ 2D pretrained diffusion models [25, 26] for text-to-3D generation, which lack inherent linkage to camera parameters. To address this limitation and ensure that our text-to-3D model produces results that are directly influenced by camera parameters, we introduce Camera-Guided Low-Rank Adaptation (CG-LoRA) to fine-tune the pretrained 2D diffusion model. Notably, the parameters of CG-LoRA are dynamically generated based on the camera information during each iteration, establishing a robust relationship between the text-to-3D model and camera parameters. Furthermore, pretrained text-to-2D diffusion models allocate attention to both foreground and background generation, whereas the creation of 3D assets necessitates a stronger focus on accurately generating foreground objects. 
To address this requirement, we introduce Attention-Mask Alignment (AMA) Loss, which leverages the rendered binary mask of the 3D object to guide the attention map of the pretrained 2D stable diffusion model [25]. By incorporating this module, XDreamer prioritizes the generation of foreground objects, resulting in a significant enhancement of the overall quality of the generated 3D content. We present a compelling demonstration of the effectiveness of X-Dreamer in synthesizing high-quality 3D assets based on textual cues. By incorporating CG-LoRA and AMA loss to address the domain gap between text-to-2D and text-to-3D generation, our proposed framework exhibits substantial advancements over prior methods in textto-3D generation. In summary, our study contributes to the field in three key aspects: \u2022 We propose a novel method, X-Dreamer, for high-quality text-to-3D content creation, effectively bridging the domain gap between text-to-2D and text-to-3D generation. \u2022 To enhance the alignment between the generated results and the camera perspective, we propose CG-LoRA, which leverages camera information to dynamically generate specific parameters for 2D diffusion models. \u2022 To prioritize the creation of foreground objects in the textto-3D model, we introduce AMA loss, which utilizes binary masks of the foreground 3D object to guide the attention maps of the 2D diffusion model. 2. Related Work 2.1. Text-to-3D Content Creation In recent years, there has been a significant surge in interest surrounding the evolution of text-to-3D generation [12, 17, 22]. This growing field has been propelled, in part, by advancements in pretrained vision-and-language models, such as CLIP [23], as well as diffusion models like Stable Diffusion [25] and Imagen [26]. Contemporary text-to-3D models can generally be classified into two distinct categories: the CLIP-based text-to-3D approach and the diffusion-based text-to-3D approach. The CLIP-based text-to-3D approach [9, 11, 14, 17, 19, 37] employs CLIP encoders [23] to project textual descriptions and rendered images derived from the 3D object into a modal-shared feature space. Subsequently, CLIP loss is harnessed to align features from both modalities, optimizing the 3D representation to conform to the textual description. Various scholars have made significant contributions to this field. For instance, Michel et al. [17] are pioneers in proposing the use of CLIP loss to harmonize the text prompt with the rendered images of the 3D object, thereby enhancing text-to-3D generation. Ma et al. [14] introduce dynamic textual guidance during 3D object synthesis to improve convergence speed and generation performance. However, these approaches \fDMTET Render Image Guided Mesh DMTET DMTET \"An ice cream sundae, ... view.\" Render Mask Geometry Initialization Geometry Modeling Stage1 : Geometry Learning Attention Map Mesh Stage2 : Appearance Learning \"An ice cream sundae, ... view.\" Attention Map PBR material CGLoRA Difussion U-Net CGLoRA Render Image Render Mask Update Update Difussion U-Net Camera Parameters Camera Parameters Text-to-3D Result Figure 2. Overview of the proposed X-Dreamer, which consists of geometry learning and appearance learning. have inherent limitations, as they tend to generate 3D representations with a scarcity of geometry and appearance detail. 
To overcome this shortcoming, the diffusion-based text-to-3D approach [4, 8, 10, 13, 22, 32] leverages pretrained text-to-2D diffusion models [25, 26] to guide the optimization of 3D representations. Central to these models is the application of SDS loss [22] to align the rendered images stemming from a variety of camera perspectives with the textual description. Specifically, given the target text prompt, Lin et al. [13] leverage a coarse-to-fine pipeline to generate high-resolution 3D content. Chen et al. [4] decouple geometry modeling and appearance modeling to generate realistic 3D assets. For specific purposes, some researchers [28, 36] integrate trainable LoRA [7] branches into pretrained diffusion models. For instance, Seo et al. [28] put forth 3DFuse, a model that harnesses the power of LoRA to comprehend object semantics. Wang et al. [36] introduce ProlificDreamer, where the role of LoRA is to evaluate the score of the variational distribution for 3D parameters. However, the LoRA parameter begins its journey from random initialization and maintains its independence from the camera and text. To address these limitations, we present two innovative modules: CG-LoRA and AMA loss. These modules are designed to enhance the model\u2019s ability to consider important camera parameters and prioritize the generation of foreground objects throughout the text-to-3D creation process. 2.2. Low-Rank Adaptation (LoRA) Low-Rank Adaptation (LoRA) is a technique used to reduce memory requirements when fine-tuning a large model. It involves injecting only a small set of trainable parameters into the pretrained model, while keeping the original parameters fixed. During the optimization process, gradients are passed through the fixed pretrained model weights to the LoRA adapter, which is then updated to optimize the loss function. LoRA has been applied in various fields, including natural language processing [3, 7], image synthesis [38] and 3D generation [28, 36]. To achieve low-rank adaptation, a linear projection with a pretrained weight matrix W0 \u2208Rdin\u00d7dout is augmented with an additional low-rank factorized projection. This augmentation is represented as W0 + \u2206W = W0 + AB, where A \u2208Rdin\u00d7r, B \u2208 Rr\u00d7dout, and r \u226amin(din, dout). During training, W0 remains fixed, while A and B are trainable. The modified forward pass, given the original forward pass Y = XW0, can be formulated as follows: Y = XW0 + XAB. (1) In this paper, we introduce CG-LoRA, which involves the dynamic generation of trainable parameters for A based on camera information. This technique allows for integrating perspective information, including camera parameters and direction-aware descriptions, into the pretrained text-to-2D diffusion model. As a result, our method significantly enhances text-to-3D generation capabilities. \fC + Camera Parameters + Element-wise Addition C Concatenation An ice cream sundae, front/side/back view. Direction-aware Text Figure 3. Illustration of Camera-Guided Low-Rank Adaptation. 3. Approach 3.1. Architecture In this section, we present a comprehensive introduction to the proposed X-Dreamer, which consists of two main stages: geometry learning and appearance learning. For geometry learning, we employ DMTET [29] as the 3D representation. DMTET is an MLP parameterized with \u03a6dmt and is initialized with a 3D ellipsoid using the mean squared error (MSE) loss LMSE. 
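As a rough illustration of this ellipsoid-based initialization (the precise objective is Eq. (2) in the next subsection), a sketch is given below. The closed-form signed-distance approximation for an axis-aligned ellipsoid, the use of a generic set of query points instead of near-surface samples, and the assumption that the first MLP output channel is the SDF value are all simplifications of ours, not details from the paper.

```python
import torch
import torch.nn.functional as F

def ellipsoid_sdf(p: torch.Tensor, radii: torch.Tensor) -> torch.Tensor:
    """Approximate signed distance to an origin-centred, axis-aligned ellipsoid."""
    k0 = (p / radii).norm(dim=-1)
    k1 = (p / (radii * radii)).norm(dim=-1)
    return k0 * (k0 - 1.0) / k1.clamp(min=1e-8)

def init_mlp_to_ellipsoid(mlp, points, radii, steps=1000, lr=1e-3):
    """Fit the SDF branch of an MLP to the ellipsoid via MSE (cf. Eq. (2))."""
    target = ellipsoid_sdf(points, radii).detach()
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(steps):
        pred_sdf = mlp(points)[..., 0]      # assume channel 0 predicts the SDF value s(v)
        loss = F.mse_loss(pred_sdf, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mlp
```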
Subsequently, we optimize DMTET and CG-LoRA using the SDS loss [22] LSDS and the proposed AMA loss LAMA to ensure the alignment between the 3D representation and the input text prompt. For appearance learning, we leverage bidirectional reflectance distribution function (BRDF) modeling [33] following the previous approach [4]. Specifically, we utilize an MLP with trainable parameters \u03a6mat to predict surface materials. Similar to the geometry learning stage, we optimize \u03a6mat and CG-LoRA using the SDS loss LSDS and the AMA loss LAMA to achieve alignment between the 3D representation and the text prompt. Fig. 2 provides a detailed depiction of our proposed X-Dreamer. Geometry Learning. For geometry learning, an MLP network \u03a6dmt is utilized to parameterize DMTET as a 3D representation. To enhance the stability of geometry modeling, we employ a 3D ellipsoid as the initial configuration for DMTET \u03a6dmt. For each vertex vi \u2208VT belonging to the tetrahedral grid T, we train \u03a6dmt to predict two important values: the SDF value s(vi) and the deformation offset \u03b4(vi). To initialize \u03a6dmt with the 3D ellipsoid, we sample a set of N points {pi \u2208R3}|N i=1 approximately distributed on the surface of an ellipsoid and compute the corresponding SDF values {SDF(pi)}|N i=1. Subsequently, we optimize \u03a6dmt using MSE loss. This optimization process ensures that \u03a6dmt effectively initializes DMTET to resemble the 3D ellipsoid. The formulation of the MSE loss is given by: LMSE = 1 N N X i=1 (s(pi; \u03a6dmt) \u2212SDF(pi))2 . (2) After initializing the geometry, our objective is to align the geometry of DMTET with the input text prompt. Specifically, we generate the normal map n and the object mask m from the initialized DMTET \u03a6dmt by employing a differentiable rendering technique [33], given a randomly sampled camera pose c. Subsequently, we input the normal map n into the frozen stable diffusion (SD) with a trainable CGLoRA and update \u03a6dmt using the SDS loss, which is defined as follows: \u2207\u03a6dmtLSDS = Et,\u03f5 \u0014 w(t) (\u02c6 \u03f5\u0398(nt; y, t) \u2212\u03f5) \u2202n \u2202\u03a6dmt \u0015 , (3) where \u0398 represents the parameter of SD, \u02c6 \u03f5\u0398(nt; y, t) denotes the predicted noise of SD given the noise level t and text embedding y. Additionally, nt = \u03b1tn + \u03c3t\u03f5, where \u03f5 \u223cN(0, I) represents noise sampled from a normal distribution. The implementation of w(t), \u03b1t, and \u03c3t is based on the DreamFusion [22]. Furthermore, to focus SD on generating foreground objects, we introduce an additional AMA loss to align the object mask m with the attention map of SD, given by: LAMA = 1 L L X i=1 |ai \u2212\u03b7(m)|, (4) where L denotes the number of attention layers, and ai is the attention map of i-th attention layer. The function \u03b7(\u00b7) is employed to resize the rendered mask, ensuring its dimensions align with those of the attention maps. Appearance Learning. After obtaining the geometry of the 3D object, our objective is to compute its appearance using the Physically-Based Rendering (PBR) material model [15]. The material model comprises the diffuse term kd \u2208R3, the roughness and metallic term krm \u2208R2, and the normal variation term kn \u2208R3. 
For any point p \u2208R3 on the surface of the geometry, we utilize an MLP parameterized by \u03a6mat to obtain the three material terms, which can be expressed as follows: (kd, kn, krm) = MLP (P(p); \u03a6mat) , (5) where P(\u00b7) represents the positional encoding using a hashgrid technique [20]. Subsequently, each pixel of the rendered image can be computed as follows: V (p, \u03c9) = Z \u2126 Li(p, \u03c9i)f(p, \u03c9i, \u03c9)(\u03c9i \u00b7 np)d\u03c9i, (6) where V (p, \u03c9) denotes the rendered pixel value from the direction \u03c9 for the surface point p. \u2126denotes a hemisphere \fdefined by the set of incident directions \u03c9i satisfying the condition \u03c9i \u00b7 np \u22650, where \u03c9i denotes the incident direction, and np represents the surface normal at point p. Li(\u00b7) corresponds to the incident light from an off-the-shelf environment map, and f(\u00b7) is the Bidirectional Reflectance Distribution Function (BRDF) related to the material properties (i.e., kd, kn, krm). By aggregating all rendered pixel colors, we obtain a rendered image x = {V (p, \u03c9)}. Similar to the geometry modeling stage, we feed the rendered image x into SD. The optimization objective remains the same as Eq. (3) and Eq. (4), where the rendered normal map n and the parameters of DMTET \u03a6det are replaced with the rendered image x and the parameters of the material encoder \u03a6mat, respectively. 3.2. Camera-Guided Low-Rank Adaptation The domain gap between text-to-2D and text-to-3D generation presents a significant challenge, as discussed in Sec. 1. It has been observed that directly utilizing pretrained SD for text-to-3D generation can result in certain issues, such as the Janus Problem [1, 12]. To address these issues, we propose Camera-Guided Low-Rank Adaptation (CG-LoRA) as a solution to bridge the domain gap. As depicted in Fig. 20, we leverage camera parameters and direction-aware text to guide the generation of parameters in CG-LoRA, enabling X-Dreamer to effectively incorporate camera perspective and direction information. Specifically, given a text prompt T and camera parameters C = {x, y, z, \u03d5yaw, \u03d5pit, \u03b8fov} 1, we initially project these inputs into a feature space using the pretrained textual CLIP encoder Etxt(\u00b7) and a trainable MLP Epos(\u00b7): t = Etxt(T), (7) c = Epos(C), (8) where t \u2208Rdtxt and c \u2208Rdcam are textual features and camera features. Subsequently, we employ two low-rank matrices to project t and c into trainable dimensionalityreduction matrices within CG-LoRA: Atxt = Reshape(tW txt), (9) Acam = Reshape(cW cam), (10) where Atxt \u2208 Rd\u00d7 r 2 and Acam \u2208 Rd\u00d7 r 2 are two dimensionality-reduction matrices of CG-LoRA. The function Reshape(\u00b7) is used to transform the shape of a tensor from Rd\u2217r 2 to Rd\u00d7 r 2 . 2 W txt \u2208Rdtxt\u00d7(d\u2217r 2) and W cam \u2208Rdcam\u00d7(d\u2217r 2) are two low-rank matrices. Thus, 1The variables x, y, z, \u03d5yaw, \u03d5pit, \u03b8fov represent the x, y, z coordinates, yaw angle, pitch angle of the camera, and field of view, respectively. The roll angle \u03d5roll is intentionally set to 0 to ensure the stability of the object in the rendered image. 2Rd\u2217r 2 denotes a one-dimensional vector. Rd\u00d7 r 2 represents a twodimensional matrix. 
we decompose them into the product of two matrices to reduce the trainable parameters in our implementation, i.e., W txt = U txtV txt and W cam = U camV cam, where U txt \u2208Rdtxt\u00d7r\u2032, V txt \u2208Rr\u2032\u00d7(d\u2217r 2 ), U cam \u2208Rdcam\u00d7r\u2032, V cam \u2208Rr\u2032\u00d7(d\u2217r 2 ), r\u2032 is a small number (i.e., 4). In accordance with LoRA [7], we initialize the dimensionalityexpansion matrix B \u2208Rr\u00d7d with zero values to ensure that the model begins training from the pretrained parameters of SD. Thus, the feed-forward process of CG-LoRA is formulated as follows: y = xW + [xAtxt; xAcam]B, (11) where W \u2208Rd\u00d7d represents the frozen parameters of the pretrained SD model, and [\u00b7; \u00b7] is the concatenation operation alone the channel dimension. In our implementation, we integrate CG-LoRA into the linear embedding layers of the attention modules in SD to effectively capture direction and camera information. 3.3. Attention-Mask Alignment Loss Although SD is pretrained to generate 2D images that encompass both foreground and background elements, the task of text-to-3D generation demands a stronger focus on generating foreground objects. To address this specific requirement, we introduce Attention-Mask Alignment (AMA) Loss, which aims to align the attention map of SD with the rendered mask image of the 3D object. Specifically, for each attention layer in the pretrained SD, we compute the attention map between the query image feature Q \u2208RH\u00d7h\u00d7w\u00d7 d H and the key CLS token feature K \u2208RH\u00d7 d H . The calculation is formulated as follows: \u00af a = Softmax(QK\u22a4 \u221a d ), (12) where H denotes the number of attention heads in SD, and \u00af a \u2208RH\u00d7h\u00d7w represents the attention map. Subsequently, we proceed to compute the overall attention map \u02c6 a \u2208Rh\u00d7w by averaging the attention values of \u00af a across all attention heads. Since the attention map values are normalized using the softmax function, the activation values in the attention map may become very small when the image feature resolution is high. However, considering that each element in the rendered mask has a binary value of either 0 or 1, directly aligning the attention map with the rendered mask is not optimal. To address this, we propose a normalization technique that maps the values in the attention map from 0 to 1. This normalization process is formulated as follows: a = \u02c6 a \u2212min(\u02c6 a) max(\u02c6 a) \u2212min(\u02c6 a) + \u03bd , (13) where \u03bd represents a small constant value (e.g., 1e-6) that prevents division by zero in the denominator. Finally, we \fA sliced loaf of fresh bread. A rocket. A cabbage, highly detailed. A plate piled high with chocolate chip cookies. A strawberry. A chocolate cupcake, highly detailed. A DSLR photo of a brown cowboy hat. A hamburger. Figure 4. Text-to-3D generation results from an ellipsoid. Barack Obama\u2019s head. A beautifully carved wooden queen chess piece. A beautifully carved wooden queen chess piece. A corgi, highly detailed. Figure 5. Text-to-3D generation results from coarse-grained guided meshes. align the attention maps of all attention layers with the rendered mask of the 3D object using the AMA loss. The formulation of this alignment is presented in Eq. (4). 4. Experiments 4.1. Implementation Details. We conduct the experiments using four Nvidia RTX 3090 GPUs and the PyTorch library [21]. 
To calculate the SDS loss, we utilize the Stable Diffusion implemented by HuggingFace Diffusers [34]. For the DMTET \u03a6dmt and material encoder \u03a6mat, we implement them as a two-layer MLP and a single-layer MLP, respectively, with a hidden dimension of 32. The values of dcam, dtxt, r, r\u2032, the batch size, the SDS loss weight, the AMA loss weight, and the aspect ratio of the perspective projection plane are set to 1024, 1024, 4, 4, 4, 1, 0.1, and 1 respectively. We optimize XDreamer for 2000 iterations for geometry learning and 1000 iterations for appearance learning. For each iteration, \u03d5pit, \u03d5yaw, and \u03b8fov are randomly sampled from (\u221215\u25e6, 45\u25e6), (\u2212180\u25e6, 180\u25e6), and (25\u25e6, 45\u25e6), respectively. 4.2. Results of X-Dreamer Text-to-3D generation from an ellipsoid. We present representative results of X-Dreamer for text-to-3D generation, utilizing an ellipsoid as the initial geometry, as shown in Fig. 4. The results demonstrate the ability of X-Dreamer to generate high-quality and photo-realistic outputs that accurately correspond to the input text prompts. Text-to-3D generation from coarse-grained meshes. While there is a wide availability of coarse-grained meshes for download from the internet, directly utilizing these meshes for 3D content creation often results in poor performance due to the lack of geometric details. However, when compared to a 3D ellipsoid, these meshes may provide better 3D shape prior information for X-Dreamer. Hence, in\fProlificDreamer X-Dreamer DreamFusion Magic3D Fantasia3D Prompt: An onion, highly detailed. Prompt: A statue of Leonardo DiCaprio's head. Prompt: A pumpkin, highly detailed, 8K, HD. Figure 6. Comparison with State-of-the-Art (SOTA) methods. Our method yields results that exhibit enhanced fidelity and more details. stead of using ellipsoids, we can initialize DMTET with coarse-grained guided meshes as well. As shown in Fig. 5, X-Dreamer can generate 3D assets with precise geometric details based on the given text, even when the provided coarse-grained mesh lacks details. For instance, in the last column of Fig. 5, X-Dreamer accurately transforms the geometry from a cow to a corgi based on the text prompt \u201cA corgi, highly detailed.\u201d Therefore, X-Dreamer is also an exceptionally powerful tool for editing coarse-grained mesh geometry using textual inputs. Qualitative Comparison. To assess the effectiveness of XDreamer, we compare it with four SOTA methods: DreamFusion [22], Magic3D [13], Fantasia3D [4], and ProlificDreamer [36], as depicted in Fig. 6. When compared to the SDS-based methods [13, 22, 36], X-Dreamer outperforms them in generating superior-quality and realistic 3D assets. In addition, when compared to the VSD-based method [36], X-Dreamer produces 3D content with comparable or even better visual effects, while requiring significantly less optimization time. Specifically, the geometry and appearance learning process of X-Dreamer requires only approximately 27 minutes, whereas ProlificDreamer exceeds 8 hours. 4.3. Ablation Study Ablation on the proposed modules. To gain insights into the abilities of CG-LoRA and AMA loss, we perform ablation studies wherein each module is incorporated individually to assess its impact. As depicted in Fig. 7, the ablation results demonstrate a notable decline in the geometry and appearance quality of the generated 3D objects when CGLoRA is excluded from X-Dreamer. For instance, as shown in the second row of Fig. 
7, the generated Batman lacks an ear on the top of its head in the absence of CG-LoRA. This observation highlights the crucial role of CG-LoRA in injecting camera-relevant information into the model, thereby enhancing the 3D consistency. Furthermore, the omission of AMA loss from X-Dreamer also has a deleterious effect on the geometry and appearance fidelity of the generated 3D assets. Specifically, as illustrated in the first row of Fig. 7, X-Dreamer successfully generates a photorealistic texture for the rocket, whereas the texture quality noticeably deteriorates in the absence of AMA loss. This disparity can be attributed to AMA loss, which directs the focus of the model towards foreground object generation, ensuring the realistic representation of both geometry and appearance of foreground objects. These ablation studies provide valuable insights into the individual contributions of CG-LoRA and AMA loss in enhancing the geometry, appearance, and overall quality of the generated 3D objects. Attention map comparisons w/ and w/o AMA loss. AMA loss is introduced with the aim of guiding attention during the denoising process towards the foreground object. This objective is achieved by aligning the attention map of SD with the rendered mask of the 3D object. To evaluate the effectiveness of AMA loss in accomplishing this goal, we visualize the attention maps of SD with and without AMA \fw/o AMA Loss w/o CG-LoRA X-Dreamer Prompt: A 3D rendering of Batman, highly detailed. Prompt: A DSLR photo of Lord Voldemort's head, highly detailed. Prompt: A rocket. Figure 7. Ablation studies of the proposed X-Dreamer. Attention Map Rendered Mask Rendered Image Geometry Learning Appearance Learning w/ AMA Loss w/o AMA Loss Attention Map Rendered Mask Rendered Image Prompt: A DSLR photo of a cat, highly detailed. Figure 8. Visualization of Attention Map, Rendered Mask, and Rendered Image with and without AMA Loss. For clarity, we only visualize the attention map of the first attention layer in SD. loss at both the geometry learning and appearance learning stages. As depicted in Fig. 8, it can be observed that incorporating AMA loss not only results in improved geometry and appearance of the generated 3D asset, but also concentrates the attention of SD specifically on the foreground object area. The visualizations confirm the efficacy of AMA loss in directing the attention of SD, resulting in improved quality and foreground object focus during geometry and appearance learning stages. 5." + }, + { + "url": "http://arxiv.org/abs/2303.15764v2", + "title": "X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance", + "abstract": "Text-driven 3D stylization is a complex and crucial task in the fields of\ncomputer vision (CV) and computer graphics (CG), aimed at transforming a bare\nmesh to fit a target text. Prior methods adopt text-independent multilayer\nperceptrons (MLPs) to predict the attributes of the target mesh with the\nsupervision of CLIP loss. However, such text-independent architecture lacks\ntextual guidance during predicting attributes, thus leading to unsatisfactory\nstylization and slow convergence. To address these limitations, we present\nX-Mesh, an innovative text-driven 3D stylization framework that incorporates a\nnovel Text-guided Dynamic Attention Module (TDAM). 
The TDAM dynamically\nintegrates the guidance of the target text by utilizing text-relevant spatial\nand channel-wise attentions during vertex feature extraction, resulting in more\naccurate attribute prediction and faster convergence speed. Furthermore,\nexisting works lack standard benchmarks and automated metrics for evaluation,\noften relying on subjective and non-reproducible user studies to assess the\nquality of stylized 3D assets. To overcome this limitation, we introduce a new\nstandard text-mesh benchmark, namely MIT-30, and two automated metrics, which\nwill enable future research to achieve fair and objective comparisons. Our\nextensive qualitative and quantitative experiments demonstrate that X-Mesh\noutperforms previous state-of-the-art methods.", + "authors": "Yiwei Ma, Xiaioqing Zhang, Xiaoshuai Sun, Jiayi Ji, Haowei Wang, Guannan Jiang, Weilin Zhuang, Rongrong Ji", + "published": "2023-03-28", + "updated": "2023-08-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction In recent years, 3D asset creation through stylization, i.e., transforming bare meshes to match text prompts [39, 6, *Corresponding author; \u2021Equal contributions. Neural Style Network Steve Jobs in a red sweater, blue jeans, brown leather shoes and colorful gloves . X-Mesh Steve Jobs in a red sweater, blue jeans, brown leather shoes and colorful gloves . Bare Mesh CLIP Loss CLIP Loss Textual Guidance (a) Previous framework (b) The proposed X-Mesh stable & accurate stylization fast convergence slow convergence sometimes unsatisfactory Bare Mesh Stylized Mesh Stylized Mesh Figure 1. (a) A typical text-driven 3D stylization framework. (b) Our proposed X-Mesh framework. X-Mesh achieves better stylization and faster convergence. 65], images [63, 77], and 3D shapes [74], has received significant attention in the fields of computer vision and graphics [14, 15, 21]. The resulting stylized 3D assets are applied to a range of practical applications, such as gaming, virtual reality, and film. Among the stylization techniques available, text-driven 3D stylization is particularly user-friendly, as text prompts are more readily available than images or 3D shapes. However, creating stylized 3D assets through text input presents a significant challenge due to the significant gap between visual and linguistic information. The emergence of Contrastive Language-Image Pretraining (CLIP) [46] has made it possible to achieve textdriven 3D stylization. Recently, Text2Mesh [39] and TANGO [6] have made significant contributions in this field by predicting the attributes of each vertex on the mesh with the supervision of CLIP loss. Specifically, Text2Mesh prearXiv:2303.15764v2 [cs.CV] 4 Aug 2023 \fdicts the color and displacement of each mesh vertex to generate a stylized mesh that aligns with the target text prompt. Similarly, TANGO employs neural networks to forecast diffuse, roughness, specular, and normal maps to create photorealistic 3D meshes following a comparable approach. Despite achieving impressive results, existing textdriven 3D stylization methods have limitations that hinder their effectiveness and efficiency. One major drawback is their failure to fully consider the semantics of the input text during the prediction of mesh vertex attributes. Current methods only rely on CLIP loss to align the rendered images from the stylized mesh with the text prompt, without any additional textual semantic guidance during predicting vertex attributes. 
Such approaches lead to several issues, including unsatisfactory stylization and slow convergence. For instance, as shown in Fig. 1(a), conventional neural style networks do not utilize textual guidance during attribute prediction. As a result, the predicted vertex attributes may not align with the semantic context of the target text prompt, leading to an inconsistent stylized mesh. Moreover, the lack of additional text guidance makes it difficult to rapidly converge to an acceptable result. Typically, previous methods require over 500 iterations (equivalent to over 8 minutes of training) to attain stable stylized outcomes, which is impractical for users. To address the issues of inconsistency and slow convergence in conventional neural style networks, we propose X-Mesh, a framework that leverages textual semantic guidance to predict vertex attributes. As shown in Fig. 1(b), X-Mesh produces high-quality stylized results that are consistent with the input text. Besides, with textual guidance during vertex attribute prediction, X-Mesh usually achieves stable results in just 200 iterations (approximately 3 minutes of training). Our approach relies on a novel Textguided Dynamic Attention Module (TDAM) for text-aware attribute prediction. Fig. 2(b) illustrates how spatial and channel-wise attentions are employed in TDAM to extract text-relevant vertex features. Notably, the parameters of the attention modules are dynamically generated by textual features, which makes the vertex features prompt-aware. Additionally, the quality evaluation of the stylized results from existing text-driven 3D stylization methods [6, 39] poses a significant challenge. This challenge is mainly reflected in two aspects. Firstly, the lack of a standard benchmark for the text-driven 3D stylization problem presents a challenge in evaluating the effectiveness of existing methods. Without fixed text prompts and meshes, the results obtained from previous methods are incomparable. This in turn hinders progress and the development of more effective solutions. Secondly, the current evaluation of stylized 3D assets relies heavily on user studies, which is a timeconsuming and expensive process. Furthermore, this evaluation method is also subject to individual interpretation, which further hinders the reproducibility of results. To address the aforementioned challenges, we propose a standardized text-mesh benchmark and two automatic evaluation metrics for the fair, objective, and reproducible comparison of text-driven 3D stylization methods. The proposed benchmark, called Mesh wIth Text (MIT-30), contains 30 categories of bare meshes, each of which is annotated with 5 different text prompts for diverse stylization. The proposed two evaluation metrics aims to overcome the limitations of subjective and non-reproducible user studies used in prior work. Specifically, we render 24 images of the stylized 3D mesh from fixed elevation and azimuth angles, and propose two metrics, Multi-view Expert Score (MES) and Iteration for Target Score (ITS), to evaluate the stylization quality and convergence speed. This paper presents two main contributions: \u2022 We propose X-Mesh that incorporates a novel textguided dynamic attention module (TDAM) to improve the accuracy and convergence speed of 3D stylization. \u2022 We construct a standard benchmark and propose two automatic evaluation metrics, which facilitate objective and reproducible assessments of text-driven 3D stylization techniques, and may aid in advancing this field of research. 2. 
Related Work 2.1. Text-to-Image Manipulation/Generation Several previous works have attempted to combine GAN and CLIP to achieve text-to-image generation [4, 50, 72]. Specifically, StyleGAN [26, 27, 25, 56] focuses on the latent space to enable better control over generated images. Building on StyleGAN, StyleCLIP [43] leverages the guidance of CLIP to realize text-to-image generation. DAEGAN [52] uses a dynamic perception module to comprehensively perceive text information as a development architecture of GAN. Stack-GAN [75, 76] divides the task into two stages, generating basic color and shape constraints of the objects described in the text and then adding more details to produce high-quality images with high resolution. VQGAN [11] improves the performance of visual generation on multiple tasks. MirrorGAN [45] combines the global-to-local attention mechanism with a text-to-imageto-text framework to preserve semantics effectively. Meanwhile, diffusion models have made significant contributions to image generation. DALL-E [49] and CogView [8, 9, 17] are based on transformer and parallel auto-regressive architectures. GLIDE [12] leverages classifier-free guidance for image generation and restoration after fine-tuning. DALL-E2 [48] generates original and realistic images given a text prompt by encoding image features according to the text features of CLIP and then decod\fing them via a diffusion model. EDiff-I [2] trains a textto-image diffusion model for different synthesis stages to achieve high visual quality. Imagen [54] benefits from the semantic encoding ability of the large pre-trained language model T5 [47] and the diffusion model in generating highfidelity images. 2.2. Text-to-3D Manipulation/Generation The field of text-to-3D generation has seen significant advancements with the development of text-to-image techniques. Among these techniques, some NeRF-based methods have shown promise, especially when used in combination with CLIP. Some notable examples of such methods include CLIP-NeRF [64], PureCLIPNeRF [28], and DreamFields [22]. Additionally, recent studies have explored the fusion of CLIP with other algorithms, such as ISS [31] with SVR [41], CLIP-Forge [55] using a normalizing flow network [10], and AvatarCLIP [16] leveraging SMLP [34]. Furthermore, the diffusion model [53] has recently demonstrated impressive results in text-to-image generation, leading to its integration into the text-to-3D generation process. Examples of studies that have incorporated the diffusion model into their generation process include DreamFusion [44], Magic3D [30], and Dream3D [70]. Besides, mesh-based stylization is also widely researched due to its wide applicability. Traditionally, the stylization of bare meshes in computer graphics requires professional knowledge. However, recent studies [59, 13] have made strides in the automation of stylizing 3D representations using text prompts. For instance, CLIPMesh [40] uses CLIP and loop subdivision [33] to achieve 3D asset generation. While TANGO [6] incorporates reflection knowledge, it is limited in shape manipulation. Text2Mesh [39], on the other hand, predicts both color and displacement of each vertex to achieve stronger stylization. This paper proposes a text-guided dynamic attention module in the vertex attribute prediction phase. This module not only leads to a better stylization effect but also achieves a fast convergence speed. 2.3. 
Attention Mechanism Attention mechanism is a widely-used technique in deep learning that has been applied to a variety of tasks, including computer vision [20, 66, 18, 19], natural language processing [35, 58, 62], and multimodal fields [36, 71, 37, 38, 73, 24]. The concept of attention was first introduced in the context of neural machine translation by Bahdanau et al. [1], who proposed a model that learns to align the source and target sentences by focusing on different parts of the source sentence at each decoding step. Since then, various attention mechanisms have been proposed to improve the performance of different models. For example, Hu et al. [20] proposed channel attention to enhance the image recognition ability of the model. Woo et al. [68] leveraged both channel attention and spatial attention to focus on important areas and channels. Ye et al. [73] introduced dynamic attention for visual grounding, where different visual features are generated for different referring expressions. Self-attention [62], which is an effective global attention mechanism first proposed for NLP tasks, has been widely used to improve the performance of different models. Wang et al. [67] introduced a non-local attention mechanism for video understanding tasks. Liu et al. [32] improved selfattention by introducing shifted windows, which enhances the local perception ability of the model. In this paper, we propose a text-guided dynamic attention mechanism for text-driven 3D stylization, which enables the spatial (vertex) and channel information of the input mesh to be dynamically focused based on the target text prompt. 3. Approach In this section, we first explain the overall architecture of X-Mesh in Sec. 3.1. Then, we provide the details of the proposed Text-guided Dynamic Attention Module in Sec. 3.2. 3.1. Architecture An illustration of the proposed X-Mesh is shown in Fig. 2(a). The goal of X-Mesh is to modify an input mesh to match a given text prompt by predicting its appearance and geometry. Specifically, an input mesh M is defined as a set of vertices V \u2208Rn\u00d73 and faces F \u2208{1, . . . , n}m\u00d73, which are kept constant during training. Here, n and m denote the number of vertices and faces, respectively. Given an input mesh and a target text prompt, X-Mesh predicts the appearance attribute (i.e., the color offset \u2206Cp \u2208R3) and the geometry attribute (i.e., the position offset \u2206Pp \u2208R3) of each vertex p \u2208V, and finally generates a stylized mesh MS that conforms to the target text. We start by initializing the color of each vertex to (0.5, 0.5, 0.5) and normalizing the vertex coordinates to fit within a unit cube. To synthesize high-frequency details, we apply positional encoding using Fourier feature mappings to each vertex. Specifically, given a vertex p \u2208V of the mesh, we compute the positional encoding PE(p) as follows: PE(p) = [cos(2\u03c0Bp), sin(2\u03c0Bp)]T, (1) where B \u2208RC\u00d73 is a random Gaussian matrix, and each value in this matrix is randomly sampled from a normal distribution with mean 0 and variance \u03c32. Then, the proposed TDAM takes in the vertex positional encoding feature PE(p), which is dynamically processed under the guidance of the target text prompt. The resulting feature is further passed through two MLP branches, the Color MLP fC(\u00b7) and the Position MLP fP (\u00b7), which generate the color offset \u2206Cp and the position offset \u2206Pp, \fA 3D rendering of Steve Jobs in unreal engine. 
TDAM Positional Encoding Render Augmentation Dynamic MLP Dynamic MLP Channel Pooling Spatial Pooling Vertex Feature Textual Feature (a) The proposed X-Mesh (b) Text-Guided Dynamic Attention Position Offset < \u2206\ufffd, \u2206\ufffd, \u2206\ufffd> Color Offset < \u2206\ufffd, \u2206\ufffd, \u2206\ufffd > ( ) C f \uf0d7 ( ) P f \uf0d7 CLIP L Figure 2. (a) Illustration of the proposed X-Mesh model, which modifies the appearance and geometry of the input mesh according to the text prompt. (b) An overview of TDAM, which aims to process vertex features under the guidance of target text. respectively. Following [39], the position offset \u2206Pp is constrained to a small value, specifically |\u2206Pp|2 \u22640.1, to prevent excessive deformation. The new color and position attributes of each point are defined as C\u2032 p = Cp + \u2206Cp and P \u2032 p = Pp +\u2206Pp, respectively. Here, Cp \u2208R3 and Pp \u2208R3 represent the RGB color and coordinates of p on the original input mesh, respectively. To enhance geometry, a gray stylized mesh MS gray is used, which has the same geometry as MS but the color of all vertices are set to gray. We employ an interpolation-based differentiable renderer [5] for MS and MS gray from n\u03b8 different views. For each view \u03b8, we could obtain two rendered images, i.e., Icolor \u03b8 for MS and Igray \u03b8 for MS gray. We then apply 2D augmentation \u03c8(\u00b7) to each rendered image, and extract their features using the CLIP visual encoder Ev(\u00b7) [46]. We obtain the final feature representation by averaging the features across all views, which can be formulated as follows: \u03d5color = 1 n\u03b8 X \u03b8 Ev(\u03c8(Icolor \u03b8 )), (2) \u03d5gray = 1 n\u03b8 X \u03b8 Ev(\u03c8(Igray \u03b8 )), (3) To align the rendered images and the target text in CLIP space, we adopt CLIP textual encoder Et(\u00b7) to embed the text prompt. The framework is trained using the CLIP loss, and the training objective can be formulated as: L = \u2212sim (\u03d5color, Et(T )) \u2212sim (\u03d5gray, Et(T )) , (4) where T represents the target text prompt, and sim(a, b) denotes the cosine similarity between a and b. 3.2. Text-guided Dynamic Attention Module Previous works on text-driven 3D stylization have been limited by their inability to fully exploit the target text to guide the prediction of vertex attributes, resulting in suboptimal stylization results. To address this limitation, we propose a novel Text-guided Dynamic Attention Module (TDAM) that leverages the target text to guide the attribute prediction process. An overview of our approach is shown in Fig. 2(b), which illustrates how TDAM calculates textrelated vertex attention at both channel and spatial levels. Our proposed TDAM is based on a dynamic linear layer, whose parameters are generated dynamically based on the target textual features. We first explain how the dynamic linear layer is implemented and then describe how we design TDAM based on this layer to compute text-aware dynamic channel and spatial attention maps. Dynamic Linear Layer. Existing text-driven 3D stylization methods use static MLPs to predict the attributes of each vertex on the mesh. However, since the parameters of these MLPs are randomly generated, the target text cannot provide additional guidance during attribute prediction. To address this limitation, we propose a dynamic linear layer, whose parameters are generated based on the target textual feature Ft \u2208RDt. 
The dynamic linear layer is defined as follows: xout = xinWt + bt, (5) where xin \u2208RDin and xout \u2208RDout represent the input and output vectors of the dynamic linear layer, respectively. The trainable parameters of the dynamic linear layer are denoted as Md \u2208R(Din+1)\u00d7Dout = {Wt \u2208 RDin\u00d7Dout, bt \u2208RDout}, which are generated based on the target textual feature Ft. A straightforward method to generate dynamic parameters is to use a plain linear layer, defined as follows: Md = FtWm + bm, (6) where Wm \u2208 RDt\u00d7(Din+1)\u2217Dout and bm \u2208 R(Din+1)\u2217Dout. However, this method requires a large number of trainable parameters, specifically \f(Dt + 1) \u2217(Din + 1) \u2217Dout, which can result in an unaffordable training cost and overfitting. Thus, we use matrix decomposition to reduce the number of trainable parameters. Specifically, We decompose Md \u2208 R(Din+1)\u00d7Dout into U \u2208R(Din+1)\u00d7K and V \u2208RK\u00d7Dout, where K is a hyper-parameter that determines the compression ratio. It can be formulated as follows: Md = UV, (7) where U is a parameter matrix dynamically generated from Ft and V is a static trainable matrix. The formulation of U is presented as follows: U = \u03a6(FtWl + bl), (8) where Wl \u2208RDt\u00d7(Din+1)\u2217K and bl \u2208R(Din+1)\u2217K. \u03a6(\u00b7) is a reshape function that transfers the input from R(Din+1)\u2217K to R(Din+1)\u00d7K. Through the matrix decomposition technique, the number of trainable parameters is reduced from (Dt + 1) \u00d7 (Din + 1) \u2217Dout to (Dt + 1) \u00d7 (Din + 1) \u2217K + K \u00d7 Dout, which saves on additional training cost and avoids the risk of over-fitting. Dynamic Channel and Spatial Attention. As explained earlier, our goal is to obtain vertex features that are sensitive to the target text. To achieve this, we propose a Text-guided Dynamic Attention Module (TDAM) that builds upon the dynamic linear layer and comprises two types of attention mechanisms, i.e., channel attention and spatial attention. The key element of TDAM is the dynamic MLP, which comprises two dynamic linear layers separated by a ReLU activation function. Inspired by squeeze-and-excitation networks [20], the input and output dimensions of the dynamic MLP are identical, while the hidden dimension is reduced by a factor r. In TDAM, the objective of channel attention is to activate the channels of the vertex feature that are related to the target text. Specifically, given the vertex feature Fv \u2208RNv\u00d7Dv, where Nv is the number of vertices of the input mesh and Dv is the channel dimension of the input mesh, we first pass it through a dynamic MLP and then aggregate spatial dimensions through average pooling. To obtain the channel-wise attention map, we normalize the values to a range of 0 to 1 using the Sigmoid activation function as follows: Aca = \u03c3 1 Nv Nv X i=1 \u03b71(Fv)[i, :] ! , (9) where Aca \u2208R1\u00d7Dv denotes the channel-wise attention map, \u03c3(\u00b7) represents the Sigmoid function, and \u03b71(\u00b7) refers to the dynamic MLP. To obtain the channel-activated vertex feature F\u2032 v \u2208RNv\u00d7Dv, we take the element-wise product of Fv and Aca as follows: F\u2032 v = Fv \u2297Aca, (10) where \u2297is the element-wise product. The goal of spatial attention in TDAM is to activate the vertices that are related to the target text. 
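Before turning to the spatial branch, the dynamic parameter generation of Eqs. (5)–(8) can be sketched as follows. This is a minimal reimplementation under our own naming rather than the released code; shapes follow the text, with K the compression ratio of the decomposition.

```python
import torch
import torch.nn as nn

class DynamicLinear(nn.Module):
    """Linear layer whose parameters are generated from the text feature F_t.

    Following Eqs. (5)-(8): M_d = U V, where U = Phi(F_t W_l + b_l) is
    text-conditioned and V is a static trainable matrix; K controls the
    compression ratio of the decomposition.
    """

    def __init__(self, d_in, d_out, d_text, k=30):
        super().__init__()
        self.d_in, self.d_out, self.k = d_in, d_out, k
        # Generates U in R^{(d_in+1) x K} from the textual feature (Eq. 8).
        self.to_u = nn.Linear(d_text, (d_in + 1) * k)
        # Static factor V in R^{K x d_out} (Eq. 7).
        self.v = nn.Parameter(torch.randn(k, d_out) * 0.02)

    def forward(self, x, f_text):
        # x: (N_v, d_in) vertex features; f_text: (d_text,) textual feature.
        u = self.to_u(f_text).view(self.d_in + 1, self.k)   # Phi(.) reshape
        m = u @ self.v                                       # (d_in+1, d_out)
        w, b = m[:-1], m[-1]                                 # split into W_t, b_t
        return x @ w + b                                     # Eq. (5)
```

As described above, the dynamic MLP used inside TDAM stacks two such layers with a ReLU in between, keeping input and output dimensions equal while reducing the hidden dimension by the factor r.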
First, we feed the channel-activated vertex feature F\u2032 into another dynamic MLP and aggregate the channel dimensions using the average function. The output is then normalized using the Sigmoid activation function as follows: Asa = \u03c3 \uf8eb \uf8ed1 Dv Dv X j=1 \u03b72(F\u2032 v)[:, j] \uf8f6 \uf8f8, (11) where Asa \u2208RNv\u00d71, and \u03b72(\u00b7) is a dynamic MLP with non-shared parameters with \u03b71(\u00b7). Finally, to obtain the spatially-activated vertex feature F\u2032\u2032 v, we perform elementwise product between F\u2032 v and Asa: F\u2032\u2032 v = F\u2032 v \u2297Asa. (12) 4. Benchmarks and Metrics Benchmark. In this paper, we construct a text-mesh benchmark to standardize the evaluation process of text-driven 3D stylization. The proposed MIT-30 benchmark includes 30 categories of bare meshes, collected from various public 3D datasets such as COSEG [60], Thingi10K [78], Shapenet [3], Turbo Squid [61], and ModelNet [69]. To ensure a diverse range of stylization, each mesh is annotated with five different text prompts. We found that the prompt template of \u2018A 3D rendering of \u00b7 \u00b7 \u00b7 in unreal engine.\u2019 is a good default, so all meshes are annotated with this prompt template if not specified. Metrics. Some previous works [6, 39] have used user studies to evaluate the perceived quality of stylized 3D assets, which is often subjective and non-reproducible. Other works [23, 29] have employed the metric [42] for text-toimage generation to assess the quality of 3D assets. However, this metric does not account for the continuity of 3D assets, as it only measures the similarity between a singleangle rendered image of the 3D asset and the target text. Given that text-driven 3D stylization aims to produce a 3D asset that conforms to the target text, evaluating rendered images from multiple angles is necessary. To enable objective and reproducible comparisons, we propose two automatic metrics that are based on multi-angle rendered images of 3D assets. These metrics will replace manual evaluation in user studies, allowing for a reliable evaluation of text-driven 3D stylization methods. Given a stylized 3D asset, we begin by rendering 24 images I = {Ii}24 i=1 from 24 fixed views, taking into account both azimuth angle \u03b8azi and elevation angle \u03b8ele. For each \fa wooden phoenix a BlueWhale a colourful lamp Figure 3. Text-driven 3D stylization results. X-Mesh provides high-quality stylization results for a collection of prompts and meshes. a colorful candy vase a dark castle Text2Mesh TANGO Ours Original Mesh TEXTure Figure 4. Text-driven 3D stylization results of Text2Mesh [39], TANGO [6], TEXTure [51], and X-Mesh (Ours) given the same mesh and prompt. X-Mesh provides high-quality and realistic stylization results. 3D asset, we establish a standard view where \u03b8azi = 0\u25e6and \u03b8ele = 0\u25e6. Using this standard view as a basis, we leverage 8 azimuth angles (0\u25e6, 45\u25e6, 90\u25e6, 135\u25e6, 180\u25e6, 225\u25e6, 270\u25e6, 315\u25e6) and 3 elevation angles (-30\u25e6, 0\u25e6, 30\u25e6) to render 24 rendered images. To address the subjective and non-reproducible nature of user studies, we use an automatic expert model [7] 1 trained on LAION-400M [57] for evaluation. Based on these 24 rendered images and the expert model, we propose two automatic evaluation metrics. Specifically, MES is used to evaluate the extent to which the stylized 3D asset conforms to the target text, and ITS is used to evaluate the convergence rate of the model. 
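The 24 evaluation views follow directly from the angles listed above; representing each pose as an (azimuth, elevation) pair in degrees is our own convention for this small sketch.

```python
# The 24 fixed evaluation views: 8 azimuth angles x 3 elevation angles,
# expressed here as (azimuth, elevation) tuples in degrees.
AZIMUTHS = [0, 45, 90, 135, 180, 225, 270, 315]
ELEVATIONS = [-30, 0, 30]

EVAL_VIEWS = [(azi, ele) for ele in ELEVATIONS for azi in AZIMUTHS]
assert len(EVAL_VIEWS) == 24
```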
For MES, we first embed the 24 rendered images and the corresponding text prompt into a shared space using the visual and textual encoders of the expert model. Then, we calculate the cosine similarity scores between the rendered images and the corresponding text, and obtain MES by averaging them. The formulation of MES is as follows: MES(MS, T ) = 1 24 24 X i=1 sim \u0010 E\u2032 v(Ii), E\u2032 t(T ) \u0011 , (13) where MS and T is the stylized 3D mesh and the corresponding text prompt, respectively. E\u2032 v(\u00b7) and E\u2032 t(\u00b7) refer to the visual encoder and textual encoder of the expert model. 1https://github.com/mlfoundations/open_clip ITS represents the minimum number of iterations needed to achieve the target MES. For instance, ITS0.3(MS, T ) indicates the minimum number of iterations required when MES(MS, T ) = 0.3. In our experiment, we set the maximum number of training iterations for each mesh to 1200. If a mesh fails to reach the target MES within 1200 iterations, we set ITS of this sample to 2000. The final MES and ITS are obtained by averaging them across all samples in the benchmark. 5. Experiments We conducted all experiments using the public PyTorch library on a single RTX 3090 24GB GPU. We trained our proposed X-Mesh using the Adam optimizer with a learning rate of 5e-4. We set C, n\u03b8, r, \u03c3, and K to 256, 5, 8, 12, and 30, respectively. \u03c8(\u00b7) includes RandomPerspective and RandomResizedCrop. Our method typically achieves high-quality stylized results in just 3 minutes due to its fast convergence rate. In comparison, previous methods [6, 39] typically take more than 8 minutes to produce stable results on the same GPU. In Sec. 5.1, we qualitatively compare X-Mesh with stateof-the-art text-driven 3D stylization approaches on MIT-30. In Sec. 5.2, we conduct the ablation study to explore the effectiveness of the proposed module. Finally, We evaluate our method and previous SOTA methods with quantitative metrics in Sec. 5.3. 5.1. Text-driven Stylization Qualitative Results. Fig. 3 showcases some stylized results generated by X-Mesh for various meshes and driving prompts. The results demonstrate that the stylized meshes are not only faithful to the target text, but also visually plausible. For instance, when given the prompt \u201ca colourful lamp\u201d, X-Mesh produces a lamp with vibrant colors that match the prompt while preserving the lamp\u2019s shape and structure. Moreover, the generated outputs exhibit a high degree of consistency across different viewpoints. For instance, when given the prompt \u201ca wooden phoenix\u201d, the rendered images from different angles exhibit consistent stylization. Qualitative Comparisons. In this comparison study presented in Fig. 4, we provide evidence of the superiority of our proposed method, X-Mesh, over several existing state\fA 3D rendering of a BlueWhale in unreal engine. A 3D rendering of a Ginger cat with black collar in unreal engine. Iter0 Iter100 Iter200 Iter300 Iter400 Iter500 Iter600 Iter700 Iter800 Iter900 Iter1000 w/o TDAM w/ TDAM w/o TDAM w/ TDAM Figure 5. Visualization of text-driven 3D stylization process with and without the proposed TDAM under different iterations. The green box indicates the first iteration to obtain a stable stylization result. 
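Returning to the evaluation protocol, MES in Eq. (13) amounts to an average cosine similarity in the expert model's embedding space. A minimal sketch is given below; it assumes the 24 views have already been rendered and preprocessed to the expert model's expected resolution and normalization, and the open_clip model/checkpoint names are illustrative choices rather than necessarily the exact expert model used here.

```python
import torch
import torch.nn.functional as F
import open_clip

def mes_score(rendered_images, prompt, model_name="ViT-B-32",
              pretrained="laion400m_e32"):
    """Sketch of MES (Eq. 13): mean CLIP-space cosine similarity between the
    24 fixed-view renderings and the target text.

    `rendered_images` is a (24, 3, H, W) tensor of already-rendered and
    CLIP-preprocessed views; the model/pretrained tags are assumptions.
    """
    model, _, _ = open_clip.create_model_and_transforms(
        model_name, pretrained=pretrained)
    tokenizer = open_clip.get_tokenizer(model_name)
    with torch.no_grad():
        img_feats = model.encode_image(rendered_images)     # E'_v(I_i)
        txt_feat = model.encode_text(tokenizer([prompt]))    # E'_t(T)
        sims = F.cosine_similarity(img_feats, txt_feat, dim=-1)
    return sims.mean().item()
```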
Vase w/o TDAM w/ TDAM Soldier Boy w/o TDAM w/ TDAM Candle w/o TDAM w/ TDAM Squirrel w/o TDAM w/ TDAM Phoenix w/o TDAM w/ TDAM Lamp w/o TDAM w/ TDAM Castle w/o TDAM w/ TDAM Dragon w/o TDAM w/ TDAM Bird w/o TDAM w/ TDAM Wardrobe w/o TDAM w/ TDAM Cat w/o TDAM w/ TDAM Treefrog w/o TDAM w/ TDAM Robot w/o TDAM w/ TDAM Bunny Head w/o TDAM w/ TDAM Person w/o TDAM w/ TDAM Blue Whale w/o TDAM w/ TDAM Horse w/o TDAM w/ TDAM Skull w/o TDAM w/ TDAM Chair w/o TDAM w/ TDAM Alien w/o TDAM w/ TDAM Bed w/o TDAM w/ TDAM Monster w/o TDAM w/ TDAM Forklift w/o TDAM w/ TDAM Pig w/o TDAM w/ TDAM Owl w/o TDAM w/ TDAM Tiger w/o TDAM w/ TDAM Sofa w/o TDAM w/ TDAM Vanity T able w/o TDAM w/ TDAM Wooly Sheep w/o TDAM w/ TDAM Chameleon w/o TDAM w/ TDAM Figure 6. The loss change of each mesh category during training for models with and without TDAM, where the loss values of 5 prompts for each mesh are averaged. The x-axis represents the training iteration, and the y-axis is the loss value. Due to the limitation of page space, we have omitted the contents of the x-axis and y-axis. See supplementary materials for a detailed version. of-the-art approaches for text-driven 3D stylization. We observe that Text2Mesh [39] frequently produces unreasonable deformation, which can be attributed to excessive displacement of vertices. For instance, we provide the example of the \u201ca dark castle\u201d in the bottom part of Fig. 4, where Text2Mesh generates several spikes that do not conform to the original structure of the castle. On the other hand, TANGO [6] and TEXTure [51], which do not displace the vertices of the original mesh, do not suffer from the deformation problem observed in Text2Mesh. However, They still has several shortcomings in terms of stylization quality and text understanding. We demonstrate this by showing the example of the \u201ca colorful candy vase\u201d in the top part of Fig. 4, where TANGO and TEXTure simply apply several colors to the vase without taking into account its underlying structure. In contrast, our proposed method, X-Mesh, overcomes both issues and generates textures that conform to the target text through proper displacement and color prediction for each vertex. We attribute this advantage to the introduction of dynamic guidance of text during vertex attribute prediction. By incorporating dynamic textual guidance, our method is able to generate more accurate results that are in line with the target text. 5.2. Ablation Study Convergence Speed. Convergence speed is a crucial factor to consider when assessing the effectiveness of text-driven \fPrompt: Steve Jobs in a red sweater, blue jeans, brown leather shoes and colorful gloves Prompt: a whale with a red head and a blue body w/o TDAM w/ TDAM w/o TDAM w/ TDAM Figure 7. Qualitative comparison of 3D assets generated based on complex prompts without TDAM and with TDAM. 3D stylization. A fast-converging model enables users to obtain the desired 3D asset for a given prompt quickly, while a slow-converging model can be frustrating for users to wait for. The results presented in Fig. 5 demonstrate that our proposed TDAM significantly improves convergence speed, allowing the model to reach an acceptable result in under 100 iterations. In contrast, the model without TDAM requires more than 300 iterations to achieve similar results. Additionally, the TDAM-equipped model reaches stable results in fewer than 300 iterations, while the model without TDAM requires more than 500 iterations. 
This remarkable improvement in convergence speed can be attributed to the TDAM module, which introduces textual guidance in the attribute prediction process. As a result, the proposed model achieves faster convergence speeds, making it an efficient and effective solution for text-driven 3D stylization. Moreover, in Fig. 6, we provide the loss curves for 30 categories of meshes in MIT-30. These curves illustrate that the loss value of the model with TDAM decreases faster than that of the model without TDAM during training. The superior performance of the proposed model suggests that TDAM significantly improves the efficiency and effectiveness of text-driven 3D stylization, making it a highly promising tool for 3D content creation. Overall, these findings underscore the importance of TDAM for text-driven 3D stylization, which can significantly improve the convergence speed and reduce the training time for users. Robustness to Complex Prompts. In this section, we aim to investigate the ability of the proposed X-Mesh to handle complex text prompts with the aid of the TDAM module. To achieve this goal, we conduct several experiments with complex prompts and report our observations as follows: Firstly, we observe that the model without the TDAM MES \u2191 ITS0.22 \u2193 TANGO [6] 23.21 795.47 Text2Mesh [39] 28.85 173.27 X-Mesh 29.26 88.53 Table 1. Qualitative comparison of state-of-the-art methods for text-driven 3D stylization. Note that a higher MES and a lower ITS0.22 is preferable in this table. module is highly susceptible to collapse when presented with complex prompts. In particular, as shown in the first line of Fig. 7, the final stylized mesh exhibits numerous spikes and loses its normal geometry when the model lacks the TDAM module. In contrast, the TDAM-equipped model can accurately predict the appropriate color and geometric attributes that match the target text. Furthermore, we observe that the model without TDAM may fail to capture some critical details in complex prompts. For example, the model without TDAM ignores \u201cblack collar\u201d in the third line of Fig. 5 and \u201ccolorful gloves\u201d in the first line of Fig. 7. By contrast, our method can make accurate predictions through comprehensive text understanding. Overall, our experimental results demonstrate that the TDAM-enhanced model can effectively handle complex text prompts and produce high-quality stylized 3D meshes. 5.3. Quantitative Comparison In previous works, user studies are used to evaluate stylization results. However, this evaluation approach has limitations, as it is subjective and non-reproducible. To overcome these limitations, we propose two automatic evaluation metrics, MES and ITS, which respectively measure the quality of the stylized assets and the convergence speed of stylization models. As presented in Tab. 1, our proposed XMesh outperforms previous methods. Specifically, X-Mesh achieves a 0.41 absolute improvement in MES on MIT30, indicating that our method produces better stylization quality than previous works. Moreover, X-Mesh obtains the lowest ITS0.22, highlighting that our method converges faster than previous methods. The superior performance of our proposed method demonstrated in both MES and ITS, metrics further validates the effectiveness and superiority of X-Mesh over previous methods, and supports its potential for practical applications. 6." 
+ }, + { + "url": "http://arxiv.org/abs/2302.06098v1", + "title": "Towards Local Visual Modeling for Image Captioning", + "abstract": "In this paper, we study the local visual modeling with grid features for\nimage captioning, which is critical for generating accurate and detailed\ncaptions. To achieve this target, we propose a Locality-Sensitive Transformer\nNetwork (LSTNet) with two novel designs, namely Locality-Sensitive Attention\n(LSA) and Locality-Sensitive Fusion (LSF). LSA is deployed for the intra-layer\ninteraction in Transformer via modeling the relationship between each grid and\nits neighbors. It reduces the difficulty of local object recognition during\ncaptioning. LSF is used for inter-layer information fusion, which aggregates\nthe information of different encoder layers for cross-layer semantical\ncomplementarity. With these two novel designs, the proposed LSTNet can model\nthe local visual information of grid features to improve the captioning\nquality. To validate LSTNet, we conduct extensive experiments on the\ncompetitive MS-COCO benchmark. The experimental results show that LSTNet is not\nonly capable of local visual modeling, but also outperforms a bunch of\nstate-of-the-art captioning models on offline and online testings, i.e., 134.8\nCIDEr and 136.3 CIDEr, respectively. Besides, the generalization of LSTNet is\nalso verified on the Flickr8k and Flickr30k datasets", + "authors": "Yiwei Ma, Jiayi Ji, Xiaoshuai Sun, Yiyi Zhou, Rongrong Ji", + "published": "2023-02-13", + "updated": "2023-02-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "main_content": "Introduction Image captioning is the task of generating a \ufb02uent sentence to describe the given image. Recent years have witnessed the rapid development of Preprint submitted to Pattern Recognition February 14, 2023 arXiv:2302.06098v1 [cs.CV] 13 Feb 2023 \fthis \ufb01eld, which is supported by a bunch of innovative methods [1, 2] and datasets [3, 4]. Inspired by the great success of Bottom-up Attention [5], most existing methods in image captioning adopt the region features extracted by the object detector as the visual representations, e.g., Faster R-CNN [6]. Since the detector is pre-trained on the large-scale Visual Genome dataset [7], it can generate discriminative representations for salient regions in the image and provide complete object information for captioning. To this end, signi\ufb01cant progress in image captioning has been made based on the region features [2, 8, 9]. However, the region features still exist obvious defects. To be speci\ufb01c, they are extracted from the salient regions of the image, thus often ignoring the contextual information in the background. In this case, it is inferior for the model to capture the relationship between objects. For example, as shown in Fig. 1(a), the model trained on region features fails to understand the contextual information out of the bounding boxes, thereby incorrectly describing the relationship between \u201cwoman\u201d and \u201chorse\u201d. Besides, the pre-trained object detector may produce noisy, overlapped, or erroneous detections, which ultimately limits the performance upper bound of image captioning models. Grid Feature: A woman standing next to a horse on the beach. Region Feature: A woman riding a horse on the beach. 
horse woman bench \u221a \u00d7 (a) (b) Grid Feature Region Feature Flatten Flatten LSA Region Tokens Grid Tokens Multi-Scale Locality-Sensitive Grid Tokens Figure 1: (a) Captions generated by Transformer with the region and grid features, respectively. (b) Region features often contain complete object information, while the grid ones are more fragmented. Our LSA is conducive to reconstructing complete object information by modeling the relationship of adjacent grids. 2 \fTo compensate for the aforementioned limitations, some endeavors start to revisit the use of grid features. [10] explores grid features of the object detector to further push the performance of the visual question answering (VQA) task. RSTNet [11] and DLCT [12] \ufb01rst adopt grid features in Transformer-like networks, which achieve impressive performance in image captioning. However, Transformer-like architectures are not conducive to the perception of complete objects. Speci\ufb01cally, as shown in Fig. 1(b), a complete object may be divided into multiple adjacent grids in 2D space, while the \ufb02atten operation in Transformer inevitably destroys the local relationship of grid features. Meanwhile, recent advances [13] also show that the vanilla Transformer is less e\ufb03cient in local visual modeling. Based on the above analysis, we observe that both region and grid features have their own advantages and disadvantages. Region features contain explicit object information but lack background and relationship information. In contrast, grid features contain all information at the same time, but an object may be divided into multiple grids. As a result, the majority of the semantic information is damaged, which makes reasoning more challenging. A straightforward solution to enjoy the bene\ufb01ts of both features is adopting both region and grid features like DLCT [12] and GRIT [14]. However, it will lead to signi\ufb01cantly higher computation and longer training time, because the model needs to process both features at the same time. A more e\ufb03cient way is to model the local information on the grid features to compensate for the lack of object information. Therefore, we propose a novel Locality-Sensitive Transformer Network (LSTNet) in this paper. Speci\ufb01cally, LSTNet strengthens local modeling to perceive object-level information from the aspects of intra-layer interaction and inter-layer fusion, respectively. For intra-layer interaction, we propose a novel multi-branch module called Locality-Sensitive Attention (LSA) to perceive \ufb01ne-grained local information from di\ufb00erent receptive \ufb01elds and enhance the interactions between each grid and its neighbors. Notably, LSA can be re-parameterized into a single-branch structure during inference, thereby reducing the additional overhead of multi-scale perception. For inter-layer fusion, we design a Locality-Sensitive Fusion (LSF) module, which can align and fuse grid features from di\ufb00erent layers for cross-layer semantical complementary. With these novel designs, LSTNet improves the ability of of local visual modeling, but also greatly improves the quality of the generated captions. On the competitive MS-COCO benchmark, LSTNet presents outstanding performance on both o\ufb04ine and online testing, i.e., 3 \f134.8 CIDEr and 136.3 CIDEr. In addition to the outstanding performance on the MS-COCO dataset, the generalization of LSTNet is also veri\ufb01ed on the Flickr8k and Flickr30k datasets. 
To sum up, our contributions are three-fold: \u2022 To perceive object and context information with only grid features, we propose a novel LSTNet for image captioning. LSTNet not only improves the local perception ability of the model but also outperforms a bunch of recently proposed methods on the highly competitive MSCOCO benchmark. \u2022 We propose a Locality-Sensitive Attention (LSA) for the intra-layer visual modeling in Transformer, which is a re-parameterized module for enhancing the interaction between each grid feature and its local neighbors. \u2022 We propose a Locality-Sensitive Fusion (LSF) to aggregate inter-layer object semantic information for image captioning, which is conducive to inter-layer semantic understanding. CNN RNN Encoder 1 Encoder N ....... Decoder N Decoder 1 ....... Locality-Sensitive Encoder 1 Locality-Sensitive Encoder N ....... Decoder N Decoder 1 ....... Locality-Sensitive Fusion Module (a) CNN-RNN Model (b) Transformer-based Model (c) LSTNet Figure 2: Illustration of CNN-RNN model (a), Transformer-based model (b), and LSTNet (c) for image captioning. 4 \f2. Related Work 2.1. Image Captioning Image captioning is a challenging task, and enormous e\ufb00ort has been made to solve this problem. With years of development, a great improvement can be observed with a \ufb02urry of methods [5, 8, 11, 15\u201318]. The existing image captioning methods can be roughly divided into two categories: 1) the CNN-RNN model, 2) the Transformer-based model. As shown in Fig. 2(a), the CNN-RNN model uses CNN to encode images into vectorial representations and then adopts an RNN-based decoder to fuse these vectorial representations to provide content-related descriptions for input images. Speci\ufb01cally, [15] uses Convolutional Neural Network (CNN) to encode images and adopts Long Short-Term Memory (LSTM) as a decoder to generate captions. [16] exploits the adaptive attention mechanism to decide whether to attend to visual or non-visual information at each time step. [5] uses the pretrained Fast R-CNN [6] to extract salient objects as regional visual features, which is conducive to generating accurate captions. With the development of Transformer [19], a lot of researchers are investigating the application of Transformer-based models on the image captioning task, which is illustrated in Fig. 2(b). [8] introduces Bi-linear Pooling into the Transformer model to capture 2nd order interactions. [11] proposes to adaptively measure the contribution of visual and language cues on the top of the transformer decoder before word prediction. [12] presented a Dual-level Collaborative Transformer (DLCT) to accomplish the complementarity of the region and grid features. To improve the semantic understanding ability of Transformer, [18] proposes a Transformer-based captioning model with both spatial and channel-wise attention. [20] proposes a Geometry Attention Transformer (GAT) model to further leverage geometric information in image captioning. To consider the visual persistence of object features, [21] introduces a VPNet via inserting visual persistence modules in both the encoder and decoder. [22] proposes a novel CtxAdpAtt model, which adopts the linguistic context to explore related visual relationships between di\ufb00erent objects e\ufb00ectively. To alleviate the disadvantages of using GCN-based encoders to represent the relation information among scene graphs, ReFormer[23] explores a novel architecture to explicitly express the relationship between objects in the image. 
Our LSTNet is in line with the Transformer-base approach. However, when processing grid features, the Transformer ignores visual locality, which is important for identifying objects in the image. As shown in Fig. 2(c), 5 \fwe propose the Locality-Sensitive Attention (LSA) module and LocalitySensitive Attention (LSA) module to enhance local visual modeling. CNN Detector Detector Grid Feature Region Feature Grid Feature Caption Model Caption Model Caption Model (a) Grid Feature of CNN (b) Region Feature of Detector (c) Grid Feature of Detector Figure 3: Three main stages of visual features used in the image captioning task: (a) Grid features extracted by CNN, e.g., ResNet [24]; (b) Region features extracted by the pretrained detector, e.g., Faster R-CNN [6], which requires time-consuming post-processing operations, e.g., NMS [25]; (c) Grid features extracted from the feature map of CNN in the object detector. 2.2. Region features & Grid features The visual features used in image captioning go through three main stages: Grid \u2192Region \u2192Grid. In the \ufb01rst grid stage, some pioneering works [16, 26, 27] adopt grid visual features extracted from CNN [24] to represent images, which is illustrated in Fig. 3(a). For example, [26] \ufb01rst propose the image captioning task, and adopt CNN to encode visual features and RNN to decode the caption. To capture the importance of di\ufb00erent grids in an image, [27] applies the attention mechanism to the visual features before decoding the caption.[16] propose adaptive attention on the visual feature, which is extracted from the last convolutional layer of ResNet101 [24]. As shown in Fig. 3(b), to obtain foreground information, [5] adopts an object detector [6] pre-trained on VG [7] to extract region features, which are widely used in a lot of multi-modal tasks [2, 28]. As shown in Fig. 3(c), to compensate for the defects (e.g., time-consuming) of region features, [10] revisits the grid 6 \ffeature in the object detector, and \ufb01nds that it could achieve competitive performance in VQA, the e\ufb00ectiveness of which has also been validated in image captioning [11, 12]. Compared with previous methods [2, 8, 9] based on region features, our proposed LSTNet based on grid features can capture the contextual information out of bounding boxes, thus generating more accurate captions. On the other hand, compared with existing methods [11] based on grid features, our proposed LSTNet considers the locality of the grid features and models the relationship of neighboring grids, which is conducive to recognizing the objects in the image. DLCT [12] adopts bounding boxes to assist grid features to locate objects. However, due to the adoption of both grid and region features, the model needs to bear more training and prediction overhead, e.g., LSTNet runs over three times faster than DLCT in the cross-entropy training stage, whose performance is severely limited by the accuracy of bounding boxes. Our proposed LSTNet captures intraand inter-layer local relationships, leading to more detailed and \ufb01ner-grained grid features for image captioning. 2.3. Multi-head Self-Attention in Transformer Transformer is originally proposed to solve natural language processing (NLP) tasks. Due to its powerful modeling ability, the transformer has also been widely used in computer vision (CV) and multi-modal tasks in recent years. 
The key component of the transformer is the multi-head self-attention (MSA) module, which can e\ufb00ectively model the relationship and context of the input at di\ufb00erent positions. Speci\ufb01cally, a h-head self-attention is formulated as: MultiHead(Q, K, V) = Concat (head 1, head 2, . . . , head h) WO, (1) where Q, K, V \u2208RN\u00d7d represent the input query, key, and value, respectively. N is the length of input, d is the hidden dimension in each head. WO \u2208Rhd\u00d7d is a learnable matrix for the output of all heads. For each head, the attention is formulated as follows: head i = Attention \u0010 QQ i , KWK i , VWV i \u0011 = softmax QWQ i \u0000KWK i \u0001T \u221a d ! VWV i , (2) 7 \fwhere WQ i , WK i , WV i \u2208Rd\u00d7d are the learnable matrices for input query, key, and value, respectively. 3. Approach Encoder Layer 3 Encoder Layer 2 Encoder Layer 1 Decoder Layer3 Decoder Layer 2 Decoder Layer 1 MLP ... Embed FFN MSA LSA Feature Extractor A elephant laying on the ground in the grass. An baby elephant laying on the grass in a field Add & Norm Add & Norm Concat 1\u00d71 Conv Batch Notm 1\u00d71 Conv Batch Notm Sigmoid n\u00d7n Conv Batch Notm n\u00d7n Conv Sigmoid Training Structure Inference Structure LSF 1\u00d71 Conv Batch Notm 1\u00d71 Conv Batch Notm ReLU n\u00d7n Conv Batch Notm ReLU n\u00d7n Conv LSA ... \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \ufffd0 \ufffd1 \ufffd2 \ufffd1 \ufffd2 \ufffd3 Figure 4: Overview of our proposed LSTNet. Grid features are used as visual representations and fed to the visual encoder. Locality-Sensitive Attention (LSA) is inserted after self-attention to capture local dependencies. We also apply reparameterization technology to reduce its overhead during inference. The output of encoding layers is further processed by Locality-Sensitive Fusion (LSF) to achieve cross-layer aggregation, which can also provide the semantics of di\ufb00erent layers for local visual modeling. MSA and FFN represent Multi-head Self-Attention and Feed-Forward Networks, respectively. 3.1. Overview As shown in Fig. 4, our proposed Locality-Sensitive Transformer Network (LSTNet) follows the encoder-decoder paradigm. Concretely, the encoder takes the visual features as input and then models their relationships by the encoder layers, where Locality-Sensitive Attention (LSA) module is adopted to enhance local visual modeling. Then, Locality-Sensitive Fusion (LSF) aggregates the visual features from di\ufb00erent encoding layers, based on which the decoder predicts caption words to describe the given visual content. The visual features before the l-th encoder layer are denoted as Vl\u22121 \u2208 RNv\u00d7c (Nv = h \u00d7 w), where h, w, c represent the height, width and channel dimension of visual features, respectively. 8 \fEach encoder layer of LSTNet consists of three components: (1) a Multihead Self-Attention (MSA) module; (2) a Locality-Sensitive Attention (LSA) module; (3) a Feed-Forward Network (FFN). The visual features Vl\u22121 from the last encoder layer is \ufb01rst processed by the MSA as follows (LayerNorm operation is omitted for conciseness): V \u2032 l\u22121 = Vl\u22121 + MSA(Vl\u22121, Vl\u22121, Vl\u22121), (3) where MSA(\u00b7) is the standard Multi-head Self-Attention in Transformer [19]. Because MSA can model the relationship and context of any two positions in the input sequence, MSA is conducive to capturing long-range dependencies and modeling global information among grids. 
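For reference, the multi-head self-attention of Eqs. (1)–(2) can be written compactly as below. This is the standard formulation in PyTorch-style code, not LSTNet-specific code; d_model = 512 and 8 heads match the settings reported in the implementation details.

```python
import math
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Standard multi-head attention of Eqs. (1)-(2) (illustrative sketch)."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d = n_heads, d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)  # W^O over concatenated heads

    def forward(self, q, k, v):
        b, n, _ = q.shape

        def split(x):  # (B, N, h*d) -> (B, h, N, d)
            return x.view(b, -1, self.h, self.d).transpose(1, 2)

        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        # Eq. (2): scaled dot-product attention per head.
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, self.h * self.d)
        return self.w_o(out)
```

Within each LSTNet encoder layer, the output of this module is added residually to its input as in Eq. (3) before the subsequent locality-sensitive processing.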
Self-Attention is often ine\ufb03cient in capturing the local details, which, however, is critical for grid visual features as explained in Sec.1. Thus, based on V \u2032 l\u22121, LSA is adopted to capture the dependencies of neighboring grids to further re\ufb01ne visual features: V \u2032\u2032 l\u22121 = V \u2032 l\u22121 + LSA(V \u2032 l\u22121), (4) where the detail of LSA is described in the next subsection. Because LSA is composed of cascaded convolution layers, which can model the relationship between adjacent grids, LSA is conducive to modeling local relationships among grids. Then the output of the LSA module is fed to FFN for the interaction in the channel domain: Vl = V \u2032\u2032 l\u22121 + FFN(V \u2032\u2032 l\u22121), (5) FFN(x) = max(0, xW1 + b1)W2 + b2. (6) Di\ufb00erent from previous Transformer-based models, which only feed the output of the top encoder layer to the decoder, our proposed LSF module aggregates visual features from all encoder layers to obtain richer features in semantic by Locality-Sensitive Fusion (LSF): V \u2217= LSF(V1, V2, \u00b7 \u00b7 \u00b7 , Vn), (7) where n is the number of encoder layers. Finally, V \u2217is fed into the decoder to generate the captions, which is the same as that of the vanilla Transformer [19]. 9 \f3.2. Locality-Sensitive Attention (LSA) As visualized in Fig. 1(b), an object in the image may be divided into several fragments and distributed in various grids, which destroys the spatial and semantic information of visual objects. A reasonable approach is to strengthen the interaction of local information, which is also in line with the assumption that features close to each other in vision are more likely to be correlated. Thus, to capture the local details and model the interaction between adjacent grids, we propose a multi-scale locality-sensitive module, namely Locality-Sensitive Attention (LSA). Speci\ufb01cally, the output feature V \u2032 \u2208 RN\u00d7C of the MSA module is a grid sequence, where N is the number of grids, C is the size of channel dimension. We \ufb01rst reshape V \u2032 \u2208RN\u00d7C to V \u2032 \u2208RH\u00d7W\u00d7C, where H, W are the height and width of the grid feature. Then, we use two multi-scale 2D CNNs in series with an activation function (i.e., ReLU) in between to obtain the visual features A after multi-scale local perception, which could be formulated as follows: A = MSC2 \u0000\u03c3(MSC1(V \u2032) \u0001 , (8) where \u03c3(\u00b7) is the activation function and MSCi(\u00b7) represents a multi-scale CNN implemented by multi-branch CNNs: MSCi(x) = BN i 1 \u0000F i 1(x) \u0001 + \u00b7 \u00b7 \u00b7 + BN i N \u0000F i N(x) \u0001 , (9) where i \u2208{1, 2}, N is the number of branches, BNj(\u00b7) is Batch Normalization [29], Fj(\u00b7) represents identity mapping, one convolution module, or several convolution modules in series, and j \u2208{1, \u00b7 \u00b7 \u00b7 , N}. In our LSTNet, the number of branches N is 3. As shown in the blue area in Fig. 4, three branches are (1) the identity mapping, (2) the 1 \u00d7 1 Conv, and (3) the sequential combination of 1 \u00d7 1 Conv and 3 \u00d7 3 Conv, respectively. During inference, the multi-branch structure MSCi(\u00b7) can be simpli\ufb01ed into a single-branch structure to save the number of parameters and computational cost by using some structural reparameterization techniques [30, 31] without any performance loss: MSCi(x) \u2192F i(x), (10) where F i(x) is a 3 \u00d7 3 convolution, and i \u2208{1, 2}. 
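A sketch of the training-time LSA structure described above is given below, with the three branches of Eq. (9) (identity, 1×1, and 1×1 followed by 3×3, each with BatchNorm) and the two-stage composition of Eq. (8); the Sigmoid reweighting applied to the resulting map is the step described next in the text. The branch fusion of Eq. (10) is only indicated in a comment, since the structural re-parameterization itself is omitted here for brevity; module and argument names are ours.

```python
import torch
import torch.nn as nn

class MSC(nn.Module):
    """Multi-scale CNN of Eq. (9): identity + 1x1 conv + (1x1 -> 3x3) conv,
    each followed by BatchNorm, summed over branches (training structure).
    At inference the branches can be fused into a single 3x3 conv (Eq. 10)
    using the structural re-parameterization techniques cited in the text.
    """

    def __init__(self, channels):
        super().__init__()
        self.identity = nn.BatchNorm2d(channels)
        self.conv1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.conv1x1_3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        return self.identity(x) + self.conv1x1(x) + self.conv1x1_3x3(x)


class LSA(nn.Module):
    """Locality-Sensitive Attention: two MSC blocks with a ReLU in between
    produce a per-grid attention map (Eq. 8) that reweights the
    self-attention output via a Sigmoid gate."""

    def __init__(self, channels, grid_hw=(7, 7)):
        super().__init__()
        self.h, self.w = grid_hw
        self.msc1, self.msc2 = MSC(channels), MSC(channels)

    def forward(self, v):                       # v: (B, N, C) grid sequence
        b, n, c = v.shape
        x = v.transpose(1, 2).reshape(b, c, self.h, self.w)
        a = self.msc2(torch.relu(self.msc1(x)))            # Eq. (8)
        a = a.reshape(b, c, n).transpose(1, 2)
        return v * torch.sigmoid(a)
```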
10 \fTo get the attention weight for each grid, we also apply the Sigmoid function to A. Finally, we reweight the output feature of Self-Attention layer V \u2032 according to locality-sensitive attention map as follows: V \u2032\u2032 = V \u2032 \u2297Sigmoid(A), (11) where \u2297represents element-wise multiplication. 3.3. Locality-Sensitive Fusion (LSF) Features from di\ufb00erent layers tend to contain semantic information of various levels [2]. However, most existing image captioning methods only feed the feature of the top encoder layer to the decoder, leading to low-level information loss. To avoid such information loss, we fuse the features of all layers in the encoder and then feed the fused feature into the decoder. Technically, we introduce a simple Spatial Shift operation to enable each grid to align with its neighbor grids, and then the Multi-Layer Perceptron (MLP) is used to interact not only in the channel domain but also in the spatial domain. Particularly, denoting features from the l-th encoder layer as Vl \u2208Rh\u00d7w\u00d7c (the reshape operation is omitted here), V1 and V2 are shifted by di\ufb00erent Spatial Shift operations (i.e., SS1(\u00b7) and SS2(\u00b7)), which can be represented as Eq. 12 and Eq. 13: V1[ds : h, :, 0 : c/4] = V1[0 : h \u2212ds, :, 0 : c/4], V1[0 : h \u2212ds, :, c/4 : c/2] = V1[ds : h, :, c/4 : c/2], V1[:, ds : w, c/2 : 3c/4] = V1[:, 0 : w \u2212ds, c/2 : 3c/4], V1[:, 0 : w \u2212ds, 3c/4 : c] = V1[:, ds : w, 3c/4 : c], (12) V2[:, ds : w, 0 : c/4] = V2[:, 0 : w \u2212ds, 0 : c/4], V2[:, 0 : w \u2212ds, c/4 : c/2] = V2[:, ds : w, c/4 : c/2], V2[ds : h, :, c/2 : 3c/4] = V2[0 : h \u2212ds, :, c/2 : 3c/4], V2[0 : h \u2212ds, :, 3c/4 : c] = V2[ds : h, :, 3c/4 : c], (13) where Vi is the output feature of the i-th encoder layer, ds is the shift distance of Spatial Shift, which determines the scope of local interaction. The output of the top encoder layer V3 is not processed by any shift operations. The illustration of Spatial Shift can be observed in Fig. 4 and Fig. 5. Then, shifted features from di\ufb00erent layers are concatenated together: Vc = Concat(V1, V2, V3). (14) 11 \fTable 1: Leaderboard of the published state-of-the-art image captioning models on the COCO online testing server. 
Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr-D c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 SCST [32] cvpr\u201917 78.1 93.7 61.9 86.0 47.0 75.9 35.2 64.5 27.0 35.5 56.3 70.7 114.7 116.0 LSTM-A [33] iccv\u201917 78.7 93.7 62.7 86.7 47.6 76.5 35.6 65.2 27.0 35.4 56.4 70.5 116.0 118.0 Up-Down [5] cvpr\u201918 80.2 95.2 64.1 88.8 49.1 79.4 36.9 68.5 27.6 36.7 57.1 72.4 117.9 120.5 RFNet [34] eccv\u201918 80.4 95.0 64.9 89.3 50.1 80.1 38.0 69.2 28.2 37.2 58.2 73.1 122.9 125.1 GCN-LSTM [35] eccv\u201918 80.8 95.2 65.5 89.3 50.8 80.3 38.7 69.7 28.5 37.6 58.5 73.4 125.3 126.5 SGAE [36] cvpr\u201919 81.0 95.3 65.6 89.5 50.7 80.4 38.5 69.7 28.2 37.2 58.6 73.6 123.8 126.5 AoANet [9] cvpr\u201919 81.0 95.0 65.8 89.6 51.4 81.3 39.4 71.2 29.1 38.5 58.9 74.5 126.9 129.6 CAVPN [37] TPAMI\u201919 80.1 94.9 64.7 88.8 50.0 79.7 37.9 69.0 28.1 37.0 58.2 73.1 121.6 123.8 ETA [38] iccv\u201919 81.2 95.0 65.5 89.0 50.9 80.4 38.9 70.2 28.6 38.0 58.6 73.9 122.1 124.4 M2Transformer [2] cvpr\u201920 81.6 96.0 66.4 90.8 51.8 82.7 39.7 72.8 29.4 39.0 59.2 74.8 129.3 132.1 XTransformer (ResNet-101) [8] cvpr\u201920 81.3 95.4 66.3 90.0 51.9 81.7 39.9 71.8 29.5 39.0 59.3 74.9 129.3 131.4 XTransformer (SENet-154) [8] cvpr\u201920 81.9 95.7 66.9 90.5 52.4 82.5 40.3 72.4 29.6 39.2 59.5 75.0 131.1 133.5 DLCT (ResNeXt101) [12] aaai\u201921 82.0 96.2 66.9 91.0 52.3 83.0 40.2 73.2 29.5 39.1 59.4 74.8 131.0 133.4 DLCT (ResNeXt152) [12] aaai\u201921 82.4 96.6 67.4 91.7 52.8 83.8 40.6 74.0 29.8 39.6 59.8 75.3 133.3 135.4 RSTNet(ResNext101) [11] cvpr\u201921 81.7 96.2 66.5 90.9 51.8 82.7 39.7 72.5 29.3 38.7 59.2 74.2 130.1 132.4 RSTNet(ResNext152) [11] cvpr\u201921 82.1 96.4 67.0 91.3 52.2 83.0 40.0 73.1 29.6 39.1 59.5 74.6 131.9 134.0 GAT [20] expert syst. appl.\u201922 81.1 95.1 66.1 89.7 51.8 81.5 39.9 71.4 29.1 38.4 59.1 74.4 127.8 129.8 VPNet [21] neurocomputing\u201922 81.4 95.5 66.4 90.3 52.0 82.1 40.0 72.1 29.3 38.9 59.3 74.9 128.2 130.6 CtxAdpAtt [22] tmm\u201922 81.0 95.2 65.5 91.0 51.5 81.7 39.3 70.9 29.4 39.0 59.6 75.1 128.5 131.0 ReFormer [23] mm\u201922 82.0 96.7 40.1 73.2 29.8 39.5 59.9 75.2 129.9 132.8 LSTNet (ResNeXt-101) 82.2 96.2 67.2 91.2 52.7 83.5 40.6 73.8 29.6 39.3 59.6 75.0 132.0 134.5 LSTNet (ResNeXt-152) 82.6 96.7 67.8 92.0 53.3 84.3 41.1 74.7 29.9 39.6 60.0 75.4 134.0 136.3 Channel Split Spatial Shift Figure 5: Illustration of the Spatial Shift operation. c, h, w are the size of the channel, height, and width dimension, respectively. Theoretically, MLP can\u2019t model the relationships among adjacent grids. However, after being shifted by the Spatial Shift operation, which aligns each grid with its neighbors, MLP can communicate in both channel and spatial domains: \u02dc V = \u03c3(VcW1)W2, (15) where \u03c3(\u00b7) is the ReLU activation function, W1 \u2208R3c\u00d73c and W2 \u2208R3c\u00d7c are the learnable projection matrices. To further enhance the descriptive power of visual features, we combine the outputs of the top encoder layer with the fused feature \u02dc V via a residual connection: V \u2217= \u03bb\u02dc V + Vtop, (16) where Vtop is the feature of the top encoder layer, Vtop = V3 in our LSTNet, and \u03bb serves as a weighting factor. 12 \fTable 2: Comparisons with SOTAs on the Karpathy test split. All values are reported as percentages (%), where B-N, M, R, and C are short for BLEU-N, METEOR, ROUGE-L, and CIDEr scores. 
Model B-1 B-4 M R C S SCST [32] cvpr\u201917 34.2 26.7 55.7 114.0 Up-Down [5] cvpr\u201918 79.8 36.3 27.7 56.9 120.1 21.4 RFNet [34] eccv\u201918 79.1 36.5 27.7 57.3 121.9 21.2 GCN-LSTM [35] eccv\u201918 80.5 38.2 28.5 58.3 127.6 22.0 SGAE [36] cvpr\u201919 80.8 38.4 28.4 58.6 127.8 22.1 CAVPN [37] tpami\u201919 38.6 28.3 58.5 126.3 21.6 AoANet [9] cvpr\u201919 80.2 38.9 29.2 58.8 129.8 22.4 ORT [39] neurips\u201919 80.5 38.6 28.7 58.4 128.3 22.6 Transformer [19] neurips\u201917 80.7 38.6 29.1 58.5 130.1 22.7 M2Transformer [2] cvpr\u201920 80.8 39.1 29.2 58.6 131.2 22.6 XTransformer [8] cvpr\u201920 80.9 39.7 29.5 59.1 132.8 23.4 DLCT [12] aaai\u201921 81.4 39.8 29.5 59.1 133.8 23.0 RSTNet [11] cvpr\u201921 81.1 39.3 29.4 58.8 133.3 23.0 GAT [20] expert syst. appl.\u201922 80.8 39.7 29.1 59.0 130.5 22.9 VPNet [21] neurocomputing\u201922 80.9 39.7 29.3 59.2 130.4 23.2 CtxAdpAtt [22] tmm\u201922 80.5 39.1 29.3 59.3 130.1 23.6 ReFormer [23] mm\u201922 39.8 29.7 59.8 131.2 23.0 LSTNet 81.5 40.3 29.6 59.4 134.8 23.1 Speci\ufb01cally, the motivation for LSF comes from two aspects: 1) The output feature map of di\ufb00erent encoder layers has di\ufb00erent semantics (i.e., the high-level feature map has high-level semantic information, and the low-level feature map has low-level semantic information). The traditional Transformer only feeds the feature map of the last layer in the encoder into the decoder, ignoring the low-level semantic information. LSF solves this problem by fusing the outputs of all encoder layers, considering both high-level and low-level semantic information. 2) The LSF module can interact with local grids through spatial shift operation, which is conducive to modeling object-level information. 3.4. Objectives Given the ground-truth caption y\u2217 1:T and the captioning model with parameters \u03b8, where T is the length of the caption, we pre-train the model 13 \fusing Cross-Entropy (CE) loss as follows: LCE = \u2212 T X t=1 log \u0000p\u03b8(y\u2217 t |y\u2217 1:t\u22121) \u0001 . (17) Then we further optimize the model using CIDEr and BLEU-4 scores by Self-Critical Sequence Training (SCST) [32] as follows: \u2207\u03b8LRL(\u03b8) = \u22121 k k X i=1 \u0000r(yi 1:T) \u2212b \u0001 \u2207\u03b8 log p\u03b8(yi 1:T), (18) where k is the beam size, r(\u00b7) is the sum of CIDEr and BLEU-4, and b = (P i r(yi 1:T)) /k is the reward baseline. 4. Experiment 4.1. Datasets We conduct our experiments on the popular MS-COCO [3] image captioning dataset. It contains 123,287 images, including 82,783 training images, 40,504 validation images, and 40,775 testing images, each of which is annotated with 5 captions. We adopt the split provided by [40] for the o\ufb04ine test, where 5,000 images are used for validation, 5,000 images for testing, and the rest images for training. Besides, we also upload generated captions of the o\ufb03cial testing set for online evaluation 1. 4.2. Implementation Details The grid features are extracted from a pre-trained Faster-RCNN [6] provided by [10], where a stride-1 C5 backbone and 1 \u00d7 1 RoIPool with two FC layers are used as the detection head for training Faster R-CNN on the VG dataset. Speci\ufb01cally, we adopt the C5 feature maps and average-pool them as 7 \u00d7 7 spatial size. Note that we do not use any extra data preprocessing, except simple augmentations (e.g., RandomCrop, RandomRotation). The dmodel in the LSTNet is 512, the expansion rate in the FFN is 4, the number of heads is 8, and the size of the beam search is 5. 
We use Adam optimizer to train our model in both stages and adopt the relative position encoding following [12]. In the CE training stage, the batch 1https://competitions.codalab.org/competitions/3221#results 14 \fsize is 50, and the learning rate is linearly increased to 1 \u00d7 10-4 during the \ufb01rst 4 epochs. Afterwards, we set it to 2 \u00d7 10-5, 4 \u00d7 10-6 at 10-th and 12-th epoch. After 18 epochs of CE pre-training, we optimize the model by SCST with the batch size of 100 and learning rate of 5 \u00d7 10-6. The learning rate will be set to 2.5\u00d710-6, 5\u00d710-7, 2.5\u00d710-7, 5\u00d710-8 at the 35-th, 40-th, 45-th, 50-th epoch, and the SCST training will last 42 epochs. 4.3. Performance Comparison In this section, we compare our LSTNet with SOTAs on both o\ufb04ine and online evaluations. The compared models include: SCST [32], Up-Down [5], RFNet [34], GCN-LSTM [35], SGAE [36], AoANet [9], ETA [38], ORT [39], Transformer [19], M 2Transformer [2], XTransformer [8], RSTNet [11] and DLCT [12]. Following the standard evaluation criterion, we adopt BLEU-N [41], METEOR [42], ROUGE-L [43], CIDEr [44], SPICE [45] to evaluate the performance. 4.3.1. Online Evaluation Tab. 1 shows the performance comparisons of LSTNet and other SOTA methods on the online COCO test server with 5 reference captions (c5) and 40 reference captions (c40). For fair comparisons, we also use the ensemble of four models following [2] and adopt two common backbones (i.e., ResNeXt-101, ResNeXt-152 [46]). Notably, our LSTNet outperforms other SOTA methods in all metrics by signi\ufb01cant margins. Surprisingly, we observe that LSTNet with ResNeXt-101 performs better than RSTNet with ResNeXt-152 and X-Transformer with SENet-154 on most metrics. 4.3.2. O\ufb04ine Evaluation Tab. 2 summarizes the performance of the state-of-the-art models and our approach to the o\ufb04ine COCO Karpathy test split. Note that for fair comparisons, we report the results of single models without using any ensemble technologies. We can observe that our proposed LSTNet outperforms all the other SOTA models in terms of most metrics. Notably, the CIDEr score of our LSTNet achieves 134.8%, outperforming the strongest competitor DLCT by 1.0%, which adopts both region and grid features. We obverse that the LSTNet with the grid feature performs better than some models (e.g., M 2Transformer [2], XTransformer [8]) with region features. We think the reasons why the grid-level scheme in our paper performs better than the object-level scheme are as follows: 1) The region feature\u2019s 15 \fTable 3: Comparisons with SOTA methods on the Karpathy test split using the same ResNeXt101 [46] grid feature. Model B-1 B-4 M R C S Up-Down [5] cvpr\u201918 75.0 37.3 28.1 57.9 123.8 21.6 AoANet [9] cvpr\u201919 80.8 39.1 29.1 59.1 130.3 22.7 Transformer [19] neurips\u201917 81.0 38.9 29.0 58.4 131.3 22.6 M2Transformer cvpr\u201920 [2] 80.8 38.9 29.1 58.5 131.8 22.7 XTransformer [8] cvpr\u201920 81.0 39.7 29.4 58.9 132.5 23.1 DLCT [12] aaai\u201921 81.4 39.8 29.5 59.1 133.8 23.0 RSTNet [11] cvpr\u201921 81.1 39.3 29.4 58.8 133.3 23.0 LSTNet 81.5 40.3 29.6 59.4 134.8 23.1 background information is missing, and the grid feature extracts all of the information in the image. Speci\ufb01cally, visual region features are collected from the image\u2019s salient parts, typically omitting contextual information. Because of the lack of background information, the model performs poorly in capturing relationships between objects. 
The grid feature, on the other hand, collects all spatial information from the image. 2) The pre-trained object detector often involves noisy, overlapped, or erroneous detections, which ultimately limits the performance upper bound of image captioning models. On the other hand, grid features do not provide detection information, so the impact of error detection is avoided. 3) In grid features, an object is divided into di\ufb00erent grids, and this is the motivation of this paper. Our approach enables the model to capture local information and solves this problem. 4.3.3. Fair Comparisons with SOTA Methods To eliminate the interference of di\ufb00erent visual features, we conduct experiments on the same grid features to compare the LSTNet and other SOTA methods. As reported in Tab. 3, compared with other methods on the same visual features, our proposed LSTNet still achieves superior performance on all metrics. 4.4. Ablation Study 4.4.1. E\ufb00ect of Di\ufb00erent Branches of LSA To validate the impact of each branch, we conduct a series of experiments by leveraging di\ufb00erent branches of LSA. The performance of LSA with different branches is illustrated in Tab. 4. By analyzing this table, we gain the following observations: 16 \f\u2022 Compared with the model without LSA (line 1), adopting LSA (line 2-7) with either one or more branches is helpful to generate better captions. Moreover, the more branches are adopted, the better performance tends to be achieved. This may be because the proposed LSA module improves the local perception ability of the model, so it is bene\ufb01cial to the perception of object information. \u2022 As shown in Tab. 4, equipping one branch (lines 2,3,4) outperforms the model with 0 branches (line 1) on most evaluation metrics. By comparing the model with one branch (lines 2,3,4) to the model with two branches (lines 5,6,7), we can see that the model with two branches outperforms the model with one branch on most metrics, particularly CIDEr. Furthermore, we can see that the completed LSTNet (line 8) with all three branches performs the best. This may be attributed to that objects in the image vary in size, so more branches are conducive to strengthening the multi-scale modeling ability for objects of di\ufb00erent sizes. Importantly, by using reparameterization techniques, more branches of LSA will not lead to higher overhead during inference. 4.4.2. E\ufb00ect of Di\ufb00erent Arrangements of LSA and SA To explore the impact of various arrangements of Locality-Sensitive Attention (LSA) and Self-Attention (SA), we compare three methods to combine LSA and SA: (1) sequential LSA-SA, (2) sequential SA-LSA, (3) parallel usage of SA and LSA. As shown in Tab. 5, we can observe that the performance of the sequential SA-LSA is better than the others. The main reason may be that the features processed by SA are coarse-grained, and our proposed LSA, which aims to model local relationships, is helpful to further re\ufb01ne the visual features. 4.4.3. E\ufb00ect of Di\ufb00erent Shift Distances of LSF To explore the impact of shift distance ds of LSF, we conduct experiments by increasing ds from 0 to 4 (ds = 0 means that all features are not shifted). From Tab. 6, we could observe that shifted LSF performs better than the unshifted one. This can be attributed to that shifted LSF makes each grid can interact with neighboring grids during fusing, thus enhancing the local modeling. 
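To make the shift operation concrete, the sketch below illustrates the Spatial Shift and fusion of Eqs. (12)–(16) for a three-layer encoder. It is a simplified illustration rather than the released code: torch.roll approximates the border handling of Eqs. (12)–(13), and the defaults ds = 1 and λ = 0.2 are the settings examined in the ablations.

```python
import torch
import torch.nn as nn

def spatial_shift(x, ds=1, order="hw"):
    """Simplified Spatial Shift of Eqs. (12)-(13): the four channel quarters
    are shifted by +/-ds along H and W (order="hw" shifts H first as in SS1,
    order="wh" shifts W first as in SS2). torch.roll stands in for the exact
    in-place copy used in the paper."""
    b, h, w, c = x.shape
    q = c // 4
    dims = (1, 1, 2, 2) if order == "hw" else (2, 2, 1, 1)
    shifts = (ds, -ds, ds, -ds)
    parts = [torch.roll(x[..., i * q:(i + 1) * q], shifts=shifts[i], dims=dims[i])
             for i in range(4)]
    return torch.cat(parts + [x[..., 4 * q:]], dim=-1)


class LSF(nn.Module):
    """Locality-Sensitive Fusion (Eqs. 12-16): shift V1/V2, concatenate with
    V3, mix with an MLP, and add the top-layer feature back with weight λ."""

    def __init__(self, c, lam=0.2):
        super().__init__()
        self.lam = lam
        self.mlp = nn.Sequential(nn.Linear(3 * c, 3 * c), nn.ReLU(),
                                 nn.Linear(3 * c, c))        # Eq. (15)

    def forward(self, v1, v2, v3, ds=1):                     # each (B, H, W, C)
        vc = torch.cat([spatial_shift(v1, ds, "hw"),
                        spatial_shift(v2, ds, "wh"), v3], dim=-1)  # Eq. (14)
        return self.lam * self.mlp(vc) + v3                  # Eq. (16)
```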
However, when the shift distance is greater than 1, the performance begins to drop and ds = 1 performs best. The reason may be 17 \fA man walking down a sidewalk with a laptop A man standing next to a chair and an umbrella (b) Transformer: A man walking down a sidewalk with a laptop. (c) LSTNet: A man standing next to a chair and an umbrella. GT1: A person standing next to a chair holding an umbrella. GT2: A person holding an umbrella in a garage. GT3: A person looking out their garage holding an umberella. GT4: A man standing in a garage with an umbrella opened. GT5: A person is holding an umbrella while standing in the garage. (a) Ground Truth Figure 6: The ground truth (a) and the visualization of attended images along with the caption generation of standard Transformer (b) and LSTNet (c). The most signi\ufb01cant di\ufb00erence between the two generated captions is highlighted in bold. that the large shift promotes the long-distance interaction, but ignores local modeling. 4.4.4. E\ufb00ect of \u03bb in LSA To choose the best weighting factor \u03bb in Eq. 16, we also conduct a group of experiments. From Tab. 7, we could \ufb01nd that too large \u03bb will lead to performance degradation, and \u03bb = 0.2 performs well on most metrics. Thus, we use \u03bb = 0.2 for our experiments if not speci\ufb01ed. 4.4.5. E\ufb00ect of Decoupling LSA and LSF To gain insights into the proposed LSA and LSF modules, we decouple these two modules in the experiment. As reported in Tab. 8, compared with the complete model, the performance of the model without LSA+LSF 18 \fTransformer: A man putting on a tie at a table. LSTNet: A man holding a cell phone in a room. Transformer: A young boy holding a stuffed animal. LSTNet: A young boy holding a teddy bear. Transformer: A man is throwing a frisbee in a field. LSTNet: A group of people playing baseball in a field. Original Image Transformer LSTNet Figure 7: Some examples of captions generated by Transformer and LSTNet on the same grid features and comparisons of corresponding attention map of the top encoder layer of Transformer and LSTNet. degrades dramatically. Particularly, it drops absolutely by 1.4% and 3.5% on BLEU-4 and CIDEr respectively, which demonstrates the vital importance of LSA and LSF. Particularly, our LSA and LSF achieve performance gains of 2.3% and 2.4% on the CIDEr score, respectively. This shows that LSA and LSF modules can promote each other to achieve better performance. 4.4.6. E\ufb00ect of Di\ufb00erent Approaches to Fuse Features To justify the e\ufb00ectiveness of LSF, we design a group of experiments by replacing LSF with various modules to fuse features. As shown in Tab. 9, we can \ufb01nd that fusing features from di\ufb00erent layers improves the performance when compared with the results in the \ufb01rst row (i.e, without fusing features). The main reason may be that features from di\ufb00erent layers are complementary in semantic information and fusing features will enrich visual features. Moreover, our proposed LFS module outperforms other modules by a large margin, which strongly illustrates the validity of LFS. 4.4.7. E\ufb00ect of Di\ufb00erent Grid Sizes To explore the impact of di\ufb00erent grid sizes, we conduct a series of experiments by setting the visual features to di\ufb00erent sizes via average pooling. As 19 \fTable 4: Ablation studies on various branches. All branches contain BatchNorm, and all models are not equipped with LSF. 
All values are reported as percentages (%), B-N, M, R, C, and S are short for BLEU-N, METEOR, ROUGE-L, CIDEr, and SPICE scores. Identity 1\u00d71 1\u00d71+3\u00d73 B-1 B-4 M R C S \u00d7 \u00d7 \u00d7 81.0 38.9 29.0 58.4 131.3 22.6 \u221a \u00d7 \u00d7 81.1 38.9 29.1 58.4 131.6 22.6 \u00d7 \u221a \u00d7 81.1 39.2 29.2 58.9 132.2 22.7 \u00d7 \u00d7 \u221a 81.2 39.4 29.2 58.9 132.3 22.7 \u221a \u221a \u00d7 81.2 39.6 29.1 59.0 133.5 22.7 \u221a \u00d7 \u221a 81.2 39.6 29.2 58.9 133.4 22.6 \u00d7 \u221a \u221a 81.2 39.6 29.3 59.1 133.5 22.8 \u221a \u221a \u221a 81.2 39.7 29.3 59.1 133.6 22.8 Table 5: Ablation studies on various arrangements of SA and LSA. + is sequential connection, & represents parallel connection. Arrangement B-1 B-4 M R C S LSA + SA 81.1 39.6 29.1 59.0 133.4 22.8 SA + LSA 81.2 39.7 29.3 59.1 133.6 22.8 SA & LSA 81.0 39.5 28.9 59.0 132.7 22.7 shown in Tab. 10, the performance of image captioning gradually improves as the grid size is increased. This is explained by the fact that a larger image feature o\ufb00ers more \ufb01ne-grained and richer semantic information, and the image captioning model will produce more accurate descriptions as a result of these details. 4.5. Quantitative Analysis By comparing conventional metrics (e.g., BLEU-N, CIDEr, SPICE), it is di\ufb03cult to determine whether our method signi\ufb01cantly improves the performance of image captioning. Aiming to demonstrate the e\ufb03cacy and superiority of our proposed LSTNet in an intuitive way, we conduct a two-tailed t-test with paired samples to compare LSTNet with a standard Transformer. To be speci\ufb01c, we \ufb01rst perform the two-tailed t-test for each conventional metric to explore whether the quality of caption generated by LSTNet is signi\ufb01cantly improved compared with the standard Transformer. Besides, we also report the semantic subcategories of SPICE scores (i.e., Relation, Cardinality, Attribute, Size, Color, and Object), which can be used to measure 20 \fTable 6: Ablation studies on various shift distances of LSF. All models are equipped with both LSA and LSF modules. Shift Distance B-1 B-4 M R C S ds = 0 81.2 39.9 29.3 59.1 133.9 22.8 ds = 1 81.5 40.3 29.6 59.4 134.8 23.1 ds = 2 81.4 40.1 29.5 59.2 134.4 22.9 ds = 3 81.3 40.1 29.5 59.2 134.2 22.8 ds = 4 81.3 40.0 29.4 59.1 134.0 22.8 Table 7: Performance comparisons with di\ufb00erent weight factors \u03bb. All models are equipped with both LSA and LSF modules. \u03bb B-1 B-4 M R C S \u03bb = 0.1 81.7 40.3 29.4 59.3 134.3 22.9 \u03bb = 0.2 81.5 40.3 29.6 59.4 134.8 23.1 \u03bb = 0.3 81.6 40.2 29.5 59.3 134.5 23.0 \u03bb = 0.5 81.6 40.3 29.5 59.3 134.5 30.0 \u03bb = 0.7 81.4 40.0 29.4 59.0 134.0 22.9 the semantic relevance between generated sentences and ground truth. Furthermore, for each comprehensive SPICE score, we do a detailed two-tailed t-test with matched data to see if these semantic indicators have also been signi\ufb01cantly improved. The conventional metrics and corresponding p-values for the t-test over the test set are displayed in Tab. 11. We observe that the improvement of all metrics is statistically signi\ufb01cant under the signi\ufb01cant level \u03b1 = 0.05, which demonstrates that our proposed LSTNet is conducive to the quality of the generated caption. Tab. 12 details the semantic subcategories of SPICE scores and p-values for t-test over the test set. We can observe that all semantic subcategories of SPICE are improved, which reveals the e\ufb00ectiveness and superiority of local visual modeling in LSTNet. 
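For concreteness, the paired two-tailed t-test described above can be reproduced with a few lines of Python; the per-image scores below are made-up placeholders, since only aggregate numbers are reported in the tables.

```python
import numpy as np
from scipy import stats

# Made-up per-image CIDEr scores of the two models on the same test images
transformer_scores = np.array([1.28, 1.35, 1.12, 1.40, 1.22, 1.31])
lstnet_scores      = np.array([1.33, 1.38, 1.18, 1.42, 1.27, 1.36])

# Two-tailed t-test with paired samples (each image is scored by both models)
t_stat, p_value = stats.ttest_rel(lstnet_scores, transformer_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # improvement is significant if p < 0.05
```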
Furthermore, we can also observe that under the signi\ufb01cant level \u03b1 = 0.05, some semantic subcategories of SPICE (i.e., Attribute, Size, Color, and Object) obtain signi\ufb01cant improvements. Notably, all these four metrics describe the attributes of the object, so it proves that the local visual modeling in LSTNet is helpful to capture object-level information. Compared with other semantic metrics, the improvement of Relation metric is relatively insigni\ufb01cant. This may be because, to capture the relation between objects, it is not enough to only 21 \fTable 8: Ablation studies on LSA and LSF modules. Module B-1 B-4 M R C S w/o LSA+LSF 81.0 38.9 29.0 58.4 131.3 22.6 only LSA 81.2 39.7 29.3 59.1 133.6 22.8 only LSF 81.3 39.8 29.3 59.0 133.7 22.8 LSA + LSF 81.5 40.3 29.6 59.4 134.8 23.1 Table 9: Ablation studies on various fusion methods. All models are not equipped with LSA. FPN is the feature pyramid network similar to [47]. All values are reported as percentages (%), where B-N, M, R, C, and S are short for BLEU-N, METEOR, ROUGEL, CIDEr, and SPICE scores. Fusion methods B-1 B-4 M R C S w/o Fuse 81.0 38.9 29.0 58.4 131.3 22.6 MLP 81.0 39.3 29.2 59.0 132.5 22.7 SumPool 81.1 39.4 29.1 59.0 132.5 22.7 3 \u00d7 3 Conv 81.3 39.7 29.0 58.9 132.6 22.7 FPN 81.1 39.1 29.1 58.7 132.6 22.6 LSF (ours) 81.3 39.8 29.3 59.0 133.7 22.8 improve the local modeling ability of the model and global modeling ability is also very important. For other semantic metrics of a single object (i.e., Attribute, Size, Color, and Object), local visual modeling has been able to achieve signi\ufb01cant performance improvement. 4.6. Qualitative Analysis To qualitatively validate the e\ufb00ectiveness of LSTNet, we display several typical examples of captions generated by Transformer and LSTNet on the same grid features in Fig. 7. We can observe that the captions generated by Transformer are uninformative even erroneous, while the captions generated by LSTNet are more accurate and distinguishable, which demonstrates that our proposed LSA and LSF are helpful to recognize the visual object by local modeling. To gain deep insights into the reason why LSTNet can generate accurate captions, we further illustrate the attention map of the top encoder layer in Transformer and LSTNet in Fig. 7. By analyzing the results, we gain the following observations: 1) The attention map produced by Transformer fails to attend to the important visual objects in the image, while LSTNet is able to focus on the important ones. For instance, for the image in the \ufb01rst 22 \fTable 10: Ablation studies on di\ufb00erent grid sizes of the visual feature. All values are reported as percentages (%), where B-N, M, R, C and S are short for BLEU-N, METEOR, ROUGE-L, CIDEr, and SPICE scores. Grid Size B-1 B-4 M R C S 1 \u00d7 1 78.5 36.2 27.4 56.5 120.3 20.6 2 \u00d7 2 79.9 37.7 28.3 57.7 124.9 21.7 3 \u00d7 3 80.4 38.9 28.8 58.4 129.6 22.3 4 \u00d7 4 80.6 38.9 28.9 58.5 130.3 22.4 5 \u00d7 5 81.0 39.6 29.1 58.8 131.7 22.7 6 \u00d7 6 81.3 39.9 29.3 58.9 132.1 22.9 7 \u00d7 7 81.5 40.3 29.6 59.4 134.8 23.1 Table 11: Performance comparisons of di\ufb00erent captioning metrics for the Standard Transformer and our LSTNet. P-values come from two-tailed t-tests using paired samples. P-values in bold are signi\ufb01cant at 0.05 signi\ufb01cance level. 
Model BLEU-1 BLEU-4 METEOR ROUGE CIDEr SPICE Transformer 81.0 38.9 29.0 58.4 131.3 22.6 LSTNet 81.5 40.3 29.6 59.4 134.8 23.1 p-value 6.7 \u00d7 10\u22123 3.3 \u00d7 10\u22127 1.2 \u00d7 10\u22127 8.0 \u00d7 10\u22127 3.9 \u00d7 10\u22129 1.2 \u00d7 10\u22124 Table 12: Subcategories of SPICE metrics for the Standard Transformer and our proposed LSTNet. P-values are calculated by two-tailed t-tests using paired samples. Note that p-values in bold are signi\ufb01cant at 0.05 signi\ufb01cance level. Model SPICE Relation Cardinality Attribute Size Color Object Transformer 6.91 20.58 11.80 4.71 12.93 40.35 LSTNet 7.06 22.68 12.38 4.84 14.60 40.76 p-value 0.298 0.059 0.002 0.048 0.001 0.009 23 \fTable 13: Comparison with state of the art on the Flickr8k dataset. All values are reported as percentages (%), where B-N, M, R, and C are short for BLEU-N, METEOR, ROUGE-L, and CIDEr scores. \u2020 indicates an ensemble model results. Methods B-1 B-4 M R C Deep VS [40] 57.9 16.0 Google NIC [15]\u2020 63.0 Soft-Attention [26] 67.0 19.5 18.9 Hard-Attention [26] 67.0 21.3 20.3 emb-gLSTM [48] 64.7 21.2 20.6 Log Bilinear [49] 65.6 17.7 17.3 LSTNet 67.4 24.3 21.5 44.8 63.6 row in Fig. 7, Transformer is focusing on the table and LSTNet is focusing on the man and phone. Thus, Transformer generates the erroneous caption (i.e., \u201ctie\u201d, \u201ctable\u201d), while LSTNet recognizes \u201cA man holding a cell phone\u201d correctly. 2) Transformer can only attend to one object or small area in the image, while LSTNet will focus on more primary objects, thus generating accurate and detailed descriptions. For example, for the images in the second row in Fig. 7, Transformer is only focusing on the mouth of the boy, and LSTNet is focusing on both the boy and the teddy bear. Thus, Transformer fails to recognize the \u201cteddy bear\u201d but only produces a general phrase (i.e., \u201ca stu\ufb00ed animal\u201d). Thanks to the precise attention in the encoder, LSTNet recognizes \u201ca young boy holding a teddy bear \u201d successfully. These observations reveal that our proposed LSA forces the model to focus on not only important but also comprehensive information in the image. 4.7. Attention Visualization To better qualitatively evaluate the generated results with LSTNet, we visualize the contribution of each grid of the visual features during caption generation in Fig. 6. Technically, we average attention weights of 8 heads in the last decoder layer. We can observe that Transformer will attend to irrelevant regions, thus generating erroneous descriptions (e.g., \u201claptop\u201d). Instead, our proposed LSTNet can focus on correct grids when generating informative words like \u201cchair\u201d and \u201cumbrella\u201d. These observations demonstrate that our proposed LSA and LSF modules help the model consistently focus on the correct regions for image captioning by providing richer and 24 \fTable 14: Comparison with state of the art on the Flickr30k dataset. All values are reported as percentages (%), where B-N, M, R, and C are short for BLEU-N, METEOR, ROUGE-L and CIDEr scores. \u2020 indicates an ensemble model results. Methods B1 B4 M R C Deep VS [40] 57.3 15.7 Google NIC [15]\u2020 66.3 18.3 m-RNN [50] 60.0 19.0 Soft-Attention [26] 66.7 19.1 18.5 Hard-Attention [26] 66.9 19.9 18.5 emb-gLSTM [48] 64.6 20.6 17.9 ATT [51]\u2020 64.7 23.0 18.9 Log Bilinear [49] 60.0 17.1 16.9 LSTNet 67.1 23.3 20.4 44.3 64.5 \ufb01ner-grained visual features for the decoder through local interaction and fusion. 4.8. 
Generalization on the Flickr Datasets To verify the generalization of our proposed LSNet, we also conduct extensive experiments on the Flickr8k and Flickr30k datasets. The performance comparisons between our proposed LSTNet and previous SOTAs on Flickr8k [52] and Flickr30k [53] are shown in Tab 13 and Tab 14, respectively. As can be observed, our proposed LSTNet outperforms the previous SOTAs with a signi\ufb01cant margin on both Flickr8k and Flickr30K. This veri\ufb01es that our proposed LSTNet has strong generalization on other datasets. 5." + }, + { + "url": "http://arxiv.org/abs/2207.07285v2", + "title": "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval", + "abstract": "Video-text retrieval has been a crucial and fundamental task in multi-modal\nresearch. The development of video-text retrieval has been considerably\npromoted by large-scale multi-modal contrastive pre-training, which primarily\nfocuses on coarse-grained or fine-grained contrast. However, cross-grained\ncontrast, which is the contrast between coarse-grained representations and\nfine-grained representations, has rarely been explored in prior research.\nCompared with fine-grained or coarse-grained contrasts, cross-grained contrast\ncalculate the correlation between coarse-grained features and each fine-grained\nfeature, and is able to filter out the unnecessary fine-grained features guided\nby the coarse-grained feature during similarity calculation, thus improving the\naccuracy of retrieval. To this end, this paper presents a novel multi-grained\ncontrastive model, namely X-CLIP, for video-text retrieval. However, another\nchallenge lies in the similarity aggregation problem, which aims to aggregate\nfine-grained and cross-grained similarity matrices to instance-level\nsimilarity. To address this challenge, we propose the Attention Over Similarity\nMatrix (AOSM) module to make the model focus on the contrast between essential\nframes and words, thus lowering the impact of unnecessary frames and words on\nretrieval results. With multi-grained contrast and the proposed AOSM module,\nX-CLIP achieves outstanding performance on five widely-used video-text\nretrieval datasets, including MSR-VTT (49.3 R@1), MSVD (50.4 R@1), LSMDC (26.1\nR@1), DiDeMo (47.8 R@1) and ActivityNet (46.2 R@1). It outperforms the previous\nstate-of-theart by +6.3%, +6.6%, +11.1%, +6.7%, +3.8% relative improvements on\nthese benchmarks, demonstrating the superiority of multi-grained contrast and\nAOSM.", + "authors": "Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, Rongrong Ji", + "published": "2022-07-15", + "updated": "2022-09-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Video-text retrieval (VTR) is a multi-modal task, which aims to find the most relevant video/text based on the text/video query. With the explosive growth of videos on the Internet, VTR has attracted increasing interests and served as an important role in people\u2019s daily life. Recent years have witnessed the rapid development of VTR, which is supported by a series of pre-training multi-modal models [4, 30, 44], innovative retrieval methods [3, 5, 13\u201315, 24, 30, 34, 35, 38, 41, 54, 58, 61, 63, 66] and video-text benchmarks [2, 6, 7, 45, 56]. Recently, with great success in large-scale contrastive languageimage pre-training, VTR has also achieved great progress. 
Specifically, with 400M image-text pairs for training, CLIP [44] can embed the images and sentences into the shared semantic space for similarity calculation. Furthermore, CLIP4Clip [38] transfers the imagetext knowledge of CLIP to the VTR task, resulting in significant performance improvements on several video-text retrieval datasets. However, CLIP and CLIP4Clip embed the whole sentence and image/video into textual and visual representations, thus lacking the ability to capture fine-grained interactions. To this end, some previous works [29, 59] propose fine-grained contrastive frameworks, which consider the contrast between each word of the sentence and each frame of the video. Moreover, TACo [57] introduces tokenlevel and sentence-level loss to consider both fine-grained and coarse-grained contrast. Although they have shown promising advances on the VTR task, cross-modality semantic contrast still needs to be systematically explored. As shown in Fig. 1, a video is composed of multiple frames, and a sentence consists of several words. Video and sentence are usually redundant, which may contain some unnecessary frames or unimportant words. Concretely, given a specific video or sentence query, unnecessary frames or unimportant words refer to the candidates with low relevance to the query (i.e., light-colored frames and words in Fig. 1). However, most current works mainly focus on coarse-grained contrast [38, 44], fine-grained contrast [29, 59] or both [57], which are inefficient in filtering out these unnecessary frames and words. Specifically, coarse-grained contrast calculates the similarity between video-level and sentence-level features, and arXiv:2207.07285v2 [cs.CV] 22 Sep 2022 \fMM \u201922, October 10\u201314, 2022, Lisboa, Portugal Yiwei Ma et al. A man is driving a car. man car A is driving a Coarse-grained Contrast Fine-grained Contrast Cross-grained Contrast Figure 1: X-CLIP aims for improving video-text retrieval performance via multi-grained contrastive learning, including fine-grained (frame-word), coarse-grained (video-sentence) and cross-grained (video-word, sentence-frame) contrast. The transparency of words and frames represents the degree of relevance to query. fine-grained contrast calculates the similarity between frame-level and word-level features. To this end, we ask: How to effectively filter out unnecessary information during retrieval? To answer this question, we propose the cross-grained contrast, which calculates the similarity score between the coarse-grained features and each fine-grained feature. As shown in Fig. 1, with the help of the coarsegrained feature, unimportant fine-grained features will be filtered out and important fine-grained features will be up-weighted. However, challenges in cross-grained contrast arise from aggregating similarity matrices to instance-level similarity scores. A naive and easy method is to use Mean-Max strategy [25, 26, 47, 59] to calculate the instance-level similarity score after obtaining the similarity matrix. However, the conventional Mean-Max strategy is not conducive to filtering out the unnecessary information in videos and sentences during retrieval. On one hand, Mean applies the same weight to all frames and words, so the contrast between unnecessary frames and unimportant words may harm the retrieval performance. On the other hand, Max only considers the most important frame and word, ignoring other critical frames and words. 
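The difference between these aggregation strategies can be illustrated with a short sketch (PyTorch and the temperature value are assumptions for illustration): Mean weights every score equally, Max keeps only the top score, while a softmax-weighted sum keeps all scores but re-weights them by relevance.

```python
import torch
import torch.nn.functional as F

def aggregate(sim, mode="attention", tau=0.01):
    """Aggregate frame(word)-level similarity scores into one instance-level score."""
    if mode == "mean":   # equal weight for every frame/word
        return sim.mean()
    if mode == "max":    # only the single most similar frame/word
        return sim.max()
    # Softmax-weighted sum: relevant scores get high weights, irrelevant ones
    # are down-weighted instead of being either equally counted or ignored.
    weights = F.softmax(sim / tau, dim=0)
    return (weights * sim).sum()

sim = torch.tensor([0.9, 0.2, 0.1, 0.7])   # toy sentence-frame similarities
for mode in ["mean", "max", "attention"]:
    print(mode, round(aggregate(sim, mode).item(), 3))
```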
Based on the above analysis, in this paper, we propose an endto-end multi-grained contrast model, namely X-CLIP, for videotext retrieval. Specifically, X-CLIP first adopts modality-specific encoders to generate multi-grained visual and textual representations and then considers multi-grained contrast of features (i.e., videosentence, video-word, sentence-frame, and frame-word) to obtain multi-grained similarity scores, vectors, and matrices. To effectively filter out the unnecessary information and obtain meaningful instance-level similarity scores, the AOSM module of X-CLIP conducts the attention mechanism over the similarity vectors/matrices. Different from the conventional Mean-Max strategy, our proposed AOSM module dynamically considers the importance of each frame in the video and each word in the sentence, so the adverse effects of unimportant words and unnecessary frames on retrieval performance are reduced. To validate the effectiveness of our proposed X-CLIP, we conduct extensive experiments on five widely-used video-text retrieval benchmarks and achieve significantly better performance than previous approaches. Specifically, our X-CLIP achieves 49.3 R@1 on MSR-VTT (i.e., 6.3% relative improvement, 2.9% absolute improvement over the previous state-of-the-art approach). Besides, our proposed X-CLIP achieves 50.4 R@1, 26.1 R@1, 47.8 R@1, 46.2 R@1 on the MSVD, LSMDC, DiDeMo and ActivityNet datasets, respectively, which outperforms the previous SOTA method by +6.6% (+3.1%), +11.1% (+2.6%), +6.7% (+3.0%), +3.8% (+1.7%) on relative (absolute) improvement. 2 RELATED WORKS 2.1 Vision-Language Pre-Training With the success of self-supervised pre-training such as BERT [12] in NLP, vision-language pre-training on large-scale unlabeled crossmodal data has attracted growing attention [23, 31\u201333, 37, 44, 50, 51, 55, 60]. One line of work such as LXMERT [51], OSCAR [33] and ALBEF [31] focuses on pre-training on enormous image-text pairs data, and obtains significant improvement in a variety of vision-andlanguage tasks. To better cope with the image-text retrieval tasks, contrastive language-image pre-training methods such as CLIP [44], ALIGN [23] and WenLan [19] have been proposed, by leveraging billion-scale image-text pairs data from the web with a dual-stream Transformer. Due to the great advantage of CLIP for visual representation learning, some recent work such as CLIP4Clip [38] has also begun to transfer the knowledge of CLIP to video-text retrieval tasks and obtained new state-of-the-art results. The other line of work such as VideoBERT [50], HERO [32] and Frozen in Time [4] directly collects video-text pairs data for video-language pre-training, by further considering the temporal information in videos. However, the scale of the video-language pre-training dataset is much smaller than image-text pre-training since the process of video-text dataset collection is much more expensive. In this work, we follow the line of CLIP4Clip [38], which enhances video-text retrieval by borrowing the ability of visual representation learning from contrastive image-text pre-training. Different from CLIP4Clip [38], we design a multi-grained video-text alignment function to better align the video-text semantics. 2.2 Video-Text Retrieval Video-text retrieval is a popular but challenging task, which involves cross-modal fusion of multiple modalities and additional understanding of temporal information in videos. 
Traditional videotext retrieval methods tend to design task-specific or modalityspecific fusion strategies for cross-modal learning from offline extracted video and text features [15, 16, 20, 28, 34, 42, 62], including face recognition/object recognition/audio processing. However, they are limited by the pre-extracted single modal features, since these features are not properly learnt for the target downstream tasks. Recently, the paradigm of end-to-end video-text retrieval by training models directly from raw video/text has gained large popularity. For example, MIL-NCE [40] adopts Multiple Instance Learning and Noise Contrastive Estimation for end-to-end video representation learning, which addresses visually misaligned narrations from uncurated videos. ClipBERT [30] proposes to sparsely sample video clips for end-to-end training to obtain clip-level predictions, while Frozen in Time [4] uniformly samples video frames and conducts end-to-end training on both image-text and videotext pairs data. CLIP4Clip [38] transfers the knowledge of CLIP to \fX-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval MM \u201922, October 10\u201314, 2022, Lisboa, Portugal end-to-end video-text retrieval and investigates three similarity calculation approaches for video-sentence contrastive learning. However, cross-grained (i.e., video-word and sentence-frame) contrast is also critical, which has rarely been explored in previous works. We propose the first work of multi-grained contrastive learning for endto-end video-text retrieval, by considering all the video-sentence, video-word, sentence-frame, and frame-word contrasts. 2.3 Multi-Grained Contrastive Learning Recently, contrastive learning [8\u201310, 18] has been a popular topic in deep learning community. CLIP [44] implements the idea of contrastive learning based on a large number of image-text pairs, achieving outstanding performance on several multi-modal downstream tasks [17, 21, 22, 39, 64, 65]. To achieve fine-grained contrastive learning, FILIP [59] contrasts the patch in the image with the word in the sentence, achieving fine-grained semantic alignment. TACo [57] proposes token-level and sentence-level losses to include both fine-grained and coarse-grained contrasts. Although contrastive learning has been widely used in multi-modal pretraining, cross-grained contrast has rarely been explored in previous works, which is also critical for semantic alignment. Therefore, we propose a multi-grained contrastive learning method for video-text retrieval, which aims to achieve multi-grained semantic alignment. 3 METHODOLOGY In this section, we elaborate each component of our proposed XCLIP, whose architecture is shown in Fig. 2. Specifically, we first introduce how to extract the multi-grained visual and textual representations in Sec. 3.1. We then explain the multi-grained contrastive learning based on these feature representations in Sec. 3.2, which aims to obtain multi-grained contrast scores, vectors, and matrices. We also introduce how to aggregate the similarity vectors/matrices to the instance-level similarity score in Sec. 3.3. Finally, we describe the similarity calculation and objective function for video-text retrieval in Sec. 3.4 and 3.5, respectively. 3.1 Feature Representation 3.1.1 Frame-level Representation. For a video \u02c6 \ud835\udc63\ud835\udc56\u2208\u02c6 V, we first sample video frames using the sampling rate of 1 frame per second (FPS). 
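As a small illustration of 1-FPS sampling (the exact sampling code is not given in this excerpt, so the function below is only an assumed sketch; the cap of 12 frames matches the max frame length reported later for MSR-VTT-style settings):

```python
def sample_frame_indices(num_frames, video_fps, max_frames=12):
    """Indices of frames sampled at roughly 1 frame per second, capped at max_frames."""
    step = max(1, int(round(video_fps)))   # one frame per second of video
    return list(range(0, num_frames, step))[:max_frames]

# A 12-second clip recorded at 30 FPS (360 frames) yields 12 sampled frames
print(sample_frame_indices(num_frames=360, video_fps=30))
# [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330]
```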
Frame encoder is used to process these frames to obtain frame-level features, which is a standard vision transformer (ViT) with 12 layers. Following the previous work [38], we initialize our frame encoder with the public CLIP [44] checkpoints. The architecture of ViT is the same as the transformer [52] encoder in natural language processing (NLP), except ViT introduces a visual tokenization process to convert video frames into discrete token sequences. The discrete token sequence, which is prepended with a [CLS] token, is then fed into the Transformer of ViT. The [CLS] tokens from the last layer are extracted as the frame-level features \u00af \ud835\udc63(\ud835\udc56,\ud835\udc57) \u2208\u00af V\ud835\udc56. 3.1.2 Visual Representation. However, \u00af \ud835\udc63(\ud835\udc56,\ud835\udc57) \u2208\u00af V\ud835\udc56are extracted from separate frames, without considering the interaction among frames. Therefore, we further propose a temporal encoder with temporal position embedding P, which is a set of predefined parameters, to model the temporal relationship. To be specific, the temporal encoder is also a standard transformer with 3 layers, which can be formulated as: V\ud835\udc56= \ud835\udc47\ud835\udc5f\ud835\udc4e\ud835\udc5b\ud835\udc60\ud835\udc38\ud835\udc5b\ud835\udc50\u0000\u00af V\ud835\udc56+ P\u0001 , (1) where V\ud835\udc56= [\ud835\udc63(\ud835\udc56,1), \ud835\udc63(\ud835\udc56,2), \ud835\udc63(\ud835\udc56,3), ..., \ud835\udc63(\ud835\udc56,\ud835\udc5b)] is the final frame-level (fine-grained) visual features for the video \u02c6 \ud835\udc63\ud835\udc56, \ud835\udc5bis the number of frames in the video \u02c6 \ud835\udc63\ud835\udc56. To obtain video-level (coarse-grained) visual feature \ud835\udc63\u2032 \ud835\udc56\u2208R\ud835\udc51\ud835\udc56\ud835\udc5a, all frame-level features of the video \ud835\udc63\ud835\udc56are averaged, which can be formulate as: \ud835\udc63\u2032 \ud835\udc56= 1 \ud835\udc5b \ud835\udc5b \u2211\ufe01 \ud835\udc57 \ud835\udc63(\ud835\udc56,\ud835\udc57). (2) 3.1.3 Textual Representation. Given a sentence, we directly use the text encoder of CLIP to generate the textual representation, which is also initialized by the public checkpoints of CLIP [44]. Specifically, it is a transformer encoder, which consists of multihead self-attention and feed-forward networks. The transformer consists of 12 layers and 8 attention heads. The dimension of the query, key, and value features is 512. The tokenizer used in the experiment is lower-cased byte pair encoding (BPE) [48] with a 49,152 vocab size. Before being fed into the text encoder, the textual token sequence is padded with [BOS] and [EOS] at the beginning and end, respectively. The sentence-level (coarse-grained) textual feature \ud835\udc61\u2032 \ud835\udc56\u2208R\ud835\udc51\ud835\udc56\ud835\udc5aand word-level (fine-grained) textual features T\ud835\udc56= [\ud835\udc61(\ud835\udc56,1),\ud835\udc61(\ud835\udc56,2),\ud835\udc61(\ud835\udc56,3), ...,\ud835\udc61(\ud835\udc56,\ud835\udc5a)] are the outputs of the [EOS] token and corresponding word tokens from the final layer of text encoder, where \ud835\udc5ais the length of the sentence. 3.2 Multi-Grained Contrastive Learning Previous VTR works [29, 38] focus on fine-grained and coarsegrained contrastive learning, which include video-sentence and frame-word contrasts. However, as explained in Sec. 
1, cross-grained (i.e., video-word and sentence-frame) contrast is explicit to filter out the unnecessary information in the video and sentence. Therefore, different from previous works [29, 38, 59], which only focus singlegrained contrast, X-CLIP is a multi-grained contrastive framework for VTR. 3.2.1 Video-Sentence Contrast. Given the video-level representation \ud835\udc63\u2032 \u2208R\ud835\udc51\ud835\udc56\ud835\udc5aand sentence-level representation \ud835\udc61\u2032 \u2208R\ud835\udc51\ud835\udc56\ud835\udc5a, we use matrix multiplication to evaluate the similarity between video and sentence, which can be formulated as: \ud835\udc46\ud835\udc49\u2212\ud835\udc46= (\ud835\udc63\u2032)\u22ba(\ud835\udc61\u2032), (3) where \ud835\udc46\ud835\udc49\u2212\ud835\udc46\u2208R1 is the video-sentence similarity score. 1 3.2.2 Video-Word Contrast. For the given video-level representation \ud835\udc63\u2032 \u2208R\ud835\udc51\ud835\udc56\ud835\udc5aand word-level representation vector T \u2208R\ud835\udc5a\u00d7\ud835\udc51\ud835\udc56\ud835\udc5a, we use matrix multiplication to calculate the similarity between the video representation and each word representation, which can be represented as follows: \ud835\udc46\ud835\udc49\u2212\ud835\udc4a= (T\ud835\udc63\u2032)\u22ba, (4) where \ud835\udc46\ud835\udc49\u2212\ud835\udc4a\u2208R1\u00d7\ud835\udc5ais the similarity vector between video and each word in the sentence, \ud835\udc5ais the length of the sentence. 1For clarity and simplicity, we have omitted the frame (word) index and video (sentence) index of visual (textual) representations. \fMM \u201922, October 10\u201314, 2022, Lisboa, Portugal Yiwei Ma et al. Text Encoder Temporal Encoder 1 2 3 4 5 6 ... Frame Encoder ... ... a 1 man 2 is 3 driving 4 a 5 car 6 Video-Sentence Score Video-Word Score Matrix Sentence-Frame Score Matrix Frame-Word Score Matrix Attention Over Similarity Matrix (AOSM) Multi-Grained Contrastive Similarity Mean Pool Video-Word Score Sentence-Frame Score Frame-Word Score Figure 2: Illustration of the proposed X-CLIP model. The input sentences are processed by the text encoder to generate coarsegrained and fine-grained textual representations. The input video is sampled into ordinal frames and these frames are fed into the frame encoder to generate frame-level representations. The frame-level representations are then fed into the temporal encoder to capture the temporal relationships. The outputs of the temporal encoder are fine-grained visual representations, and the coarse-grained visual representation is obtained by averaging all these fine-grained features. Based on these representations, we calculate the video-sentence, video-word, sentence-frame, and frame-word similarity score. 3.2.3 Sentence-Frame Contrast. Similar to Video-Word Contrast, we can calculate the similarity between the sentence representation \ud835\udc61\u2032 \u2208R\ud835\udc51\ud835\udc56\ud835\udc5aand each frame representation \u00af V \u2208R\ud835\udc5b\u00d7\ud835\udc51\ud835\udc56\ud835\udc5abased on matrix multiplication, which can be formulated as follows: \ud835\udc46\ud835\udc39\u2212\ud835\udc46= \u00af V\ud835\udc61\u2032, (5) where \ud835\udc46\ud835\udc39\u2212\ud835\udc46\u2208R\ud835\udc5b\u00d71 is the similarity vector between the sentence and each frame of a video, \ud835\udc5bis the number of frames in the video. 3.2.4 Frame-Word Contrast. 
The fine-grained similarity matrix between word representations and frame representations can also be obtained using matrix multiplication: $S_{F-W} = \bar{V} T^{\top}$, (6) where $S_{F-W} \in \mathbb{R}^{n \times m}$ is the fine-grained similarity matrix, and $n$ and $m$ are the number of frames and words, respectively. 3.3 Attention Over Similarity Matrix (AOSM) To obtain the instance-level similarity, we fuse the similarity vector/matrix in Eq. 4, Eq. 5 and Eq. 6. As discussed in Sec. 1, Mean-Max strategies [25, 26, 47, 59] ignore the importance of different frames and words. To address this issue, we propose the Attention Over Similarity Matrix (AOSM) module, where scores in similarity vectors/matrices are given different weights during aggregation. Specifically, given the similarity vectors $S_{V-W} \in \mathbb{R}^{1 \times m}$ and $S_{F-S} \in \mathbb{R}^{n \times 1}$, we first use Softmax to obtain the weights for the similarity vector, where scores for the fine-grained features related to the query are given high weights. Then, we aggregate these similarity scores based on the obtained weights, which can be formulated as follows: $S'_{V-W} = \sum_{i=1}^{m} \frac{\exp(S_{V-W}(1,i)/\tau)}{\sum_{j=1}^{m} \exp(S_{V-W}(1,j)/\tau)} S_{V-W}(1,i)$, (7) $S'_{F-S} = \sum_{i=1}^{n} \frac{\exp(S_{F-S}(i,1)/\tau)}{\sum_{j=1}^{n} \exp(S_{F-S}(j,1)/\tau)} S_{F-S}(i,1)$, (8) where $\tau$ is the temperature parameter of the Softmax. Since the fine-grained similarity matrix $S_{F-W} \in \mathbb{R}^{n \times m}$ contains the similarity scores of $n$ frames and $m$ words, we perform attention operations on the matrix twice.
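As a minimal sketch of Eqs. (7)-(8) (PyTorch is assumed; this is not the authors' released code), the softmax-weighted pooling of a similarity vector can be written as:

```python
import torch

def aosm_pool(sim, tau=0.01, dim=-1):
    """Softmax-weighted pooling of similarity scores (Eqs. (7)-(8)).

    sim: similarity scores along `dim`, e.g. the video-word vector S_{V-W} of
         shape (m,) or the sentence-frame vector S_{F-S} of shape (n,).
    tau: temperature of the Softmax.
    """
    weights = torch.softmax(sim / tau, dim=dim)
    return (weights * sim).sum(dim=dim)

s_vw = torch.randn(20)          # video vs. m = 20 words
s_fs = torch.randn(12)          # sentence vs. n = 12 frames
s_vw_prime = aosm_pool(s_vw)    # scalar S'_{V-W}
s_fs_prime = aosm_pool(s_fs)    # scalar S'_{F-S}
```

Applying the same pooling twice, once along each axis of the frame-word matrix, yields the fine-grained instance-level score described next.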
The first attention aims to get finegrained video-level and sentence-level similarity vectors, which can be formulated as follows: \ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51= \ud835\udc5b \u2211\ufe01 \ud835\udc56=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc39\u2212\ud835\udc4a(\ud835\udc56,\u2217)/\ud835\udf0f) \u00cd\ud835\udc5b \ud835\udc57=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc39\u2212\ud835\udc4a(\ud835\udc57,\u2217)/\ud835\udf0f) \ud835\udc46\ud835\udc39\u2212\ud835\udc4a(\ud835\udc56,\u2217), (9) \ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b= \ud835\udc5a \u2211\ufe01 \ud835\udc56=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc39\u2212\ud835\udc4a(\u2217,\ud835\udc56)/\ud835\udf0f) \u00cd\ud835\udc5a \ud835\udc57=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc39\u2212\ud835\udc4a(\u2217,\ud835\udc57)/\ud835\udf0f) \ud835\udc46\ud835\udc39\u2212\ud835\udc4a(\u2217,\ud835\udc56), (10) where \u2217represents all content in the dimension, \ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51\u2208R1\u00d7\ud835\udc5a and \ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b\u2208R\ud835\udc5b\u00d71 are the video-level and sentence-level similarity vector, respectively. Specifically, \ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51\u2208R1\u00d7\ud835\udc5ashows the similarity score between the video and \ud835\udc5awords in the sentence. \ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b\u2208R\ud835\udc5b\u00d71 represents the similarity score between the sentence and \ud835\udc5aframes in the video. To obtain fine-grained instance-level similarity scores, we conduct the second attention operation on the video-level vector \ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51\u2208 R1\u00d7\ud835\udc5aand sentence-level similarity vector \ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b\u2208R\ud835\udc5b\u00d71, which can \fX-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval MM \u201922, October 10\u201314, 2022, Lisboa, Portugal be represented as follows: \ud835\udc46\u2032 \ud835\udc63\ud835\udc56\ud835\udc51= \ud835\udc5a \u2211\ufe01 \ud835\udc56=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51(1,\ud835\udc56)/\ud835\udf0f) \u00cd\ud835\udc5a \ud835\udc57=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51(1,\ud835\udc57)/\ud835\udf0f) \ud835\udc46\ud835\udc63\ud835\udc56\ud835\udc51(1,\ud835\udc56), (11) \ud835\udc46\u2032 \ud835\udc60\ud835\udc52\ud835\udc5b= \ud835\udc5b \u2211\ufe01 \ud835\udc56=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b(\ud835\udc56,1)/\ud835\udf0f) \u00cd\ud835\udc5b \ud835\udc57=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b(\ud835\udc57,1)/\ud835\udf0f) \ud835\udc46\ud835\udc60\ud835\udc52\ud835\udc5b(\ud835\udc56,1), (12) where \ud835\udc46\u2032 \ud835\udc63\ud835\udc56\ud835\udc51\u2208R1 and \ud835\udc46\u2032 \ud835\udc60\ud835\udc52\ud835\udc5b\u2208R1 are the instance-level similarities. We use the average value as the fine-grained similarity score: \ud835\udc46\u2032 \ud835\udc39\u2212\ud835\udc4a= (\ud835\udc46\u2032 \ud835\udc63\ud835\udc56\ud835\udc51+ \ud835\udc46\u2032 \ud835\udc60\ud835\udc52\ud835\udc5b)/2. (13) 3.4 Similarity Calculation The similarity score \ud835\udc60(\ud835\udc63\ud835\udc56,\ud835\udc61\ud835\udc57) measures the semantic similarity between the two instances. 
Different from the previous work [38] that only consider the coarse-grained contrast, our proposed XCLIP adopt multi-grained contrast during retrieval. Therefore, the final similarity score \ud835\udc60(\ud835\udc63\ud835\udc56,\ud835\udc61\ud835\udc57) of X-CLIP contains multi-grained contrastive similarity scores, which can be represented as follows: \ud835\udc60(\ud835\udc63\ud835\udc56,\ud835\udc61\ud835\udc57) = (\ud835\udc46\ud835\udc49\u2212\ud835\udc46+ \ud835\udc46\u2032 \ud835\udc49\u2212\ud835\udc4a+ \ud835\udc46\u2032 \ud835\udc39\u2212\ud835\udc46+ \ud835\udc46\u2032 \ud835\udc39\u2212\ud835\udc4a)/4. (14) 3.5 Objective Function During training, given a batch of \ud835\udc35video-text pairs, the model will generate a \ud835\udc35\u00d7 \ud835\udc35similarity matrix. We adopt the symmetric InfoNCE loss over the similarity matrix to optimize the retrieval model, which can be formulated as: L\ud835\udc632\ud835\udc61= \u22121 \ud835\udc35 \ud835\udc35 \u2211\ufe01 \ud835\udc56=1 log exp \u0000\ud835\udc60(\ud835\udc63\ud835\udc56,\ud835\udc61\ud835\udc56)\u0001 \u00cd\ud835\udc35 \ud835\udc57=1 exp \u0000\ud835\udc60(\ud835\udc63\ud835\udc56,\ud835\udc61\ud835\udc57)\u0001 , (15) L\ud835\udc612\ud835\udc63= \u22121 \ud835\udc35 \ud835\udc35 \u2211\ufe01 \ud835\udc56=1 log exp \u0000\ud835\udc60(\ud835\udc63\ud835\udc56,\ud835\udc61\ud835\udc56)\u0001 \u00cd\ud835\udc35 \ud835\udc57=1 exp \u0000\ud835\udc60(\ud835\udc63\ud835\udc57,\ud835\udc61\ud835\udc56)\u0001 , (16) L = L\ud835\udc632\ud835\udc61+ L\ud835\udc612\ud835\udc63. (17) 4 EXPERIMENTS 4.1 Datasets MSR-VTT [56] is a popular video-text retrieval dataset, which contains 10,000 videos and 200,000 captions. The length of videos in this dataset ranges from 10 to 32 seconds. In this paper, we adopt the widely-used \u2018Training-9K\u2019 split, where 9,000 videos and 180,000 captions are used for training and the rest are used for testing. MSVD [7] contains 1,970 videos, the duration of which vary from 1 to 62 seconds. Each video is annotated with 40 English captions. We use 1,200, 100, 670 videos for training, validating, and testing. LSMDC [45] is a dataset that contains 118,081 videos and captions. The duration of each video ranges from 2 to 30 seconds. We adopt 109,673, 7,408, and 1,000 videos for training, validating, and testing. DiDeMo [2] contains 10,000 videos and 40,000 captions. Following previous works [4, 30, 35], all captions of a video are concatenated together during video-paragraph retrieval. ActivityNet [6] contains 20,000 YouTube videos, which are annotated temporally. Following previous works [15, 38, 49], all captions of a video are also concatenated together during video-paragraph retrieval for fair comparison. 4.2 Experimental Settings 4.2.1 Implementation Details. We conduct the experiments on 4 NVIDIA Tesla V100 32GB GPUs using the PyTorch library. Following the previous work [38], the text encoder and frame encoder of X-CLIP are initialized by the public CLIP checkpoints. We use the Adam optimizer [27] to optimize the X-CLIP and decay the learning rate using a cosine schedule strategy [36]. Since the parameters of the text encoder and frame encoder are initialized from the public CLIP checkpoints, we adopt different learning rates for different modules. Specifically, the initial learning rate for text encoder and frame encoder is 1e-7, and the initial learning rate for other modules is 1e-4. 
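For reference, the symmetric InfoNCE objective of Eqs. (15)-(17) can be sketched as follows; this is an illustrative PyTorch version (not the authors' code) that uses the fact that cross-entropy over a row of the B x B similarity matrix reproduces the per-sample term in Eq. (15), and the batch size and scores below are toy values.

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(sim_matrix):
    """Symmetric InfoNCE over a B x B video-text similarity matrix (Eqs. (15)-(17));
    matched video-text pairs lie on the diagonal."""
    labels = torch.arange(sim_matrix.size(0), device=sim_matrix.device)
    loss_v2t = F.cross_entropy(sim_matrix, labels)       # video-to-text direction
    loss_t2v = F.cross_entropy(sim_matrix.t(), labels)   # text-to-video direction
    return loss_v2t + loss_t2v

sim = torch.randn(8, 8, requires_grad=True)  # toy batch of B = 8 video-text pairs
symmetric_infonce(sim).backward()
```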
We set the max token length, max frame length, batch size, and the training epoch to 32, 12, 300, and 3 for MSR-VTT, MSVD, and LSMDC datasets. Since videos and captions in DiDeMo and ActivityNet are longer and more complex, we set the max token length, max frame length, and the training epoch to 64, 64, and 20. Due to the limitation of GPU memory, we also reduce the batch size of DiDeMo and ActivityNet to 64. We conduct ablation, quantitative and qualitative experiments on the MSR-VTT dataset, it is more popular and competitive compared with other datasets. The base model of X-CLIP is ViT-B/32 if not specified. In order to enhance the expression ability of the model, we adopt linear embedding during calculating the video-sentence and frame-word similarity scores, which are initialized with the identity matrices. Besides, we also use the FC layers which are initialized with the identity matrices on similarity scores to enhance the modeling ability of the model. 4.2.2 Evaluation Protocols. To evaluate the retrieval performance of our proposed model, we use recall at Rank K (R@K, higher is better), median rank (MdR, lower is better), and mean rank (MnR, lower is better) as retrieval metrics, which are widely used in previous retrieval works [3, 5, 13\u201315, 30, 34, 35, 38, 41, 61, 63, 66]. 4.3 Performance Comparison We compare X-CLIP against the previous works on MSR-VTT, MSVD, LSMDC, DiDeMo, and ActivityNet. X-CLIP achieves the SOTA results on all five datasets with significant improvements. For the MSR-VTT dataset, the performance comparison is shown in Tab. 1. By analyzing the table, we gain the following observations: \u2022 Benefiting from the large-scale image-text pre-training, both CLIP4Clip and our model X-CLIP can obtain significant gains in performance compared with all the baselines. The consistent improvements verify that it is important to adopt end-to-end finetuning to realize the full potential of the image-text pretrained model on video-text retrieval. \u2022 Compared with the strongest competitor (i.e., CLIP4Clip-seqTransf), X-CLIP obtains 49.3 R@1 (6.3% relative improvement, 2.9% absolute improvement) in the text-to-video retrieval task and 48.9 R@1 (7.7% relative improvement, 3.5% absolute improvement) in the video-to-text retrieval task by employing CLIP(ViT-B/16) as pre-trained model. This can be attributed to that our proposed cross-grained contrast and the AOSM module are critical to reducing the bad effects of unnecessary frames and unimportant words. \u2022 Compared to all the other state-of-the-arts, our model with ViTB/16 achieves the best performance in all metrics. Surprisingly, \fMM \u201922, October 10\u201314, 2022, Lisboa, Portugal Yiwei Ma et al. Table 1: Retrieval performance comparison to SOTAs on the MSR-VTT dataset. 
Text-to-Video Retrieval Video-to-Text Retrieval Model R@1\u2191 R@5\u2191 R@10\u2191 MdR\u2193 MnR\u2193 R@1\u2191 R@5\u2191 R@10\u2191 MdR\u2193 MnR\u2193 CE [35] 20.9 48.8 62.4 6.0 28.2 20.6 50.3 64.0 5.3 MMT [15] 26.6 57.1 69.6 4.0 24.0 27.0 57.5 69.7 3.7 AVLnet [46] 27.1 55.6 66.6 4.0 28.5 54.6 65.2 4.0 SSB [42] 30.1 58.5 69.3 3.0 28.5 58.6 71.6 3.0 MDMMT [14] 38.9 69.0 79.7 2.0 16.5 Frozen [4] 31.0 59.5 70.5 3.0 HiT [34] 30.7 60.9 73.2 2.6 32.1 62.7 74.1 3.0 TT-CE+ [11] 29.6 61.6 74.2 3.0 32.1 62.7 75.0 3.0 CLIP-straight [43] 31.2 53.7 64.2 4.0 27.2 51.7 62.6 5.0 CLIP4Clip-MeanP (ViT-B/32) [38] 43.1 70.4 80.8 2.0 16.2 43.1 70.5 81.2 2.0 12.4 CLIP4Clip-seqLSTM (ViT-B/32) [38] 42.5 70.8 80.7 2.0 16.7 42.8 71.0 80.4 2.0 12.3 CLIP4Clip-seqTransf (ViT-B/32) [38] 44.5 71.4 81.6 2.0 15.3 42.7 70.9 80.6 2.0 11.6 CLIP4Clip-tightTransf (ViT-B/32) [38] 40.2 71.5 80.5 2.0 13.4 40.6 69.5 79.5 2.0 13.6 CLIP4Clip-MeanP (ViT-B/16) [38] 45.3 73.3 83.0 2.0 13.0 44.8 73.2 82.2 2.0 9.6 CLIP4Clip-seqLSTM (ViT-B/16) [38] 44.3 72.0 82.2 2.0 13.7 44.3 73.4 82.4 2.0 10.3 CLIP4Clip-seqTransf (ViT-B/16) [38] 46.4 72.1 82.0 2.0 14.7 45.4 73.4 82.4 2.0 10.7 CLIP4Clip-tightTransf (ViT-B/16) [38] 42.9 71.7 81.5 2.0 13.3 41.9 71.0 80.7 2.0 10.1 X-CLIP (ViT-B/32) 46.1 73.0 83.1 2.0 13.2 46.8 73.3 84.0 2.0 9.1 X-CLIP (ViT-B/16) 49.3 75.8 84.8 2.0 12.2 48.9 76.8 84.5 2.0 8.1 Table 2: Retrieval performance comparison on MSVD. Text-to-Video Video-to-Text Model R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 Multi Cues [41] 20.3 47.8 CE [35] 19.8 49.0 SSB [42] 28.4 60.0 NoiseE [1] 20.3 49.0 CLIP-straight [43] 37.0 64.1 59.9 85.2 Frozen [4] 33.7 64.7 TT-CE+ [11] 25.4 56.9 27.1 55.3 CLIP4Clip-MeanP (ViT-B/32) [38] 46.2 76.1 10.0 56.6 79.7 7.6 CLIP4Clip-seqTransf (ViT-B/32) [38] 45.2 75.5 10.3 62.0 87.3 4.3 CLIP4Clip-MeanP (ViT-B/16) [38] 47.3 77.7 9.1 62.9 87.2 4.2 CLIP4Clip-seqTransf (ViT-B/16) [38] 47.2 77.7 9.1 63.2 87.2 4.2 X-CLIP (ViT-B/32) 47.1 77.8 9.5 60.9 87.8 4.7 X-CLIP (ViT-B/16) 50.4 80.6 8.4 66.8 90.4 4.2 Table 3: Retrieval performance comparison on LSMDC. Text-to-Video Video-to-Text Model R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 CT-SAN [62] 5.1 16.3 JSFusion [61] 9.1 21.2 12.3 28.6 CE [35] 11.2 26.9 96.8 MMT [15] 12.9 29.9 75.0 NoiseE [1] 6.4 19.8 CLIP-straight [43] 11.3 22.7 6.8 16.4 MDMMT [14] 18.8 38.5 58.0 Frozen [4] 15.0 30.8 HiT [34] 14.0 31.2 TT-CE+ [11] 17.2 36.5 17.5 36.0 CLIP4Clip-MeanP (ViT-B/32) [38] 20.7 38.9 65.3 20.6 39.4 56.7 CLIP4Clip-seqTransf (ViT-B/32) [38] 22.6 41.0 61.0 20.8 39.0 54.2 CLIP4Clip-MeanP (ViT-B/16) [38] 23.5 43.2 54.8 22.6 50.5 50.3 CLIP4Clip-seqTransf (ViT-B/16) [38] 23.5 45.2 51.6 23.2 42.4 47.4 X-CLIP (ViT-B/32) 23.3 43.0 56.0 22.5 42.2 50.7 X-CLIP (ViT-B/16) 26.1 48.4 46.7 26.9 46.2 41.9 our model with the ViT-B/32 can even achieve comparable performance to CLIP4Clip with ViT-B/16, which again demonstrates the effectiveness and superiority of multi-grained contrast and the AOSM module. We also further validate the generalization of X-CLIP on MSVD, LSMDC, DiDeMo and ActivityNet in Tab. 2 5. It is worth noting Table 4: Retrieval performance comparison on DiDeMo. 
Text-to-Video Video-to-Text Model R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 S2VT [53] 11.9 33.6 13.2 33.6 FSE [63] 13.9 36.0 13.1 33.9 CE [35] 16.1 41.1 43.7 15.6 40.9 42.4 ClipBERT [30] 20.4 48.0 Frozen [4] 34.6 65.0 TT-CE+ [11] 21.6 48.6 21.1 47.3 CLIP4Clip-MeanP (ViT-B/32) [38] 43.4 70.2 17.5 42.5 70.6 11.6 CLIP4Clip-seqTransf (ViT-B/32) [38] 42.8 68.5 18.9 41.4 68.2 12.4 CLIP4Clip-MeanP (ViT-B/16) [38] 44.8 75.1 13.0 47.2 74.0 10.5 CLIP4Clip-seqTransf (ViT-B/16) [38] 44.8 73.4 13.5 44.7 74.0 10.6 X-CLIP (ViT-B/32) 45.2 74.0 14.6 43.1 72.2 10.9 X-CLIP (ViT-B/16) 47.8 79.3 12.6 47.8 76.8 10.5 that, in all variants of CLIP4Clip, we only report the performance of CLIP4Clip-MeanP and CLIP4Clip-seqTranf, because they perform better than the other two variants in consideration of experience in the previous work [38] and performance comparison in Tab. 1. By analyzing these tables, we can observe that X-CLIP also achieves significant improvement on these datasets for text-to-video and video-to-text retrieval tasks. Specifically, for the text-to-video retrieval task, X-CLIP outperforms the CLIP4Clip with ViT-B/16 on R@1 by +6.6% (+3.1%), +11.1% (+2.6%), +6.7% (+3.0%), +3.8% (+1.7%) relative (absolute) improvement on aforesaid four datasets respectively. For the video-to-text retrieval task, X-CLIP obtains +5.7% (+3.6%), +12.9% (+3.0%), +1.3% (+0.6%), +5.2% (+2.3%) relative (absolute) improvement on R@1. This demonstrates that our proposed X-CLIP can achieve consistent performance improvement on several video-text retrieval datasets. More experimental results are in the supplementary materials. 4.4 Ablation Study To fully examine the impact of different contrastive modules, we conduct an ablation study to compare different variants of X-CLIP. As shown in Tab. 6, we gain two important observations: \fX-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval MM \u201922, October 10\u201314, 2022, Lisboa, Portugal Table 5: Retrieval performance comparison on ActivityNet. Text-to-Video Video-to-Text Model R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 FSE [63] 18.2 44.8 16.7 43.1 CE [35] 18.2 47.7 23.1 17.7 46.6 24.4 HSE [63] 20.5 49.3 18.7 48.1 MMT [15] 28.7 61.4 16.0 28.9 61.1 17.1 SSB [42] 29.2 61.6 28.7 60.8 HiT [34] 29.6 60.7 ClipBERT [30] 21.3 49.0 TT-CE+ [11] 23.5 57.2 23.0 56.1 CLIP4Clip-MeanP (ViT-B/32) [38] 40.5 72.4 7.4 42.5 74.1 6.6 CLIP4Clip-seqTransf (ViT-B/32) [38] 40.5 72.4 7.5 41.4 73.7 6.7 CLIP4Clip-MeanP (ViT-B/16) [38] 44.0 73.9 7.0 44.1 74.0 6.5 CLIP4Clip-seqTransf (ViT-B/16) [38] 44.5 75.2 6.4 44.1 75.2 6.4 X-CLIP (ViT-B/32) 44.3 74.1 7.9 43.9 73.9 7.6 X-CLIP (ViT-B/16) 46.2 75.5 6.8 46.4 75.9 6.4 \u2022 With the number of contrastive modules increasing, the retrieval performance tends to be higher. When X-CLIP is equipped with all contrastive modules, the best retrieval performance can be achieved. This may be because each contrastive module plays a different role in the retrieval task and different contrast modules can promote each other to achieve better retrieval results. \u2022 Our proposed cross-grained contrast can assist fine-grained contrast or coarse-grained contrast to achieve better performance in the retrieval task. Specifically, X-CLIP with the sentence-video contrast module (i.e., Exp1) only achieves 43.0 R@1 in the textto-video retrieval task. 
However, when X-CLIP is additionally equipped with cross-grained contrast modules (i.e., Exp8 and Exp9), the performance gets obvious absolute improvements of 2.4% and 1.0% respectively. Similarly, when X-CLIP is only equipped with fine-grained and coarse-grained contrast modules (i.e., Exp10), it achieves 44.8 R@1 in the text-to-video task. However, when it is additionally equipped with cross-grained contrast modules (i.e., Exp13 and Exp14), 1.0% and 0.7% absolute improvement of R@1 can be achieved. Therefore, we conclude that the performance improvement of cross-grained contrast modules in the retrieval task does not conflict with that of coarse-grained and fine-grained contrast modules. To justify the effectiveness of the proposed AOSM module, we compare our method with the conventional Mean-Max and other variants (i.e., Max-Max, Max-Mean and Mean-Mean). As shown in Tab. 7, we observe that the Mean-Mean strategy performs worst. This may be because the Mean-Mean strategy, which applies the same weight to all similarity scores during aggregating, can not eliminate the adverse effects of unnecessary frames and unimportant words on the retrieval results. The Max-Mean, Mean-Max and Max-Max strategies perform better than the Mean-Mean strategy. This can be attributed to that these strategies adopt the highest similarity during aggregation, so contrast scores between unnecessary frames and unimportant words will be filtered out. However, since these strategies adopt the top-1 similarity score, some important similarity scores will also be ignored. To address this issue, we propose the AOSM module, where all similarity scores will be applied with different weights during aggregation. From Tab. 7, we observe that compared with other strategies, our proposed attention mechanism achieves better performance. To explore the impact of the temporal encoder module in X-CLIP, we also conduct an ablative study to compare the X-CLIP with and without the temporal encoder. As shown in Tab 8, based on either Top1: a woman plays instruments in a field. (31.03) Top2: women of a foreign nation comb their hair and perform in traditional costumes. (26.60) Top3: woman playing instruments in a field for a music video. (26.30) \u2714 Top1: A police officer drives his white car onto a grassy field and then back on to the street. (32.50) Top2: A car is in a wreck. (28.03) Top3: A car is racing on road. (27.85) \u2714 Top1: A cartoon character prepares to ride a bicycle. (34.30) Top2: Cartoon of a squid on a bike looking up at a treehouse. (29.64) Top3: A video game character rides around on a motorcycle. (27.27) \u2714 Figure 3: Top-3 video-to-text retrieval results on MSR-VTT. The number in parentheses is the similarity score. ViT-B/32 or ViT/16, X-CLIP with temporal encoder consistently outperforms X-CLIP without temporal encoder. This may be because the temporal encoder is used to model the temporal relation of different frames in a video. Therefore, X-CLIP without temporal encoder can not understand and perceive the information that requires a combination of multiple frames, e.g., action. Based on the above analysis, we conclude that temporal modeling is also a key to improving the performance of retrieval tasks. 4.5 Effect of Temperature Parameter To explore the effect of different \ud835\udf0fin the AOSM module, we also designed a group of experiments by setting different temperature parameters \ud835\udf0fin Softmax. From Tab. 
9, we observe that the retrieval performance first improves before reaching the saturation point (i.e., \ud835\udf0f= 0.01), and then begins to decline slightly. The main reason may be that when \ud835\udf0fis large, too many noisy similarity scores are considered. On the contrary, if the \ud835\udf0fis small, some important similarity scores may be ignored. Besides, our proposed attention mechanism with different \ud835\udf0fconsistently performs better than the Mean-Mean strategy, and the attention mechanism with the optimal \ud835\udf0foutperforms other strategies in all evaluation protocols. This justifies that our proposed attention mechanism helps to strengthen the influence of important similarity scores and weaken the influence of noisy similarity scores, thus achieving better retrieval performance. 4.6 Qualitative Analysis To qualitatively validate the effectiveness of our proposed X-CLIP, we show some typical video-to-text and text-to-video retrieval examples in Fig. 3 and Fig. 4, respectively. From these retrieval results, we find that X-CLIP could accurately understand the content of sentences and videos. Meanwhile, it is robust for X-CLIP to comprehend complex and similar sentences and videos, which is mainly attributed to the multi-grained contrast of our proposed model. To be specific, as shown in the first example in Fig.3, although the top-3 retrieved sentences are similar, our proposed X-CLIP can still choose the correct sentence by understanding the details of \fMM \u201922, October 10\u201314, 2022, Lisboa, Portugal Yiwei Ma et al. Table 6: Retrieval performance with different contrastive granularity on the MSR-VTT dataset. Contrastive Module Text-to-Video Video-to-Text ID Sent-Video Sent-Frame Word-Video Word-Frame R@1\u2191 R@5\u2191 R@10\u2191 MnR\u2193 R@1\u2191 R@5\u2191 R@10\u2191 MnR\u2193 Exp1 \u2713 43.0 70.7 81.6 16.3 43.0 70.2 81.2 11.5 Exp2 \u2713 42.7 69.6 81.3 13.9 43.1 70.7 82.1 9.9 Exp3 \u2713 42.8 69.9 80.1 17.0 43.2 70.1 80.5 13.8 Exp4 \u2713 42.7 69.5 81.3 14.4 42.8 70.8 81.7 10.6 Exp5 \u2713 \u2713 44.6 72.8 82.4 13.9 45.7 73.2 82.3 9.1 Exp6 \u2713 \u2713 45.6 72.0 82.0 13.6 44.8 72.5 81.7 9.6 Exp7 \u2713 \u2713 44.1 70.2 81.3 14.3 44.4 71.6 82.8 9.7 Exp8 \u2713 \u2713 45.4 72.2 81.6 13.4 45.4 72.8 82.7 9.2 Exp9 \u2713 \u2713 44.0 70.3 82.5 13.9 43.6 70.9 81.8 11.3 Exp10 \u2713 \u2713 44.8 72.6 83.0 13.6 45.3 73.0 83.8 9.5 Exp11 \u2713 \u2713 \u2713 45.7 72.7 82.5 13.2 45.6 72.8 82.9 9.2 Exp12 \u2713 \u2713 \u2713 45.7 72.7 82.5 13.2 45.6 72.8 82.9 9.2 Exp13 \u2713 \u2713 \u2713 45.8 73.2 82.7 13.2 46.5 72.6 83.8 9.7 Exp14 \u2713 \u2713 \u2713 45.5 72.8 82.9 13.5 46.4 72.5 83.7 9.6 Exp15 \u2713 \u2713 \u2713 \u2713 46.1 73.0 83.1 13.2 46.8 73.3 84.0 9.1 Table 7: Retrieval performance with different fusion methods for similarity matrices on the MSR-VTT dataset. Text-to-Video Video-to-Text Method R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 Max-Max 44.0 72.6 13.5 44.4 72.5 9.2 Mean-Mean 43.2 71.2 14.8 42.5 70.2 11.4 Mean-Max 44.4 71.1 14.9 44.2 71.7 10.2 Max-Mean 44.9 71.3 13.5 43.8 71.8 9.4 Attention 46.1 73.0 13.2 46.8 73.3 9.1 Table 8: Ablation study of temporal encoder on the MSRVTT dataset. TE is short for temporal encoder. 
Text-to-Video Video-to-Text Base Model TE R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 ViT-B/32 45.2 72.9 13.8 45.6 73.9 9.2 \u2713 46.1 73.0 13.2 46.8 73.3 9.1 ViT-B/16 48.3 75.3 13.4 47.6 76.1 9.0 \u2713 49.3 75.8 12.2 48.9 76.8 8.1 Table 9: Retrieval performance with different temprature parameters \ud835\udf0fin Softmax on the MSR-VTT dataset. Text-to-Video Video-to-Text \ud835\udf0f R@1\u2191 R@5\u2191 MnR\u2193 R@1\u2191 R@5\u2191 MnR\u2193 1 43.9 71.6 14.5 43.5 71.3 11.3 0.1 45.2 72.2 14.0 45.3 73.1 9.3 0.01 46.1 73.0 13.2 46.8 73.3 9.1 0.001 45.6 72.2 13.7 43.6 72.5 9.4 sentences and videos. Similarly, as shown in the first example in Fig.4, all top-3 retrieved videos describe the same cartoon, while \u201csquid\u201d does not appear in the second and third videos. Due to the multi-grained contrast, X-CLIP performs well in visual and textual content understanding, so it can retrieve the correct video. 5" + } + ], + "Jiayi Ji": [ + { + "url": "http://arxiv.org/abs/2012.07061v1", + "title": "Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network", + "abstract": "Transformer-based architectures have shown great success in image captioning,\nwhere object regions are encoded and then attended into the vectorial\nrepresentations to guide the caption decoding. However, such vectorial\nrepresentations only contain region-level information without considering the\nglobal information reflecting the entire image, which fails to expand the\ncapability of complex multi-modal reasoning in image captioning. In this paper,\nwe introduce a Global Enhanced Transformer (termed GET) to enable the\nextraction of a more comprehensive global representation, and then adaptively\nguide the decoder to generate high-quality captions. In GET, a Global Enhanced\nEncoder is designed for the embedding of the global feature, and a Global\nAdaptive Decoder are designed for the guidance of the caption generation. The\nformer models intra- and inter-layer global representation by taking advantage\nof the proposed Global Enhanced Attention and a layer-wise fusion module. The\nlatter contains a Global Adaptive Controller that can adaptively fuse the\nglobal information into the decoder to guide the caption generation. Extensive\nexperiments on MS COCO dataset demonstrate the superiority of our GET over many\nstate-of-the-arts.", + "authors": "Jiayi Ji, Yunpeng Luo, Xiaoshuai Sun, Fuhai Chen, Gen Luo, Yongjian Wu, Yue Gao, Rongrong Ji", + "published": "2020-12-13", + "updated": "2020-12-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Image captioning aims to describe the semantic content of an image via neural language, which has recently attracted extensive research attention. Inspired by the sequence-tosequence model for machine translation, most captioning models (Vinyals et al. 2016; Xu et al. 2015; Anderson et al. 2018; Huang et al. 2019) mainly adopt a encoder-decoder framework, where an encoder network encodes the input image into a vectorial feature, and a decoder network takes the vectorial feature as input and generates the output caption. Such an encoder-decoder framework is recently well promoted with the development of the Transformer (Vaswani et al. 2017), where the self-attention is ef\ufb01ciently utilized to capture the correlations among the regions and words (Liu et al. 2019; Huang et al. 2019; Li et al. 2019a; Herdade et al. 2019; Cornia et al. 2020). 
Figure 1: (a) The self-attention mechanism in the l-th layer of a standard Transformer. The vectorial representation $v_i^l$ is region-biased, i.e., it only focuses on region-level information (Devlin et al. 2018; Song et al. 2020; Weng et al. 2020). (b) Two key issues of the traditional Transformer-based captioning model that we try to address: object missing (top: missing "snow") and false prediction (bottom: predicting "playing with a boy" as "walking down").
In the Transformer architecture, a set of image regions is encoded and attended into vectorial representations, as shown in Fig. 1 (a). These representations are then fused into the decoder to generate the corresponding captions. However, as demonstrated by earlier works (Devlin et al. 2018; Song et al. 2020; Weng et al. 2020), even though the vectorial representations of these regions are hierarchically calculated by attending to all regions in the image, they still ignore image-level characteristics and are thereby less effective for the decoder (Weng et al. 2020; Anderson et al. 2018). This causes the problem of object missing when generating descriptions, which is attributed to the limited number of categories covered by object detectors. As shown in the top of Fig. 1 (b), an important concept, i.e., "snow", is not presented. Besides, focusing on local information while ignoring global guidance is more error-prone, as shown in the bottom of Fig. 1 (b): treating each object in isolation leads to a relationship bias.
As the local vectorial representations may be insufficiently comprehensive in detail, GET explores the global parts of images to supplement the local vectorial representations, making them more comprehensive and instructive for caption generation. To sum up, our major contributions are itemized below:
• We address the issues of object missing and relationship bias by leveraging global representation to provide more comprehensive visual information and to connect the various local parts, which is fundamental to the image captioning task.
• We devise a unique encoder, termed Global Enhanced Encoder, which enables the Transformer framework to model intra- and inter-layer global information simultaneously, and propose a novel gating mechanism named Gated Adaptive Controller to provide adaptive and sophisticated control over the fusion of global information.
• Through extensive experiments, we demonstrate that our Global Enhanced Transformer (GET) model achieves new state-of-the-art performance on the MS COCO dataset.
RELATED WORK
Image Captioning. Inspired by the encoder-decoder architectures in machine translation (Bahdanau, Cho, and Bengio 2014; Sutskever, Vinyals, and Le 2014), most existing image captioning approaches typically adopt the CNN-RNN framework (Vinyals et al. 2016; Karpathy and Fei-Fei 2015), where a convolutional neural network (CNN) (He et al. 2016; Lin et al. 2020) is used to encode a given image, followed by a recurrent neural network (RNN) (Hochreiter and Schmidhuber 1997) that decodes the CNN output into a sentence. Recently, a variety of advanced models (Yao et al. 2018; Yang et al. 2019; Anderson et al. 2018; Lu et al. 2017) have been proposed with attention (Xu et al. 2015) and RL-based training objectives (Rennie et al. 2017).
Transformer-based Image Captioning. Some recent approaches have explored the use of the Transformer model (Vaswani et al. 2017) in vision-language tasks. (Huang et al. 2019) introduced a Transformer-like encoder to encode the regions into hidden states, which was paired with an LSTM decoder. Recently, (Zhu et al. 2018; Herdade et al. 2019; Pan et al. 2020; Guo et al. 2020; Li et al. 2019b; Cornia et al. 2020) proposed to replace the conventional RNN with the Transformer architecture, achieving new state-of-the-art performance. Along the same line, (Li et al. 2019a; Liu et al. 2019, 2020) used the Transformer to integrate both visual information and additional semantic concepts given by an external tagger. However, leveraging global information in the Transformer for the image captioning task has never been explicitly explored, which motivates our work in this paper.
PRELIMINARIES
Transformer-based models formulate the calculation of the t-th hidden state of the decoder as
$h_t = \mathrm{Decoder}(\mathrm{Encoder}(I), w_1, \cdots, w_{t-1})$, (1)
where $w_i$ represents the feature embedding of the i-th word. The Transformer contains an encoder, which consists of a stack of self-attention and feed-forward layers, and a decoder, which uses self-attention over the textual words and cross-attention over the vectorial representations from the encoder to generate the caption word by word. We first present a basic form of attention, called "Scaled Dot-Product Attention", which was first proposed as a core component of the Transformer (Vaswani et al. 2017). All intra-modality and cross-modality interactions between word- and image-level features are modeled via this basic form of attention.
The attention module operates on some queries Q, keys K and values V and generates weighted average vectors $\hat{V}$, which can be formulated as:
$\hat{V} = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V$, (2)
where Q is a matrix of $n_q$ query vectors, K and V both contain $n_k$ keys and values, all with the same dimensionality, and d is a scaling factor.
Figure 2: Overview of our Global Enhanced Transformer Networks (GET) for image captioning. A set of regions are first fed into a global enhanced encoder to extract intra- and inter-layer global information and region-level representation, which are then adaptively fused into the decoder to generate captions. The main blocks are the Global Enhanced Encoder (with Global Enhanced Attention), the Global Adaptive Decoder, and the Global Adaptive Controller with cross-attention. Notice that the Residual Connections, Layer Normalizations, and Embedding Layers are omitted.
To extend the capacity of exploring subspaces, Transformer employs an effective module called multi-head attention, which is defined as
$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(H_1, \ldots, H_h)W^O$, (3)
$H_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$, (4)
where $W_i^Q, W_i^K, W_i^V \in \mathbb{R}^{\frac{d}{h} \times d}$ are the independent head projection matrices, $i = 1, 2, \cdots, h$, and $W^O$ denotes the linear transformation.
OUR METHOD
In this section, we devise our Global Enhanced Transformer (GET) for image captioning. As shown in Fig. 2, the overall architecture follows the encoder-decoder paradigm.
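Before detailing the method, here is a minimal PyTorch-style sketch of the scaled dot-product and multi-head attention of Eqs. (2)-(4) above. It is an illustrative sketch only: the class name, default dimensions, and projection layout are our assumptions and not the authors' implementation.

```python
# Minimal sketch of Eqs. (2)-(4): scaled dot-product attention and its
# multi-head wrapper. Shapes: (batch, tokens, d_model).
import math
import torch
import torch.nn as nn

def attention(q, k, v):
    # Eq. (2): softmax(Q K^T / sqrt(d)) V
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    return torch.softmax(scores, dim=-1) @ v

class MultiHeadAttention(nn.Module):
    # Eqs. (3)-(4): per-head projections, concatenation, output projection W^O.
    def __init__(self, d_model=512, h=8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_head = h, d_model // h
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def _split(self, x, b):
        # (b, n, d_model) -> (b, h, n, d_head)
        return x.view(b, -1, self.h, self.d_head).transpose(1, 2)

    def forward(self, q, k, v):
        b = q.size(0)
        heads = attention(self._split(self.w_q(q), b),
                          self._split(self.w_k(k), b),
                          self._split(self.w_v(v), b))
        out = heads.transpose(1, 2).contiguous().view(b, -1, self.h * self.d_head)
        return self.w_o(out)

# Toy usage: 10 region features of dimension 512 attending to themselves.
regions = torch.randn(2, 10, 512)
print(MultiHeadAttention()(regions, regions, regions).shape)  # torch.Size([2, 10, 512])
```

In the encoder described next, such a module is applied to the visual tokens (and, in GET, additionally to the global feature), while the decoder reuses it for self-attention over words and cross-attention over the encoder outputs.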
First, the global-enhanced encoder maps the original inputs into highly abstract local representations and extracts the intra- and inter-layer global representation. Then the decoder adaptively incorporates the multimodal information through the proposed global adaptive controller to generate the caption word by word.
Global-enhanced Encoder
The image is represented as a group of visual features $V = \{v_1, v_2, \cdots, v_N\}$ extracted from a pre-trained object detector (Ren et al. 2015), where N is the number of visual regions. Specifically, the detector is a Faster R-CNN model pre-trained on the Visual Genome dataset (Krishna et al. 2016). The global feature of the image can then be represented as:
$g = \frac{1}{N}\sum_{i=1}^{N} v_i$. (5)
The encoder is a stack of L identical layers, each of which contains a novel structure, i.e., the global-enhanced self-attention (GEA). To adapt the feature dimensionality to the encoder, the visual features V are first fed into a fully-connected layer, yielding the projected features $V' = \{v'_1, v'_2, \cdots, v'_N\}$ and $g'$.
Global-enhanced attention. Earlier methods only feed regions into the encoder to extract the vectorial representations. As shown in (Devlin et al. 2018; Song et al. 2020; Weng et al. 2020), even though the vectorial representation of each region is hierarchically calculated by attending to all regions in the image, these vectorial representations only contain local features that focus on region-level information. To capture a comprehensive global representation, both the region features V and the global feature g are fed into the multi-head self-attention module in each layer. In this way, the local information can be aggregated to form the global representation, through which we can capture the intra-layer global information. Specifically, the output of the l-th (0 ≤ l