diff --git "a/related_53K/test_related_long_2404.17571v1.json" "b/related_53K/test_related_long_2404.17571v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.17571v1.json" @@ -0,0 +1,8440 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17571v1", + "title": "Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos", + "abstract": "Video try-on is a challenging task and has not been well tackled in previous\nworks. The main obstacle lies in preserving the details of the clothing and\nmodeling the coherent motions simultaneously. Faced with those difficulties, we\naddress video try-on by proposing a diffusion-based framework named \"Tunnel\nTry-on.\" The core idea is excavating a \"focus tunnel\" in the input video that\ngives close-up shots around the clothing regions. We zoom in on the region in\nthe tunnel to better preserve the fine details of the clothing. To generate\ncoherent motions, we first leverage the Kalman filter to construct smooth crops\nin the focus tunnel and inject the position embedding of the tunnel into\nattention layers to improve the continuity of the generated videos. In\naddition, we develop an environment encoder to extract the context information\noutside the tunnels as supplementary cues. Equipped with these techniques,\nTunnel Try-on keeps the fine details of the clothing and synthesizes stable and\nsmooth videos. Demonstrating significant advancements, Tunnel Try-on could be\nregarded as the first attempt toward the commercial-level application of\nvirtual try-on in videos.", + "authors": "Zhengze Xu, Mengting Chen, Zhao Wang, Linyu Xing, Zhonghua Zhai, Nong Sang, Jinsong Lan, Shuai Xiao, Changxin Gao", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "2.1. Image Visual Try-on The image virtual try-on methods can generally be divided into two categories: GAN-based methods [7, 10, 13, 20, 26, 28, 36, 39] and diffusion-based methods [1, 4, 11, 23, 29, 52]. The GAN-based methods typically utilize Conditional Generative Adversarial Network (cGAN) [27] and of2 ten have two decoupled modules: a warping module that adjusts the clothing to fit the human body at a semantic level and a GAN-based try-on generator that blends the adjusted clothing with the human body image. To achieve accurate clothing wrapping, existing techniques estimate a dense flow map or apply alignment strategies between the warped clothing and the human body. VITON [13] proposes a coarse-to-fine strategy to warp a desired clothing onto the corresponding region. CP-VTON [39] preserves the clothing identity with the help of a geometric matching module. Using knowledge distillation, PBAFN [10] proposed a parser-free method, which can reduce the requirement for accurate masks. VITON-HD [7] adopts alignmentaware segment normalization to alleviate misalignment between the warped clothing and the human body. However, these approaches face challenges in dealing with images of individuals in complex poses and intricate backgrounds. Moreover, conditional GANs struggle with significant spatial transformations between the clothing and the person\u2019s posture. The exceptional generative capabilities of diffusion have inspired several diffusion-based image try-on methods. TryOnDiffusion [52] employs a dual U-Nets architecture for image try-on, which requires extensive datasets for training. 
Subsequent methods tend to leverage large-scale pre-trained diffusion models as priors in the try-on networks [17, 33, 45]. LADI-VTON [29] treats clothing as pseudowords. DCI-VTON [11] integrates clothing into pre-trained diffusion models by employing warping networks. StableVITON [23] proposes to condition the intermediate feature maps of the Main U-Net using a zero cross-attention block. While these diffusion-based methods have achieved highfidelity single-image inference, when applied to video virtual try-on, the absence of inter-frame relationship consideration leads to significant inter-frame inconsistency, resulting in unacceptable generation results. 2.2. Video Visual Try-on Compared to image-based try-on, video visual try-on offers users a higher degree of freedom in trying on clothing and provides a more realistic try-on experience. However, there have been few studies exploring video visual try-on to date. FW-GAN [8] utilizes an optical flow prediction module [41] to warp past frames in video virtual try-on to generate coherent video. FashionMirror [3] also employs optical flow to warp past frames, but it warps at the feature level instead of the pixel level. MV-TON [51] adopts a memory refinement module to remember the previously generated frames. ClothFormer [21] proposes a dual-stream transformer architecture to extract and fuse the clothing and the person\u2019s features. It also suggests using a tracking strategy based on optical flow and ridge regression to obtain a temporally consistent warp sequence. Due to the difficulties faced by warp modules in handling complex textures and significant motion, previous video try-on methods are limited to handling simple cases, such as minor movements, simple backgrounds, and clothing with simple textures. Additionally, previous video try-on methods have only focused on tightfitting tops. These limitations make them inadequate for real-world scenarios involving diverse clothing types, complex backgrounds, free-form movements, and variations in the size, proportion, and position of individuals. Therefore, we propose to remove explicit warp modules and utilize diffusion models for video try-on, while employing the focus tunnel strategy to adapt to the varied relationships between individuals and backgrounds in real-world applications. 2.3. Image Animation Image animation aims to generate a video sequence from a static image. Recently, some diffusion-based models have shown unprecedented success [6, 18, 19, 22, 30, 40, 44, 46, 48, 50]. Among them, Magic Animate [44] and Animate Anyone [18] have demonstrated the best generation results. Both models utilize an additional U-Net to extract appearance information from images and an encoder to encode pose sequences. Combining the current best animation frameworks with advanced image try-on methods can also achieve video try-on. However, a drawback of this pipeline is the lack of guidance from human video information, resulting in the network only generating static backgrounds, making it difficult for the characters to blend into the real environment and achieve satisfactory try-on effects. Additionally, relying solely on pose-driven actions can lead to strange generation results when conducting visual try-on with significant person\u2019s movements.", + "pre_questions": [], + "main_content": "Introduction Video virtual try-on aims to dress the given clothing on the target person in video sequences. It requires to preserve both the appearance of the clothing and the motions of the person. 
It provides consumers with an interactive experience, enabling them to explore clothing options without the necessity for physical try-on, which has garnered widespread attention from both the fashion industry and consumers alike. Although there are not many studies on video tryon, image-based try-on has already been extensively researched. Numerous classical image virtual try-on methods rely on the Generative-Adversarial-Networks(GANs) [7, 9, 10, 20, 39]. These methods typically comprise two primary components: a warping module that warps clothing to fit the human body in semantic level, and a try-on generator that blends the warped clothing with the human body image. Recently, with the development of diffusion models [33], the quality of image and video generation has been significantly improved. Some diffusion-based methods [23, 52] for image virtual try-on have been proposed, which do not explicitly incorporate a warp module but instead integrate the warp and blend process in a single unified process. Leveraging pre-trained text-to-image diffusion models, these diffusion-based models achieve fidelity surpassing that of GAN-based models. It is evident that video try-on provides a more comprehensive presentation of the try-on clothing under different conditions compared to image try-on. A direct transfer approach is to apply image try-on methods to process videos frame by frame. However, this leads to significant interframe inconsistency, resulting in unacceptable generation outcomes. Several approaches have explored specialized designs for video virtual try-on [8, 21, 25, 51]. These methods typically utilize optical flow prediction modules to warp frames generated by the try-on generator, aiming to enhance temporal consistency. ClothFormer [21] additionally proposes temporal smoothing operations for the input to the warping module. While these explorations of video tryon make steady advancements, most of them only tackle simple scenarios. For example, in VVT [8] dataset, samples mainly include simple textures, tight-fitting T-shirts, plain backgrounds, fixed camera angles, and repetitive human movements. This notably lags behind the standards of image virtual try-on and falls short of meeting practical application needs. We analyze that, different from the image-based settings, the main challenge in video try-on is preserving the fine detail of the clothing and generating coherent motions at the same time. In this paper, to address the aforementioned challenges in complex natural scenes, we propose a novel framework termed Tunnel Try-on. We start with a strong baseline of image-based virtual try-on. It leverages an inpainting UNet (noted as Main U-Net) as the main branch and utilizes a reference U-Net (noted as Ref U-Net) to extract and inject the fine details of the given clothing. By inserting Temporal-Attention after each stage of the Main U-Net, we extend this model to conduct virtual try-on in videos. However, this basic solution is insufficient to deal with the challenging cases in real-world videos. We observe that the human often occupies a small area in videos and the area or location could change violently along with the camera movements. Thus, we propose to excavate a \u201ctunnel\u201d in the given video to provide a stable close-up shot of the clothing region. Specifically, we conduct a region crop in each frame and zoom in on the cropped region to ensure that the individuals are appropriately centered. 
This strategy maximizes the model\u2019s capabilities for preserving the fine details of the reference clothing. At the same time, we leverage Kalman filtering techniques [43] to recalculate the coordinates of the cropping boxes and inject the position embedding of the focus tunnel into the Temporal-Attention. In this way, we could keep the smoothness and continuity of the cropped video region and assist in generating more consistent motions. Additionally, although the regions inside the tunnel deserve more attention, the outside region could provide the global context for the background around the clothing. Thus, we develop an environment encoder. It extracts global high-level features outside the tunnels and incorporates them into the Main UNet to enhance the background generation. Extensive experiments demonstrate that equipped with the aforementioned techniques, our proposed Tunnel Tryon significantly outperforms other video virtual try-on methods. In summary, our contributions can be summarized in the following three aspects: \u2022 We proposed Tunnel Try-on, the first diffusion-based video virtual try-on model that demonstrates state-of-theart performance in complex scenarios. \u2022 We design a novel technique of constructing the focus art performance in complex scenarios. \u2022 We design a novel technique of constructing the focus tunnel to emphasize the clothing region and generate coherent motion in videos. \u2022 We further develop several enhancing strategies like inherent motion in videos. \u2022 We further develop several enhancing strategies like incorporating the Kalman filter to smooth the focus tunnel, leveraging the tunnel position embedding and environment context in the attentions to improve the generation quality. In Section 3.1, we introduce the foundational knowledge of latent diffusion models required for subsequent discussions. Section 3.2 provides a comprehensive exposition of the network architecture of our Tunnel Try-on. In Section 3.3, we present details of the focus tunnel extraction strategy. In Section 3.4, we introduce the enhancing strategies for the focus tunnel, including tunnel smoothing and tunnel embedding. In Section 3.5, we elaborate on the environment encoder which aims at extracting the global context as the complementary. At last, we summarize our training and validation pipeline in Section 3.6. 3.1. Preliminaries Diffusion models [16] have demonstrated promising capabilities in both image and video generation. Built on the Latent Diffusion Model (LDM), Stable Diffusion [33] conducts denoising in the latent space of an auto-encoder. Trained on the large-scale LAION dataset [35], Stable Diffusion demonstrates excellent generation performance. Our network is built upon Stable Diffusion. 3 Add Concatenate C Ref-Attention Env-Attention Temporal-Attention Self-Attention Ref U-Net Self-Attention Person Denoising Feature Tunnel Info Embedding Projector Cross-Attention C C Self-Attention Env Embedding Clothing Embedding Clothing Ref Feature Sinusoidal Embedding Temporal-Attention Env Encoder Tunnel Extraction Zoom-In Tunnel Embedding Main U-Net Pose Encoder CLIP Encoder Tunnel Blend Env-Attention Ref-Attention Tunnel Embedding Clothing Input Video Figure 2. The overview of Tunnel Try-on. Given an input video and a clothing image, we first extract a focus tunnel to zoom in on the region around the garments to better preserve the details. 
The zoomed region is represented by a sequence of tensors consisting of the background latent, latent noise, and the garment mask, which are concatenated and fed into the Main U-Net. At the same time, we use a Ref U-Net and a CLIP Encoder to extract the representations of the clothing image. These clothing representations are then added to the Main U-Net using ref-attention. Moreover, human pose information is added into the latent feature to assist in generation. The tunnel embedding is also integrated into temporal attention to generating more consistent motions, and an environment encoder is developed to extract the global context as additional guidance. Given an input image x0, the model first employs a latent encoder [24] to project it into the latent space: z0 = E(x0). Throughout the training, Stable Diffusion transforms the latent representation into Gaussian noise by applying a variance-preserving Markov process [37] to z0, which can be formulated as: zt = \u221a\u00af \u03b1tz0 + \u221a 1 \u2212\u00af \u03b1t\u03f5, \u03f5 \u223cU([0, 1]) (1) where \u00af \u03b1t is the cumulative product of the noise coefficient \u03b1t at each step. Subsequently, the denoising process learns the prediction of noise \u03f5\u03b8(zt, c, t), which can be summarized as: LLDM = Ez,c,\u03f5,t(|\u03f5 \u2212\u03f5\u03b8(zt, c, t)|2 2). (2) Here, t represents the diffusion timestep, c denotes the conditioning text prompts from the CLIP [32], and \u03f5\u03b8 denotes the noise prediction neural networks like the UNet [34]. In inference, Stable Diffusion reconstructs an image from Gaussian noise step by step, predicting the noise added at each stage. The denoised results are then fed into a latent decoder to regenerate images from the latent representations, denoted as \u02c6 x0 = D(\u02c6 z0). 3.2. Overall Architecture This section provides a comprehensive illustration of the pipeline presented in Figure 2. We start with introducing the strong baseline for image try-on. Then, we extend it to videos by adding Temporal-Attention. Afterwards, we briefly describe our novel designs which will be elaborated on in the next sections. Image try-on baseline. The baseline (modules in gray) of Tunnel Try-on consists of two U-Nets: the Main UNet and the Ref U-Net. The Main U-Net is initialized with an inpainting model. The Ref U-Net [47] has been proven effective [4, 18, 44] in preserving detailed information of reference images. Therefore, Tunnel Try-on utilizes the Ref U-Net to encode the fine-grained features of reference clothing. Additionally, Tunnel Try-on employs a CLIP image encoder to capture high-level semantic information of target clothing images, such as overall color. Specifically, the Main U-Net takes a 9-channel tensor with the shape of B \u00d7 9 \u00d7 H \u00d7 W as input, where B, H, and W denote the batch size, height, and width. The 9 channels consist of the clothing-masked video frame (4 channels), the latent noise (4 channels), and the cloth-agnostic mask (1 channel). To 4 enhance guidance on the movements of the generated video and further improve its fidelity, we incorporate pose maps as an additional control adjustment. These pose maps, encoded by a pose encoder comprising several convolutions, are added to the concatenated feature in the latent space. Adaption for videos. To adapt the image try-on model for processing videos, we insert Temporal-Attention after each stage of the Main U-Net. 
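As a concrete illustration of the input layout just described, the following minimal sketch (PyTorch assumed; the tensor and function names are illustrative and not taken from the authors' code) noises the latent as in Eq. (1) using standard Gaussian noise, assembles the 4+4+1 = 9-channel input, and adds the encoded pose feature in latent space, assuming the pose encoder projects to the same channel width as the concatenated feature.

```python
import torch

def add_noise(z0, alphas_cumprod, t):
    """Forward process of Eq. (1): z_t = sqrt(a_bar_t) * z_0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1, 1)   # (B, 1, 1, 1, 1)
    eps = torch.randn_like(z0)                       # standard Gaussian noise
    return a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps

def build_main_unet_input(masked_latent, noisy_latent, cloth_mask, pose_feature):
    """Assemble the 9-channel Main U-Net input for one clip.

    masked_latent: (B, 4, f, H, W) VAE latent of the clothing-masked frames
    noisy_latent:  (B, 4, f, H, W) noised latent from add_noise
    cloth_mask:    (B, 1, f, H, W) cloth-agnostic mask at latent resolution
    pose_feature:  (B, 9, f, H, W) output of the convolutional pose encoder
                   (assumed here to match the concatenated channel width)
    """
    x = torch.cat([masked_latent, noisy_latent, cloth_mask], dim=1)  # (B, 9, f, H, W)
    return x + pose_feature   # pose maps are added to the concatenated latent feature
```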
Specifically, TemporalAttention conducts self-attention on features of the same spatial position across different frames to ensure smooth transitions between frames. The feature maps of the Main U-Net are extended with the temporal dimension of f, denoting the frames. Thus, the input shape becomes B \u00d7 9 \u00d7 f \u00d7 H \u00d7 W. Therefore, as shown in RefAttention, the feature maps from the Ref U-Net are repeated f times and further concatenated along the spatial dimension. Subsequently, after flattening along the spatial dimension, the concatenated features are input into the selfattention module, and the output features retain only the original denoising feature map part. Novel designs of Tunnel Try-on. We excavate a Focus Tunnel in the input video and zoom in on the region to emphasize the clothing. To enhance the video consistency, we leverage the Kalman filter to smooth the tunnel and inject the tunnel embedding into the temporal attention layers. Simultaneously, we design an environment encoder (Env Encoder in Figure 2) to capture the global context information in each video frame as complementary cues. In this way, the Main U-Net primarily utilizes three types of attention modules to integrate control conditions at various levels, enhancing the spatio-temporal consistency of the generated video. These modules are depicted in the bottom colored box in Figure 2. Each of the novel modules will be introduced in detail in the following sections. 3.3. Focus Tunnel Extraction In typical image virtual try-on datasets, the target person is typically centered and occupies a large portion of the image. However, in video virtual try-on, due to the movement of the person and camera panning, the person in video frames may appear at the edges or occupy a smaller portion. This can lead to a decrease in the quality of video generation results and reduce the model\u2019s ability to maintain clothing identity. To enhance the model\u2019s ability to preserve details and better utilize the training weights learned from image try-on data, we propose the \u201dfocus tunnel\u201d strategy, as shown Figure 2. Specifically, depending on the type of try-on clothing, we utilize the pose map to identify the minimum bounding box for the upper or lower body. We then expand the coordinates of the obtained bounding box according to predefined rules to ensure coverage of all clothing. Since the expanded bounding box sequence resembles an information tunnel focused on the person, we refer to it as the \u201dfocus tunnel\u201d of the input video. Next, we zoom in on the tunnel. In other words, the video frames within the focus tunnel are cropped, padded, and resized to the input resolution. Then they are combined to form a new sequence input for the Main UNet. The generated video output from the Main U-Net is then blended with the original video using Gaussian blur to achieve natural integration. 3.4. Focus Tunnel Enhancement Since the process of focus tunnel extraction is computed only within individual frames without considering interframe relationships, slight jitters or jumps of bounding box sequences may occur when applied to videos, due to the movement of people and the camera. These jitters and jumps can result in focus tunnels that appear unnatural compared to videos captured naturally, increasing the difficulty of temporal attention convergence and leading to decreased temporal consistency in the generated videos. 
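The per-frame extraction and zoom-in of Section 3.3 can be sketched as below (Python with OpenCV assumed; the 20% margin and all names are illustrative stand-ins for the paper's "predefined rules", not the authors' implementation). The raw per-frame boxes returned by this routine are exactly the jittery sequence that the tunnel smoothing introduced next has to correct.

```python
import numpy as np
import cv2  # assumed available for cropping and resizing

def extract_focus_tunnel(frames, keypoints, margin=0.2, size=512):
    """Per-frame focus tunnel extraction (illustrative sketch of Sec. 3.3).

    frames:    list of H x W x 3 uint8 images
    keypoints: list of (K, 2) arrays of (x, y) joints relevant to the garment
               (e.g. upper-body joints for tops)
    Returns the raw per-frame boxes (the "focus tunnel") and the zoomed crops.
    """
    boxes, crops = [], []
    for img, kps in zip(frames, keypoints):
        h, w = img.shape[:2]
        x0, y0 = kps.min(axis=0)
        x1, y1 = kps.max(axis=0)
        # expand the minimum bounding box so the whole garment is covered
        dx, dy = margin * (x1 - x0), margin * (y1 - y0)
        x0, y0 = max(int(x0 - dx), 0), max(int(y0 - dy), 0)
        x1, y1 = min(int(x1 + dx), w), min(int(y1 + dy), h)
        boxes.append((x0, y0, x1, y1))

        crop = img[y0:y1, x0:x1]
        # pad to a square, then resize to the model input resolution
        ch, cw = crop.shape[:2]
        side = max(ch, cw)
        pad = np.zeros((side, side, 3), dtype=crop.dtype)
        pad[(side - ch) // 2:(side - ch) // 2 + ch,
            (side - cw) // 2:(side - cw) // 2 + cw] = crop
        crops.append(cv2.resize(pad, (size, size)))
    return np.array(boxes, dtype=np.float32), crops
```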
Dealing with this challenge, we propose tunnel smoothing and injecting tunnel embedding into the attention layers. Tunnel smoothing. To smooth the focus tunnel and achieve a variation effect similar to natural camera movements, we propose the focus tunnel smoothing strategy. Specifically, we first use Kalman filtering to correct the focus tunnel, which can be represented as Algorithm 1. Algorithm 1: Kalman Filter. Input: Raw tunnel coordinate x, tunnel length f. Result: Smoothed tunnel coordinate \u02c6 x. 1 Initialize P0 = x1, \u02c6 x0 = x1, Q = 0.001, R = 0.0015, t = 1. 2 repeat 3 Project the state ahead \u02c6 x\u2212 t = \u02c6 xt\u22121. 4 Project the error covariance ahead P \u2212 t = Pt\u22121 + Q. 5 Compute the Kalman Gain Kt = P \u2212 t (P \u2212 t + R)\u22121 6 Update the estimate \u02c6 xt = \u02c6 x\u2212 t + Kt(xt \u2212\u02c6 x\u2212 t ) 7 Update the error covariance Pt = P \u2212 t (1 \u2212Kt)\u22121 8 t \u2190t + 1. 9 until t > f; Output: \u02c6 x \u02c6 xt represents the smoothed coordinate of the focus tunnel at time t, calculated using the prediction equation of the Kalman filter. xt represents the observed position of the tunnel at time t, i.e., the coordinate of the tunnel before 5 smoothing. After the Kalman filter, we further filter out the high-frequency jitter caused by exceptional cases using a low-pass filter. Tunnel embedding. The input form of the focus tunnel has increased the magnitude of the camera movement. To mitigate the challenge faced by the temporal-attention module in smoothing out such significant camera movements, we introduce the Tunnel Embedding. Tunnel Embedding accepts a three-tuple input, comprising the original image size, tunnel center coordinates, and tunnel size. Inspired by the design of resolution embedding in SDXL [31], Tunnel Embedding first encodes the three-tuple into 1D absolute position encoding, and then obtains the corresponding embedding through linear mapping and activation functions. Subsequently, the focus tunnel embedding is added to the temporal attention as position encoding. With Tunnel embedding, temporal attention integrates details about the size and position of the focus tunnel, aiding in preventing misalignment with focus tunnels affected by excessively large camera movements. This enhancement contributes to improving the temporal consistency of video generation within the focus tunnel. 3.5. Environment Feature Encoding After applying the focus tunnel strategy, the context tends to be lost, posing a challenge in generating a reasonable background within the masked area. To address this, we propose the Environment Encoder. It consists of a frozen CLIP image encoder and a learnable linear mapping layer. Initially, the masked original image is encoded by a frozen CLIP image encoder to capture the overall information about the environment. Subsequently, the output CLIP features are finetuned through a learnable linear projection layer. As shown in the Env-Attention of Figure 2, the output features of Environment Encoder, serving as keys and values, are injected into the denoising process through cross-attention. 3.6. Train and Test Pipeline Training process. The training phase can be divided into two stages. In the first stage, the model excludes temporal attention, the Environment Encoder, and the Tunnel Embedding. 
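For reference, the per-coordinate smoothing in Algorithm 1 above reduces to a few lines (a NumPy sketch; only the Q and R constants come from the paper, and the covariance update is written in the standard scalar Kalman form). Each of the four box coordinates is smoothed independently, before the low-pass filtering step mentioned above.

```python
import numpy as np

def kalman_smooth(coords, q=0.001, r=0.0015):
    """Scalar Kalman smoothing of one tunnel coordinate sequence (cf. Algorithm 1).

    coords: (f,) array with one raw box coordinate per frame.
    q, r:   process / measurement noise, set to the values quoted in the paper.
    """
    smoothed = np.empty_like(coords, dtype=np.float64)
    x_hat, p = coords[0], coords[0]          # x_hat_0 = x_1, P_0 = x_1 as in Algorithm 1
    for t, x in enumerate(coords):
        x_pred, p_pred = x_hat, p + q        # project the state and error covariance ahead
        k = p_pred / (p_pred + r)            # Kalman gain
        x_hat = x_pred + k * (x - x_pred)    # update the estimate with the raw observation
        p = (1.0 - k) * p_pred               # standard scalar covariance update
        smoothed[t] = x_hat
    return smoothed
```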
Additionally, we freeze the weights of the VAE encoder and decoder (omitted in Fig 2 for simplicity), as well as the CLIP image encoder, and only update the parameters of the Main U-Net, Ref U-Net, and pose guider. In this stage, the model is trained on paired image try-on data. The objective of this stage is to learn the extraction and preservation of clothing features using larger, higherquality, and more diverse paired image data compared to the video data, aiming to achieve high-fidelity image-level try-on generation results as a solid foundation. In the second stage, all strategies and modules are incorporated, and the model is trained on video try-on datasets. Only the parameters of the Temporal-Attention, Environment Encoder are updated in this stage. The goal of this stage is to leverage the image-level try-on capability learned in the first stage while enabling the model to learn temporally related information, resulting in high spatio-temporal consistency in try-on videos. Test process. During the testing phase, the input video undergoes Tunnel Extraction to obtain the Focus Tunnel. The input video, along with the conditional videos, is then zoomed in on the focus tunnel and fed into the Main U-Net. Guided by the outputs of the Ref U-Net, CLIP Encoder, Environment Encoder, and Tunnel Embedding, the Main UNet progressively recovers the try-on video from the noise. Finally, the generated try-on video undergoes Tunnel-Blend post-processing to obtain the desired complete try-on video. 4. Experiments 4.1. Datasets We evaluate our Tunnel Try-on on two video try-on datasets: the VVT [8] dataset and our collected dataset. The VVT dataset is a standard video virtual try-on dataset, comprising 791 paired person videos and clothing images, with a resolution of 192\u00d7256. The models in the videos have similar and simple poses and movements on a pure white background, while the clothes are all fitted tops. Due to these limitations, the VVT dataset fails to reflect the realworld application scenarios of visual video try-on. Therefore, we collected a dataset from real e-commerce application scenarios, featuring complex backgrounds, diverse movements and body poses, and various types of clothing. The dataset consists of a total of 5,350 video-image pairs. We divided it into 4,280 training videos and 1,070 testing videos, each containing 776,536 and 192,923 frames, respectively. 4.2. Implement Details Model configurations. In our implementation, the Main U-Net is initialized with the inpainting model weight of Stable Diffusion-1.5 [33]. The Ref U-Net is initialized with a standard text-to-image SD-1.5. The Temporal-Attention is initialized from the motion module of AnimateDiff [12]. Training and testing protocols. The training phase is structured in two stages. In both stages, we resize and pad the inputs to a uniform resolution of 512x512 pixels, and we adopt an initial learning rate of 1e-5. The models are trained on 8x A100 GPUs. In the first stage, we utilized image try-on pairs extracted from video data, and merged them with existing image try-on datasets VITON-HD [7] 6 (c) PBAFN (e) StableVITON (b) FW-GAN (d) ClothFormer (f) Ours (a) Input Figure 3. Qualitative comparison with existing alternatives on the VVT dataset. The clothing and target person is shown in (a). The results of (b) FW-GAN, (c) PBAFN, (d) ClothFormer, (e) StableVITON, and (f) Tunnel Try-on are represented respectively. Figure 4. Qualitative results of Tunnel Try-on on our dataset. 
We present the try-on results of pants and skirts, as well as cross-category try-on results. for training. Then, we sample a clip consisting of 24 frames in the videos as the input for training in stage 2. In the testing phase, we use the temporal aggregation technique [38] to combine different video clips, producing a longer video output. 4.3. Comparisons with Existing Alternatives We conducted a comprehensive comparison with other visual try-on methods on the VVT dataset, including qualitative, quantitative comparisons and user studies. We collected several visual try-on methods, covering both GANbased methods like FW-GAN [8], PBAFN [10] and ClothFormer [21], and diffusion-based methods like Anydoor [5] and StableVITON [23]. To ensure a fair comparison, we utilized the VITON-HD [7] dataset for the first-stage training and conducted second-stage training on the VVT [8] dataset without using our own dataset. Figure 3 displays the qualitative results of various methods on the VVT dataset. From Figure 3, it is evident that GAN-based methods like FW-GAN and PBAFN, which utilize warping modules, struggle to adapt effectively to variations in the sizes of individuals in the video. Satisfactory results are achieved only in close-up shots, with the warping of clothing producing acceptable outcomes. However, when the model moves farther away and becomes smaller, the warping module produces inaccurately wrapped clothing, resulting in unsatisfactory single-frame try-on results. ClothFormer can handle situations where the person\u2019s pro7 Table 1. Comparison on the VVT dataset: \u2191denotes higher is better, while \u2193indicates lower is better. Method SSIM\u2191LPIPS\u2193V FIDI3D \u2193V FIDResNeXt \u2193 CP-VTON [39] 0.459 0.535 6.361 12.10 FW-GAN [8] 0.675 0.283 8.019 12.15 PBAFN [10] 0.870 0.157 4.516 8.690 ClothFormer [21] 0.921 0.081 3.967 5.048 AnyDoor [5] 0.800 0.127 4.535 5.990 StableVITON[23] 0.876 0.076 4.021 5.076 Tunnel Try-on 0.913 0.054 3.345 4.614 Table 2. User study for the preference rate on the VVT test dataset. * indicates testing was conducted only on examples shown in ClothFormer demonstrations. Method Quality% Fidelity% Smoothness% FW-GAN [8] 0 0 5.62 PBAFN [10] 6.77 8.77 6.31 AnyDoor [5] 7.85 7.08 0 StableVITON[23] 15.46 16.54 0 Tunnel Try-on 69.92 67.62 88.07 ClothFormer* [21] 30.8 26.0 39.6 Tunnel Try-on* 69.2 74.0 60.4 portion is relatively small, but its generated results are blurry, with significant color deviation. We also extend some diffusion-based image try-on methods (e.g., AnyDoor and StableVITON) to videos by perframe generation. We observe that they can generate relatively accurate single-frame results. However, due to the lack of consideration for temporal coherence, there are discrepancies between consecutive frames. As shown in Figure 3(e), the letters on the clothing change in different frames. Additionally, there are lots of jitters between adjacent frames in these methods, which can be observed more intuitively in videos. Compared with those existing solutions, our Tunnel Try-on seamlessly integrates diffusion-based models and video generation models, enabling the generation of accurate single-frame try-on videos with high inter-frame consistency. As depicted in Figure 3(f), the letters on the chest of the clothing remain consistent and correct as the person moves closer. In Table 1, we conduct quantitative experiments with both image-based and video-based metrics. 
For image-based evaluation, we utilize structural similarity (SSIM) [42] and learned perceptual image patch similarity (LPIPS) [49]. These two metrics are used to evaluate the quality of single-image generation under the paired setting. The higher the SSIM and the lower the LPIPS, the greater the similarity between the generated image and the original image. For video-based evaluation, we employ the Video Frechet Inception Distance (VFID) [8] to evaluate visual quality and temporal consistency. The FID [15] measures the diversity of generated samples. Furthermore, VFID employs 3D convolution to extract features in both temporal and spatial dimensions for better measures. Two CNN backbone models, namely I3D [2] and 3D-ResNeXt101 [14], are adopted as feature extractors for VFID. Table 1 demonstrates that on the VVT dataset, our Tunnel Try-on outperforms others in terms of SSIM, LPIPS, and VFID metrics, further confirming the superiority of our model in image visual quality (similarity and diversity) and temporal continuity compared to other methods. It\u2019s worth noting that we have a substantial advantage in LPIPS compared to other methods. Considering that LPIPS is more in line with human visual perception compared to SSIM, this highlights the superior visual quality of our approach. Considering that the quantitative metrics could not perfectly align with the human preference for generation tasks, we conducted a user study to provide more comprehensive comparisons. We organized a group of 10 annotators to make comparisons on the 130 samples of VVT test set. We let different methods generate videos for the same input, and let the annotators pick the best one. The evaluation criteria included three aspects: quality, fidelity, and smoothness. Specifically, \u201dQuality\u201d denotes the image quality, encompassing aspects like artifacts, noise levels, and distortion. \u201dFidelity\u201d measures the ability to preserve details compared to the reference clothing image. \u201dSmoothness\u201d evaluates the temporal consistency of the generated videos. Note that ClothFormer is not open-sourced but it provides 25 generation results. We conduct an individual comparison in the bottom block of Table 1 for the 25 provided results between ClothFormer and our method. Results show that our method demonstrates significant superiority over the others. 4.4. Qualitative Analysis Due to the limited diversity and the simplicity of samples in the VVT dataset, it fails to represent the scenarios encountered in actual video try-on applications. Therefore, we provide additional qualitative results on our own dataset to highlight the robust try-on capabilities and practicality of Tunnel Try-on. Figure 1 illustrates various results generated by Tunnel Try-on, including scenarios such as changes in the size of individuals due to person-to-camera distance variation, the parallel motion relative to the camera, and alterations in background and perspective induced by camera angle changes. By integrating the focus tunnel strategy and focus tunnel enhancement, our method demonstrates the ability to effectively adapt to different types of human movements and camera variations, resulting in high-detail preservation and temporal consistency in the generated try8 (c) w/ tunnel (b) w/o tunnel (a) input Figure 5. Qualitative ablations for the focus tunnel. This zoomin strategy brings notable improvements for preserving the fine details of the clothing. on videos. 
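As a point of reference for the video-level metric above, VFID applies the standard Fréchet distance to clip features extracted by a 3D backbone (I3D or 3D-ResNeXt101). A minimal NumPy/SciPy sketch of the distance itself is given below; the feature extraction is assumed to happen elsewhere and is not shown.

```python
import numpy as np
from scipy import linalg  # for the matrix square root

def frechet_distance(feats_real, feats_gen):
    """Frechet distance between two sets of (N, D) video-level features.

    For VFID the features come from a 3D backbone applied to whole clips;
    the distance is computed between Gaussians fitted to each feature set.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):          # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```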
Moreover, unlike previous video try-on methods limited to fitting tight-fitting tops, our model can perform try-on tasks for different types of tops and bottoms based on the user\u2019s choices. Figure 4 presents some try-on examples of different types of bottoms. 4.5. Ablation Study We conducted ablation experiments for Tunnel Try-on to explore the impact of focus tunnel extraction (Section 3.3), focus tunnel enhancement (Section 3.4), and environment encoding (Section 3.5). We conduct both qualitative and quantitative ablations on our collected dataset to assess their performance. In Table 3, we provide quantitative metrics related to the ablation experiments. The Focus Tunnel strategy significantly improves the model\u2019s SSIM and LPIPS metrics, but it leads to a certain degree of decrease in the VFID metric. This indicates that the Focus Tunnel can effectively enhance the quality of frame generation but may introduce more flickering, reducing the temporal consistency of the video. However, with the tunnel enhancement, the network\u2019s VFID shows a significant improvement, while the SSIM also increases. Lastly, although the environment encoder does not exert a significant impact on quantitative metrics, we observed that it contributes to the generation of the background environments around the clothing, as demonstrated in Figure 7. We conduct a detailed analysis of each component in the following paragraphs. As shown in Figure 5, the impact of the Focus Tunnel Strategy is evident. Without the focus tunnel, there exists obvious distortion in the details of the logos. However, after zooming in on the tunnel regions with a close-up shot of the clothing. The detailed information of the garments could be significantly better preserved. In Figure 6, we investigate the effectiveness of the tunnel enhancement. As depicted in the red box area, when the tunnel enhancement is not employed (first row), the clothing textures exhibit variations and flickering over time, leading (b) w/ Enhancement (a) w/o Enhancement Figure 6. Qualitative ablations for the tunnel enhancement. It assists in generating more stable and continuous textures. (a) w/o Env (b) w/ Env (c) w/o Env (d) w/ Env Figure 7. Qualitative ablations for the environment encoder. The global context contributes to the recovery of the background around the clothing regions. Table 3. Quantities ablations for the core components. \u201cTunnel\u201d, \u201cEnhance\u201d, and \u201cEnv\u201d denote the focus tunnel, the tunnel enhancement, and the environment encoder respectively. Tunnel Enhance Env SSIM\u2191LPIPS\u2193V FIDI3D\u2193V FIDResNeXt \u2193 0.801 0.061 6.103 8.751 \u2713 0.877 0.052 6.759 9.034 \u2713 \u2713 0.914 0.049 5.997 8.356 \u2713 \u2713 \u2713 0.909 0.042 5.901 8.348 to decreased temporal consistency in the generated video. Figure 7 illustrates the impact of the environment encoder on the generation results. Since the environment encoder can extract overall context information outside the focus tunnel, it can enhance the quality of the background around the garment, making it more consistent with highlevel semantic information about the environment. As shown in Figure 7, when the environment encoder is added, the generation errors in the textures of the walls and zebra crossings near the human are corrected. 5. Conclusion We propose the first diffusion-based video visual try-on model, Tunnel Try-on. It outperforms all existing alternatives in both qualitative and quantitative comparisons. 
9 Leveraging the focus tunnel, tunnel enhancement, and environment encoding, our model can adapt to diverse camera movements and human motions in videos. Trained on real datasets, our model could handle virtual try-on in videos with complex backgrounds and diverse clothing types, producing high-fidelity try-on results. Serving as a practical tool for the fashion industry, Tunnel Try-on provides new insights for future research in virtual try-on applications.", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2404.06760v1", + "title": "DiffusionDialog: A Diffusion Model for Diverse Dialog Generation with Latent Space", + "abstract": "In real-life conversations, the content is diverse, and there exists the\none-to-many problem that requires diverse generation. Previous studies\nattempted to introduce discrete or Gaussian-based continuous latent variables\nto address the one-to-many problem, but the diversity is limited. Recently,\ndiffusion models have made breakthroughs in computer vision, and some attempts\nhave been made in natural language processing. In this paper, we propose\nDiffusionDialog, a novel approach to enhance the diversity of dialogue\ngeneration with the help of diffusion model. In our approach, we introduce\ncontinuous latent variables into the diffusion model. The problem of using\nlatent variables in the dialog task is how to build both an effective prior of\nthe latent space and an inferring process to obtain the proper latent given the\ncontext. By combining the encoder and latent-based diffusion model, we encode\nthe response's latent representation in a continuous space as the prior,\ninstead of fixed Gaussian distribution or simply discrete ones. We then infer\nthe latent by denoising step by step with the diffusion model. The experimental\nresults show that our model greatly enhances the diversity of dialog responses\nwhile maintaining coherence. Furthermore, in further analysis, we find that our\ndiffusion model achieves high inference efficiency, which is the main challenge\nof applying diffusion models in natural language processing.", + "authors": "Jianxiang Xiang, Zhenhua Liu, Haodong Liu, Yin Bai, Jia Cheng, Wenliang Chen", + "published": "2024-04-10", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "5.1. One-to-many Modeling The existence of multiple suitable responses for a given context is referred to as the one-to-many problem. Some works introduce latent variable to model the relationship, CVAE(Zhao et al., 2017) utilizes Gaussian distribution to capture variations in responses at the discourse level, since a simple distribution over the latent variables has a lack of granularity in modeling the semantic information of the responses, DialogWAE(Gu et al., 2018) develop a Gaussian mixture prior network to enrich the latent space, instead of the single Gaussian prior of VAE. iVAEMI(Fang et al., 2019) address the challenge with implicit learning. DialogVED(Chen et al., 2022b) incorporates continuous latent variables into an enhanced encoderdecoder pre-training framework to increase the relevance and diversity of responses. PLATO(Bao et al., 2019) introduces discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Both of PLATO and DialogVED are pretrained with large dialog corpus, providing a strong baseline for one-to-many modeling. 5.2. 
Diffusion Models for Sequence Learning Since Diffusion model(Dhariwal and Nichol, 2021; Song et al., 2020b) has achieved breakthroughs in the field of image processing. There have been many works attempting to apply diffusion models to the field of natural language processing. Considering the discrete nature of texts, D3PM(Austin et al., 2021) introduce Markov transition matrices to diffuse the source data instead of Gaussian noise, Analog Bits(Chen et al., 2022a) represents discrete data as binary bits, and then training a continuous diffusion model to model these bits as real numbers. Diffusion-LM(Li et al., 2022) develop a non-autoregressive language model based on continuous diffusions with an embedding function and rounding process, iteratively denoises a sequence of Gaussian vectors into words. DiffuSeq(Gong et al., 2022) propose a diffusion model designed for sequence-to-sequence text generation tasks utilizing encoder-only Transformers. And SeqDiffuSeq(Yuan et al., 2022) approach sequence-tosequence text generation with Encoder-Decoder Transformers. LD4LG(Lovelace et al., 2022) learn the continuous diffusion models in the latent space of a pre-trained encoder-decoder model.", + "pre_questions": [], + "main_content": "Introduction Open-domain dialogue generation is a crucial component in dialogue systems. With the development of pre-trained language models, current models are capable of generating fluent and relevant dialogues(Radford et al., 2019; Raffel et al., 2020). However, there is still a lack of exploration in generating diverse responses, because there may be multiple appropriate responses when presented with a single context, and that\u2019s known as the oneto-many mapping problem, shown as figure 1. To model the one-to-many relationship between dialog history and response, Bao et al. (2019) introduce discrete latent variables, but the diversity of response is constrained by the categories of discrete latent variables, making it challenging to achieve fine-grained diversity generation. Sun et al. (2021) and Chen et al. (2022b) introduce continuous latent variable which can relief the problem of the discrete latent variables, but the prior of the model is limited by the inflexible prior distribution, which cannot model the distribution of the response well. As an alternative solution of one-to-many problem, we propose the integration of a diffusion model (Ho et al., 2020), which have shown its\u2019 superiority of generating high-quality and diverse results in the fields of image and audio genera\u2217 *Corresponding author He is a good guy. I don't really konw about him. Awful! I like his hair. Who\uff1f Maybe a smart boy He has a great shape of body What do you think of Tom? Sorry, but i don't konw We always have a good time together Figure 1: one to many problem in dialog generation. tion (Dhariwal and Nichol, 2021; Ramesh et al., 2022; Rombach et al., 2022; Kong et al., 2020). As for text-generation, DiffuSeq (Gong et al., 2022) uses the Diffusion-LM (Li et al., 2022) structure for sequence-to-sequence tasks in a nonautoregressive manner, and both models perform diffusion operations in the embedding space. However, there are several important drawbacks. Firstly, the inference speed of the model will be greatly limited by the context length, especially in multi-turn dialogue scenarios where time consumption can be disastrous. Secondly, these models need to be trained from scratch and cannot take advantage of pre-trained language models. 
Some work has arXiv:2404.06760v1 [cs.CL] 10 Apr 2024 also attempted to combine diffusion models with latent variable. For example, LATENTOPS (Liu et al., 2022) applies diffusion models in latent space for controllable text generation tasks, this approach involves training multiple classifiers for different control requirements, and using the corresponding classifier to guide the inference of diffusion model in order to achieve controlled generation of text. However, as a complex conditional generation task, it is difficult to train classifiers to guide the latent inference process for dialogue generation. In this work, we propose a structure that combines a latent-based diffusion model with a pretrained language model to address the one-tomany modeling problem in multi-turn dialogues, called DiffusionDialog. DiffusionDialog integrates a encoder-decoder structured pre-trained language model Bart (Lewis et al., 2019) and a latent-based (Vaswani et al., 2017) diffusion model with transformer decoder structure. It performs inference of the diffusion model in the fixeddimensional latent space, and combines the diffusion model with the language model for specific response generation. Instead of learning to approximate the fixed prior (e.g. Gaussian distribution) of the latent variable, our diffusion model learns a more flexible prior distribution from the encoder, enabling the generation of responses with finergrained diversity. And due to the low-dimensional nature of the latent space, our diffusion model overcomes the slow inference speed issue which is the major problem of diffusion models. The contributions of this paper can be summarized as follows: 1. We propose a novel approach to address the one-to-many problem in dialogue using a combination of a latent-based diffusion model and a pre-trained language model. 2. To the best of our knowledge, our work is the first to apply a latent diffusion model to dialog generation. By reasoning in the latent space, the inference efficiency of our diffusion model is significantly improved. 3. Through comparative experiments, we demonstrate the effectiveness of our model, which can generate responses that are rich in diversity while ensuring fluency and coherence. 2. Background 2.1. Dialog Generation with Latent Variable The objective of dialog system is to estimate the conditional distribution p(x|c). Let d = [u1, ..., uk] denote a dialogue comprising of k utterances. Each utterance is represented by ui = [w1, ..., w|ui|], where wn refers to the n-th word in ui. Additionally, we define c = [u1, ..., uk\u22121] as the dialogue context, which constitutes the k \u22121 historical utterances, and x = uk as the response, which denotes the next utterance in the dialogue. Finding a direct connection between the discrete token sequences x and c can be challenging. To address this issue, we propose the use of a continuous latent variable z, which serves as a high-level representation of the response. In this two-step response generation process, we first sample a latent variable z from a distribution p\u03b8(z|c) that resides in a latent space Z. Subsequently, we decode the response x from z and c as p\u03b8(x|z, c).And this process can be estimated as p\u03b8(x|c) = Z z p\u03b8(z|c)p\u03b8(x|z, c)dz. (1) Since the optimal z is intractable, we optimize the posterior distribution of z as q\u03d5(z|x) considering the x. 
And we approximate the posterior with the prior distribution p\u03b8(z|c), log p\u03b8(x|c) = log R z q\u03d5(z|x)p\u03b8(x|z, c) \u2265Ez\u223cq\u03d5(z|x)[log p\u03b8(x|z, c)] \u2212KL(q\u03d5(z|x), p\u03b8(z|c)). (2) 2.2. Diffusion Model in Latent Space Diffusion model is designed to operate in fixed and continuous domain, consisting forward and reverse processes. In this work, we perform forward and reverse process in learned latent space representing the high-level semantic of response. Suppose posterior as z0 \u223cq\u03d5(z|x), in the forward process, z0 is corrupted with standard Gaussian noise in large amount of step, forming a Markov chain of z0, z1, ..., zT , with zT \u223cN(0, I): q(zt|zt\u22121) = N(zt; p 1 \u2212\u03b2tzt\u22121, \u03b2tI), (3) where \u03b2t \u2208(0, 1) controls the scale of the noise in a single step. In the reverse progress, diffusion model learn to reconstruct z0 from zT by learning p\u03b8(zt\u22121|zt) = N(zt\u22121; \u00b5\u03b8(zt, t), \u03a3\u03b8(zt, t)), Since the q(zt\u22121|zt, z0) has a closed form,the canonical objective is the variational lower bound of log p\u03b8(z0), Lvlb = Eq [DKL (q (zT | z0) \u2225p\u03b8 (zT ))] +Eq hPT t=2 DKL (q (zt\u22121 | zt, z0) \u2225p\u03b8 (zt\u22121 | zt, t)) i \u2212log p\u03b8 (z0 | z1) . (4) To promote stability in training, we take advantage of the simplified objective proposed by Ho et al. as Lsimple, Lsimple(z0) = T X t=1 E q(zt|z0)\u2225\u00b5\u03b8(zt, t) \u2212\u02c6 \u00b5 (zt, z0) \u22252, (5) where \u02c6 \u00b5(zt, z0) refers to q(zt\u22121|zt, z0), and \u00b5\u03b8(zt, z0) is learned by diffusion model. 3. DiffusionDialog 3.1. Model Architecture Our model introduces a hierarchical generation process with latent variable. Firstly it obtains latent variable reflecting the semantic of response from the context and then generate the response considering the latent variable and the context (Equation 1), thus the response generation involves three key components: the dialogue context c, the response r, and the latent variable z. We combines encoder-decoder structured pretrained language model Bart with a latent-based diffusion model to handle the two-stage generation, the figure 2 illustrates our model, and we explain our model by illustrating the function of each part of the model. 3.1.1. Bart Encoder The bart encoder plays a dual role in our model, encoding both the contex and the latent variables. For context, following the PLATO, in addition to token and position embeddings, it also incorporates turn embeddings to align with the context turn number, and role embeddings to align with the speaker\u2019s role. As a result, the final embedding input of the context is the sum of corresponding token, turn, role, and position embeddings. For latent variables, since the priors are untraceable, bart encoder learns the priors of the latent variable q\u03d5(z|x) which represents the high-level semantic information about the response. To connect the latent space, we concatenate a special token in front of the response to encode the semantic information of the response. We refer to this special token as latent toke. Therefore, the input format for latent variable encoding is [l, wx 1, wx 2..., wx n], n refers to the length of response x. We append a multilayer perceptron to obtain a representation of the posterior distribution z0 \u223c q\u03d5(z|x) : z0 = MLP(h[L]), (6) where h[L] \u2208Rd refers to the final hidden state of the latent token. 3.1.2. 
Latent Diffusion Denoiser After obtaining z0 from the bart encoder, we sample a time step t \u2208[1, T] uniformly and add noise to the latent variable according to Equation 3, resulting in a noised latent zt. The latent diffusion denoiser is trained to denoise the latent. It adopts the structure of a transformer decoder, taking the noised latent variable as inputs and incorporates the context hidden state with cross-attention mechanism, and a timestep embedding is also added before the first Transformer block to inform the model of the current timestep, \u02dc z0 = Denoiser(zt, et, hc), (7) where et refers to the embedding of the timestep t. Since the context hidden state is fixed during inference, the inference time required for the diffusion model is short. 3.1.3. Bart Decoder To guide the response generation of the decoder using latent variables, we adopt the memory scheme from OPTIMUS (Li et al., 2020). Specifically, we project the latent variable z as a key-value pair and concatenate them to the left of the token hidden state to introduce the latent variable into the decoder. H(l+1) = MultiHead(H(l), h(l) Mem \u2295H(l), h(l) Mem \u2295H(l)), where H(l) refers to the token hidden state of the l-th layer, and h(l) Mem is calculated as: h(l) Mem = \u0014 zkey zvalue \u0015 = W l M z, (8) where W l M \u2208Rd\u00d72d is a weight matrix. 3.2. Training During our training process for dialogue generation, we utilize three different loss functions: negative log-likelihood (NLL) loss, bag-of-words (BOW) loss, and latent denoising (LD) loss. Detailed descriptions will be provided in this section. 3.2.1. Response semantic capture To enable the latent variable to capture the overall semantic information of the response, we adopt the bag-of-words (BOW)(Zhao et al., 2017) loss, which is used to enable the latent variable to predict the tokens in the response in a non-autoregressive manner. LBOW = \u2212Ez0\u223cq\u03d5(z|r) N X n=1 log p(rt|z0) = \u2212Ez0\u223cq\u03d5(z|r) N X n=1 log efrn P v\u2208V efv . (9) The symbol V refers to the entire vocabulary. The function f attempts to non-autoregressively predict the words that make up the target response. f = softmax (W2hz + b2) \u2208R|V |. (10) Response Latent Noise Context Shifted Response Context Shifted Response Training Inference Bart Encoder Bart Encoder Bart Decoder Latent Denoiser Latent Denoiser Bart Encoder Bart Decoder Figure 2: frame work of DiffusionDialog. In the given equation, hz represents the hidden state of the latent variable, while |V | denotes the size of the vocabulary. The estimated probability of word rn is denoted by frn. BOW loss disregards the word order and compels the latent variable to capture the overall information of the target response. 3.2.2. Latent Denoising For each training step, we sample a time step t and obtain zt referring to Equation 3. To better capture the semantic information of the latent variables, our diffusion model predicts z0 directly instead of zt\u22121 given zt, denoted as Lz0-simple , a variant of Lsimple in Equation 5: Lz0-simple (z0) = T X t=1 Ezt \u2225p (zt, c, t) \u2212z0\u22252 . (11) where our latent diffusion denoiser p (zt, hc, t) predicts z0 directly. Thus at each time step, the loss of latent denoising is: LLD = \u2225p (zt, t, c) \u2212z0\u22252. (12) 3.2.3. Response Generation In our model, the response is generated by conditioning on both the latent variable and the context. 
To train the response generation we adopt the commonly used NLL loss, LNLL = \u2212E_{\u02dcz0\u223cp(z|c,zt,t)} log p(r | c, \u02dcz0) = \u2212E_{\u02dcz0\u223cp(z|c,zt,t)} \u2211_{n=1}^{N} log p(rt | c, \u02dcz0, r