diff --git "a/related_53K/test_related_long_2404.18020v1.json" "b/related_53K/test_related_long_2404.18020v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.18020v1.json" @@ -0,0 +1,8449 @@ +[ + { + "url": "http://arxiv.org/abs/2404.18020v1", + "title": "DM-Align: Leveraging the Power of Natural Language Instructions to Make Changes to Images", + "abstract": "Text-based semantic image editing assumes the manipulation of an image using\na natural language instruction. Although recent works are capable of generating\ncreative and qualitative images, the problem is still mostly approached as a\nblack box sensitive to generating unexpected outputs. Therefore, we propose a\nnovel model to enhance the text-based control of an image editor by explicitly\nreasoning about which parts of the image to alter or preserve. It relies on\nword alignments between a description of the original source image and the\ninstruction that reflects the needed updates, and the input image. The proposed\nDiffusion Masking with word Alignments (DM-Align) allows the editing of an\nimage in a transparent and explainable way. It is evaluated on a subset of the\nBison dataset and a self-defined dataset dubbed Dream. When comparing to\nstate-of-the-art baselines, quantitative and qualitative results show that\nDM-Align has superior performance in image editing conditioned on language\ninstructions, well preserves the background of the image and can better cope\nwith long text instructions.", + "authors": "Maria Mihaela Trusca, Tinne Tuytelaars, Marie-Francine Moens", + "published": "2024-04-27", + "updated": "2024-04-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Despite the aim of keeping the background as similar as possible to the input image, numerous AIbased semantic image editors insert unwanted alterFigure 3: Semantic image editing: Imagen dataset. Source captions: (1) c1. A photo of a British shorthair cat wearing a cowboy hat and red shirt riding a bike on a beach. (2) c1. An oil painting of a raccoon wearing sunglasses and red shirt playing a guitar on top of a mountain. (3) c1. An oil painting of a fuzzy panda wearing sunglasses and red shirt riding a bike on a beach. ations in the image. FlexIt (Couairon et al., 2022) combines the input image and instruction text into a single target point in the CLIP multimodal embedding space and iteratively transforms the input image toward this target point. Zhang et al. (2023a) introduce ControlNet as a neural network based on two diffusion models, one frozen and one trainable. While the trainable model is optimized to inject the textual conditionality of the semantic editing, the frozen model preserves the weights of the model pre-trained on large image corpora. The output of ControlNet is gathered by summing the outputs of the two diffusion models. To keep the structural information of the input image, Tumanyan et al. (2023) define a Plug-and-Play model as a variation of the Latent Diffusion Model. Their method edits an input image using not only textual guidance but also a set of features that separately store spatial information and layout details like the shape of objects. While most text-based image editors are trainingfree, Imagic proposed by Kawar et al. (2023), assumes fine-tuning of the diffusion model by iteratively running over a text embedding optimized to match the input image and resemble the editing text instruction. 
Ultimately, the text embedding of the editing instructions and the optimized text embedding are interpolated and utilized as input by the fine-tuned model to generate the final edited image. This idea of fine-tuning a diffusion model is also adopted by Brooks et al. (2023) to define InstructPix2Pix as a model that approaches textbased image editing as a supervised task. Due to the scarcity of data, a methodology relying on Prompt-to-Prompt (Hertz et al., 2023) is proposed for generating pairs of images before and after the update. During inference, the fine-tuned stable diffusion model can seamlessly edit images using an input image and a text instruction. The above approaches lack an explicit delineation of the image content to be altered. Closer to our work is the Prompt-to-Prompt model (Hertz et al., 2023) which connects the text prompt with different image regions using cross-attention maps. The image editing is then performed in the latent representations responsible for the generation of the images. In contrast, our work focuses on the detection and delineation of the content to be altered in the image and is guided by the difference in textual instructions. Additionally, we edit images using real pictures and not latent representations artificially generated by a source prompt. To overcome the problem of unwanted alterations in the image, DiffEdit (Couairon et al., 2023) computes an image mask as the difference between the denoised outputs using the textual instruction that describes the source image and the instruction that describes the desired edits. However, without an explicit alignment between the two text instructions and the input image, DiffEdit has little control over the regions to be replaced or preserved. While DiffEdit internally creates the editing mask, models like SmartBrush (Xie et al., 2023), Imagen Editor (Wang et al., 2023), Blended Diffusion (Avrahami et al., 2022) or Blended Latent Diffusion (Avrahami et al., 2023) directly edit images using hand-crafted user-defined masks. Due to a rough text-based control, the above models often struggle with preserving background details and are overly sensitive to the length of text instructions. Different from the current models, our DM-Align model does not treat the recognition of the visual content that requires preservation or substitution as a black box. By explicitly capturing the semantic differences between the natural language instructions, DM-Align provides comprehensive control over image editing. This novel approach results in superior preservation of unaltered image content and more effective processing of long text instructions. Except for the models that require additional input masks, all the above-mentioned text-based image editors are used as baselines for our evaluation.", + "pre_questions": [], + "main_content": "Introduction AI-driven image generation was confirmed as a smooth-running option for content creators with high rates of efficiency and also creativity (Ramesh et al., 2022) that can be easily adapted to generate consecutive frames for video generation (Ding et al., 2022; Singer et al., 2023). Text-based guidance has proven to be a natural and effective means of altering visual content in images. 
Various model architectures have been proposed for text-based image synthesis, ranging from transformers (Ding et al., 2021; Vaswani et al., 2017) to generative adversarial networks (GANs) (Goodfellow et al., 2014; Reed et al., 2016; Zhu et al., 2019), and more recently, diffusion models like DALL\u00b7E 2 (Ramesh et al., 2022), Imagen (Saharia et al., 2022), or Stable Diffusion Models (Rombach et al., 2022). The Figure 1: The proposed image editor utilizes a source caption to describe the initial image and a target text instruction to define the desired edited image. To accomplish this task, we employ the two captions to generate a diffusion mask, refining it further by incorporating regions of words that we want to keep or alter in the image. success of diffusion models, akin to that observed in language models (Kaplan et al., 2020), largely results from their scalability. Factors such as model size, training dataset size, and computational resources contribute significantly to their effectiveness, overshadowing the impact of the model architecture itself. This scalability enables these models to adapt easily to different domains, including unseen concepts (Ramesh et al.; Saharia et al., 2022). Moreover, these models are ready to use without the need for additional training (Choi et al., 2021; Li et al., 2020). While similar to the text-based semantic image generation task in its creation of new visual content, text-guided image editing also relies on additional visual guidance. Consequently, the goal of textguided image manipulation is to modify the content of a picture according to a given text while keeping the remaining visual content untouched. The remaining visual content is from now on referred to as \u201cbackground\". As text-to-image generators, textbased image editors work at the frame level and can be further adapted for video editing (Zhang et al., 2023b). Text-based semantic image editing typically employs text-based image generation models with user-defined image masks (Avrahami et al., 2023, 2022; Wang et al., 2023; Xie et al., 2023). arXiv:2404.18020v1 [cs.CV] 27 Apr 2024 Figure 2: The implementation of DM-Align. The aim is to update the input image described by the text instruction c1 (\u201cA clear sky and a ship landed on the sand\") according to the text instruction c2 (\u201cA clear sky and a ship landed on the ocean\"). Each of these masks is an arrangement that differentiates between the image content that is to be changed or preserved. However, asking humans to generate masks is cumbersome, so we would like to edit images naturally, relying solely on a textual description of the image and its instruction to change it. Existing models for text-based semantic image editing, which do not require human-drafted image masks, struggle to maintain the background (Brooks et al., 2023; Couairon et al., 2022; Kawar et al., 2023; Tumanyan et al., 2023; Zhang et al., 2023a). Preserving the background\u2019s consistency is particularly relevant for applications like game development or virtual worlds, where visual continuity across frames is crucial. Finally, the complexity of textual instructions given by their length poses a challenge for semantic image editors. While the existing models can effectively handle short text instructions, they encounter difficulties in manipulating an image using longer and more elaborate ones. 
To address the aforementioned limitations, we present a novel approach that employs one-to-one alignments between the words in the text instruction describing the source image and those describing the desired edited image (Figure 1). By leveraging these word alignments, we implement image editing as a series of deletion, insertion, and replacement operations. Through this text-based control mechanism, our proposed model consistently produces high-quality editing results, even with long text instructions, while ensuring the preservation of the background. As presented in Figure 2, we align the words of the text that describes the source image and the textual instruction that describes how the image should look after the editing, which allows us to determine the information the user wants to keep, or replace. Then, disjoint regions associated with the preserved or discarded information are detected by segmenting the image. Next, a global, rough mask for inpainting is generated using standard diffusion models. While the diffusion mask allows the insertion of new objects that are larger than the replaced ones, it has the disadvantage of being too rough. Therefore, we further refine it using again the detected disjoint regions. To prove the effectiveness of DM-Align, the masked content is generated using inpainting stable diffusion (Rombach et al., 2022). Our contributions are summarized as follows: 1. Our novel approach reasons with the text caption of the original input image and the text instruction that guides the changes in the image, which is a natural and human-like way of approaching image editing with a high level of explainability. 2. By differentiating between the image content to be changed from the content to be left unaltered, the proposed DM-Align enhances the text control of semantic image editing. 3. Compared to recent models for text-based semantic image editing, DM-Align demonstrates superior capability in handling long text instructions and preserving the background of the input image while accurately implementing the specified edits. In this section, we present our solution for semantic image editing. We define the task and then describe the main steps of the proposed model, which consist of: 1) Detecting the content that needs to be updated or kept relying on the alignment of words of the text that describes the source image and the textual instruction that describes how the image should look after the editing; 2) The segmentation of the image content to be updated or kept by crossmodal grounding; 3) The computation of a global diffusion mask that assures the coherence of the updated image; 4) The refinement of the global diffusion mask with the segmented image content that will be updated or kept; and 5) The inpainting of the mask with the help of a diffusion model. As demonstrated by our experiments, the proposed DM-Align can successfully replace, delete, or insert objects in the input image according to the text instructions. Our method mainly focuses on the nouns of the text instructions and their modifiers. Consequently, DM-Align does not implement action changes and the resulting changes in the position or posture of objects in the input image, which we leave for future work. 3.1 Task Definition DM-Align aims to alter a picture described by a source text description or instruction c1 using a target text instruction c2. 
Considering this definition, the purpose is to adjust only the updated content mentioned in the text instruction c2 and leave the remaining part of the image unchanged. Based on this, we argue the need for a robust masking system that clearly distinguishes between unaltered image regions, which we call \u201cbackground\", and the regions that require adjustments. 3.2 Word alignment between the text instructions The alignment represents the first step of the DMAlign model proposed to enhance the text-based control for semantic image editing (Figure 2). Given the two text instructions c1 and c2, our assumption is that the shared words should indicate unaltered regions, while the substituted words should point to the regions that require manipulations. Implicitly, the most relevant words for this analysis are nouns due to their quality of representing objects in the picture. The words are syntactically classified using the Stanford part-of-speech tagger (Toutanova et al., 2003). We extend the region to be edited by including the regions of the shared words with different word modifiers1 in the two text instructions. As a result, the properties of the already existing objects in the picture can be updated. On the contrary, if the aligned nouns have identical modifiers (or no modifiers) in both instructions, their regions in the image should be unaltered. In addition, we also consider the regions of the unaligned nouns mentioned in the source text instruction (deleted nouns) as unaltered regions. Keeping the regions of the deleted nouns is important because we assume that in the target instruction, a user only mentions the desired changes in the image, omitting irrelevant content (Hurley, 2014). Editing the regions of the deleted nouns reduces the similarity w.r.t the source image and increases the level of randomness in the target image since we generate new visual content that is irrelevant to both the source image and the target caption (Figure 11). Considering the example presented in Figure 4, the diffusion mask is adjusted to include the regions assigned to the sofa and dress. While the sofa is substituted with a bench, the dress has different modifiers in the captions. On the other hand, the regions of nouns \u201cgirl\" and \u201ccat\" are eliminated from the diffusion mask. The girl is mentioned in both captions, while the cat is irrelevant to the user according to the caption c2 and is incorporated in 1A modifier is a word or phrase that offers information about another word mentioned in the same sentence. To keep the editing process simple, in the current work we use only word modifiers represented by adjectives. the background. Figure 4: Word alignment example. Blue: identical words, Purple: substituted words, Green: nouns with different modifiers, Red: nouns mentioned only in the source caption c1. The detection of word alignments between the two text instructions is realized with a neural semiMarkov CRF model (Lan et al., 2021). The model is trained to optimize the word span alignments, where the maximum length of spans is equal to D words (in our case D = 3). The obtained word span alignments will then further be refined into word alignments. The neural semi-Markov CRF model is optimized to increase the similarity between the aligned source and target word span representations, which are each computed with a pretrained SpanBERT model (Joshi et al., 2020). 
The component that optimizes the similarity between these representations is implemented as a feed-forward neural network with Parametric ReLU (He et al., 2015). To avoid alignments that are far apart in the source and target instructions, another component controls the Markov transitions between adjacent alignment labels. To achieve this, it is trained to reduce the distance between the beginning index of the current target span and the end index of the target span aligned to the former source span. Finally, a Hamming distance is used to minimize the distance between the predicted alignment and the gold alignment. The outputs of the above components are fused in a final function \u03c8(a|s,t) that computes the score of an alignment a given a source text s and target text t. The conditional probability of span alignment a is then computed as: p(a|s,t) = e\u03c8(a|s,t) \u2211a\u2032\u2208A e\u03c8(a\u2032|s,t) (1) where the set A denotes all possible span alignments between source text s and target text t. The model is trained by minimizing the negative loglikelihood of the gold alignment a\u2217from both directions, that is, source to target s2t and target to source t2s : \u2211 s,t,a\u2217\u2212log p(a\u2217 s2t|s,t)\u2212log p(a\u2217 t2s|t,s) (2) The neural semi-Markov CRF model is trained on the MultiMWA-MTRef monolingual dataset, a subset of the MTReference dataset (Yao, 2014). Considering the trained model, we predict the word alignments as follows. Given two text instructions c1 and c2, the model predicts two sets of span alignments a: as2t aligning c1 to c2; and at2s aligning c2 to c1 The final word alignment is computed by merging these two span alignments. Let i be a word of the source text and j be a word of the target text, if alignment as2t indicates the connection i\u2212> j and alignment at2s indicates the connection j\u2212> i, then the words i and j become aligned. In the end, the word alignments are represented by a set of pairs (i\u2212j), where i is a word of the instruction c1, and j is a word of the instruction c2. 3.3 Segmentation of the image based on the word alignments The aim is to identify the regions in the image that require changes or conservation (second step in Figure 2). Based on the above word alignments, we select the nouns whose regions will be edited (non-identical aligned nouns or aligned nouns with different modifiers in the two text instructions) and the nouns whose regions will stay unaltered (nouns of the source text instruction not shared with the target text instruction, identical aligned nouns). Once these nouns are selected we use Grounded-SAM (Charles, 2023) to detect their corresponding image regions. Its benefit is the \u201copen-set object detection\" achieved by the object detector Grounding DINO (Liu et al., 2023) which allows the recognition of each object in an image that is mentioned in the language instruction. Given a noun, Grounding DINO detects its bounding box in the image, and SAM (Kirillov et al., 2023) determines the region of the object inside the bounding box. The selected regions will be used to locally refine the diffusion masks discussed in the next section. 3.4 Diffusion mask To ensure the coherence of the complete image given the target language instruction and to cope with the cases when the object to be replaced is smaller than the object to be inserted, we also use a global diffusion mask. 
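Before detailing that mask, the region selection of Sections 3.2-3.3 can be summarized in a short sketch. The snippet below is illustrative only: the bidirectional merge keeps a word pair when it is predicted in both alignment directions, and the region split follows the noun rules described above (deleted or identically modified nouns are kept, substituted nouns or nouns with changed modifiers are edited). Function names, argument layouts and the assumption that every source noun already has a Grounded-SAM mask are ours, not taken from a released implementation.

```python
import numpy as np

def merge_alignments(src_to_tgt, tgt_to_src):
    """Keep a word pair (i, j) only if it is predicted in both directions:
    i -> j by the source-to-target pass and j -> i by the target-to-source pass."""
    return {(i, j) for (i, j) in src_to_tgt if (j, i) in tgt_to_src}

def split_noun_regions(src_tokens, tgt_tokens, src_nouns,
                       src_mods, tgt_mods, aligned, noun_masks):
    """Split the source-caption nouns into regions to edit vs. regions to keep.

    src_nouns          : indices of nouns in the source caption c1
    src_mods / tgt_mods: {noun index: set of adjective modifiers}
    aligned            : word pairs returned by merge_alignments
    noun_masks         : {source noun index: HxW boolean mask from Grounded-SAM}
    """
    tgt_of = dict(aligned)                       # source index -> target index
    h, w = next(iter(noun_masks.values())).shape
    edit = np.zeros((h, w), dtype=bool)
    keep = np.zeros((h, w), dtype=bool)
    for i in src_nouns:
        if i not in tgt_of:                      # noun only in c1 (deleted): preserve it
            keep |= noun_masks[i]
            continue
        j = tgt_of[i]
        same_word = src_tokens[i] == tgt_tokens[j]
        same_mods = src_mods.get(i, set()) == tgt_mods.get(j, set())
        if same_word and same_mods:              # identical noun, identical modifiers
            keep |= noun_masks[i]
        else:                                    # substituted noun or changed modifiers
            edit |= noun_masks[i]
    return edit, keep
```

The two returned binary maps are exactly the "regions to be altered" and "unaltered regions" used to refine the diffusion mask later on.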
The computation of the diffusion mask represents the third step of our proposed model (Figure 2) and relies on the denoising diffusion probabilistic models (DDPM) (Ho et al., 2020; Weng, 2021). DDPMs are based on Markov chains that gradually convert the input data Figure 5: Semantic image editing: Bison dataset. Source captions: (1) c1. A man standing next to a baby elephant in the city. (2) c1. A wooden plate topped with sliced meat and vegetables. (3) c1. A vase filled with red and white flowers. into Gaussian noise during a forward process, and slowly denoise the sampled data into newly desired data during a reverse process. In each iteration t of the forward process, new data xt is sampled from the distribution q(xt|xt\u22121) = N ( p 1\u2212\u03b2xt\u22121,\u03b2I), where \u03b2t is an increasing coefficient that varies between 0 and 1 and controls the level of noise for each time step t. The process is further simplified by expressing the sampled data xt w.r.t the input image x0, as follows: xt = \u221a\u03b1tx0 + p 1\u2212\u03b1t\u03b5 (3) where \u03b1t = \u220ft i=0(1 \u2212\u03b2i) and \u03b5 \u223cN (0,1) represents the noise variable. As we empirically observed that the editing effect is diminished over the regions where the noise variable is cancelled, we set the noise variable \u03b5 to 0 over the regions that should be preserved. We dubbed this operation noise cancellation. The forward process is executed for T iterations until xT converges to N (0,1). During the reverse process, at each time step t \u22121, xt\u22121 is denoised from the distribution q\u03b8(xt\u22121|xt) defined as: q\u03b8(xt\u22121|xt) = N ( 1 p 1\u2212\u03b2t (xt\u2212 \u03b2t \u221a1\u2212\u03b1t \u03b5\u03b8(xt)), 1\u2212\u03b1t\u22121 1\u2212\u03b1t \u03b2t) (4) where \u03b5\u03b8(xt,t) is estimated by a neural network usually represented by a U-Net. To impose the text conditionally in a diffusion model, we have to integrate the text instruction c into the U-Net model and compute \u03b5\u03b8(xt|c), instead of \u03b5\u03b8(xt). Using classifier-free guidance (Saharia et al., 2022) and knowing that s (s > 1) represents the guidance scale, \u03b5\u03b8(xt) mentioned in Eq. 4 is replaced by \u03b5\u03b8(xt|c) defined as: \u03b5\u03b8(xt|c) = s\u03b5\u03b8(xt|c)+(1\u2212s)(\u03b5\u03b8(xt|0) (5) To obtain the diffusion mask, we first compute the denoised output of the input image corresponding to the source instruction and the denoised output of the input image corresponding to the target instruction by running two separate DDPM processes. The diffusion process does not run over the input image but over its encoded representation yielded by a Variational Autoencoder (VAE) (Kingma and Welling, 2014; Rombach et al., 2022) with Kullback-Leibler loss. Therefore, the denoised outputs do not represent the final edited image but only an intermediate image representation with semantic information associated with the source or target instruction. Inspired by Couairon et al. (2023), we compute the diffusion mask as the absolute difference between the two noise estimates that is rescaled between [0,1] and binarized using a threshold set to 0.5. This diffusion mask represents a global mask that roughly indicates the content to be changed. 3.5 Refinement of the diffusion mask The refinement of the diffusion mask represents the fourth step of DM-Align as presented in Figure 2. To further improve the precision of the global diffusion mask, we refine it using the regions detected in Section 3.3. 
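Before turning to the refinement itself, the global mask computation of Section 3.4 can be sketched as follows. This is a simplified illustration in the spirit of the DiffEdit-style estimate the method builds on: `eps_model` stands for the text-conditioned noise predictor (its signature is an assumption), `alpha_bar` for the cumulative products of Eq. (3), and the timestep range, guidance scale and number of noise samples are illustrative choices rather than the paper's exact settings; `keep_regions` is assumed to be given at the latent resolution.

```python
import torch

@torch.no_grad()
def global_diffusion_mask(eps_model, z0, emb_src, emb_tgt, emb_null, alpha_bar,
                          keep_regions=None, s=7.5, n_samples=10, thr=0.5):
    diffs = []
    for _ in range(n_samples):
        t = torch.randint(low=200, high=600, size=(1,))   # illustrative step range
        a = alpha_bar[t]                                   # cumulative alpha_t of Eq. (3)
        eps = torch.randn_like(z0)
        if keep_regions is not None:                       # noise cancellation:
            eps = eps * (1.0 - keep_regions)               # set the noise to 0 on preserved regions
        z_t = a.sqrt() * z0 + (1.0 - a).sqrt() * eps       # forward process, Eq. (3)

        def guided(cond):                                  # classifier-free guidance, Eq. (5)
            return s * eps_model(z_t, t, cond) + (1.0 - s) * eps_model(z_t, t, emb_null)

        # absolute difference between source- and target-conditioned noise estimates
        diffs.append((guided(emb_src) - guided(emb_tgt)).abs().mean(dim=1))
    m = torch.stack(diffs).mean(dim=0)
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)         # rescale to [0, 1]
    return (m > thr).float()                               # binarize at 0.5
```

With this rough global mask in hand, the refinement proceeds as follows.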
More specifically, we extend the diffusion mask to include the regions to be altered and Table 1: Image-level evaluation for Dream, Bison and Imagen datasets (mean and variance). Compared with the baselines, DM-Align achieves the best image-based scores while FlexIT obtains the best similarity w.r.t the target instruction as indicated by CLIPScore. Knowing that the CLIPScore is heavily biased for models based on the CLIP model (as FlexIT does), and considering the image-based scores, DM-Align achieves the best trade-off between similarities to the input image and the target instruction. FID\u2193 LPIPS\u2193 PWMSE\u2193 CLIPScore\u2191 Dream FlexIT 150.20 \u00b1 0.67 0.53 \u00b1 0.00 47.63 \u00b1 0.13 0.87 \u00b1 0.00 InstructPix2Pix 158.77 \u00b1 3.03 0.44 \u00b1 0.00 43.20 \u00b1 0.44 0.81 \u00b1 0.00 ControlNet 140.42 \u00b1 0.38 0.49 \u00b1 0.00 49.6 \u00b1 0.46 0.80 \u00b1 0.00 DiffEdit 126.77 \u00b1 0.14 0.29 \u00b1 0.57 30.22 \u00b1 0.14 0.72 \u00b1 0.00 Plug-and-Play 128.13 \u00b1 0.98 0.53 \u00b1 0.00 48.56 \u00b1 0.13 0.76 \u00b1 0.00 Imagic 157.06 \u00b1 0.23 0.65 \u00b1 0.00 48.23 \u00b1 0.03 0.79 \u00b1 0.00 DM-Align 124.57 \u00b1 0.52 0.31 \u00b1 0.00 29.24 \u00b1 0.03 0.80 \u00b1 0.00 Bison FlexIT 41.78 \u00b1 0.09 0.50 \u00b1 0.00 42.59 \u00b1 0.03 0.90 \u00b1 0.00 InstructPix2Pix 62.62 \u00b1 0.17 0.53 \u00b1 0.00 41.45 \u00b1 0.01 0.78 \u00b1 0.00 ControlNet 45.87 \u00b1 0.38 0.45 \u00b1 0.00 52.12 \u00b1 0.11 0.78 \u00b1 0.00 DiffEdit 53.54 \u00b1 0.22 0.45 \u00b1 0.00 49.65 \u00b1 0.18 0.76 \u00b1 0.00 Plug-and-Play 52.44 \u00b1 0.18 0.46 \u00b1 0.00 48.45 \u00b1 0.15 0.76 \u00b1 0.00 Imagic 63.23 \u00b1 0.28 0.52 \u00b1 0.00 51.44 \u00b1 0.12 0.77 \u00b10.00 DM-Align 40.05 \u00b1 0.03 0.39 \u00b1 0.00 37.05 \u00b1 0.07 0.78 \u00b1 0.00 Imagen FlexIT 91.86 \u00b1 0.32 0.46 \u00b1 0.00 44.05 \u00b1 0.00 0.91 \u00b1 0.00 InstructPix2Pix 133.33 \u00b1 0.04 0.57 \u00b1 0.00 42.68 \u00b1 0.18 0.79 \u00b1 0.00 ControlNet 85.86 \u00b1 0.26 0.51 \u00b1 0.00 58.44 \u00b1 0.04 0.79 \u00b1 0.00 DiffEdit 101.73 \u00b1 0.00 0.38 \u00b1 0.00 30.02 \u00b1 0.09 0.71 \u00b1 0.00 Plug-and-Play 84.37 \u00b1 0.29 0.41 \u00b1 0.00 41.79 \u00b1 0.07 0.78 \u00b1 0.00 Imagic 94.92 \u00b1 0.44 0.67 \u00b1 0.00 51.58 \u00b1 0.11 0.77 \u00b1 0.00 DM-Align 66.68 \u00b1 0.01 0.31 \u00b1 0.00 29.04 \u00b1 0.01 0.79 \u00b1 0.00 Table 2: Image-level evaluation of DM-Align on a subset of the Bison dataset that contains only source and target text instructions with a degree of similarity higher than Rouge 0.7. Out of all baselines, only FlexIT and DiffEdit are presented, as they utilize a source caption in their implementation. While DM-Align scores better than the baselines for image-based metrics, FlexIT has the highest CLIPScore due to its CLIP-based architecture. FID\u2193 LPIPS\u2193 PWMSE\u2193 CLIPScore\u2191 FlexIT 71.64 \u00b1 0.03 0.48 \u00b1 0.00 42.30 \u00b1 0.03 0.89 \u00b1 0.00 DiffEdit 74.60 \u00b1 0.94 0.44 \u00b1 0.01 51.75 \u00b1 0.29 0.76 \u00b1 0.00 DM-align 67.91 \u00b1 0.00 0.36 \u00b1 0.00 36.28 \u00b1 0.00 0.78 \u00b1 0.00 shrink it to avoid editing over the preserved regions. To improve control over the preserved background, we adjust the noise variable over the forward process of the obtained diffusion mask. The noise variable is cancelled for the unaltered regions detected in the previous step and kept unchanged for the regions to be manipulated. 
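In code, the extension and shrinkage described above amount to a simple union and subtraction of binary maps; the sketch below assumes all three inputs are binary tensors of the same spatial size (the variable names are ours).

```python
import torch

def refine_mask(global_mask, edit_regions, keep_regions):
    """Extend the rough diffusion mask with the regions that must change,
    then carve out the regions that must stay untouched."""
    refined = torch.clamp(global_mask + edit_regions, max=1.0)   # extension
    return refined * (1.0 - keep_regions)                        # shrinkage
```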
Note that both the global diffusion mask with noise cancellation and the regions determined through image segmentation are necessary for a qualitative mask. The global diffusion mask facilitates the replacement of small objects with larger ones and gives context to the editing. On the other hand, the insertion or deletion of different regions based on image segmentation improves the precision of the final mask as shown in ablation experiments in Subsection 5.1. Once the refined diffusion mask is computed, we use inpainting stable diffusion (Rombach et al., 2022) to edit the masked regions based on the given target text caption (fifth step of DM-Align presented in Figure 2). We also tried to replace the inpainting stable diffusion with latent blended diffusion (Avrahami et al., 2023). However, the obtained results were slightly worse, and the computational time increased by 60% (details are in Table 6 of Appendix C). 4 Experimental setup Baselines. We compare results obtained with DM-Align with those of FlexIT (Couairon et al., 2022), DiffEdit (Couairon et al., 2023), ControlNet (Zhang et al., 2023a), Plug-and-Play (Brooks et al., 2023), Imagic (Kawar et al., 2023) and IntructPix2Pix (Tumanyan et al., 2023). The implementation details are presented in Appendix A. Datasets. While ControlNet, Plug-and-Play, Imagic and InstructPix2Pix are evaluated on Table 3: Background-level evaluation for Dream, Imagen and Bison datasets (mean and variance). DM-Align outperforms the baselines in terms of background preservation, especially for the Bison and Imagen datasets which have more elaborate captions than Dream. FID\u2193 LPIPS\u2193 PWMSE\u2193 Dream FlexIT 154.44 \u00b1 0.19 0.31 \u00b1 0.00 30.22 \u00b1 0.05 InstructPix2Pix 147.62 \u00b1 0.82 0.25 \u00b1 0.00 27.87 \u00b1 0.35 ControlNet 137.29 \u00b1 1.86 0.31 \u00b1 0.00 32.74 \u00b1 0.42 DiffEdit 125.95 \u00b1 0.44 0.15 \u00b1 0.00 15.72 \u00b1 0.04 Plug-and-Play 151.42 \u00b1 1.02 0.34 \u00b1 0.00 31.59 \u00b1 0.00 Imagic 174.41 \u00b1 1.49 0.42 \u00b1 0.00 31.63 \u00b1 0.08 DM-Align 102.44 \u00b1 0.07 0.11 \u00b1 0.00 14.54 \u00b1 0.01 Bison FlexIT 35.48 \u00b1 0.07 0.24 \u00b1 0.00 20.38 \u00b1 0.03 InstructPix2Pix 44.01 \u00b1 0.28 0.26 \u00b1 0.00 20.00 \u00b1 0.06 ControlNet 35.39 \u00b1 0.06 0.25 \u00b1 0.00 26.58 \u00b1 0.04 DiffEdit 37.68 \u00b1 0.33 0.23 \u00b1 0.00 19.68 \u00b1 0.09 Plug-and-Play 36.44 \u00b1 0.78 0.24 \u00b1 0.00 19.79 \u00b1 0.12 Imagic 43.55 \u00b1 0.76 0.27 \u00b1 0.00 27.12 \u00b1 0.10 DM-Align 16.41 \u00b1 0.00 0.08 \u00b1 0.00 14.16 \u00b1 0.00 Imagen FlexIT 92.44 \u00b1 0.35 0.36 \u00b1 0.00 36.57 \u00b1 0.01 InstructPix2Pix 124.32 \u00b1 0.80 0.46 \u00b1 0.00 34.29 \u00b1 0.15 ControlNet 85.56 \u00b1 0.31 0.42 \u00b1 0.00 49.78 \u00b1 0.02 DiffEdit 88.01 \u00b1 0.55 0.31 \u00b1 0.00 24.17 \u00b1 0.09 Plug-and-Play 81.28 \u00b1 0.28 0.34 \u00b1 0.00 31.59 \u00b1 0.07 Imagic 103.74 \u00b1 1.49 0.56 \u00b1 0.00 43.91 \u00b1 0.08 DM-Align 54.12 \u00b1 0.04 0.21 \u00b1 0.00 22.09 \u00b1 0.00 Table 4: Ablation tests for the Imagen dataset (mean and variance). The results underscore the significance of all DM-Align components. \"Non-shared objects\" denote objects mentioned solely in the source caption, while \"Refinement of diffusion mask\" involves adjusting the diffusion mask through shrinkage or expansion based on regions corresponding to keywords. 
FID\u2193 LPIPS\u2193 PWMSE\u2193 CLIPScore\u2191 (w/o) diffusion mask 43.36 \u00b1 1.44 0.42 \u00b1 0.00 41.61 \u00b1 0.26 0.77 \u00b1 0.00 (w/o) noise cancellation 44.44 \u00b1 0.76 0.41 \u00b1 0.00 40.57 \u00b1 0.30 0.79 \u00b1 0.00 (w/o) refinement of diffusion mask 47.63 \u00b1 0.78 0.43 \u00b1 0.00 43.60 \u00b1 0.15 0.77 \u00b1 0.00 (w/o) objects with different modifiers 42.34 \u00b1 0.57 0.40 \u00b1 0.00 38.23 \u00b1 0.20 0.77 \u00b1 0.00 (w/o) non-shared objects 45.35 \u00b1 2.25 0.43 \u00b1 0.00 41.57 \u00b1 0.79 0.77 \u00b1 0.00 DM-Align 40.05 \u00b1 0.00 0.39 \u00b1 0.00 37.05 \u00b1 0.00 0.78 \u00b1 0.00 datasets devoid of source text descriptions, FlexIT and DiffEdit are evaluated on a subset of the ImageNet dataset (Deng et al., 2009), which assumes the replacement of the main object of the image with another object. Additionally, DiffEdit is evaluated on the Bison dataset (Hu et al., 2019) and a self-defined collection of Imagen pictures (Saharia et al., 2022). Out of these datasets, our model is evaluated using the Bison dataset and the collection of images generated by Imagen (further referred to as the Imagen dataset) described by Couairon et al. (2023). We omit the ImageNet dataset due to its oversimplified setup, primarily employing single-word source and target text instructions. Bison and Imagen datasets contain elaborated text captions with up to 23 words. To investigate the behavior of the DM-Align model and the baseline models when confronted with shorter text instructions we generate a collection of 100 images using Dream by WOMBO2 that relies on the source captions as guidance. As the dataset is generated using Dream by Wombo, we further refer to it as Dream. To complete the Dream dataset, we specify a new text query as the target instruction for each image-instruction pair. Unlike the Imagen and Bison datasets, the text instructions of Dream do not contain more than 11 words. Evaluation metrics. To evaluate our model, we use a set of metrics that assess the similarity of the edited image to both the input image and the target instruction. By default, it is a trade-off between image-based and text-based metrics as we need to find the best equilibrium point. 2The code is available at https://github.com/cdgco/ dream-api Figure 6: Semantic image editing: Dream dataset. Source captions: (1) c1. A soldier in front of a building. (2) c1. A pot with flowers. (3) c1. A girl throwing a volleyball. Generating images close to the source image improves the image-based metrics while reducing the similarity to the target caption. On the other hand, images close to the target instruction improve the text-based scores but can affect the similarity to the input picture. The equilibrium point is important given that people tend to focus mainly on specifying the desired changes in an image while omitting the information that already exists (Hurley, 2014). Therefore, the edited content can represent a small region of the new image while the rest of it should keep the content of the source image. The similarity (or the distance) of the updated image w.r.t the source image is assessed using FID (Heusel et al., 2017), LPIPS (Zhang et al., 2018) and the pixel-wise Mean Square Error (PWMSE). FID relies on the difference between the distributions of the last layer of the Inception V3 model (Szegedy et al., 2016) that separately runs over the input and edited images. FID measures the consistency and image realism of the new image w.r.t the source image. 
Contrary to the quality assessment computed by FID, LPIPS measures the perceptual similarity by calculating the distance between layers of an arbitrary neural network that separately runs over the input and updated images. As the LPIPS metric, PWMSE determines the pixel leakage by computing the pixel-wise error between the input and the edited images. The similarity of the updated image w.r.t the target instruction is computed in the CLIP multimodal embedding space by the CLIPScore (Hessel et al., 2021). More details about the evaluation metrics are specified in Appendix B. 5 Results and discussion 5.1 Quantitative analysis and ablation tests How well can the DM-Align model edit a source image considering the length of the text instruction? To address the first research question, we refer to Table 1. When compared to the baselines Diffedit, ControlNet, FlexIT, Plug-and-Play, and InstructPix2Pix, our proposed DM-Align model exhibits particularly effective performance in terms of image-based metrics. This effectiveness is particularly noticeable in the Bison and Imagen datasets, which contain longer captions compared to the Dream dataset. When compared with the best baseline over the Imagen dataset, DM-Align improves FID, LPIPS, and PWMSE by 23.42%, 19.87%, and 3.32%, respectively. Similar results are observed for the Bison dataset, where DM-Align enhances the results of the best baseline by 4.22% for FID, by 14.26% for LPIPS, and by 11.20% for PWMSE. In the case of the Dream dataset, DM-Align still outperforms other baselines in terms of FID and PWMSE, albeit with smaller margins. However, in terms of LPIPS, DiffEdit outperforms DM-Align over the instance of the Dream dataset. Given the results presented in Table 1, we posit that baselines find it easier to accurately edit images using short text instructions. Conversely, when text instructions are more elaborate, such as in the Bison and Imagen datasets, results significantly surpass those achieved by the baselines. DM-Align leverages word alignments between source and target instructions, highlighting their crucial role in facilitating effective image editing. In terms of text-based metrics, CLIPScore suggests that FlexIT generates images closest to the target instructions. This outcome is likely attributed to FlexIT\u2019s architecture, which is based on a CLIP model\u2014the same model used to calculate CLIPScore. This issue is highlighted in (Poole et al., 2023). Another possible explanation is that FlexIT is trained to maximize the similarity between input images and instructions. As depicted in Figures 3, 5, and 6, FlexIT may sacrifice image quality for higher similarity scores. Regarding CLIPScore, DM-Align consistently outperforms Plug-and-Play, Imagic and DiffEdit baselines or is equally effective as InstructPix2Pix and ControlNet. DM-Align also outperforms Prompt-to-Prompt (Hertz et al., 2023). As Prompt-to-Prompt can edit only selfgenerated images the comparison with DM-Align is limited only to text-based metrics like CLIPScore. More details about this comparison are presented in Appendix C. Given the text-based and image-based metrics, DM-Align seems to properly preserve the content of the input image and obtain a better trade-off between closeness to the input picture and target instruction than the baselines. 
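For reference, the four scores used above can be approximated with off-the-shelf packages; the sketch below relies on torchmetrics and lpips, with preprocessing deliberately simplified (image resolution and value ranges follow each package's documentation rather than our exact evaluation pipeline), and the torchmetrics CLIPScore is reported on a 0-100 scale rather than the [0, 1] cosine similarity shown in the tables.

```python
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

def evaluate(source_u8, edited_u8, captions):
    """source_u8 / edited_u8: uint8 image batches (N, 3, H, W) in [0, 255];
    captions: list of N target text instructions."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(source_u8, real=True)
    fid.update(edited_u8, real=False)

    # LPIPS expects float images in [-1, 1]; PWMSE is plain pixel-wise MSE.
    src = source_u8.float() / 127.5 - 1.0
    out = edited_u8.float() / 127.5 - 1.0
    lpips_val = lpips.LPIPS(net="alex")(src, out).mean()
    pwmse = ((source_u8.float() - edited_u8.float()) ** 2).mean()

    clip_val = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")(edited_u8, captions)

    return {"FID": fid.compute().item(), "LPIPS": lpips_val.item(),
            "PWMSE": pwmse.item(), "CLIPScore": clip_val.item()}
```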
While the above analysis demonstrates that elaborate text instructions do not affect the editing capabilities of DM-Align, unlike the baselines, we are also interested in examining how the degree of overlap between source and target captions impacts the quality of the edited image. To analyze this, we select 575 Bison instances with a similarity between the source and target instructions higher than Rouge 0.75. We do not conduct this analysis for the Imagen and Dream datasets as their text instructions already exhibit a level of similarity higher than Rouge 0.75. Our analysis is limited to DMAlign, FlexIT, and DiffEdit, as the other baselines do not utilize source captions in their implementation and are therefore omitted from this analysis. The results are presented in Table 2. We observe that while the results are similar to the image-based and text-based scores reported in Table 1 for Bison, all models report better performance and an improved trade-off between image and text-based metrics. These results suggest that increased overlap between source and target captions enhances the quality of image editing. How well does the DM-Align model preserve the background? To extract the background, we consider the DM-Align mask obtained after adjusting the diffusion mask. Upon analyzing the results presented in Table 3, the first notable observation is the significant reduction in the FID score of the DM-Align model by 73.27% for the Bison dataset, 40.12% for the Imagen dataset, and 20.58% for the Dream dataset when compared with the best baseline. Similarly, the LPIPS and PWMSE scores also indicate significant margin reductions, particularly for the Bison and Imagen datasets. Concerning the Dream dataset, DM-Align still outperforms the best-performing baseline with a margin of 25.02% for LPIPS and 7.80% for PWMSE. While DM-Align consistently demonstrates superior results for background preservation, we infer that the baselines are relatively adept at preserving the background only when the instructions are short and simple, as observed in the case of the Dream dataset. This conclusion is further supported by the results presented in Table 1. Ablation tests To run the ablation tests for DMAlign we rely on the Imagen dataset. According to Table 4, the absence of the refinement of the diffusion mask using the regions detected with the word alignment model and the Grounding-SAM segmentation model has the highest negative impact over the similarity w.r.t the input picture. As expected, a significant negative effect over the similarity with the input image is also noticed when omitting the deleted nouns or the nouns with different modifiers in the two queries. Similarly, noise cancellation and especially the diffusion mask also affect the conservation of the background. Including all the components in the architecture of DM-Align mainly facilitates the preservation of the input image and does not result in a reduction of the CLIPScore. Therefore, the inclusion of all these components in the DM-Align represents the best trade-off w.r.t the similarity to the input image and to the target caption. The next five visualizations exemplify the ablation tests. The first row of each figure presents the effect of omitting a component of DM-Align, while the correct behavior is shown in the second row. Figure 7 illustrates the effect of defining the editing mask based only on the image regions of the keywords. 
Without the diffusion mask, the model has to insert a new object in the fixed area of the replaced object. If we need to replace an object with a larger one, DM-Align without diffusion might create distorted and unnatural outputs. As we usuFigure 7: 1st line: Example of omitting the diffusion mask (c1: A woman near a cat., c2: A woman near a dog.). 2nd line: The correct example of including the diffusion mask. Figure 8: 1st line: Example of omitting the cancellation of the noise variable defined within the diffusion model. (c1: A man sitting at a table holding a laptop on the train., c2: A man sitting at a table reading a book on the train.). 2nd line: The correct example of including the noise cancellation. ally expect bigger dogs than cats, DM-Align with diffusion properly replaces the cat with a slightly bigger dog. On the contrary, the dog that replaced the cat is distorted when diffusion is not used. While the overall diffusion mask can give more context for the editing and allows the insertion of objects of different sizes, noise cancellation is an important step used to improve the initial diffusion mask. As shown in Figure 8, when noise cancellation is used, the initial diffusion mask is better trimmed, and the background is properly preserved. As the diffusion mask does not have complete control over the regions to be edited, its extension or shrinkage based on the image regions of the keywords is mandatory to obtain a correct mask for editing. When the image is edited using only the initial diffusion mask in Figure 9, both the ship and the sand are modified, while the former is expected to be preserved. As opposed, when the diffusion mask is refined with image segmentation, only the sand is replaced by the ocean. Figure 9: 1st line: Example of omitting the refinement of the diffusion mask using image segmentation (c1: A clear sky and a ship landed on the sand., c2: A clear sky and a ship landed on the ocean.). 2nd line: The correct example of including the refinement of the diffusion mask with image segmentation. Figure 10: 1st line: Example of omitting the information about modifiers associated with the nouns shared by both captions (c1: A woman with a red jacket., c2: A woman with a green jacket.). 2nd line: The correct example of including the information about the modifiers. The omission of the adjective modifiers in the analysis of DM-Align is exemplified in Figure 10. If the modifiers are left out, DM-Align considers the jacket a shared noun, like the noun \u201cwoman\", and removes its regions from the diffusion mask. As a result, DM-Align does not detect any semantical difference between the text instructions, and the output image is identical to the input image. On the other hand, if the modifiers are considered, DMAlign can properly adjust the color of the jacket while keeping the woman\u2019s face unaltered. As we are interested to make only the necessary updates in the picture, while keeping the background and the regions of the deleted words unchanged, the region assigned to the word \u201cman\" in Figure 11 is removed from the diffusion mask. As a result, the corresponding region is untouched. On the contrary, the inclusion of the region associated with the word \u201cman\" in the diffusion mask increases the randomness in the new image by inserting a store. Since the store is irrelevant, both Figure 11: 1st line: Example of omitting the information about the deleted nouns from the source caption (c1: A motorcycle near a man., c2: A motorcycle.). 
2nd line: The correct example of including the information about the deleted nouns. the similarity scores w.r.t the input image or target instruction are reduced. 5.2 Human qualitative analysis Some qualitative examples extracted from the three data collections are shown in Figures 3, 5, and 6. Compared to DIFFEdit, ControlNet, and FlexIT, as well as Plug-and-Play, Imagic and InstructPix2Pix, the DM-Align model demonstrates superior manipulation of the content of the input image while largely preserving the background in line with the target query. DM-Align establishes semantic connections between source and target queries, updating the image content accordingly, whereas the baselines often alter the background more than necessary, as discussed above. DiffEdit tends to introduce random visual content (see Figure 3), while FlexIT tends to distort and zoom into the image (Figures 5 and 6), trading off the minimization of the reconstruction loss with respect to the input image and the text instructions for potential distortions in the new image. Although ControlNet can maintain the structure of the input image, it struggles to preserve the texture or colors of the objects, likely due to the absence of a masking system. InstructPix2Pix also encounters challenges in preserving the style of objects in the input image and tends to include more objects in the image than specified in the target text instruction. Plug-andPlay zooms into the image and tends to slightly alter the details of objects requested for preservation in the target text instruction. Out of all baselines, Imagic shows the highest tendency to change the input image\u2019s compositional structure, as highlighted also by the image-based metrics presented in Tables 1 and 3. Table 5: Human evaluation of the quality of the editing process based on the text instruction (Q1), the preservation of the background (Q2) and the quality of the edited image (Q3). The results represent the average scores reported by annotators using a 5-point Likert scale. Q1\u2191 Q2\u2191 Q3\u2191 FlexIt 3.75 4.00 3.85 DiffEdit 3.85 4.15 3.85 ControlNet 3.50 3.75 3.90 Plug-and-Play 3.80 4.10 3.85 InstructPix2Pix 3.50 3.75 3.80 Imagic 3.80 3.20 3.85 DM-Align 3.90 4.35 3.95 To confirm the above observations, we randomly selected 100 images from the Bison dataset and asked Amazon MTurk annotators to evaluate the editing quality of the five baselines and the proposed DM-Align. For each edited image, the annotators were asked to evaluate the overall quality of the editing process based on the text instruction (Q1), the preservation of the background (Q2) and the quality of the edited image in terms of compositionality, sharpness, distortion, color and contrast (Q3). According to the human evaluation executed on a 5-point Likert scale, our model scores better than all baselines (Table 5). The inter-rater agreement is good with Cohen\u2019s weighted kappa \u03ba between 0.65 and 0.75 for all analyzed models. 6 Conclusion, limitations and future work We propose a novel model DM-Align for semantic image editing that confers to the users a natural control over the image editing by updating the text instructions. By automatically identifying the regions to be kept or altered purely based on the text instructions, the proposed model is not a black box. Due to the high level of explainability, the users can easily understand the edited result and how to change the instructions to obtain the desired output. 
The quantitative and qualitative evaluations show the superiority of DM-Align to enhance the textbased control of semantic image editing over existing baselines FlexIT, DiffEdit, ControlNet, Imagic, Plug-and-Play and InstructPix2Pix. Unlike the latter models, our approach is not limited by the length of the text instructions. Due to the inclusion of one-to-one alignments between the words of the instructions that describe the image before and after the image update, we can edit images regardless of how complicated and elaborate the text instructions are. Besides the low sensitivity to the complexity of the instructions, the one-to-one word alignments allow us to properly conserve the background while editing only what is strictly required by the users. DM-Align focuses on the editing of objects mentioned as nouns and their adjectives. In future work, its flexibility can be improved by editing actions in which objects and persons are involved. As a result, they might change position in the image without the need to update their properties. Acknowledgments This project was funded by the European Research Council (ERC) Advanced Grant CALCULUS (grant agreement No. 788506).", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2404.06760v1", + "title": "DiffusionDialog: A Diffusion Model for Diverse Dialog Generation with Latent Space", + "abstract": "In real-life conversations, the content is diverse, and there exists the\none-to-many problem that requires diverse generation. Previous studies\nattempted to introduce discrete or Gaussian-based continuous latent variables\nto address the one-to-many problem, but the diversity is limited. Recently,\ndiffusion models have made breakthroughs in computer vision, and some attempts\nhave been made in natural language processing. In this paper, we propose\nDiffusionDialog, a novel approach to enhance the diversity of dialogue\ngeneration with the help of diffusion model. In our approach, we introduce\ncontinuous latent variables into the diffusion model. The problem of using\nlatent variables in the dialog task is how to build both an effective prior of\nthe latent space and an inferring process to obtain the proper latent given the\ncontext. By combining the encoder and latent-based diffusion model, we encode\nthe response's latent representation in a continuous space as the prior,\ninstead of fixed Gaussian distribution or simply discrete ones. We then infer\nthe latent by denoising step by step with the diffusion model. The experimental\nresults show that our model greatly enhances the diversity of dialog responses\nwhile maintaining coherence. Furthermore, in further analysis, we find that our\ndiffusion model achieves high inference efficiency, which is the main challenge\nof applying diffusion models in natural language processing.", + "authors": "Jianxiang Xiang, Zhenhua Liu, Haodong Liu, Yin Bai, Jia Cheng, Wenliang Chen", + "published": "2024-04-10", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "5.1. One-to-many Modeling The existence of multiple suitable responses for a given context is referred to as the one-to-many problem. 
Some works introduce latent variable to model the relationship, CVAE(Zhao et al., 2017) utilizes Gaussian distribution to capture variations in responses at the discourse level, since a simple distribution over the latent variables has a lack of granularity in modeling the semantic information of the responses, DialogWAE(Gu et al., 2018) develop a Gaussian mixture prior network to enrich the latent space, instead of the single Gaussian prior of VAE. iVAEMI(Fang et al., 2019) address the challenge with implicit learning. DialogVED(Chen et al., 2022b) incorporates continuous latent variables into an enhanced encoderdecoder pre-training framework to increase the relevance and diversity of responses. PLATO(Bao et al., 2019) introduces discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Both of PLATO and DialogVED are pretrained with large dialog corpus, providing a strong baseline for one-to-many modeling. 5.2. Diffusion Models for Sequence Learning Since Diffusion model(Dhariwal and Nichol, 2021; Song et al., 2020b) has achieved breakthroughs in the field of image processing. There have been many works attempting to apply diffusion models to the field of natural language processing. Considering the discrete nature of texts, D3PM(Austin et al., 2021) introduce Markov transition matrices to diffuse the source data instead of Gaussian noise, Analog Bits(Chen et al., 2022a) represents discrete data as binary bits, and then training a continuous diffusion model to model these bits as real numbers. Diffusion-LM(Li et al., 2022) develop a non-autoregressive language model based on continuous diffusions with an embedding function and rounding process, iteratively denoises a sequence of Gaussian vectors into words. DiffuSeq(Gong et al., 2022) propose a diffusion model designed for sequence-to-sequence text generation tasks utilizing encoder-only Transformers. And SeqDiffuSeq(Yuan et al., 2022) approach sequence-tosequence text generation with Encoder-Decoder Transformers. LD4LG(Lovelace et al., 2022) learn the continuous diffusion models in the latent space of a pre-trained encoder-decoder model.", + "pre_questions": [], + "main_content": "Introduction Open-domain dialogue generation is a crucial component in dialogue systems. With the development of pre-trained language models, current models are capable of generating fluent and relevant dialogues(Radford et al., 2019; Raffel et al., 2020). However, there is still a lack of exploration in generating diverse responses, because there may be multiple appropriate responses when presented with a single context, and that\u2019s known as the oneto-many mapping problem, shown as figure 1. To model the one-to-many relationship between dialog history and response, Bao et al. (2019) introduce discrete latent variables, but the diversity of response is constrained by the categories of discrete latent variables, making it challenging to achieve fine-grained diversity generation. Sun et al. (2021) and Chen et al. (2022b) introduce continuous latent variable which can relief the problem of the discrete latent variables, but the prior of the model is limited by the inflexible prior distribution, which cannot model the distribution of the response well. 
As an alternative solution of one-to-many problem, we propose the integration of a diffusion model (Ho et al., 2020), which have shown its\u2019 superiority of generating high-quality and diverse results in the fields of image and audio genera\u2217 *Corresponding author He is a good guy. I don't really konw about him. Awful! I like his hair. Who\uff1f Maybe a smart boy He has a great shape of body What do you think of Tom? Sorry, but i don't konw We always have a good time together Figure 1: one to many problem in dialog generation. tion (Dhariwal and Nichol, 2021; Ramesh et al., 2022; Rombach et al., 2022; Kong et al., 2020). As for text-generation, DiffuSeq (Gong et al., 2022) uses the Diffusion-LM (Li et al., 2022) structure for sequence-to-sequence tasks in a nonautoregressive manner, and both models perform diffusion operations in the embedding space. However, there are several important drawbacks. Firstly, the inference speed of the model will be greatly limited by the context length, especially in multi-turn dialogue scenarios where time consumption can be disastrous. Secondly, these models need to be trained from scratch and cannot take advantage of pre-trained language models. Some work has arXiv:2404.06760v1 [cs.CL] 10 Apr 2024 also attempted to combine diffusion models with latent variable. For example, LATENTOPS (Liu et al., 2022) applies diffusion models in latent space for controllable text generation tasks, this approach involves training multiple classifiers for different control requirements, and using the corresponding classifier to guide the inference of diffusion model in order to achieve controlled generation of text. However, as a complex conditional generation task, it is difficult to train classifiers to guide the latent inference process for dialogue generation. In this work, we propose a structure that combines a latent-based diffusion model with a pretrained language model to address the one-tomany modeling problem in multi-turn dialogues, called DiffusionDialog. DiffusionDialog integrates a encoder-decoder structured pre-trained language model Bart (Lewis et al., 2019) and a latent-based (Vaswani et al., 2017) diffusion model with transformer decoder structure. It performs inference of the diffusion model in the fixeddimensional latent space, and combines the diffusion model with the language model for specific response generation. Instead of learning to approximate the fixed prior (e.g. Gaussian distribution) of the latent variable, our diffusion model learns a more flexible prior distribution from the encoder, enabling the generation of responses with finergrained diversity. And due to the low-dimensional nature of the latent space, our diffusion model overcomes the slow inference speed issue which is the major problem of diffusion models. The contributions of this paper can be summarized as follows: 1. We propose a novel approach to address the one-to-many problem in dialogue using a combination of a latent-based diffusion model and a pre-trained language model. 2. To the best of our knowledge, our work is the first to apply a latent diffusion model to dialog generation. By reasoning in the latent space, the inference efficiency of our diffusion model is significantly improved. 3. Through comparative experiments, we demonstrate the effectiveness of our model, which can generate responses that are rich in diversity while ensuring fluency and coherence. 2. Background 2.1. 
2. Background

2.1. Dialog Generation with Latent Variable

The objective of a dialog system is to estimate the conditional distribution $p(x|c)$. Let $d = [u_1, \ldots, u_k]$ denote a dialogue comprising $k$ utterances, where each utterance is $u_i = [w_1, \ldots, w_{|u_i|}]$ and $w_n$ is the $n$-th word of $u_i$. We define $c = [u_1, \ldots, u_{k-1}]$ as the dialogue context, i.e., the $k-1$ historical utterances, and $x = u_k$ as the response, i.e., the next utterance of the dialogue.

Finding a direct connection between the discrete token sequences $x$ and $c$ can be challenging. To address this issue, we propose the use of a continuous latent variable $z$ that serves as a high-level representation of the response. In this two-step generation process, we first sample a latent variable $z$ from a distribution $p_\theta(z|c)$ in a latent space $\mathcal{Z}$, and we then decode the response $x$ from $z$ and $c$ as $p_\theta(x|z, c)$. The overall process is estimated as

$p_\theta(x|c) = \int_z p_\theta(z|c)\, p_\theta(x|z, c)\, dz. \quad (1)$

Since the optimal $z$ is intractable, we optimize the posterior distribution $q_\phi(z|x)$ of $z$ given $x$ and approximate it with the prior distribution $p_\theta(z|c)$, obtaining the evidence lower bound

$\log p_\theta(x|c) \geq \mathbb{E}_{z \sim q_\phi(z|x)}\left[\log p_\theta(x|z, c)\right] - \mathrm{KL}\!\left(q_\phi(z|x) \,\|\, p_\theta(z|c)\right). \quad (2)$

2.2. Diffusion Model in Latent Space

A diffusion model is designed to operate in a fixed and continuous domain and consists of a forward and a reverse process. In this work, we run both processes in a learned latent space that represents the high-level semantics of the response. Denote the posterior latent as $z_0 \sim q_\phi(z|x)$. In the forward process, $z_0$ is corrupted with Gaussian noise over a large number of steps, forming a Markov chain $z_0, z_1, \ldots, z_T$ with $z_T \sim \mathcal{N}(0, I)$:

$q(z_t|z_{t-1}) = \mathcal{N}\!\left(z_t; \sqrt{1-\beta_t}\, z_{t-1}, \beta_t I\right), \quad (3)$

where $\beta_t \in (0, 1)$ controls the scale of the noise added at a single step. In the reverse process, the diffusion model learns to reconstruct $z_0$ from $z_T$ by learning $p_\theta(z_{t-1}|z_t) = \mathcal{N}\!\left(z_{t-1}; \mu_\theta(z_t, t), \Sigma_\theta(z_t, t)\right)$. Since $q(z_{t-1}|z_t, z_0)$ has a closed form, the canonical objective is the variational lower bound of $\log p_\theta(z_0)$:

$\mathcal{L}_{\mathrm{vlb}} = \mathbb{E}_q\!\left[D_{\mathrm{KL}}\!\left(q(z_T|z_0) \,\|\, p_\theta(z_T)\right)\right] + \mathbb{E}_q\!\left[\sum_{t=2}^{T} D_{\mathrm{KL}}\!\left(q(z_{t-1}|z_t, z_0) \,\|\, p_\theta(z_{t-1}|z_t, t)\right)\right] - \log p_\theta(z_0|z_1). \quad (4)$

To promote training stability, we adopt the simplified objective proposed by Ho et al. (2020), denoted $\mathcal{L}_{\mathrm{simple}}$:

$\mathcal{L}_{\mathrm{simple}}(z_0) = \sum_{t=1}^{T} \mathbb{E}_{q(z_t|z_0)} \left\| \mu_\theta(z_t, t) - \hat{\mu}(z_t, z_0) \right\|^2, \quad (5)$

where $\hat{\mu}(z_t, z_0)$ is the mean of the posterior $q(z_{t-1}|z_t, z_0)$ and $\mu_\theta(z_t, t)$ is predicted by the diffusion model.
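As a small, hedged illustration of the forward process in Equation (3) and its closed-form marginal, the sketch below corrupts a batch of latents in PyTorch. The linear beta schedule and the number of steps are assumptions made only for this example; the excerpt above does not specify them.

```python
# Minimal sketch of the latent forward (noising) process of Section 2.2.
# The linear beta schedule and T are illustrative assumptions.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # beta_t in (0, 1), one per step
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)    # abar_t = prod_{s<=t} (1 - beta_s)

def q_sample(z0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Sample z_t ~ q(z_t | z_0) using the closed form implied by Eq. (3):
    z_t = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)."""
    abar_t = alphas_cumprod[t].view(-1, 1)            # (B, 1), broadcast over latent dims
    eps = torch.randn_like(z0)
    return abar_t.sqrt() * z0 + (1.0 - abar_t).sqrt() * eps

# Example: corrupt a batch of 4 latents of dimension 768 at random timesteps.
z0 = torch.randn(4, 768)
t = torch.randint(0, T, (4,))
zt = q_sample(z0, t)
```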
3. DiffusionDialog

3.1. Model Architecture

Our model introduces a hierarchical generation process with a latent variable: it first obtains a latent variable that reflects the semantics of the response from the context, and then generates the response conditioned on both the latent variable and the context (Equation 1). The response generation therefore involves three key components: the dialogue context $c$, the response $r$, and the latent variable $z$. We combine the encoder-decoder pre-trained language model Bart with a latent-based diffusion model to handle this two-stage generation. Figure 2 illustrates our model; below we explain the function of each component.

Figure 2: framework of DiffusionDialog (training and inference flows through the Bart encoder, the latent denoiser, and the Bart decoder).

3.1.1. Bart Encoder

The Bart encoder plays a dual role in our model, encoding both the context and the latent variables. For the context, following PLATO, in addition to token and position embeddings it also incorporates turn embeddings aligned with the turn number of the context and role embeddings aligned with the speaker's role. The final embedding of the context is thus the sum of the corresponding token, turn, role, and position embeddings. For the latent variables, since the true prior is intractable, the Bart encoder learns the posterior of the latent variable $q_\phi(z|x)$, which represents the high-level semantic information of the response. To connect to the latent space, we prepend a special token to the response to encode its semantic information; we refer to this special token as the latent token. The input for latent-variable encoding is therefore $[l, w^x_1, w^x_2, \ldots, w^x_n]$, where $n$ is the length of the response $x$. We append a multilayer perceptron to obtain a representation of the posterior distribution $z_0 \sim q_\phi(z|x)$:

$z_0 = \mathrm{MLP}(h_{[L]}), \quad (6)$

where $h_{[L]} \in \mathbb{R}^d$ is the final hidden state of the latent token.

3.1.2. Latent Diffusion Denoiser

After obtaining $z_0$ from the Bart encoder, we sample a time step $t \in [1, T]$ uniformly and add noise to the latent variable according to Equation 3, resulting in a noised latent $z_t$. The latent diffusion denoiser is trained to denoise this latent. It adopts the structure of a Transformer decoder: it takes the noised latent variable as input, incorporates the context hidden state through a cross-attention mechanism, and adds a timestep embedding before the first Transformer block to inform the model of the current timestep,

$\tilde{z}_0 = \mathrm{Denoiser}(z_t, e_t, h_c), \quad (7)$

where $e_t$ is the embedding of the timestep $t$. Since the context hidden state is fixed during inference, the inference time required by the diffusion model is short.

3.1.3. Bart Decoder

To guide the response generation of the decoder with the latent variable, we adopt the memory scheme of OPTIMUS (Li et al., 2020). Specifically, we project the latent variable $z$ into a key-value pair and concatenate it to the left of the token hidden states to introduce the latent variable into the decoder:

$H^{(l+1)} = \mathrm{MultiHead}\!\left(H^{(l)},\; h^{(l)}_{\mathrm{Mem}} \oplus H^{(l)},\; h^{(l)}_{\mathrm{Mem}} \oplus H^{(l)}\right),$

where $H^{(l)}$ is the token hidden state of the $l$-th layer and $h^{(l)}_{\mathrm{Mem}}$ is computed as

$h^{(l)}_{\mathrm{Mem}} = \begin{bmatrix} z_{\mathrm{key}} \\ z_{\mathrm{value}} \end{bmatrix} = W^{l}_{M}\, z, \quad (8)$

where $W^{l}_{M} \in \mathbb{R}^{2d \times d}$ is a weight matrix.
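The following PyTorch sketch gives one possible instantiation of the denoiser of Section 3.1.2 and of the per-layer latent memory projection of Equation (8). Layer counts, dimensions, and the exact way the timestep embedding is added are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of (a) a latent denoiser with cross-attention over the context
# hidden states and a timestep embedding, and (b) the per-layer key/value
# projection of the latent for the decoder memory scheme (Eq. 8).
# All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    def __init__(self, d_model=768, n_heads=12, n_layers=4, max_steps=1000):
        super().__init__()
        self.time_emb = nn.Embedding(max_steps, d_model)
        block = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerDecoder(block, num_layers=n_layers)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, z_t, t, context_hidden):
        # z_t: (B, d) noised latent, t: (B,) timesteps, context_hidden: (B, L, d).
        h = z_t.unsqueeze(1) + self.time_emb(t).unsqueeze(1)   # inject the timestep
        h = self.blocks(tgt=h, memory=context_hidden)          # cross-attend to the context
        return self.out(h.squeeze(1))                          # predicted clean latent

class LatentMemory(nn.Module):
    """Projects the latent into one key/value pair per decoder layer (Eq. 8 style)."""
    def __init__(self, d_model=768, n_layers=6):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d_model, 2 * d_model) for _ in range(n_layers))

    def forward(self, z, layer_idx):
        key, value = self.proj[layer_idx](z).chunk(2, dim=-1)  # (B, d) each
        # Returned with a length-1 sequence axis so they can be prepended to the
        # decoder's token keys and values in self-attention.
        return key.unsqueeze(1), value.unsqueeze(1)

# Example shapes: a batch of 2 noised latents attended over a 20-token context.
denoiser = LatentDenoiser()
z0_pred = denoiser(torch.randn(2, 768), torch.tensor([10, 500]), torch.randn(2, 20, 768))
```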
3.2. Training

During training for dialogue generation we use three loss functions: the negative log-likelihood (NLL) loss, the bag-of-words (BOW) loss, and the latent denoising (LD) loss. They are described below.

3.2.1. Response Semantic Capture

To enable the latent variable to capture the overall semantic information of the response, we adopt the bag-of-words (BOW) loss (Zhao et al., 2017), which requires the latent variable to predict the tokens of the response in a non-autoregressive manner:

$\mathcal{L}_{\mathrm{BOW}} = -\mathbb{E}_{z_0 \sim q_\phi(z|r)} \sum_{n=1}^{N} \log p(r_n|z_0) = -\mathbb{E}_{z_0 \sim q_\phi(z|r)} \sum_{n=1}^{N} \log \frac{e^{f_{r_n}}}{\sum_{v \in V} e^{f_v}}, \quad (9)$

where $V$ is the entire vocabulary. The function $f$ tries to non-autoregressively predict the words that make up the target response:

$f = \mathrm{softmax}(W_2 h_z + b_2) \in \mathbb{R}^{|V|}, \quad (10)$

where $h_z$ is the hidden state of the latent variable and $|V|$ is the size of the vocabulary. The estimated probability of word $r_n$ is denoted by $f_{r_n}$. The BOW loss disregards the word order and forces the latent variable to capture the global information of the target response.

3.2.2. Latent Denoising

At each training step we sample a time step $t$ and obtain $z_t$ according to Equation 3. To better capture the semantic information of the latent variables, our diffusion model predicts $z_0$ directly instead of $z_{t-1}$ given $z_t$, which yields $\mathcal{L}_{z_0\text{-simple}}$, a variant of $\mathcal{L}_{\mathrm{simple}}$ in Equation 5:

$\mathcal{L}_{z_0\text{-simple}}(z_0) = \sum_{t=1}^{T} \mathbb{E}_{z_t} \left\| p(z_t, t, h_c) - z_0 \right\|^2, \quad (11)$

where the latent diffusion denoiser $p(z_t, t, h_c)$ predicts $z_0$ directly. Thus, at a given time step, the latent denoising loss is

$\mathcal{L}_{\mathrm{LD}} = \left\| p(z_t, t, h_c) - z_0 \right\|^2. \quad (12)$

3.2.3. Response Generation

In our model, the response is generated by conditioning on both the latent variable and the context. To train the response generation we adopt the commonly used NLL loss,

$\mathcal{L}_{\mathrm{NLL}} = -\mathbb{E}_{\tilde{z}_0 \sim p(z|c, z_t, t)} \log p(r \mid c, \tilde{z}_0) = -\mathbb{E}_{\tilde{z}_0 \sim p(z|c, z_t, t)} \sum_{n=1}^{N} \log p(r_n \mid c, \tilde{z}_0, r_{<n})$
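A hedged sketch of how the three objectives above could be combined in a single training step is shown below. The module interfaces, the reuse of `q_sample` from the earlier forward-process sketch, and the equal weighting of the three terms are illustrative assumptions only.

```python
# Hypothetical single training step combining the BOW, latent-denoising, and NLL
# losses of Section 3.2. Interfaces and the unweighted sum are assumptions.
import torch
import torch.nn.functional as F

def training_step(encode_context, encode_latent, denoiser, decode_logits, bow_head, batch, T=1000):
    context_hidden = encode_context(batch["context_ids"])      # (B, L, d)
    z0 = encode_latent(batch["response_ids"])                  # posterior latent from the latent token

    # BOW loss (Eq. 9-10): the latent predicts response tokens order-independently
    # (padding-token masking is omitted for brevity).
    log_probs = F.log_softmax(bow_head(z0), dim=-1)            # (B, |V|)
    loss_bow = -log_probs.gather(1, batch["response_ids"]).mean()

    # Latent-denoising loss (Eq. 11-12): corrupt z0 to z_t, regress the clean latent.
    t = torch.randint(0, T, (z0.size(0),), device=z0.device)
    z0_pred = denoiser(q_sample(z0, t), t, context_hidden)     # q_sample: forward-process sketch
    loss_ld = ((z0_pred - z0) ** 2).mean()

    # NLL loss: autoregressive decoding conditioned on the context and the denoised latent.
    logits = decode_logits(batch["response_ids"], context_hidden, latent=z0_pred)  # (B, N, |V|)
    loss_nll = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        batch["response_ids"][:, 1:].reshape(-1),
    )
    return loss_nll + loss_bow + loss_ld
```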