| [ | |
| { | |
| "url": "http://arxiv.org/abs/2404.18020v1", | |
| "title": "DM-Align: Leveraging the Power of Natural Language Instructions to Make Changes to Images", | |
| "abstract": "Text-based semantic image editing assumes the manipulation of an image using\na natural language instruction. Although recent works are capable of generating\ncreative and qualitative images, the problem is still mostly approached as a\nblack box sensitive to generating unexpected outputs. Therefore, we propose a\nnovel model to enhance the text-based control of an image editor by explicitly\nreasoning about which parts of the image to alter or preserve. It relies on\nword alignments between a description of the original source image and the\ninstruction that reflects the needed updates, and the input image. The proposed\nDiffusion Masking with word Alignments (DM-Align) allows the editing of an\nimage in a transparent and explainable way. It is evaluated on a subset of the\nBison dataset and a self-defined dataset dubbed Dream. When comparing to\nstate-of-the-art baselines, quantitative and qualitative results show that\nDM-Align has superior performance in image editing conditioned on language\ninstructions, well preserves the background of the image and can better cope\nwith long text instructions.", | |
| "authors": "Maria Mihaela Trusca, Tinne Tuytelaars, Marie-Francine Moens", | |
| "published": "2024-04-27", | |
| "updated": "2024-04-27", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Original Paper", | |
| "paper_cat": "Diffusion AND Model", | |
| "gt": "Despite the aim of keeping the background as similar as possible to the input image, numerous AIbased semantic image editors insert unwanted alterFigure 3: Semantic image editing: Imagen dataset. Source captions: (1) c1. A photo of a British shorthair cat wearing a cowboy hat and red shirt riding a bike on a beach. (2) c1. An oil painting of a raccoon wearing sunglasses and red shirt playing a guitar on top of a mountain. (3) c1. An oil painting of a fuzzy panda wearing sunglasses and red shirt riding a bike on a beach. ations in the image. FlexIt (Couairon et al., 2022) combines the input image and instruction text into a single target point in the CLIP multimodal embedding space and iteratively transforms the input image toward this target point. Zhang et al. (2023a) introduce ControlNet as a neural network based on two diffusion models, one frozen and one trainable. While the trainable model is optimized to inject the textual conditionality of the semantic editing, the frozen model preserves the weights of the model pre-trained on large image corpora. The output of ControlNet is gathered by summing the outputs of the two diffusion models. To keep the structural information of the input image, Tumanyan et al. (2023) define a Plug-and-Play model as a variation of the Latent Diffusion Model. Their method edits an input image using not only textual guidance but also a set of features that separately store spatial information and layout details like the shape of objects. While most text-based image editors are trainingfree, Imagic proposed by Kawar et al. (2023), assumes fine-tuning of the diffusion model by iteratively running over a text embedding optimized to match the input image and resemble the editing text instruction. Ultimately, the text embedding of the editing instructions and the optimized text embedding are interpolated and utilized as input by the fine-tuned model to generate the final edited image. This idea of fine-tuning a diffusion model is also adopted by Brooks et al. (2023) to define InstructPix2Pix as a model that approaches textbased image editing as a supervised task. Due to the scarcity of data, a methodology relying on Prompt-to-Prompt (Hertz et al., 2023) is proposed for generating pairs of images before and after the update. During inference, the fine-tuned stable diffusion model can seamlessly edit images using an input image and a text instruction. The above approaches lack an explicit delineation of the image content to be altered. Closer to our work is the Prompt-to-Prompt model (Hertz et al., 2023) which connects the text prompt with different image regions using cross-attention maps. The image editing is then performed in the latent representations responsible for the generation of the images. In contrast, our work focuses on the detection and delineation of the content to be altered in the image and is guided by the difference in textual instructions. Additionally, we edit images using real pictures and not latent representations artificially generated by a source prompt. To overcome the problem of unwanted alterations in the image, DiffEdit (Couairon et al., 2023) computes an image mask as the difference between the denoised outputs using the textual instruction that describes the source image and the instruction that describes the desired edits. However, without an explicit alignment between the two text instructions and the input image, DiffEdit has little control over the regions to be replaced or preserved. 
While DiffEdit internally creates the editing mask, models like SmartBrush (Xie et al., 2023), Imagen Editor (Wang et al., 2023), Blended Diffusion (Avrahami et al., 2022) and Blended Latent Diffusion (Avrahami et al., 2023) directly edit images using hand-crafted, user-defined masks. Owing to their coarse text-based control, the above models often struggle to preserve background details and are overly sensitive to the length of text instructions. Different from the current models, our DM-Align model does not treat the recognition of the visual content that requires preservation or substitution as a black box. By explicitly capturing the semantic differences between the natural language instructions, DM-Align provides comprehensive control over image editing. This novel approach results in superior preservation of unaltered image content and more effective processing of long text instructions. Except for the models that require additional input masks, all the above-mentioned text-based image editors are used as baselines in our evaluation.", |
| "pre_questions": [], | |
| "main_content": "Introduction AI-driven image generation was confirmed as a smooth-running option for content creators with high rates of efficiency and also creativity (Ramesh et al., 2022) that can be easily adapted to generate consecutive frames for video generation (Ding et al., 2022; Singer et al., 2023). Text-based guidance has proven to be a natural and effective means of altering visual content in images. Various model architectures have been proposed for text-based image synthesis, ranging from transformers (Ding et al., 2021; Vaswani et al., 2017) to generative adversarial networks (GANs) (Goodfellow et al., 2014; Reed et al., 2016; Zhu et al., 2019), and more recently, diffusion models like DALL\u00b7E 2 (Ramesh et al., 2022), Imagen (Saharia et al., 2022), or Stable Diffusion Models (Rombach et al., 2022). The Figure 1: The proposed image editor utilizes a source caption to describe the initial image and a target text instruction to define the desired edited image. To accomplish this task, we employ the two captions to generate a diffusion mask, refining it further by incorporating regions of words that we want to keep or alter in the image. success of diffusion models, akin to that observed in language models (Kaplan et al., 2020), largely results from their scalability. Factors such as model size, training dataset size, and computational resources contribute significantly to their effectiveness, overshadowing the impact of the model architecture itself. This scalability enables these models to adapt easily to different domains, including unseen concepts (Ramesh et al.; Saharia et al., 2022). Moreover, these models are ready to use without the need for additional training (Choi et al., 2021; Li et al., 2020). While similar to the text-based semantic image generation task in its creation of new visual content, text-guided image editing also relies on additional visual guidance. Consequently, the goal of textguided image manipulation is to modify the content of a picture according to a given text while keeping the remaining visual content untouched. The remaining visual content is from now on referred to as \u201cbackground\". As text-to-image generators, textbased image editors work at the frame level and can be further adapted for video editing (Zhang et al., 2023b). Text-based semantic image editing typically employs text-based image generation models with user-defined image masks (Avrahami et al., 2023, 2022; Wang et al., 2023; Xie et al., 2023). arXiv:2404.18020v1 [cs.CV] 27 Apr 2024 Figure 2: The implementation of DM-Align. The aim is to update the input image described by the text instruction c1 (\u201cA clear sky and a ship landed on the sand\") according to the text instruction c2 (\u201cA clear sky and a ship landed on the ocean\"). Each of these masks is an arrangement that differentiates between the image content that is to be changed or preserved. However, asking humans to generate masks is cumbersome, so we would like to edit images naturally, relying solely on a textual description of the image and its instruction to change it. Existing models for text-based semantic image editing, which do not require human-drafted image masks, struggle to maintain the background (Brooks et al., 2023; Couairon et al., 2022; Kawar et al., 2023; Tumanyan et al., 2023; Zhang et al., 2023a). Preserving the background\u2019s consistency is particularly relevant for applications like game development or virtual worlds, where visual continuity across frames is crucial. 
Finally, the complexity of textual instructions, in particular their length, poses a challenge for semantic image editors. While the existing models can effectively handle short text instructions, they encounter difficulties in manipulating an image using longer and more elaborate ones. To address the aforementioned limitations, we present a novel approach that employs one-to-one alignments between the words in the text instruction describing the source image and those describing the desired edited image (Figure 1). By leveraging these word alignments, we implement image editing as a series of deletion, insertion, and replacement operations. Through this text-based control mechanism, our proposed model consistently produces high-quality editing results, even with long text instructions, while ensuring the preservation of the background. As presented in Figure 2, we align the words of the text that describes the source image with those of the textual instruction that describes how the image should look after the editing, which allows us to determine the information the user wants to keep or replace. Then, disjoint regions associated with the preserved or discarded information are detected by segmenting the image. Next, a global, rough mask for inpainting is generated using standard diffusion models. While the diffusion mask allows the insertion of new objects that are larger than the replaced ones, it has the disadvantage of being too rough. Therefore, we further refine it using the detected disjoint regions. To prove the effectiveness of DM-Align, the masked content is generated using inpainting stable diffusion (Rombach et al., 2022). Our contributions are summarized as follows: 1. Our novel approach reasons with the text caption of the original input image and the text instruction that guides the changes in the image, which is a natural and human-like way of approaching image editing with a high level of explainability. 2. By differentiating the image content to be changed from the content to be left unaltered, the proposed DM-Align enhances the text control of semantic image editing. 3. Compared to recent models for text-based semantic image editing, DM-Align demonstrates superior capability in handling long text instructions and preserving the background of the input image while accurately implementing the specified edits. In this section, we present our solution for semantic image editing. We define the task and then describe the main steps of the proposed model, which consist of: 1) detecting the content that needs to be updated or kept, relying on the alignment of the words of the text that describes the source image with those of the textual instruction that describes how the image should look after the editing; 2) the segmentation of the image content to be updated or kept by cross-modal grounding; 3) the computation of a global diffusion mask that assures the coherence of the updated image; 4) the refinement of the global diffusion mask with the segmented image content that will be updated or kept; and 5) the inpainting of the mask with the help of a diffusion model. As demonstrated by our experiments, the proposed DM-Align can successfully replace, delete, or insert objects in the input image according to the text instructions. Our method mainly focuses on the nouns of the text instructions and their modifiers. Consequently, DM-Align does not implement action changes or the resulting changes in the position or posture of objects in the input image, which we leave for future work. 
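To make the replace/delete/insert framing concrete, the toy sketch below derives the three edit operations from a pair of captions. It is only an illustration: difflib's longest-matching-block heuristic is a crude stand-in for the neural word aligner the paper actually uses (Section 3.2), and the function name edit_operations is ours, not the authors'.

```python
import difflib

def edit_operations(c1: str, c2: str):
    """Classify word-level differences between a source caption c1 and a
    target instruction c2 into replace/delete/insert operations."""
    src, tgt = c1.lower().split(), c2.lower().split()
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=src, b=tgt).get_opcodes():
        if tag == "replace":      # substituted words: regions to edit
            ops.append(("replace", src[i1:i2], tgt[j1:j2]))
        elif tag == "delete":     # mentioned only in c1: regions to preserve
            ops.append(("delete", src[i1:i2], []))
        elif tag == "insert":     # new content requested by c2
            ops.append(("insert", [], tgt[j1:j2]))
    return ops

print(edit_operations("a clear sky and a ship landed on the sand",
                      "a clear sky and a ship landed on the ocean"))
# -> [('replace', ['sand'], ['ocean'])]
```

In DM-Align, the "replace" spans drive mask extension, while "delete" spans are deliberately kept out of the mask so that content the user no longer mentions is preserved rather than regenerated.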
3.1 Task Definition DM-Align aims to alter a picture described by a source text description or instruction c1 using a target text instruction c2. Considering this definition, the purpose is to adjust only the updated content mentioned in the text instruction c2 and leave the remaining part of the image unchanged. Based on this, we argue the need for a robust masking system that clearly distinguishes between unaltered image regions, which we call \u201cbackground\", and the regions that require adjustments. 3.2 Word alignment between the text instructions The alignment represents the first step of the DM-Align model proposed to enhance the text-based control for semantic image editing (Figure 2). Given the two text instructions c1 and c2, our assumption is that the shared words should indicate unaltered regions, while the substituted words should point to the regions that require manipulations. Implicitly, the most relevant words for this analysis are nouns, since they represent objects in the picture. The words are syntactically classified using the Stanford part-of-speech tagger (Toutanova et al., 2003). We extend the region to be edited by including the regions of the shared words that have different word modifiers in the two text instructions. (A modifier is a word or phrase that offers information about another word mentioned in the same sentence. To keep the editing process simple, in the current work we use only word modifiers represented by adjectives.) As a result, the properties of already existing objects in the picture can be updated. On the contrary, if the aligned nouns have identical modifiers (or no modifiers) in both instructions, their regions in the image should be unaltered. In addition, we also consider the regions of the unaligned nouns mentioned in the source text instruction (deleted nouns) as unaltered regions. Keeping the regions of the deleted nouns is important because we assume that in the target instruction, a user only mentions the desired changes in the image, omitting irrelevant content (Hurley, 2014). Editing the regions of the deleted nouns reduces the similarity w.r.t. the source image and increases the level of randomness in the target image, since we would generate new visual content that is irrelevant to both the source image and the target caption (Figure 11). Considering the example presented in Figure 4, the diffusion mask is adjusted to include the regions assigned to the sofa and the dress. While the sofa is substituted with a bench, the dress has different modifiers in the captions. On the other hand, the regions of the nouns \u201cgirl\" and \u201ccat\" are eliminated from the diffusion mask. The girl is mentioned in both captions, while the cat is irrelevant to the user according to the caption c2 and is incorporated in the background. [Figure 4: Word alignment example. Blue: identical words, Purple: substituted words, Green: nouns with different modifiers, Red: nouns mentioned only in the source caption c1.] The detection of word alignments between the two text instructions is realized with a neural semi-Markov CRF model (Lan et al., 2021). The model is trained to optimize the word span alignments, where the maximum length of spans is equal to D words (in our case D = 3). The obtained word span alignments will then further be refined into word alignments. 
The neural semi-Markov CRF model is optimized to increase the similarity between the aligned source and target word span representations, which are each computed with a pre-trained SpanBERT model (Joshi et al., 2020). The component that optimizes the similarity between these representations is implemented as a feed-forward neural network with Parametric ReLU (He et al., 2015). To avoid alignments that are far apart in the source and target instructions, another component controls the Markov transitions between adjacent alignment labels. To achieve this, it is trained to reduce the distance between the beginning index of the current target span and the end index of the target span aligned to the former source span. Finally, a Hamming distance is used to minimize the distance between the predicted alignment and the gold alignment. The outputs of the above components are fused in a final function \psi(a|s,t) that computes the score of an alignment a given a source text s and target text t. The conditional probability of span alignment a is then computed as:

p(a|s,t) = \frac{e^{\psi(a|s,t)}}{\sum_{a' \in A} e^{\psi(a'|s,t)}}   (1)

where the set A denotes all possible span alignments between source text s and target text t. The model is trained by minimizing the negative log-likelihood of the gold alignment a^* from both directions, that is, source to target (s2t) and target to source (t2s):

\sum_{s,t,a^*} -\log p(a^*_{s2t}|s,t) - \log p(a^*_{t2s}|t,s)   (2)

The neural semi-Markov CRF model is trained on the MultiMWA-MTRef monolingual dataset, a subset of the MTReference dataset (Yao, 2014). Considering the trained model, we predict the word alignments as follows. Given two text instructions c1 and c2, the model predicts two sets of span alignments: a_{s2t}, aligning c1 to c2, and a_{t2s}, aligning c2 to c1. The final word alignment is computed by merging these two span alignments. Let i be a word of the source text and j be a word of the target text; if alignment a_{s2t} indicates the connection i -> j and alignment a_{t2s} indicates the connection j -> i, then the words i and j become aligned. In the end, the word alignments are represented by a set of pairs (i, j), where i is a word of the instruction c1, and j is a word of the instruction c2. 3.3 Segmentation of the image based on the word alignments The aim is to identify the regions in the image that require changes or conservation (second step in Figure 2). Based on the above word alignments, we select the nouns whose regions will be edited (non-identical aligned nouns or aligned nouns with different modifiers in the two text instructions) and the nouns whose regions will stay unaltered (nouns of the source text instruction not shared with the target text instruction, and identical aligned nouns). Once these nouns are selected, we use Grounded-SAM (Charles, 2023) to detect their corresponding image regions. Its benefit is the \u201copen-set object detection\" achieved by the object detector Grounding DINO (Liu et al., 2023), which allows the recognition of each object in an image that is mentioned in the language instruction. Given a noun, Grounding DINO detects its bounding box in the image, and SAM (Kirillov et al., 2023) determines the region of the object inside the bounding box. The selected regions will be used to locally refine the diffusion masks discussed in the next section. 
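The symmetric merge of the two directed span alignments amounts to a simple intersection. The sketch below is a hedged illustration of that rule, not the authors' code: the two alignment sets are toy stand-ins for the semi-Markov CRF predictions, and merge_alignments is a hypothetical helper name.

```python
def merge_alignments(a_s2t, a_t2s):
    """Keep a word pair (i, j) only if both directed aligners agree,
    i.e. the s2t model proposes i -> j AND the t2s model proposes j -> i."""
    return {(i, j) for (i, j) in a_s2t if (j, i) in a_t2s}

src = "a ship landed on the sand".split()
tgt = "a ship landed on the ocean".split()

# Toy stand-ins for the two directed CRF predictions (word indices).
a_s2t = {(k, k) for k in range(6)}   # includes "sand" -> "ocean"
a_t2s = {(k, k) for k in range(6)}

aligned = merge_alignments(a_s2t, a_t2s)

# Aligned but non-identical words ("sand" vs. "ocean") mark regions to
# edit; identical aligned words mark regions to preserve (Section 3.3).
to_edit = {src[i] for (i, j) in aligned if src[i] != tgt[j]}
to_keep = {src[i] for (i, j) in aligned if src[i] == tgt[j]}
print(to_edit)  # {'sand'}
print(to_keep)  # {'a', 'ship', 'landed', 'on', 'the'}
```

Each noun in to_edit and to_keep would then be passed to Grounding DINO and SAM to obtain the corresponding pixel regions.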
3.4 Diffusion mask To ensure the coherence of the complete image given the target language instruction and to cope with the cases when the object to be replaced is smaller than the object to be inserted, we also use a global diffusion mask. The computation of the diffusion mask represents the third step of our proposed model (Figure 2) and relies on denoising diffusion probabilistic models (DDPM) (Ho et al., 2020; Weng, 2021). DDPMs are based on Markov chains that gradually convert the input data into Gaussian noise during a forward process, and slowly denoise the sampled data into newly desired data during a reverse process. [Figure 5: Semantic image editing: Bison dataset. Source captions: (1) c1. A man standing next to a baby elephant in the city. (2) c1. A wooden plate topped with sliced meat and vegetables. (3) c1. A vase filled with red and white flowers.] In each iteration t of the forward process, new data x_t is sampled from the distribution q(x_t|x_{t-1}) = \mathcal{N}(\sqrt{1-\beta_t}\, x_{t-1}, \beta_t I), where \beta_t is an increasing coefficient that varies between 0 and 1 and controls the level of noise for each time step t. The process is further simplified by expressing the sampled data x_t w.r.t. the input image x_0, as follows:

x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1-\alpha_t}\, \varepsilon   (3)

where \alpha_t = \prod_{i=0}^{t}(1-\beta_i) and \varepsilon \sim \mathcal{N}(0, I) represents the noise variable. As we empirically observed that the editing effect is diminished over the regions where the noise variable is cancelled, we set the noise variable \varepsilon to 0 over the regions that should be preserved. We dubbed this operation noise cancellation. The forward process is executed for T iterations until x_T converges to \mathcal{N}(0, I). During the reverse process, at each time step t-1, x_{t-1} is denoised from the distribution q_\theta(x_{t-1}|x_t) defined as:

q_\theta(x_{t-1}|x_t) = \mathcal{N}\left(\frac{1}{\sqrt{1-\beta_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\alpha_t}}\, \varepsilon_\theta(x_t)\right), \frac{1-\alpha_{t-1}}{1-\alpha_t}\, \beta_t\right)   (4)

where \varepsilon_\theta(x_t) is estimated by a neural network, usually represented by a U-Net. To impose text conditioning in a diffusion model, we have to integrate the text instruction c into the U-Net model and compute \varepsilon_\theta(x_t|c) instead of \varepsilon_\theta(x_t). Using classifier-free guidance (Saharia et al., 2022) and knowing that s (s > 1) represents the guidance scale, \varepsilon_\theta(x_t) mentioned in Eq. 4 is replaced by \tilde{\varepsilon}_\theta(x_t|c) defined as:

\tilde{\varepsilon}_\theta(x_t|c) = s\, \varepsilon_\theta(x_t|c) + (1-s)\, \varepsilon_\theta(x_t|\emptyset)   (5)

To obtain the diffusion mask, we first compute the denoised output of the input image corresponding to the source instruction and the denoised output of the input image corresponding to the target instruction by running two separate DDPM processes. The diffusion process does not run over the input image but over its encoded representation yielded by a Variational Autoencoder (VAE) (Kingma and Welling, 2014; Rombach et al., 2022) with a Kullback-Leibler loss. Therefore, the denoised outputs do not represent the final edited image but only an intermediate image representation with semantic information associated with the source or target instruction. Inspired by Couairon et al. (2023), we compute the diffusion mask as the absolute difference between the two noise estimates, rescaled to [0,1] and binarized using a threshold set to 0.5. This diffusion mask represents a global mask that roughly indicates the content to be changed. 
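Once the two text-conditioned denoising runs are available, the mask computation itself reduces to a few array operations. The following is a minimal sketch under stated assumptions: the noise estimates here are random stand-ins with the shape of a Stable Diffusion VAE latent, the channel-averaging step is our reading of the description above, and none of the names come from the paper's code.

```python
import numpy as np

def noised_latent(x0, alpha_t, keep_mask, rng):
    """Forward step x_t = sqrt(alpha_t)*x0 + sqrt(1-alpha_t)*eps (Eq. 3),
    with 'noise cancellation': eps is zeroed over regions to preserve."""
    eps = rng.standard_normal(x0.shape)
    eps[:, keep_mask] = 0.0  # noise cancellation over preserved regions
    return np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * eps

def diffusion_mask(eps_src, eps_tgt, threshold=0.5):
    """Absolute difference of the two noise estimates, rescaled to [0, 1]
    and binarized at 0.5, as described above."""
    diff = np.abs(eps_src - eps_tgt).mean(axis=0)  # average over channels
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return diff > threshold

rng = np.random.default_rng(0)
# Random stand-ins for the noise estimates conditioned on c1 and c2
# (channels x height x width of the VAE latent).
eps_src = rng.standard_normal((4, 64, 64))
eps_tgt = rng.standard_normal((4, 64, 64))
mask = diffusion_mask(eps_src, eps_tgt)
print(mask.shape, round(float(mask.mean()), 3))  # fraction of latent pixels to edit

# Forward noising that leaves the regions outside the mask untouched.
x0 = rng.standard_normal((4, 64, 64))
xt = noised_latent(x0, alpha_t=0.5, keep_mask=~mask, rng=rng)
```

With real noise estimates, the two runs differ mainly where c1 and c2 disagree, so the thresholded difference concentrates on the content to be changed.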
[Table 1: Image-level evaluation for the Dream, Bison and Imagen datasets (mean and variance). Compared with the baselines, DM-Align achieves the best image-based scores, while FlexIT obtains the best similarity w.r.t. the target instruction as indicated by CLIPScore. Knowing that CLIPScore is heavily biased toward models built on the CLIP model (as FlexIT is), and considering the image-based scores, DM-Align achieves the best trade-off between similarity to the input image and to the target instruction.
Dream: FID\u2193 | LPIPS\u2193 | PWMSE\u2193 | CLIPScore\u2191
FlexIT: 150.20 \u00b1 0.67 | 0.53 \u00b1 0.00 | 47.63 \u00b1 0.13 | 0.87 \u00b1 0.00
InstructPix2Pix: 158.77 \u00b1 3.03 | 0.44 \u00b1 0.00 | 43.20 \u00b1 0.44 | 0.81 \u00b1 0.00
ControlNet: 140.42 \u00b1 0.38 | 0.49 \u00b1 0.00 | 49.6 \u00b1 0.46 | 0.80 \u00b1 0.00
DiffEdit: 126.77 \u00b1 0.14 | 0.29 \u00b1 0.57 | 30.22 \u00b1 0.14 | 0.72 \u00b1 0.00
Plug-and-Play: 128.13 \u00b1 0.98 | 0.53 \u00b1 0.00 | 48.56 \u00b1 0.13 | 0.76 \u00b1 0.00
Imagic: 157.06 \u00b1 0.23 | 0.65 \u00b1 0.00 | 48.23 \u00b1 0.03 | 0.79 \u00b1 0.00
DM-Align: 124.57 \u00b1 0.52 | 0.31 \u00b1 0.00 | 29.24 \u00b1 0.03 | 0.80 \u00b1 0.00
Bison: FID\u2193 | LPIPS\u2193 | PWMSE\u2193 | CLIPScore\u2191
FlexIT: 41.78 \u00b1 0.09 | 0.50 \u00b1 0.00 | 42.59 \u00b1 0.03 | 0.90 \u00b1 0.00
InstructPix2Pix: 62.62 \u00b1 0.17 | 0.53 \u00b1 0.00 | 41.45 \u00b1 0.01 | 0.78 \u00b1 0.00
ControlNet: 45.87 \u00b1 0.38 | 0.45 \u00b1 0.00 | 52.12 \u00b1 0.11 | 0.78 \u00b1 0.00
DiffEdit: 53.54 \u00b1 0.22 | 0.45 \u00b1 0.00 | 49.65 \u00b1 0.18 | 0.76 \u00b1 0.00
Plug-and-Play: 52.44 \u00b1 0.18 | 0.46 \u00b1 0.00 | 48.45 \u00b1 0.15 | 0.76 \u00b1 0.00
Imagic: 63.23 \u00b1 0.28 | 0.52 \u00b1 0.00 | 51.44 \u00b1 0.12 | 0.77 \u00b1 0.00
DM-Align: 40.05 \u00b1 0.03 | 0.39 \u00b1 0.00 | 37.05 \u00b1 0.07 | 0.78 \u00b1 0.00
Imagen: FID\u2193 | LPIPS\u2193 | PWMSE\u2193 | CLIPScore\u2191
FlexIT: 91.86 \u00b1 0.32 | 0.46 \u00b1 0.00 | 44.05 \u00b1 0.00 | 0.91 \u00b1 0.00
InstructPix2Pix: 133.33 \u00b1 0.04 | 0.57 \u00b1 0.00 | 42.68 \u00b1 0.18 | 0.79 \u00b1 0.00
ControlNet: 85.86 \u00b1 0.26 | 0.51 \u00b1 0.00 | 58.44 \u00b1 0.04 | 0.79 \u00b1 0.00
DiffEdit: 101.73 \u00b1 0.00 | 0.38 \u00b1 0.00 | 30.02 \u00b1 0.09 | 0.71 \u00b1 0.00
Plug-and-Play: 84.37 \u00b1 0.29 | 0.41 \u00b1 0.00 | 41.79 \u00b1 0.07 | 0.78 \u00b1 0.00
Imagic: 94.92 \u00b1 0.44 | 0.67 \u00b1 0.00 | 51.58 \u00b1 0.11 | 0.77 \u00b1 0.00
DM-Align: 66.68 \u00b1 0.01 | 0.31 \u00b1 0.00 | 29.04 \u00b1 0.01 | 0.79 \u00b1 0.00]
[Table 2: Image-level evaluation of DM-Align on a subset of the Bison dataset that contains only source and target text instructions with a degree of similarity higher than Rouge 0.7. Of all baselines, only FlexIT and DiffEdit are presented, as they utilize a source caption in their implementation. While DM-Align scores better than the baselines on the image-based metrics, FlexIT has the highest CLIPScore due to its CLIP-based architecture.
Model: FID\u2193 | LPIPS\u2193 | PWMSE\u2193 | CLIPScore\u2191
FlexIT: 71.64 \u00b1 0.03 | 0.48 \u00b1 0.00 | 42.30 \u00b1 0.03 | 0.89 \u00b1 0.00
DiffEdit: 74.60 \u00b1 0.94 | 0.44 \u00b1 0.01 | 51.75 \u00b1 0.29 | 0.76 \u00b1 0.00
DM-Align: 67.91 \u00b1 0.00 | 0.36 \u00b1 0.00 | 36.28 \u00b1 0.00 | 0.78 \u00b1 0.00]
3.5 Refinement of the diffusion mask The refinement of the diffusion mask represents the fourth step of DM-Align, as presented in Figure 2. To further improve the precision of the global diffusion mask, we refine it using the regions detected in Section 3.3. More specifically, we extend the diffusion mask to include the regions to be altered and shrink it to avoid editing over the preserved regions. To improve control over the preserved background, we adjust the noise variable over the forward process of the obtained diffusion mask. 
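Continuing the toy setup from the previous sketch, the refinement step reduces to boolean mask algebra: grow the global mask with the grounded regions that must change, then carve out the grounded regions that must stay. Giving keep-regions priority over the raw diffusion mask is our reading of the description above, and all names here are illustrative, not from the paper's code.

```python
import numpy as np

def refine_mask(global_mask, edit_regions, keep_regions):
    """Extend the rough diffusion mask with regions that must change and
    shrink it so that grounded keep-regions are never edited."""
    refined = global_mask | edit_regions   # extension
    refined &= ~keep_regions               # shrinkage: protect background
    return refined

# Toy boolean maps standing in for the outputs of the previous steps.
h = w = 64
global_mask = np.zeros((h, w), dtype=bool)
global_mask[20:50, 20:50] = True           # rough diffusion mask
edit_regions = np.zeros((h, w), dtype=bool)
edit_regions[45:60, 10:30] = True          # grounded region of an edited noun
keep_regions = np.zeros((h, w), dtype=bool)
keep_regions[20:30, 20:30] = True          # grounded region of a kept noun

mask = refine_mask(global_mask, edit_regions, keep_regions)
print(mask.sum())  # number of pixels finally sent to the inpainting model
```

The refined mask, together with the target instruction c2, is then handed to the inpainting Stable Diffusion model in the fifth and final step.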
The noise variable is cancelled for the unaltered regions detected in the previous step and kept unchanged for the regions to be manipulated. Note that both the global diffusion mask with noise cancellation and the regions determined through image segmentation are necessary to obtain a high-quality mask. The global diffusion mask facilitates the replacement of small objects with larger ones and gives context to the editing. On the other hand, the insertion or deletion of different regions based on image segmentation improves the precision of the final mask, as shown in the ablation experiments in Subsection 5.1. Once the refined diffusion mask is computed, we use inpainting stable diffusion (Rombach et al., 2022) to edit the masked regions based on the given target text caption (the fifth step of DM-Align presented in Figure 2). We also tried to replace the inpainting stable diffusion with latent blended diffusion (Avrahami et al., 2023). However, the obtained results were slightly worse, and the computational time increased by 60% (details are in Table 6 of Appendix C). 4 Experimental setup Baselines. We compare results obtained with DM-Align with those of FlexIT (Couairon et al., 2022), DiffEdit (Couairon et al., 2023), ControlNet (Zhang et al., 2023a), Plug-and-Play (Tumanyan et al., 2023), Imagic (Kawar et al., 2023) and InstructPix2Pix (Brooks et al., 2023). The implementation details are presented in Appendix A.
[Table 3: Background-level evaluation for the Dream, Imagen and Bison datasets (mean and variance). DM-Align outperforms the baselines in terms of background preservation, especially for the Bison and Imagen datasets, which have more elaborate captions than Dream.
Dream: FID\u2193 | LPIPS\u2193 | PWMSE\u2193
FlexIT: 154.44 \u00b1 0.19 | 0.31 \u00b1 0.00 | 30.22 \u00b1 0.05
InstructPix2Pix: 147.62 \u00b1 0.82 | 0.25 \u00b1 0.00 | 27.87 \u00b1 0.35
ControlNet: 137.29 \u00b1 1.86 | 0.31 \u00b1 0.00 | 32.74 \u00b1 0.42
DiffEdit: 125.95 \u00b1 0.44 | 0.15 \u00b1 0.00 | 15.72 \u00b1 0.04
Plug-and-Play: 151.42 \u00b1 1.02 | 0.34 \u00b1 0.00 | 31.59 \u00b1 0.00
Imagic: 174.41 \u00b1 1.49 | 0.42 \u00b1 0.00 | 31.63 \u00b1 0.08
DM-Align: 102.44 \u00b1 0.07 | 0.11 \u00b1 0.00 | 14.54 \u00b1 0.01
Bison: FID\u2193 | LPIPS\u2193 | PWMSE\u2193
FlexIT: 35.48 \u00b1 0.07 | 0.24 \u00b1 0.00 | 20.38 \u00b1 0.03
InstructPix2Pix: 44.01 \u00b1 0.28 | 0.26 \u00b1 0.00 | 20.00 \u00b1 0.06
ControlNet: 35.39 \u00b1 0.06 | 0.25 \u00b1 0.00 | 26.58 \u00b1 0.04
DiffEdit: 37.68 \u00b1 0.33 | 0.23 \u00b1 0.00 | 19.68 \u00b1 0.09
Plug-and-Play: 36.44 \u00b1 0.78 | 0.24 \u00b1 0.00 | 19.79 \u00b1 0.12
Imagic: 43.55 \u00b1 0.76 | 0.27 \u00b1 0.00 | 27.12 \u00b1 0.10
DM-Align: 16.41 \u00b1 0.00 | 0.08 \u00b1 0.00 | 14.16 \u00b1 0.00
Imagen: FID\u2193 | LPIPS\u2193 | PWMSE\u2193
FlexIT: 92.44 \u00b1 0.35 | 0.36 \u00b1 0.00 | 36.57 \u00b1 0.01
InstructPix2Pix: 124.32 \u00b1 0.80 | 0.46 \u00b1 0.00 | 34.29 \u00b1 0.15
ControlNet: 85.56 \u00b1 0.31 | 0.42 \u00b1 0.00 | 49.78 \u00b1 0.02
DiffEdit: 88.01 \u00b1 0.55 | 0.31 \u00b1 0.00 | 24.17 \u00b1 0.09
Plug-and-Play: 81.28 \u00b1 0.28 | 0.34 \u00b1 0.00 | 31.59 \u00b1 0.07
Imagic: 103.74 \u00b1 1.49 | 0.56 \u00b1 0.00 | 43.91 \u00b1 0.08
DM-Align: 54.12 \u00b1 0.04 | 0.21 \u00b1 0.00 | 22.09 \u00b1 0.00]
[Table 4: Ablation tests for the Imagen dataset (mean and variance). The results underscore the significance of all DM-Align components. \u201cNon-shared objects\" denotes objects mentioned solely in the source caption, while \u201crefinement of diffusion mask\" involves adjusting the diffusion mask through shrinkage or expansion based on the regions corresponding to keywords.
Variant: FID\u2193 | LPIPS\u2193 | PWMSE\u2193 | CLIPScore\u2191
(w/o) diffusion mask: 43.36 \u00b1 1.44 | 0.42 \u00b1 0.00 | 41.61 \u00b1 0.26 | 0.77 \u00b1 0.00
(w/o) noise cancellation: 44.44 \u00b1 0.76 | 0.41 \u00b1 0.00 | 40.57 \u00b1 0.30 | 0.79 \u00b1 0.00
(w/o) refinement of diffusion mask: 47.63 \u00b1 0.78 | 0.43 \u00b1 0.00 | 43.60 \u00b1 0.15 | 0.77 \u00b1 0.00
(w/o) objects with different modifiers: 42.34 \u00b1 0.57 | 0.40 \u00b1 0.00 | 38.23 \u00b1 0.20 | 0.77 \u00b1 0.00
(w/o) non-shared objects: 45.35 \u00b1 2.25 | 0.43 \u00b1 0.00 | 41.57 \u00b1 0.79 | 0.77 \u00b1 0.00
DM-Align: 40.05 \u00b1 0.00 | 0.39 \u00b1 0.00 | 37.05 \u00b1 0.00 | 0.78 \u00b1 0.00]
Datasets. While ControlNet, Plug-and-Play, Imagic and InstructPix2Pix are evaluated on datasets devoid of source text descriptions, FlexIT and DiffEdit are evaluated on a subset of the ImageNet dataset (Deng et al., 2009), which assumes the replacement of the main object of the image with another object. Additionally, DiffEdit is evaluated on the Bison dataset (Hu et al., 2019) and a self-defined collection of Imagen pictures (Saharia et al., 2022). Out of these datasets, our model is evaluated using the Bison dataset and the collection of images generated by Imagen (further referred to as the Imagen dataset) described by Couairon et al. (2023). We omit the ImageNet dataset due to its oversimplified setup, which primarily employs single-word source and target text instructions. The Bison and Imagen datasets contain elaborate text captions with up to 23 words. To investigate the behavior of the DM-Align model and the baseline models when confronted with shorter text instructions, we generate a collection of 100 images using Dream by WOMBO (the code is available at https://github.com/cdgco/dream-api), relying on the source captions as guidance. As the dataset is generated using Dream by WOMBO, we further refer to it as Dream. To complete the Dream dataset, we specify a new text query as the target instruction for each image-instruction pair. Unlike the Imagen and Bison datasets, the text instructions of Dream do not contain more than 11 words. Evaluation metrics. To evaluate our model, we use a set of metrics that assess the similarity of the edited image to both the input image and the target instruction. By default, there is a trade-off between image-based and text-based metrics, as we need to find the best equilibrium point. [Figure 6: Semantic image editing: Dream dataset. Source captions: (1) c1. A soldier in front of a building. (2) c1. A pot with flowers. (3) c1. A girl throwing a volleyball.] Generating images close to the source image improves the image-based metrics while reducing the similarity to the target caption. On the other hand, images close to the target instruction improve the text-based scores but can affect the similarity to the input picture. The equilibrium point is important given that people tend to focus mainly on specifying the desired changes in an image while omitting the information that already exists (Hurley, 2014). Therefore, the edited content can represent a small region of the new image while the rest of it should keep the content of the source image. The similarity (or the distance) of the updated image w.r.t. the source image is assessed using FID (Heusel et al., 2017), LPIPS (Zhang et al., 2018) and the pixel-wise Mean Square Error (PWMSE). FID relies on the difference between the distributions of the last layer of the Inception V3 model (Szegedy et al., 2016) that separately runs over the input and edited images. FID measures the consistency and image realism of the new image w.r.t. the source image. 
Contrary to the quality assessment computed by FID, LPIPS measures the perceptual similarity by calculating the distance between layers of an arbitrary neural network that separately runs over the input and updated images. Like the LPIPS metric, PWMSE determines the pixel leakage by computing the pixel-wise error between the input and the edited images. The similarity of the updated image w.r.t. the target instruction is computed in the CLIP multimodal embedding space by the CLIPScore (Hessel et al., 2021). More details about the evaluation metrics are specified in Appendix B. 5 Results and discussion 5.1 Quantitative analysis and ablation tests How well can the DM-Align model edit a source image considering the length of the text instruction? To address the first research question, we refer to Table 1. When compared to the baselines DiffEdit, ControlNet, FlexIT, Plug-and-Play, and InstructPix2Pix, our proposed DM-Align model exhibits particularly effective performance in terms of image-based metrics. This effectiveness is particularly noticeable on the Bison and Imagen datasets, which contain longer captions than the Dream dataset. When compared with the best baseline on the Imagen dataset, DM-Align improves FID, LPIPS, and PWMSE by 23.42%, 19.87%, and 3.32%, respectively. Similar results are observed for the Bison dataset, where DM-Align enhances the results of the best baseline by 4.22% for FID, by 14.26% for LPIPS, and by 11.20% for PWMSE. In the case of the Dream dataset, DM-Align still outperforms the other baselines in terms of FID and PWMSE, albeit with smaller margins. However, in terms of LPIPS, DiffEdit outperforms DM-Align on the Dream dataset. Given the results presented in Table 1, we posit that the baselines find it easier to accurately edit images using short text instructions. Conversely, when text instructions are more elaborate, such as in the Bison and Imagen datasets, our results significantly surpass those achieved by the baselines. DM-Align leverages word alignments between source and target instructions, highlighting their crucial role in facilitating effective image editing. In terms of text-based metrics, CLIPScore suggests that FlexIT generates images closest to the target instructions. This outcome is likely attributable to FlexIT\u2019s architecture, which is based on a CLIP model, the same model used to calculate CLIPScore. This issue is highlighted in (Poole et al., 2023). Another possible explanation is that FlexIT is trained to maximize the similarity between input images and instructions. As depicted in Figures 3, 5, and 6, FlexIT may sacrifice image quality for higher similarity scores. Regarding CLIPScore, DM-Align consistently outperforms the Plug-and-Play, Imagic and DiffEdit baselines and is as effective as InstructPix2Pix and ControlNet. DM-Align also outperforms Prompt-to-Prompt (Hertz et al., 2023). As Prompt-to-Prompt can edit only self-generated images, the comparison with DM-Align is limited to text-based metrics like CLIPScore. More details about this comparison are presented in Appendix C. Given the text-based and image-based metrics, DM-Align properly preserves the content of the input image and obtains a better trade-off between closeness to the input picture and the target instruction than the baselines. 
While the above analysis demonstrates that elaborate text instructions do not affect the editing capabilities of DM-Align, unlike the baselines, we are also interested in examining how the degree of overlap between source and target captions impacts the quality of the edited image. To analyze this, we select 575 Bison instances with a similarity between the source and target instructions higher than Rouge 0.75. We do not conduct this analysis for the Imagen and Dream datasets, as their text instructions already exhibit a level of similarity higher than Rouge 0.75. Our analysis is limited to DM-Align, FlexIT, and DiffEdit, as the other baselines do not utilize source captions in their implementation and are therefore omitted from this analysis. The results are presented in Table 2. We observe that, while the results are similar to the image-based and text-based scores reported in Table 1 for Bison, all models report better performance and an improved trade-off between image- and text-based metrics. These results suggest that increased overlap between source and target captions enhances the quality of image editing. How well does the DM-Align model preserve the background? To extract the background, we consider the DM-Align mask obtained after adjusting the diffusion mask. Upon analyzing the results presented in Table 3, the first notable observation is the significant reduction in the FID score of the DM-Align model by 73.27% for the Bison dataset, 40.12% for the Imagen dataset, and 20.58% for the Dream dataset when compared with the best baseline. Similarly, the LPIPS and PWMSE scores also indicate significant margin reductions, particularly for the Bison and Imagen datasets. Concerning the Dream dataset, DM-Align still outperforms the best-performing baseline with a margin of 25.02% for LPIPS and 7.80% for PWMSE. While DM-Align consistently demonstrates superior results for background preservation, we infer that the baselines are relatively adept at preserving the background only when the instructions are short and simple, as observed in the case of the Dream dataset. This conclusion is further supported by the results presented in Table 1. Ablation tests. To run the ablation tests for DM-Align, we rely on the Imagen dataset. According to Table 4, the absence of the refinement of the diffusion mask using the regions detected with the word alignment model and the Grounded-SAM segmentation model has the highest negative impact on the similarity w.r.t. the input picture. As expected, a significant negative effect on the similarity with the input image is also noticed when omitting the deleted nouns or the nouns with different modifiers in the two queries. Similarly, noise cancellation and especially the diffusion mask also affect the conservation of the background. Including all the components in the architecture of DM-Align mainly facilitates the preservation of the input image and does not result in a reduction of the CLIPScore. Therefore, the inclusion of all these components in DM-Align represents the best trade-off w.r.t. the similarity to the input image and to the target caption. The next five visualizations exemplify the ablation tests. The first row of each figure presents the effect of omitting a component of DM-Align, while the correct behavior is shown in the second row. Figure 7 illustrates the effect of defining the editing mask based only on the image regions of the keywords. 
Without the diffusion mask, the model has to insert a new object in the fixed area of the replaced object. If we need to replace an object with a larger one, DM-Align without diffusion might create distorted and unnatural outputs. [Figure 7: 1st line: Example of omitting the diffusion mask (c1: A woman near a cat., c2: A woman near a dog.). 2nd line: The correct example of including the diffusion mask.] [Figure 8: 1st line: Example of omitting the cancellation of the noise variable defined within the diffusion model (c1: A man sitting at a table holding a laptop on the train., c2: A man sitting at a table reading a book on the train.). 2nd line: The correct example of including the noise cancellation.] As we usually expect dogs to be bigger than cats, DM-Align with diffusion properly replaces the cat with a slightly bigger dog. On the contrary, the dog that replaced the cat is distorted when diffusion is not used. While the overall diffusion mask can give more context for the editing and allows the insertion of objects of different sizes, noise cancellation is an important step used to improve the initial diffusion mask. As shown in Figure 8, when noise cancellation is used, the initial diffusion mask is better trimmed, and the background is properly preserved. As the diffusion mask does not have complete control over the regions to be edited, its extension or shrinkage based on the image regions of the keywords is mandatory to obtain a correct mask for editing. When the image is edited using only the initial diffusion mask in Figure 9, both the ship and the sand are modified, while the former is expected to be preserved. In contrast, when the diffusion mask is refined with image segmentation, only the sand is replaced by the ocean. [Figure 9: 1st line: Example of omitting the refinement of the diffusion mask using image segmentation (c1: A clear sky and a ship landed on the sand., c2: A clear sky and a ship landed on the ocean.). 2nd line: The correct example of including the refinement of the diffusion mask with image segmentation.] [Figure 10: 1st line: Example of omitting the information about modifiers associated with the nouns shared by both captions (c1: A woman with a red jacket., c2: A woman with a green jacket.). 2nd line: The correct example of including the information about the modifiers.] The omission of the adjective modifiers in the analysis of DM-Align is exemplified in Figure 10. If the modifiers are left out, DM-Align considers the jacket a shared noun, like the noun \u201cwoman\", and removes its regions from the diffusion mask. As a result, DM-Align does not detect any semantic difference between the text instructions, and the output image is identical to the input image. On the other hand, if the modifiers are considered, DM-Align can properly adjust the color of the jacket while keeping the woman\u2019s face unaltered. As we are interested in making only the necessary updates in the picture, while keeping the background and the regions of the deleted words unchanged, the region assigned to the word \u201cman\" in Figure 11 is removed from the diffusion mask. As a result, the corresponding region is untouched. On the contrary, the inclusion of the region associated with the word \u201cman\" in the diffusion mask increases the randomness in the new image by inserting a store. Since the store is irrelevant, both the similarity scores w.r.t. the input image and the target instruction are reduced. [Figure 11: 1st line: Example of omitting the information about the deleted nouns from the source caption (c1: A motorcycle near a man., c2: A motorcycle.). 
2nd line: The correct example of including the information about the deleted nouns.] 5.2 Human qualitative analysis Some qualitative examples extracted from the three data collections are shown in Figures 3, 5, and 6. Compared to DiffEdit, ControlNet, and FlexIT, as well as Plug-and-Play, Imagic and InstructPix2Pix, the DM-Align model demonstrates superior manipulation of the content of the input image while largely preserving the background in line with the target query. DM-Align establishes semantic connections between source and target queries, updating the image content accordingly, whereas the baselines often alter the background more than necessary, as discussed above. DiffEdit tends to introduce random visual content (see Figure 3), while FlexIT tends to distort and zoom into the image (Figures 5 and 6), trading potential distortions in the new image for a lower reconstruction loss with respect to the input image and the text instructions. Although ControlNet can maintain the structure of the input image, it struggles to preserve the texture or colors of the objects, likely due to the absence of a masking system. InstructPix2Pix also encounters challenges in preserving the style of objects in the input image and tends to include more objects in the image than specified in the target text instruction. Plug-and-Play zooms into the image and tends to slightly alter the details of objects requested for preservation in the target text instruction. Out of all baselines, Imagic shows the highest tendency to change the input image\u2019s compositional structure, as highlighted also by the image-based metrics presented in Tables 1 and 3. [Table 5: Human evaluation of the quality of the editing process based on the text instruction (Q1), the preservation of the background (Q2) and the quality of the edited image (Q3). The results represent the average scores reported by annotators using a 5-point Likert scale.
Model: Q1\u2191 | Q2\u2191 | Q3\u2191
FlexIT: 3.75 | 4.00 | 3.85
DiffEdit: 3.85 | 4.15 | 3.85
ControlNet: 3.50 | 3.75 | 3.90
Plug-and-Play: 3.80 | 4.10 | 3.85
InstructPix2Pix: 3.50 | 3.75 | 3.80
Imagic: 3.80 | 3.20 | 3.85
DM-Align: 3.90 | 4.35 | 3.95]
To confirm the above observations, we randomly selected 100 images from the Bison dataset and asked Amazon MTurk annotators to evaluate the editing quality of the baselines and the proposed DM-Align. For each edited image, the annotators were asked to evaluate the overall quality of the editing process based on the text instruction (Q1), the preservation of the background (Q2) and the quality of the edited image in terms of compositionality, sharpness, distortion, color and contrast (Q3). According to the human evaluation executed on a 5-point Likert scale, our model scores better than all baselines (Table 5). The inter-rater agreement is good, with Cohen\u2019s weighted kappa \u03ba between 0.65 and 0.75 for all analyzed models. 6 Conclusion, limitations and future work We propose DM-Align, a novel model for semantic image editing that confers to the users natural control over the image editing by updating the text instructions. By automatically identifying the regions to be kept or altered purely based on the text instructions, the proposed model is not a black box. Due to the high level of explainability, the users can easily understand the edited result and how to change the instructions to obtain the desired output. 
The quantitative and qualitative evaluations show the superiority of DM-Align in enhancing the text-based control of semantic image editing over the existing baselines FlexIT, DiffEdit, ControlNet, Imagic, Plug-and-Play and InstructPix2Pix. Unlike these models, our approach is not limited by the length of the text instructions. Due to the inclusion of one-to-one alignments between the words of the instructions that describe the image before and after the update, we can edit images regardless of how complicated and elaborate the text instructions are. Besides this low sensitivity to the complexity of the instructions, the one-to-one word alignments allow us to properly preserve the background while editing only what is strictly required by the users. DM-Align focuses on the editing of objects mentioned as nouns and their adjectives. In future work, its flexibility can be improved by also editing the actions in which objects and persons are involved. As a result, they might change position in the image without the need to update their properties. Acknowledgments This project was funded by the European Research Council (ERC) Advanced Grant CALCULUS (grant agreement No. 788506)." | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2212.05034v1", | |
| "title": "SmartBrush: Text and Shape Guided Object Inpainting with Diffusion Model", | |
| "abstract": "Generic image inpainting aims to complete a corrupted image by borrowing\nsurrounding information, which barely generates novel content. By contrast,\nmulti-modal inpainting provides more flexible and useful controls on the\ninpainted content, \\eg, a text prompt can be used to describe an object with\nricher attributes, and a mask can be used to constrain the shape of the\ninpainted object rather than being only considered as a missing area. We\npropose a new diffusion-based model named SmartBrush for completing a missing\nregion with an object using both text and shape-guidance. While previous work\nsuch as DALLE-2 and Stable Diffusion can do text-guided inapinting they do not\nsupport shape guidance and tend to modify background texture surrounding the\ngenerated object. Our model incorporates both text and shape guidance with\nprecision control. To preserve the background better, we propose a novel\ntraining and sampling strategy by augmenting the diffusion U-net with\nobject-mask prediction. Lastly, we introduce a multi-task training strategy by\njointly training inpainting with text-to-image generation to leverage more\ntraining data. We conduct extensive experiments showing that our model\noutperforms all baselines in terms of visual quality, mask controllability, and\nbackground preservation.", | |
| "authors": "Shaoan Xie, Zhifei Zhang, Zhe Lin, Tobias Hinz, Kun Zhang", | |
| "published": "2022-12-09", | |
| "updated": "2022-12-09", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2210.09276v3", | |
| "title": "Imagic: Text-Based Real Image Editing with Diffusion Models", | |
| "abstract": "Text-conditioned image editing has recently attracted considerable interest.\nHowever, most methods are currently either limited to specific editing types\n(e.g., object overlay, style transfer), or apply to synthetically generated\nimages, or require multiple input images of a common object. In this paper we\ndemonstrate, for the very first time, the ability to apply complex (e.g.,\nnon-rigid) text-guided semantic edits to a single real image. For example, we\ncan change the posture and composition of one or multiple objects inside an\nimage, while preserving its original characteristics. Our method can make a\nstanding dog sit down or jump, cause a bird to spread its wings, etc. -- each\nwithin its single high-resolution natural image provided by the user. Contrary\nto previous work, our proposed method requires only a single input image and a\ntarget text (the desired edit). It operates on real images, and does not\nrequire any additional inputs (such as image masks or additional views of the\nobject). Our method, which we call \"Imagic\", leverages a pre-trained\ntext-to-image diffusion model for this task. It produces a text embedding that\naligns with both the input image and the target text, while fine-tuning the\ndiffusion model to capture the image-specific appearance. We demonstrate the\nquality and versatility of our method on numerous inputs from various domains,\nshowcasing a plethora of high quality complex semantic image edits, all within\na single unified framework.", | |
| "authors": "Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, Michal Irani", | |
| "published": "2022-10-17", | |
| "updated": "2023-03-20", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2203.04705v1", | |
| "title": "FlexIT: Towards Flexible Semantic Image Translation", | |
| "abstract": "Deep generative models, like GANs, have considerably improved the state of\nthe art in image synthesis, and are able to generate near photo-realistic\nimages in structured domains such as human faces. Based on this success, recent\nwork on image editing proceeds by projecting images to the GAN latent space and\nmanipulating the latent vector. However, these approaches are limited in that\nonly images from a narrow domain can be transformed, and with only a limited\nnumber of editing operations. We propose FlexIT, a novel method which can take\nany input image and a user-defined text instruction for editing. Our method\nachieves flexible and natural editing, pushing the limits of semantic image\ntranslation. First, FlexIT combines the input image and text into a single\ntarget point in the CLIP multimodal embedding space. Via the latent space of an\nauto-encoder, we iteratively transform the input image toward the target point,\nensuring coherence and quality with a variety of novel regularization terms. We\npropose an evaluation protocol for semantic image translation, and thoroughly\nevaluate our method on ImageNet. Code will be made publicly available.", | |
| "authors": "Guillaume Couairon, Asya Grechka, Jakob Verbeek, Holger Schwenk, Matthieu Cord", | |
| "published": "2022-03-09", | |
| "updated": "2022-03-09", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2111.14818v2", | |
| "title": "Blended Diffusion for Text-driven Editing of Natural Images", | |
| "abstract": "Natural language offers a highly intuitive interface for image editing. In\nthis paper, we introduce the first solution for performing local (region-based)\nedits in generic natural images, based on a natural language description along\nwith an ROI mask. We achieve our goal by leveraging and combining a pretrained\nlanguage-image model (CLIP), to steer the edit towards a user-provided text\nprompt, with a denoising diffusion probabilistic model (DDPM) to generate\nnatural-looking results. To seamlessly fuse the edited region with the\nunchanged parts of the image, we spatially blend noised versions of the input\nimage with the local text-guided diffusion latent at a progression of noise\nlevels. In addition, we show that adding augmentations to the diffusion process\nmitigates adversarial results. We compare against several baselines and related\nmethods, both qualitatively and quantitatively, and show that our method\noutperforms these solutions in terms of overall realism, ability to preserve\nthe background and matching the text. Finally, we show several text-driven\nediting applications, including adding a new object to an image,\nremoving/replacing/altering existing objects, background replacement, and image\nextrapolation. Code is available at:\nhttps://omriavrahami.com/blended-diffusion-page/", | |
| "authors": "Omri Avrahami, Dani Lischinski, Ohad Fried", | |
| "published": "2021-11-29", | |
| "updated": "2022-03-28", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.GR", | |
| "cs.LG" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2212.06909v2", | |
| "title": "Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting", | |
| "abstract": "Text-guided image editing can have a transformative impact in supporting\ncreative applications. A key challenge is to generate edits that are faithful\nto input text prompts, while consistent with input images. We present Imagen\nEditor, a cascaded diffusion model built, by fine-tuning Imagen on text-guided\nimage inpainting. Imagen Editor's edits are faithful to the text prompts, which\nis accomplished by using object detectors to propose inpainting masks during\ntraining. In addition, Imagen Editor captures fine details in the input image\nby conditioning the cascaded pipeline on the original high resolution image. To\nimprove qualitative and quantitative evaluation, we introduce EditBench, a\nsystematic benchmark for text-guided image inpainting. EditBench evaluates\ninpainting edits on natural and generated images exploring objects, attributes,\nand scenes. Through extensive human evaluation on EditBench, we find that\nobject-masking during training leads to across-the-board improvements in\ntext-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and\nStable Diffusion -- and, as a cohort, these models are better at\nobject-rendering than text-rendering, and handle material/color/size attributes\nbetter than count/shape attributes.", | |
| "authors": "Su Wang, Chitwan Saharia, Ceslee Montgomery, Jordi Pont-Tuset, Shai Noy, Stefano Pellegrini, Yasumasa Onoe, Sarah Laszlo, David J. Fleet, Radu Soricut, Jason Baldridge, Mohammad Norouzi, Peter Anderson, William Chan", | |
| "published": "2022-12-13", | |
| "updated": "2023-04-12", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2210.11427v1", | |
| "title": "DiffEdit: Diffusion-based semantic image editing with mask guidance", | |
| "abstract": "Image generation has recently seen tremendous advances, with diffusion models\nallowing to synthesize convincing images for a large variety of text prompts.\nIn this article, we propose DiffEdit, a method to take advantage of\ntext-conditioned diffusion models for the task of semantic image editing, where\nthe goal is to edit an image based on a text query. Semantic image editing is\nan extension of image generation, with the additional constraint that the\ngenerated image should be as similar as possible to a given input image.\nCurrent editing methods based on diffusion models usually require to provide a\nmask, making the task much easier by treating it as a conditional inpainting\ntask. In contrast, our main contribution is able to automatically generate a\nmask highlighting regions of the input image that need to be edited, by\ncontrasting predictions of a diffusion model conditioned on different text\nprompts. Moreover, we rely on latent inference to preserve content in those\nregions of interest and show excellent synergies with mask-based diffusion.\nDiffEdit achieves state-of-the-art editing performance on ImageNet. In\naddition, we evaluate semantic image editing in more challenging settings,\nusing images from the COCO dataset as well as text-based generated images.", | |
| "authors": "Guillaume Couairon, Jakob Verbeek, Holger Schwenk, Matthieu Cord", | |
| "published": "2022-10-20", | |
| "updated": "2022-10-20", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
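The mask-estimation idea in the DiffEdit abstract, contrasting noise predictions under the source and target prompts, can be sketched as follows. `noise_pred` is a hypothetical stand-in for a real text-conditioned denoiser, and the timestep, seed count, and threshold are illustrative defaults, not the paper's settings.

```python
# Sketch of DiffEdit-style automatic mask estimation: noise the input,
# query a text-conditioned noise predictor under source and target prompt
# embeddings, and threshold the (seed-averaged) prediction difference.
import torch

def estimate_edit_mask(x0, src_emb, tgt_emb, noise_pred,
                       alphas_cumprod, t=500, n_seeds=10, threshold=0.5):
    diffs = []
    a_bar = alphas_cumprod[t]
    for _ in range(n_seeds):
        noise = torch.randn_like(x0)
        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # q(x_t | x_0)
        eps_src = noise_pred(x_t, t, src_emb)
        eps_tgt = noise_pred(x_t, t, tgt_emb)
        diffs.append((eps_src - eps_tgt).abs().mean(dim=1, keepdim=True))
    d = torch.stack(diffs).mean(dim=0)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize to [0, 1]
    return (d > threshold).float()                   # binary edit mask

if __name__ == "__main__":
    alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 2e-2, 1000), dim=0)
    # Toy "denoiser" whose output depends on the prompt embedding.
    fake_denoiser = lambda x, t, emb: x * emb.view(1, -1, 1, 1)
    x0 = torch.randn(1, 3, 32, 32)
    mask = estimate_edit_mask(x0, torch.ones(3), 0.5 * torch.ones(3),
                              fake_denoiser, alphas_cumprod)
```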
| { | |
| "url": "http://arxiv.org/abs/2208.01626v1", | |
| "title": "Prompt-to-Prompt Image Editing with Cross Attention Control", | |
| "abstract": "Recent large-scale text-driven synthesis models have attracted much attention\nthanks to their remarkable capabilities of generating highly diverse images\nthat follow given text prompts. Such text-based synthesis methods are\nparticularly appealing to humans who are used to verbally describe their\nintent. Therefore, it is only natural to extend the text-driven image synthesis\nto text-driven image editing. Editing is challenging for these generative\nmodels, since an innate property of an editing technique is to preserve most of\nthe original image, while in the text-based models, even a small modification\nof the text prompt often leads to a completely different outcome.\nState-of-the-art methods mitigate this by requiring the users to provide a\nspatial mask to localize the edit, hence, ignoring the original structure and\ncontent within the masked region. In this paper, we pursue an intuitive\nprompt-to-prompt editing framework, where the edits are controlled by text\nonly. To this end, we analyze a text-conditioned model in depth and observe\nthat the cross-attention layers are the key to controlling the relation between\nthe spatial layout of the image to each word in the prompt. With this\nobservation, we present several applications which monitor the image synthesis\nby editing the textual prompt only. This includes localized editing by\nreplacing a word, global editing by adding a specification, and even delicately\ncontrolling the extent to which a word is reflected in the image. We present\nour results over diverse images and prompts, demonstrating high-quality\nsynthesis and fidelity to the edited prompts.", | |
| "authors": "Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, Daniel Cohen-Or", | |
| "published": "2022-08-02", | |
| "updated": "2022-08-02", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.CL", | |
| "cs.GR", | |
| "cs.LG" | |
| ], | |
| "label": "Related Work" | |
| }, | |
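The cross-attention observation in the Prompt-to-Prompt abstract suggests a simple mechanism: reuse the attention maps from the source-prompt pass when denoising with the edited prompt. A minimal sketch, with illustrative shapes and no claim to match the authors' implementation:

```python
# Sketch of cross-attention control: the attention weights computed while
# generating with the *source* prompt are re-injected when generating with
# the *edited* prompt, so the spatial layout is carried over.
import torch
import torch.nn.functional as F

def cross_attention(q, k, v, injected_probs=None):
    """q: (B, N_pixels, d); k, v: (B, N_tokens, d). If injected_probs is
    given, reuse those attention maps instead of the ones computed from
    the current (edited) prompt."""
    scale = q.shape[-1] ** -0.5
    probs = F.softmax(q @ k.transpose(-1, -2) * scale, dim=-1)
    if injected_probs is not None:
        probs = injected_probs  # keep source layout, new token values
    return probs @ v, probs

if __name__ == "__main__":
    B, N_pix, N_tok, d = 1, 64, 8, 32
    q = torch.randn(B, N_pix, d)
    k_src, v_src = torch.randn(B, N_tok, d), torch.randn(B, N_tok, d)
    k_tgt, v_tgt = torch.randn(B, N_tok, d), torch.randn(B, N_tok, d)
    _, src_probs = cross_attention(q, k_src, v_src)       # source pass
    out, _ = cross_attention(q, k_tgt, v_tgt, src_probs)  # edited pass
```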
| { | |
| "url": "http://arxiv.org/abs/2206.02779v2", | |
| "title": "Blended Latent Diffusion", | |
| "abstract": "The tremendous progress in neural image generation, coupled with the\nemergence of seemingly omnipotent vision-language models has finally enabled\ntext-based interfaces for creating and editing images. Handling generic images\nrequires a diverse underlying generative model, hence the latest works utilize\ndiffusion models, which were shown to surpass GANs in terms of diversity. One\nmajor drawback of diffusion models, however, is their relatively slow inference\ntime. In this paper, we present an accelerated solution to the task of local\ntext-driven editing of generic images, where the desired edits are confined to\na user-provided mask. Our solution leverages a recent text-to-image Latent\nDiffusion Model (LDM), which speeds up diffusion by operating in a\nlower-dimensional latent space. We first convert the LDM into a local image\neditor by incorporating Blended Diffusion into it. Next we propose an\noptimization-based solution for the inherent inability of this LDM to\naccurately reconstruct images. Finally, we address the scenario of performing\nlocal edits using thin masks. We evaluate our method against the available\nbaselines both qualitatively and quantitatively and demonstrate that in\naddition to being faster, our method achieves better precision than the\nbaselines while mitigating some of their artifacts.", | |
| "authors": "Omri Avrahami, Ohad Fried, Dani Lischinski", | |
| "published": "2022-06-06", | |
| "updated": "2023-04-30", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.GR", | |
| "cs.LG" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2302.05543v3", | |
| "title": "Adding Conditional Control to Text-to-Image Diffusion Models", | |
| "abstract": "We present ControlNet, a neural network architecture to add spatial\nconditioning controls to large, pretrained text-to-image diffusion models.\nControlNet locks the production-ready large diffusion models, and reuses their\ndeep and robust encoding layers pretrained with billions of images as a strong\nbackbone to learn a diverse set of conditional controls. The neural\narchitecture is connected with \"zero convolutions\" (zero-initialized\nconvolution layers) that progressively grow the parameters from zero and ensure\nthat no harmful noise could affect the finetuning. We test various conditioning\ncontrols, eg, edges, depth, segmentation, human pose, etc, with Stable\nDiffusion, using single or multiple conditions, with or without prompts. We\nshow that the training of ControlNets is robust with small (<50k) and large\n(>1m) datasets. Extensive results show that ControlNet may facilitate wider\napplications to control image diffusion models.", | |
| "authors": "Lvmin Zhang, Anyi Rao, Maneesh Agrawala", | |
| "published": "2023-02-10", | |
| "updated": "2023-11-26", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI", | |
| "cs.GR", | |
| "cs.HC", | |
| "cs.MM" | |
| ], | |
| "label": "Related Work" | |
| }, | |
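The "zero convolutions" mentioned in the ControlNet abstract are easy to illustrate: a convolution whose weights and bias start at exactly zero, so the trainable branch contributes nothing at initialization. A generic sketch, not the authors' code:

```python
# Zero convolution: a 1x1 conv initialized to exactly zero, so fine-tuning
# starts from the frozen backbone's behavior and the control branch
# "grows in" gradually as its parameters move away from zero.
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

if __name__ == "__main__":
    z = zero_conv(64)
    h = torch.randn(1, 64, 32, 32)
    assert torch.all(z(h) == 0)  # no contribution at initialization
```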
| { | |
| "url": "http://arxiv.org/abs/2211.09800v2", | |
| "title": "InstructPix2Pix: Learning to Follow Image Editing Instructions", | |
| "abstract": "We propose a method for editing images from human instructions: given an\ninput image and a written instruction that tells the model what to do, our\nmodel follows these instructions to edit the image. To obtain training data for\nthis problem, we combine the knowledge of two large pretrained models -- a\nlanguage model (GPT-3) and a text-to-image model (Stable Diffusion) -- to\ngenerate a large dataset of image editing examples. Our conditional diffusion\nmodel, InstructPix2Pix, is trained on our generated data, and generalizes to\nreal images and user-written instructions at inference time. Since it performs\nedits in the forward pass and does not require per example fine-tuning or\ninversion, our model edits images quickly, in a matter of seconds. We show\ncompelling editing results for a diverse collection of input images and written\ninstructions.", | |
| "authors": "Tim Brooks, Aleksander Holynski, Alexei A. Efros", | |
| "published": "2022-11-17", | |
| "updated": "2023-01-18", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI", | |
| "cs.CL", | |
| "cs.GR", | |
| "cs.LG" | |
| ], | |
| "label": "Related Work" | |
| }, | |
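Sampling an instruction-following editor of this kind typically combines two guidance signals. The sketch below follows the dual classifier-free guidance formulation from the InstructPix2Pix paper (it is not stated in the abstract above); `eps` and the default scales are illustrative stand-ins.

```python
# Hedged sketch of dual classifier-free guidance with separate image and
# text (instruction) guidance scales, in the spirit of InstructPix2Pix.
import torch

def guided_noise(eps, z, t, img_cond, txt_cond, s_img=1.5, s_txt=7.5):
    e_uncond = eps(z, t, None, None)          # neither condition
    e_img    = eps(z, t, img_cond, None)      # image condition only
    e_full   = eps(z, t, img_cond, txt_cond)  # image + instruction
    return (e_uncond
            + s_img * (e_img - e_uncond)      # push toward the input image
            + s_txt * (e_full - e_img))       # push toward the instruction

if __name__ == "__main__":
    # Toy noise predictor whose output depends on which conditions are set.
    fake_eps = lambda z, t, ic, tc: z * (1 + (ic is not None) + (tc is not None))
    z = torch.randn(1, 4, 32, 32)
    out = guided_noise(fake_eps, z, t=500, img_cond=object(), txt_cond=object())
```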
| { | |
| "url": "http://arxiv.org/abs/2211.12572v1", | |
| "title": "Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation", | |
| "abstract": "Large-scale text-to-image generative models have been a revolutionary\nbreakthrough in the evolution of generative AI, allowing us to synthesize\ndiverse images that convey highly complex visual concepts. However, a pivotal\nchallenge in leveraging such models for real-world content creation tasks is\nproviding users with control over the generated content. In this paper, we\npresent a new framework that takes text-to-image synthesis to the realm of\nimage-to-image translation -- given a guidance image and a target text prompt,\nour method harnesses the power of a pre-trained text-to-image diffusion model\nto generate a new image that complies with the target text, while preserving\nthe semantic layout of the source image. Specifically, we observe and\nempirically demonstrate that fine-grained control over the generated structure\ncan be achieved by manipulating spatial features and their self-attention\ninside the model. This results in a simple and effective approach, where\nfeatures extracted from the guidance image are directly injected into the\ngeneration process of the target image, requiring no training or fine-tuning\nand applicable for both real or generated guidance images. We demonstrate\nhigh-quality results on versatile text-guided image translation tasks,\nincluding translating sketches, rough drawings and animations into realistic\nimages, changing of the class and appearance of objects in a given image, and\nmodifications of global qualities such as lighting and color.", | |
| "authors": "Narek Tumanyan, Michal Geyer, Shai Bagon, Tali Dekel", | |
| "published": "2022-11-22", | |
| "updated": "2022-11-22", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/0907.0417v1", | |
| "title": "Microscopic origin of the jump diffusion model", | |
| "abstract": "The present paper is aimed at studying the microscopic origin of the jump\ndiffusion. Starting from the $N$-body Liouville equation and making only the\nassumption that molecular reorientation is overdamped, we derive and solve the\nnew (hereafter generalized diffusion) equation. This is the most general\nequation which governs orientational relaxation of an equilibrium molecular\nensemble in the hindered rotation limit and in the long time limit. The\ngeneralized diffusion equation is an extension of the small-angle diffusion\nequation beyond the impact approximation. We establish the conditions under\nwhich the generalized diffusion equation can be identified with the jump\ndiffusion equation, and also discuss the similarities and differences between\nthe two approaches.", | |
| "authors": "M. F. Gelin, D. S. Kosov", | |
| "published": "2009-07-02", | |
| "updated": "2009-07-02", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1906.02405v1", | |
| "title": "Indirect interactions influence contact network structure and diffusion dynamics", | |
| "abstract": "Interaction patterns at the individual level influence the behaviour of\ndiffusion over contact networks. Most of the current diffusion models only\nconsider direct interactions among individuals to build underlying infectious\nitems transmission networks. However, delayed indirect interactions, where a\nsusceptible individual interacts with infectious items after the infected\nindividual has left the interaction space, can also cause transmission events.\nWe define a diffusion model called the same place different time transmission\n(SPDT) based diffusion that considers transmission links for these indirect\ninteractions. Our SPDT model changes the network dynamics where the\nconnectivity among individuals varies with the decay rates of link infectivity.\nWe investigate SPDT diffusion behaviours by simulating airborne disease\nspreading on data-driven contact networks. The SPDT model significantly\nincreases diffusion dynamics (particularly for networks with low link densities\nwhere indirect interactions create new infection pathways) and is capable of\nproducing realistic disease reproduction number. Our results show that the SPDT\nmodel is significantly more likely to lead to outbreaks compared to current\ndiffusion models with direct interactions. We find that the diffusion dynamics\nwith including indirect links are not reproducible by the current models,\nhighlighting the importance of the indirect links for predicting outbreaks.", | |
| "authors": "Md Shahzamal, Raja Jurdak, Bernard Mans, Frank de Hoog", | |
| "published": "2019-06-06", | |
| "updated": "2019-06-06", | |
| "primary_cat": "cs.SI", | |
| "cats": [ | |
| "cs.SI", | |
| "physics.soc-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.12377v1", | |
| "title": "The vanishing diffusion limit for an Oldroyd-B model in $\\mathbb{R}^2_+$", | |
| "abstract": "We consider the initial-boundary value problem for an incompressible\nOldroyd-B model with stress diffusion in two-dimensional upper half plane which\ndescribes the motion of viscoelastic polymeric fluids. From the physical point\nof view, the diffusive coefficient is several orders of magnitude smaller than\nother parameters in the model, and is usually assumed to be zero. However, the\nlink between the diffusive model and the standard one (zero diffusion) via\nvanishing diffusion limit is still unknown from the mathematical point of view,\nin particular for the problem with boundary. Some numerical results [13]\nsuggest that this should be true. In this work, we provide a rigorous\njustification for the vanishing diffusion in $L^\\infty$-norm.", | |
| "authors": "Yinghui Wang, Huanyao Wen", | |
| "published": "2023-05-21", | |
| "updated": "2023-05-21", | |
| "primary_cat": "math.AP", | |
| "cats": [ | |
| "math.AP", | |
| "35Q35, 76A10, 76D10" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2212.10805v1", | |
| "title": "Beyond Information Exchange: An Approach to Deploy Network Properties for Information Diffusion", | |
| "abstract": "Information diffusion in Online Social Networks is a new and crucial problem\nin social network analysis field and requires significant research attention.\nEfficient diffusion of information are of critical importance in diverse\nsituations such as; pandemic prevention, advertising, marketing etc. Although\nseveral mathematical models have been developed till date, but previous works\nlacked systematic analysis and exploration of the influence of neighborhood for\ninformation diffusion. In this paper, we have proposed Common Neighborhood\nStrategy (CNS) algorithm for information diffusion that demonstrates the role\nof common neighborhood in information propagation throughout the network. The\nperformance of CNS algorithm is evaluated on several real-world datasets in\nterms of diffusion speed and diffusion outspread and compared with several\nwidely used information diffusion models. Empirical results show CNS algorithm\nenables better information diffusion both in terms of diffusion speed and\ndiffusion outspread.", | |
| "authors": "Soumita Das, Anupam Biswas, Ravi Kishore Devarapalli", | |
| "published": "2022-12-21", | |
| "updated": "2022-12-21", | |
| "primary_cat": "cs.SI", | |
| "cats": [ | |
| "cs.SI", | |
| "cs.CV", | |
| "cs.IR", | |
| "J.4; G.4; I.6" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2404.12761v1", | |
| "title": "Universality of giant diffusion in tilted periodic potentials", | |
| "abstract": "Giant diffusion, where the diffusion coefficient of a Brownian particle in a\nperiodic potential with an external force is significantly enhanced by the\nexternal force, is a non-trivial non-equilibrium phenomenon. We propose a\nsimple stochastic model of giant diffusion, which is based on a biased\ncontinuous-time random walk (CTRW). In this model, we introduce a flight time\nin the biased CTRW. We derive the diffusion coefficients of this model by the\nrenewal theory and find that there is a maximum diffusion coefficient when the\nbias is changed. Giant diffusion is universally observed in the sense that\nthere is a peak of the diffusion coefficient for any tilted periodic potentials\nand the degree of the diffusivity is greatly enhanced especially for\nlow-temperature regimes. The biased CTRW models with flight times are applied\nto diffusion under three tilted periodic potentials. Furthermore, the\ntemperature dependence of the maximum diffusion coefficient and the external\nforce that attains the maximum are presented for diffusion under a tilted\nsawtooth potential.", | |
| "authors": "Kento Iida, Andreas Dechant, Takuma Akimoto", | |
| "published": "2024-04-19", | |
| "updated": "2024-04-19", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2302.05737v2", | |
| "title": "A Reparameterized Discrete Diffusion Model for Text Generation", | |
| "abstract": "This work studies discrete diffusion probabilistic models with applications\nto natural language generation. We derive an alternative yet equivalent\nformulation of the sampling from discrete diffusion processes and leverage this\ninsight to develop a family of reparameterized discrete diffusion models. The\nderived generic framework is highly flexible, offers a fresh perspective of the\ngeneration process in discrete diffusion models, and features more effective\ntraining and decoding techniques. We conduct extensive experiments to evaluate\nthe text generation capability of our model, demonstrating significant\nimprovements over existing diffusion models.", | |
| "authors": "Lin Zheng, Jianbo Yuan, Lei Yu, Lingpeng Kong", | |
| "published": "2023-02-11", | |
| "updated": "2024-02-03", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/0801.3436v1", | |
| "title": "Model for Diffusion-Induced Ramsey Narrowing", | |
| "abstract": "Diffusion-induced Ramsey narrowing that appears when atoms can leave the\ninteraction region and repeatedly return without lost of coherence is\ninvestigated using strong collisions approximation. The effective diffusion\nequation is obtained and solved for low-dimensional model configurations and\nthree-dimensional real one.", | |
| "authors": "Alexander Romanenko, Leonid Yatsenko", | |
| "published": "2008-01-22", | |
| "updated": "2008-01-22", | |
| "primary_cat": "quant-ph", | |
| "cats": [ | |
| "quant-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2303.16203v3", | |
| "title": "Your Diffusion Model is Secretly a Zero-Shot Classifier", | |
| "abstract": "The recent wave of large-scale text-to-image diffusion models has\ndramatically increased our text-based image generation abilities. These models\ncan generate realistic images for a staggering variety of prompts and exhibit\nimpressive compositional generalization abilities. Almost all use cases thus\nfar have solely focused on sampling; however, diffusion models can also provide\nconditional density estimates, which are useful for tasks beyond image\ngeneration. In this paper, we show that the density estimates from large-scale\ntext-to-image diffusion models like Stable Diffusion can be leveraged to\nperform zero-shot classification without any additional training. Our\ngenerative approach to classification, which we call Diffusion Classifier,\nattains strong results on a variety of benchmarks and outperforms alternative\nmethods of extracting knowledge from diffusion models. Although a gap remains\nbetween generative and discriminative approaches on zero-shot recognition\ntasks, our diffusion-based approach has significantly stronger multimodal\ncompositional reasoning ability than competing discriminative approaches.\nFinally, we use Diffusion Classifier to extract standard classifiers from\nclass-conditional diffusion models trained on ImageNet. Our models achieve\nstrong classification performance using only weak augmentations and exhibit\nqualitatively better \"effective robustness\" to distribution shift. Overall, our\nresults are a step toward using generative over discriminative models for\ndownstream tasks. Results and visualizations at\nhttps://diffusion-classifier.github.io/", | |
| "authors": "Alexander C. Li, Mihir Prabhudesai, Shivam Duggal, Ellis Brown, Deepak Pathak", | |
| "published": "2023-03-28", | |
| "updated": "2023-09-13", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.CV", | |
| "cs.NE", | |
| "cs.RO" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
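The zero-shot classification recipe in the abstract above reduces to comparing class-conditional denoising errors. A minimal sketch with a stub denoiser; the Monte Carlo sample count and uniform timestep sampling are illustrative simplifications of the paper's procedure:

```python
# Sketch of "diffusion model as zero-shot classifier": score each candidate
# class by how well the class-conditioned denoiser predicts the added noise,
# then pick the class with the lowest expected denoising error.
import torch

@torch.no_grad()
def diffusion_classify(x0, class_embs, noise_pred, alphas_cumprod, n_samples=8):
    errors = []
    for emb in class_embs:
        err = 0.0
        for _ in range(n_samples):
            t = torch.randint(0, len(alphas_cumprod), (1,)).item()
            noise = torch.randn_like(x0)
            a_bar = alphas_cumprod[t]
            x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
            err += ((noise_pred(x_t, t, emb) - noise) ** 2).mean().item()
        errors.append(err / n_samples)
    return int(torch.tensor(errors).argmin())  # predicted class index

if __name__ == "__main__":
    alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 2e-2, 1000), 0)
    fake_denoiser = lambda x, t, emb: x + emb  # toy class-conditioned model
    x0 = torch.randn(1, 3, 16, 16)
    pred = diffusion_classify(x0, [torch.tensor(0.0), torch.tensor(1.0)],
                              fake_denoiser, alphas_cumprod)
```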
| { | |
| "url": "http://arxiv.org/abs/2010.02514v1", | |
| "title": "Diffusion model and analysis of diffusion process at lagrangian method", | |
| "abstract": "Based on Fick's 2nd law the development of moving particle semi-implicit\nmethod for predicting diffusion process is proposed in this study", | |
| "authors": "Ziqi Zhou", | |
| "published": "2020-10-06", | |
| "updated": "2020-10-06", | |
| "primary_cat": "physics.flu-dyn", | |
| "cats": [ | |
| "physics.flu-dyn" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1308.3393v2", | |
| "title": "Cosmology with matter diffusion", | |
| "abstract": "We construct a viable cosmological model based on velocity diffusion of\nmatter particles. In order to ensure the conservation of the total\nenergy-momentum tensor in the presence of diffusion, we include a cosmological\nscalar field $\\phi$ which we identify with the dark energy component of the\nUniverse. The model is characterized by only one new degree of freedom, the\ndiffusion parameter $\\sigma$. The standard $\\Lambda$CDM model can be recovered\nby setting $\\sigma=0$. If diffusion takes place ($\\sigma >0$) the dynamics of\nthe matter and of the dark energy fields are coupled. We argue that the\nexistence of a diffusion mechanism in the Universe can serve as a theoretical\nmotivation for interacting models. We constrain the background dynamics of the\ndiffusion model with Supernovae, H(z) and BAO data. We also perform a\nperturbative analysis of this model in order to understand structure formation\nin the Universe. We calculate the impact of diffusion both on the CMB spectrum,\nwith particular attention to the integrated Sachs-Wolfe signal, and on the\nmatter power spectrum $P(k)$. The latter analysis places strong constraints on\nthe magnitude of the diffusion mechanism but does not rule out the model.", | |
| "authors": "Simone Calogero, Hermano Velten", | |
| "published": "2013-08-15", | |
| "updated": "2013-10-29", | |
| "primary_cat": "astro-ph.CO", | |
| "cats": [ | |
| "astro-ph.CO" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/0912.3770v1", | |
| "title": "SLE(6) and the geometry of diffusion fronts", | |
| "abstract": "We study the diffusion front for a natural two-dimensional model where many\nparticles starting at the origin diffuse independently. It turns out that this\nmodel can be described using properties of near-critical percolation, and\nprovides a natural example where critical fractal geometries spontaneously\narise.", | |
| "authors": "Pierre Nolin", | |
| "published": "2009-12-18", | |
| "updated": "2009-12-18", | |
| "primary_cat": "math.PR", | |
| "cats": [ | |
| "math.PR" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1712.02290v2", | |
| "title": "Effects of nongaussian diffusion on \"isotropic diffusion measurements'': an ex-vivo microimaging and simulation study", | |
| "abstract": "Designing novel diffusion-weighted pulse sequences to probe tissue\nmicrostructure beyond the conventional Stejskal-Tanner family is currently of\nbroad interest. One such technique, multidimensional diffusion MRI, has been\nrecently proposed to afford model-free decomposition of diffusion signal\nkurtosis into terms originating from either ensemble variance of isotropic\ndiffusivity or microscopic diffusion anisotropy. This ability rests on the\nassumption that diffusion can be described as a sum of multiple Gaussian\ncompartments, but this is often not strictly fulfilled. The effects of\nnongaussian diffusion on single shot isotropic diffusion sequences were first\nconsidered in detail by de Swiet and Mitra in 1996. They showed theoretically\nthat anisotropic compartments lead to anisotropic time dependence of the\ndiffusion tensors, which causes the measured isotropic diffusivity to depend on\ngradient frame orientation. Here we show how such deviations from the multiple\nGaussian compartments assumption conflates orientation dispersion with ensemble\nvariance in isotropic diffusivity. Second, we consider additional contributions\nto the apparent variance in isotropic diffusivity arising due to\nintracompartmental kurtosis. These will likewise depend on gradient frame\norientation. We illustrate the potential importance of these confounds with\nanalytical expressions, numerical simulations in simple model geometries, and\nmicroimaging experiments in fixed spinal cord using isotropic diffusion\nencoding waveforms with 7.5 ms duration and 3000 mT/m maximum amplitude.", | |
| "authors": "Sune N\u00f8rh\u00f8j Jespersen, Jonas Lynge Olesen, Andrada Ianu\u015f, Noam Shemesh", | |
| "published": "2017-12-06", | |
| "updated": "2019-02-04", | |
| "primary_cat": "physics.bio-ph", | |
| "cats": [ | |
| "physics.bio-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1506.05574v1", | |
| "title": "Information Diffusion issues", | |
| "abstract": "In this report there will be a discussion for Information Diffusion. There\nwill be discussions on what information diffusion is, its key characteristics\nand on several other aspects of these kinds of networks. This report will focus\non peer to peer models in information diffusion. There will be discussions on\nepidemic model, OSN and other details related to information diffusion.", | |
| "authors": "Jonathan Helmigh", | |
| "published": "2015-06-18", | |
| "updated": "2015-06-18", | |
| "primary_cat": "cs.SI", | |
| "cats": [ | |
| "cs.SI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1603.05605v1", | |
| "title": "Multiscale modeling of diffusion in a crowded environment", | |
| "abstract": "We present a multiscale approach to model diffusion in a crowded environment\nand its effect on the reaction rates. Diffusion in biological systems is often\nmodeled by a discrete space jump process in order to capture the inherent noise\nof biological systems, which becomes important in the low copy number regime.\nTo model diffusion in the crowded cell environment efficiently, we compute the\njump rates in this mesoscopic model from local first exit times, which account\nfor the microscopic positions of the crowding molecules, while the diffusing\nmolecules jump on a coarser Cartesian grid. We then extract a macroscopic\ndescription from the resulting jump rates, where the excluded volume effect is\nmodeled by a diffusion equation with space dependent diffusion coefficient. The\ncrowding molecules can be of arbitrary shape and size and numerical experiments\ndemonstrate that those factors together with the size of the diffusing molecule\nplay a crucial role on the magnitude of the decrease in diffusive motion. When\ncorrecting the reaction rates for the altered diffusion we can show that\nmolecular crowding either enhances or inhibits chemical reactions depending on\nlocal fluctuations of the obstacle density.", | |
| "authors": "Lina Meinecke", | |
| "published": "2016-03-12", | |
| "updated": "2016-03-12", | |
| "primary_cat": "q-bio.SC", | |
| "cats": [ | |
| "q-bio.SC", | |
| "math.NA", | |
| "92-08" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1609.04658v1", | |
| "title": "Analyzing Signal Attenuation in PFG Anomalous Diffusion via a Modified Gaussian Phase Distribution Approximation Based on Fractal Derivative Model", | |
| "abstract": "Pulsed field gradient (PFG) has been increasingly employed to study anomalous\ndiffusions in Nuclear Magnetic Resonance (NMR) and Magnetic Resonance Imaging\n(MRI). However, the analysis of PFG anomalous diffusion is complicated. In this\npaper, a fractal derivative model based modified Gaussian phase distribution\nmethod is proposed to describe PFG anomalous diffusion. By using the phase\ndistribution obtained from the effective phase shift diffusion method based on\nfractal derivatives, and employing some of the traditional Gaussian phase\ndistribution approximation techniques, a general signal attenuation expression\nfor free fractional diffusion is derived. This expression describes a stretched\nexponential function based attenuation, which is distinct from both the\nexponential attenuation for normal diffusion obtained from conventional\nGaussian phase distribution approximation, and the Mittag-Leffler function\nbased attenuation for anomalous diffusion obtained from fractional derivative.\nThe obtained signal attenuation expression can analyze the finite gradient\npulse width (FGPW) effect. Additionally, it can generally be applied to all\nthree types of PFG fractional diffusions classified based on time derivative\norder alpha and space derivative order beta. These three types of fractional\ndiffusions include time-fractional diffusion, space-fractional diffusion, and\ngeneral fractional diffusion. The results in this paper are consistent with\nreported results based on effective phase shift diffusion equation method and\ninstantaneous signal attenuation method. This method provides a new, convenient\napproximation formalism for analyzing PFG anomalous diffusion experiments.", | |
| "authors": "Guoxing Lin", | |
| "published": "2016-09-15", | |
| "updated": "2016-09-15", | |
| "primary_cat": "physics.chem-ph", | |
| "cats": [ | |
| "physics.chem-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2210.05559v2", | |
| "title": "Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance", | |
| "abstract": "Diffusion models have achieved unprecedented performance in generative\nmodeling. The commonly-adopted formulation of the latent code of diffusion\nmodels is a sequence of gradually denoised samples, as opposed to the simpler\n(e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper\nprovides an alternative, Gaussian formulation of the latent space of various\ndiffusion models, as well as an invertible DPM-Encoder that maps images into\nthe latent space. While our formulation is purely based on the definition of\ndiffusion models, we demonstrate several intriguing consequences. (1)\nEmpirically, we observe that a common latent space emerges from two diffusion\nmodels trained independently on related domains. In light of this finding, we\npropose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image\ntranslation. Furthermore, applying CycleDiffusion to text-to-image diffusion\nmodels, we show that large-scale text-to-image diffusion models can be used as\nzero-shot image-to-image editors. (2) One can guide pre-trained diffusion\nmodels and GANs by controlling the latent codes in a unified, plug-and-play\nformulation based on energy-based models. Using the CLIP model and a face\nrecognition model as guidance, we demonstrate that diffusion models have better\ncoverage of low-density sub-populations and individuals than GANs. The code is\npublicly available at https://github.com/ChenWu98/cycle-diffusion.", | |
| "authors": "Chen Henry Wu, Fernando De la Torre", | |
| "published": "2022-10-11", | |
| "updated": "2022-12-07", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.GR", | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1907.09989v1", | |
| "title": "Rogue Heat and Diffusion Waves", | |
| "abstract": "In this paper, we numerically show and discuss the existence and\ncharacteristics of rogue heat and diffusion waves. More specifically, we use\ntwo different nonlinear heat (diffusion) models and show that modulation\ninstability leads to the generation of unexpected and large fluctuations in the\nframe of these models. These fluctuations can be named as rogue heat\n(diffusion) waves. We discuss the properties and statistics of such rogue\nwaves. Our results can find many important applications in many branches such\nas the nonlinear heat transfer, turbulence, financial mathematics, chemical or\nbiological diffusion, nuclear reactions, subsurface water infiltration, and\npore water pressure diffusion modeled in the frame of nonlinear Terzaghi\nconsolidation models, just to name a few.", | |
| "authors": "Cihan Bayindir", | |
| "published": "2019-07-18", | |
| "updated": "2019-07-18", | |
| "primary_cat": "nlin.PS", | |
| "cats": [ | |
| "nlin.PS", | |
| "physics.flu-dyn" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2209.05557v3", | |
| "title": "Blurring Diffusion Models", | |
| "abstract": "Recently, Rissanen et al., (2022) have presented a new type of diffusion\nprocess for generative modeling based on heat dissipation, or blurring, as an\nalternative to isotropic Gaussian diffusion. Here, we show that blurring can\nequivalently be defined through a Gaussian diffusion process with non-isotropic\nnoise. In making this connection, we bridge the gap between inverse heat\ndissipation and denoising diffusion, and we shed light on the inductive bias\nthat results from this modeling choice. Finally, we propose a generalized class\nof diffusion models that offers the best of both standard Gaussian denoising\ndiffusion and inverse heat dissipation, which we call Blurring Diffusion\nModels.", | |
| "authors": "Emiel Hoogeboom, Tim Salimans", | |
| "published": "2022-09-12", | |
| "updated": "2024-05-01", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.CV", | |
| "stat.ML" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
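The equivalence sketched in the Blurring Diffusion abstract, heat dissipation plus Gaussian noise viewed as a non-isotropic diffusion, can be illustrated roughly as below. This simplification uses an FFT basis, whereas the published models use a cosine basis with careful boundary handling, so treat it as a toy forward process only:

```python
# Toy "blurring" forward process: damp high spatial frequencies with a
# heat-kernel factor exp(-|k|^2 * t), then add isotropic Gaussian noise.
import torch

def blur_forward(x0: torch.Tensor, t: float, sigma: float) -> torch.Tensor:
    """x0: (B, C, H, W). Frequency-dependent damping plus noise."""
    B, C, H, W = x0.shape
    ky = torch.fft.fftfreq(H).view(H, 1)
    kx = torch.fft.fftfreq(W).view(1, W)
    decay = torch.exp(-(kx ** 2 + ky ** 2) * t)  # heat-kernel damping
    X = torch.fft.fft2(x0)
    x_blur = torch.fft.ifft2(X * decay).real
    return x_blur + sigma * torch.randn_like(x0)

if __name__ == "__main__":
    x0 = torch.rand(1, 3, 64, 64)
    x_t = blur_forward(x0, t=200.0, sigma=0.1)
```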
| { | |
| "url": "http://arxiv.org/abs/2301.00059v2", | |
| "title": "Describing NMR chemical exchange by effective phase diffusion approach", | |
| "abstract": "This paper proposes an effective phase diffusion method to analyze chemical\nexchange in nuclear magnetic resonance (NMR). The chemical exchange involves\nspin jumps around different sites where the spin angular frequencies vary,\nwhich leads to a random phase walk viewed from the rotating frame reference.\nTherefore, the random walk in phase space can be treated by the effective phase\ndiffusion method. Both the coupled and uncoupled phase diffusions are\nconsidered; additionally, it includes normal diffusion as well as fractional\ndiffusion. Based on these phase diffusion equations, the line shape of NMR\nexchange spectrum can be analyzed. By comparing these theoretical results with\nthe conventional theory, this phase diffusion approach works for fast exchange,\nranging from slightly faster than intermediate exchange to very fast exchange.\nFor normal diffusion models, the theoretically predicted curves agree with\nthose predicted from traditional models in the literature, and the\ncharacteristic exchange time obtained from phase diffusion with a fixed jump\ntime is the same as that obtained from the conventional model. However, the\nphase diffusion with a monoexponential time distribution gives a characteristic\nexchange time constant which is half of that obtained from the traditional\nmodel. Additionally, the fractional diffusion obtains a significantly different\nline shape than that predicted based on normal diffusion.", | |
| "authors": "Guoxing Lin", | |
| "published": "2022-12-30", | |
| "updated": "2023-05-17", | |
| "primary_cat": "physics.chem-ph", | |
| "cats": [ | |
| "physics.chem-ph", | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.01115v2", | |
| "title": "In-Context Learning Unlocked for Diffusion Models", | |
| "abstract": "We present Prompt Diffusion, a framework for enabling in-context learning in\ndiffusion-based generative models. Given a pair of task-specific example\nimages, such as depth from/to image and scribble from/to image, and a text\nguidance, our model automatically understands the underlying task and performs\nthe same task on a new query image following the text guidance. To achieve\nthis, we propose a vision-language prompt that can model a wide range of\nvision-language tasks and a diffusion model that takes it as input. The\ndiffusion model is trained jointly over six different tasks using these\nprompts. The resulting Prompt Diffusion model is the first diffusion-based\nvision-language foundation model capable of in-context learning. It\ndemonstrates high-quality in-context generation on the trained tasks and\ngeneralizes effectively to new, unseen vision tasks with their respective\nprompts. Our model also shows compelling text-guided image editing results. Our\nframework aims to facilitate research into in-context learning for computer\nvision. We share our code and pre-trained models at\nhttps://github.com/Zhendong-Wang/Prompt-Diffusion.", | |
| "authors": "Zhendong Wang, Yifan Jiang, Yadong Lu, Yelong Shen, Pengcheng He, Weizhu Chen, Zhangyang Wang, Mingyuan Zhou", | |
| "published": "2023-05-01", | |
| "updated": "2023-10-18", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1807.03744v2", | |
| "title": "Enhanced Diffusivity in Perturbed Senile Reinforced Random Walk Models", | |
| "abstract": "We consider diffusivity of random walks with transition probabilities\ndepending on the number of consecutive traversals of the last traversed edge,\nthe so called senile reinforced random walk (SeRW). In one dimension, the walk\nis known to be sub-diffusive with identity reinforcement function. We perturb\nthe model by introducing a small probability $\\delta$ of escaping the last\ntraversed edge at each step. The perturbed SeRW model is diffusive for any\n$\\delta >0 $, with enhanced diffusivity ($\\gg O(\\delta^2)$) in the small\n$\\delta$ regime. We further study stochastically perturbed SeRW models by\nhaving the last edge escape probability of the form $\\delta\\, \\xi_n$ with\n$\\xi_n$'s being independent random variables. Enhanced diffusivity in such\nmodels are logarithmically close to the so called residual diffusivity\n(positive in the zero $\\delta$ limit), with diffusivity between\n$O\\left(\\frac{1}{|\\log\\delta |}\\right)$ and\n$O\\left(\\frac{1}{\\log|\\log\\delta|}\\right)$. Finally, we generalize our results\nto higher dimensions where the unperturbed model is already diffusive. The\nenhanced diffusivity can be as much as $O(\\log^{-2}\\delta)$.", | |
| "authors": "Thu Dinh, Jack Xin", | |
| "published": "2018-07-10", | |
| "updated": "2020-03-16", | |
| "primary_cat": "math.PR", | |
| "cats": [ | |
| "math.PR", | |
| "60G50, 60H30, 58J37" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2006.00003v1", | |
| "title": "Coupling particle-based reaction-diffusion simulations with reservoirs mediated by reaction-diffusion PDEs", | |
| "abstract": "Open biochemical systems of interacting molecules are ubiquitous in\nlife-related processes. However, established computational methodologies, like\nmolecular dynamics, are still mostly constrained to closed systems and\ntimescales too small to be relevant for life processes. Alternatively,\nparticle-based reaction-diffusion models are currently the most accurate and\ncomputationally feasible approach at these scales. Their efficiency lies in\nmodeling entire molecules as particles that can diffuse and interact with each\nother. In this work, we develop modeling and numerical schemes for\nparticle-based reaction-diffusion in an open setting, where the reservoirs are\nmediated by reaction-diffusion PDEs. We derive two important theoretical\nresults. The first one is the mean-field for open systems of diffusing\nparticles; the second one is the mean-field for a particle-based\nreaction-diffusion system with second-order reactions. We employ these two\nresults to develop a numerical scheme that consistently couples particle-based\nreaction-diffusion processes with reaction-diffusion PDEs. This allows modeling\nopen biochemical systems in contact with reservoirs that are time-dependent and\nspatially inhomogeneous, as in many relevant real-world applications.", | |
| "authors": "Margarita Kostr\u00e9, Christof Sch\u00fctte, Frank No\u00e9, Mauricio J. del Razo", | |
| "published": "2020-05-29", | |
| "updated": "2020-05-29", | |
| "primary_cat": "q-bio.QM", | |
| "cats": [ | |
| "q-bio.QM", | |
| "physics.chem-ph", | |
| "physics.comp-ph", | |
| "92C40, 92C45, 60J70, 60Gxx, 70Lxx" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1409.3132v1", | |
| "title": "Front propagation in reaction-diffusion systems with anomalous diffusion", | |
| "abstract": "A numerical study of the role of anomalous diffusion in front propagation in\nreaction-diffusion systems is presented. Three models of anomalous diffusion\nare considered: fractional diffusion, tempered fractional diffusion, and a\nmodel that combines fractional diffusion and regular diffusion. The reaction\nkinetics corresponds to a Fisher-Kolmogorov nonlinearity. The numerical method\nis based on a finite-difference operator splitting algorithm with an explicit\nEuler step for the time advance of the reaction kinetics, and a Crank-Nicholson\nsemi-implicit time step for the transport operator. The anomalous diffusion\noperators are discretized using an upwind, flux-conserving, Grunwald-Letnikov\nfinite-difference scheme applied to the regularized fractional derivatives.\nWith fractional diffusion of order $\\alpha$, fronts exhibit exponential\nacceleration, $a_L(t) \\sim e^{\\gamma t/\\alpha}$, and develop algebraic decaying\ntails, $\\phi \\sim 1/x^{\\alpha}$. In the case of tempered fractional diffusion,\nthis phenomenology prevails in the intermediate asymptotic regime\n $\\left(\\chi t \\right)^{1/\\alpha} \\ll x \\ll 1/\\lambda$, where $1/\\lambda$ is\nthe scale of the tempering. Outside this regime, i.e. for $x > 1/\\lambda$, the\ntail exhibits the tempered decay $\\phi \\sim e^{-\\lambda x}/x^{\\alpha+1}$, and\nthe front velocity approaches the terminal speed $v_*=\n\\left(\\gamma-\\lambda^\\alpha \\chi\\right)/ \\lambda$. Of particular interest is\nthe study of the interplay of regular and fractional diffusion. It is shown\nthat the main role of regular diffusion is to delay the onset of front\nacceleration. In particular, the crossover time, $t_c$, to transition to the\naccelerated fractional regime exhibits a logarithmic scaling of the form $t_c\n\\sim \\log \\left(\\chi_d/\\chi_f\\right)$ where $\\chi_d$ and $\\chi_f$ are the\nregular and fractional diffusivities.", | |
| "authors": "D. del-Castillo-Negrete", | |
| "published": "2014-09-10", | |
| "updated": "2014-09-10", | |
| "primary_cat": "nlin.PS", | |
| "cats": [ | |
| "nlin.PS", | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1701.00257v2", | |
| "title": "Analyzing PFG anisotropic anomalous diffusions by instantaneous signal attenuation method", | |
| "abstract": "Anomalous diffusion has been investigated in many systems. Pulsed field\ngradient (PFG) anomalous diffusion is much more complicated than PFG normal\ndiffusion. There have been many theoretical and experimental studies for PFG\nisotropic anomalous diffusion, but there are very few theoretical treatments\nreported for anisotropic anomalous diffusion. Currently, there is not a general\nPFG signal attenuation expression, which includes the finite gradient pulse\neffect and can treat all three types of anisotropic fractional diffusions:\ngeneral fractional diffusion, time fractional diffusion, and space-fractional\ndiffusion. In this paper, the recently developed instantaneous signal\nattenuation (ISA) method was applied to obtain PFG signal attenuation\nexpression for free and restricted anisotropic anomalous diffusion with two\nmodels: fractal derivative and fractional derivative models. The obtained PFG\nsignal attenuation expression for anisotropic anomalous diffusion can reduce to\nthe reported result for PFG anisotropic normal diffusion. The results can also\nreduce to reported PFG isotropic anomalous diffusion results obtained by\neffective phase shift diffusion equation method and instantaneous signal\nattenuation method. For anisotropic space-fractional diffusion, the obtained\nresult agrees with that obtained by the modified Bloch equation method.\nAdditionally, The PFG signal attenuation expressions for free and restricted\nanisotropic curvilinear diffusions were derived by the traditional method, the\nresults of which agree with the PFG anisotropic fractional diffusion results\nbased on the fractional derivative model. The powder pattern of PFG anisotropic\ndiffusion was also discussed. The results here improve our understanding of PFG\nanomalous diffusion, and provide new formalisms for PFG anisotropic anomalous\ndiffusion in NMR and MRI.", | |
| "authors": "Guoxing Lin", | |
| "published": "2017-01-01", | |
| "updated": "2017-01-05", | |
| "primary_cat": "physics.chem-ph", | |
| "cats": [ | |
| "physics.chem-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2206.12327v1", | |
| "title": "Source Localization of Graph Diffusion via Variational Autoencoders for Graph Inverse Problems", | |
| "abstract": "Graph diffusion problems such as the propagation of rumors, computer viruses,\nor smart grid failures are ubiquitous and societal. Hence it is usually crucial\nto identify diffusion sources according to the current graph diffusion\nobservations. Despite its tremendous necessity and significance in practice,\nsource localization, as the inverse problem of graph diffusion, is extremely\nchallenging as it is ill-posed: different sources may lead to the same graph\ndiffusion patterns. Different from most traditional source localization\nmethods, this paper focuses on a probabilistic manner to account for the\nuncertainty of different candidate sources. Such endeavors require overcoming\nchallenges including 1) the uncertainty in graph diffusion source localization\nis hard to be quantified; 2) the complex patterns of the graph diffusion\nsources are difficult to be probabilistically characterized; 3) the\ngeneralization under any underlying diffusion patterns is hard to be imposed.\nTo solve the above challenges, this paper presents a generic framework: Source\nLocalization Variational AutoEncoder (SL-VAE) for locating the diffusion\nsources under arbitrary diffusion patterns. Particularly, we propose a\nprobabilistic model that leverages the forward diffusion estimation model along\nwith deep generative models to approximate the diffusion source distribution\nfor quantifying the uncertainty. SL-VAE further utilizes prior knowledge of the\nsource-observation pairs to characterize the complex patterns of diffusion\nsources by a learned generative prior. Lastly, a unified objective that\nintegrates the forward diffusion estimation model is derived to enforce the\nmodel to generalize under arbitrary diffusion patterns. Extensive experiments\nare conducted on 7 real-world datasets to demonstrate the superiority of SL-VAE\nin reconstructing the diffusion sources by excelling other methods on average\n20% in AUC score.", | |
| "authors": "Chen Ling, Junji Jiang, Junxiang Wang, Liang Zhao", | |
| "published": "2022-06-24", | |
| "updated": "2022-06-24", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.IT", | |
| "math.IT" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1212.2829v1", | |
| "title": "Spin diffusion in one-dimensional classical Heisenberg mode", | |
| "abstract": "The problem of spin diffusion is studied numerically in one-dimensional\nclassical Heisenberg model using a deterministic odd even spin precession\ndynamics. We demonstrate that spin diffusion in this model, like energy\ndiffusion, is normal and one obtains a long time diffusive tail in the decay of\nautocorrelation function (ACF). Some variations of the model with different\ncoupling schemes and with anisotropy are also studied and we find normal\ndiffusion in all of them. A systematic finite size analysis of the Heisenberg\nmodel also suggests diffusive spreading of fluctuation, contrary to previous\nclaims of anomalous diffusion.", | |
| "authors": "Debarshee Bagchi", | |
| "published": "2012-12-12", | |
| "updated": "2012-12-12", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.14671v2", | |
| "title": "A Survey of Diffusion Models in Natural Language Processing", | |
| "abstract": "This survey paper provides a comprehensive review of the use of diffusion\nmodels in natural language processing (NLP). Diffusion models are a class of\nmathematical models that aim to capture the diffusion of information or signals\nacross a network or manifold. In NLP, diffusion models have been used in a\nvariety of applications, such as natural language generation, sentiment\nanalysis, topic modeling, and machine translation. This paper discusses the\ndifferent formulations of diffusion models used in NLP, their strengths and\nlimitations, and their applications. We also perform a thorough comparison\nbetween diffusion models and alternative generative models, specifically\nhighlighting the autoregressive (AR) models, while also examining how diverse\narchitectures incorporate the Transformer in conjunction with diffusion models.\nCompared to AR models, diffusion models have significant advantages for\nparallel generation, text interpolation, token-level controls such as syntactic\nstructures and semantic contents, and robustness. Exploring further\npermutations of integrating Transformers into diffusion models would be a\nvaluable pursuit. Also, the development of multimodal diffusion models and\nlarge-scale diffusion language models with notable capabilities for few-shot\nlearning would be important directions for the future advance of diffusion\nmodels in NLP.", | |
| "authors": "Hao Zou, Zae Myung Kim, Dongyeop Kang", | |
| "published": "2023-05-24", | |
| "updated": "2023-06-14", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2303.09295v1", | |
| "title": "DIRE for Diffusion-Generated Image Detection", | |
| "abstract": "Diffusion models have shown remarkable success in visual synthesis, but have\nalso raised concerns about potential abuse for malicious purposes. In this\npaper, we seek to build a detector for telling apart real images from\ndiffusion-generated images. We find that existing detectors struggle to detect\nimages generated by diffusion models, even if we include generated images from\na specific diffusion model in their training data. To address this issue, we\npropose a novel image representation called DIffusion Reconstruction Error\n(DIRE), which measures the error between an input image and its reconstruction\ncounterpart by a pre-trained diffusion model. We observe that\ndiffusion-generated images can be approximately reconstructed by a diffusion\nmodel while real images cannot. It provides a hint that DIRE can serve as a\nbridge to distinguish generated and real images. DIRE provides an effective way\nto detect images generated by most diffusion models, and it is general for\ndetecting generated images from unseen diffusion models and robust to various\nperturbations. Furthermore, we establish a comprehensive diffusion-generated\nbenchmark including images generated by eight diffusion models to evaluate the\nperformance of diffusion-generated image detectors. Extensive experiments on\nour collected benchmark demonstrate that DIRE exhibits superiority over\nprevious generated-image detectors. The code and dataset are available at\nhttps://github.com/ZhendongWang6/DIRE.", | |
| "authors": "Zhendong Wang, Jianmin Bao, Wengang Zhou, Weilun Wang, Hezhen Hu, Hong Chen, Houqiang Li", | |
| "published": "2023-03-16", | |
| "updated": "2023-03-16", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
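The DIRE statistic described above is simple once inversion and reconstruction are available: it is the absolute error between an image and its diffusion reconstruction. A sketch with stub `invert`/`reconstruct` functions standing in for DDIM inversion and deterministic sampling:

```python
# Sketch of the DIRE detection feature: diffusion-generated images are
# approximately reconstructable by the model (small error), real images
# are not (large error).
import torch

def dire(x, invert, reconstruct):
    """Return the DIRE map |x - R(I(x))| for an image batch x."""
    latent = invert(x)           # DDIM inversion to noise (stub)
    x_rec = reconstruct(latent)  # deterministic reconstruction (stub)
    return (x - x_rec).abs()

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)
    # Identity stubs: a perfectly reconstructed image has DIRE == 0,
    # the signature the detector associates with generated images.
    print(dire(x, lambda z: z, lambda z: z).mean())
```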
| { | |
| "url": "http://arxiv.org/abs/1009.5965v1", | |
| "title": "Sensitivity of a Babcock-Leighton Flux-Transport Dynamo to Magnetic Diffusivity Profiles", | |
| "abstract": "We study the influence of various magnetic diffusivity profiles on the\nevolution of the poloidal and toroidal magnetic fields in a kinematic flux\ntransport dynamo model for the Sun. The diffusivity is a poorly understood\ningredient in solar dynamo models. We mathematically construct various\ntheoretical profiles of the depth-dependent diffusivity, based on constraints\nfrom mixing length theory and turbulence, and on comparisons of poloidal field\nevolution on the Sun with that from the flux-transport dynamo model.\n We then study the effect of each diffusivity profile in the cyclic evolution\nof the magnetic fields in the Sun, by solving the mean-field dynamo equations.\nWe investigate effects on the solar cycle periods, the maximum tachocline field\nstrengths, and the evolution of the toroidal and poloidal field structures\ninside the convection zone, due to different diffusivity profiles.\n We conduct three experiments: (I) comparing very different magnetic\ndiffusivity profiles; (II) comparing different locations of diffusivity\ngradient near the tachocline for the optimal profile; and (III) comparing\ndifferent slopes of diffusivity gradient for an optimal profile.\n Based on these simulations, we discuss which aspects of depth-dependent\ndiffusivity profiles may be most relevant for magnetic flux evolution in the\nSun, and how certain observations could help improve knowledge of this dynamo\ningredient.", | |
| "authors": "E. J. Zita", | |
| "published": "2010-09-29", | |
| "updated": "2010-09-29", | |
| "primary_cat": "astro-ph.SR", | |
| "cats": [ | |
| "astro-ph.SR", | |
| "physics.flu-dyn" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2402.13144v1", | |
| "title": "Neural Network Diffusion", | |
| "abstract": "Diffusion models have achieved remarkable success in image and video\ngeneration. In this work, we demonstrate that diffusion models can also\n\\textit{generate high-performing neural network parameters}. Our approach is\nsimple, utilizing an autoencoder and a standard latent diffusion model. The\nautoencoder extracts latent representations of a subset of the trained network\nparameters. A diffusion model is then trained to synthesize these latent\nparameter representations from random noise. It then generates new\nrepresentations that are passed through the autoencoder's decoder, whose\noutputs are ready to use as new subsets of network parameters. Across various\narchitectures and datasets, our diffusion process consistently generates models\nof comparable or improved performance over trained networks, with minimal\nadditional cost. Notably, we empirically find that the generated models perform\ndifferently with the trained networks. Our results encourage more exploration\non the versatile use of diffusion models.", | |
| "authors": "Kai Wang, Zhaopan Xu, Yukun Zhou, Zelin Zang, Trevor Darrell, Zhuang Liu, Yang You", | |
| "published": "2024-02-20", | |
| "updated": "2024-02-20", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2307.13949v1", | |
| "title": "How Does Diffusion Influence Pretrained Language Models on Out-of-Distribution Data?", | |
| "abstract": "Transformer-based pretrained language models (PLMs) have achieved great\nsuccess in modern NLP. An important advantage of PLMs is good\nout-of-distribution (OOD) robustness. Recently, diffusion models have attracted\na lot of work to apply diffusion to PLMs. It remains under-explored how\ndiffusion influences PLMs on OOD data. The core of diffusion models is a\nforward diffusion process which gradually applies Gaussian noise to inputs, and\na reverse denoising process which removes noise. The noised input\nreconstruction is a fundamental ability of diffusion models. We directly\nanalyze OOD robustness by measuring the reconstruction loss, including testing\nthe abilities to reconstruct OOD data, and to detect OOD samples. Experiments\nare conducted by analyzing different training parameters and data statistical\nfeatures on eight datasets. It shows that finetuning PLMs with diffusion\ndegrades the reconstruction ability on OOD data. The comparison also shows that\ndiffusion models can effectively detect OOD samples, achieving state-of-the-art\nperformance in most of the datasets with an absolute accuracy improvement up to\n18%. These results indicate that diffusion reduces OOD robustness of PLMs.", | |
| "authors": "Huazheng Wang, Daixuan Cheng, Haifeng Sun, Jingyu Wang, Qi Qi, Jianxin Liao, Jing Wang, Cong Liu", | |
| "published": "2023-07-26", | |
| "updated": "2023-07-26", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.AI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2002.02101v1", | |
| "title": "Trace of anomalous diffusion in a biased quenched trap model", | |
| "abstract": "Diffusion on a quenched heterogeneous environment in the presence of bias is\nconsidered analytically. The first-passage-time statistics can be applied to\nobtain the drift and the diffusion coefficient in periodic quenched\nenvironments. We show several transition points at which sample-to-sample\nfluctuations of the drift or the diffusion coefficient remain large even when\nthe system size becomes large, i.e., non-self-averaging. Moreover, we find that\nthe disorder average of the diffusion coefficient diverges or becomes zero when\nthe corresponding annealed model generates superdiffusion or subdiffusion,\nrespectively. This result implies that anomalous diffusion in an annealed model\nis traced by anomaly of the diffusion coefficients in the corresponding\nquenched model.", | |
| "authors": "Takuma Akimoto, Keiji Saito", | |
| "published": "2020-02-06", | |
| "updated": "2020-02-06", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2210.07677v1", | |
| "title": "TransFusion: Transcribing Speech with Multinomial Diffusion", | |
| "abstract": "Diffusion models have shown exceptional scaling properties in the image\nsynthesis domain, and initial attempts have shown similar benefits for applying\ndiffusion to unconditional text synthesis. Denoising diffusion models attempt\nto iteratively refine a sampled noise signal until it resembles a coherent\nsignal (such as an image or written sentence). In this work we aim to see\nwhether the benefits of diffusion models can also be realized for speech\nrecognition. To this end, we propose a new way to perform speech recognition\nusing a diffusion model conditioned on pretrained speech features.\nSpecifically, we propose TransFusion: a transcribing diffusion model which\niteratively denoises a random character sequence into coherent text\ncorresponding to the transcript of a conditioning utterance. We demonstrate\ncomparable performance to existing high-performing contrastive models on the\nLibriSpeech speech recognition benchmark. To the best of our knowledge, we are\nthe first to apply denoising diffusion to speech recognition. We also propose\nnew techniques for effectively sampling and decoding multinomial diffusion\nmodels. These are required because traditional methods of sampling from\nacoustic models are not possible with our new discrete diffusion approach. Code\nand trained models are available: https://github.com/RF5/transfusion-asr", | |
| "authors": "Matthew Baas, Kevin Eloff, Herman Kamper", | |
| "published": "2022-10-14", | |
| "updated": "2022-10-14", | |
| "primary_cat": "eess.AS", | |
| "cats": [ | |
| "eess.AS", | |
| "cs.AI", | |
| "cs.SD" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2401.06046v2", | |
| "title": "Quantifying the contributions to diffusion in complex materials", | |
| "abstract": "Using machine learning with a variational formula for diffusivity, we recast\ndiffusion as a sum of individual contributions to diffusion--called\n\"kinosons\"--and compute their statistical distribution to model a complex\nmulticomponent alloy. Calculating kinosons is orders of magnitude more\nefficient than computing whole trajectories, and elucidates kinetic mechanisms\nfor diffusion. The distribution of kinosons with temperature leads to new\naccurate analytic models for macroscale diffusivity. This combination of\nmachine learning with diffusion theory promises insight into other complex\nmaterials.", | |
| "authors": "Soham Chattopadhyay, Dallas R. Trinkle", | |
| "published": "2024-01-11", | |
| "updated": "2024-03-14", | |
| "primary_cat": "cond-mat.mtrl-sci", | |
| "cats": [ | |
| "cond-mat.mtrl-sci" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2202.05830v1", | |
| "title": "Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality", | |
| "abstract": "Diffusion models have emerged as an expressive family of generative models\nrivaling GANs in sample quality and autoregressive models in likelihood scores.\nStandard diffusion models typically require hundreds of forward passes through\nthe model to generate a single high-fidelity sample. We introduce\nDifferentiable Diffusion Sampler Search (DDSS): a method that optimizes fast\nsamplers for any pre-trained diffusion model by differentiating through sample\nquality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a\nfamily of flexible non-Markovian samplers for diffusion models. We show that\noptimizing the degrees of freedom of GGDM samplers by maximizing sample quality\nscores via gradient descent leads to improved sample quality. Our optimization\nprocedure backpropagates through the sampling process using the\nreparametrization trick and gradient rematerialization. DDSS achieves strong\nresults on unconditional image generation across various datasets (e.g., FID\nscores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82\nwith 20 steps, compared to 51.1 and 14.9 with strongest DDPM/DDIM baselines).\nOur method is compatible with any pre-trained diffusion model without\nfine-tuning or re-training required.", | |
| "authors": "Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi", | |
| "published": "2022-02-11", | |
| "updated": "2022-02-11", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/0910.2253v1", | |
| "title": "Linearized Kompaneetz equation as a relativistic diffusion", | |
| "abstract": "We show that Kompaneetz equation describing photon diffusion in an\nenvironment of an electron gas, when linearized around its equilibrium\ndistribution, coincides with the relativistic diffusion discussed in recent\npublications. The model of the relativistic diffusion is related to soluble\nmodels of imaginary time quantum mechanics. We suggest some non-linear\ngeneralizations of the relativistic diffusion equation and their astrophysical\napplications (in particular to the Sunyaev-Zeldovich effect).", | |
| "authors": "Z. Haba", | |
| "published": "2009-10-12", | |
| "updated": "2009-10-12", | |
| "primary_cat": "astro-ph.CO", | |
| "cats": [ | |
| "astro-ph.CO" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.09605v1", | |
| "title": "Expressiveness Remarks for Denoising Diffusion Models and Samplers", | |
| "abstract": "Denoising diffusion models are a class of generative models which have\nrecently achieved state-of-the-art results across many domains. Gradual noise\nis added to the data using a diffusion process, which transforms the data\ndistribution into a Gaussian. Samples from the generative model are then\nobtained by simulating an approximation of the time reversal of this diffusion\ninitialized by Gaussian samples. Recent research has explored adapting\ndiffusion models for sampling and inference tasks. In this paper, we leverage\nknown connections to stochastic control akin to the F\\\"ollmer drift to extend\nestablished neural network approximation results for the F\\\"ollmer drift to\ndenoising diffusion models and samplers.", | |
| "authors": "Francisco Vargas, Teodora Reu, Anna Kerekes", | |
| "published": "2023-05-16", | |
| "updated": "2023-05-16", | |
| "primary_cat": "stat.ML", | |
| "cats": [ | |
| "stat.ML", | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2301.00527v1", | |
| "title": "Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data", | |
| "abstract": "In this paper, we learn a diffusion model to generate 3D data on a\nscene-scale. Specifically, our model crafts a 3D scene consisting of multiple\nobjects, while recent diffusion research has focused on a single object. To\nrealize our goal, we represent a scene with discrete class labels, i.e.,\ncategorical distribution, to assign multiple objects into semantic categories.\nThus, we extend discrete diffusion models to learn scene-scale categorical\ndistributions. In addition, we validate that a latent diffusion model can\nreduce computation costs for training and deploying. To the best of our\nknowledge, our work is the first to apply discrete and latent diffusion for 3D\ncategorical data on a scene-scale. We further propose to perform semantic scene\ncompletion (SSC) by learning a conditional distribution using our diffusion\nmodel, where the condition is a partial observation in a sparse point cloud. In\nexperiments, we empirically show that our diffusion models not only generate\nreasonable scenes, but also perform the scene completion task better than a\ndiscriminative model. Our code and models are available at\nhttps://github.com/zoomin-lee/scene-scale-diffusion", | |
| "authors": "Jumin Lee, Woobin Im, Sebin Lee, Sung-Eui Yoon", | |
| "published": "2023-01-02", | |
| "updated": "2023-01-02", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1202.6521v1", | |
| "title": "Coherence transition in degenerate diffusion equations with mean field coupling", | |
| "abstract": "We introduce non-linear diffusion in a classical diffusion advection model\nwith non local aggregative coupling on the circle, that exhibits a transition\nfrom an uncoherent state to a coherent one when the coupling strength is\nincreased. We show first that all solutions of the equation converge to the set\nof equilibria, second that the set of equilibria undergoes a bifurcation\nrepresenting the transition to coherence when the coupling strength is\nincreased. These two properties are similar to the situation with linear\ndiffusion. Nevertheless nonlinear diffusion alters the transition scenari,\nwhich are different when the diffusion is sub-quadratic and when the diffusion\nis super-quadratic. When the diffusion is super-quadratic, it results in a\nmultistability region that preceeds the pitchfork bifurcation at which the\nuncoherent equilibrium looses stability. When the diffusion is quadratic the\npitchfork bifurcation at the onset of coherence is infinitely degenerate and a\ndisk of equilibria exist for the critical value of the coupling strength.\nAnother impact of nonlinear diffusion is that coherent equilibria become\nlocalized when advection is strong enough, a phenomenon that is preculded when\nthe diffusion is linear.", | |
| "authors": "Khashayar Pakdaman, Xavier Pellegrin", | |
| "published": "2012-02-29", | |
| "updated": "2012-02-29", | |
| "primary_cat": "nlin.AO", | |
| "cats": [ | |
| "nlin.AO", | |
| "37N25, 92B25, 35Q35, 35K55, 37B25, 82C26" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2106.04745v2", | |
| "title": "Evaluation of diffuse mismatch model for phonon scattering at disordered interfaces", | |
| "abstract": "Diffuse phonon scattering strongly affects the phonon transport through a\ndisordered interface. The often-used diffuse mismatch model assumes that\nphonons lose memory of their origin after being scattered by the interface.\nUsing mode-resolved atomic Green's function simulation, we demonstrate that\ndiffuse phonon scattering by a single disordered interface cannot make a phonon\nlose its memory and thus the applicability of diffusive mismatch model is\nlimited. An analytical expression for diffuse scattering probability based on\nthe continuum approximation is also derived and shown to work reasonably well\nat low frequencies.", | |
| "authors": "Qichen Song, Gang Chen", | |
| "published": "2021-06-09", | |
| "updated": "2021-08-04", | |
| "primary_cat": "cond-mat.mes-hall", | |
| "cats": [ | |
| "cond-mat.mes-hall" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1908.03076v3", | |
| "title": "The strategy of survival for a competition between normal and anomalous diffusion", | |
| "abstract": "In this paper, we study the competition of two diffusion processes for\nachieving the maximum possible diffusion in an area. This competition, however,\ndoes not occur in the same circumstance; one of these processes is a normal\ndiffusion with a higher growth rate, and another one is an anomalous diffusion\nwith a lower growth rate. The trivial solution of the proposed model suggests\nthat the winner is the one with the higher growth rate. But, the question is:\nwhat characteristics and strategies should the second diffusion include to\nprolong the survival in such a competition? The studied diffusion equations\ncorrespond to the SI model such that the anomalous diffusion has memory\ndescribed by a fractional order derivative. The strategy promise that anomalous\ndiffusion reaches maximum survival in case of forgetting some parts of the\nmemory. This model can represent some of real phenomena, such as the contest of\ntwo companies in a market share, the spreading of two epidemic diseases, the\ndiffusion of two species, or any reaction-diffusion related to real-world\ncompetition.", | |
| "authors": "Moein Khalighi, Jamshid Ardalankia, Abbas Karimi Rizi, Haleh Ebadi, Gholamreza Jafari", | |
| "published": "2019-08-07", | |
| "updated": "2020-10-18", | |
| "primary_cat": "physics.soc-ph", | |
| "cats": [ | |
| "physics.soc-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2403.15766v1", | |
| "title": "BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion", | |
| "abstract": "Bagging has achieved great success in the field of machine learning by\nintegrating multiple base classifiers to build a single strong classifier to\nreduce model variance. The performance improvement of bagging mainly relies on\nthe number and diversity of base classifiers. However, traditional deep\nlearning model training methods are expensive to train individually and\ndifficult to train multiple models with low similarity in a restricted dataset.\nRecently, diffusion models, which have been tremendously successful in the\nfields of imaging and vision, have been found to be effective in generating\nneural network model weights and biases with diversity. We creatively propose a\nBagging deep learning training algorithm based on Efficient Neural network\nDiffusion (BEND). The originality of BEND comes from the first use of a neural\nnetwork diffusion model to efficiently build base classifiers for bagging. Our\napproach is simple but effective, first using multiple trained model weights\nand biases as inputs to train autoencoder and latent diffusion model to realize\na diffusion model from noise to valid neural network parameters. Subsequently,\nwe generate several base classifiers using the trained diffusion model.\nFinally, we integrate these ba se classifiers for various inference tasks using\nthe Bagging method. Resulting experiments on multiple models and datasets show\nthat our proposed BEND algorithm can consistently outperform the mean and\nmedian accuracies of both the original trained model and the diffused model. At\nthe same time, new models diffused using the diffusion model have higher\ndiversity and lower cost than multiple models trained using traditional\nmethods. The BEND approach successfully introduces diffusion models into the\nnew deep learning training domain and provides a new paradigm for future deep\nlearning training and inference.", | |
| "authors": "Jia Wei, Xingjun Zhang, Witold Pedrycz", | |
| "published": "2024-03-23", | |
| "updated": "2024-03-23", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2310.01221v2", | |
| "title": "Nonlocal diffusion model with maximum principle", | |
| "abstract": "In this paper, we propose nonlocal diffusion models with Dirichlet boundary.\nThese nonlocal diffusion models preserve the maximum principle and also have\ncorresponding variational form. With these good properties, It is relatively\neasy to prove the well-posedness and the vanishing nonlocality convergence.\nFurthermore, by specifically designed weight function, we can get a nonlocal\ndiffusion model with second order convergence which is optimal for nonlocal\ndiffusion models.", | |
| "authors": "Zuoqiang Shi", | |
| "published": "2023-10-02", | |
| "updated": "2023-10-12", | |
| "primary_cat": "math.AP", | |
| "cats": [ | |
| "math.AP", | |
| "cs.NA", | |
| "math.NA" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1609.09697v1", | |
| "title": "Anomalous diffusion in time-fluctuating non-stationary diffusivity landscapes", | |
| "abstract": "We investigate the ensemble and time averaged mean squared displacements for\nparticle diffusion in a simple model for disordered media by assuming that the\nlocal diffusivity is both fluctuating in time and has a deterministic average\ngrowth or decay in time. In this study we compare computer simulations of the\nstochastic Langevin equation for this random diffusion process with analytical\nresults. We explore the regimes of normal Brownian motion as well as anomalous\ndiffusion in the sub- and superdiffusive regimes. We also consider effects of\nthe inertial term on the particle motion. The investigation of the resulting\ndiffusion is performed for unconfined and confined motion.", | |
| "authors": "A. G. Cherstvy, R. Metzler", | |
| "published": "2016-09-30", | |
| "updated": "2016-09-30", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2207.09786v1", | |
| "title": "Non-Uniform Diffusion Models", | |
| "abstract": "Diffusion models have emerged as one of the most promising frameworks for\ndeep generative modeling. In this work, we explore the potential of non-uniform\ndiffusion models. We show that non-uniform diffusion leads to multi-scale\ndiffusion models which have similar structure to this of multi-scale\nnormalizing flows. We experimentally find that in the same or less training\ntime, the multi-scale diffusion model achieves better FID score than the\nstandard uniform diffusion model. More importantly, it generates samples $4.4$\ntimes faster in $128\\times 128$ resolution. The speed-up is expected to be\nhigher in higher resolutions where more scales are used. Moreover, we show that\nnon-uniform diffusion leads to a novel estimator for the conditional score\nfunction which achieves on par performance with the state-of-the-art\nconditional denoising estimator. Our theoretical and experimental findings are\naccompanied by an open source library MSDiff which can facilitate further\nresearch of non-uniform diffusion models.", | |
| "authors": "Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Sch\u00f6nlieb, Christian Etmann", | |
| "published": "2022-07-20", | |
| "updated": "2022-07-20", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/astro-ph/0012545v1", | |
| "title": "Diffusion and the occurrence of hydrogen shell flashes in helium white dwarf stars", | |
| "abstract": "We investigate the effects of element diffusion on the structure and\nevolution of low-mass helium white dwarfs (WD). Attention is focused on the\noccurrence of hydrogen shell flashes induced by diffusion processes during\ncooling phases. Initial models from 0.406 to 0.161 solar masses are constructed\nby applying mass loss rates at different stages of the RGB evolution of a solar\nmodel. The multicomponent flow equations describing gravitational settling, and\nchemical and thermal diffusion are solved and the diffusion calculations are\ncoupled to an evolutionary code. In addition, the same sequences are computed\nbut neglecting diffusion. We find that element diffusion strongly affects the\nstructure and cooling history of helium WD. In particular, diffusion induces\nthe occurrence of hydrogen shell flashes in models with masses ranging from\n0.18 to 0.41 solar masses, which is in sharp contrast from the situation when\ndiffusion is neglected. In connection with the further evolution, these\ndiffusion-induced flashes lead to much thinner hydrogen envelopes, preventing\nstable nuclear burning from being an appreciable energy source at advanced\nstages of evolution. This implies much shorter cooling ages than in the case\nwhen diffusion is neglected. These new WD models are discussed in light of\nrecent observational data of some millisecond pulsar systems with WD\ncompanions. We find that age discrepancies between the predictions of standard\nevolutionary models and such observations appear to be the result of ignoring\nelement diffusion in such models. Indeed, such discrepancies vanish when\naccount is made of diffusion.", | |
| "authors": "L. G. Althaus, A. M. Serenelli, O. G. Benvenuto", | |
| "published": "2000-12-29", | |
| "updated": "2000-12-29", | |
| "primary_cat": "astro-ph", | |
| "cats": [ | |
| "astro-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.13122v1", | |
| "title": "Policy Representation via Diffusion Probability Model for Reinforcement Learning", | |
| "abstract": "Popular reinforcement learning (RL) algorithms tend to produce a unimodal\npolicy distribution, which weakens the expressiveness of complicated policy and\ndecays the ability of exploration. The diffusion probability model is powerful\nto learn complicated multimodal distributions, which has shown promising and\npotential applications to RL. In this paper, we formally build a theoretical\nfoundation of policy representation via the diffusion probability model and\nprovide practical implementations of diffusion policy for online model-free RL.\nConcretely, we character diffusion policy as a stochastic process, which is a\nnew approach to representing a policy. Then we present a convergence guarantee\nfor diffusion policy, which provides a theory to understand the multimodality\nof diffusion policy. Furthermore, we propose the DIPO which is an\nimplementation for model-free online RL with DIffusion POlicy. To the best of\nour knowledge, DIPO is the first algorithm to solve model-free online RL\nproblems with the diffusion model. Finally, extensive empirical results show\nthe effectiveness and superiority of DIPO on the standard continuous control\nMujoco benchmark.", | |
| "authors": "Long Yang, Zhixiong Huang, Fenghao Lei, Yucun Zhong, Yiming Yang, Cong Fang, Shiting Wen, Binbin Zhou, Zhouchen Lin", | |
| "published": "2023-05-22", | |
| "updated": "2023-05-22", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1905.04004v2", | |
| "title": "Well-posedness of a cross-diffusion population model with nonlocal diffusion", | |
| "abstract": "We prove the existence and uniqueness of solution of a nonlocal\ncross-diffusion competitive population model for two species. The model may be\nconsidered as a version, or even an approximation, of the paradigmatic\nShigesada-Kawasaki-Teramoto cross-diffusion model, in which the usual diffusion\ndifferential operator is replaced by an integral diffusion operator. The proof\nof existence of solutions is based on a compactness argument, while the\nuniqueness of solution is achieved through a duality technique.", | |
| "authors": "Gonzalo Galiano, Juli\u00e1n Velasco", | |
| "published": "2019-05-10", | |
| "updated": "2024-01-24", | |
| "primary_cat": "math.AP", | |
| "cats": [ | |
| "math.AP" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1411.2007v1", | |
| "title": "On large time behavior and selection principle for a diffusive Carr-Penrose Model", | |
| "abstract": "This paper is concerned with the study of a diffusive perturbation of the\nlinear LSW model introduced by Carr and Penrose. A main subject of interest is\nto understand how the presence of diffusion acts as a selection principle,\nwhich singles out a particular self-similar solution of the linear LSW model as\ndetermining the large time behavior of the diffusive model. A selection\nprinciple is rigorously proven for a model which is a semi-classical\napproximation to the diffusive model. Upper bounds on the rate of coarsening\nare also obtained for the full diffusive model.", | |
| "authors": "Joseph G. Conlon, Michael Dabkowski, Jingchen Wu", | |
| "published": "2014-11-07", | |
| "updated": "2014-11-07", | |
| "primary_cat": "math.AP", | |
| "cats": [ | |
| "math.AP", | |
| "35F05" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2110.14851v1", | |
| "title": "Behavior of Spiral Wave Spectra with a Rank-Deficient Diffusion Matrix", | |
| "abstract": "Spiral waves emerge in numerous pattern forming systems and are commonly\nmodeled with reaction-diffusion systems. Some systems used to model biological\nprocesses, such as ion-channel models, fall under the reaction-diffusion\ncategory and often have one or more non-diffusing species which results in a\nrank-deficient diffusion matrix. Previous theoretical research focused on\nspiral spectra for strictly positive diffusion matrices. In this paper, we use\na general two-variable reaction-diffusion system to compare the essential and\nabsolute spectra of spiral waves for strictly positive and rank-deficient\ndiffusion matrices. We show that the essential spectrum is not continuous in\nthe limit of vanishing diffusion in one component. Moreover, we predict\nlocations for the absolute spectrum in the case of a non-diffusing slow\nvariable. Predictions are confirmed numerically for the Barkley and Karma\nmodels.", | |
| "authors": "Stephanie Dodson, Bjorn Sandstede", | |
| "published": "2021-10-28", | |
| "updated": "2021-10-28", | |
| "primary_cat": "math.DS", | |
| "cats": [ | |
| "math.DS" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1708.06890v1", | |
| "title": "Collaborative Inference of Coexisting Information Diffusions", | |
| "abstract": "Recently, \\textit{diffusion history inference} has become an emerging\nresearch topic due to its great benefits for various applications, whose\npurpose is to reconstruct the missing histories of information diffusion traces\naccording to incomplete observations. The existing methods, however, often\nfocus only on single information diffusion trace, while in a real-world social\nnetwork, there often coexist multiple information diffusions over the same\nnetwork. In this paper, we propose a novel approach called Collaborative\nInference Model (CIM) for the problem of the inference of coexisting\ninformation diffusions. By exploiting the synergism between the coexisting\ninformation diffusions, CIM holistically models multiple information diffusions\nas a sparse 4th-order tensor called Coexisting Diffusions Tensor (CDT) without\nany prior assumption of diffusion models, and collaboratively infers the\nhistories of the coexisting information diffusions via a low-rank approximation\nof CDT with a fusion of heterogeneous constraints generated from additional\ndata sources. To improve the efficiency, we further propose an optimal\nalgorithm called Time Window based Parallel Decomposition Algorithm (TWPDA),\nwhich can speed up the inference without compromise on the accuracy by\nutilizing the temporal locality of information diffusions. The extensive\nexperiments conducted on real world datasets and synthetic datasets verify the\neffectiveness and efficiency of CIM and TWPDA.", | |
| "authors": "Yanchao Sun, Cong Qian, Ning Yang, Philip S. Yu", | |
| "published": "2017-08-23", | |
| "updated": "2017-08-23", | |
| "primary_cat": "cs.SI", | |
| "cats": [ | |
| "cs.SI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2303.06574v2", | |
| "title": "Diffusion Models for Non-autoregressive Text Generation: A Survey", | |
| "abstract": "Non-autoregressive (NAR) text generation has attracted much attention in the\nfield of natural language processing, which greatly reduces the inference\nlatency but has to sacrifice the generation accuracy. Recently, diffusion\nmodels, a class of latent variable generative models, have been introduced into\nNAR text generation, showing an improved text generation quality. In this\nsurvey, we review the recent progress in diffusion models for NAR text\ngeneration. As the background, we first present the general definition of\ndiffusion models and the text diffusion models, and then discuss their merits\nfor NAR generation. As the core content, we further introduce two mainstream\ndiffusion models in existing work of text diffusion, and review the key designs\nof the diffusion process. Moreover, we discuss the utilization of pre-trained\nlanguage models (PLMs) for text diffusion models and introduce optimization\ntechniques for text data. Finally, we discuss several promising directions and\nconclude this paper. Our survey aims to provide researchers with a systematic\nreference of related research on text diffusion models for NAR generation. We\npresent our collection of text diffusion models at\nhttps://github.com/RUCAIBox/Awesome-Text-Diffusion-Models.", | |
| "authors": "Yifan Li, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen", | |
| "published": "2023-03-12", | |
| "updated": "2023-05-13", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2211.08892v2", | |
| "title": "Fast Graph Generation via Spectral Diffusion", | |
| "abstract": "Generating graph-structured data is a challenging problem, which requires\nlearning the underlying distribution of graphs. Various models such as graph\nVAE, graph GANs, and graph diffusion models have been proposed to generate\nmeaningful and reliable graphs, among which the diffusion models have achieved\nstate-of-the-art performance. In this paper, we argue that running full-rank\ndiffusion SDEs on the whole graph adjacency matrix space hinders diffusion\nmodels from learning graph topology generation, and hence significantly\ndeteriorates the quality of generated graph data. To address this limitation,\nwe propose an efficient yet effective Graph Spectral Diffusion Model (GSDM),\nwhich is driven by low-rank diffusion SDEs on the graph spectrum space. Our\nspectral diffusion model is further proven to enjoy a substantially stronger\ntheoretical guarantee than standard diffusion models. Extensive experiments\nacross various datasets demonstrate that, our proposed GSDM turns out to be the\nSOTA model, by exhibiting both significantly higher generation quality and much\nless computational consumption than the baselines.", | |
| "authors": "Tianze Luo, Zhanfeng Mo, Sinno Jialin Pan", | |
| "published": "2022-11-16", | |
| "updated": "2022-11-19", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1801.09352v1", | |
| "title": "Distributed order Hausdorff derivative diffusion model to characterize non-Fickian diffusion in porous media", | |
| "abstract": "Many theoretical and experimental results show that solute transport in\nheterogeneous porous media exhibits multi-scaling behaviors. To describe such\nnon-Fickian diffusions, this work provides a distributed order Hausdorff\ndiffusion model to describe the tracer transport in porous media. This model is\nproved to be equivalent with the diffusion equation model with a nonlinear time\ndependent diffusion coefficient. In conjunction with the structural derivative,\nits mean squared displacement (MSD) of the tracer particles is explicitly\nderived as a dilogarithm function when the weight function of the order\ndistribution is a linear function of the time derivative order. This model can\ncapture both accelerating and decelerating anomalous and ultraslow diffusions\nby varying the weight parameter c. In this study, the tracer transport in\nwater-filled pore spaces of two-dimensional Euclidean is demonstrated as a\ndecelerating sub-diffusion, and can well be described by the distributed order\nHausdorff diffusion model with c = 1.73. While the Hausdorff diffusion model\ncan accurately fit the sub-diffusion experimental data of the tracer transport\nin the pore-solid prefractal porous media.", | |
| "authors": "Yingjie Liang, Wen Chen, Wei Xu, HongGuang Sun", | |
| "published": "2018-01-29", | |
| "updated": "2018-01-29", | |
| "primary_cat": "physics.flu-dyn", | |
| "cats": [ | |
| "physics.flu-dyn" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2308.06342v2", | |
| "title": "Mirror Diffusion Models", | |
| "abstract": "Diffusion models have successfully been applied to generative tasks in\nvarious continuous domains. However, applying diffusion to discrete categorical\ndata remains a non-trivial task. Moreover, generation in continuous domains\noften requires clipping in practice, which motivates the need for a theoretical\nframework for adapting diffusion to constrained domains. Inspired by the mirror\nLangevin algorithm for the constrained sampling problem, in this theoretical\nreport we propose Mirror Diffusion Models (MDMs). We demonstrate MDMs in the\ncontext of simplex diffusion and propose natural extensions to popular domains\nsuch as image and text generation.", | |
| "authors": "Jaesung Tae", | |
| "published": "2023-08-11", | |
| "updated": "2023-08-18", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1906.02856v1", | |
| "title": "Diffusion on dynamic contact networks with indirect transmission links", | |
| "abstract": "Modelling diffusion processes on dynamic contact networks is an important\nresearch area for epidemiology, marketing, cybersecurity, and ecology. However,\ncurrent diffusion models cannot capture transmissions occurring for indirect\ninteractions. For example, an airborne infected individual releases infectious\nparticles at locations that can suspend in the air and infect susceptible\nindividuals arriving even after the infected individual left. Thus, current\ndiffusion models miss transmissions during indirect interactions. In this\nthesis, a novel diffusion model called the same place different time\ntransmission based diffusion (SPDT) is introduced to take into account the\ntransmissions through indirect interactions. The behaviour of SPDT diffusion is\nanalysed on real dynamic contact networks and a significant amplification in\ndiffusion dynamics is observed. The SPDT model also introduces some novel\nbehaviours different from current diffusion models. In this work, a new SPDT\ngraph model is also developed to generate synthetic traces to explore SPDT\ndiffusion in several scenarios. The analysis shows that the emergence of new\ndiffusion becomes common thanks to the inclusion of indirect transmissions\nwithin the SPDT model. This work finally investigates how diffusion can be\ncontrolled and develops new methods to hinder diffusion.", | |
| "authors": "Md Shahzamal", | |
| "published": "2019-06-07", | |
| "updated": "2019-06-07", | |
| "primary_cat": "cs.SI", | |
| "cats": [ | |
| "cs.SI", | |
| "physics.soc-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1711.09967v2", | |
| "title": "CO diffusion and desorption kinetics in CO$_2$ ices", | |
| "abstract": "Diffusion of species in icy dust grain mantles is a fundamental process that\nshapes the chemistry of interstellar regions; yet measurements of diffusion in\ninterstellar ice analogs are scarce. Here we present measurements of CO\ndiffusion into CO$_2$ ice at low temperatures (T=11--23~K) using CO$_2$\nlongitudinal optical (LO) phonon modes to monitor the level of mixing of\ninitially layered ices. We model the diffusion kinetics using Fick's second law\nand find the temperature dependent diffusion coefficients are well fit by an\nArrhenius equation giving a diffusion barrier of 300 $\\pm$ 40 K. The low\nbarrier along with the diffusion kinetics through isotopically labeled layers\nsuggest that CO diffuses through CO$_2$ along pore surfaces rather than through\nbulk diffusion. In complementary experiments, we measure the desorption energy\nof CO from CO$_2$ ices deposited at 11-50 K by temperature-programmed\ndesorption (TPD) and find that the desorption barrier ranges from 1240 $\\pm$ 90\nK to 1410 $\\pm$ 70 K depending on the CO$_2$ deposition temperature and\nresultant ice porosity. The measured CO-CO$_2$ desorption barriers demonstrate\nthat CO binds equally well to CO$_2$ and H$_2$O ices when both are compact. The\nCO-CO$_2$ diffusion-desorption barrier ratio ranges from 0.21-0.24 dependent on\nthe binding environment during diffusion. The diffusion-desorption ratio is\nconsistent with the above hypothesis that the observed diffusion is a surface\nprocess and adds to previous experimental evidence on diffusion in water ice\nthat suggests surface diffusion is important to the mobility of molecules\nwithin interstellar ices.", | |
| "authors": "Ilsa R. Cooke, Karin I. \u00d6berg, Edith C. Fayolle, Zoe Peeler, Jennifer B. Bergner", | |
| "published": "2017-11-27", | |
| "updated": "2017-12-18", | |
| "primary_cat": "astro-ph.GA", | |
| "cats": [ | |
| "astro-ph.GA" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2111.03914v2", | |
| "title": "A systematic approach for modeling a nonlocal eddy diffusivity", | |
| "abstract": "This study considers advective and diffusive transport of passive scalar\nfields by spatially-varying incompressible flows. Prior studies have shown that\nthe eddy diffusivities governing the mean field transport in such systems can\ngenerally be nonlocal in space and time. While for many flows nonlocal eddy\ndiffusivities are more accurate than commonly-used Boussinesq eddy\ndiffusivities, nonlocal eddy diffusivities are often computationally\ncost-prohibitive to obtain and difficult to implement in practice. We develop a\nsystematic and more cost-effective approach for modeling nonlocal eddy\ndiffusivities using matched moment inverse (MMI) operators. These operators are\nconstructed using only a few leading-order moments of the exact nonlocal eddy\ndiffusivity kernel, which can be easily computed using the inverse macroscopic\nforcing method (IMFM) (Mani and Park (2021)). The resulting reduced-order\nmodels for the mean fields that incorporate the modeled eddy diffusivities\noften improve Boussinesq-limit models since they capture leading-order nonlocal\neffects. But more importantly, these models can be expressed as partial\ndifferential equations that are readily solvable using existing computational\nfluid dynamics capabilities rather than as integro-partial differential\nequations.", | |
| "authors": "Jessie Liu, Hannah Williams, Ali Mani", | |
| "published": "2021-11-06", | |
| "updated": "2023-06-28", | |
| "primary_cat": "physics.flu-dyn", | |
| "cats": [ | |
| "physics.flu-dyn" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.10028v1", | |
| "title": "Pyramid Diffusion Models For Low-light Image Enhancement", | |
| "abstract": "Recovering noise-covered details from low-light images is challenging, and\nthe results given by previous methods leave room for improvement. Recent\ndiffusion models show realistic and detailed image generation through a\nsequence of denoising refinements and motivate us to introduce them to\nlow-light image enhancement for recovering realistic details. However, we found\ntwo problems when doing this, i.e., 1) diffusion models keep constant\nresolution in one reverse process, which limits the speed; 2) diffusion models\nsometimes result in global degradation (e.g., RGB shift). To address the above\nproblems, this paper proposes a Pyramid Diffusion model (PyDiff) for low-light\nimage enhancement. PyDiff uses a novel pyramid diffusion method to perform\nsampling in a pyramid resolution style (i.e., progressively increasing\nresolution in one reverse process). Pyramid diffusion makes PyDiff much faster\nthan vanilla diffusion models and introduces no performance degradation.\nFurthermore, PyDiff uses a global corrector to alleviate the global degradation\nthat may occur in the reverse process, significantly improving the performance\nand making the training of diffusion models easier with little additional\ncomputational consumption. Extensive experiments on popular benchmarks show\nthat PyDiff achieves superior performance and efficiency. Moreover, PyDiff can\ngeneralize well to unseen noise and illumination distributions.", | |
| "authors": "Dewei Zhou, Zongxin Yang, Yi Yang", | |
| "published": "2023-05-17", | |
| "updated": "2023-05-17", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2312.04410v1", | |
| "title": "Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models", | |
| "abstract": "Recently, diffusion models have made remarkable progress in text-to-image\n(T2I) generation, synthesizing images with high fidelity and diverse contents.\nDespite this advancement, latent space smoothness within diffusion models\nremains largely unexplored. Smooth latent spaces ensure that a perturbation on\nan input latent corresponds to a steady change in the output image. This\nproperty proves beneficial in downstream tasks, including image interpolation,\ninversion, and editing. In this work, we expose the non-smoothness of diffusion\nlatent spaces by observing noticeable visual fluctuations resulting from minor\nlatent variations. To tackle this issue, we propose Smooth Diffusion, a new\ncategory of diffusion models that can be simultaneously high-performing and\nsmooth. Specifically, we introduce Step-wise Variation Regularization to\nenforce the proportion between the variations of an arbitrary input latent and\nthat of the output image is a constant at any diffusion training step. In\naddition, we devise an interpolation standard deviation (ISTD) metric to\neffectively assess the latent space smoothness of a diffusion model. Extensive\nquantitative and qualitative experiments demonstrate that Smooth Diffusion\nstands out as a more desirable solution not only in T2I generation but also\nacross various downstream tasks. Smooth Diffusion is implemented as a\nplug-and-play Smooth-LoRA to work with various community models. Code is\navailable at https://github.com/SHI-Labs/Smooth-Diffusion.", | |
| "authors": "Jiayi Guo, Xingqian Xu, Yifan Pu, Zanlin Ni, Chaofei Wang, Manushree Vasu, Shiji Song, Gao Huang, Humphrey Shi", | |
| "published": "2023-12-07", | |
| "updated": "2023-12-07", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2005.00562v1", | |
| "title": "Unexpected crossovers in correlated random-diffusivity processes", | |
| "abstract": "The passive and active motion of micron-sized tracer particles in crowded\nliquids and inside living biological cells is ubiquitously characterised by\n\"viscoelastic\" anomalous diffusion, in which the increments of the motion\nfeature long-ranged negative and positive correlations. While viscoelastic\nanomalous diffusion is typically modelled by a Gaussian process with correlated\nincrements, so-called fractional Gaussian noise, an increasing number of\nsystems are reported, in which viscoelastic anomalous diffusion is paired with\nnon-Gaussian displacement distributions. Following recent advances in Brownian\nyet non-Gaussian diffusion we here introduce and discuss several possible\nversions of random-diffusivity models with long-ranged correlations. While all\nthese models show a crossover from non-Gaussian to Gaussian distributions\nbeyond some correlation time, their mean squared displacements exhibit\nstrikingly different behaviours: depending on the model crossovers from\nanomalous to normal diffusion are observed, as well as unexpected dependencies\nof the effective diffusion coefficient on the correlation exponent. Our\nobservations of the strong non-universality of random-diffusivity viscoelastic\nanomalous diffusion are important for the analysis of experiments and a better\nunderstanding of the physical origins of \"viscoelastic yet non-Gaussian\"\ndiffusion.", | |
| "authors": "Wei Wang, Flavio Seno, Igor M. Sokolov, Aleksei V. Chechkin, Ralf Metzler", | |
| "published": "2020-05-01", | |
| "updated": "2020-05-01", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech", | |
| "physics.bio-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.08379v2", | |
| "title": "TESS: Text-to-Text Self-Conditioned Simplex Diffusion", | |
| "abstract": "Diffusion models have emerged as a powerful paradigm for generation,\nobtaining strong performance in various continuous domains. However, applying\ncontinuous diffusion models to natural language remains challenging due to its\ndiscrete nature and the need for a large number of diffusion steps to generate\ntext, making diffusion-based generation expensive. In this work, we propose\nText-to-text Self-conditioned Simplex Diffusion (TESS), a text diffusion model\nthat is fully non-autoregressive, employs a new form of self-conditioning, and\napplies the diffusion process on the logit simplex space rather than the\nlearned embedding space. Through extensive experiments on natural language\nunderstanding and generation tasks including summarization, text\nsimplification, paraphrase generation, and question generation, we demonstrate\nthat TESS outperforms state-of-the-art non-autoregressive models, requires\nfewer diffusion steps with minimal drop in performance, and is competitive with\npretrained autoregressive sequence-to-sequence models. We publicly release our\ncodebase at https://github.com/allenai/tess-diffusion.", | |
| "authors": "Rabeeh Karimi Mahabadi, Hamish Ivison, Jaesung Tae, James Henderson, Iz Beltagy, Matthew E. Peters, Arman Cohan", | |
| "published": "2023-05-15", | |
| "updated": "2024-02-21", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2304.01565v1", | |
| "title": "A Survey on Graph Diffusion Models: Generative AI in Science for Molecule, Protein and Material", | |
| "abstract": "Diffusion models have become a new SOTA generative modeling method in various\nfields, for which there are multiple survey works that provide an overall\nsurvey. With the number of articles on diffusion models increasing\nexponentially in the past few years, there is an increasing need for surveys of\ndiffusion models on specific fields. In this work, we are committed to\nconducting a survey on the graph diffusion models. Even though our focus is to\ncover the progress of diffusion models in graphs, we first briefly summarize\nhow other generative modeling methods are used for graphs. After that, we\nintroduce the mechanism of diffusion models in various forms, which facilitates\nthe discussion on the graph diffusion models. The applications of graph\ndiffusion models mainly fall into the category of AI-generated content (AIGC)\nin science, for which we mainly focus on how graph diffusion models are\nutilized for generating molecules and proteins but also cover other cases,\nincluding materials design. Moreover, we discuss the issue of evaluating\ndiffusion models in the graph domain and the existing challenges.", | |
| "authors": "Mengchun Zhang, Maryam Qamar, Taegoo Kang, Yuna Jung, Chenshuang Zhang, Sung-Ho Bae, Chaoning Zhang", | |
| "published": "2023-04-04", | |
| "updated": "2023-04-04", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2307.06272v1", | |
| "title": "Exposing the Fake: Effective Diffusion-Generated Images Detection", | |
| "abstract": "Image synthesis has seen significant advancements with the advent of\ndiffusion-based generative models like Denoising Diffusion Probabilistic Models\n(DDPM) and text-to-image diffusion models. Despite their efficacy, there is a\ndearth of research dedicated to detecting diffusion-generated images, which\ncould pose potential security and privacy risks. This paper addresses this gap\nby proposing a novel detection method called Stepwise Error for\nDiffusion-generated Image Detection (SeDID). Comprising statistical-based\n$\\text{SeDID}_{\\text{Stat}}$ and neural network-based\n$\\text{SeDID}_{\\text{NNs}}$, SeDID exploits the unique attributes of diffusion\nmodels, namely deterministic reverse and deterministic denoising computation\nerrors. Our evaluations demonstrate SeDID's superior performance over existing\nmethods when applied to diffusion models. Thus, our work makes a pivotal\ncontribution to distinguishing diffusion model-generated images, marking a\nsignificant step in the domain of artificial intelligence security.", | |
| "authors": "Ruipeng Ma, Jinhao Duan, Fei Kong, Xiaoshuang Shi, Kaidi Xu", | |
| "published": "2023-07-12", | |
| "updated": "2023-07-12", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.CR", | |
| "cs.LG" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1911.11645v1", | |
| "title": "Effects of different discretisations of the Laplacian upon stochastic simulations of reaction-diffusion systems on both static and growing domains", | |
| "abstract": "By discretising space into compartments and letting system dynamics be\ngoverned by the reaction-diffusion master equation, it is possible to derive\nand simulate a stochastic model of reaction and diffusion on an arbitrary\ndomain. However, there are many implementation choices involved in this\nprocess, such as the choice of discretisation and method of derivation of the\ndiffusive jump rates, and it is not clear a priori how these affect model\npredictions. To shed light on this issue, in this work we explore how a variety\nof discretisations and method for derivation of the diffusive jump rates affect\nthe outputs of stochastic simulations of reaction-diffusion models, in\nparticular using Turing's model of pattern formation as a key example. We\nconsider both static and uniformly growing domains and demonstrate that, while\nonly minor differences are observed for simple reaction-diffusion systems,\nthere can be vast differences in model predictions for systems that include\ncomplicated reaction kinetics, such as Turing's model of pattern formation. Our\nwork highlights that care must be taken in using the reaction-diffusion master\nequation to make predictions as to the dynamics of stochastic\nreaction-diffusion systems.", | |
| "authors": "Bartosz J. Bartmanski, Ruth E. Baker", | |
| "published": "2019-11-26", | |
| "updated": "2019-11-26", | |
| "primary_cat": "physics.comp-ph", | |
| "cats": [ | |
| "physics.comp-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2312.08873v1", | |
| "title": "Diffusion Cocktail: Fused Generation from Diffusion Models", | |
| "abstract": "Diffusion models excel at generating high-quality images and are easy to\nextend, making them extremely popular among active users who have created an\nextensive collection of diffusion models with various styles by fine-tuning\nbase models such as Stable Diffusion. Recent work has focused on uncovering\nsemantic and visual information encoded in various components of a diffusion\nmodel, enabling better generation quality and more fine-grained control.\nHowever, those methods target improving a single model and overlook the vastly\navailable collection of fine-tuned diffusion models. In this work, we study the\ncombinations of diffusion models. We propose Diffusion Cocktail (Ditail), a\ntraining-free method that can accurately transfer content information between\ntwo diffusion models. This allows us to perform diverse generations using a set\nof diffusion models, resulting in novel images that are unlikely to be obtained\nby a single model alone. We also explore utilizing Ditail for style transfer,\nwith the target style set by a diffusion model instead of an image. Ditail\noffers a more detailed manipulation of the diffusion generation, thereby\nenabling the vast community to integrate various styles and contents seamlessly\nand generate any content of any style.", | |
| "authors": "Haoming Liu, Yuanhe Guo, Shengjie Wang, Hongyi Wen", | |
| "published": "2023-12-12", | |
| "updated": "2023-12-12", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2012.06816v1", | |
| "title": "Evaluation and Comparison of Diffusion Models with Motif Features", | |
| "abstract": "Diffusion models simulate the propagation of influence in networks. The\ndesign and evaluation of diffusion models has been subjective and empirical.\nWhen being applied to a network represented by a graph, the diffusion model\ngenerates a sequence of edges on which the influence flows, such sequence forms\na temporal network. In most scenarios, the statistical properties or the\ncharacteristics of a network are inferred by analyzing the temporal networks\ngenerated by diffusion models. To analyze real temporal networks, the motif has\nbeen proposed as a reliable feature. However, it is unclear how the network\ntopology and the diffusion model affect the motif feature of a generated\ntemporal network. In this paper, we adopt the motif feature to evaluate the\ntemporal graph generated by a diffusion model, thence the diffusion model\nitself. Two benchmarks for quantitively evaluating diffusion models with motif,\nstability and separability, are proposed and measured on numerous diffusion\nmodels. One motif-based metric is proposed to measure the similarity between\ndiffusion models. The experiments suggest that the motif of a generated\ntemporal network is dominated by the diffusion model, while the network\ntopology is almost ignored. This result indicates that more practical and\nreliable diffusion models have to be designed with delicacy in order to capture\nthe propagation patterns of real temporal networks.", | |
| "authors": "Fangqi Li", | |
| "published": "2020-12-12", | |
| "updated": "2020-12-12", | |
| "primary_cat": "cs.SI", | |
| "cats": [ | |
| "cs.SI", | |
| "cs.NI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
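The entry above evaluates the temporal networks that network diffusion models generate. As a concrete illustration of what such a "generated temporal network" is, here is a minimal sketch that runs a simple independent-cascade diffusion over a toy graph and records the time-stamped edge sequence whose motif statistics would then be analyzed. The cascade rule, the graph, and all names and parameters are our own illustrative assumptions, not taken from the paper.

```python
import random

def independent_cascade(adj, seed, p=0.3, rng=None):
    """Run one independent-cascade diffusion over a graph given as an
    adjacency dict {node: [neighbors]}.  Returns the time-stamped edge
    sequence (u, v, t) along which influence flowed, i.e. one generated
    temporal network in the sense discussed above."""
    rng = rng or random.Random(0)
    active, frontier, t = {seed}, [seed], 0
    edges = []
    while frontier:
        t += 1
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)              # v becomes influenced at time t
                    edges.append((u, v, t))
                    next_frontier.append(v)
        frontier = next_frontier
    return edges

# Toy graph: a small ring with a chord.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0, 2]}
print(independent_cascade(adj, seed=0))
```

Counting small time-ordered subgraphs (motifs) over many such edge sequences is what the paper's stability and separability benchmarks would operate on.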
| { | |
| "url": "http://arxiv.org/abs/2312.14589v1", | |
| "title": "Non-Denoising Forward-Time Diffusions", | |
| "abstract": "The scope of this paper is generative modeling through diffusion processes.\nAn approach falling within this paradigm is the work of Song et al. (2021),\nwhich relies on a time-reversal argument to construct a diffusion process\ntargeting the desired data distribution. We show that the time-reversal\nargument, common to all denoising diffusion probabilistic modeling proposals,\nis not necessary. We obtain diffusion processes targeting the desired data\ndistribution by taking appropriate mixtures of diffusion bridges. The resulting\ntransport is exact by construction, allows for greater flexibility in choosing\nthe dynamics of the underlying diffusion, and can be approximated by means of a\nneural network via novel training objectives. We develop a unifying view of the\ndrift adjustments corresponding to our and to time-reversal approaches and make\nuse of this representation to inspect the inner workings of diffusion-based\ngenerative models. Finally, we leverage on scalable simulation and inference\ntechniques common in spatial statistics to move beyond fully factorial\ndistributions in the underlying diffusion dynamics. The methodological advances\ncontained in this work contribute toward establishing a general framework for\ngenerative modeling based on diffusion processes.", | |
| "authors": "Stefano Peluchetti", | |
| "published": "2023-12-22", | |
| "updated": "2023-12-22", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "stat.ML" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1709.05336v1", | |
| "title": "Cs diffusion in SiC high-energy grain boundaries", | |
| "abstract": "Cesium (Cs) is a radioactive fission product whose release is of concern for\nTristructural-Isotropic (TRISO) fuel particles. In this work, Cs diffusion\nthrough high energy grain boundaries (HEGBs) of cubic-SiC is studied using an\nab-initio based kinetic Monte Carlo (kMC) model. The HEGB environment was\nmodeled as an amorphous SiC (a-SiC), and Cs defect energies were calculated\nusing density functional theory (DFT). From defect energies, it was suggested\nthat the fastest diffusion mechanism as Cs interstitial in an amorphous SiC.\nThe diffusion of Cs interstitial was simulated using a kMC, based on the site\nand transition state energies sampled from the DFT. The Cs HEGB diffusion\nexhibited an Arrhenius type diffusion in the range of 1200-1600{\\deg}C. The\ncomparison between HEGB results and the other studies suggests not only that\nthe GB diffusion dominates the bulk diffusion, but also that the HEGB is one of\nthe fastest grain boundary paths for the Cs diffusion. The diffusion\ncoefficients in HEGB are clearly a few orders of magnitude lower than the\nreported diffusion coefficients from in- and out-of- pile samples, suggesting\nthat other contributions are responsible, such as a radiation enhanced\ndiffusion.", | |
| "authors": "Hyunseok Ko, Izabela Szlufarska, Dane Morgan", | |
| "published": "2017-09-11", | |
| "updated": "2017-09-11", | |
| "primary_cat": "cond-mat.mtrl-sci", | |
| "cats": [ | |
| "cond-mat.mtrl-sci", | |
| "nucl-th" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2212.10777v4", | |
| "title": "Hierarchically branched diffusion models leverage dataset structure for class-conditional generation", | |
| "abstract": "Class-labeled datasets, particularly those common in scientific domains, are\nrife with internal structure, yet current class-conditional diffusion models\nignore these relationships and implicitly diffuse on all classes in a flat\nfashion. To leverage this structure, we propose hierarchically branched\ndiffusion models as a novel framework for class-conditional generation.\nBranched diffusion models rely on the same diffusion process as traditional\nmodels, but learn reverse diffusion separately for each branch of a hierarchy.\nWe highlight several advantages of branched diffusion models over the current\nstate-of-the-art methods for class-conditional diffusion, including extension\nto novel classes in a continual-learning setting, a more sophisticated form of\nanalogy-based conditional generation (i.e. transmutation), and a novel\ninterpretability into the generation process. We extensively evaluate branched\ndiffusion models on several benchmark and large real-world scientific datasets\nspanning many data modalities.", | |
| "authors": "Alex M. Tseng, Max Shen, Tommaso Biancalani, Gabriele Scalia", | |
| "published": "2022-12-21", | |
| "updated": "2024-02-01", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2302.07261v2", | |
| "title": "Where to Diffuse, How to Diffuse, and How to Get Back: Automated Learning for Multivariate Diffusions", | |
| "abstract": "Diffusion-based generative models (DBGMs) perturb data to a target noise\ndistribution and reverse this process to generate samples. The choice of\nnoising process, or inference diffusion process, affects both likelihoods and\nsample quality. For example, extending the inference process with auxiliary\nvariables leads to improved sample quality. While there are many such\nmultivariate diffusions to explore, each new one requires significant\nmodel-specific analysis, hindering rapid prototyping and evaluation. In this\nwork, we study Multivariate Diffusion Models (MDMs). For any number of\nauxiliary variables, we provide a recipe for maximizing a lower-bound on the\nMDMs likelihood without requiring any model-specific analysis. We then\ndemonstrate how to parameterize the diffusion for a specified target noise\ndistribution; these two points together enable optimizing the inference\ndiffusion process. Optimizing the diffusion expands easy experimentation from\njust a few well-known processes to an automatic search over all linear\ndiffusions. To demonstrate these ideas, we introduce two new specific\ndiffusions as well as learn a diffusion process on the MNIST, CIFAR10, and\nImageNet32 datasets. We show learned MDMs match or surpass bits-per-dims (BPDs)\nrelative to fixed choices of diffusions for a given dataset and model\narchitecture.", | |
| "authors": "Raghav Singhal, Mark Goldstein, Rajesh Ranganath", | |
| "published": "2023-02-14", | |
| "updated": "2023-03-03", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "stat.ML" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2403.01742v2", | |
| "title": "Diffusion-TS: Interpretable Diffusion for General Time Series Generation", | |
| "abstract": "Denoising diffusion probabilistic models (DDPMs) are becoming the leading\nparadigm for generative models. It has recently shown breakthroughs in audio\nsynthesis, time series imputation and forecasting. In this paper, we propose\nDiffusion-TS, a novel diffusion-based framework that generates multivariate\ntime series samples of high quality by using an encoder-decoder transformer\nwith disentangled temporal representations, in which the decomposition\ntechnique guides Diffusion-TS to capture the semantic meaning of time series\nwhile transformers mine detailed sequential information from the noisy model\ninput. Different from existing diffusion-based approaches, we train the model\nto directly reconstruct the sample instead of the noise in each diffusion step,\ncombining a Fourier-based loss term. Diffusion-TS is expected to generate time\nseries satisfying both interpretablity and realness. In addition, it is shown\nthat the proposed Diffusion-TS can be easily extended to conditional generation\ntasks, such as forecasting and imputation, without any model changes. This also\nmotivates us to further explore the performance of Diffusion-TS under irregular\nsettings. Finally, through qualitative and quantitative experiments, results\nshow that Diffusion-TS achieves the state-of-the-art results on various\nrealistic analyses of time series.", | |
| "authors": "Xinyu Yuan, Yan Qiao", | |
| "published": "2024-03-04", | |
| "updated": "2024-03-14", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/cond-mat/0208120v1", | |
| "title": "Aging in a Chaotic System", | |
| "abstract": "We demonstrate aging behavior in a simple non-linear system. Our model is a\nchaotic map which generates deterministically sub-diffusion. Asymptotic\nbehaviors of the diffusion process are described using aging continuous time\nrandom walks, introduced previously to model diffusion in glasses.", | |
| "authors": "E. Barkai", | |
| "published": "2002-08-06", | |
| "updated": "2002-08-06", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech", | |
| "nlin.CD" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1705.01542v2", | |
| "title": "A Spatial Structural Derivative Model for Ultraslow Diffusion", | |
| "abstract": "This study investigates the ultraslow diffusion by a spatial structural\nderivative, in which the exponential function exp(x)is selected as the\nstructural function to construct the local structural derivative diffusion\nequation model. The analytical solution of the diffusion equation is a form of\nBiexponential distribution. Its corresponding mean squared displacement is\nnumerically calculated, and increases more slowly than the logarithmic function\nof time. The local structural derivative diffusion equation with the structural\nfunction exp(x)in space is an alternative physical and mathematical modeling\nmodel to characterize a kind of ultraslow diffusion.", | |
| "authors": "Wei Xu, Wen Chen, Yingjie Liang, Jose Weberszpil", | |
| "published": "2017-05-03", | |
| "updated": "2017-06-13", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1404.3573v1", | |
| "title": "\"Diffusing diffusivity\": A model for anomalous and \"anomalous yet Brownian\" diffusion", | |
| "abstract": "Wang et al. [PNAS 106 (2009) 15160] have found that in several systems the\nlinear time dependence of the mean-square displacement (MSD) of diffusing\ncolloidal particles, typical of normal diffusion, is accompanied by a\nnon-Gaussian displacement distribution (DisD), with roughly exponential tails\nat short times, a situation they termed \"anomalous yet Brownian\" diffusion. The\ndiversity of systems in which this is observed calls for a generic model. We\npresent such a model where there is \"diffusivity memory\" but no \"direction\nmemory\" in the particle trajectory, and we show that it leads to both a linear\nMSD and a non-Gaussian DisD at short times. In our model, the diffusivity is\nundergoing a (perhaps biased) random walk, hence the expression \"diffusing\ndiffusivity\". The DisD is predicted to be exactly exponential at short times if\nthe distribution of diffusivities is itself exponential, but an exponential\nremains a good fit to the DisD for a variety of diffusivity distributions.\nMoreover, our generic model can be modified to produce subdiffusion.", | |
| "authors": "Mykyta V. Chubynsky, Gary W. Slater", | |
| "published": "2014-04-14", | |
| "updated": "2014-04-14", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech", | |
| "cond-mat.soft" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
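The "diffusing diffusivity" idea in the entry above is easy to make concrete: a particle takes Gaussian steps while its own diffusivity performs a random walk. The minimal sketch below implements that picture in 1D (with the diffusivity reflected at zero to stay positive) and checks the two signatures the abstract describes, a linear MSD together with a non-Gaussian (excess-kurtosis) displacement distribution. All parameter values and the reflection rule are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusing_diffusivity(n_particles=100_000, n_steps=100, dt=1e-2,
                          D0=1.0, sigma_D=0.5):
    """1D 'diffusing diffusivity' simulation: each particle's diffusivity D
    performs its own random walk (reflected at zero so it stays positive),
    while the position takes Gaussian steps of variance 2*D*dt."""
    D = np.full(n_particles, D0)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        D = np.abs(D + sigma_D * np.sqrt(dt) * rng.standard_normal(n_particles))
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    return x

x = diffusing_diffusivity()
print("MSD:", np.mean(x**2))                           # grows ~ linearly in time
print("kurtosis:", np.mean(x**4) / np.mean(x**2)**2)   # > 3 => non-Gaussian tails
```

A kurtosis above 3 reflects the roughly exponential short-time tails: the displacement is effectively a mixture of Gaussians with random variances.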
| { | |
| "url": "http://arxiv.org/abs/1304.0925v1", | |
| "title": "A new approach to multi-modal diffusions with applications to protein folding", | |
| "abstract": "This article demonstrates that flexible and statistically tractable\nmulti-modal diffusion models can be attained by transformation of simple\nwell-known diffusion models such as the Ornstein-Uhlenbeck model, or more\ngenerally a Pearson diffusion. The transformed diffusion inherits many\nproperties of the underlying simple diffusion including its mixing rates and\ndistributions of first passage times. Likelihood inference and martingale\nestimating functions are considered in the case of a discretely observed\nbimodal diffusion. It is further demonstrated that model parameters can be\nidentified and estimated when the diffusion is observed with additional\nmeasurement error. The new approach is applied to molecular dynamics data in\nform of a reaction coordinate of the small Trp-zipper protein, for which the\nfolding and unfolding rates are estimated. The new models provide a better fit\nto this type of protein folding data than previous models because the diffusion\ncoefficient is state-dependent.", | |
| "authors": "Julie Forman, Michael S\u00f8rensen", | |
| "published": "2013-04-03", | |
| "updated": "2013-04-03", | |
| "primary_cat": "stat.ME", | |
| "cats": [ | |
| "stat.ME" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/physics/0403039v1", | |
| "title": "Non-diffusive transport in plasma turbulence: a fractional diffusion approach", | |
| "abstract": "Numerical evidence of non-diffusive transport in three-dimensional, resistive\npressure-gradient-driven plasma turbulence is presented. It is shown that the\nprobability density function (pdf) of test particles' radial displacements is\nstrongly non-Gaussian and exhibits algebraic decaying tails. To model these\nresults we propose a macroscopic transport model for the pdf based on the use\nof fractional derivatives in space and time, that incorporate in a unified way\nspace-time non-locality (non-Fickian transport), non-Gaussianity, and\nnon-diffusive scaling. The fractional diffusion model reproduces the shape, and\nspace-time scaling of the non-Gaussian pdf of turbulent transport calculations.\nThe model also reproduces the observed super-diffusive scaling.", | |
| "authors": "D. del-Castillo-Negrete, B. A. Carreras, V. E. Lynch", | |
| "published": "2004-03-04", | |
| "updated": "2004-03-04", | |
| "primary_cat": "physics.plasm-ph", | |
| "cats": [ | |
| "physics.plasm-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.16269v1", | |
| "title": "UDPM: Upsampling Diffusion Probabilistic Models", | |
| "abstract": "In recent years, Denoising Diffusion Probabilistic Models (DDPM) have caught\nsignificant attention. By composing a Markovian process that starts in the data\ndomain and then gradually adds noise until reaching pure white noise, they\nachieve superior performance in learning data distributions. Yet, these models\nrequire a large number of diffusion steps to produce aesthetically pleasing\nsamples, which is inefficient. In addition, unlike common generative\nadversarial networks, the latent space of diffusion models is not\ninterpretable. In this work, we propose to generalize the denoising diffusion\nprocess into an Upsampling Diffusion Probabilistic Model (UDPM), in which we\nreduce the latent variable dimension in addition to the traditional noise level\naddition. As a result, we are able to sample images of size $256\\times 256$\nwith only 7 diffusion steps, which is less than two orders of magnitude\ncompared to standard DDPMs. We formally develop the Markovian diffusion\nprocesses of the UDPM, and demonstrate its generation capabilities on the\npopular FFHQ, LSUN horses, ImageNet, and AFHQv2 datasets. Another favorable\nproperty of UDPM is that it is very easy to interpolate its latent space, which\nis not the case with standard diffusion models. Our code is available online\n\\url{https://github.com/shadyabh/UDPM}", | |
| "authors": "Shady Abu-Hussein, Raja Giryes", | |
| "published": "2023-05-25", | |
| "updated": "2023-05-25", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.LG", | |
| "eess.IV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
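UDPM's starting point is the standard DDPM forward process, which has a closed form per step. The sketch below shows that closed-form noising plus an illustrative 2x average-pooling to stand in for UDPM's dimension reduction; the pooling choice and schedule are our own assumptions (the paper defines its own Markovian downsampling process).

```python
import numpy as np

rng = np.random.default_rng(0)

def ddpm_forward(x0, T=1000, beta_min=1e-4, beta_max=2e-2):
    """Standard DDPM forward (noising) process in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    betas = np.linspace(beta_min, beta_max, T)
    alpha_bar = np.cumprod(1.0 - betas)
    def q_sample(t):
        eps = rng.standard_normal(x0.shape)
        return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
    return q_sample

def downsample2x(img):
    """2x2 average pooling -- our illustrative stand-in for UDPM's
    reduction of the latent variable dimension."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

x0 = rng.standard_normal((256, 256))
q = ddpm_forward(x0)
xt = q(500)                   # noised latent at step 500
xt_small = downsample2x(xt)   # UDPM-style reduced latent
print(xt.shape, xt_small.shape)   # (256, 256) (128, 128)
```

Shrinking the latent at selected steps is what lets a 7-step chain cover the same signal-to-noise range that a fixed-resolution DDPM traverses over hundreds of steps.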
| { | |
| "url": "http://arxiv.org/abs/2404.08926v2", | |
| "title": "Diffusion Models Meet Remote Sensing: Principles, Methods, and Perspectives", | |
| "abstract": "As a newly emerging advance in deep generative models, diffusion models have\nachieved state-of-the-art results in many fields, including computer vision,\nnatural language processing, and molecule design. The remote sensing community\nhas also noticed the powerful ability of diffusion models and quickly applied\nthem to a variety of tasks for image processing. Given the rapid increase in\nresearch on diffusion models in the field of remote sensing, it is necessary to\nconduct a comprehensive review of existing diffusion model-based remote sensing\npapers, to help researchers recognize the potential of diffusion models and\nprovide some directions for further exploration. Specifically, this paper first\nintroduces the theoretical background of diffusion models, and then\nsystematically reviews the applications of diffusion models in remote sensing,\nincluding image generation, enhancement, and interpretation. Finally, the\nlimitations of existing remote sensing diffusion models and worthy research\ndirections for further exploration are discussed and summarized.", | |
| "authors": "Yidan Liu, Jun Yue, Shaobo Xia, Pedram Ghamisi, Weiying Xie, Leyuan Fang", | |
| "published": "2024-04-13", | |
| "updated": "2024-04-17", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2211.07804v3", | |
| "title": "Diffusion Models for Medical Image Analysis: A Comprehensive Survey", | |
| "abstract": "Denoising diffusion models, a class of generative models, have garnered\nimmense interest lately in various deep-learning problems. A diffusion\nprobabilistic model defines a forward diffusion stage where the input data is\ngradually perturbed over several steps by adding Gaussian noise and then learns\nto reverse the diffusion process to retrieve the desired noise-free data from\nnoisy data samples. Diffusion models are widely appreciated for their strong\nmode coverage and quality of the generated samples despite their known\ncomputational burdens. Capitalizing on the advances in computer vision, the\nfield of medical imaging has also observed a growing interest in diffusion\nmodels. To help the researcher navigate this profusion, this survey intends to\nprovide a comprehensive overview of diffusion models in the discipline of\nmedical image analysis. Specifically, we introduce the solid theoretical\nfoundation and fundamental concepts behind diffusion models and the three\ngeneric diffusion modelling frameworks: diffusion probabilistic models,\nnoise-conditioned score networks, and stochastic differential equations. Then,\nwe provide a systematic taxonomy of diffusion models in the medical domain and\npropose a multi-perspective categorization based on their application, imaging\nmodality, organ of interest, and algorithms. To this end, we cover extensive\napplications of diffusion models in the medical domain. Furthermore, we\nemphasize the practical use case of some selected approaches, and then we\ndiscuss the limitations of the diffusion models in the medical domain and\npropose several directions to fulfill the demands of this field. Finally, we\ngather the overviewed studies with their available open-source implementations\nat\nhttps://github.com/amirhossein-kz/Awesome-Diffusion-Models-in-Medical-Imaging.", | |
| "authors": "Amirhossein Kazerouni, Ehsan Khodapanah Aghdam, Moein Heidari, Reza Azad, Mohsen Fayyaz, Ilker Hacihaliloglu, Dorit Merhof", | |
| "published": "2022-11-14", | |
| "updated": "2023-06-03", | |
| "primary_cat": "eess.IV", | |
| "cats": [ | |
| "eess.IV", | |
| "cs.CV" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2402.01965v2", | |
| "title": "Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization", | |
| "abstract": "Diffusion models are becoming widely used in state-of-the-art image, video\nand audio generation. Score-based diffusion models stand out among these\nmethods, necessitating the estimation of score function of the input data\ndistribution. In this study, we present a theoretical framework to analyze\ntwo-layer neural network-based diffusion models by reframing score matching and\ndenoising score matching as convex optimization. Though existing diffusion\ntheory is mainly asymptotic, we characterize the exact predicted score function\nand establish the convergence result for neural network-based diffusion models\nwith finite data. This work contributes to understanding what neural\nnetwork-based diffusion model learns in non-asymptotic settings.", | |
| "authors": "Fangzhao Zhang, Mert Pilanci", | |
| "published": "2024-02-03", | |
| "updated": "2024-02-06", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "math.OC" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1705.07063v1", | |
| "title": "Double diffusivity model under stochastic forcing", | |
| "abstract": "The \"double diffusivity\" model was proposed in the late 1970s, and reworked\nin the early 1980s, as a continuum counterpart to existing discrete models of\ndiffusion corresponding to high diffusivity paths, such as grain boundaries and\ndislocation lines. Technically, the model pans out as a system of coupled {\\it\nFick type} diffusion equations to represent \"regular\" and \"high\" diffusivity\npaths with \"source terms\" accounting for the mass exchange between the two\npaths. The model remit was extended by analogy to describe flow in porous media\nwith double porosity, as well as to model heat conduction in media with two\nnon-equilibrium local temperature baths e.g. ion and electron baths. Uncoupling\nof the two partial differential equations leads to a higher-ordered diffusion\nequation, solutions of which could be obtained in terms of classical diffusion\nequation solutions. Similar equations could also be derived within an \"internal\nlength\" gradient (ILG) mechanics formulation applied to diffusion problems,\n{\\it i.e.}, by introducing nonlocal effects, together with inertia and\nviscosity, in a mechanics based formulation of diffusion theory. This issue\nbecomes particularly important in the case of diffusion in nanopolycrystals\nwhose deterministic ILG based theoretical calculations predict a relaxation\ntime that is only about one-tenth of the actual experimentally verified\ntimescale. This article provides the \"missing link\" in this estimation by\nadding a vital element in the ILG structure, that of stochasticity, that takes\ninto account all boundary layer fluctuations. Our stochastic-ILG diffusion\ncalculation confirms rapprochement between theory and experiment, thereby\nbenchmarking a new generation of gradient-based continuum models that conform\ncloser to real life fluctuating environments.", | |
| "authors": "Amit K Chattopadhyay, Elias C Aifantis", | |
| "published": "2017-05-19", | |
| "updated": "2017-05-19", | |
| "primary_cat": "cond-mat.soft", | |
| "cats": [ | |
| "cond-mat.soft", | |
| "cond-mat.mtrl-sci", | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
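The double diffusivity model above is a pair of coupled Fick-type equations with source terms exchanging mass between a "regular" and a "high-diffusivity" path. The sketch below integrates one such deterministic system with an explicit finite-difference scheme; the coefficients, exchange form, and discretization are illustrative assumptions, not the paper's stochastic-ILG formulation.

```python
import numpy as np

def double_diffusivity(nx=200, nt=4000, L=1.0, dt=1e-6,
                       D1=1.0, D2=10.0, k1=5.0, k2=5.0):
    """Explicit finite-difference sketch of a 'double diffusivity' system:
    coupled Fick-type equations for a regular path u and a high-diffusivity
    path v, with source terms +/-(k1*u - k2*v) exchanging mass between them:
        u_t = D1*u_xx - k1*u + k2*v
        v_t = D2*v_xx + k1*u - k2*v
    All coefficients are illustrative only."""
    dx = L / (nx - 1)
    assert max(D1, D2) * dt / dx**2 <= 0.5, "explicit scheme unstable"
    u = np.zeros(nx); u[nx // 2] = 1.0 / dx      # unit mass pulse in u
    v = np.zeros(nx)
    for _ in range(nt):
        lap_u, lap_v = np.zeros(nx), np.zeros(nx)
        lap_u[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap_v[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
        exchange = k1 * u - k2 * v               # mass swapped between paths
        u, v = (u + dt * (D1 * lap_u - exchange),
                v + dt * (D2 * lap_v + exchange))
    return u, v, dx

u, v, dx = double_diffusivity()
# The exchange terms conserve total mass; it stays ~1 until the pulse
# reaches the domain boundary.
print("total mass:", (u + v).sum() * dx)
```

Eliminating v from the two equations yields the higher-order diffusion equation the abstract mentions, which is why solutions can be written in terms of classical diffusion solutions.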
| { | |
| "url": "http://arxiv.org/abs/nlin/0212039v2", | |
| "title": "Front dynamics in reaction-diffusion systems with Levy flights: a fractional diffusion approach", | |
| "abstract": "The use of reaction-diffusion models rests on the key assumption that the\nunderlying diffusive process is Gaussian. However, a growing number of studies\nhave pointed out the prevalence of anomalous diffusion, and there is a need to\nunderstand the dynamics of reactive systems in the presence of this type of\nnon-Gaussian diffusion. Here we present a study of front dynamics in\nreaction-diffusion systems where anomalous diffusion is due to the presence of\nasymmetric Levy flights. Our approach consists of replacing the Laplacian\ndiffusion operator by a fractional diffusion operator, whose fundamental\nsolutions are Levy $\\alpha$-stable distributions. Numerical simulation of the\nfractional Fisher-Kolmogorov equation, and analytical arguments show that\nanomalous diffusion leads to the exponential acceleration of fronts and a\nuniversal power law decay, $x^{-\\alpha}$, of the tail, where $\\alpha$, the\nindex of the Levy distribution, is the order of the fractional derivative.", | |
| "authors": "D. del-Castillo-Negrete, B. A. Carreras, V. E. Lynch", | |
| "published": "2002-12-17", | |
| "updated": "2003-06-30", | |
| "primary_cat": "nlin.PS", | |
| "cats": [ | |
| "nlin.PS", | |
| "nlin.CD" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
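The microscopic picture behind the fractional operator in the entry above is a random walk with alpha-stable (Levy) step lengths, asymmetric when beta is nonzero. A minimal sketch, assuming scipy's alpha-stable sampler and illustrative parameter values of our own choosing:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

def levy_flight(alpha=1.5, beta=0.5, n_walkers=2_000, n_steps=50):
    """Walkers taking alpha-stable (Levy) steps -- the microscopic picture
    behind a fractional diffusion operator of order alpha.  beta != 0 makes
    the flights asymmetric.  Scales and counts are illustrative."""
    steps = levy_stable.rvs(alpha, beta, size=(n_steps, n_walkers),
                            random_state=rng)
    return steps.sum(axis=0)

x = levy_flight()
for thresh in (10.0, 100.0):
    print(f"P(X > {thresh:g}) ~ {np.mean(x > thresh):.4f}")
# The tail probability drops by roughly 10**(-alpha) per decade -- the
# x**(-alpha) power-law decay the fronts inherit in the abstract.
```

The heavy tails are what accelerate fronts exponentially: arbitrarily long jumps seed the reaction far ahead of the mean-field front position.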
| { | |
| "url": "http://arxiv.org/abs/1812.07249v1", | |
| "title": "A unifying approach to first-passage time distributions in diffusing diffusivity and switching diffusion models", | |
| "abstract": "We propose a unifying theoretical framework for the analysis of first-passage\ntime distributions in two important classes of stochastic processes in which\nthe diffusivity of a particle evolves randomly in time. In the first class of\n\"diffusing diffusivity\" models, the diffusivity changes continuously via a\nprescribed stochastic equation. In turn, the diffusivity switches randomly\nbetween discrete values in the second class of \"switching diffusion\" models.\nFor both cases, we quantify the impact of the diffusivity dynamics onto the\nfirst-passage time distribution of a particle via the moment-generating\nfunction of the integrated diffusivity. We provide general formulas and some\nexplicit solutions for some particular cases of practical interest.", | |
| "authors": "D. S. Grebenkov", | |
| "published": "2018-12-18", | |
| "updated": "2018-12-18", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech", | |
| "physics.bio-ph", | |
| "physics.chem-ph" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
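For the "switching diffusion" class in the entry above, first-passage statistics are easy to probe by direct Monte Carlo: the diffusivity jumps between two discrete values as a two-state Markov chain while the position diffuses. The sketch below is exactly that; all parameter values are illustrative, and the paper's analytical moment-generating-function machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def switching_fpt(n=5_000, dt=1e-3, D=(0.1, 1.0), rates=(1.0, 1.0),
                  target=1.0, n_steps=20_000):
    """Monte Carlo first-passage times for a 'switching diffusion': each
    walker's diffusivity jumps between D[0] and D[1] as a two-state Markov
    chain with switching rates rates[0], rates[1]."""
    Ds, rs = np.asarray(D), np.asarray(rates)
    x = np.zeros(n)
    state = rng.integers(2, size=n)            # initial diffusivity state
    fpt = np.full(n, np.nan)                   # NaN = never reached target
    alive = np.ones(n, dtype=bool)
    for k in range(1, n_steps + 1):
        flip = rng.random(n) < rs[state] * dt  # Markov switching events
        state = np.where(flip, 1 - state, state)
        x += alive * np.sqrt(2.0 * Ds[state] * dt) * rng.standard_normal(n)
        hit = alive & (x >= target)
        fpt[hit] = k * dt                      # record first crossing time
        alive &= ~hit
        if not alive.any():
            break
    return fpt[~np.isnan(fpt)]

fpts = switching_fpt()
print(f"hit fraction: {fpts.size / 5_000:.2f}, median FPT: {np.median(fpts):.2f}")
```

Varying the switching rates relative to the typical crossing time shows the regimes the paper quantifies: fast switching behaves like a single effective diffusivity, slow switching like a mixture of two first-passage populations.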
| { | |
| "url": "http://arxiv.org/abs/2306.07491v2", | |
| "title": "Exact sharp-fronted solutions for nonlinear diffusion on evolving domains", | |
| "abstract": "Models of diffusive processes that occur on evolving domains are frequently\nemployed to describe biological and physical phenomena, such as diffusion\nwithin expanding tissues or substrates. Previous investigations into these\nmodels either report numerical solutions or require an assumption of linear\ndiffusion to determine exact solutions. Unfortunately, numerical solutions do\nnot reveal the relationship between the model parameters and the solution\nfeatures. Additionally, experimental observations typically report the presence\nof sharp fronts, which are not captured by linear diffusion. Here we address\nboth limitations by presenting exact sharp-fronted solutions to a model of\ndegenerate nonlinear diffusion on a growing domain. We obtain the solution by\nidentifying a series of transformations that converts the model of a nonlinear\ndiffusive process on an evolving domain to a nonlinear diffusion equation on a\nfixed domain, which admits known exact solutions for certain choices of\ndiffusivity functions. We determine expressions for critical time scales and\ndomain growth rates such that the diffusive population never reaches the domain\nboundaries and hence the solution remains valid.", | |
| "authors": "Stuart T. Johnston, Matthew J. Simpson", | |
| "published": "2023-06-13", | |
| "updated": "2023-10-06", | |
| "primary_cat": "q-bio.PE", | |
| "cats": [ | |
| "q-bio.PE" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/cond-mat/0210703v1", | |
| "title": "Membrane bound protein diffusion viewed by fluorescence recovery after bleaching experiments : models analysis", | |
| "abstract": "Diffusion processes in biological membranes are of interest to understand the\nmacromolecular organisation and function of several molecules. Fluorescence\nRecovery After Photobleaching (FRAP) has been widely used as a method to\nanalyse this processes using classical Brownian diffusion model. In the first\npart of this work, the analytical expression of the fluorescence recovery as a\nfunction of time has been established for anomalous diffusion due to long\nwaiting times. Then, experimental fluorescence recoveries recorded in living\ncells on a membrane-bound protein have been analysed using three different\nmodels : normal Brownian diffusion, Brownian diffusion with an immobile\nfraction and anomalous diffusion due to long waiting times.", | |
| "authors": "C. Favard, N. Olivi-Tran, J. -L. Meunier", | |
| "published": "2002-10-31", | |
| "updated": "2002-10-31", | |
| "primary_cat": "cond-mat.stat-mech", | |
| "cats": [ | |
| "cond-mat.stat-mech", | |
| "physics.bio-ph", | |
| "q-bio.BM" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1210.5101v1", | |
| "title": "Global well-posedness and zero-diffusion limit of classical solutions to the 3D conservation laws arising in chemotaxis", | |
| "abstract": "In this paper, we study the relationship between a diffusive model and a\nnon-diffusive model which are both derived from the well-known Keller-Segel\nmodel, as a coefficient of diffusion $\\varepsilon$ goes to zero. First, we\nestablish the global well-posedness of classical solutions to the Cauchy\nproblem for the diffusive model with smooth initial data which is of small\n$L^2$ norm, together with some {\\it a priori} estimates uniform for $t$ and\n$\\varepsilon$. Then we investigate the zero-diffusion limit, and get the global\nwell-posedness of classical solutions to the Cauchy problem for the\nnon-diffusive model. Finally, we derive the convergence rate of the diffusive\nmodel toward the non-diffusive model. It is shown that the convergence rate in\n$L^\\infty$ norm is of the order $O(\\varepsilon^{1/2})$. It should be noted that\nthe initial data is small in $L^2$-norm but can be of large oscillations with\nconstant state at far field. As a byproduct, we improve the corresponding\nresult on the well-posedness of the non-difussive model which requires small\noscillations.", | |
| "authors": "Hongyun Peng, Huanyao Wen, Changjiang Zhu", | |
| "published": "2012-10-18", | |
| "updated": "2012-10-18", | |
| "primary_cat": "math.AP", | |
| "cats": [ | |
| "math.AP" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2401.17181v1", | |
| "title": "Transfer Learning for Text Diffusion Models", | |
| "abstract": "In this report, we explore the potential for text diffusion to replace\nautoregressive (AR) decoding for the training and deployment of large language\nmodels (LLMs). We are particularly interested to see whether pretrained AR\nmodels can be transformed into text diffusion models through a lightweight\nadaptation procedure we call ``AR2Diff''. We begin by establishing a strong\nbaseline setup for training text diffusion models. Comparing across multiple\narchitectures and pretraining objectives, we find that training a decoder-only\nmodel with a prefix LM objective is best or near-best across several tasks.\nBuilding on this finding, we test various transfer learning setups for text\ndiffusion models. On machine translation, we find that text diffusion\nunderperforms the standard AR approach. However, on code synthesis and\nextractive QA, we find diffusion models trained from scratch outperform AR\nmodels in many cases. We also observe quality gains from AR2Diff -- adapting AR\nmodels to use diffusion decoding. These results are promising given that text\ndiffusion is relatively underexplored and can be significantly faster than AR\ndecoding for long text generation.", | |
| "authors": "Kehang Han, Kathleen Kenealy, Aditya Barua, Noah Fiedel, Noah Constant", | |
| "published": "2024-01-30", | |
| "updated": "2024-01-30", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL" | |
| ], | |
| "category": "Diffusion AND Model" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/0805.0647v1", | |
| "title": "Scaling of Rough Surfaces: Effects of Surface Diffusion on Growth and Roughness Exponents", | |
| "abstract": "Random deposition model with surface diffusion over several next nearest\nneighbours is studied. The results agree with the results obtained by Family\nfor the case of nearest neighbour diffusion [F. Family, J. Phys. A 19(8), L441,\n1986]. However for larger diffusion steps, the growth exponent and the\nroughness exponent show interesting dependence on diffusion length.", | |
| "authors": "Baisakhi Mal, Subhankar Ray, J. Shamanna", | |
| "published": "2008-05-06", | |
| "updated": "2008-05-06", | |
| "primary_cat": "cond-mat.soft", | |
| "cats": [ | |
| "cond-mat.soft", | |
| "cond-mat.stat-mech" | |
| ], | |
| "category": "Diffusion AND Model" | |
| } | |
| ] |
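The final entry's model is simple enough to simulate directly: particles rain down on random columns and relax to the lowest column within a diffusion length. The sketch below implements this (Family's relaxation model for ell=1, extended to larger ell in the spirit of the abstract); the lattice size, coverage, and tie-breaking rule are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_deposition_relaxation(L=256, n_mono=50, ell=1):
    """Random deposition with surface relaxation over a diffusion length
    `ell`: each deposited particle settles on the lowest column among its
    neighbours within distance ell (Family's model when ell=1).  Returns
    the interface width after n_mono monolayers, with periodic boundaries."""
    h = np.zeros(L, dtype=int)
    for _ in range(n_mono * L):
        i = rng.integers(L)
        window = [(i + d) % L for d in range(-ell, ell + 1)]
        j = min(window, key=lambda k: h[k])   # relax to the lowest column
        h[j] += 1
    return h.std()

for ell in (1, 2, 4):
    print(f"ell={ell}: interface width W = {random_deposition_relaxation(ell=ell):.3f}")
```

The width shrinking as ell grows is the qualitative effect behind the abstract's claim that the growth and roughness exponents depend on the diffusion length.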