{ "url": "http://arxiv.org/abs/2404.16678v1", "title": "Multimodal Semantic-Aware Automatic Colorization with Diffusion Prior", "abstract": "Colorizing grayscale images offers an engaging visual experience. Existing\nautomatic colorization methods often fail to generate satisfactory results due\nto incorrect semantic colors and unsaturated colors. In this work, we propose\nan automatic colorization pipeline to overcome these challenges. We leverage\nthe extraordinary generative ability of the diffusion prior to synthesize color\nwith plausible semantics. To overcome the artifacts introduced by the diffusion\nprior, we apply the luminance conditional guidance. Moreover, we adopt\nmultimodal high-level semantic priors to help the model understand the image\ncontent and deliver saturated colors. Besides, a luminance-aware decoder is\ndesigned to restore details and enhance overall visual quality. The proposed\npipeline synthesizes saturated colors while maintaining plausible semantics.\nExperiments indicate that our proposed method considers both diversity and\nfidelity, surpassing previous methods in terms of perceptual realism and gain\nmost human preference.", "authors": "Han Wang, Xinning Chai, Yiwen Wang, Yuhong Zhang, Rong Xie, Li Song", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "label": "Original Paper", "paper_cat": "Diffusion AND Model", "gt": "Automatic colorization synthesizes a colorful and semanti- cally plausible image given a grayscale image. It is a classical computer vision task that has been studied for decades. How- ever, existing automatic colorization methods cannot provide satisfactory solution due to the two main challenges: incorrect semantic colors and unsaturated colors. Aiming to synthesize semantically coherent and percep- tually plausible colors, generative models have been exten- sively incorporated into relevant research. Generative adver- sarial networks (GAN) based [4, 5, 1] and autoregressive- based [6, 2, 7] methods have made notable progress. Al- though the issue of incorrect semantic colors has been par- tially addressed, significant challenges still remain. See the yellow boxes in Figure 1, the semantic errors significantly undermine the visual quality. Recently, Denoising Diffusion Probabilistic Models(DDPM) [8] has demonstrated remark- able performance in the realm of image generation. With its exceptional generation capabilities, superior level of de- tail, and extensive range of variations, DDPM has emerged as a compelling alternative to the GAN. Moreover, the con- trollable generation algorithms based on the diffusion model have achieved impressive performance in various downstream tasks such as T2I [9], image editing [10], super resolu- tion [11], etc. In this work, we leverage the powerful diffu- sion prior to synthesize plausible images that align with real- world common sense. Unfortunately, applying pre-trained diffusion models directly to this pixel-wise conditional task lead to inconsistencies [12] that do not accurately align with the original grayscale input. Therefore, it becomes imperative to provide more effective condition guidance in order to en- sure coherence and fidelity. We align the luminance channel both in the latent and pixel spaces. Specifically, our proposed image-to-image pipeline is fine-tuned based on pre-trained stable diffusion. 
The pixel-level conditions are injected into the latent space to assist the denoising U-Net in producing latent codes that are more faithful to the grayscale image. A luminance-aware decoder is applied to mitigate pixel-space distortion.

In addition to incorrect semantics, another challenge in this task is unsaturated colors. For example, the oranges in the first two columns of Figure 1 suffer from unsaturated colors. To mitigate unsaturated colors, priors such as categories [5], bounding boxes [13], and saliency maps [14] have been introduced in related research. Based on this insight, we adopt multimodal high-level semantic priors to help the model understand the image content and generate vivid colors. To simultaneously generate plausible semantics and vivid colors, multimodal priors, including category, caption, and segmentation, are injected into the generation process in a comprehensive manner.

Fig. 1. We achieve saturated and semantically plausible colorization for grayscale images, surpassing the GAN-based (BigColor [1]), transformer-based (CT2 [2]), and diffusion-based (ControlNet [3]) methods.

In summary, we propose an automatic colorization pipeline to address the challenges in this task. The contributions of this paper are as follows:

\u2022 We extend the stable diffusion model to automatic image colorization by introducing pixel-level grayscale conditions into the denoising diffusion. The pre-trained diffusion priors are employed to generate vivid and plausible colors.

\u2022 We design a high-level semantic injection module to enhance the model\u2019s capability to produce semantically reasonable colors.

\u2022 A luminance-aware decoder is designed to mitigate pixel-domain distortion and make the reconstruction more faithful to the grayscale input.

\u2022 Quantitative and qualitative experiments demonstrate that our proposed colorization pipeline provides high-fidelity, color-diversified colorization for grayscale images with complex content. A user study further indicates that our pipeline gains more human preference than other state-of-the-art methods.", "main_content": "Learning-based algorithms have been the mainstream of research on automatic colorization in recent years. Previous methods suffer from unsaturated colors and semantic confusion due to the lack of prior knowledge of color. To generate plausible colors, generative models have been applied to automatic colorization, including generative adversarial networks [4, 5, 1] and transformers [6, 2, 7]. Moreover, [15] shows that diffusion models are more creative than GANs. DDPM has achieved impressive results in diverse natural image generation, and research based on DDPM has confirmed its ability to handle a variety of downstream tasks, including colorization [16]. To alleviate semantic confusion and synthesize more satisfactory results, priors have been introduced into related research, including categories [5], saliency maps [14], bounding boxes [13], etc.

3. METHOD

3.1. Overview

A color image $y_{lab}$, represented in the CIELAB color space, contains three channels: the lightness channel $l$ and the chromatic channels $a$ and $b$. Automatic colorization aims to recover the chromatic channels from the grayscale image: $x_{gray} \rightarrow \hat{y}_{lab}$. In this work, we propose an automatic colorization pipeline for natural images based on stable diffusion. The pipeline consists of two parts: a variational autoencoder (VAE) [17] and a denoising U-Net.
The VAE handles the transformation between the pixel space $x \in \mathbb{R}^{H \times W \times 3}$ and the latent space $z \in \mathbb{R}^{h \times w \times c}$, while the denoising U-Net applies DDPM in the latent space to generate an image from Gaussian noise. The framework of our pipeline is shown in Figure 2. First, the VAE encodes the grayscale image $x_{gray}$ into the latent code $z_c$. Next, the $T$-step diffusion process generates a clean latent code $z_0$ from Gaussian noise $z_T$ under the guidance of the image latent $z_c$ and high-level semantics. Finally, $z_0$ is reconstructed by a luminance-aware decoder to obtain the color image $\hat{y}$. The pixel-level grayscale condition and the high-level semantic condition for the denoising process are introduced in the latent space, as shown in the yellow box in Figure 2. We elaborate on the injection of these conditions in Section 3.2 and Section 3.3, respectively. As for the reconstruction process, the detailed design of the luminance-aware decoder is described in Section 3.4.

Fig. 2. Overview of the proposed automatic colorization pipeline. It combines a semantic prior generator (blue box), a high-level semantic guided diffusion model (yellow box), and a luminance-aware decoder (orange box).

3.2. Colorization Diffusion Model

Large-scale diffusion models have the capability to generate high-resolution images with complex structures. However, naive usage of diffusion priors generates serious artifacts, so we introduce pixel-level luminance information to provide detailed guidance. Specifically, we use the encoded grayscale image $z_c$ as the control condition to enhance the U-Net\u2019s understanding of luminance information in the latent space. To involve the grayscale condition in the entire diffusion process, at each time step $t$ we feed both the latent code $z_t$ generated in the previous time step and the noise-free grayscale latent code $z_c$ into the input layer of the U-Net:

$z'_t = \mathrm{conv}_{1 \times 1}(\mathrm{concat}(z_t, z_c))$ (1)

In this way, we take advantage of the powerful generative capabilities of stable diffusion while preserving the grayscale condition. The loss function for our denoising U-Net is defined in a similar way to stable diffusion [18]:

$L = \mathbb{E}_{z, z_c, c, \epsilon \sim \mathcal{N}(0,1), t}\left[ \lVert \epsilon - \epsilon_\theta(z_t, t, z_c, c) \rVert_2^2 \right]$ (2)

where $z$ is the encoded color image, $z_c$ is the encoded grayscale image, $c$ is the category embedding, $\epsilon$ is a noise term, $t$ is the time step, $\epsilon_\theta$ is the denoising U-Net, and $z_t$ is the noisy version of $z$ at time step $t$.
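To make the grayscale conditioning concrete, below is a minimal PyTorch sketch of the injection in Eq. (1) and the training objective in Eq. (2). It assumes a diffusers-style pre-trained U-Net and noise scheduler supplied by the caller; the class and function names, channel count, and argument layout are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GrayscaleConditionInjection(nn.Module):
    """Sketch of Eq. (1): fuse the noisy latent z_t with the clean
    grayscale latent z_c before the denoising U-Net."""

    def __init__(self, latent_channels: int = 4):
        super().__init__()
        # A 1x1 conv maps the concatenated (z_t, z_c) back to the channel
        # count expected by the pre-trained U-Net, so its weights can be reused.
        self.fuse = nn.Conv2d(2 * latent_channels, latent_channels, kernel_size=1)

    def forward(self, z_t: torch.Tensor, z_c: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([z_t, z_c], dim=1))


def denoising_loss(unet, inject, z_0, z_c, c, t, scheduler):
    """Sketch of Eq. (2): predict the noise added to the color latent z_0,
    given the grayscale latent z_c and the semantic embedding c.
    `unet` and `scheduler` are assumed to follow a diffusers-style interface."""
    noise = torch.randn_like(z_0)
    z_t = scheduler.add_noise(z_0, noise, t)            # noisy latent at step t
    z_t_cond = inject(z_t, z_c)                         # Eq. (1)
    noise_pred = unet(z_t_cond, t, encoder_hidden_states=c).sample
    return F.mse_loss(noise_pred, noise)                # Eq. (2)
```

Because the fusion happens at the U-Net input, the grayscale condition participates in every denoising step without architectural changes to the pre-trained backbone.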
3.3. High-level Semantic Guidance

To alleviate semantic confusion and generate vivid colors, we design a high-level semantic guidance module for inference. As shown in Figure 2, the multimodal semantics are generated by the pre-trained semantic generator in the blue box. Afterwards, the text and segmentation priors are injected into the inference process through cross attention and segmentation guidance, respectively, as shown in the yellow box in Figure 2.

Specifically, given the grayscale image $x_{gray}$, the semantic generator produces the corresponding categories [19], captions [20], and segmentations [21]. The category, caption, and segmentation labels are in textual form, while the segmentation masks are binary masks. For the textual priors, the CLIP [22] encoder is employed to generate the text embedding $c_t$, which guides the denoising U-Net via the cross-attention mechanism. Given the time step $t$, the concatenated noisy input $z_t$, and the text condition $c_t$, the latent code $z_{t-1}$ is produced by the Colorization Diffusion Model (CDM):

$z_{t-1} = \mathrm{CDM}(z_t, t, z_c, c_t)$ (3)

For the segmentation priors, we use the pre-trained transfiner [21] to generate paired segmentation masks $M$ and labels $L$. For each instance, we first resize the binary mask $M_i \in \mathbb{R}^{H \times W \times 1}$ to align with the latent space; the resized mask is denoted $\bar{M}_i \in \mathbb{R}^{h \times w \times 1}$. Then we use the CDM to yield the corresponding latent code $z^i_{t-1}$ of the masked region:

$z^i_{t-1} = \mathrm{CDM}(z_t, t, z_c \times \bar{M}_i, L_i)$ (4)

Finally, we combine the original latent code $z_{t-1}$ and the instances to yield the segment-aware latent code $\hat{z}_{t-1}$:

$\hat{z}_{t-1} = \sum_{i=1}^{k} \left[ z_{t-1} \times (1 - \bar{M}_i) + z^i_{t-1} \times \bar{M}_i \right]$ (5)

We set a coefficient $s \in [0, 1]$ to control the strength of the segmentation guidance and define the threshold $T_{th} = T \times (1 - s)$. The segmentation masks guide the synthesis process at inference time steps $t > T_{th}$. We set $s = 0.3$ in our experiments; users have the flexibility to select a different value based on their preferences.
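The following sketch illustrates how one reverse step could apply Eqs. (3)-(5); it is a reading of the text above rather than the authors' implementation. The CDM is abstracted as a callable that returns $z_{t-1}$, the masks are assumed to be already resized to the latent resolution, and the guidance coefficient is named `strength` here to avoid clashing with the instance index $i$. The summation in Eq. (5) is realized as sequential compositing of each instance latent into the global latent, which coincides with the formula for a single instance and is the natural reading for multiple non-overlapping masks.

```python
import torch


def segmentation_guided_step(cdm, z_t: torch.Tensor, t: int, z_c: torch.Tensor,
                             c_text, masks, labels, total_steps: int = 50,
                             strength: float = 0.3) -> torch.Tensor:
    """One reverse-diffusion step with optional segmentation guidance.

    cdm(z_t, t, z_c, cond) is assumed to return z_{t-1} (Eq. (3)).
    masks:  binary tensors of shape (1, 1, h, w) at the latent resolution.
    labels: per-instance text conditions (the L_i in Eq. (4)).
    """
    # Eq. (3): global step conditioned on the full grayscale latent.
    z_prev = cdm(z_t, t, z_c, c_text)

    # Guidance is only active while t > T_th = T * (1 - s).
    t_threshold = total_steps * (1 - strength)
    if t <= t_threshold or not masks:
        return z_prev

    z_hat = z_prev
    for mask, label in zip(masks, labels):
        m = mask.to(z_prev.dtype)
        # Eq. (4): instance-level step on the masked grayscale latent.
        z_prev_i = cdm(z_t, t, z_c * m, label)
        # Eq. (5): paste the instance latent into its masked region.
        z_hat = z_hat * (1 - m) + z_prev_i * m
    return z_hat
```

A smaller strength value confines the instance-level guidance to the earliest, noisiest sampling steps, while the remaining steps refine the composed latent globally.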
3.4. Luminance-aware Decoder

As the downsampling to the latent space inevitably loses detailed structures and textures, we apply the luminance condition to the reconstruction process and propose a luminance-aware decoder. To keep the latent space aligned with stable diffusion, we freeze the encoder. The intermediate grayscale features obtained in the encoder are added to the decoder through skip connections. Specifically, the intermediate features $f^i_{down}$ generated by the first three downsample layers of the encoder are extracted. These features are convolved, weighted, and added to the corresponding upsample layers of the decoder:

$\hat{f}^j_{up} = f^j_{up} + \alpha_i \cdot \mathrm{conv}(f^i_{down}), \quad i = 0, 1, 2; \; j = 3, 2, 1$ (6)

We adopt an L2 loss $L_2$ and a perceptual loss [23] $L_p$ to train the luminance-aware decoder:

$L = L_2 + \lambda_p L_p$ (7)
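A possible PyTorch sketch of the skip-connection fusion in Eq. (6) is given below. The channel counts, the 3x3 kernel size, the pairing of encoder and decoder levels, and the choice to make the weights $\alpha_i$ learnable are assumptions made for illustration; the paper does not specify these details.

```python
import torch
import torch.nn as nn


class LuminanceSkipFusion(nn.Module):
    """Sketch of Eq. (6): add convolved, weighted grayscale encoder features
    f^i_down to the matching decoder upsample features f^j_up."""

    def __init__(self, skip_channels=(128, 256, 512)):
        super().__init__()
        # One conv per skip connection (i = 0, 1, 2); we assume the paired
        # decoder feature has the same channel count and spatial size as the
        # encoder feature it is fused with.
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, kernel_size=3, padding=1) for c in skip_channels
        )
        # Fusion weights alpha_i (treated as learnable in this sketch).
        self.alphas = nn.Parameter(torch.full((len(skip_channels),), 0.1))

    def forward(self, f_down: list, f_up: list) -> list:
        # f_down: features from the first three encoder downsample layers
        #         (i = 0, 1, 2); f_up: the corresponding decoder features,
        #         ordered so that f_up[i] pairs with f_down[i] (j = 3, 2, 1).
        return [fu + self.alphas[i] * self.convs[i](fd)
                for i, (fd, fu) in enumerate(zip(f_down, f_up))]
```

Since the encoder is frozen to keep the latent space aligned with stable diffusion, only the decoder, these fusion convolutions, and the weights $\alpha_i$ would be updated by the loss in Eq. (7).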
4. EXPERIMENT

4.1. Implementation

We train the denoising U-Net and the luminance-aware decoder separately. First, we train the denoising U-Net on the ImageNet [24] training set at a resolution of 512 \u00d7 512. The U-Net is initialized with the pre-trained weights of [18]. The learning rate is fixed at 5e-5. We use the classifier-free guidance [25] strategy and set the conditioning dropout probability to 0.05. The model is updated for 20K iterations with a batch size of 16. We then train the luminance-aware decoder on the same dataset at the same resolution. The VAE is initialized with the pre-trained weights of [18]. We fix the learning rate at 1e-4 for 22,500 steps with a batch size of 1, and set the parameter $\lambda_p$ in Eq. (7) to 0.1. Our tests are conducted on the COCO-Stuff [26] val set, which contains 5,000 images of complex scenes. At inference, we adopt the DDIM sampler [27] and set the number of inference time steps to $T = 50$. All experiments are conducted on a single Nvidia GeForce RTX 3090 GPU.

4.2. Comparisons

We compare with 6 state-of-the-art automatic colorization methods covering 3 types: 1) GAN-based methods: InstColor [13], ChromaGAN [5], BigColor [1]; 2) transformer-based methods: ColTran [6], CT2 [2]; 3) a diffusion-based method: ControlNet [3].

Qualitative Comparison. We show visual comparison results in Figure 3. The images in the first and second rows indicate the ability of the models to synthesize vivid colors: both the GAN-based and transformer-based algorithms suffer from unsaturated colors, and although ControlNet synthesizes saturated colors, the marked areas contain significant artifacts. The images in the third and fourth rows demonstrate the ability of the models to synthesize semantically reasonable colors. InstColor, ChromaGAN, BigColor, CT2, and ControlNet fail to maintain the color continuity of the same object (discontinuity of colors between the head and tail of the train, and between the hands and shoulders of the girl), while ColTran yields colors that defy common sense (blue shadows and blue hands). In summary, our method provides vivid and semantically reasonable colorization results.

Fig. 3. Qualitative comparisons among InstColor [13], ChromaGAN [5], BigColor [1], ColTran [6], CT2 [2], ControlNet [3], and Ours. More results are provided at https://servuskk.github.io/ColorDiff-Image/.

User Study. To reflect human preferences, we randomly select 15 images from the COCO-Stuff val set for a user study. For each image, the 7 results and the ground truth are displayed to the user in random order. We asked 18 participants to choose their top three favorites. Figure 4 shows the proportion of Top-1 votes received by each method. Our method has a vote rate of 22.59%, significantly outperforming the other methods.

Fig. 4. User evaluations.

Quantitative Comparison. We use Fr\u00e9chet Inception Distance (FID) and colorfulness [28] to evaluate image quality and vividness; both metrics have recently been used to evaluate colorization algorithms [1, 29]. Considering that colorization is an ill-posed problem, the ground-truth-dependent metric PSNR used in previous works does not accurately reflect the quality of image and color generation [6, 29, 30], so the PSNR comparison is provided for reference only. As shown in Table 1, our proposed method demonstrates superior performance in terms of FID compared to the state-of-the-art algorithms. Even though ControlNet outperforms our algorithm on the colorfulness metric, the qualitative comparison indicates that its artifacts are meaningless and negatively affect the visual quality of the image.

Table 1. Quantitative comparison results.
Method           FID\u2193   Colorful\u2191   PSNR\u2191
InstColor [13]   14.40    27.00        23.85
ChromaGAN [5]    27.46    27.06        23.20
BigColor [1]     10.24    39.65        20.86
ColTran [6]      15.06    34.31        22.02
CT2 [2]          25.87    39.64        22.80
ControlNet [3]   10.86    45.09        19.95
Ours             9.799    41.54        21.02

4.3. Ablation Studies

The significance of the main components of the proposed method is discussed in this section. The quantitative and visual comparisons are presented in Table 2 and Figure 5.

Table 2. Quantitative comparison of ablation studies.
Exp.   Luminance-aware decoder   High-level guidance   FID\u2193   Colorful\u2191
(a)    \u2713                         -                     10.05    33.73
(b)    -                         \u2713                     9.917    42.55
Ours   \u2713                         \u2713                     9.799    41.54

Fig. 5. Visual comparison from ablation studies: (a) high-level guidance (w/o semantic vs. ours); (b) luminance-aware decoder (w/o luminance vs. ours).

High-level Semantic Guidance. We discuss the impact of the high-level semantic guidance on model performance. The visuals shown in Figure 5(a) demonstrate that our high-level guidance improves the saturation of the synthesized colors and mitigates failures caused by semantic confusion. The quantitative scores in Table 2 confirm the significant improvement in both color vividness and perceptual quality introduced by the high-level semantic guidance.

Luminance-aware Decoder. The pipeline equipped with the luminance-aware decoder facilitates the generation of cognitively plausible colors. As shown in the first row of Figure 5(b), the artifacts are suppressed. Furthermore, incorporating this decoder has a positive impact on the recovery of image details, as demonstrated by the successful reconstruction of the textual elements in the second row of Figure 5(b). Consequently, the full model outperforms the alternative in terms of FID. The slight decrease in the colorfulness score after incorporating luminance awareness can be attributed to the suppression of outliers, as discussed in the analysis of ControlNet in Section 4.2.

5. CONCLUSION

In this study, we introduce a novel automatic colorization pipeline that harmoniously combines color diversity with fidelity. It generates plausible and saturated colors by leveraging powerful diffusion priors with the proposed luminance and high-level semantic guidance. In addition, we design a luminance-aware decoder to restore image details and improve color plausibility. Experiments demonstrate that the proposed pipeline outperforms previous methods in terms of perceptual realism and attains the highest human preference compared to other algorithms.

6. ACKNOWLEDGEMENT

This work was supported by the National Key R&D Project of China (2019YFB1802701), the MoE-China Mobile Research Fund Project (MCM20180702), and the Fundamental Research Funds for the Central Universities; in part by the 111 Project under Grant B07022 and Sheitc No. 150633; and in part by the Shanghai Key Laboratory of Digital Media Processing and Transmissions." }