diff --git "a/abs_29K_G/test_abstract_long_2405.03008v1.json" "b/abs_29K_G/test_abstract_long_2405.03008v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.03008v1.json" @@ -0,0 +1,144 @@ +{ + "url": "http://arxiv.org/abs/2405.03008v1", + "title": "DVMSR: Distillated Vision Mamba for Efficient Super-Resolution", + "abstract": "Efficient Image Super-Resolution (SR) aims to accelerate SR network inference\nby minimizing computational complexity and network parameters while preserving\nperformance. Existing state-of-the-art Efficient Image Super-Resolution methods\nare based on convolutional neural networks. Few attempts have been made with\nMamba to harness its long-range modeling capability and efficient computational\ncomplexity, which have shown impressive performance on high-level vision tasks.\nIn this paper, we propose DVMSR, a novel lightweight Image SR network that\nincorporates Vision Mamba and a distillation strategy. The network of DVMSR\nconsists of three modules: feature extraction convolution, multiple stacked\nResidual State Space Blocks (RSSBs), and a reconstruction module. Specifically,\nthe deep feature extraction module is composed of several residual state space\nblocks (RSSB), each of which has several Vision Mamba Moudles(ViMM) together\nwith a residual connection. To achieve efficiency improvement while maintaining\ncomparable performance, we employ a distillation strategy to the vision Mamba\nnetwork for superior performance. Specifically, we leverage the rich\nrepresentation knowledge of teacher network as additional supervision for the\noutput of lightweight student networks. Extensive experiments have demonstrated\nthat our proposed DVMSR can outperform state-of-the-art efficient SR methods in\nterms of model parameters while maintaining the performance of both PSNR and\nSSIM. The source code is available at https://github.com/nathan66666/DVMSR.git", + "authors": "Xiaoyan Lei, Wenlong ZHang, Weifeng Cao", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "Efficient Image Super-Resolution (SR) aims to accelerate SR network inference\nby minimizing computational complexity and network parameters while preserving\nperformance. Existing state-of-the-art Efficient Image Super-Resolution methods\nare based on convolutional neural networks. Few attempts have been made with\nMamba to harness its long-range modeling capability and efficient computational\ncomplexity, which have shown impressive performance on high-level vision tasks.\nIn this paper, we propose DVMSR, a novel lightweight Image SR network that\nincorporates Vision Mamba and a distillation strategy. The network of DVMSR\nconsists of three modules: feature extraction convolution, multiple stacked\nResidual State Space Blocks (RSSBs), and a reconstruction module. Specifically,\nthe deep feature extraction module is composed of several residual state space\nblocks (RSSB), each of which has several Vision Mamba Moudles(ViMM) together\nwith a residual connection. To achieve efficiency improvement while maintaining\ncomparable performance, we employ a distillation strategy to the vision Mamba\nnetwork for superior performance. Specifically, we leverage the rich\nrepresentation knowledge of teacher network as additional supervision for the\noutput of lightweight student networks. 
Extensive experiments have demonstrated\nthat our proposed DVMSR can outperform state-of-the-art efficient SR methods in\nterms of model parameters while maintaining the performance of both PSNR and\nSSIM. The source code is available at https://github.com/nathan66666/DVMSR.git", + "main_content": "Introduction Single image super-resolution (SR) is a key challenge in computer vision and image processing, aiming to reconstruct a high-resolution image from a low-resolution input. Effective super-resolution aims to improve the efficiency of the SR model while maintaining reconstruction performance. Figure 1. PSNR results vs. the total number of parameters of different methods for image SR on Set5. Since the introduction of deep learning into super-resolution tasks [18], many CNN-based methods have been proposed [16, 20, 21, 46, 47, 51, 63] to improve the performance. A series of approaches [20, 37, 39, 46, 47, 50, 53, 67, 114] have been proposed for building efficient models for image SR. The majority of these efficient models focus on five factors: runtime, parameters, FLOPS, activations, and depths. To further promote the development of efficient SR, ICCV held the first competition in the AIM 2019 challenge [122]. The information multi-distillation network (IMDN) [39] proposes cascaded information multi-distillation blocks to improve the feature extraction module, which won first place in this competition. After that, the winning solution of the AIM 2020 challenge [124], the residual feature distillation network (RFDN) [67], further improves IMDN through residual learning in the main block. In the efficient SR track of the NTIRE 2022 challenge [45], the winning solution, the residual local feature network (RLFN) [50], removes the hierarchical distillation connection of the residual feature distillation block (RFDB) [67] to reduce the inference time. In the efficient SR track of the NTIRE 2023 challenge, the winning solution [114] utilizes a multi-stage lightweight training strategy that combines distillation and pruning to reduce both time consumption and model size. The Transformer model, initially successful in natural language processing [100], has attracted interest from the computer vision community. Its effectiveness in high-level visual tasks (e.g., image classification [22, 72, 103]) has demonstrated its potential for super-resolution [12, 64]. Recently, Mamba [24] has demonstrated superior performance over Transformers across various sizes on large-scale real data and exhibits linear scalability with sequence length. Despite pioneering works adopting Mamba for vision tasks [24, 85, 112], it is still in its initial stages of exploring its potential (e.g., long-range modeling capability and efficiency) in low-level vision. Different from the CNN-based and transformer-based methods, our goal is to explore the long-range modeling capability and efficiency of Mamba-based methods for efficient SR. In this paper, we employ Vision Mamba as the basic architecture to enhance the model\u2019s long-range modeling capability and efficiency. Our DVMSR consists of several stacked Residual State Space Blocks (RSSB), each containing several Vision Mamba Modules (ViMM). The ViMM includes a unidirectional SSM, a residual connection, and a SiLU activation function. These elements work together to accelerate model convergence and enhance model accuracy and efficiency. As shown in Figure 2, our method can achieve a larger perception range compared with other methods. 
Furthermore, we utilize a distillation strategy to enhance the model\u2019s efficiency. We introduce a Mamba network with a larger number of parameters as the teacher network to extract knowledge for the learning of the student network. Extensive experiments and ablation studies have shown the effectiveness of our proposed method. Our contributions can be summarized as follows: 1. By leveraging the long-range modeling capability of Vision Mamba, we propose a lightweight model with unidirectional state space models (SSM) for efficient superresolution. 2. We propose a special feature distillation strategy to enhance the efficiency ability of vision mamba for efficient super-resolution. 3. Extensive experiments have shown that our proposed method outperforms existing state-of-the-art (SOTA) methods in terms of parameters while maintaining comparable PSNR and SSIM performance. 2. Related Work 2.1. Lightweight Super Resolution SRCNN [18] marks the inaugural application of deep learning algorithms in the Single Image Super-Resolution (SISR) [11, 12]. A series of works have been explored to apply the SR method in real scenarios, such as GAN-based SR [56, 128? , 129], degradation model [107, 126, 130], multi-task learning [132] and systematic evaluation [131]. In real-world SR model deployments, the computing power of the deployed devices is often limited, such as edge devices, etc. In this case, the efficiency of the SR network becomes an important aspect. Efficient Image SuperResolution aims to reduce the computational effort and parameters of the SR network while achieving faster inference times and maintaining high performance. FSRCNN [20] reduces unnecessary computational costs by utilizing the deconvolution layer as the upsampling layer. VDSR [47] is introduced to further improve super-resolution (SR) performance. DRCN [46] achieves parameter reduction through deep recursive convolutional networks. LapSRN [53] employs a Laplacian pyramid super-resolution block for HR image reconstruction. DRRN [91] employs recursive and residual network architectures, surpassing DRCN in both performance and parameter reduction. MemNet [92] introduces a memory block to explicitly model long-term dependencies in CNN-based SR models. IDN [37] explicitly divides the preceding extracted features into two parts. IMDN [39] introduces a lightweight Information MultiDistillation Network by constructing cascaded Information Multi-Distillation Blocks. RFDN [67] proposes the residual feature distillation network. RLFN [50] improves its speed by eliminating hierarchical distillation connections. DIPNet [114] introduces the Reparameterization Residual Feature Block, which explores the potential of complex structures during optimization while maintaining computational efficiency. Besides, they achieve first place in the NTIRE 2023 Efficient Super-Resolution Challenge [60]. 2.2. State space models in Vision Recent researches have led to a surge of interest in the state space model (SSM), which has its origins in the classic Kalman filter model [44]. The linear scalability of State Space Models (SSMs) in handling long-range dependencies, exemplified by the Mamba architecture [24], contrasts with Transformers. While Mamba outperforms Transformers in natural language tasks, recent research endeavors extend its applicability to vision tasks. Specifically, Mamba models are designed to capture long-range temporal dependencies in video data, enhancing video classification performance [41, 42, 80, 102]. 
Additionally, other works explore Mamba\u2019s applicability in vision tasks, including image classification [71, 139], biomedical image segmentation [73], remote sensing image classification [9], and Multimodal Learning [85]. The research conducted by [26] emphasizes Mamba\u2019s utility as a straightforward and efficient baseline for image restoration in low-level vision tasks. Our work extends this by proposing a novel network architecture that combines Mamba with distillation, achieving a tradeoff between super-resolution quality and computational ef\fficiency. 2.3. Feature Distillation Knowledge distillation stands out as a straightforward yet powerful technique for enhancing the performance of smaller models, a necessity driven by the limited computing power of deployed devices. This method involves training a smaller network (student) under the guidance of a larger network (teacher), enabling effective knowledge transfer. Unlike other compression methods, knowledge distillation can reduce network size regardless of structural differences between the teacher and student networks. The seminal work by [31] introduced the knowledge distillation (KD) method, utilizing the softmax output of the teacher network. Notably, this method can be applied across various network architectures due to matching output dimensions. Over time, intermediate layer distillation methods have emerged, leveraging insights from the teacher network\u2019s convolutional or penultimate layers, preserving crucial feature-map localities [1, 31, 48, 115]. Moreover, there exists a wealth of research integrating distillation techniques into super-resolution tasks [38, 40, 68, 108, 138]. In this paper, we focus on adopting the output feature map of a pre-trained model as the distillation target. Through extensive experimentation, we demonstrate the effectiveness of our approach in enhancing model performance. 3. Methodology 3.1. Motivation Efficient Super Resolution (SR) is designed to transform low-quality images into high-quality counterparts, leveraging a small parameter set and minimal computational power. ESR predominantly relies on CNNs for local feature extraction, but their limited long-range modeling hinders performance. Transformers, while proficient in global context, introduce computational complexities. Mamba excels in high-level vision tasks, supported by prior research [9, 71, 73, 85, 112, 139]. Motivated by Mamba\u2019s long-range modeling capabilities, we investigate its performance in super-resolution (SR) tasks, comparing it to CNN-based ESR methods [39, 67, 114] and transformerbased method [64]. To elucidate Mamba\u2019s operational mechanisms, we employe a specialized diagnostic tool called LAM [13], designed specifically for SR tasks. Utilizing LAM enabled us to pinpoint the input pixels that contribute most significantly to the selected region. As depicted in Figure 2, the red-marked points denote informative pixels crucial for the reconstruction process. Notably, DVMSR exhibited a notably higher DI (Diffusion index) indication compared to other models, indicating its superior ability to leverage a broader range of pixel information and affirming its exceptional long-range modeling capability. The proposed DVMSR yields improved image details during the reconstruction process, thereby substantiating its efficacy for super-resolution tasks. 3.2. Preliminaries State space models (SSMs), such as the Mamba deep learning model, hold potential for long sequence modeling. 
Inspired by continuous systems, SSMs map a 1-D function or sequence x(t) \u2208R \u21a6y(t) \u2208R via a hidden state h(t) \u2208RN. The formulation is as follows: h\u2032(t) = Ah(t) + Bx(t), y(t) = Ch(t). (1) where N is the state size, A \u2208RN\u00d7N, B \u2208RN\u00d71, C \u2208 R1\u00d7N. Mamba is a discrete version of the continuous system: it utilizes a step size \u2206 to convert the continuous parameters A and B into their discrete counterparts, \u00af A and \u00af B. The commonly used method for this transformation is the zero-order hold (ZOH), which is defined as follows: \u00af A = exp(\u2206A), \u00af B = (\u2206A)\u22121(exp(\u2206A) \u2212I) \u00b7 \u2206B. (2) After the discretization of \u00af A and \u00af B, the discretized version of Eq. 1 with step size \u2206 can be rewritten as: ht = \u00af Aht\u22121 + \u00af Bxt, yt = Cht. (3) 3.3 Overall network architecture The overall network architecture of our proposed DVMSR is depicted in Figure 3. Our DVMSR consists of three main modules: feature extraction convolution, multiple stacked Residual State Space Blocks (RSSBs), and a reconstruction module. Specifically, for a given low-resolution (LR) input ILR \u2208RH\u00d7W \u00d7Cin, we exploit one convolution layer to extract the first feature F0 \u2208 RH\u00d7W \u00d7C, where Cin and C denote the channel numbers of the input and the intermediate feature. Then, a series of Residual State Space Blocks (RSSBs) and one 3 \u00d7 3 convolution layer HConv(\u00b7) are utilized to perform the deep feature extraction. After that, we add a global residual connection to fuse the shallow features F0 and the deep features FD \u2208RH\u00d7W \u00d7C, and then reconstruct the high-resolution result via a reconstruction module. As depicted in Figure 3, each RSSB contains two Vision Mamba Modules (ViMM) and a 3 \u00d7 3 convolution layer with a residual connection. For the reconstruction module, the pixel-shuffle method is adopted to up-sample the fused feature. \fFigure 2. The LAM results are provided for various networks including both CNN-based and transformer-based methods. LAM attribution indicates the significance of each pixel in the input LR image during the reconstruction process of the patch highlighted by a box. The Diffusion Index (DI) denotes the extent of pixel involvement. A higher DI indicates a broader range of utilized pixels. Figure 3. The overall network architecture of our DVMSR. Figure 4. The structure of the Vision Mamba Module (ViMM). 3.3.1 Mamba network The design of the Mamba network, the Vision Mamba Module (ViMM), is shown in Figure 4; it uses unidirectional sequence modeling. The input token sequence X \u2208 RH\u00d7W \u00d7C is first normalized by the normalization layer. Next, we linearly project the normalized sequence, expanding the feature channels to \u03bbC. The projected features are then processed by a 1-D convolution followed by the SSM to compute X1. X1 is gated by a parallel projection branch, and a residual connection produces the output token sequence Xout \u2208RH\u00d7W \u00d7C, as follows: X1 = SSM(Conv1d(Linear(LN(X)))), X2 = SiLU(Linear(LN(X))), Xout = Linear(X1 \u2299X2) + X. (4) where LN is layer normalization and \u2299 denotes the Hadamard product. 3.3.2 Distillation strategy Our method introduces a deep feature distillation strategy (Fig. 5). During the distillation stage, the teacher network accumulates rich representation knowledge, maintaining a fixed state. 
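To make the module design above concrete, the following is a minimal PyTorch sketch of the ViMM forward pass in Eq. (4). The selective state-space scan is abstracted behind an `ssm_fn` callable (the released DVMSR code presumably plugs in the Mamba selective-scan kernel there), and the layer names, expansion ratio, and convolution kernel size are illustrative assumptions rather than the official implementation.

```python
import torch
import torch.nn as nn

class ViMM(nn.Module):
    """Minimal sketch of the Vision Mamba Module forward pass in Eq. (4).

    The selective scan is abstracted as `ssm_fn`; the expansion ratio `lam`
    and the depthwise kernel size are illustrative assumptions.
    """

    def __init__(self, dim: int, lam: int = 2, ssm_fn=None):
        super().__init__()
        hidden = lam * dim
        self.norm = nn.LayerNorm(dim)
        self.in_proj_x = nn.Linear(dim, hidden)   # branch producing X1
        self.in_proj_z = nn.Linear(dim, hidden)   # gating branch producing X2
        self.conv1d = nn.Conv1d(hidden, hidden, kernel_size=3,
                                padding=1, groups=hidden)
        self.act = nn.SiLU()
        self.out_proj = nn.Linear(hidden, dim)
        # Fallback: identity "SSM" so the sketch runs without Mamba kernels.
        self.ssm_fn = ssm_fn if ssm_fn is not None else (lambda t: t)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, L, C) token sequence flattened from an H x W feature map.
        residual = x
        x_norm = self.norm(x)

        # X1 = SSM(Conv1d(Linear(LN(X))))
        x1 = self.in_proj_x(x_norm).transpose(1, 2)    # (B, hidden, L)
        x1 = self.conv1d(x1).transpose(1, 2)           # (B, L, hidden)
        x1 = self.ssm_fn(x1)

        # X2 = SiLU(Linear(LN(X)))
        x2 = self.act(self.in_proj_z(x_norm))

        # X_out = Linear(X1 (*) X2) + X
        return self.out_proj(x1 * x2) + residual


if __name__ == "__main__":
    vimm = ViMM(dim=60)                    # 60 channels as in the student network
    tokens = torch.randn(1, 64 * 64, 60)   # flattened 64 x 64 feature map
    print(vimm(tokens).shape)              # torch.Size([1, 4096, 60])
```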
By minimizing the L1 loss, we ensure alignment between student network features and those of the teacher. This formal process facilitates effective knowledge transfer from the teacher to the student network: \fFigure 5. The deep feature distillation pipeline of our method. Lout = \u03bbdisLdis + \u03bb1L1, Ldis = \u2225T (ILR) \u2212S(ILR)\u22251 , L1 = \u2225IHR \u2212S(ILR)\u22251 , (5) where \u03bbdis and \u03bb1 represents the coefficient of the Ldis loss function and the coefficient of the L1 loss function, respectively. They are set 1. T represents the function of our teacher network and S denotes the function of our proposed network. ILR and IHR are the input LR images and the corresponding ground-truth HR images, respectively. More information of Ldis can be seen from Fig.6. 4. Experiments 4.1. Datasets and metrics In this paper, DF2K (DIV2K + Flickr2K) [98] with 3450 images are used for training the proposed model from scratch. During testing, we select five standard benchmark datasets: Set5 [7], Set14 [117], BSD100 [75], Urban100 [36] and Manga109 [76]. The low-resolution images are generated from the ground truth images by the \u201cbicubic\u201d downsampling in MATLAB. PSNR/SSIM measured by discarding a 4-pixel boundary around the images, and calculated on the Y channel is reported for the quantitative metrics. 4.2. Implementation details During training, we set the input patch size to 256 \u00d7 256 and use random rotation and horizontal flipping for data augmentation. The batch size is set to 128 and the total number of iterations is 500k. The initial learning rate is set to 2 \u00d7 10\u22124. We adopt a multi-step learning rate strategy, where the learning rate will be halved when the iteration reaches 250000, 400000, 450000, and 475000, respectively. Adam optimizer with \u03b21 = 0.9 and \u03b22 = 0.99 is used to train the model. Distillation training. In the teacher learning phase, we utilize the DF2K dataset with 2K resolution to train the teacher network, which comprises 8 RSSB and 2 ViMM blocks with 192 channels. During the distillation training phase, we use DF2K datasets for the student network, which contains 4 RSSB and 2 ViMM blocks with 60 channels. 4.3. Comparison with State-of-the-art SR models We compare DVMSR with several advanced efficient superresolution model [2, 18, 20, 37, 39, 46, 47, 50, 53, 67, 91, 92, 114, 120]. The quantitative performance comparison on several benchmark datasets [7, 36, 75, 76, 117] is indicated in Table 1. Our experimental results showcase our ability to achieve smaller parameter counts while surpassing several previous methods on five benchmark datasets. Specifically, we attained higher SSIM scores on Set5, Set14, and BSD100. It\u2019s important to note that SSIM scores serve as a crucial metric, indicating how effectively our model preserves the structure and content of the images, ultimately resulting in reconstructions that closely resemble the original images. Additionally, we observed that PSNR values remain comparable across these five datasets. This comprehensive evaluation underscores the effectiveness of our approach in enhancing image quality while maintaining efficiency, making it a promising solution for various image enhancement tasks. It\u2019s worth emphasizing that in our current study, we directly utilize the final model architecture employed in the NTIRE competition. Remarkably, we manage to maintain excellent performance without unnecessarily inflating the parameter count. 
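As a reference for the objective in Eq. (5) and the reported optimizer settings above, a hedged sketch follows. The function names are illustrative, and the distillation targets are assumed to be the end-level deep features of the teacher and student (the variant favored later in Sec. 4.4.2); both loss weights are 1 as stated in the text.

```python
import torch
import torch.nn as nn

def dvmsr_loss(sr, hr, feat_student, feat_teacher,
               lambda_dis: float = 1.0, lambda_1: float = 1.0):
    """Training objective of Eq. (5): L_out = lambda_dis * L_dis + lambda_1 * L_1.

    feat_student / feat_teacher play the roles of S(I_LR) and T(I_LR); here they
    are assumed to be end-level deep features, with the teacher kept frozen.
    """
    l1 = nn.L1Loss()
    l_dis = l1(feat_teacher.detach(), feat_student)   # teacher provides fixed targets
    l_rec = l1(hr, sr)                                # pixel-wise L1 to ground truth
    return lambda_dis * l_dis + lambda_1 * l_rec


def build_optimizer(student: nn.Module):
    """Optimizer/schedule matching the reported settings: Adam(0.9, 0.99),
    initial lr 2e-4, halved at 250k/400k/450k/475k over 500k iterations."""
    optimizer = torch.optim.Adam(student.parameters(), lr=2e-4, betas=(0.9, 0.99))
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[250_000, 400_000, 450_000, 475_000], gamma=0.5)
    return optimizer, scheduler
```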
This strategic decision underscores our commitment to efficiency and effectiveness in model design, ensuring that our approach remains practical and scalable for real-world applications. Model complexity comparisons between SwinIR and DVMSR. Our investigation focuses on Mamba\u2019s performance in super-resolution (SR) tasks. In Fig. 2, we show the excellent long-range modeling capabilities of our \fTable 1. Average PSNR/SSIM for scale factor 4 on datasets Set5, Set14, BSD100, Urban100, and Manga109. The best and second best results are highlighted in red and blue respectively. Method Params Set5 Set14 BSD100 Urban100 Manga109 PSNR/SSIM PSNR/SSIM PSNR/SSIM PSNR/SSIM PSNR/SSIM Bicubic 28.42/0.8104 26.00/0.7027 25.96/0.6675 23.14/0.6577 24.89/0.7866 SRCNN [18] 8K 30.48/0.8626 27.50/0.7513 26.90/0.7101 24.52/0.7221 27.58/0.8555 FSRCNN [20] 13K 30.72/0.8660 27.61/0.7550 26.98/0.7150 24.62/0.7280 27.90/0.8610 VDSR [47] 666K 31.35/0.8838 28.01/0.7674 27.29/0.7251 25.18/0.7524 28.83/0.8870 DRCN [46] 1774K 31.53/0.8854 28.02/0.7670 27.23/0.7233 25.14/0.7510 28.93/0.8854 LapSRN [53] 502K 31.54/0.8852 28.09/0.7700 27.32/0.7275 25.21/0.7562 29.09/0.8900 DRRN [91] 298K 31.68/0.8888 28.21/0.7720 27.38/0.7284 25.44/0.7638 29.45/0.8946 MemNet [92] 678K 31.74/0.8893 28.26/0.7723 27.40/0.7281 25.50/0.7630 29.42/0.8942 IDN [37] 553K 31.82/0.8903 28.25/0.7730 27.41/0.7297 25.41/0.7632 29.41/0.8942 SRMDNF [120] 1552K 31.96/0.8925 28.35/0.7787 27.49/0.7337 25.68/0.7731 30.09/0.9024 CARN [2] 1592K 32.13/0.8937 28.60/0.7806 27.58/0.7349 26.07/0.7837 30.47/0.9084 IMDN [39] 715K 32.21/0.8948 28.58/0.7811 27.56/0.7353 26.04/0.7838 30.45/0.9075 RFDN [67] 550K 32.24/0.8952 28.61/0.7819 27.57/0.7360 26.11/0.7858 30.58/0.9089 RLFN [50] 543K 32.24/0.8952 28.62/0.7813 27.60/0.7364 26.17/0.7877 -/DIPNet [114] 543K 32.20/0.8950 28.58/0.7811 27.59/0.7364 26.16/0.7879 30.53/0.9087 DVMSR (Ours) 424K 32.19/0.8955 28.61/0.7823 27.58/0.7379 26.03/0.7838 30.48/0.9084 DVMSR using LAM. Additionally, we compare DVMSR with SwinIR, a transformer-based model, in terms of model complexity. SwinIR outperforms DVMSR by 0.23 dB in PSNR, but at the cost of approximately twice the number of parameters, significantly higher FLOPS, and about 20 times longer inference time. These findings suggest that Mambabased models hold promise for efficient SR. Table 2. Model complexity comparisons between SwinIR and DVMSR. Times represent the average inference time measured on the DIV2K dataset with an Nvidia RTX 3090 in seconds (s). FLOPS and memory is measured when the input is 256 \u00d7 256. PSNR is the result of testing on DIV2K. Method PSNR Time (s) Params[M] FLOPS[G] Activations Memory[M] SwinIR 29.20 dB 0.865 0.9296 70.7828 26.7387 1454.458 DVMSR 28.97 dB 0.048 0.4244 20.1680 26.7387 1094.245 4.4. Ablation Study 4.4.1 Model Parameter Analysis Here, we train DVMSR on DIV2K for classical image SR (\u00d74) and test it on Set5 and Set14. Impact of ViMM number. We show the effects of ViMM number in each RSSB on model performance in Table 3. In experiments 1 3, it is observed that the PSNR/SSIM is negatively correlated with the number of ViMMs. However, when we set the ViMM number to 1, as presented in experiment 4, the PSNR in Set5 and Set14 decreased by 0.09 dB compared to when the ViMM number is set to 2. Therefore, there may be a balance point for the ViMM number, where it should not be too large to avoid over-complexity of the model, nor too small to limit the model\u2019s ability to represent the data. 
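For the model complexity comparison reported in Table 2 above, the sketch below shows one plausible way to count parameters and average GPU inference time on a fixed 256 x 256 input; it is an assumed measurement protocol for illustration, not the exact script behind the reported numbers (which times inference on DIV2K images).

```python
import time
import torch

@torch.no_grad()
def measure_complexity(model: torch.nn.Module, input_size=(1, 3, 256, 256),
                       warmup: int = 10, runs: int = 50):
    """Parameter count (in millions) and average inference time on one input size."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)

    params_m = sum(p.numel() for p in model.parameters()) / 1e6

    for _ in range(warmup):          # warm up kernels before timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    avg_time = (time.time() - start) / runs
    return {"Params[M]": params_m, "Time(s)": avg_time}
```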
Experimental results indicate that setting the ViMM number to 2 is appropriate. Table 3. Impact of ViMM number in each RSSB on the Set5 and Set14 datasets with scale factor of \u00d74. The number of RSSB is fixed at 4 and keep other parameter settings consistent. The best results are highlighted. Exp. Params[M] ViMM number Set5 Set14 PSNR/SSIM PSNR/SSIM 1 7.222 6,6,6,6 31.99/0.8926 28.44/0.7785 2 5.214 2,2,9,2 32.17/0.8959 28.63/0.7834 3 3.651 2,2,2,2 32.30/0.8972 28.68/0.7847 4 2.758 1,1,1,1 32.21/0.8954 28.59/0.7821 Impact of RSSB number. In Table 4, In Experiments 13, as the RSSB number increases, the parameter count increases, with the channel number set to 180. Along with the increase in RSSB number, the PSNR in Set5 shows a significant improvement. Compared to Experiment 1, Experiment 2 shows an increase of 0.26 dB, and relative to Experiment 2, Experiment 3 shows an increase of 0.13 dB. When we set the RSSB number to 10, the improvement is moderated, with Experiment 4 showing an increase of 0.01 dB relative to Experiment 3. Impact of channel number. We maintained the ViMM number and RSSB number while examining the influence of channel numbers on model performance, as detailed in Table 5. Notably, our analysis revealed a diminishing improvement in model performance when the channel number \fTable 4. Impact of RSSB number on the Set5 and Set14 datasets with scale factor of \u00d74. The number of ViMM is fixed at 2 and keeps other parameter settings consistent. The best results are highlighted. Exp. Params[M] RSSB number Set5 Set14 PSNR/SSIM PSNR/SSIM 1 2.175 2 32.04/0.8938 28.51/0.7799 2 3.651 4 32.30/0.8972 28.68/0.7847 3 5.128 6 32.43/0.8987 28.75/0.7866 4 8.080 10 32.44/0.8990 28.77/0.7874 was set to 210. Thus, we conclude that setting the channel number to 192 is more suitable for optimal model performance. Table 5. Impact of channel number on the Set5 and Set14 datasets with scale factor of \u00d74. keep other parameter settings consistent. The best results are highlighted. Exp. Params[M] channel number Set5 Set14 PSNR/SSIM PSNR/SSIM 1 2.664 150 32.32/0.8971 28.65/0.7838 2 3.651 180 32.30/0.8972 28.68/0.7847 3 4.089 192 32.37/0.8977 28.71/0.7851 4 4.809 210 32.39/0.8976 28.71/0.7850 Table 6. Comparison of unidirectional SSM or bidirectional SSM. Times represent the average inference time measured on the DIV2K dataset with an Nvidia RTX 3090 in seconds (s). FLOPS and memory are measured when the input is 256 \u00d7 256. PSNR is the result of testing on DIV2K. Method PSNR Time (s) Params[M] FLOPS[G] Activations Memory[M] unidirectional SSM 28.87 dB 0.048 0.4244 20.1680 26.7387 1094.245 bidirectional SSM 28.88 dB 0.087 0.4849 23.9429 26.7387 1451.680 4.4.2 Distillation Learning Distillation loss. To investigate the effectiveness of distillation loss, we tried multiple distillation strategies. Mid-level feature distillation and end-level feature distillation are presented in Figure 6. As shown in Table 7, using the end-level feature distillation method tends to increase the PSNR and SSIM on Set5 and Set14 datasets. This suggests that the features towards the end of the model might be closer to the target output of the SR task. When attempting to alter the weights and types of distillation loss in the mid-level feature distillation method, there were no changes observed in PSNR and SSIM values on Set5 and Set14 datasets. 
This indicates that it is difficult for the student model to benefit from the features of the middle layer of the teacher model, as even after modifying the weights and types of distillation loss, there were no significant changes in the results. When we increase the weight of distillation loss in the end-level Figure 6. Left: The structure of mid-level feature distillation; Right: The structure of end-level feature distillation feature distillation method, there is a slight decrease in the PSNR and SSIM on Set5 and Set14 datasets. This could be because excessively high weights on distillation loss might introduce too many constraints, thereby affecting the model\u2019s performance. Table 7. Impact of the distillation loss. \u201c\u2718\u201d signifies that distillation is not used, and \u201c\u2714\u201d signifies that distillation is used. \u201cmid\u201d and \u201cend\u201d represent mid-level feature distillation and endlevel feature distillation, respectively. Ldis : L1 represents the weight ratio of the distillation loss and L1 loss. distillation distillation distillation Ldis : L1 Set5 Set14 strategy position loss PSNR/SSIM PSNR/SSIM \u2718 32.04/0.8940 28.50/0.7801 \u2714 mid L1 1:1 32.11/0.8949 28.56/0.7811 \u2714 mid L1 5:1 32.11/0.8949 28.56/0.7811 \u2714 mid L2 1:1 32.11/0.8949 28.56/0.7811 \u2714 end L1 1:1 32.12/0.8951 28.57/0.7813 \u2714 end L1 5:1 32.11/0.8950 28.57/0.7813 Teacher model. When the teacher model has more parameters and richer representation capability, the knowledge it transfers to the student model will be more abundant, leading to a more significant performance improvement of the student model on the task. To verify this conclusion, we attempted two teacher models with different parameters. They exhibited a PSNR difference of 0.27dB on the Set5 dataset. However, as shown in Table 8, the performance of the student model remained unchanged. This could indicate that the student model\u2019s capacity or architecture may not be sufficiently expressive to fully utilize the additional knowledge provided by the larger teacher model. Therefore, finding the balance point between the performance of the teacher model and the student model is a worthwhile exploration. 4.4.3 Unidirectional v.s. Bidirectional SSM To investigate the effectiveness of bidirectional SSM in ESR, we evaluate its performance in ESR based on several aspects: PSNR, Time, Params, FLOPS, Activations, and Memory. The architecture of unidirectional SSM and \fTable 8. Design of the teacher model. PSNR is the result of testing on Set5. Params is the parameter of teacher model, and the parameter of student model is fixed. Method Params[M] Teacher model Student model PSNR/SSIM PSNR/SSIM DVMSR 32.04/0.8940 DVMSR 4.089 32.38/0.8977 32.12/0.8950 DVMSR 7.432 32.65/0.9011 32.12/0.8950 Figure 7. Unidirectional SSM or bidirectional SSM in ViMM. bidirectional SSM are presented in Figure 7. As shown in Table 6, compared to Unidirectional SSM, the improvement of bidirectional SSM in PSNR is limited (increased by 0.01dB), while the inference time has increased by 0.039s. This cost is significant. Therefore, Unidirectional SSM is more suitable for the ESR task. 4.4.4 NTIRE 2024 Challenge on Efficient SR We actively participate in the NTIRE 2024 Efficient SuperResolution Challenge [86]. The model structure and training strategy are slightly different from the above. 
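To illustrate the unidirectional vs. bidirectional comparison of Sec. 4.4.3 above, a small sketch is given below. It runs the same scan on the reversed token sequence and averages the two outputs; this fusion rule is an assumption, since the text does not spell out how the two directions are combined in Figure 7.

```python
import torch
import torch.nn as nn

class DirectionalScan(nn.Module):
    """Sketch of the unidirectional / bidirectional SSM variants compared in Table 6."""

    def __init__(self, ssm_fn=None, bidirectional: bool = False):
        super().__init__()
        # `ssm_fn` stands in for a selective scan over a (B, L, C) sequence.
        self.ssm_fn = ssm_fn if ssm_fn is not None else (lambda t: t)
        self.bidirectional = bidirectional

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.ssm_fn(x)                                   # forward scan
        if self.bidirectional:
            out_rev = self.ssm_fn(torch.flip(x, dims=[1]))     # backward scan
            out = 0.5 * (out + torch.flip(out_rev, dims=[1]))  # assumed fusion
        return out
```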
This competition aims to procure solutions that excel in overall performance metrics, encompassing inference runtime, FLOPS, and parameter optimization on the NVIDIA GeForce RTX 3090 GPU. This challenge also requires the maintenance or enhancement of threshold PSNR results, underscoring the importance of efficiency without compromising on image quality benchmarks. During the teacher learning phase, we train the teacher network using the DIV2K dataset with a resolution of 2K. Our teacher architecture consists of 6 RSSB (Residual Scaling and Shifting Block) and 2 ViMM (Vision Mamba Modules), each configured with 180 channels. In the subsequent distillation training phase, we amalgamated data from both the DIV2K and LSDIR datasets to train the student network. This student model comprises 2 RSSB and 2 ViMM blocks, tailored with 60 channels to maintain computational efficiency while preserving performance standards. Notably, the teacher network remains unchanged. We employ DIV2K [98] and LSDIR [59] to construct the training dataset. The High-Resolution (HR) images are cropped to 256 \u00d7 256 patches for the training procedure. During network optimization, we employ the L1 loss function in conjunction with the Adam optimizer, a widely adopted optimization algorithm in deep learning tasks. Our optimization regimen commenced with an initial learning rate of 2 \u00d7 10\u22124, evolving through a multi-step learning rate strategy. Specifically, the learning rate halved at key iterations: 250000, 400000, 450000, and 475000, respectively, throughout the 500k total iterations. This adaptive learning rate scheme enhances model convergence and stability over the training period, crucial for achieving superior performance. Through extensive experiments, we refine our model\u2019s architecture and training process, aiming for excellence in both efficiency and performance, as evidenced by our results in Table 9. Our approach employs a novel architecture that differs from both CNN and transformer, providing a reference for the development of mamba in Efficient SuperReslution. Table 9. NTIRE 2024 ESR Challenge results. Model Val PSNR Test PSNR Val Time Test Time FLOPS Params (dB) (dB) (ms) (ms) (G) (M) RLFN baseline 26.96 27.07 14.348 9.194 19.67 0.317 DVMSR 26.93 27.04 40.75 34.634 20.17 0.424 5.", + "additional_graph_info": { + "graph": [ + [ + "Wenlong Zhang", + "Yihao Liu" + ] + ], + "node_feat": { + "Wenlong Zhang": [ + { + "url": "http://arxiv.org/abs/2309.03020v2", + "title": "SEAL: A Framework for Systematic Evaluation of Real-World Super-Resolution", + "abstract": "Real-world Super-Resolution (Real-SR) methods focus on dealing with diverse\nreal-world images and have attracted increasing attention in recent years. The\nkey idea is to use a complex and high-order degradation model to mimic\nreal-world degradations. Although they have achieved impressive results in\nvarious scenarios, they are faced with the obstacle of evaluation. Currently,\nthese methods are only assessed by their average performance on a small set of\ndegradation cases randomly selected from a large space, which fails to provide\na comprehensive understanding of their overall performance and often yields\ninconsistent and potentially misleading results. 
To overcome the limitation in\nevaluation, we propose SEAL, a framework for systematic evaluation of real-SR.\nIn particular, we cluster the extensive degradation space to create a set of\nrepresentative degradation cases, which serves as a comprehensive test set.\nNext, we propose a coarse-to-fine evaluation protocol to measure the\ndistributed and relative performance of real-SR methods on the test set. The\nprotocol incorporates two new metrics: acceptance rate (AR) and relative\nperformance ratio (RPR), derived from acceptance and excellence lines. Under\nSEAL, we benchmark existing real-SR methods, obtain new observations and\ninsights into their performance, and develop a new strong baseline. We consider\nSEAL as the first step towards creating a comprehensive real-SR evaluation\nplatform, which can promote the development of real-SR. The source code is\navailable at https://github.com/XPixelGroup/SEAL", + "authors": "Wenlong Zhang, Xiaohui Li, Xiangyu Chen, Yu Qiao, Xiao-Ming Wu, Chao Dong", + "published": "2023-09-06", + "updated": "2024-01-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Image super-resolution (SR) aims to reconstruct high-resolution (HR) images from their lowresolution (LR) counterparts. Recent years have witnessed great success in classical SR settings (i.e., bicubic downsampling) with deep learning techniques (Dong et al., 2014; Zhang et al., 2018b;c; Ledig et al., 2017; Wang et al., 2018). To further approach real-world applications, a series of \u201cblind\u201d SR methods have been proposed to deal with complex and unknown degradation kernels (Zhang et al., 2018a; Gu et al., 2019; Wang et al., 2021a; Luo et al., 2020). Among them, real-SR methods, such as BSRGAN (Zhang et al., 2021) and RealESRGAN (Wang et al., 2021b), have attracted increasing attention due to their impressive results in various real-world scenarios. Different from classical SR that only adopts a simple downsampling kernel, real-SR methods propose complex degradation models (e.g., a sequence of blurring, resizing and compression operations) that can represent a much larger degradation space, covering a wide range of real-world cases. However, they also face a dilemma in evaluation: As the degradation space is vast, how to evaluate their overall performance? Directly testing on all degradations is obviously infeasible, as there are numerous degradation combinations in the vast degradation space. To evaluate the performance of real-SR methods, previous works directly calculate the average performance on a randomly-sampled small-sized test set based on an IQA metric (e.g., PSNR). However, we find that this evaluation protocol is fatally flawed. Due to the vastness of the degradation \u2217Corresponding author 1 arXiv:2309.03020v2 [cs.CV] 20 Jan 2024 \fPublished as a conference paper at ICLR 2024 Random test set 1 Random test set 2 24.4 24.6 24.8 PSNR(dB) BSRNet RealESRNet (a) Results with the conventional evaluation approach. 0 20 40 60 80 100 Representative test set 22.0 24.0 26.0 28.0 30.0 PSNR (dB) Acceptance Line Excellence Line BSRNet (AR: 0.54) RealESRNet (AR: 0.15) (b) Results with our framework SEAL. Figure 1: (a) We compare the average performance of BSRNet and RealESRNet on two real test sets generated by common practice. There is a significant variance in their performance: the differences between their average PSNR on the two test sets are -0.23dB and 0.18dB respectively, leading to contradictory conclusions. 
(b) BSRNet and RealESRNet assessed under our SEAL framework in a distributed manner with 100 representative test sets. It shows the former outperforms the latter in 60% cases, providing a comprehensive overview of their performance. space, a small test set that is selected randomly cannot reliably represent the degradation space and may cause significant bias and randomness in the evaluation results, as illustrated in Fig. 1. In addition, the current evaluation strategy is not enough for assessing real-SR methods, as they typically average quantitative results across all testing samples, which may also lead to misleading comparison results. For example, one method may outperform another on 60% of the degradation types, but it may not achieve a higher mean PSNR value for the entire test set (Sec. 5.3). The average score cannot adequately represent the overall performance and distribution. Furthermore, if the goal is to improve the average score, we could focus solely on enhancing the performance of simple cases (e.g., small noise or blur), which, however, would adversely affect difficult ones (e.g., complex degradation combinations). This would contradict our main objective. Instead, once we have achieved satisfactory outcomes in easy cases, we should divert our focus towards challenging ones to enhance the overall performance. The aforementioned points indicate the need for a new framework that can comprehensively evaluate the performance of real-SR methods. In this work, we establish a systematic evaluation framework for real-SR, namely SEAL, which assesses relative, distributed, and overall performance rather than relying solely on absolute, average, and misleading evaluation strategy commonly used in current evaluation methods. Our first step is to use a clustering approach to partition the expansive degradation space to identify representative degradation cases, which form a comprehensive test set. In the second step, we propose an evaluation protocol that incorporates two new relative evaluation metrics, namely Acceptance Ratio (AR) and Relative Performance Ratio (RPR), with the introduction of acceptance and excellence lines. The AR metric indicates the percentage by which the real-SR method surpasses the acceptance line, which is a minimum quality benchmark required for the method to be considered satisfactory. The RPR metric measures the improvement of the real-SR method relative to the distance between the acceptance and excellence lines. The integration of these metrics intends to provide a more thorough and detailed evaluation of real-SR methods. With SEAL, it becomes possible to conduct a comprehensive evaluation of the overall performance of real-SR methods, as illustrated in Fig. 1. The significance of our work can be summarized as: \u2022 Our relative, distributed evaluation approach serves as a complement to existing evaluation methods that solely rely on absolute, average performance, addressing their limitations and providing a valuable alternative perspective for evaluation. \u2022 By employing SEAL, we benchmark existing real-SR models, leading to the discovery of new observations and valuable insights, which further enables us to develop a new strong real-SR model. \u2022 The components of our SEAL framework are flexibly customizable, including the clustering algorithm, acceptance/excellence lines, and evaluation protocol. It can facilitate the development of appropriate test sets and comparative evaluation metrics for real-SR. 
2 \fPublished as a conference paper at ICLR 2024 2 RELATED WORK Image super-resolution. Since Dong et al. (Dong et al., 2014) first introduced Convolutional Neural Networks (CNNs) to the Super-Resolution (SR) task, there have been significant advancements in the field. A variety of techniques have been developed, including residual networks (Kim et al., 2016), dense connections (Zhang et al., 2018c), channel attention (Zhang et al., 2018b), residualin-residual dense blocks (Wang et al., 2018), and transformer structure (Liang et al., 2021a; Chen et al., 2023b). To reconstruct realistic textures, Generative Adversarial Networks (GANs) (Ledig et al., 2017; Wang et al., 2018; Zhang et al., 2019; Wenlong et al., 2021) are introduced to SR approaches for generating visually pleasing results. Although these methods have made significant progress, they often rely on a simple degradation model (i.e., bicubic downsampling), which may not adequately recover the low-quality images in real-world scenarios. Blind super-resolution. Several works have been made to improve the generalization of SR networks in real-world scenarios. These works employ multiple degradation factors (e.g., Gaussian blur, noise, and JPEG compression) to formulate a blind degradation model. SRMD (Zhang et al., 2018a) employs a single SR network to learn multiple degradations. Kernel estimation-based methods (Gu et al., 2019; Luo et al., 2020; Bell-Kligler et al., 2019; Wang et al., 2021a) introduce a kernel estimation network to guide the SR network for the application of the low-quality image with different kernels. To cover the diverse degradations of real images, BSRGAN (Zhang et al., 2021) proposes a practical degradation model that includes multiple degradations with a shuffled strategy. RealESRGAN (Wang et al., 2021b) introduces a high-order strategy to construct a large degradation model. These works demonstrate the potential of blind SR in real-world applications. Model evaluation for super-resolution. For non-blind SR model evaluation, a relatively standard process employs the fixed bicubic down-sampling on the benchmark test datasets to generate lowquality images. However, it is typically implemented using a predefined approach (e.g., uniform sampling) for blind SR, such as the general Gaussian blur kernels (Zhang et al., 2018a; Liang et al., 2021c), Gaussian8 kernels (Gu et al., 2019), and five spatially variant kernel types (Liang et al., 2021b). For real-SR, existing methods often add random degradations to DIV2K val (includes 100 Ground-Truth images) to construct the real test set, such as DIV2K4D in BSRGAN (Zhang et al., 2021) and DIV2K val with three Levels in DASR (Liang et al., 2022). However, these methods use small test sets with average performance, making it difficult to evaluate overall performance across different degradation combinations in real-world scenarios. 3 DEGRADATION SPACE MODELING 3.1 GENERATING THE DEGRADATION SPACE In real-SR, the degradation process (Wang et al., 2021b) can be simulated by ILR = (ds \u25e6\u00b7 \u00b7 \u00b7 \u25e6d2 \u25e6d1)(IHR), (1) where s is the number of degradations applied on a high-resolution image IHR, and di (1 \u2264i \u2264s) represents a randomly selected degradation. Assume there are only s degradation types (e.g., blur, resize, noise, and compression), and each type contains only k discrete degradation levels. The total degradation should be As s \u00d7 ks. 
With s = 10 and k = 10, it will generate a degradation space of magnitude (A10 10) \u22171010, which is already an astronomical figure. Clearly, randomly sampling a limited number of degradations (e.g., 100 in existing works (Zhang et al., 2021)) from such a huge space cannot adequately represent the entire space, which will inevitably result in inconsistent and potentially misleading outcomes, as illustrated in Fig. 1. 3.2 REPRESENTING THE DEGRADATION SPACE To represent the degradation space D, a straightforward way is to divide the space by degradation parameters, which may seem reasonable at first glance. However, we observe that different combinations of degradation types may have similar visual quality and restoration difficulty. As shown in Fig. 13 of the Appendix, the images undergone different degradations have similar appearances. This suggests that it might be more reasonable to distinguish the degraded images with their lowlevel features instead of degradation parameters. 3 \fPublished as a conference paper at ICLR 2024 Test GT \u2026 Degradation Param. Class center Evaluation Metrics Degradation Space Distributed Performance Acceptance Line Excellence Line Train Calculate \ud835\udc79\ud835\udc77\ud835\udc79\ud835\udc70 Unacceptable cases Acceptable cases \ud835\udc79\ud835\udc77\ud835\udc79\ud835\udc7c \ud835\udc79\ud835\udc77\ud835\udc79\ud835\udc68 AR Real-SR Model Figure 2: Our proposed evaluation framework consists of a clustering-based approach for degradation space modeling (Sec. 3) and a set of metrics based on representative degradation cases (Sec. 4). We divide the degradation space into K clusters and use the degradation parameters of the class centers to create K training datasets to train K non-blind tiny / large SR models as the acceptance / excellence line. The distributed performance (Eq. 4) of the real-SR model across the K test datasets will be compared with the acceptance and excellence lines and evaluated by a set of metrics including AR (acceptance rate), RPR (relative performance ratio), RPRA (average RPR on acceptable cases), and RPRU (average RPR on unacceptable cases). Therefore, we propose to find prototypical degradation cases to represent the vast degradation space. As shown in Fig. 2, a plausible solution is to cluster the degradation space by grouping the degraded images into K groups and choose the K group centers as the representative cases: D = {c1, c2, \u00b7 \u00b7 \u00b7 , cK}, (2) where ci(1 \u2264i \u2264K) is the center of the i-th group. Note that the images can be represented by their features (e.g., image histograms) and clustered by a conventional clustering algorithm such as spectral clustering. 3.3 EVALUATING REAL-SR MODELS USING THE REPRESENTATIVE DEGRADATION CASES We then use the degradation parameters of the cluster centers ci to construct a test set for systematic evaluation, denoted as the SE test set: Dtest = {Dc1, Dc2, \u00b7 \u00b7 \u00b7 , DcK}, (3) where Dci(1 \u2264i \u2264K) is a set of low-quality images obtained by using the degradation parameters of ci on a set of clean images (e.g., DIV2K dataset). Dci can be used to evaluate the distributional performance of a real-SR model on a representative degradation case. Dtest can be used to provide a full picture of the performance on all representative degradation cases. 
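As a concrete illustration of the clustering step in Sec. 3.2-3.3, a minimal sketch is given below. It assumes per-channel histogram features and scikit-learn's spectral clustering; the affinity choice and the way a prototype is picked from each cluster are assumptions for illustration, not the exact SEAL implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def histogram_feature(img: np.ndarray, bins_per_channel: int = 256) -> np.ndarray:
    """768-D color histogram (256 bins x 3 channels) of a degraded uint8 image."""
    feats = [np.histogram(img[..., c], bins=bins_per_channel,
                          range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

def representative_degradations(degraded_imgs, degradation_params, k: int = 100):
    """Cluster degraded images and return the degradation parameters of the
    sample closest to each cluster centroid (the prototypical cases c_1..c_K)."""
    feats = np.stack([histogram_feature(im) for im in degraded_imgs])
    labels = SpectralClustering(n_clusters=k, affinity="nearest_neighbors",
                                random_state=0).fit_predict(feats)
    centers = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        centroid = feats[idx].mean(axis=0)
        closest = idx[np.argmin(np.linalg.norm(feats[idx] - centroid, axis=1))]
        centers.append(degradation_params[closest])
    return centers
```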
4 EVALUATION METRICS To provide a comprehensive and systematic overview of the performance of a real-SR model on Dtest, we develop a set of evaluation metrics to assess its effectiveness in a quantitative manner. Distributed Absolute Performance. The most straightforward way to evaluate a real-SR model is to compute its distributed performance on Dtest: Qd = {Qd 1, Qd 2, \u00b7 \u00b7 \u00b7 , Qd K}, Qd ave = 1 K X i Qd i, (4) where Qd i represents the average absolute performance of a real-SR model on the i-th representative test set Di, and Qd ave denotes the average absolute performance on Dtest. Distributed Relative Performance. To comparatively evaluate a real-SR model and pinpoint the representative cases where its performance is deemed inadequate, we specify an acceptance line and an excellence line. The acceptance line is designated by a small network (e.g., FSRCNN (Dong et al., 2016)), while the excellence line is provided by a large network (e.g., SRResNet (Ledig et al., 2017)). If the real-SR model is unable to surpass the small network on a representative case, it 4 \fPublished as a conference paper at ICLR 2024 is considered failed on that case. We use Qac and Qex to represent the acceptance line and the excellence line: Qac = {Qac 1 , Qac 2 , \u00b7 \u00b7 \u00b7 , Qac K}, Qex = {Qex 1 , Qex 2 , \u00b7 \u00b7 \u00b7 , Qex K}, (5) where Qac i and Qex i are the performance of the small and large networks trained with the degradation parameters of ci in a non-blind manner. Acceptance Rate (AR) measures the percentage of acceptable cases among all K representative degradation cases for a real-SR model. An acceptable case is one in which the performance of a real-SR model surpasses the acceptance line. AR is defined as AR = 1 K X i I(Qd i > Qac i ), (6) where I represents the indicator function. AR can reflect the overall generalization ability of a real-SR model. Relative Performance Ratio (RPR) is devised to compare the performance of real-SR models at the same scale w.r.t. the acceptance and excellence lines. It is defined as RPRi = \u03c3 \u0012 Qd i \u2212Qac i Qex i \u2212Qac i \u0013 , and R = {RPR1, RPR2, \u00b7 \u00b7 \u00b7 , RPRK}, (7) where \u03c3 denotes the sigmoid function, which is used to map the value to (0, 1). Note that RPRi > \u03c3(0) = 0.5 indicates that the real-SR model is better than the acceptance line on the i-th degradation case, and RPRi > \u03c3(1) = 0.73 means it is better than the excellence line. a) Interquartile range of RPR (RPRI) is used to access the level of variance in the performances of a real-SR model on Dtest. It is defined as: RPRI = RW3 \u2212RW1, (8) where RW3 and RW1 denote the 75th and 25th percentiles (Wan et al., 2014) of the RPR scores, respectively. Low RPRI means the real-SR model demonstrates a similar relative improvement in most degradation cases. b) Average RPR on acceptable cases (RPRA) computes the mean of RPR scores on acceptable cases: RPRA = 1 |RA| X Ri\u2208RA Ri, RA = {Ri \u2208R|Ri \u22650.5}. (9) Note that RPRA \u2208(0.5, 1), and RPRA > 0.73 means the average performance of a real-SR model on acceptable cases exceeds the excellence line. c) Average RPR on unacceptable cases (RPRU) computes the mean of RPR scores on unacceptable cases: RPRU = 1 |RU| X Ri\u2208RU Ri, RU = {Ri \u2208R|Ri < 0.5}. (10) Note that RPRU \u2208(0, 0.5), and RPRA near 0.5 means the average performance of a real-SR model on unacceptable cases is close to the acceptance line. Coarse-to-fine Evaluation Protocol. 
Based on the proposed metrics, we develop a coarse-tofine evaluation protocol to rank different real-SR models. As illustrated in Fig. 3, the models are compared by the proposed metrics sequentially by order of priority. AR represents a coarse-grained comparison, while RPR provides a fine-grained comparison. If their performances are too close to the current metric, the next metric is used to rank them. 5 \fPublished as a conference paper at ICLR 2024 Real-SR Models !\" \"#\"! \"#\"\" \"#\"# Coarse-grained comparison Fine-grained comparison Figure 3: A coarse-to-fine evaluation protocol to rank real-SR models with the proposed metrics. Figure 4: Visualization of distributed performance in PSNR for MSE-based real-SR methods on Set14-SE. Table 1: Benchmark results and ranking of MSE-based real-SR methods in PSNR by our proposed SEAL. The subscript denotes the rank order. \u00d7 means the model fails in a majority of degradation cases. Models PSNR AR \u2191 RPRI \u2193 RPRA \u2191 RPRU \u2191 Rank SRResNet 20.95 0.00(\u00d7) 0.02 0.00 0.03 \u00d7 DASR 21.08 0.00(\u00d7) 0.01 0.00 0.02 \u00d7 BSRNet 22.77(2) 0.59(1) 0.42(4) 0.72(2) 0.27(4) 1 RealESRNet 22.67(3) 0.27(4) 0.28(2) 0.63(3) 0.28(3) 4 RDSR 22.44(5) 0.08(\u00d7) 0.23 0.63 0.21 \u00d7 RealESRNet-GD 22.82(1) 0.43(2) 0.37(3) 0.74(1) 0.33(1) 2 SwinIR 22.61(4) 0.41(3) 0.24(1) 0.58(4) 0.29(2) 3 5 EXPERIMENTS 5.1 IMPLEMENTATION Constructing the test set for systematic evaluation. We utilize two widely-used degradation models, BSRGAN (Zhang et al., 2021) and RealESRGAN (Wang et al., 2021b), which are designed to simulate the real-world image space. By combining the two degradation models with equal probability [0.5, 0.5], we generate a dataset of 1 \u00d7 104 low-quality images from the ground-truth (GT) image, lenna, from the Set14 dataset (Zeyde et al., 2010). To categorize the degraded images, we employ spectral clustering due to its effectiveness in identifying clusters of arbitrary shape, making it a flexible choice for our purposes. Specifically, we first use the histogram feature (Tang et al., 2011; Ye & Doermann, 2012) with 768 values (bins) to represent the degraded image. Then, we compute the pairwise similarities of all degraded images. Next, we implement spectral clustering based on the computed similarity matrix to generate 100 cluster centers. The degradation parameters of the cluster centers are then utilized to generate the distributional test set Dtest. We take Set14 (Zeyde et al., 2010) and DIV2K val (Lim et al., 2017) to construct the test sets for systematic evaluation, denoted as Set14-SE and DIV2K val-SE, respectively. Establishing the acceptance and excellence lines. We use the 100 representative degradation parameters to synthesize 100 training datasets based on DIV2K. In the case of MSE-based realSR methods, we utilize a variant of FSRCNN (Dong et al., 2016), referred to as FSRCNN-mz, to train a collection of 100 non-blind SR models, which serve as acceptance line. Concurrently, we employ SRResNet (Ledig et al., 2017), following an identical procedure, as the excellence line 1. The models within the model zoo are initially pre-trained under the real-SR setting. Subsequently, they undergo a fine-tuning process consisting of a total of 2 \u00d7 105 iterations. The Adam (Kingma & Ba, 2014) optimizer with \u03b21 = 0.9 and \u03b22 = 0.99 is used for training. The initial learning rate is 2\u00d710\u22124. We adopt L1 loss to optimize the networks. 
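Before turning to the GAN-based reference lines, note that the metrics of Sec. 4 are straightforward to compute once the per-case scores of the real-SR model and of the acceptance/excellence lines are available. The sketch below is a direct transcription of Eqs. (6)-(10) with illustrative names; for lower-is-better metrics such as LPIPS the score differences would be negated.

```python
import numpy as np

def seal_metrics(q_model, q_accept, q_excel):
    """AR, RPR_I, RPR_A, RPR_U from per-case scores on the K representative cases."""
    q_model, q_accept, q_excel = map(np.asarray, (q_model, q_accept, q_excel))
    ar = float(np.mean(q_model > q_accept))                              # Eq. (6)
    rpr = 1.0 / (1.0 + np.exp(-(q_model - q_accept) /
                              (q_excel - q_accept)))                      # Eq. (7)
    rpr_i = np.percentile(rpr, 75) - np.percentile(rpr, 25)              # Eq. (8)
    accept = rpr >= 0.5
    rpr_a = rpr[accept].mean() if accept.any() else float("nan")         # Eq. (9)
    rpr_u = rpr[~accept].mean() if (~accept).any() else float("nan")     # Eq. (10)
    return {"AR": ar, "RPR_I": rpr_i, "RPR_A": rpr_a, "RPR_U": rpr_u}
```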
Regarding GAN-based SR methods, we adopt the widely recognized RealESRGAN (Wang et al., 2021b) as our acceptance line. Concurrently, we consider the state-of-the-art RealHATGAN (Chen et al., 2023b;a) as our excellence line. We utilize the officially released models for our experiments. 5.2 BENCHMARKING EXISTING MSE-BASED AND GAN-BASED REAL-SR METHODS We utilize the proposed SEAL to evaluate the performance of existing MSE-based real-SR methods, including DASR (Wang et al., 2021a), BSRNet (Zhang et al., 2021), SwinIR (Liang et al., 2021a), RealESRNet (Wang et al., 2021b), RDSR (Kong et al., 2022), and RealESRNet-GD (Zhang et al., 2022). Furthermore, we benchmark GAN-based real-SR methods such as ESRGAN (Wang et al., 2018), DASR (Liang et al., 2022), BSRGAN (Zhang et al., 2021), MMRealSR (Mou et al., 2022), 1Both the two lines and the distributed test set will be released. 6 \fPublished as a conference paper at ICLR 2024 Input RDSR RealESNet SwinIR FSRCNN RealESRNet-GD SRResNet BSRNet PSNR (dB) 22.55 22.73 23.04 23.05 23.31 23.52 23.66 Input RealESRNet SwinIR FSRCNN RDSR BSRNet RealESRNet-GD SRResNet PSNR (dB) 23.06 23.52 23.65 23.68 23.70 23.96 24.20 Figure 5: Visual results of MSE-based real-SR methods with the acceptance line FSRCNN and excellence line SRResNet. It is best viewed in color. Figure 6: Visualization of distributed performance in LPIPS for GAN-based real-SR methods on Set14-SE. Table 2: Benchmark results and ranking of GAN-based real-SR methods in LPIPS by our proposed SEAL. The subscript denotes the rank order. \u00d7 means the model fails in a majority of degradation cases. Models LPIPS \u2193 AR \u2191 RPRI \u2193 RPRA \u2191 RPRU \u2191 Rank ESRGAN 0.6224(6) 0.00(\u00d7) 0.01 0.00 0.03 \u00d7 RealSRGAN 0.5172(5) 0.01(\u00d7) 0.10 0.53 0.14 \u00d7 DASR 0.5230(4) 0.02(\u00d7) 0.13 0.61 0.12 \u00d7 BSRGAN 0.4810(3) 0.44(3) 0.40(3) 0.72(1) 0.28(3) 3 MMRealSR 0.4770(2) 0.80(2) 0.08(1) 0.57(3) 0.41(1) 1 SwinIR 0.4656(1) 0.81(1) 0.24(2) 0.71(2) 0.31(2) 2 Input RealSRGAN BSRGAN DASR RealESRGAN SwinIR MMRealSR RealHATGAN LPIPS 0.474 0.449 0.440 0.428 0.416 0.384 0.530 Input RealSRGAN DASR BSRGAN RealESRGAN MMRealSR SwinIR RealHATGAN LPIPS 0.507 0.495 0.471 0.470 0.461 0.456 0.439 Figure 7: Visual results of GAN-based real-SR methods with the acceptance line RealESRGAN and excellence line RealHATGAN. It is best viewed in color. SwinIR (Liang et al., 2021a). We also modify SRGAN (Ledig et al., 2017) to achieve RealSRGAN under the RealESRGAN training setting. The visualization of the distributed performance offers a comprehensive insight into real-SR performance. Fig. 4 illustrates the distribution performance for MSE-based real-SR methods, using PSNR as the IQA metric. On the other hand, Fig. 6 depicts the distribution performance for GANbased real-SR methods, with LPIPS as the metric. Both visualizations are generated using our proposed SE test set. The SE test sets are arranged in ascending order based on the PSNR values of the acceptance line output. Test sets with lower numbers represent more challenging cases. It\u2019s noticeable that there are a few degradation cases that fall significantly below the acceptance line in Fig. 4. Interestingly, real-SR methods seem to perform better on more challenging degradation cases. This is evident in test datasets 0-20 in Fig. 4 and 80-100 in Fig. 6. The coarse-to-fine evaluation protocol offers a systematic ranking. In Tab. 1 and Tab. 2, the realSR models with AR below 0.25 are excluded from the ranking due to their low acceptance rates. 
For the real-SR models with AR > 0.25, a step-by-step ranking is performed based on {AR, RRPI, RPRA, RPRU}, with thresholds {0.02, 0.02, 0.05, 0.05} respectively. If the difference in the cur7 \fPublished as a conference paper at ICLR 2024 rent metric exceeds the threshold, the metric is used to represent the overall ranking. Otherwise, the next metric is considered. From our proposed SEAL evaluation, we can make several observations: (1) Some existing methods fail on the majority of degradation cases. The AR values of some existing methods are below 0.5, as shown in Tab. 1 and Tab. 2. For instance, most MSE-based real-SR models can not even outperform the small network (i.e., FSRCNN-mz) in most degradation cases. (2) Our SEAL is capable of ranking existing methods across various dimensions, such as robustness, denoted by RPRI and performance bound indicated by RPRA. In Tab. 2, the metric learning based MMRealSR achieves significant robustness (RPRI 0.08) compared with the transformer-based SwinIR (RPRI 0.24). Therefore, under our current coarse-to-fine evaluation protocol, MMRealSR is ranked in the first place. Interestingly, we observed that SwinIR achieves a higher RPRA at the same AR level. If the user prioritizes the performance of acceptance cases, SwinIR would be a better choice. Consequently, we can also flexibly set RPRA as the first finer metric. In this way, SwinIR would take the first place. (3) The acceptance line serves as a useful reference line for visual comparison. Visual results are presented in Fig. 5 and Fig. 7. It\u2019s evident that the visual results of the acceptance line can serve as a basic need for image quality, while the visual results of the excellence line represent the upper bound of image quality under the current evaluation protocol. The visuals below the acceptance line clearly exhibit unacceptable visual effects, including blurring (as seen in the crocodile results of RealSRGAN and DASR in Fig. 7), over-sharpening (as seen in the text results of RealESRNet in Fig. 5), and other artifacts. Notably, our SEAL can flexibly use new reference lines for future needs. 5.3 COMPARISON WITH THE CONVENTIONAL EVALUATION Here, we compare our SEAL with the conventional strategy (Zhang et al., 2021; Liang et al., 2022) used for evaluating real-SR models. Randomly generated multiple synthetic test sets fail to establish a clear ranking with distinct differences. We randomly sample 100 degradation cases and add them to Set14 to obtain 100 test sets (Set14-Random). Tab. 3 shows that 1) the mean and standard deviations (std) of PSNR obtained on the two Set14-Random100 datasets show significant inconsistency, demonstrating the presence of high randomness and variability in the sampled degradation cases. 2) On our Set14-SE (formed with the 100 representative cases), the means and stds of the compared methods are very close, making it hard to establish a clear ranking with distinct differences among the methods. In contrast, our SEAL offers a definitive ranking of these methods based on their AR scores, offering a new systematic evaluation view. Table 3: Comparison with multiple synthetic test sets on mean and standard deviations. 
Set14-Random100 (#1) Set14-Random100 (#2) Set14-SE PSNR \u2191 mean \u2191 std \u2193 mean \u2191 std \u2193 mean std AR \u2191 RPRI \u2193RPRA \u2191RPRU \u2191rank BSRNet 23.39(3) 1.56(2) 22.98(1) 1.64(1) 22.77(2) 1.65(1) 0.59(1) 0.42(4) 0.72(2) 0.27(4) 1 RealESRNet-GD 23.72(1) 1.64(4) 22.98(1) 1.95(4) 22.82(1) 1.83(4) 0.43(2) 0.37(3) 0.74(1) 0.33(1) 2 SwinIR 23.25(4) 1.62(3) 22.79(4) 1.69(2) 22.61(4) 1.69(2) 0.41(3) 0.24(1) 0.58(4) 0.29(2) 3 RealESRNet 23.54(2) 1.55(1) 22.80(3) 1.83(3) 22.67(3) 1.73(3) 0.27(4) 0.28(2) 0.63(3) 0.28(3) 4 The utilization of a randomly generated synthetic test set may lead to misleading outcomes. Following Zhang et al. (2021); Liang et al. (2022), we randomly add degradations to images in the DIV2K (Agustsson & Timofte, 2017) validation set to construct a single real-DIV2K val set. Tab. 4 shows that RealESRNet achieves a higher average PSNR than BSRNet (24.93dB vs. 24.77dB) on real-DIV2K val, leading to the misleading impression that RealESRNet is superior than BSRNet. However, our SEAL framework leads to a contrary conclusion. BSRNet obtains much higher PSNR (24.74dB vs. 24.43dB) and AR value (0.55 vs. 0.15) than RealESRNet on our DIV2K val-SE, illustrating that the former outperforms the latter in most representative degradation cases. 5.4 ABLATION STUDIES AND ANALYSIS In this section, we first conduct ablation studies on several factors that affect spectral clustering, including the number of sampled degradations, similarity metrics, and the number of clusters. Then, we study the stability of degradation clustering for real-SR evaluation. 8 \fPublished as a conference paper at ICLR 2024 Table 4: Comparison with the single synthetic test set. Under our SEAL framework, BSRNet is ranked first, contrary to the results obtained by the conventional method. PSNR-SE (Eq. 4) denotes the average PSNR on our DIV2K val-SE. PSNR Rank PSNR-SE AR \u2191 RPRI \u2193 RPRA \u2191 RPRU \u2191 Rank (ours) BSRNet 24.77 2 24.74 0.55(1) 0.36 0.65 0.26 1 RealESRNet 24.93 1 24.43 0.15(2) 0.33 0.59 0.18 2 Figure 8: Effect of the number of clusters on RPR value of Set14-SE. Figure 9: Effect of the dataset used for clustering on average PSNR of Set14-SE. Number of sampled degradations. To assess the effect of the number of sampled degradations, we randomly generate four datasets, each containing 500, 1000, 5000, and 10000 degradation samples, respectively. We compute the variance of the similarity matrices for each of these datasets, which are 8.32, 8.45, 8.68, and 8.71, respectively. The observation indicates that the change in variance is not significant when the number of samples increases from 5000 to 10000. This observation suggests that a sample size of 5000 random degradations sufficiently represents the degradation space. However, to ensure the highest possible accuracy in our results, we opted to use 10000 degradation samples for the clustering process. Table 5: The purity accuracy of the clustering results with different similarity metrics on Blur100, Noise100, and BN100 datasets. Range K MSE SSIM Histogram Blur100 0.1 4 4 78.2% 78.2% 80.2% Noise100 1 40 4 39.6% 34.6% 80.2% Blur100 + Noise100 8 51.7% 58.2% 80.5% Choice of similarity metric. We compare different metrics that are used to compute the similarity matrix for degradation clustering, including MSE, SSIM, and histogram similarity. In Tab. 5, the purity accuracy of the clustering results using MSE or SSIM is significantly lower than that with histogram similarity, especially when noise is considered. 
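To make the degradation clustering step concrete, the following sketch builds a pairwise similarity matrix over images produced by the sampled degradations (each applied to the same reference image) and groups them with spectral clustering, after which cluster centers can serve as representative cases. The per-channel histogram-intersection formula and the bin count are assumptions for illustration only, since this excerpt states that a histogram-based similarity is used but does not give its exact form.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def hist_similarity(img_a, img_b, bins=32):
    """Per-channel histogram intersection between two uint8 images in [0, 255]."""
    sims = []
    for c in range(img_a.shape[-1]):
        ha, _ = np.histogram(img_a[..., c], bins=bins, range=(0, 255))
        hb, _ = np.histogram(img_b[..., c], bins=bins, range=(0, 255))
        ha = ha / ha.sum()
        hb = hb / hb.sum()
        sims.append(np.minimum(ha, hb).sum())
    return float(np.mean(sims))

def cluster_degradations(degraded_images, k=100):
    """Group sampled degradations (each represented by a degraded copy of one
    reference image) into k clusters via spectral clustering on the pairwise
    histogram-similarity matrix."""
    n = len(degraded_images)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = hist_similarity(degraded_images[i], degraded_images[j])
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(sim)
    return labels
```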
Thus, we adopt histogram similarity as the similarity metric. Choice of the number of clusters. Our goal is to generate as many representative classes as possible while maintaining the clustering quality so that the class centers can serve as representative cases. The results in Fig. 8 show the performance of RealESRNet and BSRNet becomes stable as k approaches 100, with minimal variations observed for k = 60, 80, 100. Therefore, to achieve a more comprehensive assessment and strike a balance between clustering quality and time cost, we set k = 100 without further increasing its value. In the appendix, we have included the quantitative results of the silhouette score (Rousseeuw, 1987), a metric commonly employed to evaluate the quality of clusters. Stability of degradation clustering for real-SR evaluation. In Fig. 9, we study the stability of degradation clustering by using different images as a reference for evaluation. Beyond the Lenna image, our study incorporated four additional images\u2014specifically, Baboon, Barbara, Flowers, and Zebra\u2014from the Set14 dataset. These images were employed as Ground Truth images in the construction of the clustering dataset, adhering to the same degradation clustering process. Despite using different reference images, our results show that the average PSNR of BSRNet is consistently higher than that of RealESRNet by more than 0.1dB, indicating that our degradation clustering method exhibits excellent stability for real-SR evaluation. 9 \fPublished as a conference paper at ICLR 2024 6" + }, + { + "url": "http://arxiv.org/abs/2205.04910v1", + "title": "A Closer Look at Blind Super-Resolution: Degradation Models, Baselines, and Performance Upper Bounds", + "abstract": "Degradation models play an important role in Blind super-resolution (SR). The\nclassical degradation model, which mainly involves blur degradation, is too\nsimple to simulate real-world scenarios. The recently proposed practical\ndegradation model includes a full spectrum of degradation types, but only\nconsiders complex cases that use all degradation types in the degradation\nprocess, while ignoring many important corner cases that are common in the real\nworld. To address this problem, we propose a unified gated degradation model to\ngenerate a broad set of degradation cases using a random gate controller. Based\non the gated degradation model, we propose simple baseline networks that can\neffectively handle non-blind, classical, practical degradation cases as well as\nmany other corner cases. To fairly evaluate the performance of our baseline\nnetworks against state-of-the-art methods and understand their limits, we\nintroduce the performance upper bound of an SR network for every degradation\ntype. Our empirical analysis shows that with the unified gated degradation\nmodel, the proposed baselines can achieve much better performance than existing\nmethods in quantitative and qualitative results, which are close to the\nperformance upper bounds.", + "authors": "Wenlong Zhang, Guangyuan Shi, Yihao Liu, Chao Dong, Xiao-Ming Wu", + "published": "2022-05-10", + "updated": "2022-05-10", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "main_content": "Introduction Traditional image super-resolution (SR) aims at reconstructing a high-resolution (HR) image from a lowresolution (LR) observation. 
In the past decade, convolutional neural networks (CNNs) [5, 30, 41, 42] have demonstrated superior performance in this task due to their powerful representation learning ability. Unlike traditional image SR, blind SR aims to generate an HR image from the counterpart one with a variety of unknown degradation types. *Corresponding author LR PSNR (dB)/SSIM SwinIR 24.74/0.66 SwinIR-GD (ours) 25.70/0.72 BSRNet 24.59/0.65 Figure 1. Visual comparisons of our method and state-of-the-art methods in \u00d74 blind super-resolution. Recent blind SR methods can be roughly divided into two groups. The \ufb01rst one [39] adopts a classical degradation model, which adds a blur degradation to the non-blind degradation model. Extensive research has achieved signi\ufb01cant progress, such as kernel estimation [6,18,19], representation learning [28], zero-shot learning [24], meta-learning [21,25], optimization method [4], real-world dataset [3,32] and unsupervised methods [17,35]. However, down-sampling with blur degradation is still an overly simple simulation since there exist many other degradation types in the real world. To address this problem, recent research introduces a practical degradation (PD) model [29,38] to mimic the degradation process from HR to LR images with various degradation types, including multiple blur types, noise types, and JPEG compression. Furthermore, BSRGAN [38] introduces a shuf\ufb02e operation to expand the degradation space, and RealESRGAN [29] designs a high-order pipeline to simulate complex degradations. arXiv:2205.04910v1 [eess.IV] 10 May 2022 \fDespite recent progress, blind SR remains a challenging problem. In our pilot study, we have identi\ufb01ed three key issues not well examined in previous research: 1) the design of a general degradation model that can cover most or even all degradation cases; 2) strong baselines that can well handle most degradation cases; 3) the study of performance upper bounds that can be used to evaluate the performance of existing blind SR methods w.r.t. distinct degradation cases. For issue 1, it is a well-known fact that the degradation process of real-world images is highly random, which may involve a broad set of degradation cases. However, existing degradation models only cover limited degradation cases. The classical degradation model [19, 28] only focuses on the blur degradation type, whereas the practical degradation model [29,38] considers the most complex degradation cases and ignores many other corner cases (e.g., combinations of a subset of degradation types). This leads to issue 2. Due to the lack of a uni\ufb01ed degradation model, existing methods can not perform well in various degradation cases, as shown in Figure 1. Hence, a strong baseline that can well handle different degradation cases is in need, which can facilitate the comparative analysis of the learning ability of a blind SR network. For issue 3, there lacks the study of quantitative performance upper bounds that an SR network trained with a speci\ufb01c degradation type (e.g., blur 2.0) can achieve on the test dataset. Without comparison with the upper bounds, it is dif\ufb01cult to evaluate whether a blind SR network is good enough in a special degradation case. In this paper, we take a closer look at the three issues and provide simple yet effective solutions. To address issue 1, we propose a uni\ufb01ed gated practical degradation (GD) model for blind SR. 
Speci\ufb01cally, the proposed GD model introduces a gate mechanism that can generate various combinations of degradation types to cover as many degradation cases as possible in the real-world. In the degradation process, we use a random gate controller to determine whether the HR image undergoes a certain degradation. As such, the proposed GD model can include traditional cases (nonblind SR), simple degradation cases (classical blind SR), complex degradation cases (practical blind SR), as well as many other common corner cases. The GD model leads to solutions to issue 2. Based on the GD model, we propose strong baseline networks that can well handle most degradation cases. Without additional design, our blind SR networks can surprisingly achieve consistent and significant performance gains over existing methods. To address issue 3, we introduce performance upper bounds to effectively evaluate existing methods and our proposed baselines on various degradation cases. Speci\ufb01cally, the performance upper bound for a certain degradation case can be obtained by training an SR network on the corresponding dataset. With the performance upper bounds, we provide a comprehensive comparative analysis of a blind SR network on the classical and practical degradation models as well as our proposed GD model (section 4). The contributions of this paper are summarized as follows. \u2022 We propose a uni\ufb01ed gated degradation model that can effectively handle non-blind, classical, practical degradation cases as well as many other corner cases. \u2022 To the best of our knowledge, we are the \ufb01rst to provide a comprehensive analysis of blind SR with performance upper bounds on both the classical and practical blind SR paradigms. \u2022 We show that the baseline networks with the proposed GD model can achieve superior performance close to the upper bounds. 2. Related work Non-blind super-resolution. Since Dong et al. [5] \ufb01rst introduced convolutional neural networks (CNNs) to the SR task, a series of learning-based works [7, 8, 10, 10, 16, 27, 37, 41, 42] have achieved great performance. To reconstruct realistic textures, generative adversarial networks (GAN) [12] are introduced to generate visually pleasing results. A series of GAN-based methods [22,23,30,31,35] are proposed to improve the visual results and quantitative results [33,40]. However, these methods focus on the bicubic down-sampling degradation model, which is too idealistic compared with the LR image in the real-world. Classical blind super-resolution. To enhance the reconstruction ability of the SR network in the real-world. Zhang et al. [39] proposed a classical blind degradation model consisting of Gaussian blur and noise with a range. Furthermore, Gu et al. [6] proposed a kernel estimation method with an iterative correction algorithm. Then, DAN [18,19] and DASR [28] are proposed to further improve the blind SR results. In addition, a series of methods achieved great improvements in classical blind SR, such as zero-shot learning [24], meta-learning [21, 25], optimization method [4], real-world dataset [3, 32], and unsupervised methods [17, 35]. However, these methods only consider a part of degradation types in the real-world. The LR images in the real-world are affected by a variety of degradation types. Practical blind super-resolution. Considering that there are multiple degradation types in the real-world. Zhang et al. 
[38] proposed a practical degradation model, which includes multiple blur types, down-sampling operation (bilinear and bicubic) with a scale factor, camera noise, and JPEG compression. The degradation order is not fixed but randomly shuffled. Furthermore, RealESRGAN [29] introduced a high-order operation to enhance the practical degradation model. However, these methods can achieve promising results on complex degradation while ignoring some easy cases.
Figure 2. Our proposed gated degradation model is a unified model that encompasses non-blind SR, classical blind SR, and practical blind SR. The gate controller can generate various corner degradation cases and complex degradation cases to simulate real-world scenarios.
3. Degradation Models
3.1. Prior Research
Classical Degradation Model. Blind SR is an ill-posed inverse problem which assumes the HR image is affected by multiple degradation types. Mathematically, the LR image $I_{LR}$ is generated from the HR image $I_{HR}$ as follows: $I_{LR} = D_{k,n,j}(I_{HR}) = [(k \otimes I_{HR})\downarrow_s + n]_j$, (1) where $\otimes$ represents convolution. First, the high-resolution image $I_{HR}$ is convolved with Gaussian blur kernel $k$. Then, the blurred image is down-sampled (denoted by $\downarrow_s$) and an additive white Gaussian noise (denoted by $n$) is added to the degraded image. Finally, the low-resolution image $I_{LR}$ is obtained by JPEG compression (denoted by $j$). With the classical degradation model, existing blind SR methods [6, 18, 28, 34] focus on the blur degradation while using a fixed noise (e.g., $n = 20$) rather than a range noise (e.g., $n \in [1, 30]$). JPEG compression is generally not considered. Note that without the blur, noise, and JPEG degradation, the classical degradation model is equivalent to the non-blind degradation model.
Practical Degradation Model. Different from the classical degradation model, the practical degradation model [29,38] assumes the HR image undergoes a series of degradation cases to generate the LR image: $I_{LR} = D_p(I_{HR}) = (D_1 \circ D_2 \circ D_3 \cdots D_m)(I_{HR})$, (2) where $D_p$ denotes the practical degradation process and $D_i \in \{D_k, D_n, D_j, \cdots\}$, $\forall i \in \{1, \ldots, m\}$, represents a base degradation type, e.g., $D_k$ is the blur degradation, and $D_n$ is the noise degradation. To simulate more complex degradation cases, the degradation models in BSRGAN [38] and RealESRGAN [29] use a wide range of base degradation types including multiple blur types (e.g., generalized Gaussian blur and plateau-shaped Gaussian blur), multiple down-sampling schemes (e.g., nearest, bilinear, and bicubic), and multiple noises (e.g., Poisson noise and camera sensor noise).
3.2. Our Proposed Gated Degradation Model
The practical degradation model only considers complex degradation cases by using all (or most) base degradation types in the degradation process. However, it ignores important corner cases, i.e., combinations of different subsets of base degradation types, which are prevalent in the real world. Motivated by this, we propose a unified degradation model by introducing a gate mechanism to randomly select the base degradation types to be included in the degradation process.
Formally, $I_{LR} = D_g(I_{HR}) = (\sigma_g(D_1) \circ \sigma_g(D_2) \circ \sigma_g(D_3) \cdots \sigma_g(D_m))(I_{HR})$, (3) where $D_g$ denotes the gated degradation process, and $D_i \in \{D_k, D_n, D_j, \cdots\}$, $\forall i \in \{1, \ldots, m\}$, represents a base degradation type. The gate controller $\sigma_g$ determines whether $D_i$ is used in the degradation process, i.e., $\sigma_g(D_i)(I_d) = \begin{cases} D_i(I_d), & g = 1, \\ I_d, & g = 0, \end{cases}$ (4) where $I_d$ denotes the degraded (or input) HR image. Note that when all gates $g = 1$, the gated degradation model is equivalent to the practical degradation model, whereas when all gates $g = 0$, it is the same as traditional non-blind SR. The gate controller allows generating various combinations of base degradation types, and hence our degradation model is a unified model that encompasses non-blind SR, classical blind SR, and practical blind SR.
4. A Comprehensive Analysis of Blind SR with Performance Upper Bounds
This section analyzes blind SR networks with the existing classical, practical, and proposed gated degradation models. We find that a blind SR network can achieve promising performance with our proposed gated degradation model, while the blind SR network with a practical model has a significant performance drop.
Preliminary. FAIG [34] shows a one-branch blind SR network can achieve comparable results compared with the SOTA methods DAN [18] and DASR [28]. On the BSD100 [20] validation dataset with blur degradation, the performance of a one-branch network is higher than SOTA DAN and DASR by about 0.05 dB. So, the one-branch network is considered as a base network to analyze the blind SR problem. Similarly, RealESRGAN [29] and BSRGAN [38] adopt a powerful one-branch network, RRDBNet, as the blind SR network to solve the practical blind SR problem.
Performance Upper Bound. An essential issue of practical blind SR is how to evaluate blind SR networks effectively. Based on the proposed gated degradation model, the performance upper bound can easily be introduced to clearly evaluate the blind SR network. Take the specific degradation type bicubic as an example. To get the upper bound, we train a specialized SR network on the bicubic type and test the well-trained network on the corresponding bicubic test dataset. A similar procedure can obtain the upper bounds of other corner degradation types. The definition of the upper bound is a vital tool for evaluating blind SR.
Setting. In this section, RRDBNet is used as the primary blind SR network (BSRNet), which is trained on a representative degradation model. The degradation model includes isotropic Gaussian blur [0.1, 3.0], additive Gaussian noise [1, 30], and JPEG [40, 95]. To clearly evaluate the BSRNet, we design a validation dataset Practical8, which includes every corner degradation case {bic, b2.0, n20, j60, b2.0n20, b2.0j60, n20j60, b2.0n20j60}. Then, we train 8 SR models to get the upper bound on every corner case. Therefore, we can use the PSNR distance between BSRNet and the upper bound for the evaluation on Practical8. We adopt a similar setting for the classical degradation model.
4.1. Analysis of Classical Blind SR
Similar to FAIG [34], we train BSRNet-FAIG on a classical degradation model with isotropic Gaussian blur [0, 3.0]. Then, we train 5 SR networks to get the corresponding upper bounds on the validation dataset with bicubic (bic) and blur {0.6, 1.2, 1.8, 2.4}.
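Since both the Practical8 test sets and the GD training data are produced by gating these base degradations, a minimal sketch of the gate mechanism in Eqs. (3)-(4) may help make the setting concrete. It assumes the light instantiation described in this section (isotropic Gaussian blur in [0.1, 3.0], additive Gaussian noise in [1, 30], JPEG quality in [40, 95], and ×4 bicubic down-sampling); the gate probability p_gate and the PIL/SciPy operations are illustrative choices rather than the authors' implementation.

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def gated_degradation(hr, scale=4, p_gate=0.5, rng=None):
    """Eqs. (3)-(4) as a pipeline over an HxWx3 uint8 HR image: each base
    degradation is applied only when its gate fires; bicubic down-sampling
    is always applied."""
    if rng is None:
        rng = np.random.default_rng()
    img = hr.astype(np.float64)

    if rng.random() < p_gate:                         # gate for isotropic Gaussian blur
        sigma = rng.uniform(0.1, 3.0)
        img = gaussian_filter(img, sigma=(sigma, sigma, 0))

    pil = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))
    pil = pil.resize((pil.width // scale, pil.height // scale), Image.BICUBIC)
    img = np.asarray(pil).astype(np.float64)          # x4 bicubic down-sampling

    if rng.random() < p_gate:                         # gate for additive Gaussian noise
        img = img + rng.normal(0.0, rng.uniform(1, 30), size=img.shape)

    lr = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))
    if rng.random() < p_gate:                         # gate for JPEG compression
        buf = io.BytesIO()
        lr.save(buf, format="JPEG", quality=int(rng.uniform(40, 95)))
        buf.seek(0)
        lr = Image.open(buf).convert("RGB")
    return np.asarray(lr)
```

Setting every gate to 1 recovers the practical model of Eq. (2), and setting every gate to 0 recovers non-blind bicubic down-sampling.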
From Table 1, we find that BSRNet has a slight performance drop (about 0.3 dB) on PSNR compared with the corresponding upper bound. The slight performance drop is relatively acceptable for the blind SR problem since it is more challenging than non-blind SR. This exciting observation motivates us to investigate the underlying learning ability of blind SR networks, especially on a practical degradation model.
Table 1. Average PSNR (dB) of BSRNet with classical degradation models in ×4 blind SR. Blur degradation types (bic, 0.6, 1.2, 1.8, 2.4): BSRNet-FAIG [34]: 26.51, 27.25, 28.07, 28.42, 28.43; Upper bound: 26.75, 27.46, 28.43, 28.71, 28.74.
Figure 3. Comparison of PSNR (dB) of BSRNet with different degradation models.
4.2. Analysis of Practical Blind SR
Practical Degradation Model. We firstly train BSRNet with the PD model to get BSRNet-PD. Figure 3 shows that BSRNet-PD has a significant drop on the corner cases bic, b2.0, n20, and b2.0n20, while having a minimal drop on the corner cases b2.0j60 and n20j60. Interestingly, in the complex case b2.0n20j60, the PSNR distance between BSRNet-PD and the upper bound is 0.09 dB, which is a tiny drop since the PD model focuses on the combination of blur, noise, and JPEG.
Figure 4. Visual comparisons of BSRNet-PD and the corresponding upper bound with PSNR (dB)/SSIM.
Figure 4 shows that BSRNet-PD fails to generate realistic textures on the corner cases bic, b2.0, and n20, while the visual results on the complex case b2.0n20j60 are promising compared with the upper bound. The PSNR value has a small drop of 0.16 dB, which is acceptable.
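The per-case comparisons above all reduce to the same bookkeeping: PSNR measured against the matching per-degradation upper-bound model, then the gap over the Practical8 cases. A small sketch of that computation is given below; the dictionaries of per-case scores are hypothetical inputs, not measured values.

```python
import numpy as np

PRACTICAL8 = ["bic", "b2.0", "n20", "j60", "b2.0n20", "b2.0j60", "n20j60", "b2.0n20j60"]

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def gap_to_upper_bound(blind_psnr, upper_psnr):
    """Per-case PSNR distance between a blind SR model and the SR model trained
    specifically for that degradation, plus the average gap over Practical8."""
    gaps = {case: upper_psnr[case] - blind_psnr[case] for case in PRACTICAL8}
    return gaps, sum(gaps.values()) / len(gaps)
```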
Figure 5 shows that the BSRNet-GD can generate more realistic textures than BSRNet-PD on b2.0 and n20 degradation. The sacri\ufb01ce of complex case b2.0n20j60 is completely acceptable because we can hardly tell the difference between BSRNet-PD and BSRNet-GD on visual results. Although the practical degradation model can handle some special cases, it is obvious that the practical degradation model cannot guarantee promising quantitative and qualitative results in all corner cases. These quantitative and qualitative comparisons con\ufb01rm the effectiveness of upper bounds on the issues 3 described in Section 1. Based on the proposed gated degradation model: 1) A blind SR network has a tiny sacri\ufb01ce in the complex case. 2) The performance of corner cases can achieve obvious improvement compared with the PD model. 3) A blind SR network can handle all of the degradation types with a small performance drop compared with the upper bound. 5. Experiments 5.1. Datasets and Implementation Details Datasets. Following existing blind SR methods [6, 18, 28,29,38,39], we use DIV2K (800 images) [1] and Flickr2K (2650 images) [26] dataset for training. The training images are randomly cropped to 128\u00d7128 patches that are blurred, noised, and compressed (JPEG). We use benchmark datasets BSD100 [20] and Urban100 [9] for evaluation. Degradation model. To ensure fair quantitative comparison, we adopt a light degradation model to generate the dataset. Following the setting of BSRGAN [38] and RealESRGAN [29], the light degradation model includes isotropic Gaussian blur [0.1, 3.0], additive Gaussian noise [1, 30], and JPEG [40, 95]. The down-sampling adopts \u00d74 bicubic in the RealESRGAN version. For the proposed GD model, the probability of every gate is set to 0.5 to generate all degradation cases. Baselines. Based on the analysis in Section 4, the proposed GD model is applied to the representative networks as our proposed baseline network. We employ RRDBNet [29, 38] and SwinIR [13] to get the baseline networks: CNN-based RRDBNet-GD, transformer-based SwinIR-GD, and GAN-based baseline BSRGAN-GD and SwinIRGAN-GD. Practical8. In order to quantitatively conduct evaluation, we propose Practical8 test dataset to evaluate blind SR methods. Practical8 consists of {bic, b2.0, n20, j60, b2.0n20, b2.0j60, n20j60, b2.0n20j60}. The degradation types in Practical8 are based on the combinations of degradation types in the training dataset. The evaluation metric employs PSNR to compare MSE-based methods and PSNR/NIQE for GAN-based methods. Training. In our experiments, the Adam [11] optimization method with \u03b21 = 0.9 and \u03b21 = 0.99 is used for training. The initial learning rate is set to 2 \u00d7 10\u22124, which is reduced by a half for multi-step [25 \u00d7 104, 50 \u00d7 104, 75 \u00d7 104, 100\u00d7104]. A total of 100\u00d7104 iterations are executed by PyTorch. The loss function adopts L1 loss between SR results and HR images. 5.2. Experiments on MSE-based blind SR Networks. Here we consider a series of representative networks for quantitative comparisons, such as SRResNet which is used in prevalent kernel estimation methods, [2,6,14,15,18] and RRDB network which is used in BSRGAN [38] and RealESRGAN [29] to handle practical blind SR. In addition, the representative network RCAN [41] and SwinIR [13] are also employed for quantitative comparison. Notably, all networks are adjusted to the same setting and parameter level to ensure a fair comparison. Comparison with the state-of-the-art. 
Table 2 shows \fTable 2. Average PSNR (dB) of different methods in \u00d74 blind SR on BSD100 [20] and Urban100 [9]. The top two results are highlighted in red and blue, respectively. Note that we ensure all methods have similar model size for a fair comparison. Dataset Method Degradation Types bic b2.0 n20 j60 b2.0n20 b2.0j60 n20j60 b2.0n20j60 Average BSD100 Bicubic 24.63 25.40 21.56 24.06 21.90 24.65 21.22 21.72 23.14 RCAN [41] 25.65 26.77 24.63 25.16 24.39 25.36 24.36 24.15 25.06 SRResNet-FAIG [34] 25.58 26.72 24.53 25.11 24.26 25.29 24.32 24.07 24.99 RRDBNet [29,38] 25.62 26.76 24.58 25.13 24.33 25.32 24.34 24.11 25.02 SwinIR [13] 25.84 27.05 24.77 25.27 24.48 25.44 24.44 24.18 25.18 RRDBNet-GD (ours) 26.25 27.31 25.31 25.23 24.95 25.32 24.38 24.07 25.35 SwinIR-GD (ours) 26.61 27.58 25.64 25.30 25.30 25.39 24.44 24.14 25.55 Upper bound (RRDBNet) 26.36 27.68 25.46 25.30 25.34 25.49 24.45 24.15 25.53 Urban100 Bicubic 21.89 22.54 20.00 21.50 20.36 22.02 19.74 20.20 21.03 RCAN [41] 23.65 24.67 22.93 23.35 22.59 23.36 22.77 22.35 23.21 SRResNet-FAIG [34] 23.54 24.42 22.88 23.26 22.42 23.16 22.73 22.19 23.08 RRDBNet [29,38] 23.53 24.46 22.89 23.28 22.48 23.17 22.75 22.24 23.10 SwinIR [13] 24.16 25.10 23.34 23.73 22.86 23.62 23.09 22.53 23.55 RRDBNet-GD (ours) 24.51 25.39 23.57 23.67 23.05 23.18 22.92 22.13 23.55 SwinIR-GD (ours) 25.55 26.12 24.40 24.11 23.83 23.56 23.26 22.42 24.16 Upper bound (RRDBNet) 25.13 26.38 23.91 23.97 23.56 23.62 23.18 22.44 24.02 20.07/0.52 21.48/0.64 21.63/0.64 21.46/0.63 21.75/0.66 21.95/0.67 22.36/0.70 23.86/0.45 27.37/0.68 27.41/0.68 27.41/0.68 27.55/0.69 27.66/0.70 28.04/0.72 23.43/0.70 Bicubic 23.83/0.72 SRResNet 23.97/0.72 RCAN 23.89/0.72 RRDBNet 24.09/0.72 SwinIR 24.42/0.73 RRDBNet-GD (Ours) 24.82/0.74 SwinIR-GD (Ours) PSNR (dB)/ /SSIM PSNR (dB)/SSIM PSNR (dB)/SSIM Figure 6. Visual comparisons of our methods and others in \u00d74 super-resolution. Please zoom in for a better view. the quantitative comparisons. Firstly, we \ufb01nd that RRDBNet is only about 0.03 dB higher than SRResNet-FAIG on PSNR. Interestingly, the non-blind SR method RCAN achieves better performance than SRResNet-FAIG and RRDBNet. Bene\ufb01t from channel attention design, RCAN outperforms RRDBNet by about 0.1 dB on all corner degradations in Practical8. Furthermore, SwinIR achieves the highest performance compared with other methods. Secondly, the average performance of the proposed baseline RRDBNet-GD and SwinIR-GD achieves signi\ufb01cant improvement (0.3-0.6 dB) on BSD100 and urban100 datasets. Figure 6 shows that our method could generate visually pleasing results than other works. Upper bound. To further evaluate the performance of the blind SR networks, we train 8 SR models with the speci\ufb01c degradation types in Practical8. Since RRDBNet is adopted in SOTA practical blind SR methods BSRGAN [38] and RealESRGAN [29], RRDBNet is selected as the basic network to obtain the upper bound. Table 2 shows that blind SR networks with a practical degradation model have a signi\ufb01cant drop compared with the upper bound on some corner degradation cases. The most interesting aspect is that the quantitative difference between a speci\ufb01c case and the upper bound is very large. Take RRDBNet as an \fTable 3. Average PSNR (dB) of networks of different capacity in \u00d74 blind SR with the proposed gated degradation model on Set14 [36]. Method #Para. 
(M) Degradation Types bic b2.0 n20 j60 b2.0n20 b2.0j60 n20j60 b2.0n20j60 Average Bicubic 25.00 25.34 21.77 24.29 21.91 24.51 21.46 21.73 23.25 SRResNet-16 1.52 26.45 27.94 25.17 25.59 25.04 25.56 24.53 24.04 25.54 SRResNet-46 3.73 26.49 28.16 25.23 25.67 25.12 25.57 24.58 24.09 25.61 RCAN 3.87 26.62 28.31 25.36 25.75 25.33 25.66 24.68 24.19 25.74 RRDBNet-5 3.75 26.53 28.25 25.28 25.68 25.22 25.62 24.59 24.15 25.67 SwinIR-v1 3.85 26.94 28.59 25.67 25.83 25.73 25.77 24.77 24.30 25.95 SwinIR-v2 11.90 27.21 28.84 25.92 26.07 25.87 25.87 24.91 24.37 26.13 Upper bound (RRDBNet-5) 3.75 26.75 28.74 25.48 25.81 25.63 25.96 24.74 24.32 25.93 Table 4. Average PSNR (dB) of RRDBNet in \u00d74 blind SR with light and hard degradation models on Set14 [36]. Method Degradation Types bic b2.0 n20 j60 RRDBNet-GD-light 26.53 28.25 25.28 25.68 RRDBNet-GD-hard 26.53 28.11 25.22 25.68 b2.0n20 b2.0j60 n20j60 b2.0n20j60 RRDBNet-GD-light 25.22 25.62 24.59 24.15 RRDBNet-GD-hard 25.12 26.65 24.56 24.08 color-n20 Poisson-n20 RRDBNet-GD-light 24.93 24.66 26.71 26.56 RRDBNet-GD-hard 25.52 25.43 26.90 26.83 example, it is apparent that the bic, b2.0, and b2.0n20 cases have a large performance drop compared with other cases. Based on the proposed GD model, there is a signi\ufb01cant improvement in all corner cases, such as bicubic (nonblind SR), b2.0 (classical blind SR), and complex case b2.0n20j60 (practical blind SR). Notably, there are also great differences in the improvement of different cases. For example, the b2.0j60 case has the smallest improvement, and bic case has a great improvement compared with the upper bounds. Network capacity. Table 3 shows the comparisons of blind SR networks with different network parameters and structures on the GD model. The SRResNet-16 with 16 residual blocks has 1.52M parameters, but it just has a 0.39 dB drop compared with the upper bound on average PSNR. Furthermore, SRResNet-46 and RRDBNet-5 get about 0.1 dB improvement compared with SRResNet-16. Bene\ufb01tting from the attention mechanism, RCAN and SwinIRv1 (version1) achieve better performance with similar parameters. Finally, SwinIR-v2 (version2) with 11.9 M can further improve the SR results. Interestingly, the degradation b2.0n20j60 only has a slight improvement (0.07 dB), while the easy corner degradations have a signi\ufb01cant improvement (e.g., 0.27 dB in bicubic). Light vs. hard degradation models. We further apply the proposed GD model with a hard scenario, which includes various blur types (isotropic, anisotropic, generalized isotropic/anisotropic, and plateau isotropic/anisotropic Gaussian blur), noises (additive grey/color Gaussian noise and Poisson grey/color noise) and JPEG compression. Table 4 shows that the performance of RRDBNetGD-hard has a slight drop on the light cases, while it has a more signi\ufb01cant improvement on the new cases. 5.3. Experiments on GAN-based Blind SR Experimental setup. Similar to Section 5.1, we adopt the same settings to train the GAN-based networks. We train three representative models, SRGAN [12], BSRGAN [38] (RealESRGAN [29]), and SwinIRGAN [13] by the same light degradation model in Section 5.1. The loss function combines L1 loss, perceptual loss, and GAN loss, with weights [1, 1, 0.1], respectively. The baseline BSRGANGD and SwinIRGAN-GD are trained on the proposed GD model. The discriminator adopts a U-Net structure in RealESRGAN [29]. The upper bound of Practical8 is also provided to evaluate the performance of different GAN models quantitatively. 
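A sketch of the generator objective described above, combining L1, perceptual, and adversarial terms with the quoted [1, 1, 0.1] weights, is given below. The VGG19-feature perceptual term and the BCE adversarial term on the discriminator logits are common choices assumed here for illustration, since this excerpt does not spell out those details; ImageNet normalization of the VGG inputs is likewise omitted for brevity.

```python
import torch
import torch.nn as nn
import torchvision

class GeneratorLoss(nn.Module):
    """Weighted sum of L1, perceptual, and adversarial terms with weights [1, 1, 0.1]."""
    def __init__(self, w_pix=1.0, w_percep=1.0, w_gan=0.1):
        super().__init__()
        weights = torchvision.models.VGG19_Weights.IMAGENET1K_V1
        self.vgg = torchvision.models.vgg19(weights=weights).features[:35].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()
        self.adv = nn.BCEWithLogitsLoss()   # vanilla GAN loss on the discriminator logits
        self.w = (w_pix, w_percep, w_gan)

    def forward(self, sr, hr, disc_logits_on_sr):
        pix = self.l1(sr, hr)
        # VGG-feature distance; input normalization is omitted for brevity.
        percep = self.l1(self.vgg(sr), self.vgg(hr))
        gan = self.adv(disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
        return self.w[0] * pix + self.w[1] * percep + self.w[2] * gan
```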
Comparison with the stat-of-the-arts. Table 5 shows SRGAN tends to sacri\ufb01ce PSNR performance to generate perceptual textures while BSRGAN and SwinIRGAN can achieve higher reconstructive performance when generating texture details. Based on our proposed GD model, the reconstructive performance achieves further improvement compared with the practical degradation model. Interestingly, SwinIRGAN pays more attention to reconstruction performance PSNR while the perceptual metric NIQE value is higher than BSRGAN-GD. Figure 7 shows that our methods can generate realistic visual results compared with existing methods. We validate our method on a real-world dataset RealSRSet used in BSRGAN [38], which consists of real images downloaded from the Internet. BSRGAN-GD achieves 5.11 in NIQE, much better than BSRGAN with a light practical degradation model, which is 6.06. \fTable 5. Average NIQE/ PSNR (dB) of different GAN-based methods in \u00d74 blind SR on Urban100 [9]. The top two results are highlighted in red and blue, respectively. Note that we ensure all methods have similar model size for a fair comparison. Method Metric Degradation Types bic b2.0 n20 j60 b2.0n20 b2.0j60 n20j60 b2.0n20j60 Average Bicubic NIQE 7.08 7.89 8.97 7.35 8.42 7.93 8.99 8.37 8.13 PSNR 21.89 22.54 20.00 21.50 20.36 22.02 19.74 20.20 21.03 SRGAN [12] NIQE 4.25 5.00 3.49 3.88 3.69 4.59 3.46 3.65 4.00 PSNR 21.75 23.16 21.08 21.55 21.68 22.42 20.95 21.45 21.76 BSRGAN [29,38] NIQE 4.51 5.77 4.02 4.25 4.24 5.26 3.97 4.36 4.55 PSNR 22.18 23.39 21.58 21.96 21.81 22.51 21.38 21.51 22.04 SwinIRGAN [13] NIQE 4.39 5.01 4.29 4.40 4.46 4.91 4.08 4.36 4.49 PSNR 22.92 24.10 22.10 22.48 22.18 22.84 21.82 21.83 22.53 BSRGAN-GD (ours) NIQE 4.04 4.27 3.91 3.95 4.18 4.91 3.63 4.57 4.18 PSNR 23.31 24.43 22.51 22.45 22.40 22.69 21.62 21.62 22.63 SwinIRGAN-GD (ours) NIQE 4.01 4.38 4.11 4.16 4.29 4.55 4.09 4.72 4.29 PSNR 24.24 25.20 23.28 22.98 23.13 22.94 22.17 21.86 23.23 Upper bound (BSRGAN) NIQE 3.79 4.10 3.88 3.92 3.86 4.00 3.73 3.87 3.89 PSNR 23.66 25.17 22.58 22.58 22.41 22.51 21.77 21.52 22.78 22.49/8.45 Bicubic 23.75/2.71 SRGAN 24.03/2.84 BSRGAN 23.94/2.80 SwinIRGAN 24.69/2.92 BSRGAN-GD (Ours) 25.43/2.88 SwinIRGAN-GD (Ours) PSNR (dB)/NIQE 20.47/6.84 19.93/2.46 20.12/2.42 20.34/2.43 20.58/1.80 21.09/1.60 PSNR (dB)/NIQE Figure 7. Visual comparisons of our methods and others in \u00d74 Blind SR. Lower NIQE score indicates better perceptual quality, and higher PSNR indicates less distortion. Please zoom in for a better view. 5.4. Discussion To further adapt the proposed GD model for the realworld scenario, an intuitive way is to enlarge the degradation space. We apply this simple scheme in Section 5.2, Table 4 shows the potential ability to handle complex cases in real-world scenarios. Different from BSRGAN [38] and RealESRGAN [29] that tends to provide a powerful degradation model, our work focuses on how to fairly and quantitatively evaluate blind SR networks. In addition, our proposed degradation model is complement to that of BSRGAN and addresses the important corner cases that were not considered in BSRGAN. We can easily apply the proposed GD strategy to a complex degradation model, such as the degradation model in BSRGAN and RealESRGAN. In summary, as the degradations are extremely complex in real-world applications, the degradation model, baseline, and upper bound would be an important topic for future blind SR research. 6." 
+ }, + { + "url": "http://arxiv.org/abs/2204.12456v1", + "title": "Event Detection Explorer: An Interactive Tool for Event Detection Exploration", + "abstract": "Event Detection (ED) is an important task in natural language processing. In\nthe past few years, many datasets have been introduced for advancing ED machine\nlearning models. However, most of these datasets are under-explored because not\nmany tools are available for people to study events, trigger words, and event\nmention instances systematically and efficiently. In this paper, we present an\ninteractive and easy-to-use tool, namely ED Explorer, for ED dataset and model\nexploration. ED Explorer consists of an interactive web application, an API,\nand an NLP toolkit, which can help both domain experts and non-experts to\nbetter understand the ED task. We use ED Explorer to analyze a recent proposed\nlarge-scale ED datasets (referred to as MAVEN), and discover several underlying\nproblems, including sparsity, label bias, label imbalance, and debatable\nannotations, which provide us with directions to improve the MAVEN dataset. The\nED Explorer can be publicly accessed through http://edx.leafnlp.org/. The\ndemonstration video is available here\nhttps://www.youtube.com/watch?v=6QPnxPwxg50.", + "authors": "Wenlong Zhang, Bhagyashree Ingale, Hamza Shabir, Tianyi Li, Tian Shi, Ping Wang", + "published": "2022-04-26", + "updated": "2022-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Being one of the basic elements of event understanding, event detection (ED) aims at detecting event triggers from unstructured texts and classifying them into some prede\ufb01ned event types (Chen et al., 2017; Le and Nguyen, 2021). It is one of the most important steps for extracting structured event information from unstructured texts (Ahn, 2006). Ef\ufb01cient and accurate event detection will also bene\ufb01t many Natural Language Processing (NLP) tasks, such as information retrieval (Jungermann and Morik, 2008; Kanhabua and Anand, 2016), question answering (Yang et al., 2003; Souza Costa et al., 2020), and event augment prediction (Cheng and Erk, 2018). Figure 1: The architecture of the ED Explorer. The ED task has attracted considerable attention in recent years. Traditional feature-based models (Araki and Mitamura, 2015; Li et al., 2013; Gupta and Ji, 2009) rely on constructing different features that are related to events and incorporating them into the models. Many recent deep learning based models formulate the ED task as a sequence labeling problem and have achieved state-of-the-art results (Lai et al., 2020; Liu et al., 2019; Yan et al., 2019; Ding et al., 2019; Zhao et al., 2018; Chen et al., 2018). The advances of deep ED models are attributed to the development of datasets that can be used to train and benchmark these models. In the past decades, several ED datasets have been introduced and widely used to develop ED models, such as ACE 2005 (Walker et al., 2006) and TAC KBP (Mitamura et al., 2015). However, these datasets suffer from several limitations (Wang et al., 2020). (1) Data Scarcity. These datasets are in small scale and cover a small number of instances. For example, there are only 599 documents and 5,349 instances in ACE 2005. (2) Low Event Type Coverage, i.e., only a small number of event types are considered in these datasets. For example, ACE 2005 and TAC KBP have only 33 and 38 event types, respectively. (3) Label Imbalance. 
In these datasets, many events are related to certain topics, which results in the laarXiv:2204.12456v1 [cs.CL] 26 Apr 2022 \fbel imbalance problem. For example, in ACE 2005, 60% event types have less than 100 annotated event mention instances. Recently, a large scale ED dataset, namely MAVEN (Wang et al., 2020), has been introduced. It has more than 100K event mention instances for 168 event types, which alleviates the Data Scarcity and Low Event Type Coverage problems. There are also several other datasets that can be used to train and evaluate ED models (Sims et al., 2019; Lee et al., 2021; Satyapanich et al., 2020). For example, RAMS (Ebner et al., 2020)) was originally annotated for document-level argument linking. It has 9,124 annotated events across 139 types and can also be used to train ED models. ALDG (Chen et al., 2017) and FewEvent (Deng et al., 2020) are automatically labeled datasets, which are used as augmented datasets to improve ED models. Despite more ED datasets are made available for research and different models have been developed based on them (Yu et al., 2021; Wang et al., 2021), there are still a few problems that need to be investigated: (1) Uniqueness. What are the advantages of each ED dataset compared with other datasets (including ACE 2005 and TAC KBP) in terms of event types, trigger words, data distributions, event type coverage, and practical applications? (2) Reliability. Since all these new ED datasets are recently introduced, they have not been comprehensively validated by other domain experts. It is unclear if these datasets also suffer from data bias (Wang et al., 2021), label imbalance, and annotation artifact (Gururangan et al., 2018) problems. (3) Accessibility. Although there are many tools and packages for domain experts to explore and visualize these ED datasets, it is still dif\ufb01cult for most people to process and analyze them, and understand the ED task. Therefore, can we develop a tool that can help them systematically explore ED datasets, so that this task can be easily accessed by more people? To address these problems, we develop an ED Explorer (see Fig. 1), that allows both domain experts and non-experts to systematically and ef\ufb01ciently explore different publicly available ED datasets, and the models trained on them. There are three toolkits for users: A web application, an API, and a NLP toolkit. The interactive front-end of the web application (see Fig. 2) makes it very easy for end users to navigate between different event types and trigger words, which can help them better Datasets MAVEN RAMS ALDG Domain Wikipedia News Wikipedia # Sentences 49,873 9,124 72,611 # Event types 168 139 21 # Event mentions 118,732 9,124 72,611 Table 1: Basic statistics of the three ED datasets used. understand the datasets and ef\ufb01ciently check and discover underlying problems in the annotations. There is also a home maintained and easy-to-use NLP toolkit in Python, namely LeafNLP, for the ED task. Therefore, users can test the ED models via the integrated and interactive web application (see Fig. 3), API and LeafNLP. 2 Event Detection Datasets In this section, we introduce the details of three representative ED datasets that are presented in our ED Explorer, including MAVEN (Wang et al., 2020), RAMS (Ebner et al., 2020), and ALDG (Chen et al., 2017). MAVEN represents open-domain general purpose ED datasets, which can detect multiple triggers and events in a single sentence. 
For RAMS and ALDG, each of the sentences has only one primary event mention. RAMS is manually annotated which is more reliable, while ALDG is automatically generated and can be used in data augmentation when training ED models. There are also several other publicly available datasets, such as CASIE (Satyapanich et al., 2020) and Commodity News Corpus (Lee et al., 2021). We will incorporate them to our platform in the future. Since our platform is designed to be freely and publicly accessible, we do not include the well-known ED datasets ACE 2005 and TAC KBP. Table 1 provides the basic statistics of these datasets. \u2022 MAssive eVENt detection (MAVEN) dataset (Wang et al., 2020): is a massive ED dataset developed in 2020 by combining machine generation and human-annotation based on 4,480 Wikipedia documents. It aims at addressing limitations of existing ED datasets about data scarcity and low coverage of event types. The event types in MAVEN are derived from the frames de\ufb01ned in the linguistic resource Frame net (Baker et al., 1998) with a large coverage of events in the general domain. Compared with existing datasets, MAVEN covers 168 event types, and 118,732 events mentions, which indicates a larger data scale and a larger event coverage. Recently, it has been used for developing different ED mod\fels (Cao et al., 2021; Yu et al., 2021; Wang et al., 2021; Frisoni et al., 2021; Wang et al., 2021). \u2022 Roles Across Multiple Sentences (RAMS) (Ebner et al., 2020): is a crowdsourced dataset developed for identifying explicit arguments of different roles for an event from multiple sentences, which is known as multi-sentence argument linking. It covers 139 event types, 9,124 annotated events from 3,993 news articles, and 65 roles. Compared with prior small-scale datasets for cross-sentence argument linking, RAMS advances the development of advanced deep learning models for this task (Zhang et al., 2020; Wen et al., 2021; Lou et al., 2021). \u2022 Automatically Labeled Data Generation for large scale event extraction (ALDG) (Chen et al., 2017): is an automatically generated dataset using distant supervision (Mintz et al., 2009) by jointly using the world knowledge Freebase (Bollacker et al., 2008) and linguistic knowledge FrameNet. ALDG covers 72,611 events across 6.3 million articles in Wikipedia and 21 event types with a focus on the topic about education, military, and sports. 3 ED Explorer In this section, we describe the pipelines and usage of our Event Detection (ED) Explorer. 3.1 Architecture Our ED Explorer (see Fig. 1) enables end-users to explore ED datasets and models by interacting with a Web Application and API. The Web Application is an HTTP server in Node.js developed following the concept of Model-View-Controller design pattern. For the Controller, we adopt express as the primary framework and express router to handle routing and user navigation. For the View (i.e., front-end), we use ejs as our template engine to generate HTML and use Bootstrap to style web pages. For the Model, different models do not directly interact with databases, instead they send and receive JSON content via HTTP requests (e.g., GET and POST) to a REST API. The Web API is built with FastAPI, which is a high-performance Python web framework. Different models, i.e., functions, in FastAPI handle different requests, interact with databases or machine learning (ML) pipelines, and respond to the requests. 
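A minimal sketch of what one such FastAPI request handler could look like is shown below. The /annotate route, the request and response schemas, and the run_event_detector helper are hypothetical stand-ins for illustration; they are not the ED Explorer's actual API and do not call LeafNLP.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AnnotateRequest(BaseModel):
    text: str

class EventMention(BaseModel):
    trigger: str
    event_type: str
    start: int
    end: int

def run_event_detector(text: str):
    """Stub standing in for the trained ED model so the example is self-contained."""
    idx = text.find("storm")
    if idx < 0:
        return []
    return [{"trigger": "storm", "event_type": "Catastrophe", "start": idx, "end": idx + 5}]

@app.post("/annotate", response_model=list[EventMention])
def annotate(req: AnnotateRequest):
    # A real handler would invoke the ML pipeline and read/write the results database.
    return [EventMention(**m) for m in run_event_detector(req.text)]
```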
The ML pipelines (1) get messages from API models, (2) annotate texts with a home maintained NLP toolkit, namely LeafNLP1, and (3) store the annotated results in a database. Following this design, there are three entry points for end-users: (1) Web Application. Users can explore ED datasets and models via front-end of ED Explorer. (2) Web API. Users can get processed data and output of ED models via interacting with the Web API. (3) LeafNLP. Users can download and install LeafNLP (via pip install leafnlp) and use this toolkit to annotate their text. 3.2 ED Dataset Explorer The ED Dataset Explorer (EDDE, see Fig. 2) aims at helping users understand and explore ED datasets more ef\ufb01ciently. In this section, we introduce three primary components of EDDE. 3.2.1 Events Overview The \ufb01rst component of EDDE is an events overview page (see Fig. 2 (a)) that shows the distribution of different events, including distributions of trigger words in different events. The design of this page is intended to help end-users understand the following research questions: \u2022 How many event types are there? What are they? \u2022 How many event mention instances for each event type? \u2022 What are trigger words for each event type? \u2022 How many event mention instances for each event type and trigger word? On one hand, these questions are very important for us to \ufb01nd underlying problems of an ED dataset and improve existing annotations. On the other hand, they provide us some guidance to develop and evaluate ML models. For example, in MAVEN dataset, more than 30 event types, such as Breathing and Change Tool, have less than 100 event mention instances. Therefore, we should perform extra evaluations for ML models in predicting these rare event types. Also, we should annotate more instances for these events in order to improve the quality of the MAVEN dataset. 3.2.2 Event Type Explorer Fig. 2 (b) displays the second component of EDDE, namely, Event Type Explorer. For each event type (e.g., Catastrophe), we show 10 most frequent trigger words (with the count of their corresponding instances) and all event mention instances (i.e., sentences). In each instance, we use RED color to 1https://pypi.org/project/leafnlp \fFigure 2: Front-end design of ED Explorer, which includes three primary components, including (a) Events Overview, (b) Event Type Explorer, and (c-d) Trigger Word Explorer. highlight trigger words, and also show other triggered events and their trigger words. In addition, we use BLUE color to indicate candidate trigger words that do not trigger any event (namely, negative triggers). With Event Type Explorer, end-users can easily and ef\ufb01ciently explore different event types, their trigger words and event mention instances. 3.2.3 Trigger Word Explorer Trigger Word Explorer, which is the third component of EDDE (Fig. 2 (c) and (d)), is designed to systematically explore trigger words, their event types, and event mention instances. In most realworld applications (e.g., Fig. 3 (b)), we start exploring events by analyzing trigger words. Therefore, understanding what events a word may trigger is naturally the \ufb01rst step to understand event detection. The design of instance visualization is the same as that for Event Type Explorer. Fig. 2 (c) and (d) represent two different \ufb01lters, which show event mention instances for all events triggered by a trigger word (i.e., storm) and a single event (Attack) triggered by the trigger word (storm). 
Trigger Words Explorer has played a very important role for us to identify incorrect annotations in this work. For example, in Fig. 2 (c), we \ufb01nd that in 925 instances, storm is annotated as Catastrophe, but it is also labeled as Attack, Self motion, Damaging, Motion, and Bodily Harm. By manually checking these rare instances, we have found many incorrect annotations. In 771 instances, storm is treated as negative trigger, which is also questionable. 3.3 ED Model Explorer In addition to ED Dataset Explorer, it is also important to explore machine learning (ML) models trained on an ED dataset, so that we can perform integrated analysis on model outputs, and jointly improve both data annotations and ML models. In this work, we have implemented a deep learning model for the event detection task and trained it on the MAVEN dataset. The architecture of the model is similar to the one used for the named entity recognition task in (Devlin et al., 2019), where input texts are \ufb01rst tokenized with a casepreserving Word-Piece model; and then, they are encoded by a BERT encoder with 12 transformer \fFigure 3: Live demonstration of ED Model Explorer. (a) Input sentence or article. (b) Output from ED Model Explorer with predicted events. (c) Further exploration of the trigger word with Trigger Word Explorer. layers (Wolf et al., 2020). The contextual embeddings from BERT encoder are passed to a randomly initialized two-layer BiLSTM before the classi\ufb01cation layer. Fig. 3 (a) and (b) show the front-end of the demonstration, where end-users can type their input (e.g., a sentence or an article) in the text box and then they can view the annotated sentences. The trigger words and event types in Fig. 3 (b) has linked to the Trigger Words Explorer and Event Types Explorer, which makes it easy for them to check the dataset that the ED model was trained on (Fig. 3 (c)). Therefore, this integrated interactive system can help end-users better understand the model outputs and the ED datasets. 4 ED Datasets Analysis With our ED Explorer, we have systematically explored MAVEN, RAMS, and ALDG datasets, and placed special emphasis on the MAVEN dataset, since it is developed and manually annotated for the ED task. In this section, we present our primary \ufb01ndings for MAVEN. 4.1 Common Problems Through Event Overview page and Trigger Word Explorer, we found that it is necessary to investigate the statistics of the MAVEN dataset, because we observed that many event types only have less than 100 annotated instances; and for many trigTriggers Event Types (# Ins.) N.T. (# Ins.) crash Catastrophe(174), Damaging(4) Motion(2), Attack(2) 153 damage Damaging(619), Causation(1) Destroying(1), Bodily Harm(1) 275 storm Catastrophe(925), Attack(14) Self Motion(5), Damaging(1) Motion(1), Bodily Harm(1) 771 Table 2: Examples of trigger words with their annotated event types and frequencies in MAVEN. N.T. represents Negative Triggers that are note annotated. ger words, they trigger one event in most instances. For example, crash trigger Catastrophe in 174 instances and is Negative Trigger in 153 instances (see Table 2). Here, we summarize the common problems as follows: \u2022 Sparsity. In the MAVEN training set, there are 50,388 unique candidate trigger words, out of which 7,074 words trigger at least one event. The total number of annotated instances is 96,897. Within 7,074, only 963 words (14%) have at least 20 annotated instances. However, they cover 75,950 annotated instances (78%). 
Therefore, for most trigger words, they have very few instances to train ED models. \u2022 Label Bias. In our ED Explorer, we also show the distribution of topics for documents used by MAVEN. We observed that most documents are about military con\ufb02ict, hurricane, civilian attack, concert tour and civil con\ufb02ict, which may lead to label bias problem and limit the applications of ED models trained on MAVEN. For example, for event Building, the most common triggers are established, built, building, constructed, build, establish, buildings, set up, assembly, and erected. In our experiments, we found via our ED Model Explorer that the ED model cannot detect event Building in any of these sentences: \u201cWe will build a house.\u201d, \u201cWe will construct a new building.\u201d, \u201cWe will expand the runway.\u201d, \u201cWe will redevelop the terminal.\u201d \u2022 Label Imbalance. For 7,074 words, we further checked the events they may trigger and found that 4,648 words (66%) have triggered only one event in different instances. For other words that trigger more than one event, many of them have dominant events. Here, we de\ufb01ne an event as the dominant event for a trigger word, if the number of instances for one event is signi\ufb01cantly larger than other events (i.e., #of instances for the most frequent events #of instances for other events > 5). For ex\fProblems Instance Examples Negative Trigger It formed on October 1 in the Caribbean Sea as the seventeenth tropical storm::Negative Trigger, and initially moved slowly to the north. Trigger Wrong Events Unknown to the hijackers, passengers aboard made::Manufacturing telephone calls to friends and family and relayed information on the hijacking. Events Ambiguity S1: The hurricane reached peak winds of 125 mph (205 km/h) on October 6 while moving::Motion through the Bahamas. S2: By midday on June 25, the hurricane reached peak winds of before moving::Self Motion inland well south of the U.S. Mexico border. Table 3: Common annotation problems in MAVEN. The pattern moving::Motion represents the word moving triggers an event Motion. ample, in Table 2, Catastrophe is the dominant events for crash and storm. Using this de\ufb01nition, we \ufb01nd that among the 963 words that have more annotated instances, 585 (61%) of them have dominant events, which results in the label imbalance problem. Therefore, the ED model trained on MAVEN may suffer from the problem that it predicts only one event for each trigger word in different scenarios. 4.2 Debatable Annotations in MAVEN In addition to analyzing the distributions of events and triggers, we have also manually checked annotated instances in MAVEN. In details, for each of the 168 event types, we chose the most frequent 10 trigger words as candidates and for each trigger word, we manually checked 10 annotated instances for every event type it triggers. These procedures can be easily accomplished by navigating through ED Event Type (see Fig. 2 (b)) and Trigger Word Explorers (see Fig. 2 (c) and (d)). We have checked around 10,000 instances in total, and found 2,579 debatable instances (25%), which are also shown in the ED Explorer and are publicly available via API. From these debatable annotations, we found that there are typically three types of annotation mistakes: (1) Negative Trigger represents the situations that annotating a word triggers an event as a negative trigger. (2) Trigger Wrong Events indicates that the word does not trigger the annotated event types. 
(3) Events Ambiguity means that it is dif\ufb01cult to distinguish two event types (such as Motion and Self Motion), so the annotated instances are also debatable. We have shown examples of each of the annotation problems in Table 3. 4.3 Findings from the ED Model Explorer We have also found several annotation problems via ED Model Explorer, i.e., the ED Demonstration. (1) The \ufb01rst problem is Label Bias, which has been discussed in Section 4.1. (2) The second problem is that many words that should trigger an event do not actually trigger any event in a number of testing cases. For example, in Fig. 3, the word storm should trigger an event Catastrophe, however, it does not trigger any events, even if we test cases like \u201cThe storm hits New York.\u201d, \u201cThe storm damages a lot of houses.\u201d, etc. We think this might because storm is labeled as Negative Trigger in 771 instances (see Table 2). 4.4 Future Directions As ED Explorer can help us systematically explore ED datasets, better understand events and trigger words, and ef\ufb01ciently discover underlying annotation problems, we will include more ED datasets and models, and make it easier for people to explore them in a single integrated platform in the future. More importantly, we plan to improve the quality of MAVEN dataset by (1) continuing checking and correcting annotation errors, (2) annotating more documents in other \ufb01elds to mitigate label bias problem, (3) annotating more instances for non-dominant events to address label imbalance problem. 5" + } + ], + "Yihao Liu": [ + { + "url": "http://arxiv.org/abs/2403.08556v1", + "title": "SM4Depth: Seamless Monocular Metric Depth Estimation across Multiple Cameras and Scenes by One Model", + "abstract": "The generalization of monocular metric depth estimation (MMDE) has been a\nlongstanding challenge. Recent methods made progress by combining relative and\nmetric depth or aligning input image focal length. However, they are still\nbeset by challenges in camera, scene, and data levels: (1) Sensitivity to\ndifferent cameras; (2) Inconsistent accuracy across scenes; (3) Reliance on\nmassive training data. This paper proposes SM4Depth, a seamless MMDE method, to\naddress all the issues above within a single network. First, we reveal that a\nconsistent field of view (FOV) is the key to resolve ``metric ambiguity''\nacross cameras, which guides us to propose a more straightforward preprocessing\nunit. Second, to achieve consistently high accuracy across scenes, we\nexplicitly model the metric scale determination as discretizing the depth\ninterval into bins and propose variation-based unnormalized depth bins. This\nmethod bridges the depth gap of diverse scenes by reducing the ambiguity of the\nconventional metric bin. Third, to reduce the reliance on massive training\ndata, we propose a ``divide and conquer\" solution. Instead of estimating\ndirectly from the vast solution space, the correct metric bins are estimated\nfrom multiple solution sub-spaces for complexity reduction. Finally, with just\n150K RGB-D pairs and a consumer-grade GPU for training, SM4Depth achieves\nstate-of-the-art performance on most previously unseen datasets, especially\nsurpassing ZoeDepth and Metric3D on mRI$_\\theta$. 
The code can be found at\nhttps://github.com/1hao-Liu/SM4Depth.", + "authors": "Yihao Liu, Feng Xue, Anlong Ming", + "published": "2024-03-13", + "updated": "2024-03-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction Monocular depth estimation is a fundamental visual task with wide-ranging applications in navigation [6, 42], 3D reconstruction [28, 47], 3D object perception [19, 32], visual SLAM [52, 58], and human pose estimation [33, 43]. In this \u2020Equal Contribution. Email: l1h@bupt.edu.cn, feng.xue@unitn.it *Corresponding Author. Email: mal@bupt.edu.cn Scale Shift (m) 270M 148M 267M 203M 110M Params 280G 785G 283G 569G 105G FLOPs NeWCRFs MIM ZoeDepth Metric3D SM4Depth Methods Ideal Point without Error Average Error (1.3, -3.6) (1.1, -2.1) (1.4, -3.6) (0.9, 2.4) (1.1, -0.8) (0.0, 0.0) Figure 1. Confidence eclipses of \u2206scale and \u2206shift(m) between the prediction and the ground truth for all data in multiple zero-shot datasets (iBims-1[22], DIODE[46], nuScenes-val[7], DDAD[18], and ETH3D[39]). community, early research focused on MMDE [8, 23, 51], which were trained only on specific datasets. As a result, they suffered from poor generalization when applied to unseen datasets, which limited their applications in the real world. To solve this issue, much research shifted their focus to monocular relative depth estimation (MRDE) [26, 36, 55] while disregarding the metric scale. Leveraging diverse and easily accessible relative depth data, these studies have achieved impressive performance, enabling their application in scale-free tasks, such as image editing [11, 30] and image stylization [21, 29], but not scale-sensitive applications, e.g., obstacle avoidance [49, 50, 53], 3D reconstruction [44], and virtual reality [1]. Beyond these approaches, universal MMDE has recently gained prominence for its generalization capabilities, marked by ZoeDepth [5] and Metric3D [56]. The former combined relative depth pre-training and metric depth finetuning. The latter achieved higher generalization by training on 8M data at the same focal length. However, both of them still face challenges in the three aspects of MMDE: 1. Camera Level: The existing method [56] aligned all images to one focal length by maintaining a canonical camera space, which is not straightforward for a preprocessing unit. arXiv:2403.08556v1 [cs.CV] 13 Mar 2024 \fFigure 2. Depth and distribution visualization of SM4Depth. Our method enables good generalization across multiple metric depth datasets captured by different sensors. Top: input images. Middle: depth prediction. Bottom: distribution of the prediction (red) and ground truth (green). Six datasets: SUN-RGBD[41], DIODE[46], iBims-1[22], ETH3D[39], nuScenes-val[7], and DDAD[18]. 2. Scene Level: Real-world scenes vary widely in depth, ranging from [1m, 2m] (indoor close-up) to [0.5m, 80m] (street scenes), making models tend to focus on specific scenes and causing inconsistent accuracy across scenes. As depicted in Fig. 1, the previous works suffered from large accuracy fluctuations and high average errors. 3. Data Level: The reliance on massive training data (8M metric depth data for Metric3D) remains due to the high complexity of determining a unique metric scale from a vast solution space of the natural scene. Aiming to address these issues, we propose a Seamless Model for MMDE across Multiple cameras and scenes (SM4Depth for short). 
First, to be compatible with all cameras, we analyze the key role of FOV in solving the metric ambiguity, which guides us to design a more straightforward FOV alignment unit for consistent inputs. For the second issue, we explicitly model metric scale determination as discretizing the depth interval into bins [3], and propose variation-based unnormalized bins. This method reduces the bin ambiguity between images which was inherent in previous width-based bins. This facilitates the learning of large-gap depth ranges from diverse scenes. Regarding the third issue, we propose a domain-aware bin estimation mechanism based on the \u201cdivide and conquer\u201d idea, which estimates metric bins from various solution sub-spaces, not the entire one, for reducing complexity. Divide: we divide the common depth range into several range domains (RDs) offline and generate independent metric bins for each RD online. Conquer: we predict the RD that the input image belongs to, and weightedly fuse all bins into a single one. By solving all three issues, SM4Depth obtains correct depth distributions and ranges in all scenes displayed in Fig. 2. Our primary contributions are as follows: 1. Camera level: We delve deep into the factor of \u201cmetric ambiguity\u201d, and deduce the crucial role of FOV consistency. This sparks the design of our FOV alignment unit, which is more efficient than the previous one. 2. Scene level: To facilitate depth learning across scenes, we reveal the previously omitted bin ambiguity. This inspires us to propose the variation-based depth bins that bridge the gaps in scenes by reducing this ambiguity. 3. Data level: We propose the domain-aware bin estimation mechanism inspired by the \u201cdivide and conquer\u201d idea. It reduces the complexity of metric bin estimation and thus eliminates the need for massive training data. 4. SM4Depth achieves state-of-the-art zero-shot performance but is trained on only 150K RGB-D pairs without needing a GPU cluster. With the aid of these accomplishments, SM4Depth significantly enhances the practical applicability of MMDE. 2. Related Work Monocular metric depth estimation is a classic visual task, in which determining metric scales is a crucial point, and there are two paradigms. Mainstream MMDE methods [8, 14, 23, 51, 56] have directly modeled this task as a pixel-wise regression problem (predicting continuous depth values in the real metric space), where metric scales are implicitly encoded. In contrast, since [17], several methods [3, 4, 40] have defined this task as a classification problem. Among them, Adabins [3] explicitly encoded the metric scale into image-level depth bins. We follow the latter paradigm as this paper focuses on recovering metric scale. However, the same bin on two images with a large gap in depth range represents drastically different depths, causing misleading back-propagation during training. In this paper, we introduce variation-based bins to overcome this issue. Zero-shot generation has become a new trend of monocular depth estimation in recent years. Early works [26, 34, 35] mainly achieved this goal by training with more accessible relative depth data. Initially, Li et al. [26] developed a pipeline of MRDE on large-scale relative depth data. Ranftl et al. [34] trained an MRDE model on five datasets and reapplied the training strategy to [35]. For high practicality, the universal MMDE was first proposed in [5] which combined relative depth and metric depth to achieve generalization. Yin et al. 
[56] trained the model on 8M metric depth data for generalization. This reliance on numerous training data is due to the complexity of determining correct metric \f\ufffd \ufffd\ufffd \ufffd\ufffd \ufffd\ufffd Optical center Imaging plane \ufffd Crop & Resize Pad & Resize a b Resize Figure 3. Principle and examples of FOV Alignment. (a) Side view of the pinhole camera model. (b) Cases of FOV alignment. scales from diverse natural scenes. Our approach aims to reduce the reliance on training data. 3. Problem Analysis and Countermeasures In this section, we delve deep into the three issues of MMDE at the camera, scene, and data levels, and provide specific solutions for each issue. 3.1. Camera level: Sensitivity to Different Cameras Analysis of metric ambiguity. According to Metric3D [56], due to different intrinsic parameters, two cameras produce different projections when observing an object at the same distance, which is well known as \u201cmetric ambiguity\u201d. Next, we investigate the key of eliminating metric ambiguity by Fig. 3 that illustrates the imaging process of the pinhole camera model. Assuming that d denotes the depth of the object, and fy denotes the Y-direction focal length of the camera, measured in pixels. According to the similarity principle, there is an equation: d S = fy s (1) where S and s are the actual height (in millimeters) and the imaging height (in pixels) of the object respectively. Based on Eq. (1), the object\u2019s depth can be formulated as d = S \u0002 fy s \u0003 . Therefore, a fixed value of \u0002 fy s \u0003 is crucial for a consistent depth d across different cameras. In practice, all images need to be resized into the same resolution before being fed into the deep network: d = S h(f \u2032 y/fy)fy (h \u2032/h)s i (2) where f \u2032 y and h \u2032 are the focal length and height of the network input, h is the original height of the image, and (f \u2032 y/fy) = (h \u2032/h). Note that, since f \u2032 y and h \u2032 are two preset values, the consistency of h fy ensures a consistent depth d across different cameras. Furthermore, the value h fy follows an arctangent function relationship with the camera\u2019s vertical FOV denoted as \u03c9y: \u03c9y = 2 arctan( h 2fy ) (3) Thus, the consistency of \u03c9y is essential for consistent depth and eliminating metric ambiguity across different cameras. The same applies to horizontal FOV denoted as \u03c9x. FOV alignment for solving metric ambiguity. Following the analysis above, we propose an FOV alignment unit to ensure input consistency across cameras. Given an input image I with focal length (fx, fy), we first preset the input resolution of network as (h, w) and define the target FOV as (\u03c9 \u2032 x, \u03c9 \u2032 y) in radians. Then, according to Eq. (3), an rectangular region I \u2032 \u2208Rh \u2032\u00d7w \u2032\u00d73 on I equivalent to the target FOV (\u03c9 \u2032 x, \u03c9 \u2032 y) is calculated by: w \u2032 = 2fx tan(\u03c9 \u2032 x 2 ) , h \u2032 = 2fy tan(\u03c9 \u2032 y 2 ) (4) Next, we crop this region I \u2032 from the original image I, and fill the pixels beyond I with 255. Finally, the region I \u2032 is resized to the target resolution (h, w) for generating a new image I as input of the network, as shown in Fig. 3 (b). Unlike transforming images to the same intrinsic parameter [56], we aim to ensure consistent inputs by unifying the FOV of images, thus do not need to maintain a canonical camera space. 3.2. 
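As a concrete reference for the FOV alignment unit of Sec. 3.1 (Eqs. (3)-(4)), a minimal preprocessing sketch is given below: the region corresponding to the target FOV is cropped from the input (or padded with 255 where it exceeds the image) and then resized to the fixed network resolution. Centering the region on the image center and the OpenCV-based resizing are assumptions not specified in the text; the default FOV and resolution follow the values reported later in Sec. 5.3.

```python
import numpy as np
import cv2

def fov_align(img, fx, fy, target_fov_deg=(58.0, 45.0), target_size=(564, 424)):
    """Crop/pad an image to a fixed FOV, then resize (Sec. 3.1, Eqs. (3)-(4))."""
    H, W = img.shape[:2]
    wx, wy = np.deg2rad(target_fov_deg)
    # Size of the region on the original image that matches the target FOV, Eq. (4).
    w_t = int(round(2 * fx * np.tan(wx / 2)))
    h_t = int(round(2 * fy * np.tan(wy / 2)))
    # Canvas for that region, filled with 255 wherever it exceeds the image.
    canvas = np.full((h_t, w_t, 3), 255, dtype=img.dtype)
    # Center-aligned overlap between the image and the target region (assumption).
    x0, y0 = (w_t - W) // 2, (h_t - H) // 2
    src_x0, src_y0 = max(0, -x0), max(0, -y0)
    dst_x0, dst_y0 = max(0, x0), max(0, y0)
    w_ov = min(W - src_x0, w_t - dst_x0)
    h_ov = min(H - src_y0, h_t - dst_y0)
    canvas[dst_y0:dst_y0 + h_ov, dst_x0:dst_x0 + w_ov] = \
        img[src_y0:src_y0 + h_ov, src_x0:src_x0 + w_ov]
    # Resize the FOV-consistent region to the fixed network resolution (w, h).
    return cv2.resize(canvas, target_size, interpolation=cv2.INTER_LINEAR)
```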
Scene level: Inconsistent Accuracy Generally, real-world images exhibit vastly different depth ranges, e.g. [1m, 2m] for indoor close-up and [0.5m, 80m] for street scenes. Such a large gap causes the model to overly concentrate on specific scenes instead of all scenes, leading to inconsistent accuracy across different scenes. In this section, we solve this issue by novel depth bins that bridge the gap of metric scale representation across scenes. Before that, we briefly review the conventional depth bin [3] and outline its weakness. Reviewing width-based depth bin and its weakness. Given the input image I \u2208Rh\u00d7w\u00d73, Adabins [3] generates an N-channel probability map P \u2208Rh\u00d7w\u00d7N and a vector c \u2208RN\u00d71 representing the centers of N depth bins discreted from the depth interval, which are linearly combined to obtain a metric depth map D \u2208Rh\u00d7w: D(i) = XN n=1 cnPn(i) (5) where D(i) is the ith pixel\u2019s predicted depth, and Pn(i) denotes the probability for pixel i that its depth is equal to the nth bin center cn. In Eq. (5), the bin center cn is calculated by accumulating the width of each bin b \u2208RN\u00d71: cn = dmin + (dmax \u2212dmin) (bn/2 + Xn\u22121 j=1 bj) (6) where bn = (b \u2032 n +\u03f5)/ PN i=1(b \u2032 i +\u03f5) denotes the normalized width of the nth depth bin, with \u03f5 = 10\u22123 and b \u2032 n \u2208[0, +\u221e) being the unnormalized width predicted through a feedforward neural network (FFN) with the ReLU activation func\fLarge bin ambiguity Reduced bin ambiguity 1 32 64 96 160 192 224 256 128 30m 0m 25m 20m 15m 5m 10m Index of depth bin center Depth Width-based bin on \u201cmeetingroom_01\u201d of iBims Width-based bin on \u201ccorridor_01\u201d of iBims Variation-based bin on \u201cmeetingroom_01\u201d of iBims Variation-based bin on \u201ccorridor_01\u201d of iBims Figure 4. The bin center curves illustrate the bin ambiguity between a distant view image ([0m, 30m]) and a close-up one ([0m, 3.5m]). They are unnormalized, which was proved to have no effect in [3]. tion. During training, the bi-directional Chamfer loss [16] is employed to enforce the small width b \u2032 within the interesting depth interval in the ground truth depth map D: Lbin(c, D) = X d\u2208D min cn\u2208c||d\u2212cn||2+ X cn\u2208c min d\u2208D||d\u2212cn||2 (7) where d is the pixel\u2019s correct depth. Since the depth width b \u2032 n is strictly positive, the bin centers cn increase monotonically with n until approaching dmax, which enlarges the distinction in the bin center at the same index between two images. An excessive distinction would confuse the physical metric meaning of the probability map\u2019s channels Pn. Thus, there is an ambiguity in pixelwise classification when generating Pn, in turn leading to back-propagation of misleading signals during training. We term this phenomenon as \u201cbin ambiguity\u201d. As shown in Fig. 4, there is a significant difference between the same depth bin of the long-distance image (blue dashed line) and that of the close-up image (blue solid line). Depth variation based bins for consistent accuracy. Our idea for reducing bin ambiguity is to use only the front part of the depth bin when the input image has a small depth range. To achieve this, we propose the variation-based unnormalized depth bins. Unlike the conventional bin b \u2032 n, we use only an FFN without ReLU activation. In this way, the FFN outputs variations that allow negative values, denoted as \u02c6 b \u2032 \u2208RN\u00d71. 
Then, we re-formulate the bin center c in Eq. (6) as an unnormalized bin center \u02c6 c to no longer be limited by the depth range of specific datasets (e.g., [0m, 10m] for NYUDv2 and [0m, 80m] for KITTI): \u02c6 cn = \u03f5 + \u02c6 b \u2032 n/2 + Xn\u22121 j=1 \u02c6 b \u2032 j (8) Since the depth variations \u02c6 b \u2032 are allowed to be negative, the bin center value {\u02c6 cn|n \u2208[1, N]} does not increase monotonically, as shown by the red lines in Fig. 4. In this way, the Chamfer loss (see Eq. (7)) forces an intermediate bin center \u02c6 cn(n\u2208[1, N]) to have the maximum depth, not necessarily the last bin center \u02c6 cN. As a result, it reduces the Heatmap with variation-based depth bin ( \u0302 \ud835\udc50, % \ud835\udc4f!) Heatmap with width-based depth bin (\ud835\udc50, \ud835\udc4f, \ud835\udc4f!) Low Frequency High Frequency 20m 16m 12m 8m 4m 0m1 32 64 96 128 160 192 256 224 20m 16m 12m 8m 4m 0m1 32 64 96 128 160 192 256 224 Figure 5. The heatmaps show the frequency of depth values occurring in each depth bin, which are obtained from iBims-1 [22]. If a square (X, Y) appears darker, it indicates that the depth value Y mainly occurs within the X th depth bin. ambiguity of the bins across different images, as indicated by the red double-headed arrows in Fig. 4. Specifically, since the front bin centers {\u02c6 cn|n \u2208[1, n]} indicate depths from 0 to the maximum depth dmax, there is a lack of supervision for the latter bin centers {\u02c6 cn|n\u2208(n, N]}, causing them to be roughly suppressed below the maximum depth dmax. Consequently, the latter channels of the probability map {Pn|n\u2208(n, N]} are filled with small values. Fig. 5 presents additional statistics information, i.e., the frequency of depth values occurring in each depth bin. For the width-based depth bin (c, b, b \u2032), depths below 4m occur most frequently across all bins. Conversely, variationbased depth bin (\u02c6 c,\u02c6 b \u2032) exhibits larger depths in the latter bins. This means that the depth values represented by each bin center are pulled apart on the level of the entire dataset, suppressing the bin ambiguity. 3.3. Data level: Reliance on Massive Training Data Reason behind the reliance. Practical applications differ from specific datasets in that images are taken from various camera angles and innumerable scenes. Due to the diverse nature of appearance, mapping from visual cues to a wide range of depth values becomes highly intricate and cannot be exhaustively presented. Consequently, determining metric depth bins entails exploring a vast solution space, which necessitates greater attention to reducing its complexity. However, previous works have overlooked this crucial aspect by directly making prediction (e.g., [5] solves for metric bins and [56] predicts metric depth) from the entire solution space, inevitably requiring massive training data. To address this issue, we first divide the whole solution space into several sub-spaces. Then a \u201cdivide and conquer\u201d method is proposed to generate metric bins in each sub-space and predict the best metric bins for the input. Stage 1: Online depth range domain generation. To divide the solution space into sub-spaces, the previous ap\fproaches group all images according to semantic categories [5, 27]. However, a large gap in depth range may exist within one scene category. Unlike them, we group all training images according to the depth range that better constrains the camera angle and scene from which the image is taken. 
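To make the contrast concrete, the snippet below sketches both the width-based centers of Eq. (6) and the variation-based centers of Eq. (8), together with the bi-directional Chamfer objective of Eq. (7). The tensor shapes ([batch, N] for the FFN outputs, 1-D tensors for the loss) and the d_min/d_max defaults are illustrative assumptions, not the exact training configuration.

```python
import torch
import torch.nn.functional as F

def width_based_centers(raw_widths, d_min=1e-3, d_max=10.0):
    """Adabins-style centers, Eq. (6): non-negative widths normalized to [d_min, d_max]."""
    b = F.relu(raw_widths) + 1e-3                    # b'_n >= 0
    b = b / b.sum(dim=-1, keepdim=True)              # normalized widths b_n
    edges = torch.cumsum(b, dim=-1)                  # running sum of widths
    return d_min + (d_max - d_min) * (edges - 0.5 * b)   # monotonically increasing

def variation_based_centers(raw_variations, eps=1e-3):
    """Proposed centers, Eq. (8): signed variations, no range normalization."""
    cumsum = torch.cumsum(raw_variations, dim=-1)
    return eps + cumsum - 0.5 * raw_variations       # eps + b_n/2 + sum_{j<n} b_j

def chamfer_bin_loss(centers, gt_depths):
    """Bi-directional Chamfer loss, Eq. (7), for one image.

    centers: [N] bin centers; gt_depths: [M] valid ground-truth depths.
    """
    diff2 = (gt_depths[:, None] - centers[None, :]) ** 2   # [M, N] squared distances
    return diff2.min(dim=1).values.sum() + diff2.min(dim=0).values.sum()
```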
According to [17], the amount of information for depth estimation decreases as the depth value increases. Thus, we employ a space-increasing strategy to gain more image groups (named range domain, RD) when the depth value is smaller. Assuming that the depth range is [Zmin, Zmax] and there are K RDs, the kth RD can be formulated as: RDk = h Zmin, Zmin + Xk i=1 2i(Zmax \u2212Zmin) K(1 + K) i (9) We further visualize the RDs in the supplementary material. Stage 2: Online domain-aware bin estimation design. We design a domain-aware bin estimation mechanism that generates metric bins for each RD and finds the bestmatching metric bins, following the \u201cdivide and conquer\u201d idea in two steps. The \u201cDivide\u201d step aims to discretize each depth interval RDk into N bins. Specifically, given the deep feature of the input image, we leverage a transformer encoder to learn the relationship between the deep feature and K preset learnable 1-D embeddings (called bin queries). The output embeddings of these queries are fed into an FFN to generate K depth variation vectors {\u02c6 b \u2032[k]|k\u2208[1, K]}, and calculate the bin center vectors {\u02c6 c[k]|k \u2208[1, K]} based on Eq. (8). To illustrate our idea, we compare our design with two other possible choices: \u2022 1 Query + K FFNs: Using K FFNs to process the output of only one query. \u2022 K Queries + K FFNs: Using K FFNs to process the outputs of K queries in a one-to-one way. \u2022 K Queries + 1 FFN (Ours): Using only one FFN to process the outputs of K queries. The first two designs both employ K FFNs. Thus, each FFN only learns the knowledge of a single RD during training, which leads to drastically different outputs of these FFNs and makes them sensitive to input noise. The last design is recommended as the best choice and the experiments (in Sec.5.6) verify its superiority over other options. The \u201cConquer\u201d step aims to estimate the correct RD for the input image and determine the best-matching metric bins. Specifically, we preset an additional 1-D embedding (called domain query) alongside the bin queries. Its corresponding output is then fed into a classification head (CLS) to generate the probability that the input image belongs to each RD, denoted as {yk \u2208[0, 1]|k \u2208[1, K]}. Subsequently, considering the possibility of images being positioned near the decision boundary of RD classification, we do not select the top-scoring metric bin but instead ( \u210e,\ud835\udc64) (\ud835\udf14! \",\ud835\udf14# \") \ud835\udc39 $ \ud835\udc39 % \ud835\udc39 & \ud835\udc39 ' \ud835\udc39 ( Encoder CLS Pyramid Scene Transformer [37] \u2026 0.9 \u0302 \ud835\udc50$ 0.1 \u0302 \ud835\udc50% 0.0 \u0302 \ud835\udc50) \u22ef \u22ef FFN FFN FFN FFN \u2026 \u2026 DPT Decoder Structure S S S \ud835\udc1c \ud835\udc08 \ud835\udc37 S \ud835\udc37$ \" \ud835\udc37% \" \ud835\udc37& \" \ud835\udc37' \" \ud835\udc37( \" 3\u00d73 Convolution 2\u00d7 Upsample Residual Block Softmax \ud835\udc3e Bin Queries Domain query CLS Classification Head Weighted Fusion FFN Weight-shared Feedforward Neural Network Image S C \u25cf \ud835\udc37*+$ \" \ud835\udc39 * \ud835\udc1c \ud835\udc37* \" Decoder Stage Concatenate C Pixel-wise Product \u25cf FOV Align \u25cf Figure 6. Pipeline of SM4Depth. The blue block denotes the domain-aware bin estimation. The red one denotes our decoder. 
combine all bin center vectors \u02c6 c[k] to a single one by using the RD probabilities {yk|k \u2208[1, K]} as weights: c = X k\u2208[1,K] \u02c6 c[k]yk (10) where c is the final bin center vector. 4. Architecture of SM4Depth Pipeline. Fig. 6 illustrates the structure of our network. Given an RGB image I \u2208Rh\u00d7w\u00d73 and its pixel-presented focal length (fx, fy), we use the FOV alignment unit (in Sec.3.1) to align I to a preset FOV (\u03c9 \u2032 x, \u03c9 \u2032 y) and resolution (h, w), obtaining an image without metric ambiguity I \u2208Rh\u00d7w\u00d73. Then, we employ a Swin-Transformer as the encoder to extract the deep feature from I. Next, a pyramid scene transformer (PST) [40] is positioned between the encoder and decoder. It consists of three parallel transformer encoders with inputs of different patch sizes, respectively. We employ the transformer encoder with the smallest patch size to process all queries. Based on the mechanism in Sec.3.3, we obtain the bin center vector c of image I. Finally, we leverage a decoder with hierarchical scale constraints (HSC-Decoder) to anchor the metric scale in multiple resolutions and output the metric depth map D. Decoder with hierarchical scale constraints. Our decoder draws inspiration from the refinement decoder structures [8, 24], but the divergence lies in scale constraints on the metric depth at each stage. As shown in Fig. 6, taking the PST\u2019s output, denoted as F1, as input, we employ the DPT\u2019s decoder [35] to gradually recover the resolution of features, denoted as Fs with a size h 2(6\u2212s) \u00d7 w 2(6\u2212s) , where s \u2208{1, 2, 3, 4, 5} is the stage number. In the first stage, F1 is compressed into N-channel and then multiplied pixel\fCategories Method SUN RGB-D iBims-1 Benchmark ETH3D Indoor DIODE Indoor \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 MMDE BTS [24] 0.718 0.181 0.533 -31.45% 0.536 0.233 1.059 -32.82% 0.360 0.324 2.210 -18.73% 0.208 0.419 2.382 -34.57% AdaBins [3] 0.751 0.167 0.493 -23.07% 0.548 0.216 1.078 -29.56% 0.283 0.361 2.347 -31.23% 0.173 0.442 2.450 -40.95% NeWCRFs [57] 0.779 0.159 0.437 -14.67% 0.543 0.209 1.031 -26.43% 0.452 0.268 1.874 0.76% 0.183 0.402 2.307 -33.51% MIM [48] 0.844 0.147 0.341 0.00% 0.717 0.163 0.813 0.00% 0.453 0.287 1.800 0.00% 0.416 0.317 1.960 0.00% Universal MMDE ZoeD-N [5] 0.850 0.125 0.357 3.66% 0.652 0.171 0.883 -7.53% 0.388 0.275 1.678 -1.13% 0.376 0.331 2.198 -8.72% SM4Depth-N 0.874 0.121 0.303 10.80% 0.715 0.162 0.801 0.60% 0.486 0.249 1.662 9.40% 0.418 0.298 1.790 5.05% ZoeD-NK [5] 0.841 0.129 0.367 1.42% 0.610 0.189 0.952 -15.99% 0.353 0.280 1.691 -4.53% 0.386 0.335 2.211 -8.57% Metric3D[56] 0.033 2.631 5.633 \u00d7 0.818 0.158 0.582 15.18% 0.536 0.335 1.550 5.16% 0.505 0.427 1.687 0.21% SM4Depth 0.869 0.127 0.301 9.43% 0.790 0.134 0.673 15.06% 0.527 0.233 1.407 18.99% 0.356 0.300 1.721 1.04% Categories Method nuScenes-val DDAD ETH3D Outdoor DIODE Outdoor \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 \u03b41 \u2191 REL \u2193 RMSE \u2193 mRI\u03b8 \u2191 MMDE BTS [24] 0.420 0.285 9.140 -9.24% 0.802 0.146 7.611 -13.07% 0.175 0.831 5.746 7.19% 0.172 0.838 10.475 -34.70% AdaBins [3] 0.483 0.272 10.178 -7.45% 0.757 0.155 8.673 -22.80% 0.110 0.889 6.480 -12.65% 0.162 
0.853 10.322 -36.09% NeWCRFs [57] 0.415 0.280 7.402 -0.64% 0.866 0.120 6.359 2.66% 0.258 0.799 5.061 29.57% 0.177 0.841 9.304 -29.25% MIM [48] 0.396 0.283 6.868 0.00% 0.859 0.134 6.157 0.00% 0.159 0.889 6.048 0.00% 0.269 0.625 7.819 0.00% Universal MMDE ZoeD-K [5] 0.379 0.290 6.900 -2.41% 0.833 0.130 7.154 -5.41% 0.303 1.012 5.853 26.65% 0.269 0.823 6.891 -6.60% SM4Depth-K 0.623 0.229 7.175 23.98% 0.841 0.160 5.677 -4.56% 0.452 0.294 3.168 99.61% 0.280 0.552 8.335 3.06% ZoeD-NK [5] 0.371 0.299 6.988 -4.57% 0.821 0.139 7.274 -8.77% 0.337 0.752 4.758 49.56% 0.207 0.735 7.570 -12.49% Metric3D*[56] 0.868 0.143 8.506 48.27% 0.896 0.119 7.262 -0.01% 0.324 0.724 9.830 19.93% 0.169 0.499 9.353 -12.21% SM4Depth 0.672 0.214 7.221 29.65% 0.890 0.123 5.390 8.09% 0.348 0.273 3.274 78.01% 0.190 0.487 8.435 -5.05% Table 1. Quantitative results on zero-shot datasets. mRI\u03b8 denotes the mean relative improvement compared to MIM across all metrics(\u03b41, REL, RMSE). All methods undergo evaluation consistently within a specific region. The best results are in bold and the second-best ones are underlined. \u00d7 indicates poor performance. * means that Metric3D was trained on DDAD. wisely with c by Eq. (5), generating a low-resolution depth map D \u2032 1 \u2208R h 32 \u00d7 w 32 . In the following sth stage, the depth map of the former stage D \u2032 s\u22121 is upsampled and fused with feature Fs by a residual convolution block [8]. Then, we linearly combine the fused feature and the bin centers c to generate the depth map D \u2032 s \u2208R h 2(6\u2212s) \u00d7 w 2(6\u2212s) . In this way, the depth map of the last stage D \u2032 5 \u2208R h 2 \u00d7 w 2 is obtained. Compared to the previous refinement decoder [8], HSC-Decoder incorporates the metric bins into each stage to refine the depth range progressively, thus performing better in recovering the depth range. The loss functions are further described in the supplementary material. 5. Experiments 5.1. Datasets Training. We randomly sample RGB-Depth pairs from various datasets for training. Specifically, we sample 24K pairs from ScanNet [13], 15K pairs from Hypersim [37], 51K pairs from DIML [10], 36K pairs from UASOL [2], 14K pairs from ApolloScape [20], and 11K pairs from CityScapes [12]. During training, NYUD [31] and KITTI [45] are used for validation. In addition, we apply the same pre-processing steps to the training data as [5, 34], elaborated in supplementary materials. Testing. To evaluate the zero-shot performance, we employ eight datasets that are not used during the training process: SUN RGB-D [41], iBims-1 [22], ETH3D Indoor [38], DIODE Indoor [46] for indoor scene; nuScenes-val [7], DDAD [18], ETH3D Outdoor [38], DIODE Outdoor [46] for outdoor scene. Note that, we remove the test set of NYUD from SUN RGB-D to avoid unfair comparisons. 5.2. Metrics We employ four widely used metrics [3] for evaluation: the accuracy under threshold (\u03b4k < 1.25k, k = 1, 2, 3), the absolute relative error (REL), and the root mean squared error (RMSE). In addition, we use the relative improvement (RI) across datasets (mRI\u03b7) and metrics (mRI\u03b8) in [5]. During the evaluation, the final output is obtained by averaging the predictions for an image and its mirror image. In addition, the final output is upsampled to match the original image size, and all metrics are computed within the same FOV. Note that, we cap the evaluation depth at 80m (compared to only 8m for SUN RGB-D and 10m for NYUD). 5.3. 
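For reference, the per-image metrics defined in Sec. 5.2 can be computed as in the sketch below. The valid-region masking and the 80m depth cap follow the protocol above; the array layout and variable names are illustrative, and the cross-dataset/cross-metric mRI aggregation of [5] is omitted for brevity.

```python
import numpy as np

def depth_metrics(pred, gt, min_d=1e-3, max_d=80.0):
    """Threshold accuracy (delta_k), absolute relative error, and RMSE."""
    valid = (gt > min_d) & (gt < max_d)
    p, g = pred[valid], gt[valid]
    ratio = np.maximum(p / g, g / p)
    return {
        "delta1": np.mean(ratio < 1.25),
        "delta2": np.mean(ratio < 1.25 ** 2),
        "delta3": np.mean(ratio < 1.25 ** 3),
        "REL": np.mean(np.abs(p - g) / g),
        "RMSE": np.sqrt(np.mean((p - g) ** 2)),
    }
```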
Implementation Details SM4Depth employs the Swin Transformer Base as the backbone, and runs on a single NVIDIA RTX 3090 GPU. The network is trained by the Adam optimizer with parameters (\u03b21, \u03b22) = (0.9, 0.999). The training runs for 20 epochs with a batch size of 10. The initial learning rate is set to 2\u00d710\u22125 and gradually reduced to 2\u00d710\u22126. Note that, an over-large fixed FOV would cause too large invalid area in the small FOV dataset, making the network underfitting. We empirically set the fixed FOV to (\u03c9 \u2032 x, \u03c9 \u2032 y) = (58\u25e6, 45\u25e6) and the fixed resolution to (w, h)=(564, 424). 5.4. Quantitative Result We employ two classical MMDE methods, i.e., BTS [24] and AdaBins [3], as well as two more advanced MMDE approaches, i.e., NeWCRFs [57] and MIM [59], for comparison. Moreover, the universal MMDE methods, i.e., Metric3D [56] and ZoeDepth-N/K/NK [5], are also employed for comparison (N indicates NYUD fine-tuning and K for KITTI fine-tuning; they are also applicable to SM4Depth). \fInput GT ZoeDepth MIM Ours Metric3D Figure 7. Qualitative comparison with MDE methods on zero-shot datasets. The depth distribution is under the depth maps with green for ground truth and red for prediction. Our method performs better on images with multiple viewpoints and diverse scenes. Method Backbone NYUD RANK (mRI\u03b8) \u03b41 \u2191 \u03b42 \u2191 \u03b43 \u2191 REL\u2193 RMSE\u2193 log10\u2193 ZoeD-N [5] Beit-Large 0.956 0.995 0.999 0.075 0.279 0.032 1.00 ZoeD-NK [5] Beit-Large 0.954 0.996 0.999 0.076 0.286 0.033 1.15 SM4Depth-N Swin-Base 0.932 0.991 0.998 0.088 0.328 0.038 2.22 Metric3D [56] ConvNeXt-Large 0.926 0.984 0.995 0.091 0.340 0.038 2.43 SM4Depth Swin-Base 0.860 0.981 0.997 0.126 0.417 0.052 5.00 Method Backbone KITTI RANK (mRI\u03b8) \u03b41 \u2191 \u03b42 \u2191 \u03b43 \u2191 REL\u2193 RMSE\u2193 log10\u2193 ZoeD-K [5] Beit-Large 0.978 0.998 0.999 0.049 2.221 0.021 1.00 ZoeD-NK [5] Beit-Large 0.971 0.994 0.996 0.053 2.415 0.024 1.62 SM4Depth-K Swin-Base 0.971 0.996 0.999 0.054 2.477 0.023 1.61 Metric3D [56] ConvNeXt-Large 0.962 0.993 0.998 0.060 2.969 0.026 2.57 SM4Depth Swin-Base 0.928 0.985 0.996 0.087 3.272 0.038 5.00 Table 2. Quantitative result on NYUD and KITTI. All methods undergo evaluation in a consistent region. The best results are in bold and the second-best ones are underlined. In Table 1, the upper part shows the zero-shot performance on four indoor datasets, and the lower part shows that on four outdoor datasets. Intuitively, our method outperforms most MMDE methods. Compared to Metric3D, SM4Depth performs better on most datasets, (i.e., SUN RGB-D, ETH3D, DIODE, and DDAD) and similar on iBims-1, but is only trained 150K images, which proves the effectiveness of SM4Depth. Especially, SM4Depth outperforms Metric3D by +58.08% and +8.10% mRI\u03b8 on ETH3D Outdoor and DDAD. In addition, SM4Depth outperforms Metric3D on nuScenes-val by -1.285 of RMSE, but falls behind on \u03b41 and REL, as Metric3D is trained on much more self-driving datasets, which endows it an advantage in such scenes. Compared with ZoeDepth, SM4Depth leads on all datasets regardless of fine-tuning or not, not only in absolute metrics (\u03b41, RMSE) but also in relative metrics (REL), illustrating that SM4Depth learns more accurate relative depth from metric depth data. Note that, Metric3D shows a noticeable accuracy drop on SUN RGB-D, where all images are cropped beforehand. This degradation is due to the lack of constraint on the ratio f h in Eq. (2). 
Specifically, the cropping operation alters the image size (h, w), thereby invalidating this equation and resulting in a potential metric ambiguity in depth estimation. Furthermore, most methods struggle with accurate depth estimation on DIODE due to its high proportion of upward-perspective images that contain numerous invalid pixels. Table 2 displays the results on NYUD and KITTI. In the comparison of the zero-shot setting, our method obtains lower \u03b41 and higher RMSE than Metric3D on NYUD and KITTI. However, after being fine-tuned on NYUD and KITTI, SM4Depth achieves competitive accuracy with the state-of-the-art methods, while avoiding a significant degradation in accuracy on zero-shot datasets (see in Table 1). 5.5. Qualitative Result Fig. 7 visualizes several methods\u2019 predictions and depth distributions. The 1st \u22123rd columns show close-up scenes with unusual perspectives, challenging depth range determination. Previous methods obtain the incorrect depth distributions, while Metric3D tends to push the background farther when the foreground boundary is clearly delineated. The 4th\u22127th columns show indoor scenes where other methods suffer from incorrect depth range, while SM4Depth recovers the depth distribution accurately, as our decoder optimizes the metric scale at multiple resolutions. The 8th\u221210th columns show multiple outdoor scenes. The predictions of \fSolutions SUN RGB-D ETH3D DIODE DDAD mRI\u03b7 \u2191 CSTM label [56] 1.051 2.648 5.950 28.094 0.00% FOV Alignment 0.301 2.373 5.605 5.390 46.23% Table 3. RMSE results using metirc ambiguity solutions on SM4Depth. The best results are in bold. RMSE Figure 8. Parameter experiment about K. The performance, as measured by \u03b41 and RMSE, is optimal when K equals 4. The dots represent the use of the uniform partition strategy for training. MIM and ZoeDepth exhibit overall shifts, while Metric3D fails to distinctly differentiate between the front objects and the background wall. In contrast, due to training on multiple metric depth datasets, SM4Depth generates a visually reasonable depth distribution, while it does not assign an extreme depth value to sky regions because they are set to 0 during training. The last two columns show images from self-driving scenes. Although all methods generate good depth maps, SM4Depth obtains a more accurate depth distribution and captures richer shape details than other methods. Especially in the 12th column, where objects are up to 80m away, our method correctly predicts their farthest depths as well as generating fine tree trunk edges. 5.6. Detail Analysis Comparing metric ambiguity elimination methods. Metric3D [56] proposed two methods to solve metric ambiguity, i.e., CSTM label and CSTM image. However, only the code of CSTM label is released. Thus, we employ CSTM label to compare with our FOV alignment. As shown in Table 3, the proposed solution outperforms CSTM label by +46.23% on the comprehensive metric mRI\u03b7, especially on SUN RGB-D and DDAD. This performance gap arises from the cropping operation. Note that neither model was trained on DDAD, but the model using the CSTM label appears to be incompatible with DDAD. This experiment proves the superiority of the FOV alignment in metric ambiguity elimination. Number of depth range domain. We explore the optimal number of RD, i.e., K, and additionally evaluate the uniform partition strategy [17] when using the best K. Fig. 8 shows all variants\u2019 performance on the mixing test sets. 
As K increases, RMSE decreases slowly. RMSE suddenly drops below 3.4 when K = 4, and rises again at 5 or 6. We argue that this phenomenon occurs because RDs better describe images with different appearance when K = 4 and prevents excessive similarity between RDs due to redundant division. In addition, using the uniform partition V-Bin1 WF-Bin2.1 DBE2.2 HSC3 iBims-1 ETH3D DIODE DDAD mRI\u03b7 \u2191 \u221a \u221a \u221a \u221a 0.673 2.373 5.605 5.390 10.92% \u00d7 \u221a \u221a \u221a 0.692 2.504 6.033 5.726 6.03% \u00d7 \u00d7 \u221a \u221a 0.701 2.692 6.111 5.486 4.53% \u00d7 \u00d7 \u00d7 \u221a 0.741 2.566 6.163 5.587 3.67% \u00d7 \u00d7 \u00d7 \u00d7 0.695 2.695 6.107 6.767 0.00% 1: Depth-Variation based Bin 2.2: Domain-aware Bin Estimation 2.1: Weighted Fusion of Bins 3: Decoder with Hierarchical Scale Constraints Table 4. RMSE results of the ablation study. The best results are in bold, while the second-best ones are underlined. Design Choices iBims-1 ETH3D DIODE DDAD mRI\u03b7 \u2191 1 * Query + K * FFNs 0.770 2.522 5.982 6.601 0.00% K * Queries + K * FFNs 0.734 2.401 5.820 6.920 1.84% K * Queries + 1 * FFN 0.673 2.373 5.605 5.390 10.79% Table 5. RMSE results of the domain-ware bin estimation. The best results are in bold, while the second-best ones are underlined. strategy leads to a notable decrease in \u03b41 and RMSE. Ablation study. We conduct the ablation study by gradually removing our designs and comparing all variants. In Table 4, the baseline (last row) consists of only an encoderdecoder structure and a PST [40]. Observably, the RMSEs increase overall as the proposed modules and innovations are gradually removed. The depth-variation based bins make the greatest contribution (+4.89% mRI\u03b7), indicating its effectiveness in learning large depth range gaps. The entire domain aware bin estimation increases mRI\u03b7 by 2.36%, with the weighted fusion scheme contributing 1.5% of this. In addition, the HSC-decoder improves mRI\u03b7 by 3.67%. Comparing designs for domain-aware bin estimation. As shown in Table 5, we compare three design choices of our domain-aware bin estimation mentioned in Sec.3.3 on the same four datasets in the ablation study. Compared to the other settings, \u201cK*Query+1*FFN\u201d achieves the lowest RMSE and highest mRI\u03b7. The reason is that the single FFN is trained on multiple RDs and thus learns common knowledge for bin estimation from multiple RDs. 6." + }, + { + "url": "http://arxiv.org/abs/2310.10513v2", + "title": "Unifying Image Processing as Visual Prompting Question Answering", + "abstract": "Image processing is a fundamental task in computer vision, which aims at\nenhancing image quality and extracting essential features for subsequent vision\napplications. Traditionally, task-specific models are developed for individual\ntasks and designing such models requires distinct expertise. Building upon the\nsuccess of large language models (LLMs) in natural language processing (NLP),\nthere is a similar trend in computer vision, which focuses on developing\nlarge-scale models through pretraining and in-context learning. This paradigm\nshift reduces the reliance on task-specific models, yielding a powerful unified\nmodel to deal with various tasks. However, these advances have predominantly\nconcentrated on high-level vision tasks, with less attention paid to low-level\nvision tasks. 
To address this issue, we propose a universal model for general\nimage processing that covers image restoration, image enhancement, image\nfeature extraction tasks, etc. Our proposed framework, named PromptGIP, unifies\nthese diverse image processing tasks within a universal framework. Inspired by\nNLP question answering (QA) techniques, we employ a visual prompting question\nanswering paradigm. Specifically, we treat the input-output image pair as a\nstructured question-answer sentence, thereby reprogramming the image processing\ntask as a prompting QA problem. PromptGIP can undertake diverse cross-domain\ntasks using provided visual prompts, eliminating the need for task-specific\nfinetuning. Our methodology offers a universal and adaptive solution to general\nimage processing. While PromptGIP has demonstrated a certain degree of\nout-of-domain task generalization capability, further research is expected to\nfully explore its more powerful emergent generalization.", + "authors": "Yihao Liu, Xiangyu Chen, Xianzheng Ma, Xintao Wang, Jiantao Zhou, Yu Qiao, Chao Dong", + "published": "2023-10-16", + "updated": "2024-02-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "eess.IV" + ], + "main_content": "Introduction Image processing encompasses a set of fundamental tasks that are aimed at direct manipulation and enhancement of image pixel-level information. These tasks are primarily focused on improving image quality and extracting basic image features, including but not limited to image restoration, image enhancement, image filtering, and image feature extraction. They provide a solid foundation for subsequent analysis, recognition, and comprehension of visual content within images. To address diverse image processing requirements, practitioners have traditionally resorted to developing specialized task-specific models. Consequently, achieving a particular objective often demands the utilization of different independent or combined models. 1 arXiv:2310.10513v2 [cs.CV] 21 Feb 2024 \fUnifying Image Processing as Visual Prompting Question Answering I love hamburgers J'adore les hamburgers I like sports J'aime le sport MAE-VQGAN Painter PromptGIP Q A Q A Task Prompt Query Input I J'adore love les Hamburgers hamburgers I J'aime like le sports sport I love hamburgers I like sports J'adore les hamburgers J'aime le sport I love hamburgers I like sports J'adore les hamburgers J'aime le sport I love hamburgers J'adore les hamburgers I like sports J'aime le sport I love hamburgers J'adore les hamburgers I like sports J'aime le sport An image is equal to one sentence Image processing as prompting QA English-French Translation Figure 2. Analogous to NLP tasks, various image processing tasks can be unified into a general visual prompting QA paradigm: given a pair of image prompt, the model can process the query image based on the prompts. MAE-VQGAN fragments image tokens and arrange them in an interleaved fashion. It disrupts the continuity and contextual understanding of the image content. Painter adopts a Q-Q-A-A organizational structure, which is not aligned with the QA paradigm. This misalignment can lead to inefficiencies in learning. In recent years, a significant trend has emerged towards the development of general large-scale models. This paradigm shift involves extensive pretraining on massive datasets and interactive in-context learning techniques, leading to the creation of a unified, powerful model capable of handling multiple tasks. 
For example, large language models (LLMs), especially the GPT series models (Radford et al., 2019; Brown et al., 2020), have successfully unified most tasks in the natural language processing (NLP) field and achieved exceptional performance. Similar exploration has also been observed in the field of computer vision. Meta AI Research introduced a Segment Anything Model (SAM) (Kirillov et al., 2023) for image segmentation. Through large-scale pretraining, SAM achieves remarkable zero-shot generalization performance in various scenarios. In other computer vision fields, a quantity of large foundation models have also been proposed, such as Inpainting Anything Model (IAM) (Yu et al., 2023), Track Anything Model (TAM) (Yang et al., 2023), InternImage (Wang et al., 2023a), and InternVideo (Wang et al., 2022a). These advancements carry profound implications for the realization of artificial general intelligence (AGI). However, current focus of large models primarily lies in the domain of high-level vision. Low-level vision has received relatively little attention. While some newly-proposed methods, e.g., MAE-VQGAN (Bar et al., 2022) and Painter (Wang et al., 2023b), have involved a few classic low-level vision tasks, their main focus remains on high-level vision tasks. Furthermore, these methods encounter challenges in dataset selection, model design, and training paradigms, making them unable to directly adapt to the low-level vision. In this paper, we present a universal model for general image processing by thoroughly examining the characteristics of low-level vision tasks and analyzing the limitations of existing in-context learning models in computer vision. Unlike prior literature that predominantly focused on image restoration tasks, our proposed model expands its scope to encompass image restoration, image enhancement, and image feature extraction. These tasks all belong to the domain of image processing, but their objectives and output domains are distinct. Specifically, image restoration aims to recover the original clean and natural image from a degraded image, such as denoising and deblurring. Image enhancement focuses on improving the visual quality of the image by enhancing contrast, brightness, color tones, and textures. Image feature extraction, like edge detection, focuses on extracting the basic features from the image. Due to the different output representations, conventional image restoration models cannot accomplish these diverse crossdomain tasks by simply expanding the training data within a streamlined framework. To mitigate the ambiguity across different output domains, substantial task-specific retraining is needed. To address the diverse challenges of general image processing tasks, we adopt a visual prompting question answering paradigm, which utilizes paired visual prompts to precisely indicate the tasks to be accomplished. Our universal model, namely PromptGIP, can effectively handle up to 15 various image processing tasks, providing a more versatile solution for low-level vision. The experiments also indicate that in-context learning enables the model to exhibit preliminary generalization for out-of-domain tasks. 2. Related Work Image Restoration and Beyond. Over the past decade, single-purpose image restoration methods, dedicated to recover the original clean and natural image from degraded observation, have garnered substantial research attention. 
2 \fUnifying Image Processing as Visual Prompting Question Answering Numerous representative approaches have found applications across various domains, including denoising (Zhang et al., 2017), deblurring (Kupyn et al., 2018), and deraining (Zamir et al., 2021), among others. However, the inherent limitation of these techniques lies in their reliance on specialized datasets and the tailored network architectures. Consequently, their generalization ability remains unsatisfactory, falling notably short of generality. Moreover, image enhancement algorithms, like low-light enhancement (Wei et al., 2018), present significant demands and applications. Paradoxically, most researchers tend to concentrate solely on a specific augmentation methodology, such as simply enlarging the training data, to seek for more robust generalization. Differently, we advocate for a paradigm shift, expanding the purview beyond image restoration to embrace image enhancement and other image processing tasks. Besides, we propose a unified framework capable of collectively tackling all these tasks. This pioneering approach markedly enhances the universality of low-level vision foundation models, bridging the gap between disparate domains. Visual In-Context Learning. In NLP, the GPT (Generative Pretrained Transformer) series models, such as GPT-2 and GPT-3 (Radford et al., 2019; Brown et al., 2020), have achieved significant success in unifying various NLP tasks. By providing a prompt or designing an in-context example, which is usually a task-specific instruction or question, GPT can be transformed into a task-specific question-answering model without the need for extensive retraining or finetuning. In vision, a few works \u2013 MAE-VQGAN (Bar et al., 2022) and Painter (Wang et al., 2023b), have begun harnessing the flexibility afforded by in-context learning to unify diverse vision tasks. By constructing grid-like prompts, they exhibit commendable performance on high-level tasks like semantic segmentation. However, their efficacy has been less pronounced in low-level domains, failing to exploit the full potential of in-context learning. We claim that this discrepancy may be attributed to the distinct nature of low-level vision tasks, which involve pixel-wise image manipulation, in contrast to the high-level tasks that demand comprehension across varying levels of abstraction. Multi-task Learning for Image Processing. Multi-Task Learning (MTL) aims to train a single model to concurrently handle multiple image processing tasks. Traditionally, MTL approaches have predominantly focused on image restoration, and they can be broadly categorized into two streams. BSRGAN (Zhang et al., 2021) and RealESRGAN (Wang et al., 2021b) adopt a data-centric approach. They propose to employ models with significant parameter complexity and utilize complicated degradation models to generate ample training data. DASR (Wang et al., 2021a) and AirNet (Li et al., 2022), on the other hand, adopt a model-centric approach. They design specialized modules to implicitly capture diverse degradations and exploit them as conditions for achieving MTL. Beyond these approaches, ProRes (Ma et al., 2023) and PromptIR (Potlapalli et al., 2023) leverage prompts as a form of guidance or condition, enabling MTL for three (denoising, rain removal, and fog removal) tasks, or five (denoising, deraining, deblurring, low-light enhancement, and defogging) tasks. 
Despite these contributions, existing methodologies remain limited in their ability to tackle a modest number of MTL tasks, typically up to five. In contrast, our proposed approach breaks this ceiling by achieving MTL across more than ten distinct tasks (denoising, deblurring, deJPEG, dering, deraining, defogging, deraining, inpainting, low-light enhancement, local Laplacian filtering, and edge detection). 3. Method 3.1. Image Processing as Visual Question Answering Compared to high-level vision tasks, low-level vision tasks necessitate meticulous pixel-level adjustments, demanding architectures that excel in processing intricate details. These tasks encounter diverse input/output domains, characterized by various degradations and complex operations. These challenges underscore the complexity and non-trivial nature of general image processing. Inspired by the success of prompting in NLP (Liu et al., 2023), we propose to unify the general image processing problem as the visual prompting question answering (QA) paradigm, as illustrated in Fig. 2. In QA, the objective is to process a given context, such as a paragraph or document, and accurately generate the correct answer in response to specific questions related to that context. Building upon this concept, we adapt the QA paradigm to image processing. In our design, we view an image as a \u201cquestion\u201d (Q) or an \u201canswer\u201d (A). When inference, the model G is initially provided with input-output image pairs (PQ and PA), which serve as essential task prompts, much like the given context in QA tasks. These image pairs play a pivotal role in guiding the model\u2019s image processing operations. To process a new targeted input image XQ, we encode it as the query \u201cquestion\u201d to be answered. The provided input-output image pairs then serve as contextual prompts, enabling the model to gain insightful cues to generate the desired output. With this knowledge, the model executes the appropriate image processing operations to produce the predicted \u201canswer\u201d YA: YA = G(XQ|{PQ, PA}). (1) An illustrative example is shown in Fig. 3. The content of the prompts for the model is represented in the form of \u201cquestion\u201d-\u201canswer\u201d image pairs. For instance, when the input prompt is a \u201crainy\u201d-\u201crain-free\u201d image pair, the model will perform rain removal on the target input image. If the answer in the prompt is related to image edges, the model 3 \fUnifying Image Processing as Visual Prompting Question Answering Patch Embedding Random Masking Transformer Block Loss Q A Q A Training Phase Patch Embedding Adding Mask Transformer Block Q A Q Inference Phase Predicted patch Masked patch ? Figure 3. We structure the input and output images as a \u201cQ-A-Q-A\u201d sequence. During training, the answer images (A) are randonly masked and predicted. For inference, PromptGIP can execute proper processing to the question image according to the prompt pairs. will conduct edge detection operations on the query image, producing the corresponding edge image as the output. Notably, PromptGIP is capable of handling tasks with distinct output domains, which was not achievable with previous image restoration methods. The output domain of image restoration is the natural image space; image enhancement involves transformations in image brightness, color tones, or styles; while image edge detection outputs edge features, not the RGB image space. Our approach unifies these different tasks within a unified framework. 
3.2. Masked Visual Prompting Paradigm Masked image modeling has emerged as a promising selfsupervised technique for learning valuable visual representations. Following (He et al., 2022), we implement a similar masked autoencoding approach in our training process. As depicted in Fig. 3, we initially structure the input and output images within a \u201cQ-A-Q-A\u201d sequence. Then, we introduce random masking to certain portions of the answer images, prompting the model to reconstruct these masked patches from the unmasked counterparts. This procedure employs a mask ratio of 85%. It is pivotal to note that our organizational framework distinguishes itself from prior works (Bar et al., 2022; Wang et al., 2023b) in its more rational and effective design. More analyses are described in Sec. 3.3. During the training phase, our approach leverages a diverse dataset comprising input-output image pairs, where each pair corresponds to a distinct image processing goal, including restoration, enhancement, and edge detection. Notably, each primary task encompasses various sub-tasks that further enrich the model\u2019s understanding. Throughout this process, the model is trained to grasp the intrinsic correlations between the Q-A image pairs. During the inference stage, we assemble an input-output pair as a task prompt, guiding the model to execute tailored operations. By providing an input question image alongside a fully masked image, the model generates the intended answer image in correspondence with the question image. 3.3. Further Discussion Comparison with image restoration models. Earlier research primarily focused on crafting specialized models tailored to specific tasks, such as SRCNN (Dong et al., 2015) for super-resolution, DnCNN (Zhang et al., 2017) for denoising, and Deblur-GAN (Kupyn et al., 2018) for deblurring. While effective within constrained scenarios, these task-specific models possess limited generalization capability. Recent attention has pivoted toward all-in-one restoration methods (Li et al., 2022; Wang et al., 2021b). These approaches leverage multi-task learning techniques to construct models that are able to handle diverse restoration tasks, thereby circumventing the need for task-specific finetuning. Nonetheless, these models are often limited within predefined application domains. They fall short in producing alternative representations like stylistic images or image edges. Several concurrent works (Ma et al., 2023; Potlapalli et al., 2023) have embraced the concept of prompt learning, but still concentrate on image restoration tasks. They propose to incorporate learnable prompts as degradation embeddings to guide the restoration process. However, it is worth noting that general image processing encompasses more than just restoration tasks. In this context, PromptGIP demonstrates a remarkable adaptability across a wide spectrum of low-level vision tasks, liberating it from the constraints of a singular output domain. 4 \fUnifying Image Processing as Visual Prompting Question Answering Painter Input prompts Input prompts Query Query Output Output MAE-VQGAN Input prompts: Denoising Input prompts: Depth estimation Query Query Output\uff1aSegmentation Output\uff1aSegmentation Figure 4. The drawbacks of existing methods. MAE-VQGAN fails to produce high-quality images. The prompts of Painter do not actually work well. Comparison with existing visual prompting models. 
Two novel visual prompting techniques, MAE-VQGAN (Bar et al., 2022) and Painter (Wang et al., 2023b), have emerged for addressing various tasks. MAE-VQGAN employs a masked autoencoder for pretraining. Unlike predicting masked pixels, it predicts visual tokens from a pretrained VQGAN codebook. The training process involves the ImageNet dataset and the collected CVF dataset, comprising a diverse array of figures from computer vision papers. Painter combines pairs of images to predict the output domain through the masked image modeling. It encompasses high-level tasks and a few low-level tasks. Differences with MAE-VQGAN. MAE-VQGAN diverges from the question-answering paradigm. As illustrated in Fig. 2, MAE-VQGAN utilizes crude images extracted from the ImageNet/CVF dataset during MAE training, and stitches paired images as a whole image for inference. This straightforward and coarse data organization scheme deviates from the QA framework. Specifically, during the training phase, the model lacks the capability to differentiate whether a given visual token corresponds to a \u201cQuestion\u201d or an \u201cAnswer\u201d, leading to an interleaved and ambiguous input-output encoding. In contrast, our approach is firmly grounded in an explicit QA paradigm, enabling precise pixellevel prediction. In addition, MAE-VQGAN choose to predict VQGAN tokens rather than pixels, which results in subpar fidelity of reconstructed content, as exemplified in Fig. 4. On the contrary, our framework excels in pixel-wise prediction with visually compelling outcomes. Differences with Painter. Painter predominantly targets highlevel vision tasks, encompassing segmentation, depth estimation, and keypoint detection, while it also addresses limited low-level task. Painter only draws training data from just seven specific datasets. This potentially induces a propensity for excessive alignment with these datasets, which can easily result in a concomitant risk of overfitting. Furthermore, the implementation of Painter employs a \u201cQQ-A-A\u201d sequence for encoding prompt and query images (see Fig. 2). Such a design yields unexpected behaviors in Painter\u2019s response to task prompts. Extensive tests on Painter revealed disparities in its prompt mechanism compared to anticipated outcomes. As in Fig. 4, when provided prompts linked to denoising and depth estimation, it unexpectedly executes segmentation tasks. This phenomenon hints at the model\u2019s inclination to memorize specific datasets rather than effectively leveraging the provided prompts. We conjecture that this problem might stem from the limited range of training tasks. 4. Experiments and Analysis 4.1. Image Processing Task Settings To show the versatility of our proposed method, we incorporate up to 15 tasks including diverse image restoration, image enhancement, and image edge detection tasks into our experiments. These tasks have their distinct output domains. Image restoration. We consider 10 degradation types: Gaussian noise, Gaussian blur, Poisson noise, salt & pepper noise, jpeg compression, ringing artifacts, R-L algorithm (Richardson, 1972), inpainting, haze, and rain. For the first eight types, we directly introduce corresponding distortions to the ImageNet (Deng et al., 2009) dataset to create degraded-clean pairs. 
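As a rough illustration of how such degraded-clean pairs can be synthesized on the fly, the sketch below applies three of the listed distortions to a clean image; the parameter ranges are hypothetical, since the paper does not specify the exact settings.

import numpy as np

def degrade(img, kind, rng=None):
    # img: clean image as a float array in [0, 1], shape (H, W, 3).
    # kind: one of the degradation types listed above (only three are sketched here).
    rng = rng or np.random.default_rng(0)
    if kind == "gaussian_noise":
        out = img + rng.normal(0.0, rng.uniform(0.02, 0.2), img.shape)
    elif kind == "poisson_noise":
        scale = rng.uniform(30, 300)                  # hypothetical photon-count level
        out = rng.poisson(img * scale) / scale
    elif kind == "salt_pepper":
        p = rng.uniform(0.01, 0.10)
        out = img.copy()
        flips = rng.random(img.shape[:2])
        out[flips < p / 2] = 0.0                      # pepper
        out[flips > 1.0 - p / 2] = 1.0                # salt
    else:
        raise ValueError(f"degradation not sketched here: {kind}")
    return np.clip(out, 0.0, 1.0)

# A degraded-clean training pair from a clean ImageNet crop `clean` would then be
# (degrade(clean, "gaussian_noise"), clean).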
We collect a composed dataset (Common528) for testing, which consists of commonlyused datasets: Set5 (Bevilacqua et al., 2012), Set14 (Zeyde et al., 2010), BSDS100 (Martin et al., 2001), Manga109 (Matsui et al., 2017), Urban100 (Huang et al., 2015), General100 (Dong et al., 2016), and DIV2K-Valid (Agustsson & Timofte, 2017). For dehazing, we utilize the ITS training set of RESIDE dataset (Li et al., 2018). For rain removal, we employ two types of rain addition models: Simple Rain Model and Complex Rain Model. The former is a straightforward additive rain model, directly synthesized on the ImageNet dataset; while the latter utilizes Rain13K (Zamir et al., 2021), including an assortment of diverse rain models. Image enhancement. We employ two enhancement tasks: low-light image enhancement (LLE) and local Laplacian filtering (LLF). For LLE, the LOL dataset (Wei et al., 2018) is adopted for training. For LLF, we apply local Laplacian filter (Aubry et al., 2014) on the expert-C retouched images of Adobe-MIT Fivek dataset (Bychkovsky et al., 2011), forming the requisite input-output pairs. LLF is a multiscale operator for edge-preserving detail enhancement. 5 \fUnifying Image Processing as Visual Prompting Question Answering Question-Answer Prompts Input Question Predicted Answer GT Gaussian Blur Gaussian Noise S&P Noise Jpeg Compression Ringing Artifacts Inpainting Haze Rain Figure 5. Visual results of PromptGIP on all-in-one multi-task restoration. 6 \fUnifying Image Processing as Visual Prompting Question Answering Table 1. Quantitative results (PSNR/SSIM) on image restoration tasks. \u22c6: trained with only restoration tasks. \u2660: trained with all image processing tasks. \u2020: public released model. Gaussian Noise Poisson Noise S&P Noise Gaussian Blur JPEG Ringing R-L Inpainting Simple Rain Complex Rain Haze Real-ESRGAN\u2020 25.38/0.7997 26.57/0.8472 21.50/0.5884 21.49/0.6263 25.21/0.8058 24.64/0.7834 21.71/0.6548 14.06/0.7084 16.10/0.5989 21.01/0.6705 11.86/0.6346 Restormer\u22c6 28.66/0.8731 31.31/0.9317 36.12/0.9851 24.24/0.7537 26.65/0.8391 27.14/0.8561 30.53/0.9306 27.77/0.9289 29.68/0.9476 24.26/0.8369 14.83/0.7382 ViT-large\u22c6 24.67/0.7804 25.39/0.8152 23.71/0.7335 22.17/0.6413 24.76/0.7920 23.89/0.7463 24.09/0.7335 23.11/0.7662 23.21/0.7620 23.04/0.7788 24.91/0.8565 Restormer\u2660 25.27/0.7634 27.22/0.8535 27.84/0.8811 21.71/0.6078 23.90/0.7606 23.61/0.7261 23.18/0.7120 24.19/0.8615 22.68/0.7879 20.39/0.6930 7.22/0.1395 Painter\u2660 24.17/0.7468 24.63/0.7792 24.75/0.7903 22.36/0.6477 23.97/0.7458 24.21/0.7531 24.56/0.7728 22.95/0.7455 23.35/0.7493 22.81/0.7710 20.60/0.8250 PromptGIP\u2660 26.22/0.8167 27.29/0.8590 27.49/0.8804 22.77/0.6911 25.38/0.7978 25.45/0.8079 26.79/0.8506 25.02/0.8401 25.46/0.8399 24.08/0.8322 24.32/0.9020 Table 2. Quantitative results on image enhancement and image edge detection. \u22c6: single models trained with individual tasks. \u2660: trained with all image processing tasks. LLE (LOL dataset) LLF Canny Laplacian PSNR\u2191 SSIM\u2191 PSNR\u2191 SSIM\u2191 MAE\u2193 MAE\u2193 ViT-large\u22c6 13.37 0.4892 25.42 0.8948 36.5290 1.4655 Painter\u2660 19.47 0.7491 23.87 0.8451 33.7188 5.4518 PromptGIP\u2660 20.30 0.8026 26.11 0.9107 21.4376 3.7852 Image edge detection. Two acknowledged image edge detection operators, the Canny and Laplacian operators, are investigated. The ImageNet dataset forms the basis for creating input-output training pairs. All these 15 diverse tasks are amalgamated within a unified setting. 
PromptGIP excels in accommodating these tasks under a cohesive framework with one single training phase. 4.2. Implementation Details A vanilla vision Transformer (ViT-large) (Dosovitskiy et al., 2020) is adopted as the backbone architecture. During training, the model processes sequences of four 256 \u00d7 256 images in a \u201cQ-A-Q-A\u201d pattern, resulting in a 4 \u00d7 256 \u00d7 256 total input resolution. L1 loss is utilized as the loss function. For optimization, AdamW (Loshchilov & Hutter, 2017) optimizer with a cosine learning rate scheduler is employed. The base learning rate is 1e\u22124. The batch size is 48. We use 8 Tesla V100 GPUs for training. A total of 50 epochs are executed. For testing Painter and PromptGIP, we construct 20 image prompts for each task and report the best results. 4.3. Experiments Currently, there is no existing unified network that can comprehensively address all the aforementioned tasks in an all-in-one manner. For instance, previous image restoration models are incapable of handling image edge detection task. For reference, we train a ViT-large model and a Restormer model (Zamir et al., 2022) using the same training policy on multiple restoration tasks. We retrain the Painter (Wang et al., 2023b) model with all tasks as PromptGIP. We also report the results of Real-ESRGAN (Wang et al., 2021b), which is proposed to handle various complex restoration. Due to differences in the performance of various architectures, absolute numerical comparisons would be unfair. We have opted for the simplest ViT structure, thus it is more fair to focus on a direct comparison with the ViT and Painter. Moreover, achieving state-of-the-art performance on every Question-Answer Prompts Input Question Predicted Answer GT Low-light LLF Canny Operator Laplacian Operator Figure 6. Visual results of PromptGIP on image enhancement and edge detection tasks. task is not the purpose of this paper. Our primary focus revolves around examining the effects and capability of prompt learning in the context of general image processing. We can focus more on functional outcomes rather than numerical results. The metrics are evaluated on RGB channels. Results. Illustrated in Fig. 5 and 6, PromptGIP proficiently addresses a range of image processing tasks using different input prompts. These tasks encompass multipledegradation restoration, enhancement, and edge detection. These tasks entail distinct output representations, a level of complexity that lies beyond the capability of existing image restoration methods. PromptGIP yields impressive visual results in diverse tasks. In the training process, we introduced mixed degradation scenarios to further challenge the model\u2019s restoration capability, with results presented in Fig. 7. Quantitative results for restoration are detailed in Tab. 1, where PromptGIP demonstrates appealing performance across 10 restoration tasks using a vanilla ViT backbone. Compared to the original ViT model and Painter, prompt learning demonstrates a significant enhancement in model performance, resulting in improved restoration and multitasking capability. PromptGIP also surpasses the performance of Real-ESRGAN, a model specifically crafted for blind image restoration. PromptGIP achieves higher quantitative score than Restormer on complex derain and dehaze tasks. Restormer achieves superior quantitative scores on 7 \fUnifying Image Processing as Visual Prompting Question Answering Table 3. Effectiveness of the proposed QA paradigm and masked training strategy. 
Encoding Paradigm Mask Strategy Poisson Noise PSNR\u2191 Haze PSNR\u2191 LLF PSNR\u2191 Laplacian MAE\u2193 Painter Q1-Q2-A1-A2 A1&A2 24.63 20.60 23.87 5.4518 Direct predicting Q1-A1-Q2-A2 only A2 26.30 18.57 25.38 11.5553 PromptGIP Q1-A1-Q2-A2 A1&A2 27.29 24.32 26.11 3.7852 other degradations. This can be attributed to Restormer\u2019s advanced architecture tailored for restoration tasks. PromptGIP also succeeds in enhancing low-light images and emulating image operators. Notably, earlier methods struggle to simultaneously realize all these tasks within a single framework, due to variances in output domain representations. However, PromptGIP, when provided with proper prompts, effectively executes a wide spectrum of image processing tasks within a singular, streamlined network, as depicted in Fig. 6. For image edge detection, the Canny operator produces clear and well-defined edges, while Laplacian operator tends to produce thicker and noisier edges. Despite these intricacies, PromptGIP successfully discerns and faithfully simulates the distinct behaviors of both operators, underscoring its impressive adaptability. The numerical results are shown in Tab. 2. Query Input Output Query Input Output Query Input Output Query Input Output Figure 7. Results of mixed degraded images. Effectiveness of the QA paradigm and masked training. We further validate the efficacy of our newly proposed QA paradigm and the masked training strategy. Unlike the encoding order of Q-Q-A-A used in Painter, our PromptGIP employs a Q-A-Q-A approach. Q-Q-A-A could dilute the model\u2019s focus and impair its ability to directly map questions to their relevant answers. Our paradigm significantly improves performance, as in Tab. 1 and 2, which demonstrate superior outcomes in both image restoration and enhancement tasks. Additionally, we emphasize the necessity of our masked training strategy. During the training phase, PromptGIP randomly masks patches in the two \u201canswer\u201d images, in contrast to direct predicting where only the last \u201canswer\u201d image is masked. This methodology, as shown in Tab. 3, proves more effective across all tasks, particularly in image dehazing, where direct predicting struggles to yield satisfactory results. This outcome suggests that masked training not only enhances the model\u2019s capability in handling diverse tasks but also contributes to its generalization and stability. Prompts: Colorization Query Output GT Prompts : Style transfer Query Output GT Out-of-distribution Tasks Prompts: Mixed degradation Query Output GT Prompts: Mixed degradation Query Output GT Figure 8. Although PromptGIP cannot perfectly deal with every out-of-distribution tasks, it has demonstrated a certain level of generalization capability. Exploration on out-of-distribution tasks. To evaluate the model\u2019s capacity for generalization, we incorporate a set of diverse out-of-distribution tasks that are intentionally not encountered during the training phase, including mixed degradation restoration, colorization, and style transfer. The results are presented in Fig. 8. We employ L0 smooth filtering (Xu et al., 2011) to conduct style transfer experiment. As shown in Fig. 8, the model seems to understand the input prompt pair, yielding images with a discernible L0 smooth filter style. While it occasionally succeeds in producing visually appealing reconstructed images, it encounters difficulties in effectively restoring unfamiliar mix degraded images when compared to seen degraded data. 
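As an aside on the masked training strategy just ablated in Table 3, the following is a minimal sketch of masking both answer images of a Q1-A1-Q2-A2 sequence at the 85% ratio, so that the reconstruction loss is computed only on the masked answer patches. The token shapes and the zero mask token are illustrative assumptions rather than the exact PromptGIP implementation.

import torch

def mask_answer_patches(seq_tokens, mask_ratio=0.85, answer_slots=(1, 3)):
    # seq_tokens: patch embeddings of a Q1-A1-Q2-A2 sequence, shape (4, N, D).
    # Both answer images (slots 1 and 3) have `mask_ratio` of their patches
    # replaced; the returned boolean mask marks the patches to reconstruct.
    s, n, d = seq_tokens.shape
    masked = seq_tokens.clone()
    targets = torch.zeros(s, n, dtype=torch.bool)
    num_masked = int(mask_ratio * n)
    for slot in answer_slots:
        idx = torch.randperm(n)[:num_masked]
        masked[slot, idx] = 0.0                  # illustrative mask token
        targets[slot, idx] = True
    return masked, targets

# During training, the L1 loss is restricted to `targets`; at inference, the
# last answer image is fully masked and predicted from the prompt pair.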
Additionally, we provide a grayscale-colorful image pair as a prompt, with the expectation that the model would apply colorization to the grayscale input. However, the model regrettably does not exhibit colorization behavior in response to this prompt. In summary, these observations highlight the model\u2019s capacity to discern the intended task from the prompt and endeavor to fulfill it, showcasing a certain level of generalization. It is essential to emphasize that the model\u2019s present capability does not extend to generating great \u201cemergent\u201d outcomes. These conclusions are in accordance with prior studies (Min et al., 2022; Wei et al., 2023). 5." + }, + { + "url": "http://arxiv.org/abs/2306.15374v3", + "title": "LeCo: Lightweight Compression via Learning Serial Correlations", + "abstract": "Lightweight data compression is a key technique that allows column stores to\nexhibit superior performance for analytical queries. Despite a comprehensive\nstudy on dictionary-based encodings to approach Shannon's entropy, few prior\nworks have systematically exploited the serial correlation in a column for\ncompression. In this paper, we propose LeCo (i.e., Learned Compression), a\nframework that uses machine learning to remove the serial redundancy in a value\nsequence automatically to achieve an outstanding compression ratio and\ndecompression performance simultaneously. LeCo presents a general approach to\nthis end, making existing (ad-hoc) algorithms such as Frame-of-Reference (FOR),\nDelta Encoding, and Run-Length Encoding (RLE) special cases under our\nframework. Our microbenchmark with three synthetic and six real-world data sets\nshows that a prototype of LeCo achieves a Pareto improvement on both\ncompression ratio and random access speed over the existing solutions. When\nintegrating LeCo into widely-used applications, we observe up to 5.2x speed up\nin a data analytical query in the Arrow columnar execution engine and a 16%\nincrease in RocksDB's throughput.", + "authors": "Yihao Liu, Xinyu Zeng, Huanchen Zhang", + "published": "2023-06-27", + "updated": "2023-11-23", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.LG" + ], + "main_content": "INTRODUCTION Almost all major database vendors today have adopted a columnoriented design for processing analytical queries [31, 33, 44, 53, 61, 74, 76, 94]. One of the key benefits of storing values of the same attribute consecutively is that the system can apply a variety of lightweight compression algorithms to the columns to save space and disk/network bandwidth [28, 29, 104]. These algorithms, such as Run-Length Encoding (RLE) [29] and Dictionary Encoding, typically involve a single-pass decompression process (hence, lightweight) to minimize the CPU overhead. A few of them (e.g., Frame-of-Reference or FOR [59, 117]) allow random access to the individual values. This is a much-preferred feature because it allows the DBMS to avoid full-block decompression for highly selective Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. 
Request permissions from permissions@acm.org. SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile \u00a9 2023 Association for Computing Machinery. ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn queries, which are increasingly common, especially in hybrid transactional/analytical processing (HTAP) [17, 62, 69, 78, 90, 93] and real-time analytics [13, 76]. There are two categories of lightweight compression algorithms that exploit different sources of redundancy in a value sequence. The first are dictionary-based algorithms, including those that encode substring patterns (e.g., FSST [36], HOPE [114]). These algorithms leverage the uneven probability distribution of the values and have a compression ratio limited by Shannon\u2019s Entropy [98]. On the other hand, integer compression algorithms such as Run-Length Encoding (RLE) [29], FOR, and Delta Encoding [29, 80] exploit the serial correlation between the values in a sequence: the value of the current position may depend on its preceding values. However, RLE, FOR, and Delta Encoding are ad-hoc solutions modeling the simplest serial patterns. For example, Delta adopts a model of a basic step function, while RLE only works with consecutive repetitions (elaborated in Section 2). Consequently, we have missed many opportunities to leverage more sophisticated patterns such as the piecewise linearity shown in Figure 1 for better compression in a column store. Prior studies in time-series data storage [50, 51, 64, 72, 87, 108] have proposed to learn the series distribution and minimize the model sizes to achieve a lossy compression. These techniques, however, are not applicable to a general analytical system. To the best of our knowledge, none of the existing column stores apply machine learning to improve the efficiency of their lightweight lossless compression systematically. We, thus, propose a framework called LeCo (i.e., Learned Compression) to automatically learn serial patterns from a sequence and use the models for compression. Our key insight is that if we can fit such serial patterns with lightweight machine-learning models, we only need to store the prediction error for each value to achieve a lossless compression. Our framework addresses two subproblems. The first is that given a subsequence of values, how to best fit the data using one model? This is a classic regression problem. However, instead of minimizing the sum of the squared errors, we minimize the maximum error because we store the deltas (i.e., prediction errors) in a fixed-length array to support fast random access during query processing. LeCo also includes a Hyperparameter-Advisor to select the regressor type (e.g., linear vs. higher-order) that would produce the best compression ratios. The second subproblem is data partitioning: given the type(s) of the regression model, how to partition the sequence to minimize the overall compression ratio? Proactive partitioning is critical to achieving high-prediction accuracy in the regression tasks above because real-world data sets typically have uneven distributions [70, 115]. The partition schemes introduced by lossy time-series compression are not efficient to apply. They only target minimizing the total size of the model parameters rather than striking a balance between the model size and the delta array size. Our evaluation (Section 4.8) shows that the state-of-the-art partitioning 1 arXiv:2306.15374v3 [cs.DB] 23 Nov 2023 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. 
algorithms [40, 72] are still suboptimal for general lossless column compression. Figure 1: A Motivating Example – on the movieid data set (value vs. position, with two partitions highlighted). Figure 2: Performance-space trade-offs – compression ratio (%) vs. random access latency (ns) for FOR, Elias-Fano, Delta, LeCo, and LeCo-var. In the lossless case, however, having smaller partitions might be beneficial for reducing the local max errors, but it increases the overall model (and metadata) size. Because optimal partitioning is an NP-hard problem, we developed different heuristic-based algorithms for different regression models to obtain approximate solutions in a reasonable amount of time. Another design trade-off is between fixed-length and variable-length partitions. Variable-length partitions produce a higher compression ratio but are slower in random access. We implemented a prototype of LeCo to show the benefit of using machine learning to compress columnar data losslessly. For each partition, we store a pre-trained regression model along with an array of fixed-length deltas. Decompressing a value only involves a model inference plus a random access to the delta array. LeCo is highly extensible with built-in support for various model types and for both fixed-length and variable-length partition schemes. We compared LeCo against state-of-the-art lightweight compression algorithms including FOR, Elias-Fano, and Delta Encoding using a microbenchmark consisting of both synthetic and real-world data sets. As illustrated in Figure 2 (based on the weighted average result of twelve data sets in Section 4.3), LeCo achieves a Pareto improvement over these algorithms. Compared to FOR and Elias-Fano, LeCo improves the compression ratio by up to 91% while retaining comparable decompression and random access performance. Compared to Delta Encoding, LeCo is an order of magnitude faster in random access with a competitive or better compression ratio. We further integrated LeCo into two widely-used applications to study its benefit on end-to-end system performance. We first report LeCo's performance on a columnar execution engine, using Apache Arrow [4] and Parquet [6] as the building blocks. Enabling LeCo in this system speeds up a multi-column filter-groupby-aggregation query by up to 5.2x and accelerates a single-column bitmap aggregation query by up to 11.8x with a 60.5% reduction in memory footprint. We also use LeCo to compress the index blocks in RocksDB [14, 48] and observed a 16% improvement in RocksDB's throughput compared to its default configuration. The paper makes four primary contributions. First, we identify that exploiting the serial correlation between values has great potential for efficient column compression. Second, we make the case for applying machine learning to lightweight lossless column compression. Third, we propose the Learned Compression (LeCo) framework and implement a prototype that achieves a Pareto improvement on compression ratio and random access speed over existing algorithms. Finally, we integrate LeCo into a columnar execution engine and a key-value store and show that it helps improve the systems' performance and space efficiency simultaneously. 2 THE CASE FOR LEARNED COMPRESSION The performance of persistent storage devices has improved by orders of magnitude over the last decade [109]. Modern NVMe SSDs can achieve 7GB/s read throughput and over 500,000 IOPS [16].
The speed of processors, on the other hand, remains stagnant as Moore\u2019s Law fades [56]. Such a hardware trend is gradually shifting the bottleneck of a data processing system from storage to computation. Hence, pursuing a better compression ratio is no longer the dominating goal when developing a data compression algorithm. Many applications today prefer lightweight compression schemes because decompressing the data is often on the critical path of query execution. Meanwhile, an analytical workload today is often mixed with OLTP-like queries featuring small range scans or even point accesses [12, 89]. To handle such a wide range of selectivity, it is attractive for a data warehouse to adopt compression algorithms that can support fast random access to the original data without decompressing the entire block. Dictionary encoding is perhaps the most widely-used compression scheme in database management systems (DBMSs). Nonetheless, for a sequence where the values are mostly unique, dictionary encoding does not bring compression because it assumes independence between the values, and its compression ratio is bounded by Shannon\u2019s Entropy [98]. Shannon\u2019s Entropy, however, is not the lower bound for compressing an existing sequence2. In many real-world columns, values often exhibit strong serial correlations (e.g., sorted or clustered) where the value at a particular position is dependent on the values preceding it. Unfortunately, to the best of our knowledge, there is no general solution proposed that can systematically leverage such positional redundancy for compression. We argue that a learned approach is a natural fit. Extracting serial correlation is essentially a regression task. Once the regression model captures the \u201ccommon pattern\u201d of the sequence, we can use fewer bits to represent the remaining delta for each value. This Model + Delta framework (a.k.a., LeCo) is fundamental for exploiting serial patterns in a sequence to achieve lossless compression. For example, Boffa et al. attempted to use linear models for storing rank&select dictionaries specifically [35]. In fact, the widely-used FOR, RLE, and Delta Encoding (Delta) can be considered special cases under our framework as well. FOR divides an integer sequence into frames, and for each value \ud835\udc63\ud835\udc56in a frame, it is encoded as \ud835\udc63\ud835\udc56\u2212\ud835\udc63\ud835\udc5a\ud835\udc56\ud835\udc5bwhere \ud835\udc63\ud835\udc5a\ud835\udc56\ud835\udc5bis the minimum value of that frame. From a LeCo \u2019s point of view, the regression function for each frame in FOR is a horizontal line. Although such a naive model is fast to train and inference, it is usually suboptimal in terms of compression ratio. RLE can be considered a special case of FOR, where the values in a frame must be identical. Delta Encoding achieves compression by only storing the difference between neighboring values. Specifically, for an integer sequence \ud835\udc631, \ud835\udc632, ..., \ud835\udc63\ud835\udc5b, Delta encodes the values as \ud835\udc631, \ud835\udc632 \u2212\ud835\udc631, \ud835\udc633 \u2212\ud835\udc632, \ud835\udc63\ud835\udc5b\u2212\ud835\udc63\ud835\udc5b\u22121. 2The lower bound is known as the Kolmogorov Complexity. It is the length of the shortest program that can produce the original data [81]. Kolmogorov Complexity is incomputable. 
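To see FOR through this "Model + Delta" lens, the sketch below encodes a frame with a constant (horizontal-line) model equal to the frame minimum and records the bit width of the largest delta. It is a toy illustration only: the deltas stay in a Python list rather than a packed bit array.

def for_encode(frame):
    # FOR as a special case of "Model + Delta": the model is the constant
    # y = min(frame); every value is stored as its delta from that constant,
    # bit-packed to the width of the largest delta.
    base = min(frame)
    deltas = [v - base for v in frame]
    bit_width = max(d.bit_length() for d in deltas)
    return base, bit_width, deltas

def for_decode(base, deltas, i):
    # Random access = one "model inference" (adding the constant) + one delta lookup.
    return base + deltas[i]

base, width, deltas = for_encode([100, 103, 101, 108, 100])
assert for_decode(base, deltas, 3) == 108   # here every delta occupies width = 4 bits

RLE is the degenerate case of the same sketch in which every delta in the frame is zero.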
2 \fLeCo: Lightweight Compression via Learning Serial Correlations SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Similar to FOR, it uses the horizontal-line function as the model, but each partition/frame in Delta only contains one item. The advantage of Delta is that the models can be derived from recovering the previous values rather than stored explicitly. The downside, however, is that accessing any particular value requires a sequential decompression of the entire sequence. LeCo helps bridge the gap between data compression and data mining. Discovering and extracting patterns are classic data mining tasks. Interestingly, these tasks often benefit from preprocessing the data set with entropy compression tools to reduce \u201cnoise\u201d for a more accurate prediction [101]. As discussed above, these data mining algorithms can inversely boost compression efficiency by extracting the serial patterns through the LeCo framework. The theoretical foundation of this relationship is previously discussed in [52]. Notice that although we focus on regression in this paper, other data mining techniques, such as anomaly detection, also reveal serial patterns that can improve compression efficiency [30, 38]. The beauty of LeCo is that it aligns the goal of sequence compression with that of serial pattern extraction. LeCo is an extensible framework: it provides a convenient channel to bring related advances in data mining to the improvement of sequence compression. Although designed to solve different problems, LeCo is related to the recent learned indexes [47, 55, 73] in that they both use machine learning (e.g., regression) to model data distributions. A learned index tries to fit the cumulative distribution function (CDF) of a sequence and uses that to predict the quantile (i.e., position) of an input value. Inversely, LeCo takes the position in the sequence as input and tries to predict the actual value. LeCo\u2019s approach is consistent with the mapping direction (i.e., position \u2192value) in classic pattern recognition tasks in data mining. Moreover, LeCo mainly targets immutable columnar formats such as Arrow [4] and Parquet [6]. Updating the content requires a complete reconstruction of the files on which LeCo can piggyback its model retraining. Unlike indexes where incremental updates are the norm, the retraining overhead introduced by LeCo is amortized because the files in an analytical system typically follow the pattern of \u201ccompress once and access many times\u201d. We next present the LeCo framework in detail, followed by an extensive microbenchmark evaluation in Section 4. We then integrate LeCo into two real-world applications and demonstrate their end-to-end performance in Section 5. 3 THE LECO FRAMEWORK Let us first define the learned compression problem that the LeCo framework targets. Given a data sequence \u00ae \ud835\udc63[0,\ud835\udc5b) = (\ud835\udc630, ..., \ud835\udc63\ud835\udc5b\u22121), let \ud835\udc430 = \u00ae \ud835\udc63[\ud835\udc580=0,\ud835\udc581), \ud835\udc431 = \u00ae \ud835\udc63[\ud835\udc581,\ud835\udc582), ..., \ud835\udc43\ud835\udc5a\u22121 = \u00ae \ud835\udc63[\ud835\udc58\ud835\udc5a\u22121,\ud835\udc58\ud835\udc5a=\ud835\udc5b) be a partition assignment P with \ud835\udc5anon-overlap segments where each partition \ud835\udc57has a model F\ud835\udc57. 
Let δ_i = v_i − F_j(i), where F_j(i) is the model prediction at position i, for v_i ∈ P_j. The goal of learned compression is to find a partition assignment P and the associated models F such that the model size plus the delta-array size is minimized: Σ_{j=0}^{m−1} ( ||F_j|| + (k_{j+1} − k_j) · max_{k_j ≤ i < k_{j+1}} ⌈log2 δ_i⌉ ), where ||F_j|| denotes the model size of F_j and the max term is the number of bits required to represent the largest δ_i in the partition. Figure 3: The LeCo Framework – An overview of the modules and their interactions with each other. As shown in Figure 3, LeCo consists of five modules: Regressor, Partitioner, Hyperparameter-Advisor, Encoder, and Decoder. The Hyper-parameter Advisor trains a Regressor Selector model offline. Given an uncompressed sequence of values at runtime, it extracts features from the sequence for model inference, outputs the recommended Regressor type, and advises on the partitioning strategy. Then, LeCo enters the model learning phase, where the Regressor and the Partitioner work together to produce a set of regression models with associated partition boundaries. The Encoder receives the model parameters as well as the original sequence and then generates a compact representation of the "Model + Delta" (i.e., the compressed sequence) based on a pre-configured format. The compressed sequence is self-explanatory: all the metadata needed for decoding is embedded in the format. When a user issues a query by sending one or a range of positions, the Decoder reads the model of the relevant partition along with the corresponding locations in the delta array to recover the requested values. A design goal of LeCo is to make the framework extensible. We first decouple model learning (i.e., the logical value encoding) from the physical storage layout because applying common storage-level optimizations such as bit-packing and null-suppression to a delta sequence is orthogonal to the modeling algorithms. We also divide the model learning task into two separate modules. The Regressor focuses on best fitting the data in a single partition, while the Partitioner determines how to split the data set into subsequences to achieve a desirable performance and compression ratio. Such a modular design facilitates integrating future advances in serial pattern detection and compressed storage formats into LeCo. It also allows us to reason about the performance-space trade-off for each component independently. We next describe our prototype and the design decisions made for each module (Section 3.1 to Section 3.3), followed by the extension to handling string data in Section 3.4. 3.1 Regressor The Regressor takes in a sequence of values v_0, v_1, ..., v_{n−1} and outputs a single model that "best fits" the sequence.
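Because the deltas are bit-packed to the width of the largest prediction error, the default linear Regressor should minimize the maximum absolute error rather than the squared error (the objective formalized just below). The following is a minimal sketch of such an l-infinity (Chebyshev) fit; it uses scipy's general-purpose LP solver purely as an illustration of the linear program mentioned below, not LeCo's actual solver.

import numpy as np
from scipy.optimize import linprog

def chebyshev_linear_fit(values):
    # Fit v_i ~ theta0 + theta1 * i while minimizing max_i |theta0 + theta1*i - v_i|.
    # Variables: [theta0, theta1, eps]; objective: minimize eps.
    n = len(values)
    pos = np.arange(n)
    c = np.array([0.0, 0.0, 1.0])
    a_ub = np.vstack([
        np.column_stack([np.ones(n), pos, -np.ones(n)]),    #  theta0 + theta1*i - eps <= v_i
        np.column_stack([-np.ones(n), -pos, -np.ones(n)]),  # -theta0 - theta1*i - eps <= -v_i
    ])
    b_ub = np.concatenate([np.asarray(values, dtype=float),
                           -np.asarray(values, dtype=float)])
    res = linprog(c, A_ub=a_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    theta0, theta1, max_err = res.x
    return theta0, theta1, max_err

t0, t1, e = chebyshev_linear_fit([3, 5, 8, 9, 12])   # e bounds the bit-packed delta width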
LeCo supports the linear combination of various model types, including constant, linear, polynomial, and more sophisticated models, such as exponential and logarithm. Given a model F (\ud835\udc56) = \u00cd \ud835\udc57(\ud835\udf03\ud835\udc57\u00b7M\ud835\udc57(\ud835\udc56)) where M\ud835\udc57denotes different model terms with \ud835\udf03\ud835\udc57as its linear combination weight and \ud835\udc56represents the position in the sequence, classic regression methods minimize the sum of the squared errors \u00cd \ud835\udc56(\ud835\udc63\ud835\udc56\u2212F (\ud835\udc56))2 (i.e., the \ud835\udc592 norm of deltas), which has a closed-form solution. If LeCo stores deltas in variable lengths, this solution would produce a delta sequence with minimal size. As we discussed before, 3 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. real databases usually avoid variable-length values because of the parsing overhead during query execution. LeCo, therefore, stores each value in the delta array in fixed length. Specifically, LeCo adopts the bit-packing technique. Suppose the maximum absolute value in the delta array is\ud835\udeff\ud835\udc5a\ud835\udc4e\ud835\udc65\ud835\udc4e\ud835\udc4f\ud835\udc60, then each delta occupies a fixed \ud835\udf19= \u2308\ud835\udc59\ud835\udc5c\ud835\udc542(\ud835\udeff\ud835\udc5a\ud835\udc4e\ud835\udc65\ud835\udc4e\ud835\udc4f\ud835\udc60)\u2309bits. The storage size of the delta array is thus determined by \ud835\udf19rather than the expected value of the deltas, and our regression objective becomes: minimize \ud835\udf19 subject to \u2308log2(|F (\ud835\udc56) \u2212v\ud835\udc56|)\u2309\u2264\ud835\udf19,\ud835\udc56= 0, . . . ,\ud835\udc5b\u22121 \ud835\udf19\u22650 The constrained optimization problem above can be transformed into a linear programming problem with 2\ud835\udc5b+ 1 constraints where we can get an approximated optimal solution in \ud835\udc42(\ud835\udc5b) time [97]. We introduce a Regressor Selector (RS) in the HyperparameterAdvisor to automatically choose the regressor type (e.g., linear vs. higher-order) for a given sequence partition. RS takes in features collected from a single pass of the input data and then feeds them to its classification model (e.g., Classification and Regression Tree or CART). The model is trained offline using the same features from the training data sets. We briefly introduce the main features used in the current RS implementation below. Log-scale data range. Data range gives an upper bound of the size of the delta array. A smaller data range prefers simpler models because the model parameters would take a significant portion of the compressed output. Deviation of the \ud835\udc58th-order deltas. Given a data sequence \ud835\udc630, ..., \ud835\udc63\ud835\udc5b\u22121, we define the first-order delta sequence as \ud835\udc510 1 = \ud835\udc631 \u2212 \ud835\udc630,\ud835\udc511 1 = \ud835\udc632 \u2212\ud835\udc631, ...,\ud835\udc511 \ud835\udc5b\u22122 = \ud835\udc63\ud835\udc5b\u22121 \u2212\ud835\udc63\ud835\udc5b\u22122. Then, the \ud835\udc58th-order delta sequence is {\ud835\udc51\ud835\udc58 0,\ud835\udc51\ud835\udc58 1, ...,\ud835\udc51\ud835\udc58 \ud835\udc5b\u2212\ud835\udc58\u22121}, where \ud835\udc51\ud835\udc58 \ud835\udc56\u22121 = \ud835\udc51\ud835\udc58\u22121 \ud835\udc56 \u2212\ud835\udc51\ud835\udc58\u22121 \ud835\udc56\u22121 . 
Let \ud835\udc51\ud835\udc58 \ud835\udc5a\ud835\udc4e\ud835\udc65,\ud835\udc51\ud835\udc58 \ud835\udc5a\ud835\udc56\ud835\udc5b, and\ud835\udc51\ud835\udc58 \ud835\udc4e\ud835\udc63\ud835\udc54be the maximum, minimum, and average delta values, respectively. We then compute the normalized deviation of the \ud835\udc58th-order deltas as \u00cd \ud835\udc56\u2208[0,\ud835\udc5b\u2212\ud835\udc58) (\ud835\udc51\ud835\udc58 \ud835\udc56\u2212\ud835\udc51\ud835\udc58 \ud835\udc4e\ud835\udc63\ud835\udc54) (\ud835\udc5b\u2212\ud835\udc58) (\ud835\udc51\ud835\udc58 \ud835\udc5a\ud835\udc4e\ud835\udc65\u2212\ud835\udc51\ud835\udc58 \ud835\udc5a\ud835\udc56\ud835\udc65) . We use this metric to determine the maximum degree of polynomial needed to fit the data. The intuition is that the \ud835\udc58th-order delta sequence of a \ud835\udc58th-degree polynomial is constant (i.e., with minimum deviation). Subrange trend and divergence. We first split the data into fixed-length subblocks {\u00ae \ud835\udc63[\ud835\udc56\u00b7\ud835\udc60,(\ud835\udc56+1)\u00b7\ud835\udc60)}\ud835\udc56, each containing \ud835\udc60records with a data range (i.e., subrange) of \ud835\udc5f\ud835\udc56. We define the subrange ratio (SR) between adjacent subblocks as \ud835\udc5f\ud835\udc56 \ud835\udc5f\ud835\udc56\u22121 . The metric \u201csubrange trend\u201d T is the average SR across all subblocks, while \u201csubrange divergence\u201d D is the difference between the maximum SR and minimum SR. These two metrics provide a rough sketch of the value-sequence distribution: T depicts how fast the values increase on average, and D indicates how stable the increasing-trend is. 3.2 Partitioner Given a Regressor, the Partitioner divides the input sequence \u00ae \ud835\udc63[0,\ud835\udc5b) = \ud835\udc630, \ud835\udc631, ..., \ud835\udc63\ud835\udc5b\u22121 into \ud835\udc5aconsecutive subsequences (i.e., partitions) \u00ae \ud835\udc63[0,\ud835\udc581), \u00ae \ud835\udc63[\ud835\udc581,\ud835\udc582), ..., \u00ae \ud835\udc63[\ud835\udc58\ud835\udc5a\u22121,\ud835\udc58\ud835\udc5a) where a regression model is trained on each partition. The goal of the Partitioner is to minimize the overall size of the compressed sequences. Although partitioning increases the number of models to store, it is more likely for the Regressor to produce a smaller delta array position value partition #1 partition #2 Figure 4: Fixed-length Partitioning Example. 102 103 104 105 106 107 Block size 0 20 40 60 Compression Ratio(%) booksale normal Figure 5: ompression Ratio Trend. \u2013 Sweeping block size. when fitting a shorter subsequence. Thus, we require the Partitioner to balance between the model storage overhead and the general model fitting quality. We can find an optimal partition arrangement by computing the compressed size of each possible subsequence through dynamic programming [99]. Such an exhaustive search, however, is forbiddingly expensive with time complexity of \ud835\udc42(\ud835\udc5b3) and space complexity of \ud835\udc42(\ud835\udc5b2). We next propose two practical partitioning schemes developed in LeCo that make different trade-offs between compression ratio and compression/decompression performance. 3.2.1 Fixed-Length Partitioning. The most common strategy is splitting the sequence into fixed-length partitions. This partitioning scheme is easy to implement and is friendly to random accesses. 
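Before moving further into partitioning, here is a small sketch of how the two Regressor Selector features defined in Section 3.1 could be computed. The subblock size s and the use of an absolute deviation in the normalization are our own assumptions where the text leaves the details open.

import numpy as np

def kth_order_delta_deviation(values, k):
    # Near zero when the data is well fit by a degree-k polynomial
    # (whose k-th order deltas are constant). Absolute deviation assumed.
    d = np.asarray(values, dtype=float)
    for _ in range(k):
        d = np.diff(d)
    if len(d) == 0 or d.max() == d.min():
        return 0.0
    return float(np.abs(d - d.mean()).sum() / (len(d) * (d.max() - d.min())))

def subrange_trend_and_divergence(values, s=64):
    # Split into subblocks of roughly s records; SR_i = range_i / range_{i-1}.
    # Trend T = average SR; divergence D = max SR - min SR. s is an assumed size.
    v = np.asarray(values, dtype=float)
    blocks = np.array_split(v, max(len(v) // s, 1))
    ranges = [blk.max() - blk.min() for blk in blocks]
    sr = [ranges[i] / ranges[i - 1] for i in range(1, len(ranges)) if ranges[i - 1] != 0]
    if not sr:
        return 0.0, 0.0
    return float(np.mean(sr)), float(max(sr) - min(sr))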
Because each partition contains a fixed number of items, given a position, an application can quickly locate the target partition without the need for a binary search in the metadata. The downside, however, is that fixed-length partitioning is not flexible enough to help the Regressor capture the desired patterns. For example, as shown in Figure 4, if we divide the Movie ID data set into fixedlength partitions, the Regressor would fail to leverage the piecewise linearity in certain ranges. To find an optimal partition size: (1) Sample < 1% of the data randomly, consisting of subsequences of length \ud835\udc41, where \ud835\udc41is the maximum partition length in the search space (e.g., \ud835\udc41= 10\ud835\udc58). (2) Search the (fixed) partition size between 1 and \ud835\udc41that produces the lowest compression ratio on the samples. Because the compression ratio typically has a \u201cU-shape\u201d as we vary the partition size (illustrated in Figure 5), we first perform an exponential search to go past the global minimum. Then, we search back with smaller steps to approach the optimal partition size. (3) Stop the search process once the compression ratio converges (with < 0.01% decline between adjacent iterations). 3.2.2 Variable-Length Partitioning. Below, we propose a greedy algorithm for variable-length partitioning for an arbitrary Regressor discussed in Section 3.1 to approximate the optimal solution obtained by the dynamic programming approach. Our greedy algorithm includes two phases: split and merge. In the split phase, the algorithm groups consecutive data points into small partitions where the Regressor can predict with small errors. We impose strict constraints to limit the maximum prediction error produced by the Regressor for each partition. Because of our aggressive guarantee of prediction errors, the algorithm tends to generate an excessive number of partitions in the split phase, where the cumulative model size could dominate the final compressed size. To compensate for the over-splitting, the algorithm enters the 4 \fLeCo: Lightweight Compression via Learning Serial Correlations SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile merge phase where adjacent partitions are merged if such an action can reduce the final compressed size. Specifically, in the split phase, we first pick a few starting partitions. A starting partition contains at least a minimum number of consecutive values for the Regressor to function meaningfully (e.g., three for a linear Regressor). Then, we examine the adjacent data point to determine whether to include this point into the partition. The intuition is that if the space cost of incorporating this data point is less than a pre-defined threshold, the point is added to the partition; otherwise, a new partition is created. The splitting threshold is related to the model size \ud835\udc46\ud835\udc40of the Regressor. Suppose the current partition spans from position \ud835\udc56to \ud835\udc57\u22121: \u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57). Let \u0394(\u00ae \ud835\udc63) be a function that takes in a value sequence and outputs the number of bits required to represent the maximum absolute prediction error from the Regressor (i.e., \u2308\ud835\udc59\ud835\udc5c\ud835\udc542(\ud835\udeff\ud835\udc5a\ud835\udc4e\ud835\udc65\ud835\udc4e\ud835\udc4f\ud835\udc60)\u2309). 
Then, the space cost of adding the next data point \ud835\udc63\ud835\udc57is \ud835\udc36= (\ud835\udc57+ 1 \u2212\ud835\udc56) \u00b7 \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57+1)) \u2212(\ud835\udc57\u2212\ud835\udc56) \u00b7 \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) We compare \ud835\udc36against \ud835\udf0f\ud835\udc46\ud835\udc40, where \ud835\udf0fis a pre-defined coefficient between 0 and 1 to reflect the \u201caggressiveness\u201d of the split phase: a smaller \ud835\udf0fleads to more fine-grained partitions with more accurate models. If \ud835\udc36\u2264\ud835\udf0f\ud835\udc46\ud835\udc40, \ud835\udc63\ud835\udc57is included to the current partition \u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57). Otherwise, we create a new partition with \ud835\udc63\ud835\udc57as the first value. In the merge phase, we scan through the list of partitions \u00ae \ud835\udc63[0,\ud835\udc581), \u00ae \ud835\udc63[\ud835\udc581,\ud835\udc582), ..., \u00ae \ud835\udc63[\ud835\udc58\ud835\udc5a\u22121,\ud835\udc58\ud835\udc5a) produced in the split phase and merge the adjacent ones if the size of the merged partition is smaller than the total size of the individual ones. Suppose the algorithm proceeds at partition \u00ae \ud835\udc63[\ud835\udc58\ud835\udc56\u22121,\ud835\udc58\ud835\udc56). At each step, we try to merge the partition to its right neighbor \u00ae \ud835\udc63[\ud835\udc58\ud835\udc56,\ud835\udc58\ud835\udc56+1). We run the Regressor on the merged partition \u00ae \ud835\udc63[\ud835\udc58\ud835\udc56\u22121,\ud835\udc58\ud835\udc56+1) and compare its size \ud835\udc46\ud835\udc40+ (\ud835\udc58\ud835\udc56+1 \u2212 \ud835\udc58\ud835\udc56\u22121) \u00b7 \u0394(\u00ae \ud835\udc63[\ud835\udc58\ud835\udc56\u22121,\ud835\udc58\ud835\udc56+1)) to the combined size of the original partitions 2\ud835\udc46\ud835\udc40+(\ud835\udc58\ud835\udc56\u2212\ud835\udc58\ud835\udc56\u22121) \u00b7\u0394(\u00ae \ud835\udc63[\ud835\udc58\ud835\udc56\u22121,\ud835\udc58\ud835\udc56))+(\ud835\udc58\ud835\udc56+1\u2212\ud835\udc58\ud835\udc56) \u00b7\u0394(\u00ae \ud835\udc63[\ud835\udc58\ud835\udc56,\ud835\udc58\ud835\udc56+1)). We accept this merge if it results in a size reduction. We iterate the partition list multiple times until no qualified merge exists. We summarize our vari-length partitioning algorithm as follows: [Init Phase] Scan all data point once. Pick a few \u201cgood\u201d initial positions to form the starting partitions. [Split Phase] Scan the starting partition set once. \u2022 Try \u201cgrowing\u201d each starting partition by adding adjacent points. \u2022 Calculate the inclusion cost and approve the inclusion if it is below the predefined threshold related to the model size. Otherwise, start a new partition with a single point. \u2022 Stops after each point belongs to a partition. [Merge Phase] Scan the partition sets multiple times. \u2022 Merge a partition to its right neighbor if the combined one achieves a lower compression ratio. \u2022 Stops when no merge can reduce the total space. We next discuss two critical aspects that largely determine the efficiency of the above split-merge algorithm. Computing \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) Efficiently. The computational complexity of \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) dominates the overall algorithm complexity because the function is invoked at every data point inclusion in the split phase. 
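The split phase summarized above can be sketched as follows (the merge phase and the starting-position selection are omitted). The fit_max_error callback stands in for the Regressor's maximum absolute prediction error on a segment, and the tau and model_bits values are illustrative, not LeCo's defaults.

import math

def bits_for(seg, fit_max_error):
    # Delta(seg): bits needed for the largest |prediction error| on `seg`.
    err = fit_max_error(seg)
    return math.ceil(math.log2(err)) if err >= 1 else 0

def greedy_split(values, fit_max_error, model_bits=128, tau=0.5, min_len=3):
    # Split phase only: a point joins the current partition when the growth in
    # delta-array bits C stays below tau * model size; otherwise a new partition starts.
    partitions, start = [], 0
    j = start + min_len
    while j < len(values):
        old_bits = bits_for(values[start:j], fit_max_error)
        new_bits = bits_for(values[start:j + 1], fit_max_error)
        cost = (j + 1 - start) * new_bits - (j - start) * old_bits
        if cost <= tau * model_bits:
            j += 1                                   # absorb the next point
        else:
            partitions.append((start, j))            # close the partition
            start, j = j, j + min_len
    partitions.append((start, len(values)))
    return partitions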
For a general \ud835\udc58-degree polynomial model \u00cd \ud835\udc56\u2208[0,\ud835\udc58] \ud835\udf03\ud835\udc56\u00b7 \ud835\udc65\ud835\udc56, we can use the method introduced in [97] to compute \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) starting partition 0 30 31 32 29 49 119 124 118 122 4 0 1 3 2 4 0 0 2 2 3 6 8 4 4 4 6 merge 0 30 31 32 29 49 119 124 118 122 \u2705 split 0 30 31 32 29 49 119 124 118 122 \u2705 \u2705 1 1 -3 20 70 5 -6 4 30 required bits delta bits Figure 6: Variable-length Partitioning on Delta Encoding \u2013 Value 29 is successfully included into segment {30, 31, 32} in the split phase because its inclusion cost \ud835\udc36[1,5) = 6 is less than the pre-defined threshold \ud835\udf0f\ud835\udc46\ud835\udc40= 0.5 \u00b7 32 = 16. In the merge phase, the attempt to merge segment {30, 31, 32, 29} and {49} succeeds because the space consumption of the segment formed is smaller than the summation of the two original segments. in linear time. To further speed up the process for the linear Regressor (which is most commonly used), we propose a much simpler metric e \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) = log2(max\ud835\udc57\u22121 \ud835\udc58=\ud835\udc56+1(\ud835\udc51\ud835\udc58) \u2212min\ud835\udc57\u22121 \ud835\udc58=\ud835\udc56+1(\ud835\udc51\ud835\udc58)), where \ud835\udc51\ud835\udc58= \ud835\udc63\ud835\udc58\u2212\ud835\udc63\ud835\udc58\u22121 to approximate the functionality of \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) The intuition is that the proposed metric e \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) indicates the difficulty of the linear regression task and has a positive correlation to max bit-width measure \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)). As discussed in Section 2, Delta Encoding is considered a specific design point under the LeCo framework. The model in each Delta partition is an implicit step function, and only the first value in the partition is explicitly stored as the model. The prediction errors (i.e., the \ud835\udeff\u2032\ud835\udc60) of Delta Encoding are the differences between each pair of the adjacent values. Therefore, \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) = \u2308log2(max\ud835\udc57\u22121 \ud835\udc58=\ud835\udc56+1 \ud835\udc51\ud835\udc58)\u2309, where\ud835\udc51\ud835\udc58= \ud835\udc63\ud835\udc58\u2212\ud835\udc63\ud835\udc58\u22121. After adding the next data point \ud835\udc63\ud835\udc57to this partition, we can directly compute \u0394(\u00ae \ud835\udc63[0,\ud835\udc57+1)) = max {\u0394(\u00ae \ud835\udc63[0,\ud835\udc57)),\ud835\udc51\ud835\udc57}. Selecting Good Starting Positions. Because the algorithms used in both the split and merge phases are greedy, the quality of the algorithms\u2019 starting partitions can significantly impact the partition results, especially for the split phase. Suppose we start at a \u201cbumpy\u201d region \u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57) during splitting. Because \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)) of this partition is already large, there is a high probability that it stays the same when including an extra data point in the partition (i.e., \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57+1)) = \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57))). Therefore, the space cost of adding this point becomes a constant \ud835\udc36= \u0394(\u00ae \ud835\udc63[\ud835\udc56,\ud835\udc57)). 
As long as \ud835\udc36\u2264\ud835\udf0f\ud835\udc46\ud835\udc40, this \u201cbad\u201d partition would keep absorbing data points, which is destructive to the overall compression. For a general polynomial model of degree \ud835\udc58, we select segments where the (\ud835\udc58+1)th-order deltas (refer to the definition in Section 3.1) are minimized as the positions to initiate the partitioning algorithm. The intuition is that the discrete (\ud835\udc58+ 1)th-order deltas approximate the (\ud835\udc58+ 1)th-order derivatives of a continuous function of degree \ud835\udc58. If a segment has small (\ud835\udc58+ 1)th-order deltas, the underlying function to be learned is less likely to contain terms with a degree much higher than \ud835\udc58. For Delta Encoding, a good starting partition is when the differences between the neighboring values are small (i.e., a small model 5 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. ...... Range decoding error correction ...... Delta array Header Figure 7: LeCo\u2019s Storage Format for One Partition prediction error) and when the neighboring points form roughly an arithmetic progression (i.e., the partition has the potential to grow larger). We, therefore, compute the bit-width for each delta in the sequence first (\u201crequired bits\u201d in Figure 6). We then compute the second-order \u201cdelta bits\u201d based on those \u201crequired bits\u201d and pick the positions with the minimum value (the yellow-boxed zeros in Figure 6) as the initial partitions. The required bits are used as the tie-breaker to determine the partition growth precedence. To summarize, we compared the split-merge partitioning algorithm with the linear Regressor against the optimal partitioning obtained via dynamic programming on real-world data sets introduced in Section 4.1 and found that our greedy algorithm imposes less than 3% overhead on the final compressed size. 3.2.3 Partitioning Strategy Advising. Compared to fixed-length partitions, variable-length partitions could produce a higher compression ratio with a cost of slower random access and compression speed. The choice of the partitioning strategies depends largely on the application\u2019s needs. To facilitate estimating the trade-offs, our Hyperparameter-Advisor provides two scores to indicate the potential space benefit of adopting the variable-length strategy. The two scores are inspired by the definitions of \u201clocal hardness\u201d (H\ud835\udc59) and \u201cglobal hardness\u201d (H\ud835\udc54) of a data set introduced in [107]. H\ud835\udc59captures the local unevenness in the values distribution, while H\ud835\udc54depicts the degree of variation of the distribution at a global scale. Intuitively, if the data set is locally hard (i.e., H\ud835\udc59is high), no Regressor would fit the data well regardless of the partitioning strategy. On the other hand, if the data set is locally easy but globally hard (i.e., H\ud835\udc54is high), applying variable-length partitioning could improve the compression ratio significantly because it is able to catch the \u201csharp turns\u201d in the global trend of the value distribution. Similar to [107], we compute H\ud835\udc59by running the piece-wise linear approximation (PLA) algorithm with a small error bound (e.g.,\ud835\udf16= 7) on the data set and count the number of segments generated. The count is then divided by the data set size to normalize the H\ud835\udc59score. 
For H\ud835\udc54, we run the same PLA algorithm with a much larger error bound (e.g., \ud835\udf16= 4096). Instead of counting the number of segments, we use the the average gap3 between adjacent segments and the variance of the segment lengths to estimate the \u201cglobal hardness\u201d of the value distribution. H\ud835\udc54is the summation of these two numbers, with each normalized. 3.3 Encoder and Decoder The Encoder is responsible for generating the final compressed sequences. The input to the Encoder is a list of value partitions produced by the Partitioner, where each partition is associated with a model. The Encoder computes the delta for each value through model inference and then stores it in the delta array. The storage format is shown in Figure 7. There is a header and a delta array for each partition. In the header, we first store the model parameters. For the default linear Regressor, the parameters 3first value of the latter segment last value of the former segment index 0 79 104 158 a a a a d b a e a a g c character set: a-z 26-base Figure 8: LeCo String Compression \u2013 An example including algorithm optimizations and storage format modifications. are two 64-bit floating-point numbers: intercept \ud835\udf030 and slope \ud835\udf031. Because we bit pack the delta array according to the maximum delta, we must record the bit-length \ud835\udc4ffor an array item in the header. For fixed-length partitions, the Encoder stores the partition size \ud835\udc3fin the metadata. If the partitions are variable-length, the Encoder keeps the start index (in the overall sequence) for each partition so that a random access can quickly locate the target partition. We use ALEX [47] (a learned index) to record those start positions to speed up the binary search. To decompress a value given a position \ud835\udc56, the Decoder first determines which partition contains the requested value. If the partitions are fixed-length, the value is located in the \u230a\ud835\udc56 \ud835\udc3f\u230bth partition. Otherwise, the Decoder conducts a \u201clower-bound\u201d search in the metadata to find the partition with the largest start index \u2264\ud835\udc56. After identifying the partition, the Decoder reads the model parameters from the partition header and then performs a model inference using \ud835\udc56\u2032 = \ud835\udc56\u2212start_index to get a predicted value \u02c6 \ud835\udc63. Then, the Decoder fetches the corresponding \ud835\udeff\ud835\udc56\u2032 in the delta array by accessing from the (\ud835\udc4f\u00b7\ud835\udc56\u2032)th bit to the (\ud835\udc4f\u00b7 (\ud835\udc56\u2032 +1)\u22121)th bit. Finally, the Decoder returns the decompressed value \u230a\u02c6 \ud835\udc63\u230b+ \ud835\udeff\ud835\udc56\u2032. Decoding a value involves at most two memory accesses, one for fetching the model (often cached) and the other for fetching the delta. The basic algorithm for range decompression is to invoke the above decoding process for each position in the range. Because of the sequential access pattern, most cache misses are eliminated. For the default linear regression, the Decoder performs two floatingpoint calculations for model inference (one multiplication and one addition) and an integer addition for delta correction. We carry out an optimization to increase the range decompression throughput by 10 \u221220%. For position \ud835\udc56, the model prediction is \u02c6 \ud835\udc63\ud835\udc56= \ud835\udf030 +\ud835\udf031 \u00b7\ud835\udc56. 
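The random-access path just described can be summarized by the sketch below. It is a simplified illustration rather than LeCo's C++ implementation: the `Partition` container, the MSB-first bit packing, and a non-negative delta representation are assumptions made for brevity (the actual format stores signed deltas and, for variable-length partitions, a learned index over partition start positions).

```python
import math
from dataclasses import dataclass

@dataclass
class Partition:
    theta0: float   # model intercept, read from the partition header
    theta1: float   # model slope, read from the partition header
    bit_len: int    # bit-width b of one packed delta
    deltas: bytes   # bit-packed delta array

def read_packed(deltas: bytes, bit_len: int, idx: int) -> int:
    """Fetch the idx-th b-bit value from a bit-packed byte array."""
    value = 0
    for bit in range(bit_len * idx, bit_len * (idx + 1)):
        byte, offset = divmod(bit, 8)
        value = (value << 1) | ((deltas[byte] >> (7 - offset)) & 1)
    return value

def decode(part: Partition, i: int) -> int:
    """Model inference plus delta correction: floor(theta0 + theta1 * i) + delta_i."""
    predicted = math.floor(part.theta0 + part.theta1 * i)
    return predicted + read_packed(part.deltas, part.bit_len, i)
```

A range decompression is then just this decode applied to consecutive positions, which is where the slope-accumulation optimization described next saves the per-value floating-point multiplication.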
We can obtain \u02c6 \ud835\udc63\ud835\udc56by computing \u02c6 \ud835\udc63\ud835\udc56\u22121 +\ud835\udf031, thus saving the floating-point multiplication. However, because of the limited precision in the floating-point representation, the \ud835\udf031-accumulation result at certain position \ud835\udc56is incorrect (i.e., \u230a\ud835\udf030 + \u00cd\ud835\udc56 1 \ud835\udf031\u230b+ \ud835\udeff\ud835\udc56\u2260 \u230a\ud835\udf030 + \ud835\udf031 \u00b7 \ud835\udc56\u230b+ \ud835\udeff\ud835\udc56). Therefore, we append an extra list to the delta array to correct the deviation at those positions. 3.4 Extension to Handling Strings The (integer-based) algorithms discussed so far can already benefit a subset of the string columns in a relational table where the values are dictionary-encoded. In this section, we extend our support to mostly unique string values under the LeCo framework. The idea is to create an order-preserving mapping between the strings and large integers so that they can be fed to the Regressor. Given a partition of string values, we first extract their common prefix (marked in dashed box in Figure 8) and store it separately in 6 \fLeCo: Lightweight Compression via Learning Serial Correlations SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile the partition header. Then, we shrink the size of the character set if possible. Because many string data sets refer to a portion of the ASCII table, we can use a smaller base to perform the string-integer mapping. For example, we adopt 26-based integers in Figure 8 with only lower-case letters presenting. Notice that for an arbitrary \ud835\udc40-based mapping, the computation required to recover each character from the integer is expensive. Given the mapped integer \ud835\udc63, it requires an integer modulo \ud835\udc63%\ud835\udc40 to decode the current character and an integer division \ud835\udc63/\ud835\udc40to prepare for decoding the next one. Both operations take tens of CPU cycles. To speed up decoding, we set \ud835\udc40to its closest power of two (2\ud835\udc5a) so that the modulo becomes a left-shift followed by a bit-wise AND (\ud835\udc63&((1 << \ud835\udc5a) \u22121)), and the division becomes a right-shift (\ud835\udc63>> \ud835\udc5a). For example, for strings that only consist of lower-case characters, we set \ud835\udc40= 32. LeCo requires strings to be fixed-length. For a column of varchar(3), we pad every string to 3 bytes (padding bytes marked with orange \u201ca\u201d in Figure 8). An interesting observation is that we can leverage the flexibility in choosing the padding characters to minimize the stored deltas. Suppose the string at position \ud835\udc56is \ud835\udc60\ud835\udc56, and the smallest/largest valid string after padding is \ud835\udc60\ud835\udc5a\ud835\udc56\ud835\udc5b \ud835\udc56 /\ud835\udc60\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc56 (i.e., pad each bit position with the smallest/largest character in the character set). We then choose the padding adaptively based on the predicted value \u02c6 \ud835\udc60\ud835\udc56from the Regressor to minimize the absolute value of the prediction error. 
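As an illustration of the order-preserving string-to-integer mapping with a power-of-two base described above, the sketch below fixes m = 5 (covering the lower-case alphabet) and pads with the smallest character; these choices, like the function names, are ours and are only meant to show why decoding reduces to shifts and masks.

```python
def string_to_int(s: str, length: int, m: int = 5, lowest: str = 'a') -> int:
    """Order-preserving mapping of a fixed-length lower-case string to an integer."""
    v = 0
    for ch in s.ljust(length, lowest):      # pad short strings with the smallest char
        v = (v << m) | (ord(ch) - ord(lowest))
    return v

def int_to_string(v: int, length: int, m: int = 5, lowest: str = 'a') -> str:
    """Recover characters with one bit-wise AND and one right-shift per character."""
    mask = (1 << m) - 1
    chars = []
    for _ in range(length):
        chars.append(chr((v & mask) + ord(lowest)))
        v >>= m
    return ''.join(reversed(chars))
```

For example, string_to_int('abc', 3) maps to 34 and int_to_string(34, 3) returns 'abc'; for varchars, padded bytes must still be trimmed using the stored length when decoding.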
If \u02c6 \ud835\udc60\ud835\udc56< \ud835\udc60\ud835\udc5a\ud835\udc56\ud835\udc5b \ud835\udc56 , we adopt the minimum padding and store \ud835\udeff\ud835\udc56= \ud835\udc60\ud835\udc5a\ud835\udc56\ud835\udc5b \ud835\udc56 \u2212\u02c6 \ud835\udc60\ud835\udc56in the delta array; if \u02c6 \ud835\udc60\ud835\udc56> \ud835\udc60\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc56 , we use the maximum padding and produce \ud835\udeff\ud835\udc56= \ud835\udc60\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc56 \u2212\u02c6 \ud835\udc60\ud835\udc56; if \ud835\udc60\ud835\udc5a\ud835\udc56\ud835\udc5b \ud835\udc56 \u2264\u02c6 \ud835\udc60\ud835\udc56\u2264\ud835\udc60\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc56 , we choose \u02c6 \ud835\udc60\ud835\udc56as the padded string directly and obtain \ud835\udeff\ud835\udc56= 0. The lower part of Figure 8 shows the updated storage format to accommodate varchars. Additionally, the header includes the maximum padding length (without prefix) along with the common prefix of the partition. We also record the length of each varchar value in the delta array (the slot before each delta value) to mark the boundary of the valid bytes from padded bytes in order to decode correctly. These lengths can be omitted for fixed-length strings. 4 MICROBENCHMARK EVALUATION We evaluate LeCo in two steps. In this section, we compare LeCo against state-of-the-art lightweight compression schemes through a set of microbenchmarks. We analyze LeCo\u2019s gains and trade-offs in compression ratio, random access speed, and range decompression throughput. In Section 5, we integrate LeCo into two widely-used applications to show the end-to-end performance. 4.1 Compression Schemes and Data Sets The baseline compression schemes under evaluation are EliasFano [88, 103], Frame-of-Reference (FOR) [59, 117], Delta Encoding (Delta) [29], and rANS [49]. FOR and Delta are introduced in Section 2. rANS is a variant of arithmetic encoding [106] with a decoding speed similar to Huffman [63]. Elias-Fano is an encoding mechanism to compress a sorted list of integers. Suppose the list has \ud835\udc5bintegers, with \ud835\udc5abeing the difference between the maximum and minimum value of the sequence. Elias-Fano stores the lower \u2308\ud835\udc59\ud835\udc5c\ud835\udc542( \ud835\udc5a \ud835\udc5b)\u2309bits for each value explicitly with bit packing. For the remaining higher bits, Elias-Fano uses unary coding to record the number of appearances for each possible higher-bit value. For example, the binary sequence 00000, 00011, 01101, 10000, 10010, 10011, 11010, 11101 is encoded as \u201c00 11 01 00 10 11 10 01\u201d for the lower bits and \u201c110 0 0 10 1110 0 10 10\u201d for the higher bits. Elias-Fano is quasi-succinct [103] in that it only requires (2 + \u2308\ud835\udc59\ud835\udc5c\ud835\udc542( \ud835\udc5a \ud835\udc5b)\u2309) bits per element. We evaluate LeCo and the baseline solutions extensively on thirteen integer data sets: \u2013 linear, normal: synthetic data sets with 200M 32-bit sorted integers following a clean linear (or normal) distribution. \u2013 poisson: 87M 64-bit timestamps following a Poisson distribution that models events collected by distributed sensors [113]. \u2013 ml: 14M 64-bit sorted timestamps from the UCI-ML data set [18]. \u2013 booksale, facebook, wiki, osm: each with 200M 32-bit or 64-bit sorted integers from the SOSD benchmark [70]. \u2013 movieid: 20M 32-bit \u201cliked\u201d movie IDs from MovieLens [9]. 
\u2013 house_price: 100K 32-bit sorted integers representing the distribution of house prices in the US [10]. \u2013 planet: 200M 64-bit sorted planet ID from OpenStreetMap [42]. \u2013 libio: 200M 64-bit sorted repository ID from libraries.io [85]. \u2013 medicare: (used in Section 4.5) 1.5 billion augmented 64-bit integers (without order) exported from the public BI benchmark [24]. seven additional non-linear data sets (used in Section 4.4): \u2013 cosmos: 100M 32-bit data simulating a cosmic ray signal4. \u2013 polylog: 10M 64-bit synthetic data of a biological population growth curve5. \u2013 exp, poly: 200M 64-bit synthetic data, each block follows the exponential or polynomial distribution of different parameters. \u2013 site, weight, adult: 250k, 25k and 30k sorted 32-bit integer column exported from the websites_train_sessions, weights_heights, and adult_train data sets in mlcourse.ai [23]. nine tabular data sets, each sorted by its primary key column: \u2013 lineitem, partsupp, orders: TPC-H [27] tables, scale factor = 1. \u2013 inventory, catalog_sales, date_dim: from TPC-DS [26], sf = 1. \u2013 geo, stock, course_info: real-world tables extracted from geonames [20], GRXEUR price [21] and Udemy course [22]. and three string data sets: \u2013 email: 30K email addresses (host reversed) with an average string length of 15 bytes [2]. \u2013 hex: 100K sorted hexadecimal strings (up to 8 bytes) [36]. \u2013 word: 222K English words with an average length of 9 bytes [3]. Figure 9a visualizes the eighteen integer data sets where noticeable unevenness is observed frequently in real-world data sets. 4.2 Experiment Setup We run the microbenchmark on a machine with Intel\u00aeXeon\u00ae(Ice Lake) Platinum 8369B CPU @ 2.70GHz and 32GB DRAM. The three baselines are labeled as Elias-Fano, FOR, and Delta-fix. Delta-var represents our improved version of Delta Encoding that uses the variable-length Partitioner in LeCo. LeCo-fix and 4We use (sin \ud835\udc65+10 60\ud835\udf0b+ 1 10 sin 3(\ud835\udc65+10) 60\ud835\udf0b ) \u00d7 106 + N(0, 100) to construct it. 5Constructed by concatenating the polynomial and logarithm distribution, in turn, every 500 records. 7 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. 0 2e+08 0 4.3e+09 linear 0 2e+08 0 4.3e+09 normal 0 2e+08 0 5.3e+08 libio 0 2e+08 1.2e+09 wiki 0 2e+08 0 4.3e+09 booksale 0 1.4e+07 1.5e+12 ml 0 1e+03 0 8.2e+04 movieid 0 8.7e+07 0 5e+16 poisson 0 1e+05 0 6e+07 house_price 0 2e+08 0 9.2e+09 planet 0 2e+08 0 1.8e+19 facebook 0 2e+08 0 1.4e+19 osm 0 3e+04 0 2e+09 Poly 0 2e+03 1 1.7e+15 Exp 0 1e+07 0 3.1e+09 polylog 0 2.5e+05 0 3.5e+04 Site 0.0 2.5 1e4 6.028 7.515 1e6 Weight 0 3.3e+04 0 1.5e+06 Adult (a) Data Distribution Plot. \u2013 The first row presents the nine data sets classified as \u201clocal easy\u201d. 0.0 0.2 0.4 0.6 0.8 1.0 Local Hardness 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Global Hardness house_price wiki libio booksale planet ml facebook normal linear poisson movieid osm (b) Dataset Hardness. Figure 9: Data Distribution with Hardness evaluation. LeCo-var are linear-Regressor LeCo prototypes that adopt fixedlength and variable-length partitioning, respectively. The corresponding LeCo variants with polynomial Regressor are labeled LeCo-Poly-fix and LeCo-Poly-var. For all the fixed-length partitioning methods, the partition size is obtained through a quick sampling-based parameter search described in Section 3.2.1. 
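The sampling-based partition-size search can be sketched as below under our own assumptions (the candidate sizes, number of sampled blocks, and the cost callback are placeholders; Section 3.2.1 of the paper defines the actual procedure). `cost_fn` could be, for instance, the delta-bit-width estimate from the earlier sketch plus a per-partition model overhead.

```python
import random

def fixed_partition_cost(values, size, cost_fn):
    """Estimated bits if `values` were cut into fixed-length partitions of `size`."""
    return sum(cost_fn(values, s, min(s + size, len(values)))
               for s in range(0, len(values), size))

def search_partition_size(values, cost_fn, candidates=(64, 256, 1024, 4096),
                          sample_blocks=16, block_len=8192):
    """Pick the candidate size with the lowest estimated cost on a few sampled blocks."""
    starts = [random.randrange(0, max(1, len(values) - block_len))
              for _ in range(sample_blocks)]
    def sampled_cost(size):
        return sum(fixed_partition_cost(values[s:s + block_len], size, cost_fn)
                   for s in starts)
    return min(candidates, key=sampled_cost)
```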
For Delta-var, LeCo-var, and LeCo-Poly-var, we set the split-parameter \ud835\udf0fto be small (in the range [0, 0.15]) in favor of the compression ratio over the compression throughput. Given a data set, an algorithm under test first compresses the whole data set and reports the compression ratio (i.e., compressed_size / uncompressed_size) and compression throughput. Then the algorithm performs \ud835\udc41uniformly-random accesses (\ud835\udc41 is the size of the data set) and reports the average latency. Finally, the algorithm decodes the entire data set and measures the decompression throughput. All experiments run on a single thread in the main memory. We repeat each experiment three times and report the average result for each measurement. 4.3 Integer Benchmark Figure 10 shows the experiment results for compression ratio, random access latency, and decompression throughput on the twelve integer data sets. Elias-Fano does not apply to poisson and movieid because these two data sets are not fully-sorted. Overall, LeCo achieves a Pareto improvement over the existing algorithms. Compared to Elias-Fano and FOR, the LeCo variants obtain a significantly better compression ratio while retaining a comparable decompression and random access speed. When compared to Delta Encoding, LeCo remains competitive in the compression ratio while outperforming the Delta variants by an order of magnitude in random access. 4.3.1 Compression Ratio. As shown in the first row of Figure 10, the compression ratios from the LeCo variants are strictly better than the corresponding ones from FOR. This is because FOR is a special case of LeCo: the output of its Regressor is fixed to a horizontal line (refer to Section 2). We further plot the local hardness H\ud835\udc59and the global hardness H\ud835\udc54(defined in Section 3.2.3) of the different data sets in Figure 9b. The horizontal/vertical dashed line marks the average global/local hardness among the data sets. we observe that LeCo\u2019s compressionratio advantage over FOR is larger on locally-easy data sets (40.9% improvement on average) than the three locally-hard data sets (9.3% improvement on average). This is because local unevenness in the distribution makes it difficult for a regression algorithm to fit well. LeCo also compresses better than Elias-Fano across (almost) all data sets. Although Elias-Fano is proved to be quasi-succinct, it fails to leverage the embedded serial correlation between the values for further compression. rANS remains the worst, which indicates that the redundancy embedded in an integer sequence often comes more from the serial correlation rather than the entropy. Compared to Delta Encoding, LeCo shows a remarkable improvement in compression ratio for \u201csmooth\u201d (synthetic) data sets: linear, normal, and poisson. For the remaining (real-world) data sets, however, LeCo remains competitive. This is because many real-world data sets exhibit local unevenness, as shown in Figure 9a. The degree of such irregularity is often at the same level as the difference between adjacent values. Another observation is that variable-length partitioning is effective in reducing the compression ratio on real-world data sets that have rapid slope changes or irregular value gaps (e.g., movieid, house_price). Our variable-length partitioning algorithm proposed in Section 3.2 is able to detect those situations and create partitions accordingly to avoid oversized partitions caused by unfriendly patterns to the Regressor. 
We also notice that LeCo-var achieves an additional 28.2% compression compared to LeCo-fix on the four locally-easy and globally-hard data sets, while the improvement drops to < 10% for the remaining data sets6. This indicates that the two metrics used for the partitioning strategy advising (refer to Section 3.2.3) is effective in identifying data sets that can potentially benefit from variable-length partitions. 4.3.2 Random Access. The second row of Figure 10 presents the average latency of decoding a single value in memory for each compression scheme. The random access speed of LeCo-fix is comparable to that of FOR because they both require only two memory accesses per operation. FOR is often considered the lower bound of the random access latency for lightweight compression because it involves minimal computation (i.e., an integer addition). Compared to FOR, LeCo-fix requires an additional floating-point multiplication. This overhead, however, is mostly offset by a better cache hit ratio because LeCo-fix produces a smaller compressed sequence. LeCo-var is slower because it has to first search the metadata to determine the corresponding partition for a given position. This index search takes an extra 35\u221290 ns depending on the total number of partitions. The Delta variants are an order of magnitude slower than the others in most data sets because they must decompress the entire partition sequentially to perform a random access. 6Except for the ideal cases in linear and normal 8 \fLeCo: Lightweight Compression via Learning Serial Correlations SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile 0 20 40 60 Compression Ratio(%) 77.6 74.1 72.1 84.8 87.1 rANS FOR Elias-Fano Delta Delta-var Leco Leco-var model size 0 100 200 Random Access(ns) 5e+05 5e+05 9e+05 1e+05 5e+05 1e+06 1e+05 7e+05 5e+05 5e+05 2e+05 1e+06 40 39 57 43 51 58 48 16 24 43 10 65 1657 1643 334 377 347 344 394 285 537 1884 1857 530 1329 641 525 549 753 378 1053 625 28 29 72 55 56 77 67 19 26 63 11 61 linear normal libio wiki booksale planet facebook ml movieid poisson house_price osm 0.0 1.0 2.0 3.0 4.0 Decode TPS(GB/s) 4.14 7.7 7.42 Figure 10: Compression Microbenchmark \u2013 Measurement of seven compression schemes on twelve integer data sets from three aspects: Compression Ratio, Random Access Latency, and Full Decompression Throughput. We break down the compression ratio into model size (marked with the cross pattern) and delta size in the first row. The dashed lines split these data sets into four groups in the order of locally easy globally easy, locally hard globally easy, locally easy globally hard, and locally hard globally hard according to Figure 9b. FOR Elias-Fano Delta-fix Delta-var LeCo-fix LeCo-var 0.81\u00b10.28 0.58\u00b10.17 1.04\u00b10.14 0.04\u00b10.01 0.78\u00b10.11 0.02\u00b10.01 Table 1: Compression Throughput (GB/s). movieid poly cosmos exp polylog site weight adult 0 20 40 60 80 CPR(%) FOR LeCo optimal recommend Figure 11: Regressor Selection Result. 4.3.3 Full Decompression. The third row in Figure 10 shows the throughput of each compression algorithm for decompressing an entire data set. In general, LeCo-fix is 14% \u221234%7 slower than its fastest competitor FOR because LeCo-fix involves an extra floatingpoint operation upon decoding each record. Delta-var and LeCo-var perform exceptionally well on house_price. The reason is that part of the data set contains sequences of repetitive values. 
LeCo\u2019s Partitioner would detect them and put them into the same segment, making the decompression task trivial for these partitions. 4.3.4 Compression throughput. Table 1 shows the compression throughput for each algorithm weighted averaged across all the twelve data sets with error bars. LeCo-fix has a similar compression speed to the baselines because our linear Regressor has a low computational overhead. Algorithms that adopt variable-length partitioning (i.e., Delta-var and LeCo-var), however, are an order of magnitude slower because the Partitioner needs to perform multiple scans through the data set and invokes the Regressor (or an approximate function) frequently along the way. Such a classic trade-off between compression ratio and throughput is often beneficial to applications that do not allow in-place updates. rANS FOR LeCo -fix LeCo -var LeCo Poly-fix LeCoPoly-var sin 2sin 2sin-freq 0 20 40 60 80 100 CPR(%) 82.2 61.4 54.6 50.5 42.3 41.8 36.7 25.8 21.1 Figure 12: Compression ratio on cosmos. 4.4 Cases for higher-order models Although linear models perform sufficiently well in the above integer benchmark8, there are cases where higher-order models shine. Because our setting is mostly read-only, it is usually worthwhile to spend more computation to compress the data once and then benefit from long-term space and query efficiency. We first verify the effectiveness of our Regressor Selector in the Hyperparameter Advisor (refer to Section 3.2.3). In this experiment, we consider the following six Regressor types: constant (FOR), linear, polynomial up to a degree of three, exponential, and logarithm. We create synthetic data sets (with random noise) for each Regressor type and extract the features introduced in Section 3.2.3 to train the classification model (i.e., CART) offline. We compare the compression ratios obtained by using our recommended Regressor per partition (labeled recommend) to those obtained by FOR, LeCo-fix, and the optimal (i.e., exhaustively search in the candidate Regressor types and pick the one with the best compression ratio). Figure 11 shows the results. Note that none of the eight tested data sets were used for training. We observe that recommend achieves a compression ratio close to the optimal, with up to 64.7% improvement over LeCo-fix (with linear regression only) on data sets that exhibit higher-order patterns. For data sets that are mostly linear (e.g. movieid), the benefit of applying higher-order models is limited, as expected. One can even extend the LeCo framework to leverage domain knowledge easily. For example, the cosmos data set contains a mixture of two signals (i.e., sine function) with random noise. As 7except for house_price where the enhancement of FOR over LeCo-fix is 49% 8Many data sets in the integer benchmark come from the SOSD benchmark [70], which favors linear models. 9 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. shown in Figure 12, if we include a sine term in the Regressor (labeled sin), we are able to achieve a better compression ratio (36.7%) compared to the recommended polynomial model (42.3%). If we include two sine terms (labeled 2sin), we are able to extract an additional 29.7% compression out of the LeCo framework compared to sin. If we further know the approximate frequencies of the two sine terms (labeled 2sin-freq), LeCo produces an even better compression ratio, as presented in Figure 12. 
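The exhaustive "optimal" selection baseline mentioned above can be sketched as a per-partition trial over candidate Regressors. The candidate set below (constant, linear, quadratic, cubic) and the residual-spread cost estimate are simplifications of the six Regressor types the paper considers, so treat this as an illustration rather than the Regressor Selector itself.

```python
import math
import numpy as np

def fit_cost(values, design, param_bits=64):
    """Least-squares fit, then estimate the bits for residuals plus model parameters."""
    coef, *_ = np.linalg.lstsq(design, values, rcond=None)
    residuals = values - design @ coef
    spread = residuals.max() - residuals.min()
    bits_per_delta = math.ceil(math.log2(spread + 1)) if spread > 0 else 0
    return len(values) * bits_per_delta + param_bits * design.shape[1]

def pick_regressor(values):
    """Return the cheapest candidate model for one partition."""
    y = np.asarray(values, dtype=float)
    x = np.arange(len(y), dtype=float)
    ones = np.ones_like(x)
    candidates = {
        "constant (FOR)": np.column_stack([ones]),
        "linear": np.column_stack([ones, x]),
        "quadratic": np.column_stack([ones, x, x ** 2]),
        "cubic": np.column_stack([ones, x, x ** 2, x ** 3]),
    }
    return min(candidates, key=lambda name: fit_cost(y, candidates[name]))
```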
4.5 Compressing Dictionaries Building dictionaries that preserve the key ordering is a common technique to achieve compression and speed up query processing [34, 86, 114]. Reducing the memory footprint of such dictionaries is an important use case of LeCo. In the following experiment, we perform a hash join with the probe side being dictionary encoded. Specifically, we use the medicare dataset as the probe-side column, and we pre-build a hash table of size 84MB in memory, which contains 50% of the unique values (i.e., 50% hash table hit ratio during the join). The probe side first goes through a filter of selectivity of 1% and then probes the hash table for the join. The probe-side values are encoded using an order-preserving dictionary compressed by LeCo (i.e., LeCo-fix), FOR, and Raw (i.e., no compression). We vary the memory budget from 3GB to 500MB and report the throughput (defined as the raw data size of the probe side divided by the query execution time) of executing this query. Figure 14 shows that applying LeCo improves the throughput up to 95.7\u00d7 compared to FOR when the memory budget for this query is limited. This is because LeCo compresses the probe-side dictionary from 2.4GB to 5.5MB (cpr ratio = 0.23%) so that it constantly fits in memory. For comparison, the dictionary size compressed using FOR is still 400MB (cpr ratio = 17%). When the available memory is limited, this larger dictionary causes a significant number of buffer pool misses, thus hurting the overall query performance. 4.6 Multi-Column Benchmark In this section, we evaluate the effectiveness of LeCo on nine multicolumn tabular data sets9. As shown in Figure 13 (bottom right), we compute the \u201csortedness\u201d of a table (in the range [0, 1]) by averaging the sortedness of each column using the portion of inverse pairs [39] as the metric. From Figure 13 (the top row), we observe that LeCo achieves a better compression ratio than FOR in all nine tables. This is because columns in a table are often correlated [58, 65, 95]. Our \u201csortedness\u201d metric indicates that non-primary-key columns have different degrees of correlation with the primary-key (i.e., sorting) column across tables, thus partially inheriting the serial patterns. Tables with high sortedness such as inventory and data_dim are more likely to achieve better compression ratios with the LeCo variants. The bottom left of Figure 13 presents the compression ratios of the TPC-H tables10 with high-cardinality columns only (i.e., NDV > 10% #row). LeCo\u2019s has a more noticeable advantage over FOR on columns that are likely to select FOR as the compression method. 9Elias-Fano is not included as a baseline because most columns are not strictly sorted. 10Due to space limitations, we only present the results of TPC-H. Results of the other six data sets can be found in our technique report at [25]. 4.7 String Benchmark We compare LeCo (i.e., LeCo-fix) against the state-of-the-art lightweight string compression algorithm FSST [36] using three string data sets email, hex and words. FSST adopts a dictionary-based approach by building a fine-grained static symbol table to map a partial string to a 1-byte code. Because each compressed string has a variable length, FSST must store a byte-offset array to support random access. An optimization (not mentioned in the FSST paper) is to delta-encode this offset array to trade its random access speed for a better compression ratio. 
To perform a fair comparison, we tested six different block sizes of the delta encoding: 0 (i.e., no delta compression), 20, 40, 60, 80, and 100. For LeCo, we present two data points with different character-set sizes. Figure 15 shows the random access latencies and compression ratios for different algorithm configurations. Each LeCo point is marked with the base value used to convert strings. We observed that LeCo\u2019s string extension provides a higher random access speed while retaining a competitive compression ratio, compared to FSST on both email and hex data sets. The compression ratio of LeCo, however, is slightly worse than that of FSST on word. This is because dictionary-based algorithms are more suitable for human-readable strings that contain repeating patterns such as common prefixes, roots, and suffixes, while learned compression is better at leveraging serial patterns between values. 4.8 Partitioner Efficiency In this section, we compare LeCo\u2019s default Partitioner (as described in Section 3.2) to state-of-the-art partitioning algorithms, including the PLA algorithm adopted by time-series compression [87], as well as FITing tree [57], Sim-Piece introduced in [72], and the la_vector algorithm proposed in [35]. The angle-based PLA predefines a fixed global prediction error bound (\ud835\udf16) and determines the partition boundaries greedily in one pass. Sim-Piece adopts the angle-based PLA as its partitioner and compactly stores linear models with the same intercept together to reduce the overall space. They sacrifice model parameter precisions to create more segments with the same intercept. On the other hand, la_vector translates each data point \ud835\udc63\ud835\udc56into a vertex \ud835\udc56, where the weight of edge (\ud835\udc56, \ud835\udc57) is defined as the compression ratio of segment [\ud835\udc63\ud835\udc56, \ud835\udc63\ud835\udc57]. The optimal partitioning problem is thus converted into finding the shortest path in the above graph G. la_vector approximates G with G\u2032 with fewer edges and proofs that the best compression ratio achieved on G\u2032 is at most \ud835\udc58\u00b7 \ud835\udc59larger than that on G where \ud835\udc58is a constant and \ud835\udc59is the shortest path length. We integrated PLA, Sim-Piece, and la_vector into the LeCo framework (with the linear Regressor denoted by LeCo-PLA, Sim-Piece and LeCo-la-vec, respectively) and repeated the experiments in Section 4.3 on four representative data sets. As shown in Figure 16, all three candidate methods exhibit significantly worse compression ratios compared to LeCo-var. The globally-fixed error bound in LeCo-PLA fails to adapt to data segments with rapidly changing slopes. We also found that LeCo-PLA is more sensitive to its hyperparameter compared to LeCo-var, as shown in Figure 17 where we sweep the hyperparameters for LeCo-PLA (\ud835\udf16) and LeCo-var (\ud835\udf0f) on the books data set. 
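For reference, the error-bounded PLA segmentation that LeCo-PLA and Sim-Piece build on can be sketched as follows. This is a textbook greedy variant that anchors each segment's line at its first value and maintains a feasible slope interval; the actual angle-based implementations differ in details, so the sketch only shows how a fixed error bound `eps` drives the partitioning.

```python
def pla_segments(values, eps):
    """Greedy one-pass PLA: each segment admits one line within +/- eps of every point."""
    segments = []                              # list of (start_index, length)
    start = 0
    lo, hi = float('-inf'), float('inf')       # feasible slope interval for the segment
    for i in range(1, len(values)):
        d = i - start
        y0 = values[start]
        lo = max(lo, (values[i] - eps - y0) / d)
        hi = min(hi, (values[i] + eps - y0) / d)
        if lo > hi:                            # no single line fits: close the segment
            segments.append((start, i - start))
            start, lo, hi = i, float('-inf'), float('inf')
    segments.append((start, len(values) - start))
    return segments
```

Sweeping `eps` trades the number of segments against the per-segment error, which is the hyperparameter sensitivity examined in Figure 17.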
The model compaction in Sim-Piece doesn\u2019t take effect because, on mostly sorted data sets, the intercept of 10 \fLeCo: Lightweight Compression via Learning Serial Correlations SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile lineitem partsupp orders inventory catalog_sales date_dim geo stock course_info 0 20 40 60 Compression Ratio(%) 5.1% 15.1% 12.3% 25.9% 0.3% 55.2% 11.6% 7.1% 5.6% 5.1% 20.5% 11.8% 31.1% 17.0% 78.9% 28.8% 14.0% 15.7% FOR Delta-fix Delta-var LeCo-fix LeCo-var lineitem (2/16) partsupp (2/5) orders (2/9) inventory (0/4) catalog_sales (13/34) date_dim (5/28) geo (3/17) stock (5/6) course_info (1/6) 0 20 40 60 Compression Ratio(%) 15.58% 31.63% 18.15% 25.85% 1.26% 90.37% 10.75% 7.12% 29.69% 16.08% 32.35% 17.67% 31.14% 12.45% 91.2% 22.77% 14.01% 41.63% name lineitem partsupp orders inventory catalog_sales data_dim geo stock course_info size 725MB 114MB 164MB 226MB 283MB 10MB 1.5GB 9MB 73MB sortedness 0.24 0.32 0.51 0.81 0.07 0.78 0.45 0.98 0.19 # total columns 16 5 9 4 34 28 17 6 6 # numeric columns 8 4 3 4 33 16 4 5 6 Figure 13: Multiple Column \u2013 Compression ratio of five methods on nine tabular data sets. The second row of the result only considers columns with cardinality \u226510%. We report the size in bytes, average sortedness (in the range [0, 1]), total column number, and integer/numerical column number of each table. We mark the enhancement ratio of LeCo variants over FOR above the bars. 3G 2G 1G 800M 700M 600M 500M Memory Limit(B) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Throughput (GB/s) 1.2x 3.2x 42.1x 95.7x 34.1x 4.2x 2.2x LeCo FOR Raw Figure 14: Hash Probe TPS. 0 50 100 150 200 250 300 350 Random Access(ns) 20 0 20 40 60 80 Compression Ratio(%) 78 128 64 43 32 26 FSST_Email FSST_HEX FSST_Words LeCo_Email LeCo_HEX LeCo_Words Figure 15: String Evaluation. normal house price booksale movieid 0 10 20 30 40 50 60 Compression Ratio(%) LeCo-fix LeCo-PLA Leco-la-vec Sim-Piece LeCo-var Figure 16: Partition efficiency. 0.00 0.04 0.08 0.12 0.16 0.20 LeCo-var hyper parameter 0 20 40 60 Compression Ratio(%) LeCo-var LeCo-PLA 3 4 5 6 7 8 9 10 11 12 13 LeCo-PLA hyper parameter Figure 17: Robustness test. each linear model is also increasing. The precision sacrifice in their implementation results in an even worse compression ratio on house_price compared to LeCo-PLA. For LeCo-la-vec, although it finds the shortest path in the approximate \u201ccompression-ratio graph\u201d, it overlooked the length of the shortest path, resulting in an excessive number of models that dominate the compressed size on data sets such as movieid. 5 SYSTEM EVALUATION To show how LeCo can benefit real-world systems, we integrated LeCo into two system applications: (1) a columnar execution engine implemented using Arrow [4] and Parquet [6] and (2) RocksDB [14]. All experiments are conducted on a machine with 4\u00d7 Intel\u00aeXeon\u00ae (Cascade Lake) Platinum 8269CY CPU @ 2.50GHz, 32GB DRAM, and a local NVMe SSD of 447GB with 250k maximum read IOPS. We use Apache Arrow 8.0.0, Parquet version 2.6.0, and RocksDB Release version 6.26.1 in the following experiments. 5.1 Integration to Arrow and Parquet We first integrated LeCo (as well as FOR and Delta for comparison) into Apache Arrow (the most widely-used columnar in-memory format) and Apache Parquet (the most widely-used columnar storage format), and built an execution engine prototype using their C++ libraries to demonstrate how LeCo can benefit query processing. Parquet uses dictionary encoding as the default compression method. 
It falls back to plain encoding if the dictionary grows too large. We refer to this mechanism as Default. In the following experiments, we set Parquet\u2019s row group size to 10M rows and disable block compression unless specified otherwise. The primary component of the Arrow format is the Arrow Array that represents a sequence of values of the same type. Except for basic dictionary encoding, no compression is applied to Arrow arrays to guarantee maximum query-processing performance. We re-implemented the Arrow Array structure using lightweight compression methods (i.e., LeCo, FOR, and Delta) without changing its interface. We use a consistent lightweight-compressed format for the Arrow Array and Parquet Column Chunk so that no additional decoding is required when scanning the data from disk to memory. The Arrow Compute library implements various basic database operators (e.g., Take, Filter, GroupBy) on Arrow arrays as compute functions. Our execution engine uses these compute functions as building blocks. The engine is implemented using late materialization [37] where intermediate results are passed between operators as position bitmaps. We also push down the filters to the storage layer (i.e., Parquet). 5.1.1 Filter-Groupby-Aggregation. We create a query template of a typical filter-groupby-aggregation as follows. Suppose we have 10k sensors recording measurements. The table T has three columns: (1) ts, timestamps (in seconds, almost sorted) extracted from the ml [18] data set, (2) id, 16-bit sensor IDs ranging from 1 to 10k, and (3) val, 64-bit-integer sensor readings. To vary the compressibility of the table, we generate two different data distributions for the id and the val columns: (1) random: both id and val are randomly 11 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. 0.0 2.5 5.0 7.5 10.0 Query Time(s) 3.3x 3.2x 3.2x 3.2x 2.5x 2.1x 2.4x 2.5x 2.5x 2.3x 1.3x 1.3x 1.3x 1.3x 1.2x random 0.001 0.01 0.1 1 10 Selectivity(%) 0.0 2.5 5.0 7.5 10.0 Query Time(s) 5.2x 5.2x 5.2x 5.0x 3.5x 2.7x 3.1x 3.2x 3.2x 2.6x 1.7x 1.7x 1.7x 1.7x 1.4x correlated Default_groupby_CPU Default_filter_CPU Default_IO Delta_groupby_CPU Delta_filter_CPU Delta_IO FOR_groupby_CPU FOR_filter_CPU FOR_IO LeCo_groupby_CPU LeCo_filter_CPU LeCo_IO Figure 18: Filter Groupby Aggregation generated and are difficult to compress no matter which algorithm, and (2) correlated: ids are clustered in groups of 100, and vals are monotonically increasing across groups (but random within a group). There are serial patterns in this setting for lightweight compression algorithms to leverage. We construct the following query that outputs the average reading for each sensor within a given time range per day: SELECT AVG(val) FROM T WHERE ts\\_begin < ts \\% val\\_2 < ts\\_end GROUP BY id. We adjust the time range (i.e., ts_end ts_begin) to control the query\u2019s selectivity. When executing this query, our execution engine first pushes down the filter predicate to Parquet, which outputs a bitmap representing the filtering results. The engine then scans the id and the val column from Parquet into Arrow arrays and performs the groupby-aggregation. Both groupby and aggregation only decode entries that are still valid according to the filter-bitmap, which involves random accesses to the corresponding Arrow arrays. We generated four Parquet files with Default, Delta, FOR, and LeCo as the encoding algorithms (with a partition size of 10k entries). 
In the case of random distribution, the resulting file sizes are 3.8GB, 1.3GB, 1.5GB, and 1.4GB, respectively. For the correlated distribution, the corresponding file sizes are 3.8GB, 706MB, 1.2GB, and 785MB (with better compression ratios). We execute the above query template and repeat each query instance three times with its average execution time reported. As shown in Figure 18, all three lightweight compression algorithms outperform the Default because of the significant I/O savings proportional to the file size reduction. Compared to Delta, LeCo is much more CPU-efficient because Delta requires to decode the entire partition to random-access particular entries during the groupby-aggregation. Compared to FOR, LeCo mainly gains its advantage through the I/O reduction due to a better compression ratio. This I/O advantage becomes larger with a more compressible data set (i.e., correlated). Interestingly, LeCo is up to 10.5\u00d7 faster than FOR when performing the filter operation. Suppose that the model of a partition is \ud835\udf030 + \ud835\udf031 \u00b7 \ud835\udc56, and the bit-length of the delta array is \ud835\udc4f. For a less-than predicate \ud835\udc63< \ud835\udefc, for example, once LeCo decodes the partition up to position \ud835\udc58, where \ud835\udf030 + \ud835\udf031 \u00b7 \ud835\udc58\u22122\ud835\udc4f\u22121 > \ud835\udefc(assume \ud835\udf031 \u22650), we can safely skip the values in the rest of the sequence because they are guaranteed to be out of range. FOR cannot perform such a computation pruning because the ts column is not strictly sorted. 5.1.2 Bitmap Aggregation. In this experiment, we zoom in on the critical bitmap aggregation operation of the above end-to-end query and further verify LeCo\u2019s performance and space benefits on four different data sets introduced in Section 4.1: normal, poisson, 0.001 0.01 0.1 1 10 0.0 0.5 1.0 Query Time(s) normal Default_CPU Default_IO Delta_CPU Delta_IO FOR_CPU FOR_IO LeCo_CPU LeCo_IO 0.001 0.01 0.1 1 10 0.0 0.5 1.0 booksale 0.001 0.01 0.1 1 10 selectivity(%) 0.0 0.5 1.0 1.5 2.0 Query Time(s) poisson 0.001 0.01 0.1 1 10 selectivity(%) 0.0 0.5 1.0 1.5 2.0 ml Figure 19: Bitmap Aggregation. normal books poisson ml 0.0 0.5 1.0 1.5 File Size (GB) 1.3x 2.7x 5.8x 1.4x 1.0x 1.1x 1.5x 1.0x 1.1x 7.6x 1.1x 1.2x Default Default-zstd FOR FOR-zstd LeCo LeCo-zstd Figure 20: Parquet With zstd Compression \u2013 Numbers on bars indicating additional improvement introduced by zstd. booksale, and ml11. For each data set, we create four Parquet files with different lightweight compression algorithms (i.e., Default, Delta, FOR, and LeCo) enabled as above. The bitmaps used in the experiments include ten set-bit clusters following a Zipf-like distribution with a varying ratio of \u201cones\u201d (to represent different filter selectivities). Data is scanned directly into Arrow arrays in a rowgroup granularity, where a row-group is skipped if the bits in the corresponding area in the bitmap are all zeros. We then feed the arrays and the bitmap to the Arrow Compute function to perform the summation. As shown in Figure 19, LeCo consistently outperforms Default (by up to 11.8\u00d7), Delta (by up to 3.9\u00d7), and FOR (by up to 5.0\u00d7). LeCo\u2019s speedup comes from both the I/O reduction (due to a better compression ratio) and the CPU saving (due to fast random access and better caching). Moreover, we found that LeCo consumes less memory during the execution. 
The peak memory usage (for processing a Parquet row group) of LeCo is 60.5%, 35.3%, and 10.0% less compared to Default, FOR, and Delta, respectively on average. This is much preferred for systems with constrained memory budgets. 5.1.3 Enabling Block Compression. People often enable block compression on columnar storage formats such as Parquet and ORC [5] to further reduce the storage overhead. We repeat the Parquet loading phase of the above experiments with zstd [19] enabled to show how block compression algorithms affect the final file sizes. As shown in Figure 20, the additional improvement introduced by zstd is marked above each bar. Applying zstd on top of the lightweight encoding schemes in Parquet can further reduce the file sizes. The relative improvement of LeCo + zstd over LeCo is higher than that in the case of FOR. This shows that LeCo\u2019s ability to remove serial redundancy is complementary to some degree to the general-purpose block compression algorithms. 11we scale ml to 200M rows while preserving its value distribution. 12 \fLeCo: Lightweight Compression via Learning Serial Correlations SIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Default FOR LeCo 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Query Time(s) CPU_w/o_zstd IO_w/o_zstd CPU_w/_zstd IO_w/_zstd Figure 21: Time breakdown of zstd on Parquet. 0.10 0.11 0.12 0.13 0.14 0.15 2 3 4 6 10 0.00 0.01 0.02 Block Cache size (GB) Million op/s Baseline_1 Baseline_16 Baseline_128 LeCo Figure 22: RocksDB Seek Query Throughput. The decompression overhead of zstd, however, can be significant. We perform the bitmap selection experiment with zstd turned on for Parquet. Figure 21 shows an example result (ml data set, selectivity = 0.01). We observe that the I/O savings from zstd are outweighed by its CPU overhead, leading to an increase in the overall query time. The result confirms our motivation in Section 2 that heavyweight compression algorithms are likely to cause CPU bottlenecks in modern data processing systems. 5.2 RocksDB Index Block Compression RocksDB is a key-value store based on log-structured merge trees. Each level consists of a sorted run of key-value pairs stored in a sequence of SSTables. Each SSTable is divided into multiple data blocks (4KB by default). RocksDB builds an index on top of the data blocks. For each pair of adjacent data blocks \ud835\udc35\ud835\udc56\u22121 and \ud835\udc35\ud835\udc56, an index entry is created where the key is the shortest string greater than the last key in \ud835\udc35\ud835\udc56\u22121 and smaller than the first key in \ud835\udc35\ud835\udc56. The value of the index entry is a \u201cblock handle\u201d that records the byte offset and the size of \ud835\udc35\ud835\udc56. To locate a particular key \ud835\udc58, RocksDB performs a binary search in the index block and obtains the entry with the smallest key \u2265\ud835\udc58. It then reads the associated \u201cblock handle\u201d and fetches the corresponding data block that (potentially) contains \ud835\udc58. RocksDB offers a native compression scheme for the index blocks. It includes a hyper-parameter called \u201crestart interval\u201d (RI) to make trade-offs between the lookup performance and the index size. The value of RI determines the size of a compression unit in an index block. Within each compression unit, RocksDB applies a variation of Delta Encoding to both the keys and values. For the index keys, suppose \ud835\udc58\ud835\udc56\u22121 proceeds \ud835\udc58\ud835\udc56in the compressed sequence. 
Then \ud835\udc58\ud835\udc56 is encoded as (\ud835\udc5a\ud835\udc56,\ud835\udc58\u2032 \ud835\udc56) where \ud835\udc5a\ud835\udc56denotes the length of the shared prefix between \ud835\udc58\ud835\udc56\u22121 and \ud835\udc58\ud835\udc56, and \ud835\udc58\u2032 \ud835\udc56is the remaining suffix. For the \u201cblock handles\u201d, RocksDB simply stores the offset of each block in a delta-encoded sequence. We use LeCo to compress the keys and values separately in a RocksDB index block to shrink its size and to improve the lookup performance at the same time. We adopt LeCo-fix for both key and value sequences. Because all internal keys in RocksDB are strings, we use LeCo with the string extension to compress the keys. We compare RocksDB with LeCo against12 three baseline configurations: Baseline_1, Baseline_16, and Baseline_128. The number at the end of each label denotes the value of the RI parameter (1 is RocksDB\u2019s default). We configured RocksDB according to the settings in its Performance Benchmark [15]13. We turned on direct I/O to bypass the large OS page cache. 12The fixed partition size are set to 64 entries for LeCo. 13block_size = 4096B; pin_l0_filter_and_index_blocks_in _cache is enabled. In each experiment, we first load the RocksDB with 900 million record generated from the above RocksDB Performance Benchmark. Each record has a 20-byte key and a 400-byte value. The resulting RocksDB is around 110 GB. LeCo, Baseline_1, Baseline_16, and Baseline_128 achieve a compression ratio of 28.1%, 71.3%, 18.9% and 15.9%, respectively on the index blocks in RocksDB. We then perform 200M non-empty Seek queries using 64 threads. The query keys are generated using YCSB [43] with a skewed configuration where 80% of the queries access 20% of the total keys. We repeat each experiment three times and report the average measurement. Figure 22 shows the system throughputs for LeCo, and the baselines with a varying block cache size. RocksDB with LeCo consistently outperforms the three baseline configurations by up to 16% compared to the second-best configuration. The reasons are twofold. First, compared to Baseline_1 where no compression for the index blocks are carried out (each compression unit only contains one entry), LeCo produces smaller index blocks so that more data blocks can fit in the block cache to save I/Os. Such a performance improvement is more recognizable with a smaller block cache. Second, compared to Baseline_16 and Baseline_128 where the index blocks are compressed using Delta Encoding. Although LeCo no longer exhibits an index-size advantage over these baselines, it saves a significant amount of computations. Compared to Baseline_128 which need to decompress the entire 128-entry unit before it accesses a single entry, LeCo only requires two memory probes to perform a random access in the index block. To sum up, applying LeCo speeds up binary search in the index blocks. Such a small change improved the performance of a complex system (RocksDB) noticeably. We believe that other systems with similar \u201czone-map\u201d structures can benefit from LeCo as well. 6 RELATED WORK Many prior compression algorithms leverage repetitions in a data sequence. Null suppression omits the leading zeros in the bit representation of an integer and records the byte length of each value [1, 29, 92, 96, 100]. Dictionary [32, 34, 36, 83, 86, 94, 114] and entropy-based compression algorithms [63, 106] build a bijective map between the original values and the code words. 
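Returning to the index-block layout described above: a minimal sketch of the per-restart-interval prefix encoding of keys, where each key after the first is stored as a (shared-prefix length, remaining suffix) pair. The function names are ours, and the sketch ignores RocksDB's block-handle (value) encoding.

```python
def encode_unit(keys):
    """Encode one compression unit: first key in full, then (prefix_len, suffix) pairs."""
    encoded = [(0, keys[0])]
    for prev, cur in zip(keys, keys[1:]):
        m = 0
        while m < min(len(prev), len(cur)) and prev[m] == cur[m]:
            m += 1
        encoded.append((m, cur[m:]))
    return encoded

def decode_unit(encoded):
    """Keys must be rebuilt sequentially, which is why random access slows down
    as the restart interval grows."""
    keys, prev = [], ""
    for prefix_len, suffix in encoded:
        prev = prev[:prefix_len] + suffix
        keys.append(prev)
    return keys
```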
Block compression algorithms such as LZ77 [116], Gzip [7], Snappy [8], LZ4 [11], and zstd [19] achieve compression by replacing repeated bit patterns with shorter dictionary codes. These approaches, however, miss the opportunity to exploit the serial correlation between values to achieve a compressed size beyond Shannon\u2019s Entropy. A pioneer work by Boffa et al. [35] proposed to use a similar linear model as in the PGM-Index [55] with a customized partitioning algorithm (i.e., la_vector) to compress a specific data structure called the rank&select dictionaries. Their approach represents a specific design point in the LeCo framework that is much more general and extensible in model types and partitioning algorithms. Also, LeCo\u2019s default variable-length partitioning algorithm is shown to be more efficient than la_vector for compressing columnar data. Semantic compression [58, 65, 66] aims to compress tabular data by exploiting correlations between columns using complex models like Bayesian networks. LFR[111] and DFR[110] use linear model or Delta-like model to compress data without partitioning. Because 13 \fSIGMOD\u201924, June 11\u201316, 2024, Santiago, Chile Liu et al. their model parameters vary at each data point, they do not support quick random access. Data partitioning plays an essential role in achieving a good compression ratio for various algorithms. Several prior work [88, 91] targeting inverted indexes proposed partitioning algorithms for specific compression schemes like Elias-Fano [103] and VByte [102, 105]. The partitioning algorithms introduced in Section 3.2 are applicable to an arbitrary linear combination of regression models. In terms of storage format, FastPFOR [117] and NewPFD [112] stores outlier values separately in a different format to improve the overall storage and query efficiency. Time-series/IoT data compression field adopts a similar idea with LeCo of approximating data distribution with models, but they target keeping the prediction error within a predetermined threshold and achieve lossy compression. Their optimization goal is to minimize the total space of model parameters. Partitioning algorithms for linear models [51, 87, 108] and constant value models [77] are designed to minimize the segment number. Sim-Piece[72] introduces a more compact format to keep the output models. Eichinger et al. [50] consider utilizing higher order models but require additional computation effort in the approximation process. Codec selection is critical in improving data compression performances. A common practice is to define a feature set and use machine learning classifiers for selection. Abadi et al. [29] empirically analyzed the performance of different codecs and manually built a decision tree for selection. While the features introduced by CodecDB [68] overlook the chance to utilize distribution patterns, in contrast to our Regressor Selector. Both learned indexes and learned compression use regression to model data distributions. RMI [73] and RS [71] apply hierarchical machine learning models to fit the CDFs, while PGM-Index [55], FITing-Tree [57], and CARMI [115] put more effort into the partitioning strategies to reduce model prediction errors. ALEX [47] and Finedex [82] proposed techniques such as a gapped array and non-blocking retraining to improve the indexes\u2019 update efficiency. 
Previous work [28, 118] have shown that heavyweight compression algorithms [7, 8, 63] designed for disk-oriented systems could incur notable computational overhead to the overall system performance. Algorithms such as FSST [36] and PIDS [67], therefore, emphasize low CPU usage besides a competitive compression ratio. Other related work reduces the computational overhead by enabling direct query execution on compressed formats [29, 45, 68], including filter and aggregation/join pushdowns [41, 46, 54, 60, 75, 79, 84]. 7" + }, + { + "url": "http://arxiv.org/abs/2304.05622v4", + "title": "SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM", + "abstract": "The Segment Anything Model (SAM) is a new image segmentation tool trained\nwith the largest available segmentation dataset. The model has demonstrated\nthat, with prompts, it can create high-quality masks for general images.\nHowever, the performance of the model on medical images requires further\nvalidation. To assist with the development, assessment, and application of SAM\non medical images, we introduce Segment Any Medical Model (SAMM), an extension\nof SAM on 3D Slicer - an image processing and visualization software\nextensively used by the medical imaging community. This open-source extension\nto 3D Slicer and its demonstrations are posted on GitHub\n(https://github.com/bingogome/samm). SAMM achieves 0.6-second latency of a\ncomplete cycle and can infer image masks in nearly real-time.", + "authors": "Yihao Liu, Jiaming Zhang, Zhangcong She, Amir Kheradmand, Mehran Armand", + "published": "2023-04-12", + "updated": "2024-02-03", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction The advent of foundation models has led to significant progress in image analysis with potential for future advancements. SAM [Kirillov et al., 2023] is a revolutionary foundation model for image segmentation and has already shown the capability of handling diverse segmentation tasks. SAM especially prevails in its generalization capability compared with the existing fine-tuned models that are trained on specific domains. Thus, SAM holds significant promise for application in medical image segmentation: The advantage lies in adapting it to address the inherent inter-subject variations and low signal-to-noise ratio commonly found in medical images. Medical image segmentation is the task to separate different structures within an image. The segmentation results can then be used to detect the region of interest or reconstruct 3-dimensional anatomical models [Sinha and Dolz, 2021]. The existing AI-based segmentation methods, however, do not fully bridge the domain gap among different anatomies and different imaging techniques [Wang et al., 2020]. This introduces challenges for training, as AI systems need to be trained on anatomyand task-specific datasets. Therefore, a universal tool would be valuable if it can be applied across all image modalities and various anatomical structures. However, the universal tool must also overcome a series of critical challenges including (but not limited to) data privacy, ethics, expenses, scalability, data integrity, and validation [Gao et al., 2023]. In contrast, pretrained SAM can perform a new segmentation task without being fine-tuned on \u2217Equal Contribution arXiv:2304.05622v4 [eess.IV] 3 Feb 2024 \fSAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM Figure 1: Overall architecture of the integration of 3D Slicer and SAM. 
Images from different modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound (US), are contained in a scalar volume node. The node (vtkMRMLScalarVolumeNode) is a class in the Visualization Toolkit (VTK) that represents a volume of the scalar data within the Medical Reality Markup Language (MRML) framework. task-specific datasets [Kirillov et al., 2023]. This feature makes SAM a promising segmentation tool for various image modalities with less effort. Despite the extensive use of AI in medical imaging, the application of foundation models, such as SAM, remains largely unexplored. The application of these models would require resolving differences in the structure between medical images and non-medical images. 3D Slicer Fedorov et al. [2012], as a widely-used open-source software, provides off-the-shelf functionalities to manipulate 2D and 3D medical images, using consistent and user-friendly interfaces and visualization tools. Therefore, we introduce the Segment Any Medical Model (SAMM) that incorporates 3D Slicer and SAM, to assist the investigation of applying general foundation models to medical images. 2 Methodology 2.1 Overall Architecture Figure 1 presents the overall architecture of SAMM, which consists of a SAM Server and an interactive prompt plugin for 3D Slicer (Slicer-IPP). SAM Server first loads the pretrained SAM. It runs in parallel with Slicer-IPP and keeps monitoring the requests sent from 3D Slicer. On the other side, Slicer-IPP handles all the image slices with the built-in interfaces of 3D Slicer. Then, it processes all the slices and send them to SAM Server to compute their embeddings. Subsequently, the embeddings of the slices are stored in a format of binary files in Random Access Memory (RAM) for efficient retrieval at the inference stage. Once the embeddings of all slices are ready, the user may start the segmentation using prompts. These prompts are the fiducial points placed on the 2D slice to indicate adding or removing the region. The prompt points are transmitted to the prompt encoder of SAM, and the inference stage starts synchronously. Here the prompt transmission and the SAM inference are synchronized. In such way, the mask generation responds to user action in real-time. Note that the image encoders run once per volume, rather than per prompt, which allows the users to segment the same image multiple times with different prompts in real-time. Given that the initialization of image embedding occurs in advance, the subsequent mask generation process can be performed with small latency (Section 3). The Slicer-IPP handles the transformations from the volume coordinates to 2D slice numbers and pixel coordinates. It can work out with discrepancies between the RAS (right, anterior, superior), IJK (slice ID in different views), and 2D image pixel coordinate systems by providing proper conversion functionalities. For instance, at an inference request, Slicer-IPP converts the coordinates of RAS to IJK to identify the image IDs. Slice-IPP then transmits the IDs along with the prompts to the SAM Server. With the coordinates converted, the masks of a 2D slice can be generated by SAM Server in the inference step. The coordinates of the generated masks are transformed back from the pixel to the RAS system, and are sent to Slicer-IPP for the visualization of the segmentation results. 2.2 Slicer-IPP The Slicer-IPP is composed of a data communication module, a prompt labeling module, and a visualization module. 
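The RAS-to-IJK conversion performed by the Slicer-IPP when a prompt is placed can be illustrated with the short snippet below, intended to run inside 3D Slicer's Python environment. `volume_node` is assumed to be the loaded vtkMRMLScalarVolumeNode; the actual plugin performs additional bookkeeping (view orientation, slice IDs) that is omitted here.

```python
import vtk

def ras_to_ijk(volume_node, ras_point):
    """Convert an (R, A, S) world coordinate to integer (I, J, K) voxel indices."""
    ras_to_ijk = vtk.vtkMatrix4x4()
    volume_node.GetRASToIJKMatrix(ras_to_ijk)
    # Homogeneous transform: append 1.0 for the multiplication, drop it afterwards.
    i, j, k, _ = ras_to_ijk.MultiplyPoint(list(ras_point) + [1.0])
    return int(round(i)), int(round(j)), int(round(k))
```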
The data communication module accepts any volumetric image format and packs the slices as image files used by SAM. The Slicer-IPP and SAM Server are designed to run five parallel tasks, denoted \u201csend inference request\u201d (SND_INF), \u201creceive inference request\u201d (RCV_INF), \u201ccomplete SAM inference\u201d (CPL_INF), \u201creceive mask transmission\u201d (RCV_MSK), and \u201capply mask\u201d (APL_MSK). The affiliation of the tasks is shown in Figure 2: the Slicer-IPP hosts SND_INF, RCV_MSK, and APL_MSK, while the server end hosts RCV_INF and CPL_INF. Each task is executed synchronously as an independent loop. All tasks run in \u201cbest-effort\u201d mode (as opposed to \u201cguaranteed delivery\u201d mode), since real-time responsiveness is the priority. Figure 2: The affiliation of the five tasks, namely \u201csend inference request\u201d (SND_INF), \u201creceive inference request\u201d (RCV_INF), \u201ccomplete SAM inference\u201d (CPL_INF), \u201creceive mask transmission\u201d (RCV_MSK), and \u201capply mask\u201d (APL_MSK). Slicer-IPP and SAM Server use the ZMQ messaging library and Numpy memory mapping to enable real-time communication. Since 3D Slicer is single-threaded, each loop in the Slicer-IPP is given a 60 ms gap to process other tasks. A complete inference cycle starts with SND_INF and ends with APL_MSK. The latency of one inference cycle is discussed in Section 3. Figure 3: Example results for different image formats (CT, MRI, and US). The prompts with green points indicate regions to be selected, whereas the red points indicate regions to be removed. To facilitate communication between 3D Slicer and external tools or services, the platform uses the ZeroMQ (ZMQ) [Hintjens, 2013] messaging library and Numpy [Harris et al., 2020] memory mapping. ZMQ is a lightweight messaging library that enables high-performance, asynchronous communication between applications. In SAMM, ZMQ and Numpy are employed to transfer images, prompts, and requests between the Slicer-IPP and the SAM Server; with these two packages, the segmentation task runs in real time. This integration enables researchers to take advantage of SAM\u2019s cutting-edge segmentation capabilities within the familiar 3D Slicer platform, expanding the range of tasks that can be performed on images. The use of ZMQ and Numpy memory mapping also provides the flexibility to customize the communication protocol to the user\u2019s specific needs, further enhancing the versatility of the 3D Slicer platform. Figure 4: (a) is the event plot of the five tasks (SND_INF, RCV_INF, CPL_INF, RCV_MSK, and APL_MSK). Each row represents one task and each bar represents one event. A 16-second period (covering 60 cycles) from the initialization phase of SAMM (when all tasks are first launched) to the stable phase (when all tasks run at a steady frequency) is shown. Events executed within the same segmentation cycle are marked with the same number in the bottom two panels. The time intervals of the five tasks between events 1 and 2 are marked in (b), while those between events 59 and 60 are marked in (c). The time intervals between two cycles are shown in the green boxes, whereas the latency of one complete cycle is highlighted in the purple box.
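To make the task pipeline concrete, the sketch below outlines one SND_INF-to-APL_MSK cycle on the Slicer-IPP side, assuming a ZMQ request/reply socket and a Numpy memory-mapped mask buffer; the port, message fields, and file path are illustrative assumptions, not the actual SAMM protocol.

```python
# Hedged sketch of one request/response cycle between Slicer-IPP and the SAM
# Server over ZMQ, with the mask shared through a Numpy memory map. The port,
# field names, and mask file layout are assumptions made for illustration.
import json
import numpy as np
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5555")       # SAM Server endpoint (assumed)

request = {
    "slice_id": 120,                         # IJK slice index of the view
    "positive_points": [[88, 140]],          # prompts that add a region
    "negative_points": [[60, 52]],           # prompts that remove a region
}
socket.send_string(json.dumps(request))      # SND_INF
reply = json.loads(socket.recv_string())     # returns once CPL_INF finishes

# RCV_MSK: read the mask from a shared memory-mapped buffer instead of
# pushing the full array through the message itself.
mask = np.memmap(reply["mask_path"], dtype=np.uint8, mode="r",
                 shape=tuple(reply["mask_shape"]))
print("mask covers", int(mask.sum()), "pixels")   # APL_MSK would overlay it
```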
3 Experiments and Results We evaluated the integration with SA-1B pre-trained model 1 for different formats including CT, MRI, and US, using images with different anatomies. Figure 3 shows the manually placed prompts and the segmentation masks generated from the sample datasets of 3D Slicer. The Slicer-IPP provides a built-in markup tool for placing the prompt points. The segmentation mask is overlaid on the original image once the CPL_ INF task ends. All cases are tested in the same environment (Ubuntu 20.04, AMD Ryzen 9 3900X, Nvidia GeForce RTX 3090). The result demonstrates that SAM, although not specifically trained on medical image datasets, can generate masks for zero-shot segmentation tasks across different image formats. Here we use the term \u201cevent\u201d to represent a task is completed. A complete cycle of the image segmentation consists of five events (Figure 4). For each time instance, the task owner logs the timestamp to a Python data storage object. The log data, generated in chronological order, is the output to a Python pickle file once 1000 segmentation cycles are completed. In Figure 4, only 60 complete cycles are shown. We evaluated the performance of the system using the end-to-end latency, defined as the time between a request of inference and the application of the inferred mask for the same image slice. In addition, the time intervals between every two consecutive tasks were measured. 1SA-1B is a model for SAM that is trained on 11 million diverse and high-resolution images and 1.1 billion high-quality segmentation masks [Kirillov et al., 2023] 4 \fSAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM In order to facilitate real-time interactive segmentation, the Slicer-IPP first computes the image embeddings and subsequently enables users to place prompt points. Therefore, the total execution time is divided into two components. The first component is the total time to compute the embeddings of all slices, and the second one is the latency per image in its runtime. In the test environment, the embedding computation for a 352 \u00d7 352 \u00d7 240 MRI image takes 162.9 seconds. For the same test dataset, the latency of an end-to-end process within the same segmentation cycle, including timestamp logging, is 0.612 seconds (measured at event 60 in the \u201cStable Phase\u201d, shown in Figure 4). 4 Discussion and" + }, + { + "url": "http://arxiv.org/abs/2212.06060v2", + "title": "On Finite Difference Jacobian Computation in Deformable Image Registration", + "abstract": "Producing spatial transformations that are diffeomorphic is a key goal in\ndeformable image registration. As a diffeomorphic transformation should have\npositive Jacobian determinant |J| everywhere, the number of voxels with |J|<0\nhas been used to test for diffeomorphism and also to measure the irregularity\nof the transformation. For digital transformations, |J| is commonly\napproximated using a central difference, but this strategy can yield positive\n|J|'s for transformations that are clearly not diffeomorphic -- even at the\nvoxel resolution level. To show this, we first investigate the geometric\nmeaning of different finite difference approximations of |J|. We show that to\ndetermine if a deformation is diffeomorphic for digital images, the use of any\nindividual finite difference approximation of |J| is insufficient. 
We further\ndemonstrate that for a 2D transformation, four unique finite difference\napproximations of |J|'s must be positive to ensure that the entire domain is\ninvertible and free of folding at the pixel level. For a 3D transformation, ten\nunique finite differences approximations of |J|'s are required to be positive.\nOur proposed digital diffeomorphism criteria solves several errors inherent in\nthe central difference approximation of |J| and accurately detects\nnon-diffeomorphic digital transformations. The source code of this work is\navailable at https://github.com/yihao6/digital_diffeomorphism.", + "authors": "Yihao Liu, Junyu Chen, Shuwen Wei, Aaron Carass, Jerry Prince", + "published": "2022-12-12", + "updated": "2023-05-28", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV" + ], + "main_content": "Introduction The goal of deformable image registration is to establish a nonlinear spatial transformation that aligns two images. For many tasks, it is reasonable to assume that the anatomy in the two images share the same topology. Therefore, for deformable registration algorithms, the ability to produce a topology preserving transformation is preferred. Following the work of Christensen et al. (Christensen, 1994), many registration algorithms either penalize (Avants et al., 2008; Chen et al., 2017; Mok and Chung, 2020) or constrain (Beg et al., 2005; Chen et al., 2015; Dalca et al., 2018) their output transformations to be diffeomorphic and preserve topology. In the 1 arXiv:2212.06060v2 [eess.IV] 28 May 2023 \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration continuous domain, a diffeomorphic transformation is a smooth and invertible mapping with a smooth inverse that is guaranteed to maintain the topology of the anatomy being transformed. A diffeomorphic transformation should have positive Jacobian determinant |J| everywhere1, and testing if a transformation is diffeomorphic involves local computation of |J|. When a transformation is not diffeomorphic, it is common to use the number of voxels with negative |J| (Balakrishnan et al., 2019; Chen et al., 2022, 2021; Liu et al., 2022) or the standard deviation of the logarithmic transformed |J| (Hering et al., 2022; Kabus et al., 2009; Johnson and Christensen, 2002) to measure the irregularity of the transformation. Given a digital transformation that is defined on a regular grid, a widely accepted practice for computing the Jacobian is to use finite difference approximations of spatial derivatives. There are three standard methods for computing finite differences along each axis. We denote the forward, backward, and central differences that operate along the x axis (and similarly for the y and z axes) as D+x, D\u2212x, and D0x, respectively. To approximate |J| in 2D or 3D transformations, either the same or a different type of finite difference can be used for different axes. We denote the central difference approximation of |J| in 2D and 3D as D0xD0y|J| and D0xD0yD0z|J|, respectively. Despite its current popularity, the central difference approximation of the Jacobian does not always work as expected in the evaluation of the diffeomorphism property of spatial transformations. For example, Fig. 1(a) shows a transformation around center point p that is not diffeomorphic despite the fact that D0xD0y|J| = 1. In fact, the transformation at p has no effect on the computation of D0xD0y|J|(p), even if p moves outside the field of view. 
We call this the checkerboard problem because the central difference based Jacobian computations on the transformations of the \u201cblack\u201d and \u201cwhite\u201d pixels (of a checkerboard) are independent of each other. A possible solution to this problem is to use D\u2212xD\u2212y|J| or D+xD+y|J| instead. However, Fig. 1(b) shows an example in which D\u2212xD\u2212y|J| and D+xD+y|J| have opposite signs, leading to contradictory conclusions. These examples illustrate the problem with naive application of finite differences in this application. (We do not consider the case of all negative |J|, which would produce a reflection of the entire image.) Figure 1 (a) is an illustration of the checkerboard problem for D0xD0y|J|. (b) is a transformation that illustrates the inconsistency issue between D\u2212xD\u2212y|J| and D+xD+y|J|. In both figures, the center point p is transformed to pt. The transformation around the center point is visualized as displacements (shown as dotted arrows pointing toward solid dots), and the displacement of the center point is highlighted in red. We seek a better approach that still involves finite differences but avoids these types of contradictory situations. In this work, we first investigate the geometric meaning of finite difference approximations of |J|. We show that when using forward or backward differences, the sign of |J|(p) determines whether the underlying transformation T is invertible and orientation-preserving in a triangle (2D) or tetrahedron (3D) adjacent to p. Reversing the orientation indicates folding in space. We formally define digital transformations that are globally invertible and free of folding as digital diffeomorphisms. To determine whether a transformation is a digital diffeomorphism, at each point it is necessary to consider four finite difference approximations of |J| for 2D transformations and ten for 3D transformations. We also demonstrate that, because of the checkerboard problem and other errors inherent in the central difference based |J|, the number of non-diffeomorphic voxels it reports is always less than or equal to the actual number. Finally, we propose to use non-diffeomorphic area (2D) and volume (3D) as more meaningful measurements of the irregularity of computed transformations. Figure 2 (a) shows the notation for p and its 4-connected neighbors. (b) shows pt, the transformed version of p, as well as the transformed positions of its 4-connected neighbors. (c) shows the triangular regions and their corresponding forward and backward difference based |J|\u2019s. 2 Methodology 2.1 Backward Difference Based Jacobian Determinant in 2D Consider the standard Euclidean space R2 that follows the right-hand rule. Let T be a digital transformation of R2 that is defined for every grid point p. When using backward differences along both the x and y axes to approximate |J| at point p, we have the formulation D_{-x}D_{-y}|J|(p) = \begin{vmatrix} D_{-x}T_x(p) & D_{-y}T_x(p) \\ D_{-x}T_y(p) & D_{-y}T_y(p) \end{vmatrix}, (1) where Tx(p) and Ty(p) are the x and y components of T(p). We denote the 4-connected neighbors of p as p\u2212x, p+x, p\u2212y, and p+y, as shown in Fig. 2(a).
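Eq. (1) is just a 2x2 determinant of backward differences and can be evaluated with a few lines of NumPy; the sketch below assumes the transformation is stored as an (H, W, 2) array of transformed coordinates, with rows advancing along y and columns along x (a storage convention chosen for illustration, not prescribed by the paper).

```python
# Sketch of Eq. (1): backward-difference approximation of |J| on a 2D grid.
# T[i, j] holds the transformed (x, y) position of the grid point in row i,
# column j.
import numpy as np

def backward_jacobian_det(T):
    """D_-x D_-y |J|(p) for every grid point that has both backward neighbors."""
    d_x = T[1:, 1:] - T[1:, :-1]   # T(p) - T(p - e_x)
    d_y = T[1:, 1:] - T[:-1, 1:]   # T(p) - T(p - e_y)
    # 2x2 determinant with d_x and d_y as columns (the triple product of Eq. (2)).
    return d_x[..., 0] * d_y[..., 1] - d_x[..., 1] * d_y[..., 0]

# Sanity check on the identity transformation: |J| should be 1 everywhere.
H, W = 64, 64
xs, ys = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
identity = np.stack([xs, ys], axis=-1)
assert np.allclose(backward_jacobian_det(identity), 1.0)
```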
Their transformed locations are denoted with subscripts t, as shown in Fig. 2(b). For example, p+x t := T (p+x). We denote the triangular region defined by the vectors pp\u2212x and pp\u2212y as \u25b3pp\u2212xp\u2212y, and we assume that the 2D transformation T is linearly interpolated on \u25b3pp\u2212xp\u2212y. Proposition 1 A 2D transformation T is invertible for \u25b3pp\u2212xp\u2212y if and only if D\u2212xD\u2212y |J| (p) for T is nonzero. Proof Since T is linearly interpolated on \u25b3pp\u2212xp\u2212y, T is linear for \u25b3pp\u2212xp\u2212y and can be written as a 2 \u00d7 2 matrix with pt p\u2212x t and pt p\u2212y t as its columns. Thus, T is invertible if and only if pt p\u2212x t and pt p\u2212y t are linearly independent (not colinear). D\u2212xD\u2212y|J|(p) can be written as a triple product: D\u2212xD\u2212y|J|(p) = (pt p\u2212x t \u00d7 pt p\u2212y t ) \u00b7 n, (2) where n is the unit vector perpendicular to vectors pt p\u2212x t and pt p\u2212y t and is positively oriented following the right-hand rule. Therefore, T is invertible if and only if D\u2212xD\u2212y|J|(p) \u0338= 0. Definition 1 A 2D transformation T is said to cause folding of \u25b3pp\u2212xp\u2212y if the orientation of \u25b3pp\u2212xp\u2212y is reversed by T . Proposition 2 A 2D transformation T is free of folding for \u25b3pp\u2212xp\u2212y if and only if D\u2212xD\u2212y |J| (p) for T is positive. Proof By the right-hand rule, \u25b3pp\u2212xp\u2212y is positively oriented. (\u21d2) When T is free of folding, the orientation of \u25b3pp\u2212xp\u2212y is preserved by T . Because of linear interpolation, \u25b3pp\u2212xp\u2212y is transformed to \u25b3pt p\u2212x t p\u2212y t , which is also positively oriented. Equation 2 shows that D\u2212xD\u2212y|J|(p) equals twice the signed area of \u25b3pt p\u2212x t p\u2212y t . Therefore, D\u2212xD\u2212y|J|(p) > 0. (\u21d0) When D\u2212xD\u2212y|J|(p) > 0, from Eq. 2 \u25b3pt p\u2212x t p\u2212y t is positively oriented. Because of linear interpolation, \u25b3pp\u2212xp\u2212y is transformed to \u25b3pt p\u2212x t p\u2212y t and both of them are positively oriented. Therefore, T is free of folding for \u25b3pp\u2212xp\u2212y. Definition 2 A 2D transformation T is digitally diffeomorphic for the region \u25b3pp\u2212xp\u2212y if T is invertible and free of folding for \u25b3pp\u2212xp\u2212y. Proposition 3 A 2D transformation T is digitally diffeomorphic for the region \u25b3pp\u2212xp\u2212y if and only if D\u2212xD\u2212y |J| (p) > 0. \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration Proof Proposition 3 is a direct consequence of Propositions 1 and 2. In conclusion, D\u2212xD\u2212y |J| for the point p informs us about the digitally diffeomorphic property of a triangle adjacent to p, under the assumption that the transformation is linearly interpolated. 2.2 Digital Diffeomorphism in Two Dimensions Similar to D\u2212xD\u2212y|J|(p), we can replicate Proposition 3 to establish that any |J|(p) approximated using any combination of forward and backward differences is testing if the transformation is digitally diffeomorphic for a triangular region around p (see Fig. 2(c)), assuming that the region is linearly interpolated. Since these |J|(p)\u2019s cover different regions, their signs are independent of each other. For example, T can have a positive D\u2212xD\u2212y|J|(p) and a negative D+xD+y|J|(p) at the same time (see Fig. 1(b)). 
For each of these approximations, the implied triangular regions for all p\u2019s taken together cover only half of the space (e.g., D\u2212xD\u2212y|J| only considers the red tiles in Fig. 3(a)). Consequently, even if a transformation T has D\u2212xD\u2212y|J|(p) > 0 for all p\u2019s, half the space is not considered and can potentially exhibit folding or be non-invertible. This is also the case for any other forward and backward difference computations of |J|. Although combining the triangular regions of D\u2212xD\u2212y|J| and D+xD+y|J| can cover the entire space, positive D\u2212xD\u2212y|J| and D+xD+y|J| only guarantee that the transformation is digitally diffeomorphic when the transformation is piecewise linearly interpolated as shown in Fig. 3(a). A different choice, for example that shown in Fig. 3(b), which corresponds to D\u2212xD+y|J| and D+xD\u2212y|J| can give contradictory conclusions. Since there are two ways of dividing the square-size area between grid points, there are many more piecewise linear transformations that correspond to the same digital transformation, e.g., Fig. 3(c). To anticipate all possible choices of triangulation\u2014each of which leads to its own invertible transformation on the plane\u2014we should therefore consider all finite difference approximations in determining whether a transformation is digitally diffeomorphic. This leads naturally to the following definition: Definition 3 A 2D digital transformation T is a digital diffeomorphism if for every grid point p it satisfies D\u2212xD\u2212y|J|(p) > 0, D\u2212xD+y|J|(p) > 0, D+xD\u2212y|J|(p) > 0, and D+xD+y|J|(p) > 0. The proposed digital diffeomorphism definition guarantees the transformation to be free of folding and invertible regardless of the piecewise linear transformation (on triangles that divide the squares between pixel centers) that is used. 2.3 Central Difference Based Jacobian Determinant In this section, we analyze the central difference approximation of |J| given the previous analysis. We first ask where p can be positioned to yield a digital diffeomorphism when its neighbors have fixed transformations. We start with the following definitions: Definition 4 For grid point p with fixed p\u2212x t , p\u2212y t , p+x t , and p+y t , R(p) is defined to be the region in R2 such that D\u2212xD\u2212y|J|(p) > 0, D\u2212xD+y|J|(p) > 0, D+xD\u2212y|J|(p) > 0, and D+xD+y|J|(p) > 0. Definition 5 Let a and b be points in R2. The halfplane H(a, b) is defined as p \u2208R2 such that \u25b3abp is positively oriented. Proposition 4 R(p) is the intersection of the four half-planes H(p\u2212x t , p\u2212y t ), H(p\u2212y t , p+x t ), H(p+x t p+y t ), and H(p+y t , p\u2212x t ). Proof Equation 1 can be written as: D\u2212xD\u2212y|J|(p) = (p\u2212x t p\u2212y t \u00d7 p\u2212x t pt ) \u00b7 n. Therefore, when D\u2212xD\u2212y|J|(p) > 0, \u25b3p\u2212x t p\u2212y t pt is positively oriented. Thus pt is in the half-plane H(p\u2212x t , p\u2212y t ). Analogous statements can be made for the other forward and backward difference based |J|\u2019s. When all the four |J|\u2019s are positive, pt must be inside the four half-planes H(p\u2212x t , p\u2212y t ), H(p\u2212y t , p+x t ), H(p+x t p+y t ), and H(p+y t , p\u2212x t ). Therefore, R(p) is the intersection of these four half-planes. Figure 4 shows examples of R(p). In Figs. 
4(c) and (d), no matter where pt is located, at least one forward or backward difference based |J|(p) \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration 5 (a) (b) (c) Figure 3 Illustration of the combinations of Jacobian determinants and their corresponding triangular regions. (a) D\u2212xD\u2212y|J|(p) and D+xD+y|J|(p). (b) D\u2212xD+y|J|(p) and D+xD\u2212y|J|(p). (c) Each pixel uses a different combination of Jacobian determinants (a) p\u2212x t p\u2212y t p+y t p+x t R (b) p\u2212x t p\u2212y t p+y t p+x t R (c) p\u2212x t p\u2212y t p+y t p+x t (d) p\u2212x t p\u2212y t p+y t p+x t Figure 4 Examples of R(p) (blue shaded region) for pt such that T is digitally diffeomorphic. (c) and (d) show examples of T \u2019s that are causing folds in space and thus R(p) is an empty set is guaranteed to be negative and, thus, R(p) is empty. This makes sense since the transformations in Figs. 4(c) and (d) cause folds in space. Proposition 5 Assume p\u2212x t , p\u2212y t , p+x t , and p+x t forms a simple polygon (without selfintersection). Then R(p) is non-empty if and only if D0xD0y |J| (p) > 0. Proof D0xD0y|J|(p) can be written as: D0xD0y|J| = 1 2 \u0010 D\u2212xD\u2212y|J| + D+xD\u2212y|J| +D\u2212xD+y|J| + D+xD+y|J| \u0011 . (3) (\u21d2) If R(p) is non-empty, for every pt \u2208R(p) all its forward and backward difference based |J|(p)\u2019s are positive by Definition 4, and therefore from Eq. 3, D0xD0y|J|(p) > 0. (\u21d0) On the other hand, if D0xD0y|J| > 0, the polygon p\u2212x t p\u2212y t p+x t p+y t is positively oriented (Braden, 1986) (i.e. interior to the left). Thus, if there exists a pt that is visible to all vertices of the polygon, then pt \u2208R(p) by Proposition 4. Following (Chv\u00b4 atal, 1975), such pt always exists for simple polygons with five or fewer vertices. Therefore, R(p) is non-empty. Although a positive D0xD0y|J|(p) ensures that R(p) is non-empty, p can still be digitally nondiffeomorphic when pt \u0338\u2208R(p), which we see in the checkerboard problem. In addition to the checkerboard problem, D0xD0y|J| also fails to provide a meaningful interpretation when the polygon p\u2212x t p\u2212y t p+x t p+y t exhibits self-intersection (see Fig. 4(d)). In conclusion, D0xD0y|J|(p) \u22640 always indicates that T is digitally non-diffeomorphic but D0xD0y|J|(p) > 0 does not mean it is digitally diffeomorphic because of the checkerboard or selfintersection problems. Therefore, use of central differences alone to estimate |J| is an inadequate characterization of digital diffeomorphism. 2.4 Digital Diffeomorphism in Three Dimensions Consider the standard Euclidean space R3 that follows the right-hand rule. Let T be a digital transformation of R3 that is defined for every grid point p. We denote the 6-connected neighbors of p as p\u00b1x, p\u00b1y, p\u00b1z (see Fig. 5(a)) and their transformed locations are denoted with the subscript t. We denote the tetrahedron defined by the vectors pp\u2212x, pp\u2212y, and pp\u2212z as pp\u2212xp\u2212yp\u2212z, and we \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration assume that the 3D transformation T is linearly interpolated on pp\u2212xp\u2212yp\u2212z. Proposition 6 A 3D transformation T is invertible for pp\u2212xp\u2212yp\u2212z if and only if D\u2212xD\u2212yD\u2212z|J| for p is nonzero. 
Proof Since T is linearly interpolated on pp\u2212xp\u2212yp\u2212z, T is linear for pp\u2212xp\u2212yp\u2212z and can be written as a 3 \u00d7 3 matrix with pt p\u2212x t , pt p\u2212y t , and pt p\u2212z t as its columns. Thus, T is invertible if and only if pt p\u2212x t , pt p\u2212y t , and pt p\u2212z t are linearly independent (not colinear). D\u2212xD\u2212yD\u2212z|J|(p) can be written as a triple product: D\u2212xD\u2212yD\u2212z|J|(p) = \u2212(pt p\u2212x t \u00d7 pt p\u2212y t ) \u00b7 pt p\u2212z t . (4) Therefore, T is invertible if and only if D\u2212xD\u2212yD\u2212z|J|(p) \u0338= 0. Definition 6 A 3D transformation T is said to cause folding for pp\u2212xp\u2212yp\u2212z if the orientation of pp\u2212xp\u2212yp\u2212z is reversed by T . Proposition 7 A 3D transformation T is free of folding for pp\u2212xp\u2212yp\u2212z if and only if T has D\u2212xD\u2212yD\u2212z|J|(p) > 0. Proof (\u21d2) When T is free of folding, the orientation of pp\u2212xp\u2212yp\u2212z (negatively oriented by the right-hand rule) is preserved by T . Because of linear interpolation, pp\u2212xp\u2212yp\u2212z is transformed to pt p\u2212x t p\u2212y t p\u2212z t , which is also negatively oriented. Equation 4 shows that D\u2212xD\u2212yD\u2212z|J|(p) equals to six times the negative signed volume of pt p\u2212x t p\u2212y t p\u2212z t . Therefore, D\u2212xD\u2212yD\u2212z|J|(p) > 0. (\u21d0) D\u2212xD\u2212yD\u2212z|J|(p) > 0 indicates that pt p\u2212x t p\u2212y t p\u2212z t is negatively oriented by Eq. 4. Because of linear interpolation pp\u2212xp\u2212yp\u2212z is mapped to pt p\u2212x t p\u2212y t p\u2212z t and both of them are negatively oriented. Therefore, T is free of folding for pp\u2212xp\u2212yp\u2212z. Definition 7 A 3D transformation T is digitally diffeomorphic for the region pp\u2212xp\u2212yp\u2212z if T is invertible and free of folding for pp\u2212xp\u2212yp\u2212z. Proposition 8 A 3D transformation T is digitally diffeomorphic for the region pp\u2212xp\u2212yp\u2212z if and only if D\u2212xD\u2212yD\u2212z|J|(p) > 0. Proof Proposition 8 is a direct consequence of Propositions 6 and 7. Similarly, we can prove that a |J|(p) approximated using any combination of forward and backward differences is testing if the transformation is digitally diffeomorphic in a tetrahedron adjacent to p. One of these tetrahedra (for D\u2212xD+yD\u2212z|J|) is visualized in Fig. 5(a). However, unlike the 2D case where all the triangular regions completely cover the entire 2D space, the union of all these adjacent tetrahedra does not fill the entire 3D space. Therefore, even if all forward and backward difference approximations of |J| are positive, the transformation can still cause folding in the spaces not covered by these adjacent tetrahedra. To solve this issue, we introduce another tetrahedron pp\u2212x\u2212yp\u2212x\u2212zp\u2212y\u2212z, as shown in Fig. 5(b). When combined with four existing tetrahedra from finite difference based |J|\u2019s, they completely cover the entire volume. We define |J\u22c6 1 |(p) as |J\u22c6 1 |(p) = (pt p\u2212x\u2212y t \u00d7 pt p\u2212x\u2212z t ) \u00b7 pt p\u2212y\u2212z t , (5) where p\u2212x\u2212y t , p\u2212x\u2212z t , and p\u2212y\u2212z t are the transformed locations of p\u2212x\u2212y, p\u2212x\u2212z, p\u2212y\u2212z. The signed volume of the extra tetrahedron after applying T equals 1 6|J\u22c6 1 |(p). Similar to 2D, there are two ways of dividing the cube-size volume inbetween grid points into five tetrahedra, as shown in Figs. 5(b) and (c). 
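The triple-product form in Eq. (4) translates directly into code; the sketch below evaluates D\u2212xD\u2212yD\u2212z|J|(p) from the transformed positions of p and its three backward neighbors (the example coordinates are made up for illustration).

```python
# Sketch of Eq. (4): D_-x D_-y D_-z |J|(p) as the negated triple product of
# the edge vectors of the tetrahedron spanned by p_t and its transformed
# backward neighbors.
import numpy as np

def backward_det_3d(p_t, p_mx_t, p_my_t, p_mz_t):
    """Eq. (4): -((p_mx_t - p_t) x (p_my_t - p_t)) . (p_mz_t - p_t)."""
    e_x = np.asarray(p_mx_t, float) - p_t
    e_y = np.asarray(p_my_t, float) - p_t
    e_z = np.asarray(p_mz_t, float) - p_t
    return -float(np.dot(np.cross(e_x, e_y), e_z))

# Identity transformation of a grid point and its backward neighbors gives 1.
p = np.array([5.0, 5.0, 5.0])
print(backward_det_3d(p, p - [1, 0, 0], p - [0, 1, 0], p - [0, 0, 1]))  # 1.0
```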
The signed volume of the extra tetrahedron in Fig. 5(c) can be computed as 1 6|J\u22c6 2 |(p), where |J\u22c6 2 |(p) = (pt p+x+y t \u00d7 pt p+y+z t ) \u00b7 pt p+x+z t . (6) There are other ways to divide a cube into tetrahedra (Carr et al., 2006). These other schemes are not considered here because 1) their computation involve finite difference approximations at different points than those we consider, or 2) their computation requires interpolating the transformation at non-grid point. \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration 7 z y x (a) p\u2212z p+z p\u2212x p+x p\u2212y p+y p (c) (b) p p\u2212x\u2212z p\u2212x\u2212y p\u2212y\u2212z p p+x+y p+x+z p+y+z Figure 5 (a) shows the notations for p and its 6-connected neighbors in 3D and the tetrahedron considered by D\u2212xD+yD\u2212z|J|. (b) and (c) are illustrations of the two schemes to divide the cube volume in-between grid points in 3D Definition 8 A 3D digital transformation T is a digital diffeomorphism if for every grid point p its forward and backward difference based |J|\u2019s of p are all positive and both |J\u22c6 1 |(p) and |J\u22c6 2 |(p) are positive. For similar reason as in 2D, our definition of digital diffeomorphism involves both schemes shown in Figs. 5(b) and (c). Specifically, each of the two schemes corresponds to a particular piecewise linear transformation. But for a given digital transformation there are many plausible piecewise linear transformations (see Fig. 3(c)). By considering both schemes, we avoid the ambiguity of choosing different schemes for every cube-sized volume. The central difference approximation of |J| in 3D calculates the signed volume of an octahedron with vertices p\u2212x t , p+x t , p\u2212y t , p+y t , p\u2212z t , and p+z t , when the octahedron is simple. The proof is a straightforward extension of Eq. 3. It is easy to show that D0xD0yD0z|J| can also have the checkerboard problem or the self-intersection problem as in the 2D case. Note that Proposition 5 cannot be generalized to 3D because p\u2212x t p+x t p\u2212y t p+y t p\u2212z t p+z t may not be tetrahedralizable (O\u2019Rourke et al., 1987) and thus, D0xD0yD0z|J|(p) > 0 cannot guarantee R(p) in 3D is non-empty. 2.5 Non-diffeomorphic Space Measurement For non-diffeomorphic transformations in 3D, the number and percentage of non-diffeomorphic voxels (a) y x pt (b) pt (c) pt Figure 6 A demonstration of measuring non-diffeomorphic space in 2D. The transformations are visualized as displacement fields. In each of the three subfigures, only the center points are transformed to their corresponding pt \u2019s and the other grid points remain fixed. The triangular region corresponds to the forward difference based |J| is shown in green if |J| > 0, otherwise it is shown in red is often used to measure its irregularity. Specifically, a voxel is considered non-diffeomorphic if its center location p has a central difference based Jacobian determinant that is not positive (i.e. in 3D: D0xD0yD0z|J|(p) \u22640). However, as we have demonstrated in Sections 1 and 2, the central difference based Jacobian determinant underestimates the non-diffeomorphic space, in general. Given the limitations of the central difference-based Jacobian determinant, we seek an alternative way of evaluating the diffeomorphic property of digital transformations. 
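Definition 8 can be tested point by point by evaluating the eight forward/backward determinants together with |J\u22c6 1| and |J\u22c6 2| from Eqs. (5)-(6); the sketch below does this for one grid point, assuming the transformed coordinates are stored in an (X, Y, Z, 3) array whose axes correspond to x, y, and z. The indexing convention is an illustrative assumption, not taken from the paper's released code.

```python
# Sketch of the Definition 8 test at a single grid point (i, j, k) of an
# array T of transformed coordinates with shape (X, Y, Z, 3). All required
# neighbors are assumed to exist.
import itertools
import numpy as np

def triple(a, b, c):
    """Signed triple product (a x b) . c."""
    return float(np.dot(np.cross(a, b), c))

def is_digitally_diffeomorphic(T, i, j, k):
    p = T[i, j, k]
    dets = []
    # Eight forward/backward combinations; D_{s}T(p) = s * (T(p + s*e) - T(p)).
    for sx, sy, sz in itertools.product((-1, 1), repeat=3):
        ex = sx * (T[i + sx, j, k] - p)
        ey = sy * (T[i, j + sy, k] - p)
        ez = sz * (T[i, j, k + sz] - p)
        dets.append(triple(ex, ey, ez))
    # Extra tetrahedra of the two tetrahedralization schemes, Eqs. (5) and (6).
    dets.append(triple(T[i - 1, j - 1, k] - p,      # |J*_1|(p)
                       T[i - 1, j, k - 1] - p,
                       T[i, j - 1, k - 1] - p))
    dets.append(triple(T[i + 1, j + 1, k] - p,      # |J*_2|(p)
                       T[i, j + 1, k + 1] - p,
                       T[i + 1, j, k + 1] - p))
    return all(d > 0 for d in dets), dets

# For the identity grid all ten values are positive, as expected.
grid = np.stack(np.meshgrid(np.arange(8.0), np.arange(8.0), np.arange(8.0),
                            indexing="ij"), axis=-1)
ok, _ = is_digitally_diffeomorphic(grid, 4, 4, 4)
print(ok)  # True
```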
We aim to find a quantitative measure that involves the computation of finite difference-based Jacobians and is consistent with our definition of digital diffeomorphism. As a result, we propose non-diffeomorphic volume (NDV) to quantify the size of the non-diffeomorphic space in 3D caused by a digital transformation. \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration (a) (b) (c) (d) Figure 7 A visualization of the proposed non-diffeomorphic area (only the displacement inside the coronal plane was considered in this example). (a) The warped image. (b) The grid line representation of the transformation (generated using Voxelmorph (Balakrishnan et al., 2019)). (c) The warped image with non-diffeomorphic pixels (marked in red) as measured by the central difference Jacobian determinant (D0xD0y|J|) highlighted in red. (d) The warped image with a map overlay indicating the non-diffeomorphic area with brighter shades of red indicating larger non-diffeomorphic area Given a grid point p, we denote its eight forward and backward difference based Jacobian determinants as |Ji|, i \u2208[1, . . . , 8]. As shown in Section 2.4, each of these determinants is equal to six times the signed volume of a tetrahedron adjacent to p. Thus, the non-diffeomorphic volume caused by a given transformation within any tetrahedron is given by \u2212min(|Ji|, 0)/6, which is positive only if the tetrahedron is folded. Following Def. 8, we use the total volume of all folded tetrahedrons from 1) |Ji|, i \u2208[1, . . . , 8], 2) |J\u22c6 1 |(p) given in Eq. 5, and 3) |J\u22c6 2 |(p) given in Eq. 6 to compute NDV exhibited in the entire domain as: NDV = \u22121 2 X p \" 8 X i=1 min(|Ji|(p), 0) 6 + min(|J\u22c6 1 |(p), 0) 6 +min(|J\u22c6 2 |(p), 0) 6 \u0015 . (7) This computation is the average of the two tetrahedralization schemes shown in Figs. 5(b) and (c). While it is possible to report the minimum or maximum achievable non-diffeomorphic volume by tetrahedralizing each cube between voxels based on the specific transformation, doing so would inevitably associate the given digital transformation with one particular piecewise linear transformation. Therefore, averaging the two tetrahedralization schemes provides a more comprehensive and unbiased measure for a digital transformation. Further discussion on the impact of spatially varying tetrahedralization schemes can be found in Section 4. The proposed NDV is connected to the ideas of simplex counting (SC) (Yushkevich et al., 2010) and surface propagation (SP) (Pai et al., 2016), which are used to assess the degree of volume change on a per-voxel basis. However, those methods typically use only one scheme to discretize the space. In contrast, NDV considers the two tetrahedralization schemes shown in Figs. 5(b) and (c). This choice is motivated by the definition of digital diffeomorphism but it also offers a rotation-invariant property. Specifically, rotating the transformation by any multiple of 90 degrees does not affect its result. This cannot be achieved using only one scheme. Further discussion on the relationship and distinction between SC, SP, and the proposed NDV can be found in Section 4. For completeness we note the 2D version of non-diffeomorphic space, which we term nondiffeomorphic area (NDA), follows from Def. 3, NDA = \u22121 2 X p 4 X i=1 min(|Ji|(p), 0) 2 , (8) where |Ji|, i \u2208[1, . . . 
, 4], are the Jacobian determinants approximated using the four possible combinations of forward and backward differences. A demonstration of NDA is provided in Fig. 6. When using the central difference based |J|, all three cases in Fig. 6 would be considered diffeomorphic (D0xD0yD0z|J|(p) > 0) because of the checkerboard problem. The forward difference based |J| is able to identify that Figs. 6(b) and (c) exhibit folding, but only NDA can provide the observation that the non-diffeomorphic space caused by the transformation shown in Fig. 6(c) is larger than the non-diffeomorphic space in Fig. 6(b). \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration 9 Table 1 Our proposed non-diffeomorphic volume and several other measures on the IXI dataset and the seven comparison algorithms. For \u2018D0xD0yD0z|J| \u22640\u2019 and \u2018Any |Ji| \u22640\u2019 we report the mean number of voxels (voxel #) over the 115 test subjects and the corresponding standard deviation (\u00b1), as well as the percentage (%) with respect to the brain mask. We also report our proposed measure of non-diffeomorphic space\u2014i.e. non-diffeomorphic volume (NDV) in 3D\u2014and the corresponding standard deviations and percentages. Methods are listed in the order in which they were published. D0xD0yD0z|J| \u22640 Any |Ji| \u22640 Proposed # of voxels # of voxels NDV (%) (%) (%) 222.2 \u00b1 776.1 288.9 \u00b1 945.7 10.9 \u00b1 45.4 NiftyReg (Modat et al., 2010) (0.01%) (0.02%) (< 0.00%) 5704.7 \u00b1 1939.6 12212.9 \u00b1 3118.1 1597.4 \u00b1 661.5 deedsBCV (Heinrich et al., 2015) (0.37%) (0.79%) (0.10%) 41233.1 \u00b1 8091.3 98241.0 \u00b1 17061.1 16261.5 \u00b1 2709.3 Voxelmorph (Balakrishnan et al., 2019) (2.64%) (6.26%) (1.04%) 44126.5 \u00b1 8526.4 99560.0 \u00b1 16833.8 17923.6 \u00b1 3083.3 Cyclemorph (Kim et al., 2021) (2.83%) (6.38%) (1.15%) 0.0 \u00b1 0.0 0.1 \u00b1 0.7 0.0 \u00b1 0.0 MIDIR (Qiu et al., 2021) (< 0.00%) (< 0.00%) (< 0.00%) 35324.1 \u00b1 7887.7 88263.1 \u00b1 17261.7 14034.7 \u00b1 2902.5 Transmorph (Chen et al., 2022) (2.26%) (5.65%) (0.90%) 8291.4 \u00b1 5928.4 20282.7 \u00b1 7853.8 822.6 \u00b1 248.5 im2grid (Liu et al., 2022) (0.53%) (1.3%) (0.05%) 3 Experiments We compared the commonly used central difference based Jacobian determinant (D0xD0yD0z|J|) and our proposed non-diffeomorphic volume (NDV) using several deformable registration algorithms on two publicly available datasets: IXI: A total of 576 T1\u2013weighted brain magnetic resonance (MR) images from the publicly available IXI dataset were used. 403 scans were used in training for the task of atlas-to-subject registration (Kim et al., 2021) and 58 scans were used for validation. The transformations generated from registering an atlas brain MR images to 115 test scans were evaluated. Learn2Reg OASIS: We also used the brain T1weighted MR images from the 2021 Learn2Reg challenge (Hering et al., 2022; LaMontagne et al., 2019). Scans were preprocessed using FreeSurfer (Fischl, 2012; Hoopes et al., 2021). All algorithms were trained using the training set of 414 scans and the transformations for the 19 validation pairs were evaluated. The central difference Jacobian determinant approximation was implemented directly from the 2021 Learn2Reg challenge evaluation script. The implementation details and hyper-parameters for each of the algorithms were adopted from (Chen et al., 2022) and (Liu et al., 2022). A visualization of a result from the IXI dataset is shown in Fig. 7. 
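The per-pixel non-diffeomorphic area visualized in Fig. 7(d), described next, follows Eq. (8); below is a sketch under the same (H, W, 2) storage assumption used earlier, with boundary points that lack the required neighbors simply skipped (an implementation choice, not something prescribed by the paper).

```python
# Sketch of Eq. (8): non-diffeomorphic area of a 2D transformation T stored
# as an (H, W, 2) array of transformed coordinates.
import numpy as np

def non_diffeomorphic_area(T):
    p = T[1:-1, 1:-1]                                # interior grid points
    nb = {1: slice(2, None), -1: slice(0, -2)}       # neighbor slice per sign
    total = 0.0
    for sx in (-1, 1):                               # backward/forward along x
        for sy in (-1, 1):                           # backward/forward along y
            d_x = sx * (T[1:-1, nb[sx]] - p)
            d_y = sy * (T[nb[sy], 1:-1] - p)
            det = d_x[..., 0] * d_y[..., 1] - d_x[..., 1] * d_y[..., 0]
            # |det|/2 is the area of the corresponding triangle; only folded
            # (negative-determinant) triangles contribute.
            total += -np.minimum(det, 0.0).sum() / 2.0
    return total / 2.0                               # averaging factor in Eq. (8)

# The identity transformation has zero non-diffeomorphic area.
xs, ys = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
print(non_diffeomorphic_area(np.stack([xs, ys], axis=-1)))  # 0.0
```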
Since it is difficult to visualize displacements across slices (anterior-to posterior direction in this case), only the displacements within the coronal plane were considered. Thus, we computed the non-diffeomorphic area instead of non-diffeomorphic volume. For each pixel in Fig. 7(d), a higher red intensity indicates larger non-diffeomorphic area around that pixel. The nondiffeomorphic pixels computed from D0xD0y|J| are highlighted in Fig. 7(c) for comparison. The results on the IXI dataset are summarized in Table 1 and for the Learn2Reg OASIS in Table 2. Only the voxels within the brain were considered and the percentages were calculated relative to the brain volume of the fixed image. In both tables, we report the number of non-diffeomorphic voxels (# of voxel) and its percentage (%) based on the D0xD0yD0z|J|. We also report the number (and percentage) of voxels that have at least one |Ji| \u22640, denoted as \u2018Any |Ji| \u22640\u2019 in the tables. We observe that in most of the cases, there are \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration actually more than twice the number of voxels having |Ji| \u22640 for some finite difference than found using only the central difference. The differences between \u2018D0xD0yD0z|J| \u22640\u2019 and \u2018Any |Ji| \u22640\u2019 highlight that errors in using the central difference approximation are very common in practice. The proposed average non-diffeomorphic volume (NDV) and its percentage are shown in the last three columns of Tables 1 and 2. For algorithms that impose strong regularization on the transformations (e.g., MIDIR (Qiu et al., 2021), deedsBCV (Heinrich et al., 2015), NiftyReg (Modat et al., 2010), and SyN (Avants et al., 2008)), a small\u2014even zero\u2014NDV is observed. However, for the deep learning methods that directly output deformation fields (Kim et al., 2021; Balakrishnan et al., 2019; Chen et al., 2022; Liu et al., 2022), we usually have higher NDVs. It is important to note that the results shown in Tables 1 and 2 do not reflect the accuracy of the algorithms, just the proportion of their deformation that is non-diffeomorphic. 4 Discussion The Jacobian determinant of a transformation is a widely used measure in deformable image registration, but the details of its computation are often overlooked. In this paper, we focused on the finite difference based approximation of |J|. Contrary to what one might expect, the commonly used central difference based |J| does not reflect if the transformation is diffeomorphic or not. Our investigation shows that each of the finite difference approximated |J| corresponds to the signed area of a triangle in 2D or the signed volume of a tetrahedron in 3D when the digital transformations are assumed to be piecewise linear. Following this, we propose the definition of a digital diffeomorphism that allows diffeomorphisms\u2014a concept in continuous domain\u2014to be applied to digital transformations. It solves several problems that are inherent in the central difference based |J|. We further propose to use non-diffeomorphic volume to measure the irregularity of 3D transformations and non-diffeomorphic area for 2D transformations. As demonstrated in Fig. 6 and Fig. 7, our proposed approach measures the severity of the irregularity whereas the commonly used central difference based |J| is only a binary indicator of folding (also with errors). The transformation shown in Fig. 
6(b) is obviously more favorable than the one shown in Fig. 6(c) in terms of regularity. As such, it is important for us to be able to draw distinctions between these two scenarios. The non-diffeomorphic measures presented in Eqs. 7 and 8 are averages of two choices of tetrahedralization of the volume. It is possible, however, to tetrahedralize each cube between voxels based on the specific registration outcome, for example, to yield the minimum or maximum achievable nondiffeomorphic volume. These measures must be computed for the entire image domain since the brain mask is defined at voxel level. For the IXI dataset, the average non-diffeomorphic volume for the best case is 33227 voxel3 for Voxelmorph and 31488 voxel3 for Transmorph while the average non-diffeomorphic Volume for the worse case is 41903 voxel3 for Voxelmorph and 40964 voxel3 for Transmorph. In comparison, our proposed NDV is 37565 voxel3 for Voxelmorph and 36226 voxel3 for Transmorph for the entire image domain. This result shows that, at least for these two algorithms, the non-diffeomorphic volume cannot be made substantially different from the average value that we specified in Eq. 7 by alternative selection of tetrahedralization. The idea of discretizing the space using triangles or tetrahedrons has been presented before for regularizing deformable transformations and deformation-based volume change estimation (Pai et al., 2016; Holland et al., 2011; Yushkevich et al., 2010). It was previously believed that computing the volume change in discretized space using tetrahedrons is more accurate than using Jacobian determinants because the latter involves finite difference approximation (Yushkevich et al., 2010). Our analysis shows that discretizing the space using triangles or tetrahedrons is in fact the consequence of finite difference approximation of Jacobian determinants. Therefore, the two approaches are equivalent when the corresponding finite differences are chosen for a given partition of space. Haber et al. (Haber and Modersitzki, 2004, 2007) and Burger et al. (Burger et al., 2013) used the volume of triangles in 2D or tetrahedrons in 3D as a regularization term for their registration algorithm. Initially, Haber et al. proposed a hard equality constraint (Haber and Modersitzki, 2004) that enforces the preservation of the discretized volume (or area in 2D) of every deformed box. Recognizing that this approach could not detect \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration 11 Table 2 Results for the Learn2Reg OASIS dataset. For \u2018D0xD0yD0z|J| \u22640\u2019 and \u2018Any |Ji| \u22640\u2019 we report the mean number of voxels (voxel #) over the 19 validation subject pairs and the corresponding standard deviation (\u00b1), as well as the percentage (%) with respect to the brain mask. We also report our proposed measure of non-diffeomorphic space\u2014i.e. non-diffeomorphic volume (NDV) in 3D\u2014and the corresponding standard deviations and percentages. Methods are listed in the order in which they were published. 
D0xD0yD0z|J| \u22640 Any |Ji| \u22640 Proposed # of voxels # of voxels NDV (%) (%) (%) 205.5 \u00b1 331.1 278.2 \u00b1 415.3 85.0 \u00b1 204.1 SyN (Avants et al., 2008) (0.01%) (0.02%) (0.01%) 40418.4 \u00b1 8991.3 94343.8 \u00b1 18452.7 18448.2 \u00b1 4319.0 Voxelmorph (Balakrishnan et al., 2019) (2.84%) (6.64%) (1.30%) 31646.6 \u00b1 7609 78533.7 \u00b1 16113.0 13502.7 \u00b1 3779.9 Transmorph (Chen et al., 2022) (2.22%) (5.52%) (0.95%) 22905.6 \u00b1 4142.3 161071.8 \u00b1 18271.5 8774.7 \u00b1 975.6 im2grid (Liu et al., 2022) (1.61%) (11.36%) (0.62%) \u201ctwists\u201d (i.e., folding), they later proposed an inequality constraint (Haber and Modersitzki, 2007) that calculates the volumes of the tetrahedrons to prevent such twists by imposing positive volumes of the tetrahedrons. However, their methods only account for a single combination of Jacobian determinants (as shown in Fig. 3(a)), which is insufficient to guarantee a digital diffeomorphism. Moreover, these previous works were motivated by the fact that calculating the volume of a \u201ctwisted\u201d polygon or octahedron is difficult. Our analysis on the central difference approximated |J| explains why using a polygon or octahedron is inaccurate. With respect to deep network based registration methods, many ensure that the output of their network is diffeomorphic, by adopting a scalingand-squaring layer as the output layer (Dalca et al., 2019; Chen et al., 2022; Hoopes et al., 2021). However, several works have reported that the scaling-and-squaring approach produces voxels with negative central difference based Jacobian determinant. The analysis presented in this paper explains why the scaling-and-squaring approach cannot guarantee a folding-free digital transformation. Specifically, when composing two digital transformations, one of the transformation needs to be sampled at non-grid locations, which usually involves bilinear or trilinear interpolation. These interpolation methods, however, are inconsistent with the piecewise linear transformation that is implicitly assumed by the finite difference based Jacobian determinant computation. As a result, the sampling process can introduce folding and result in locations with negative Jacobian determinant. For recent deep learning based registration methods, our digital diffeomorphism criteria has the potential to be used as a loss function to improve the smoothness and promote digitally diffeomorphic transformations (Mok and Chung, 2020). This is a promising direction for future research. Statements and Declarations Data availability The datasets analysed during the current study are available in the OASIS repository, https://www.oasis-brains.org/ and the IXI repository, https://brain-development.org/ ixi-dataset/ Ethical approval declarations This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the relevant local Institutional Review Boards, and performed in line with the Declaration of Helsinki. Conflict of interest The authors have no competing interests to declare that are relevant to the content of this article Funding This work was supported in part by the National Institute of Health (NIH) National Eye Institute grant R01-EY032284 (PI: J.L. Prince). \fSpringer Nature 2023 L AT EX template On Finite Difference Jacobian Computation in Deformable Image Registration" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file