diff --git "a/abs_29K_G/test_abstract_long_2405.01217v1.json" "b/abs_29K_G/test_abstract_long_2405.01217v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01217v1.json" @@ -0,0 +1,85 @@ +{ + "url": "http://arxiv.org/abs/2405.01217v1", + "title": "CromSS: Cross-modal pre-training with noisy labels for remote sensing image segmentation", + "abstract": "We study the potential of noisy labels y to pretrain semantic segmentation\nmodels in a multi-modal learning framework for geospatial applications.\nSpecifically, we propose a novel Cross-modal Sample Selection method (CromSS)\nthat utilizes the class distributions P^{(d)}(x,c) over pixels x and classes c\nmodelled by multiple sensors/modalities d of a given geospatial scene.\nConsistency of predictions across sensors $d$ is jointly informed by the\nentropy of P^{(d)}(x,c). Noisy label sampling we determine by the confidence of\neach sensor d in the noisy class label, P^{(d)}(x,c=y(x)). To verify the\nperformance of our approach, we conduct experiments with Sentinel-1 (radar) and\nSentinel-2 (optical) satellite imagery from the globally-sampled SSL4EO-S12\ndataset. We pair those scenes with 9-class noisy labels sourced from the Google\nDynamic World project for pretraining. Transfer learning evaluations\n(downstream task) on the DFC2020 dataset confirm the effectiveness of the\nproposed method for remote sensing image segmentation.", + "authors": "Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "We study the potential of noisy labels y to pretrain semantic segmentation\nmodels in a multi-modal learning framework for geospatial applications.\nSpecifically, we propose a novel Cross-modal Sample Selection method (CromSS)\nthat utilizes the class distributions P^{(d)}(x,c) over pixels x and classes c\nmodelled by multiple sensors/modalities d of a given geospatial scene.\nConsistency of predictions across sensors $d$ is jointly informed by the\nentropy of P^{(d)}(x,c). Noisy label sampling we determine by the confidence of\neach sensor d in the noisy class label, P^{(d)}(x,c=y(x)). To verify the\nperformance of our approach, we conduct experiments with Sentinel-1 (radar) and\nSentinel-2 (optical) satellite imagery from the globally-sampled SSL4EO-S12\ndataset. We pair those scenes with 9-class noisy labels sourced from the Google\nDynamic World project for pretraining. Transfer learning evaluations\n(downstream task) on the DFC2020 dataset confirm the effectiveness of the\nproposed method for remote sensing image segmentation.", + "main_content": "INTRODUCTION In the realm of Big Geospatial Data, one critical challenge is the lack of labeled data for deep learning model training. Self-Supervised Learning (SSL) received significant attention for its ability to extract representative features from unlabeled data (Wang et al., 2022). Popular SSL algorithms include generative Masked Autoencoders (MAE) (He et al., 2022) and contrastive learning methods such as DINO (Caron et al., 2021) and MoCo (Chen et al., 2020). MAE is inspired by image reconstruction, as most works utilizing vision transformers (ViTs) (Dosovitskiy et al., 2021). Constrastive learning methods can make a difference for both, convolutional backbones and ViTs. 
Recent studies suggest that deep learning models exhibit a degree of robustness against label noise (Zhang et al., 2021; Liu et al., 2024). Promising results were observed in pretraining models with extensive volumes of noisy social-media labels for image classification (Mahajan et al., 2018) and video analysis (Ghadiyaram et al., 2019). In the realm of remote sensing (RS), pretraining on crowdsourced maps such as OpenStreetMap for building and road extraction has been surveyed (Kaiser et al., 2017; Maggiori et al., 2017). These results indicate that inherently noisy labels can significantly reduce the level of human supervision required to effectively train deep learning models. Moreover, as the number of launched satellites grows, we are increasingly exposed to a variety of satellite data types, including but not limited to multi-spectral, Light Detection And Ranging (LiDAR), and Synthetic Aperture Radar (SAR) data. Multi-modal learning has emerged as a prominent area of study, where the complementary information showcases efficacy in boosting the learning from different modalities, such as optical and LiDAR data (Xie et al., 2023), multi-spectral and SAR data (Chen & Bruzzone, 2022). However, the application of multi-modal learning to improve learning from noisy labels remains for detailed exploration. 1 arXiv:2405.01217v1 [cs.CV] 2 May 2024 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop Crops Grass Trees Water Bareland Built area Shrub & scrub Ice & snow Flooded vegetation Figure 1: An example of sentinel-1 (VV, right) and sentinel-2 (RGB, left) data paired with noisy labels (middle) from 4 seasons. In this work, we study the potential of noisy labels in multi-modal pretraining settings for RS image segmentation, where a novel Cross-modal Sample Selection method, referred to as CromSS, is introduced to further mitigate the adverse impact of label noise. In the pretraining stage, we first employ two U-Nets (Ronneberger et al., 2015) backboned with ResNet-50 (He et al., 2016) to separately extract features and generate confidence masks within each modality. After that, the sample selection is implemented for each modality on its enhanced confidence masks by fortifying the shared information across modalities. Given that radar and optical satellites are sensitive to distinct features on the ground1, such cross-modal enhancement bears potential to boost the mutual learning between modalities. We test middle and late fusion strategies to improve the architecture design for multi-modal learning. In our experiments, we utilize Sentinel-1 (S1) of radar and Sentinel-2 (S2) of multi-spectral data from the SSL4EO-S12 dataset (Wang et al., 2023) as two modalities. We pair those scenes with pixel-wise noisy labels of the Google Dynamic World (DW) project (Brown et al., 2022) for pretraining. Evaluation of the pretrained ResNet-50 encoders is based on the DFC2020 dataset (Yokoya, 2019) referenced to pretrained DINO and MoCo models presented as baselines in the SSL4EO-S12 work. 2 DATA In the pretraining stage, we utilize the extended version of the SSL4EO-S12 dataset, a large-scale self-supervision dataset in Earth observation, plus 9-class noisy labels sourced from the DW project on the Google Earth Engine as illustrated in Figure 1. SSL4EO-S12 sampled data globally from 251,079 locations. Each location corresponds to 4 S1 and S2 image pairs of 264\u00d7264 pixels from 4 different seasons, among which 103,793 locations have noisy label masks matched for all the seasons. 
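To make the pretraining data layout concrete, the following is a minimal sketch of one way such seasonally matched Sentinel-1/Sentinel-2/noisy-label triplets could be organized and sampled; the array shapes, attribute names, and random-season draw are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class SeasonalS12NoisyLabelDataset(Dataset):
    """One item per geospatial location; each location holds 4 seasonal
    (S1, S2, noisy-label) triplets. A random season is drawn per access.
    Shapes below are assumptions for illustration only."""

    def __init__(self, s1, s2, labels):
        # s1: (N, 4, 2, H, W) radar patches, s2: (N, 4, 13, H, W) optical patches,
        # labels: (N, 4, H, W) noisy Dynamic-World-style class masks
        assert s1.shape[:2] == s2.shape[:2] == labels.shape[:2]
        self.s1, self.s2, self.labels = s1, s2, labels

    def __len__(self):
        return self.s1.shape[0]

    def __getitem__(self, idx):
        season = np.random.randint(self.s1.shape[1])  # pick one of the 4 seasons
        x1 = torch.from_numpy(self.s1[idx, season]).float()
        x2 = torch.from_numpy(self.s2[idx, season]).float()
        y = torch.from_numpy(self.labels[idx, season]).long()
        return x1, x2, y
```

Drawing a random season per access also matches the spirit of the seasonal augmentation strategy described later in the experiments.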
We only utilize the image-label pairs of these 103,793 locations for pretraining with noisy labels. Note that this setting reflects realistic conditions: noisy labels are still harder to obtain than imagery, so the labeled subset is smaller than the unlabeled pool. We utilize DFC2020 as the downstream segmentation task, where the 986 validation patches serve as fine-tuning training data and the 5,128 test patches are used for testing. 3 METHODOLOGY Our methodology links the semantic segmentation maps of single-modal models by two principles: (a) consistent prediction of the physical ground truth (consistency loss L_c), and (b) tolerance to noisy supervision (segmentation loss L_s). For the latter, we extend the idea of Cao & Huang (2022), which operates on a single modality, to multiple modalities with cross-modal interactions for estimating the uncertainty of a given pixel-level class label. Each modality-specific model predicts the probability P^{(d)} of a given noisy label at a physical location. While one model d = 1 may be certain about the label y, another d = 2 may assign it low probability: P^{(1)}(y) >> P^{(2)}(y). Section 3.2 details how we integrate this information to obtain a cross-modal score of how noisy a label is perceived to be. Similarly, we exploit the entropy of P^{(d)} to introduce a criterion for a cross-modal consistency loss on label predictions between single-modality models. The overall approach is summarized in Figure 2, where Q^{(d)} represents an estimate of P^{(d)}. (Footnote 1: e.g., persistent metal scatterers in SAR have little signature in optical sensors.) [Figure 2 diagram omitted: two encoder-decoder branches (modality 1: S1, modality 2: S2) supervised by noisy labels; label-based selection masks W_l^{(1/2)} derived from the predicted probability of the noisy label class and entity-based selection masks W_e^{(1/2)} derived from the prediction entropy are combined via cross-modal informed label selection and cross-modal informed consistency to weight the segmentation losses L_s^{(1/2)} and consistency losses L_c^{(1/2)}.] Figure 2: Illustration of the proposed CromSS. The decoders in the middle share the weights when middle fusion is applied. In late fusion, they are separately optimized per modality. The shaded areas (green to the left, purple to the right) highlight the key components of cross-modal sample selection. 3.1 MULTI-MODAL FUSION We employ middle and late multi-modal fusion (Chen & Bruzzone, 2022) to exploit the complementary information across modalities and aid model training. Our fusion strategies do not concatenate feature vectors of different modalities: while middle fusion shares a common decoder for all modalities, late fusion retains individual decoders.
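The two fusion variants can be sketched as follows; this is an illustrative skeleton only, where `enc1`, `enc2`, and `make_decoder` stand in for the actual ResNet-50 encoders and U-Net decoder construction, and it is not the authors' code.

```python
import torch.nn as nn

class TwoBranchSegmenter(nn.Module):
    """Sketch of the middle- vs late-fusion variants discussed above:
    each modality keeps its own encoder; "middle" fusion shares one decoder
    across modalities, "late" fusion keeps a decoder per modality."""

    def __init__(self, enc1, enc2, make_decoder, fusion="middle"):
        super().__init__()
        self.enc1, self.enc2 = enc1, enc2
        if fusion == "middle":
            shared = make_decoder()
            self.dec1 = self.dec2 = shared          # decoder weight sharing
        else:                                        # "late" fusion
            self.dec1, self.dec2 = make_decoder(), make_decoder()

    def forward(self, x1, x2):
        q1 = self.dec1(self.enc1(x1))  # per-modality class scores Q^(1)
        q2 = self.dec2(self.enc2(x2))  # per-modality class scores Q^(2)
        return q1, q2
```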
3.2 CROSS-MODAL SAMPLE SELECTION As depicted in Figure 2, the key difference between CromSS and naive multi-modal training is the introduction of sample selection masks W^{(d)}_{l/e} (the shaded areas in Figure 2). They serve as weights for calculating the segmentation and consistency losses L_s and L_c, with W^{(d)}_l denoting the label-based mask and W^{(d)}_e the entity-based mask for modality d. To compute W^{(d)}_l and W^{(d)}_e, we first generate the corresponding confidence masks F^{(d)}_l and F^{(d)}_e from the softmax outputs, i.e., the estimated class distributions Q^{(d)} approximating P^{(d)}. Let q^{(d)}_{i,j,c} ∈ Q^{(d)} denote the softmax output at image pixel location (i, j) and class c, and let y_{i,j} be its given noisy label. We take q^{(d)}_{i,j,c} with c = y_{i,j} as the estimated label-based confidence score in F^{(d)}_l. For the entity-based confidence, we define f^{(d)}_{(e)i,j} ∈ F^{(d)}_e using the entropy h^{(d)}_{i,j} of the softmax vector as follows: f^{(d)}_{(e)i,j} = 1 − h^{(d)}_{i,j}/K = 1 + (1/K) Σ_{c=1}^{C} q^{(d)}_{i,j,c} log q^{(d)}_{i,j,c}, (1) where C is the total number of classes and K = log C is the upper bound of h_{i,j} ∈ [0, K], attained when q_{i,j,c} = 1/C for c = 1, ..., C, i.e., the uniform distribution of maximum entropy. For two modalities d ∈ {1, 2}, the final confidence masks are combined as F'^{(1/2)}_{l/e} = ½ (F^{(1/2)}_{l/e} + F^{(1)}_{l/e} F^{(2)}_{l/e}) = ½ F^{(1/2)}_{l/e} (1 + F^{(2/1)}_{l/e}), (2) where the factor F^{(1/2)}_{l/e} F^{(2/1)}_{l/e} serves to magnify the selection probabilities of samples exhibiting high confidence while diminishing cases where both modalities d = 1 and d = 2 agree on a low confidence score. To generate the final sample selection masks W^{(d)}_l, we utilize a soft selection strategy rather than one-hot selection masks, in order to keep models from reinforcing their own prediction errors. Given the selection ratio α ∈ [0, 1], we define w^{(d)}_{i,j} ∈ W^{(d)}_l as w^{(d)}_{i,j} = min[1, f'^{(d)}_{i,j} / w], (3) where f'^{(d)}_{i,j} ∈ F'^{(d)}_l and w is the (α · n)-th highest value in F'^{(d)}_l, with n denoting the size of F'^{(d)}_l. For the consistency loss, we utilize the weighting factor γ ∈ [0, 1] to generate W^{(d)}_e from F'^{(d)}_e as W^{(d)}_e = (1 − γ) + γ F'^{(d)}_e, with γ gradually ramping up from 0 to 1 during training. With the losses weighted by W^{(d)}_l and W^{(d)}_e, samples of lower confidence contribute less to the optimization process. 4 EXPERIMENTS We pretrained ResNet-50 (He et al., 2016) encoders nested in U-Nets (Ronneberger et al., 2015) using a combined CrossEntropy and Dice segmentation loss (Jadon, 2020), with the Kullback-Leibler divergence (Kullback & Leibler, 1951) serving as the consistency loss. The selection proportion α is set to 50% after exponentially ramping down from 100% over the first 80 epochs; in parallel, the weighting factor γ ramps up from 0 to 1. We employed a seasonal data augmentation strategy, feeding data from a randomly selected season to the U-Nets in each iteration. An Adam optimizer (Kingma & Ba, 2017) was used with a learning rate of 0.5 · 10^-3.
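Before continuing with the training details, the selection masks of Section 3.2 can be made concrete with a minimal sketch of Eqs. (1)-(3) and the entity weighting for two modalities; this is a reading of the formulas above rather than the authors' implementation, and the gradient detachment indicated in Figure 2 as well as other numerical details are simplified here.

```python
import torch

def cromss_selection_masks(logits1, logits2, noisy_labels, alpha, gamma, eps=1e-8):
    """logits*: (B, C, H, W) raw outputs; noisy_labels: (B, H, W) class indices.
    alpha: selection ratio for the label-based masks; gamma: consistency weighting."""
    q1, q2 = torch.softmax(logits1, dim=1), torch.softmax(logits2, dim=1)
    C = q1.shape[1]
    K = torch.log(torch.tensor(float(C)))

    # Label-based confidence F_l: softmax probability of the given noisy class
    fl1 = q1.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)
    fl2 = q2.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)

    # Entity-based confidence F_e from entropy, Eq. (1): f_e = 1 - H/K
    fe1 = 1.0 + (q1 * q1.clamp_min(eps).log()).sum(dim=1) / K
    fe2 = 1.0 + (q2 * q2.clamp_min(eps).log()).sum(dim=1) / K

    # Cross-modal combination, Eq. (2): F' = 0.5 * (F + F^(1) * F^(2))
    fl1c, fl2c = 0.5 * (fl1 + fl1 * fl2), 0.5 * (fl2 + fl1 * fl2)
    fe1c, fe2c = 0.5 * (fe1 + fe1 * fe2), 0.5 * (fe2 + fe1 * fe2)

    def soft_label_weights(f, ratio):
        # Eq. (3): w = min(1, f / t), t = (ratio * n)-th highest value of f
        flat = f.flatten()
        k = max(1, int(ratio * flat.numel()))
        t = torch.topk(flat, k).values[-1]
        return torch.clamp(f / (t + eps), max=1.0)

    wl1, wl2 = soft_label_weights(fl1c, alpha), soft_label_weights(fl2c, alpha)

    # Entity-based weights for the consistency loss: W_e = (1 - gamma) + gamma * F'_e
    we1, we2 = (1 - gamma) + gamma * fe1c, (1 - gamma) + gamma * fe2c
    return (wl1, wl2), (we1, we2)
```

The returned W_l masks would weight the per-pixel segmentation losses and the W_e masks the per-pixel consistency losses of the respective modality.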
We employed the ReduceLROnPlateau scheduler to cut in half the learning rate when the validation loss is not decreasing over 30 consecutive epochs. We randomly split off 1% of the entire training set as the validation set. The pretraining was implemented on 4 NVIDIA A100 GPUs running approx. 13 hours for 100 epochs. When transferred to the DFC2020 dataset, pretrained ResNet-50 encoders were embedded into PSPNets (Zhao et al., 2017), fine-tuned with Adam and a learning rate of .5 \u00b7 10\u22124 for 50 epochs. As reference, we also present the results of single-modal pretraining (S1/S2) as well as multi-modal pretraining without sample selection, in which midF and lateF denote middle and late fusion, respectively. Pretrained weights by DINO and MoCo were provided by Wang et al. (2023). Results reported with error bars stem from 3 repeated runs of each setup. Table 1: Transfer learning results on the DFC2020 dataset with S1 and S2 as inputs, respectively, where \u201cFine-tuned\u201d and \u201cFrozen\u201d indicate whether the encoder weights would be adjusted along with decoder ones or not. Modality Encoder Frozen Fine-tuned Metrics OA AA mIoU OA AA mIoU S1 Random 54.41\u00b10.35 40.68\u00b10.23 29.16\u00b10.06 52.65\u00b10.42 42.17\u00b10.29 28.36\u00b10.22 MoCo 60.88\u00b10.41 47.46\u00b10.52 34.25\u00b10.27 60.31\u00b10.40 44.98\u00b10.66 31.80\u00b10.46 single-modal (S1) 61.73\u00b10.58 46.13\u00b10.34 34.77\u00b10.30 61.07\u00b10.19 45.78\u00b10.48 34.13\u00b10.19 midF 62.08\u00b10.73 45.01\u00b10.40 34.64\u00b10.48 61.24\u00b10.44 45.44\u00b10.84 33.86\u00b10.16 lateF 61.09\u00b10.11 45.77\u00b10.29 34.15\u00b10.14 62.19\u00b10.49 47.43\u00b10.41 34.58\u00b10.48 CromSS-midF 61.66\u00b10.41 45.07\u00b10.28 34.38\u00b10.02 62.32\u00b11.01 47.19\u00b10.84 35.17\u00b10.63 CromSS-lateF 62.58\u00b10.36 46.37\u00b10.53 34.80\u00b10.37 60.92\u00b10.76 46.13\u00b10.60 33.94\u00b10.55 S2 random 56.42\u00b10.49 45.12\u00b10.18 31.50\u00b10.14 58.68\u00b10.77 46.03\u00b10.43 33.56\u00b10.28 DINO 64.82\u00b10.22 48.83\u00b10.08 37.81\u00b10.08 63.64\u00b10.72 49.92\u00b11.33 36.95\u00b10.55 MoCo 63.25\u00b10.47 51.00\u00b10.28 37.67\u00b10.57 61.19\u00b10.39 47.29\u00b10.36 34.86\u00b10.63 single-modal (S2) 66.66\u00b10.19 53.24\u00b10.21 40.88\u00b10.07 67.11\u00b10.22 53.14\u00b10.69 41.06\u00b10.24 midF 68.36\u00b10.65 53.23\u00b10.42 41.52\u00b10.35 68.07\u00b10.64 52.60\u00b10.52 41.17\u00b10.28 lateF 67.61\u00b10.91 54.08\u00b10.92 41.59\u00b10.75 68.43\u00b11.18 53.72\u00b10.76 41.76\u00b10.76 CromSS-midF 69.41\u00b10.68 55.97\u00b10.31 42.89\u00b10.35 69.20\u00b10.66 54.86\u00b10.59 42.58\u00b10.34 CromSS-lateF 66.61\u00b11.20 54.23\u00b11.06 41.12\u00b10.11 69.10\u00b10.29 54.86\u00b10.42 42.55\u00b10.36 As shown in Table 1, the proposed CromSS can improve the effectiveness of the pretrained encoders in remote sensing image segmentation\u2014in particular for S2 multi-spectral data. The improvement for S1 radar data is less significant. We attribute this discrepancy to the different capabilities of two modalities in the pretraining task, i.e., land cover classification in this work. The sample selection in CromSS is still fundamentally based on its own confidence masks for each modality. S1, which can be regarded as a weak modality in this case, can potentially take more advantages from S2 with additional specific strategies. 
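For context on Table 1, the "Frozen" and "Fine-tuned" columns differ only in whether the pretrained encoder receives gradient updates during downstream training; the sketch below illustrates that distinction, with the `encoder`/`decoder` attribute names being assumptions about the downstream model rather than details from the paper.

```python
import torch

def build_downstream_optimizer(model, freeze_encoder, lr=5e-5):
    """Either train only the decoder/head on top of a frozen pretrained encoder,
    or fine-tune encoder and decoder jointly (default lr matches the 0.5e-4 above)."""
    if freeze_encoder:
        for p in model.encoder.parameters():
            p.requires_grad = False
        params = model.decoder.parameters()
    else:
        params = model.parameters()
    return torch.optim.Adam(params, lr=lr)
```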
Furthermore, the middle fusion strategy showcases a larger margin compared to late fusion, which indicates that the implicit data fusion via decoder weight sharing can boost the learning across modalities to some extent. We can also observe some improvements of single-modal pretraining with noisy labels compared to DINO and MoCo. These outcomes further demonstrate the potential of using noisy labels in task-specific pretraining for segmentation downstream tasks. 4 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop 5", + "additional_graph_info": { + "graph": [ + [ + "Chenying Liu", + "Hunsoo Song" + ] + ], + "node_feat": { + "Chenying Liu": [ + { + "url": "http://arxiv.org/abs/2405.01217v1", + "title": "CromSS: Cross-modal pre-training with noisy labels for remote sensing image segmentation", + "abstract": "We study the potential of noisy labels y to pretrain semantic segmentation\nmodels in a multi-modal learning framework for geospatial applications.\nSpecifically, we propose a novel Cross-modal Sample Selection method (CromSS)\nthat utilizes the class distributions P^{(d)}(x,c) over pixels x and classes c\nmodelled by multiple sensors/modalities d of a given geospatial scene.\nConsistency of predictions across sensors $d$ is jointly informed by the\nentropy of P^{(d)}(x,c). Noisy label sampling we determine by the confidence of\neach sensor d in the noisy class label, P^{(d)}(x,c=y(x)). To verify the\nperformance of our approach, we conduct experiments with Sentinel-1 (radar) and\nSentinel-2 (optical) satellite imagery from the globally-sampled SSL4EO-S12\ndataset. We pair those scenes with 9-class noisy labels sourced from the Google\nDynamic World project for pretraining. Transfer learning evaluations\n(downstream task) on the DFC2020 dataset confirm the effectiveness of the\nproposed method for remote sensing image segmentation.", + "authors": "Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION In the realm of Big Geospatial Data, one critical challenge is the lack of labeled data for deep learning model training. Self-Supervised Learning (SSL) received significant attention for its ability to extract representative features from unlabeled data (Wang et al., 2022). Popular SSL algorithms include generative Masked Autoencoders (MAE) (He et al., 2022) and contrastive learning methods such as DINO (Caron et al., 2021) and MoCo (Chen et al., 2020). MAE is inspired by image reconstruction, as most works utilizing vision transformers (ViTs) (Dosovitskiy et al., 2021). Constrastive learning methods can make a difference for both, convolutional backbones and ViTs. Recent studies suggest that deep learning models exhibit a degree of robustness against label noise (Zhang et al., 2021; Liu et al., 2024). Promising results were observed in pretraining models with extensive volumes of noisy social-media labels for image classification (Mahajan et al., 2018) and video analysis (Ghadiyaram et al., 2019). In the realm of remote sensing (RS), pretraining on crowdsourced maps such as OpenStreetMap for building and road extraction has been surveyed (Kaiser et al., 2017; Maggiori et al., 2017). These results indicate that inherently noisy labels can significantly reduce the level of human supervision required to effectively train deep learning models. 
Moreover, as the number of launched satellites grows, we are increasingly exposed to a variety of satellite data types, including but not limited to multi-spectral, Light Detection And Ranging (LiDAR), and Synthetic Aperture Radar (SAR) data. Multi-modal learning has emerged as a prominent area of study, where the complementary information showcases efficacy in boosting the learning from different modalities, such as optical and LiDAR data (Xie et al., 2023), multi-spectral and SAR data (Chen & Bruzzone, 2022). However, the application of multi-modal learning to improve learning from noisy labels remains for detailed exploration. 1 arXiv:2405.01217v1 [cs.CV] 2 May 2024 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop Crops Grass Trees Water Bareland Built area Shrub & scrub Ice & snow Flooded vegetation Figure 1: An example of sentinel-1 (VV, right) and sentinel-2 (RGB, left) data paired with noisy labels (middle) from 4 seasons. In this work, we study the potential of noisy labels in multi-modal pretraining settings for RS image segmentation, where a novel Cross-modal Sample Selection method, referred to as CromSS, is introduced to further mitigate the adverse impact of label noise. In the pretraining stage, we first employ two U-Nets (Ronneberger et al., 2015) backboned with ResNet-50 (He et al., 2016) to separately extract features and generate confidence masks within each modality. After that, the sample selection is implemented for each modality on its enhanced confidence masks by fortifying the shared information across modalities. Given that radar and optical satellites are sensitive to distinct features on the ground1, such cross-modal enhancement bears potential to boost the mutual learning between modalities. We test middle and late fusion strategies to improve the architecture design for multi-modal learning. In our experiments, we utilize Sentinel-1 (S1) of radar and Sentinel-2 (S2) of multi-spectral data from the SSL4EO-S12 dataset (Wang et al., 2023) as two modalities. We pair those scenes with pixel-wise noisy labels of the Google Dynamic World (DW) project (Brown et al., 2022) for pretraining. Evaluation of the pretrained ResNet-50 encoders is based on the DFC2020 dataset (Yokoya, 2019) referenced to pretrained DINO and MoCo models presented as baselines in the SSL4EO-S12 work. 2 DATA In the pretraining stage, we utilize the extended version of the SSL4EO-S12 dataset, a large-scale self-supervision dataset in Earth observation, plus 9-class noisy labels sourced from the DW project on the Google Earth Engine as illustrated in Figure 1. SSL4EO-S12 sampled data globally from 251,079 locations. Each location corresponds to 4 S1 and S2 image pairs of 264\u00d7264 pixels from 4 different seasons, among which 103,793 locations have noisy label masks matched for all the seasons. We only utilize the image-label pairs of these 103,793 locations for pretraining with noisy labels. Notice that this dataset is a good reflection of real cases, where noisy labels are still harder to obtain compared to images, thus of a smaller size than unlabeled data. We utilize DFC2020 as the downstream segmentation task, where the 986 validation patches are used as the fine-tuning training data with the 5128 test ones for test. 3 METHODOLOGY Our methodology links semantic segmentation maps of single-modal models by two principles: (a) consistent prediction of the physical ground truth (consistency loss Lc), and (b) tolerance to noisy supervision (segmentation loss Ls). 
For the latter, we extend the idea of Cao & Huang (2022) working on a single modality to multiple modalities with cross-modal interactions for estimating the uncertainty of a given pixel-level class label. Each modality-specific model predicts the probability P (d) of a given noisy label at a physical location. While one model d = 1 may be certain about the label y, another d = 2 may assign low probability: P (1)(y) \u226bP (2)(y). Section 3.2 details on how we integrate these information to obtain a cross-modality score of a label perceived noisy. Similarly, we exploit the entropy of P (d) to introduced a criterion for a cross-modality consistency loss on label predictions between single-modality models. The overall approach is summarized by Figure 2, where Q(d) represents an estimate of P (d). 1e.g., persistant metal scatterers in SAR have little signatur in optical sensors 2 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop Modality 2 (S2) Modality 1 (S1) Noisy labels \ud835\udc3f\ud835\udc50 (1/2): Unweighted consistency loss \ud835\udc3f\ud835\udc60 (1/2): Unweighted segmentation loss \ud835\udc44(1) \u229b \u2295 element-wise production element-wise addition Detached copy summation of all elements \u2211 \ud835\udc44(1) \ud835\udc44(1) \ud835\udc44(2) \ud835\udc44(2) \ud835\udc44(2) \ud835\udc3f\ud835\udc50 (1) \ud835\udc3f\ud835\udc50 (2) \ud835\udc3f\ud835\udc60 (1) \ud835\udc3f\ud835\udc60 (2) Encoder Decoder Encoder Decoder \u229b \u229b \u229b \u2295 \u2295 Predicted probability of the noisy label class \ud835\udc44(1)(\ud835\udc99, \ud835\udc50= \ud835\udc66(\ud835\udc99)) \ud835\udc44(2)(\ud835\udc99, \ud835\udc50= \ud835\udc66(\ud835\udc99)) \ud835\udc4a \ud835\udc59 (1/2): Label-based selection mask \ud835\udc4a \ud835\udc59 (2) \ud835\udc4a \ud835\udc59 (1) \ud835\udc39 \ud835\udc59 (2) \ud835\udc39 \ud835\udc59 (1) Cross-modal informed label selection \u229b \u229b \u229b \u2295 \u2295 Entropy Entropy \ud835\udc4a \ud835\udc52 (1/2) : Entity-based selection mask \ud835\udc39 \ud835\udc52 (2) \ud835\udc39 \ud835\udc52 (1) \ud835\udc4a \ud835\udc52 (1) \ud835\udc4a \ud835\udc52 (2) Cross-modal informed consistency Calculating \ud835\udc3f\ud835\udc46 Calculating \ud835\udc3f\ud835\udc46 Calculating \ud835\udc3f\ud835\udc36 Calculating \ud835\udc3f\ud835\udc36 Segmentation loss \u2211 \u2211 Consistency loss \u2211 \u2211 Figure 2: Illustration of the proposed CromSS. The decoders in the middle share the weights when middle fusion is applied. In late fusion, they are separately optimized per modality. The shaded areas (green to the left, purple to the right) highlight the key components of cross-modal sample selection. 3.1 MULTI-MODAL FUSION We employ middle and late multi-modal fusion (Chen & Bruzzone, 2022) to explore the complementary information across modalities to aid model training. Our fusion strategies do not concatenate feature vectors of different modalities. While middle fusion shares a common decoder for all modalities, late fusion retains individual decoders. 3.2 CROSS-MODAL SAMPLE SELECTION As depicted by Figure 2, the key in CromSS when compared to naive multi-modal training is the introduction of sample selection masks W (d) l/e (the shaded areas in Figure 2). They serve as weights for calculating the segmentation and consistency losses, Ls and Lc, cf. the label-based masks W (d) l and the entity-based masks W (d) e for modality d. 
To compute W (d) l and W (d) e , we first generate the corresponding confidence masks F (d) l and F (d) e from the softmax outputs, i.e., the estimated class distributions Q(d) for P (d). Let q(d) i,j,c \u2208Q(d) denote the softmax output at image pixel location (i, j) and class c, and yi,j be its given noisy label. Then, we take q(d) i,j,c with c = yi,j as the estimated label-based confidence scores in F (d) l . For the entity-based confidence, we define f (d) (e)i,j \u2208F (d) e using the entropy of its softmax vector h(d) i,j as follows, f (d) (e)i,j = 1 \u2212h(d) i,j /K = 1 + 1 K C X c=1 q(d) i,j,c log q(d) i,j,c (1) where C is the total number of classes, K = log C is the upper bound of hi,j \u2208[0, K] when qi,j,c = 1/C for c = 1, \u00b7 \u00b7 \u00b7 , C, i.e., equal distribution of maximum entropy. For two modalities d \u2208{1, 2}, the final confidence masks are combined into the following: F \u2032(1/2) l/e = 1 2 \u0010 F (1/2) l/e + F (1) l/e F (2) l/e \u0011 = 1 2F (1/2) l/e \u0010 1 + F (2/1) l/e \u0011 , (2) where the factor F (1/2) l/e F (2/1) l/e serves to magnify the selection probabilities of the samples exhibiting high confidence while diminishing cases where both modalities d = 1 and d = 2 agree on low confidence score. To generate final sample selection masks, we utilize a soft selection strategy rather than the one-hot selection masks for W (d) l , in order to avoid models from enforcing their own prediction errors. Mathematically speaking: given the selection ratio \u03b1 \u2208[0, 1], we define w(d) i,j \u2208W (d) l as, w(d) i,j = min h 1, f \u2032(d) i,j /w i , (3) 3 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop where f \u2032(d) i,j \u2208F \u2032(d) l , w is the (\u03b1 \u00b7 n)th highest value in F \u2032(d) l with n denoting the size of F \u2032(d) l . For the consistency loss, we utilize the weighting factor \u03b3 \u2208[0, 1] to generate W (d) e from F \u2032(d) e as W (d) e = (1 \u2212\u03b3) + \u03b3F \u2032(d) e with \u03b3 gradually ramping up from 0 to 1 during the training. With the losses weighted by W (d) l and W (d) e , the samples of lower confidence can contribute less in the optimization process. 4 EXPERIMENTS We pretrained ResNet-50 (He et al., 2016) nested in U-Nets (Ronneberger et al., 2015) using the combined segmentation losses of CrossEntropy and Dice (Jadon, 2020) along with Kullback-Leibler divergence (Kullback & Leibler, 1951) serving as the consistency losses. The selection proportion \u03b1 we set to 50% after exponentially ramping down from 100% for the first 80 epochs. At the same time, the weighting factor \u03b3 ramps up from 0 to 1 in parallel. We employed a seasonal data augmentation strategy, where the data from a randomly selected season were fed to U-Nets in each iteration. An Adam optimizer (Kingma & Ba, 2017) was used with a learning rate of .5 \u00b7 10\u22123. We employed the ReduceLROnPlateau scheduler to cut in half the learning rate when the validation loss is not decreasing over 30 consecutive epochs. We randomly split off 1% of the entire training set as the validation set. The pretraining was implemented on 4 NVIDIA A100 GPUs running approx. 13 hours for 100 epochs. When transferred to the DFC2020 dataset, pretrained ResNet-50 encoders were embedded into PSPNets (Zhao et al., 2017), fine-tuned with Adam and a learning rate of .5 \u00b7 10\u22124 for 50 epochs. 
As reference, we also present the results of single-modal pretraining (S1/S2) as well as multi-modal pretraining without sample selection, in which midF and lateF denote middle and late fusion, respectively. Pretrained weights by DINO and MoCo were provided by Wang et al. (2023). Results reported with error bars stem from 3 repeated runs of each setup. Table 1: Transfer learning results on the DFC2020 dataset with S1 and S2 as inputs, respectively, where \u201cFine-tuned\u201d and \u201cFrozen\u201d indicate whether the encoder weights would be adjusted along with decoder ones or not. Modality Encoder Frozen Fine-tuned Metrics OA AA mIoU OA AA mIoU S1 Random 54.41\u00b10.35 40.68\u00b10.23 29.16\u00b10.06 52.65\u00b10.42 42.17\u00b10.29 28.36\u00b10.22 MoCo 60.88\u00b10.41 47.46\u00b10.52 34.25\u00b10.27 60.31\u00b10.40 44.98\u00b10.66 31.80\u00b10.46 single-modal (S1) 61.73\u00b10.58 46.13\u00b10.34 34.77\u00b10.30 61.07\u00b10.19 45.78\u00b10.48 34.13\u00b10.19 midF 62.08\u00b10.73 45.01\u00b10.40 34.64\u00b10.48 61.24\u00b10.44 45.44\u00b10.84 33.86\u00b10.16 lateF 61.09\u00b10.11 45.77\u00b10.29 34.15\u00b10.14 62.19\u00b10.49 47.43\u00b10.41 34.58\u00b10.48 CromSS-midF 61.66\u00b10.41 45.07\u00b10.28 34.38\u00b10.02 62.32\u00b11.01 47.19\u00b10.84 35.17\u00b10.63 CromSS-lateF 62.58\u00b10.36 46.37\u00b10.53 34.80\u00b10.37 60.92\u00b10.76 46.13\u00b10.60 33.94\u00b10.55 S2 random 56.42\u00b10.49 45.12\u00b10.18 31.50\u00b10.14 58.68\u00b10.77 46.03\u00b10.43 33.56\u00b10.28 DINO 64.82\u00b10.22 48.83\u00b10.08 37.81\u00b10.08 63.64\u00b10.72 49.92\u00b11.33 36.95\u00b10.55 MoCo 63.25\u00b10.47 51.00\u00b10.28 37.67\u00b10.57 61.19\u00b10.39 47.29\u00b10.36 34.86\u00b10.63 single-modal (S2) 66.66\u00b10.19 53.24\u00b10.21 40.88\u00b10.07 67.11\u00b10.22 53.14\u00b10.69 41.06\u00b10.24 midF 68.36\u00b10.65 53.23\u00b10.42 41.52\u00b10.35 68.07\u00b10.64 52.60\u00b10.52 41.17\u00b10.28 lateF 67.61\u00b10.91 54.08\u00b10.92 41.59\u00b10.75 68.43\u00b11.18 53.72\u00b10.76 41.76\u00b10.76 CromSS-midF 69.41\u00b10.68 55.97\u00b10.31 42.89\u00b10.35 69.20\u00b10.66 54.86\u00b10.59 42.58\u00b10.34 CromSS-lateF 66.61\u00b11.20 54.23\u00b11.06 41.12\u00b10.11 69.10\u00b10.29 54.86\u00b10.42 42.55\u00b10.36 As shown in Table 1, the proposed CromSS can improve the effectiveness of the pretrained encoders in remote sensing image segmentation\u2014in particular for S2 multi-spectral data. The improvement for S1 radar data is less significant. We attribute this discrepancy to the different capabilities of two modalities in the pretraining task, i.e., land cover classification in this work. The sample selection in CromSS is still fundamentally based on its own confidence masks for each modality. S1, which can be regarded as a weak modality in this case, can potentially take more advantages from S2 with additional specific strategies. Furthermore, the middle fusion strategy showcases a larger margin compared to late fusion, which indicates that the implicit data fusion via decoder weight sharing can boost the learning across modalities to some extent. We can also observe some improvements of single-modal pretraining with noisy labels compared to DINO and MoCo. These outcomes further demonstrate the potential of using noisy labels in task-specific pretraining for segmentation downstream tasks. 
4 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop 5" + }, + { + "url": "http://arxiv.org/abs/2402.16164v2", + "title": "Task Specific Pretraining with Noisy Labels for Remote sensing Image Segmentation", + "abstract": "Compared to supervised deep learning, self-supervision provides remote\nsensing a tool to reduce the amount of exact, human-crafted geospatial\nannotations. While image-level information for unsupervised pretraining\nefficiently works for various classification downstream tasks, the performance\non pixel-level semantic segmentation lags behind in terms of model accuracy. On\nthe contrary, many easily available label sources (e.g., automatic labeling\ntools and land cover land use products) exist, which can provide a large amount\nof noisy labels for segmentation model training. In this work, we propose to\nexploit noisy semantic segmentation maps for model pretraining. Our experiments\nprovide insights on robustness per network layer. The transfer learning\nsettings test the cases when the pretrained encoders are fine-tuned for\ndifferent label classes and decoders. The results from two datasets indicate\nthe effectiveness of task-specific supervised pretraining with noisy labels.\nOur findings pave new avenues to improved model accuracy and novel pretraining\nstrategies for efficient remote sensing image segmentation.", + "authors": "Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu", + "published": "2024-02-25", + "updated": "2024-05-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Deep learning turned into a powerful tool for data mining on vast amounts of remote sensing (RS) imagery [1]. However, efficient training of deep learning models requires a large amount of accurate annotations, which is hard to obtain due The work is to appear as a conference paper at IEEE IGARSS 2024. The work of C. Liu, C. M. Albrecht, and Y. Wang is funded by the Helmholtz Association through the Framework of HelmholtzAI, grant ID: ZT-I-PF-5-01 \u2013 Local Unit Munich Unit @Aeronautics, Space and Transport (MASTr). The compute related to this work was supported by the Helmholtz Association\u2019s Initiative and Networking Fund on the HAICORE@FZJ partition. The work of X. X. Zhu is supported by the German Federal Ministry of Education and Research (BMBF) in the framework of the international future AI lab \u201cAI4EO \u2013 Artificial Intelligence for Earth Observation: Reasoning, Uncertainties, Ethics and Beyond\u201d (grant number: 01DD20001). to human labor intensive labeling process. Recently, selfsupervised learning (SSL) has demonstrated great success in alleviating this problem by distillation of representative features from unlabeled data [2]. Existing SSL methods such as contrastive learning [3, 4] primarily rely on image-level information. Those turn out suboptimal for semantic segmentation downstream tasks relative to classification tasks [5]. This discrepancy requests alternative strategies to enhance the efficacy of pretrained models for segmentation tasks. Recent studies indicate that deep learning models may be robust against label noise [6, 7]. Models trained on billions of Instagram image\u2013hashtag pairs without manual dataset curation exhibit excellent transfer learning performance for image classification tasks [8]. Similar results were obtained when pretraining video models on large volumes of noisy socialmedia video data [9]. 
In remote sensing, systematic studies have been carried out to employ crowd-sourced maps like OpenStreetMap (OSM) providing large-scale, publicly available labels for pretraining building and road extraction models [10, 11, 12]. Results indicate that OSM labels, though noisy, can significantly reduce human supervision required to successfully train segmentation models in these tasks. Building upon the success of the existing works, we aim to further explore the potential of noisy labels in model pretraining for RS image segmentation tasks. We target to address the following questions: 1. Can supervised pretraining with noisy labels enhance the performance of encoders in general segmentation tasks compared to SSL methods? If so, what is the mechanism behind it? 2. To what extent does the inconsistency of category definitions between pretraining and fine-tuning tasks impact the overall efficacy of the pretrained encoders? 3. Are the encoders pretrained within a given framework useful when transferred to a different framework utilizing separate decoders for downstream tasks? To answer these questions, we pretrain ResNet encoders in a supervised fashion on noisy labels to compare them with their SSL counterparts pretrained by DINO [3] and MoCo [4]. We assemble two datasets to evaluate model effectiveness: arXiv:2402.16164v2 [cs.CV] 22 May 2024 \f(a) Optical image (b) GT (c) Noisy labels Fig. 1. Visualization of a data triple within the NYC dataset. Table 1. Quality assessment of the NYC noisy labels. CLASS background trees buildings roads MEAN OA 67.83 precision 62.77 78.80 79.76 60.04 70.34 recall 78.72 62.44 60.31 56.89 64.59 IoU 53.67 53.46 52.30 41.26 50.17 \u2022 the New York City (NYC) dataset representing a smallscale, in-domain scenario, and \u2022 the SSL4EO-S2DW dataset potentially used for RS foundation model construction. In the following, we first present the details of the two datasets and our experimental setups in Section 2, followed by results and corresponding discussions in Section 3. We summarize our findings for future lines in Section 4. 2. PRETRAINING WITH NOISY LABELS 2.1. Datasets 2.1.1. NYC dataset The New York City (NYC) dataset was collected over New York City in 2017. We picked the 1m spatial resolution orthophotos as inputs. The four spectral bands are: nearinfrared (NIR), red (R), green (G), and blue (B). We paired the pixel-level ground truth (GT) masks with 8 categories as depicted to the right of Fig. 1. The noisy labels were generated from LiDAR data using the AutoGeoLabel approach as proposed in [13], yet containing three classes, only: trees, buildings, and roads. Unclassified pixels are annotated as background. All data were curated into small patches of 288\u00d7288 pixels, cf. Fig. 1. A total of 26,500 data triples (orthophoto, GT, noisy labels) have been curated, with 4,500 hold out for testing. Table 1 quantifies the quality of the noisy labels. We provide this dataset as test for the effectiveness of noisy label pretraining in a small-scale in-domain scenario. 22,000\u2212X data pairs serve pretraining either with or without noisy labels. The fine-tuning is implemented with 100 randomly selected image-GT pairs from the 22,000 pretraining patches. Optical image Noisy labels Optical image Noisy labels Fig. 2. Visualization of orthophotos and corresponding noisy label masks for two seasons (left|right) at a random location of the SSL4EO-S2DW dataset. Blue, red, yellow, and orange represent water, crops, built area, and bare land, respectively. 2.1.2. 
SSL4EO-S2DW dataset The dataset termed SSL4EO-S2DW we extend from the SSL4EO-S12 dataset [5], a large-scale self-supervision dataset for Earth observation. SSL4EO-S12 samples data globally from 251,079 locations. Each location corresponds to 4 Sentinel-1 and -2 image pairs of 264\u00d7264 pixels from each season. Here, we only include Sentinel-2 data for SSL pretraining. We pair them with the 9-class labels from the Google Dynamic World (DW) project [14], cf. Fig. 2. The 9 classes include: water, trees, grass, flooded vegetation, crops, shrub and scrub, built area, bare land, and ice & snow. We curated 103,793 locations with noisy label masks matching all seasons. SSL4EO-S2DW resembles use cases where noisy labels are still a bit harder to obtain than abundant RS imagery. To evaluate pretrained encoders, we utilize the same downstream segmentation tasks as in [5], namely: DFC2020 [15] for land cover classification and OSCD for urban area change detection [16]. We employ the training and test sets of the OSCD dataset. For the DFC2020 dataset, we utilize the 986 validation patches for fine-tuning, and 5128 test images for testing. 2.2. Implementation Details For pretraining, we use image data and noisy label pairs to train U-Nets with ResNet encoder backbones in a standard supervised setup. Given the dataset sizes, we chose ResNet18 (NYC) and ResNet50 (SSL4EO-S2DW). For transfer learning, we test the pretrained encoders within different frameworks: U-Net [17], DeepLabv3++ [18], and PSPNet [19]. Our pretrained encoders are compared with random initialization and those obtained by DINO and MoCo from [5]. We applied an Adam optimizer on a loss combining CrossEntropy and Dice. Random flipping served as our data augmentation strategy. The pretraining learning rate we set to 1e-3. We use a smaller learning rate of 5e-4 adjusted by a cos-scheduler for fine-tuning. For SSL4EO-S2DW pretraining, we randomly cropped patches into 256\u00d7256, and chose the data from an arbitrary season at each geospatial location and training iteration to act as an additional augmentation strategy. We pretrain the models with a batch size of 256 per GPU. Pre-training for 100 epochs takes about 5 \fTable 2. Fine-tuning results (IoU, %) obtained on the NYC dataset with different frameworks. Framework Pretraining trees grass/schrubs bareland water buildings roads other impervious railroads mIoU random 46.75 22.78 91.09 96.55 44.69 32.73 30.67 90.86 54.28 U-Net DINO 49.34 22.90 79.38 90.04 46.15 42.31 31.70 91.14 57.09 (fixed encoder) MoCo 47.56 22.65 78.76 72.13 47.21 42.27 31.50 91.10 56.52 noisy labels 58.74 26.97 91.05 81.74 59.37 57.10 39.56 91.20 63.08 random 48.39 19.03 79.11 86.24 48.53 43.20 29.02 90.26 55.28 DeepLabv3++ DINO 49.61 19.83 72.68 86.44 51.73 44.11 29.20 75.02 53.46 (fine-tuned encoder) MoCo 49.13 20.65 69.05 87.41 51.98 47.82 30.24 82.68 54.76 noisy labels 54.78 23.86 84.41 92.08 59.22 58.04 37.99 81.10 61.31 Table 3. Fine-tuning results (%) obtained on the DFC2020 dataset using PSPNet as frameworks, where OA presents overall accuracy, and AA is average accuracy. Pretraining fixed encoder fine-tuned encoder OA mIoU AA OA mIoU AA random 56.42 31.50 45.12 58.68 33.56 46.03 DINO 64.82 37.81 48.83 63.64 36.95 49.92 MoCo 63.25 37.67 51.00 61.19 34.86 47.29 noisy labels 66.66 40.88 53.24 67.11 41.06 53.14 hours on an NVIDIA A100 GPU with the NYC dataset. For SSL4EO-S2DW it takes 4 GPUs to train for 100 epochs in 6 hours. 3. EXPERIMENTAL RESULTS 3.1. Transfer Learning 3.1.1. 
NYC dataset We transfer the (3+1)-class noisy label pretrained encoder to an 8-class land cover land use segmentation downstream task. While we freeze the encoder when the downstream task utilizes the same framework as in pretraining (U-Net), we let adjust the encoder weights along with the decoder when adopting a different framework (DeepLabv3++). As shown in Table 2, the noisy label pretrained encoder outperforms the other models on almost all classes although the pretrained model has not been pretrained on some of the classes: Including semantic information for pretraining is beneficial for models to learn generic features that are discriminative for semantic segmentation downstream tasks. Notably, the pretrained encoder works for different training frameworks, too. In this case, pretrained encoders seem compatible when transferred to, e.g., DeepLabv3++. In contrast, the two SSL methods fail to show an edge over random initialization on the NYC dataset. We partly attribute this result to a lack of large amounts of unlabeled data in a small-scale setup. 3.1.2. SSL4EO-S2DW dataset We picked PSPNet and U-Net as frameworks for the DFC2020 and OSCD datasets. We test two fine-tuning settings with Table 4. Fine-tuning results (%) obtained on the OSCD dataset using U-Net as frameworks. Pretraining fixed encoder fine-tuned encoder OA IoU Precision OA IoU Precision random 95.47 17.08 78.06 95.16 21.80 66.27 DINO 95.59 21.74 73.83 95.53 31.05 66.45 MoCo 95.66 23.81 73.34 95.70 32.56 66.39 noisy labels 95.79 26.80 73.90 95.98 33.37 71.34 fixed and fine-tuned encoders on both datasets. Table 3 and Table 4 present results, respectively. Noisy label\u2013 pretrained encoders yield better results when compared to SSL\u2013pretraining or when referenced to random initialization. Performance margins increase when the encoders are fixed in the fine-tuning stage, which indicates that the encoder pretrained with noisy labels is able to generate features adapted to segmentation tasks. Our experiments on two distinct downstream tasks further illustrate the generalizability of encoders pretrained by noisy labels. 3.2. Impact of Label Noise on Model Training To understand the mechanism behind the success of noisy label pretraining, we utilize a 3-class version of the NYC dataset discriminating trees, buildings, and background as illustrated in Fig. 3. We train two U-Net models from scratch employing all available training patches; one with noisy labels and one with exact labels (GT masks), respectively. After training, we analyze the output features of each convolutional layer of the U-Net given an input patch. For one input patch, we visualize the first principle component of each such output feature data cube in Fig. 3. We observe: \u2022 the encoder features visually share spatial characteristics, i.e., the encoder seems little impacted by label noise \u2022 the closer the convolutional layer gets to the U-Net\u2019s output, the more the features become contaminated by label noise Since the encoder learns to extract basic spatial features from local semantic information of the input data, it is affected little by label noise. In terms of backpropagation, decoders are \fFig. 3. Visualization of the dominant principle component from the output of each convolutional layer of the U-Net model trained with exact labels (top row) and noisy labels (bottom row). (a) Fisher ratios (b) KL divergences Fig. 4. 
Quantitative assessment of data statistics for convolutional modules after U-Net training: (a) Fisher ratios of module output features after U-Net training. Shaded areas indicate standard deviations w.r.t. data samples investigated. (b) KL divergence of module weight statistics comparing the model trained with exact labels to the one trained on noisy labels. The solid line connects the smoothed results by a Savitzky\u2013Golay filter, and the shaded areas indicates the standard deviation independently training 5 models from scratch. closer to the (noisy) label mask to optimize the U-Net\u2019s output on. Thus, while the decoder adapts to output noisy labels, the encoder is less biased by label noise. To quantify our observations, we calculate the Fisher ratios of output feature cubes as presented in Fig. 4 (a). The Fisher ratio is a widely used index to assess feature discrimination in pattern recognition [20]: Larger values indicate better discrimination of features. Two insights we read from Fig. 4: 1. decoder features, whether trained on noisy or exact labels, are more discriminative compared to encoder features 2. the discriminative character of decoder features is degraded when the model is trained on noisy labels, i.e., the decoder is significantly affected by label noise We did also investigate the model training with exact and noisy labels by computing the Kullback\u2013Leibler (KL) divergence [21] of weight statistics within each module. As demonstrated in Fig. 4 (b), the convolutional layers in the encoder are governed by similar weight statistics, while those in the decoder follow diverging weight statistics towards the semantic segmentation outputs. Those observations highlight that encoder features are less biased by label noise, yet, they benefit from the semantics provided by pixel-level noisy label masks. 4." + } + ], + "Hunsoo Song": [ + { + "url": "http://arxiv.org/abs/2309.15978v1", + "title": "Assessment of Local Climate Zone Products via Simplified Classification Rule with 3D Building Maps", + "abstract": "This study assesses the performance of a global Local Climate Zone (LCZ)\nproduct. We examined the built-type classes of LCZs in three major metropolitan\nareas within the U.S. A reference LCZ was constructed using a simple rule-based\nmethod based on high-resolution 3D building maps. Our evaluation demonstrated\nthat the global LCZ product struggles to differentiate classes that demand\nprecise building footprint information (Classes 6 and 9), and classes that\nnecessitate the identification of subtle differences in building elevation\n(Classes 4-6). Additionally, we identified inconsistent tendencies, where the\ndistribution of classes skews differently across different cities, suggesting\nthe presence of a data distribution shift problem in the machine learning-based\nLCZ classifier. Our findings shed light on the uncertainties in global LCZ\nmaps, help identify the LCZ classes that are the most challenging to\ndistinguish, and offer insight into future plans for LCZ development and\nvalidation.", + "authors": "Hunsoo Song, Gaia Cervini, Jinha Jung", + "published": "2023-09-27", + "updated": "2023-09-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "physics.ao-ph" + ], + "main_content": "INTRODUCTION Urban geospatial data have been widely used to investigate the current status of cities and to develop urban design strategies for sustainable urbanization. 
Many studies have developed a variety of map classification schemes that can abstract an urban landscape into a form that can better illustrate its impacts on urban environments. The local climate zone (LCZ) is one successful classification scheme that classifies land cover into 17 classes based on its physical properties related to urban climate [3]. In particular, the LCZ classes are closely associated with the urban 3D structure, unlike the typical land cover classification system [4, 5, 6]. In 2022, a global LCZ map with a 100m resolution was released [1]. The LCZ classification was performed using a supervised random forest classifier that used a large amount of labeled training samples and diverse earth observation inputs. The earth observation inputs include earth observation satellite images and other sources that provide textual and height information [2, 7]. While the LCZ classifier integrated 3D urban structure information into its classification process, we hypothesize that some errors or biases may be present, particularly when characterizing 3D urban features. This is because the height input for the LCZ classifier is largely based on approximate height information with coarse resolution [1]. This research assesses the recent global LCZ product based on a simplified LCZ classification rule that employs high-resolution 3D building maps generated from airborne LiDAR data. 2. DATASETS AND METHODS We conducted our evaluation using three geographically diverse and distantly located metropolitan areas: New York City, NY; Dallas, TX; and Denver, CO. In particular, our assessment primarily focused on built-type classes whose classifications are notably affected by 3D building information. Specifically, LCZ Classes 1-6, 8, and 9 were examined, due to the scarcity of Classes 7 and 10 in the datasets. For creating a reference, we \u201cre-classified\u201d three metropolitan areas using a simple classification rule. First, we created large-scale 3D building maps at 1 m resolution for these areas (Fig. 1). These maps were generated utilizing an open-source building mapping algorithm [8, 9] applied to LiDAR data from the U.S. Geological Survey\u2019s 3D Elevation Program [10]. We then matched a 100m x 100m 3D building map tile with its corresponding 100m resolution LCZ map. Utilizing the 3D building map, we calculated the Building Surface Fraction (BSF) and the Height of Roughness Elements (HRE) [3]. These measures enabled the classification of Classes 1-6 and Class 9. Subsequently, we \u201cpost\u201d-classified any instances deemed either Class 1-6 or Class 9 into Class 8, if they met the criteria based on Sky View Factor (SVF) and Pervious Surface Fraction (PSF). SVF was calculated from the digital surface model, and PSF was computed using NDVI from the National Agriculture Imagery Program (NAIP) and a surface water map derived from [11]. It is important to note that our simplified LCZ classification scheme is mutually exclusive, though not collectively exhaustive. The proposed rule does not account for all conditions of LCZ classification and cannot distinguish all 17 LCZ classes. However, the classification rule is explicitly defined and anchored to the standard LCZ classification scheme [3]. This clarity renders the classification results a reliable referThis work is a part of the International Geoscience and Remote Sensing Symposium (IGARSS) 2023 proceedings. Copyright \u00a92023 IEEE. arXiv:2309.15978v1 [cs.CV] 27 Sep 2023 \fFig. 1. 
RGB, 3D Building Map, and Local Climate Zone from experimental datasets ((c) was adapted from [2]) Table 1. Simplified LCZ classification rules Class Classification rule 1 BSF > 0.4 & HRE > 25 2 BSF > 0.4 & 10 \u2264HRE \u226425 3 BSF > 0.4 & 3 \u2264HRE \u226410 4 0.2 \u2264BSF \u22640.4 & HRE > 25 5 0.2 \u2264BSF \u22640.4 & 10 \u2264HRE \u226425 6 0.15 \u2264BSF \u22640.25 & 3 \u2264HRE \u226410 8 SVF > 0.7 & PSF < 0.2 9 0.05 \u2264BSF \u22640.15 & 3 \u2264HRE \u226410 Others N/A 1 BSF: Building Surface Fraction, HRE: Height of Roughness Elements, SVF: Sky View Factor, PSF: Pervious Surface Fraction. ence LCZ for evaluation. Particularly, due to the high accuracy of the airborne LiDAR-based 3D building map, this reclassified LCZ can serve as a highly reliable reference for evaluation, especially in terms of 3D elevation accuracy. 3. EXPERIMENTAL RESULTS Table 2 and Fig. 2. show the comparative distribution of LCZ classes and confusion matrix, respectively, between the LCZ map of the global product (\u201coriginal LCZ\u201d) and the LCZ map resulting from the simplified rule (\u201creclassified LCZ\u201d). For Fig. 3., we normalize the confusion matrices\u2019 (Fig. 2.) each row by its diagonal element to identify where the global LCZ product is likely to get confused. If any off-diagonal element exceeds 1, it indicates the global LCZ product is significantly perplexed by the class of the off-diagonal element. As depicted in Figures 2-3, \u201cheated\u201d areas demonstrate several common tendencies. In Classes 1 and 8, the two different LCZs show a relative alignment. However, there is a notable lack of agreement in Classes 3, 4, and 5. Furthermore, as demonstrated in Table 2, the count of Class 9 is significantly underestimated across all cities, mainly due to confusion with Class 6, which is predominant in the three \fTable 2. Comparison of the number of pixels between original and reclassified LCZ maps by class Dataset LCZ Class 1 2 3 4 5 6 7 8 9 10 Original LCZ [1] New York City 2470 22852 9377 2899 42994 68018 0 29103 223 3897 Dallas 144 248 773 10 1886 82174 0 38008 2670 0 Denver 163 32 985 23 696 46761 0 19273 2784 0 Reclassified LCZ New York City 1672 5658 7439 1307 7511 48034 0 7747 46238 0 Dallas 171 874 1577 195 1628 32125 0 8456 32467 0 Denver 98 381 388 107 951 17315 0 6587 25347 0 Fig. 2. Confusion matrices comparing the original LCZ and the reclassified LCZ for three different cities Fig. 3. Row-wise normalized confusion matrices comparing the original LCZ and the reclassified LCZ for three different cities datasets. If we assume that the primary distinction between Classes 6 and 9 lies in the Building Surface Fraction (BSF), it appears that the classifier for the global LCZ product struggles with identifying detailed building information and tends to be heavily biased toward Class 6. In addition, Classes 4 and 5 of the global LCZ product are often reclassified as either 5 or 6 in the reference LCZ. Given that Classes 4, 5, and 6 primarily vary based on the overall elevation of buildings, it suggests that the global LCZ product struggles in differentiating subtle elevation differences in \u201copen urban areas (Classes 4, 5, and 6)\u201d. Considering that the global LCZ product is primarily based on spectral characters and coarse elevation products, the classification challenge among Classes 4, 5, and 6 is not surprising. 
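For reference, the simplified rule set of Table 1 can be written as a short, explicit classification function. The sketch below encodes those thresholds for a single 100 m cell; the handling of cells that satisfy no rule (returned as None) and of values exactly at shared boundaries is our assumption, since the paper does not spell it out, and non-built and absent classes (7, 10) are not covered.

```python
def classify_lcz_simplified(bsf, hre, svf, psf):
    """bsf: building surface fraction, hre: height of roughness elements (m),
    svf: sky view factor, psf: pervious surface fraction.
    Returns a built-type LCZ class (1-6, 8, 9) or None for 'others'."""
    lcz = None
    if bsf > 0.4:                      # compact built classes
        if hre > 25:
            lcz = 1
        elif 10 <= hre <= 25:
            lcz = 2
        elif 3 <= hre <= 10:
            lcz = 3
    elif 0.2 <= bsf <= 0.4:            # open high/mid-rise classes
        if hre > 25:
            lcz = 4
        elif 10 <= hre <= 25:
            lcz = 5
    if lcz is None and 3 <= hre <= 10: # open/sparse low-rise classes
        if 0.15 <= bsf <= 0.25:
            lcz = 6
        elif 0.05 <= bsf <= 0.15:
            lcz = 9
    # Post-classification: Classes 1-6 or 9 become Class 8 if open and impervious
    if lcz is not None and svf > 0.7 and psf < 0.2:
        lcz = 8
    return lcz
```

Because each cell is classified by the same explicit thresholds in every city, the resulting reference LCZ is insensitive to the spectral data-distribution shifts discussed in the results.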
Interestingly, while the global LCZ product faces similar classification challenges across cities, the areas of major disagreement vary significantly among the cities, suggesting the presence of a data distribution shift. For instance, as depicted in Figure 3, New York City shows the highest disagreement rates in Classes 4 and 5, whereas Dallas and Denver have the most pronounced disagreements in Class 3. The reference LCZ is created with a rule that applies uniformly across all cities, with Classes 3-5 determined exclusively by the 3D building map. As such, we can anticipate that the reference LCZ is considerably less affected by data distribution shifts. Given this robustness, the observed discrepancies across cities indicate that the classifier for the global LCZ product might have experienced data distribution shifts. This suggestion is further supported by the imbalanced class ratios across the three datasets, as shown in Table 2. These inconsistencies could stem from variations in spectral characteristics or from discrepancies in the accuracy of the GIS products used for mapping across different cities.

4. SUMMARY AND FUTURE REMARKS

The global LCZ product [1] was assessed based on high-resolution 3D building maps, focusing on the built-type classes of three metropolitan cities in the U.S. A reference LCZ map was generated using a simple yet robust classification rule with 3D building maps. We found that the global LCZ product tends to underestimate Class 9 while overestimating Classes 6 and 8. Also, the product exhibits limited capability in discerning subtle elevation differences among Classes 4-6. Moreover, noticeable inter-city biases in the distribution of these classes were observed. The significance of machine learning-based LCZ mapping is unquestionable, especially considering the scarcity of high-resolution 3D elevation products. Nonetheless, in light of the rapidly changing landscape of available geospatial data, such as airborne LiDAR data, it may be necessary to amend the LCZ classification method and scheme. Such changes should be accompanied by thorough validation to ensure consistent outcomes across large areas and to reliably extract knowledge from multi-city studies. Although this might compromise the role of LCZ as a tool for characterizing climate-land interactions, a mapping method based on more definitive features (such as the number of buildings, rather than the Aspect Ratio, which lacks a standardized computation method) could enhance consistency across extensive areas and foster more generalizable results. Our current simple classification rule has limitations in capturing the non-linear, complex features that are essential for comprehensive LCZ classification. Nonetheless, our evaluation results, obtained through a methodology that ensures high consistency across different cities, effectively reveal the existing challenges in LCZ classification. We anticipate that our evaluation methods and results will contribute to the development of a reliable validation tool and offer valuable insights for the future enhancement of LCZ products." + }, + { + "url": "http://arxiv.org/abs/2208.11243v1", + "title": "A new explainable DTM generation algorithm with airborne LIDAR data: grounds are smoothly connected eventually", + "abstract": "The digital terrain model (DTM) is fundamental geospatial data for various\nstudies in urban, environmental, and Earth science.
The reliability of the\nresults obtained from such studies can be considerably affected by the errors\nand uncertainties of the underlying DTM. Numerous algorithms have been\ndeveloped to mitigate the errors and uncertainties of DTM. However, most\nalgorithms involve tricky parameter selection and complicated procedures that\nmake the algorithm's decision rule obscure, so it is often difficult to explain\nand predict the errors and uncertainties of the resulting DTM. Also, previous\nalgorithms often consider the local neighborhood of each point for\ndistinguishing non-ground objects, which limits both search radius and\ncontextual understanding and can be susceptible to errors particularly if point\ndensity varies. This study presents an open-source DTM generation algorithm for\nairborne LiDAR data that can consider beyond the local neighborhood and whose\nresults are easily explainable, predictable, and reliable. The key assumption\nof the algorithm is that grounds are smoothly connected while non-grounds are\nsurrounded by areas having sharp elevation changes. The robustness and\nuniqueness of the proposed algorithm were evaluated in geographically complex\nenvironments through tiling evaluation compared to other state-of-the-art\nalgorithms.", + "authors": "Hunsoo Song, Jinha Jung", + "published": "2022-08-24", + "updated": "2022-08-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "eess.IV" + ], + "main_content": "Introduction The digital terrain model (DTM), also often referred to as the digital elevation model (DEM), is a 3-dimensional representation of the bare earth surface excluding any ground-standing objects like trees and buildings. DTM is an essential geospatial data for various studies, namely, hydrological modeling (Callow et al., 2007; Chaney et al., 2018; Jarihani et al., 2015), glacier monitoring (Shean et al., 2019), landslide monitoring (Jaboyedo\ufb00et al., 2012; Tseng et al., 2013; Kim et al., 2015), land-cover classi\ufb01cation (RodriguezGaliano et al., 2012; Yan et al., 2015), building mapping (Song and Jung, 2022), forestry (Oh et al., 2022; Simpson et al., 2017), and agricultural management (Tarolli and Stra\ufb00elini, 2020). Since errors and uncertainties in DTM can signi\ufb01cantly a\ufb00ect the knowledge gained from such studies, producing an accurate and reliable estimate of the terrain is crucial (Goulden et al., 2016; Wechsler, 2007). Generating DTM requires classifying the bare earth surface among all 3-dimensional coordinate measurements over the earth. Light detection and ranging (LiDAR) (Chen et al., 2017), radar (Farr et al., 2007; Rizzoli et al., 2017), and photogrammetry technologies (Turner et al., 2012; Bhushan et al., 2021; Shean et al., 2016) that can retrieve the 3dimensional coordinates of the earth are generally used for producing DTM. Among di\ufb00erent sources for producing DTM, airborne LiDAR data (airborne laser scanning) has become the most powerful sensor for generating highresolution DTM in terms of its accuracy, and numerous algorithms have been developed for DTM generation. However, generating an accurate and reliable DTM with a scalable method remains a challenge. DTM generating algorithm with airborne LiDAR data usually necessitates the procedure of classifying ground and non-ground objects. In most cases, the available data for this binary classi\ufb01cation is coordinate measurements of the earth\u2019s surface. Therefore, geometrical shapes and associations among coordinates are used for the classi\ufb01cation. 
Typically, DTM generating algorithms aim to make a decision based on the assumption that ground is generally smooth while non-ground objects have protruding shapes (Meng et al., 2010; Chen et al., 2017). Considering that most DTM generating algorithms share the common goal of discriminating between smooth and protruding shapes, algorithms can be classi\ufb01ed according to how to represent 3-dimensional coordinate measurements. Point cloud (Bartels and Wei, 2010; Bartels et al., 2006; Sithole and Vosselman, 2005; Vosselman, 2000; Zhang et al., 2016), triangulated irregular network (TIN) (Axelsson, 2000; Sohn and 2 \fDowman, 2002; Zhang and Lin, 2013), and image grid (Amirkolaee et al., 2022; Chen et al., 2012; Gevaert et al., 2018; Hu and Yuan, 2016; Lohmann et al., 2000; Mongus and \u02c7 Zalik, 2013; Wack and Wimmer, 2002; Zhang et al., 2003) are the three most common representations of 3-dimensional coordinate measurements. First, point clouds-based algorithms typically consider each point\u2019s relative coordinates with respect to its local neighboring points (Meng et al., 2010). Then, the common way to classify non-ground points is based on a discriminant function describing slopes among a set of points. The point clouds representation has an advantage in that it can preserve and directly use the raw measurements, and it allows more \ufb02exible operations as its representation is non-gridded. However, handling outlier is di\ufb03cult, and it often fails to produce reliable results particularly when the local point density is varying. Second, TIN represents 3-dimensional coordinate measurements as a continuous surface consisting of triangular facets, also referred to as a triangle mesh. The TIN representation allows algorithms to e\ufb00ectively use the local structure of point coordinates as each triangular facet can be assumed as an approximation of the local surface. However, it has the same disadvantages as point cloud representations in that they are irregularly spaced data that are di\ufb03cult to process. Lastly, the image grid representation projects the point cloud into a 2-dimensional image grid and considers the elevation (Z) of coordinates as the pixel value of the image. In other words, this method creates a digital surface model (DSM) \ufb01rst and generates DTM. As it transforms 3-dimensional measurements into gridded data, it has advantages in that it allows morphological operation and the operation is conceptually simple. However, it necessarily distorts and compromises the original data. Regardless of which representation method is adopted, DTM generating algorithms try to classify ground from non-ground based on the assumption that ground is generally smooth while non-ground objects have protruding shapes. The di\ufb03cult thing in the classi\ufb01cation is that there is no clear boundary between \u201csmoothness\u201d and \u201dprotrudeness\u201d. Non-ground objects also can have smooth surfaces, and the challenge is how large an area should be taken into account when classifying objects (Meng et al., 2010). For example, a large building with a \ufb02at roof can be classi\ufb01ed as non-ground only if the algorithm considers a larger area than the building. Otherwise, points near the center of the \ufb02at roof will be classi\ufb01ed as ground. Yet, simply expanding the area of consideration does not help the problem. 
The more the algorithm considers a large area, the more variables there are, and the more the 3 \falgorithm has to \ufb01nd a complex and sophisticated decision boundary. This often results in requiring a lot of parameter tuning for the algorithm, and in turn, the generalization capability of the algorithm would be degraded. In a nutshell, resolving uncertainties in measuring the smoothness and in determining area-to-consider is the key to the algorithm. To be speci\ufb01c, DTM generating algorithms based on either point clouds or TIN generally set search radius and compute slopes or angles to adjacent points within the search radius for quantifying the smoothness. Then, the discriminant function to \ufb01lter non-ground objects is the function of the search radius and slopes. Again, the challenging part is setting a proper size of the search radius. As object size varies considerably, the pre-determined discriminant function to \ufb01lter non-ground objects is hard to generalize for diverse landscapes. Especially as point clouds and TIN representations operate with irregularly spaced data, de\ufb01ning suitable parameters for search radius and slope threshold can be more di\ufb03cult. Even with the image grid representation that can take advantage of simple morphological operations easily, the discriminant function needs to determine a certain window (or kernel) size, conceptually the same as the search radius. Indeed, previous studies had di\ufb03culties in selecting the proper window size for the morphological operation as the shape and the size of various objects are hard to generalize (Lohmann et al., 2000). Also, setting a proper search radius often requires prior knowledge of the given area. To resolve the problem of pre-de\ufb01ned search radius, several algorithms have been developed to adaptively change their search radiuses (Zhang et al., 2003). However, as algorithms become more complex, they tend to require more computations and a larger number of parameters to produce satisfactory results, while losing generalization capability, becoming more di\ufb03cult to set proper parameters, and making the resulting DTM di\ufb03cult to predictable. Alternative recent methods for generating DTM algorithms include deep learning-based methods (Amirkolaee et al., 2022; Gevaert et al., 2018; Hu and Yuan, 2016) and a cloth simulation-based method (Zhang et al., 2016). Deep learning-based algorithms usually regard DSM as an image and try to extract non-ground pixels similar to the common approach for computervision tasks, namely, semantic segmentation or object detection. Although deep learning-based methods produced promising results, they require a large number of labeled training samples and huge computation resources. Also, the quality of output is bounded by not only the LiDAR data but also the reference DTM for the training, and the trained model may not reproduce 4 \fsatisfactory results when the target area has di\ufb00erent properties from that of the trained area. Unlike deep learning-based methods, the cloth simulationbased method assumes that a virtual cloth covering on the upside-down DSM could be a DTM. With this simulation of the physical process, the cloth simulation-based method also produced reasonable results and reduced the number of parameters to tune compared to those of the conventional methods. 
However, it still requires tricky parameter tunings and can cause errors when dealing with very large low buildings and terrains having unique shapes, such as bridges (\u02c7 Stroner et al., 2021; Yu et al., 2022). This paper presents a novel, open-source DTM generating algorithm for airborne LiDAR data based on the image grid representation. The algorithm projects point clouds into a \ufb01nely gridded DSM so that DSM su\ufb03ciently preserves the information of the original point cloud. With the \ufb01nely rasterized DSM, the algorithm uses a Sobel operator to calculate the slope information and classi\ufb01es ground and non-ground based on a simple but novel assumption. The assumption is that any non-ground object is surrounded by a certain steep level of a slope while grounds are smoothly connected to each other eventually. Di\ufb00erent from previous algorithms relying on the local neighborhood for de\ufb01ning a discriminant function, the proposed algorithm can consider beyond the local neighborhood and classi\ufb01es non-ground objects based on their context. More importantly, as it classi\ufb01es ground and non-ground based on a physically straightforward rule, slope, the parameter tuning is very easy and straightforward, and the results are explainable and predictable. The algorithm turns out to be robust in diverse scenarios, computationally e\ufb03cient, and easy-to-use as it requires only a few parameters that can be easily determined by the user\u2019s objective. In addition, the algorithm includes a feature that can detect and map the elevation of the water body. Therefore, users of this algorithm can expect a seamless, full, rasterized DTM over the entire area of interest with only raw data that includes XYZ coordinates of point observations. The algorithm will be publicly available via GitHub. The remainder of this paper is organized as follows. Section 2 elaborates on the proposed DTM generating algorithm. Section 3 discusses experimental results in comparison with widely adopted DTM generating methods (i.e., cloth simulation \ufb01ltering (Zhang et al., 2016) and TIN-based method (Axelsson, 2000)) and provides suggestions for parameter tuning. Section 4 concludes the paper. 5 \f2. Proposed DTM generating algorithm The proposed algorithm consists of the following main four steps: (1) \ufb01nely rasterized DSM generation, (2) break-line mapping with the Sobel operator, (3) \ufb01ltering non-ground objects, and (4) water mapping. The following subsection describes each of the four main steps in detail and provides a summary of the proposed algorithm. 2.1. Finely rasterized DSM generation 3-dimensional coordinates of the earth\u2019s surface are usually collected as point cloud data, and the point cloud is assumed to be proper enough to model DTM. However, the point cloud is still a set of observations of realworld entities, so it necessarily has a limitation in perfectly representing the world. In particular, sensors most widely used for DTM generation, including LiDAR, have limitations in that their outputs are irregularly spaced 3-dimensional coordinates and their local point densities are inevitably varying. Even if the same laser pulse rate has been used during the \ufb02ight mission, the point spacing will inevitably be di\ufb00erent as it is a\ufb00ected by many factors such as \ufb02ight con\ufb01guration, \ufb02ight condition, and objects on the ground (Habib et al., 2011; Morsdorf et al., 2008; Yu et al., 2004). 
This is the main reason why most algorithms based on point cloud or TIN representations require many parameters to tune and are hard to generalize across scenarios. In contrast, the image grid representation provides regularly spaced data. Specifically, when the point cloud is an observation of the earth's surface from airborne laser scanning, the image grid representation of the point cloud is the DSM. To generate a DSM, users need to determine the grid size for rasterization, and the grid size often becomes the resolution of the DTM. Depending on the grid size, multiple points can share the same grid cell and some cells might not receive any points. Down-sampling, which generates a coarse-grid DSM, often accompanies the rasterization to prevent void grids (Hyyppa et al., 2001; Maltezos et al., 2018; Oh et al., 2022). Here, this type of rasterization with down-sampling is referred to as "coarse rasterization". Coarse rasterization can prevent void grids but results in data loss. To alleviate the data loss problem, this study adopts a "fine rasterization" that projects the point cloud into a finely and regularly spaced image grid. Figure 1 provides a graphical illustration comparing coarse and fine rasterization: to project 7 observation points into an image grid, the coarse rasterization uses an image of 2 by 2 grids while the fine rasterization uses an image of 3 by 3 grids. The coarse rasterization does not have void grids, while the fine rasterization has void grids initially. Void grids of the up-sampled image grid in the fine rasterization are filled with nearest-neighbor interpolation. Fine rasterization can preserve more observation points with a smaller displacement (registration) error than coarse rasterization, and thus it helps to generate a precise, high-resolution DTM. However, the image grid representation, with either coarse or fine rasterization, may still lose data if several points coexist in the same grid cell. When multiple points occupy the same pixel, our DTM generation algorithm uses the lowest elevation point (the last return of the LiDAR points) for the DSM value, as the ground is typically the lowest elevation among neighboring points.

Figure 1: A comparison between (a) coarse rasterization and (b) fine rasterization. Fine rasterization is a relative concept compared to coarse rasterization. Either all or the majority of the original points can be preserved with a marginal horizontal displacement in the fine rasterization, depending on the user-defined grid size. Different colors of the grid represent different elevations.

2.2. Break-line mapping with the Sobel operator

A Sobel operator is a widely used kernel, particularly for edge detection in image processing applications, as it essentially computes the gradient of the image intensity (Abdou and Pratt, 1979). Typically, a Sobel operator convolves two 3 by 3 kernels with the original image, where the kernels calculate the derivatives in the horizontal and vertical directions, respectively. The Euclidean norm of the two derivatives then describes the gradient at each pixel of the image. When the image is a DSM, the gradient can be used to approximate the slope of the surface (Gelbman and Papo, 1984). Therefore, the DSM can be transformed into a slope map that describes the slope of the topography.
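A compact sketch of these two steps is given below: points are projected onto a fine grid keeping the lowest return per cell, voids are filled by nearest-neighbor interpolation, and the Sobel gradients of the resulting DSM are converted into a slope map in degrees. The function names, the SciPy-based implementation, and the Sobel normalization are our choices for illustration, not the released implementation.

import numpy as np
from scipy import ndimage

def fine_dsm(points, cell=0.5):
    # points: (N, 3) array of X, Y, Z coordinates
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.inf)
    np.minimum.at(dsm, (row, col), z)  # lowest return wins when several points share a cell
    void = np.isinf(dsm)
    # fill void cells with the value of the nearest non-void cell
    _, idx = ndimage.distance_transform_edt(void, return_indices=True)
    return dsm[idx[0], idx[1]]

def slope_map(dsm, cell=0.5):
    # Sobel derivatives approximate dz/dx and dz/dy (the 3x3 kernel sums to 8 times the step)
    gx = ndimage.sobel(dsm, axis=1) / (8.0 * cell)
    gy = ndimage.sobel(dsm, axis=0) / (8.0 * cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))  # slope in degrees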
Based on the slope map, we delineate a break-line map that shows the lines where the topography is steeper than a certain level of slope. Figure 2 (a-c) illustrates an exemplary procedure of break-line mapping with the Sobel operator.

Figure 2: A procedure of the proposed DTM generation method: break-line mapping (a-c), non-ground filtering (d-e), and DTM (f).

2.3. Filtering non-ground objects

A common assumption for DTM generation is that ground is generally smooth while non-ground objects have protruding shapes. In addition to this conventional assumption, we add the assumption that any non-ground object is surrounded by steep slopes while grounds are smoothly connected eventually. Thus, the proposed algorithm filters out any area surrounded by more than a certain degree of slope (i.e., the slope threshold). This assumption is reasonable and robust, as hills, high-relief terrains, cliffs, mountain ranges, valleys, and overpasses are eventually connected to smooth surfaces in most cases, while non-ground objects like buildings and trees are enclosed by steep slopes or break-lines. In addition, since a break-line can extend arbitrarily far, the algorithm is not confined to the local neighborhood but can consider the global neighborhood and can classify non-ground objects regardless of their sizes and shapes. As a result, a non-ground object in our algorithm is clearly defined as a set of break-lines plus the area surrounded by those break-lines. Lastly, pixels classified as non-ground objects are masked and linearly interpolated from neighboring ground elevations to produce a seamless rasterized DTM. The essential parameter of the proposed algorithm is the slope threshold, which controls the delineation of break-lines. We claim that this parameter has a clear physical meaning and can be easily tuned based on the user's objective and the topographical characteristics. We set the slope threshold to 45 degrees by default because it is robust enough to produce a reliable DTM in most topographies. The impact of this parameter and suggestions for its selection are discussed in Section 3.2.1. A few additional considerations were put into the algorithm to increase its scalability and facilitate its practical usage. First, some non-ground objects can lie on the edge of a given DSM layer. In this case, the object cannot be classified as non-ground because it is not fully surrounded by a break-line but is partially open due to the limited data extent (the extent of the given DSM layer). Therefore, the algorithm initially sets the edge of the DSM as a break-line. Note that this action will also enclose ground that was originally not enclosed by break-lines. To prevent this issue, an area-based decision determines whether an enclosed area is ground or non-ground: First, if the enclosed area is smaller than a low limit (A1), it is determined to be non-ground. Second, if the enclosed area is larger than a high limit (A2), it is determined to be ground. Third, for an enclosed area between the low limit (A1) and the high limit (A2), a metric called rectangularity, the ratio of the enclosed area to the area of its minimum bounding rectangle, is considered. If the rectangularity is larger than a certain value (R), the enclosed area is classified as non-ground; otherwise, it is classified as ground.
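A minimal sketch of this enclosed-area decision, assuming the slope map from the previous step and SciPy connected-component labelling; the axis-aligned bounding box is used as a simple stand-in for the minimum bounding rectangle, and the default thresholds quoted in the next paragraph (40,000 m², 100,000 m², 50%) appear here only as example values.

import numpy as np
from scipy import ndimage

def filter_nonground(slope_deg, cell=0.5, slope_thr=45.0,
                     a1=40_000.0, a2=100_000.0, rect_thr=0.5):
    breakline = slope_deg > slope_thr
    # treat the data edge as a break-line so that border objects can also be enclosed
    breakline[0, :] = breakline[-1, :] = True
    breakline[:, 0] = breakline[:, -1] = True
    labels, n = ndimage.label(~breakline)  # areas enclosed by break-lines
    nonground = breakline.copy()
    px_area = cell * cell
    for lab, sl in enumerate(ndimage.find_objects(labels), start=1):
        mask = labels[sl] == lab
        area = mask.sum() * px_area
        rectangularity = area / (mask.shape[0] * mask.shape[1] * px_area)
        if area < a1 or (area <= a2 and rectangularity > rect_thr):
            nonground[sl][mask] = True  # small or building-like enclosed area -> non-ground
    return nonground

The masked non-ground pixels would then be filled by interpolating the surrounding ground elevations to obtain the final DTM.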
This decision is based on the assumption that relatively large non-ground objects are mostly large buildings with rectangular shapes. Also, as all areas are ultimately bounded by the size of the data extent, A2 is necessary. We set A1, A2, and R to 40,000 m², 100,000 m², and 50% by default, because objects larger than 40,000 m² are rectangular-shaped buildings in most cases. We found that these default values hardly produce artifacts. Figure 2 (d-e) displays an exemplary procedure of non-ground filtering, and Figure 2 (f) shows the final DTM. Figure 3 illustrates the ground and non-ground classification rule of the proposed DTM generation method.

Figure 3: The ground and non-ground classification rule of the proposed DTM generation method.

2.4. Water mapping

As the main task of DTM generation is typically considered a classification of ground versus non-ground, a group of algorithms called "ground filtering algorithms" has received most of the attention rather than DTM generation itself (Meng et al., 2010). Studies also often evaluate the performance of DTM generation with binary classification metrics (Sithole and Vosselman, 2004; Mongus and Žalik, 2013; Hu et al., 2015). However, a ground filtering algorithm is, by definition, limited to classifying ground and non-ground and is not intended to map a full digital map that includes both ground and water bodies. Also, external sources for water mapping are generally readily available thanks to the large volume of accumulated remotely sensed imagery (Huang et al., 2018). Perhaps these are the reasons why most DTM generation algorithms have ignored water body mapping (Hu and Yuan, 2016; Gevaert et al., 2018; Amirkolaee et al., 2022), even though subsequent analyses based on the DTM often require a water map. However, the use of external data sources can lead to errors due to registration issues or mismatches in spatial and temporal resolution with the LiDAR data, and obscuration from clouds can be a problem when timely mapping is needed. In addition, a water map itself can help prevent errors in DTM mapping (Susaki, 2012). Therefore, our open-source DTM generation workflow includes a function that extracts water bodies and their elevations. In the proposed algorithm, water pixels are identified based on the assumption that the point density over water bodies is much lower than over non-water areas, as water bodies hardly reflect the laser pulses. With the finely rasterized DSM before interpolation, the average point density (P) of a given scan is calculated by dividing the number of non-void grids by the total number of grids. The number of non-void grids within a sliding window of a certain size (N pixels) then follows a binomial distribution B(N, P). Specifically, B(N, P/2) was used to compensate for the imbalance of point density due to scanning overlap and to avoid over-detecting water. Based on this binomial distribution, the lower confidence bound was used as the decision boundary for water classification. We used a window of 9 by 9 and a confidence level of 4 as defaults. Lastly, the elevation of each water body was selected by taking the 10th percentile of the elevations within each water segment to prevent outliers.
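A small sketch of this water test, interpreting the lower confidence bound with "confidence level of 4" as the mean of B(N, P/2) minus four standard deviations; the occupancy raster, the function names, and the uniform-filter counting are our assumptions for illustration.

import numpy as np
from scipy import ndimage

def water_mask(occupancy, window=9, conf=4.0):
    # occupancy: boolean raster of the fine DSM before interpolation, True where a cell received a point
    p = occupancy.mean() / 2.0  # halved average point density, i.e. B(N, P/2)
    n = window * window
    counts = ndimage.uniform_filter(occupancy.astype(float), size=window) * n  # occupied cells per window
    threshold = n * p - conf * np.sqrt(n * p * (1.0 - p))  # lower confidence bound
    return counts < threshold

def water_elevations(dsm, mask):
    # per water segment, take the 10th percentile of elevations as the water level
    labels, nseg = ndimage.label(mask)
    return labels, {lab: np.percentile(dsm[labels == lab], 10) for lab in range(1, nseg + 1)}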
These parameters were set empirically and found to be robust in diverse topographic airborne laser scanning, but it is worth noting that elevation values cannot guarantee the true elevation as observations of water bodies contain lots of noise. This is because a LiDAR for topographic mapping commonly uses a near-infrared laser, which is absorbed by water and cannot re\ufb02ect the laser point. Moreover, the elevation of the water bodies is dynamic in nature due to the water cycle. A more detailed description and impacts of water-related parameters are provided in Section 3.2.3. 2.5. Summary of DTM generation algorithm The proposed DTM generation algorithm can convert a points cloud of airborne laser scanning to a rasterized DTM. The algorithm starts by generating the \ufb01nely rasterized DSM to keep original points and transform the data regularly gridded. Based on the assumption that all non-ground objects are enclosed by a certain level of a steep slope, the algorithm delineates a break-line map with a Sobel operator and classi\ufb01es non-ground objects based on the rectangularity and the size of the enclosed area. Finally, water mapping is performed considering the point density. Our method performs in an end-to-end manner and is easy to use as the meanings of parameters are very straightforward. Also, its results and errors are explainable and predictable in general, which can greatly reduce uncertainties in the resulting DTM. 3. Experiments The proposed DTM generation method (\u201cOUR\u201d) was compared with two of the most popular DTM generation methods, the cloth simulation-based method (\u201cCSF\u201d) (Zhang et al., 2016) and TIN-based ground \ufb01ltering method 11 \f(Axelsson, 2000) implemented in LAStools (\u201cLAS\u201d) 1. To compare their performances, an experimental area consisting of diverse landscapes, such as buildings, hilly forests, cropland, river, and deep valleys, was selected. We will refer to the study area as \u201cPurdue University Dataset\u201d hereafter. Purdue University Dataset includes West Lafayette and Lafayette, Indiana, United States. It covers 4.572 km by 4.572 km. A total of 91,031,226 observation points (4.35 points/m2) were acquired from an airborne laser scanning. The RGB aerial image and the \ufb01nely rasterized DSM were shown in Figure 4. For our DTM generation method, all parameters were selected as default values described in Section 2. For LAS, noise removal preceded as recommended in the Lastools documentation, and default parameters were used for DTM generation. For CSF, the \u201crelief\u201d scenario of CloudCompare 2 plug-in was adopted, and other settings were set as a default. All DTMs were generated with 0.5-meter resolution. To e\ufb00ectively evaluate in a large area, we adopted a tiling comparison method. The tiling comparison is a method that compares maps by dividing them into small tiles (Song and Jung, 2022). Conventionally, ground \ufb01ltering methods have compared their performances by regarding it as a classi\ufb01cation task that determines whether a given point is ground or non-ground (Hu et al., 2015; Mongus and \u02c7 Zalik, 2013; Sithole and Vosselman, 2004). Under the premise that there is a high quality of ground truth points, this method can provide clear comparative, quantitative results among di\ufb00erent ground \ufb01ltering algorithms as the ground \ufb01ltering algorithm itself is to classify ground and non-ground points. 
However, it has a limitation in that accurate ground truth is hard to obtain for a large area, resulting in limited experimental areas (Meng et al., 2010; Polidori and El Hage, 2020). Thus, we adopted the tiling comparison method to effectively compare DTM generation methods. Quantitative measurements, the mean absolute error (MAE) and the root mean square error (RMSE), are also provided for the comparison, as adopted by other image grid-based studies (Chen et al., 2012; Hu and Yuan, 2016; Gevaert et al., 2018; Amirkolaee et al., 2022). To be specific, we tiled the entire DTM of the Purdue University Dataset into 81 tiles so that each tile covers 0.5 km by 0.5 km. Then, we ranked the tiles based on the MAE between our DTM and the others. The MAE was calculated by comparing all pixel elevation values of our DTM to the two other DTMs, respectively. Likewise, the RMSE between our DTM and the others was computed. As the water elevations of LAS and CSF were either 0 or significantly lower values, which are not reliable, we masked the water area before computing the MAE and RMSE.

Figure 4: RGB aerial imagery and gray-scaled DSM over the Purdue University Dataset.
(Footnote 1: https://rapidlasso.com/lastools/; Footnote 2: https://www.cloudcompare.org/)

3.1. Experimental results

This subsection provides the comparison results among OUR, CSF, and LAS. We excerpted four tiles that show distinctive differences among the methods and that can provide helpful information to potential users. Figure 5 shows the aerial RGB images and the three DTMs from the different methods. The RGB images are from the orthoimagery of the U.S. Department of Agriculture's (USDA) National Agriculture Imagery Program (NAIP). The rank denoted with each RGB image indicates the tile's position when the 81 tiles are ordered by the highest MAE. The elevation ranges of OUR's DTM are provided for reference, together with the RMSE and MAE values of the other DTMs computed against OUR's DTM. Figure 5 (a) displays an urban area along the river. CSF and LAS were not able to generate a proper DTM for the large building: CSF regarded the large building as ground, while LAS produced a hole (0 value) just as it did for the river. In contrast, OUR filtered out the building as non-ground and interpolated it with nearby ground elevations. Another distinctive difference can be found at the bridge. As the bridge is connected to the ground without a discrete elevation change, OUR classified the bridge as ground, whereas neither CSF nor LAS considered the bridge a ground object. Figure 5 (b) shows the overpass structures. As the overpass is connected to the ground smoothly, OUR classified the overpass as ground, but both CSF and LAS considered it non-ground. This is because CSF simulates a cloth covering the upside-down DSM; the TIN-based method, LAS, produced a similar result. Another difference is that OUR regarded the pile of soil as ground while CSF and LAS removed it from the ground category. Figure 5 (c) illustrates a disadvantage of OUR, as it overly smoothed deep valleys (i.e., West Lafayette Parks Maintenance). The reason the small, narrow valleys were smoothed is that some of them are enclosed by steep terrain. Figure 5 (d) shows the advantage of OUR in that our method was able to filter out a large building as non-ground and produce a reliable DTM, while a significant portion of the large building remains in the DTMs of both CSF and LAS.
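A short sketch of the tiling comparison described above, assuming the DTMs are co-registered rasters at 0.5 m resolution (so 1000 by 1000 pixels correspond to one 0.5 km tile); the function name and the water-mask handling follow the description above but are otherwise our assumptions.

import numpy as np

def tiled_errors(dtm_ref, dtm_other, water, tile_px=1000):
    # per-tile MAE and RMSE between two DTM rasters, ignoring water pixels
    maes, rmses = [], []
    h, w = dtm_ref.shape
    for r in range(0, h - tile_px + 1, tile_px):
        for c in range(0, w - tile_px + 1, tile_px):
            sl = (slice(r, r + tile_px), slice(c, c + tile_px))
            diff = (dtm_other[sl] - dtm_ref[sl])[~water[sl]]
            maes.append(np.abs(diff).mean())
            rmses.append(np.sqrt((diff ** 2).mean()))
    return np.array(maes), np.array(rmses)

# tiles can then be ranked by descending MAE, e.g. order = np.argsort(-maes)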
In summary, OUR produced more reliable DTMs compared to CSF and LAS, particularly for urban areas with large buildings. Also, OUR has a unique characteristic that can map bridges and overpasses as a category of the ground. It does not mean OUR falsely classi\ufb01ed bridges and overpasses just because bridges and overpasses are human-made structures. Although it is not a natural terrain, it is an arti\ufb01cial terrain like roads and plays a role more like a ground. Moreover, the terrain under the bridge and overpasses are actually unknown. In addition, note that both CSF and LAS partially mapped overpasses as ground as shown in Figure 5 (b). CSF and LAS simply interpolated unmeasured grounds after removing the layer on top. Also, considering CSF and LAS classi\ufb01ed some parts of the overpass as ground classes, we can argue that CSF and LAS were not consistent in their decision rules for overpasses, and their results are di\ufb03cult to predict and explain. Rather than debating whether bridges and overpasses are ground or not, it is worth noting that each DTM de\ufb01nition has its own merits. DTM that regarding bridges and overpasses as the terrain can be bene\ufb01cial in some applications where overpasses and bridges should be considered as ground such as building extraction (Song and Jung, 2022). Lastly, OUR method 14 \fwith the default parameter overly smoothed some steep, narrow areas as shown in Figure 5 (c). However, it can be prevented by tuning the slope threshold. The impact of the slope threshold is detailed in Section 3.2. 3.2. Suggestions for parameter selection 3.2.1. Slope threshold The slope threshold is the most important parameter because our method is based on the unique assumption that non-ground objects are surrounded by break-lines, and break-lines are determined by the slope threshold. We set the slope threshold to 45 degrees as default in experiment 3.1. and found the generated DTM is reliable where terrain relief is moderate (approximately lower than 45 degrees). However, DTM tends to get blurred where the areas are surrounded by steep slopes like a deep narrow valley. To investigate the impact of the slope threshold, we investigated the DTMs when the slope thresholds were set to di\ufb00erent values. A total of three slope thresholds (i.e., 45 degrees, 60 degrees, and 75 degrees) were selected, and their resultant DTMs are shown in Figure 6. The rank indicates the order of the highest MAE values compared to results of 45 degrees. DTM elevation ranges are provided. MAE and RMSE values that are compared to DTM with the default slope threshold (45 degrees) are provided as well. As a result, it was found that the impact of the slope threshold was distinct in mountainous and hilly areas as shown in Figure 6 (a-c). Compared to the DTM with a default slope threshold (45 degrees), DTMs with higher slope thresholds were able to delineate reliable terrain maps near steep and narrow valleys. However, several buildings particularly near hilly areas were identi\ufb01ed as terrain. This is because some buildings could have been connected to the terrain with less than 60 or 75 slope degrees. In the \ufb02at urban area, however, buildings were identi\ufb01ed as a terrain in most cases. 3.2.2. A1, A2, and R A1, A2, and R are to prevent the case where the ground is misclassi\ufb01ed as non-ground when the disconnected, small area of ground lies on the edge of the LiDAR \ufb01le data extent (Please refer to Figure 3). 
Therefore, only A2 would be required if a wider range of point clouds surrounding the target area is available. For example, in practical usage, users can generate the DTM from point clouds that cover a larger area than the target of interest and then crop the central target area to prevent such errors. However, this increases the computational cost, and there are cases where a larger data extent is not available. To address this practical issue, A1, A2, and R were introduced. The following illustration shows the error near the edge of the data extent and how those parameters can resolve it. Figure 7 shows an example of an error near the data boundary and a remedy for the error. Figure 7(a) ("smaller data extent") is the case where the LiDAR files were divided and processed separately, while Figure 7(b) ("larger data extent") shows the case where the LiDAR files were merged and processed together. In the smaller data extent, the diverging bridge ends at the tile boundary and ends up enclosing the interim area between the diverging bridges. Since the interim area was smaller than A1, it was classified as non-ground. If this area had been larger than A1, then depending on the parameters A2 and R, it could have been determined to be ground. In this example, the default parameters were not able to classify the interim area properly. However, when the larger data extent is used, the interim area is eventually connected to a larger ground area, and the combined area of the interim area and the larger ground area is either larger than A2, or larger than A1 with a rectangularity smaller than R. In the end, the larger data extent produced a reliable DTM.

Figure 5: Comparison of DTMs generated from different methods. 4 out of 81 tiles that showed significant and distinctive differences among the DTMs were excerpted.
Figure 6: Comparison of DTMs generated from OUR with different slope thresholds.
Figure 7: An example of an error near the data boundary and a remedy for the error.

3.2.3. Water-related parameters

Water is identified by considering the distribution of the number of points in a given window. A normal distribution was assumed to find water pixels, considering that the point density over water is significantly lower than over non-water areas. As all parameters associated with water body mapping are parameters of this distribution, the parameter setting is easy and intuitive. Unless the point distribution of the scan is severely imbalanced or the laser scan contains a large occluded area (e.g., near tall buildings), the default parameters perform well with most topographic near-infrared LiDAR data. Figure 8 illustrates a water mapping result and the impacts of the water-related parameters. Figure 8(a) displays an RGB image for reference. Figure 8(b) shows the LiDAR point occupancy, with grid cells occupied by LiDAR points in white and empty cells in black. Figure 8(c) shows a zoomed-in view of Figure 8(b). Figure 8(d) illustrates the results of water mapping for different parameter combinations. As described in Section 2.4, we used B(N, P/2) and a confidence level of 4 as defaults, where N is the number of pixels in the sliding window and P is the average point density. The size of the window was 9 by 9 (81 pixels), and the average point density was 0.6.
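For a rough check of where the default cutoff comes from, reading the "confidence level of 4" as the mean of B(N, P/2) minus four standard deviations (our interpretation) gives a value consistent with the threshold quoted below.

import math
n, p = 81, 0.6 / 2.0                   # 9x9 window, halved average point density
mu = n * p                             # 24.3 expected occupied cells per window
sigma = math.sqrt(n * p * (1.0 - p))   # about 4.1
cutoff = mu - 4.0 * sigma              # about 7.8, i.e. fewer than roughly 7-8 occupied cells flags water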
As a result, the central pixel of a window containing fewer LiDAR points than the threshold (7 out of 81) was classified as water. As the threshold decreases, fewer and smaller water segments are detected. After the water segments were mapped, the 10th percentile of the elevations within each segment was used as the elevation of that segment.

Figure 8: Water mappings and the impacts of water-related parameters.

3.3. Limitations

Since DTM generation involves a binary classification between non-ground and ground, our method shares the common limitation of binary classification: a trade-off between omission and commission errors. To be specific, as the slope threshold increases, the DTM of steep areas retains sharper terrain relief, but some non-ground objects can remain as ground. Conversely, if the slope threshold decreases, the DTM becomes smoother, although this helps prevent non-ground objects from appearing in the resulting DTM. This limitation exists in most DTM generation algorithms, and in some of them the trade-off can be controlled by parameter tuning (Chen et al., 2017; Liu, 2008; Meng et al., 2010). For example, CSF can tune the rigidness of the cloth (Zhang et al., 2016), and LAS can tune the parameter for the maximal standard deviation of the planar TIN patches (Axelsson, 2000). It is worth mentioning that the parameter setting of our method is very straightforward and intuitive, and thus the outcome for a given parameter is easily predictable compared to other methods. This advantage enables users to find the proper parameters for their objectives and study areas and can significantly reduce uncertainties in the resulting DTM. Although the proposed algorithm was confirmed to be robust in generating DTMs of diverse topography, it can struggle when the physical shape of the target alone is not enough to identify whether it is ground or non-ground. This limitation is common to most DTM generation algorithms. For example, there is no way to distinguish between a dome-shaped building and rocky terrain of the same shape unless a sophisticated semantic understanding is possible. Adding more delicate decision rules might enable the algorithm to distinguish some rare but difficult cases, but it would compromise the algorithm's generalization capability and computational efficiency. Deep learning-based algorithms that make decisions based on a learned distribution may work better, particularly when sophisticated semantic understanding is needed. However, they necessarily entail errors from distribution shift when the distribution of the target area differs from that of the training area (Moreno-Torres et al., 2012; Tuia et al., 2016). Another limitation of deep learning-based algorithms is that their decisions are constrained by their input size (Amirkolaee et al., 2022; Gevaert et al., 2018; Hu and Yuan, 2016). The typical input size of deep learning-based models for semantic segmentation is 256 by 256, so the model has to decide whether the target is ground or non-ground based on a limited area of 256 m by 256 m if the DSM resolution is 1 m. However, if the input is composed of only the center of a large flat building, the deep model is likely to fail in its mapping. Errors in large object detection frequently occur in deep learning-based methods (Song and Jung, 2022; Ji et al., 2018).
On the other hand, our proposed algorithm can consider the entire data extent for the decision; in other words, inputs do not need to be tiled as in deep learning. More importantly, because the decision rule of deep learning methods is an uncertain, so-called "black box", predicting and explaining their results is difficult, which could jeopardize the credibility of subsequent analyses. Of course, our algorithm is not without errors. However, the magnitude and influence of its errors can be estimated better than with other algorithms, as the parameter tuning is very intuitive and the result of the algorithm can be easily explained. Another limitation to be noted is the uncertainty in water elevation and water mapping. Our DTM mapping workflow includes a feature for water body mapping to alleviate elevation errors near water bodies and to assist subsequent studies. Having the water mapping in our workflow is advantageous because users obtain a DTM with elevations over both terrain and water, and it can replace the post-processing of water mapping that would require an external data source. However, due to the low reflectance of water at near-infrared laser wavelengths, the observations over water contain a lot of noise, resulting in uncertainties in the water elevation. In fact, even though the MAE was calculated after masking the water area in the tile comparison of Figure 5, tiles containing large water bodies recorded the highest MAE (18 out of the top 20 contain water bodies, either a river or a lake). This suggests that the water mask was not perfect and that CSF and LAS also had errors near water bodies. In particular, we observed that water with fast-flowing streams and a lot of floating material often has a high point density and is therefore omitted from the water mask. Previous studies have mapped water with airborne LiDAR data by using supervised classification (Brzank et al., 2008; Smeeckaert et al., 2013) or LiDAR signal intensity (Höfle et al., 2009). Although they require either a training procedure (Brzank et al., 2008; Smeeckaert et al., 2013) or a radiometric correction of intensity (Höfle et al., 2009), they could be an alternative way of mapping water bodies. Future studies for more accurate and scalable water mapping with airborne LiDAR data are needed." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file