AcademicEval / intro_8K /test_introduction_short_2404.16818v1.json
{
"url": "http://arxiv.org/abs/2404.16818v1",
"title": "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals",
"abstract": "Unsupervised semantic segmentation aims to automatically partition images\ninto semantically meaningful regions by identifying global categories within an\nimage corpus without any form of annotation. Building upon recent advances in\nself-supervised representation learning, we focus on how to leverage these\nlarge pre-trained models for the downstream task of unsupervised segmentation.\nWe present PriMaPs - Principal Mask Proposals - decomposing images into\nsemantically meaningful masks based on their feature representation. This\nallows us to realize unsupervised semantic segmentation by fitting class\nprototypes to PriMaPs with a stochastic expectation-maximization algorithm,\nPriMaPs-EM. Despite its conceptual simplicity, PriMaPs-EM leads to competitive\nresults across various pre-trained backbone models, including DINO and DINOv2,\nand across datasets, such as Cityscapes, COCO-Stuff, and Potsdam-3.\nImportantly, PriMaPs-EM is able to boost results when applied orthogonally to\ncurrent state-of-the-art unsupervised semantic segmentation pipelines.",
"authors": "Oliver Hahn, Nikita Araslanov, Simone Schaub-Meyer, Stefan Roth",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "Semantic image segmentation is a dense prediction task that classifies image pixels into categories from a pre-defined semantic taxonomy. Owing to its fundamental nature, semantic segmentation has a broad range of applications, such as image editing, medical imaging, robotics, or autonomous driving (see Minaee et al., 2022, for an overview). Addressing this problem via supervised learning requires ground-truth labels for every pixel (Long et al., 2015; Ronneberger et al., 2015; Chen et al., 2018b). Such manual annotation is extremely time and resource intensive. For instance, a trained human annotator requires an average of 90 minutes to label up to 30 classes in a single 2 MP image (Cordts et al., 2016). While committing significant resources to large-scale annotation efforts achieves excellent results (Kirillov et al., 2023), there is natural interest in a more economical approach. Alternative lines of research aim to solve the problem using cheaper \u2013 so-called \u201cweaker\u201d \u2013 variants of annotation. For example, image-level supervision describing the semantic categories present in the image, or bounding-box annotations, can reach impressive levels of segmentation accuracy (Dai et al., 2015; Araslanov & Roth, 2020; Oh et al., 2021; Xu et al., 2022; Ru et al., 2023). As an extreme scenario for reducing the annotation effort, unsupervised semantic segmentation aims to consistently discover and categorize image regions in a given data domain without any labels, knowing only how many classes to discover. Unsupervised semantic segmentation is highly ambiguous, as class boundaries and the level of categorical granularity are task-dependent. (While assigning actual semantic labels to regions without annotation is generally infeasible, the assumption is that the categories of the discovered segments will strongly correlate with human notions of semantic meaning.) However, we can leverage the fact that typical image datasets have a homogeneous underlying taxonomy and exhibit invariant domain characteristics. Therefore, it is still feasible to decompose images in such datasets in a semantically meaningful and consistent manner without annotations. Despite the challenges of unsupervised semantic segmentation, we have witnessed remarkable progress on this task in recent years (Ji et al., 2019; Cho et al., 2021; Van Gansbeke et al., 2021; 2022; Ke et al., 2022; Yin et al., 2022; Hamilton et al., 2022; Karlsson et al., 2022; Li et al., 2023; Seong et al., 2023; Seitzer et al., 2023). Figure 1: PriMaPs pseudo-label example. Principal mask proposals (PriMaPs) are iteratively extracted from an image (dashed arrows). Each mask is assigned a semantic class, resulting in a pseudo label. The examples are taken from the Cityscapes (top), COCO-Stuff (middle), and Potsdam-3 (bottom) datasets. Deep representations obtained with self-supervised learning (SSL), such as DINO (Caron et al., 2021), have played a critical role in this advance. However, it remains unclear whether previous work leverages the intrinsic properties of the original SSL representations, or merely uses them for \u201cbootstrapping\u201d and learns a new representation on top. Exploiting the inherent properties of SSL features is preferable for two reasons. First, training SSL models incurs a substantial computational effort, justifiable only if the learned feature extractor is sufficiently versatile.
In other words, one can amortize the high computational cost over many downstream tasks, provided that task specialization is computationally negligible. Second, studying SSL representations with lightweight tools, such as linear models, leads to a more interpretable empirical analysis than with the use of more complex models, as evidenced by the widespread use of linear probing in SSL evaluation. Such interpretability advances research on SSL models toward improved cross-task generalization. Equipped with essential tools of linear modeling, i.e., Principal Component Analysis (PCA), we generate Principal Mask Proposals, or PriMaPs, directly from the SSL representation. Complementing previous findings on object-centric images (Tumanyan et al., 2022; Amir et al., 2022), we show that principal components of SSL features tend to identify visual patterns with high semantic correlation also in scene-centric imagery. Leveraging PriMaPs and minimalist post-processing, we construct semantic pseudo labels for each image as illustrated in Fig. 1. Finally, instead of learning a new embedding on top of the SSL representation (Hamilton et al., 2022; Seong et al., 2023; Seitzer et al., 2023; Zadaianchuk et al., 2023), we employ a moving average implementation of stochastic Expectation Maximization (EM) (Chen et al., 2018a) to assign a consistent category to each segment in the pseudo labels and directly optimize class prototypes in the feature space. Our experiments show that this straightforward approach not only boosts the segmentation accuracy of the DINO baseline, but also that of more advanced state-of-the-art approaches tailored for semantic segmentation, such as STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023). We make the following contributions: (i) We derive lightweight mask proposals, leveraging intrinsic properties of the embedding space, e.g., the covariance, provided by an off-the-shelf SSL approach. (ii) Based on the mask proposals, we construct pseudo labels and employ moving average stochastic EM to assign a consistent semantic class to each proposal. (iii) We demonstrate improved segmentation accuracy across a wide range of SSL embeddings and datasets.",
"main_content": "Our work builds upon recent advances in self-supervised representation learning and takes inspiration from previous unsupervised semantic and instance segmentation methods. The goal of self-supervised representation learning (SSL) is to provide generic, task-agnostic feature extractors (He et al., 2020; Chen et al., 2020; Grill et al., 2020). A pivotal role in defining the behavior of self-supervised features on future downstream tasks is taken by the self-supervised objective, the so-called pretext task. Examples of such tasks include predicting the context of a patch (Doersch et al., 2015) or its rotation (Gidaris et al., 2018), image inpainting (Pathak et al., 2016), and \u201csolving\u201d jigsaw puzzles (Noroozi & Favaro, 2016). Another family of self-supervised techniques is based on contrastive learning (Chen et al., 2020; Caron et al., 2020). More recently, Transformer networks (Dosovitskiy et al., 2020) revived some older pretext tasks, such as context prediction (Caron et al., 2021; He et al., 2022), in a more data-scalable fashion. While the standard evaluation practice in SSL (e.g., linear probing, transfer learning) offers some glimpse into the feature properties, understanding the embedding space produced by SSL remains an active terrain for research (Ericsson et al., 2021; Naseer et al., 2021). In particular, DINO features (Caron et al., 2021; Oquab et al., 2024) are known to encode accurate object-specific information, such as object parts (Amir et al., 2022; Tumanyan et al., 2022). However, it remains unclear to what extent DINO embeddings allow for semantic representation of the more ubiquitous multi-object scenes. Here, following previous work (e.g., Hamilton et al., 2022; Seong et al., 2023), we provide further insights. Early techniques for unsupervised semantic segmentation using deep networks (Cho et al., 2021; Van Gansbeke et al., 2021) approach the problem in the spirit of transfer learning and, under certain nomenclatures, may not be considered fully unsupervised. Specifically, starting with supervised ImageNet pre-training (Russakovsky et al., 2015), a network obtains a fine-tuning signal from segmentation-oriented training objectives. Such supervised \u201cbootstrapping\u201d appears to be crucial in the ill-posed unsupervised formulation. Unsupervised training of a deep model for segmentation from scratch is possible, albeit sacrificing accuracy (Ji et al., 2019; Ke et al., 2022). However, training a new deep model for each downstream task contradicts the spirit of SSL of amortizing the high SSL training costs over many computationally cheap specializations of the learned features (Bommasani et al., 2021). Relying on self-supervised DINO pre-training, recent work (Hamilton et al., 2022; Li et al., 2023; Seong et al., 2023) has demonstrated the potential of such amortization with more lightweight fine-tuning for semantic segmentation. Nevertheless, most of this work has treated the SSL representation as an inductive prior by learning a new embedding space over the SSL features (e.g., Hamilton et al., 2022; Van Gansbeke et al., 2022; Seong et al., 2023). In contrast, following SSL principles, we use the SSL representation in a more direct and lightweight fashion \u2013 by extracting mask proposals using linear models (PCA) with minimal post-processing and learning a direct mapping from feature to prediction space.
Mask proposals have an established role in computer vision (Arbelaez et al., 2011; Uijlings et al., 2013) and remain highly relevant in deep learning (Hwang et al., 2019; Van Gansbeke et al., 2021; Yin et al., 2022). Different from previous work, we directly derive the mask proposals from SSL representations. Our approach is inspired by the recent use of classical algorithms, such as normalized cuts (Ncut; Shi & Malik, 2000), in the context of self-supervised segmentation (Wang et al., 2023a;b). However, previous approaches (Van Gansbeke et al., 2021; 2022; Wang et al., 2023a;b) mainly proposed foreground object masks on object-centric data, which are then utilized in multi-step self-training. In contrast, we develop a straightforward method for extracting dense pseudo labels for learning unsupervised semantic segmentation of scene-centric data and show consistent benefits in improving the segmentation accuracy across a variety of baselines and state-of-the-art methods (Hamilton et al., 2022; Seong et al., 2023). 3 PriMaPs: Principal Mask Proposals This work leverages recent advances in self-supervised representation learning (Caron et al., 2021; Oquab et al., 2024) for the specific downstream task of unsupervised semantic segmentation. Our approach is based on the observation that such pre-trained features already exhibit intrinsic spatial similarities that capture semantic correlations, providing guidance for fitting global pseudo-class representations. A simple baseline. Consider a simple baseline that applies K-means clustering to DINO ViT features (Caron et al., 2021). Surprisingly, this already leads to reasonably good unsupervised semantic segmentation results, e.g., around 15% mean IoU when segmenting 27 classes on Cityscapes (Cordts et al., 2016), see Tab. 1. However, supervised linear probing between the same feature space and the ground-truth labels \u2013 the theoretical upper bound \u2013 leads to clearly superior results of almost 36%. Given this gap and the simplicity of the approach, we conclude that there is valuable potential in directly obtaining semantic segmentation without enhancing the original feature representation, unlike in previous work (Hamilton et al., 2022; Seong et al., 2023). From K-means to PriMaPs-EM. When examining the K-means baseline as well as state-of-the-art methods (Hamilton et al., 2022; Seong et al., 2023), see Fig. 4, it can be qualitatively observed that more local consistency within the respective predictions would already lead to less mis-classification. We take inspiration from Drineas et al. (2004) and Ding & He (2004), who showed that the PCA subspace spanned by the principal components is a relaxed solution to K-means clustering. We observe that principal components have high semantic correlation for object- as well as scene-centric image features (cf. Fig. 1). We utilize this by iteratively partitioning images based on dominant feature patterns, identified by means of the cosine similarity of the image features to the respective first principal component. We name the resulting class-agnostic image decomposition PriMaPs \u2013 Principal Mask Proposals. PriMaPs stem directly from SSL representations and guide the process of unsupervised semantic segmentation. As shown in Fig. 2, our optimization-based approach, PriMaPs-EM, operates over an SSL feature representation computed by a frozen deep neural network backbone. The optimization realizes stochastic EM of a clustering objective guided by PriMaPs.
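For reference, the K-means baseline discussed above can be sketched in a few lines. The following is a minimal, illustrative sketch, not the evaluation code used in the paper; random features stand in for DINO embeddings, and all function and variable names are our own.

```python
# Minimal sketch of the K-means baseline: cluster frozen SSL features into K
# pseudo classes with cosine-distance K-means. Illustrative only.
import torch
import torch.nn.functional as F

def cosine_kmeans(feats, k, iters=50):
    """feats: (N, C) L2-normalized feature vectors; returns (k, C) unit-norm prototypes."""
    protos = feats[torch.randperm(feats.shape[0])[:k]].clone()
    for _ in range(iters):
        assign = (feats @ protos.T).argmax(dim=1)        # nearest prototype by cosine similarity
        for j in range(k):
            members = feats[assign == j]
            if members.shape[0] > 0:                     # keep old prototype if a cluster is empty
                protos[j] = F.normalize(members.mean(dim=0), dim=0)
    return protos

# Random features standing in for DINO ViT-S embeddings (C = 384).
feats = F.normalize(torch.randn(4096, 384), dim=1)
protos = cosine_kmeans(feats, k=27)                      # e.g., 27 classes on Cityscapes
pseudo_ids = (feats @ protos.T).argmax(dim=1)            # per-feature pseudo-class IDs
```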
Specifically, PriMaPs-EM fits class prototypes to the proposals in a globally consistent manner by optimizing over two identically sized vector sets, with one of them being an exponential moving average (EMA) of the other. We show that PriMaPs-EM enables accurate unsupervised partitioning of images into semantically meaningful regions while being highly lightweight and orthogonal to most previous approaches in unsupervised semantic segmentation. 3.1 Deriving PriMaPs We start with a frozen pre-trained self-supervised backbone model $F : \mathbb{R}^{3 \times h \times w} \rightarrow \mathbb{R}^{C \times H \times W}$, which embeds an image $I \in \mathbb{R}^{3 \times h \times w}$ into a dense feature representation $f \in \mathbb{R}^{C \times H \times W}$ as $f = F(I)$. (1) Here, $C$ refers to the channel dimension of the dense features, and $H = h/p$, $W = w/p$ with $p$ corresponding to the output stride of the backbone. Based on this image representation, the next step is to decompose the image into semantically meaningful masks to provide a local grouping prior for fitting global class prototypes. Initial principal mask proposal. To identify the initial principal mask proposal in an image $I$, we analyze the spatial statistical correlations of its features. Specifically, we consider the empirical feature covariance matrix $\Sigma = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \big(f_{:,i,j} - \bar{f}\big)\big(f_{:,i,j} - \bar{f}\big)^{\top}$, (2) where $f_{:,i,j} \in \mathbb{R}^{C}$ are the features at position $(i, j)$ and $\bar{f} \in \mathbb{R}^{C}$ is the mean feature. To identify the feature direction that captures the largest variance in the feature distribution, we seek the first principal component of $\Sigma$ by solving $\Sigma v = \lambda v$. (3) We obtain the first principal component as the eigenvector $v_1$ corresponding to the largest eigenvalue $\lambda_1$, which can be computed efficiently with a Singular Value Decomposition (SVD) of the flattened features $f$. To identify a candidate region, our next goal is to compute a spatial feature similarity map to the dominant feature direction. We observe that doing so directly with the principal direction does not always lead to sufficient localization, i.e., high similarities arise across multiple visual concepts in an image, as elaborated in more detail in Appendix A.1. This can be circumvented by first spatially anchoring the dominant feature vector in the feature map. To that end, we obtain the nearest-neighbor feature $\tilde{f} \in \mathbb{R}^{C}$ of the first principal component $v_1$ by considering the cosine distance in the normalized feature space $\hat{f}$ as $\tilde{f} = \hat{f}_{:,i,j}$, where $(i, j) = \arg\max_{i,j} \big(v_1^{\top} \hat{f}\big)$. (4) Given this, we compute the cosine-similarity map $M \in \mathbb{R}^{H \times W}$ of the dominant feature w.r.t. all features as $M = (M_{i,j})_{i,j}$, where $M_{i,j} = \tilde{f}^{\top} \hat{f}_{:,i,j}$. (5) Next, a threshold $\psi \in (0, 1)$ is applied to the similarity map in order to suppress noise and further localize the initial mask. Accordingly, elements of a binary similarity map $P^1 \in \{0, 1\}^{H \times W}$ are set to 1 when larger than a fraction $\psi$ of the maximal similarity, and 0 otherwise, i.e., $P^1 = \big[M_{i,j} > \psi \cdot \max_{m,n} M_{m,n}\big]_{i,j}$, (6) where $[\cdot]$ denotes the Iverson bracket. This binary principal mask $P^1$ gives rise to the first principal mask proposal in image $I$. Further principal mask proposals. Subsequent mask proposals result from iteratively repeating the described procedure. To that end, it is necessary to suppress features that have already been assigned to a pseudo label.
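A minimal sketch of the single-proposal step in Eqs. (2) to (6) is given below; it is an illustrative reimplementation with assumed tensor shapes and our own function names, not the authors' released code. The iterative procedure described next simply reapplies this step to masked features.

```python
# Minimal sketch of one principal mask proposal (Eqs. 2 to 6) for a dense
# feature map f of shape (C, H, W) from the frozen backbone. Illustrative only.
import torch
import torch.nn.functional as F

def initial_mask_proposal(f, psi=0.4):
    C, H, W = f.shape
    flat = f.reshape(C, H * W)
    centered = flat - flat.mean(dim=1, keepdim=True)
    cov = centered @ centered.T / (H * W)              # empirical covariance Sigma (Eq. 2)
    _, eigvecs = torch.linalg.eigh(cov)                # eigenvalues in ascending order
    v1 = eigvecs[:, -1]                                # first principal component (Eq. 3)
    f_hat = F.normalize(flat, dim=0)                   # unit-norm features
    anchor = (v1 @ f_hat).argmax()                     # spatial anchor: nearest neighbor of v1 (Eq. 4)
    f_tilde = f_hat[:, anchor]
    M = (f_tilde @ f_hat).reshape(H, W)                # cosine-similarity map (Eq. 5)
    return (M > psi * M.max()).float()                 # thresholded binary mask P^1 (Eq. 6)

mask = initial_mask_proposal(torch.randn(384, 40, 40)) # e.g., DINO ViT-S/8 features of a 320x320 crop
```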
Specifically, in iteration $z$, given the mask proposals $P^s$, $s = 1, \ldots, z-1$, extracted in previous iterations, we mask out the features that have already been considered as $f^z_{:,i,j} = f_{:,i,j} \big[\sum_{s=1}^{z-1} P^s_{i,j} = 0\big]$. (7) Applying Eqs. (2) to (6) on top of the masked features $f^z$ yields principal mask proposal $P^z$, and so on. We repeat this procedure until the majority of features (e.g., 95%) have been assigned to a mask. In a final step, the remaining features, in case there are any, are assigned to an \u201cignore\u201d mask $P^0_{i,j} = 1 - \sum_{z=1}^{Z-1} P^z_{i,j}$. (8) This produces a tensor $P \in \{0, 1\}^{Z \times H \times W}$ of $Z$ spatial similarity masks decomposing a single image into $Z$ non-overlapping regions. Proposal post-processing. To further improve the alignment of the masks with edges and color-correlated regions in the image, a fully connected Conditional Random Field (CRF) with Gaussian edge potentials (Kr\u00e4henb\u00fchl & Koltun, 2011) is applied to the initial mask proposals $P$ (after bilinear upsampling to the image resolution) for 10 inference iterations. In order to form a pseudo label for semantic segmentation out of the $Z$ mask proposals, each mask has to be assigned one out of $K$ class labels. This is accomplished using a segmentation prediction of our optimization process, called PriMaPs-EM, detailed below. The entire PriMaPs pseudo-label generation process is illustrated in Fig. 2b. 3.2 PriMaPs-EM As shown in Fig. 2, PriMaPs-EM is an iterative optimization technique. It leverages the frozen pre-trained self-supervised backbone model $F$ and two identically sized vector sets, the class prototypes $\theta_S$ and their moving average, the momentum class prototypes $\theta_T$. The class prototypes $\theta_S$ and $\theta_T$ are the $K$ pseudo-class representations in the feature space, projecting the $C$-dimensional features linearly to $K$ semantic pseudo classes. PriMaPs-EM constructs pseudo labels using PriMaPs, which provide guidance through local consistency for fitting the global class prototypes. In every optimization iteration, we compute the segmentation prediction $y$ from the momentum class prototypes $\theta_T$. Next, we assign the pseudo-class ID that is most frequently predicted within each proposal, yielding the final pseudo-label map $P^* \in \{0, 1\}^{K \times h \times w}$, a one-hot encoding of the pseudo-class IDs. Finally, we optimize the class prototypes $\theta_S$ using the pseudo label. PriMaPs-EM consists of two stages, since in our case a meaningful initialization of the class prototypes is vital to provide a reasonable optimization signal. This can be traced back to the pseudo-label generation, which utilizes a segmentation prediction to assign globally consistent classes to the masks. Initializing the class prototypes randomly leads to a highly unstable and noisy signal. Figure 2: (a) PriMaPs-EM architecture. Images are embedded by the frozen self-supervised backbone $F$. First, both class prototypes $\theta_S$ and $\theta_T$ are initialized via a clustering objective. The segmentation prediction $y$ from the momentum class prototypes $\theta_T$ arises via a dot product with the image features $f$.
While PriMaPs are based on $f$ alone, the pseudo labels additionally use the image $I$ and the segmentation prediction $y$ from the momentum class prototypes $\theta_T$. We use the pseudo labels to optimize the class prototypes $\theta_S$, which are gradually transferred to $\theta_T$ by means of an EMA. (b) PriMaPs pseudo-label generation. Masks are proposed by iterative binary partitioning based on the cosine similarity of the features of any unassigned pixel to their first principal component. Next, the masks $P$ are aligned to the image using a CRF (Kr\u00e4henb\u00fchl & Koltun, 2011). Finally, a pseudo-class ID is assigned per mask based on the segmentation prediction from $\theta_T$. Gray indicates iterative steps. Initialization. We initialize the class prototypes $\theta_T$ with the first $K$ principal components. Next, a cosine-distance batch-wise K-means (MacQueen, 1967) loss $\mathcal{L}_{\text{K-means}}(\theta_T) = -\sum_{i,j} \max\big(\theta_T^{\top} f_{:,i,j}\big)$ (9) is minimized with respect to $\theta_T$ for a fixed number of epochs. This minimizes the cumulative cosine distances of the image features $f_{:,i,j}$ to their respective closest class prototype. $\theta_S$ is initialized with the same prototypes. Moving average stochastic EM. In each iteration, we use the backbone features and momentum class prototypes $\theta_T$ to yield a segmentation prediction $y$, from which pseudo labels are generated as described in Sec. 3.1. $\theta_S$ is optimized by applying a batch-wise focal loss (Lin et al., 2020) with respect to these pseudo labels. The focal loss $\mathcal{L}_{\text{focal}}$ is a weighted version of the cross-entropy loss, increasing the loss contribution of less confident classes, i.e., $\mathcal{L}_{\text{focal}}(\theta_S; y') = -\sum_{k,i,j} (1 - \chi_k)^2 P^*_{k,i,j} \log(y'_{k,i,j})$, (10) where $y'_{:,i,j} = \mathrm{softmax}(\theta_S^{\top} f_{:,i,j})$ are the predictions and $\chi_k$ is the class-wise confidence value approximated by averaging $y'_{:,i,j}$ spatially. The class prototypes $\theta_S$ are optimized with an augmented input image $I'$. We employ photometric augmentations (Gaussian blur, grayscaling, and color jitter), introducing controlled noise and thereby strengthening the robustness of our class representation. The momentum class prototypes $\theta_T$ are the exponential moving average of the class prototypes $\theta_S$. This is utilized in order to stabilize the optimization, accounting for the noisy nature of the unsupervised signal used for optimization. We update $\theta_T$ every $\gamma_t$ iterations with a decay $\gamma_\psi$ as $\theta_T^{t+\gamma_t} = \gamma_\psi \theta_T^{t} + (1 - \gamma_\psi) \theta_S^{t+\gamma_t}$, (11) where $t$ is the iteration index of the previous update. This optimization approach resembles moving average stochastic EM. Hereby, the E-step amounts to finding pseudo labels using PriMaPs and the momentum class prototypes. The M-step optimizes the class prototypes with respect to the focal loss $\mathcal{L}_{\text{focal}}$. Stochasticity arises from performing EM in mini-batches. Inference. At inference time, we obtain a segmentation prediction from the momentum class prototypes $\theta_T$, refined using a fully connected CRF with Gaussian edge potentials (Kr\u00e4henb\u00fchl & Koltun, 2011), following previous approaches (Van Gansbeke et al., 2021; Hamilton et al., 2022; Seong et al., 2023). This is the same CRF as used for refining the masks in the PriMaPs pseudo-label generation, and we use the identical CRF parameters as previous work (Van Gansbeke et al., 2021; Hamilton et al., 2022; Seong et al., 2023).
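To make the optimization step concrete, the following is a minimal sketch of a single M-step (Eq. 10) and EMA update (Eq. 11). Tensor shapes, the plain-gradient update (the paper optimizes with Adam), and all function names are simplifying assumptions of this illustration, not the authors' released code.

```python
# Minimal sketch of one M-step (focal loss, Eq. 10) and EMA update (Eq. 11).
# f: features (C, H, W); P_star: one-hot pseudo label (K, H, W);
# theta_S, theta_T: class prototypes (K, C). Illustrative only.
import torch
import torch.nn.functional as F

def m_step(theta_S, f, P_star, lr=0.005):
    theta_S = theta_S.clone().requires_grad_(True)
    logits = torch.einsum('kc,chw->khw', theta_S, f)   # linear projection to K pseudo classes
    y = F.softmax(logits, dim=0)                       # predictions y'
    chi = y.mean(dim=(1, 2))                           # class-wise confidence, spatially averaged
    loss = -((1 - chi).pow(2).view(-1, 1, 1) * P_star * torch.log(y + 1e-8)).sum()  # Eq. 10
    loss.backward()
    with torch.no_grad():
        theta_S -= lr * theta_S.grad                   # plain gradient step (the paper uses Adam)
    return theta_S.detach()

def ema_update(theta_T, theta_S, decay=0.98):
    return decay * theta_T + (1 - decay) * theta_S     # Eq. 11

# Toy example: K = 27 classes, C = 384 channels, 40x40 feature map.
K, C, H, W = 27, 384, 40, 40
theta_S, theta_T = torch.randn(K, C), torch.randn(K, C)
f = torch.randn(C, H, W)
P_star = F.one_hot(torch.randint(0, K, (H, W)), K).permute(2, 0, 1).float()
theta_S = m_step(theta_S, f, P_star)
theta_T = ema_update(theta_T, theta_S)
```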
4 Experiments To assess the efficacy of our approach, we compare it to the current state of the art in unsupervised semantic segmentation. For a fair comparison, we closely follow the overall setup used by numerous previous works (Ji et al., 2019; Cho et al., 2021; Hamilton et al., 2022; Seong et al., 2023). 4.1 Experimental Setup Datasets. Following the practice of previous work, we conduct experiments on Cityscapes (Cordts et al., 2016), COCO-Stuff (Caesar et al., 2018), and Potsdam-3 (ISPRS). Cityscapes and COCO-Stuff are evaluated using 27 classes, while Potsdam is evaluated on the 3-class variant. Adopting the established evaluation protocol (Ji et al., 2019; Cho et al., 2021; Hamilton et al., 2022; Seong et al., 2023), we resize images to 320 pixels along the smaller axis and crop the center 320 \u00d7 320 pixels. This is adjusted to 322 pixels for DINOv2. Different from previous work, we apply this simple scheme throughout, dispensing with the elaborate multi-crop approaches of previous methods (Hamilton et al., 2022; Yin et al., 2022; Seong et al., 2023). Self-supervised backbone. Experiments are conducted across a collection of pre-trained self-supervised feature embeddings: DINO (Caron et al., 2021) based on ViT-Small and ViT-Base using 8 \u00d7 8 patches, and DINOv2 (Oquab et al., 2024) based on ViT-Small and ViT-Base using 14 \u00d7 14 patches. In the spirit of SSL principles, we keep the backbone parameters frozen throughout the experiments. We use the output from the last network layer as our SSL feature embeddings. Since PriMaPs-EM is agnostic to the embedding space used, we can also apply it on top of current state-of-the-art unsupervised segmentation pipelines. Here, we consider STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023), which also use DINO features but learn a target domain-specific subspace. Baseline. Following Hamilton et al. (2022) and Seong et al. (2023), we train a single linear layer as a baseline with the same structure as $\theta_S$ and $\theta_T$ by minimizing the cosine-distance batch-wise K-means loss from Eq. (9). Here, parameters such as the number of epochs and the learning rate are identical to those used when employing PriMaPs-EM. PriMaPs-EM. As discussed in Sec. 3.2, the momentum class prototypes $\theta_T$ are initialized using the first $K$ principal components; we use 2975 images for PCA, as this is the largest number of training images shared by all datasets. Next, $\theta_T$ is pre-trained by minimizing Eq. (9) using Adam (Kingma & Ba, 2015). We use a learning rate of 0.005 for 2 epochs on all datasets and backbones. The weights are then copied to $\theta_S$. For fitting the class prototypes using EM, $\theta_S$ is optimized by minimizing the focal loss from Eq. (10) with Adam (Kingma & Ba, 2015) using a learning rate of 0.005. The momentum class prototypes $\theta_T$ are updated using an EMA according to Eq. (11) every $\gamma_t = 10$ steps with decay $\gamma_\psi = 0.98$. We set the PriMaPs mask proposal threshold to $\psi = 0.4$. We use a batch size of 32 for 50 epochs on Cityscapes and Potsdam-3, and 5 epochs on COCO-Stuff due to its larger size. Importantly, the same hyperparameters are used across all datasets and backbones. Moreover, note that fitting class prototypes with PriMaPs-EM is quite practical, taking, e.g., about 2 hours on Cityscapes. Experiments are conducted on a single NVIDIA A6000 GPU. Supervised upper bounds. To assess the potential of the SSL features used, we report supervised upper bounds.
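The evaluation preprocessing described above can be summarized in a short sketch, assuming torchvision; the ImageNet normalization statistics are an assumption of this illustration and are not specified in the text.

```python
# Sketch of the evaluation preprocessing: resize the smaller image side to
# 320 px (322 px for DINOv2), then take a center crop of that size.
from torchvision import transforms

def make_eval_transform(side=320):
    return transforms.Compose([
        transforms.Resize(side),          # smaller edge -> `side` pixels
        transforms.CenterCrop(side),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet statistics
                             std=[0.229, 0.224, 0.225]),
    ])

dino_tf = make_eval_transform(320)     # DINO ViT-S/8 and ViT-B/8
dinov2_tf = make_eval_transform(322)   # DINOv2 ViT-S/14 and ViT-B/14
```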
For these supervised bounds, we train a linear layer using cross-entropy and Adam with a learning rate of 0.005. Since PriMaPs-EM uses frozen SSL features, its supervised bound is the same as that of the underlying features. This is not the case, however, for prior work (Hamilton et al., 2022; Seong et al., 2023), which projects the feature representation and thereby affects the upper bound. Table 1: Cityscapes \u2013 PriMaPs-EM (Ours) comparison to existing unsupervised semantic segmentation methods, using Accuracy and mean IoU (in %) for unsupervised and supervised probing. Double citations refer to a method\u2019s origin and the work conducting the experiment.
| Method | Backbone | Unsup. Acc | Unsup. mIoU | Sup. Acc | Sup. mIoU |
| --- | --- | --- | --- | --- | --- |
| IIC (Ji et al., 2019; Cho et al., 2021) | ResNet18+FPN | 47.9 | 6.4 | \u2013 | \u2013 |
| MDC (Caron et al., 2018; Cho et al., 2021) | ResNet18+FPN | 40.7 | 7.1 | \u2013 | \u2013 |
| PiCIE (Cho et al., 2021) | ResNet18+FPN | 65.5 | 12.3 | \u2013 | \u2013 |
| VICE (Karlsson et al., 2022) | ResNet18+FPN | 31.9 | 12.8 | 86.3 | 31.6 |
| Baseline (Caron et al., 2021) | DINO ViT-S/8 | 61.4 | 15.8 | 91.0 | 35.4 |
| + TransFGU (Yin et al., 2022) | DINO ViT-S/8 | 77.9 | 16.8 | \u2013 | \u2013 |
| + HP (Seong et al., 2023) | DINO ViT-S/8 | 80.1 | 18.4 | 91.2 | 30.6 |
| + PriMaPs-EM | DINO ViT-S/8 | 81.2 | 19.4 | 91.0 | 35.4 |
| + HP (Seong et al., 2023) + PriMaPs-EM | DINO ViT-S/8 | 76.6 | 19.2 | 91.2 | 30.6 |
| Baseline (Caron et al., 2021) | DINO ViT-B/8 | 49.2 | 15.5 | 91.6 | 35.9 |
| + STEGO (Hamilton et al., 2022; Koenig et al., 2023) | DINO ViT-B/8 | 73.2 | 21.0 | 89.6 | 28.0 |
| + HP (Seong et al., 2023) | DINO ViT-B/8 | 79.5 | 18.4 | 90.9 | 33.0 |
| + PriMaPs-EM | DINO ViT-B/8 | 59.6 | 17.6 | 91.6 | 35.9 |
| + STEGO (Hamilton et al., 2022) + PriMaPs-EM | DINO ViT-B/8 | 78.6 | 21.6 | 89.6 | 28.0 |
| Baseline (Oquab et al., 2024) | DINOv2 ViT-S/14 | 49.5 | 15.3 | 90.8 | 41.9 |
| + PriMaPs-EM | DINOv2 ViT-S/14 | 71.5 | 19.0 | 90.8 | 41.9 |
| Baseline (Oquab et al., 2024) | DINOv2 ViT-B/14 | 36.1 | 14.9 | 91.0 | 44.8 |
| + PriMaPs-EM | DINOv2 ViT-B/14 | 82.9 | 21.3 | 91.0 | 44.8 |
Evaluation. For inference, we use the prediction from the momentum class prototypes $\theta_T$. CRF refinement uses 10 inference iterations and standard parameters $a = 4$, $b = 3$, $\theta_\alpha = 67$, $\theta_\beta = 3$, $\theta_\gamma = 1$ from prior work (Van Gansbeke et al., 2021; Hamilton et al., 2022; Seong et al., 2023). We evaluate common metrics in unsupervised semantic segmentation, specifically the mean Intersection over Union (mIoU) and Accuracy (Acc) over all classes, after aligning the predicted class IDs with the ground-truth labels by means of Hungarian matching (Kuhn, 1955). SotA + PriMaPs-EM. To explore our method\u2019s potential, we additionally employ PriMaPs-EM on top of STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023). For each backbone-dataset combination, we apply it on top of the best previous method in terms of mIoU. To that end, the training signal for learning the feature projection of Hamilton et al. (2022) and Seong et al. (2023) remains unchanged. We apply PriMaPs-EM fully orthogonally, using the DINO backbone features for pseudo-label generation, and fit a direct mapping between the feature space of the state-of-the-art method and the prediction space. 4.2 Results We compare PriMaPs-EM against prior work for unsupervised semantic segmentation (Ji et al., 2019; Cho et al., 2021; Hamilton et al., 2022; Yin et al., 2022; Li et al., 2023; Seong et al., 2023). As in previous work, we use DINO (Caron et al., 2021) as the main baseline. Additionally, we also test PriMaPs-EM on top of DINOv2 (Oquab et al., 2024), STEGO (Hamilton et al., 2022), and HP (Seong et al., 2023). Overall, we observe that the DINO baseline already achieves strong results (cf. Tabs. 1 to 3). While DINOv2 features significantly raise the supervised upper bounds in terms of Acc and mIoU, the improvement in the unsupervised case remains more modest.
Nevertheless, PriMaPs-EM further boosts the unsupervised segmentation performance. In Tab. 1, we compare to previous work on the Cityscapes dataset. PriMaPs-EM leads to a consistent improvement over all baselines in terms of unsupervised segmentation accuracy. For example, PriMaPs-EM boosts DINO ViT-S/8 by +3.6% and +19.8% in terms of mIoU and Acc, respectively, which leads to state-of-the-art performance. Notably, we find PriMaPs-EM to be complementary to other state-of-the-art unsupervised segmentation methods like STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023) on the corresponding backbone model. This suggests that these methods use their SSL representation only to a limited extent and do not fully leverage the inherent properties of the underlying SSL embeddings. Table 2: COCO-Stuff \u2013 PriMaPs-EM (Ours) comparison to existing unsupervised semantic segmentation methods, using Accuracy and mean IoU (in %) for unsupervised and supervised probing. Double citations refer to a method\u2019s origin and the work conducting the experiment.
| Method | Backbone | Unsup. Acc | Unsup. mIoU | Sup. Acc | Sup. mIoU |
| --- | --- | --- | --- | --- | --- |
| IIC (Ji et al., 2019; Cho et al., 2021) | ResNet18+FPN | 21.8 | 6.7 | 44.5 | 8.4 |
| MDC (Caron et al., 2018; Cho et al., 2021) | ResNet18+FPN | 32.2 | 9.8 | 48.6 | 13.3 |
| PiCIE (Cho et al., 2021) | ResNet18+FPN | 48.1 | 13.8 | 54.2 | 13.9 |
| PiCIE+H (Cho et al., 2021) | ResNet18+FPN | 50.0 | 14.4 | 54.8 | 14.8 |
| VICE (Karlsson et al., 2022) | ResNet18+FPN | 28.9 | 11.4 | 62.8 | 25.5 |
| Baseline (Caron et al., 2021) | DINO ViT-S/8 | 34.2 | 9.5 | 72.0 | 41.3 |
| + TransFGU (Yin et al., 2022) | DINO ViT-S/8 | 52.7 | 17.5 | \u2013 | \u2013 |
| + STEGO (Hamilton et al., 2022) | DINO ViT-S/8 | 48.3 | 24.5 | 74.4 | 38.3 |
| + ACSeg (Li et al., 2023) | DINO ViT-S/8 | \u2013 | 16.4 | \u2013 | \u2013 |
| + HP (Seong et al., 2023) | DINO ViT-S/8 | 57.2 | 24.6 | 75.6 | 42.7 |
| + PriMaPs-EM | DINO ViT-S/8 | 46.5 | 16.4 | 72.0 | 41.3 |
| + HP (Seong et al., 2023) + PriMaPs-EM | DINO ViT-S/8 | 57.8 | 25.1 | 75.6 | 42.7 |
| Baseline (Caron et al., 2021) | DINO ViT-B/8 | 38.8 | 15.7 | 74.0 | 44.6 |
| + STEGO (Hamilton et al., 2022) | DINO ViT-B/8 | 56.9 | 28.2 | 76.1 | 41.0 |
| + PriMaPs-EM | DINO ViT-B/8 | 48.5 | 21.9 | 74.0 | 44.6 |
| + STEGO (Hamilton et al., 2022) + PriMaPs-EM | DINO ViT-B/8 | 57.9 | 29.7 | 76.1 | 41.0 |
| Baseline (Oquab et al., 2024) | DINOv2 ViT-S/14 | 44.5 | 22.9 | 77.9 | 52.8 |
| + PriMaPs-EM | DINOv2 ViT-S/14 | 46.5 | 23.8 | 77.9 | 52.8 |
| Baseline (Oquab et al., 2024) | DINOv2 ViT-B/14 | 35.0 | 17.9 | 77.3 | 53.7 |
| + PriMaPs-EM | DINOv2 ViT-B/14 | 52.8 | 23.6 | 77.3 | 53.7 |
Similar observations can be drawn for the experiments on COCO-Stuff in Tab. 2. PriMaPs-EM leads to a consistent improvement across all four SSL baselines, as well as an improvement over STEGO and HP. For instance, combining STEGO with PriMaPs-EM leads to a +14.0% and +19.1% improvement over the baseline in terms of mIoU and Acc for DINO ViT-B/8. Experiments on the Potsdam-3 dataset follow the same pattern (cf. Tab. 3). PriMaPs-EM leads to a consistent gain over the baseline, e.g., +17.6% and +14.4% in terms of mIoU and Acc, respectively, for DINO ViT-B/8. Moreover, it also boosts the accuracy of STEGO and HP. In some cases, the gain of PriMaPs-EM is limited. For example, in Tab. 1 for DINO ViT-B/8 + PriMaPs-EM, the class prototype for \u201csidewalk\u201d is poor, while the classes \u201croad\u201d and \u201cvegetation\u201d superimpose smaller objects. For DINO ViT-S/8 + PriMaPs-EM in Tab. 3, the class prototype for \u201croad\u201d is poor. This limits the overall performance of our method, which still outperforms the respective baseline in both cases. Overall, PriMaPs-EM provides modest but consistent benefits over a wide range of baselines and datasets and reaches competitive segmentation performance w.r.t. the state of the art, using identical hyperparameters across all backbones and datasets.
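All unsupervised scores above follow the protocol from Sec. 4.1: predicted cluster IDs are aligned with the ground-truth classes via Hungarian matching before Accuracy and mIoU are computed. The following is a minimal sketch of that evaluation step, assuming NumPy and SciPy; it is an illustrative reimplementation, not the benchmark code used for the reported numbers.

```python
# Sketch of the evaluation protocol: Hungarian matching (Kuhn, 1955) over the
# confusion matrix, then Accuracy and mean IoU. Illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def unsupervised_scores(pred, gt, num_classes):
    """pred, gt: integer label arrays of the same shape, values in [0, num_classes)."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (pred.ravel(), gt.ravel()), 1)          # rows: predicted IDs, cols: ground truth
    row, col = linear_sum_assignment(conf, maximize=True)   # optimal one-to-one ID assignment
    conf = conf[row][:, col]                                 # reorder so matched pairs lie on the diagonal
    acc = np.trace(conf) / conf.sum()
    tp = np.diag(conf).astype(np.float64)
    iou = tp / (conf.sum(0) + conf.sum(1) - tp + 1e-8)
    return float(acc), float(iou.mean())

acc, miou = unsupervised_scores(np.random.randint(0, 27, (320, 320)),
                                np.random.randint(0, 27, (320, 320)), num_classes=27)
```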
Recalling the simplicity of the techniques behind PriMaPs, we believe these consistent gains are a significant result. The complementary effect of PriMaPs-EM on other state-of-the-art methods (STEGO, HP) further suggests that they rely on DINO features for mere \u201cbootstrapping\u201d and learn feature representations with properties orthogonal to those of DINO. We conclude that PriMaPs-EM constitutes a straightforward, entirely orthogonal tool for boosting unsupervised semantic segmentation. 4.3 Ablation Study To untangle the factors behind PriMaPs-EM, we examine its individual components in a variety of ablation experiments to assess their contribution. Table 3: Potsdam-3 \u2013 PriMaPs-EM (Ours) comparison to existing unsupervised semantic segmentation methods, using Accuracy and mean IoU (in %) for unsupervised and supervised probing. Double citations refer to a method\u2019s origin and the work conducting the experiment.
| Method | Backbone | Unsup. Acc | Unsup. mIoU | Sup. Acc | Sup. mIoU |
| --- | --- | --- | --- | --- | --- |
| RandomCNN (Cho et al., 2021) | VGG11 | 38.2 | \u2013 | \u2013 | \u2013 |
| K-Means (Pedregosa et al., 2011; Cho et al., 2021) | VGG11 | 45.7 | \u2013 | \u2013 | \u2013 |
| SIFT (Lowe, 2004; Cho et al., 2021) | VGG11 | 38.2 | \u2013 | \u2013 | \u2013 |
| ContextPrediction (Doersch et al., 2015; Cho et al., 2021) | VGG11 | 49.6 | \u2013 | \u2013 | \u2013 |
| CC (Isola et al., 2015; Cho et al., 2021) | VGG11 | 63.9 | \u2013 | \u2013 | \u2013 |
| DeepCluster (Caron et al., 2018; Cho et al., 2021) | VGG11 | 41.7 | \u2013 | \u2013 | \u2013 |
| IIC (Ji et al., 2019; Cho et al., 2021) | VGG11 | 65.1 | \u2013 | \u2013 | \u2013 |
| Baseline (Caron et al., 2021) | DINO ViT-S/8 | 56.6 | 33.6 | 82.0 | 69.0 |
| + STEGO (Hamilton et al., 2022; Koenig et al., 2023) | DINO ViT-S/8 | 77.0 | 62.6 | 85.9 | 74.8 |
| + PriMaPs-EM | DINO ViT-S/8 | 62.5 | 38.9 | 82.0 | 69.0 |
| + STEGO (Hamilton et al., 2022) + PriMaPs-EM | DINO ViT-S/8 | 78.4 | 64.2 | 85.9 | 74.8 |
| Baseline (Caron et al., 2021) | DINO ViT-B/8 | 66.1 | 49.4 | 84.3 | 72.8 |
| + HP (Seong et al., 2023) | DINO ViT-B/8 | 82.4 | 69.1 | 88.0 | 78.4 |
| + PriMaPs-EM | DINO ViT-B/8 | 80.5 | 67.0 | 84.3 | 72.8 |
| + HP (Seong et al., 2023) + PriMaPs-EM | DINO ViT-B/8 | 83.3 | 71.0 | 88.0 | 78.4 |
| Baseline (Oquab et al., 2024) | DINOv2 ViT-S/14 | 75.9 | 61.0 | 86.6 | 76.2 |
| + PriMaPs-EM | DINOv2 ViT-S/14 | 78.5 | 64.3 | 86.6 | 76.2 |
| Baseline (Oquab et al., 2024) | DINOv2 ViT-B/14 | 82.4 | 69.9 | 87.9 | 78.3 |
| + PriMaPs-EM | DINOv2 ViT-B/14 | 83.2 | 71.1 | 87.9 | 78.3 |
Table 4: Ablation study analyzing design choices and components in the PriMaPs pseudo-label generation (a) and PriMaPs-EM (b) for COCO-Stuff using DINO ViT-B/8.
(a) PriMaPs pseudo-label ablation
| Method | Acc | mIoU |
| --- | --- | --- |
| Baseline (Caron et al., 2021) | 38.8 | 15.7 |
| Similarity Masks | 46.3 | 19.8 |
| + NN | 44.9 | 20.0 |
| + P-CRF (\u2261 PriMaPs-EM) | 48.4 | 21.9 |
| PriMaPs-EM (non-iter.) | 47.9 | 21.7 |
(b) PriMaPs-EM ablation
| Method | Acc | mIoU |
| --- | --- | --- |
| Baseline (Caron et al., 2021) | 38.8 | 15.7 |
| + PriMaPs pseudo label | 38.8 | 18.0 |
| + EMA | 45.0 | 20.2 |
| + Augment | 46.0 | 20.4 |
| + CRF (\u2261 PriMaPs-EM) | 48.4 | 21.9 |
PriMaPs pseudo-label ablations. In Tab. 4a, we analyze the contribution of the individual sub-steps of PriMaPs pseudo-label generation by increasing the complexity of label generation. We provide the DINO baseline, which corresponds to K-means feature clustering, for reference. In the most simplified case, we directly use the similarity mask, similar to Eq. (4). Next, we use the nearest neighbor (+NN in Tab. 4a) of the principal component to get the masks as in Eq. (5), followed by the full approach with CRF refinement (+P-CRF). Except for the changes in the pseudo-label generation, the optimization remains as described in Sec. 4.1. We observe that the similarity masks already provide a good starting point, yet we identify a gain from every single component step. This suggests that using the nearest neighbor improves the localization of the similarity mask.
Similarly, CRF refinement improves the alignment between the masks and the image content. We also experiment with using the respective next principal direction (non-iter.) instead of iteratively extracting the first component from masked features. This leads to slightly inferior results. PriMaPs-EM architecture ablations. In a similar vein, we analyze the contribution of the different architectural components of PriMaPs-EM in Tab. 4b. Optimizing over a single set of class prototypes using the proposed PriMaPs pseudo labels already provides a moderate improvement (+PriMaPs pseudo label in Tab. 4b), despite the disadvantage of an unstable and noisy optimization signal. Adding the EMA (+EMA) leads to a more stable optimization and further improved segmentation. Augmenting the input (+Augment) results in a further gradual improvement. Similarly, refining the prediction with a CRF improves the results further (+CRF). Table 5: Oracle quality assessment of PriMaPs pseudo labels for Cityscapes, COCO-Stuff, and Potsdam-3 by assigning oracle class IDs to the masks, using DINO ViT-B/8. \u201cPseudo\u201d refers to evaluating only the pixels contained in the pseudo label, \u201cAll\u201d to evaluating including the \u201cignore\u201d assignments of the pseudo label.
| Method | Cityscapes Acc | Cityscapes mIoU | COCO-Stuff Acc | COCO-Stuff mIoU | Potsdam-3 Acc | Potsdam-3 mIoU |
| --- | --- | --- | --- | --- | --- | --- |
| Pseudo | 92.4 | 54.0 | 93.4 | 82.4 | 95.2 | 90.9 |
| All | 73.2 | 32.4 | 74.1 | 55.9 | 67.4 | 48.9 |
| Baseline (Caron et al., 2021) | 49.2 | 15.5 | 38.8 | 15.7 | 66.1 | 49.4 |
Figure 3: Qualitative PriMaPs examples using DINO ViT-B/8 for Cityscapes, COCO-Stuff, and Potsdam-3. PriMaPs Colored \u2013 each mask proposal is visualized in a different color. PriMaPs Oracle class IDs \u2013 each mask is colored in the corresponding ground-truth class color. Assessing PriMaPs pseudo labels. To estimate the quality of the pseudo labels, respectively the principal masks, we decouple them from the class-ID assignment by providing the oracle ground-truth class for each mask in Tab. 5. To that end, we evaluate all pixels included in our pseudo labels (\u201cPseudo\u201d), corresponding to the upper bound of our optimization signal. Furthermore, we evaluate \u201cAll\u201d by assigning the \u201cignore\u201d pixels to a wrong class. The results indicate a high quality of the pseudo-label maps. Fig. 3 shows qualitative examples of the PriMaPs mask proposals and pseudo labels. We visualize individual masks, each in a different color (PriMaPs Colored). We also display oracle pseudo labels, assigning each mask a color based on the ground-truth label (PriMaPs Oracle class IDs). We observe that the mask proposals align well with the ground-truth labels across all three datasets, generalizing across three distinct domains. PriMaPs effectively partitions images into semantically meaningful masks. Figure 4: Qualitative results for the DINO ViT-B/8 baseline, PriMaPs-EM (Ours), STEGO (Hamilton et al., 2022), and STEGO+PriMaPs-EM (Ours) for Cityscapes, COCO-Stuff, and Potsdam-3. Our method produces locally more consistent segmentation results, reducing overall misclassification compared to the corresponding baseline. Qualitative results. We show qualitative results for Cityscapes, COCO-Stuff, and Potsdam-3 in Fig. 4. We observe that PriMaPs-EM leads to less noisy results compared to the baseline, showcasing an improved
local consistency of the segmentation and reduced mis-classification. The comparison with STEGO as a baseline exhibits a similar trend. For further examples and comparisons with HP, please refer to Appendix B.2. Limitations. One of the main challenges is to distinguish between classes that happen to share the same SSL feature representation. This is hardly avoidable if the feature representation is fixed, as is the case here and in previous work (Hamilton et al., 2022; Seong et al., 2023). Another limitation across existing unsupervised semantic segmentation approaches is the limited spatial image resolution. This limitation stems from the SSL training objectives (Caron et al., 2021; Oquab et al., 2024), which are image-level rather than pixel-level. As a result, we observe difficulties in segmenting very small, finely resolved structures. 5 Conclusion We present PriMaPs, a novel dense pseudo-label generation approach for unsupervised semantic segmentation. We derive lightweight mask proposals directly from off-the-shelf self-supervised features, leveraging the intrinsic properties of their embedding space. Our mask proposals can be used as pseudo labels to effectively fit global class prototypes using moving average stochastic EM with PriMaPs-EM. Despite its simplicity, PriMaPs-EM leads to a consistent boost in unsupervised segmentation accuracy when applied to a variety of SSL features or orthogonally to current state-of-the-art unsupervised semantic segmentation pipelines, as shown by our results across multiple datasets. Acknowledgments This project is partially funded by the European Research Council (ERC) under the European Union\u2019s Horizon 2020 research and innovation programme (grant agreement No. 866008) as well as the State of Hesse (Germany) through the cluster projects \u201cThe Third Wave of Artificial Intelligence (3AI)\u201d and \u201cThe Adaptive Mind (TAM)\u201d."
}