diff --git "a/abs_29K_G/test_abstract_long_2405.04370v1.json" "b/abs_29K_G/test_abstract_long_2405.04370v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.04370v1.json" @@ -0,0 +1,195 @@ +{ + "url": "http://arxiv.org/abs/2405.04370v1", + "title": "Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos", + "abstract": "Understanding how humans would behave during hand-object interaction is vital\nfor applications in service robot manipulation and extended reality. To achieve\nthis, some recent works have been proposed to simultaneously predict hand\ntrajectories and object affordances on human egocentric videos. They are\nregarded as the representation of future hand-object interactions, indicating\npotential human motion and motivation. However, the existing approaches mostly\nadopt the autoregressive paradigm for unidirectional prediction, which lacks\nmutual constraints within the holistic future sequence, and accumulates errors\nalong the time axis. Meanwhile, these works basically overlook the effect of\ncamera egomotion on first-person view predictions. To address these\nlimitations, we propose a novel diffusion-based interaction prediction method,\nnamely Diff-IP2D, to forecast future hand trajectories and object affordances\nconcurrently in an iterative non-autoregressive manner. We transform the\nsequential 2D images into latent feature space and design a denoising diffusion\nmodel to predict future latent interaction features conditioned on past ones.\nMotion features are further integrated into the conditional denoising process\nto enable Diff-IP2D aware of the camera wearer's dynamics for more accurate\ninteraction prediction. The experimental results show that our method\nsignificantly outperforms the state-of-the-art baselines on both the\noff-the-shelf metrics and our proposed new evaluation protocol. This highlights\nthe efficacy of leveraging a generative paradigm for 2D hand-object interaction\nprediction. The code of Diff-IP2D will be released at\nhttps://github.com/IRMVLab/Diff-IP2D.", + "authors": "Junyi Ma, Jingyi Xu, Xieyuanli Chen, Hesheng Wang", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Understanding how humans would behave during hand-object interaction is vital\nfor applications in service robot manipulation and extended reality. To achieve\nthis, some recent works have been proposed to simultaneously predict hand\ntrajectories and object affordances on human egocentric videos. They are\nregarded as the representation of future hand-object interactions, indicating\npotential human motion and motivation. However, the existing approaches mostly\nadopt the autoregressive paradigm for unidirectional prediction, which lacks\nmutual constraints within the holistic future sequence, and accumulates errors\nalong the time axis. Meanwhile, these works basically overlook the effect of\ncamera egomotion on first-person view predictions. To address these\nlimitations, we propose a novel diffusion-based interaction prediction method,\nnamely Diff-IP2D, to forecast future hand trajectories and object affordances\nconcurrently in an iterative non-autoregressive manner. 
We transform the\nsequential 2D images into latent feature space and design a denoising diffusion\nmodel to predict future latent interaction features conditioned on past ones.\nMotion features are further integrated into the conditional denoising process\nto enable Diff-IP2D aware of the camera wearer's dynamics for more accurate\ninteraction prediction. The experimental results show that our method\nsignificantly outperforms the state-of-the-art baselines on both the\noff-the-shelf metrics and our proposed new evaluation protocol. This highlights\nthe efficacy of leveraging a generative paradigm for 2D hand-object interaction\nprediction. The code of Diff-IP2D will be released at\nhttps://github.com/IRMVLab/Diff-IP2D.", + "main_content": "Introduction Accurately anticipating human intentions and future actions is important for artificial intelligence systems in robotics and extended reality [1, 2, 3]. Recent works have tried to tackle the problem from various perspectives, including action recognition and anticipation [4, 5, 6, 7], gaze prediction [8, 9, 10, 11], hand trajectory prediction [12, 13, 14, 15], and object affordance extraction [12, 16, 14, 17]. Among them, jointly predicting hand motion and object affordances can effectively facilitate more reasonable robot manipulation as the prior contextual information, which has been demonstrated on some robot platforms [1, 18, 19]. We believe that deploying such models pretrained by internet-scale human videos on robots is a promising path towards embodied agents. Therefore, our work aims to jointly predict hand trajectories and object affordances on egocentric videos as a concrete hand-object interaction (HOI) expression, following the problem modeling of previous works [12, 14]. Currently, the state-of-the-art approaches [12, 13] predicting hand trajectories and object affordances on egocentric videos tend to exploit the autoregressive (AR) model. They reason about the next \u2217Corresponding author: wanghesheng@sjtu.edu.cn Preprint. Under review. arXiv:2405.04370v1 [cs.CV] 7 May 2024 \fview1 (other observations) view2 (last observation) gap egocentric images (a) Existing Paradigm (b) Diff-IP2D Paradigm t autoregressive model HOI (t2) HOI (t1) predicted interaction diffusion-based model denoising HOI (t1) HOI (t2) HOI (t3) predicted interaction egocentric images t steps HOI (t1) HOI (t3) HOI (t1) HOI (t2) in parrallel motion features (c) Autoregressive Generation vs. Parallel Generation (d) Inherent Gaps gt gt ego motion real actions pixel movement gap accumulated error gt bidirectional unidirectional 3D environments Figure 1: Diff-IP2D vs. Existing Paradigm. The existing HOI prediction paradigm (a) tends to accumulate prediction errors under unidirectional constraints. In contrast, our proposed Diff-IP2D (b) directly forecasts all the future interaction states in parallel with denoising diffusion, mitigating error accumulation with bidirectional constraints (c). Moreover, we integrate egomotion information into our proposed paradigm to narrow the inherent gaps (d) in HOI prediction. HOI state only according to the previous steps (Fig. 1(a)). However, expected \u201cpost-contact states\u201d also affect \u201cpre-contact states\u201d according to human intentions that persist across the holistic HOI process as an oracle. There must be more coherent constraints that reflect human intention and mutually connect the preceding and the following motion in the HOI prediction process. 
Inspired by this, we argue that predicting future HOI states in parallel considering the bidirectional constraints within the holistic sequence outperforms generating the next state autoregressively (Fig. 1(c)). With diffusion models emerging across multiple domains [20, 21, 22, 23, 24, 25, 26, 27], their strong forecasting capability has been widely validated. Therefore, we propose a diffusion-based method to predict future hand-object interaction in parallel, considering bidirectional constraints in the latent space compared to the traditional autoregressive generation (Fig. 1(b)). In the forward process, the past and future video images are first encoded to sequential latent features. Noises are gradually added to the part of the future sequence while the past features remain anchored. Subsequently, a Transformer-based network is devised for learning to reverse the diffusion and reconstruct the input latent features. Finally, the proposed predictors are exploited to recover future hand trajectories and object affordances from the denoised latents. A new regularization strategy is also proposed to link the two latent spaces adjacent to the denoising diffusion process. Moreover, we also identify two inherent gaps (Fig. 1(d)) affecting HOI prediction in the existing paradigm: 1) Directly predicting the projection of 3D future hand trajectories and object affordances on 2D egocentric image plane is an ill-posed problem involving spatial ambiguities. There is generally a gap between 2D pixel movements and 3D real actions, which can be bridged by spatial transformation across multiple views changing with egomotion. 2) The past egocentric videos are absorbed to predict future interaction states on the last observed image, which is actually a \u201ccanvas\u201d from a different view w.r.t all the other frames. Therefore, there is also a gap between the last observation (first-person view) and the other observations (analogous to third-person view) caused by egomotion. To fill the two gaps together, we further propose to integrate the camera wearer\u2019s egomotion into our diffusion-based paradigm. The utilized homography features enable the denoising model aware of the camera wearer\u2019s dynamics and the spatial relationship between consecutive egocentric video frames. The main contributions of this paper are as follows: 1) We propose a diffusion-based hand-object interaction prediction method, dubbed Diff-IP2D. To our best knowledge, this is the first work to jointly forecast future hand trajectories and object affordances by the devised denoising diffusion probabilistic model with only 2D egocentric videos as input. It provides a foundation generative paradigm in the field of HOI prediction. 2) The homography egomotion features are integrated to fill the motion-related gaps inherent in HOI prediction on egocentric videos. 3) We extend the existing metrics and propose the first protocol for jointly evaluating the performance of hand trajectory prediction and object affordance prediction. 4) Comprehensive experiments are conducted to demonstrate that our Diff-IP2D can predict plausible hand trajectories and object affordances compared to the state-of-the-art baselines, showing its potential for deployment on artificial intelligence systems. 2 \f2 Related work Understanding hand-object interaction. Human HOI comprehension can guide the downstream tasks in artificial intelligence systems. As a pioneer work, Calway et al. 
[28] connect the specific human tasks to relevant objects, revealing the importance of object-centric understanding in different HOI modes. In contrast, Liu et al. [29] focus on capturing the changeable attributes of objects, which underlines the relationship between object-centric interaction and goal-oriented human activities. After that, more and more works contribute to HOI understanding by pixel-wise semantic segmentation [30, 31, 32, 33], bounding-box-wise detection [34, 35, 36, 37], fine-grained hand/object pose estimation [38, 39, 40, 41, 42, 43]. Ego4D [44] further provides a standard benchmark that divides HOI understanding into several predefined subtasks. Predicting hand-object interaction. Analyzing only past human behavior may be insufficient for service robot manipulation or extended reality. Forecasting possible future object-centric HOI states based on historical observations is also valuable, which attracts increasing attention due to the general knowledge that can be transferred to robot applications [1, 18, 19, 45]. For example, Dessalene et al. [46] propose to generate contact anticipation maps and next active object segmentations as future HOI predictions. Liu et al. [14] first achieve hand trajectory and object affordance prediction simultaneously, revealing that predicting hand motion benefits the extraction of interaction hotspots. Following this work, Liu et al. [12] further develop an object-centric Transformer to jointly forecast future trajectories and affordances autoregressively, and annotate publicly available datasets to support future works. More recently, Bao et al. [13] lift the problem to 3D spaces where hand trajectories are predicted by an uncertainty-aware state space Transformer in an autoregressive manner. However, this method needs additional 3D perception inputs from the RGB-D camera. In this work, we still achieve joint hand trajectory and object affordance prediction on 2D human videos rather than in 3D space. We focus on capturing more general knowledge from only egocentric camera observations in an iterative non-autoregressive (iter-NAR) manner, rather than the autoregressive way of the state-of-the-art works [12, 13]. Diffusion-based egocentric video analysis. Diffusion models have been successfully utilized in exocentric and egocentric video prediction [47, 48, 49, 50, 2] due to their strong generation ability. With only egocentric videos as inputs, diffusion-based techniques can also achieve human mesh recovery [51, 52], 3D HOI reconstruction [53, 54], and 3D HOI synthesizing [16, 55]. However, none of these works concentrate on the combination of fine-grained hand trajectories and object affordances as future HOI representations for potential utilization in artificial intelligence systems. Our proposed Diff-IP2D first achieves this based on the denoising diffusion probabilistic model [20], which dominates the existing paradigm [12, 13] in prediction performance on egocentric videos. 3 Proposed Method 3.1 Preliminaries Task definition. Given the video clip of past egocentric observations I = {It}0 t=\u2212Np+1, we aim to predict future hand trajectories H = {HR t , HL t }Nf t=1(HR t , HL t \u2208R2) and potential object contact points O = {On}No n=1(On \u2208R2), where Np and Nf are the numbers of frames in the past and future time horizons respectively, and No denotes the number of predicted contact points used to calculate interaction hotspots as object affordances. 
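As a minimal illustration of this task interface (the tensor shapes and image resolution below are assumptions for the sketch, not the authors' exact configuration; N_p = 10 and N_f = 4 follow the Epic-Kitchens setting reported later):

import torch

N_p, N_f, N_o = 10, 4, 10       # past frames, future frames, contact-point candidates (illustrative)

past_images = torch.rand(N_p, 3, 256, 456)   # (T, C, H, W); the resolution here is a placeholder
hand_traj = torch.zeros(N_f, 2, 2)           # future waypoints: (t, {right, left}, (u, v)) on the last frame
contact_points = torch.zeros(N_o, 2)         # contact points (u, v), later aggregated into a hotspot map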
Following the previous works [12, 14], we predict the future positions of the right hand, the left hand, and the affordance of the next active object on the last observed image of the input videos. Diffusion models. In this work, we propose a diffusion-based approach to gradually corrupt the input to noisy features and then train a denoising model to reverse this process. We first map the input images into a latent space z0 \u223cq(z0), which is then corrupted to a standard Gaussian noise zS \u223cN(0, I). In the forward process, the perturbation operation can be represented as q(zs|zs\u22121) = N(zs; \u221a1 \u2212\u03b2szs\u22121, \u03b2sI), where \u03b2 is the predefined variance scales. In the reverse process, we set a denoising diffusion model to gradually reconstruct the latent z0 from the noisy zS. The denoised features can be used to recover the final future hand trajectories and object affordances. 3 \fforward process future HOI features conditional past HOI features reverse process Multi-Feature Extractor egomotion homography Hand Trajectory Head trajectory loss shared weights regularization affordance loss diffusion-related losses Input: sequential past egocentric images Output: future HOI states feature space (s=S) Side-Oriented Fusion Module MADT Predictors MADT Object Affordance Head global/right/left intermediate features right/left fused features diffusion process feature space (s=S/2) feature space (s=0) Hand Trajectory Head Figure 2: System Overview of Diff-IP2D. Our proposed paradigm takes in sequential past egocentric images and jointly predicts hand trajectories and object affordances as future HOI states. The observations are mapped to the latent feature space for the diffusion process. 3.2 Architecture System overview. Accurately reconstructing the future part of the input sequence is critical in the diffusion-based prediction task. We empirically found that ground-truth hand waypoints Hgt = {HR,gt t , HL,gt t }Nf t=1(HR,gt t , HL,gt t \u2208R2) and contact points Ogt = {Ogt n}No n=1(Ogt n \u2208R2) provide discrete and sparse supervision signals for reconstruction, which is not enough for capturing possible high-level semantics such as human intentions in the denoising process. Therefore, as Fig. 2 shows, we first use Multi-Feature Extractor and Side-Oriented Fusion Module to transform the input images into latent HOI features, and then implement diffusion-related operation in the latent continuous space. The HOI features denoised by Motion-Aware Denoising Transformer are further absorbed by Hand Trajectory Head and Object Affordance Head to generate future hand trajectories and object hotspots. Multi-Feature Extractor (MFE). Following the previous work [12], we use MFE that consists of a pretrained Temporal Segment Network (TSN) provided by Furnari et al. [34], RoIAlign [56] with average pooling, and Multilayer Perceptron (MLP) to extract hand, object, and global features for each sequence image It \u2208I. The positions of hand-object bounding boxes are also encoded to feature vectors fused with hand and object features. Side-Oriented Fusion Module (SOFM). Our proposed SOFM is a learnable linear transformation to fuse the above-mentioned three types of feature vectors into the final latent form for two sides respectively. Specifically, the global features and right-side features (right-hand/object features) are concatenated to the right-side HOI features FR = {F R t }X t=\u2212Np+1(F R t \u2208Ra, X = Nf for training and X = 0 for inference). 
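A minimal sketch of the forward corruption kernel q(z_s | z_{s-1}) defined above; the linear variance schedule here is only a placeholder, since the paper adopts a square-root schedule:

import torch

def forward_step(z_prev, beta_s):
    # q(z_s | z_{s-1}) = N(z_s; sqrt(1 - beta_s) * z_{s-1}, beta_s * I)
    return torch.sqrt(1.0 - beta_s) * z_prev + torch.sqrt(beta_s) * torch.randn_like(z_prev)

z = torch.randn(14, 512)                    # a latent HOI sequence of N_p + N_f = 14 tokens, dim a = 512
betas = torch.linspace(1e-4, 2e-2, 1000)    # placeholder schedule for illustration only
for s in range(10):                         # a few corruption steps
    z = forward_step(z, betas[s])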
The operation and feature sizes are the same as the leftside counterparts, leading to FL = {F L t }X t=\u2212Np+1. We further concatenate the side-oriented features along the time axis respectively to generate the input latents F R seq, F L seq \u2208R(Np+X)\u00d7a for the following diffusion model. Motion-Aware Denoising Transformer (MADT). Our proposed MADT takes in the noisy latent HOI features and reconstructs future HOI features for the following predictors conditioned on past HOI counterparts. MADT consists of several stacked Transformer layers as shown in Fig. 3. Inspired by the text generation technique [26], we anchor the past HOI features for both forward and reverse processes. We only impose noises and denoise at the positions of the future feature sequence. The features of the two sides are denoised using the same model, leading to \u02c6 F R seq and \u02c6 F L seq. In addition, egomotion guidance is proposed here to fill the gaps mentioned in Sec. 1. Specifically, we first extract the Scale-Invariant Feature Transform (SIFT) descriptors to find the pixel correspondence between two adjacent images of past observations I. Then we calculate the homography matrix with RANSAC that finds a transformation to maximize the number of inliers in the keypoint pairs. We accumulate the consecutive homography matrices and obtain Mseq \u2208RNp\u00d73\u00d73 representing the camera wearer\u2019s motion between It (t \u22640) and I0. They are further linearly embedded into an egomotion feature Eseq \u2208RNp\u00d7b by Motion Encoder. The multi-head cross-attention module 4 \fMHSA Add & Norm MHCA Add & Norm FFN Add & Norm past HOI features TE PE egomotion feature latent noisy samples denoised future HOI features \u3002 homography Motion Encoder N X input video clip \u3002\u3002 t m1,1 m1,2 m1,3 m2,1 m2,2 m2,3 m3,1 m3,2 m3,3 ... ... ... ... ... ... Figure 3: Architecture of our proposed MADT. MADT receives corrupted latent HOI features with the position embedding (PE) and time embedding (TE), and outputs denoised future HOI features. (MHCA) in the devised Transformer layer then absorbs the egomotion feature to guide the denoising process. More analysis on the use of egomotion guidance can be found in Appendix, Sec. B. Predictors. Our proposed predictors consist of Hand Trajectory Head (HTH) and Object Affordance Head (OAH). HTH contains an MLP that receives the future parts of the denoised features, \u02c6 F R seq[Np+1: Np+Nf] and \u02c6 F L seq[Np+1 : Np+Nf] to generate future waypoints H of two hands. As to OAH, we empirically exploit Conditional Variational Autoencoder (C-VAE) [57] to generate possible contact points O in the near future. Take the right hand as an example, the condition is selected as the time-averaged \u02c6 F R seq and predicted waypoints HR t . Note that we additionally consider denoised future HOI features \u02c6 F R seq[Np+1 : Np+Nf] (t>0) besides the features from the past observation (t\u22640) for object affordance prediction. This aligns with the intuitive relationship between the contact points and the overall interaction process. Therefore, we integrate richer conditional features from trajectory prediction into the object affordance prediction compared to the previous work [12] only conditioned on historical features. 3.3 Training Forward process. We implement partial noising [26] in the forward process during training. 
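For the egomotion guidance described above, the accumulated homographies M_seq could be computed along the following lines; this sketch relies on OpenCV's SIFT and RANSAC homography estimation, and the helper names, ratio-test threshold, and reprojection threshold are illustrative choices rather than the paper's exact settings:

import cv2
import numpy as np

def pairwise_homography(img_a, img_b):
    # SIFT correspondences between two adjacent past frames, then a RANSAC homography (img_a -> img_b).
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    knn = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe's ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def egomotion_sequence(past_frames):
    # past_frames: [I_{-Np+1}, ..., I_{-1}, I_0]; returns homographies mapping each I_t (t <= 0) to I_0.
    consecutive = [pairwise_homography(a, b) for a, b in zip(past_frames[:-1], past_frames[1:])]
    M_seq, H_acc = [np.eye(3)], np.eye(3)        # I_0 maps to itself
    for H in reversed(consecutive):              # accumulate H_{-1->0} @ ... @ H_{t->t+1}
        H_acc = H_acc @ H
        M_seq.append(H_acc.copy())
    return np.stack(M_seq[::-1])                 # (N_p, 3, 3), chronological order; embedded by Motion Encoder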
Taking the right side as an example, the output of SOFM is first extended by a Markov transition q(z0|F R seq) = N(F R seq, \u03b20I), where F R seq \u2208R(Np+Nf)\u00d7a. We discard the embedding process from Gong et al. [26] since the HOI feature F R seq is already in the continuous latent space. In each following forward step of the diffusion model, we implement q(zs|zs\u22121) by adding noise to the future part of zs\u22121, i.e., zs\u22121[Np+1:Np+Nf] for both sides. Reverse process. After corrupting the initial z0 to zS by the forward process, our proposed MADT is adopted to denoise zS to z0 in a classifier-free manner. Considering the guidance of egomotion features, the reverse process can be modeled as pMADT(z0:S) := p(zs) QS s=1 pMADT(zs\u22121|zs, Mseq). Specifically, the MADT model fMADT(zs, s, Mseq) predicts the injected noise for each forward step with pMADT(zs\u22121|zs, Mseq) = N(zs\u22121; \u00b5MADT(zs, s, Mseq), \u03c3MADT(zs, s, Mseq)). The same denoising operation and motion-aware guidance are applied to HOI features of both sides. Training objective. The loss function to train the networks in Diff-IP2D contains four parts, including diffusion-related losses, trajectory loss, affordance loss, and an additional regularization term (see Fig. 2). Take the right side as an example, we use the variational lower bound LR VLB as the diffusion-related losses: LR VLB = S X s=2 ||zR 0 \u2212fMADT(zR s, s, Mseq)||2 + ||F R seq \u2212\u02c6 F R seq||2, (1) where \u02c6 F R seq = fMADT(zR 1, 1, Mseq). To reconstruct hand trajectories beyond the latent feature space, we further set trajectory loss LR traj with the distance between the ground-truth waypoints and the ones predicted by HTH: LR traj = Nf X t=1 ||HR t \u2212HR,gt t ||2, (2) 5 \fwhere HR t = fHTH( \u02c6 F R seq[Np+1:Np+Nf]). We only focus on the future part out of the holistic sequence for computing LR traj since we let HTH be more sensitive to predictions rather than bias it to past observations. As to the object affordance prediction, we also compute the affordance loss Laff after multiple stochastic sampling considering the next active object recognized following Liu et al. [12] (assuming in the right side here for brevity): Laff = No X n=1 ||On \u2212Ogt n||2 + cLKL, (3) where On =fOAH( \u02c6 F R seq, HR t ), and LKL = 1 2(\u2212log \u03c32 OAH( \u02c6 F R seq, HR t )+\u00b52 OAH( \u02c6 F R seq, HR t )+\u03c32 OAH( \u02c6 F R seq, HR t )\u2212 1) is the KL-Divergence regularization for C-VAE, which is scaled by c = 1e-3. The latent features and predicted hand waypoints are fused by MLP suggested by the previous work [12]. We consider both reconstructed future HOI features \u02c6 F R seq[Np+1:Np+Nf] and anchored past counterparts \u02c6 F R seq[0:Np] compared to [12] as mentioned before. We also notice that the latent feature spaces before and after the denoising diffusion process represent the same \u201cprofile\u201d of the input HOI sequence. Therefore, we propose an additional regularization term implicitly linking F R seq and \u02c6 F R seq by hand trajectory prediction: LR reg = Nf X t=1 || \u02dc HR t \u2212HR,gt t ||2, (4) where \u02dc HR t = fHTH(F R seq[Np+1:Np+Nf]). Although Eq. (4) does not explicitly contain the term \u02c6 F R seq, the training direction is the same with Eq. (2), thus maintaining training stability. The regularization helps the convergence of Diff-IP2D by consistently constraining the two latent spaces alongside the diffusion process. 
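A simplified, single-side sketch of the partial noising and the losses in Eqs. (1), (2), and (4); the closed-form q(z_s | z_0) coefficients, the batch-first layout (B, N_p + N_f, a), and the madt/hth call signatures are schematic assumptions, and the full variational bound of Eq. (1) is reduced to a single-step surrogate:

import torch
import torch.nn.functional as F

def partial_noising(z0, s, alpha_bar, N_p):
    # Corrupt only the future slots of the latent sequence; the past N_p tokens stay anchored.
    noise = torch.randn_like(z0)
    zs_full = alpha_bar[s].sqrt() * z0 + (1.0 - alpha_bar[s]).sqrt() * noise   # closed-form q(z_s | z_0)
    zs = z0.clone()
    zs[:, N_p:] = zs_full[:, N_p:]
    return zs

def one_side_losses(madt, hth, z0, F_seq, H_gt, M_seq, s, alpha_bar, N_p):
    zs = partial_noising(z0, s, alpha_bar, N_p)
    z0_hat = madt(zs, s, M_seq)                          # MADT predicts the clean latent sequence
    loss_vlb = F.mse_loss(z0_hat, z0)                    # single-step surrogate of Eq. (1)
    loss_traj = F.mse_loss(hth(z0_hat[:, N_p:]), H_gt)   # Eq. (2): HTH on denoised future tokens
    loss_reg = F.mse_loss(hth(F_seq[:, N_p:]), H_gt)     # Eq. (4): regularization on pre-diffusion latents
    return loss_vlb, loss_traj, loss_reg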
Here we do not use object affordance prediction for regularization because we empirically found that incorporating OAH mitigates training efficiency while the positive effect is not obvious. Finally, we get the total loss to train our proposed Diff-IP2D: Ltotal = \u03bbVLB(LR VLB + LL VLB) + \u03bbtraj(LR traj + LL traj) + \u03bbaffLaff + \u03bbreg(LR reg + LL reg), (5) where \u03bbVLB, \u03bbtraj, \u03bbaff, and \u03bbreg are the weights to balance different losses. Besides, we leverage the importance sampling technique proposed in improved DDPM [58], which promotes the training process focusing more on the steps with relatively large Ltotal. 3.4 Inference In the inference stage, we first sample F R noise, F L noise \u2208RNf\u00d7a from a standard Gaussian distribution, which is then concatenated with F R seq, F L seq \u2208RNp\u00d7a along the time axis to generate zR S and zL S. Then we use MADT to predict zR 0 and zL 0 based on DDIM sampling [59]. Note that we anchor the past part of reparameterized zs as the fixed condition in every step of the inference process following Gong et al. [26]. Finally, the generated \u02c6 F R seq and \u02c6 F L seq are used to predict future hand waypoints and contact points by fHTH(\u00b7) and fOAH(\u00b7) as mentioned before. It can be seen from the inference stage that Diff-IP2D can be regarded as an iter-NAR model in the latent feature space. Compared to the state-of-the-art baselines in an autoregressive manner, our approach shifts the iteration from F1,1 F1,2 F1, Nf ... F2,1 F2,2 F2, Nf ... FS,1 FS,2 FS, Nf ... ... denoising diffusion process time axis ... ... F1 F2 FNf ... time axis H1 H2 HN ... f FS-1,1 FS-2,1 FS, Nf ... H1 H2 HN ... f F3 H3 F1 F2 FNf ... time axis H1 H2 HN ... f F3 H3 (b) Iter-NAR Prediction (a) AR Prediction Figure 4: Comparison of AR and our iter-NAR prediction. the time axis to the denoising direction, which is shown in Fig. 4. This alleviates the accumulated artifacts caused by the limited iteration in the time dimension, and maintains bidirectional constraints among the sequential features to generate future HOI states in parallel, providing a deeper understanding of human intention. We further present the mathematical relationship between the two iter-NAR models, Diff-IP2D for HOI prediction and DiffuSeq [26] for text generation in Appendix, Sec. A. 6 \f4 Experiments 4.1 Experimental setups Datasets. Following the previous work [12], we utilize three publicly available datasets including Epic-Kitchens-55 (EK55) [60], Epic-Kitchens-100 (EK100) [61], and EGTEA Gaze+ (EG) [11]. For the EK55 and EK100 datasets, we sample past Np = 10 frames (2.5 s) to forecast HOI states in future Nf = 4 frames (1.0 s), both at 4 FPS. As to the EG dataset, Np = 9 frames (1.5 s) are used for Nf = 3 HOI predictions (0.5 s) at 6 FPS. See the Appendix, Sec. C.2 for more details. Diff-IP2D configuration. MFE extracts the hand, object, and global feature vectors all with the size of 512 for each input image. For the EK55 and EK100 datasets, the outputs of SOFM F R seq, F L seq have the size of 14 \u00d7 512 for training and 10 \u00d7 512 for inference. For the EG dataset, F R seq, F L seq are 9 \u00d7 512 for training and 12 \u00d7 512 for inference. As to the diffusion process, the total number of steps S is set to 1000. We also provide an ablation study on multiple steps for training and inference in Appendix, Sec. D.3. The square-root noise schedule in Diffusion-LM [62] is adopted here for the forward diffusion process. 
MADT has 6 Transformer layers (Fig. 3) for denoising, where the embedding dimension is 512, the number of heads is set to 4, and the intermediate dimension of the feed-forward layer is set to 2048. Motion Encoder linearly projects each homography matrix to an egomotion feature vector of 512. We use an MLP with hidden dimensions 256 and 64 to predict the hand waypoints as HTH, and a C-VAE containing an MLP with a hidden dimension 512 to predict contact points as OAH. The training configurations can be found in Appendix, Sec. C.2. In the reference stage, we generate the 10 candidate samples for each prediction. Baseline configuration. We choose Constant Velocity Hand (CVH), Seq2Seq [63], FHOI [14], OCT [12], and USST [13] as the baselines for hand trajectory prediction. CVH is the most straightforward one which assumes two hands remain in uniform motion over the future time horizon with the average velocity during past observations. Besides, we adjust the input and architecture of USST to the 2D prediction task since it was originally designed for 3D hand trajectory prediction. We choose Center Object [14], Hotspots [64], FHOI [14], OCT [12], and Final Hand of USST [13] (USST-FH) as the baselines for object affordance prediction. USST-FH puts a mixture of Gaussians at the last hand waypoint predicted by USST since its vanilla version can only predict waypoints. Evaluation metrics. Following the previous work [14, 12, 13], we use Final Displacement Error (FDE) to evaluate prediction performance on hand trajectories. Considering the general knowledge of \u201cpost-contact trajectories\u201d extracted from human videos is potentially beneficial to robot manipulation [1, 18], we additionally extend the metric Average Displacement Error to Weighted Displacement Error (WDE): WDE = 1 2Nf X R,L Nf X t=1 t Nf D(Ht, Hgt t ), (6) where D(\u00b7) denotes the L2 distance function and the later waypoints contribute to larger errors. We select the mean error among the 10 samples for each hand trajectory prediction. As to the object affordance prediction, we use Similarity Metric (SIM) [65], AUC-Judd (AUC-J) [66], and Normalized Scanpath Saliency (NSS) [67] as evaluation metrics. We use all 10 contact point candidates to compute the metric values for each affordance prediction. Moreover, we propose a novel object-centric protocol to jointly evaluate the two prediction tasks. We first calculate the averaged hand waypoints \u00af HR t and \u00af HL t for each future timestamp from multiple samples. Then we select the waypoint closest to each predicted contact prediction On as an additional \u201cinteraction point\u201d, which can be formulated by: \u00af Hip n = minR,L,tD( \u00af Ht, On), (7) Finally, the joint hotspot is predicted using { \u00af Hip n \u222aOn}No n=1. This protocol comprehensively considers object-centric attention since HOI changes the object states and hand waypoints must have a strong correlation with object positions. Note that we also use the quantitative metrics same as the ones for object affordance prediction, which are denoted as SIM\u2217, AUC-J\u2217, and NSS\u2217. More clarifications about our proposed new protocol can be found in Appendix, Sec. C.1. 
7 \fTable 1: Comparison of performance on hand trajectory and object affordance prediction approach EK55 EK100 EG WDE \u2193 FDE \u2193 WDE \u2193 FDE \u2193 WDE \u2193 FDE \u2193 CVH 0.636 0.315 0.658 0.329 0.689 0.343 Seq2Seq [63] 0.505 0.212 0.556 0.219 0.649 0.263 FHOI [14] 0.589 0.307 0.550 0.274 0.557 0.268 OCT [12] 0.446 0.208 0.467 0.206 0.514 0.249 USST [13] 0.458 0.210 0.475 0.206 0.552 0.256 Diff-IP2D (ours) 0.411 0.181 0.407 0.187 0.478 0.211 SIM \u2191 AUC-J \u2191 NSS \u2191 SIM \u2191 AUC-J \u2191 NSS \u2191 SIM \u2191 AUC-J \u2191 NSS \u2191 Center Object [14] 0.083 0.553 0.448 0.081 0.558 0.401 0.094 0.562 0.518 Hotspots [64] 0.156 0.670 0.606 0.147 0.635 0.533 0.150 0.662 0.574 FHOI [14] 0.159 0.655 0.517 0.120 0.548 0.418 0.122 0.506 0.401 OCT [12] 0.213 0.710 0.791 0.187 0.677 0.695 0.227 0.704 0.912 USST-FH [13] 0.208 0.682 0.757 0.179 0.658 0.754 0.190 0.675 0.729 Diff-IP2D (ours) 0.226 0.725 0.980 0.211 0.736 0.917 0.242 0.722 0.956 SIM\u2217\u2191 AUC-J\u2217\u2191 NSS\u2217\u2191 SIM\u2217\u2191 AUC-J\u2217\u2191 NSS\u2217\u2191 SIM\u2217\u2191 AUC-J\u2217\u2191 NSS\u2217\u2191 FHOI [14] 0.130 0.602 0.487 0.113 0.545 0.409 0.118 0.501 0.379 OCT [12] 0.219 0.720 0.848 0.182 0.684 0.662 0.194 0.672 0.752 Diff-IP2D (ours) 0.222 0.730 0.888 0.204 0.727 0.844 0.226 0.701 0.825 Figure 5: Visualization of hand trajectory prediction on Epic-Kitchens. The waypoints from groundtruth labels, Diff-IP2D, and the second-best baseline [12] are connected by red, white, and blue dashed lines respectively. 4.2 Separate evaluation on hand trajectory and object affordance prediction We first present the evaluation results on hand trajectory prediction. As Tab. 1 depicts, our proposed Diff-IP2D outperforms all the baselines on the EK55 and EK100 datasets on WDE and FED. This is mainly achieved by the devised iter-NAR paradigm of Diff-IP2D alleviating degeneration in AR baselines, as well as the egomotion guidance. The visualization of the related hand prediction results is shown in Fig. 5. It can be seen that our proposed method can better capture the camera wearer\u2019s intention (such as putting the food in the bowl) and generate more reasonable future trajectories even if there is a lack of past observations for hands (such as reaching out towards the table). Besides, our method can predict a good final hand position although there is a large shift in the early stage (the subfigure in the bottom right corner of Fig. 5), which benefits from our diffusion-based parallel generation. When directly transferring the models trained on Epic-Kitchens to the unseen EG dataset, our method still outperforms the other baselines, which improves by 7.0% and 15.3% against the second-best method on WDE and FDE respectively. This reveals the solid generalization capability of our diffusion-based approach across different environments. The comparison results of object affordance prediction are also shown in Tab. 1. Our proposed Diff-IP2D predicts the hotspots with larger SIM, AUC-J, and NSS compared to all the baselines on both Epic-Kitchens data and unseen EG data. Fig. 6 illustrates the predicted contact points with minimum distances to the ground-truth ones. Our proposed method focuses more on objects of interest considering the features of the holistic interaction and potential hand trajectories, and therefore grounds the contact points closer to the ground-truth labels than the counterparts of the baseline. 
8 \f\u8981\u8bf4 \u4e3a\u4e86\u663e\u793a\u65b9\u4fbf \u52a0\u4e86\u4e2a\u865a\u62df\u7684hotspots\u5728\u4e0a\u9762 Figure 6: Visualization of object affordance prediction on Epic-Kitchens. The contact points from ground-truth, Diff-IP2D, and the state-of-the-art baseline OCT [12] are represented by red, white, and blue dots respectively. For a clearer illustration, we additionally put a fixed Gaussian with each contact point as the center. See the Appendix, Sec. D.6 for more visualization results. Table 2: Ablation study on egomotion guidance approach EK55 EK100 WDE \u2193 FDE \u2193 SIM \u2191 AUC-J \u2191 NSS \u2191 WDE \u2193 FDE \u2193 SIM \u2191 AUC-J \u2191 NSS \u2191 Diff-IP2D* 0.427 0.186 0.218 0.717 0.929 0.439 0.198 0.201 0.710 0.846 Diff-IP2D 0.411 0.181 0.226 0.725 0.980 0.407 0.187 0.211 0.736 0.917 improvement 3.7% 2.7% 3.7% 1.1% 5.5% 7.3% 5.6% 5.0% 3.7% 8.4% Diff-IP2D*: Diff-IP2D w/o egomotion guidance 4.3 Joint evaluation on hand trajectory and object affordance prediction We further compare Diff-IP2D with the other two joint prediction baselines, FHOI [14] and OCT [12], using our proposed object-centric protocol. The video clips containing both ground-truth hand waypoints and contact points are used for evaluation in this experiment. The results are also shown in Tab. 1, which indicates that our proposed Diff-IP2D can generate the best object-centric HOI predictions considering the two tasks concurrently on both Epic-Kitchens and unseen EG data. The results also suggest that Diff-IP2D outperforms the baselines on object-centric HOI prediction by focusing more attention on the target objects and predicting reasonable hand trajectories around them. 4.4 Ablation study on egomotion guidance We provide an ablation study of the egomotion features used to guide MADT denoising on the EK55 and EK100 datasets. Here we replace the MHCA in MADT with a multi-head self-attention module (MHSA) to remove the egomotion guidance while keeping the same parameter number. The experimental results in Tab. 2 show that the guidance of motion features improves our proposed diffusion-based paradigm noticeably on both hand trajectory prediction and object affordance prediction. This is achieved by narrowing the two gaps caused by 2D-3D ill-posed problem and view difference mentioned in Sec. 1. Note that the egomotion guidance is more significant on the EK100 dataset than on the EK55 dataset. The reason could be that EK100 has a larger volume of training data incorporating more diverse egomotion patterns than EK55, leading to a model that can capture human dynamics better. More results of the related joint evaluation are presented in Appendix, Sec. D.1. 4.5", + "additional_graph_info": { + "graph": [ + [ + "Jingyi Xu", + "Hieu Le" + ] + ], + "node_feat": { + "Jingyi Xu": [ + { + "url": "http://arxiv.org/abs/2403.10001v1", + "title": "Visual Foundation Models Boost Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation", + "abstract": "Unsupervised domain adaptation (UDA) is vital for alleviating the workload of\nlabeling 3D point cloud data and mitigating the absence of labels when facing a\nnewly defined domain. Various methods of utilizing images to enhance the\nperformance of cross-domain 3D segmentation have recently emerged. 
However, the\npseudo labels, which are generated from models trained on the source domain and\nprovide additional supervised signals for the unseen domain, are inadequate\nwhen utilized for 3D segmentation due to their inherent noisiness and\nconsequently restrict the accuracy of neural networks. With the advent of 2D\nvisual foundation models (VFMs) and their abundant knowledge prior, we propose\na novel pipeline VFMSeg to further enhance the cross-modal unsupervised domain\nadaptation framework by leveraging these models. In this work, we study how to\nharness the knowledge priors learned by VFMs to produce more accurate labels\nfor unlabeled target domains and improve overall performance. We first utilize\na multi-modal VFM, which is pre-trained on large scale image-text pairs, to\nprovide supervised labels (VFM-PL) for images and point clouds from the target\ndomain. Then, another VFM trained on fine-grained 2D masks is adopted to guide\nthe generation of semantically augmented images and point clouds to enhance the\nperformance of neural networks, which mix the data from source and target\ndomains like view frustums (FrustumMixing). Finally, we merge class-wise\nprediction across modalities to produce more accurate annotations for unlabeled\ntarget domains. Our method is evaluated on various autonomous driving datasets\nand the results demonstrate a significant improvement for 3D segmentation task.", + "authors": "Jingyi Xu, Weidong Yang, Lingdong Kong, Youquan Liu, Rui Zhang, Qingyuan Zhou, Ben Fei", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Point cloud segmentation is vital for real-world applications such as 3D scene perception, robotics, and autonomous driving [36, 45]. During this process, each individual point within the point cloud is assigned a semantic label to enhance understanding and analysis [37, 58]. However, labeling massive 3D data is a laborious and costly process [13, 57]. Hence, it is significant to develop domain adaptation, i.e. unsupervised, methods that could efficiently exploit existing annotation of the source point cloud and transfer the acquired knowledge to the label-free target 3D domain [18]. Otherwise, assigning semantic labels for point clouds is intrinsically challenging due to their sparsely distributed, unstructured, and colorless nature [14]. To incorporate multi-modal information, the advent of multimodal autonomous driving datasets [1, 2, 6, 7] has facilitated the availability of concurrent images alongside point clouds, which opens a valuable research topic and enables researchers to utilize the rich visual information embedded in images that vastly facilitate 3D semantic segmentation. Recent research has proposed a promising line of frameworks [3, 15, 24, 28] that simultaneously leverage multimodal to address the 3D segmentation task in the unlabeled target domain. In these approaches, the neural network for different modalities was first pre-trained and then applied to the target domain for generating pseudo-labels (PL) [23]. The utilization of these pseudo-labels in the subsequent training stage can provide supervision for the target domain, thereby improving the overall performance. Despite the proven effectiveness of this method, pseudo-labels are inevitably noisy (Fig. 1a Left). These noises arise from the limited capacity of pre-trained models, which consequently restricts the segmentation capability of neural networks. 
Besides, the information exchange across various domains is realized through shared neural networks, resulting in limited adaptation at a coarse level [51]. Visual Foundation Models (VFMs) have already demonstrated remarkable performance on a variety of open-world 2D vision tasks [21, 43, 44, 60, 61]. Specifically, the Segment Anything Model (SAM) [21] has achieved outstanding performance on zero-shot 2D segmentation, while Segment Everything Everywhere Model (SEEM) [61] further extends such capability of SAM by providing accurate semantic labels for generated masks. In the light of rich and robust visual priors learned by VFMs [33], we propose VFMSeg to fully exploit their zeroshot segmentation capability of VFMs and transfer 2D visual knowledge across modalities and domains. To tackle the inaccurate PLs generated by pre-trained models, we present VFM-PL, which generates more precise pseudolabels (Fig. 1a Right) by taming SEEM for labeling images from autonomous driving datasets. Furthermore, aiming to further narrow the gap between the source and target domain, we have additionally developed a method dubbed FrustumMixing, as illustrated in Fig. 1b. FrustumMixing utilizes SAM [21] to generate fine-grained masks for images from both domains. These masks are then utilized to mix cross-modal and cross-domain samples, with a portion of the masks involved in the mixing process. Since the masks generated by SAM lack semantic meaning, we adopt SEEM to complement the absence of semantic labels. FrustumMixing operates similarly to the concept of view frustum and excels in generating semantically augmented images and point clouds by combining different perspectives. The inclusion of these semantically augmented samples, which encompass fine-grained semantic instances extracted from the other domain, is anticipated to provide significant performance improvement when feeding into neural networks [29]. To assess the effectiveness of our proposed method, we conducted comprehensive experiments in various cross-modal 3D UDA segmentation scenarios. Our results demonstrate that our method significantly outperforms existing off-the-shelf approaches by a substantial margin. Our contributions of the proposed VFMSeg are fourfold: \u2022 We propose VFMSeg, a novel cross-modal unsupervised domain adaptation framework that boosts the performance of 3D semantic segmentation by the merit of visual foundation models. \u2022 To tackle the inaccuracy of traditional pseudo labels, we exploit the knowledge priors learned by VFMs to produce more precise labels for target domain. \u2022 To further narrow the domain gap, we leverage another VFM trained on fine-grained 2D masks to guide the generation of semantically augmented images and point clouds, thereby enhancing the cross-domain capability of the backbones. \u2022 Extensive experiments on three cross-domain settings demonstrate our VFMSeg can outperform existing stateof-the-art counterparts. 2. Related Works Unsupervised Domain Adaptation for 3D Segmentation. Domain adaptation aims to transfer knowledge and bridge the distribution gap between source and target domains [38]. For UDA, the source domain has annotations while the target domain is unlabeled and numerous methods have already been proposed to tackle 2D segmentation task [19, 27, 40, 41, 56]. UDA for 3D segmentation has drawn great attention in recent studies due to its paramount importance for autonomous driving vehicles [31, 31, 34, 52]. 
Although these methods are promising for uni-modal (image or point cloud) segmentation, the benefit of leveraging complementary information from both modalities has not been fully exploited. As a pioneer work, Jaritz et al. [15, 16] proposed xMUDA framework to capitalize both 2D and 3D \f2D Branch 3D Branch 2D Network 3D Network 2D Feature 3D Feature 2D Head 1 Source/Target Point Clouds 3D Head 1 2D Head 2 3D Head 2 Image Information Flow Point Cloud Information Flow Back Propagation Loss Point image with Class Labels Class Labels Segmentation Loss Source/Target Images Cross-modal Learning VFM Prior Knowledge Visual Prior Visual Prior VFMs Mixed Source/Target Images VFMs Mixed Source/Target Point Clouds (Visualized in Point Image) Frustrum Mixing Class Probs. Class Probs. Frustrum Mixing Point Image Figure 2. Framework overview. Both 2D and 3D neural networks are trained on source and target data. Hence, the domain-invariant feature is captured during parameter optimization. There are two projection heads in those networks. The first head leverages supervision signal within labels and the second head provides cross-modal information exchange through KL-Divergence (Sec 3.1). Since the target domain is free of labels under the UDA setting, pre-trained 2D and 3D networks are first utilized to generate pseudo-labels for the target domain. VFM is applied to provide guidance for producing more accurate pseudo-labels (Sec 3.2). The visual prior of a VFM is also leveraged to create diverse training samples that bridge the gap between two domains (3.3). modalities for UDA in 3D segmentation. Based on that effective framework, Liu et al. [24] further incorporate an adversarial training scheme to enhance the information transfer between images and point clouds. Peng et al. [28] introduce a deformable 2D feature patch for better information exchange with 3D point clouds which eventually leads to sufficient domain adaptation. Cardace et al. [3] exploit additional depth information to train a 2D encoder that is resistant to domain shift. Chen et al. [5] explore a new setting (different from xMUDA) of UDA where the source point clouds are removed from the training process and leverage a mixing strategy for data augmentation to compensate for the absence of source 3D data. In this paper, we focus on utilizing VFMs to provide refined supervision in cross-modal UDA. Visual Foundation Models. Pre-trained language foundation models [12, 17, 39, 42] have not only achieved significant advancements in natural language processing (NLP) but also transformed the way people work and conducting research within the community. Following this trend, several visual foundation models (VFMs) [21, 30, 43, 44, 60, 61] have emerged and showcased their revolutionary capabilities in the field of 2D vision. Notable VFMs include Segment Anything Model (SAM) [21], X-Decoder [60], Segment Everything Everywhere all at once (SEEM) [61], HIPE [43] and SegGPT [44]. These VFMs have made significant contributions to image segmentation tasks and have shown promising potential. Most recently, VFMs are utilized for various 3D tasks [25, 50, 53]. However, the fruitful knowledge inherent in these VFMs has not been fully exploited under the UDA 3D segmentation. Data Augmentation via mixing. Deep neural networks commonly exhibit undesirable behaviors, including memorization and overfitting. 
To address this problem [54, 55], mixing strategies are employed to train neural networks using additional data generated through the convex combination of paired samples and labels. This involves mixing either the entire samples [55] or cutting and pasting patches from different samples [54]. Mixing strategies have also demonstrated their effectiveness in mitigating domain shifts in UDA for tasks such as image classification [46, 48] and semantic segmentation [8, 49]. Zou et al. [59] introduce the concept of Mix3D [26] as a pretext task for classification, where the rotation angle of mixed pairs is predicted. Kong et al. [22] presented a semi-supervised learning pipeline by incorporating a novel LiDAR mixing technique called LaserMix, which intertwines laser beams from different scans to leverage the distinctive spatial prior in LiDAR scenes. Compositional Semantic Mix (CoSMix) [34] is proposed as the first single-modal UDA approach [35, 47] for point cloud segmentation based on sample mixing. However, the application of mixing strategies to tackle crossmodal UDA in 3D semantic segmentation has not been fully explored in prior research. To bridge this research gap, we propose a novel VFM-guided mixing strategy that surpasses the conventional approach of simply concatenating two point clouds or randomly selecting crops. Our VFM-PL takes advantage of VFM to semantically guide the mixing process, thereby enhancing the effectiveness of the mixing strategy. \fSEEM Object Level Segmentation with Semantic Class Pre-trained 2D Network Predict Projection Head 1 Class Mapping Car Pers. Rd. Tree Car Pers. Rd. Tree Pixel-wise Class Distribution Pixel-wise Class Distribution Average Pixel-wise Pseudo-Labels Targe Domain Person Tree \u201c Person\u201d \u221a Figure 3. VFM-PL: Leveraging the visual prior for generating pseudo labels. We utilize VFM to provide guidance for generating pseudo-labels in the target domain. Since SEEM [61] is trained on a huge amount of image-text pairs and segmentation masks across diverse scenes, its learned feature encoder is naturally resistant to domain shifts. By averaging the probabilistic prediction of pretrained 2D network and SEEM, the generation of pseudo-labels can be more precise and robust. 3. Method In this section, we first present the overall pipeline of our VFMSeg for cross-modal UDA that leverages both 2D and 3D modalities (Sec 3.1). Then we elaborate on our proposed VFM-PL of transferring visual prior learned by VFM to source and target domains (Sec 3.2). Finally, we introduce the proposed FrustumMixing strategy that further narrows down the domain gap (Sec 3.3). 3.1. Framework Overview The overall architecture is depicted in Fig. 2. The main steps of the framework can be summarized as follows. Initially, we generate the semantically augmented data domain M by mixing samples from source and target domain with our FrustumMixing (see Sec. 3.3). Then, we input the data of the source domain S, target domain T , and the mixed source and target domain M into the 2D and 3D networks. This process generates the corresponding feature maps before the classifier, namely FS 2D, FS 3D, FT 2D, and FT 3D. Following that, our VFMSeg generates predictions for 3D semantic segmentation in both the source and target domains, denoted as PS 2D, PS 3D, PT 2D, and PT 3D. 
Subsequently, the source domain predictions are supervised using the corresponding source domain labels, whereas our target domain predictions are supervised using accurate pseudo labels derived from our proposed VFM-PL method. With the help of our proposed VFM-PL and FrustumMixing, the performance of cross-modal 3D UDA segmentation can be boosted. 3.2. VFM-PL: Adapting Prior Knowledge of VFM SEEM has been trained on rich image-text pairs across numerous scenes. It has learned robust visual priors and can provide accurate object-level 2D masks with accurate semantic labels. In light of this, we introduce VFM-PL to generate refined pseudo labels. VFM-PL comprises two steps: (1) Pre-train a 2D neural network that could predict semantic labels on target domain; (2) Leverage SEEM [61] and the pre-trained 2D neural network to generate pseudo labels for the subsequent training stage. Cross-modal and supervised pre-training. We follow the image and point cloud information flow in the framework depicted in Fig. 2 (mixed data are excluded in this stage) to pre-train a 2D neural network. In both the 2D and 3D neural networks, there are two projection heads. The first projection head (PH1) is specifically designed for the final prediction. In the source domain, the labeled data can provide the first head of precise semantic labels for both neural networks and help them capture significant domain features. The target domain, on the other hand, provides no supervision for the first head in this stage. The second projection head (PH2) is designed to transfer visual knowledge across two modalities via KL-Divergence. More specifically, the 2D to 3D and 3D to 2D information exchange can be described as following [16]: L2D\u21923D = DKL(3DPH1 || 2DPH2 ), (1) L3D\u21922D = DKL(2DPH1 || 3DPH2 ), (2) where L2D\u21923D and L3D\u21922D are cross-modal losses. At the end of the pre-training stage, the final state, i.e. the last checkpoints, of the 2D and 3D neural networks are kept for the generation of pseudo labels in the target domain. VFM assisted refinement of pseudo labels. Pseudo labels generated by the pre-trained 2D neural networks are considered to be noisy and lack precision as shown in Fig. 1a. Applying these inaccurate labels as supervision signals introduces intrinsic segmentation errors in our neural networks. In contrast, SEEM could produce consistent and relatively precise semantic masks with accurate labels. Hence, we propose to leverage the robust visual prior learned by SEEM to further refine the produced pseudo labels. The overall procedure is shown in Fig. 3. Firstly, we input target image to SEEM and it produces pixel-wise segmentation prediction. By applying softmax function to its output logits, we obtain the class-wise probability distribution for each pixel. Then we exploit this robust visual prior to refining generated pseudo labels by averaging the predicted probabilities from SEEM and pre-trained neural 2D network: PLR = Max(Softmax(P2Dpretrain) + Softmax(PSEEM)) (3) where PLR represents the pixel-wise refined pseudo label, Softmax stands for the Softmax function. P2Dpretrain is the predicted probability of pre-trained 2D neural network, while PSEEM denotes the predicted probability of SEEM. \f2D Branch 3D Branch SAM Fine-grained Masks Source Target Source Target A.Source to Target Mix B.Target to Source MIx VFM Guided Mixing Mixed Point Clouds Mixed Images Cross-modal Cross-domain A A B B Images Point Clouds Figure 4. FrustumMixing: VFM guided semantically mixing. 
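A sketch of the refinement rule in Eq. (3); the mapping from SEEM's label vocabulary onto the dataset's C classes is abstracted away, the (N, C) pixel-wise layout is an assumption, and the optional confidence threshold is an addition not described in the paper:

import torch
import torch.nn.functional as F

def refine_pseudo_labels(logits_2d, logits_seem, ignore_index=-100, conf_thresh=0.0):
    # logits_2d: (N, C) pixel-wise logits from the pre-trained 2D network
    # logits_seem: (N, C) SEEM logits already mapped onto the same C dataset classes
    probs = F.softmax(logits_2d, dim=1) + F.softmax(logits_seem, dim=1)   # Eq. (3), before the argmax
    conf, labels = probs.max(dim=1)
    labels[conf < conf_thresh] = ignore_index        # optional: discard low-confidence pixels
    return labels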
To further enhance the capability of neural networks to bridge the gap across domains, we propose to utilize SAM [21] to generate fine-grained 2D masks by feeding images from both domains. The image mixing is realized by using masks that are generated according to one image to cut out corresponding areas, then fill in these masked areas with respective pixels selected from the other image. Although the outputs of pre-trained 2D neural network are imprecise, the domain-specific features are also captured during pre-training process. We argue that the learned yet noisy feature could help SEEM adapt its visual knowledge to the specific target domain which we are addressing. The empirical evaluation validates our assumption and we will elaborate on that later in Sec 4.4. 3.3. FrustumMixing: VFM guided Data Mixing To further facilitate the information exchange between different domains, we propose a new mixing strategy, FrustrumMixing. SAM [21] has demonstrated its preeminent capability to generate precise yet fine-grained masks for various input images. These remarkable segmentation results have not only inspired us to develop this data-mixing approach but also provided us with the basic ingredients for mixing image and point cloud samples in a fine-grained manner. The overall FrustrumMixing pipeline is depicted in Fig. 4. There are two mixing branches in our strategy, namely, the source-to-target mix and the target-to-source mix. The operation of both mixing branches is identical and the only difference is in the first step that which the image domain is selected to generate masks. Fig. 1b demonstrates the target to source FrustrumMixing and we will explain this process as follows. Basically, there are four steps to perform the mixing. (1) Input target image to SAM and save the fine-grained masks. (2) Randomly sample a proportion of generated masks and merge them into one layer of mask. (3) Apply the fused mask to target image and paste the masked pixels onto source image and cover the original area. Now we have obtained the mixed target to source image sample. (4) The final step is to pick point clouds to construct mixed 3D data. By applying the 3D to 2D projection matrix, we could produce a point image that contains all necessary points within the sight of the 2D camera. The aligned 2D image and calculated point image now pave the way for applying the merged SAM mask to select point clouds. We utilize the same mask to choose points from the target point image and delete the points inside the corresponding area in the source point image. The mixed target to source point cloud sample is generated by filling up the emptied area in the source point image with picked points from the target domain. FrustrumMixing provides neural networks with semantically mixed samples from both domains and is beneficial for UDA 3D segmentation performance. We will analyze the effectiveness of this method in Sec. 4.4. 4. Experiments 4.1. Datasets To construct our domain adaptation scenarios, we utilized publicly available datasets including nuScenes-Lidarseg [2], VirtualKITTI [7], SemanticKITTI [1], and A2D2 [9]. The details regarding the dataset splits can be found in the Appendix. Our selected scenarios encompass various typical challenges in domain adaptation. These challenges include changes in scene layout, such as the transition between right-hand-side and left-hand-side driving in the nuScenes-Lidarseg: USA/Singapore scenario (nuSc.L.Seg:USA/Sing.). 
Additionally, we address lighting variations, such as the shift from day to night in the nuScenes-Lidarseg: Day/Night scenario (nuSc.L.Seg:Day/Night). Furthermore, we tackle the synthetic-to-real data shift by incorporating data from VirtualKITTI /SemanticKITTI (V.KITTI/S.KITTI), where we bridge the gap between simulated depth and RGB data to real LiDAR and camera data. Lastly, we explore different sensor setups and characteristics, such as resolution and FoV, through the A2D2/SemanticKITTI scenario (A2D2/Sem.KITTI). Our code (https://github.com/EtronTech/VFMSeg) facilitates the replication of all training data and splits, and further details can be found in the Appendix. 4.2. Implementation Details Data Pre-processing. Considering the computation resources required for VFMs and the repetitive nature of sampling data for training neural networks, we generate all \fA2D2/Sem.KITTI V.KITTI/S.KITTI nuSc.L.Seg:USA/Sing. nuSc.L.Seg:Day/Night Method 2D 3D Avg 2D 3D Avg 2D 3D Avg 2D 3D Avg Baseline (Source Only) 34.2 35.9 40.4 26.8 42.0 42.2 58.4 62.8 68.2 47.8 68.8 63.3 xMUDA 38.6 45.8 45.2 38.1 43.8 44.7 64.1 62.4 68.7 55.5 69.2 67.4 xMUDAP L 41.2 49.8 47.5 38.7 46.1 45.0 65.6 63.8 68.4 57.6 69.6 64.4 AUDA 43.0 43.6 46.8 35.8 37.8 41.3 64.0 64.0 69.2 55.6 69.8 64.8 AUDAP L 46.8 48.1 50.6 35.9 45.5 45.9 65.9 65.3 70.6 54.3 69.6 61.1 DsCML+CMAL 46.3 50.7 51.0 38.4 38.4 45.5 65.6 56.2 66.1 50.9 49.3 53.2 DsCML+CMAL P L 46.8 51.8 52.4 39.6 41.8 42.2 65.6 57.5 66.9 51.4 49.8 53.8 Ours 45.0 52.3 50.0 57.2 52.0 61.0 70.0 65.6 72.3 60.6 70.5 66.5 Oracle 59.3 71.9 73.6 66.3 78.4 80.1 75.4 76.0 79.6 61.5 69.8 69.2 Table 1. Comparison of Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation. We report the mIoU results (with best and 2nd best) on the target set for each network as well as the ensembling result by averaging the predicted probabilities from 2D and 3D network. Following experimental settings in [16], we compare methods (xMUDA [16], AUDA [24], DsCML [28]) that utilize 2D image and 3D points from both source and target domains. The \u2018Baseline\u2019 model [16] is trained on source domain S only, which provides us the lower bound for UDA segmentation performance. The \u2018Oracle\u2019 [16] performs the assumed upper bound. It is not only trained on both domains, but also given the correct supervised label of target domain T . Due to the lack of results in some settings from original papers of AUDA and DsCML, we utilize their published codes to produce corresponding results. For AUDA, only the results of A2D2/Sem.KITTI are available from the original paper. As to DsCML, we only utilize its code in V.KITTI/S.KITTI setting since the original paper reports the results of the other three settings. In most of the test scenarios, Our proposed method boosts the performance on segmentation task and achieves superior results when compared to other effective methods. Detailed analysis is provided in Sec. 4.3. nuSc.L.Seg:Day/Night nuSc.L.Seg:USA/Sing. Vehicle A2D2/Sem.KITTI V.KITTI/S.KITTI Mammade Car Vegetation Drivable Surface Ignored Sidewalk Terrain Vegetation/Terrain Building Object Car Ignored Truck Road Truck Bike Person Road Sidewalk Nature Parking Building Object Ignored Figure 5. Qualitative results. We show the ensembling results of four scenarios by averaging the softmax outputs of 2D and 3D networks. Our method can improve the performance of 3D semantic segmentation. Noted that, by the merits of VFMs, our method can segment detailed objects very well. 
From top to bottom, the focused areas are the trunk of a tree, manmade objects under restricted lighting condition, the silhouette of a vehicle, and most importantly, a kid playing close to the road. masks for FrustrumMixing beforehand. Compared to generating semantic masks and fine-grained label-free masks on-the-fly, the training time in our hardware environment shrinks from weeks to days. For SEEM masks, we it\ferate all training samples and save both class labels and masked areas in a pickle file (.pkl). The process for SAM masks needs additional steps. The SAM mask data has the shape of an image and with only one channel to store \u2018True\u2019 and \u2018False\u2019, which indicates whether the respective pixel is masked. Since we perform a random sampling of SAM mask, as illustrated in Sec. 3.3, all fine-grained masks must be preserved for the training stage. However, the storage space required for all training images is enormous (estimated to be in a few Tera Bytes). Hence, we first give each mask a unique number and then merge all mask data into one matrix. Such a matrix is identical in size to the individual mask but stores the number instead. This pre-processing method simultaneously reduces the storage space and training time cost. Network Architecture. To ensure a fair comparison with the only existing multi-modal 3D domain adaptation method, we employ the following approaches: For the 2D network, we utilize ResNet34 [11], which has been pretrained on the ImageNet dataset, as the encoder for the U-Net [32]. For the 3D network, we employ SparseConvNet [10] with a U-Net architecture, implementing six rounds of down-sampling. Additionally, we adopt a voxel size of 5cm in the 3D network. This voxel size ensures that each voxel contains only one 3D point, maintaining a level of granularity suitable for the task. Training Details. Our method and the other baselines were trained and evaluated using the PyTorch toolbox on the Python 3.7 platform. The implementation of all proposed models was conducted on four NVIDIA RTX 3090Ti GPUs, each with 24GB of RAM. During the training phase, we adopted a batch size of 8 and employed the Adam optimizer [20] with \u03b21 = 0.9 and \u03b22 = 0.999. The initial learning rate was set to 1e-3, and we utilized the poly learning rate policy [4] with a power of 0.9. The maximum number of training iterations was set to 30k for V.KITTI/S.KITTI, the other three scenarios are ste to 100k. Evaluation. Consistent with previous domain adaptation studies [16, 28], we assess the performance of our model on the test set using the widely used PASCAL VOC intersection-over-union (IoU) metric. The mean IoU (mIoU) is calculated as the average of the IoU values across all categories. 4.3. Experimental Results and Comparison To validate the effectiveness of our proposed VFMSeg, we carried out four domain shift scenarios as introduced by [16]. Table 1 presents the experimental results and performance comparison of our method with previous unsupervised domain adaptation methods for 3D segmentation, following the setup introduced in Sec. 4.2. Each experiment includes two common reference methods: a baseline model called Source only, trained solely on the source doScenarios Method #1 #2 #3 #4 Baseline (xMUDA) 38.6 38.1 64.1 55.5 xMUDAP L 41.2 38.7 65.6 57.6 \u2206 \u21912.6 \u21910.6 \u21911.5 \u21912.1 SEEM Only 35.7 51.3 50.5 33.7 \u2206 \u21932.9 \u219113.2 \u219313.6 \u219321.8 SEEM+2D Avg. 
43.0 55.3 67.7 57.9 \u2206 \u21914.4 \u219117.2 \u21913.6 \u21912.4 VFM-PL 43.6 55.7 68.8 57.1 \u2206 \u21915.0 \u219117.6 \u21914.7 \u21911.6 Table 2. Ablation study on the effect of pseudo-labels generated via VFM guidance. We report the mIoU segmentation performance of 2D networks to validate the effectiveness of proposed VFM-PL. Column #1 to Column #4 represents the A2D2/Sem.KITTI, V.KITTI/S.KITTI, nuSc.L.Seg: USA/Singapore and nuSc.L.Seg: Day/Night scenarios respectively. main, and an upper-bound model named Oracle, trained exclusively on the target data with annotations. And we compare our VFMSeg with other multi-modal methods based on xMUDA, such as AUDA [24] and DsCML [28]. Among these methods, xMUDA achieves better performance on V.KITTI \u2192S.KITTI and Day \u2192Night, while AUDA obtains comparable results on USA \u2192Singpore. By the merits of our VFM-PL and FrustumMixing, our VFMSeg outperforms these methods by +32.9% (V.KITTI \u2192 S.KITTI), +2.4% (USA \u2192Singpore). For Day \u2192Night scenario, VFMSeg achieves the best 3D segmentation performance and is even 0.7% higher than the assumed upper bound, \u2018Oracle\u2019 model, which is fully supervised on target domain T . As to A2D2 \u2192S.KITTI scenario, that SEEM provides no class label near the semantic meaning of \u2018Trunk\u2019 under this setting. Hence, we fully ignored this supervised signal for training and the noise introduced via this processing method could lead to the inferior results in 2D segmentation. Still, VFMSeg achieves the second best performance among all segmentation results in this setting and is only 0.1% behind the best results ( 52.4% from DsCML + CMALPL ). Overall, the empirical experiments have validated the the effectiveness of our proposed VFMSeg method. 4.4. Ablation Study To demonstrate the effectiveness of each module in our method, we conduct ablation studies on four unsupervised domain adaptation scenarios. Furthermore, we evaluate the impact of various visual foundation models on the perfor\fNo. w. Mix 2D 3D Avg #1 \u2718 43.6 50.3 47.6 \u2714 45.0 (+1.4) 52.3 (+2.0) 50.0 (+2.4) #2 \u2718 55.7 49.9 59.8 \u2714 57.2 (+1.5) 51.9 (+2.0) 61.0 (+1.2) #3 \u2718 68.2 64.0 71.1 \u2714 70.0 (+1.8) 65.6 (+1.6) 72.3 (+1.2) #4 \u2718 57.1 69.8 68.3 \u2714 60.6 (+3.5) 70.5 (+0.7) 66.5 (-1.8) Table 3. Ablation study on the effect of mixing strategy under VFM guidance. Row #1 to Row #4 represents the A2D2/Sem.KITTI, V.KITTI/S.KITTI, nuSc.L.Seg: USA/Singapore and nuSc.L.Seg: Day/Night scenarios respectively. \u2018w.Mix\u2019 indicates whether the mixed data is involved in the training process. PLs by pre-trained models + GT Points PLs by VFM-PL + GT Points Noisy Edges Noisy PLs Smooth Edges Accurate PLs Figure 6. Projection errors caused by projecting points onto images. PLs from the pre-trained model tend to be noisy but can learn the noisy edges from projection errors. Our VFM-PL is able to generate accurate PLs, where the smooth edges will cause gaps compared with ground truth. mance of our method. Effects of VFM-guided Accurate Pseudo-Label Generation. To validate the effectiveness of our proposed VFMPL, further ablation studies are conducted. Table 2 demonstrates that fine-tuning the xMUDA model using pseudo labels generated by pre-trained xMUDA models results in a marginal improvement in performance across all four UDA scenarios. Surprisingly, our findings indicate that the pseudo labels generated by SEEM only outperform xMUDAP L in the VirtualKITTI/SemanticKITTI setting. 
This observation can be primarily attributed to the presence of projection errors in the point clouds when projected onto images. From Fig. 6, it is evident that the edges of objects in the ground truth exhibit noise, whereas the pseudo-labels generated by our VFM-PL demonstrate remarkably smooth edges. Therefore, we employ the pseudo-labels generated by pre-trained models to assist VFM-PL in bridging the gap between the pseudo-labels obtained from SEEM and the ground truth. To achieve this, we perform an ensemble of the pseudo-labels obtained from pre-trained models and those from SEEM by averaging the softmax logits. This approach enables the supervision of the noisy edges in the ground truth through the pseudo-labels from pre-trained models, while the main parts of objects can be learned from the pseudo-labels generated by SEEM. Effects of VFM-guided Semantic Data Augmentation To gain a deeper understanding of the effectiveness of our FrustumMixing method, we conducted additional ablation studies. Table 3 demonstrates that models trained with our VFM-guided semantic data augmentation exhibit significant improvements in both 2D and 3D performance across all four UDA scenarios, thereby leading to better improvements in average performance. The results obtained from our experiments clearly indicate that our FrustumMixing approach, guided by masks generated by VFM, operates similarly to the concept of view frustum. This results in more effective semantic data augmentation, as opposed to a random mix-up of source and target samples. The incorporation of semantic data augmentation contributes to improving the learning process of the networks, ultimately leading to enhanced overall performance. 5. Discussion and Future Work The robust and consistent visual priors of VFMs inspired us to leverage their capability to facilitate our 3D segmentation task. To the best of our knowledge, we are the first to incorporate two VFMs into UDA for 3D framework. The key takeaway here is plain and simple. Feeding neural networks with semantically mixed samples across various domains is foreseeingly beneficial. The fine-grained, yet rich in visual semantic meaning, masks generated by SAM fit right on the spot for generating sufficiently mixed samples. Besides, the lack of object-level text labels in SAM masks could be compensated by adopting segmentation VFMs that are trained with abundant image-text pairs, in our case, we choose to utilize SEEM for refining pseudo labels for the target domain. Projection errors are a common issue encountered in different cross-modal autonomous driving datasets, creating a challenge for cross-modal UDA in 3D semantic segmentation. Although our VPM-PL approach, as discussed in Sec. 4.4, helps alleviate this problem, it does not completely solve it. As a result, our future work will concentrate on addressing projection errors that arise when projecting point clouds onto images. Once this issue is effectively resolved, it has the potential to further enhance the performance of 3D semantic segmentation. \f6." + }, + { + "url": "http://arxiv.org/abs/2307.07677v1", + "title": "Learning from Pseudo-labeled Segmentation for Multi-Class Object Counting", + "abstract": "Class-agnostic counting (CAC) has numerous potential applications across\nvarious domains. The goal is to count objects of an arbitrary category during\ntesting, based on only a few annotated exemplars. 
In this paper, we point out\nthat the task of counting objects of interest when there are multiple object\nclasses in the image (namely, multi-class object counting) is particularly\nchallenging for current object counting models. They often greedily count every\nobject regardless of the exemplars. To address this issue, we propose\nlocalizing the area containing the objects of interest via an exemplar-based\nsegmentation model before counting them. The key challenge here is the lack of\nsegmentation supervision to train this model. To this end, we propose a method\nto obtain pseudo segmentation masks using only box exemplars and dot\nannotations. We show that the segmentation model trained on these\npseudo-labeled masks can effectively localize objects of interest for an\narbitrary multi-class image based on the exemplars. To evaluate the performance\nof different methods on multi-class counting, we introduce two new benchmarks,\na synthetic multi-class dataset and a new test set of real images in which\nobjects from multiple classes are present. Our proposed method shows a\nsignificant advantage over the previous CAC methods on these two benchmarks.", + "authors": "Jingyi Xu, Hieu Le, Dimitris Samaras", + "published": "2023-07-15", + "updated": "2023-07-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Class-agnostic counting (CAC) aims to infer the number of objects in an image, given a few object exemplars. Compared to conventional object counters that count objects from a specific category, e.g., human crowds [31], cars [27], animals [3], or cells [38], CAC can count objects of an arbitrary category of interest, which enables numerous applications across various domains. Most of the current CAC methods focus on capturing the intra-class similarity between image features [23, 32, 30, 14]. For example, BMNet [32] adopts a self-similarity module to enhance the feature\u2019s robustness against intra-class variations. Another recent approach, SAFECount [41], uses BMNet+ SAFECount Figure 1. Visualizations of the density maps predicted by two recently proposed class-agnostic counting methods, i.e., BMNet+[32] and SAFECount [41]. They fail to count the objects of interest when multiple objects of different classes appear in the same image. a similarity-aware feature enhancement framework to better capture the support-query relationship. These methods perform quite well on the current benchmark, i.e. FSC-147, in which images only contain objects from a single dominant class. However, we observe that when objects of multiple classes appear in the same image, these methods tend to greedily count every single object regardless of the exemplars (see Figure 1). This issue greatly limits the potential applicability of these methods in real-world scenarios. A possible reason is that the current counting datasets only contain single-class training images, causing the counting models to overlook the inter-class discriminability due to the absence of multi-class training data. A natural solution to resolve this issue is to train the counting model with images containing objects of multiple classes. However, building such labeled multi-class datasets for counting is not an easy task: it is non-trivial to collect images for counting with a large diversity of categories, and annotating them is costly since point annotation is required for instances from different classes. An alternative way is to synthesize multi-class training data from singleclass images. 
By simply concatenating two or more images of different classes, we can easily create a large amount of multi-class data without additional annotation costs. However, our experiments show that while the model trained on arXiv:2307.07677v1 [cs.CV] 15 Jul 2023 \fthese images indeed performs better on multi-class test images, the performance on single-class counting drops significantly (5.1). This could be because, in order for the model to selectively count the objects of interest, it needs to recognize certain discriminative features that can distinguish between different classes. This will inevitably sacrifice some of its robustness against the variations within the same class. In other words, there is a trade-off between invariance and discriminative power for the counting model [34]. Due to this trade-off, instead of training an end-to-end model for multi-class counting, our strategy is to localize the area containing the objects of interest first and then count the objects inside. Given a multi-class image for counting and a few exemplars, our goal is to obtain a segmentation mask highlighting the regions of interest. Such an exemplar-based segmentation model can be easily trained if mask annotations are available. However, this is not the case for the current counting datasets [30, 17], and collecting such annotations is time-consuming and laborintensive. To this end, we propose a method to obtain pseudo segmentation masks using only box exemplars and dot annotations. We show that a segmentation model trained with only these pseudo-labeled masks can effectively localize objects of interest for multi-class counting. We aim to obtain a mask covering all the objects that belong to the same class as the exemplars while not including any irrelevant object or background. We show that an unsupervised clustering method, K-Means, can be used for this purpose. In particular, given a synthetic multi-class image, a few annotated exemplars, and a pre-trained singleclass counting model, we first represent each mask pixel with an image patch based on the receptive field of the network. Then we extract the feature embeddings for all the image patches as well as the provided exemplars and run KMeans clustering on them. We consider the patches whose embeddings fall into the same cluster as the exemplar to contain the objects of interest and assign positive labels to the corresponding mask pixels. We assign negative labels otherwise. Note that the output of K-Means is sensitive to the choice of K, which is hard to determine for each image. In our case, we choose the K that results in a pseudo mask that best benefits the pre-trained counting model, i.e., the counting model can produce the density map closest to the ground truth map after the pseudo mask is applied. The obtained pseudo masks can then be used as the supervision signal to train an exemplar-based segmentation model. To evaluate the performance of different methods on multi-class counting, we introduce two new benchmarks, a synthetic multi-class dataset originating from FSC-147, and a new test set of real images in which objects from multiple classes are present. Our proposed method outperforms current counting methods by a large margin on these two benchmarks. In short, our main contributions are: \u2022 We identify a critical issue of the previous classagnostic counting methods, i.e., greedily counting every object when objects of multiple classes appear in the same image, and propose a simple segment-andcount strategy to resolve it. 
\u2022 We propose a method to obtain pseudo-labeled segmentation masks using only annotated exemplars and use them to train a segmentation model. \u2022 We introduce two benchmarks for multi-class counting, on which our proposed method outperforms the previous counting methods by a large margin. 2. Related Work 2.1. Class-specific Object Counting Class-specific object counting aims to count objects from pre-defined categories, such as humans [22, 46, 43, 37, 33, 19, 1, 44, 31, 45, 39, 24, 35], animals [3], cells [38] and cars [27, 17]. Generally, there are two groups of class-specific counting methods: detection-based methods [6, 17, 21] and regression-based methods [44, 9, 10, 36, 46, 5, 25]. Detection-based methods apply an object detector on the image and count the number of objects based on the detected boxes. However, detection-based methods often struggle with detecting tiny objects. Regression-based methods predict a density map for each input image, and the final result is obtained by summing up the pixel values. Both types of methods require a large amount of training data with rich training annotations. Moreover, they can not be used to count objects of arbitrary categories at test time. 2.2. Class-agnostic Object Counting Class-agnostic object counting aims to count arbitrary categories given only a few exemplars [26, 30, 40, 32, 14, 28, 23, 42, 2]. Previous methods mostly focus on how to better capture the similarity between exemplars and image features. For example, SAFECount [41] uses a similarityaware feature enhancement framework to better model the support-query relationship. RCAC [14] is proposed to enhance the counter\u2019s robustness against intra-class diversity. Nguyen et al. [28] recently introduce new benchmarks for object counting, which contains images of objects from multiple classes, originating from the FSC-147 and LVIS [15] datasets. However, these benchmarks are designed for the task of jointly detecting and counting object instances in complex scenes, where the central focus is on how to detect them accurately. 2.3. Unsupervised Semantic Segmentation A closely related task to ours is unsupervised semantic segmentation [20, 8, 29, 7, 16, 18, 12, 13], which aims \fSimilarity Map Masked Density Map Feature Extractor (a) Computing the optimal pseudo mask Counter Pseudo-mask K = 2 K = 3 K = 4 GT Density Map L2 Distance 0.1 0.4 0.7 C F Optimal Density Map Supervision Segmentation Model Seg Optimal Pseudo-mask Predicted Mask (b) Training a segmentation model Figure 2. Overview of our approach. We propose a method to obtain the pseudo segmentation masks using only box exemplars and dot annotations (a), and then use the obtained pseudo masks to train an exemplar-based segmentation model (b). Specifically, given a multi-class image and a few annotated exemplars, we crop a set of image patches, each of which corresponds to a mask pixel (we only visualize 6 patches here for simplicity). We run K-Means clustering on the feature embeddings extracted from all cropped patches and the exemplars. Those pixels whose embeddings fall into the same cluster as the exemplar form an object mask indicating the image area containing the objects of interest. We find the optimal number of clusters, K, such that the counting model can produce the density map closest to the ground truth after the pseudo mask is applied. We use the obtained pseudo masks to train an exemplar-based segmentation model, which can then be used to infer the object mask given an arbitrary test image. 
to discover classes of objects within images without external supervision. IIC [20] attempts to learn semantically meaningful features through transformation equivariance. PiCIE [8] further improves on IIC\u2019s segmentation results by incorporating geometric consistency as an inductive bias. Although these methods can semantically segment images without supervision, they typically require a large-scale dataset [4, 11] to learn an embedding space that is cluster-friendly. Moreover, the label space of semantic segmentation is limited to a set of pre-defined categories. In comparison, our goal is to localize the region of interest specified by a few exemplars, which can belong to an arbitrary class. 3. Method In order to perform multi-class object counting, our strategy is to compute a mask that can be applied to the similarity maps of a pre-trained counting model to localize the area containing the objects of interest and count the objects inside. Figure 2 summarizes our approach. We propose a method to obtain pseudo segmentation masks using only box exemplars and dot annotations, and then use these pseudo masks to train an exemplar-based segmentation model. Specifically, given a multi-class image and a few annotated exemplars, we tile the input image into different patches, each of which corresponds to a pixel on the mask. We run K-Means clustering on the feature embeddings extracted from all cropped patches and the exemplars. Those mask pixels whose corresponding patch embeddings fall into the same cluster as the exemplar will form an object mask indicating the image area containing the objects of interest. We find the optimal number of clusters, K, such that a pre-trained single-class counting model can produce the density map closest to the ground truth after the pseudo mask is applied. We use the obtained pseudo masks to train an exemplar-based segmentation model, which can then be used to infer the object mask given an arbitrary test image. For the rest of the paper, we denote the pre-trained singleclass counting model as the \u201cbase counting model\u201d. Below we will first describe how we train this base counting model and then present the detail of our proposed multiclass counting method. 3.1. Training The Base Counting Model We first train a base counting model using images from the single-class counting dataset [30]. Similar to previous works [30, 32], the base counting model uses the input image and the exemplars to obtain a density map for object counting. The model consists of a feature extractor F and a counter C. Given a query image I and an exemplar B of an arbitrary class c, we input I and B to the feature extractor to obtain the corresponding output, denoted as F(I) and F(B) respectively. F(I) is a feature map of size d\u2217hI \u2217wI and F(B) is a feature map of size d \u2217hB \u2217wB. We further perform global average pooling on F(B) to form a feature vector b of d dimensions. After this feature extraction step, we obtain the similarity map S by correlating the exemplar feature vector b with the image feature map F(I). Specifically, let w(i,j) = F(i,j)(I) be the channel feature at spatial position (i, j), S can be \fcomputed by: \\label { eq : si mi} S_{(i,j)}(I, B) = w_{(i,j)}^T b. (1) In the case where n exemplars are given, we use Eq. 1 to calculate n similarity maps, and the final similarity map is the average of these n similarity maps. 
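A minimal PyTorch sketch of this correlation step is given below; the feature dimensions and the use of einsum are assumptions of this sketch, not the exact implementation.

```python
import torch

def similarity_map(img_feat, exemplar_feats):
    """Eq. (1): correlate each pooled exemplar vector with the image feature map
    and average the resulting maps over the n exemplars.
    img_feat:       (d, h, w) query image feature map F(I).
    exemplar_feats: (n, d) globally average-pooled exemplar vectors b."""
    # Dot product between every spatial channel feature w_(i,j) and each b.
    sims = torch.einsum("dhw,nd->nhw", img_feat, exemplar_feats)
    return sims.mean(dim=0)   # (h, w) averaged similarity map S

# Toy usage with hypothetical sizes: d=256 channels, a 48x64 feature map, 3 exemplars.
S = similarity_map(torch.randn(256, 48, 64), torch.randn(3, 256))
```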
We then concatenate the image feature map F(I) with the similarity map S, and input them into the counter C to predict a density map D. The final predicted count N is obtained by summing over the predicted density map D: \\ lab el {eq:final_count} {N} = \\sum _{i,j}D_{(i,j)}, \\vspace {-2mm} (2) where D(i,j) denotes the density value for pixel (i, j). The supervision signal for training the counting model is the L2 loss between the predicted density map and the ground truth density map: \\labe l {eq: co u nting _los s} L_{\\textnormal {count}} = \\|D(I, B) D^{*}(I,B)\\|_2^2, (3) where D\u2217denotes the ground truth density map. 3.2. Multi-class Object Counting 3.2.1 Pseudo-Labeling Segmentation Masks In this section, we describe our method to obtain pseudomasks using only box exemplars and dot annotations. The mask is of the same size as the similarity map from the base counting model and each pixel on the mask is associated with a region in the original image. Ideally, the pixel value on the mask is 1 if the corresponding region contains the object of interest and 0 elsewhere. Specifically, for the pixel from the mask M at location (i, j), we find its corresponding patch p(i, j) in the input image centering around (iI, jI), where iI = i \u2217r + 0.5 \u2217r and jI = j \u2217r + 0.5 \u2217r. Here, r is the downsampling ratio between the original image and the similarity map. The width and height of p(i, j) are set to be the mean of the width and height of the exemplar boxes. We denote P = {p1, p2, ...pn} as a set of image patches, each of which corresponds to one pixel in the mask. The goal is to assign a binary label to each patch indicating if it contains the object of interest or not. To achieve this, we first extract the ImageNet features for all patches in P to get a set of embeddings F = {f1, f2, ...fn}. Then we compute the average of the embeddings extracted from the examplar boxes in this image, denoted as fB. We run K-means on the union of {f1, f2, ...fn} and {fB}. Those patches whose embeddings fall into the same cluster as fB will be considered to contain the object of interest, and result in a 1 value in the corresponding pixel of the mask. On the contrary, the pixel value will be 0 if the corresponding patch embedding falls into a different cluster as fB. Here K-Means groups similar objects together, which can serve our purpose of segmenting objects belonging to different classes. It is worth noting that the number of clusters, denoted as K, has a large effect on the output binary mask and the final counting results. If K is too small, too many patch embeddings will fall into the same cluster as the exemplar embedding and the counter will over-count the objects; if K is set too high, too few embeddings will fall into the same cluster, which results in too many regions being masked out. In our case, we find the optimal K for each image in the multi-class training set that results in the binary mask that minimizes the counting error. Specifically, given a multiclass image \u00af I and an exemplar \u00af B, let S(\u00af I, \u00af B) denote the similarity map outputted by the pre-trained counting model, and M(\u00af I, \u00af B)k denote the mask obtained when the number of clusters is set to k. 
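Before describing how M(¯I, ¯B)^k is applied, the following sketch shows how such a cluster-derived mask could be formed for one candidate k, following the patch-embedding procedure above. The embedding layout and the use of scikit-learn's K-Means are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_mask_for_k(patch_embs, exemplar_emb, k, seed=0):
    """Cluster the patch embeddings together with the averaged exemplar
    embedding; mask pixels whose patches land in the exemplar's cluster get 1.
    patch_embs:   (h*w, d) embeddings, one per mask pixel / image patch.
    exemplar_emb: (d,) averaged embedding f_B of the annotated exemplar boxes.
    Returns a flat binary mask of length h*w (reshape to (h, w) outside)."""
    feats = np.vstack([patch_embs, exemplar_emb[None]])
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(feats)
    exemplar_cluster = labels[-1]                  # cluster containing f_B
    return (labels[:-1] == exemplar_cluster).astype(np.float32)

# Hypothetical usage: a 48x64 mask grid, 512-d patch embeddings, k from 2 to 6.
patch_embs, exemplar_emb = np.random.randn(48 * 64, 512), np.random.randn(512)
masks = {k: pseudo_mask_for_k(patch_embs, exemplar_emb, k).reshape(48, 64)
         for k in range(2, 7)}
```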
By applying M(¯I, ¯B)^k on S(¯I, ¯B), the similarity scores on the non-target area are set to a small constant value $\epsilon$ while the similarity scores on the target area remain the same: $S(\bar{I}, \bar{B})^{k}_{(i,j)} = \begin{cases} S(\bar{I}, \bar{B})_{(i,j)}, & \text{if } M(\bar{I}, \bar{B})^{k}_{(i,j)} = 1, \\ \epsilon, & \text{otherwise}, \end{cases}$ (4) We then input S(¯I, ¯B)^k to the pre-trained counter C to obtain the corresponding density map D(¯I, ¯B)^k. We find the optimal k such that the L2 loss between the predicted density map and the ground-truth density map is the smallest: $k^{*} = \operatorname{argmin}_{k} \|D(\bar{I}, \bar{B})^{k} - D^{*}(\bar{I})\|_2^2,$ (5) where $k^{*}$ denotes the optimal k and $D^{*}(\bar{I})$ denotes the ground-truth density map for the input image ¯I. 3.2.2 Training Exemplar-based Segmentation Model After obtaining the optimal masks for all the images in the multi-class training set, we train a segmentation model P to predict the pseudo segmentation masks based on the input image and the corresponding exemplar. In particular, given a multi-class image ¯I and an exemplar ¯B, we first input ¯I and ¯B to the segmentation model to obtain the corresponding feature map outputs P(¯I) and P(¯B). We then apply global average pooling on P(¯B) to form a feature vector v. In the case where multiple exemplars are provided, we apply global average pooling to each P(¯B), and the final vector v is the average of these pooled vectors. The predicted mask M^p is obtained by computing the cosine similarity between v and the channel feature at each spatial location of P(¯I). Specifically, the value of the predicted mask at position (i, j) is: $M^{p}_{(i,j)}(\bar{I}, \bar{B}) = \cos\big(P_{(i,j)}(\bar{I}),\, v\big).$ (6)
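A possible implementation of this cosine-similarity mask prediction is sketched below; the feature map shapes and the pooling order are assumptions of this sketch. The 0.6 binarization threshold used at inference is the one reported later in Sec. 4.1.

```python
import torch
import torch.nn.functional as F

def predict_mask(seg_feat_img, seg_feat_exemplars):
    """Eq. (6): cosine similarity between the pooled exemplar vector v and the
    channel feature at every spatial location of P(I).
    seg_feat_img:       (d, h, w) feature map P(I) from the segmentation model.
    seg_feat_exemplars: (n, d, hb, wb) feature maps P(B) of the n exemplars."""
    v = seg_feat_exemplars.mean(dim=(2, 3)).mean(dim=0)   # GAP, then average over exemplars
    d, h, w = seg_feat_img.shape
    feat = seg_feat_img.reshape(d, h * w).t()             # (h*w, d)
    mask = F.cosine_similarity(feat, v[None], dim=1)      # (h*w,)
    return mask.reshape(h, w)

# Hypothetical usage: binarize the coarse mask with the 0.6 threshold.
coarse = predict_mask(torch.randn(256, 48, 64), torch.randn(3, 256, 4, 4))
binary = (coarse > 0.6).float()
```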
For each exemplar, the feature maps are first processed with global average pooling and then linearly mapped to a 256-d feature vector. The counter consists of 5 convolution and bilinear upsampling layers to regress a density map of the same size as the query image. The segmentation model shares the same architecture as the backbone of the feature extractor. The output mask is of the same size as the similarity map from the base counting model. Dataset We train the base counting model on the FSC147 dataset. FSC-147 is the first large-scale dataset for class-agnostic counting. It includes 6135 images from 147 categories varying from animals, kitchen utensils, to vehicles. The categories in training, validation, and test sets have no overlap. We create synthetic multi-class images from FSC-147 dataset to train the segmentation model. Specifically, we randomly select two images belonging to different classes, crop a part from each image and then concatenate the two cropped parts horizontally. To evaluate the performance of multi-class counting on real images, we further collect a test set of 450 multi-class images. For each image in this test set, there are at least two categories whose object instances appear multiple times. We provide dot annotations for 600 groups of object instances. The synthetic validation set and test set contain 1431 and 1359 images respectively. We test the trained model on both the synthetic multi-class images and our collected real multi-class images. Training details Both the base counting model and the segmentation model are trained using the AdamW optimizer with a fixed learning rate of 10\u22125 and a batch size of 8. The base counting model is trained for 300 epochs and the segmentation model is trained for 20 epochs. We resize the input query image to a fixed height of 384, and the width is adjusted accordingly to preserve the aspect ratio of the original image. Exemplars are resized to 128\u00d7128 before being fed into the feature extractor. We run Kmeans on the extracted patch embeddings to find the K that leads to the optimal mask for each image. The embeddings are extracted from a pre-trained ImageNet backbone. The threshold for binarizing the segmentation mask is 0.6 and the number of clusters K ranges from 2 to 6. 4.2. Evaluation Metrics For our collected multi-class test set, the counting error \u03f5 for image i is defined as \u03f5i = |yi \u2212\u02c6 yi|, where yi and \u02c6 yi are the ground truth and the predicted number of objects respectively. For our synthetic multi-class test set, the objects of interest are only present in the left / right-half part of the image. Ideally, the predicted number of objects should be close to the ground truth in the area of interest while being zero elsewhere. Thus, we define the counting error as \u03f5i = |yi \u2212\u02c6 yi| + \u00af \u02c6 yi, where \u02c6 yi and \u00af \u02c6 yi denote the predicted number of objects in the interest area and non-interest area respectively. We use Mean Average Error (MAE), Root Mean Squared Error (RMSE), Normalized Relative Error (NAE) and Squared Relative Error (SRE) to measure the performance of different object counters over all testing images. In particular, MAE = 1 n Pn i=1 \u03f5i; RMSE = q 1 n Pn i=1 \u03f52 i ; NAE = 1 n Pn i=1 \u03f5i yi ; SRE = q 1 n Pn i=1 \u03f52 i yi where n is the number of testing images. 4.3. 
Comparing Methods We compare our method with recent class-agnostic counting methods, including CounTR (Counting TRansformer [23]), FamNet (Few-shot adaptation and matching Network [30]), SAFECount (Similarity-Aware Feature Enhancement block for object Counting [41]) and BMNet (Bilinear Matching Network [32]). 4.4. Results Quantitative results. Table 1 compares our proposed method with previous class-agnostic counting methods on our synthetic multi-class validation and test sets. (We include the results on single-class datasets in the Supp. Mat due to space limitations). The performance of all these \fSAFECount FamNet Ours BMNet+ Input 38 33 49 53 24 27 12 14 73 69 36 23 69 95 39 18 15 5 27 68 Figure 3. Qualitative results on our collected multi-class counting test dataset. We visualize a few input images, the corresponding annotated exemplar (bounded in a dashed white box) and the predicted density maps. Predicted object counts are shown at the top-left corner. Our predicted density maps can highlight the objects of interest specified by the annotated box, which will lead to more accurate object counts. Method Val Set Test Set MAE RMSE NAE SRE MAE RMSE NAE SRE CounTR [41] 32.29 47.07 1.89 3.31 40.20 83.03 1.85 3.79 FamNet [30] 18.15 33.16 0.63 4.42 22.22 40.85 0.79 9.29 FamNet+ [30] 27.74 39.78 1.33 7.29 29.90 43.59 1.16 8.82 BMNet [32] 32.39 46.01 1.75 9.86 36.94 46.73 1.65 9.46 BMNet+ [32] 31.09 42.43 1.75 9.51 39.78 57.85 1.81 11.96 SAFECount [41] 22.58 34.68 1.21 2.18 26.44 40.68 1.14 2.89 Ours 14.34 26.03 0.61 4.48 11.13 16.96 0.41 2.80 Table 1. Quantitative comparisons on our synthetic multi-class dataset. Our proposed method outperforms the previous class-agnostic counting methods by a large margin, achieving the lowest mean average error on both validation and test set. Method MAE RMSE NAE SRE CountTR [41] 24.73 45.16 1.62 3.10 FamNet [30] 13.54 21.22 0.65 3.38 FamNet+ [30] 19.42 38.46 0.95 6.13 BMNet [32] 21.92 37.09 1.18 1.68 BMNet+ [32] 25.55 40.35 1.36 1.81 SAFECount [41] 23.57 40.99 1.25 1.69 Ours 6.97 13.03 0.37 0.54 Table 2. Quantitative comparisons on our collected multi-class dataset. Our proposed method has the lowest counting error compared with the previous class-agnostic counting methods. methods drops significantly when tested on our synthetic dataset. The state-of-the-art single-class counting method CounTR [23], for example, shows a 20.34 error increase w.r.t. validation MAE (from 13.13 to 32.29) and a 28.25 error increase w.r.t. test MAE (from 11.95 to 40.20). Interestingly, we find that FamNet, which has the largest counting error on the single-class test set among these methods, performs best on our synthetic multi-class dataset. Unlike other methods, FamNet keeps the backbone of the counting model fixed without any adaptation, which prevents the model from over-capturing the intra-class similarity and greedily counting everything. This further validates that there is a trade-off between single-class and multi-class counting performance. Our proposed method outperforms the other methods by a large margin, achieving 14.34 on validation MAE and 11.13 on test MAE. Table 2 shows the comparison with previous methods on our collected test set. Similarly, our proposed method significantly outperforms other methods by a large margin, as reflected by a reduction of 6.82 w.r.t. MAE over FamNet and 14.77 w.r.t. MAE over BMNet. Qualitative analysis. 
In Figure 3, we present a few input testing images, the corresponding annotated bounding box and the density maps produced by different counting methods. We can see that when there are objects of multiple classes present in the image, previous methods fail to distinguish them accurately, which often leads to overcounting. In comparison, the density map predicted by our method can highlight the objects of interest specified by the annotated box, even for the hard case where the objects are \fK = 2 K = 4 K = 6 w/o mask Ours 18 7 5 8 7 47 20 37 35 35 36 33 16 32 87 46 31 25 37 84 6 67 10 9 67 7 Input 26 28 31 7 Figure 4. Qualitative analysis on the number of clusters. We visualize a few input images, the corresponding annotated exemplar (bounded in a dashed white box) and the density maps when using masks computed from K-means as well as predicted by our segmentation model. Predicted counting results are shown at the top-left corner. The density maps under the optimal K are framed in green. The value of K has a large effect on the counting results and the optimal K varies from image to image. irregularly placed in the image (the 3rd row). K 2 3 4 5 6 Ours MAE 15.13 10.77 8.17 7.98 8.03 6.97 RMSE 28.09 20.71 15.38 14.93 15.31 13.03 NAE 0.94 0.63 0.44 0.42 0.40 0.37 SRE 1.68 1.18 0.69 0.62 0.54 0.54 Table 3. Quantitative analysis on the number of clusters. Our proposed method outperforms K-Means under different values of K on our collected multi-class test set. 5. Analyses 5.1. Comparison with Training with Synthetic Data Our strategy for multi-class counting is to compute a coarse mask to localize the image area of interest first and then count the objects inside with a single-class counting model. An alternative way is to train an end-to-end model for multi-class counting using images containing objects from multiple classes. In this section, we compare the performance of these two strategies. Specifically, we use our synthetic multi-class images to fine-tune three pre-trained single-class counting models: BMNet+ [32], FamNet+ [30] and SAFECount [41]. Results are summarized in Table 4. As shown in the table, after fine-tuning on multi-class images, although the counting error on the multi-class test set is reduced, the performance on the single-class test set drops significantly for all three counting methods. Our method, in comparison, achieves the best performance for multi-class counting without sacrificing the performance on the singleclass test set. 5.2. Analysis on the Number of Clusters When running K-means, the number of clusters, K, has a large effect on the computed binary mask and the final counting results. However, it is non-trivial to determine K given an arbitrary image. To resolve this issue, we first compute the optimal pseudo masks for the training images based on the dot annotations. Then we train an exemplar-based segmentation model to predict the obtained pseudo masks. During testing, we can use the trained model to predict the segmentation mask based on exemplars. In this section, we provide analyses on how K affects the final counting results and show a comparison with our proposed method. 5.2.1 Quantitative Results We report the counting performance when computing masks by running K-means under different values of K as well as using our predicted masks on the collected multiclass test set. Results are summarized in Table 3. As K goes from 2 to 6, both the MAE and RMSE decrease first and then increase, achieving the lowest when K = 5, i.e., 7.98 w.r.t. 
MAE and 14.93 w.r.t. RMSE. Using our predicted masks outperforms the performance under the best K by 12.6% w.r.t. MAE and 12.7% w.r.t. RMSE, which demonstrates the advantages of using our trained segmentation model to predict the mask. \fMethod Training Multi Set Single Set Set Test MAE Test RMSE Val MAE Val RMSE Test MAE Test RMSE BMNet+ Single 25.55 40.35 15.74 58.53 14.62 91.83 Single+Syn-multi 11.44 23.22 24.24 73.42 20.89 99.04 FamNet+ [30] Single 19.42 39.78 23.75 69.07 22.08 99.54 Single+Syn-multi 11.31 18.84 29.45 94.33 26.93 116.12 SAFECount [30] Single 23.57 40.99 14.42 51.72 13.56 91.30 Single+Syn-multi 9.80 32.40 27.65 58.01 27.24 100.55 Ours Single+Syn-multi 6.97 13.03 18.55 61.12 20.68 109.14 Table 4. Comparison with training other class-agnostic counting methods (BMNet+, FamNet+ and SAFECount) using our synthetic multi-class images. Although the counting error on the multi-class test set is reduced, the performance on the single-class test set drops significantly for all three baseline methods. 5.2.2 Qualitative Results In Figure 4, we visualize a few input images and the corresponding density maps when using masks computed from K-means as well as using masks predicted by our segmentation model. As can be seen from the figure, the choice of K has a large effect on the counting results. If K is too small, too many patch embeddings will fall into the same cluster as the exemplar embedding and the counter will over-count the objects (the 4th row when K = 2); if K is too large, too few embeddings will fall into the same cluster, which results in too many regions being masked out (the 2nd row when K = 6). The optimal K varies from image to image, and it is non-trivial to determine the optimal K for an arbitrary image. Using our trained segmentation model, on the other hand, does not require any prior knowledge about the test image while producing more accurate masks and density maps based on the provided exemplars. 5.3. Analysis on the Trade-off between Invariance and Discriminative Power We observe that there is a trade-off between single-class and multi-class counting performance. Our explanation is that when images contain objects from a single dominant class, the model will focus only on capturing the intraclass similarity while ignoring the inter-class discrepancy; when objects from multiple classes exist in the image, the model will focus more on the inter-class discrepancy in order to distinguish between them. To get a better understanding of this trade-off, we provide the detailed feature distribution statistics in Table 5. Specifically, we measure the intra-class distance and inter-class distance of the exemplar features extracted from our baseline counting model before and after fine-tuning using the synthetic multi-class dataset. Intra-class distance refers to the mean of Euclidean distance between a feature embedding and the corresponding class\u2019s embedding center. Inter-class distance refers to the mean of the minimum distance between embedding centers. As shown in the table, after fine-tuning the model using the synthetic multi-class dataset, both intra-class distance and inter-class distance increase. Larger inter-class distance means features from different classes are more separable, suggesting a better discriminative power of the model; larger intra-class distance means features within the same class are less compact, suggesting inferior robustness against within-class variations of the model. 
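The two statistics can be computed as in the short sketch below, which follows the definitions above (class centers as per-class mean embeddings, Euclidean distance); the array layout is an assumption of this sketch.

```python
import numpy as np

def intra_inter_class_distance(embs, labels):
    """embs: (N, d) exemplar features; labels: (N,) class ids.
    Intra: mean distance of each feature to its own class center.
    Inter: mean over classes of the distance to the nearest other class center."""
    classes = list(np.unique(labels))
    centers = np.stack([embs[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([np.linalg.norm(e - centers[classes.index(l)])
                     for e, l in zip(embs, labels)])
    center_dists = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
    np.fill_diagonal(center_dists, np.inf)        # ignore self-distance
    inter = center_dists.min(axis=1).mean()
    return intra, inter

# Hypothetical usage on random features for 5 classes.
feats, lbls = np.random.randn(200, 128), np.random.randint(0, 5, size=200)
intra_d, inter_d = intra_inter_class_distance(feats, lbls)
```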
This trade-off between invariance and discriminative power makes it nontrivial to train one model to perform well on single-class counting and multi-class counting simultaneously. Split Training Intra Inter Single Syn-multi Set MAE MAE Val Single 2.35 1.12 18.55 32.46 Single+Syn-multi 2.90 1.30 32.36 25.74 Test Single 2.31 1.19 20.68 42.22 Single+Syn-multi 2.86 1.48 32.34 29.12 Table 5. Analysis on the trade-off between invariance and discriminative power of the counting model. After fine-tuning on our synthetic multi-class dataset, both the intra-class and inter-class distances of exemplar features become larger. 6." + }, + { + "url": "http://arxiv.org/abs/2304.05096v1", + "title": "Generating Features with Increased Crop-related Diversity for Few-Shot Object Detection", + "abstract": "Two-stage object detectors generate object proposals and classify them to\ndetect objects in images. These proposals often do not contain the objects\nperfectly but overlap with them in many possible ways, exhibiting great\nvariability in the difficulty levels of the proposals. Training a robust\nclassifier against this crop-related variability requires abundant training\ndata, which is not available in few-shot settings. To mitigate this issue, we\npropose a novel variational autoencoder (VAE) based data generation model,\nwhich is capable of generating data with increased crop-related diversity. The\nmain idea is to transform the latent space such latent codes with different\nnorms represent different crop-related variations. This allows us to generate\nfeatures with increased crop-related diversity in difficulty levels by simply\nvarying the latent norm. In particular, each latent code is rescaled such that\nits norm linearly correlates with the IoU score of the input crop w.r.t. the\nground-truth box. Here the IoU score is a proxy that represents the difficulty\nlevel of the crop. We train this VAE model on base classes conditioned on the\nsemantic code of each class and then use the trained model to generate features\nfor novel classes. In our experiments our generated features consistently\nimprove state-of-the-art few-shot object detection methods on the PASCAL VOC\nand MS COCO datasets.", + "authors": "Jingyi Xu, Hieu Le, Dimitris Samaras", + "published": "2023-04-11", + "updated": "2023-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Object detection plays a vital role in many computer vision systems. However, training a robust object detector often requires a large amount of training data with accurate bounding box annotations. Thus, there has been increasing attention on few-shot object detection (FSOD), which learns to detect novel object categories from just a few annotated training samples. It is particularly useful for problems where annotated data can be hard and costly to obtain such as rare medical conditions [31, 41], rare animal species [20, 44], satellite images [2, 19], or failure cases in bird 54% cow 22% bird 55% bird 57% Easy Crop Hard Crop (a) DeFRCN [33] (b) Ours Figure 1. Robustness to different object crops of the same object instance. (a) The classifier head of the state-of-the-art FSOD method [33] classifies correctly a simple crop of the bird but misclassifies a hard crop where some parts are missing. (b) Our method can handle this case since it is trained with additional generated features with increased crop-related diversity. We show the class with the highest confidence score. autonomous driving systems [27,28,36]. 
For the most part, state-of-the-art FSOD methods are built on top of a two-stage framework [35], which includes a region proposal network that generates multiple image crops from the input image and a classifier that labels these proposals. While the region proposal network generalizes well to novel classes, the classifier is more error-prone due to the lack of training data diversity [40]. To mitigate this issue, a natural approach is to generate additional features for novel classes [12, 55, 57]. For example, Zhang et al. [55] propose a feature hallucination network to use the variation from base classes to diversify training data for novel classes. For zero-shot detection (ZSD), Zhu et al. [57] propose to synthesize visual features for unseen objects based on a conditional variational auto-encoder. Although much progress has been made, the lack of data diversity is still a challenging issue for FSOD methods. arXiv:2304.05096v1 [cs.CV] 11 Apr 2023 \fHere we discuss a specific type of data diversity that greatly affects the accuracy of FSOD algorithms. Specifically, given a test image, the classifier needs to accurately classify multiple object proposals1 that overlap the object instance in various ways. The features of these image crops exhibit great variability induced by different object scales, object parts included in the crops, object positions within the crops, and backgrounds. We observe a typical scenario where the state-of-the-art FSOD method, DeFRCN [33], only classifies correctly a few among many proposals overlapping an object instance of a few-shot class. In fact, different ways of cropping an object can result in features with various difficulty levels. An example is shown in Figure 1a where the image crop shown in the top row is classified correctly while another crop shown in the bottom row confuses the classifier due to some missing object parts. In general, the performance of the method on those hard cases is significantly worse than on easy cases (see section 5.4). However, building a classifier robust against crop-related variation is challenging since there are only a few images per few-shot class. In this paper, we propose a novel data generation method to mitigate this issue. Our goal is to generate features with diverse crop-related variations for the few-shot classes and use them as additional training data to train the classifier. Specifically, we aim to obtain a diverse set of features whose difficulty levels vary from easy to hard w.r.t. how the object is cropped.2 To achieve this goal, we design our generative model such that it allows us to control the difficulty levels of the generated samples. Given a model that generates features from a latent space, our main idea is to enforce that the magnitude of the latent code linearly correlates with the difficulty level of the generated feature, i.e., the latent code of a harder feature is placed further away from the origin and vice versa. In this way, we can control the difficulty level by simply changing the norm of the corresponding latent code. In particular, our data generation model is based on a conditional variational autoencoder (VAE) architecture. The VAE consists of an encoder that maps the input to a latent representation and a decoder that reconstructs the input from this latent code. In our case, inputs to the VAE are object proposal features, extracted from a pre-trained object detector. 
The goal is to associate the norm (magnitude) of the latent code with the difficulty level of the object proposal. To do so, we rescale the latent code such that its norm linearly correlates with the Intersection-Over-Union (IoU) score of the input object proposal w.r.t. the ground-truth object box. This IoU score is a proxy that partially indicates the difficulty level: A high IoU score indicates that the ob1Note that an RPN typically outputs 1000 object proposals per image. 2In this paper, the difficulty level is strictly related to how the object is cropped. ject proposal significantly overlaps with the object instance while a low IoU score indicates a harder case where a part of the object can be missing. With this rescaling step, we can bias the decoder to generate harder samples by increasing the latent code magnitude and vice versa. In this paper, we use latent codes with different norms varying from small to large to obtain a diverse set of features which can then serve as additional training data for the few-shot classifier. To apply our model to FSOD, we first train our VAE model using abundant data from the base classes. The VAE is conditioned on the semantic code of the input instance category. After the VAE model is trained, we use the semantic embedding of the few-shot class as the conditional code to synthesize new features for the corresponding class. In our experiments, we use our generated samples to finetune the baseline few-shot object detector DeFRCN [33]. Surprisingly, a vanilla conditional VAE model trained with only ground-truth box features brings a 3.7% nAP50 improvement over the DeFRCN baseline in the 1-shot setting of the PASCAL VOC dataset [4]. Note that we are the first FSOD method using VAE-generated features to support the training of the classifier. Our proposed Norm-VAE can further improve this new state-of-the-art by another 2.1%, i.e., from 60% to 62.1%. In general, the generated features from Norm-VAE consistently improve the state-of-the-art fewshot object detector [33] for both PASCAL VOC and MS COCO [24] datasets. Our main contributions can be summarized as follows: \u2022 We show that lack of crop-related diversity in training data of novel classes is a crucial problem for FSOD. \u2022 We propose Norm-VAE, a novel VAE architecture that can effectively increase crop-related diversity in difficulty levels into the generated samples to support the training of FSOD classifiers. \u2022 Our experiments show that the object detectors trained with our additional features achieve state-of-the-art FSOD in both PASCAL VOC and MS COCO datasets. 2. Related Work Few-shot Object Detection Few-shot object detection aims to detect novel classes from limited annotated examples of previously unseen classes. A number of prior methods [5, 7, 8, 10, 11, 17, 17, 21, 23, 25, 26, 32, 40, 45\u201347, 56] have been proposed to address this challenging task. One line of work focuses on the meta-learning paradigm, which has been widely explored in few-shot classification [6, 16, 37, 43, 50, 52\u201354]. Meta-learning based approaches introduce a meta-learner to acquire meta-knowledge that can be then transferred to novel classes. [16] propose a meta feature learner and a reweighting module to fully exploit generalizable features from base classes and quickly adapt the prediction network to predict novel classes. [43] pro\fpose specialized meta-strategies to disentangle the learning of category-agnostic and category-specific components in a CNN based detection model. 
Another line of work adopts a two-stage fine-tuning strategy and has shown great potential recently [3,33,40,42,48]. [42] propose to fine-tune only box classifier and box regressor with novel data while freezing the other paramters of the model. This simple stragetegy outperforms previous meta-learners. FSCE [40] leverages a contrastive proposal encoding loss to promote instance level intra-class compactness and inter-class variance. Orthogonal to existing work, we propose to generate new samples for FSOD. Another data generation based method for FSOD is Halluc [55]. However, their method learns to transfer the shared within-class variation from base classes while we focus on the crop-related variance. Feature Generation Feature generation has been widely used in low-shot learning tasks. The common goal is to generate reliable and diverse additional data. For example, in image classification, [51] propose to generate representative samples using a VAE model conditioned on the semantic embedding of each class. The generated samples are then used together with the original samples to construct class prototypes for few-shot learning. In spirit, their conditionalVAE system is similar to ours. [49] propose to combine a VAE and a Generative Adversarial Network (GAN) by sharing the decoder of VAE and generator of GAN to synthesize features for zero-shot learning. In the context of object detection, [55] propose to transfer the shared modes of within-class variation from base classes to novel classes to hallucinate new samples. [56] propose to synthesize visual features for unseen objects from semantic information and augment existing training algorithms to incorporate unseen object detection. Recently, [15] propose to synthesize samples which are both intra-class diverse and inter-class separable to support the training of zero-shot object detector. However, these methods do not take into consideration the variation induced by different crops of the same object, which is the main focus of our proposed method. Variational Autoencoder Different VAE variants have been proposed to generate diverse data [9, 14, 18, 38]. \u03b2VAE [14] imposes a heavy penalty on the KL divergence term to enhance the disentanglement of the latent dimensions. By traversing the values of latent variables, \u03b2VAE can generate data with disentangled variations. ControlVAE [38] improves upon \u03b2-VAE by introducing a controller to automatically tune the hyperparameter added in the VAE objective. However, disentangled representation learning can not capture the desired properties without supervision. Some VAE methods allow explicitly controllable feature generation including CSVAE [18] and PCVAE [9]. CSVAE [18] learns latent dimensions associated with binary properties. The learned latent subspace can easily be inspected and independently manipulated. PCVAE [9] uses a Bayesian model to inductively bias the latent representation. Thus, moving along the learned latent dimensions can control specific properties of the generated data. Both CSVAE and PCVAE use additional latent variables and enforce additional constrains to control properties. In contrast, our Norm-VAE directly encodes a variational factor into the norm of the latent code. Experiments show that our strategy outperforms other VAE architectures, while being simpler and without any additional training components. 3. Method In this section, we first review the problem setting of few-shot object detection and the conventional two-stage fine-tuning framework. 
Then we introduce our method that tackles few-shot object detection via generating features with increased crop-related diversity. 3.1. Preliminaries In few-shot object detection, the training set is divided into a base set DB with abundant annotated instances of classes CB, and a novel set DN with few-shot data of classes CN, where CB and CN are non-overlapping. For a sample (x, y) \u2208DB \u222aDN, x is the input image and y = {(ci, bi), i = 1, ..., n} denotes the categories c \u2208CB \u222aCN and bounding box coordinates b of the n object instances in the image x. The number of objects for each class in CN is K for K-shot detection. We aim to obtain a few-shot detection model with the ability to detect objects in the test set with classes in CB \u222aCN. Recently, two-stage fine-tuning methods have shown great potential in improving few-shot detection. In these two-stage detection frameworks, a Region Proposal Network (RPN) takes the output feature maps from a backbone feature extractor as inputs and generates region proposals. A Region-of-Interest (RoI) head feature extractor first pools the region proposals to a fixed size and then encodes them as vector embeddings, known as the RoI features. A classifier is trained on top of the RoI features to classify the categories of the region proposals. The fine-tuning often follows a simple two-stage training pipeline, i.e., the data-abundant base training stage and the novel fine-tuning stage. In the base training stage, the model collects transferable knowledge across a large base set with sufficient annotated data. Then in the fine-tuning stage, it performs quick adaptation on the novel classes with limited data. Our method aims to generate features with diverse crop-related variations to enrich the training data for the classifier head during the fine-tuning stage. In our experiments, we show that our generated features significantly improve the performance of DeFRCN [33]. \fReconstructed Feature Input Feature \ud835\udc67\u0303 Latent Codes \ud835\udc65 $ Transformed Latent Codes \ud835\udc67 x IoU = 0.9 IoU = 0.7 g(0.7) g(0.9) IoU = 0.7 IoU = 0.9 IoU = 0.7 IoU = 0.7 Semantic Embedding a Semantic Embedding a g(x) = w*x + b 0 0 Figure 2. Norm-VAE for modelling crop-related variations. The original latent code z is rescaled to \u02c6 z such that the norm of \u02c6 z linearly correlates with the IoU score of the input crop (w.r.t. the ground truth box). The original latent codes are colored in blue while the rescaled ones are colored in yellow. The norm of the new latent code is the output of a simple linear function g(\u00b7) taking the IoU score as the single input. As can be seen, the two points whose IoU = 0.7 are both rescaled to norm g(0.7) while another point whose IoU = 0.9 is mapped to norm g(0.9). As a result, different latent norms represent different crop-related variations, enabling diverse feature generation. 3.2. Overall Pipeline Figure 2 summarizes the main idea of our proposed VAE model. For each input object crop, we first use a pre-trained object detector to obtain its RoI feature. The encoder takes as input the RoI feature and the semantic embedding of the input class to output a latent code z. We then transform z such that its norm linearly correlates with the IoU score of the input object crop w.r.t. the ground-truth box. The new norm is the output of a simple linear function g(\u00b7) taking the IoU score as the single input. 
The decoder takes as input the new latent code and the class semantic embedding to output the reconstructed feature. Once the VAE is trained, we use the semantic embedding of the few-shot class as the conditional code to synthesize new features for the class. To ensure the diversity w.r.t. object crop in generated samples, we vary the norm of the latent code when generating features. The generated features are then used together with the few-shot samples to fine-tune the object detector. 3.2.1 Norm-VAE for Feature Generation We develop our feature generator based on a conditional VAE architecture [39]. Given an input object crop, we first obtain its Region-of-Interest (RoI) feature f via a pretrained object detector. The RoI feature f is the input for the VAE. The VAE is composed of an Encoder E(f, a), which maps a visual feature f to a latent code z, and a decoder G(z, a) which reconstructs the feature f from z. Both E and G are conditioned on the class semantic embedding a. We obtain this class semantic embedding a by inputting the class name into a semantic model [30,34]. It contains classspecific information and serves as a controller to determine the categories of the generated samples. Conditioning on these semantic embeddings allows reliably generating features for the novel classes based on the learned information from the base classes [51]. Here we assume that the class names of both base and novel classes are available and we can obtain the semantic embedding of all classes. We first start from a vanilla conditional VAE model. The loss function for training this VAE for a feature fi of class j can be defined as: \\ labe l { e q:cvae} \\begin {alig n e d} L_{V}(f_i) = \\textnormal {KL} \\left ( q(z_i|f_i,a^j)||p(z|a^j) \\right ) \\\\ \\textnormal {E}_{q(z_i|f_i, a^j)}[\\textnormal {log }p(f_i|z_i,a^j)], \\end {aligned} g (1) where aj is the semantic embedding of class j. The first term is the Kullback-Leibler divergence between the VAE posterior q(z|f, a) and a prior distribution p(z|a). The second term is the decoder\u2019s reconstruction error. q(z|f, a) is modeled as E(f, a) and p(f|z, a) is equal to G(z, a). The prior distribution is assumed to be N(0, I) for all classes. The goal is to control the crop-related variation in a generated sample. Thus, we establish a direct correspondence between the latent norm and the crop-related variation. To accomplish this, we transform the latent code such that its norm correlates with the IoU score of the input crop. Given an input RoI feature fi of a region with an IoU score si, we first input this RoI feature to the encoder to obtain its latent code zi. We then transform zi to \u02dc zi such that the norm of \u02dc zi correlates to si. The new latent code \u02dc zi is the output of the transformation function T (\u00b7, \u00b7): \\l a b el { eq: n or mali z ation} \\tilde {z_i} = \\mathcal {T}(z_i,s_i) = \\frac {z_i} {\\lVert z_i \\rVert } * {g}(s_i), (2) where \u2225zi\u2225is the L2 norm of zi, si is the IoU score of the input proposal w.r.t. its ground-truth object box, and g(\u00b7) is a simple pre-defined linear function that maps an IoU score to a norm value. 
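A minimal sketch of this rescaling step is given below, assuming PyTorch. The concrete slope and intercept of g are placeholders chosen to match the norm range from sqrt(512) to 5*sqrt(512) reported in Sec. 4.2, with lower-IoU (harder) crops mapped to larger norms as in the analysis of Sec. 5.5; none of this is the authors' code.

```python
# Minimal sketch of the latent rescaling in Eq. (2): T(z, s) = z / ||z|| * g(s).
# g is a pre-defined linear map from the IoU score s in [0.5, 1] to a target
# norm; the slope/intercept below are illustrative placeholders.
import torch

BASE = 512 ** 0.5  # latent dimension is 512, so ||z|| typically centers near sqrt(512)

def g(iou: torch.Tensor) -> torch.Tensor:
    # g(1.0) = sqrt(512), g(0.5) = 5 * sqrt(512): harder crops get larger norms.
    return BASE * (9.0 - 8.0 * iou)

def rescale_latent(z: torch.Tensor, iou: torch.Tensor) -> torch.Tensor:
    """Rescale each latent code so that its L2 norm equals g(iou)."""
    norm = z.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return z / norm * g(iou).unsqueeze(-1)

# Example: a batch of 4 latent codes with their proposals' IoU scores.
z = torch.randn(4, 512)
iou = torch.tensor([0.55, 0.7, 0.9, 1.0])
z_tilde = rescale_latent(z, iou)
print(z_tilde.norm(dim=-1))  # approximately g(iou)
```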
With this new transformation step, the loss function of the VAE from equation 1 for an input feature fi from class j with an IoU score si thus can be rewritten as: \\ labe l { e q: n orm_vae} \\begin {ali g n ed} L_{V}(f_ i ,s_i) = \\textnormal {KL} \\left ( q(z_i|f_i,a^j)||p(z|a^j) \\right ) \\\\ \\textnormal {E}_{q\\left ({z_i}|f_i, a^j\\right )}\\left [\\textnormal {log }p(f_i|\\mathcal {T}(z_i,s_i),a^j)\\right ]. \\end {aligned} g (3) \f3.2.2 Generating Diverse Data for Improving Few-shot Object Detection After the VAE is trained on the base set, we generate a set of features with the trained decoder. Given a class y with a semantic vector ay and a noise vector z, we generate a set of augmented features Gy: \\ l a b el { e q: o utp u t} \\begin {aligned} \\mathbb {G}^y = \\{\\hat {f}|\\hat {f} = G(\\frac {z} {\\lVert z \\rVert } * \\beta , a^y)\\}, \\end {aligned} (4) where we vary \u03b2 to obtain generated features with more crop-related variations. The value range of \u03b2 is chosen based on the mapping function g(\u00b7). The augmented features are used together with the few-shot samples to finetune the object detector. We fine-tune the whole system using an additional classification loss computed on the generated features together with the original losses computed on real images. This is much simpler than the previous method of [55] where they fine-tune their system via an EM-like (expectation-maximization) manner. 4. Experiments 4.1. Datasets and Evaluation Protocols We conduct experiments on both PASCAL VOC (07 + 12) [4] and MS COCO datasets [24]. For fair comparison, we follow the data split construction and evaluation protocol used in previous works [16]. The PASCAL VOC dataset contains 20 categories. We use the same 3 base/novel splits with TFA [42] and refer them as Novel Split 1,2, 3. Each split contains 15 base classes and 5 novel classes. Each novel class has K annotated instances, where K = 1, 2, 3, 5, 10. We report AP50 of the novel categories (nAP50) on VOC07 test set. For MS COCO, the 60 categories disjoint with PASCAL VOC are used as base classes while the remaining 20 classes are used as novel classes. We evaluate our method on shot 1,2,3,5,10,30 and COCOstyle AP of the novel classes is adopted as the evaluation metrics. 4.2. Implementation Details Feature generation methods like ours in theory can be built on top of many few-shot object detectors. In our experiments, we use the pre-trained Faster-RCNN [35] with ResNet-101 [13] following previous work DeFRCN [33]. The dimension of the extracted RoI feature is 2048. For our feature generation model, the encoder consists of three fully-connected (FC) layers and the decoder consists of two FC layers, both with 4096 hidden units. LeakyReLU and ReLU are the non-linear activation functions in the hidden and output layers, respectively. The dimensions of the latent space and the semantic vector are both set to be 512. Our semantic embeddings are extracted from a pre-trained CLIP [34] model in all main experiments. An additional experiment using Word2Vec [29] embeddings is reported in Section 5.2. After the VAE is trained on the base set with various augmented object boxes , we use the trained decoder to generate k = 30 features per class and incorporate them into the fine-tuning stage of the DeFRCN model. We set the function g(\u00b7) in Equation 2 to a simple linear function g(x) = w \u2217x + b which maps an input IoU score x to the norm of the new latent code. 
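A minimal sketch of the generation step in Eq. (4) is shown below, assuming a conditional decoder callable G(z, a) like the one sketched earlier and the class semantic embedding a^y. The number of samples per class (30) and the 0.75 increment follow Sec. 4.2, but the exact scale of the beta schedule is not spelled out in the paper, so these numbers are placeholders.

```python
# Minimal sketch of Eq. (4): generate features for a few-shot class by decoding
# unit-norm Gaussian noise rescaled to a controllable norm beta.  `decoder` is
# an assumed callable G(z, a); `class_embedding` is the class semantic code a^y.
import torch

BASE = 512 ** 0.5   # pre-rescaling latent norms are typically around sqrt(512)

def generate_features(decoder, class_embedding, n_samples=30,
                      beta_start=BASE, beta_step=0.75):
    feats, beta = [], beta_start
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(1, 512)
            z = z / z.norm() * beta                    # fix ||z|| = beta
            feats.append(decoder(z, class_embedding))  # G(z / ||z|| * beta, a^y)
            beta += beta_step                          # sweep from easier to harder norms
    return torch.cat(feats, dim=0)                     # extra data for fine-tuning
```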
Note that x is in range [0.5, 1] and the norm of the latent code of our VAE before the rescaling typically centers around \u221a 512 (512 is the dimension of the latent code). We empirically choose g(\u00b7) such that the new norm ranges from \u221a 512 to 5 \u2217 \u221a 512. We provide further analyses on the choice of g(\u00b7) in the supplementary material. For each feature generation iteration, we gradually increase the value of the controlling parameter \u03b2 in Equation 4 with an interval of 0.75. 4.3. Few-shot Detection Results We use the generated features from our VAE model together with the few-shot samples to fine-tune DeFRCN. We report the performance of two models: \u201cVanilla-VAE\u201d denotes the performance of the model trained with generated features from a vanilla VAE trained on the base set of ground-truth bounding boxes and \u201cNorm-VAE\u201d denotes the performance of the model trained with features generated from our proposed Norm-VAE model. PASCAL VOC Table 1 shows our results for all three random novel splits from PASCAL VOC. Simply using a VAE model trained with the original data outperforms the state-of-the-art method DeFRCN in all shot and split on PASCAL VOC benchmark. In particular, vanilla-VAE improves DeFRCN by 3.7% for 1-shot and 4.3% for 3-shot on Novel Split 1. Using additional data from our proposed Norm-VAE model consistently improves the results across all settings. We provide qualitative examples in the supplementary material. MS COCO Table 2 shows the FSOD results on MS COCO dataset. Our generated features bring significant improvements in most cases, especially in low-shot settings (K \u226410). For example, Norm-VAE brings a 2.9% and a 2.0% nAP improvement over DeFRCN in 1-shot and 2-shot settings, respectively. Pseudo-Labeling is better than our method in higher shot settings. However, they apply mosaic data augmentation [1] during fine-tuning. 5. Analyses 5.1. Effectiveness of Norm-VAE We compare the performance of Norm-VAE with a baseline vanilla VAE model that is trained with the same set of augmented data. As shown in Table 4, using the vanilla VAE with more training data does not bring performance improvement compared to the VAE model trained with the \fNovel Split 1 Novel Split 2 Novel Split 3 Method 1 2 3 5 10 1 2 3 5 10 1 2 3 5 10 TFA w/ fc [42] 36.8 29.1 43.6 55.7 57.0 18.2 29.0 33.4 35.5 39.0 27.7 33.6 42.5 48.7 50.2 TFA w/ cos [42] 39.8 36.1 44.7 55.7 56.0 23.5 26.9 34.1 35.1 39.1 30.8 34.8 42.8 49.5 49.8 MPSR [48] 41.7 51.4 55.2 61.8 24.4 39.2 35.1 39.9 47.8 42.3 48.0 49.7 FsDetView [50] 24.2 35.3 42.2 49.1 57.4 21.6 24.6 31.9 37.0 45.7 21.2 30.0 37.2 43.8 49.6 FSCE [40] 44.2 43.8 51.4 61.9 63.4 27.3 29.5 43.5 44.2 50.2 37.2 41.9 47.5 54.6 58.5 CME [22] 41.5 47.5 50.4 58.2 60.9 27.2 30.2 41.4 42.5 46.8 34.3 39.6 45.1 48.3 51.5 SRR-FSD [56] 47.8 50.5 51.3 55.2 56.8 32.5 35.3 39.1 40.8 43.8 40.1 41.5 44.3 46.9 46.4 Halluc. 
[55] 45.1 44.0 44.7 55.0 55.9 23.2 27.5 35.1 34.9 39.0 30.5 35.1 41.4 49.0 49.3 FSOD-MC [5] 40.1 44.2 51.2 62.0 63.0 33.3 33.1 42.3 46.3 52.3 36.1 43.1 43.5 52.0 56.0 FADI [3] 50.3 54.8 54.2 59.3 63.2 30.6 35.0 40.3 42.8 48.0 45.7 49.7 49.1 48.3 51.5 CoCo-RCNN [25] 43.9 44.5 53.1 64.6 65.5 29.4 31.3 43.8 44.3 51.8 39.1 43.9 47.2 54.7 60.3 MRSN [26] 47.6 48.6 57.8 61.9 62.6 31.2 38.3 46.7 47.1 50.6 35.5 30.9 45.6 54.4 57.4 FCT [11] 49.9 57.1 57.9 63.2 67.1 27.6 34.5 43.7 49.2 51.2 39.5 54.7 52.3 57.0 58.7 Pseudo-Labelling [17] 54.5 53.2 58.8 63.2 65.7 32.8 29.2 50.7 49.8 50.6 48.4 52.7 55.0 59.6 59.6 DeFRCN [33] 56.3 60.3 62.0 67.0 66.1 35.7 45.2 51.5 54.1 53.3 54.5 55.6 56.6 60.8 62.7 Vanila-VAE (Ours) 60.0 63.3 66.3 68.3 67.1 39.3 46.2 52.7 53.5 53.4 56.0 58.8 57.1 62.6 63.6 Norm-VAE (Ours) 62.1 64.9 67.8 69.2 67.5 39.9 46.8 54.4 54.2 53.6 58.2 60.3 61.0 64.0 65.5 Table 1. Few-shot object detection performance (nAP50) on PASCAL VOC dataset. We evaluate the performance on three different splits. Our method consistently improves upon the baseline for all three splits across all shots. Best performance in bold. nAP nAP75 Method 1 2 3 5 10 30 1 2 3 5 10 30 TFA w/ fc [42] 2.9 4.3 6.7 8.4 10.0 13.4 2.8 4.1 6.6 8.4 9.2 13.2 TFA w/ cos [42] 3.4 4.6 6.6 8.3 10.0 13.7 3.8 4.8 6.5 8.0 9.3 13.2 MPSR [48] 2.3 3.5 5.2 6.7 9.8 14.1 2.3 3.4 5.1 6.4 9.7 14.2 FADI [3] 5.7 7.0 8.6 10.1 12.2 16.1 6.0 7.0 8.3 9.7 11.9 15.8 FCT [11] 7.9 17.1 21.4 7.9 17.0 22.1 Pseudo-Labelling [17] \u2020 17.8 24.5 17.8 25.0 DeFRCN [33] 6.6 11.7 13.3 15.6 18.7 22.4 7.0 12.2 13.6 15.1 17.6 22.2 Vanilla-VAE (ours) 8.8 13.0 14.1 15.9 18.7 22.5 7.9 12.5 13.4 15.1 17.6 22.2 Norm-VAE (ours) 9.5 13.7 14.3 15.9 18.7 22.5 8.8 13.7 14.2 15.3 17.8 22.4 Table 2. Few-shot detection performance for the novel classes on MS COCO dataset. Our approach outperforms baseline methods in most cases, especially in low-shot settings (K < 10). \u2020 applies mosaic data augmentation introduced in [1] during fine-tuning. Best performance in bold. base set. This suggests that training with more diverse data does not guarantee diversity in generated samples w.r.t. a specific property. Our method, by contrast, improves the baseline model by 1.3% \u223c1.9%, which demonstrates the effectiveness of our proposed Norm-VAE. 5.2. Performance Using Different Semantic Embeddings We use CLIP [34] features in our main experiments. In Table 3, we compare this model with another model trained with Word2Vec [29] on PASCAL VOC dataset. Note that CLIP model is trained with 400M pairs (image and its text title) collected from the web while Word2Vec is trained with only text data. Our Norm-VAE trained with Word2Vec embedding achieves similar performance to the model trained with CLIP embedding. In both cases, the model outperform the state-of-the-art FSOD method in all settings. 5.3. Robustness against Inaccurate Localization In this section, we conduct experiments to show that our object detector trained with features with diverse croprelated variation is more robust against inaccurate bounding box localization. Specifically, we randomly select 1000 testing instances from PASCAL VOC test set and create 30 augmented boxes for each ground-truth box. Each augmented box is created by enlarging the ground-truth boxes by x% for each dimension where x ranges from 0 to 30. 
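A minimal, dependency-free sketch of how such augmented boxes and their IoU scores (the difficulty proxy used throughout) could be computed is given below; the function names are ours.

```python
# Minimal sketch of the box enlargement used in the robustness test: each
# augmented box enlarges the ground-truth box by pct% per dimension about the
# same center.  Names are illustrative, not the authors' code.
def enlarge_box(box, pct):
    """box = (x1, y1, x2, y2); pct in [0, 30] is the enlargement percentage."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    dw, dh = w * pct / 200.0, h * pct / 200.0
    return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)

def iou(a, b):
    """Intersection-over-union of two boxes, the difficulty proxy."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

gt = (10.0, 10.0, 50.0, 50.0)
aug = enlarge_box(gt, 30)
print(iou(gt, aug))  # IoU drops as the enlargement percentage grows
```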
The result is summarized in Figure 3 where \u201cBaseline\u201d denotes the performance of DeFRCN [33], \u201cVAE\u201d is the performance of the model trained with features generated from a vanilla VAE, and \u201cNorm-VAE\u201d is the model trained with generated features from our proposed model. \fMethod Semantic Novel Split 1 Novel Split 2 Novel Split 3 Embedding 1-shot 2-shot 3-shot 1-shot 2-shot 3-shot 1-shot 2-shot 3-shot DeFRCN [33] 56.3 60.3 62.0 35.7 45.2 51.5 54.5 55.6 56.6 Vanilla VAE Word2Vec 60.4 62.9 66.7 38.7 45.2 52.9 55.6 58.7 57.9 Norm-VAE 61.6 63.4 66.3 40.7 46.4 53.3 56.8 59.0 60.2 Vanilla VAE CLIP 60.0 63.3 66.3 39.3 46.2 52.7 56.0 58.8 57.1 Norm-VAE 62.1 64.9 67.8 39.9 46.8 54.4 58.2 60.3 61.0 Table 3. FSOD Performance of VAE models trained with different class semantic embeddings. CLIP [34] is trained with 400M pairs (image and its text title) collected from the web while Word2Vec [29] is trained with only text data. Data 1-shot 2-shot 3-shot DeFRCN [33] 56.3 60.3 62.0 VAE Orginal 60.0 63.3 66.3 VAE Augmented 60.1 62.7 66.4 Norm-VAE Augmented 62.1 64.9 67.8 Table 4. Performance comparisons between vanilla VAE and Norm-VAE on PASCAL VOC dataset. Training a the vanilla VAE with the augmented data does not bring performance improvement. One possible reason is that the generated samples are not guaranteed to be diverse even with sufficient data. Figure 3 (a) shows the classification accuracy of the object detector on the augmented box as the IoU score between the augmented bounding box and the ground-truth box decreases. For both the baseline method DeFRCN and the model trained with features from a vanilla VAE, the accuracy drops by \u223c10% as the IoU score decreases from 1.0 to 0.5. These results suggest that these models perform much better for boxes that have higher IoU score w.r.t. the ground-truth boxes. Our proposed method has higher robustness to these inaccurate boxes: the accuracy of the model trained with features from Norm-VAE only drops by \u223c5% when IoU score decreases from 1 to 0.5. Figure 3 (b) plots the average probability score of the classifier on the ground-truth category as the IoU score decreases. Similarly, the probability score of both baseline DeFRCN and the model trained with features from a vanilla VAE drops around 0.08 as the IoU score decreases from 1.0 to 0.5. The model trained with features from Norm-VAE, in comparison, has more stable probability score as the IoU threshold decreases. 5.4. Performance on Hard Cases In Table 5, we show AP 50\u223c75 of our method on PASCAL VOC dataset (Novel Split 1) in comparison with the state-of-the-art method DeFRCN. Here AP 50\u223c75 refers to the average precision computed on the proposals with the IoU thresholds between 50% and 75% and discard the proposals with IoU scores (w.r.t. the ground-truth box) larger (a) Accuracy (b) Probability score Figure 3. Classification accuracy and probability score of the object detector on the augmented box. We compare between the baseline DeFRCN [33], the model trained with features from vanilla VAE and our proposed Norm-VAE. By generating features with diverse crop-related variations, we increase the object detector\u2019s robustness against inaccurate object box localization. Method 1-shot 2-shot 3-shot DeFRCN [33] 16.6 13.3 15.2 Ours (\u2191Improvement) 18.8 (\u21912.2) 16.4 (\u21913.1) 19.2 (\u21914.0) Table 5. AP50\u223c75 of our method and DeFRCN on PASCAL VOC dataset. 
AP 50\u223c75 refers to the average precision computed on the proposals with the IoU thresholds between 50% and 75% and discard the proposals with IoU scores larger than 0.75, i.e., only \u201chard\u201d cases. than 0.75. Thus, AP 50\u223c75 implies the performance of the model in \u201chard\u201d cases where the proposals do not significantly overlap the ground-truth object boxes. In this extreme test, the performance of both models are worse than their AP50 counterparts (Table 1), showing that FSOD methods are generally not robust to those hard cases. Our method mitigates this issue, outperforming DeFRCN by substantial margins. However, the performance is still far from perfect. Addressing these challenging cases is a fruitful venue for future FSOD work. \fFeatures 1-shot 2-shot 3-shot 5-shot nAP50 nAP75 nAP50 nAP75 nAP50 nAP75 nAP50 nAP75 Low-IoU (Hard cases) 60.9 30.5 63.7 40.6 66.6 40.7 68.9 41.2 High-IoU (Easy cases) 60.2 31.6 63.2 41.0 66.3 41.5 68.3 42.1 Table 6. Comparison between models trained with different groups of generated features. The model trained with \u201cLow-IoU\u201d (hard cases) features has better nAP50 scores while the \u201cHigh-IoU\u201d (easy cases) model has better nAP75 scores. Features corresponding to different difficulty levels improve the performance differently in terms of nAP50 and nAP75. 5.5. Performance with Different Subsets of Generated Features In this section, we conduct experiments to show that different groups of generated features affect the performance of the object detector differently. Similar to Section 4.2, we generate 30 new features per few-shot class with various latent norms. However, instead of using all norms, we only use large norms (top 30% highest values) to generate the first group of features and only small norms (top 30% lowest values) to generate the second group of features. During training, larger norms correlate to input crops with smaller IoU scores w.r.t. the ground-truth boxes and vice versa. Thus, we denote these two groups as \u201cLow-IoU\u201d and \u201cHigh-IoU\u201d correspondingly. We train two models using these two sets of features and compare their performance in Table 6. As can be seen, the model trained with \u201cLow-IoU\u201d features has higher AP50 while the \u201cHigh-IoU\u201d model has higher AP75 score. This suggests that different groups of features affect the performance of the classifier differently. The \u201cLow-IoU\u201d features tend to increase the model\u2019s robustness to hard-cases while the \u201cHigh-IoU\u201d features can improve the performance for easier cases. Note that the performance of both of these models is not as good as the model trained with diverse variations and interestingly, very similar to the performance of the vanilla VAE model (Table 1). 5.6. Comparisons with Other VAE architectures Our proposed Norm-VAE can increase diversity w.r.t. image crops in generated samples. Here, we compare the performance of our proposed Norm-VAE with other VAE architectures, including \u03b2-VAE [14] and CSVAE [18]. We train all models on image features of augmented object crops on PASCAL VOC dataset using the same backbone feature extractor. For \u03b2-VAE, we generate additional features by traversing a randomly selected dimension of the latent code. For CSVAE, we manipulate the learned latent subspace to enforce variations in the generated samples. We use generated features from each method to finetune DeFRCN. The results are summarized in Table 7. 
In all cases, the generated features greatly benefit the baseline DeFRCN. This shows that lacking crop-related variation is a critical issue for FSOD, and augmenting features with increased crop-related diversity can effectively alleviate the problem. Our proposed Norm-VAE outperforms both \u03b2VAE and CSVAE in all settings. Note that CSVAE requires additional encoders to learn a pre-defined subspace correlated with the property, while our Norm-VAE directly encode this into the latent norm without any additional constraints. 1-shot 2-shot 3-shot DeFRCN [33] 56.3 60.3 62.0 \u03b2-VAE [14] 61.3 64.0 67.3 CSVAE [18] 61.6 64.1 67.4 Norm-VAE 62.1 64.9 67.8 Table 7. Comparison between Norm-VAE and other VAE variants. Norm-VAE outperforms \u03b2-VAE and CSVAE on PASCAL VOC dataset under all settings. Best performance in bold. 6." + }, + { + "url": "http://arxiv.org/abs/2303.11730v1", + "title": "Abstract Visual Reasoning: An Algebraic Approach for Solving Raven's Progressive Matrices", + "abstract": "We introduce algebraic machine reasoning, a new reasoning framework that is\nwell-suited for abstract reasoning. Effectively, algebraic machine reasoning\nreduces the difficult process of novel problem-solving to routine algebraic\ncomputation. The fundamental algebraic objects of interest are the ideals of\nsome suitably initialized polynomial ring. We shall explain how solving Raven's\nProgressive Matrices (RPMs) can be realized as computational problems in\nalgebra, which combine various well-known algebraic subroutines that include:\nComputing the Gr\\\"obner basis of an ideal, checking for ideal containment, etc.\nCrucially, the additional algebraic structure satisfied by ideals allows for\nmore operations on ideals beyond set-theoretic operations.\n Our algebraic machine reasoning framework is not only able to select the\ncorrect answer from a given answer set, but also able to generate the correct\nanswer with only the question matrix given. Experiments on the I-RAVEN dataset\nyield an overall $93.2\\%$ accuracy, which significantly outperforms the current\nstate-of-the-art accuracy of $77.0\\%$ and exceeds human performance at $84.4\\%$\naccuracy.", + "authors": "Jingyi Xu, Tushar Vaidya, Yufei Wu, Saket Chandra, Zhangsheng Lai, Kai Fong Ernest Chong", + "published": "2023-03-21", + "updated": "2023-03-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.SC", + "math.AC", + "13P25, 68W30", + "I.1; I.2.4; I.2.6; I.5.1" + ], + "main_content": "Introduction When we think of machine reasoning, nothing captures our imagination more than the possibility that machines would eventually surpass humans in intelligence tests and general reasoning tasks. Even for humans, to excel in IQ tests, such as the well-known Raven\u2019s progressive matrices (RPMs) [8], is already a non-trivial feat. A typical RPM instance is composed of a question matrix and an answer set; see Fig. 1. A question matrix is a 3 \u00d7 3 grid of panels \u2217Equal contributions. \u2020 Corresponding author. \u25e6This work was done when the author was previously at SUTD. Code: https://github.com/Xu-Jingyi/AlgebraicMR Figure 1. An example of RPM instance from the I-RAVEN dataset. The correct answer is marked with a red box. that satisfy certain hidden rules, where the \ufb01rst 8 panels are \ufb01lled with geometric entities, and the 9-th panel is \u201cmissing\u201d. The goal is to infer the correct answer for this last panel from among the 8 panels in the given answer set. 
The ability to solve RPMs is the quintessential display of what cognitive scientists call \ufb02uid intelligence. The word \u201c\ufb02uid\u201d alludes to the mental agility of discovering new relations and abstractions [56], especially for solving novel problems not encountered before. Thus, it is not surprising that abstract reasoning on novel problems is widely hailed as the hallmark of human intelligence [9]. Although there has been much recent progress in machine reasoning [29, 32, 59, 60, 62, 63, 73, 75, 83, 84], a common criticism [13, 49, 50] is that existing reasoning frameworks have focused on approaches involving extensive training, even when solving well-established reasoning tests such as RPMs. Perhaps most pertinently, as [13] argues, reasoning tasks such as RPMs should not need taskThis work is supported by the National Research Foundation, Singapore under its AI Singapore Program (AISG Award No: AISG-RP2019-015) and under its NRFF Program (NRFFAI1-2019-0005), and by Ministry of Education, Singapore, under its Tier 2 Research Fund (MOET2EP20221-0016). 1 arXiv:2303.11730v1 [cs.CV] 21 Mar 2023 \fFigure 2. An overview of our algebraic machine reasoning framework, organized into 2 stages. speci\ufb01c performance optimization. After all, if a machine optimizes performance by training on task-speci\ufb01c data, then that task cannot possibly be novel to the machine. To better emulate human reasoning, we propose what we call \u201calgebraic machine reasoning\u201d, a new reasoning framework that is well-suited for abstract reasoning. Our framework solves RPMs without needing to optimize for performance on task-speci\ufb01c data, analogous to how a gifted child solves RPMs without needing practice on RPMs. Our key starting point is to de\ufb01ne concepts as ideals of some suitably initialized polynomial ring. These ideals are treated as the \u201cactual objects of study\u201d in algebraic machine reasoning, which do not require any numerical values to be assigned to them. We shall elucidate how the RPM task can be realized as a computational problem in algebra involving ideals. Our reasoning framework can be broadly divided into two stages: (1) algebraic representation, and (2) algebraic machine reasoning; see Fig. 2. In the \ufb01rst stage, we represent RPM panels as ideals, based on perceptual attribute values extracted from object detection models. In the second stage, we propose 4 invariance modules to extract patterns from the RPM question matrix. To summarize, our main contributions are as follows: \u2022 We reduce \u201csolving the RPM task\u201d to \u201csolving a computational problem in algebra\u201d. Speci\ufb01cally, we present how the discovery of abstract patterns can be realized very concretely as algebraic computations known as primary decompositions of ideals. \u2022 In our algebraic machine reasoning framework, we introduce 4 invariance modules for extracting patterns that are meaningful to humans. \u2022 Our framework is not only able to select the correct answer from a given answer set, but also able to generate answers without needing any given answer set. \u2022 Experiments conducted on RAVEN and I-RAVEN datasets demonstrate that our reasoning framework signi\ufb01cantly outperforms state-of-the-art methods. 2. Related Work RPM solvers. There has been much recent interest in solving RPMs with deep-learning-based methods [29, 43, 62,84,85,88\u201391]. 
Most methods extract features from raw RPM images using nueral networks, and select answers by measuring panel similarities. Several works instead focus on generating correct answers without needing the answer set [55, 64]. To evaluate the reasoning capabilities of these methods, RPM-like datasets such as PGM [62] and RAVEN [83] have been proposed. Subsequently, I-RAVEN [26] and RAVEN-FAIR [5] are introduced to overcome a shortcut \ufb02aw in the answer set generation of RAVEN. Algebraic methods in AI. Using algebraic methods in AI is not new. Systems of polynomial equations are commonly seen in computer vision [57] and robotics [12], which are solved algebraically via Gr\u00a8 obner basis computations. In statistical learning theory, methods in algebraic geometry [78] and algebraic statistics [15] are used to study singularities in statistical models [40,79,80,82], to analyze generalization error in hierarchical models [76,77], to learn invariant subspaces of probability distributions [34,38], and to model Bayesian networks [16,70]. A common theme in these works is to study suitably de\ufb01ned algebraic varieties. In deep learning, algebraic methods are used to study the expressivity of neural nets [11, 31, 45, 87]. In automated theorem proving, Gr\u00a8 obner basis computations are used in proof-checking [68]. Recently, a matrix formulation of \ufb01rst-order logic was applied to the RPM task [86], where relations are approximated by matrices and reasoning is framed as a bilevel optimization task to \ufb01nd best-\ufb01t matrix operators. As far as we know, methods from commutative algebra have not been used in machine reasoning. 3. Proposed Algebraic Framework In abstract reasoning, a key cognitive step is to \u201cdiscover patterns from observations\u201d, which can be formulated con2 \fcretely as \u201c\ufb01nding invariances in observations\u201d. In this section, we describe how algebraic objects known as ideals are used to represent RPM instances, how patterns are extracted from such algebraic representations, and how RPMs can be solved, both for answer selection and answer generation, as computational problems in algebra. 3.1. Preliminaries Throughout, let R = R[x1, . . . , xn] be the ring of polynomials in variables x1, . . . , xn, with real coef\ufb01cients. In particular, R is closed under addition and multiplication of polynomials, i.e., for any a, b \u2208R, we have a + b, ab \u2208R. 3.1.1 Algebraic de\ufb01nitions Ideals in polynomial rings. A subset I \u2286R is called an ideal if there exist polynomials g1, . . . , gk in R such that I = {f1g1 + \u00b7 \u00b7 \u00b7 + fkgk|f1, . . . , fk \u2208R} contains all polynomial combinations of g1, . . . , gk. We say that G = {g1, . . . , gk} is a generating set for I, we call g1, . . . , gk generators, and we write either I = \u27e8g1, . . . , gk\u27e9 or I = \u27e8G\u27e9. Note that generating sets of ideals are not unique. If I has a generating set consisting only of monomials, then we say that I is a monomial ideal. (Recall that a monomial is a polynomial with a single term.) Given ideals J1 = \u27e8g1, . . . , gk\u27e9and J2 = \u27e8h1, . . . , h\u2113\u27e9, there are three basic operations (sums, products, intersections): J1 + J2 := \u27e8g1, . . . , gk, h1, . . . , h\u2113\u27e9; J1J2 := \u27e8{gihj|1 \u2264i \u2264k, 1 \u2264j \u2264\u2113}\u27e9; J1 \u2229J2 := {r \u2208R : r \u2208J1 and r \u2208J2}. Most algebraic computations involving ideals, especially \u201cadvanced\u201d operations (e.g. 
primary decompositions), require computing their Gr\u00a8 obner bases as a key initial step. More generally, Gr\u00a8 obner basis computation forms the backbone of most algorithms in algebra; see Appendix A.2. Primary decompositions. In commutative algebra, primary decompositions of ideals are a far-reaching generalization of the idea of prime factorization for integers. Its importance to algebraists cannot be overstated. Informally, every ideal J has a decomposition J = J1 \u2229\u00b7 \u00b7 \u00b7 \u2229Js as an intersection of \ufb01nitely many primary ideals. This intersection is called a primary decomposition of J, and each Jj is called a primary component of the decomposition. In the special case when J is a monomial ideal, there is an unique minimal primary decomposition with maximal monomial primary components [4]; We denote this unique set of primary components by pd(J). See Appendix A.3 for details. 3.1.2 Concepts as monomial ideals We de\ufb01ne a concept to be a monomial ideal of R. In particular, the zero ideal \u27e80\u27e9\u2286R is the concept \u201cnull\u201d, and could be interpreted as \u201cimpossible\u201d or \u201cnothing\u201d, while the ideal \u27e81\u27e9= R is the concept \u201cconceivable\u201d, and could be interpreted as \u201cpossible\u201d or \u201ceverything\u201d. Given a concept J \u2286R, a monomial in J is called an instance of the concept. For example, xblackxsquare is an instance of \u27e8xsquare\u27e9 (the concept \u201csquare\u201d). For each xi, we say \u27e8xi\u27e9\u2286R is a primitive concept, and xi is a primitive instance. Theorem 3.1. There are in\ufb01nitely many concepts in R, even though there are \ufb01nitely many primitive concepts in R. Furthermore, if J \u2286R is a concept, then the following hold: (i) J has in\ufb01nitely many instances, unless J = \u27e80\u27e9. (ii) J has a unique minimal generating set consisting of \ufb01nitely many instances, which we denote by mingen(J). (iii) If J \u0338= \u27e81\u27e9, then J has a unique set of associated concepts {P1, . . . , Pk}, together with a unique minimal primary decomposition J = J1 \u2229\u00b7 \u00b7 \u00b7 \u2229Jk, such that each Ji is a concept contained in Pi, that is maximal among all possible primary components contained in Pi that are concepts. See Appendix A.4 for a proof of Theorem 3.1 and for more details on why de\ufb01ning concepts as monomial ideals captures the expressiveness of concepts in human reasoning. 3.2. Stage 1: Algebraic representation We shall use the RPM instance depicted in Fig 1 as our running example, to show the entire algebraic reasoning process: (1) algebraic representation; and (2) algebraic machine reasoning. In this subsection, we focus on the \ufb01rst stage. Recall that every RPM instance is composed of 16 panels \ufb01lled with geometric entities. For our running example, each entity can be described using 4 attributes: \u201ccolor\u201d, \u201csize\u201d, \u201ctype\u201d, and \u201cposition\u201d. We also need one additional attribute to represent the \u201cnumber\u201d of entities in the panel. 3.2.1 Attribute concepts In human cognition, certain semantically similar concepts are naturally grouped to form a more general concept. For example, concepts such as \u201cred\u201d, \u201cgreen\u201d, \u201cblue\u201d, \u201cyellow\u201d, etc., can be grouped to form a new concept that represents \u201ccolor\u201d. 
Intuitively, we can think of \u201ccolor\u201d as an attribute, and \u201cred\u201d, \u201cgreen\u201d, \u201cblue\u201d, \u201cyellow\u201d as attribute values. For our running example, the 5 attributes are represented by 5 concepts (monomial ideals). In general, all possible values for each attribute are encoded as generators for the concept representing that attribute. However, for ease of explanation, we shall consider only those attribute values that are involved in Fig. 1 to explain our example: Anum := {xone, xtwo}, Apos := {xleft, xright}, Atype := {xtriangle, xsquare, xpentagon, xhexagon, xcircle}, Acolor := {xwhite, xgray, xdgray, xblack}, Asize := {xsmall, xavg, xlarge}. Let L := {num, pos, type, color, size} be the set of attribute labels, and let Aall := S \u2113\u2208L A\u2113. Initialize the ring 3 \fR := R[Aall] of all polynomials on the variables in Aall with real coef\ufb01cients. For each \u2113\u2208L, let J\u2113be the concept \u27e8A\u2113\u27e9\u2286R. These concepts, which we call attribute concepts, are task-speci\ufb01c. We assume humans tend to discover and organize complex patterns in terms of attributes. Thus for pattern extraction, we shall use the inductive bias that a concept representing a pattern is deemed meaningful if it is in some attribute concept. 3.2.2 Representation of RPM panels In order to encode the RPM images algebraically, we \ufb01rst need to train perception modules to extract attribute information directly from raw images. One possible approach for perception, as used in our experiments, is to train 4 RetinaNet models (each with a ResNet-50 backbone) separately for all 4 attributes except \u201cnumber\u201d, which can be directly inferred by counting the number of bounding boxes. After extracting attribute values for entities, we can represent each panel as a concept. For example, the top-left panel of the RPM in Fig. 1 can be encoded as the concept J1,1 = \u27e8xtwoxleftxsquarexblackxavg, xtwoxrightxtrianglexgrayxavg\u27e9 in the polynomial ring R. Here, J1,1 represents a panel with two entities, a black square of average size on the left, and a gray triangle of average size on the right. The indices in J1,1 tell us that the panel is in row 1, column 1. Similarly, we can encode the remaining 7 panels of the question matrix as concepts J1,2, J1,3, . . . , J3,2 and encode the 8 answer options as concepts Jans1, . . . , Jans8. In general, every monomial generator of each concept describes an entity in the associated panel. The list of 8 concepts J = [J1,1, . . . , J3,2] shall be called a concept matrix; this represents the RPM question matrix with a missing 9-th panel. Let Ji := [Ji,1, Ji,2, Ji,3] (for i = 1, 2) represent the i-th row in the question matrix. 3.3. Stage 2: algebraic machine reasoning Previously in Section 3.2, we have already encoded the question matrix in an RPM instance as a concept matrix J = [J1,1, . . . , J3,2]. In this subsection, we will introduce the reasoning process of our algebraic framework. Our goal of extracting patterns for a single row of J can be mathematically formulated as \u201c\ufb01nding invariance\u201d across the concepts that represent the panels in this row. (The same process can be applied to columns.) This seemingly imprecise idea of \u201c\ufb01nding invariance\u201d can be realized very concretely via the computation of primary decompositions. Ideally, we want to extract patterns that are meaningful to humans. 
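As a concrete illustration of this representation, the dependency-free sketch below encodes a panel as a squarefree monomial ideal, storing each monomial as the frozenset of its variables; membership, sum, intersection and minimal generating sets then reduce to simple set operations. The representation and function names are ours, not the authors' implementation.

```python
# Squarefree monomials as frozensets of variable names; a monomial ideal is
# represented by a set of generators.  The operations below are the standard
# rules for monomial ideals, specialized to the squarefree case.
from itertools import product

def divides(m, n):
    """m | n holds iff every variable of m also appears in n."""
    return m <= n

def in_ideal(n, gens):
    """A monomial lies in <gens> iff some generator divides it."""
    return any(divides(g, n) for g in gens)

def ideal_sum(I, J):                  # <I> + <J> = <I ∪ J>
    return set(I) | set(J)

def ideal_intersection(I, J):         # generated by pairwise lcms; for squarefree
    return {g | h for g, h in product(I, J)}   # monomials, lcm = union of supports

def mingen(I):
    """Unique minimal generating set: drop generators divisible by another one."""
    return {g for g in I if not any(h < g for h in I)}

# Top-left panel of Fig. 1 as the concept J_{1,1}:
J11 = {frozenset({"two", "left", "square", "black", "avg"}),
       frozenset({"two", "right", "triangle", "gray", "avg"})}
print(in_ideal(frozenset({"two", "left", "square", "black", "avg"}), J11))  # True
```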
Hence we have designed 4 invariance modules to mimic human cognition in pattern recognition. 3.3.1 Prior knowledge To use algebraic machine reasoning, we adopt: \u2022 Inductive bias of attribute concepts (see Section 3.2.1); \u2022 Useful binary operations on numerical values; \u2022 Functions that map concepts to concepts. There are numerous binary operations, such as +, \u2212, \u00d7, \u00f7, min, max, etc., that can be applied to numerical values extracted from concepts. For the RPM task, we use +, \u2212. In algebra, the study of functions between algebraic objects is a productive strategy for understanding the underlying algebraic structure. Analogously, we shall use maps on concepts to extract complex patterns. For the RPM task, we need to cyclically order the values in A\u2113for each attribute \u2113\u2208L before we can extract sequential information. To encode the idea of \u201cnext\u201d, we introduce the function fnext(J|\u2206) de\ufb01ned on concepts J, where \u2206represents the step-size. Each variable x \u2208A\u2113that appears in a generator of J is mapped to the \u2206-th variable after x, w.r.t. the cyclic order on A\u2113. For example, fnext(\u27e8xsquarexgrayxavg\u27e9|1) = \u27e8xpentagonxdgrayxlarge\u27e9, and fnext(\u27e8xsquare\u27e9| \u22122) = \u27e8xcircle\u27e9. 3.3.2 Reasoning via primary decompositions Given concepts J1, . . . , Jk that share a common \u201cpattern\u201d, how do we extract this pattern? Abstractly, a common pattern can be treated as a concept K that contains all of these concepts J1, . . . , Jk. If there are several common patterns K1, . . . , Kr, then each concept Ji can be \u201cdecomposed\u201d as Ji = K1 \u2229\u00b7 \u00b7 \u00b7 \u2229Kr \u2229K\u2032 i for some ideal K\u2032 i. Thus, we have the following algebraic problem: Given J1, . . . , Jk, compute their common components K1, . . . , Kr. Recall that a concept J has a unique minimal primary decomposition, since concepts are monomial ideals. Thus, to extract the common patterns of concepts J1, . . . , Jk, we \ufb01rst have to compute pd(J1), . . . , pd(Jk), then extract the common primary components. The intersection of (any subset of) these common components would yield a new concept, which can be interpreted as a common pattern of the concepts J1, . . . , Jk. As part of our inductive bias, we are only interested in those primary components that are contained in attribute concepts. See Appendix A.3 for further details. 3.3.3 Proposed invariance modules Our 4 proposed invariance modules are: (1) intra-invariance module, (2) inter-invariance module, (3) compositional invariance module, and (4) binary-operator invariance module. Intuitively, they check for 4 general types of invariances across a sequence of concepts J1, . . . , Jk (e.g. a row Ji = [Ji,1, Ji,2, Ji,3] for the RPM task). Such invariances apply not just to the RPM task, but could be applied to other RPM-like tasks, e.g. based on different prior knowledge, different grid layouts, etc. Full computational details for our running example can be found in Appendix B.3. 1. Intra-invariance module extracts patterns where the set of values for some attribute within concept Ji remains invariant over all i. First, we de\ufb01ne J+ := J1 + \u00b7 \u00b7 \u00b7 + Jk and J\u2229:= J1 \u2229\u00b7 \u00b7 \u00b7 \u2229Jk; see Section 3.1.1. Intuitively, J+ and 4 \fJ\u2229are concepts that capture information about the entire sequence J1, . . . , Jk in two different ways. 
Next, we compute the common primary components of J+ and J\u2229that are contained in attribute concepts. Finally, we return the attributes associated to these common primary components: Pintra([J1 . . . Jk]) := \b attr \u2208L | \u2203I \u2208pd(J+) \u2229pd(J\u2229), I \u2286\u27e8Aattr\u27e9 \t . 2. Inter-invariance module extracts patterns arising from the set difference between pd(J\u2229) and pd(J+). Thereafter, we check for the invariance of these extracted patterns across multiple sequences. The extracted set of patterns is: Pinter([J1, . . . , Jk]) := ( (attr, I) \f \f \f \f \f I \u2286pd(J\u2229) \u2212pd(J+), attr \u2208L, I \u2286\u27e8Aattr\u27e9\u2200I \u2208I ) , where I is a set of concepts, and \u201c\u2212\u201d refers to set difference. We omit pd(J+) so that we do not overcount the patterns already extracted in the previous module. Informally, for each pair (attr, I), the concepts in I can be interpreted as those \u201cprimary\u201d concepts that correspond to at least one of J1, . . . , Jk, that do not correspond to all of J1, . . . , Jk, and that are contained in \u27e8Aattr\u27e9. 3. Compositional invariance module extracts patterns arising from invariant attribute values in the following new sequence of concepts: [J\u2032 1, . . . , J\u2032 k] = [f k\u22121(J1), f k\u22122(J2), . . . , f(Jk\u22121), Jk], where f is some given function. Intuitively, for such patterns, there are some attributes whose values are invariant in [f(Ji), Ji+1] for all i = 1, . . . , k \u22121. By checking the intersection of primary components of the concepts in the new sequence, the extracted set of patterns is given by: Pcomp([J1, . . . , Jk]) := ( (attr, f) \f \f \f \f \f \u2203I \u2208Tk i=1 pd(f k\u2212i(Ji)), attr \u2208L, I \u2286\u27e8Aattr\u27e9 ) . The given function used for the RPM task is fnext(\u00b7|\u2206), where \u2206represents the number of steps; see Section 3.3.1. 4. Binary-operator module extracts numerical patterns, based on a given real-valued function g on concepts, and a given set \u039b of binary operators. The extracted patterns are: Pbinary(Ji) := ( \u2298 \f \f \f \f \f \u2298= [\u22981, . . . , \u2298k\u22122], \u2298i \u2208\u039b, g(J1) \u22981 \u00b7 \u00b7 \u00b7 \u2298k\u22122 g(Jk\u22121) = g(Jk) ) . 3.3.4 Extracting row-wise patterns Given a concept matrix J = [J1,1, . . . , J3,2], how do we extract the patterns from its i-th row? We \ufb01rst begin by extracting the common position values among all 8 panels: comPos(J) := \b p \u2208Apos | \u2203I \u2208T J\u2208J pd(J), p \u2208I \t For each common position p \u2208comPos(J), we generate two new concept matrices \u00af J(p) and \u02c6 J(p), such that: \u2022 Each concept \u00af J(p) i,j in \u00af J(p) is generated by the unique generator in Ji,j that is divisible by p; \u2022 Each concept \u02c6 J(p) i,j in \u02c6 J(p) is generated by all generators in Ji,j that are not divisible by p. (Recall that generators of a concept are polynomials.) Informally, we are splitting each panel in the RPM image into 2 panels, one that contains only the entity in the common position p, and the other that contains all remaining entities not in position p. This step allows us to reason about rules that involve only a portion of the panels. Consequently, if comPos(J) = {p1, . . . , pk}, then we can extend the single concept matrix into a list of concept matrices [J, \u00af J(p1), \u02c6 J(p1), . . . , \u00af J(pk), \u02c6 J(pk)]. 
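The splitting step just described is easy to express in the frozenset representation sketched earlier: for a shared position variable p, the generators of each panel concept are separated according to whether they are divisible by x_p. The helper below is ours and purely illustrative.

```python
# Minimal sketch of the panel-splitting step, reusing the frozenset-based
# monomial representation.  For a common position variable p, a panel concept J
# splits into Jbar(p) (the entity at position p) and Jhat(p) (everything else).
def split_by_position(J, p):
    bar = {g for g in J if p in g}       # generators divisible by x_p
    hat = {g for g in J if p not in g}   # generators not divisible by x_p
    return bar, hat

J11 = {frozenset({"two", "left", "square", "black", "avg"}),
       frozenset({"two", "right", "triangle", "gray", "avg"})}
left_part, rest = split_by_position(J11, "left")
# left_part: the black square on the left; rest: the gray triangle on the right
```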
For each concept matrix \u02c7 J from the extended list, we consider its i-th row \u02c7 Ji = [ \u02c7 Ji,1, \u02c7 Ji,2, \u02c7 Ji,3] (left-to-right) and extract patterns from \u02c7 Ji via the 4 modules from Section 3.3.3. Let P(\u02c7 Ji) be the set of all such patterns, i.e., P(\u02c7 Ji) := Pintra(\u02c7 Ji) \u222aPinter(\u02c7 Ji) \u222aPcomp(\u02c7 Ji) \u222aPbinary(\u02c7 Ji). Finally, for row i = 1, 2, we de\ufb01ne P(all) i (J) := S \u02c7 J \b (K, \u02c7 J) | K \u2208P(\u02c7 Ji) \t , (1) where the union ranges over all concept matrices \u02c7 J in the extended list, i.e. \u02c7 J \u2208[J, \u00af J(p1), \u02c6 J(p1), . . . , \u00af J(pk), \u02c6 J(pk)]. Note that P(all) i (J) can be regarded as all the patterns extracted from the i-th row of the original concept matrix J. If instead J = [J1,1, . . . , J3,3] is a list containing 9 concepts, then we can de\ufb01ne P(all) 3 (J) analogously. Algorithm 1 Answer selection. Inputs: Concept matrix J = [J1,1 . . . J3,2], and associated answer set [Jans1, . . . , Jans8]. 1: Initialize comPattern = [0, . . . , 0]1\u00d78. 2: Compute P1,2(J) := P(all) 1 (J) \u2229P(all) 2 (J). // see (1) 3: for i from 1 to 8 do 4: J \u2190[J1,1, . . . , J3,2, Jansi] 5: Compute P(all) 3 (J). 6: comPattern[i] \u2190|P1,2(J) \u2229P(all) 3 (J)| 7: return answer index i = argmaxi\u2032 comPattern[i\u2032]. 3.4. Solving RPMs 3.4.1 Answer selection In Section 3.3.4, we described how row-wise patterns can be extracted using the 4 invariance modules. Thus, a natural approach for answer selection is to determine which answer option, when inserted in place of the missing panel, would maximize the number of patterns that are common to all three rows. Consequently, answer selection is reduced to a simple optimization problem; see Algorithm 1. 3.4.2 Answer generation Since our algebraic machine reasoning framework is able to extract common patterns that are meaningful to humans, hidden in the raw RPM images, it provides a new way to generate answers without needing a given answer set. This is similar to a gifted human who is able to solve the RPM task, by \ufb01rst recognizing the patterns in the \ufb01rst two rows, then inferring what the missing panel should be. Intuitively, we are applying \u201cinverse\u201d operations of the 4 invariance modules to generate the concept representing the missing panel; see Algorithm 2 for an overview. 5 \fBrie\ufb02y speaking, for a given RPM concept matrix J, we \ufb01rst compute the common patterns among the \ufb01rst two rows via P1,2(J) := P(all) 1 (J) \u2229P(all) 2 (J); see (1). Each element in P1,2(J) is a pair (K, \u02c7 J), where K is a common pattern (for rows 1 and 2) speci\ufb01c to one attribute, and \u02c7 J is the corresponding concept matrix. (This represents the dif\ufb01cult step of pattern discovery by a gifted human.) Then, we go through all common patterns to compute the attribute values for the missing 9th panel. (This represents a routine consistency check of the discovered patterns; see Appendix B.2 for full algorithmic details, and B.3 for an example.) In general, when integrating all the attribute values for J3,3 derived from the patterns in P1,2(J), it is possible that entities (i) have multiple possible values for a single attribute; or (ii) have missing attribute values. Case (i) occurs when there are multiple patterns extracted for a single attribute, while case (ii) occurs when there are no noncon\ufb02icting patterns for this attribute. 
For either case, we randomly select an attribute value from the possible values. Algorithm 2 Answer generation. Inputs: Concept matrix J = [J1,1 . . . J3,2]. 1: for (K, \u02c7 J) \u2208P(all) 1 (J) \u2229P(all) 2 (J) do // see (1) 2: if [ \u02c7 J3,1, \u02c7 J3,2] does not con\ufb02ict with pattern K then 3: Compute attribute value for \u02c7 J3,3 using pattern K. 4: Collect all the above attribute values for J3,3. 5: while \u2204unique value for some attribute of an entity do 6: Randomly choose one valid attribute value. 7: Generate ideal J3,3 \u2286R. 8: return J3,3 and the corresponding image. 4. Discussion Algebraic machine reasoning provides a fundamentally new paradigm for machine reasoning beyond numerical computation. Abstract notions in reasoning tasks are encoded very concretely as ideals, which are computable algebraic objects. We treat ideals as \u201cactual objects of study\u201d, and we do not require numerical values to be assigned to them. This means our framework is capable of reasoning on more qualitative or abstract notions that do not naturally have associated numerical values. Novel problem-solving, such as the discovery of new abstract patterns from observations, is realized concretely as computations on ideals (e.g. computing the primary decompositions of ideals). In particular, we are not solving a system of polynomial equations, in contrast to existing applications of algebra in AI (cf. Section 2). Variables (or primitive instances) are not assigned values. We do not evaluate polynomials at input values. Theory-wise, our proposed approach breaks new ground. We established a new connection between machine reasoning and commutative algebra, two areas that were completely unrelated previously. There is over a century\u2019s worth of very deep results in commutative algebra that have not been tapped. Could algebraic methods be the key to tackling the long-standing fundamental questions in machine reasoning? It was only much more recently in 2014 that L\u00b4 eon Bottou [6] suggested that humans should \u201cbuild reasoning capabilities from the ground up\u201d, and he speculated that the missing ingredient could be an algebraic approach. Why use ideals to represent concepts? Why not use sets? Why not use symbolic expressions, e.g. polynomials? Intuitively, we think of a concept as an \u201cumbrella term\u201d consisting of multiple (potentially in\ufb01nitely many) instances of the concept. Treating concepts as merely sets of instances is inadequate in capturing the expressiveness of human reasoning. A set-theoretic representation system with \ufb01nitely many \u201cprimitive sets\u201d can only have \ufb01nitely many possible sets in total. In contrast, we proved that we can construct in\ufb01nitely many concepts from only \ufb01nitely many primitive concepts (Theorem 3.1). This agrees with our intuition that humans are able to express in\ufb01nitely many concepts from only \ufb01nitely many primitive concepts. The main reason is that the \u201cricher\u201d algebraic structure of ideals allows for signi\ufb01cantly more operations on ideals, beyond set-theoretic operations. See Appendix A.4 for further discussion. Why is our algebraic method fundamentally different from logic-based methods, e.g. those based on logic programming? At the heart of logic-based reasoning is the idea that reasoning can be realized concretely as the resolution (or inverse resolution) of logical expressions. Inherent in this idea is the notion of satis\ufb01ability; cf. [28]. 
Intuitively, we have a logical expression, usually expressed in a canonical normal form, and we want to assign truth values (true or false) to literals in the logical expression, so that the entire expression is satis\ufb01ed (i.e. truth value is \u201ctrue\u201d); see Appendix C.1 for more discussion. In fact, much of the exciting progress in automated theorem proving [2,27,36,39,81,92] is based on logic-based reasoning. In contrast, algebraic machine reasoning builds upon computational algebra and computer algebra systems. At the heart of our algebraic approach is the idea that reasoning can be realized concretely as solving computational problems in algebra. Crucially, there is no notion of satis\ufb01ability. We do not assign truth values (or numerical values) to concepts in R = k[x1, . . . , xn]. In particular, although primitive concepts \u27e8x1\u27e9, . . . , \u27e8xn\u27e9in R correspond to the variables x1, . . . , xn, we do not assign values to primitive concepts. Instead, ideals are treated as the \u201cactual objects of study\u201d, and we reduce \u201csolving a reasoning task\u201d to \u201csolving non-numerical computational problems involving ideals\u201d. Moreoever, our framework can discover new patterns beyond the actual rules of the RPM task; see Section 5.2. In the RPM task, we have attribute concepts representing \u201cposition\u201d, \u201cnumber\u201d, \u201ctype\u201d, \u201csize\u201d, and \u201ccolor\u201d; these are concepts that categorize the primitive instances according 6 \fMethod Avg. Acc. Center 2\u00d72G 3\u00d73G O-IC O-IG L-R U-D 1 LSTM [83] 18.9 / 13.1 26.2 / 13.2 16.7 / 14.1 15.1 / 13.7 21.9 / 12.2 21.1 / 13.0 14.6 / 12.8 16.5 / 12.4 2 WReN [62] 23.8 / 34.0 29.4 / 58.4 26.8 / 38.9 23.5 / 37.7 22.5 / 38.8 21.5 / 22.6 21.9 / 21.6 21.4 / 19.7 3 ResNet [83] 40.3 / 53.4 44.7 / 52.8 29.3 / 41.9 27.9 / 44.3 46.2 / 63.2 35.8 / 53.1 51.2 / 58.8 47.4 / 60.2 4 ResNet+DRT [83] 40.4 / 59.6 46.5 / 58.1 28.8 / 46.5 27.3 / 50.4 46.0 / 69.1 34.2 / 60.1 50.1 / 65.8 49.8 / 67.1 5 LEN [88] 41.4 / 72.9 56.4 / 80.2 31.7 / 57.5 29.7 / 62.1 52.1 / 84.4 31.7 / 71.5 44.2 / 73.5 44.2 / 81.2 6 CoPINet [84] 46.1 / 91.4 54.4 / 95.1 36.8 / 77.5 31.9 / 78.9 52.2 / 98.5 42.8 / 91.4 51.9 / 99.1 52.5 / 99.7 7 DCNet [91] 49.4 / 93.6 57.8 / 97.8 34.1 / 81.7 35.5 / 86.7 57.0 / 99.0 42.9 / 91.5 58.5 / 99.8 60.0 / 99.8 8 NCD [89] 48.2 / 37.0 60.0 / 45.5 31.2 / 35.5 30.0 / 39.5 62.4 / 40.3 39.0 / 30.0 58.9 / 34.9 57.2 / 33.4 9 SRAN [26] 60.8 / 78.2 / 50.1 / 42.4 / 68.2 / 46.3 / 70.1 / 70.3 / 10 PrAE [85] 77.0 / 65.0 90.5 / 76.5 85.4 / 78.6 45.6 / 28.6 63.5 / 48.1 60.7 / 42.6 96.3 / 90.1 97.4 / 90.9 11 Our Method 93.2 / 92.9 99.5 / 98.8 89.6 / 91.9 89.7 / 93.1 99.6 / 98.2 74.7 / 70.1 99.7 / 99.2 99.5 / 99.1 Human [83] / 84.4 / 95.5 / 81.8 / 79.6 / 86.4 / 81.8 / 86.4 / 81.8 Table 1. Performance on I-RAVEN/RAVEN. We report mean accuracy, and the accuracies for all con\ufb01gurations: Center, 2x2Grid, 3x3Grid, Out-InCenter, Out-InGrid, Left-Right, and Up-Down. to their semantics, into what humans would call attributes. Intuitively, an attribute concept combines certain primitive concepts together in a manner that is \u201cmeaningful\u201d to the task. For example, \u27e8xwhite, xgray, xblack\u27e9is \u201cmore meaningful\u201d than \u27e8xwhite, xcircle, xlarge\u27e9as a \u201csimpler\u201d or \u201cgeneralized\u201d concept, since we would treat xwhite, xgray, xblack as instances of a single broader \u201ccolor\u201d concept. 
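As a toy illustration of how attribute concepts group primitive concepts, one can flatten an ideal to its set of primitive generators and test containment in an attribute concept. This deliberately drops the algebraic structure (products of ideals, primary decomposition) that the framework actually computes with, and the listed attribute values are illustrative rather than the full I-RAVEN vocabulary.

from typing import Optional, Set

ATTRIBUTE_CONCEPTS = {
    "color": {"x_white", "x_gray", "x_black"},
    "type": {"x_circle", "x_triangle", "x_square", "x_pentagon", "x_hexagon"},
    "size": {"x_small", "x_medium", "x_large"},
}

def covering_attribute(generators: Set[str]) -> Optional[str]:
    """Return the attribute concept whose generators contain the given concept, if any."""
    for name, attribute in ATTRIBUTE_CONCEPTS.items():
        if generators <= attribute:                # containment of generating sets
            return name
    return None

print(covering_attribute({"x_white", "x_gray", "x_black"}))    # 'color' -> a "meaningful" concept
print(covering_attribute({"x_white", "x_circle", "x_large"}))  # None    -> mixes attributes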
Notice that the primitive concepts correspond precisely to the prediction classes of our object detection models. Such prediction classes are already implicitly identi\ufb01ed by the available data. Consequently, our method is limited by what our perception modules can perceive. For other tasks, e.g. where text data is available, entity extraction methods can be used to identify primitive concepts. Note also that our method requires prior knowledge, since there is no training step for the reasoning module. This limitation can be mitigated if we replace user-de\ufb01ned functions on concepts with trainable functions optimized via deep learning. In general, the identi\ufb01cation of attribute concepts is taskspeci\ufb01c, and the resulting reasoning performance would depend heavily on these identi\ufb01ed attribute concepts. Effectively, our choice of attribute concepts would determine the inductive bias of our reasoning framework: As we decompose a concept J into \u201csimpler\u201d concepts (i.e. primary components in pd(J)), only those \u201csimpler\u201d concepts contained in attribute concepts are deemed \u201cmeaningful\u201d. Concretely, let J, J\u2032 \u228aR be concepts such that pd(J) = {J1, . . . , Jk} and pd(J\u2032) = {J\u2032 1, . . . , J\u2032 \u2113}, i.e. J, J\u2032 have minimal primary decompositions J = J1\u2229\u00b7 \u00b7 \u00b7\u2229Jk and J\u2032 = J\u2032 1\u2229\u00b7 \u00b7 \u00b7\u2229J\u2032 \u2113, respectively. We can examine their primary components and extract out those primary components (between the two primary decompositions) that are contained in some common attribute concept. For example, if A is an attribute concept of R such that J1 \u2286A and J\u2032 1 \u2286A, then J and J\u2032 share a \u201ccommon pattern\u201d, represented by the attribute concept A. 5. Experiment results To show the effectiveness of our framework, we conducted experiments on the RAVEN [83] and I-RAVEN datasets. In both datasets, RPMs are generated according to 7 con\ufb01gurations. We trained our perception modules on 4200 images from I-RAVEN [26] (600 from each con\ufb01guration), and used them to predict attribute values of entities. The average accuracy of our perception modules is 96.24%. For both datasets, we tested on 2000 instances for each con\ufb01guration. Overall, our reasoning framework is fast (7 hours for 14000 instances on a 16-core Gen11 Intel i7 CPU processor). See Appendix B for full experiment details. 5.1. Comparison with other baselines Table 1 compares the performance of our method with 10 other baseline methods. We use the accuracies on IRAVEN reported in [26,89] for methods 1-7, and the accuracies on RAVEN reported in [83,89] for methods 1-5. All the other accuracies are obtained from the original papers. As a reference, we also include the human performance on the RAVEN dataset (i.e. not I-RAVEN) as reported in [83]. 5.2. Ambiguous instances and new patterns Although our method outperforms all baselines, some instances have multiple answer options that are assigned equal top scores by our framework. Most of these cases occur due to the discovery of (i) \u201caccidental\u201d unintended rules (e.g. Fig. 3); or (ii) new patterns beyond the actual rules in the dataset (e.g. Fig. 4). Case (i) occurs because in the design of I-RAVEN, at most one rule is assigned to each attribute. Interestingly, case (ii) reveals that our framework is able to discover completely new patterns that are not originally designed as rules for I-RAVEN. In Fig. 
4, the new pattern discovered is arguably very natural to humans. 7 \fFigure 3. An example of an ambiguous RPM instance. The given answer is option g. For I-RAVEN, the type sequence (\u201ccircle\u201d, \u201chexagon\u201d, \u201cpentagon\u201d) in the \ufb01rst two rows follows a Progression rule with consecutively decreasing type indices, so g could be a correct answer. (Remaining attribute values are determined by other patterns.) However, our framework assigns equal top scores to both options d and g, as a result of another inter-invariance pattern for type (the type set {\u201ccircle\u201d, \u201chexagon\u201d, \u201cpentagon\u201d} is invariant across the rows). Thus, option d could also be correct. Figure 4. An example of an RPM instance with an unexpected new pattern. The given answer is option h. In each row, the number of entities in the \ufb01rst 2 panels sum up to the number of entities in the 3rd panel, so h could be correct. However, our framework assigns equal top scores to both options b and h, as a result of a new interinvariance pattern for number (informally, every panel has either 1 or 2 entities). Thus option b could also be correct. 5.3. Evaluation of answer generation Every RPM instance is assumed to have a single correct answer from the given answer set. However, there are multiple other possible images that are also acceptable as correct answers. For example, images modi\ufb01ed from the given correct answer, via random perturbations of those attributes that are not involved in any of the rules (e.g. entity angles in the I-RAVEN dataset), are also correct. All these distinct correct answers (images) can be encoded algebraically as the same concept, based on prior knowledge of which raw perceptual attributes are relevant for the RPM task. Hence, to evaluate the answer generation process proposed in Section 3.4.2, we will directly evaluate the generated concepts. Let J = \u27e8e1, . . . , ek\u27e9and J\u2032 = \u27e8e\u2032 1, . . . , e\u2032 \u2113\u27e9be concepts representing the ground truth answer and our generated answer, respectively. Here, each ei (or e\u2032 i) is a monomial of the form x(pos) i x(type) i x(color) i x(size) i , and represents an entity described by 4 attributes. Motivated by the well-known idea of Intersection over Union (IoU), we propose a new similarity measure between J and J\u2032. In order to de\ufb01ne analogous notions of \u201cintersection\u201d and \u201cunion\u201d, we \ufb01rst pair ei with e\u2032 j if x(pos) i = x\u2032 j (pos) (i.e. same \u201cposition\u201d values). This pairing is well-de\ufb01ned, since the \u201cposition\u201d values of the entities in any panel are uniquely determined. Hence we can group all entities in J and J\u2032 into 3 sets: S1 := {(ei, e\u2032 j) | ei \u2208J, e\u2032 j \u2208J\u2032, x(pos) i = x\u2032 j (pos)}; S2 := {ei \u2208J | \u2204e\u2032 j \u2208J\u2032 such that (ei, e\u2032 j) \u2208S1}; S3 := {e\u2032 j \u2208J\u2032 | \u2204ei \u2208J such that (ei, e\u2032 j) \u2208S1}. We can interpret S1 and S1 \u222aS2 \u222aS3 as analogous notions of the \u201cintersection\u201d and \u201cunion\u201d of J and J\u2032, respectively. Thus, we de\ufb01ne our similarity measure as follows: \u03d5(J, J\u2032) := P (ei,e\u2032 j)\u2208S1 \u03c6(ei, e\u2032 j) |S1| + |S2| + |S3| ; (2) \u03c6(ei, e\u2032 j) := 1 4 X a 1(x(a) i = x\u2032 j (a)); (3) where in (3), a ranges over the 4 attributes in {pos, type, color, size}. 
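A direct sketch of the similarity measure in Eqs. (2)-(3): each entity is represented as a dictionary over the four attributes (flattening the monomial x^(pos) x^(type) x^(color) x^(size)), entities are matched by their unique position value, and the score is the average attribute agreement over the union of matched and unmatched entities. The dictionary encoding is an assumption made for readability.

ATTRS = ("pos", "type", "color", "size")

def panel_similarity(panel_gt, panel_gen):
    """phi(J, J'): pair entities by their unique 'pos' value, then average attribute overlap."""
    by_pos_gt = {e["pos"]: e for e in panel_gt}
    by_pos_gen = {e["pos"]: e for e in panel_gen}

    matched = set(by_pos_gt) & set(by_pos_gen)                # S1, indexed by position
    union_size = len(set(by_pos_gt) | set(by_pos_gen))        # |S1| + |S2| + |S3|
    if union_size == 0:
        return 1.0                                            # two empty panels are trivially identical

    score = 0.0
    for pos in matched:
        e, e_prime = by_pos_gt[pos], by_pos_gen[pos]
        score += sum(e[a] == e_prime[a] for a in ATTRS) / len(ATTRS)   # phi(e_i, e'_j), Eq. (3)
    return score / union_size                                          # Eq. (2)

gt = [{"pos": 0, "type": "circle", "color": "white", "size": "small"}]
gen = [{"pos": 0, "type": "circle", "color": "gray", "size": "small"},
       {"pos": 3, "type": "square", "color": "black", "size": "large"}]
print(panel_similarity(gt, gen))   # (3/4) / 2 = 0.375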
Here, \u03c6(ei, e\u2032 j) is the similarity score between ei and e\u2032 j, measured by the proportion of common variables. The overall average similarity score of the generated answers is 67.7%. Note that within a panel, some attribute values such as \u201csize\u201d, \u201ccolor\u201d and \u201cposition\u201d, may be totally random for 2x2Grid, 3x3Grid, Out-InGrid (e.g. as shown in Fig. 3). Hence, achieving high similarity scores for such cases would inherently require task-speci\ufb01c optimization and knowledge of how the data is generated. We assume neither. This could explain why our overall similarity score is lower than our answer selection accuracy. For examples of generated images, see Appendix B.5. 6." + }, + { + "url": "http://arxiv.org/abs/2303.02001v2", + "title": "Zero-shot Object Counting", + "abstract": "Class-agnostic object counting aims to count object instances of an arbitrary\nclass at test time. It is challenging but also enables many potential\napplications. Current methods require human-annotated exemplars as inputs which\nare often unavailable for novel categories, especially for autonomous systems.\nThus, we propose zero-shot object counting (ZSC), a new setting where only the\nclass name is available during test time. Such a counting system does not\nrequire human annotators in the loop and can operate automatically. Starting\nfrom a class name, we propose a method that can accurately identify the optimal\npatches which can then be used as counting exemplars. Specifically, we first\nconstruct a class prototype to select the patches that are likely to contain\nthe objects of interest, namely class-relevant patches. Furthermore, we\nintroduce a model that can quantitatively measure how suitable an arbitrary\npatch is as a counting exemplar. By applying this model to all the candidate\npatches, we can select the most suitable patches as exemplars for counting.\nExperimental results on a recent class-agnostic counting dataset, FSC-147,\nvalidate the effectiveness of our method. Code is available at\nhttps://github.com/cvlab-stonybrook/zero-shot-counting", + "authors": "Jingyi Xu, Hieu Le, Vu Nguyen, Viresh Ranjan, Dimitris Samaras", + "published": "2023-03-03", + "updated": "2023-04-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Object counting aims to infer the number of objects in an image. Most of the existing methods focus on counting objects from specialized categories such as human crowds [37], cars [29], animals [4], and cells [46]. These methods count only a single category at a time. Recently, classagnostic counting [28, 34, 38] has been proposed to count objects of arbitrary categories. Several human-annotated bounding boxes of objects are required to specify the objects of interest (see Figure 1a). However, having humans in the loop is not practical for many real-world applications, such as fully automated wildlife monitoring systems or vi*Work done prior to joining Amazon Be happy , Douya (a) Few-shot Counting (b) Zero-Shot Counting Figure 1. Our proposed task of zero-shot object counting (ZSC). Traditional few-shot counting methods require a few exemplars of the object category (a). We propose zero-shot counting where the counter only needs the class name to count the number of object instances. (b). Few-shot counting methods require human annotators at test time while zero-shot counters can be fully automatic. sual anomaly detection systems. 
A more practical setting, exemplar-free class-agnostic counting, has been proposed recently by Ranjan et al. [33]. They introduce RepRPN, which first identifies the objects that occur most frequently in the image, and then uses them as exemplars for object counting. Even though RepRPN does not require any annotated boxes at test time, the method simply counts objects from the class with the highest number of instances. Thus, it can not be used for counting a specific class of interest. The method is only suitable for counting images with a single dominant object class, which limits the potential applicability. Our goal is to build an exemplar-free object counter where we can specify what to count. To this end, we introduce a new counting task in which the user only needs to provide the name of the class for counting rather than the exemplars (see Figure 1b). In this way, the counting model can not only operate in an automatic manner but also allow the user to define what to count by simply providing the class name. Note that the class to count during test time can be arbitrary. For cases where the test class is completely unseen to the trained model, the counter needs to adapt to the unseen class without any annotated data. Hence, we arXiv:2303.02001v2 [cs.CV] 24 Apr 2023 \fname this setting zero-shot object counting (ZSC), inspired by previous zero-shot learning approaches [6,57]. To count without any annotated exemplars, our idea is to identify a few patches in the input image containing the target object that can be used as counting exemplars. Here the challenges are twofold: 1) how to localize patches that contain the object of interest based on the provided class name, and 2) how to select good exemplars for counting. Ideally, good object exemplars are visually representative for most instances in the image, which can benefit the object counter. In addition, we want to avoid selecting patches that contain irrelevant objects or backgrounds, which likely lead to incorrect object counts. To this end, we propose a two-step method that first localizes the class-relevant patches which contain the objects of interest based on the given class name, and then selects among these patches the optimal exemplars for counting. We use these selected exemplars, together with a pre-trained exemplar-based counting model, to achieve exemplar-free object counting. In particular, to localize the patches containing the objects of interest, we first construct a class prototype in a pretrained embedding space based on the given class name. To construct the class prototype, we train a conditional variational autoencoder (VAE) to generate features for an arbitrary class conditioned on its semantic embedding. The class prototype is computed by taking the average of the generated features. We then select the patches whose embeddings are the k-nearest neighbors of the class prototype as the class-relevant patches. After obtaining the class-relevant patches, we further select among them the optimal patches to be used as counting exemplars. Here we observe that the feature maps obtained using good exemplars and bad exemplars often exhibit distinguishable differences. An example of the feature maps obtained with different exemplars is shown in Figure 2. The feature map from a good exemplar typically exhibits some repetitive patterns (e.g., the dots on the feature map) that center around the object areas while the patterns from a bad exemplar are more irregular and occur randomly across the image. 
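The two-stage exemplar selection described above can be summarized in the following sketch. Every callable is a stand-in for a learned or pre-trained component (random multi-scale patch sampling, the pre-trained patch embedder, the CVAE-generated class prototype, and the error predictor wrapped around the pre-trained counter); the defaults m=450, k=10, s=3 follow the implementation details reported later, while the function names are illustrative rather than the released interface.

from typing import Callable, List
import numpy as np

def select_exemplars(
    image: np.ndarray,
    class_name: str,
    sample_patches: Callable[[np.ndarray, int], List[np.ndarray]],   # random multi-scale crops
    embed_patch: Callable[[np.ndarray], np.ndarray],                  # pre-trained (ImageNet) features
    class_prototype: Callable[[str], np.ndarray],                     # CVAE-generated prototype
    predict_count_error: Callable[[np.ndarray, np.ndarray], float],   # error predictor on counter features
    m: int = 450, k: int = 10, s: int = 3,
) -> List[np.ndarray]:
    patches = sample_patches(image, m)
    proto = class_prototype(class_name)

    # Stage 1: keep the k patches whose embeddings are nearest to the class prototype.
    dists = [np.linalg.norm(embed_patch(p) - proto) for p in patches]
    class_relevant = [patches[i] for i in np.argsort(dists)[:k]]

    # Stage 2: keep the s patches with the smallest predicted counting error.
    errors = [predict_count_error(image, p) for p in class_relevant]
    return [class_relevant[i] for i in np.argsort(errors)[:s]]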
Based on this observation, we train a model to measure the goodness of an input patch based on its corresponding feature maps. Specifically, given an arbitrary patch and a pre-trained exemplar-based object counter, we train this model to predict the counting error of the counter when using the patch as the exemplar. Here the counting error can indicate the goodness of the exemplar. After this error predictor is trained, we use it to select those patches with the smallest predicted errors as the final exemplars for counting. Experiments on the FSC-147 dataset show that our method outperforms the previous exemplar-free counting method [33] by a large margin. We also provide analyses to show that patches selected by our method can be Pre-trained Counter Bad Exemplar Good Exemplar Query Image Figure 2. Feature maps obtained using different exemplars given a pre-trained exemplar-based counting model. The feature maps obtained using good exemplars typically exhibit some repetitive patterns while the patterns from bad exemplars are more irregular. used in other exemplar-based counting methods to achieve exemplar-free counting. In short, our main contributions can be summarized as follows: \u2022 We introduce the task of zero-shot object counting that counts the number of instances of a specific class in the input image, given only the class name and without relying on any human-annotated exemplars. \u2022 We propose a simple yet effective patch selection method that can accurately localize the optimal patches across the query image as exemplars for zeroshot object counting. \u2022 We verify the effectiveness of our method on the FSC147 dataset, through extensive ablation studies and visualization results. 2. Related Work 2.1. Class-specific Object Counting Class-specific object counting focuses on counting predefined categories, such as humans [1,15,24,26,37,39,40, 42,47,52,53,55,56], animals [4], cells [46], or cars [14,29]. Generally, existing methods can be categorized into two groups: detection-based methods [8,14,18] and regressionbased methods [7, 10, 11, 27, 41, 53, 56]. Detection-based methods apply an object detector on the image and count the number of objects based on the detected boxes. Regressionbased methods predict a density map for each input image, and the final result is obtained by summing up the pixel values. Both types of methods require abundant training data to learn a good model. Class-specific counters can perform well on trained categories. However, they can not be used to count objects of arbitrary categories at test time. 2.2. Class-agnostic Object Counting Class-agnostic object counting aims to count arbitrary categories given only a few exemplars [3,13,25,28,31,34, 38,50,51]. GMN [28] uses a shared embedding module to \f\u201cgrape\u201d Prototype G R 0.1 0.4 0.2 0.4 Class-relevant Patches Pre-trained Feature Space Selected Exemplar Generator Error Predictor Query Image Pre-trained Counter F K-nearest neighbors Estimated Counting Errors Figure 3. Overview of the proposed method. We first use a generative model to obtain a class prototype for the given class (e.g. grape) in a pre-trained feature space. Then given an input query image, we randomly sample a number of patches of various sizes and extract the corresponding feature embedding for each patch. We select the patches whose embeddings are the nearest neighbors of the class prototype as class-relevant patches. 
Then for each of the selected class-relevant patches, we use a pre-trained exemplar-based counting model to obtain the intermediate feature maps. Our proposed error predictor then takes the feature maps as input and predicts the counting error (here we use normalized counting errors). We select the patches with the smallest predicted errors as the final exemplar patches and use them for counting. extract feature maps for both query images and exemplars, which are then concatenated and fed into a matching module to regress the object count. FamNet [34] adopts a similar way to do correlation matching and further applies testtime adaptation. These methods require human-annotated exemplars as inputs. Recently, Ranjan et al. have proposed RepRPN [33], which achieves exemplar-free counting by identifying exemplars from the most frequent objects via a Region Proposal Network (RPN)-based model. However, the class of interest can not be explicitly specified for the RepRPN. In comparison, our proposed method can count instances of a specific class given only the class name. 2.3. Zero-shot Image Classification Zero-shot classification aims to classify unseen categories for which data is not available during training [5, 9, 12, 16, 19, 21, 23, 35, 36]. Semantic descriptors are mostly leveraged as a bridge to enable the knowledge transfer between seen and unseen classes. Earlier zero-shot learning (ZSL) works relate the semantic descriptors with visual features in an embedding space and recognize unseen samples by searching their nearest class-level semantic descriptor in this embedding space [17, 36, 43, 54]. Recently, generative models [20, 22, 48, 49] have been widely employed to synthesize unseen class data to facilitate ZSL [30,44,45]. Xian et al. [44] use a conditional Wasserstein Generative Adversarial Network (GAN) [2] to generate unseen features which can then be used to train a discriminative classifier for ZSL. In our method, we also train a generative model conditioned on class-specific semantic embedding. Instead of using this generative model to hallucinate data, we use it to compute a prototype for each class. This class prototype is then used to select patches that contain objects of interest. 3. Method Figure 3 summarizes our proposed method. Given an input query image and a class label, we first use a generative model to construct a class prototype for the given class in a pre-trained feature space. We then randomly sample a number of patches of various sizes and extract the feature embedding for each patch. The class-relevant patches are those patches whose embeddings are the nearest neighbors of the class prototype in the embedding space. We further use an error predictor to select the patches with the smallest predicted errors as the final exemplars for counting. We use the selected exemplars in an exemplar-based object counter to infer the object counts. For the rest of the paper, we denote this exemplar-based counter as the \u201cbase counting model\u201d. We will first describe how we train this base counting model and then present the details of our patch selection method. 3.1. Training Base Counting Model We train our base counting model using abundant training images with annotations. Similar to previous works [34,38], the base counting model uses the input image and the exemplars to obtain a density map for object counting. The model consists of a feature extractor F and a counter C. 
Given a query image I and an exemplar B of an arbitrary class c, we input I and B to the feature extractor to obtain the corresponding output, denoted as F(I) and F(B) re\fspectively. F(I) is a feature map of size d \u2217hI \u2217wI and F(B) is a feature map of size d \u2217hB \u2217wB. We further perform global average pooling on F(B) to form a feature vector b of d dimensions. After feature extraction, we obtain the similarity map S by correlating the exemplar feature vector b with the image feature map F(I). Specifically, if wij = Fij(I) is the channel feature at spatial position (i, j), S can be computed by: \\labe l { eq :simi} S_{ij}(I, B) = w_{ij}^T b. (1) In the case where n exemplars are given, we use Eq. 1 to calculate n similarity maps, and the final similarity map is the average of these n similarity maps. We then concatenate the image feature map F(I) with the similarity map S, and input them into the counter C to predict a density map D. The final predicted count N is obtained by summing over the predicted density map D: \\ lab el {eq:final_count} {N} = \\sum _{i,j}D_{(i,j)}, \\vspace {-2mm} (2) where D(i,j) denotes the density value for pixel (i, j). The supervision signal for training the counting model is the L2 loss between the predicted density map and the ground truth density map: \\labe l {eq: co u nting_l oss} L_{\\textnormal {count}} = \\|D(I, B) D^{*}(I)\\|_2^2, (3) where D\u2217denotes the ground truth density map. 3.2. Zero-shot Object Counting In this section, we describe how we count objects of any unseen category given only the class name without access to any exemplar. Our strategy is to select a few patches in the image that can be used as exemplars for the base counting model. These patches are selected such that: 1) they contain the objects that we are counting and 2) they benefit the counting model, i.e., lead to small counting errors. 3.2.1 Selecting Class-relevant Patches To select patches that contain the objects of interest, we first generate a class prototype based on the given class name using a conditional VAE model. Then we randomly sample a number of patches across the query image and select the class-relevant patches based on the generated prototype. Class prototype generation. Inspired by previous zeroshot learning approaches [44, 45], we train a conditional VAE model to generate features for an arbitrary class based on the semantic embedding of the class. The semantic embedding is obtained from a pre-trained text-vision model [32] given the corresponding class name. Specifically, we train the VAE model to reconstruct features in a pre-trained ImageNet feature space. The VAE is composed of an Encoder E, which maps a visual feature x to a latent code z, and a decoder G which reconstructs x from z. Both E and G are conditioned on the semantic embedding a .The loss function for training this VAE for an input feature x can be defined as: \\ lab e l {eq:cvae} \\begin {aligned} L_{V}(x) = \\textnormal {KL} \\left ( q(z|x,a)||p(z|a) \\right ) \\\\ \\textnormal {E}_{q(z|x, a)}[\\textnormal {log }p(x|z,a)]. \\end {aligned} z (4) The first term is the Kullback-Leibler divergence between the VAE posterior q(z|x, a) and a prior distribution p(z|a). The second term is the decoder\u2019s reconstruction error. q(z|x, a) is modeled as E(x, a) and p(x|z, a) is equal to G(z, a). The prior distribution is assumed to be N(0, I) for all classes. We can use the trained VAE to generate the class prototype for an arbitrary target class for counting. 
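Before continuing with the prototype construction, here is a minimal numpy sketch of the base counting model's forward pass from Section 3.1 (Eqs. (1)-(2)): the pooled exemplar feature is correlated with the image feature map, similarity maps are averaged over exemplars, and a learned counter head (a stub here) maps the concatenation to a density map whose sum gives the count. The counter callable is a placeholder for the learned module C.

import numpy as np

def predict_count(feat_image, feat_exemplars, counter):
    """feat_image: (d, h, w); feat_exemplars: list of (d, hb, wb) exemplar feature maps."""
    sim_maps = []
    for feat_b in feat_exemplars:
        b = feat_b.mean(axis=(1, 2))                               # global average pooling -> d-dim vector b
        sim_maps.append(np.einsum("dhw,d->hw", feat_image, b))     # S_ij = w_ij^T b, Eq. (1)
    S = np.mean(sim_maps, axis=0)                                  # average over the n exemplars

    x = np.concatenate([feat_image, S[None]], axis=0)              # concat F(I) and S along channels
    density = counter(x)                                           # learned head regressing the density map D
    return density.sum(), density                                  # N = sum_ij D_(i,j), Eq. (2)

def counting_loss(density, density_gt):
    """L2 loss between predicted and ground-truth density maps, as in Eq. (3)."""
    return float(((density - density_gt) ** 2).sum())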
Specifically, given the target class name y, we first generate a set of features by inputting the respective semantic vector ay and a noise vector z to the decoder G: \\ ma thb b {G}^ y = \\{ \\hat {x} | \\hat {x} = G(z, y), z \\sim \\mathcal {N}(0, I)\\}. z (5) The class prototype py is computed by taking the mean of all the features generated by VAE: \\ l abel { eq:p r ototype} \\textnormal {p}^y = \\frac {1}{|\\mathbb {G}^y|} {\\sum }_{\\hat {x} \\in \\mathbb {G}^y} {\\hat {x}} (6) Class-relevant patch selection. The generated class prototype can be considered as a class center representing the distribution of features of the corresponding class in the embedding space. Using the class prototype, we can select the class-relevant patches across the query image. Specifically, we first randomly sample M patches of various sizes {b1, b2, ..., bm} across the query image and extract their corresponding ImageNet features {f1, f2, ..., fm}. To select the class-relevant patches, we calculate the L2 distance between the class prototype and the patch embedding, namely di = \u2225fi \u2212py\u22252. Then we select the patches whose embeddings are the k-nearest neighbors of the class prototype as the class-relevant patches. Since the ImageNet feature space is highly discriminative, i.e., features close to each other typically belong to the same class, the selected patches are likely to contain the objects of the target class. 3.2.2 Selecting Exemplars for Counting Given a set of class-relevant patches and a pre-trained exemplar-based object counter, we aim to select a few exemplars from these patches that are optimal for counting. To do so, we introduce an error prediction network that predicts the counting error of an arbitrary patch when the patch is used as the exemplar. The counting error is calculated from the pre-trained counting model. Specifically, to train this error predictor, given a query image \u00af I and an arbitrary patch \f\u00af B cropped from \u00af I, we first use the base counting model to get the image feature map F(\u00af I), similarity map \u00af S, and the final predicted density map \u00af D. The counting error of the base counting model can be written as: \\ l abe l {eq:e r r o r} \\epsilon = | \\sum _{i,j} \\bar {D}_{(i,j)} \\bar {N^*}|, \\vspace {-2mm} (7) where \u00af N \u2217denotes the ground truth object count in image \u00af I. \u03f5 can be used to measure the goodness of \u00af B as an exemplar for \u00af I, i.e., a small \u03f5 indicates that \u00af B is a suitable exemplar for counting and vice versa. The error predictor R is trained to regress the counting error produced by the base counting model. The input of R is the channel-wise concatenation of the image feature map F(\u00af I) and the similarity map \u00af S. The training objective is the minimization of the mean squared error between the output of the predictor R(F(\u00af I), \u00af S) and the actual counting error produced by the base counting model \u03f5. After the error predictor is trained, we can use it to select the optimal patches for counting. The candidates for selection here are the class-relevant patches selected by the class prototype in the previous step. For each candidate patch, we use the trained error predictor to infer the counting error when it is being used as the exemplar. The final selected patches for counting are the patches that yield the top-s smallest counting errors. 
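A short sketch of the class-prototype construction (Eqs. (5)-(6)) and of the regression target used to train the error predictor (Eq. (7)). Here decoder and clip_text_embed are stand-ins for the trained CVAE decoder G(z, a) and the CLIP text embedding of the class name; the latent dimension of 512 follows the implementation details reported later, while the number of sampled features is an illustrative choice.

import numpy as np

def class_prototype(class_name, clip_text_embed, decoder, n_samples=100, latent_dim=512, seed=0):
    rng = np.random.default_rng(seed)
    a = clip_text_embed(class_name)                        # semantic embedding a^y of the class name
    z = rng.standard_normal((n_samples, latent_dim))       # z ~ N(0, I)
    generated = np.stack([decoder(zi, a) for zi in z])     # G^y = { x_hat = G(z, a^y) }, Eq. (5)
    return generated.mean(axis=0)                          # p^y: mean of generated features, Eq. (6)

def counting_error_target(pred_density, gt_count):
    """epsilon = | sum_ij D_(i,j) - N* |, the target regressed by the error predictor, Eq. (7)."""
    return abs(float(pred_density.sum()) - float(gt_count))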
3.2.3 Using the Selected Patches as Exemplars Using the error predictor, we predict the error for each candidate patch and select the patches that lead to the smallest counting errors. The selected patches can then be used as exemplars for the base counting model to get the density map and the final count. We also conduct experiments to show that these selected patches can serve as exemplars for other exemplar-based counting models to achieve exemplarfree class-agnostic counting. 4. Experiments 4.1. Implementation Details Network architecture For the base counting model, we use ResNet-50 as the backbone of the feature extractor, initialized with the weights of a pre-trained ImageNet model. The backbone outputs feature maps of 1024 channels. For each query image, the number of channels is reduced to 256 using an 1 \u00d7 1 convolution. For each exemplar, the feature maps are first processed with global average pooling and then linearly mapped to obtain a 256-d feature vector. The counter consists of 5 convolutional and bilinear upsampling layers to regress a density map of the same size as the query image. For the feature generation model, both the encoder and the decoder are two-layer fully-connected (FC) networks with 4096 hidden units. LeakyReLU and ReLU are the non-linear activation functions in the hidden and output layers, respectively. The dimensions of the latent space and the semantic embeddings are both set to be 512. For the error predictor, 5 convolutional and bilinear upsampling layers are followed by a linear layer to output the counting error. Dataset We use the FSC-147 dataset [34] to train the base counting model and the error predictor. FSC-147 is the first large-scale dataset for class-agnostic counting. It includes 6135 images from 147 categories varying from animals, kitchen utensils, to vehicles. The categories in the training, validation, and test sets do not overlap. The feature generator is trained on the MS-COCO detection dataset. Note that the previous exemplar-free method [33] also uses MS-COCO to pre-train their counter. Training details Both the base counting model and the error predictor are trained using the AdamW optimizer with a fixed learning rate of 10\u22125. The base counting model is trained for 300 epochs with a batch size of 8. We resize the input query image to a fixed height of 384, and the width is adjusted accordingly to preserve the aspect ratio of the original image. Exemplars are resized to 128 \u00d7 128 before being input into the feature extractor. The feature generation model is trained using the Adam optimizer and the learning rate is set to be 10\u22124. The semantic embeddings are extracted from CLIP [32]. To select the class-relevant patches, we randomly sample 450 boxes of various sizes across the input query image and select 10 patches whose embeddings are the 10-nearest neighbors of the class prototype. The final selected patches are those that yield the top-3 smallest counting errors predicted by the error predictor. 4.2. Evaluation Metrics We use Mean Average Error (MAE) and Root Mean Squared Error (RMSE) to measure the performance of different object counters. Besides, we follow [31] to report the Normalized Relative Error (NAE) and Squared Relative Error (SRE). 
In particular, MAE = 1 n Pn i=1 |yi \u2212\u02c6 yi|; RMSE = q 1 n Pn i=1(yi \u2212\u02c6 yi)2; NAE = 1 n Pn i=1 |yi\u2212\u02c6 yi| yi ; SRE = q 1 n Pn i=1 (yi\u2212\u02c6 yi)2 yi where n is the number of test images, and yi and \u02c6 yi are the ground truth and the predicted number of objects for image i respectively. 4.3. Comparing Methods We compare our method with the previous works on class-agnostic counting. RepRPN-Counter [33] is the only previous class-agnostic counting method that does not require human-annotated exemplars as input. In order to make other exemplar based class-agnostic methods including GMN (General Matching Network [28]), FamNet (Fewshot adaptation and matching Network [34]) and BMNet \fMethod Exemplars Val Set Test Set MAE RMSE NAE SRE MAE RMSE NAE SRE GMN [28] GT 29.66 89.81 26.52 124.57 RPN 40.96 108.47 39.72 142.81 FamNet+ [34] GT 23.75 69.07 0.52 4.25 22.08 99.54 0.44 6.45 RPN 42.85 121.59 0.75 6.94 42.70 146.08 0.74 7.14 BMNet [38] GT 19.06 67.95 0.26 4.39 16.71 103.31 0.26 3.32 RPN 37.26 108.54 0.42 5.43 37.22 143.13 0.41 5.31 BMNet+ [38] GT 15.74 58.53 0.27 6.57 14.62 91.83 0.25 2.74 RPN 35.15 106.07 0.41 5.28 34.52 132.64 0.39 5.26 RepRPN-Counter [33] 30.40 98.73 27.45 129.69 Ours (Base) GT 18.55 61.12 0.30 3.18 20.68 109.14 0.36 7.63 RPN 32.19 99.21 0.38 4.80 29.25 130.65 0.35 4.35 Patch-Selection 26.93 88.63 0.36 4.26 22.09 115.17 0.34 3.74 Table 1. Quantitative comparisons on the FSC-147 dataset. \u201cGT\u201d denotes using human-annotated boxes as exemplars. \u201cRPN\u201d denotes using the top-3 RPN proposals with the highest objectness scores as exemplars. \u201cPatch-Selection\u201d denotes using our selected patches as exemplars. (Bilinear Matching Network [38]) work in the exemplarfree setup, we replace the human-provided exemplars with the exemplars generated by a pre-trained object detector. Specifically, we use the RPN of Faster RCNN pre-trained on MS-COCO dataset and select the top-3 proposals with the highest objectness score as the exemplars. We also include the performance of these methods using humanannotated exemplars for a complete comparison. 4.4. Results Quantitative results. As shown in Table 1, our proposed method outperforms the previous exemplar-free counting method [33] by a large margin, resulting in a reduction of 10.10 w.r.t. the validation RMSE and 14.52 w.r.t. the test RMSE. We also notice that the performance of all exemplarbased counting methods drops significantly when replacing human-annotated exemplars with RPN generated proposals. The state-of-the-art exemplar-based method BMNet+ [38], for example, shows an 19.90 error increase w.r.t. the test MAE and a 40.81 increase w.r.t. the test RMSE. In comparison, the performance gap is much smaller when using our selected patches as exemplars, as reflected by a 1.41 increase w.r.t. the test MAE and a 6.03 increase w.r.t. the test RMSE. Noticeably, the NAE and the SRE on the test set are even reduced when using our selected patches compared with the human-annotated exemplars. Qualitative analysis. In Figure 4, we present a few input images, the image patches selected by our method, and the corresponding density maps. Our method effectively identifies the patches that are suitable for object counting. The density maps produced by our selected patches are meaningful and close to the density maps produced by humanannotated patches. 
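For reference, the four metrics of Section 4.2 restated in code (y holds the ground-truth counts, y_hat the predictions); this simply mirrors the formulas above and assumes all ground-truth counts are positive, as in FSC-147.

import numpy as np

def counting_metrics(y, y_hat):
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    err = y_hat - y
    mae = np.abs(err).mean()                       # mean absolute error
    rmse = np.sqrt((err ** 2).mean())              # root mean squared error
    nae = (np.abs(err) / y).mean()                 # normalized relative error
    sre = np.sqrt(((err ** 2) / y).mean())         # squared relative error
    return {"MAE": mae, "RMSE": rmse, "NAE": nae, "SRE": sre}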
The counting model with random image patches as exemplars, in comparison, fails to output meaningful density maps and infers incorrect object counts. 5. Analyses 5.1. Ablation Studies Our proposed patch selection method consists of two steps: the selection of class-relevant patches via a generated class prototype and the selection of the optimal patches via an error predictor. We analyze the contribution of each step quantitatively and qualitatively. Quantitative results are in Table 2. We first evaluate the performance of our baseline, i.e. using 3 randomly sampled patches as exemplars without any selection step. As shown in Table 2, using the class prototype to select class-relevant patches reduces the error rate by 7.19 and 6.07 on the validation and test set of MAE, respectively. Applying the error predictor can improve the baseline performance by 7.22 on the validation MAE and 7.57 on the test MAE. Finally, applying the two components together further boosts performance, achieving 26.93 on the validation MAE and 22.09 on the test MAE. We provide further qualitative analysis by visualizing the selected patches. As shown in Figure 5, for each input query image, we show 10 class-relevant patches selected using our generated prototype, ranked by their predicted counting error (from low to high). All the 10 selected class-relevant patches exhibit some class specific features. However, not all these patches are suitable to be used as counting exemplars, i.e., some patches only contain parts of the object, and some patches contain some background. By further applying our proposed error predictor, we can identify the most suitable patches with the smallest predicted counting errors. 5.2. Generalization to Exemplar-based Methods Our proposed method can be considered as a general patch selection method that is applicable to other visual counters to achieve exemplar-free counting. To verify that, we use our selected patches as the exemplars for three \f12 15 15 9 9 9 83 10 9 8 8 36 8 8 26 22 Ground Truth Random Ours Figure 4. Qualitative results on the FSC-147 dataset. We show the counting exemplars and the corresponding density maps of ground truth boxes, randomly selected patches, and our selected patches respectively. Predicted counting results are shown at the top-right corner. Our method accurately identifies suitable patches for counting and the predicted density maps are close to the ground truth density maps. Predicted Counting Error Low High chicken wing peach red bean Figure 5. Qualitative ablation analysis. All the 10 selected class-relevant patches exhibit some class-specific attributes. They are ranked by the predicted counting errors and the final selected patches with the smallest errors are framed in green. Prototype Predictor Val Set Test Set MAE RMSE NAE SRE MAE RMSE NAE SRE 35.20 106.70 0.61 6.68 31.37 134.98 0.52 5.92 ! 28.01 88.29 0.39 4.66 25.30 113.82 0.40 4.88 ! 27.98 88.62 0.43 4.59 23.80 128.36 0.40 4.43 ! ! 26.93 88.63 0.36 4.26 22.09 115.17 0.34 3.74 Table 2. Ablation study on each component\u2019s contribution to the final results. We show the effectiveness of the two steps of our framework: selecting class-relevant patches via a generated class prototype and selecting optimal patches via an error predictor. other different exemplar-based methods: FamNet [34], BMNet and BMNet+ [38]. Figure 6 (a) shows the results on the FSC-147 validation set. The baseline uses three randomly sampled patches as the exemplars for the pre-trained exemplar-based counter. 
By using the generated class prototype to select class-relevant patches, the error rate is reduced by 5.18, 8.59 and 5.60 on FamNet, BMNet and BMNet+, respectively. In addition, as the error predictor is additionally adopted, the error rate is further reduced by 1.76, 1.00 and 1.08 on FamNet, BMNet and BMNet+, respectively. Similarly, Figure 6 (b) shows the results on the FSC\f147 test set. Our method achieves consistent performance improvements for all three methods. FamNet BMNet BMNet+ Ours 0 10 20 30 40 50 MAE 46.13 38.32 36.04 35.2 40.94 29.73 30.44 28.01 39.18 28.73 29.7 26.93 FSC147 Val Set Base Base+Proto Base+Proto+Err (a) FamNet BMNet BMNet+ Ours 0 10 20 30 40 50 MAE 46.21 34.52 29.89 31.37 39.01 27.05 23.92 25.3 37.27 24.67 22.46 22.09 FSC147 Test Set Base Base+Proto Base+Proto+Err (b) Figure 6. Using our selected patches as exemplars for other exemplar-based class-agnostic counting methods (FamNet, BMNet and BMNet+) on FSC-147 dataset. Blue bars are the MAEs of using three randomly sampled patches. Orange bars are the MAEs of using the class prototype to select class-relevant patches as exemplars. Green bars are the MAEs of using the class prototype and error predictor to select optimal patches as exemplars. 5.3. Multi-class Object Counting Our method can count instances of a specific class given the class name, which is particularly useful when there are multiple classes in the same image. In this section, we show some visualization results in this multi-class scenario. As seen in Figure 7, our method selects patches according to the given class name and counts instances from that specific class in the input image. Correspondingly, the heatmap highlights the image regions that are most relevant to the specified class. Here the heatmaps are obtained by correlating the exemplar feature vector with the image feature map in a pre-trained ImageNet feature space. Note that we mask out the image region where the activation value in the heatmap is below a threshold when counting. We also show the patches selected using another exemplar-free counting method, RepRPN [33]. The class of RepRPN selected patches can not be explicitly specified. It simply selects patches from the class with the highest number of instances in the image according to the repetition score. Broccoli: 19 Carrot: 25 RepRPN Pred: 23 \u201cBroccoli\u201d Pred: 10 \u201cCarrot\u201d Pred: 27 (a) RepRPN Pred: 34 \u201cStrawberry\u201d Pred: 40 \u201cBanana\u201d Pred: 32 Banana: 31 Strawberry: 38 (b) RepRPN Pred: 55 \u201cGreen Bean\u201d Pred: 29 \u201cTomato\u201d Pred: 59 Green Bean: 32 Tomato: 62 (c) Figure 7. Visualization results of our method in some multi-class examples. Our method selects patches according to the given class name and the corresponding heatmap highlights the relevant areas. 6." + }, + { + "url": "http://arxiv.org/abs/2205.02918v1", + "title": "Generating Representative Samples for Few-Shot Classification", + "abstract": "Few-shot learning (FSL) aims to learn new categories with a few visual\nsamples per class. Few-shot class representations are often biased due to data\nscarcity. To mitigate this issue, we propose to generate visual samples based\non semantic embeddings using a conditional variational autoencoder (CVAE)\nmodel. We train this CVAE model on base classes and use it to generate features\nfor novel classes. More importantly, we guide this VAE to strictly generate\nrepresentative samples by removing non-representative samples from the base\ntraining set when training the CVAE model. 
We show that this training scheme\nenhances the representativeness of the generated samples and therefore,\nimproves the few-shot classification results. Experimental results show that\nour method improves three FSL baseline methods by substantial margins,\nachieving state-of-the-art few-shot classification performance on miniImageNet\nand tieredImageNet datasets for both 1-shot and 5-shot settings. Code is\navailable at: https://github.com/cvlab-stonybrook/fsl-rsvae.", + "authors": "Jingyi Xu, Hieu Le", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Few-shot learning (FSL) methods aim to learn useful representations with limited training data. They are extremely useful for situations where machine learning solutions are required but large labelled datasets are not trivial to obtain (e.g. rare medical conditions [49, 71], rare animal species [75], failure cases in autonomous systems [42,43,58]). Generally, FSL methods learn knowledge from a fixed set of base classes with a surplus of labelled data and then adapt the learned model to a set of novel classes for which only a few training examples are available [73]. Many FSL methods [10, 23, 39, 65, 65, 77, 82] employ a prototype-based classifier for its simplicity and good performance. They aim to find a prototype for each novel class such that it is close to the testing samples of the same class and far away from testing samples for other classes. How*Work done outside of Amazon Representative samples Non-representative samples Gaussian distribution Figure 1. Representative Samples. We refer representative samples to the \u201ceasy-to-recognize\u201d samples that faithfully reflect the key characteristics of the category. We identify those samples and then use them to train a VAE model for feature generation, conditioned on class-representative semantic embeddings. We show that the generated data significantly improves few-shot classification performance. ever, it is challenging to estimate a representative prototype just from a few available support samples [37,79]. An effective strategy to enhance the representativeness of the prototype is to employ textual semantic embeddings learned via NLP models [13, 46, 52, 53] using large unsupervised text corpora [77, 82]. These semantic embeddings implicitly associate a class name, such as \u201cYorkshire Terriers\u201d, with the class representative semantic attributes such as \u201csmallest dog\u201d or \u201clong coat\u201d [1] ( Fig. 1), providing strong and unbiased priors for category recognition. For the most part, current FSL methods focus on learning to adaptively leverage the semantic information to complete the original biased prototype estimated from the few available samples. For example, the recent FSL method of Zhang et al. [82] learns to fuse the primitive knowledge and attribute features into a representative prototype, depending on the set of given few-shot samples. Similarly, Xing et al. [77] propose a method that computes an adaptive mixture coefficient to combine features from the visual and tex\ftual modalities. However, learning to recover an arbitrarily biased prototype is challenging due to the drastic variety of the possible combinations of few-shot samples. In this paper, we propose a novel FSL method to obtain class-representative prototypes. 
Inspired by zero-shot learning (ZSL) methods [4, 18, 85], we propose to generate visual features via a variational autoencoder (VAE) model [66] conditioned on the semantic embedding of each class. This VAE model learns to associate a distribution of features to a conditioned semantic code. We assume that such association generalizes across the base and novel classes [3,47]. Therefore, the model trained with sufficient data from the base classes can generate novel-class features that align with the real unseen features. We then use the generated features together with the few-shot samples to construct class prototypes. We show that this strategy achieves state-of-the-art results on both miniImageNet and tieredImageNet datasets. It works exceptionally well for 1shot scenarios where our method outperforms state-of-theart methods [76, 80] by 5 \u223c6% in terms of classification accuracy. Moreover, to enhance the representativeness of the prototype, we guide the VAE to generate more representative samples. Here we refer representative samples to the \u201ceasyto-recognize\u201d samples that faithfully reflect the key characteristics of the category (see Fig. 1). The embeddings of these representative samples often lie close to their corresponding class centers, which are particularly useful for constructing class-representative prototypes. Specifically, we guide the VAE model to generate representative samples by selecting only representative data from the base classes for training it. In essence, our VAE model is trained to model the data distribution of the training set. As the training set contains only representative data, the trained VAE model outputs samples that are also representative. Specifically, to select those representative features, we first assume that the feature vectors of each class follow a multivariate Gaussian distribution and estimate this distribution for each base class. Based on these distributions, we compute the probability of each sample belonging to its corresponding category to measure the representativeness for the sample. We filter out the non-representative samples and train the VAE using only representative samples. Interestingly, we show that the representativeness of the training set highly corresponds to the accuracy of the few-shot classifier. We obtain the highest accuracy when training the VAE with the most representative samples. In this case, we only use a small percentage of the whole training set, e.g., 10% for the case of miniImagenet dataset, to obtain the best results. Our analyses show that this approach consistently improves the FSL classification performance by 1 \u223c2% across all benchmarks for three different baselines [10,39,65]. Our main contributions can be summarized as follows: \u2022 We are the first to use a VAE-based feature generation approach conditioned on class semantic embeddings for few-shot classification. \u2022 We propose a novel sample selection method to collect representative samples. We use these samples to train a VAE model to obtain reliable data points for constructing class-representative prototypes. \u2022 Our experiments show that our methods achieve stateof-the-art performance on two challenging datasets, tieredImageNet and miniImageNet. We summarize related FSL works in Section 2. Section 3 provides a rundown of our approach. Section 4 reports the main results obtained with our method. In section 5, we provide multiple analyses to clarify different aspects of our methods. 2. Related Work Few-shot Learning. 
FSL is helpful when we only have limited labeled training data [7,25\u201330]. Representative FSL approaches include metric learning based [65,67,68,70,79, 80,83], optimization based [17,31,33,34,37,54,59,62], and data augmentation based methods [2, 61, 74, 78]. Similar to our method, some FSL methods use semantic information to improve the few-shot classifiers [21, 51, 69, 77, 82]. Zhang et al. [82] and Xing et al. [77] propose methods that learn to adaptively combine the visual features and the semantic features to obtain an unified cross-modality representation for each class. These two methods focus on the fusing strategies that combine features of the two domains. Hu et al. [21] propose to disentangle the visual features into the sub-spaces that associate to different semantic attributes. The FSL method of Peng et al. [51] uses semantic information to infer a classifier for novel classes and adaptively combines this classifier with the few-shot samples. Our method is the first FSL method that uses a conditional VAE model to directly generate visual features, conditioned on the semantic embedding of each class. Conditional Variational Autoencoder. The practice of using a conditional VAE to model a feature distribution has been used before in many computer vision tasks such as image classification [23,60,78,84], image generation [16,38], image restoration [14], or video processing [50]. Using VAE models for generating features conditioned on the corresponding semantic embedding is fairly common in ZSL methods [4, 18, 47, 60, 81, 85]. Mishra et al. [47] are the first to propose to use a conditional VAE for ZSL where they view ZSL as a case of missing data. They find that such an approach can handle well the domain shift problem. Similarly, Arora et al. [3] show that a conditional VAE can \fa Feature extractor x Encoder z Decoder x Concatenation Latent code KL loss Reconstruction loss Input image Likelihood threshold deep features Semantic embedding (a) Sample Selection method (b) Conditional VAE model Gaussian distribution Figure 2. Overview \u2013 The key aspect of our approach is to subset our training set to the most representative samples to train a conditional VAE model that generates more representative features. (a) To select representative samples, we assume that the features of each class follow a multivariate Gaussian distribution. We estimate the distribution parameters and compute a probability for each data point belonging to the class distribution. We identify a set of representative samples by setting a threshold on the probability. (b) We train a VAE to generate visual features, conditioned on the semantic embedding of each class. Using only representative samples (the output of the sample selection step) to train this VAE model improves the representativeness of the generated samples. be used together with a GAN system to synthesize images for unseen classes effectively. Keshari et al. [22] focus on generating a specific set of hard samples which are closer to another class and the decision boundary. For the most part, ZSL methods aim to model the whole distribution of data [6,9,40,60], while our method focuses on modeling the distribution of representative samples useful for constructing the class-representative prototypes. Sample Selection. To the best of our knowledge, we are the first to propose using a sample selection method for selecting training samples for a VAE model. Here we select only representative samples for training the VAE. 
This is a new sample selection regime since mainstream sample selection works mainly focus on identifying the most informative samples [5, 24] for training their models, which is widely used in active-learning [32, 63]. In FSL, Chang et al. [8] propose a method to select the most informative data that should be annotated for a few-shot text generation system. Zhou et al. [86] propose a method to select the useful base classes to train their model, while our work selects useful individual samples within an arbitrary set of base classes. 3. Method 3.1. Problem Definition In a typical few-shot classification setting, we are given a set of data-label pairs D = {(xi, yi)}. Here xi \u2208Rd is the feature vector of a sample and yi \u2208C, where C denotes the set of classes. The set of classes is divided into base classes Cb and novel classes Cn. The sets of class Cb and Cn are disjoint, i.e. Cb \u2229Cn = \u2205. For a N-way K-shot problem, we sample N classes from the novel set Cn, and K samples are available for each class. K is often small (i.e., K = 1 or K = 5). Our goal is to classify query samples correctly using the few samples from the support set. 3.2. Overall Pipeline Fig. 2 gives an overview of our sample selection method and VAE training approach. We propose a method to select a set of representative samples from a set of base classes. We use these selected representative data to train a conditional VAE model for feature generation. To select representative samples, we assume that the features of each class follow a multivariate Gaussian distribution. We estimate the parameters for each class distribution and compute the probability for each data point belonging to its class. By setting a threshold on the probabilities, we identify a set of representative samples. We then use these selected representative samples to train a VAE model that generates samples conditioned on the semantic attributes of each class. We train this VAE on the base classes and use the trained model to generate samples for the novel classes. The generated features are then used together with the few-shot samples to construct the prototype for each class. Our method is a simple plug-and-play module and can be built on top of any pretrained feature extractors. In our experiments, we show that our method consistently improves three baseline few-shot classification methods: Meta-Baseline [10], ProtoNet [65] and E3BM [39] by large margins. \f3.2.1 Class-representative Sample Selection In this paper, we are interested in representative samples as they can serve as reliable data points for constructing a class-representative prototype [10, 65]. The main idea is to train a feature generator with only representative data to obtain more representative generated samples. To select the representative features, we assume that the feature distribution of the base classes follows a Gaussian distribution and estimate the parameters of this distribution for each class. We calculate the Gaussian mean of a base class i as the mean of every single dimension in the vector: \\ l a be l { eq: mean} \\mu ^i = \\frac {1}{n^i}\\sum _{j=1}^{n^i} x^j, (1) where xj is a feature vector of the j-th sample from the base class i and ni is the total number of samples in class i. The covariance matrix \u03a3i for the distribution of class i is calculated as: \\ l a be l {e q :va e} \\ Sigma ^ i = \\frac {1}{n^i-1} \\sum \\limits _{j=1}^{n^i}(x^j-\\mu ^i)(x^j-\\mu ^i)^T. 
Once we estimate the parameters of the Gaussian distribution using the available samples from the base classes, the probability density of observing a single feature x^j under the Gaussian distribution of class i is given by: p(x^j|\mu^i, \Sigma^i) = \frac{\exp\{-\frac{1}{2}(x^j-\mu^i)^T (\Sigma^i)^{-1} (x^j-\mu^i)\}}{(2\pi)^{k/2}|\Sigma^i|^{1/2}}, (3) where k is the dimension of the feature vector. Here we assume that the probability that a single sample belongs to its category's distribution reflects the representativeness of the sample, i.e., the higher the probability, the more representative the sample is. By setting a threshold \epsilon on the estimated probability, we filter out those samples with small probabilities and obtain a set of representative features for class i: \mathbb{D}^i = \{x^j \mid p(x^j|\mu^i, \Sigma^i) > \epsilon\}, (4) where \mathbb{D}^i stores the features of class i whose probabilities are larger than the threshold \epsilon. 3.2.2 Conditional VAE Model for Feature Generation We use our sample selection method to select a set of representative samples and use them for training our feature generation model. We develop our feature generator based on a conditional variational autoencoder (VAE) architecture [66] (see Fig. 2b). The VAE is composed of an encoder E(x, a), which maps a visual feature x to a latent code z, and a decoder G(z, a), which reconstructs x from z. Both E and G are conditioned on the semantic embedding a. The loss function for training the VAE on a feature x^j of class i is defined as: L_V(x^j) = \mathrm{KL}\left(q(z|x^j, a^i)\,\|\,p(z|a^i)\right) - \log p(x^j|z, a^i), (5) where a^i is the semantic embedding of class i. The first term is the Kullback-Leibler divergence between the VAE posterior q(z|x, a) and a prior distribution p(z|a). The second term is the decoder's reconstruction error. q(z|x, a) is modeled as E(x, a) and p(x|z, a) is given by G(z, a). The prior distribution is assumed to be N(0, I) for all classes. The loss for training the feature generator is the sum over all selected representative training samples: L_V = \sum_{i=1}^{C_b}\sum_{x \in \mathbb{D}^i} L_V(x). (6) 3.2.3 Constructing Class Prototypes After the VAE is trained on the base set, we generate a set of features for a class y by inputting the respective semantic vector a^y and a noise vector z to the decoder G: \mathbb{G}^y = \{\hat{x} \mid \hat{x} = G(z, a^y), z \sim \mathcal{N}(0, I)\}. (7) The generated features, along with the original support set features of a few-shot task, then serve as the training data for a task-specific classifier. Following our baseline methods, we compute the prototype for each class and apply the nearest neighbour classifier. Specifically, we first compute two separate prototypes: one using the support features and the other using the generated features. Each prototype is the mean vector of the features of its group.
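As a concrete illustration of Eqs. (5)-(7), the sketch below implements a conditional VAE in PyTorch with a Gaussian encoder and an L2 reconstruction term standing in for -log p(x|z,a). Layer sizes follow the implementation details reported later (two-layer fully-connected encoder/decoder with 4096 hidden units, 512-d latent and semantic vectors); everything else, including module and variable names, is our own hedged reconstruction rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Feature generator conditioned on a class semantic embedding a (cf. Fig. 2b)."""
    def __init__(self, feat_dim=640, sem_dim=512, latent_dim=512, hidden=4096):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + sem_dim, hidden), nn.LeakyReLU())
        self.enc_mu = nn.Linear(hidden, latent_dim)
        self.enc_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + sem_dim, hidden), nn.LeakyReLU(),
                                 nn.Linear(hidden, feat_dim), nn.ReLU())

    def forward(self, x, a):
        h = self.enc(torch.cat([x, a], dim=-1))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        x_rec = self.dec(torch.cat([z, a], dim=-1))
        return x_rec, mu, logvar

    def generate(self, a, n_samples):
        """Eq. (7): decode noise vectors conditioned on one class's semantic embedding."""
        z = torch.randn(n_samples, self.enc_mu.out_features, device=a.device)
        return self.dec(torch.cat([z, a.expand(n_samples, -1)], dim=-1))

def cvae_loss(x, x_rec, mu, logvar):
    """Eq. (5): KL(q(z|x,a) || N(0,I)) plus a reconstruction term (L2 as a stand-in)."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    rec = F.mse_loss(x_rec, x, reduction='none').sum(dim=-1)
    return (kl + rec).mean()
```

At inference time, calling `generate` with a novel class's semantic vector yields the generated set G^y of Eq. (7), which is then averaged into the generated-feature prototype.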
We then take a weighted sum of the two prototypes to obtain the final prototype py for class y: \\ la b e l {e q : combi n e} \\t e x tnor m al {p}^y = w_g * \\frac {1}{|\\mathbb {G}^y|} {\\sum }_{\\hat {x}^j \\in \\mathbb {G}^y} {\\hat {x}^j} + w_s*\\frac {1}{|\\mathbb {S}^y|} {\\sum }_{{x}^j \\in \\mathbb {S}^y} {{x^j}}, (8) where Sy is the support set features and (wg, ws) are the coefficients of the generated feature prototype and the real feature prototype, respectively. We classify samples by finding the nearest class prototype for an embedding query feature. We conduct further analysis to show that our generated features can benefit all types of classifiers (see Section 5.2). Compared to the methods that correct the original biased prototype, our model does not require any carefully designed combination scheme. \fMethod Backbone miniImageNet tieredImageNet 1-shot 5-shot 1-shot 5-shot Matching Net [70] ResNet-12 65.64 \u00b1 0.20 78.72 \u00b1 0.15 68.50 \u00b1 0.92 80.60 \u00b1 0.71 MAML [17] ResNet-18 64.06 \u00b1 0.18 80.58 \u00b1 0.12 SimpleShot [72] ResNet-18 62.85 \u00b1 0.20 80.02 \u00b1 0.14 69.09 \u00b1 0.22 84.58 \u00b1 0.16 CAN [20] ResNet-12 63.85 \u00b1 0.48 79.44 \u00b1 0.34 69.89 \u00b1 0.51 84.23 \u00b1 0.37 S2M2 [44] ResNet-18 64.06 \u00b1 0.18 80.58 \u00b1 0.12 TADAM [48] ResNet-12 58.50 \u00b1 0.30 76.70 \u00b1 0.30 62.13 \u00b1 0.31 81.92 \u00b1 0.30 AM3 [77] ResNet-12 65.30 \u00b1 0.49 78.10 \u00b1 0.36 69.08 \u00b1 0.47 82.58 \u00b1 0.31 DSN [64] ResNet-12 62.64 \u00b1 0.66 78.83 \u00b1 0.45 66.22 \u00b1 0.75 82.79 \u00b1 0.48 Variational FSL [84] ResNet-12 61.23 \u00b1 0.26 77.69 \u00b1 0.17 MetaOptNet [31] ResNet-12 62.64 \u00b1 0.61 78.63 \u00b1 0.46 65.99 \u00b1 0.72 81.56 \u00b1 0.53 Robust20-distill [15] ResNet-18 63.06 \u00b1 0.61 80.63 \u00b1 0.42 65.43 \u00b1 0.21 70.44 \u00b1 0.32 FEAT [80] ResNet-12 66.78 \u00b1 0.20 82.05 \u00b1 0.14 70.80 \u00b1 0.23 84.79 \u00b1 0.16 RFS [68] ResNet-12 62.02 \u00b1 0.63 79.64 \u00b1 0.44 69.74 \u00b1 0.72 84.41 \u00b1 0.55 Neg-Cosine [36] ResNet-12 63.85 \u00b1 0.81 81.57 \u00b1 0.56 FRN [76] ResNet-12 66.45 \u00b1 0.19 82.83 \u00b1 0.13 71.16 \u00b1 0.22 86.01 \u00b1 0.15 Meta-Baseline [10] ResNet-12 63.17 \u00b1 0.23 79.26 \u00b1 0.17 68.62 \u00b1 0.27 83.29 \u00b1 0.18 Meta-Baseline + SVAE (Ours) ResNet-12 69.96 \u00b1 0.21 79.92 \u00b1 0.16 73.05 \u00b1 0.24 83.96 \u00b1 0.18 Meta-Baseline + R-SVAE (Ours) ResNet-12 72.79 \u00b1 0.19 80.70 \u00b1 0.16 73.90 \u00b1 0.24 84.17 \u00b1 0.18 ProtoNet [80] ResNet-12 62.39 80.53 68.23 84.03 ProtoNet + SVAE (Ours) ResNet-12 73.01 \u00b1 0.24 83.13 \u00b1 0.40 76.36 \u00b1 0.65 85.65 \u00b1 0.50 ProtoNet + R-SVAE(Ours) ResNet-12 74.84 \u00b1 0.23 83.28 \u00b1 0.40 76.98 \u00b1 0.65 85.77 \u00b1 0.50 E3BM [39] ResNet-12 64.09 \u00b1 0.37 80.29 \u00b1 0.25 71.34 \u00b1 0.41 85.82 \u00b1 0.29 E3BM + SVAE (Ours) ResNet-12 73.07 \u00b1 0.39 80.82 \u00b1 0.31 79.85 \u00b1 0.43 86.82 \u00b1 0.32 E3BM + R-SVAE(Ours) ResNet-12 73.35 \u00b1 0.37 80.95 \u00b1 0.31 80.46 \u00b1 0.43 86.99 \u00b1 0.32 Table 1. Comparison to prior works on miniImageNet and tieredImageNet. Average 5-way 1-shot and 5-way 5-shot accuracy (%) with 95% confidence intervals. SVAE denotes our method using the VAE trained with all features in the base set. R-SVAE denotes the one trained with only representative features. The best performance is highlighted in bold. 4. Experiments 4.1. Experimental Settings Datasets. 
We evaluate our method on two widely-used benchmarks for few-shot learning, miniImageNet [55] and tieredImageNet [57]. miniImageNet is a subset of the ILSVRC-12 dataset [12]. It contains 100 classes and each class consists of 600 images. The size of each image is 84 \u00d7 84. Following the evaluation protocol of [56], we split the 100 classes into 64 base classes, 16 validation classes, and 20 novel classes for pre-training, validation, and testing. tieredImageNet is a larger subset of ILSVRC-12 dataset, which contains 608 classes sampled from hierarchical category structure. The average number of images in each class is 1281. It is first partitioned into 34 super-categories that are split into 20 classes for training, 6 classes for validation, and 8 classes for testing. This leads to 351 actual categories for training, 97 for validation, and 160 for testing. Baseline methods. Our method can be used as a simple plug-and-play module for many existing few-shot learning methods without fine-tuning their feature extractors. We investigate three baseline few-shot classification methods used in conjunction with our method: ProtoNet [80], Meta-Baseline [10] and E3BM [39]. ProtoNet is known as a strong and classic prototypical approach. In our experiments, we use the ProtoNet implementation of Ye et al. [80]. Meta-Baseline [10] uses a ProtoNet model to fine-tune a generic classifier via meta-learning. E3BM [39] metalearns the ensemble of epoch-wise models to achieve robust predictions for FSL. For each baseline method, we extract the corresponding feature representations to train our feature generation VAE model. We then use the trained VAE to generate features and obtain the class prototypes for fewshot classification. Evaluation protocol. We use the top-1 accuracy as the evaluation metric to measure the performance of our method. We report the accuracy on standard 5-way 1-shot and 5-shot settings with 15 query samples per class. We randomly sample 2000 episodes from the test set and report the mean accuracy with the 95% confidence interval. 4.2. Implementation Details All the three baselines use ResNet12 backbone as the feature extractor. The feature representation is extracted by average pooling the final residual block outputs. The dimension of the feature representation is 640 for ProtoNet [80], 512 for Meta-Baseline [10], and 640 for E3BM [39]. For our feature generation model, both the encoder and the decoder are two-layer fully-connected (FC) networks with 4096 hidden units. LeakyReLU and ReLU [19] are the non\f0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 threshold 70.0 70.5 71.0 71.5 72.0 72.5 accuracy (%) miniImageNet, 1-shot 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 threshold 80.0 80.2 80.4 80.6 accuracy (%) miniImageNet, 5-shot 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 threshold 73.0 73.2 73.4 73.6 73.8 74.0 74.2 accuracy (%) tieredImageNet, 1-shot 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 threshold 83.95 84.00 84.05 84.10 84.15 accuracy (%) tieredImageNet, 5-shot 100 200 300 400 500 600 number of samples 100 200 300 400 500 600 number of samples 600 800 1000 1200 number of samples 600 800 1000 1200 number of samples accuracy number of samples Figure 3. Few-shot classification results with different probability thresholds. We report the classification accuracy (%) (red) and the number of samples (green) when setting different thresholds for the probabilities. A higher threshold means we select samples that are more representative, resulting in a less amount of training data points. 
In general, the classification performance increases when the number of training samples decreases with increasing representativeness thresholds. linear activation functions in the hidden and output layers, respectively. The dimensions of the latent space and the semantic vector are both set to be 512. The network is trained using the Adam optimizer with 10\u22124 learning rate. Our semantic embeddings are extracted from CLIP [53]. We empirically set the combination weights [wg, ws] in Equation 8 to [ 1 2, 1 2] for 1-shot settings and to [ 1 6, 5 6] for 5-shot settings. We set the probability threshold to 0.9 for the main experiments and discuss the performance under different values of this threshold in Section 5.1. 4.3. Results Table 1 presents the 5-way 1-shot and 5-way 5-shot classification results of our methods on miniImageNet and tieredImageNet in comparision with previous FSL methods. Here all methods use ResNet12/ResNet18 architectures as feature extractors with input images of size 84 \u00d7 84. Thus, the comparison is fair. For the rest of the paper, we denote our VAE trained with all data as SVAE (Semantic-VAE) and the model trained with only representative data as R-SVAE (Representative-SVAE). We apply our methods on top of the Meta-Baseline [10], ProtoNet [80], and E3BM [39]. Our methods consistently improve all three baselines under all settings and for all datasets. They work particularly well under the 1-shot settings, in which sample bias is a more pronounced issue. Using the model trained on all data SVAE, we report 6.8% \u223c10% 1-shot accuracy improvements for all three baselines. Our 1-shot performance for all the baselines outperforms the state-of-the-art method [76] by large margins. In 5-shot, our method consistently brings a 0.5 \u223c2.7% performance gains to all baselines. Using representative samples to train our VAE model further improves the three baseline methods under all settings and for all datasets. Compared to SVAE, training on strictly representative data improves the 1-shot classification accuracy by 0.3% \u223c2.8% and the 5-shot classification accuracy by 0.2% \u223c0.8%. R-SVAE achieves state-of-theart few-shot classification on miniImageNet dataset with the ProtoNet baseline and on tieredImageNet dataset with the E3BM baseline. 5. Analyses All the following analyses use the feature extractor from the Meta-Baseline method [10]. 5.1. Analysis on the Probability Threshold In our main setting, we set a threshold of 0.9 on the probabilities to select those class-representative samples as the training data for our VAE model (the higher, the more representative). In this section, we conduct experiments with different threshold values to see how it affects the classifier\u2019s performance. Fig. 3 shows the classification accuracy under different thresholds on miniImageNet and tieredImageNet datasets. As the threshold increases, more non-representative samples are filtered out, resulting in less training data for R-SVAE. Interestingly, we observe that the model generally performs better with higher threshold val\f(a) Support Features (b) Query Features (c) Generated Features with SVAE (d) Generated Features with R-SVAE Figure 4. Feature Visualization. We show the t-SNE visualization of the original features (marked as dark points) and our generated features (marked as transparent points) on tieredImageNet dataset. Different colors represent different classes. 
From left to right, we show the original support set (a), the query set (b), the features generated by SVAE (c), and the features generated by R-SVAE (d). 0.0 0.2 0.4 0.6 0.8 distance 0 1 2 3 4 5 density miniImageNet, 1-shot baseline with SVAE with R-SVAE 0.00 0.05 0.10 0.15 0.20 0.25 0.30 distance 0.0 2.5 5.0 7.5 10.0 12.5 15.0 density miniImageNet, 5-shot baseline with SVAE with R-SVAE Figure 5. Distance Distributions. Kernel Density Estimation of the distance between the estimated prototypes and the ground truth prototype. A smaller value means the estimated prototypes are closer to the ground truth prototypes. ues under both 1-shot and 5-shot settings. For example, under the 1-shot setting on miniImageNet dataset, we only use 58 images per class on average when setting the threshold to 0.9. Training the VAE model with this small set of images improves the performance by 2.95% compared with the model trained using all data in the base set with 600 images per class on average. The results suggest that the performance of our method strongly corresponds to the representativeness of training data. Moreover, it shows that our sample selection method provides a reliable measurement for the representativeness of the training samples. 5.2. Performance with Different Classifiers In our main experiments, we classify samples by finding the nearest neighbor among class prototypes. In this section, we apply another three different types of classifiers: 1-nearest neighbor classifier (1-N-N), Support Vector Machine (SVM), and Logistic Regression (LR). Table 2 shows the 1-shot performance of different classifiers using our generated features on miniImageNet and tieredImageNet datasets. It shows that the features generated by our VAEs improve the performance of all three classifiers. For example, the 1-shot accuracy on miniImageNet using LR is improved by 8.8% with SVAE and by 10.1% with R-SVAE. The consistent performance improvements show that our generated features can benefit different types of classifiers. 5.3. Feature Distribution Analysis In Fig. 4, we show the t-SNE representation [41] of different sets of features for three classes from the novel set of tieredImageNet dataset. From left to right, we visualize the distribution of the original support set (a), the query set (b), the features generated by SVAE (c), and the features generated by R-SVAE (d). Note that our methods do not rely on the support features to generate features. Fig. 4(c) and (d) visualize the effect of our sample selection method. Fig. 4(c) visualizes features generated from our method trained with all available data from the base classes, which consist of 1281 images per class on average. In Fig. 4(d), we train the same model with only 484 representative images per class on average. Our model trained with a representative subset of data generates features that lie closer to the real features, showing the effectiveness of our sample selection method. Moreover, we plot the distance distributions between the estimated prototypes and the ground truth prototypes of each class. Specifically, for each class, we first obtain the ground-truth prototype by taking the mean of all the features of the class. Then we calculate the L2 distance between the ground truth prototype and three different prototypes: 1) Baseline: the prototype was estimated using only the support samples. 2) SVAE: the prototype was estimated using the support samples and the generated samples from our SVAE model. 
3) R-SVAE: the prototype was estimated using the support samples and the generated samples from our R-SVAE model. We sample 2400 tasks from miniImageNet dataset under both 5-way 1-shot and 5-way 5-shot settings. For each task, we obtain five distances, one distance per class. Then we plot the probability density distribution of the distance, shown in Fig. 5. The probability density is calculated by binning and counting observations and then smoothing them with a Gaussian kernel, namely, Kernel Density Esti\fRepresentative Non-representative Figure 6. Examples of representative samples (left) and non-representative samples (right). We visualize 5 images with high probabilities and 5 images with small probabilities computed via our proposed method for 3 classes from tieredImageNet dataset. miniImageNet tieredImageNet Classifier support samples + SVAE + R-SVAE support samples + SVAE +R-SVAE Prototype [10] 63.17 \u00b1 0.23 69.96 \u00b1 0.21 72.79 \u00b1 0.19 68.62 \u00b1 0.27 73.05 \u00b1 0.24 73.90 \u00b1 0.24 1-N-N 63.28 \u00b1 0.23 67.25 \u00b1 0.20 69.27 \u00b1 0.19 68.73 \u00b1 0.26 68.05 \u00b1 0.25 69.82 \u00b1 0.24 SVM 63.41 \u00b1 0.23 70.30 \u00b1 0.20 72.84 \u00b1 0.19 68.88 \u00b1 0.25 69.26 \u00b1 0.25 71.28 \u00b1 0.24 LR 63.33 \u00b1 0.22 72.11 \u00b1 0.20 73.41 \u00b1 0.19 69.15 \u00b1 0.25 74.99 \u00b1 0.23 75.98 \u00b1 0.23 Table 2. Choices of the classifiers. One-shot classification accuracy on miniImageNet and tieredImageNet using different types of classifiers, i.e., 1-N-N, SVM and LR. All methods use the feature extractor from the Meta-Baseline method [10]. mation [11]. As can be seen the Fig., our estimated class prototypes are much closer to the ground truth prototypes, compared to the baseline. 5.4. Sample Visualization In Fig. 6, we visualize some representative samples and non-representative samples based on the representativeness probability computed via our method. The samples on the left panel are images with high probabilities. These images mostly contain the main object of the category and are easy to recognize. On the contrary, the samples on the right panel are those with small probabilities. They contain various class-unrelated objects and can lead to noisy features for constructing class prototypes. 5.5. Performance with Different Semantic Embedding We use CLIP features in our main experiments. The performance of our method trained with Word2Vec [45] features are shown in Table 3. Note that CLIP model is trained with 400M pairs (image and its text title) collected from the web while Word2Vec is trained with only text data. Our model outperforms state-of-the-art methods in both cases. 6. Limitations and Discussion We propose a feature generation method using a conditional VAE model. Here we focus on modeling the distribution of the representative samples rather than the whole 1-shot 5-shot Meta-Baseline 63.17 \u00b1 0.23 79.26 \u00b1 0.17 Meta-Baseline + SVAE 67.39 \u00b1 0.21 79.77 \u00b1 0.17 Meta-Baseline + R-SVAE 68.03 \u00b1 0.22 79.93 \u00b1 0.16 Table 3. Classification accuracy using Word2Vec [45] as the semantic feature extractor. data distribution. To accomplish that, we propose a sample selection method to collect a set of strictly representative training samples for training our VAE model. We show that our method brings consistent performance improvements over multiple baselines and achieves state-of-the-art performance on both miniImageNet and tieredImageNet datasets. 
Our method requires a pre-trained NLP model to obtain the semantic embedding of each class. It might also inherit some potential biases from the textual domain. Note that our method does not aim to generate diverse data with large intra-class variance [35, 78]. Building a system that can generate both representative and non-representative samples can greatly benefit various downstream computer vision tasks and is an interesting direction to extend our work. Acknowledgements. Jingyi Xu is partially supported by a research grant from Zebra Technologies and the SUNY2020 ITSC grant. Hieu Le is funded by Amazon Robotics to attend the conference. We thank Tran Truong, Kien Huynh, and Bento Gonc \u00b8alves for proofreading the paper." + }, + { + "url": "http://arxiv.org/abs/2204.04677v1", + "title": "FedCorr: Multi-Stage Federated Learning for Label Noise Correction", + "abstract": "Federated learning (FL) is a privacy-preserving distributed learning paradigm\nthat enables clients to jointly train a global model. In real-world FL\nimplementations, client data could have label noise, and different clients\ncould have vastly different label noise levels. Although there exist methods in\ncentralized learning for tackling label noise, such methods do not perform well\non heterogeneous label noise in FL settings, due to the typically smaller sizes\nof client datasets and data privacy requirements in FL. In this paper, we\npropose $\\texttt{FedCorr}$, a general multi-stage framework to tackle\nheterogeneous label noise in FL, without making any assumptions on the noise\nmodels of local clients, while still maintaining client data privacy. In\nparticular, (1) $\\texttt{FedCorr}$ dynamically identifies noisy clients by\nexploiting the dimensionalities of the model prediction subspaces independently\nmeasured on all clients, and then identifies incorrect labels on noisy clients\nbased on per-sample losses. To deal with data heterogeneity and to increase\ntraining stability, we propose an adaptive local proximal regularization term\nthat is based on estimated local noise levels. (2) We further finetune the\nglobal model on identified clean clients and correct the noisy labels for the\nremaining noisy clients after finetuning. (3) Finally, we apply the usual\ntraining on all clients to make full use of all local data. Experiments\nconducted on CIFAR-10/100 with federated synthetic label noise, and on a\nreal-world noisy dataset, Clothing1M, demonstrate that $\\texttt{FedCorr}$ is\nrobust to label noise and substantially outperforms the state-of-the-art\nmethods at multiple noise levels.", + "authors": "Jingyi Xu, Zihan Chen, Tony Q. S. Quek, Kai Fong Ernest Chong", + "published": "2022-04-10", + "updated": "2022-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "main_content": "Introduction Federated learning (FL) is a promising solution for largescale collaborative learning, where clients jointly train a machine learning model, while still maintaining local data privacy [20, 26, 37]. However, in real-world FL implementations over heterogeneous networks, there may be differ*Equal contributions. \u2020 Corresponding author. Code: https://github.com/Xu-Jingyi/FedCorr ences in the characteristics of different clients due to diverse annotators\u2019 skill, bias, and hardware reliability [4,38]. Client data is rarely IID and frequently imbalanced. Also, some clients would have clean data, while other clients may have data with label noise at different noise levels. 
Hence, the deployment of practical FL systems would face challenges brought by discrepancies in two aspects i): local data statistics [5, 13, 21, 26], and ii): local label quality [4, 38]. Although recent works explored the discrepancy in local data statistics in FL, and learning with label noise in centralized learning (CL), there is at present no uni\ufb01ed approach for tackling both challenges simultaneously in FL. The \ufb01rst challenge has been explored in recent FL works, with a focus on performance with convergence guarantees [22, 27]. However, these works have the common implicit assumption that the given labels of local data are completely correct, which is rarely the case in real-world datasets. The second challenge can be addressed by reweighting [4,7,31] or discarding [36] those client updates that are most dissimilar. In these methods, the corresponding clients are primarily treated as malicious agents. However, dissimilar clients are not necessarily malicious and could have label noise in local data that would otherwise still be useful after label correction. For FL systems, the requirement of data privacy poses an inherent challenge for any label correction scheme. How can clients identify their noisy labels to be corrected without needing other clients to reveal sensitive information? For example, [38] proposes label correction for identi\ufb01ed noisy clients with the guidance of extra data feature information exchanged between clients and server, which may lead to privacy concerns. Label correction and, more generally, methods to deal with label noise, are well-studied in CL. Yet, even state-ofthe-art CL methods for tackling label noise [3, 8, 9, 18, 30, 33, 35, 40], when applied to local clients, are inadequate in mitigating the performance degradation in the FL setting, due to the limited sizes of local datasets. These CL methods cannot be applied on the global sever or across multiple This work is supported by the National Research Foundation, Singapore under its AI Singapore Program (AISG Award No: AISG-RP-2019015), and under its NRFF Program (NRFFAI1-2019-0005). This work is also supported in part by the SUTD Growth Plan Grant for AI. 1 arXiv:2204.04677v1 [cs.LG] 10 Apr 2022 \f\u2460 \u2461 \u22ef \u2463 \u22ef \u22ef \u2464 \u2465 \u22ef Identified clean samples Identified noisy samples Server GMM GMM Clean Noisy : Set of noisy clients. : Set of clean clients. Training Correction 1. Federated Pre-processing Stage (multiple iterations) \u2462Aggregation Downloading Uploading 2. Federated Finetuning Stage \u2469 Server 3. Usual Federated Learning Stage 11 \u22ef \u22ef \u2466 \u2467 \u2468Label correction Server Label correction : Identified clean client : Identified noisy client GMM \u22ef Figure 1. An overview of FedCorr, organized into three stages. Algorithm steps are numbered accordingly. clients due to FL privacy requirements. So, it is necessary and natural to adopt a more general framework that jointly considers the two discrepancies, for a better emulation of real-world data heterogeneity. Most importantly, privacypreserving label correction should be incorporated in training to improve robustness to data heterogeneity in FL. In this paper, we propose a multi-stage FL framework to simultaneously deal with both discrepancy challenges; see Fig. 1 for an overview. To ensure privacy, we introduce a dimensionality-based \ufb01lter to identify noisy clients, by measuring the local intrinsic dimensionality (LID) [11] of local model prediction subspaces. 
Extensive experiments have shown that clean datasets can be distinguished from noisy datasets by the behavior of LID scores during training [24,25]. Hence, in addition to the usual local weight updates, we propose that each client also sends an LID score to the server, which is a single scalar representing the discriminability of the predictions of the local model. We then \ufb01lter noisy samples based on per-sample training losses independently for each identi\ufb01ed noisy client, and relabel the largeloss samples with the predicted labels of the global model. To improve training stability and alleviate the negative impact caused by noisy clients, we introduce a weighted proximal regularization term, where the weights are based on the estimated local noise levels. Furthermore, we \ufb01netune the global model on the identi\ufb01ed clean clients and relabel the local data for the remaining noisy clients. Our main contributions are as follows: \u2022 We propose a general multi-stage FL framework FedCorr to tackle data heterogeneity, with respect to both local label quality and local data statistics. \u2022 We propose a general framework for easy generation of federated synthetic label noise and diverse (e.g. non-IID) client data partitions. \u2022 We identify noisy clients via LID scores, and identify noisy labels via per-sample losses. We also propose an adaptive local proximal regularization term based on estimated local noise levels. \u2022 We demonstrate that FedCorr outperforms state-ofthe-art FL methods on multiple datasets with different noise levels, for both IID and non-IID data partitions. 2. Related work 2.1. Federated methods In this paper, we focus on three closely related aspects of FL: generation of non-IID federated datasets, methods to deal with non-IID local data, and methods for robust FL. The generation of non-IID local data partitions for FL was \ufb01rst explored in [26], based on dividing a given dataset into shards. More recent non-IID data partitions are generated via Dirichlet distributions [1,13,31]. Recent federated optimization work mostly focus on dealing with the discrepancy in data statistics of local clients and related inconsistency issues [1, 21, 32]. For instance, FedProx deals with non-IID local data, by including a proximal term in the local loss functions [21], while FedDyn uses a dynamic proximal term based on selected clients [1]. SCAFFOLD [15] is another method suitable for non-IID local data that uses control variates to reduce clientdrift. In [13] and [27], adaptive FL optimization methods for the global server are introduced, which are compatible with non-IID data distributions. Moreover, the Power-ofChoice (PoC) strategy [6], a biased client selection scheme that selects clients with higher local losses, can be used to increase the rate of convergence. There are numerous works on improving the robustness of FL; these include robust aggregation methods [7,19,31], 2 \freputation mechanism-based contribution examining [36], credibility-based re-weighting [4], distillation-based semisupervised learning [14], and personalized multi-task learning [19]. However, these methods are not designed for identifying noisy labels. Even when these methods are used to detect noisy clients, either there is no mechanism for further label correction at the noisy clients [7, 19, 31, 36], or the effect of noisy labels is mitigated with the aid of an auxiliary dataset, without any direct label correction [4,14]. 
One notable exception is [38], which carries out label correction during training by exchanging feature centroids between clients and server. This exchange of centroids may lead to privacy concerns, since centroids could potentially be used as part of reverse engineering to reveal non-trivial information about raw local data. In contrast to these methods, FedCorr incorporates the generation of diverse local data distributions with synthetic label noise, together with noisy label identi\ufb01cation and correction, without privacy leakage. 2.2. Local intrinsic dimension (LID) Informally, LID [11] is a measure of the intrinsic dimensionality of the data manifold. In comparison to other measures, LID has the potential for wider applications as it makes no further assumptions on the data distribution beyond continuity. The key underlying idea is that at each datapoint, the number of neighboring datapoints would grow with the radius of neighborhood, and the corresponding growth rate would then be a proxy for \u201clocal\u201d dimension. LID builds upon this idea [12] via the geometric intuition that the volume of an m-dimensional Euclidean ball grows proportionally to rm when its radius is scaled by a factor of r. Speci\ufb01cally, when we have two m-dimensional Euclidean balls with volumes V1, V2, and with radii r1, r2, we can compute m as follows: V2 V1 = \u0012r2 r1 \u0013m \u21d2m = log(V2/V1) log(r2/r1) . (1) We shall now formally de\ufb01ne LID. Suppose we have a dataset consisting of vectors in Rn. We shall treat this dataset as samples drawn from an n-variate distribution D. For any x \u2208Rn, let Yx be the random variable representing the (non-negative) distance from x to a randomly selected point y drawn from D, and let FYx(t) be the cumulative distribution function of Yx. Given r > 0 and a sample point x drawn from D, de\ufb01ne the LID of x at distance r to be LIDx(r) := lim \u03b5\u21920 log FYx((1 + \u03b5)r) \u2212log FYx(r) log(1 + \u03b5) , provided that it exists, i.e. provided that FYx(t) is positive and continuously differentiable at t = r. The LID at x is de\ufb01ned to be the limit LIDx = limr\u21920 LIDx(r). Intuitively, the LID at x is an approximation of the dimension of a smooth manifold containing x that would \u201cbest\u201d \ufb01t the distribution D in the vincinity of x. Estimation of LID: By treating the smallest neighbor distances as \u201cextreme events\u201d associated to the lower tail of the underlying distance distribution, [2] proposes several estimators of LID based on extreme value theory. In particular, given a set of points X, a reference point x \u2208X, and its k nearest neighbors in X, the maximum-likelihood estimate (MLE) of x is: d LID(x) = \u2212 \u00121 k k X i=1 log ri(x) rmax(x) \u0013\u22121 , (2) where ri(x) denotes the distance between x and its i-th nearest neighbor, and rmax(x) is the maximum distance from x among the k nearest neighbors. 3. Proposed Method In this section, we introduce FedCorr, our proposed multi-stage training method to tackle heterogeneous label noise in FL systems (see Algorithm 1). Our method comprises three stages: pre-processing, \ufb01netuning and usual training. In the \ufb01rst stage, we sample the clients without replacement using a small fraction to identify noisy clients via LID scores and noisy samples via per-sample losses, after which we relabel the identi\ufb01ed noisy samples with the predicted labels of the global model. The noise level of each client is also estimated in this stage. 
In the second stage, we \ufb01netune the model with a typical fraction on relatively clean clients, and use the \ufb01netuned model to further correct the samples for the remaining clients. Finally, in the last stage, we train the model via the usual FL method (FedAvg [26]) using the corrected labels at the end of the second stage. 3.1. Preliminaries Consider an FL system with N clients and an M-class dataset D = {Dk}N k=1, where each Dk = {(xi k, yi k)}nk i=1 denotes the local dataset for client k. Let S denote the set of all N clients, and let w(t) k (resp. w(t)) denote the local model weights of client k (resp. global model weights obtained by aggregation) at the end of communication round t. At the end of round t, the global model f (t) G would have its weights w(t) updated as follows: w(t) \u2190 X k\u2208St |Dk| P i\u2208St |Di|w(t) k , (3) where St \u2286S is the subset of selected clients in round t. For the rest of this subsection, we shall give details on client data partition, noise model simulation, and LID score computation. These are three major aspects of our proposed approach to emulate data heterogeneity, and to deal with the discrepancies in both local data statistics and label quality. Data partition. We consider both IID and non-IID heterogeneous data partitions in this work. For IID partitions, 3 \fAlgorithm 1 FedCorr (Red and Black line numbers in the pre-processing stage refer to operations for clients and server, respectively.) Inputs: N (number of clients), T1, T2, T3, D = {Di}N i=1 (dataset), w(0) (initialized global model weights). Output: Global model f \ufb01nal G // Federated Pre-processing Stage 1: (\u02c6 \u00b5(0) 1 , . . . , \u02c6 \u00b5(0) N ) \u2190(0, . . . , 0) // estimated noise levels 2: for t = 1 to T1 do 3: S =Shuf\ufb02e({1, . . . , N}) 4: winter \u2190w(t\u22121) // intermediary weights 5: for k \u2208S do 6: w(t) k \u2190weights that minimize loss function (5) 7: Upload weights w(t) k and LID score to server 8: Update global model w(t) \u2190winter 9: Divide all clients into clean set Sc and noisy set Sn based on cumulative LID scores via GMM 10: for noisy client k \u2208Sn do 11: Divide Dk into clean subset Dc k and noisy subset Dn k based on per-sample losses via GMM 12: \u02c6 \u00b5(t) k \u2190|Dn k | |Dk| // update estimated noise level 13: y(i) k \u2190arg max f(x(i) k ; w(i)), \u2200(x(i) k , y(i) k ) \u2208Dn k // Federated Finetuning Stage 14: Sc \u2190{k|k \u2208S, \u00b5k < 0.1}, Sn \u2190S \\ Sc. 15: for t = T1 + 1 to T1 + T2 do 16: Update w(t) k by usual FedAvg among clients in Sc 17: for Noisy client k \u2208Sn do 18: y(i) k \u2190arg max f(x(i) k ; w(i)), \u2200(x(i) k , y(i) k ) \u2208Dk // Usual Federated Learning Stage 19: for t = T1 + T2 + 1 to T1 + T2 + T3 do 20: Update w(t) k by usual FedAvg among all clients 21: returnf \ufb01nal G := f(\u00b7; w(T1+T2+T3)) the whole dataset D is uniformly distributed at random among N clients. For non-IID partitions, we \ufb01rst generate an N \u00d7 M indicator matrix \u03a6, where each entry \u03a6ij indicates whether the local dataset of client i contains class j. Each \u03a6ij shall be sampled from the Bernoulli distribution with a \ufb01xed probability p. For each 1 \u2264j \u2264M, let \u03c5j be the sum of entries in the j-th column of \u03a6; this equals the number of clients whose local datasets contain class j. Let qj be a vector of length \u03c5j, sampled from the symmetric Dirichlet distribution with the common parameter \u03b1Dir > 0. 
Using qj as a probability vector, we then randomly allocate the samples within class j to these \u03c5j clients. Note that our non-IID data partition method provides a general framework to control the variability in both class distribution and the sizes of local datasets (see Fig. 2). Noise model. To emulate label noise in real-world data, Figure 2. Depiction of non-IID partitions for different parameters. we shall introduce a general federated noise model framework. For simplicity, this work only considers instanceindependent label noise. This framework has two parameters \u03c1 and \u03c4, where \u03c1 denotes the system noise level (ratio of noisy clients) and \u03c4 denotes the lower bound for the noise level of a noisy client. Every client has a probability \u03c1 of being a noisy client, in which case the local noise level for this noisy client is determined randomly, by sampling from the uniform distribution U(\u03c4, 1). Succinctly, the noise level of client k (for k = 1, . . . , N) is \u00b5k = ( u \u223cU(\u03c4, 1), with probability \u03c1; 0, with probability 1 \u2212\u03c1. (4) When \u00b5k \u0338= 0, the 100 \u00b7 \u00b5k% noisy samples are chosen uniformly at random, and are assigned random labels, selected uniformly from the M classes. LID scores for local models. In this paper, we associate LID scores to local models. Consider an arbitrary client with local dataset D and current local model f(\u00b7). Let X := {f(x)}x\u2208D be the set of prediction vectors, and for each x \u2208D, compute [ LID(f(x)) w.r.t. the k nearest neighbors in X, as given in (2). We de\ufb01ne the LID score of (D, f) to be the average value of [ LID(f(x)) over all x \u2208D. Note that as the local model f(\u00b7) gets updated with each round, the corresponding LID score will change accordingly. Experiments have shown that given the same training process, models trained on a dataset with label noise tend to have larger LID scores as compared to models trained on the same dataset with clean labels [24, 25]. Intuitively, the prediction vectors of a well-trained model, trained on a clean dataset, would cluster around M possible one-hot vectors, corresponding to the M classes. However, as more label noise is added to the clean dataset, the prediction vector of a noisy sample would tend to be shifted towards the other clusters, with different noisy samples shifted in different directions. Hence, the prediction vectors near each one-hot vector would become \u201cmore diffuse\u201d and would on average span a higher dimensional space. 3.2. Federated pre-processing stage FedCorr begins with the pre-processing stage, which iteratively evaluates the quality of the dataset of each client, and relabels identi\ufb01ed noisy samples. This pre-processing stage differs from traditional FL in the following aspects: 4 \f\u2022 All clients will participate in each iteration. Clients are selected without replacement, using a small fraction. \u2022 An adaptive local proximal term is added to the loss function, and mixup data augmentation is used. \u2022 Each client computes its LID score and per-sample cross-entropy loss after local training and sends its LID score together with local model updates to the server. Client iteration and fraction scheduling. The pre-processing stage is divided into T1 iterations. In each iteration, every client participates exactly once. 
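To make the LID computation concrete, the sketch below combines the MLE estimator of Eq. (2) with the averaging step described above: each client evaluates its current local model on its own data, measures the LID of every prediction vector against the other predictions, and reports the mean as a single scalar. The brute-force neighbor search and the choice of k are our own illustrative assumptions.

```python
import torch

@torch.no_grad()
def lid_score(pred, k=20):
    """LID score of (D_k, f): average MLE-LID (Eq. 2) over the model's prediction vectors.

    pred: (n, M) tensor of prediction vectors {f(x)} on the local dataset.
    k: number of nearest neighbors used by the estimator (a hyper-parameter we assume).
    """
    dist = torch.cdist(pred, pred)                  # pairwise distances between predictions
    knn, _ = dist.topk(k + 1, largest=False)        # k+1 smallest, including the point itself
    r = knn[:, 1:].clamp_min(1e-12)                 # drop the self-distance, guard log(0)
    r_max = r[:, -1:]                               # distance to the k-th nearest neighbor
    lid = -1.0 / torch.log(r / r_max).mean(dim=1)   # Eq. (2), one value per prediction
    return lid.mean().item()                        # single scalar sent to the server
```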
Every iteration is organized by communication rounds, similar to the usual FL, but with two key differences: a small fraction is used, and clients are selected without replacement. Each iteration ends when all clients have participated. It is known that large fractions could help improve the convergence rate [26], and a linear speedup could even be achieved in the case of convex loss functions [29]. However, large fractions have a weak effect in non-IID settings, while intuitively, small fractions would yield aggregated models that deviate less from local models; cf. [23]. These observations inspire us to propose a fraction scheduling scheme that combines the advantages of both small and large fractions. Speci\ufb01cally, we sample clients using a small fraction without replacement in the pre-processing stage, and use a typical larger fraction with replacement in the latter two stages. By sampling without replacement during preprocessing, we ensure all clients participate equally for the evaluation of the overall quality of labels in local datasets. Mixup and local proximal regularization. Throughout the pre-processing stage, for client k with batch (Xb, Yb) = {(xi, xj)}nb i=1 (where nb denotes batch size), we use the following loss function: L(Xb) = LCE \u0010 f (t) k ( \u02dc Xb), \u02dc Yb \u0011 +\u03b2\u02c6 \u00b5(t\u22121) k \r \rw(t) k \u2212w(t\u22121)\r \r2. (5) Here, f (t) k = f(\u00b7; w(t) k ) denotes the local model of client k in round t, and w(t\u22121) denotes the weights of the global model obtained in the previous round t \u22121. The \ufb01rst term in (5) represents the cross-entropy loss on the mixup augmentation of (Xb, Yb), while the second term in (5) is an adaptive local proximal regularization term, where \u02c6 \u00b5(t\u22121) k is the estimated noise level of client k to be de\ufb01ned later. It should be noted that our local proximal regularization term is only applied in the pre-processing stage. Recall that mixup [41] is a data augmentation technique that favors linear relations between samples, and that has been shown to exhibit strong robustness to label noise [3,18]. Mixup generates new samples (\u02dc x, \u02dc y) as convex combinations of randomly selected pairs of samples (xi, yi) and (xj, yj), given by \u02dc x = \u03bbxi+(1\u2212\u03bb)xj, \u02dc y = \u03bbyi+(1\u2212\u03bb)yj, where \u03bb \u223cBeta(\u03b1, \u03b1), and \u03b1 \u2208(0, \u221e). (We use \u03b1 = 1 in our experiments.) Intuitively, mixup achieves robustness to label noise due to random interpolation. For example, if (xi, \u02c6 yi) is a noisy sample and if yi is the true label, then the negative impact caused by an incorrect label \u02c6 yi is alleviated when paired with a sample whose label is yi. Our adaptive local proximal regularization term is scaled by \u02c6 \u00b5(t\u22121) k , which is the estimated noise level of client k computed at the end of round t \u22121. (In particular, this term would vanish for clean clients.) The hyperparameter \u03b2 is also incorporated to control the overall effect of this term. Intuitively, if a client\u2019s dataset has a larger discrepancy from other local datasets, then the corresponding local model would deviate more from the global model, thereby contributing a larger loss value for the local proximal term. Identi\ufb01cation of noisy clients and noisy samples. To address the challenge of heterogeneous label noise, we shall iteratively identify and relabel the noisy samples. 
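A hedged sketch of the local objective in Eq. (5), combining mixup with the noise-level-weighted proximal term, is given below. Folding mixup's soft label into two cross-entropy terms is a common implementation trick, and the function signature is ours rather than the paper's.

```python
import torch
import torch.nn.functional as F

def local_loss(model, global_params, x, y, noise_hat, beta, alpha=1.0):
    """Pre-processing-stage loss of Eq. (5): CE on a mixup batch plus an adaptive
    proximal term weighted by the client's estimated noise level from the last round.

    global_params: parameters of the aggregated global model w^(t-1), in the same
                   order as model.parameters(); noise_hat: estimated local noise level.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[idx]                        # mixup inputs
    logits = model(x_mix)
    # CE against the soft mixup label equals a convex combination of two CE terms
    ce = lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[idx])
    prox = sum(((w - wg.detach()) ** 2).sum()                     # ||w_k^(t) - w^(t-1)||^2
               for w, wg in zip(model.parameters(), global_params))
    return ce + beta * noise_hat * prox
```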
In each iteration of this pre-processing stage, where all clients participate, every client will compute the LID score and per-sample loss for its current local model (see Algorithm 1, lines 3-9). Specifically, when client k is selected in round t, we train the model f^(t)_k on the local dataset D_k and then compute the LID score of (D_k, f^(t)_k) via (2). Note that our proposed framework preserves the privacy of client data, since in comparison to the usual FL, there is only an additional LID score sent to the server, which is a single scalar that reflects only the predictive discriminability of the local model. Since the LID score is computed from the predictions of the output layer (of the local model), knowing this LID score does not reveal information about the raw input data. This additional LID score is a single scalar, hence it has a negligible effect on communication cost. At the end of iteration t, we perform the following three steps: 1. The server first computes a Gaussian Mixture Model (GMM) on the cumulative LID scores of all N clients. Using this GMM, the set of clients S is partitioned into two subsets: S_n (noisy clients) and S_c (clean clients). 2. Each noisy client k \in S_n locally computes a new GMM on the per-sample loss values of all samples in the local dataset D_k. Using this GMM, D_k is partitioned into two subsets: a clean subset D^c_k and a noisy subset D^n_k. We observe that the large-loss samples are more likely to have noisy labels. The local noise level of client k can then be estimated by \hat{\mu}^{(t)}_k = |D^n_k|/|D_k| if k \in S_n, and \hat{\mu}^{(t)}_k = 0 otherwise. 3. Each noisy client k \in S_n relabels its noisy samples by using the predicted labels of the global model as the new labels. In order to avoid overcorrection, we only relabel those samples that are identified as noisy with high confidence. This partial relabeling is controlled by a relabel ratio \pi and a confidence threshold \theta. Take noisy client k for example: we first choose the samples from D^n_k that correspond to the top-\pi\cdot|D^n_k| largest per-sample cross-entropy losses. Next, we obtain the prediction vectors of the global model, and relabel a sample only when the maximum entry of its prediction vector exceeds \theta. Thus, the subset \widetilde{D}^{n\prime}_k of samples to be relabeled is given by \widetilde{D}^n_k = \arg\max_{\tilde{D}\subseteq D^n_k,\ |\tilde{D}|=\pi\cdot|D^n_k|} L_{CE}(\tilde{D}; f^{(t)}_G); (6) \widetilde{D}^{n\prime}_k = \{(x, y) \in \widetilde{D}^n_k \mid \max(f^{(t)}_G(x)) \geq \theta\}; (7) where f^{(t)}_G is the global model at the end of iteration t. Why do we use cumulative LID scores in step 1? In deep learning, it has been empirically shown that when training on a dataset with label noise, the evolution of the representation space of the model exhibits two distinct phases: (1) an early phase of dimensionality compression, where the model tends to learn the underlying true data distribution, and (2) a later phase of dimensionality expansion, where the model overfits to noisy labels [25]. We observed that clients with larger noise levels tend to have larger LID scores. Also, the overlap of LID scores between clean and noisy clients would increase during training. This increase could be due to two reasons: (1) the model may gradually overfit to noisy labels, and (2) we correct the identified noisy samples after each iteration, thereby making the clients with low noise levels less distinguishable from clean clients.
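The three per-iteration steps above can be sketched with an off-the-shelf two-component Gaussian mixture. The helpers below are our own illustration: the component with the larger mean is taken as the "noisy" side, and the default pi and theta values are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_by_gmm(scores):
    """Fit a 2-component GMM to 1-D scores; return a mask for the higher-mean component.
    Used by the server on cumulative LID scores (step 1) and by each noisy client on
    its per-sample losses (step 2)."""
    s = np.asarray(scores, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(s)
    noisy_component = int(np.argmax(gmm.means_.ravel()))
    return gmm.predict(s) == noisy_component

def relabel_candidates(losses, global_probs, noisy_mask, pi=0.5, theta=0.5):
    """Step 3: among the top-(pi*|D_k^n|) largest-loss noisy samples, keep only those on
    which the global model is confident (Eqs. 6-7) and return their new labels."""
    noisy_idx = np.where(noisy_mask)[0]
    order = noisy_idx[np.argsort(-losses[noisy_idx])]         # sort noisy samples by loss
    cand = order[: int(pi * len(noisy_idx))]                  # Eq. (6): largest-loss subset
    keep = cand[global_probs[cand].max(axis=1) >= theta]      # Eq. (7): confidence filter
    return keep, global_probs[keep].argmax(axis=1)

# The estimated local noise level of a noisy client (step 2) is simply noisy_mask.mean().
```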
Hence, the cumulative LID score (i.e., the sum of LID scores in all past iterations) is a better metric for distinguishing noisy clients from clean clients; see the top two plots in Fig. 3 for a comparison of using LID score versus cumulative LID score. Furthermore, the bottom two plots in Fig. 3 show that cumulative LID score has a stronger linear relation with local noise level. 3.3. Federated \ufb01netuning stage We aim to \ufb01netune the global model fG on relatively clean clients over T2 rounds and further relabel the remaining noisy clients. The aggregation at the end of round t is given by the same equation (3), with one key difference: St is now a subset of Sc = {k|1 \u2264k \u2264N, \u02c6 \u00b5(T1) k \u2264\u03ba}, where \u03ba is the threshold used to select relatively clean clients based on the estimated local noise levels \u02c6 \u00b5(T1) 1 , ..., \u02c6 \u00b5(T1) N . At the end of the \ufb01netuning stage, we relabel the remaining noisy clients Sn = S \\ Sc with the predicted labels of fG. Similar to the correction process in the pre-processing stage, we use the same con\ufb01dence threshold \u03b8 to control the subset of samples to be relabeled; see (7). 3.4. Federated usual training stage In this \ufb01nal stage, we train the global model over T3 rounds via the usual FL (FedAvg) on all the clients, using the labels corrected in the previous two training stages. We also incorporate this usual training stage with three FL methods to show that methods based on different techniques Figure 3. Empirical evaluation of LID score (left) and cumulative LID score (right) after 5 iterations on CIFAR-10 with noise model (\u03c1, \u03c4) = (0.6, 0.5), and with IID data partition, over 100 clients. Top: probability density function and estimated GMM; bottom: LID/cumulative LID score vs. local noise level for each client. Dataset CIFAR-10 CIFAR-100 Clothing1M Size of Dtrain 50,000 50,000 1,000,000 # of classes 10 100 14 # of clients 100 50 500 Fraction \u03b3 0.1 0.1 0.02 Architecture ResNet-18 ResNet-34 pre-trained ResNet-50 Table 1. List of datasets used in our experiments. can be well-incorporated with FedCorr, even if they are not designed speci\ufb01cally for robust FL; see Sec. 4.2. 4. Experiments In this section, we conduct experiments in both IID (CIFAR-10/100 [16]) and non-IID (CIFAR-10, Clothing1M [34]) data settings, at multiple noise levels, to show that FedCorr is simultaneously robust to both local label quality discrepancy and data statistics discrepancy. To demonstrate the versatility of FedCorr, we also show that various FL methods can have their performances further improved by incorporating the \ufb01rst two stages of FedCorr. We also conduct an ablation study to show the effects of different components of FedCorr. Details on data partition and the noise model used have already been given in Sec. 3.1. 4.1. Experimental Setup Baselines. There are two groups of experiments. In the \ufb01rst group, we demonstrate that FedCorr is robust to discrepancies in both data statistics and label quality. 
We compare FedCorr with the following state-of-the-art methods from three categories: (1) methods to tackle label noise in CL (JointOpt [30] and DivideMix [18]) applied to local clients; (2) classic FL methods (FedAvg [26] 6 \fSetting Method Best Test Accuracy (%) \u00b1 Standard Deviation (%) \u03c1 = 0.0 \u03c1 = 0.4 \u03c1 = 0.6 \u03c1 = 0.8 \u03c4 = 0.0 \u03c4 = 0.0 \u03c4 = 0.5 \u03c4 = 0.0 \u03c4 = 0.5 \u03c4 = 0.0 \u03c4 = 0.5 Centralized (for reference) JointOpt 93.73\u00b10.21 92.29\u00b10.37 92.11\u00b10.21 91.26\u00b10.46 88.42\u00b10.33 89.18\u00b10.29 85.62\u00b11.17 DivideMix 95.64\u00b10.05 96.39\u00b10.09 96.17\u00b10.05 96.07\u00b10.06 94.59\u00b10.09 94.21\u00b10.27 94.36\u00b10.16 Federated FedAvg 93.11\u00b10.12 89.46\u00b10.39 88.31\u00b10.80 86.09\u00b10.50 81.22\u00b11.72 82.91\u00b11.35 72.00\u00b12.76 FedProx 92.28\u00b10.14 88.54\u00b10.33 88.20\u00b10.63 85.80\u00b10.41 85.25\u00b11.02 84.17\u00b10.77 80.59\u00b11.49 RoFL 88.33\u00b10.07 88.25\u00b10.33 87.20\u00b10.26 87.77\u00b10.83 83.40\u00b11.20 87.08\u00b10.65 74.13\u00b13.90 ARFL 92.76\u00b10.08 85.87\u00b11.85 83.14\u00b13.45 76.77\u00b11.90 64.31\u00b13.73 73.22\u00b11.48 53.23\u00b11.67 JointOpt 88.16\u00b10.18 84.42\u00b10.70 83.01\u00b10.88 80.82\u00b11.19 74.09\u00b11.43 76.13\u00b11.15 66.16\u00b11.71 DivideMix 77.96\u00b10.15 77.35\u00b10.20 74.40\u00b12.69 72.67\u00b13.39 72.83\u00b10.30 68.66\u00b10.51 68.04\u00b11.38 Ours 93.82\u00b10.41 94.01\u00b10.22 94.15\u00b10.18 92.93\u00b10.25 92.50\u00b10.28 91.52\u00b10.50 90.59\u00b10.70 Table 2. Average (5 trials) and standard deviation of the best test accuracies of various methods on CIFAR-10 with IID setting at different noise levels (\u03c1: ratio of noisy clients, \u03c4: lower bound of client noise level). The highest accuracy for each noise level is boldfaced. Method Best Test Accuracy (%) \u00b1 Standard Deviation(%) \u03c1 = 0.0 \u03c1 = 0.4 \u03c1 = 0.6 \u03c1 = 0.8 \u03c4 = 0.0 \u03c4 = 0.5 \u03c4 = 0.5 \u03c4 = 0.5 JointOpt (CL) 72.94\u00b10.43 65.87\u00b11.50 60.55\u00b10.64 59.79\u00b12.45 DivideMix (CL) 75.58\u00b10.14 75.43\u00b10.34 72.26\u00b10.58 71.02\u00b10.65 FedAvg 72.41\u00b10.18 64.41\u00b11.79 53.51\u00b12.85 44.45\u00b12.86 FedProx 71.93\u00b10.13 65.09\u00b11.46 57.51\u00b12.01 51.24\u00b11.60 RoFL 67.89\u00b10.65 59.42\u00b12.69 46.24\u00b13.59 36.65\u00b13.36 ARFL 72.05\u00b10.28 51.53\u00b14.38 33.03\u00b11.81 27.47\u00b11.08 JointOpt 67.49\u00b10.36 58.43\u00b11.88 44.54\u00b12.87 35.25\u00b13.02 DivideMix 45.91\u00b10.27 43.25\u00b11.01 40.72\u00b11.41 38.91\u00b11.25 Ours 72.56\u00b12.07 74.43\u00b10.72 66.78\u00b14.65 59.10\u00b15.12 Table 3. Average (5 trials) and standard deviation of the best test accuracies on CIFAR-100 with IID setting. Method\\(p, \u03b1Dir) (0.7, 10) (0.7, 1) (0.3, 10) FedAvg 78.88\u00b12.34 75.98\u00b12.92 67.75\u00b14.38 FedProx 83.32\u00b10.98 80.40\u00b10.94 73.86\u00b12.41 RoFL 79.56\u00b11.39 72.75\u00b12.21 60.72\u00b13.23 ARFL 60.19\u00b13.33 55.86\u00b13.30 45.78\u00b12.84 JointOpt 72.19\u00b11.59 66.92\u00b11.89 58.08\u00b12.18 DivideMix 65.70\u00b10.35 61.68\u00b10.56 56.67\u00b11.73 Ours 90.52\u00b10.89 88.03\u00b11.08 81.57\u00b13.68 Table 4. Average (5 trials) and standard deviation of the best test accuracies of different methods on CIFAR-10 with different nonIID setting. The noise level is (\u03c1, \u03c4) = (0.6, 0.5). Settings FedAvg FedProx RoFL ARFL JointOpt Dividemix Ours FL 70.49 71.35 70.39 70.91 71.78 68.83 72.55 CL 72.23 74.76 Table 5. 
Best test accuracies on Clothing1M with non-IID setting. CL results are the accuracies reported in corresponding papers. and FedProx [21]); and (3) FL methods designed to be robust to label noise (RoFL [38] and ARFL [7]). For reference, we also report experimental results on JointOpt and DivideMix in CL, so as to show the performance reduction of these two methods when used in FL. In the second group, we demonstrate the versatility of FedCorr. We examine the performance improvements of three state-of-the-art methods when the \ufb01rst two stages of FedCorr are incorporated. These methods are chosen from three different aspects to improve FL: local optimization (FedDyn [1]), aggregation (Median [39]) and client selection (PoC [6]). Implementation details. We choose different models and number of clients N for each dataset; see Tab. 1. For data pre-processing, we perform normalization and image augmentation using random horizontal \ufb02ipping and random cropping with padding=4. We use an SGD local optimizer with a momentum of 0.5, with a batch size of 10 for CIFAR-10/100 and 16 for Clothing1M. With the exception of JointOpt and DivideMix used in FL settings, we shall always use 5 local epochs across all experiments. For FedCorr, we always use the same hyperparameters on the same dataset. In particular, we use T1 = 5, 10, 2 for CIFAR-10, CIFAR-100, Clothing1M, respectively. For fraction scheduling, we use the fraction \u03b3 = 1 N in the preprocessing stage, and we use the fractions speci\ufb01ed in Tab. 1 for the latter two stages. Further implementation details can be found in the supplementary material; see Appendix B. 4.2. Comparison with state-of-the-art methods IID settings. We compare FedCorr with multiple baselines at different noise levels, using the same con\ufb01guration. Tab. 2 and Tab. 3 show the results on CIFAR-10 and CIFAR-100, respectively. In summary, FedCorr achieves best test accuracies across all noise settings tested on both datasets, with particularly signi\ufb01cant outperformance in the case of high noise levels. Note that we have implemented 7 \fMethod Best Test Accuracy (%) \u00b1 Standard Deviation (%) \u03c1 = 0.0 \u03c1 = 0.4 \u03c1 = 0.6 \u03c1 = 0.8 \u03c4 = 0.0 \u03c4 = 0.0 \u03c4 = 0.5 \u03c4 = 0.0 \u03c4 = 0.5 \u03c4 = 0.0 \u03c4 = 0.5 Ours 93.82\u00b10.41 94.01\u00b10.22 94.15\u00b10.18 92.93\u00b10.25 92.50\u00b10.28 91.52\u00b10.50 90.59\u00b10.70 Ours w/o correction 92.85\u00b10.66 93.71\u00b10.20 93.60\u00b10.21 92.15\u00b10.29 91.77\u00b10.65 90.48\u00b10.56 88.77\u00b11.10 Ours w/o frac. scheduling 86.05\u00b11.47 85.59\u00b11.10 78.44\u00b17.90 80.29\u00b12.62 77.96\u00b13.65 76.67\u00b13.48 72.71\u00b15.03 Ours w/o local proximal 93.37\u00b10.05 93.64\u00b10.15 93.46\u00b10.17 92.34\u00b10.14 91.74\u00b10.47 90.45\u00b10.94 88.74\u00b11.72 Ours w/o \ufb01netuning 92.71\u00b10.18 93.06\u00b10.15 92.62\u00b10.28 91.41\u00b10.14 89.31\u00b10.90 89.62\u00b10.40 83.81\u00b12.59 Ours w/o usual training 93.11\u00b10.10 93.53\u00b10.17 93.46\u00b10.14 92.16\u00b10.24 91.50\u00b10.51 90.62\u00b10.59 88.97\u00b11.37 Ours w/o mixup 90.63\u00b10.70 88.83\u00b11.88 91.34\u00b10.39 87.79\u00b10.89 87.50\u00b11.33 87.86\u00b10.53 83.29\u00b11.78 Table 6. Ablation study results (average and standard deviation of 5 trials) on CIFAR-10. Figure 4. Best test accuracies of three FL methods combined with FedCorr on CIFAR-10/100 with multiple \u03c1 and \ufb01xed \u03c4 = 0.5. 
JointOpt and DivideMix in both centralized and federated settings to show the performance reduction (10% \u223c 30% lower for best accuracy) when these CL methods are applied to local clients in FL. Furthermore, the accuracies in CL can also be regarded as upper bounds for the accuracies in FL. Remarkably, the accuracy gap between DivideMix in CL and FedCorr in FL is < 4% even in the extreme noise setting (\u03c1, \u03c4) = (0.8, 0.5). In the centralized setting, we use the dataset corrupted with exactly the same scheme as in the federated setting. For the federated setting, we warm up the global model for 20 rounds with FedAvg to avoid introducing additional label noise during the correction process in the early training stage, and we then apply JointOpt or DivideMix locally on each selected client, using 20 local training epochs. Non-IID settings. To evaluate FedCorr in more realistic heterogeneous data settings, we conduct experiments using the non-IID settings as described in Sec. 3.1, over different values for (p, \u03b1Dir). Tab. 4 and Tab. 5 show the results on CIFAR-10 and Clothing1M, respectively. Note that we do not add synthetic label noise to Clothing1M, since it already contains real-world label noise. For CIFAR-10, FedCorr consistently outperforms all baselines by at least 7%. For Clothing1M, FedCorr also achieves the highest accuracy in FL, and this accuracy is even higher than the reported accuracy of JointOpt in CL. Combination with other FL methods. We also investigate the performance of three state-of-the-art methods, when the \ufb01rst two stages of FedCorr are incorporated. As shown in Fig. 4, we consistently obtain signi\ufb01cant accuracy improvements on CIFAR-10/100 for various ratios of noisy clients. 4.3. Ablation study Tab. 6 gives an overview of the effects of the components in FedCorr. Below, we consolidate some insights into what makes FedCorr successful: \u2022 All components help to improve accuracy. \u2022 Fraction scheduling has the largest effect. The small fraction used in the pre-processing stage helps to capture local data characteristics, as it avoids information loss brought by aggregation over multiple models. \u2022 The highest accuracy among different noise levels is primarily achieved at a low noise level (e.g. \u03c1 = 0.4) and not at the zero noise level, since additional label noise could be introduced during label correction. 5." + } + ], + "Hieu Le": [ + { + "url": "http://arxiv.org/abs/2307.08716v1", + "title": "Enforcing Topological Interaction between Implicit Surfaces via Uniform Sampling", + "abstract": "Objects interact with each other in various ways, including containment,\ncontact, or maintaining fixed distances. Ensuring these topological\ninteractions is crucial for accurate modeling in many scenarios. In this paper,\nwe propose a novel method to refine 3D object representations, ensuring that\ntheir surfaces adhere to a topological prior. Our key observation is that the\nobject interaction can be observed via a stochastic approximation method: the\nstatistic of signed distances between a large number of random points to the\nobject surfaces reflect the interaction between them. Thus, the object\ninteraction can be indirectly manipulated by using choosing a set of points as\nanchors to refine the object surfaces. In particular, we show that our method\ncan be used to enforce two objects to have a specific contact ratio while\nhaving no surface intersection. 
The conducted experiments show that our\nproposed method enables accurate 3D reconstruction of human hearts, ensuring\nproper topological connectivity between components. Further, we show that our\nproposed method can be used to simulate various ways a hand can interact with\nan arbitrary object.", + "authors": "Hieu Le, Nicolas Talabot, Jiancheng Yang, Pascal Fua", + "published": "2023-07-16", + "updated": "2023-07-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Modeling the interaction between objects is at the heart of numerous applications such as computer graphics, virtual reality, robotics, and health care. Most existing works focus on the dynamic interactions between human objects such as hand-object [21], body-garment [16], or human-scene [6]. In this paper, we focus on the problem of multi-objects reconstruction where there is a strong prior indicating the topological interaction between them. The goal is to simultaneously reconstruct multiple objects while ensuring their interaction aligns with a prior. The interaction [4] can take various forms including containment, contact with specific surface ratios, or maintaining fixed distances between objects. This is particularly useful in modeling different parts of composite objects. For example, when reconstructing a heart from imagery for medical purposes, the four ventricles should touch each other at the anatomically appropriate locations, and exhibit the right connections to allow blood flow, but without ever overlapping with each other. We propose a novel way to enforce topological constraints between 3D objects, with a specific focus on cases like the aforementioned heart ventricles. More specifically, given two 3D objects represented by their deep implicit signed distance functions [19] and a prior indicating a desired contact ratio (%) between them, our goal is to refine them such that they contact each other with that exact ratio and do not intersect. To achieve this, we first uniformly sample a large number of points and compute their signed distances to the two objects. At any given state of the objects, we can estimate the intersection Preprint. Under review. arXiv:2307.08716v1 [cs.CV] 16 Jul 2023 \fvolume by counting the number of random points that fall within both objects. Similarly, the contact ratio can be estimated by evaluating the ratio of points that lie in close proximity to the surfaces of both objects. Based on the prior, we can estimate the expected number of random points that should be in contact with both objects and fine-tune the object\u2019s implicit functions accordingly. We apply our proposed method to the problem of reconstructing different components of the 3D human heart. Each pair of heart components exhibits a consistent type of topological interaction. They should never intersect and consistently maintain a specific contact ratio, i.e., there should be no inter-penetrations but they should match tightly, without unwarranted gaps. While these constraints are not automatically satisfied when using individual deepSDF [19] to model each component, the results of our method exhibit proper topological interactions and also have lower Chamfer distances than the baseline method. Further, we show that our proposed system can modify a generative implicit hand model to interact with an arbitrary object in a proper manner. 
Our proposed losses enable the hand model to be re-positioned and its pose optimized to increase the contact ratio while preventing any intersection with the object. In short, our core contribution is to show that topological interactions between 3D objects can be enforced via uniformly sampled 3D points. In essence, we use the statistics of random-point-object distances to indirectly observe and manipulate the object interaction. 2 Related Work Image-Based Topological Interaction. Many 2D segmentation problems involve semantic classes that have some relative topology constraints between them, such as road connectivity over a background or cell nuclei that should be contained within the cytoplasm. Mosinska et al. [17] proposes a topology-aware loss that uses the response of selected filters from a pre-trained VGG19 network. These filters prefer elongated shapes and thus alleviate the broken connection issue. Hu et al. [9] uses a topology loss based on persistent diagrams to help cell segmentation. Other works rely on detecting and penalizing critical pixels for topology interaction between classes, such as Hu [8] and Gupta et al. [4], using homotopy warping and convolutions respectively to find these pixels. However, image and pixel based topology constraints do not lend themselves easily to implicit multi-object 3D reconstruction. Multi-Object 3D Reconstruction. Multi-object 3D reconstruction [10, 14] is a fundamental task for scene understanding or generation. The presence of multiple objects poses a different set of challenges compared to single-object reconstruction where objects are usually treated as isolated geometries without considering the scene context, such as object locations and instance-to-instance interactions. For multi-object reconstruction, Mesh R-CNN [3] augments Mask R-CNN [7] with a mesh predictions branch that estimates a 3D mesh for each detected object in an image. Total3DUnderstanding [18] presents a framework that predicts room layout, 3D object bounding boxes, and meshes for all objects in an image based on the known 2D bounding boxes. However, these three methods first detect objects in the 2D image, and then independently produce their 3D shapes with single object reconstruction modules. Liu and Liu [14] propose a system to infer the pose, size, and location of 3D bounding boxes and the 3D shapes of multiple object instances in the scene, which is divided into a grid whose cells are occupied by objects. Irshad et al. [11] recovers objects shape, appearance, and poses using implicit representations, as has become increasingly frequent, e.g., [11, 12, 21, 1]. 3D Interaction. While some works aim to enforce physical plausibility between objects in a scene, e.g., Engelmann et al. [2] that enforces a collision loss between reconstructed objects, most works focus on human-object interactions. Karunratanakul et al. [12] models the grasp between the hand and an object as implicit surfaces and learns to generate new grasps using a VAE, Ye et al. [21] reconstruct from an image the hand-object interaction, and at the same time the object as an implicit surface, and Contact2Grasp [13] learns to synthesize grasps by first predicting a contact map on the object surfaces. Other directions include body-garment interaction, such as DrapeNet [1] with a physically based self-supervision, or human-scene interaction [6]. 
However, these works primarily focus on interactions between the articulated human body and objects, without incorporating any prior information on how they should interact. In contrast, our approach considers a specific topological prior that needs to be enforced between two objects. 2 \fA B Uniformly sampled points SDFA SDFB A B Lcontact Lnon-intersection A B prior: contact ratio = 20% Lnon-contact (a) (b) (c) (d) Figure 1: Overview. We start with a pair of objects represented by their deep signed distance functions [19] (a) and a prior indicating the desired contact surface percentage. We first compute the signed distances between the two objects and a set of uniformly sampled points (b). We select a subset of points that reside in close proximity to the surfaces of both objects, with the number of points determined based on the provided prior. By utilizing these points as anchors, we refine the objects by fine-tuning their deep-signed distance functions to ensure that no points are located inside the surfaces of both objects while simultaneously pulling the objects towards areas that should be in contact (c). After fine-tuning, the two objects exhibit proper interaction, aligning with the topological prior (d). 3 Method Our method refines two 3D implicit objects to ensure their interaction aligns with a topological prior. In this section, we describe our method designed to enforce two conditions: 1) objects should not intersect each other and 2) they contact each other with given percentage surface areas. Different configurations of our method designed for other interactions are included in the supplementary material. Figure 1 summarizes our proposed method. Given a pair of objects represented by their deep signed distance functions (deep-SDF)[19] and a prior indicating the contact ratio (the percentage of surface areas that should be in contact with the other object), we first compute the signed distances between the two objects and a randomly sampled point cloud. We then select a subset of points that reside in close proximity to the surfaces of both objects, with the number of points determined based on the provided prior. By utilizing these points as anchors, we refine the objects by fine-tuning their deep-SDF to ensure that no points are located inside the surfaces of both objects while simultaneously pulling the object surfaces towards areas that should be in contact. 3.1 Deep-SDF Preliminary Implicit surface representations in the form of singed-distance functions [19] have recently emerged as a powerful model to learn continuous representations of 3D shapes. They allow detailed reconstructions of object instances as well as meaningful interpolations between them. A signed distance function of an object is a function that outputs the point\u2019s distance to the closest object surface: SDF(x) = s : x \u2208R3, s \u2208R (1) Conventionally, the distance is negative if the point is inside the object and positive otherwise. In this paper, we focus on the interaction between a pair of objects instances of two categories (A, B) represented by two deep signed-distance functions fA(a, x) = SDFidx(a) A (x) and fB(b, x) = SDFidx(b) B (x). Here, fA(\u00b7, \u00b7) and fB(\u00b7, \u00b7) are two deep networks approximating SDFs of multiple objects of category A and B, respectively. The latent vector a encodes information for a specific instance idx(a) of category A and the latent b for a specific object idx(b) similarly. 
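To make this notation concrete, the sketch below shows one way such a latent-conditioned signed-distance decoder could be written in PyTorch. It is only an illustration of the interface fA(a, x), fB(b, x): the layer widths, the latent dimension, and the Tanh output range are assumptions and not necessarily the architecture of [19].

```python
import torch
import torch.nn as nn

class DeepSDFDecoder(nn.Module):
    """Minimal latent-conditioned SDF decoder: (latent code, 3D point) -> signed distance."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),  # signed distance kept in [-1, 1]
        )

    def forward(self, latent, points):
        # latent: (latent_dim,) code for one instance; points: (N, 3) query points
        z = latent.unsqueeze(0).expand(points.shape[0], -1)
        return self.net(torch.cat([z, points], dim=-1)).squeeze(-1)

# f_A(a, x) and f_B(b, x) in the text correspond to two such decoders,
# each queried with the latent code of its instance:
f_A, f_B = DeepSDFDecoder(), DeepSDFDecoder()
a = torch.randn(256)               # latent code for an instance of category A
x = torch.rand(1024, 3) * 2 - 1    # uniformly sampled query points in [-1, 1]^3
sdf_a = f_A(a, x)                  # signed distances of the points to object A
```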
Following [19], fA(\u00b7, \u00b7) and fB(\u00b7, \u00b7) are trained on a large set of training instances while at test time, we optimize the latent codes (a, b) based on the testing instances\u2019 signed distances as well as their interaction prior. For the rest of the paper, we refer to the approximated deep SDF functions fA(a, \u00b7) and fB(b, \u00b7) as deep-SDF objects. 3 \f3.2 Enforcing Topological Interaction via Random Points Let us consider explicit meshes of two 3D objects denoted as (MA, MB). The contact ratio of mesh MA w.r.t. mesh MB is defined as follows: PMA,MB = Area(SAB) Area(SA) (2) where SAB represents the partial surface of object A that includes all points within a small distance to object B, and SA refers to the entire surface of object A. However, computing this ratio using object meshes is computationally expensive, and further, it is not directly applicable for implicitly represented objects where the object surfaces are not readily available [15]. To overcome this issue, our key observation is that the contact ratio between the two implicit objects can be closely approximated via the signed distances between them to a larger number of uniformly random points. In particular, the contact ratio of a deep-SDF object fA(ai, \u00b7) to another deep-SDF object fB(bi, \u00b7) can be approximated by the following equation: P \u2032 A,B = PN i=1 1(|fA(a, xi)| < \u03f5) \u00d7 1(|fB(b, xi)| < \u03f5) PN i=1 1(|fA(a, xi)| < \u03f5) (3) where {xi : i \u2208(1, N)} is the set of N uniformly random points, \u03f5 is a small threshold indicating the \u201ccontact\u201d distance, and 1(\u00b7) is the indicator function that returns the value 1 if its statement is true and 0 otherwise. In essence, the contact ratio here is estimated via a stochastic Monte Carlo method by counting the number of points lying close to the surfaces of both objects and the number of points lying close to the surface of one object. Our analysis shows that with a large enough value of N, the result obtained from equation 3 closely approximates the contact ratio calculated using explicit meshes in equation 2. Building on this observation, we propose to refine the topological interaction between two implicit objects by adjusting their distances to a uniformly sampled point cloud. The goal of the adjustment is to ensure the two objects interact with a similar contact ratio as a given prior while having no intersection. Let us consider two deep-SDF objects fA(a, \u00b7) and fB(b, \u00b7) with a prior that k% of fA(a, \u00b7) surface should be in contact distance with fB(b, \u00b7). Here we assume that k is given or can be trivially computed from the training set. Given N uniformly random points {xi : i \u2208 (1, N)}, the number of points that is close to the surface of fA(a, \u00b7) can be calculated as SurfaceA = PN i=1 1(|fA(a, xi)| < \u03f5). Based on the prior, the expected number of points lying on the contact surface between the two objects should be T = SurfaceA \u00d7 k%. Thus, we choose a set of anchor points Acontact containing the top T points with the smallest distance to B among points lying close to the surface of A. To prevent the two objects from having a larger contact ratio than the prior, we select a set of anchor points Anon-contact containing all points within a contact distance to both objects while not being in Acontact. 
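A compact sketch of the stochastic contact-ratio estimate in Eq. (3) and of the anchor-set construction just described is given below. It assumes that sdf_a and sdf_b hold the signed distances of the same N uniformly sampled points to the two objects; the contact threshold value is illustrative.

```python
import torch

def approx_contact_ratio(sdf_a, sdf_b, eps=0.008):
    """Monte Carlo estimate of P'_{A,B} in Eq. (3): fraction of A's near-surface
    points that are also within contact distance of B."""
    near_a = sdf_a.abs() < eps
    near_both = near_a & (sdf_b.abs() < eps)
    return near_both.float().sum() / near_a.float().sum().clamp(min=1)

def select_anchor_points(sdf_a, sdf_b, prior_k, eps=0.008):
    """A_contact: the top T points closest to B among points near A's surface,
    with T = (#near-surface points of A) * k%.  A_non_contact: the remaining
    points that lie within contact distance of both objects."""
    near_a = torch.nonzero(sdf_a.abs() < eps).squeeze(-1)
    T = int(prior_k * near_a.numel())
    order = torch.argsort(sdf_b[near_a].abs())
    contact_idx = near_a[order[:T]]
    rest = near_a[order[T:]]
    non_contact_idx = rest[sdf_b[rest].abs() < eps]
    return contact_idx, non_contact_idx
```

In practice the points would be re-sampled periodically so that the estimate tracks the current object shapes, as noted at the end of Sec. 3.2.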
To ensure that there is no intersection between two objects, we find all anchor points lying inside both fA(a, \u00b7) and fB(b, \u00b7) and push their surfaces to these points via a non-intersection loss function: Lni = X x\u2208Acontact\u222aAnon-contact:(fA(a,x)<0)\u2227(fB(b,x)<0) clamp(|fA(a, x)|, \u03b41) + clamp(|fB(b, x)|, \u03b41) (4) where clamp(\u00b7, \u03b4) function restricts a given value between an upper and lower bound [\u2212\u03b4, \u03b4] and \u03b4\u00b7 are hyper-parameters of our model. Since fA(a, x) and fB(b, x) are both negative, this loss function drives the surface of both objects to be pushed toward these anchor points. To enforce a contact ratio of k% between the two objects, we enforce that all chosen anchor points in Acontact need to be within small distances to both objects via a contact loss function: Lcontact = X x\u2208Acontact:(fA(a,x)<0)\u2228(fB(b,x)<0) clamp(|fA(x) + fB(x)|, \u03b42) (5) This loss pushes the surfaces of the two objects closer together. In particular, if fA(a, x) < 0 and fB(a, x) > 0 ,i.e., the point x lies inside the surface of object fA(a, \u00b7), the loss function will increase 4 \f|fA(a, x)| while decreasing |fB(b, x)|1 such that their values closely match. If fA(a, x) > 0 and fB(a, x) > 0, the loss simply pulls the surfaces of both objects to this anchor point. To prevent the two objects from having a larger contact ratio than the given prior, we push the surface of two objects further away from the anchor points in Anon-contact by a pushing loss function: Lnon\u2212contact = X x\u2208Anon-contact clamp(\u2212fA(x) \u2212fB(x), \u03b43) (6) The loss function to fine-tune the deep-SDFs of the two objects is: L = Lni \u00d7 \u03bbni + Lcontact \u00d7 \u03bbcontact + Lnon\u2212contact \u00d7 \u03bbnon\u2212contact + Ldata \u00d7 \u03bbdata (7) where Ldata is the data reconstruction term that regresses the signed distances[19] and \u03bb\u00b7 are controlling parameters. During the optimization process, it is important to note that we periodically sample points to ensure they accurately reflect the current state of the object shapes. Further details regarding the loss functions and the optimization algorithm can be found in the supplementary material. 4 Metrics and Experiments We conduct experiments on the task of 3D heart reconstruction and further demonstrate an application for simulating hands-contacting objects. Following [19], we first train deep SDFs models for different objects on a training set. At test time, we optimize a pair of random latent codes such that the models fit the signed distances to the testing objects\u2019 surfaces. 4.1 Measuring Topological Interaction Similarity via Signed Distance Histogram Distance Comparing object interactions is non-trivial since it is hard to describe. Current metrics for 3D object reconstruction including the intersection-over-union (IoU) or Chamfer distance, for the most part, do not sufficiently characterize the differences between two interactions. Two interactions can have small individual parts\u2019 chamfer distances while being completely distinct such as contacting versus slightly intersecting. To measure the topological similarities between two interactions, we propose the use of signed distance histogram distances. The histogram is computed by dividing the range of signed distances ([\u22121, 1] in our paper) into bins and then counting the percentage of surface area that falls into each bin. 
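Before detailing the histogram further, the following PyTorch-style sketch summarizes how the anchor-point losses of Eqs. (4)–(7) above could be assembled. The δ and λ values are placeholders, and the condition guards of Eqs. (4)–(5) are folded into simple index masks for brevity.

```python
import torch

def clamp_sym(v, delta):
    # clamp(v, delta) in the text: restrict v to the interval [-delta, delta]
    return torch.clamp(v, min=-delta, max=delta)

def interaction_losses(sdf_a, sdf_b, contact_idx, non_contact_idx,
                       d1=0.01, d2=0.01, d3=0.01):
    anchors = torch.cat([contact_idx, non_contact_idx])
    fa, fb = sdf_a[anchors], sdf_b[anchors]
    inside_both = (fa < 0) & (fb < 0)
    # Eq. (4): anchors that fall inside both objects pull both surfaces back out.
    l_ni = (clamp_sym(fa[inside_both].abs(), d1)
            + clamp_sym(fb[inside_both].abs(), d1)).sum()
    # Eq. (5): contact anchors pull the two surfaces together.
    l_contact = clamp_sym((sdf_a[contact_idx] + sdf_b[contact_idx]).abs(), d2).sum()
    # Eq. (6): the remaining near-surface anchors push the surfaces apart.
    l_push = clamp_sym(-sdf_a[non_contact_idx] - sdf_b[non_contact_idx], d3).sum()
    return l_ni, l_contact, l_push

# Eq. (7): weighted sum with the DeepSDF data term (weights are placeholders):
# total = w_ni * l_ni + w_contact * l_contact + w_push * l_push + w_data * l_data
```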
The histogram considers both the intersection ratio (at bin \u201c<0\u201d) and the contact ratio (the smallest positive bin). In this paper, we set the bin values at [\u2212\u221e, 0, 0.008, 0.08, 0.8, \u221e] since all meshes are rendered to the size of 2563 while the signed distances range from -1 to 1. Thus, each pixel corresponds to 0.008 units of distance. The distance between two histograms can be computed by summing the absolute difference at each bin. 4.2 3D Heart Reconstruction Accurate reconstruction of heart substructures is important in the development of clinical applications. Here we are particularly interested in building a 3D model of the human heart that is topologically correct, reflecting the accurate connectivity between different parts of human hearts. We experiment on the whole-heart segmentation dataset introduced by Zhuang and Shen [22], which includes 120 whole-heart models with 3D segmentation of various parts of the human heart such as the myocardium of left ventricle (M-LV), left atrium (LA), left ventricle (LV), right atrium (RA), and right ventricle (RV). The dataset is split into 100 training instances and 20 testing instances. We first extract connectivity priors in terms of contact ratios between different components of the heart, i.e., the ratio between the area of the contact surface over the total surface area of each part. We visualize different components of a 3D heart model and the contact ratios between them in Figure 2. In plot 2b, we use the data meshes to measure the contact ratios (i.e., equation 2) while in plot 2c, we measure the contact ratios via the signed distances of the meshes to a random point cloud (following equation 3). The values between the two plots are closely matched, showing that we can use a set of random points to measure the contact ratio between two 3D objects. As can be seen, 87.5% of the 1In this case, |fA(x)| is smaller than |fB(x)| or else the two objects would intersect. 5 \fM-LV (0) LA (1) LV (2) RA (3) RV (4) (a) Heart Components (b) Contact ratios (c) Appx. contact ratios Figure 2: Different Components of Human Heart and their contact ratios. We show in (a) a top-down visualization of different components of the human heart: (0)the myocardium of the left ventricle (M-LV), (1) the left atrium (LA), (2) the left ventricle (LV), (3) the right atrium (RA), and (4) the right ventricle (RV). In (b), we show the contact ratios between different components, which is the ratio between the contact surface area over the whole surface area of the object (equation 2). In (c), we show that we can use a set of uniformly random points to closely approximate these ratios (equation 3). LV\u2019s surface is in contact distance with the M-LV\u2019s surface since M-LV is an outer layer covering LV (see Figure 3). These ratios are relatively consistent across different instances. We average these numbers across the whole training set and use them as priors when reconstructing testing instances. Table 1 summarizes the results of our method in comparison with deepSDF[19] models trained without our proposed losses. We conduct experiments on different pairs of heart components where we focus on pairs with significant contact ratios: 1) M-LV and LV, 2) LA and LV, 3) RA and RV, and 4) M-LV and RV. We report the average intersection ratio (lower is better), contact ratio (closer to the ground truth is better), chamfer distance (lower is better), and signed distance histogram distance (lower is better). 
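For reference, a small numpy sketch of the signed-distance-histogram (SDH) distance used as the last metric might look as follows; it assumes per-sample surface areas and signed distances to the other object have already been computed, and the helper names are illustrative.

```python
import numpy as np

BIN_EDGES = [-np.inf, 0.0, 0.008, 0.08, 0.8, np.inf]  # bin values from Sec. 4.1

def sd_histogram(signed_dists, areas):
    """Fraction of one object's surface area whose signed distance to the other
    object falls into each bin."""
    hist = np.zeros(len(BIN_EDGES) - 1)
    idx = np.digitize(signed_dists, BIN_EDGES[1:-1])  # bin index per surface sample
    for b in range(len(hist)):
        hist[b] = areas[idx == b].sum()
    return hist / areas.sum()

def sdh_distance(h1, h2):
    # Distance between two interactions: sum of absolute per-bin differences.
    return np.abs(h1 - h2).sum()
```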
As can be seen, our proposed losses improve the 3D reconstruction in all metrics. The reconstructed components do not intersect each other more than 0.1% in all cases while having contact ratios closely approximating the ground truth. Shapes generated from a vanilla deepSDF [19] model significantly intersect each other while having lower contact ratios. We visualize qualitative samples for two different cases in Figure 3. For each case, we show how the two heart components connect based on the ground truth data (first two columns), the reconstructed 3D meshes using the baseline method DeepSDF, and the reconstructed 3D meshes using our method. The contact surfaces are colored green while the intersecting surfaces are colored red. The contact and intersection ratios are shown next to the plots. The results of our method show no surface intersection between components. We include more results on whole-heart reconstruction in the supplementary material. 4.3 Simulating Hand Contacting Objects The interaction between human hands and objects poses a significant challenge in modeling, given the variability of hand poses and the diversity of objects involved. Simulating and understanding the various ways in which a hand can come into contact with an object is crucial for numerous applications. We demonstrate that our system using random points to drive the interaction between 3D objects can be used to manipulate a generative hand model to increase the contact ratio between the hand and the object while preventing intersection between them. We first train an auto-decoder model 2 on the SMPL-H dataset [20] containing 1581 hand meshes captured from 31 subjects, each with 51 hand poses (the same for all subjects). Given the trained hand model and a 3D object in space, we are interested in simulating how a hand interacts with the object. To move the hand in space such that it gets closer to the object, we incorporate an affine transformation consisting of a rotation matrix and a translation vector into the deepSDF model: T(f, z, x, R, r) = f(z, Rx + r) (8) 2The detailed network architecture and training losses are included in the supplementary material. 6 \f87% 0% 39% 0% 54% 24% 24% 11% 77% 0.2% 36% 0.1% 12.5% 0% 7.3% 0% 2.3% 8.4% 1.1% 4.1% 11.6% 0% 5.6% 0% (a) M-LV and LV. (b) RA and RV. The bottom part is rotated to better show the contact areas. Figure 3: Refining topological interaction. We show two cases of how pairs of heart components connect: (a) the myocardium of the left ventricle (M-LV) and the left ventricle (LV) and (b) the right atrium (RA) and the right ventricle (RV). From left to right: the ground-truth 3D meshes, the reconstructed meshes using the baseline method DeepSDF, and the reconstructed 3D meshes using our method. The contact surfaces are colored green while the intersecting surfaces are colored red. The contact and intersection ratios are shown next to the plots. Table 1: Heart reconstruction. We report the average intersection ratio (lower is better), contact ratio (closer to the ground truth is better), chamfer distance (lower is better), and signed distance histogram distance (lower is better). Our proposed losses improve the 3D reconstruction in all metrics where the reconstructed components do not intersect each other more than 0.1% in all cases while having contact ratios closely approximating the ground truth. Shapes generated from a vanilla deepSDF [19] model significantly intersect each other while having lower contact ratios. 
Intersection (%) Contact (%) Chamfer (\u00d710e4) SDH Dist. Dataset Part dSDF Ours GT dSDF Ours dSDF Ours dSDF Ours MLV-LV MLV 24.0 0.0 38.6 11.3 35.7 1.4 1.5 1.90 0.31 LV 53.5 0.1 85.2 24.7 78.3 1.1 1.1 LA-LV LA 6.9 0.1 13.0 3.2 9.2 1.6 1.5 0.38 0.12 LV 4.5 0.1 8.8 2.1 6.1 1.1 1.0 RA-RV RA 7.0 0.0 12.8 2.7 8.7 1.6 1.4 0.37 0.14 RV 3.9 0.0 7.4 1.5 4.8 1.3 1.2 MLV-RV RA 8.0 0.0 14.0 3.9 10.9 1.4 1.3 0.12 0.04 RV 13.4 0.0 22.8 6.5 18.1 1.4 1.2 Average 15.2 0.0 25.3 7.0 21.5 1.4 1.3 0.7 0.2 where R \u2208R3\u00d73 is a rotation matrix and r \u2208R3 is a translation vector. We construct the rotation matrix from the 3 angular parameters such that it does not include any scaling factor. Function T(.) allows moving an implicit object freely in space while training on only zero-centered data. To simulate how a hand interacts with an object, we put the deepSDF model of a hand in random positions in space (defined via the affine parameters R, r) and the object is at the origin. The latent code is initialized as a random latent code z from a random training pose. We set a fixed contact ratio prior for hand and object at [0.2], i.e., 20% of the hand surface should be in contact distance. However, we must note that finding a realistic hand pose satisfying the exact prior contact ratio is not possible with the generative deepSDF hand model [12] due to the limited training data. Our system, hence, mainly aims to establish hand-object contact while preventing the intersection between them. In fact, our model can strictly enforce a specific contact ratio between the hand and the object but it results in unrealistic hand surfaces. Table 2 summarizes the results of our proposed method testing on 10 objects from the HO3D dataset [5]. We average over 10 runs for each object. To evaluate the hand-object interaction, we measure the intersection ratio (%): the ratio between the surface area of the hand that intersects the object over the surface area of the hand; and the contact ratio (%) between the hand and the object. Note 7 \fthat the object can intersect the hand in its initial starting position. Our losses, optimizing over the values of (z, R, r), re-position the hand as well as modify the hand pose to lower the intersection while increasing the contact ratios in all cases. Table 2: Hand-Object Interactions The results of our proposed method tested on 10 objects from the HO3D dataset [5] (averaged over 10 runs). We measure the intersection ratio (%): the ratio between the surface area of the hand that intersects the object over the surface area of the hand; and the contact ratio (%) between the hand and the object. We compare these numbers for the hand in its initial state (Init.), the same hand but at the optimized position and angle (Aff.), and the final optimized hand with optimized latent code (Aff. + Latent). Our losses re-position the hand as well as modify the hand pose to lower the intersection ratios while increasing the contact ratios in all cases. Intersection(%) Contact(%) Item Init. Aff. Aff. + Latent Init. Aff. Aff. 
+ Latent 11-Banana 6.8 3.7 1.1 2.1 14.1 15.8 24-Bowl 4.9 4.2 0.4 1.5 8.6 10.7 04-Sug.box 4.9 1.1 0.4 1.3 6.8 7.2 06-M.bottle 4.3 1.1 0.8 1.4 8.4 9.4 35-Drill 8.4 3.3 0.5 2.4 4.8 7.0 37-Scissors 4.0 2.5 0.9 1.2 15.9 18.2 61-FoamBrick 8.4 5.3 0.7 2.7 14.2 17.5 25-Mug 3.8 2.8 0.4 1.2 9.3 11.2 03-C.box 10.9 3.8 0.4 2.4 2.9 4.1 52-L.clamp 3.6 1.2 0.4 1.4 6.9 8.0 Mean 6.0 2.8 0.6 1.7 9.0 10.7 Here we do not aim to generate realistic hand grasps but simply force the hand to interact with objects. In many cases, it simply results in the object resting on the hand. Generating realistic hand-grasping objects is a challenging and active research problem on its own. Nevertheless, our system can generate various feasible and realistic hand-object interactions, as can be seen in Figure 4. For each row, we show the original positions of the hand and object in space, and then three views of the optimized hand. Please see the supplementary material for more results. In Figure 5, we visualize how our losses optimize hand shapes to have better \u201cgrips\u201d on objects. For each pair, we show the original hand pose (colored in gray) overlaid by the optimized hand pose (colored in the skin color), and how the optimized hand interacts with the object. The hand model moves the fingers to positions that increase the contact areas (top row) or to positions that do not intersect the objects (bottom row). It shows that our losses can serve as supervision signals to drive generative shape models in meaningful manners. 5 Limitations There are limitations of our work that can be interesting directions for future work. Computing signed distances from object surfaces to a large number of points can be computationally expensive and might not be trivially applicable to other kinds of surface representations. Here we only consider a fixed topological constraint for a given pair of classes while in practice, they can be in a more dynamic form and can be even hard to estimate. Further, we must note that we indirectly manipulate the object surfaces via stochastic Monte Carlo estimation. There is no guarantee about the correctness of the topological interactions since at no point do we directly measure them. 6" + }, + { + "url": "http://arxiv.org/abs/2205.07795v1", + "title": "Referring Expressions with Rational Speech Act Framework: A Probabilistic Approach", + "abstract": "This paper focuses on a referring expression generation (REG) task in which\nthe aim is to pick out an object in a complex visual scene. One common\ntheoretical approach to this problem is to model the task as a two-agent\ncooperative scheme in which a `speaker' agent would generate the expression\nthat best describes a targeted area and a `listener' agent would identify the\ntarget. Several recent REG systems have used deep learning approaches to\nrepresent the speaker/listener agents. The Rational Speech Act framework (RSA),\na Bayesian approach to pragmatics that can predict human linguistic behavior\nquite accurately, has been shown to generate high quality and explainable\nexpressions on toy datasets involving simple visual scenes. Its application to\nlarge scale problems, however, remains largely unexplored. This paper applies a\ncombination of the probabilistic RSA framework and deep learning approaches to\nlarger datasets involving complex visual scenes in a multi-step process with\nthe aim of generating better-explained expressions. 
We carry out experiments on\nthe RefCOCO and RefCOCO+ datasets and compare our approach with other\nend-to-end deep learning approaches as well as a variation of RSA to highlight\nour key contribution. Experimental results show that while achieving lower\naccuracy than SOTA deep learning methods, our approach outperforms similar RSA\napproach in human comprehension and has an advantage over end-to-end deep\nlearning under limited data scenario. Lastly, we provide a detailed analysis on\nthe expression generation process with concrete examples, thus providing a\nsystematic view on error types and deficiencies in the generation process and\nidentifying possible areas for future improvements.", + "authors": "Hieu Le, Taufiq Daryanto, Fabian Zhafransyah, Derry Wijaya, Elizabeth Coppock, Sang Chin", + "published": "2022-05-16", + "updated": "2022-05-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Presented with a scene involving two dogs, where one has a frisbee in its mouth, native speakers of English will e\ufb00ortlessly characterize the lucky dog as the dog with the frisbee. Computers are not so good at this yet. The task in question is called referring expression generation (REG). A common approach to REG is modeling the problem as a two-agent system in which a speaker agent would generate an expression given some input and a listener agent would then evaluate the expression. This modeling method is widely applied, for example in [1]. In the last few years, many attempts at REG have applied deep learning to both the speaker and listener agents, utilizing the advantage of big datasets and massive computation power. As with many other NLP tasks, deep learning has been shown to achieve state-of-the-art results in REG. For example, [2] applied supervised learning and computer vision techniques to referring expression. Nevertheless, explainability remains a problem as it is di\ufb03cult to fully understand how a deep learning model can generate some texts arXiv:2205.07795v1 [cs.CL] 16 May 2022 \fgiven an image and a target. On the other hand, recent developments in computational pragmatics have yielded probabilistic models that follow simple conversational rules with great explanatory power. One important example is the Rational Speech Act framework (RSA) by [3], where probabilistic speakers and listeners recursively reason about each other\u2019s mental states to communicate\u2014speakers reason about probability distribution over utterances given a referent object, while listeners reason about probability distribution over objects in the scene given an utterance. While [3] and [4, i.a.] have shown that RSA can generate sentences that are pragmatically appropriate, the datasets are small, with simple examples that are carefully crafted with perfect information. [5] extend RSA to real world examples of reference games by using simple shallow models as building blocks to build the speaker and listener agents. [5]\u2019s approach is intractable though, as the speaker model has to consider all possible utterances. [6] resolves this issue by using a character level LSTM to predict one character at a time, thus reducing the search space. At each step, instead of generating one utterance, the speaker model generates one character. This method greatly reduces the search space and make the neural RSA system more e\ufb03cient with harder examples. 
However, their method is applied to the task of generating a referring expression for an image given several other images instead of a referring expression for an object in a scene. In addition, by performing RSA on a character level [6] partially compromises the explainability of RSA as it is harder to reasoning why at each step, the model would prefer one character over another on describing the target. To extend on the work of [5] and [3] we want to explore a di\ufb00erent approach from [6] that would not compromise on the explainability of RSA. In this paper, we introduce a novel way to apply the RSA framework to real world images and a large scale dataset. Speci\ufb01cally, our contributions are as follows: 1. We tackle the intractability problem that [5] faced, we use Graph R-CNN [7] and Detectron2 [8] to extract textual information about objects and their properties i.e., types and attributes, and relations to other objects. This step vastly reduces the search space when generating utterances. 2. We use the world view generated from the previous step to constrain the utterance space; we also sequentially update the utterance prior and the prior over objects in the scene as each descriptor in the utterance is produced. To our knowledge, this is the \ufb01rst attempt to use iterative update of the both the utterance and object probability distributions. 3. We evaluate our framework on refCOCO and refCOCO+ [9], and evaluate generated expressions in terms of accuracy (with human evaluation)\u2014whether the expressions are distinctive\u2014and automatic metrics. 4. We provide a detailed analysis of the result, speci\ufb01cally on the types of error based on the human evaluation. This deviates from standard evaluation process where they key metric is the comprehension accuracy (i.e is the expression distinctively describe the target) and provides a new angle in analysing expression quality. 2. Background 2.1. RSA RSA, \ufb01rst introduced by [3] encapsulates the idea that pragmatic reasoning is essentially Bayesian. In the reference game scenario studied by [3], the domain consists of a set of objects with various qualities that are fully available to two players. The speaker will \fdescribe one targeted object unknown to the listener by creating a referring expression and the listener needs to reason about which object the expression is referring to. As laid out by [10], RSA is a simple Bayesian inference model with three components: literal listener, pragmatic speaker and pragmatic listener. For a given object o and utterance u: literal listener PL0(o|u) \u221dJuK(o) \u00b7 P(o) (1) pragmatic speaker PS1(u|o) \u221d\u03b1U(u, o) (2) pragmatic listener PL1(o|u) \u221dPS1(u|o) \u00b7 P(o) (3) where JuK is the literal meaning of u, either true (1) or false (0). The literal listener thus interprets an utterance at face value, modulo the prior probability of referring to that object P(o), which we take to correspond to the object\u2019s salience. The pragmatic speaker decides which utterance to make by using the utility function U(u, o), which is a combination of literal listener score and a cost function and the \u03b1 term denotes the rationality scale of the speaker. Lastly, the pragmatic listener infers the targeted object by estimating the likelihood that the speaker would use the given utterance to describe it. [3] showed that RSA can accurately model human listener behavior for one-word utterances in controlled contexts with few objects and few relevant properties. 
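To make the recursion concrete, a minimal numpy sketch of Eqs. (1)–(3) on the dog-and-frisbee scene from the introduction is shown below. The exponentiated-utility (soft-max) speaker and the uniform prior are standard modeling choices from the RSA literature rather than details taken verbatim from [3].

```python
import numpy as np

def rsa(lexicon, prior, alpha=1.0, cost=None):
    """Vanilla RSA recursion.  lexicon[u, o] = [[u]](o) in {0, 1}; prior[o] = P(o)."""
    cost = np.zeros(lexicon.shape[0]) if cost is None else cost
    # Literal listener: P_L0(o|u) proportional to [[u]](o) * P(o)
    L0 = lexicon * prior
    L0 = L0 / L0.sum(axis=1, keepdims=True)
    # Pragmatic speaker: soft-max of U(u, o) = log P_L0(o|u) + cost(u)
    util = np.log(L0 + 1e-12) + cost[:, None]   # cost is typically a non-positive penalty
    S1 = np.exp(alpha * util)
    S1 = S1 / S1.sum(axis=0, keepdims=True)
    # Pragmatic listener: P_L1(o|u) proportional to P_S1(u|o) * P(o)
    L1 = S1 * prior
    return L1 / L1.sum(axis=1, keepdims=True)

# Toy reference game: object 0 is the dog with the frisbee, object 1 the other dog.
lexicon = np.array([[1.0, 1.0],    # "dog" is true of both
                    [1.0, 0.0]])   # "with the frisbee" is true only of object 0
print(rsa(lexicon, prior=np.array([0.5, 0.5])))
```

On this toy lexicon the pragmatic listener resolves the bare descriptor "dog" towards the dog without the frisbee, which is the qualitative behavior the framework is meant to capture.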
Since then, a wealth of evidence has accumulated in support of the framework; see [10] for some examples. Still, most RSA models use a very constrained utterance space, each utterance being a single lexical item. [4] explore RSA models with two-word utterances where each utterance is associated with its own (continuous) semantics. But it remains a major open question how to scale up RSA models for large-scale natural language processing tasks. Figure 1: The work\ufb02ow of Iterative RSA where the input image is passed through information extraction algorithm (Graph R-CNN/Detectron) and pre-processed before given to Iterative RSA. 2.2. Detectron2 and Graph R-CNN The RSA framework requires prior knowledge about the images and targets in order to generate expressions. Most approaches that use RSA and the speaker/listener model acquire this knowledge through a deep learning model that learns an embedding of the image and the target object, represented as a bounded box or bounded area inscribed on the image then use these embeddings to generate expressions. Instead of using embeddings, we decided to take a di\ufb00erent route by generating the symbolic knowledge in the form of scene graph obtained from the image using Detectron2 and Graph R-CNN, which contains objects, properties, and relations, all in a lingual format, which is the ideal input for an RSA model. \fDetectron2 is the state-of-the-art object detection model developed by [8] that utilizes multiple deep learning architecture such as Faster-RCNN [11] and Mask-RCNN [12] and is applicable to multiple object detection tasks. Graph R-CNN [7] is a scene graph generation model capable of detecting objects in images as well as relations between them using a graph convolutional neural network inspired by Faster-RCNN with a relation proposal network (RPN). RPN and Graph R-CNN is among the state-of-the-art architecture in objects\u2019 relation detection and scene graph generation. 3. Method As discussed in [10] and [3], RSA requires a speci\ufb01cation of the utterance space and background knowledge about the state of the \u2018world\u2019 under consideration. Thus, we view the problem of generating referring expressions as a two-step process where, given an image and a targeted region, we: (1) Acquire textual classi\ufb01cations (e.g. car) of the objects inside the image and the relations between objects in the image; (2) Generate a referring expression from the knowledge acquired from step (1). In step (1), most previous work falls into two categories. [3] and [4] assume the information about objects and their properties are known to the agent generating the expression. On the other hand, [5] and [6] use deep learning to obtain embeddings of the image and the targeted region. [13] combine the embedding extraction step with the referring expression in one single model. In step (1), we neither assume the availability of descriptive knowledge of the images like [3] nor do we use an image and region embedding like [5]. Instead, we generate both the utterance space and the literal semantics of the input image by applying Graph R-CNN to obtain objects\u2019 relations and Detectron2 to obtain objects\u2019 properties. This idea is motivated by the intractable problem that [5] face when considering a vast number of utterances at every step. By extracting the symbolic textual information from images, we vastly reduce the number of utterances per step since the number of objects, their relations, and properties are limited in each image. 
Speci\ufb01cally, Detectron2 outputs objects and the probability that some property is applicable to those objects. For example, a given object categorized as an elephant might have a high probability of having the property big and a lower probability of having the property pink. Graph R-CNN outputs pairs of objects and probabilities of how true some prede\ufb01ned relation is to some pair of objects. One challenge in merging computer vision systems with datasets like RefCOCO is matching the target referent in the dataset to the right visually detected object (assuming it is found). RefCOCO provides a bounding box around the target referent, and Detectron2 and Graph R-CNN may or may not identify an object with the same position and dimensions. One simple approach is to use the most overlapped detected object with the target box as the subject for the generation algorithm. However, there is no guarantee that the most overlapped detected object is the target. We overcome this problem by combining feature extraction with target feature extraction from Detectron2. We \ufb01rst let Detectron2 identify all the objects it can in the image (call this the context). We then instruct Detectron2 to consider the target box an object and classify it. If there is an object in the context that overlaps at least 80% with the target box and is assigned the same class, then we leave the context as is; otherwise we add the target box to the context. To enrich object relations beyond binary relations in Graph R-CNN, we also implemented a simple algorithm to generate ordinal relations. We do so by sorting detected objects of the same category (e.g all dogs in an image) by the x-axis and assign prede\ufb01ned ordinal \fFigure 2: An example of the textual knowledge acquired from Detectron2: each row corresponds to all information about a suggested bounding box, which contains box name, dimension, location, and the likelihoods of types and attributes. relations such as left, right, or second from left. The product of these image analysis methods are used in the literal semantics, which are categorical, although they are based on the gradient output of Detectron2 and Graph RCNN, which assigns objects to properties and relations with varying degrees of certainty. Since Detectron2 and Graph-RCNN output likelihood values for attributes and types for each object as shown in Figure 2, the last step in the textual extraction process is using a cuto\ufb00threshold to decide what level of likelihood make one attribute belongs to a particular object. If the threshold is too low, then objects would contain many irrelevant attributes; if the threshold is too high, there may not be enough attributes to uniquely describe some objects. Currently, we use a hard-coded value that is slightly higher than the minimum value where most of the irrelevant attributes and types are, as examined by hand. Thus, in the spirit of [14], we assume a threshold \u03b8 to decide whether a given type or attribute holds of a given object. Let F be a function that assigns: to each attribute and type, a function from D to [0,1]; and to each relation, a function from D\u00d7D to [0,1], where D is the set of objects in the image. F represents the output of the Detectron2 and Graph R-CNN. For each type, attribute, and relation symbol u, \u03b8(u) is a threshold between 0 and 1 serving as the cuto\ufb00for the truthful application of the type, attribute, or relation to the object(s). Then JuK(o) = 1 i\ufb00F(u)(o) \u2265\u03b8(u), etc. 
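A hypothetical sketch of this thresholding step is given below. The dictionary layout, the helper name literal_semantics, and the numeric threshold values are illustrative assumptions and do not correspond to the actual interfaces of Detectron2 or Graph R-CNN.

```python
# Turn the gradient scores F into categorical semantics [[u]](o) = 1 iff F(u)(o) >= theta(u).
TYPE_ATTR_THRESHOLD = 0.4   # assumed uniform threshold for types/attributes
RELATION_THRESHOLD = 0.3    # assumed uniform threshold for relations

def literal_semantics(scores, is_relation=False):
    """scores: {descriptor: {object_id: confidence in [0, 1]}} -> boolean lexicon."""
    theta = RELATION_THRESHOLD if is_relation else TYPE_ATTR_THRESHOLD
    return {u: {o: float(conf >= theta) for o, conf in per_obj.items()}
            for u, per_obj in scores.items()}

# Example with two detected objects and confidences as they might be reported:
scores = {"elephant": {"obj1": 0.92, "obj2": 0.10},
          "big":      {"obj1": 0.75, "obj2": 0.55}}
lexicon = literal_semantics(scores)   # {"elephant": {"obj1": 1.0, "obj2": 0.0}, ...}
```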
Ultimately we plan to learn these thresholds from referring expression training datasets such as RefCOCO. Currently, they are \ufb01xed by hand: one uniform threshold for types/attributes and relations, respectively. Using categorical semantics rather than the gradient semantics that would be obtained directly from the Detectron2 avoids the well-known problems of modi\ufb01cation in fuzzy semantics, a proper solution to which would require conditional probabilities that are unknown [15]. Our key contribution with respect to step (2) is at the speaker level. We introduce iterative RSA, described in the Algorithm 1 below. Iterative RSA takes as input the domain of all objects D, a prior P(d) over all objects d \u2208D, the referent object o and list of possible \u2018utterances\u2019 U. Although an utterance may consist of multiple words, each \u2018utterance\u2019 here is a single predicate (e.g. dog, second from left, wearing black polo). We will use the word \u2018descriptor\u2019 instead of \u2018utterance\u2019 in this setting, because the strings in question may be combined into a single output that the speaker pronounces once (a single utterance, in the proper sense of the word). Again, we take the prior over objects to be proportional to salience (which we de\ufb01ne as object size). Our RSA speaker will iteratively generate one descriptor at a time and update the listener\u2019s prior over objects at every step until \feither (i) the entropy of the probability distribution over objects reaches some desirable threshold K, signifying that the listener has enough information to di\ufb00erentiate o among objects in D, or (ii) the maximum utterance length T has been reached. input : o, D, U, P 0 D initialization: E = [] ; while t < T & Entropy(P t\u22121 D ) < K do u = sample(Speaker PS1(u|o, P t\u22121 D , UE)); P t D = Literal listener PL0(o|u, P t\u22121 D ); add u to E; end output: E Algorithm 1: Iterative RSA In standard RSA, the utility function U(u, o) is de\ufb01ned as U = log(PL0(o|u)) + cost(u) [10]. We de\ufb01ne ours as: UE = log(PL0(o|u) + Pngram(u|E)) + cost(u) (4) where Pngram is the probability of u following the previous n words in E. Speci\ufb01cally, we use a 3-gram LSTM model (n=3). Figure 1 outlines our overall work\ufb02ow. 4. Experiment and Result The framework is implemented in Python and will be made publicly available. In the implementation of Algorithm 1, we set T = 4. This value for maximum utterances per expressions come from the average length of the expressions from our target dataset, both RefCOCO and RefCOCO+ have average length less than 4 utterances per expression. We evaluate our framework on the test set of RefCOCO and RefCOCO+ datasets released by [9]. For these two datasets, each data point consists of one image, one bounding box for a referent (the target box) and some referring expressions for the referent. We used pretrained weights from the COCO dataset for Graph R-CNN and Detectron2. Additionally, we experiment separately with \ufb01netuning Detectron on RefCOCO referring expressions. Finally, we test the framework with RefCOCO Google split test set and RefCOCO+ UNC split test set. We evaluate the generated expressions on the test dataset with both automatic overlapbased metrics (BLEU, ROUGE and METEOR) and accuracy (human evaluation) (Table 2). 
Speci\ufb01cally, we run human evaluation through crowdsourcing site Proli\ufb01c on the following scheme: our IterativeRSA, RecurrentRSA [6] and SLR [16] trained on 0.1%, 1% and 10% of the training sets of RefCOCO and RefCOCO+. For each scheme, we collected survey results for 1000 randomly selected instance from the RefCOCO test dataset from 20 participants and 3000 instances from RefCOCO+ test dataset from 60 participants. Each image is preprocessed by adding 6 bounding boxes on some objects in the image, one of which is the true target. The boxes are chosen from 5 random objects detected by Detectron2 an the true target object. Each participant is asked to \ufb01nd the matching object given expression for 50 images through multiple choice questions. In addition, we also manually insert 5 extra instances where the answer is fairly obvious and use those instances as a sanity check. Data from participants who failed more than half of the sanity checks (i.e 3/5) was not included in the analysis. Since our referring expressions are generated based on extracted textual information about individual objects and not the raw image as a whole, there are cases where Detectron2 does not recognize the object in the \ftarget box or the suggested bounding box from Detectron2 is di\ufb00erent in size compared to the target box. In such cases, our algorithm ended up generating an expression for a di\ufb00erent observable object than the targeted one. To understand the di\ufb00erent types of errors our model makes, we also included additional options in cases where the testers cannot identify a box that matched the expression. Speci\ufb01cally, we added three categories of error when no (unique) matching object is identi\ufb01ed: 1. nothing in the picture matches the description 2. several things match this description equally well 3. the thing that matches the description best is not highlighted Despite the simplicity of our proposed method, it achieves comparable performance in terms of METEOR score to the Speaker-Listener-Reinforcer(SLR) [16]. More importantly, our method outperforms SLR in human comprehension under low training data scheme and RecurrentRSA with both RefCOCO and RefCOCO+. True False Under-informative no-match not-highlighted adjusted-accuracy IterativeRSA 27.25 13.03 11.59 44.49 3.64 52.54 Iterative RSA + f-Det2 26.52 15.1 13.79 38.51 6.07 47.86 SLR-10% [16] 26.95 15.71 15.98 36.51 4.85 45.96 SLR-1% [16] 14.3 16.24 11.96 51.13 6.37 33.65 SLR-0.1% [16] 6.1 18.14 10.48 59.73 5.55 17.57 Table 1: Human evaluation response on RefCOCO+ images across IterativeRSA and SLR trained under restricted data. Beside raw accuracy, we also report the accuracy rate using the formula adjusted \u2212 accuracy = True/(True + False + Underinformative) where Underinformative counts instances where the expressions correctly refer to the referent objects but are not distinctive enough. Our human evaluation accuracy is slightly less than that of MMI [9] and while our METEOR score is higher. However, our performance measures fall short when compared to the state-of-the-art extensively trained end-to-end deep neural network model by SLR [17]. This is to be expected as our method was not trained and does not require training on the speci\ufb01c task of referring expression generation or comprehension. Further performance analysis will be given in the next sections. 5. 
Comparison with Recurrent RSA and SLR trained with limited data As discussed above, to see the advantages and drawbacks of Iterative RSA, we run human evaluation on generated expressions from RefCOCO and RefCOCO+ datasets and compare Iterative RSA with RecurrentRSA-another RSA approach as well as SLR. From Table 3, Iterative RSA outperforms RecurrentRSA with 28% compared to 26.9%. On the other hand, to make a fair comparison with a deep learning end-to-end approach like SLR, we decided to train SLR with limited training data as Iterative RSA does not require bleu rouge meteor MMI [9] 0.37 0.333 0.136 SLR[17] 0.38 0.386 0.16 rerank [13] 0.366 0.354 0.15 Iterative RSA 0.18 0.125 0.11 Table 2: NLP metric comparisons to some previous approaches on RefCOCO+ dataset. \fRefCOCO RefCOCO+ Iterative RSA 28.05 27.25 Iterative RSA + f-Det2 41.3 26.52 Recurrent RSA[6] 26.9 SLR-10% [16] 66.2 26.95 SLR-1% [16] 49.85 14.3 SLR-0.1% [16] 38.5 6.1 Table 3: Raw accuracy of referring expression comprehension evaluated human evaluation on Iterative RSA, Iterative RSA with \ufb01netuned Detectron2 (f-Det2), Recurrent RSA and SLR trained with limited data of 0.1%, 1% and 10% of RefCOCO and refCOCO+ training set. any direct training process. From Table 3, the Iterative RSA (no training) outperforms all SLR models trained with 0.1%, 1% and 10% training data for refCOCO+ dataset and outperform SLR model trained with highly limited training data (0.1%) on RefCOCO. Furthermore, when examining the SLR-generated expressions, we observed that for the model trained and tested on RefCOCO dataset, a lot of the expressions contains positional property of objects such as left, right, which makes identifying the target easier when the expression is low quality and incomplete (as a result of training on limited data). Thus, we can see that SLR performs better on RefCOCO than RefCOCO+. On the other hand, IterativeRSA performs more consistently, especially when used without any training or observation of the data. Finetuning the Detectron2 model for object detection with RefCOCO expressions improve the performance on the corresponding dataset, however, using the same model on the RefCOCO+ dataset does not show any signi\ufb01cant change in accuracy. Figure 3 is an example of referring expression generated with RSA compared to SLR Figure 3: IterativeRSA: jeans, SLR-1%: man in black, SLR-10%: woman in red trained with limited data. For the RSA expression, it clearly shows that the model explains Gricean maxim of quantity by generating the shortest possible word to describe the target which are the jeans, whereas SLR shows the over\ufb01tting behavior when generating unrelated expression to the target. 6. Analysis of the human evaluation As mentioned above, in our study, aside from letting users choose one of the objects surrounded by bounding boxes given the generated expression, we also give additional options to handle the case where survey participants cannot \ufb01nd a sensible object to match the description. Overall, we observe that incorrect responses can be divided into the following categories: under-informative expression, not highlighted, no match and false. These categories of error help in identifying the sources of de\ufb01ciency in our approach. If the expression is under-informative, there are two possibilities. 
The \ufb01rst is that the textual data extraction step (i.e., Detectron2) was able to identify multiple objects of the same \ftype, but the algorithm is unable to di\ufb00erentiate between the target and the rest of the objects. In this case the problem is on the linguistic side of our model. Another possibility is that not all objects of the relevant type were detected, which is the de\ufb01ciency of our visual system (Detectron2). Another type of visual system de\ufb01ciency happens when the described object is not the highlighted one or if there is no match. In these cases, the visual system (Detectron2) mis-classi\ufb01ed the object in the bounding box. As shown in Table 3, about 48% of the recorded instances belong to these two categories. 6.1. Under-informative expressions One type of error is when the generated expression is under-informative. This occurs when the expression correctly indicated the type of the target object but failed to di\ufb00erentiate between the target and other objects of the same type in the picture. For example, in Figure 4, the algorithm was able to correctly identify the type of object in the bounding box but the modi\ufb01er (cooking) failed to di\ufb00erentiate the target from the other instance of that type. Figure 4: Generated Expression: cooking pizza, Gold Label: pizza on left, Target box: 2. i.e., the light green box surrounding the smaller pizza. 6.2. Object not highlighted Another type of errors revealed through human evaluation is when the matching object is not highlighted as the target. This type of de\ufb01ciency is due to the textual extraction Figure 5: Example of not highlighted response Generated Expression: laying down man, Gold Label: guy bottom left, Target box: 2 i.e., the light green box at the bottom left of the image component (Detectron2) not observing all objects of the same type. In Figure 5, Detectron2 can only observe four instances of the category man, which are all highlighted in this image with box 1, 2, 3, 5. When comparing the available attributes for these mans, target man in box 2 (i.e., the light green box at the bottom left of the image) is assigned a distinctive attribute that others do not have: laying down (although he is sitting, not \flaying down). The use of this modi\ufb01er increases the salience of the target relative to the other individuals that are detected. It is quite possible that participants assumed laying down man refers to the only person at the bottom center of the image who is actually laying down. However, that individual is not detected by Detectron2 and thus there is no highlighted box. 6.3. High quality expression When the participants correctly identify the target object by choosing the right bounding box, we observe that the textual extraction step provides su\ufb03cient information for the algorithm to work correctly. Figure 6 is an example where we observe that the system works well when the extracted textual information is accurate and su\ufb03cient. Speci\ufb01cally, Detectron2 found all the objects of the type train in box 4 and 5. Furthermore, the train objects have fairly sensible attributes, including the left and the right. Figure 6: Generated Expression: the right train, Gold Label: right train, Target box: 4. 7. Discussion The Iterative RSA introduced in this paper is able to generate multiple-modi\ufb01er descriptions, which goes far beyond the vanilla RSA speaker described by [10] and [3], and our RSA speaker has even gone past the two-word stage of [4]. 
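As an illustration of this iterative, multiple-modifier generation, here is a minimal sketch of one way an RSA-style speaker can greedily add modifiers until the target is distinguished. This is our own simplification (uniform priors, no utterance cost, greedy rather than softmax selection, and boolean attribute sets standing in for Detectron2 detections), not the exact model evaluated above.

    def literal_listener(utterance, objects):
        """P_L0(object | utterance): uniform over objects whose attribute set
        contains every word of the utterance (a set of words)."""
        consistent = [o for o, attrs in objects.items() if utterance <= attrs]
        if not consistent:
            return {o: 0.0 for o in objects}
        return {o: (1.0 / len(consistent) if o in consistent else 0.0) for o in objects}

    def iterative_rsa_speaker(target, objects, max_len=4):
        """Greedily add the modifier that most raises the literal listener's
        probability of the target, until the target is uniquely identified."""
        vocab = set().union(*objects.values())
        expression = set()
        while len(expression) < max_len and vocab - expression:
            best = max(vocab - expression,
                       key=lambda w: literal_listener(expression | {w}, objects)[target])
            expression.add(best)
            if literal_listener(expression, objects)[target] == 1.0:
                break
        return expression

    # Toy scene; the attribute sets stand in for Detectron2 detections.
    objects = {
        "pizza_left":  {"pizza", "left"},
        "pizza_right": {"pizza", "right", "cooking"},
        "person":      {"person", "left"},
    }
    print(iterative_rsa_speaker("pizza_right", objects))  # e.g. {'cooking'} or {'right'}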
While the result is not at the level of the state-of-the-art end-to-end model, Iterative RSA outperforms Recurrent RSA and SLR trained under limited data. We can clearly explain how our model comes up with the referring expressions it generates. The explainability of our model is a contrast feature when compare with RecurrentRSA. While RecurrentRSA also applies the RSA model to generate expressions, its expression generation by recursively generate characters makes it hard to explain why at each step, why one character is a feasible choice that helps identify a target object. Furthermore, to our knowledge, we are the \ufb01rst attempt to apply pure probabilistic RSA model without any neural network components in the expression generation step of the referring expression generation from image task. From the analysis of the human evaluation and concrete examples, it is clear that the performance of Iterative RSA is tightly coupled with the performance of the textual extraction model, particularly Detectron2. When Detectron2 detects enough information, including the objects in a given image as well as their probable attributes, we observe that our proposed Iterative RSA can create high quality expressions with distinctive modi\ufb01ers. Another key strength and also a weakness of our proposed iterative RSA is the size of the vocabulary of descriptors. Currently, this vocabulary is limited to the attributes and types vocabulary that Detectron2 possesses. While this vastly reduces the search space of all possible descriptors, it also limits the possible descriptors that RSA can choose from, given a target. The textual extraction step (Detectron2 in this case) can be analogized to the act of \u201cobserving\u201d and the Iterative RSA algorithm to \u201creasoning\u201d. One cannot reason about objects or aspects of objects that are not observed. On the other hand, in terms of e\ufb03ciency, our proposed method is fast because Iterative RSA does not require training data and can be applied directly on the \ufb02y with any given \ftextual extraction system. In addition, our application of Detectron2 and Graph-RCNN also does not require training as it utilizes pre-trained weights. Experiments with \ufb01netuning Detectron2 with RefCOCO data does show better accuracy on the test set of RefCOCO dataset but does not show any major improvement when tested on RefCOCO+ as shown in Table 1. Thus, the base Iterative RSA is more generalized and consistent across di\ufb00erent datasets. Minimal reliance on training data has other advantages: That property makes our approach a promising one for low-resource languages where labeled data for training, especially for vision-language tasks such as referring expression generation/comprehension, are virtually non-existent [18] for languages other than English. 8." + }, + { + "url": "http://arxiv.org/abs/2202.12872v2", + "title": "AutoFR: Automated Filter Rule Generation for Adblocking", + "abstract": "Adblocking relies on filter lists, which are manually curated and maintained\nby a community of filter list authors. Filter list curation is a laborious\nprocess that does not scale well to a large number of sites or over time. In\nthis paper, we introduce AutoFR, a reinforcement learning framework to fully\nautomate the process of filter rule creation and evaluation for sites of\ninterest. We design an algorithm based on multi-arm bandits to generate filter\nrules that block ads while controlling the trade-off between blocking ads and\navoiding visual breakage. 
We test AutoFR on thousands of sites and we show that\nit is efficient: it takes only a few minutes to generate filter rules for a\nsite of interest. AutoFR is effective: it generates filter rules that can block\n86% of the ads, as compared to 87% by EasyList, while achieving comparable\nvisual breakage. Furthermore, AutoFR generates filter rules that generalize\nwell to new sites. We envision that AutoFR can assist the adblocking community\nin filter rule generation at scale.", + "authors": "Hieu Le, Salma Elmalaki, Athina Markopoulou, Zubair Shafiq", + "published": "2022-02-25", + "updated": "2023-03-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI" + ], + "main_content": "Introduction Adblocking is widely used today to improve the security, privacy, performance, and browsing experience of web users. Twenty years after the introduction of the \ufb01rst adblocker in 2002, the number of web users who use some form of adblocking now exceeds 42% [9]. Adblocking primarily relies on \ufb01lter lists (e.g., EasyList [22]) that are manually curated based on crowd-sourced user feedback by a small community of \ufb01lter list (FL) authors. There are hundreds of different adblocking \ufb01lter lists that target different platforms and geographic regions [10]. It is well-known that the \ufb01lter list curation process is slow and error-prone [6], and requires signi\ufb01cant continuous effort by the \ufb01lter list community to keep them up-to-date [40]. The research community is actively working on machine learning (ML) approaches to assist with \ufb01lter rule generation [11, 29, 60] or to build models to replace \ufb01lter lists altogether [1, 33, 59, 74]. There are two key limitations of prior ML-based approaches. First, existing ML approaches are supervised as they rely on human feedback and/or existing \ufb01lter lists (which are also manually curated) for training. This introduces a circular dependency between these supervised ML models and \ufb01lter lists \u2014 the training of models relies on the very \ufb01lter lists (and humans) that they aim to augment or replace. Second, existing ML approaches do not explicitly consider the trade-off between blocking ads and avoiding breakage. An over-aggressive adblocking approach might block all ads on a site but may block legitimate content at the same time. Thus, despite recent advances in ML-based adblocking, \ufb01lter lists remain defacto in adblocking. Fig. 1(a) illustrates the work\ufb02ow of a FL author for creating rules for a particular site: (1) select a network request to block; (2) design a \ufb01lter rule that corresponds to this request and apply it on the site; (3) visually inspect the page to evaluate if the \ufb01lter rule blocks ads and/or causes breakage and; (4) repeat for other network requests and rules; since modern sites are highly dynamic, and often more so in response to adblocking [6,17,40,76], the FL author usually revisits the site multiple times to ensure the rule remains effective; and (5) stop when a set of \ufb01lter rules can adequately block ads without causing breakage. We ask the question: how can we minimize the manual effort of FL authors by automating the process of generating and evaluating adblocking \ufb01lter rules? We propose AutoFR to automate each of the aforementioned steps, as illustrated in Fig. 1(b), and we make the following contributions. 
First, we formulate the filter rule generation problem within a reinforcement learning (RL) framework, which enables us to efficiently create and evaluate good candidate rules, as opposed to brute force or random selection. We focus on URL-based filter rules that block ads, a popular and representative type of rules that can be visually audited. An important component, which replaces the visual inspection, is the detection of ads (through a perceptual classifier, Ad Highlighter [63]) and of visual breakage (through JavaScript [JS] for images and text) on a page. We design a reward function that combines these metrics to enable explicit control over the trade-off between blocking ads and avoiding breakage. Second, we design and implement AutoFR to train the RL agent by accessing sites in a controlled, realistic environment. It creates rules for a site in under two minutes, which is crucial for scalability. We deploy and evaluate AutoFR's efficient implementation on the Top-10K websites, and we find that the filter rules generated by AutoFR block 86% of the ads. We also find that they generalize well to new sites, e.g., blocking 80% of the ads on the Top 5K-10K sites. The effectiveness of the AutoFR rules is overall comparable to EasyList in terms of blocking ads and visual breakage. Thus, we envision that the adblocking community will use AutoFR to automatically generate and update filter rules at scale.

Figure 1: AutoFR automates the steps taken by FL authors to generate filter rules for a particular site. FL authors can configure the AutoFR parameters but no longer perform the manual work. Once rules are generated by AutoFR, it is up to the FL authors to decide when and how to deploy the rules to end-users. (a) Filter List Authors' (Human) Workflow. How filter list authors create filter rules for a site ℓ: (1) they select a network request caused by the site; (2) they create a filter rule and apply it on the site; (3) they visually inspect whether it blocked ads without breakage; (4) they repeat the process if necessary for other network requests; and (5) they stop when they have crafted filter rules that can block all/most ads for the site without causing significant breakage. (b) AutoFR (Automated) Workflow. AutoFR automates these steps as follows: (1) the agent selects an action (i.e., a filter rule) following a policy; (2) it applies the action on the environment; (3) the environment returns a reward, used to update the action space; (4) the agent repeats the process if necessary; and (5) the agent stops when a time limit is reached or no more actions are available to be explored. The human filter list author only provides a site ℓ and configurations (e.g., threshold w and hyper-parameters).

The rest of our paper is organized as follows. Sec. 2 provides background and related work. Sec. 3 formalizes the problem of filter rule generation, including the human process, the formulation as an RL problem, and our particular multi-arm bandit algorithm for solving it. Sec.
4 presents our implementation of the AutoFR framework. Sec. 5 provides its evaluation on the Top\u201310K sites. Sec. 6 concludes the paper. The appendices provide additional details and results. 2 Background & Related Work Filter Rules. Adblockers have relied on \ufb01lter lists since their inception. The \ufb01rst adblocker in 2002, a Firefox extension, allowed users to specify custom \ufb01lter rules to block resources (e.g., images) from a particular domain or URL path [48]. There are different types of \ufb01lter rules. The most popular type is URL-based \ufb01lter rules, which block network requests to provide performance and privacy bene\ufb01ts [61]. Other types of \ufb01lter rules are element-hiding rules (hide HTML elements) and JS-based rules (stop JS execution). App. A provides a longitudinal analysis and discussion of widely used \ufb01lter rules. Filter rules can also be per-site (i.e., they are only allowed to trigger for particular sites) or treated as global rules (i.e., allowed to trigger for any sites). Popular \ufb01lter lists, such as EasyList, support these rules. Per-site rules are denoted with the \u201c$domain\u201d option in EasyList. This paper focuses on URL-based, per-site rules. Filter Lists and their Curation. Since it is non-trivial for lay web users to create \ufb01lter rules, several efforts were established to curate rules for the broader adblocking community. Speci\ufb01cally, rules are curated by \ufb01lter list (FL) authors based on informal crowd-sourced feedback from users of adblocking tools. As elaborated in App. A, there is now a rich ecosystem of thousands of different \ufb01lter lists focused on blocking ads, trackers, malware, and other unwanted web resources. EasyList [22] is the most widely used adblocking \ufb01lter list. Started in 2005 by Rick Petnel, it is now maintained by a small set of FL authors and has 22 language-speci\ufb01c versions. An active EasyList community provides feedback to FL authors on its of\ufb01cial forum and GitHub. The research community has looked into the \ufb01lter list curation process to investigate its effectiveness and painpoints [6, 40, 61, 70]. Snyder et al. [61] studied EasyList\u2019s evolution and showed that it needs to be frequently updated (median update interval of 1.12 hours) because of the dynamic nature of online advertising and efforts from advertisers to evade \ufb01lter rules. They found that it has grown signi\ufb01cantly over the years, with 124K+ rule additions and 52K+ rule deletions over the last decade. Alrizah et al. [6] showed that EasyList\u2019s curation, despite extensive input from the community, is prone to errors that result in missed ads (false negatives) and over-blocking of legitimate content (false positives). They concluded that most errors in EasyList can be attributed to mistakes by FL authors. We elaborate further on the challenges of \ufb01lter rule generation in Sec. 3.1. Machine Learning for Adblocking. Motivated by these challenges, prior work has explored using machine learning (ML) to assist with \ufb01lter list curation or replace it altogether. One line of prior work aims to develop ML models to automatically generate \ufb01lter rules for blocking ads [11,29,60]. Bhagavatula et al. [11] trained supervised ML classi\ufb01ers to detect advertising URLs. Similarly, Gugelmann et al. [29] trained supervised ML classi\ufb01ers to detect advertising and tracking domains. Sjosten et al. [60] is the closest related to our work. 
First, they trained a hybrid perceptual and web execution classi\ufb01er to detect ad images [13]. Second, they generated adblocking \ufb01lter rules by \ufb01rst identifying the URL of the script responsible for retrieving the ad and then simply using the effective second-level domain (eSLD) and path information of the script as a rule (similar to Table 1 row 3). We found that 99% of rules that they open-sourced had paths. However, this overreliance on rules with paths makes 2 \fthem brittle and easily evaded with minor changes [40]. Furthermore, the design of these rules did not automatically consider potential breakage. Another line of prior work, instead of generating \ufb01lter rules, trains ML models to automatically detect and block ads [1,2,33,59,63,74]. AdGraph [33], WebGraph [59], and WTAGraph [74] represent web page execution information as a graph and then train classi\ufb01ers to detect advertising resources. Ad Highlighter [63], Sentinel [2], and PERCIVAL [1] use computer vision techniques to detect ad images. These efforts do not generate \ufb01lter rules but instead attempt to replace \ufb01lter lists altogether. While promising, existing ML-based approaches have not seen any adoption by adblocking tools. Our discussions with the adblocking community have revealed a healthy skepticism of replacing \ufb01lter lists with ML models due to performance, reliability, and explainability concerns. On the performance front, the overheads of feature instrumentation and running ML pipelines at run-time are non-trivial and almost negate the performance bene\ufb01ts of adblocking [47]. On the reliability front, concerns about the accuracy and brittleness of ML models in the wild [1,2,60], combined with a lack of explainability [66], have hampered their adoption. In short, it seems unlikely that \ufb01lter lists will be replaced by ML models any time soon, and \ufb01lter rules remain crucial for adblocking tools. ML-assisted FL Curation. There is, however, optimism in using ML-based approaches to assist with maintenance of \ufb01lter lists. For example, Brave [60], Adblock Plus [2], and the research community [40] have been using ML models to assist FL authors in prioritizing \ufb01lter rule updates. However, they have two main limitations. First, they rely on \ufb01lter lists, such as EasyList, for training their supervised ML models causing a circular dependency: a supervised model is only as good as the ground-truth data it is trained on. This also means that the adblocking community has to continue maintaining both ML models as well as \ufb01lter lists. Second, existing ML approaches do not explicitly consider the trade-off between blocking ads and avoiding breakage. An over-aggressive adblocking approach might block all ads on a site but may block legitimate content at the same time. It is essential to control this trade-off for real-world deployment. In summary, a deployable MLbased adblocking approach should be able to generate \ufb01lter rules without relying on existing \ufb01lter lists for training, while also providing control to navigate the trade-off between blocking ads and avoiding breakage. To the best of our knowledge, AutoFR is the only system that can generate and evaluate \ufb01lter rules automatically (without relying on humans) and from scratch (without relying on existing \ufb01lter lists). Reinforcement Learning. 
We formulate the problem of \ufb01lter rule curation from scratch (i.e., without any ground truth or existing list) as a reinforcement learning (RL) problem; see Sec. 3. Within the vast literature in RL [64], we choose the Multi-Arm Bandits (MAB) framework [7], for reasons explained in Sec. 3.2. Identifying the top\u2013k arms [14,44] rather than searching for the one best arm [27] has been used in the problems of coarse ranking [35] and crowd-sourcing [15,30]. Contextual MAB has been used to create user pro\ufb01les to personalize ads and news [42]. Bandits where arms have similar expected rewards, commonly called Lipschitz bandits [36], have also been utilized in ad auctions and dynamic pricing problems [37]. In our context of \ufb01lter rule generation, we leverage the theoretical guarantees established for MAB to search for \u201cgood\u201d \ufb01lter rules and identify the \u201cbad\u201d \ufb01lter rules, while searching for opportunities of \u201cpotentially good\u201d \ufb01lter rules (hierarchical problem space [71]), as discussed in Sec. 3.3. While RL algorithms, in general, have been applied to several application domains [12,24,25,75], RL often faces challenges in the real-world [21] including convergence and adversarial settings [8,28,32,55,73]. Our Work in Perspective. The design of the framework is described in Sec. 3 and illustrated in Fig. 1(b). AutoFR is the \ufb01rst to fully automate the process of \ufb01lter rule generation and create URL-based, per-site rules that block ads from scratch, using reinforcement learning. The majority of prior ML-based techniques relied on existing \ufb01lter lists at some point in their pipeline, thus creating a circular dependency. Furthermore, AutoFR is the \ufb01rst to choose the granularity of the URL-based rule to explicitly optimize the trade-off between blocking ads and avoiding visual breakage. The implementation is described in Sec. 4 and illustrated in Fig. 4. Within the RL framework, AutoFR\u2019s key design contributions include the action space, the RL components (e.g., agent, environment, reward, policy), the annotation of raw AdGraphs into site snapshots, and the logic and implementation of utilizing site snapshots to emulate site visits. The latter was instrumental in scaling the approach (it reduced the time for generating rules for a single site from approximately 13 hours to 1.6 minutes) and making our results reproducible. For some individual RL components, we leverage state-of-the-art tools: (1) we utilize one part of AdGraph that creates a graph representing the site (we do not use the trained ML model of AdGraph); and (2) we use Ad Highlighter to automatically detect ads, which is used to compute our reward function. As these individual components improve over time, the AutoFR framework can bene\ufb01t from new and improved versions or even incorporate newly available tools in the future. 3 AutoFR Framework We formalize the problem of \ufb01lter rule generation, including the process followed by human FL authors (Sec. 3.1 and Fig. 1(a)), our formulation as a reinforcement learning problem (Sec. 3.2 and Fig. 1(b)), and our multi-arm bandit algorithm for solving it (Sec. 3.3 and Alg. 1). Table 4 in the appendix summarizes the notation used throughout the paper. 3.1 Filter List Authors\u2019 Work\ufb02ow Scope. 
Among all possible filter rules, we focus on the important case of URL-based rules for blocking ads to demonstrate our approach. In App. A, we provide a longitudinal analysis of filter lists to show that these rules are the most widely used today. Table 1 shows examples of URL-based rules at different granularities: blocking by the effective second-level domain (eSLD), fully qualified domain (FQDN), and including the path.

Table 1: URL-based Filter Rules. They block requests, listed from coarser to finer grain:
(1) eSLD (effective second-level domain): ||ad.com^
(2) FQDN (fully qualified domain): ||img.ad.com^
(3) With Path (domain and path): ||ad.com/banners/ or ||img.ad.com/banners/

Filter List Authors' Workflow for Creating Filter Rules. Our design of AutoFR is motivated by the bottlenecks of filter rule generation, revealed by prior work [6,40], our discussions with FL authors, and our own experience in curating filter rules. Next, we break down the process that FL authors employ into a sequence of tasks, also illustrated in Fig. 1(a). When FL authors create filter rules for a specific site, they start by visiting the site of interest using the browser's developer tools. They observe the outgoing network requests and create, try, and select rules through the following workflow.

Task 1: Select a Network Request. FL authors consider the set of outgoing network requests and treat them as candidates to produce a filter rule. The intuition is that blocking an ad request will prevent the ad from being served. For sites that initiate many outgoing network requests, it may be time-consuming to go through the entire list. When faced with this task, FL authors depend on sharing knowledge of ad server domains with each other or on heuristics based on keywords like "ads" and "bid" in the URL. FL authors may also randomly select network requests to test.

Task 2: Create a Filter Rule and Apply. FL authors must create a filter rule that blocks the selected network request. However, there are many options to consider, since rules can cover the entire URL or only part of it, as shown in Table 1. FL authors intuitively handle this problem by first trying an eSLD filter rule, because the requests can belong to an ad server (i.e., all resources served from the eSLD relate to ads). However, the more specific the filter rule is (e.g., eSLD to FQDN), the less likely it is to lead to breakage. Then, the FL authors apply the filter rule of choice onto the site.

Task 3: Visual Inspection. Once the filter rule is applied on the site, FL authors inspect its effect, i.e., whether it indeed blocks ads and/or causes breakage (i.e., legitimate content goes missing or the page displays improperly). FL authors use differential analysis. They visit a site with and without the rule applied, and they visually inspect the page and observe whether ads and non-ads (e.g., images and text) are present/missing before/after applying the rule. In assessing the effectiveness of a rule, it is essential to ensure that it blocks at least one request, i.e., a hit. Filter rules are considered "good" if they block ads without breakage and "bad" otherwise. Avoiding breakage is critical for FL authors because rules can impact millions of users. If a rule blocks ads but causes breakage, it is considered a "potentially good" rule.

Task 4: Repeat.
FL authors repeat the process of Tasks 1, 2, 3, multiple times to make sure that the \ufb01lter rule is effective. Repetition is necessary because modern sites typically are dynamic. Different visits to the same site may trigger different page content being displayed and different ads being served. If a rule from Task 2 blocks ads but causes breakage, the author may then try a more granular \ufb01lter rule (e.g., eSLD \u2192FQDN from Table 1). If the rule does not block ads, go back to Task 1. Task 5: Stop and Store Good Filter Rules. FL authors stop this iterative process when they have identi\ufb01ed a set of \ufb01lter rules that block most ads without breakage (i.e., a best-effort approach). None of the considered rules may satisfy these (somewhat subjective) conditions, in which case no \ufb01lter rules are produced. Bottlenecks: Scale and Human-in-the-Loop. The work\ufb02ow above is labor-intensive and does not scale well. There is a large number of candidate rules to consider for sites with a large number of network requests (Task 1) and long and often obfuscated URLs (Task 2). The scale of the problem is ampli\ufb01ed by site dynamics, which requires repeatedly visiting a site (Task 4). The effect of applying each single rule must then be evaluated by the human FL author through visual inspection (Task 3), which is time-consuming on its own. Motivated by these observations, we aim to automate the process of \ufb01lter rule generation per-site. We reduce the number of iterations needed (by intelligently navigating the search space for good \ufb01lter rules via reinforcement learning), and we minimize the work required by the human FL author in each step (by automating the visual inspection and assessment of a rule as \u201cgood\u201d or \u201cbad\u201d). Our proposed methodology is illustrated in Fig. 1(b) and formalized in the next section. 3.2 Reinforcement Learning Formulation As described earlier and illustrated in Fig. 1(a), FL authors repeatedly apply different rules and evaluate their effects until they build con\ufb01dence on which rules are generally \u201cgood\u201d for a particular site. This repetitive action-response cycle lends itself naturally to the reinforcement learning (RL) paradigm, as depicted in Fig. 1(b), where actions are the applied \ufb01lter rules and rewards (response) must capture the effectiveness of the rules upon applying them to the site (environment). Testing all possible \ufb01lter rules by brute force is infeasible in practice due to time and power resources. However, RL can enable ef\ufb01cient navigation of the action space. 4 \fFigure 2: Hierarchical Action Space. A node (\ufb01lter rule) within the action space has two different edges (i.e., dependencies to other rules): (1) the initiator edge, \u2192, denotes that the source node initiated requests to the target node; and (2) the \ufb01ner-grain edge, 99K, targets a request more speci\ufb01cally, as discussed in Task 4 and Table 1. An example of an entire action space is provided in App. B.2 and Fig. 15. More speci\ufb01cally, we choose the multi-arm bandit (MAB) RL formulation. The actions in MAB are independent kbandit arms and the selection of one arm returns a numerical reward sampled from a stationary probability distribution that depends on this action. The reward determines if the selected arm is a \u201cgood\u201d or a \u201cbad\u201d arm. Through repeated action selection, the objective of the MAB agent is to maximize the expected total reward over a time period [7]. 
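As a concrete illustration of the arms involved, the sketch below derives the three candidate-rule granularities of Table 1 from a single outgoing request URL; these candidates are what the hierarchical action space described next is built from. It is illustrative only: the function name is ours, and we use the third-party tldextract package for eSLD extraction.

    from urllib.parse import urlparse
    import tldextract  # third-party helper for effective second-level domains

    def candidate_rules(request_url):
        """Return (eSLD rule, FQDN rule, with-path rule) for one outgoing request,
        ordered from coarser to finer grain as in Table 1."""
        parsed = urlparse(request_url)
        ext = tldextract.extract(request_url)
        esld = f"{ext.domain}.{ext.suffix}"        # e.g., ad.com
        fqdn = parsed.hostname                      # e.g., img.ad.com
        path = parsed.path.rsplit("/", 1)[0] + "/"  # keep the directory part only
        return (f"||{esld}^", f"||{fqdn}^", f"||{esld}{path}")

    print(candidate_rules("https://img.ad.com/banners/top.gif"))
    # ('||ad.com^', '||img.ad.com^', '||ad.com/banners/')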
The MAB framework \ufb01ts well with our problem. The MAB agent replaces the human (FL author) in Fig. 1(a). The agent knows all available \u201carms\u201d (possible \ufb01lter rules), i.e., the action space; see Sec. 3.2.1. The agent picks a \ufb01lter rule (arm) and applies it to the MAB environment, which, in our case, consists of the site \u2113(with its unknown dynamics as per Task 4), the browser, and a selected con\ufb01guration (how we value blocking ads vs. avoiding breakage, explained in Sec. 3.3). The latter affects the reward of an action (rule) the agent selects. Filter rules are independent of each other. Furthermore, the order of applying different \ufb01lter rules does not affect the result. In adblockers, like Adblock Plus, blocking rules do not have precedence. Through exploring available arms, the agent ef\ufb01ciently learns which \ufb01lter rules are best at blocking ads while minimizing breakage; see Sec. 3.2.2. Next, we de\ufb01ne the key components of the proposed AutoFR framework, depicted in Fig. 1(b). It replaces the human-in-the-loop in two ways: (1) the FL author is replaced by the MAB policy that avoids brute force and ef\ufb01ciently navigates the action space; and (2) the reward function is automatically computed, as explained in Sec. 3.2.2, without requiring a human\u2019s visual inspection. 3.2.1 Actions Action a (Filter Rule). An action is a URL blocking \ufb01lter rule that can have different granular levels, shown in Table 1, and is applied by the agent onto the environment. We use the terms action, arm, and \ufb01lter rule, interchangeably. Hierarchical Action Space AH. Based on the outgoing network requests of a site \u2113(Task 1), there are many possible rules that can be created (Task 2) to block that request. Fig. 2 shows an example of dependencies among candidate rules: 1. We should try rules that are coarser grain \ufb01rst (doubleclick.net) before trying more \ufb01ner-grain rules (stats.g.doubleclick.net) (the horizontal dotted lines). This intuition was discussed in Task 4. 2. If doubleclick.net initiates requests to clmbtech.com, we should explore it \ufb01rst, before trying clmbtech.com (the vertical solid lines). Sec. 4.2 describes how we retrieve the initiator information. The dependencies among rules introduce a hierarchy in the action space AH, which can be leveraged to expedite the exploration and discovery of good rules via pruning. If an action (\ufb01lter rule) is good (it brings a high reward, as de\ufb01ned in Sec. 3.2.2), the agent no longer needs to explore its children. We further discuss the size of action spaces in App. D.1.2 and Fig. 20; we show that they can be large. The creation of AH automates Task 2. 3.2.2 Rewards Once a rule is created, it is applied on the site (Task 2). The human FL author visually inspects the site, before and after the application of the rule, and assesses whether ads have been blocked without breaking the page (Task 3). To automate this task, we need to de\ufb01ne a reward function for the rule that mimics the human FL author\u2019s assessment of whether a rule blocks ads and the breakage that could occur. Site Representation. We abstract the representation of a site \u2113by counting three types of content visible to the user: we count the ads (CA), images (CI), and text (CT) displayed. An example is shown in Fig. 3. The baseline representation refers to the site before applying the rule. 
Since a site ℓ has unknown dynamics (Task 4), we need to visit it multiple times and average these counters, obtaining the baseline averages $\bar{C}_A$, $\bar{C}_I$, and $\bar{C}_T$. We envision that obtaining these counters from a site can be done not only by a human (as is the case today in Task 3) but also automatically using image recognition (e.g., Ad Highlighter [63]) or better tools as they become available. This is an opportunity to remove the human-in-the-loop and further automate the process. We detail this further in Sec. 4.3.

Figure 3: Site Representation. We represent a site as counts of visible ads (CA), images (CI), and text (CT), as explained in Sec. 3.2.2. Applying a filter rule changes them, by blocking ads (reducing CA) and/or hiding legitimate content (changing CI and CT, thus breakage B).

Site Feedback after Applying a Rule. When the agent applies an action a (rule), the site representation will change from the baseline averages $(\bar{C}_A, \bar{C}_I, \bar{C}_T)$ to $(C_A, C_I, C_T)$. The intuition is that, after applying a filter rule, it is desirable to see the number of ads decrease as much as possible (ideally $C_A = 0$) and to continue to see the legitimate content (i.e., no change in $C_I$, $C_T$ compared to the baseline). To measure the difference before and after applying the rule, we define the following:

$$\hat{C}_A = \frac{\bar{C}_A - C_A}{\bar{C}_A}, \qquad \hat{C}_I = \frac{|\bar{C}_I - C_I|}{\bar{C}_I}, \qquad \hat{C}_T = \frac{|\bar{C}_T - C_T|}{\bar{C}_T} \qquad (1)$$

$\hat{C}_A$ measures the fraction of ads blocked; the higher it is, the better the rule is at blocking ads. Ideally, all ads are blocked, i.e., $\hat{C}_A$ is 1. In contrast, $\hat{C}_I$ and $\hat{C}_T$ measure the fraction of the page that is broken; higher values incur more breakage. We define page breakage (B) in terms of the visible images ($\hat{C}_I$) and text ($\hat{C}_T$) that are not related to ads but are missing after a rule is applied:

$$B = \frac{\hat{C}_I + \hat{C}_T}{2} \qquad (2)$$

We take a neutral approach, treating both visual components equally and averaging $\hat{C}_I$ and $\hat{C}_T$. This can be configured to express different preferences by the user, e.g., treating content above the fold as more important. Lastly, avoiding breakage is measured by 1 − B. It is desirable that 1 − B is 1, i.e., the site has no visual breakage.

Trade-off: Blocking Ads ($\hat{C}_A$) vs. Avoiding Breakage (1 − B). The goal of a human FL author is to choose filter rules that block as many ads as possible (high $\hat{C}_A$) without breaking the page (high 1 − B). There are different ways to capture this trade-off. We could have taken a weighted average of $\hat{C}_A$ and B. However, to better mimic the practices of today's FL authors, we use a threshold w ∈ [0,1] as a design parameter to control how much breakage a FL author tolerates: 1 − B ≥ w. Blocking ads is easy when there is no constraint on breakage: one can choose rules that break the whole page. FL authors control this either by using more specific rules (e.g., eSLD to FQDN) to avoid breakage, or by not blocking at all. We rely on this trade-off as the basis of our evaluation in Sec. 5. An example is illustrated in App. D.1.2 and Fig. 19. It is desirable to operate where $\hat{C}_A = 1$ and 1 − B = 1. In practice, FL authors tolerate little to no breakage, e.g., w ≥ 0.9. However, w is a configurable parameter in our framework.
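To make Eqs. (1)-(2) concrete, here is a minimal Python sketch of the site-feedback and breakage computation (function and variable names are ours, for illustration; it assumes the averaged baseline counters and the post-rule counters are already available):

    def site_feedback(baseline, after):
        """Eq. (1): fraction of ads blocked and fraction of images/text changed.
        baseline: averaged counters (C_A_bar, C_I_bar, C_T_bar) over n visits
        after:    counters (C_A, C_I, C_T) observed with the filter rule applied
        """
        ca_bar, ci_bar, ct_bar = baseline
        ca, ci, ct = after
        c_hat_a = (ca_bar - ca) / ca_bar      # fraction of ads blocked
        c_hat_i = abs(ci_bar - ci) / ci_bar   # fraction of images changed
        c_hat_t = abs(ct_bar - ct) / ct_bar   # fraction of text changed
        return c_hat_a, c_hat_i, c_hat_t

    def breakage(c_hat_i, c_hat_t):
        """Eq. (2): images and text are weighted equally."""
        return (c_hat_i + c_hat_t) / 2.0

    # Toy example: 3 ads, 20 images, 11 text nodes at baseline;
    # a rule removes all 3 ads and 1 legitimate image.
    c_hat_a, c_hat_i, c_hat_t = site_feedback((3, 20, 11), (0, 19, 11))
    print(c_hat_a, 1 - breakage(c_hat_i, c_hat_t))  # 1.0, 0.975 -> within w = 0.9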
Reward Function RF. When the MAB agent applies a filter rule F (action a) at time t on the site ℓ (environment), this will lead to ads being blocked and/or content being hidden, which is measured by the feedback ($\hat{C}_A$, $\hat{C}_I$, $\hat{C}_T$) defined in Eq. (1). We design a reward function $R_F: \mathbb{R}^3 \rightarrow [-1, 1]$ that mimics the FL author's assessment (Task 3) of whether a filter rule F is good ($R_F(w, \hat{C}_A, B) > 0$) or bad ($R_F(w, \hat{C}_A, B) < 0$) at blocking ads, based on the site feedback:

$$R_F(w, \hat{C}_A, B) = \begin{cases} -1 & \text{if } \hat{C}_A = 0 & (3a) \\ 0 & \text{if } \hat{C}_A > 0 \text{ and } 1 - B < w & (3b) \\ \hat{C}_A & \text{if } \hat{C}_A > 0 \text{ and } 1 - B \geq w & (3c) \end{cases}$$

The rationale for this design is as follows. a) Bad Rules (Eq. (3a)): if the action does not block any ads ($\hat{C}_A = 0$), the agent receives a reward value of −1 to denote that this is not a useful rule to consider. b) Potentially Good Rules (Eq. (3b)): if the rule blocks some ads ($\hat{C}_A > 0$) but incurs breakage beyond the FL author's tolerance, then it is considered "potentially good"¹ and receives a reward value of zero. c) Good Rules (Eq. (3c)): if the rule blocks ads² and causes no more breakage than what is tolerable for the FL author, then the agent receives a positive reward equal to the fraction of ads that it blocked ($\hat{C}_A$).

¹ "Potentially" means that the rule may have children rules within the action space that are effective at blocking ads with less breakage.
² Eq. (3) explicitly requires a rule to block at least some ads to receive a positive reward. AutoFR can select rules that have additional side-benefits (e.g., also blocking tracking requests, typically related to ads).

3.2.3 Policy

Our goal is to identify "good" filter rules, i.e., rules that give consistently high rewards. To that end, we need to refine our notion of a "good" rule and define a strategy for exploring the space of candidate filter rules.

Expected Reward Qt(a). The MAB agent selects an action a, following a policy, from a set of available actions A, and applies it on the site to receive a reward $r_t = R_F(w, \hat{C}_A, B)$. It does this over some time horizon t = 1, 2, ..., T. However, due to the site dynamics explained in Task 4, the reward varies over time, and we need a metric that captures how good a rule is over time. In MAB, this metric is the weighted moving average of the rewards over time: $Q_{t+1}(a) = Q_t(a) + \alpha (r_t - Q_t(a))$, where α is the learning step size.

Policy. Due to the large scale of the problem and the cost of exploring candidate rules, the agent should spend more time exploring good actions. The MAB policy utilizes Qt(a) to balance between exploring new rules in AH and exploiting the best known a so far. This process automates Tasks 1 and 2. We use a standard Upper Confidence Bound (UCB) policy to manage the trade-off between exploration and exploitation [7]. Instead of the agent solely picking the maximum Qt(a) at each t to maximize the total reward, UCB considers an exploration value Ut(a) that measures the confidence level of the current estimates Qt(a). An MAB agent that follows the UCB policy selects a at time t such that $a_t = \arg\max_a [Q_t(a) + U_t(a)]$. Higher values of Ut(a) mean that a should be explored more. It is updated using $U_t(a) = c \times \sqrt{\log N[a'] / N[a]}$, where N[a'] is the number of times the agent selected all actions (a'), N[a] is the number of times the agent has selected a, and c is a hyper-parameter that controls the amount of exploration.

Algorithm 1 AutoFR Algorithm
Require: Design parameter: w ∈ [0,1]
Inputs: Site (ℓ); Reward function (RF: R³ → [−1,1]); Noise threshold (ε = 0.05); Number of site visits (n = 10)
Hyper-parameters: Exploration for UCB (c = 1.4); Initial Q-value (Q0 = 0.2); Learning step size (α = 1/N[a]); Time horizon (T)
Output: Set of filter rules (F)
 1:
 2: procedure INITIALIZE(ℓ, n)
 3:   C̄A, C̄I, C̄T, reqs ← VISITSITE(ℓ, n, ∅)
 4:   AH ← BUILDACTIONSPACE(reqs)
 5:   return C̄A, C̄I, C̄T, AH
 6: end procedure
 7:
 8: procedure AUTOFR(ℓ, w, c, α, n)
 9:   C̄A, C̄I, C̄T, AH ← INITIALIZE(ℓ, n)
10:   F ← ∅, A ← ∅
11:   A ← AH.root.children
12:   repeat
13:     Q(a) ← Q0, ∀a ∈ A
14:     for t = 1 to T do
15:       at ← CHOOSEARMUCB(A, Qt, c)
16:       CAt, CIt, CTt, hits ← VISITSITE(ℓ, 1, at)
17:       ĈAt, ĈIt, ĈTt ← SITEFEEDBACK(CAt, CIt, CTt)
18:       Bt ← BREAKAGE(ĈIt, ĈTt)
19:       if at ∈ hits then
20:         rt ← RF(w, ĈAt, Bt)
21:         Qt+1(at) ← Qt(at) + α(rt − Qt(at))
22:       else
23:         Put at to sleep
24:       end if
25:     end for
26:     F ← F ∪ {a ∈ A | Q(a) > ε}
27:     A ← {a.children, ∀a ∈ A | −ε ≤ Q(a) ≤ ε}
28:   until A is ∅
29:   return F
30: end procedure

3.3 AutoFR Algorithm

Algorithm 1 summarizes our AutoFR algorithm. The inputs are the site ℓ that we want to create filter rules for, the design parameter (threshold) w, and various hyper-parameters (discussed in App. D.1.1). In the end, it outputs a set of filter rules F, if any. It consists of the two procedures discussed next.

INITIALIZE Procedure. First, we obtain the baseline representation of the site of interest ℓ (Sec. 3.2.2), when no filter rules are applied. To do so, it visits the site n times (i.e., VISITSITE) to capture some of the dynamics of ℓ. The environment returns the average counters C̄A, C̄I, C̄T, and the set of outgoing requests reqs. The average counters will be used in evaluating the reward function (Eq. (3)). Next, we build the hierarchical action space AH using all network requests reqs (Tasks 1, 2).

AUTOFR Procedure. This is the core of the AutoFR algorithm. We call INITIALIZE and then traverse the action space AH from the root node to get the first set of arms to consider, denoted as A. Note that we treat every layer (A) of AH as a separate run of MAB with independent arms (filter rules). One run of MAB starts by initializing the expected values of all "arms" at Q0 and then running UCB for a time horizon T, as explained in Sec. 3.2.3. Since the size of A can change at each run, we scale T based on the number of arms; by default, we use 100 × A.size. Each run of MAB ends by checking the candidates for filter rules. In particular, we check whether a filter rule should be further explored (down the AH) or become part of the output set F, using Eq. (3) as a guide. A technicality is that Eq. (3b) compares the reward RF to zero, while in practice Q(a) may not converge to exactly zero. Therefore, we use a noise threshold (ε = 0.05) to decide whether Qt(a) is close enough to zero (−ε ≤ Q(a) ≤ ε). Then, we apply the same intuition as in Eq. (3), but using Q(a) instead of RF, to assess the rule and the next steps.

a) Bad Rules: Ignore. This case is not explicitly shown but mirrors Eq. (3a). If a rule has Q(a) < −ε, then we ignore it and do not explore its children.

b) Potentially Good Rules: Explore Further. Mirroring Eq. (3b), if a rule is within ±ε of zero, it helps with blocking ads but also causes more breakage than is acceptable (w). In that case, we ignore the rule but further explore its children within AH. An example based on doubleclick.net is shown in Fig. 2. In that case, A is reset to be the immediate children of these arms, and we proceed to the next MAB run.

c) Good Rules: Select. When we find a good rule (Q(a) > ε), we add that rule to our list F and no longer explore its children. This mimics Eq. (3c). An example is shown in Fig. 2: if doubleclick.net is a good rule, then its children are not explored further.

We repeatedly run MAB until there are no more potentially good filter rules to explore³. This stopping condition automates Task 5. The output is the final set of good filter rules F.

³ When we find a rule that we cannot apply, we put it to "sleep", in MAB terminology. This is because such rules do not block any network request (i.e., no hits, as in Task 3), and we expect them to not affect the site in the future, either.
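For readers who prefer code to pseudocode, below is a condensed Python sketch of the AUTOFR procedure (UCB arm selection, Q-value updates, and the good/potentially-good/bad split). It is our own simplification of Alg. 1 under the stated defaults, not the full implementation: visit_site and the action-space object (assumed to expose .root.children and per-arm .children) are placeholders for the components described in Sec. 4.

    import math

    def reward(w, c_hat_a, b):
        # Eq. (3): -1 if no ads blocked; 0 if ads blocked but breakage exceeds w;
        # otherwise the fraction of ads blocked.
        if c_hat_a == 0:
            return -1.0
        return c_hat_a if (1 - b) >= w else 0.0

    def ucb_pick(arms, Q, N, c=1.4):
        total = sum(N[a] for a in arms) or 1
        def score(a):
            if N[a] == 0:
                return float("inf")           # try each arm at least once
            return Q[a] + c * math.sqrt(math.log(total) / N[a])
        return max(arms, key=score)

    def autofr(site, action_space, visit_site, w=0.9, q0=0.2, eps=0.05):
        """visit_site(site, rule) -> (c_hat_a, breakage, hit) for one emulated visit."""
        good, layer = [], list(action_space.root.children)
        while layer:
            Q = {a: q0 for a in layer}
            N = {a: 0 for a in layer}
            active = list(layer)
            for _ in range(100 * len(layer)):    # time horizon T scales with |A|
                if not active:
                    break
                a = ucb_pick(active, Q, N)
                c_hat_a, b, hit = visit_site(site, a)
                N[a] += 1
                if not hit:
                    active.remove(a)             # put non-matching arms to sleep
                    continue
                r = reward(w, c_hat_a, b)
                Q[a] += (r - Q[a]) / N[a]        # alpha = 1 / N[a]
            good += [a for a in active if Q[a] > eps]            # c) good rules
            layer = [ch for a in active if -eps <= Q[a] <= eps   # b) potentially good:
                     for ch in a.children]                       #    explore children
            # a) arms with Q(a) < -eps, and sleeping arms, are dropped
        return good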
4 AutoFR Implementation

In this section, we present the AutoFR tool that fully implements the RL framework described in the previous section. AutoFR removes the human-in-the-loop. The FL author only needs to provide their preferences (i.e., how much they care about avoiding breakage, via w) and hyper-parameters (detailed in Alg. 1), and the site of interest ℓ. AutoFR then automates Tasks 1-5 and outputs a list of filter rules F specific to ℓ, along with their corresponding values Q.

Figure 4: AutoFR Example Workflow (Controlled Environment). INITIALIZE (a-c, Alg. 1): (a) spawns n = 10 docker instances and visits the site until it finishes loading; (b) extracts the outgoing requests from all visits and builds the action space; (c) extracts the raw graph and annotates it to denote CA, CI, and CT, using JS and Selenium. Once all 10 site snapshots are annotated, we run the RL portion of the AUTOFR procedure (steps 1-4). Lastly, AutoFR outputs the filter rules at step 5, e.g., ||s.yimg.com/rq/darla/4-10-0/html/r-sf.html.

Implementation Costs. Let us revisit Fig. 1(b) and reflect on the interactions with the site. The MAB agent (like the human FL author) must visit the site ℓ, apply the filter rule, and wait for the site to finish loading the page content and ads (if any). The agent must repeat this several times to learn the expected reward of the rules in the set of available actions A. First, for completeness, we implemented exactly that in a live environment (referred to as AutoFR-L; details in App. C and evaluation in App. C.2.3). We employed cloud services from Amazon Web Services (AWS) to scale to tens of thousands of sites.
This has high computation and network access costs and, more importantly, introduces long delays until convergence. To make things concrete. For the delay, we found it took 47 seconds per-visit to a site, on average, by sampling 100 sites in the Top\u20135K. Thus, running AutoFR for one site with ten arms in the \ufb01rst MAB run, for 1K iterations, would take 13 hours for one site alone! For the monetary cost, running AutoFR-L on 1K sites and scaling it using one AWS EC2 instance per-site ($0.10/hour) would cost roughly $1.3K for 1K sites, or $1.3 to run it once per-site. This a well-known problem with applying RL in a real-world setting. Thus, an implementation of AutoFR that creates rules by interacting with live sites is inherently slow, expensive, and does not scale to a large number of sites. Scalable and Practical. Although AutoFR-L is already an improvement over the human work\ufb02ow, we were able to design an even faster tool, which produces rules for a single site in minutes instead of hours. The core idea is to create rules in a realistic but controlled environment, where the expensive and slow visits to the website are performed in advance, stored once, and then used during multiple MAB runs, as explained in Sec. 3.3. In this section, we present the design of this implementation in a controlled environment: AutoFR-C, or AutoFR for simplicity. An overview of our implementation is provided in Fig. 4. Importantly, this allows our AutoFR tool to scale across thousands of sites and, thus, utilized as a practical tool. 4.1 Environment To deal with the aforementioned delays and costs during training, we replace visiting a site live with emulating a visit to the site, using saved site snapshots. This provides advantages: (1) we can parallelize and speed up the collection of snapshots, and then run MAB off-line; (2) we can reuse the same stored snapshots to evaluate different w values, algorithms, or reward functions while incurring the collection cost only once; and (3) we plan to make these snapshots available to the community (i.e., it can replicate our results and utilize snapshots in its own work). Collecting and Storing Snapshots. Site snapshots are collected up-front during the INITIALIZE phase of Alg. 1 and saved locally. We illustrate this in Fig. 4, steps a\u2013c. We use AdGraph [33], an instrumented Chromium browser that outputs a graph representation of how the site is loaded. To capture the dynamics, we visit a site multiple times using Selenium to control AdGraph and collect and store the site snapshots. The environment is dockerized using Debian Buster as the base image, making the setup simple and scalable. For example, we can retrieve 10 site snapshots in parallel, if the host machine can handle it. In Sec. 5.1, we \ufb01nd that a site snapshot takes 49 seconds on average to collect. Without parallelization, this would take 8 minutes to collect 10 snapshots sequentially. De\ufb01ning Site Snapshots. Site snapshots represent how a site \u2113is loaded. They are directed graphs with known root nodes and possible cycles. An example is shown in Fig. 5. Site snapshots are large and contain thousands of nodes and edges; see App. D.1.2, Fig. 20. We use AdGraph as the starting point for de\ufb01ning the graph structure and build upon it. First, we automatically identify the visible elements, i.e., ads (AD), images (IMG), and text (TEXT) (technical details in Sec. 4.3), for which we need to compute counts CA, CI, and CT, respectively. 
Second, once we identify them, we make sure that AdGraph knows that these elements are of interest to us. Thus, we annotate the elements with a new attribute such as \u201cFRG-ad\u201d, \u201cFRG-image\u201d, and \u201cFRG-textnode\u201d set to \u201cTrue\u201d. Annotating is challenging because ads have complex nested structures, and we cannot attach attributes to text nodes. Third, we include how JS scripts interact with each other using \u201cScript-used-by\u201d edges, shown in Fig. 5. Lastly, we save site snapshots as \u201c.graphml\u201d \ufb01les. Due to lack of space, we defer technical details on building site snapshots to App. B.3. Emulating a Visit to a Site. Emulation means that the agent does not actually visit the site live but instead reads a site snapshot and traverses the graph to infer how the site was loaded. To emulate a visit to the site, we randomly read a site snapshot into memory using NetworkX and traverse the graph in a breadth-\ufb01rst search manner starting from the root \u2014 effectively replaying the events (JS execution, 8 \fHTML node creation, requests that were initiated, etc.) that happened during the loading of a site. This greatly increases the performance of AutoFR as the agent does not wait for the per-site visit to \ufb01nish loading or for ads to \ufb01nish being served. Thus, reducing the network usage cost. We hard-code a random seed (40) so that experiments can be replicated later. Applying Filter Rules. To apply a \ufb01lter rule, we use an of\ufb02ine adblocker, adblockparser [56], which can be instantiated with our \ufb01lter rule. If a site snapshot node has a URL, we can determine whether it is blocked by passing it to adblockparser. We further modi\ufb01ed adblockparser to expose which \ufb01lter rules caused the blocking of the node (i.e., hits). If a node is blocked, we do not consider its children during the traversal. Capturing Site Feedback from Site Snapshots. The next step is to assess the effect of applying the rule on the site snapshot. At this point, the nodes of site snapshots are already annotated. We need to compute the counters of ads, images, and text (CA, CI, CT), which are then used to calculate the reward function. Its python implementation follows Sec. 3.2.2. We use the following intuition. If we block the source node of edge types \u201cActor\u201d, \u201cRequestor\u201d, or \u201cScript-used-by\u201d, then their annotated descendants (IMG, TEXT, AD) will be blocked (e.g., not visible or no longer served) as well. Consider the following examples on Fig. 5: (1) if we block JS Script A, then we can infer that the annotated IMG and TEXT will be blocked; (2) if we block the annotated IMG node itself, then it will block the URL (i.e., stop the initiation of the network request), resulting in the IMG not being displayed; and (3) if we block JS Script B that is used by JS Script A, then the annotated nodes IMG, TEXT, IFRAME (AD) will all be blocked. As we traverse the site snapshot, we count as follows. If we encounter an annotated node, we increment the respective counters CA. CI, CT. If an ancestor of an annotated node is blocked, then we do not count it. Limitations. To capture the site dynamics due to a site serving different content and ads, we perform several visits persite and collect the corresponding snapshots. We found that 10 visits were suf\ufb01cient to capture site dynamics in terms of the eSLDs on the site, which is a similar approach taken by prior work [40,76] (see App. D.1.1). 
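Before turning to a second type of dynamics, the sketch below illustrates the snapshot-replay logic described above under "Applying Filter Rules" and "Capturing Site Feedback from Site Snapshots". It is a simplification of our implementation (it prunes the entire subtree of any blocked node instead of distinguishing edge types), and the 'url' node attribute, the root handle, and the FRG-* annotations are assumed to be present on the stored graph as described earlier.

    import networkx as nx
    from adblockparser import AdblockRules

    ANNOTATIONS = {"FRG-ad": "ads", "FRG-image": "images", "FRG-textnode": "text"}

    def emulate_visit(snapshot: nx.DiGraph, root, rule: str):
        """Replay one stored site snapshot with a single filter rule applied.
        Returns counts of visible ads/images/text and whether the rule had a hit."""
        rules = AdblockRules([rule])
        counts = {"ads": 0, "images": 0, "text": 0}
        hit = False
        queue, seen = [root], set()
        while queue:                              # breadth-first replay from the root
            node = queue.pop(0)
            if node in seen:
                continue
            seen.add(node)
            attrs = snapshot.nodes[node]
            url = attrs.get("url")
            if url and rules.should_block(url):
                hit = True
                continue                          # blocked: skip the whole subtree
            for key, counter in ANNOTATIONS.items():
                if attrs.get(key) == "True":
                    counts[counter] += 1          # annotated node survived the rule
            queue.extend(snapshot.successors(node))
        return counts, hit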
However, there is also a different type of dynamics that snapshots miss. When we emulate a visit to the site while applying a \ufb01lter rule, we infer the response based on the stored snapshot. In the live setting, the site might detect the adblocker (or detect missing ads [40]) and try to evade it (i.e., trigger different JS code), thus leading to a different response that is not captured by our snapshots. We evaluate this limitation in App. D.1.3 and show that it does not greatly impact the effectiveness of our rules. Another limitation can be explained via Fig. 5. When JS Script B is used by JS Script A, we assume that blocking B will negatively affect A. Therefore, if A is responsible for IMG and TEXT, then blocking B will also block this content; this may not happen in the real world. When we did not consider this scenario, we found that AutoFR may create \ufb01lter rules Figure 5: Site Snapshot. It is a graph that represents how a site is loaded. The nodes represent JS Scripts, HTML nodes (e.g., DIV, IMG, TEXT, IFRAME), and network requests (e.g., URL). \u201cActor\u201d edges track which source node added or modi\ufb01ed a target node. \u201cRequestor\u201d edges denote which nodes initiated a network request. \u201cDOM\u201d edges capture the HTML structure between HTML nodes. Lastly, \u201cScript-used-by\u201d edges track how JS scripts call each other. As described in Sec. 4.1, nodes annotated by AutoFR have \ufb01lled backgrounds, while grayed-out nodes are invisible to the user. that cause major breakage. Since breakage must be avoided and we cannot differentiate between the two possibilities, we maintain our conservative approach. 4.2 Agent Action Space AH. During the INITIALIZE procedure (Alg. 1), we visit the site \u2113multiple times and construct the action space, as explained in App. B.2 and summarized here. First, we convert every request to three different \ufb01lter rules, as shown in Table 1. We add edges between them (eSLD \u2192 FQDN \u2192With path), which serve as the \ufb01ner-grain edges, shown in Fig. 2. We further augment AH by considering the \u201cinitiator\u201d of each request, retrieved from the Chrome DevTools protocol and depicted in solid lines in Fig. 2. This makes the AH taller and reduces the number of arms to explore per run of MAB, as described in Sec. 3.3. The resulting action space is a directed acyclic graph with nodes that represent \ufb01lter rules; see Fig. 2 for a zoom-in along with App. B.2 and Fig. 15 for a larger example. We implement it as a NetworkX graph and save it as a \u201c.graphml\u201d \ufb01le, a standard graph \ufb01le type utilized by prior work [60]. Policy. The UCB policy of Sec. 3.2.3 is implemented in python. At time t (Alg. 1, line 14), the agent retrieves the \ufb01lter rule selected by the policy and applies it on the randomly chosen site snapshot instance. 4.3 Automating Visual Component Detection A particularly time-consuming step in the human work\ufb02ow is Task 3 in Fig. 1(a). The FL author visually inspects the page, 9 \fbefore and after they apply a \ufb01lter rule, to assess whether the rule blocked ads (b CA) and/or impacted the page content (b CI, b CT). AutoFR in Fig. 1(b) summarizes this assessment in the reward in Eq. (3). However, to minimize the human work, we also need to replace the visual inspection and automatically detect and annotate elements as ads (AD), images (IMG), or text (TEXT) on the page. Detection of AD (Perceptual). 
To that end, we automatically detect ads using Ad Highlighter [63], a perceptual ad identi\ufb01er (and web extension) that detects ads on a site. We evaluated different ad perceptual classi\ufb01ers, including Percival [1], and we chose Ad Highlighter because it has high precision and does not rely on existing \ufb01lter rules. We utilize Selenium to traverse nested iframes to determine whether Ad Highlighter has marked them as ads. The details of how Ad Highlighter works are deferred to App. B.3, C.2.1. Detection of IMG and TEXT. We automatically detect visible images and text by using Selenium to inject our custom JS that walks the HTML DOM and \ufb01nds image-related elements (i.e., ones that have background-urls) or the ones with text node type, respectively. To know if they are visible, we see whether the element\u2019s or text container\u2019s size is > 2px [40]. Discussion of the Visual Components. It is important to note that our framework is agnostic to how we detect elements on the page. For detecting ads, this can be done by a human, the current Ad Highlighter, future improved perceptual classi\ufb01ers, heuristics, or any component that identi\ufb01es ads with high precision. This also applies to detecting the number of images and text. Images can be counted using an instrumented browser that hooks into the pipeline of rendering images [1]. Text can be extracted from screenshots of a site using Tesseract [63], an OCR engine. Therefore, the AutoFR framework is modular and dependent on how well these components perform. Discussion of Blocking Ads vs. Tracking. We focus on detecting ads and generating \ufb01lter rules that block ads for two reasons. First, they are the most popular type of rules in \ufb01lter lists (App. A, Fig. 14). Second, ads can be visually detected, enabling a human (FL author) or a visual detection module (such as Ad Highlighter) to assess if the rule was successful (the ad is no longer displayed) or not at blocking ads. Although tracking is related to ads, it is impossible to detect visually, and assessing the success of a rule that blocks tracking is more challenging, e.g., involves JS code analysis [17]. Extending AutoFR for tracking is a direction for future use. 5 Evaluation In this section, we evaluate the performance of AutoFR (i.e., the trade-off between blocking ads and avoiding breakage) and compare it to EasyList as a baseline. In addition, we characterize properties of the \ufb01lter rules produced by AutoFR: how they can be controlled via parameter w, how they compare to EasyList rules, how fast they need to be updated, and how well Datasets w=0.9 Sites Filter Rules Snapshots W09-Dataset (Sites \u22651 rule) 933 361 9.3K Full-W09-Dataset (All sites) 1042 361 10.4K Table 2: AutoFR Top\u20135K Results. they generalize across sites. Parameter selection, automated evaluation work\ufb02ow, and more can be found in App. D. 5.1 Filter Rule Evaluation Per-Site We apply AutoFR on the Tranco Top\u20135K sites [41,67] to generate rules using the breakage tolerance threshold of w=0.9. All other AutoFR parameters are the same as in Alg. 1. AutoFR Results. Table 2 summarizes our results. Overall, AutoFR generated 361 \ufb01lter rules for 933 sites. For some sites, AutoFR did not generate any rules since none of the potential rules were viable at the selected w threshold. Ef\ufb01ciency. AutoFR is ef\ufb01cient and practical: it can take 1.6\u20139 minutes to run per-site (see App. 
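A rough sketch of the IMG/TEXT detection step follows: Selenium injects custom JavaScript that walks the DOM and applies the >2px visibility test. The injected script below is illustrative only; AutoFR's actual script is not reproduced here.

```python
from selenium import webdriver

# Illustrative JS: count visible image-like elements and visible text nodes.
# The >2px size test follows the description above; everything else is a guess.
COUNT_VISIBLE_JS = """
let imgs = 0, texts = 0;
const visible = el => {
  const r = el.getBoundingClientRect();
  return r.width > 2 && r.height > 2;
};
document.querySelectorAll('*').forEach(el => {
  const bg = getComputedStyle(el).backgroundImage;
  if ((el.tagName === 'IMG' || (bg && bg !== 'none')) && visible(el)) imgs++;
});
const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
while (walker.nextNode()) {
  const t = walker.currentNode;
  if (t.textContent.trim() && t.parentElement && visible(t.parentElement)) texts++;
}
return [imgs, texts];
"""

driver = webdriver.Chrome()
driver.get("https://example.com")                 # hypothetical site
num_images, num_text = driver.execute_script(COUNT_VISIBLE_JS)
```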
D.1.2), which is an order of magnitude improvement over the 13 hours per-site of live training in Sec. 4. During each per-site run, we explore tens to hundreds of potential rules and conduct up to thousands of iterations within MAB runs (see Fig. 20). This ef\ufb01ciency is key to scaling AutoFR to a large number of sites and over time. AutoFR: Validation with Snapshots. Since AutoFR generates rules for each particular site (i.e., per-site), we \ufb01rst apply these rules to the site for which they have been created. To that end, we \ufb01rst apply the rules to the stored site snapshots, and we report the results in Fig. 6(a) and Table 3 col. 1. We see that the rules block ads on 77% of the sites within the w = 0.9 breakage threshold. As we demonstrate next, this number is lower due to the limitations of traversing snapshots (Sec. 4.1) and the rules are more effective when tested on sites in the wild. AutoFR vs. EasyList: Validation In The Wild. Next, we apply the rules from AutoFR to the same sites they have been created for, but this time on the real site (\u201cin the wild\u201d), not on the site snapshots. For comparison, we also apply EasyList4 to the same set of Top\u20135K sites and we report our results in Fig. 6(b) and Table 3 col. 2 and 4. AutoFR\u2019s rules block 95% (or more) of ads with less than 5% breakage for 74% of the site (i.e., within the operating point) as compared to 79% for EasyList. For sites within the w threshold, AutoFR and EasyList perform comparably at 86% and 87%, respectively (row 2). Overall, our rules blocked 86% of all ads vs. 87% by EasyList, within the w threshold (row 3). Some sites fall below the w threshold partly due to limitations discussed in App. D.1.2, including limitations of AdGraph [33]. To further con\ufb01rm our results for AutoFR and EasyList, we randomly selected 272 sites (a sample size out of 933 4For a fair comparison, we parse EasyList and utilize delimiters (e.g., \u201c$\u201d, \u201c||\u201d, and \u201c\u02c6\u201d) to identify URL-based \ufb01lter rules and keep them. 10 \f0.0 0.2 0.4 0.6 0.8 1.0 Avoiding Breakage (1 \u2212\ue22e) 0.0 0.2 0.4 0.6 0.8 1.0 Blocking Ads ( \u0302 CA) w = 0.9 0 200 400 600 800 1000 Number of Sites (a) AutoFR (Snapshots) 0.0 0.2 0.4 0.6 0.8 1.0 Avoiding Breakage (1 \u2212\ue22e) 0.0 0.2 0.4 0.6 0.8 1.0 Blocking Ads ( \u0302 CA) w = 0.9 0 200 400 600 800 1000 Number of Sites (b) AutoFR (In the Wild) 0.0 0.2 0.4 0.6 0.8 1.0 Avoiding Breakage (1 \u2212\ue22e) 0.0 0.2 0.4 0.6 0.8 1.0 Blocking Ads ( \u0302 CA) w = 0.9 0 200 400 600 800 1000 Number of Sites (c) EasyList (In the Wild) Figure 6: AutoFR (Top\u20135K). All sub-\ufb01gures exhibit similar patterns. First, the \ufb01lter rules were able to block ads with minimal breakage for the majority of sites. Thus, the top-right bin (the operating point) is the darkest. Second, there are edge cases for sites with partially blocked ads within the w threshold (right of w line) and sites below the w threshold (left of w line). Fig. 19 explains how to read these plots. See Table 3, col. 1, 2, and 4, for additional information. 
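The footnote above mentions keeping only URL-based EasyList rules by looking at delimiters such as \u201c$\u201d, \u201c||\u201d, and \u201c\u02c6\u201d. A heuristic sketch of that filtering (not the authors' exact parser) might look like:

```python
def is_url_based_rule(line: str) -> bool:
    """Keep network (URL-based) rules; drop comments and element-hiding rules."""
    line = line.strip()
    if not line or line.startswith("!") or line.startswith("["):
        return False                      # comments and list headers
    if "##" in line or "#@#" in line or "#?#" in line:
        return False                      # element-hiding / exception-hiding rules
    return line.startswith("||") or "^" in line or "$" in line or "/" in line

with open("easylist.txt", encoding="utf-8") as fh:   # path is illustrative
    url_rules = [l.strip() for l in fh if is_url_based_rule(l)]
print(len(url_rules))
```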
0.0 0.2 0.4 0.6 0.8 1.0 Avoiding Breakage (1 \u2212\ue22e) 0.0 0.2 0.4 0.6 0.8 1.0 Blocking Ads ( \u0302 CA) w = 0.9 0 500 1000 1500 Number of Sites (a) AutoFR (All Rules) 0.0 0.2 0.4 0.6 0.8 1.0 Avoiding Breakage (1 \u2212\ue22e) 0.0 0.2 0.4 0.6 0.8 1.0 Blocking Ads ( \u0302 CA) w = 0.9 0 500 1000 1500 Number of Sites (b) AutoFR (Rules from \u22653 sites) 0.0 0.2 0.4 0.6 0.8 1.0 Avoiding Breakage (1 \u2212\ue22e) 0.0 0.2 0.4 0.6 0.8 1.0 Blocking Ads ( \u0302 CA) w = 0.9 0 500 1000 1500 Number of Sites (c) EasyList (In the Wild) Figure 7: Testing Filter Rules on New Sites (Top 5K\u201310K, In the Wild). We create two \ufb01lter lists, Fig. 7(a) with all rules from W09-Dataset and Fig. 7(b) that contains rules that were created for \u22653 sites. We test them in the wild on the Top\u20135K to 10K sites (new sites) and show their effectiveness along with EasyList (Fig. 7(c)). We observe that Fig. 7(b) performs better, blocking 8% more ads than Fig. 7(a). Fig. 19 explains how to read these plots. Table 3, col. 6\u20138, contains additional information. Sec. 5.1, Fig. 6, Top\u20135K Sec. 5.3.1 Sec. 5.3.3, Fig. 7, Top\u20135K to 10K AutoFR (Snapshots) (Jan. 2022) AutoFR (In the Wild) (Jan. 2022) AutoFR (*Con\ufb01rm) (In the Wild) EasyList (In the Wild) (Jan. 2022)) AutoFR (In the Wild) (July 2022) AutoFR (All rules) (In the Wild) AutoFR (\u22653 sites) (In the Wild) EasyList (In the Wild) Description (w=0.9) 1 2 3 4 5 6 7 8 1 Sites in operating point: b CA \u22650.95, 1\u2212B \u22650.95 62% 74% 85% 79% 72% 67% 73% 80% 2 Sites within w: b CA >0, 1\u2212B \u22650.9 77% 86% 85% 87% 82% 76% 80% 87% 3 Ads blocked within w: \u2211\u2113(CA\u00d7 b CA) / \u2211\u2113CA; 1\u2212B\u22650.9 70% 86% 84% 87% 78% 72% 80% 86% Table 3: Results. We provide additional results to Fig. 6 and 7, within their respective sections. We explain the meaning of each row: (1) the number of sites that are in the operating point (top-right corner of the \ufb01gures), where \ufb01lter rules were able to block the majority of ads with minimal breakage; (2) the number of sites that are within w; and (3) the fraction of ads that were blocked across all ads within w. *Con\ufb01rming via Visual Inspection (In the Wild) (Sec. 5.1): col. 3 is based on a binary evaluation. As it is not simple for a human to count the exact number of missing images and text, we evaluate each site based on whether the rules blocked all ads or not (i.e., b CA is either 0 or 1) and whether they caused breakage or not (i.e., B is either 0 or 1). For col. 5 (Sec. 5.3.1), we repeat the same experiment of col. 2 during July 2022 for a longitudinal study of AutoFR rules. 11 \feSLD FQDN With Path 0 20 40 60 80 Percent of Rules 73 13 12 62 9 27 AutoFR EasyList (a) Rule Types 220 279 78 AutoFR EasyList (b) Grouped by eSLD Figure 8: Comparing AutoFR Rules to EasyList. Some rules are common and some are unique to each approach. When comparing rules, one must consider the right granularity. sites to get a con\ufb01dence level of 95% with a 5% con\ufb01dence interval), and we visually inspected them. In particular, we looked for breakage not perfectly captured by automated evaluation. Table 3 col. 3 summarizes the results and con\ufb01rms our results obtained through the automated work\ufb02ow. We \ufb01nd that 3% (7/272) of sites had previously undetected breakage. For instance, the layout of four sites was broken (although all of the content was still visible), and one site\u2019s scroll functionality was broken. 
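The sample of 272 out of 933 sites matches the standard finite-population sample-size calculation for a 95% confidence level and a 5% confidence interval; the sketch below only illustrates where that number plausibly comes from and is not part of AutoFR.

```python
def sample_size(population: int, z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Finite-population sample size at 95% confidence and a +/-5% interval."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)      # infinite-population estimate
    return round(n0 / (1 + n0 / population))          # finite-population correction

print(sample_size(933))   # -> 272, matching the number of sites inspected above
```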
Note that this kind of functionality breakage is currently not considered by AutoFR. We observed two sites that intentionally caused breakage (the site loads the content, then goes blank) after detecting their ads were blocked. AutoFR\u2019s implementation currently does not handle this type of adblocking circumvention. Tuning AutoFR via Threshold w. AutoFR is the \ufb01rst approach that can be tuned per-site and explicitly allows to express a preference. The FL author that uses AutoFR must select the site to create rules for and express their preference by tuning a knob (threshold w) , which controls the tradeoff between blocking ads vs. avoiding breakage. Results are provided in App. D.1.5. 5.2 AutoFR vs. EasyList: Comparing Rules We compare the rules generated per-site by AutoFR and EasyList from Sec. 5.1. For a fair comparison, we only consider EasyList rules that are triggered when visiting sites. 5.2.1 Rule Type Granularity An important aspect to consider when comparing rules is the suitable granularity of the rules that block ads while limiting breakage. Fig. 8(a) breaks down the granularity of rules by AutoFR and EasyList. We note that both exhibit a similar distribution: eSLD rules are the most common, while the other rule types are less common. Across all granularities, there are 59 identical rules (e.g., ||pubwise.io\u02c6, ||adnuntius.com\u02c6, and ||deployads.com\u02c6) between AutoFR and EasyList, which represents 15% of EasyList rules. Next, we focus on rules that are related, i.e., they share a common eSLD but may differ in subdomain or path, to understand why AutoFR generates rules that are coarser or \ufb01nergrain than EasyList rules. In Fig. 8(b), we show that when we group rules by eSLD, there are 78 common eSLDs, 60 (77%) of which have at least one identical rule. For example, for mail.ru, both AutoFR and EasyList have ||ad.mail.ru\u02c6. For 26 eSLD groups, AutoFR and EasyList rules differ in granularity. First, 18 eSLDs have AutoFR rules that are coarser-grained than EasyList. For instance, AutoFR has ||cloud front.net\u02c6 but EasyList has 15 different rules based on FQDNs like ||d2na2p72vtqyok.cloud front.net\u02c6. CloudFront is a CDN that can serve resources for legitimate content, ads, and tracking. As AutoFR generates per-site rules, it can afford to be more coarse-grained because a particular site may only use CloudFront for ads and tracking. However, since EasyList rules that target CloudFront are not per-site, they are more \ufb01ner-grain to avoid breakage on other sites. Second, six eSLDs have AutoFR rules that are \ufb01ner-grain than EasyList. For instance, for moatads.com, AutoFR has ||z.moatads.com\u02c6 when EasyList has ||moatads.com\u02c6. Recall in Sec. 4.1 that AutoFR generates rules with a conservative approach when using site snapshots, and thus will consider \ufb01ner-grain rules for some cases to avoid breakage. Whereas FL authors manually verify rules for EasyList and will know that ||moatads.com\u02c6 is more appropriate. Lastly, four eSLDs share the same granularity but contain rules that are not identical. For example, for site pastemagazine.com, AutoFR has ||pastemagazine.com/common/ js/ads-gam-a9-ow.js, while EasyList has pastemagazine.com/common/js/ads-. Partial paths within EasyList may extend the life of a \ufb01lter rule over time for some sites. We further evaluate this in Sec. 5.3.1. AutoFR can extend to partial paths in the future. 
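Grouping AutoFR and EasyList rules by eSLD, as done for Fig. 8(b), can be sketched as follows; the use of tldextract and the helper names are illustrative assumptions.

```python
from collections import defaultdict
import tldextract   # assumption: any registered-domain parser would work

def esld_of_rule(rule: str) -> str:
    """Extract the eSLD from a URL-based rule such as '||ad.mail.ru^'."""
    host = rule.lstrip("|").split("^")[0].split("/")[0]
    return tldextract.extract(host).registered_domain

def group_by_esld(rules):
    groups = defaultdict(set)
    for r in rules:
        groups[esld_of_rule(r)].add(r)
    return groups

autofr = group_by_esld(["||ad.mail.ru^", "||cloudfront.net^"])
easylist = group_by_esld(["||ad.mail.ru^", "||d2na2p72vtqyok.cloudfront.net^"])
common_eslds = autofr.keys() & easylist.keys()
identical = {e for e in common_eslds if autofr[e] & easylist[e]}
print(common_eslds, identical)   # both eSLDs are common; only mail.ru has an identical rule
```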
5.2.2 Understanding Unique Rules We investigate why AutoFR generates rules that are not present in EasyList and vice versa. We found that when grouped by eSLD (Fig. 8(b)), unique rules are due to the design and implementation of our framework, as well as due to site dynamics. Methodology. To investigate each unique rule (either from AutoFR or EasyList), we apply the rule to its corresponding site snapshots (per-site) and extract the requests that were blocked. We manually investigate these requests as follows. For images, we visually decide whether it is an ad. For scripts, we use our domain knowledge and keywords (e.g., \u201cadvertising\u201d, \u201cbid\u201d) to examine the source code to discern whether they affect ads, tracking, functionality, or legitimate site content. When we cannot determine the nature of the request (e.g., due to obfuscated JS code), we fall back to applying the rule and evaluating its effectiveness via visual inspection, following the methodology in Sec. 5.1. Findings. Depicted in Fig. 8(b), the differences in rules when grouped by eSLDs are due to three main reasons. 1. AutoFR Framework: Our framework exhibits sev12 \feral strengths when generating rules. 48% (105/220) of the unique eSLDs for AutoFR have rules that are valid but seem challenging for a FL author to manually craft. Within this set, 19% (20/105) are \ufb01rst-party (e.g., ||kidshealth.org/.../inline_ad.html), 52% (55/105) block resources that involve both ads and tracking (e.g., ||snidigital.com\u02c6), 23% (24/105) block ad-related resources served by CDNs (e.g., ||cdn.fantasypros.com/realtime/media_trust.js), and 42% (44/105) block ad-related resources served through seemingly obfuscated URLs. We conclude that AutoFR can create rules that are not obviously ad-related (e.g., by looking at keywords in the URL) but are effective nonetheless. Next, we explain how certain design decisions behind AutoFR\u2019s framework can lead to missed EasyList rules. First, AutoFR focuses on rules that block at least some ads (due to Eq. (3a)), which is why AutoFR ignored 10% (28/279) of unique eSLDs from EasyList that are responsible for purely tracking requests. Second, we choose to generate rules that block ads across all 10 site snapshots of a site, not just one site snapshot, to be robust against site dynamics. In addition, we choose to stop exploring the hierarchical action space when we \ufb01nd a good rule following the intuition from Sec. 3.2.1, which improves the ef\ufb01ciency of AutoFR. Of course, these design decisions can be altered depending on the user\u2019s preference. When we do so, we \ufb01nd that the overlap in Fig. 8(b) goes from 22% (78/357) to 35% (124/357). For example, adtelligent.com and adscale.de are new common eSLDs found when we remove these design decisions. 2. AutoFR Implementation: Our implementation of Alg. 1 focuses on visual components (e.g., using Ad Highlighter to detect ads) and how \ufb01lter rules affect them. The rules generated are as good as the components that we utilize. First, AutoFR misses 28% (78/279) of unique eSLDS from EasyList because Ad Highlighter can only detect ads that contain transparency logos. However, AutoFR rules are still effective when compared to EasyList, as shown in Sec. 5.1 and Table 3. This demonstrates that we do not necessarily need to replicate all rules from EasyList to be effective. 
Second, 18% of unique eSLDs from AutoFR can affect both ads and functionality (e.g., cdn.ampproject.org/v0/amp-ad-0.1.js for ads, amp-accordion-0.1.js for functionality). AutoFR balances the trade-off between blocked ads and breakage, see Sec. 5.1. 3. Site Dynamics can also lead to differences in the site resources between site snapshots vs. the in the wild evaluation. Due to this, 18% (50/279) of unique eSLDs on the EasyList side did not appear in our W09-Dataset. Thus, AutoFR did not get an opportunity to generate these rules. Conversely, 5% (11/220) of unique eSLDs from AutoFR appear in EasyList but were not triggered during the evaluation of EasyList rules. This can be mitigated by increasing the number of site snapshots used in AutoFR\u2019s rule generation or applying EasyList more times during our in the wild evaluation. Although, recall that we already do these steps for 10 times. -10.0K 0 10.0K \u0394 Nodes 0.00 0.25 0.50 0.75 1.00 ECDF (Sites) -10.0K 0 10.0K \u0394 Edges -1.0K 0 1.0K \u0394 URL All Other Sites (94%) Sites to Rerun (6%) Figure 9: \u2206Site Snapshots between July vs. January 2022. The differences in site snapshots for nodes, edges, and URLs. A positive change in the x-axis denotes that July had more of the respective factor, while a zero denotes no change. Takeaways. The difference in the granularity of related rules generated by AutoFR and EasyList is mainly because AutoFR creates rules per-site. Unique rules to AutoFR or EasyList are due to the design and implementation of our framework and site dynamics. These differences are acceptable because the effectiveness of the rules from AutoFR and EasyList is comparable. This is crucial from a practical standpoint. 5.3 Robustness of AutoFR Filter Rules AutoFR generates rules for a particular site and uses snapshots collected at a particular time. Next, we investigate and discuss how well these rules perform over time, across different sites, and in adversarial scenarios. 5.3.1 How Long-lived are AutoFR Rules? Sites change naturally over time, which may result in changes in the site snapshots, and eventually into changes in the \ufb01lter rules. We show that AutoFR rules remain effective for a long time and can be rerun fast when needed to update. Ef\ufb01cacy of Rules Over Time. We re-apply per-site rules generated in January 2022 (Sec. 5.1) to the same sites in July 2022 and summarize the results in Table 3 (col. 5). We \ufb01nd that the majority of AutoFR rules are still effective after six months. 72% of sites (down only by 2%) still achieve the operating point (row 1), and 82% (down by 4%) achieve 1\u2212B \u22650.9 (row 2). Even more interestingly, we found only 6% of the sites now no longer have all or any ads blocked in July. For those few sites, which we refer to as \u201csites to rerun\u201d, we can rerun AutoFR; this takes 1.6 min-per-site on average. Site Snapshots Over Time. We recollect site snapshots for our entire W09-Dataset in July 2022 and associate them with the results of re-applying the rules above. For the 6% of sites that AutoFR needs to rerun, we report the changes in their corresponding snapshots. Fig. 9 reports the changes in snapshots of the same site between January and July in terms of different nodes, edges, and URLs. It also compares the differences for all sites, with those 6% sites to rerun AutoFR. For all other sites, 50% and 70% of sites have more than \u00b11K changes in nodes and edges, respectively; while 40% of sites have more than \u00b1100 changes in URL nodes. 
Compared to 13 \f\u22120.2 \u22120.1 0.0 \u0394 Jaccard Sim. (vs. July 15) July 15 July 19 July 23 July 27 July 31 August 4 August 8 August 12 August 16 August 20 August 24 August 28 September 1 September 5 September 9 September 13 0 10 20 Sites to Rerun (since July 15) Figure 10: Longitudinal Study Every Four Days. We conduct a \ufb01ner-grain longitudinal study of 100 sites over a two-month period. We \ufb01nd that over time, site snapshots will become less similar (i.e., negative \u2206Jaccard similarity), often denoting that rules may be less effective. FL authors can rerun AutoFR on these sites that change more frequently to output effective rules. sites to rerun, 75% of sites have more than \u00b11K changes in nodes and edges, while 65% of sites have more than \u00b1100 changes in URL nodes. As expected, the snapshots of the sites to rerun indeed change more than other sites. However, AutoFR\u2019s rules remain effective on the vast majority of sites whose snapshots do not signi\ufb01cantly change. Why do Rules become Ineffective? For the sites that need to be rerun, we conduct a comparative analysis of how rules change by rerunning AutoFR on those sites. We \ufb01nd that 23% of these sites have completely new rules than before, which is typically due to a change in ad-serving infrastructure on the site. 40% of the sites need some additional rules (some older rules still work), which is due to additional ad slots on the site. In addition, 9% of the sites have changes in their paths. Lastly, 29% of these sites have the same rules as before. We deduce that this is because the rules are the best we can do without pushing breakage beyond the acceptable threshold w. Takeaways. AutoFR rules need to be updated for a small fraction of sites (6% of Top\u20135K in six months), which demonstrates that AutoFR generates robust rules over time. AutoFR can be rerun for these sites at an average of 1.6 min-per-site. 5.3.2 How Frequently Should We Run AutoFR? Next, to understand how often FL authors should run AutoFR over time, we provide a \ufb01ner-grain longitudinal study of every four days for two months to study how site snapshots change and the sites that need AutoFR to be rerun. We choose every four days because this is how often EasyList is updated and deployed to end-users. In addition, we choose to focus on 100 sites, two-thirds of which are sampled from W09-Dataset and one-third is sampled from the set of 6% of sites that need to rerun in July (from Sec. 5.3.1). Fig. 10 illustrates our two-month results, using July 15, 2022, as our baseline. In this study, using Jaccard similarity, our comparison considers the relationship between HTML, JS, and CSS (different nodes within site snapshots). To do so, we retrieve the path from the root to every URL node for every site snapshot. We then convert these paths to strings and use them to calculate the Jaccard similarity between the site snapshots of July 15 to subsequent dates shown in the \ufb01gure. As expected, we arrive at the same conclusion as Sec. 5.3.1. As time passes, the similarity between site snapshots will naturally decrease, which denotes that there are sites where our rules are no longer effective, and we need to rerun AutoFR on them. For our 100 sites, we ran AutoFR on 13 sites only once (e.g., weheartit.com, legit.ng), three sites twice (e.g., buzzfeednews.com), and two sites three or more times (e.g., npr.org), within two months. 
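The Jaccard-similarity comparison described above (root-to-URL-node paths converted to strings) might be sketched as follows, assuming NetworkX snapshots with a known root and URL nodes marked by a \u201curl\u201d attribute; how node identifiers line up across snapshots is glossed over here.

```python
import networkx as nx

def url_path_strings(snapshot: nx.DiGraph, root) -> set:
    """One string per URL node: the node sequence on a path from the root."""
    paths = set()
    for node, data in snapshot.nodes(data=True):
        if data.get("url") is None:
            continue
        try:
            path = nx.shortest_path(snapshot, root, node)   # any root-to-node path
        except nx.NetworkXNoPath:
            continue
        paths.add(" > ".join(str(n) for n in path))
    return paths

def snapshot_jaccard(snap_a, root_a, snap_b, root_b) -> float:
    a, b = url_path_strings(snap_a, root_a), url_path_strings(snap_b, root_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

A negative change in this similarity relative to the July 15 baseline is what Fig. 10 plots, and a large drop flags a site as a candidate for rerunning AutoFR.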
In terms of the time between the reruns of AutoFR, we \ufb01nd that one site (e.g., charlotteobserver.com) varied between four to 10 days from August 12 to September 13. This was due to path changes that would evade our rules like ||charlotteobserver.com/.../0a086549941921c9ac8e.js. Similarly, one site (e.g., npr.org) varied from two weeks to one month. In addition, two sites had runs that were 1\u20132 weeks apart (e.g., AutoFR found additional rules for amarujala.com). Lastly, one site had runs that were one month apart (e.g., liputan6.com went from ||googlesyndication.com\u02c6 to a new rule, ||infeed.id\u02c6). By the end of this study, the similarity of site snapshots decreased by 10% (compared to site snapshots of July 15), and we ran AutoFR 27 times on 18 unique sites within two months. Takeaways. We \ufb01nd that each site will naturally change over time, causing site snapshots to be less similar. More changes often denote a higher possibility of rules being evaded. Overall, 18% of 100 sites needed a rerun of AutoFR. FL authors can periodically rerun AutoFR on sites that tend to change frequently in terms of weekly to monthly reruns. AutoFR minimizes the human effort for updating rules over time. 5.3.3 From Per-Site Rules To Global Filter Lists AutoFR generates URL-based \ufb01lter rules for a particular site. Similarly, EasyList supports per-site rules as well. It currently contains \u223c800 per-site rules. Although these rules are guaranteed to perform well on the sites that they have been designed for (as demonstrated in Sec. 5.1), it is not guaranteed that the same rules are as effective when applied to other sites, i.e., used as \u201cglobal\u201d rules. Collateral Damage. In Fig. 11, we report the potential collateral damage, de\ufb01ned as the sum of breakage (\u2211B), caused when AutoFR rules are treated as global rules. Rules are considered global when applied to sites other than the ones they have been created for. We observe that they tend to block tag managers (e.g., ||googletagmanager.com\u02c6, ||adobedtm.com\u02c6), CDNs or cloud storage services (e.g., ||cloud flare.com\u02c6, ||amazonaws.com\u02c6, ||rlcdn.com\u02c6), third14 \f10 1 Collateral Damage (\u2211\ue22e) ||googletagmanager.com^ ||rlcdn.com^ ||cookielaw.org^ ||amazonaws.com^ ||adobedtm.com^ ||cloudflare.com^ ||bing.com^ ||consensu.org^ ||jquery.com^ ||cloudflareinsights.com^ Filter Rules by AutoFR (Not in EasyList) 52 16 14 5 4 3 3 3 3 1 Figure 11: Collateral Damage of Global Rules. AutoFR rules are generated per-site and can potentially cause breakage when applied to other sites (i.e., treated as a global rule). We report the rules that are unique to AutoFR (i.e., not part of EasyList), ordered by decreasing total collateral damage (\u2211B) that they cause to site snapshots within Full-W09-Dataset. We can see that most of these rules (93%) cause negligible collateral damage (below 10 on the x-axis). Note that the possible max \u2211B of each rule is the size of the dataset. 
10 1 10 2 Number of Sites ||doubleclick.net^ ||googlesyndication.com^ ||googletagservices.com^ ||googletagmanager.com^ ||amazon-adsystem.com^ ||google-analytics.com^ ||pubmatic.com^ ||cloudfront.net^ ||fastly.net^ ||indexww.com^ ||rubiconproject.com^ ||assets.hearstapps.com^ ||mdpcdn.com^ ||adlightning.com^ ||adsafeprotected.com^ ||tiqcdn.com^ ||criteo.net^ ||htlbid.com^ ||cookielaw.org^ ||googleapis.com^ Filter Rules by AutoFR 618 437 200 81 75 30 27 20 18 15 14 13 13 12 11 11 11 10 10 10 AutoFR Match w/ EL Figure 12: Top\u201320 Filter Rules by AutoFR for Top\u20135K Sites. They include the main advertising and tracking services, such as Alphabet (doubleclick.net), Amazon (amazon-adsystem.com), and PubMatic (pubmatic.com). Thus, they are likely to generalize well. party libraries (e.g., ||jquery.com\u02c6), and cookie consent forms (e.g., ||cookiekaw.org\u02c6, ||consensu.org\u02c6). These rules target domains that can serve legitimate content and ads across different sites. Thus, adopting a per-site rule into a global rule is nontrivial because the rule may not block as many ads or may cause more breakage (i.e., collateral damage). It is not a problem distinct to AutoFR. Our discussions with EasyList authors con\ufb01rmed that new rules are created per-site. They become global rules when FL authors know that the same rules are effective for other sites. FL authors rely on feedback from users to know when global rules either are ineffective or cause collateral damage on unknown sites [6]. Towards Global Filter Lists. Although we cannot guarantee, in advance, how well per-site rules will perform on other sites, we can try heuristics and assess their performance. Intuitively, if the same \ufb01lter rule is generated by AutoFR across multiple sites, then it has a better chance of generalizing to new 0.8 0.9 1 2 3 4 5 6 7 8 9 10 Popularity Threshold 0.42 0.44 Average Blocking Ads ( \u0302 CA) Avoiding Breakage (1 \u2212\ue22e) Reward (\ue23e) Figure 13: Selecting Per-Site Rules into Global Filter Lists. After creating the per-site AutoFR rules for each site (with w = 0.9), we create 10 global \ufb01lter lists. \u201cPopularity 1\u201d means that a rule is selected into the global list if it was generated in at least one site; \u201cpopularity 10\u201d means that a rule is selected if it was generated for at least 10 sites. Once selected, the rules are now treated as global rules. We apply these global \ufb01lter lists on our Full-W09-Dataset site snapshots and plot the average blocking ads, avoiding breakage, and reward. sites. We denote this as the \u201cpopularity\u201d of a rule. Fig. 12 shows the Top\u201320 AutoFR most common rules across sites. They intuitively make sense as they belong to widely used advertising and tracking services. Therefore, we utilize these heuristics as criteria to select AutoFR rules to include in \ufb01lter lists. Once selected, we now treat them as global rules. As the popularity increases, the global \ufb01lter list contains fewer global rules, resulting in fewer blocked ads but less breakage. We show the results in Fig. 13. We analyze in detail two global \ufb01lter lists. First, \u201cpopularity 1\u201d treats all AutoFR per-site rules as global rules, which serves as a baseline for comparison. Second, \u201cpopularity 3\u201d denotes AutoFR rules that were generated from \u22653 sites. Fig. 13 reveals that this has the highest average reward. 
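Selecting per-site rules into a global list by popularity can be sketched as below; the site names and every rule other than ||doubleclick.net\u02c6 are hypothetical.

```python
from collections import Counter

def build_global_list(per_site_rules: dict, popularity: int) -> set:
    """per_site_rules maps site -> set of AutoFR rules generated for that site.
    A rule enters the global list if it was produced for >= `popularity` sites."""
    counts = Counter(rule for rules in per_site_rules.values() for rule in set(rules))
    return {rule for rule, n in counts.items() if n >= popularity}

per_site = {
    "siteA": {"||doubleclick.net^", "||example-ads.com^"},
    "siteB": {"||doubleclick.net^"},
    "siteC": {"||doubleclick.net^"},
}
print(build_global_list(per_site, popularity=3))   # {'||doubleclick.net^'}
```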
Note that selecting the popularity threshold based on the average reward implicitly considers collateral damage because it encompasses breakage (Eq. (3)). We apply these global \ufb01lter lists on the Tranco Top 5K\u201310K sites in the wild. Fig. 7 and Table 3 col. 5\u20136 show the results. As expected, we see that the global \ufb01lter list created from rules that appeared in \u22653 sites perform better than the list with all rules. Moreover, Fig. 7(b) compares relatively well against Fig. 7(c) (EasyList): 73% of sites are in the desired operating point (top-right corner), vs. 80% by EasyList (row 1, col. 7\u20138). Overall, the rules generated from the Top\u20135K sites were able to block 80% of ads on the Top 5K\u201310K sites. This shows good generalization of AutoFR rules across unseen sites, which agrees with Fig. 12. 5.3.4 Evading URL-based Filter Rules AutoFR generates URL-based \ufb01lter rules, which EasyList also supports. Well-known evasion techniques for URL-based \ufb01lter rules, such as randomizing URL components, affect both AutoFR rules and EasyList rules [40]. The strength of AutoFR is that new rules can be learned automatically and quickly (e.g., in 1.6 min-per-site on average) when old ones 15 \fare evaded. Publishers and advertisers can also try to speci\ufb01cally evade AutoFR [40,66]. For example, they can put ads outside of iframes, use different ad transparency logos, or split the logo into smaller images, preventing Ad Highlighter from detecting ads [66]. This impacts our reward calculations. Defense approaches include the following. At the component level, we can try to improve Ad Highlighter to handle new logos or look beyond iframes, replace Ad Highlighter with a better future visual perception tool, or pre-process the logos to remove adversarial perturbations [34]. At the system level, as an adversarial bandits problem, where the reward received from pulling an arm comes from an adversary [8]. 6" + }, + { + "url": "http://arxiv.org/abs/1908.08628v1", + "title": "Shadow Removal via Shadow Image Decomposition", + "abstract": "We propose a novel deep learning method for shadow removal. Inspired by\nphysical models of shadow formation, we use a linear illumination\ntransformation to model the shadow effects in the image that allows the shadow\nimage to be expressed as a combination of the shadow-free image, the shadow\nparameters, and a matte layer. We use two deep networks, namely SP-Net and\nM-Net, to predict the shadow parameters and the shadow matte respectively. This\nsystem allows us to remove the shadow effects on the images. We train and test\nour framework on the most challenging shadow removal dataset (ISTD). Compared\nto the state-of-the-art method, our model achieves a 40% error reduction in\nterms of root mean square error (RMSE) for the shadow area, reducing RMSE from\n13.3 to 7.9. Moreover, we create an augmented ISTD dataset based on an image\ndecomposition system by modifying the shadow parameters to generate new\nsynthetic shadow images. Training our model on this new augmented ISTD dataset\nfurther lowers the RMSE on the shadow area to 7.4.", + "authors": "Hieu Le, Dimitris Samaras", + "published": "2019-08-23", + "updated": "2019-08-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Shadows are cast whenever a light source is blocked by an object. Shadows often confound computer vision algorithms such as segmentation, tracking, or recognition. 
The appearance of shadow edges is hard to distinguish from edges due to material changes [27]. Dark albedo material regions can be easily misclassified as shadows [18]. Thus many methods have been proposed to identify and remove shadows from images. Early shadow removal work was based on physical shadow models [1]. A common approach is to formulate the shadow removal problem using an image formation model, in which the image is expressed in terms of material properties and a light source-occluder system that casts shadows. Hence, a shadow-free image can be obtained by estimating the parameters of the source-occluder system and then reversing the shadow effects on the image [10, 14, 13, 28]. Figure 1: Shadow Removal via Shadow Image Decomposition. A shadow-free image Ishadow-free can be expressed in terms of a shadow image Ishadow, a relit image Irelit and a shadow matte \u03b1. The relit image is a linear transformation of the shadow image. The two unknown factors of this system are the shadow parameters (w, b) and the shadow matte layer \u03b1. We use two deep networks to estimate these two unknown factors. These methods relight the shadows in a physically plausible manner. However, estimating the correct solution for such illumination models is non-trivial and requires considerable processing time or user assistance [39, 3]. On the other hand, recently published large-scale datasets [25, 34, 32] allow the use of deep learning methods for shadow removal. In these cases, a network is trained in an end-to-end fashion to map the input shadow image to a shadow-free image. The success of these approaches shows that deep networks can effectively learn transformations that relight shadowed pixels. However, the actual physical properties of shadows are ignored, and there is no guarantee that the networks would learn physically plausible transformations. Moreover, there are still well-known issues with images generated by deep networks: results tend to be blurry [15, 40] and/or contain artifacts [23]. How to improve the quality of generated images is an active research topic [16, 35]. In this work, we propose a novel method for shadow removal that takes advantage of both shadow illumination modelling and deep learning. Following early shadow removal works, we propose to use a simplified physical illumination model to define the mapping between shadow pixels and their shadow-free counterparts. Our proposed illumination model is a linear transformation consisting of a scaling factor and an additive constant per color channel for the whole umbra area of the shadow. These scaling factors and additive constants are the parameters of the model, see Fig. 1. The illumination model plays a key role in our method: with correct parameter estimates, we can use the model to remove shadows from images. We propose to train a deep network (SP-Net) to automatically estimate the parameters of the shadow model. Through training, SP-Net learns a mapping function from input shadow images to illumination model parameters. Furthermore, we use a shadow matting technique [3, 13, 39] to handle the penumbra area of the shadows. We incorporate our illumination model into an image decomposition formulation [24, 3], where the shadow-free image is expressed as a combination of the shadow image, the parameters of the shadow model, and a shadow density matte.
This image decomposition formulation allows us to reconstruct the shadow-free image, as illustrated in Fig. 1. The shadow parameters (w, b) represent the transformation from the shadowed pixels to the illuminated pixels. The shadow matte represents the per-pixel linear combination of the relit image and the shadow image, which results to the shadow-free image. Previous work often requires user assistance[12] or solving an optimization system [20] to obtain the shadow mattes. In contrast, we propose to train a second network (M-Net) to accurately predict shadow mattes in a fully automated manner. We train and test our proposed SP-Net and M-Net on the ISTD dataset [34], which is the largest and most challenging available dataset for shadow removal. SP-Net alone (no matting) outperforms the state-of-the-art [12] in shadow removal by 29% in terms of RMSE on shadow areas, from 13.3 to 9.5 RMSE. Our full system with both SP-Net and M-Net further improves the overall results by another 17%, which yields a RMSE of 7.9. Our proposed method can realistically modify the shadow effects in the images. First we estimate the shadow parameters and shadow matte from an image. We then add the shadows back into the shadow-free image with a set of modi\ufb01ed shadow parameters. As we change the parameters, the shadow effects change accordingly. In this manner, we can synthetize additional shadow images that serve as augmented training data. Training our system on ISTD plus our newly synthesized images further lowers the RMSE on the shadow areas by 6%, compared to our model trained on the original ISTD dataset. The main contributions of this work are: \u2022 We propose a new deep learning approach for shadow removal, grounded by a simpli\ufb01ed physical illumination model and an image decomposition formulation. \u2022 We propose a method for shadow image augmentation based on our simpli\ufb01ed physical illumination model and the image decomposition formulation. \u2022 Our proposed method achieves state-of-the-art shadow removal results on the ISTD dataset. The pre-trained model, shadow removal results, and more details can be found at: www3.cs.stonybrook. edu/\u02dccvl/projects/SID/index.html 2. Related Works Shadow Illumination Models: Early research on shadow removal is motivated by physical modelling of illumination and color [10, 9, 11, 6]. Barrow & Tenenbaum [1] de\ufb01ne an intrinsic image algorithm that separates images into the intrinsic components of re\ufb02ectance and shading. Guo et al. [13] simplify this model to represent the relationship between the shadow pixels and shadow-free pixels via a linear system. They estimate the unknown factors via pairing shadow and shadow-free regions. Similarly, Shor & Lischinki [28] propose an illumination model for shadows in which there is an af\ufb01ne relationship between the lit and shadow intensities at a pixel, including 4 unknown parameters. They de\ufb01ne two strips of pixels: one in the shadowed area and one in the lit area to estimate their parameters. Finlayson et al.[8] create an illuminant-invariant image for shadow detection and removal. Their work is based on an insight that the shadowed pixels differ from their lit pixels by a scaling factor. Vicente et al. [31, 33] propose a method for shadow removal where they suggest that the color of the lit region can be transferred to the shadowed region via histogram equalization. Shadow Matting: Matting, introduced by Porter & Duff [24], is an effective tool to handle soft shadows. 
However, it is non-trivial to compute the shadow matte from a single image. Chuang et al. [3] use image matting for shadow editing to transfer the shadows between different scenes. They compute the shadow matte from a sequence of frames in a video captured from a static camera. Guo et al. [13] and Zhang et al. [39] both use a shadow matte for their shadow removal frameworks, where they estimate the shadow matte via the closed-form solution of Levin et al. [20]. \fDeep-Learning Based Shadow Removal: Recently published large-scale datasets [32, 34, 25] enable training deep-learning networks for shadow removal. The Deshadow-Net of Qu et al. [25] is trained to remove shadows in an end-to-end manner. Their network extracts multicontext features across different layers of a deep network to predict a shadow matte. This shadow matte is different from ours as it contains both the density and color offset of the shadows. The ST-CGAN proposed by Wang et al. [34] for both shadow detection and removal is a conditional GAN-based framework [15] for shadow detection and removal. Their framework is trained to predict the shadow mask and shadow-free image in an uni\ufb01ed manner, they use GAN losses to improve performance. Inspired by early work, our framework outputs the shadow-free image based on a physically inspired shadow illumination model and a shadow matte. We, however, estimate the parameters of our model and the shadow matte via two deep networks in a fully automated manner. 3. Shadow and Image Decomposition Model 3.1. Shadow Illumination Model Let us begin by describing our shadow illumination model. We aim to \ufb01nd a mapping function T to transform a shadow pixel Ishadow x to its non-shadow counterpart: Ishadow-free x = T(Ishadow x , w) where w are the parameters of the model. The form of T has been studied in depth in previous work as discussed in Sec. 2. In this paper, similar to the model of Shor & Lischinski [28], we use a linear function to model the relationship between the lit and shadowed pixels. The intensity of a lit pixel is formulated as: Ishadow-free x (\u03bb) = Ld x(\u03bb)Rx(\u03bb) + La x(\u03bb)Rx(\u03bb) (1) where Ishadow-free x (\u03bb) is the intensity re\ufb02ected from point x in the scene at wavelength \u03bb, L and R are the illumination and re\ufb02ectance respectively, Ld is the direct illumination and La is the ambient illumination. To cast a shadow on point x, an occluder blocks the direct illumination and a portion of the ambient illumination that would otherwise arrive at x. The shadowed intensity at x is: Ishadow x (\u03bb) = ax(\u03bb)La x(\u03bb)Rx(\u03bb) (2) where ax(\u03bb) is the attenuation factor indicating the remaining fraction of the ambient illumination that arrives at point x at wavelength \u03bb. Note that Shor & Lischinski further assume that ax(\u03bb) is the same for all wavelengths \u03bb to simplify their model. This assumption implies that the environment light has the same color from all directions. From Eq.1 and 2, we can express the shadow-free pixel as a linear function of the shadowed pixel: Ishadow-free x (\u03bb) = Ld x(\u03bb)Rx(\u03bb) + ax(\u03bb)\u22121Ishadow x (\u03bb) (3) We assume that this linear relation is preserved throughout the color acquisition process of the camera [7]. 
Therefore, we can express the color intensity of the lit pixel x as a linear function of its shadowed value: I^shadow-free_x(k) = w_k \u00d7 I^shadow_x(k) + b_k (4), where I_x(k) represents the value of the pixel x on the image I in color channel k (k \u2208 R, G, B color channel), b_k is the response of the camera to direct illumination, and w_k is responsible for the attenuation factor of the ambient illumination at this pixel in this color channel. We model each color channel independently to account for possibly different spectral characteristics of the material in shadow as well as the sensor. We further assume that the two vectors w = [w_R, w_G, w_B] and b = [b_R, b_G, b_B] are constant across all pixels x in the umbra area of the shadow. Under this assumption, we can easily estimate the values of w and b given the shadow and shadow-free image using linear regression. We refer to (w, b) as the shadow parameters in the rest of the paper. In Sec. 4, we show that we can train a deep-network to estimate these vectors from a single image. 3.2. Shadow Image Decomposition System We plug our proposed shadow illumination model into the following well-known image decomposition system [3, 24, 30, 36]. The system models the shadow-free image using the shadow image, the shadow parameter, and the shadow matte. The shadow-free image can be expressed as: I^shadow-free = I^shadow \u00b7 \u03b1 + I^relit \u00b7 (1 \u2212 \u03b1) (5), where I^shadow and I^shadow-free are the shadow and shadow-free image respectively, \u03b1 is the matting layer, and I^relit is the relit image. We define \u03b1 and I^relit below. Each pixel i of the relit image I^relit is computed by: I^relit_i = w \u00b7 I^shadow_i + b (6), which is the shadow image transformed by the illumination model of Eq. 4. This transformation maps the shadowed pixels to their shadow-free values. The matting layer \u03b1 represents the per-pixel coefficients of the linear combination of the relit image and the input shadow image that results into the shadow-free image. Ideally, the value of \u03b1 should be 1 at the non-shadow area and 0 at the umbra of the shadow area. For the pixels in the penumbra of the shadow, the value of \u03b1 gradually changes near the shadow boundary. Figure 2: Shadow Removal Framework. The shadow parameter estimator network SP-Net takes as input the shadow image and the shadow mask to predict the shadow parameters (w, b). The relit image I^relit is then computed via Eq. 6 using the estimated parameters from SP-Net. The relit image, together with the input shadow image and the shadow mask are then input into the shadow matte prediction network M-Net to get the shadow matte layer \u03b1. The system outputs the shadow-free image via Eq. 5, using the shadow image, the relit image, and the shadow matte. SP-Net learns to predict the shadow parameters (w, b), denoted as the regression loss. M-Net learns to minimize the L1 distance between the output of the system and the shadow-free image (reconstruction loss). The value of \u03b1 at pixel i, based on the shadow image, shadow-free image, and relit image, follows from Eq. 5: \u03b1_i = (I^shadow-free_i \u2212 I^relit_i) / (I^shadow_i \u2212 I^relit_i) (7). We use the image decomposition of Eq. 5 for our shadow removal framework. The unknown factors are the shadow parameters (w, b) and the shadow matte \u03b1.
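A direct NumPy transcription of Eqs. 5-7 (illustrative only, not the authors' released code) could look like:

```python
import numpy as np

def relight(shadow_img: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Eq. 6: per-channel linear relighting, I_relit = w * I_shadow + b."""
    return shadow_img * w.reshape(1, 1, 3) + b.reshape(1, 1, 3)

def compose_shadow_free(shadow_img, relit_img, alpha):
    """Eq. 5: I_shadow_free = I_shadow * alpha + I_relit * (1 - alpha)."""
    alpha = alpha[..., None] if alpha.ndim == 2 else alpha
    return shadow_img * alpha + relit_img * (1.0 - alpha)

def matte_from_triplet(shadow_img, shadow_free_img, relit_img, eps=1e-6):
    """Eq. 7: alpha = (I_sf - I_relit) / (I_shadow - I_relit), per pixel.
    The small eps guards against division by zero outside the shadow."""
    return (shadow_free_img - relit_img) / (shadow_img - relit_img + eps)
```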
We present our method that uses two deep networks, SP-Net and M-Net, to predict these two factors in the following section. In Sec.5.3, we propose a simple method to modify the shadows for an image in order to augment the training data. 4. Shadow Removal Framework Fig. 2 summarizes our framework. The shadow parameter estimator network SP-Net takes as input the shadow image and the shadow mask to predict the shadow parameters (w, b). The relit image Irelit is then computed via Eq. 6 with the estimated parameters from SP-Net. The relit image, together with the input shadow image and the shadow mask is then input into the shadow matte prediction network M-Net to get the shadow matte \u03b1. The system outputs the shadow-free image via Eq. 5. 4.1. Shadow Parameter Estimator Network In order to recover the illuminated intensity at the shadowed pixel, we need to estimate the parameters of the linear model in Eq. 4. Previous work has proposed different methods to estimate the parameters of a shadow illumination model [28, 12, 13, 11, 8, 6]. In this paper, we train SPNet, a deep network model, to directly predict the shadow parameters from the input shadow image. To train SP-Net, we \ufb01rst generate training data. Given a training pair of a shadow image and a shadow-free image, we estimate the parameters of our linear illumination model using a least squares method [4]. For each shadow image, we \ufb01rst erode the shadow mask by 5 pixels in order to de\ufb01ne a region that does not contain the partially shadowed (penumbra) pixels. Mapping these shadow pixel values to the corresponding values in the shadow-free image, gives us a linear regression system, from which we calculate w and b. We compute parameters for each of the three RGB color channels and then combine the learned coef\ufb01cients to form a 6-element vector. This vector is used as the targeted output to train SP-Net. The input for SP-Net is the input shadow image and the associated shadow mask. We train SP-Net to minimize the L1 distance between the output of the network and these computed shadow parameters. We develop SP-Net by customizing a ResNeXt [37] model that is pre-trained on ImageNet [5]. Notice that while we use the ground truth shadow mask for training, during testing we estimate shadow masks using the shadow detection network proposed by Zhu et al.[41]. 4.2. Shadow Matte Prediction Network Our linear illumination model (Eq. 4) can relight the pixels in the umbra area (fully shadowed). The shadowed pixels in the penumbra (partially shadowed) region are more challenging as the illumination changes gradually across the shadow boundary [14]. A binary shadow mask cannot model this gradual change. Thus, using a binary mask within the decomposition model in Eq. 5 will generate an image with visible boundary artifacts. A solution for this is shadow matting where the soft shadow effects are expressed via the values of a blending layer. \fInput Relit Shad. Mask Using S.Mask Shad. Matte Using S.Matte Figure 3: A comparison of the ground truth shadow mask and our shadow matte. From the left to right: The input image, the relit image computed from the parameters estimated via SP-Net, the ground truth shadow mask, the \ufb01nal results when we use the shadow mask, the shadow matte computed using our M-Net, and the \ufb01nal shadow-free image when we use the shadow matte to combine the input and relit image. The matting layer handles the soft shadow and does not generate visible boundaries in the \ufb01nal result. 
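The training-data generation for SP-Net described in Sec. 4.1 (erode the shadow mask by 5 pixels, then fit a per-channel linear regression from shadow to shadow-free values) can be sketched as follows; the SciPy-based erosion and the function names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def fit_shadow_params(shadow_img, shadow_free_img, shadow_mask, erode_px=5):
    """Least-squares estimate of (w_k, b_k) per color channel over umbra pixels
    (Eq. 4). Images are HxWx3 float arrays; shadow_mask is a boolean HxW array."""
    umbra = binary_erosion(shadow_mask, iterations=erode_px)
    params = []
    for k in range(3):
        x = shadow_img[..., k][umbra]
        y = shadow_free_img[..., k][umbra]
        A = np.stack([x, np.ones_like(x)], axis=1)       # [I_shadow, 1]
        (w_k, b_k), *_ = np.linalg.lstsq(A, y, rcond=None)
        params.extend([w_k, b_k])
    return np.array(params)        # 6-element regression target for SP-Net
```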
(Please view in magni\ufb01cation on a digital device to see the difference more clearly.) In this paper, we train a deep network, M-Net, to predict this matting layer. In order to train M-Net, we use Eq. 5 to compute the output of our framework where the shadow matte is the output of M-Net. Then the loss function that drives the training of M-Net is the L1 distance between output image and ground truth training shadow-free image, marked as \u201creconstruction loss\u201d in Fig. 2. This is equivalent to computing the actual value of the shadow matte via Eq. 7 and then training M-Net to directly output this value. Fig. 3 illustrates the effectiveness of our shadow matting technique. We show in the \ufb01gure two shadow removal results which are computed using a ground-truth shadow mask and a shadow matte respectively. This shadow matte is computed by our model. One can see that using the binary shadow mask to form the shadow-free image creates visible boundary artifacts as it ignores the penumbra. The shadow matte from our model captures well the soft shadow and generates an image without shadow boundary artifacts. We design M-Net based on U-Net [26]. The M-Net inputs are the shadow image, the relit image, and the shadow mask. We use the shadow mask as input to M-Net since the matting layer can be considered as a relaxed shadow mask where each value represents the strength of the shadow effect at the location rather than just the shadow presence. 5. Experiments 5.1. Dataset and Evaluation Metric We train and evaluate on the ISTD dataset [34]. ISTD consists of image triplets: shadow image, shadow mask, and shadow-free image, captured from different scenes. The training split has 1870 image triplets from 135 scenes, whereas the testing split has 540 triplets from 45 scenes. We notice that the testing set of the ISTD dataset needs to be adjusted since the shadow images and the shadowfree images have inconsistent colors. This is a well known issue mentioned in the original paper [34]. The reason is that the shadow and shadow-free image pairs were captured Shad. Image Original GT Corrected GT Figure 4: An example of our color correction method. From left to right: input shadow image, provided shadowfree ground truth image (GT) from ISTD dataset, and the GT image corrected by our method. Comparing to the input shadow image on the non-shadow area only, the root-meansquare distance of the original GT is 12.9. This value on our corrected GT becomes 2.9. at different times of the day which resulted in slightly different environment lights for each image. For example, Fig. 4 shows a shadow and shadow-free image pair. The rootmean-square difference between these two images in the non-shadow area is 12.9. This color inconsistency appears frequently in the testing set of the ISTD dataset. On the whole testing set, the root-mean-square distance between the shadow images and shadow-free images in the nonshadow area is 6.83, as computed by Wang et al.[34]. In order to mitigate this color inconsistency, we use linear regression to transform the pixel values in the nonshadow area of each shadow-free image to map into their counterpart values in the shadow image. We use a linear regression for each color-channel, similar to our method for relighting the shadow pixels in Sec. 4.1. This simple transformation transfers the color tone and brightness of the shadow image to its shadow-free counterpart. The third column of Fig. 4 illustrates the effect of our colorcorrection method. 
Our proposed method reduces the rootmean-square distance between the shadow-free image and the shadow image from 12.9 to 2.9. The error reduction for the whole testing set of ISTD goes from 6.83 to 2.6. \f5.2. Shadow Removal Evaluation We evaluate our method on the adjusted testing set of the ISTD dataset. For metric evaluation we follow [34] and compute the RMSE in the LAB color space on the shadow area, non-shadow area, and the whole image, where all shadow removal results are re-sized into 256 \u00d7 256 to compare with the ground truth images at this size. Note that in contrast to other methods that only output shadow free images at that resolution, our shadow removal system works for input images of any size. Since our method requires shadow masks, we use the model proposed by Zhu et al.[41] pre-trained on the SBU dataset [32] for detecting shadows. We take the model provided by the author and \ufb01ne-tune it on the ISTD dataset for 3000 epochs. This model achieves 2.2 Balance Error Rate on the ISTD testing set. To remove the shadow effect in the image, we \ufb01rst use SP-Net to compute the shadow parameters (w, b) using the input image and the shadow mask computed from the shadow detection network. We use (w, b) to compute a relit image which is input to M-Net, together with the input image and the shadow mask to output a matte layer. We obtain the \ufb01nal shadow removal result via Eq. 5. In Table 1, we compare the performance of our method with the recent shadow removal methods of Guo et al.[13], Yang et al.[38], Gong et al.[12], and Wang et al.[34]. All numbers are computed on the adjusted testing images so that they are directly comparable. The \ufb01rst row shows the numbers for the input shadow images, i.e. no shadow removal performed. We \ufb01rst evaluate our shadow removal performance using only SP-Net, i.e. we use the binary shadow mask computed by the shadow detector to form the shadow-free image from the shadow image and the relit image. The binary shadow mask is obtained by simply thresholding the output of the shadow detector with a threshold of 0.95. As shown in column \u201cSP-Net\u201d (third from the right) in Fig. 8, SP-Net correctly estimates the shadow parameters to relight the shadow area. Even with visible shadow boundaries, SPNet alone outperforms the previous state-of-the-art, reducing the RMSE on the shadow area by 29%, from 13.3 to 9.5. We then evaluate the shadow removal results using both SP-Net and M-Net, denoted as \u201cSP+M-Net\u201d in Tab. 1 and Fig. 8. As shown in Fig. 8, the results of M-Net do not contain boundary artifacts. In the third row of Fig. 8, SP-Net overly relights the shadow area but the shadow matte computed from M-Net effectively corrects these errors. This is because M-Net is trained to blend the relit and shadow images to create the shadow-free image. Therefore, M-Net learns to output a smaller weight for a pixel that is overly lit by SP-Net. Using the matte layer of M-Net further reduces the RMSE on the shadow area by 17%, from 9.5 to 7.9. Overall, our method generates better results than other methods. Our method does a better job at estimating the Input Wang et al.[34] Ours GT Figure 5: Comparison of shadow removal between our method and ST-CGAN [34]. ST-CGAN tends to produce blurry images, random artifacts, and incorrect colors of the lit pixels while our method handles all cases well. overall illumination changes compared to the model of Gong et al., which tends to overly relight shadow pixels, as shown in Fig. 8. 
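The RMSE evaluation protocol described at the start of this section (LAB color space, images resized to 256 x 256, shadow and non-shadow regions evaluated separately) might be sketched as follows; this is an illustrative re-implementation, not the official evaluation script.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.transform import resize

def rmse_lab(result_rgb, gt_rgb, mask, size=(256, 256)):
    """RMSE in LAB space over masked pixels after resizing to 256x256.
    Inputs are RGB images (floats in [0, 1] or uint8) and a binary region mask."""
    result = rgb2lab(resize(result_rgb, size, anti_aliasing=True))
    gt = rgb2lab(resize(gt_rgb, size, anti_aliasing=True))
    m = resize(mask.astype(float), size) > 0.5
    diff = result[m] - gt[m]                     # N x 3 LAB differences
    return np.sqrt(np.mean(diff ** 2))
```

Passing the shadow mask, its complement, or an all-ones mask gives the shadow-area, non-shadow-area, and whole-image numbers reported in Table 1.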
Our method does not show color inconsistencies within the relit area, contrary to all other methods. Fig. 5 qualitatively compares our method and ST-CGAN, illustrating common issues present in images generated by deep networks [15, 40]. ST-CGAN generally generates blurry images and introduces random artifacts. Our method, albeit not perfect, handles all cases well. Our method fails to recover the shadow-free pixels properly in the cases shown in Fig. 6. The first row shows how our method overly relights the shadowed area, while in the second row the color of the lit area is incorrect. Finally, we trained and evaluated two alternative designs that do not require shadow masks as input: (1) The first is an end-to-end shadow-removal system where we jointly train a shadow detector together with our proposed SP-Net and M-Net. This framework is harder to train due to the increase in the number of network parameters. (2) The second is a version of our framework that does not input the shadow masks into either SP-Net or M-Net. Hence, SP-Net and M-Net need to learn to localize the shadow areas implicitly. As can be seen in the two bottom rows of Tab. 1, both designs achieved slightly worse shadow removal results than our main setting.
Table 1: Shadow removal results of our networks compared to state-of-the-art shadow removal methods on the adjusted ground truth. (*) The method of Gong et al. [12] is an interactive method that defines the shadow/non-shadow regions via user inputs, thus generating minimal error on the non-shadow area. The metric is RMSE (the lower, the better). Best results are in bold.
Methods | Shadow | Non-Shadow | All
Input Image | 40.2 | 2.6 | 8.5
Yang et al. [38] | 24.7 | 14.4 | 16.0
Guo et al. [13] | 22.0 | 3.1 | 6.1
Wang et al. [34] | 13.4 | 7.7 | 8.7
Gong et al. [12] | 13.3 | 2.6* | 4.2
SP-Net (Ours) | 9.5 | 3.2 | 4.1
SP+M-Net (Ours) | 7.9 | 3.1 | 3.9
Our Method with Alternative Settings:
With a Shad. Detector | 8.4 | 5.0 | 5.5
No Input Shadow Mask | 8.3 | 4.9 | 5.4
Figure 6: Failure cases of our method. In the first row, our method overly lights up the shadow area. In the second row, our method generates incorrect colors.
5.3. Dataset Augmentation via Shadow Editing
Many deep learning works focus on learning from more easily obtainable, weakly-supervised, or synthetic data [2, 19, 21, 22, 29, 18, 17]. In this section, we show that we can modify shadow effects using our proposed illumination model to generate additional training data.
Figure 7: Shadow editing via our decomposition model. We use Eq. 8 to generate synthetic shadow images; as we change the shadow parameters, the shadow effects change accordingly. We show two example images from the ISTD training set, where the middle column contains the original images and the first and last columns are synthetic (generated with $w_{syn} = w \times 0.8$ and $w_{syn} = w \times 1.7$, respectively).
Table 2: Shadow removal results of our networks trained on the augmented ISTD dataset. The metric is RMSE (the lower, the better). Training our framework on the augmented ISTD dataset drops the RMSE on the shadow area from 7.9 to 7.4.
Methods | Train. Set | Shad. | Non-Shad. | All
SP-Net | Aug. ISTD | 9.0 | 3.2 | 4.1
SP+M-Net | Aug. ISTD | 7.4 | 3.1 | 3.8
Given a shadow matte $\alpha$, a shadow-free image, and parameters $(w, b)$, we can form a shadow image by:
$I_{\text{shadow}} = I_{\text{shadow-free}} \cdot \alpha + I_{\text{darkened}} \cdot (1 - \alpha)$  (8)
where $I_{\text{darkened}}$ has undergone the shadow effect associated with the set of shadow parameters $(w, b)$. Each pixel $i$ of $I_{\text{darkened}}$ is computed by:
$I_{\text{darkened}}^{i} = (I_{\text{shadow-free}}^{i} - b) \cdot w^{-1}$  (9)
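As a concrete illustration of Eqs. 8 and 9, the shadow synthesis can be sketched as follows. This is not the authors' code; it assumes per-channel shadow parameters w and b of shape (3,), an alpha matte in [0, 1] with 1 meaning fully lit, and image values in [0, 255].

```python
import numpy as np

def synthesize_shadow(shadow_free, alpha, w, b, k=1.0):
    """Render a synthetic shadow image from a shadow-free image (Eq. 8).

    shadow_free : float array (H, W, 3), values in [0, 255]
    alpha       : shadow matte, (H, W, 1) or (H, W, 3), 1 = no shadow
    w, b        : per-channel shadow parameters, shape (3,)
    k           : scaling applied to w to strengthen or weaken the shadow
    """
    w_syn = np.asarray(w, dtype=np.float64) * k
    b = np.asarray(b, dtype=np.float64)
    # Eq. 9: darkened version of every pixel under the (scaled) shadow model.
    darkened = (shadow_free - b) / w_syn
    # Eq. 8: blend lit and darkened images with the shadow matte.
    shadow_img = shadow_free * alpha + darkened * (1.0 - alpha)
    return np.clip(shadow_img, 0, 255)

# Hypothetical augmentation loop over the scaling factors reported in the text:
# for k in (0.8, 0.9, 1.1, 1.2):
#     aug = synthesize_shadow(shadow_free, alpha, w, b, k)
```

The commented loop shows how the scaling factors k described in the next paragraph would be applied to generate lighter or heavier synthetic shadows.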
For each training image, we first compute the shadow parameters and the matte layer via Eqs. 4 and 7. Then, we generate a new synthetic shadow image via Eq. 8 with a scaling factor $w_{syn} = w \times k$. As seen in Fig. 7, a lower $w$ leads to an image with a lighter shadow area, while a higher $w$ increases the shadow effect instead. Using this method, we augment the ISTD training set by simply choosing $k = [0.8, 0.9, 1.1, 1.2]$ to generate a new set of 5320 images, which is four times bigger than the original training set. We augment the original ISTD dataset with this dataset. Training our model on this new augmented ISTD dataset improves our results, as the RMSE drops by 6%, from 7.9 to 7.4, as reported in Tab. 2.
Figure 8: Comparison of shadow removal on the ISTD dataset. Qualitative comparison between our method and previous state-of-the-art methods: Guo et al. [13], Yang et al. [38], Gong et al. [12], and Wang et al. [34]. \"SP-Net\" shows the shadow removal results using the parameters computed from SP-Net and a binary shadow mask. \"SP+M-Net\" shows the shadow removal results using the parameters computed from SP-Net and the shadow matte computed from M-Net. 6." + }, + { + "url": "http://arxiv.org/abs/1905.03313v2", + "title": "Weakly Labeling the Antarctic: The Penguin Colony Case", + "abstract": "Antarctic penguins are important ecological indicators -- especially in the\nface of climate change. In this work, we present a deep learning based model\nfor semantic segmentation of Ad\\'elie penguin colonies in high-resolution\nsatellite imagery. To train our segmentation models, we take advantage of the\nPenguin Colony Dataset: a unique dataset with 2044 georeferenced cropped images\nfrom 193 Ad\\'elie penguin colonies in Antarctica. In the face of a scarcity of\npixel-level annotation masks, we propose a weakly-supervised framework to\neffectively learn a segmentation model from weak labels. We use a\nclassification network to filter out data unsuitable for the segmentation\nnetwork. This segmentation network is trained with a specific loss function,\nbased on the average activation, to effectively learn from the data with the\nweakly-annotated labels. Our experiments show that adding weakly-annotated\ntraining examples significantly improves segmentation performance, increasing\nthe mean Intersection-over-Union from 42.3 to 60.0% on the Penguin Colony\nDataset.", + "authors": "Hieu Le, Bento Gon\u00e7alves, Dimitris Samaras, Heather Lynch", + "published": "2019-05-08", + "updated": "2019-05-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The vast and growing catalogs of high-resolution earth observation imagery present us with unprecedented opportunities for understanding ecological and geological processes, but time and domain expertise are in high demand and building a training dataset sufficient for deep learning is often not feasible. Fortunately, many earth observation applications benefit from dynamics that are slow relative to the repeat frequency of the available imagery. As a result, each image is similar to previous imagery, and prior information in the form of lower resolution or auxiliary information can be used to greatly improve classification accuracy.
The use of prior knowledge naturally extends to the classification of imagery time series, which in aggregate can be used to understand the dynamics of landscape change.
Figure 1. Penguin colony guano mask extraction on high-resolution satellite imagery. In the upper row, we show high-quality crops for three consecutive years of the Cape Crozier Adélie penguin colony during the breeding season. Brown to red shapes at the center of crops show the guano stains associated with the breeding colony that, when converted to area, can be used to approximate breeding population size. Boxes in the lower row show unsuitable images cropped at the same location, suffering from occlusion by clouds, heavy snow, and gross orthorectification artifacts. The goal of this work is to use weakly-annotated data, in the form of percent guano coverage, to generate better segmentation masks. Imagery copyright DigitalGlobe, Inc. 2019.
High-resolution satellite imagery is becoming an efficient means to survey inaccessible, or perilous, regions remotely (e.g., [3, 19]). However, real-world applications for semantic segmentation often lack pixel-wise annotations because generating them is so time consuming. In this work we present a framework for weakly-supervised segmentation on georeferenced datasets. Our approach circumvents data acquisition limitations by using pixel-wise mismatched masks, unsuitable by themselves as segmentation groundtruth, to improve segmentation results. Such derived quantities are more robust to geolocation errors than pixel-wise segmentation masks and are easily acquired by a spatial query for available satellite imagery. We demonstrate our approach by segmenting penguin colonies visible in high-resolution satellite imagery, but our method is more broadly applicable to high-resolution segmentation problems common in satellite image analysis. Antarctic penguins, being sensitive to climate-change driven shifts in the environment [2, 8] and amenable to satellite-based surveys [12], are ideal ecological indicators for the Southern Ocean ecosystem. Brush-tailed penguins, a group of three species in the genus Pygoscelis, nest together in large colonies on snow-free rock outcrops along the Antarctic coastline. During the austral summer penguin breeding season, colonies create large-scale guano stains that are visible from space [19] and with an area proportional to breeding population size [12]. In addition to snapshots of population size, we can take advantage of the site-fidelity of penguins to extract time series of population change by repeatedly surveying colonies via satellite. Time series of guano stain shape and areal extent are invaluable to furthering our understanding of penguin population dynamics, especially relevant in the face of climate change [6]. Previous satellite-based brush-tailed penguin surveys relied on manual annotation from domain experts [12, 19] or, when automated, suffered from poor transferability between images [30]. Despite their limitations, satellite-based surveys have been used successfully in a variety of contexts, from finding new penguin super-colonies [4] to facilitating the first global population database for Adélie penguins (Pygoscelis adeliae) [18]. Manually annotating a single penguin colony, however, takes at least 30 minutes and often significantly longer.
Such a laborious process, coupled with the large number of images needed to attain suf\ufb01ciently temporal depth for time-series analyses at the pan-Antarctic scale, creates an urgent need for robust, automated approaches. To amass data to train an automated guano extraction tool, we use a small number of hand-drawn geolocated guano polygons [9, 18] as guides to query our highresolution imagery for images containing penguin colonies. The two issues with this type of weakly-labeled data are: 1) This data-extraction routine is error-prone, potentially generating training images where the corresponding guano stain is not visible, and 2) Re-purposed segmentation masks are imprecise since penguin colonies keep evolving overtime and the images of the same colony are not correctly registered at the pixel level (Fig. 1). In this paper, we propose a semi-supervised learning framework and a speci\ufb01c loss function to train a segmentation network from a small set of human-annotated images and the weakly-labeled data. We \ufb01rst train a classi\ufb01er network, C-Net, to verify if an image contains visible guano stains. After being trained, the C-Net can be used to \ufb01lter out unsuitable training images from the weaklylabeled training set, removing images that were covered in snow, shadows, clouds, or simply were not captured during the penguin breeding season. We demonstrate that, even with a small set of human-annotated images, C-Net successfully weeds out a large proportion of potentially misleading weakly-labeled training images. We employ a CNet-\ufb01ltered weakly-labeled training set, combined with our small set of human-annotated images, to train a segmentation network, S-Net, for penguin guano segmentation, using a speci\ufb01c loss function: For hand-labeled images, S-Net is trained to predict pixel-wise guano masks, whereas for weakly-labeled images, S-Net is trained to match percent cover. Our framework with C-Net and S-Net addresses two challenges: 1) how to determine if an image whose coordinates overlap with a penguin colony contains visible guano; and 2) how to use misaligned masks to train segmentation models. The main contributions of this paper are three-fold: 1) We propose a data acquisition scheme for georeferenced images, showcased with the Penguin Colony Dataset. From a few hand-drawn segmentation masks, our scheme generates thousands of weakly-labelled training images. 2) We propose a weakly-supervised framework using a speci\ufb01c loss function to learn segmentation masks from weak labels, in the form of percent cover, and a classi\ufb01er network to \ufb01lter out bad training examples. This approach is easily extensible to other georeferenced datasets for segmentation. 3) We present test results showing that predictions from our framework are superior to pure semantic segmentation approaches and two other baselines across a range of settings. 2. Related Works The expert annotation labor required to produce segmentation masks hinders the feasibility of fully-supervised deep learning methods. Hence, many deep learning based segmentation work focus on learning from more easily obtainable, weakly-supervised, or synthetic data [5, 15, 16, 17, 28, 32, 26, 14]. A typical example of weak supervision is applying bounding boxes to learn segmentation masks [7, 22]. Some methods can improve segmentation results by learning from as little as a few strokes [27, 29] or points [21, 25]. 
Malkin et al. [20] propose the adoption of statistical descriptors, in the form of the means and variances of low-level annotation masks, to train segmentation networks for high-resolution imagery.
3. Penguin Colony Dataset
We present the Penguin Colony Dataset, a dataset for penguin guano segmentation on high-resolution satellite imagery. Our dataset includes a set of 31 hand-labelled guano masks from 24 Adélie penguin colonies. We also provide full metadata for images cropped from high-resolution imagery. These images include penguin colony crops from four different high-resolution optical satellites: GeoEye-1, QuickBird-2, Worldview-2 and Worldview-3. Depending on sensor, resolution for our images ranges from 2.4 m/pixel (QuickBird-2) to 1.2 m/pixel (Worldview-3), the highest available on current commercial imagery. Adding our penguin colony polygons to medium-resolution Landsat-based masks from [18], we store the locations of 193 Adélie penguin colonies. With colony polygons in hand, we query an archive of 99653 high-resolution satellite images from the Antarctic coastline for images that encircle penguin colony shapes. We then crop each image to the smallest bounding box for each penguin colony, adding 100 pixels of padding on each side. For each cropped image generated this way, we calculate a Shannon entropy index, discarding crops that score 5 or lower. Following this automated data acquisition routine, we cropped 2044 images at locations shown in Fig. 2(a), heretofore referenced as the \"weakly-annotated dataset\". These 2044 images can be grouped into a video segmentation dataset [10, 23, 13, 33] consisting of image time series for each of the 193 penguin colonies in our dataset. We then split the images from our 31 high-resolution masks into 18 training and 13 testing images. Similar to the weakly-annotated dataset, cropped images vary in size depending on the extent of the colony and the sensor resolution. Crops from high-resolution images for which we created segmentation masks are heretofore referenced as the \"manually-labeled dataset\". In summary, we provide a dataset containing shapefiles for guano polygon masks and colony bounding boxes, and cropped images for manually-labelled and weakly-annotated penguin colonies. The weakly-annotated component of our dataset is easily expanded as our imagery archive grows. Though aircraft-based aerial imagery for related problems does exist (e.g. [1]), to the best of our knowledge, this is the first public dataset involving animal population estimation from high-resolution satellite imagery. More details can be found at: github.com/lynchlab/CVPR19-CV4GC-WeaklyLabeling
4. Weakly-Supervised Learning for Penguin Colony Segmentation
As discussed in Section 3, only 0.8% (18 images) of our training data is hand-labelled, and there are 2044 penguin colony images with misaligned segmentation masks. There are two main challenges in this scenario: The first is that the image-level labels are unavoidably noisy. The images, although captured at the locations of known penguin colonies, might not contain visible penguin guano. The images could be covered in snow, shadows, or clouds, or were not captured in the breeding season when the penguin guano is visible. The second issue is that the pixel-level annotations are misaligned with the actual image contents due to georegistration errors or orthorectification artifacts.
Figure 2. (a) Cropped image locations. Each square represents a colony bounding box (see inset) for which we found matching satellite imagery.
To find matches, we query an archive of 99653 high-resolution imagery images obtained from 2002 to 2017. Our dataset harbors a total of 2044 satellite images, covering the vast majority of existing Adélie penguin colonies. (b) Two examples of hand-labelled guano masks (red overlay at right) on high-resolution imagery (left). Imagery copyright DigitalGlobe, Inc. 2019.
We propose a method to learn from these weakly-labeled data for image segmentation. We use two networks, C-Net and S-Net, to maximize learning from the hand-labelled data. C-Net is a classification network that learns to predict the image labels, e.g., whether an image contains any penguin guano, and S-Net is a segmentation network that learns to segment the penguin guano areas. The main purpose of the C-Net is to filter out bad training examples from the weakly-annotated training set to better train the S-Net. Our framework is summarized in Fig. 3.
Figure 3. Weakly-Supervised Learning Framework for Penguin Colony Segmentation. Data with hand-labelled annotations are used to train both the C-Net and S-Net: the C-Net learns to predict the image label (whether there is any penguin guano in the image) while the S-Net learns to segment the penguin guano areas. Once the C-Net is trained, it filters out images weakly-marked as containing guano but without visible guano due to snow, shadows, cloud, or poor timing relative to the breeding season. The S-Net learns from the weakly-annotated images to output segmentation masks such that the mean activation of pixels in the predicted masks approximates that of the weakly-annotated masks. Imagery copyright DigitalGlobe, Inc. 2019.
The C-Net first learns to classify images from the hand-annotated data. The training label for each image is binary: 0 implies the image is without any guano and 1 otherwise. Once the C-Net has been trained, we use it to assist the training of the S-Net. We train the S-Net on both the hand-labelled and the weakly-annotated data. For the hand-labelled data, the S-Net is trained to predict the segmentation mask from the input image. For each weakly-annotated image, we want to mitigate the risk of using bad training examples. Hence, we use the C-Net to classify all images that are weakly-labelled as containing guano and then remove all images that are classified as \"no guano\" from the training pool for the S-Net. As minor georegistration errors or orthorectification artifacts create mismatches between annotation masks and input images, we do not use generated crops as ground-truth segmentation masks for S-Net. Instead, we train S-Net to recover the mean pixel values of the weakly-annotated masks. Such a metric works as a proxy for fractional guano coverage in the images, which is more robust to imperfect georegistration. We essentially enforce an image-level statistic matching between predicted masks and weakly-annotated masks instead of minimizing pixel-wise differences. Let I denote an input image, and M(I) be the guano mask of I. Let S(I) denote the output of the S-Net for the input image I. Ideally, the output should be 1 for guano pixels and 0 otherwise.
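The training signal just described, pixel-wise supervision for hand-labelled images and mean-activation matching for weakly-annotated ones, is formalized in the objective given in the next paragraph. The following is a minimal PyTorch rendering of it (not the released training code); it assumes batched (B, 1, H, W) masks, a per-sample hand-labelled flag, and the default weights (1, 5) reported below.

```python
import torch

def snet_loss(pred, gt_mask, is_hand, lambda_seg=1.0, lambda_reg=5.0):
    """Combined S-Net objective: per-sample L2 distance between predicted and
    annotated masks for hand-labelled images, and L1 distance between their
    mean activations for weakly-annotated images.

    pred, gt_mask : (B, 1, H, W) tensors (pred is the predicted guano mask)
    is_hand       : (B,) tensor, 1 for hand-labelled, 0 for weakly-annotated
    """
    is_hand = is_hand.float()
    # ||S(I) - M(I)||_2 per sample (hand-labelled term).
    seg_term = (pred - gt_mask).flatten(1).norm(p=2, dim=1)
    # |mean(S(I)) - mean(M(I))| per sample (weakly-annotated term).
    reg_term = (pred.flatten(1).mean(dim=1) - gt_mask.flatten(1).mean(dim=1)).abs()
    per_sample = lambda_seg * is_hand * seg_term + lambda_reg * (1.0 - is_hand) * reg_term
    return per_sample.mean()
```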
The objective of S-Net's training is to minimize a weighted combination of two losses:
$L_S(I) = \lambda_{\text{seg}} H(I) \lVert S(I) - M(I) \rVert_2 + \lambda_{\text{reg}} (1 - H(I)) \lVert \mathrm{mean}(S(I)) - \mathrm{mean}(M(I)) \rVert_1$
where the value of $H(I)$ is 1 if $I$ is a hand-labelled image and 0 if $I$ is a weakly annotated image. $\lambda_{\text{seg}}$ and $\lambda_{\text{reg}}$ control how much the S-Net should learn from the two losses respectively. We empirically set $(\lambda_{\text{seg}}, \lambda_{\text{reg}})$ to $(1, 5)$.
5. Experiments
We evaluate the performance of the C-Net and S-Net on the testing set of the Penguin Colony Dataset. The testing set contains 13 images of various sizes. To evaluate the performance of the C-Net, we crop the testing images into patches of size 256 × 256 with a step size of 64. The label for each image patch is obtained from the corresponding cropped mask. To train the S-Net, we crop each training image to small patches of size 384 × 384 with a step size of 192 to reduce I/O bottleneck issues arising due to the large sizes of images. From the original training set, we obtain 6055 hand-labelled and 100584 weakly-annotated training patches. To evaluate the performance of the S-Net, we compare the output of the S-Net to the hand-annotated guano masks of the Penguin Colony testing set. We design the C-Net based on Resnet-18 [31] and the S-Net based on U-Net [24]. We use stochastic gradient descent with the Adam solver [11] to train our models.
Table 1. C-Net classification results on the Penguin Colony Dataset. Confusion matrices summarizing the results of C-Net on the image patches cropped from the Penguin Colony Dataset testing set (a) and on image patches cropped from the weakly-annotated set (b; W.A. Set). \"Guano\" patches contain penguin guano areas and \"No Guano\" patches contain only background. \"True Label\" means the patches are manually annotated. \"Weak Label\" means the labels are obtained via the possibly mismatched masks. An image patch is classified as positive if C-Net outputs a positive score and negative otherwise.
(a) Testing Set (columns: True Label)
Pred. | Guano | No Guano
Guano | 858 | 469
No Guano | 265 | 13968
(b) W.A. Set (columns: Weak Label)
Pred. | Guano | No Guano
Guano | 9597 | 2070
No Guano | 19446 | 69471
5.1. Penguin Guano Classification
We first analyze the classification performance of the C-Net. We evaluate the C-Net on the testing set consisting of 15560 patches of size 256 × 256, which are cropped from 13 testing images with a step size of 64. Table 1(a) reports the confusion matrix summarizing the result of the C-Net on the testing image patches. The C-Net achieves a 0.65 precision and 0.76 recall on this set. For the weakly-annotated image patches shown in Table 1(b), C-Net classifies 19446 patches weakly-labelled as \"Guano\" to be non-guano. These patches then are not used for training the S-Net. We show the performances of S-Net trained with and without these removed training patches in Section 5.2.
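The screening pass described above, in which patches weakly-labelled as guano but scored as \"No Guano\" by C-Net are dropped from the S-Net training pool, could look roughly like this. The loader interface and the single-logit output are assumptions for illustration; the positive-score convention follows the Table 1 caption.

```python
import torch

@torch.no_grad()
def filter_weak_patches(c_net, weak_loader, device="cuda"):
    """Keep only weakly-labelled patches that C-Net scores as containing guano.

    Assumes c_net maps a (B, 3, 256, 256) batch to a (B, 1) logit where a
    positive score means "guano", and that weak_loader yields (patches, ids).
    """
    c_net.eval().to(device)
    kept_ids = []
    for patches, ids in weak_loader:
        scores = c_net(patches.to(device)).squeeze(1)
        for pid, score in zip(ids, scores):
            if score.item() > 0:          # positive score -> visible guano
                kept_ids.append(pid)
    return kept_ids                        # train S-Net only on these patches
```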
Figure 4. The Effect of C-Net: Segmentation results of the S-Net trained on the whole dataset and on the dataset filtered by C-Net. Input image (left column) covered by snow (top row) and fog (second row). Because the C-Net filters out noisy training examples for the S-Net, S-Net does not predict any guano pixels in either the snow- or fog-corrupted images. Imagery copyright DigitalGlobe, Inc. 2019.
Table 2. Segmentation results of our models on the Penguin Colony Dataset. We train our S-Net on different sets of training data and loss functions. \"H\" denotes hand-labelled data and \"W\" denotes weakly-annotated data. For \"Seg. (H)\", we compute the segmentation loss on the hand-labelled data to train the network. For \"Reg. (W)\", we compute the regression loss on the weakly-annotated data. \"S.\" only uses the segmentation loss while \"SR.\" uses the weighted combination of the segmentation and regression losses. \"+\" uses weakly-annotated data, and \"C+S-Net\" is our S-Net trained on the C-Net filtered data.
Method | Data | Loss Function | mIoU (%)
S-Net S. | H | Seg.(H) | 42.3
S-Net S. + | H+W | Seg.(H) + Seg.(W) | 37.7
S-Net SR. + | H+W | Seg.(H) + Reg.(W) | 55.0
C+S-Net SR. + | H+W | Seg.(H) + Reg.(W) | 60.0
5.2. Penguin Guano Segmentation
We evaluate our penguin colony segmentation network on the Penguin Colony Dataset. To obtain the output segmentation mask for an image, we first crop the image into patches of size 256 × 256 with a step size of 128. Each patch is input into the network to obtain a patch prediction mask. We obtain the final prediction mask for the input image by averaging all overlapped patch predictions at each pixel. We use mean Intersection-over-Union (mIoU) to evaluate the segmentation masks. Table 2 summarizes the results of our model. We compare our method with three baselines. All methods use as the backbone segmentation network a network with the same architecture as the S-Net. The first row shows the results of the S-Net trained on only the hand-labelled data, using the segmentation loss, denoted as \"S-Net S.\". This model achieves 42.3 mIoU on the testing set of the Penguin Colony Dataset. A straightforward use of the weakly-annotated data is to train a segmentation network to output the segmentation masks regardless of the mis-georegistration. This network is trained using the same segmentation loss function, but with more data, denoted as \"S-Net S. +\" in the second row. This model does not take into account the mis-georegistration issue of the guano polygons. Unsurprisingly, segmentation performance decreases from 42.3 to 37.7 mIoU since the model is guided to output the guano pixels at the mismatched locations. The third row shows the effect of our mean activation regression loss to learn from the misaligned guano masks. We train an S-Net on both the hand-annotated data and weakly-annotated data. For the weakly-annotated data, this S-Net is only constrained to output segmentation masks that have the same average pixel values as the weakly-annotated guano masks. This model is denoted as \"S-Net SR. +\" in the third row of Table 2. This simple modification improves the mIoU by 30%, from 42.3 to 55.0, compared to the model trained only on the hand-labelled training set. We then evaluate the effect of the C-Net. We use C-Net to classify all 29043 training patches that are marked as containing guano according to the weakly-annotated guano masks. We then remove image patches classified as \"No Guano\" from the training pool. As can be seen from Table 1, C-Net removes 19446 image patches, which are 67% of the whole set of positive training patches that are weakly-annotated. With less noisy training images, S-Net achieves better segmentation performance on the Penguin Colony Dataset, where the mIoU improves from 55.0 to 60.0. Fig. 4 illustrates the effect of the C-Net.
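The tiled inference procedure described at the start of Sec. 5.2 (256 × 256 patches, stride 128, averaging overlapping predictions at each pixel) can be sketched as follows; s_net and its single-channel output are assumptions for illustration.

```python
import torch

@torch.no_grad()
def predict_full_mask(s_net, image, patch=256, stride=128, device="cuda"):
    """Tile an image into overlapping patches, run S-Net on each tile, and
    average the overlapping predictions at every pixel.

    image : (3, H, W) float tensor; returns an (H, W) soft guano mask.
    Assumes s_net maps (1, 3, p, p) -> (1, 1, p, p) scores.
    """
    s_net.eval().to(device)
    _, H, W = image.shape
    accum = torch.zeros(H, W)
    count = torch.zeros(H, W)
    # Window origins, including a final window flush with the image border.
    ys = sorted({*range(0, max(H - patch, 0) + 1, stride), max(H - patch, 0)})
    xs = sorted({*range(0, max(W - patch, 0) + 1, stride), max(W - patch, 0)})
    for y in ys:
        for x in xs:
            tile = image[:, y:y + patch, x:x + patch].unsqueeze(0).to(device)
            out = s_net(tile)[0, 0].cpu()
            h, w = out.shape
            accum[y:y + h, x:x + w] += out
            count[y:y + h, x:x + w] += 1
    return accum / count.clamp(min=1)
```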
The figure shows that S-Net trained on the complete data is forced to predict guano even in the snow-covered areas, while the model trained on the filtered data, \"C+S-Net\", does not predict any guano pixels in these cases. Fig. 5 shows a qualitative comparison between our proposed method and the other three baselines discussed above. The three penguin colonies shown in the figure are (from top to bottom) Arthurson Ridge, Balaena Islands, and Cape Crozier. As can be seen, the model trained on only the hand-labelled data does not segment Arthurson Ridge and Cape Crozier correctly. The model trained without the C-Net picks up some non-guano pixels at Arthurson Ridge while missing some guano areas at Balaena Islands and Cape Crozier. The model trained on the data filtered by C-Net outputs cleaner and more complete segmentation masks. The effect of using weakly-labelled data is shown best in Fig. 6. We compare the results of our model trained with and without the weakly-annotated data for images captured at the penguin site named Arthurson Ridge throughout multiple years. The weakly-annotated data significantly improves the generalizability of the network.
Figure 5. Qualitative comparison between our method and other baseline methods on the Penguin Colony Dataset. All methods use S-Net as the backbone segmentation network. From left to right: the input image, the input image overlaid with the ground-truth penguin guano polygon, the results of S-Net trained only with the correctly-annotated set, the results of S-Net trained on the correctly-annotated set with the segmentation loss and on the weakly-annotated set with the regression loss. The last column is our proposed method using the C-Net to filter out bad training examples for training the S-Net on both the correctly-annotated and weakly-annotated sets. Imagery copyright DigitalGlobe, Inc. 2019.
Figure 6. Qualitative comparison of S-Net trained with and without weakly-labeled data. The top row shows the input images, the middle row visualizes the results of S-Net trained only on the hand-labelled data, and the bottom row visualizes the results of S-Net trained on the hand-labelled data and weakly-annotated data. We used our trained C-Net to filter the training data before training this S-Net. Imagery copyright DigitalGlobe, Inc. 2019. 6." + } + ] + }, + "edge_feat": {} + } +}