{ "url": "http://arxiv.org/abs/2404.16831v2", "title": "The Third Monocular Depth Estimation Challenge", "abstract": "This paper discusses the results of the third edition of the Monocular Depth\nEstimation Challenge (MDEC). The challenge focuses on zero-shot generalization\nto the challenging SYNS-Patches dataset, featuring complex scenes in natural\nand indoor settings. As with the previous edition, methods can use any form of\nsupervision, i.e. supervised or self-supervised. The challenge received a total\nof 19 submissions outperforming the baseline on the test set: 10 among them\nsubmitted a report describing their approach, highlighting a diffused use of\nfoundational models such as Depth Anything at the core of their method. The\nchallenge winners drastically improved 3D F-Score performance, from 17.51% to\n23.72%.", "authors": "Jaime Spencer, Fabio Tosi, Matteo Poggi, Ripudaman Singh Arora, Chris Russell, Simon Hadfield, Richard Bowden, GuangYuan Zhou, ZhengXin Li, Qiang Rao, YiPing Bao, Xiao Liu, Dohyeong Kim, Jinseong Kim, Myunghyun Kim, Mykola Lavreniuk, Rui Li, Qing Mao, Jiang Wu, Yu Zhu, Jinqiu Sun, Yanning Zhang, Suraj Patni, Aradhye Agarwal, Chetan Arora, Pihai Sun, Kui Jiang, Gang Wu, Jian Liu, Xianming Liu, Junjun Jiang, Xidan Zhang, Jianing Wei, Fangjun Wang, Zhiming Tan, Jiabao Wang, Albert Luginov, Muhammad Shahzad, Seyed Hosseini, Aleksander Trajcevski, James H. Elder", "published": "2024-04-25", "updated": "2024-04-27", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "label": "Original Paper", "paper_cat": "Diffusion AND Model", "gt": "Monocular depth estimation (MDE) aims at predicting the distance from the camera to the points of the scene de- picted by the pixels in the captured image. It is a highly ill-posed problem due to the absence of geometric priors usually available from multiple images. Nonetheless, deep learning has rapidly advanced this field and made it a real- ity, enabling results far beyond imagination. 1Independent 2University of Bologna 3Blue River Technology 4Oxford Internet Institute 5University of Surrey 6ByteDance 7University of Chinese Academy of Science 8RGA Inc. 9Space Research Institute NASU-SSAU, Kyiv, Ukraine 10Northwestern Polytechnical University, Xi\u2019an 11Indian Institute of Technology, Delhi 12Harbin Institute of Technology 13Fujitsu 14GuangXi University 15University of Reading 16York University For years, most proposed approaches have been tailored to training and testing in a single, defined domain \u2013 e.g., automotive environments [33] or indoor settings [64] \u2013 of- ten ignoring their ability to generalize to unseen environ- ments. Purposely, the Monocular Depth Estimation Chal- lenge (MDEC) in the last years has encouraged the commu- nity to delve into this aspect, by proposing a new benchmark for evaluating MDE models on a set of complex environ- ments, comprising natural, agricultural, urban, and indoor settings. The dataset comes with a validation and a testing split, without any possibility of training/fine-tuning over it thus forcing the models to generalize. While the first edition of MDEC [90] focused on bench- marking self-supervised approaches, the second [91] addi- tionally opened the doors to supervised methods. During the former, the participants outperformed the baseline [30, 92] in all image-based metrics (AbsRel, MAE, RMSE), but could not improve pointcloud reconstructions [65] (F- Score). 
The latter, instead, brought new methods capable of outperforming the baseline on both aspects, establishing a new State-of-the-Art (SotA). The third edition of MDEC, detailed in this paper, ran in conjunction with CVPR2024, following the successes of the second one by allowing submissions of methods exploiting any form of supervision, e.g. supervised, self-supervised, or multi-task. Following previous editions, the challenge was built around SYNS-Patches [1, 92]. This dataset was chosen because of the wide diversity of environments it contains, including urban, residential, industrial, agricultural, natural, and indoor scenes. Furthermore, SYNS-Patches contains dense high-quality LiDAR ground-truth, which is very challenging to obtain in outdoor settings. This allows for a benchmark that accurately reflects the real capabilities of each model, potentially free from biases. While the second edition counted 8 teams outperforming the SotA baseline in either pointcloud- or image-based metrics, this year 19 submissions achieved this goal. Among these, 10 submitted a report introducing their approach, 7 of which outperformed the winning team of the second edition. This demonstrates the increasing interest \u2013 and efforts \u2013 in MDEC. In the remainder of the paper, we will provide an overview of each submission, analyze their results on SYNS-Patches, and discuss potential future developments.", "main_content": "Supervised MDE. Early monocular depth estimation (MDE) efforts utilized supervised learning, leveraging ground truth depth labels. Eigen et al. [26] proposed a pioneering end-to-end convolutional neural network (CNN) for MDE, featuring a scale-invariant loss and a coarse-to-fine architecture. Subsequent advancements incorporated structured prediction models such as Conditional Random Fields (CRFs) [54, 120] and regression forests [82]. Deeper network architectures [80, 109], multi-scale fusion [63], and transformer-based encoders [8, 16, 79] further enhanced performance. Alternatively, certain methods framed depth estimation as a classification problem [6, 7, 28, 51]. Novel loss functions were also introduced, including gradient-based regression [53, 104], the berHu loss [50], an ordinal relationship loss [14], and scale/shift invariance [80]. Self-Supervised MDE. To overcome the dependence on costly ground truth annotations, self-supervised methods were developed. Garg et al. [30], for the first time, proposed an algorithm based on view synthesis and photometric consistency across stereo image pairs, the importance of which was extensively analyzed by Poggi et al. [74]. Godard et al. [34] introduced Monodepth, which incorporated differentiable bilinear interpolation [44], virtual stereo prediction, and an SSIM+L1 reconstruction loss. Zhou et al. [130] presented SfM-Learner, which required only monocular video supervision by replacing the known stereo transform with a pose estimation network. Following the groundwork laid by these frameworks, subsequent efforts focused on refining the depth estimation accuracy by integrating feature-based reconstructions [89, 119, 124], semantic segmentation [122], adversarial losses [3], proxy-depth representations [5, 18, 48, 70, 83, 97, 107], trinocular supervision [75], and other constraints [9, 61, 103]. Other works focused on improving depth estimates at object boundaries [96, 99].
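To make the photometric objective underlying these self-supervised pipelines concrete, the sketch below shows a common SSIM+L1 reconstruction loss in the style popularized by Monodepth [34]; this is an illustration only, and the 3x3 window and 0.85 weighting are typical choices rather than values prescribed by this paper.

```python
import torch
import torch.nn.functional as F

def ssim_dissimilarity(x, y):
    # Simplified SSIM over 3x3 windows, returned as a per-pixel dissimilarity in [0, 1].
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(reconstruction, target, alpha=0.85):
    # Weighted SSIM+L1 photometric error between the view synthesized from the
    # predicted depth (and the known/estimated pose) and the actual target frame.
    l1 = (reconstruction - target).abs().mean(1, keepdim=True)
    ssim = ssim_dissimilarity(reconstruction, target).mean(1, keepdim=True)
    return (alpha * ssim + (1 - alpha) * l1).mean()
```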
Moreover, attention has also been given to challenging cases involving dynamic scenarios during the training phase, which pose difficulties in providing accurate supervision signals for such networks. This has been addressed, for example, by incorporating uncertainty estimates [48, 73, 112], motion masks [11, 22, 37, 98], optical flow [59, 81, 118], or via the minimum reconstruction loss [35]. Finally, several architectural innovations, including 3D (un)packing blocks [38], position encoding [36], transformer-based encoders [2, 127], sub-pixel convolutions [71], progressive skip connections [60], and self-attention decoders [46, 110, 129], allowed further improvements. Among them, lightweight models tailored for real-time applications with memory and runtime constraints have also been developed [4, 19, 43, 68, 69, 72, 108]. Generalization and \u201cIn-the-Wild\u201d MDE. Estimating depth in the wild refers to the challenging task of developing methods that can generalize to a wide range of unknown settings [14, 15]. Early works in this area focused on predicting relative (ordinal) depth [14, 15]. Nonetheless, the limited suitability of relative depth in many downstream contexts has driven researchers to explore affine-invariant depth estimation [53, 113]. In the affine-invariant setting, depth is estimated up to an unknown global offset and scale, offering a compromise between ordinal and metric representations. Researchers have employed various strategies to achieve generalization, including leveraging annotations from large datasets to train monocular depth models [79, 80, 111], drawn from internet photo collections [53, 113], automotive LiDAR [33, 38, 42], RGB-D/Kinect sensors [17, 64, 95], structure-from-motion reconstructions [52, 53], optical flow/disparity estimation [80, 109], and crowd-sourced annotations [14]. However, the varying accuracy of these annotations may have impacted model performance, and acquiring new data sources remains a challenge, motivating the exploration of self-supervised approaches [116, 125]. For instance, KBR(++) [93, 94] leverage large-scale self-supervision from curated internet videos. The transition from CNNs to vision transformers has further boosted performance in this domain, as demonstrated by DPT (MiDaS v3) [79] and Omnidata [25]. Furthermore, a few works like Metric3D [114] and ZeroDepth [40] revisited depth estimation by explicitly feeding camera intrinsics as additional input. A notable recent trend involves training generative models, especially diffusion models [29, 41, 88], for monocular depth estimation [24, 45, 47, 84, 85]. Adverse Weather and Transparent/Specular Surfaces. Existing monocular depth estimation networks have struggled under adverse weather conditions. Approaches have addressed low visibility [89], employed day-night branches using GANs [100, 126], utilized additional sensors [31], or faced trade-offs [101]. Recently, md4all [32] enabled robust performance across conditions without compromising performance in ideal settings. Furthermore, estimating depth for transparent or mirror (ToM) surfaces posed a unique challenge [121, 123]. Costanzino et al. [21] is the only work dedicated to this, introducing novel datasets [77, 78].
Their approach relied on segmentation maps or pre-trained networks, generating pseudo-labels by inpainting ToM objects and processing them with a pre-trained depth model [80], enabling fine-tuning of existing networks to handle ToM surfaces. Figure 1. SYNS-Patches Properties. Top: Distribution of images per category in the validation split and the test split respectively. Bottom: Depth distribution per scene type (Outdoor-Urban, Outdoor-Natural, Outdoor-Agriculture, Indoor) \u2013 indoor scenes are limited to 20m, while outdoor scenes reach up to 120m; natural and agricultural scenes contain a larger percentage of long-range depths (20-80m), while urban scenes focus on the mid-range (20-40m). 3. The Monocular Depth Estimation Challenge The third edition of the Monocular Depth Estimation Challenge (https://codalab.lisn.upsaclay.fr/competitions/17161) was organized on CodaLab [67] as part of a CVPR2024 workshop. The development phase lasted four weeks, using the SYNS-Patches validation split. During this phase, the leaderboard was public but the usernames of the participants were anonymized. Each participant could see the results achieved by their own submission. The final phase of the challenge was open for three weeks. At this stage, the leaderboard was completely private, preventing participants from seeing their own scores. This choice was made to encourage evaluation on the validation split rather than the test split and, together with the fact that all ground-truth depths were withheld, to strictly prevent any overfitting to the test set through repeated evaluations on it. Following the second edition [91], any form of supervision was allowed, in order to provide a more comprehensive overview of the monocular depth estimation field as a whole. This makes it possible to better study the gap between different techniques and identify possible future research directions. In this paper, we report results only for submissions that outperformed the baseline in any pointcloud-/image-based metric on the Overall dataset. Dataset. The challenge is based on the SYNS-Patches dataset [1, 92], chosen due to the diversity of its scenes and environments. A breakdown of images per category and some representative examples are shown in Figure 1 and Figure 2. SYNS-Patches also provides extremely high-quality dense ground-truth LiDAR, with an average coverage of 78.20% (including sky regions). Given such dense ground-truth, depth boundaries were obtained using Canny edge-detection on the log-depth maps, allowing us to compute additional fine-grained metrics for these challenging regions. As outlined in [91, 92], the images were manually checked to remove dynamic object artifacts. Evaluation. Participants were asked to provide the up-to-scale disparity prediction for each dataset image. The evaluation server bilinearly upsampled the predictions to the target resolution and inverted them into depth maps. Although self-supervised methods trained with stereo pairs and supervised methods using LiDAR or RGB-D data should be capable of predicting metric depth, in order to ensure comparisons are as fair as possible, the evaluation aligned any predictions with the ground-truth using the median depth. We set a maximum depth threshold of 100 meters.
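As an illustration only (not the official challenge code), the evaluation protocol described above can be sketched as follows, assuming a prediction that has already been upsampled to the ground-truth resolution; it also computes the standard image-based metrics introduced in the next paragraph.

```python
import numpy as np

def evaluate_prediction(pred_disp, gt_depth, max_depth=100.0):
    """Sketch of the protocol described above: invert the up-to-scale disparity,
    align it to the ground truth via median-depth scaling, cap the range, and
    compute the standard image-based metrics."""
    pred_depth = 1.0 / np.clip(pred_disp, 1e-6, None)      # disparity -> depth
    valid = (gt_depth > 0) & (gt_depth <= max_depth)       # pixels with LiDAR ground truth
    pred, gt = pred_depth[valid], gt_depth[valid]
    pred = pred * (np.median(gt) / np.median(pred))        # median-depth alignment
    pred = np.minimum(pred, max_depth)
    err = pred - gt
    return {"MAE": np.abs(err).mean(),
            "RMSE": np.sqrt((err ** 2).mean()),
            "AbsRel": (np.abs(err) / gt).mean()}
```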
Metrics. Following the first and second editions of the challenge [90, 91], we use a mixture of image-/pointcloud-/edge-based metrics. Image-based metrics are the most common (MAE, RMSE, AbsRel) and are computed using pixel-wise comparisons between the predicted and ground-truth depth map. Pointcloud-based metrics [65] (F-Score, IoU, Chamfer distance) instead bring the evaluation into the 3D domain, evaluating the reconstructed pointclouds as a whole. Among these, we select reconstruction F-Score as the leaderboard ranking metric. Finally, edge-based metrics are computed only at depth boundary pixels. This includes image-/pointcloud-based metrics and edge accuracy/completion metrics from IBims-1 [49]. Figure 2. SYNS-Patches Dataset. We show samples from diverse scenes, including complex urban, natural, and indoor spaces. High-quality ground-truth depth covers about 78.20% of the image, from which depth boundaries are computed as Canny edges in log space. 4. Challenge Submissions We now highlight the technical details for each submission, as provided by the authors themselves. Each submission is labeled based on the supervision used, including ground-truth (D), proxy ground-truth (D*), Depth Anything [111] pretraining (\u2020), and monocular (M) or stereo (S) photometric support frames. Teams are numbered according to rankings. Baseline \u2013 S J. Spencer j.spencermartin@surrey.ac.uk C. Russell chris.russell@oii.ox.ac.uk S. Hadfield s.hadfield@surrey.ac.uk R. Bowden r.bowden@surrey.ac.uk Challenge organizers\u2019 submission from the first edition. Network. ConvNeXt-B encoder [56] with a base Monodepth decoder [34, 62] from [92]. Supervision. Self-supervised with a stereo photometric loss [30] and edge-aware disparity smoothness [34]. Training. Trained for 30 epochs on KITTI Eigen-Zhou with an image resolution of 192 \u00d7 640. Team 1: PICO-MR \u2013 \u2020D* G. Zhou zhouguangyuan@bytedance.com Z. Li lizhengxin17@mails.ucas.ac.cn Q. Rao raoqiang@bytedance.com Y. Bao baoyiping@bytedance.com X. Liu liuxiao@foxmail.com Network. Based on Depth Anything [111] with a BEiT384L backbone, starting from the authors\u2019 weights pre-trained on 1.5M labeled images and 62M+ unlabeled images. Supervision. The model is fine-tuned in a supervised manner, with proxy labels derived from stereo images. The final loss function integrates the SILog loss, SSIL loss, Gradient loss, and Random Proposal Normalization (RPNL) loss. Training. The network was fine-tuned on the CityScapes dataset [20], resizing the input to 384\u00d7768 resolution, while keeping proxy labels at 1024 \u00d7 2048 resolution. Random flipping is used to augment data, the batch size is set to 16, and the learning rate to 0.000161. The fine-tuning targets metric depth prediction and stops early at 4 epochs, a strategic choice to prevent overfitting and ensure the model\u2019s robustness to new data.
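Several submissions, including this one, report fine-tuning with the scale-invariant logarithmic (SILog) loss. A minimal sketch of its common formulation is given below; the variance focus lam and the scaling alpha are typical values for illustration, not ones prescribed by the teams unless explicitly stated.

```python
import torch

def silog_loss(pred_depth, gt_depth, lam=0.85, alpha=10.0):
    # Scale-invariant logarithmic (SILog) loss in its widely used form:
    # g = log(pred) - log(gt);  loss = alpha * sqrt(mean(g^2) - lam * mean(g)^2).
    valid = gt_depth > 0
    g = torch.log(pred_depth[valid]) - torch.log(gt_depth[valid])
    return alpha * torch.sqrt((g ** 2).mean() - lam * g.mean() ** 2)
```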
Team 3: RGA-Robot \u2013 \u2020S D. Kim figure317@rgarobot.com J. Kim jsk24@rgarobot.com M. Kim wiseman218@rgarobot.com Network. It uses the Depth Anything [111] pre-trained model to estimate relative depth, accompanied by an auxiliary network to convert it into metric depth. The latter is NAFNet [13], which processes the final feature maps and relative depth map predicted by the former model together with the input image. Supervision. Self-supervised loss with two main terms: an image reconstruction loss and a smoothness loss. The former combines a perceptual loss with the photometric loss used in Monodepth2 [35], with the perceptual term relying on a pre-trained VGG19 backbone [87], following an approach similar to ESRGAN [106]. Training. Training is carried out on KITTI Eigen-Zhou with batch size 8 and learning rate 1e\u22124 for 4 epochs. Only NAFNet is trained, while the Depth Anything model remains frozen. Team 4: EVP++ \u2013 \u2020D M. Lavreniuk nick 93@ukr.net Network. The architecture is based on Depth Anything [111], incorporating a ViT-L encoder [23] for feature extraction and the ZoeDepth metric bins module [8] as a decoder. This module computes per-pixel depth bin centers, which are linearly combined to produce metric depth. Supervision. The models were trained in a supervised manner using ground-truth depth information obtained from various datasets, employing the SILog loss function. Training. The models were trained on both indoor and outdoor data, respectively on the NYUv2 dataset [64] with an image size of 392 \u00d7 518, and on KITTI [33], Virtual KITTI 2 [10], and DIODE outdoor [102] with an image size of 518\u00d71078. The batch size was set to 16, the learning rate to 0.000161, and the maximum depth to 10 for indoor scenes. For outdoor scenes, the batch size was set to 1, the learning rate to 0.00002, and the maximum depth to 80. Both models were trained for 5 epochs. Team 6: 3DCreators \u2013 \u2020D R. Li lirui.david@gmail.com Q. Mao maoqing@mail.nwpu.edu.cn J. Wu 18392713997@mail.nwpu.edu.cn Y. Zhu yuzhu@nwpu.edu.cn J. Sun sunjinqiu@nwpu.edu.cn Y. Zhang ynzhang@nwpu.edu.cn Network. An architecture made of two sub-networks. The first model consists of a pre-trained ViT-Large backbone [23] from Depth Anything [111] and a ZoeDepth decoder [8]. The second is Metric3D [115], which uses a ConvNeXt-Large [57] backbone and a LeRes decoder [117]. Supervision. The first network is fine-tuned on the KITTI dataset using the SILog loss. The second network uses the released pre-trained weights, trained on a diverse collection of datasets as detailed in [115]. Training. The first network is fine-tuned using batch size 16 for 5 epochs. At inference, test-time augmentation \u2013 i.e., color jittering and horizontal flipping \u2013 is used to combine the predictions of the two models: the same image is augmented 10 times and processed by both models, then the predictions are averaged.
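A minimal sketch of such test-time augmentation ensembling is shown below; the model interface, jitter strengths, and helper names are assumptions for illustration, not the team's released code.

```python
import torch
from torchvision.transforms import ColorJitter

def tta_predict(models, image, n_aug=10):
    # `models` is a list of callables mapping an image tensor (B, 3, H, W) to a
    # depth tensor (B, 1, H, W). Each image is randomly color-jittered, every
    # model is run on the original and on a horizontally flipped version, and
    # all resulting depth maps are averaged.
    jitter = ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2)
    preds = []
    for _ in range(n_aug):
        aug = jitter(image)
        for model in models:
            preds.append(model(aug))
            flipped_pred = model(torch.flip(aug, dims=[-1]))
            preds.append(torch.flip(flipped_pred, dims=[-1]))  # undo the flip
    return torch.stack(preds).mean(dim=0)
```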
Team 7: visioniitd \u2013 D S. Patni suraj.patni@cse.iitd.ac.in A. Agarwal aradhye.agarwal.cs520@cse.iitd.ac.in C. Arora chetan@cse.iitd.ac.in Network. The model is ECoDepth [66], which provides effective conditioning for the MDE task to diffusion methods like Stable Diffusion. It is based on a Comprehensive Image Detail Embedding (CIDE) module which utilizes ViT embeddings of the image and subsequently transforms them to yield a semantic context vector. These embeddings are used to condition the pre-trained UNet backbone in Stable Diffusion, which produces hierarchical feature maps from its decoder. These are resized to a common dimension and passed to the upsampling decoder and depth regressor to produce the final depth. Supervision. Supervised training on ground-truth depth with the SILog loss, using a variance focus (\u03bb) of 0.85. Ground-truth depth is transformed as 1/(1+x). Training. Trained on NYUv2 [64], KITTI [33], and Virtual KITTI 2 [10] for 25 epochs, with a one-cycle learning rate (min: 3e\u22125, max: 5e\u22124) and batch size 32 on 8\u00d7 A100 GPUs. Team 9: HIT-AIIA \u2013 \u2020D P. Sun 23s136164@stu.hit.edu.cn K. Jiang jiangkui@hit.edu.cn G. Wu gwu@hit.edu.cn J. Liu hitcslj@hit.edu.cn X. Liu csxm@hit.edu.cn J. Jiang jiangjunjun@hit.edu.cn Network. It involves the pre-trained Depth Anything encoder and a pre-trained CLIP model. The latter is introduced to calculate the similarity between the keywords \u2018indoor\u2019 or \u2018outdoor\u2019 and features extracted from the input image, routing it to one of two different instances of Depth Anything specialized on indoor or outdoor scenarios. Supervision. Two instances of Depth Anything are fine-tuned on ground-truth labels, respectively from NYUv2 and KITTI for indoor and outdoor environments. Training. The training resolution is 392 \u00d7 518 on NYUv2 and 384 \u00d7 768 on KITTI. The batch size is 16 and both instances are trained for 5 epochs. Team 10: FRDC-SH \u2013 \u2020D X. Zhang zhangxidan@fujitsu.com J. Wei weijianing@fujitsu.com F. Wang wangfangjun@fujitsu.com Z. Tan zhmtan@fujistu.com Network. The depth network is the Depth Anything [111] pre-trained model \u2013 based on ZoeDepth [8] with a DPT BEiT L384 \u2013 and further fine-tuned. Supervision. Trained on ground-truth depth, with SILog and Hyperbolic Chamfer Distance losses. Training. The model is fine-tuned on NYU-v2 [64], 7Scenes [86], SUNRGBD [128], DIODE [102], KITTI [33], DDAD [39], and Argoverse [12] \u2013 without any resizing of the image resolution \u2013 for 20 epochs with batch size 32, a learning rate set to 1.61e-04, and a 0.01 weight decay. Team 15: hyc123 \u2013 D J. Wang 601533944@qq.com Network. Swin encoder [55] with skip connections and a decoder with channel-wise self-attention modules. Supervision. Trained with ground-truth depths, using a loss consisting of a combination of two L1 losses and an SSIM loss, weighted accordingly. Training. The model was trained on the KITTI Eigen-Zhou split using images of size 370 \u00d7 1224 for 100 epochs. Team 16: ReadingLS \u2013 \u2020MD* A. Luginov a.luginov@pgr.reading.ac.uk M. Shahzad m.shahzad2@reading.ac.uk Network. The depth network is SwiftDepth [58], a compact model with only 6.4M parameters. Supervision. Self-supervised monocular training with the minimum reconstruction loss [35], enhanced by offline knowledge distillation from a large MDE model [111]. Training. The model is trained in parallel on KITTI Eigen-Zhou and a selection of outdoor YouTube videos, similarly to KBR [93]. Both training and prediction are performed with an input resolution of 192 \u00d7 640. The teacher model [111] is not trained on either of these datasets or on SYNS-Patches. Team 19: Elder Lab \u2013 D S. Hosseini smhh@yorku.ca A. Trajcevski atrajcev@yorku.ca J. H. Elder jelder@yorku.ca Network. An off-the-shelf semantic segmentation model [105] is first used to segment the image. Then, the depth of pixels on the ground plane is estimated by predicting the camera angle from the height of the highest pixel on the ground. Next, depth is propagated vertically for pixels above the ground, while the Manhattan frame is estimated with [76] to identify both Manhattan and non-Manhattan segments in the image and propagate depth along them in 3D space. Finally, the depth map is completed according to heat equations [27], with pixels whose depth has already been estimated imposing forcing conditions, while semantic boundaries and the image frame impose reflection boundary conditions. Supervision. Ground-truth depth is used for training three kernel regression models. Training.
Three simple statistical models are trained on CityScapes [20] and NYUv2 [64]: 1) A kernel regression model to estimate the ground elevation angle from the vertical image coordinate of the highest observed ground pixel. The ground-truth elevation angle is computed by fitting a plane (constrained to have zero roll) to the ground-truth ground plane coordinates; 2) A kernel regression model to estimate the depth of ground pixels from their vertical coordinate, conditioned on semantic class; 3) A model of the median depth of non-ground pixels in columns directly abutting the bottom of the image frame, conditioned on semantic class. 5. Results Submitted methods were evaluated on the testing split of SYNS-Patches [1, 92]. Participants were allowed to submit methods without any restriction on the supervision or on the predictions of the model, which can be either relative or metric. Accordingly, to ensure a fair comparison among the methods, the submitted predictions are aligned to ground-truth depths according to median depth scaling. 5.1. Quantitative Results Table 1 highlights the results of this third edition of the challenge, with the top-performing techniques, ordered by F-Score performance, achieving notable improvements over the baseline method. A first noteworthy observation is the widespread adoption of the Depth Anything model [111], pre-trained on 62M images, as the backbone architecture by the leading teams, including PICO-MR, RGA-Robot, EVP++, 3DCreators, HIT-AIIA, FRDC-SH, and ReadingLS, demonstrating its effectiveness and versatility.
This strategy notably improves the results in terms of standard 2D error metrics, yielding the lowest MAE, RMSE, and AbsRel. Several other teams also surpassed both the baseline method and the previous state-of-the-art from the second edition of the challenge. Team 3DCreators achieved an 6 Table 1. SYNS-Patches Results. We provide metrics across the whole test split of the dataset. Top-performing entries generally leverage the pre-trained Depth Anything [111] model. Only a few methods use self-supervised losses or proxy depth labels. Train Rank F\u2191 F-Edges\u2191 MAE\u2193 RMSE\u2193 AbsRel\u2193 Acc-Edges\u2193 Comp-Edges\u2193 PICO-MR \u2020D* 1 23.72 11.01 3.78 6.61 21.24 3.90 4.45 Anonymous ? 2 23.25 10.78 3.87 6.70 21.70 3.59 9.86 RGA-Robot \u2020S 3 22.79 11.52 5.21 9.23 28.86 4.15 0.90 EVP++ \u2020D 4 20.87 10.92 3.71 6.53 19.02 2.88 6.77 Anonymous ? 5 20.77 9.96 4.33 7.83 27.80 3.45 13.25 3DCreators \u2020D 6 20.42 10.19 4.41 7.89 23.94 3.61 5.80 visioniitd D 7 19.07 9.92 4.53 7.96 23.27 3.26 8.00 Anonymous ? 8 18.60 9.43 3.92 7.16 20.12 2.89 15.65 HIT-AIIA \u2020D 9 17.83 9.14 4.11 7.73 21.23 2.95 17.81 FRDC-SH \u2020D 10 17.81 9.75 5.04 8.92 24.01 3.16 14.16 Anonymous ? 11 17.57 9.13 4.28 8.36 23.35 3.18 20.66 Anonymous ? 12 16.91 9.07 4.14 7.35 22.05 3.24 18.52 Anonymous ? 13 16.71 9.25 5.48 11.05 34.20 2.57 18.04 Anonymous ? 14 16.45 8.89 5.29 10.53 33.67 2.60 18.73 hyc123 D 15 15.92 9.17 8.25 13.88 43.88 4.11 0.74 ReadingLS \u2020MD* 16 14.81 8.14 5.01 8.94 29.39 3.28 30.28 Baseline S 17 13.72 7.76 5.56 9.72 32.04 3.97 21.63 Anonymous ? 18 13.71 7.55 5.49 9.44 30.74 3.61 18.36 Anonymous ? 19 11.90 8.08 6.33 10.89 30.46 2.99 33.63 Elder Lab D 20 11.04 7.09 8.76 15.86 63.32 3.22 40.61 M=Monocular \u2013 S=Stereo \u2013 D*=Proxy Depth \u2013 D=Ground-truth Depth \u2013 \u2020=Pre-trained Depth Anything model F-score of 20.42, outperforming the baseline by 48.8% by fine-tuning and combining predictions from the Depth Anything model and Metric3D. Team visioniitd follows surpassing the baseline using ECoDepth, which conditions Stable Diffusion\u2019s UNet backbone with Comprehensive Image Detail Embeddings. Team HIT-AIIA and FRDC-SH also achieved notable improvements, with F-score of 17.83 and 17.81, respectively, using specialized model instances and fine-tuning on diverse datasets. Finally, the remaining teams outperformed the baseline either on the F-score or any of the other metrics, yet not surpassing the winner of the previous edition. Team hyc123, with an F-score of 15.92, outperformed the baseline by 16.0% using a Swin encoder with skip connections and a decoder with channel-wise self-attention modules, while Team ReadingLS outperforms the baseline by distilling knowledge from Depth Anything to a lightweight network based on SwiftDepth, further improved using minimal reconstruction loss during training. Finally, Team Elder Lab employed an off-the-shelf semantic segmentation model and estimated depth using techniques such as predicting camera angle, propagating depth along Manhattan and non-Manhattan segments, and completing the depth map using heat equations. They achieved an F-score of 11.04, 19.5% lower than the baseline score of 13.72, yet they obtained 3.22 Acc-Edge, beating the baseline. 5.2. Qualitative Results Figure 3 provides qualitative results for the depth predictions of each submission. A notable trend among the top-performing teams, such as PICO-MR, RGA-Robot, EVP++, and 3DCreators, is the adoption of the Depth Anything model as a backbone architecture. 
While Depth Anything represents the current state-of-the-art in monocular depth estimation, the qualitative results highlight that there are still significant challenges in accurately estimating depth, particularly for thin structures in complex outdoor scenes. This is evident in columns 2, 4, 5, and 6 of Figure 3, where objects like trees and branches are not well-recovered, despite the impressive quantitative performance of these methods as shown in Table 1. Interestingly, Team visioniitd, which employs a novel approach called ECoDepth to condition Stable Diffusion\u2019s UNet backbone with Comprehensive Image Detail Embeddings, demonstrates a remarkable ability to estimate depth for thin structures. Yet, they are outperformed quantitatively by other methodologies, suggesting that estimating depth in smooth regions may be more challenging than in thin structures. The qualitative results also reveal some method-specific anomalies. For instance, hyc123 exhibits salt-and-pepper noise artifacts, while Elder Lab\u2019s method, which ranks last, generates overly smooth depth maps that lose important scene objects. These anomalies highlight the importance of developing robust techniques that can handle diverse scene characteristics. Grid-like artifacts are observed in the predictions of top-performers PICO-MR and RGA-Robot, particularly in regions where the network seems uncertain about depth estimates. This suggests that further improvements in network architecture and training strategies may be necessary to mitigate these artifacts. Figure 3. SYNS-Patches Depth Visualization (methods shown: GT, PICO-MR, RGA-Robot, EVP++, 3DCreators, visioniitd, HIT-AIIA, FRDC-SH, hyc123, ReadingLS, Baseline, Elder Lab). Best viewed in color and zoomed in. Methods are ranked based on their F-Score in Table 1. We can appreciate how thin structures, such as branches and railings, still represent one of the hardest challenges for any method. Near depth discontinuities, most approaches tend to produce \u201chalos\u201d, interpolating between foreground and background objects and thus failing to perceive sharp boundaries. Nonetheless, most methods expose a higher level of detail compared to the baseline. The indoor scenario in the last column shows the strong performance of methods like PICO-MR, EVP++, HIT-AIIA, and FRDC-SH in estimating scene structure. This can be attributed to their use of large-scale pre-training, fine-tuning on diverse datasets, and carefully designed loss functions that capture both global and local depth cues. However, all methods still exhibit over-smoothing issues at depth discontinuities, manifesting as halo effects. While they outperform the baseline in this regard, likely due to their supervised training with ground truth or proxy labels, there remains significant room for improvement. A notable limitation across all methods is the inability to effectively estimate depth for non-Lambertian surfaces, such as glass or transparent objects. This is evident in the penultimate right column and the first column, corresponding to the windshield. The primary reason for this limitation is the lack of accurate supervision for such surfaces in the training data, highlighting the need for novel techniques and datasets that explicitly address this challenge. In conclusion, the qualitative results provide valuable insights into the current state of monocular depth estimation methods.
While the adoption of large-scale pre-training and carefully designed architectures has led to significant improvements, challenges persist in accurately estimating depth for thin structures, smooth regions, and non-Lambertian surfaces. Addressing these limitations through novel techniques, improved training strategies, and diverse datasets will be crucial for further advancing this field. 6. Conclusions & Future Work This paper has summarized the results for the third edition of MDEC. Over the various editions of the challenge, we have seen a drastic improvement in performance, showcasing MDE \u2013 in particular real-world generalization \u2013 as an exciting and active area of research. With the advent of the first foundational models for MDE over the last months, we observed widespread use of frameworks such as Depth Anything [111]. This provided a major boost to the results submitted by the participants, with a much higher impact than the specific kind of supervision chosen for the challenge. Nonetheless, as we can appreciate from the qualitative results, all methods still struggle to accurately predict fine structures and discontinuities, hinting that there is still room for improvement despite the massive amount of data used to train Depth Anything. We hope MDE will continue to attract new researchers and practitioners to this field and renew our invitation to participate in future editions of the challenge. Acknowledgments. This work was partially funded by the EPSRC under grant agreements EP/S016317/1, EP/S016368/1, EP/S016260/1, EP/S035761/1.", "additional_graph_info": { "graph": [ [ "Jaime Spencer", "Chris Russell" ], [ "Jaime Spencer", "Fabio Tosi" ], [ "Jaime Spencer", "Matteo Poggi" ], [ "Fabio Tosi", "Matteo Poggi" ], [ "Fabio Tosi", "Pierluigi Zama Ramirez" ], [ "Matteo Poggi", "Filippo Aleotti" ], [ "Matteo Poggi", "Andrea Conti" ] ] } }
Early monocular depth estimation (MDE) efforts utilized supervised learning, leveraging ground truth depth labels. Eigen et al. [26] proposed a pioneering end-to-end convolutional neural network (CNN) for MDE, featuring a scale-invariant loss and a coarse-to-fine architecture. Subsequent advancements incorporated structured prediction models such as Conditional Random Fields (CRFs) [54, 120] and regression forests [82]. Deeper network architectures [80, 109], multi-scale fusion [63], and transformer-based encoders [8, 16, 79] further enhanced performance. Alternatively, certain methods framed depth estimation as a classification problem [6, 7, 28, 51]. Novel loss functions were also introduced, including gradientbased regression [53, 104], the berHu loss [50], an ordinal relationship loss [14], and scale/shift invariance [80]. Self-Supervised MDE. To overcome the dependence on costly ground truth annotations, self-supervised methods were developed. Garg et al. [30], for the first time, proposed an algorithm based on view synthesis and photometric consistency across stereo image pairs, the importance of which for was extensively analyzed by Poggi et al. [74]. Godard et al. [34] introduced Monodepth, which incorporated differentiable bilinear interpolation [44], virtual stereo prediction, and a SSIM+L1 reconstruction loss. Zhou et al. [130] presented SfM-Learner, which required only monocular video supervision by replacing the known stereo transform with a pose estimation network. Following the groundwork laid by these frameworks, subsequent efforts focused on refining the depth estimation accuracy by integrating feature-based reconstructions [89, 119, 124], semantic segmentation [122], adversarial losses [3], proxydepth representations [5, 18, 48, 70, 83, 97, 107], trinocular supervision [75] and other constraints [9, 61, 103]. Other works focused on improving depth estimates at object boundaries [96, 99]. Moreover, attention has also been given to challenging cases involving dynamic scenarios during the training phase, which pose difficulties in providing accurate supervision signals for such networks. This has been addressed, for example, by incorporating uncertainty estimates [48, 73, 112], motion masks [11, 22, 37, 98], optical flow [59, 81, 118], or via the minimum reconstruction loss [35]. Finally, several architectural innovations, including 3D (un)packing blocks [38], position encoding [36], transformer-based encoders [2, 127], sub-pixel convolutions [71], progressive skip connections [60], and self-attention decoders [46, 110, 129], allowed further improvements. Among them, lightweight models tailored for real-time applications with memory and runtime constraints have also been developed [4, 19, 43, 68, 69, 72, 108]. Generalization and \u201cIn-the-Wild\u201d MDE. Estimating depth in the wild refers to the challenging task of developing methods that can generalize to a wide range of unknown settings [14, 15]. Early works in this area focused on predicting relative (ordinal) depth [14, 15]. Nonetheless, the limited suitability of relative depth in many downstream contexts has driven researchers to explore affine-invariant depth estimation [53, 113]. In the affine-invariant setting, depth is estimated up to an unknown global offset and scale, offering a compromise between ordinal and metric representations. 
Researchers have employed various strategies to achieve generalization, including leveraging annotations from large datasets to train monocular depth models [79, 80, 111], including internet photo collections [53, 113], as well as from automotive LiDAR [33, 38, 42], RGB-D/Kinect sensors [17, 64, 95], structure-from-motion reconstructions [52, 53], optical flow/disparity estimation [80, 109], and crowd-sourced annotations [14]. However, the varying accuracy of these annotations may have impacted model performance, and acquiring new data sources remains a challenge, motivating the exploration of self-supervised approaches [116, 125]. For instance, KBR(++) [93, 94] leverage large-scale self-supervision from curated internet videos. The transition from CNNs to vision transformers has further boosted performance in this domain, as demonstrated by DPT (MiDaS v3) [79] and Omnidata [25]. Furthermore, a few works like Metric3D [114] and ZeroDepth [40] revisited the depth estimation by explicitly feeding camera intrinsics as additional input. A notable recent trend involves training generative models, especially diffusion models [29, 41, 88] for monocular depth estimation [24, 45, 47, 84, 85]. Adverse Weather and Transparent/Specular Surfaces. Existing monocular depth estimation networks have struggled under adverse weather conditions. Approaches have addressed low visibility [89], employed day-night branches using GANs [100, 126], utilized additional sensors [31], or faced trade-offs [101]. Recently, md4all [32] enabled robust performance across conditions without compromising ideal setting performance. Furthermore, estimating depth for transparent or mirror (ToM) surfaces posed a unique challenge [121, 123]. Costanzino et al. [21] is the only work dedicated to this, introducing novel datasets [77, 78]. Their 2 0 20 40 60 80 100 120 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 Outdoor-Urban 0 20 40 60 80 100 120 0.00 0.01 0.02 0.03 0.04 0.05 0.06 Outdoor-Natural 0 20 40 60 80 100 120 0.00 0.01 0.02 0.03 0.04 0.05 0.06 Outdoor-Agriculture 0 20 40 60 80 100 120 0.00 0.05 0.10 0.15 0.20 Indoor Figure 1. SYNS-Patches Properties. Top: Distribution of images per category in the validation split and the test split respectively. Bottom: Depth distribution per scene type \u2013 indoor scenes are limited to 20m, while outdoor scenes reach up to 120m; natural and Agriculture scenes contain a larger percentage of long-range depths (20-80m), while urban scenes focus on the mid-range (20-40m). approach relied on segmentation maps or pre-trained networks, generating pseudo-labels by inpainting ToM objects and processing them with a pre-trained depth model [80], enabling fine-tuning of existing networks to handle ToM surfaces. 3. The Monocular Depth Estimation Challenge The third edition of the Monocular Depth Estimation Challenge1 was organized on CodaLab [67] as part of a CVPR2024 workshop. The development phase lasted four weeks, using the SYNS-Patches validation split. During this phase, the leaderboard was public but the usernames of the participants were anonymized. Each participant could see the results achieved by their own submission. The final phase of the challenge was open for three weeks. At this stage, the leaderboard was completely private, disallowing participants to see their own scores. 
This choice was made to encourage the evaluation on the validation split rather than the test split and, together with the fact that all ground-truth depths were withheld, severely avoiding any possibility of overfitting over the test set by conducting repeated evaluations on it. Following the second edition [91], any form of supervision was allowed, in order to provide a more comprehensive overview of the monocular depth estimation field as a whole. This makes it possible to better study the gap between different techniques and identify possible, fu1 https://codalab.lisn.upsaclay.fr/competitions/17161 ture research directions. In this paper, we report results only for submissions that outperformed the baseline in any pointcloud-/image-based metric on the Overall dataset. Dataset. The challenge takes place based on the SYNS-Patches dataset [1, 92], chosen due to the diversity of scenes and environments. A breakdown of images per category and some representative examples are shown in Figure 1 and Figure 2. SYNS-Patches also provides extremely high-quality dense ground-truth LiDAR, with an average coverage of 78.20% (including sky regions). Given such dense ground-truth, depth boundaries were obtained using Canny edge-detection on the log-depth maps, allowing us to compute additional fine-grained metrics for these challenging regions. As outlined in [91, 92], the images were manually checked to remove dynamic object artifacts. Evaluation. Participants were asked to provide the up-toscale disparity prediction for each dataset image. The evaluation server bilinearly upsampled the predictions to the target resolution and inverted them into depth maps. Although self-supervised methods trained with stereo pairs and supervised methods using LiDAR or RGB-D data should be capable of predicting metric depth, in order to ensure comparisons are as fair as possible, the evaluation aligned any predictions with the ground-truth using the median depth. We set a maximum depth threshold of 100 meters. Metrics. Following the first and second editions of the challenge [90, 91], we use a mixture of image-/pointcloud/edge-based metrics. Image-based metrics are the most common (MAE, RMSE, AbsRel) and are computed using 3 Figure 2. SYNS-Patches Dataset. We show samples from diverse scenes, including complex urban, natural, and indoor spaces. Highquality ground-truth depth covers about 78.20% of the image, from which depth boundaries are computed as Canny edges in log space. pixel-wise comparisons between the predicted and groundtruth depth map. Pointcloud-based metrics [65] (F-Score, IoU, Chamfer distance) instead bring the evaluation in the 3D domain, evaluating the reconstructed pointclouds as a whole. Among these, we select reconstruction F-Score as the leaderboard ranking metric. Finally, edge-based metrics are computed only at depth boundary pixels. This includes image-/pointcloud-based metrics and edge accuracy/completion metrics from IBims-1 [49]. 4. Challenge Submissions We now highlight the technical details for each submission, as provided by the authors themselves. Each submission is labeled based on the supervision used, including groundtruth (D), proxy ground-truth (D*), DepthAnything [111] pretraining (\u2020) and monocular (M) or stereo (S) photometric support frames. Teams are numbered according to rankings. Baseline \u2013 S J. Spencer j.spencermartin@surrey.ac.uk C. Russell chris.russell@oii.ox.ac.uk S. Hadfield s.hadfield@surrey.ac.uk R. 
Bowden r.bowden@surrey.ac.uk Challenge organizers\u2019 submission from the first edition. Network. ConvNeXt-B encoder [56] with a base Monodepth decoder [34, 62] from [92]. Supervision. Self-supervised with a stereo photometric loss [30] and edge-aware disparity smoothness [34]. Training. Trained for 30 epochs on Kitti Eigen-Zhou with an image resolution of 192 \u00d7 640. Team 1: PICO-MR \u2013 \u2020D* G. Zhou zhouguangyuan@bytedance.com Z. Li lizhengxin17@mails.ucas.ac.cn Q. Rao raoqiang@bytedance.com Y. Bao baoyiping@bytedance.com X. Liu liuxiao@foxmail.com Network. Based on Depth-Anything [111] with a BEiT384L backbone, starting from the authors\u2019 weights pre-trained on 1.5M labeled images and 62M+ unlabeled images. Supervision. The model is fine-tuned in a supervised manner, with proxy labels derived from stereo images. The final loss function integrates the SILog loss, SSIL loss, Gradient loss, and Random Proposal Normalization (RPNL) loss. Training. The network was fine-tuned on the CityScapes dataset [20], resizing the input to 384\u00d7768 resolution, while keeping proxy labels at 1024 \u00d7 \u00d72048 resolution. Random flipping is used to augment data, the batch size is set to 16 and the learning rate to 0.000161. The fine-tuning is carried out to predict metric depth and early stops at 4 epochs, a strategic choice to prevent overfitting and ensure the model\u2019s robustness to new data. Team 3: RGA-Robot \u2013 \u2020S D. Kim figure317@rgarobot.com J. Kim jsk24@rgarobot.com M. Kim wiseman218@rgarobot.com Network. It uses the Depth Anything [111] pre-trained model to estimate relative depth, accompanied by an auxiliary network to convert it into metric depth. This latter is NAFNet [13], processing the final feature maps and relative depth map predicted by the former model together with the input image. Supervision. Self-supervised loss with two main terms: image reconstruction loss and smoothness loss. The former integrates perceptual loss with photometric loss as used in monodepth2 [35], with the former using a pre-trained VGG19 backbone [87], following a similar approach as in ESRGAN [106]. Training. The train is carried out on Kitti Eigen-Zhou with batch size 8 and learning rate 1e\u22124 for 4 epochs. Only NAFNet is trained, while the Depth Anything model remains frozen. Team 4: EVP++ \u2013 \u2020D M. Lavreniuk nick 93@ukr.net Network. The architecture is based on Depth Anything [111], incorporating a VIT-L encoder [23] for feature extraction and the ZoeDepth metric bins module [8] as a de4 coder. This module computes per-pixel depth bin centers, which are linearly combined to produce metric depth. Supervision. The models were trained in a supervised manner using ground-truth depth information obtained from various datasets, employing the SILog loss function. Training. The models were trained on both indoor and outdoor data, respectively on the NYUv2 dataset [64] with an image size of 392 \u00d7 518, and on KITTI [33], Virtual KITTI 2 [10], and DIODE outdoor [102] with an image size of 518\u00d71078. The batch size was set to 16, the learning rate to 0.000161, and the maximum depth to 10 for indoor scenes. For outdoor scenes, the batch size was set to 1, the learning rate to 0.00002, and the maximum depth to 80. Both models were trained for 5 epochs. Team 6: 3DCreators \u2013 \u2020D R. Li lirui.david@gmail.com Q. Mao maoqing@mail.nwpu.edu.cn J. Wu 18392713997@mail.nwpu.edu.cn Y. Zhu yuzhu@nwpu.edu.cn J. Sun sunjinqiu@nwpu.edu.cn Y. 
Zhang ynzhang@nwpu.edu.cn Network. An architecture made of two sub-networks. The first model consists of a pre-trained ViT-large backbone [23] from Depth Anything [111] and a ZoeDepth decoder [8]. The second is Metric3D [115], which uses ConvNext-Large [57] backbone and a LeRes decoder [117]. Supervision. The first network is fine-tuned with the KITTI dataset using SILog loss. The second network uses the released pre-trained weights trained by a diverse collection of datasets as detailed in [115]. Training. The first network is fine-tuned using batch size 16 for 5 epochs. At inference, test-time augmentation \u2013 i.e., color jittering and horizontal flipping \u2013 is used to combine the predictions by the two models: the same image is augmented 10 times and processed by the two models, then the predictions are averaged. Team 7: visioniitd \u2013 D S. Patni suraj.patni@cse.iitd.ac.in A. Agarwal aradhye.agarwal.cs520@cse.iitd.ac.in C. Arora chetan@cse.iitd.ac.in Network. The model is ECoDepth [66], which provides effective conditioning for the MDE task to diffusion methods like stable diffusion. It is based on a Comprehensive Image Detail Embedding (CIDE) module which utilizes ViT embeddings of the image and subsequently transforms them to yield a semantic context vector. These embeddings are used to condition the pre-trained UNet backbone in Stable Diffusion, which produces hierarchical feature maps from its decoder. These are resized to a common dimension and passed to the Upsampling decoder and depth regressor to produce the final depth. Supervision. Supervised training using the ground truth depth with SILog loss as the loss function with variance focus (\u03bb) 0.85. Ground-truth depth is transformed as 1 (1+x). Training. Trained on NYUv2 [64], KITTI [33], virtual KITTI v2 [10] for 25 epochs, with one-cycle learning rate (min: 3e\u22125, max: 5e\u22124) and batch size 32 on 8\u00d7 A100 GPUs. Team 9: HIT-AIIA \u2013 \u2020D P. Sun 23s136164@stu.hit.edu.cn K. Jiang jiangkui@hit.edu.cn G. Wu gwu@hit.edu.cn J. Liu hitcslj@hit.edu.cn X. Liu csxm@hit.edu.cn J. Jiang jiangjunjun@hit.edu.cn Network. It involves the pre-trained Depth Anything encoder and pre-trained CLlP model. The latter is introduced to calculate the similarity between the keywords \u2018indoor\u2019 or \u2018outdoor\u2019 and features extracted from the input image to route it to two, different instances of Depth Anything specialized on indoor or outdoor scenarios. Supervision. Two instances of Depth Anything are finetuned on ground-truth labels, respectively from NYUv2 and KITTI for indoor and outdoor environments. Training. The training resolution is 392 \u00d7 518 on NYUv2 and 384 \u00d7 768 on KITTI. The batch size is 16 and both instances are trained for 5 epochs. Team 10: FRDC-SH \u2013 \u2020D X. Zhang zhangxidan@fujitsu.com J. Wei weijianing@fujitsu.com F. Wang wangfangjun@fujitsu.com Z. Tan zhmtan@fujistu.com Network. The depth network is the Depth Anything [111] pre-trained model \u2013 based on ZoeDepth [8] with a DPT BEiT L384 \u2013 and further fine-tuned. Supervision. Trained on ground-truth depth, with SILog and Hyperbolic Chamfer Distance losses. Training. The model is fine-tuned on NYU-v2 [64], 7Scenes [86], SUNRGBD [128], DIODE [102], KITTI [33], DDAD [39], and Argoverse [12] \u2013 without any resizing of the image resolution \u2013 for 20 epochs with batch size 32, a learning rate set to 1.61e-04, and a 0.01 weight decay. Team 15: hyc123 \u2013 D J. Wang 601533944@qq.com Network. 
Swin encoder [55] with skip connections and a decoder with channel-wise self-attention modules. Supervision. Trained with ground truth depths, using a loss consisting of a combination of two L1 losses and an SSIM loss, weighted accordingly. Training. The model was trained on Kitti Eigen-Zhou split using images of size 370 \u00d7 1224 for 100 epochs. 5 Team 16: ReadingLS \u2013 \u2020MD* A. Luginov a.luginov@pgr.reading.ac.uk M. Shahzad m.shahzad2@reading.ac.uk Network. The depth network is SwiftDepth [58], a compact model with only 6.4M parameters. Supervision. Self-supervised monocular training with the minimum reconstruction loss [35], enhanced by offline knowledge distillation from a large MDE model [111]. Training. The model is trained in parallel on Kitti Eigen-Zhou and a selection of outdoor YouTube videos, similarly to KBR [93]. Both training and prediction are performed with the input resolution of 192 \u00d7 640. The teacher model [111] is not trained on either these datasets or SYNSPatches. Team 19: Elder Lab \u2013 D S. Hosseini smhh@yorku.ca A. Trajcevski atrajcev@yorku.ca J. H. Elder jelder@yorku.ca Network. An off-the-shelf semantic segmentation model [105] is used at first to segment the image. Then, the depth of pixels on the ground plane is estimated by predicting the camera angle from the height of the highest pixel on the ground. Then, depth is propagated vertically for pixels above the ground, while the Manhattan frame is estimated with [76] to identify both Manhattan and non-Manhattan segments in the image and propagate depth along them in 3D space. Finally, the depth map is completed according to heat equations [27], with pixels for which depth has been already estimated imposing forcing conditions, while semantic boundaries and the image frame impose reflection boundary conditions. Supervision. Ground-truth depth is used for training three kernel regression models. Training. Three simple statistical models are trained on CityScapes [20] and NYUv2 [64]: 1) A kernel regression model to estimate ground elevation angle from the vertical image coordinate of the highest observed ground pixel. The ground truth elevation angle is computed by fitting a plane (constrained to have zero roll) to the ground truth ground plane coordinates; 2) A kernel regression model to estimate the depth of ground pixels from their vertical coordinate, conditioned on semantic class; 3) median depth of non-ground pixels in columns directly abutting the bottom of the image frame, conditioned on semantic class. 5. Results Submitted methods were evaluated on the testing split of SYNS-Patches [1, 92]. Participants were allowed to submit methods without any restriction on the supervision or the predictions by the model, which can be either relative or metric. Accordingly, to ensure a fair comparison among the methods, the submitted predictions are aligned to groundtruth depths according to median depth scaling. 5.1. Quantitative Results Table 1 highlights the results of this third edition of the challenge, with the top-performing techniques, ordered using FScore performance, achieving notable improvements over the baseline method. A first, noteworthy observation is the widespread adoption of the Depth Anything model [111], pre-trained on 62M of images, as the backbone architecture by the leading teams, including PICO-MR, RGA-Robot, EVP++, 3DCreators, HIT-AIIA, FRDC-SH, and ReadingLS, demonstrating its effectiveness and versatility. 
Table 1. SYNS-Patches Results. We provide metrics across the whole test split of the dataset. Top-performing entries generally leverage the pre-trained Depth Anything [111] model. Only a few methods use self-supervised losses or proxy depth labels.

Team Train Rank F↑ F-Edges↑ MAE↓ RMSE↓ AbsRel↓ Acc-Edges↓ Comp-Edges↓
PICO-MR †D* 1 23.72 11.01 3.78 6.61 21.24 3.90 4.45
Anonymous ? 2 23.25 10.78 3.87 6.70 21.70 3.59 9.86
RGA-Robot †S 3 22.79 11.52 5.21 9.23 28.86 4.15 0.90
EVP++ †D 4 20.87 10.92 3.71 6.53 19.02 2.88 6.77
Anonymous ? 5 20.77 9.96 4.33 7.83 27.80 3.45 13.25
3DCreators †D 6 20.42 10.19 4.41 7.89 23.94 3.61 5.80
visioniitd D 7 19.07 9.92 4.53 7.96 23.27 3.26 8.00
Anonymous ? 8 18.60 9.43 3.92 7.16 20.12 2.89 15.65
HIT-AIIA †D 9 17.83 9.14 4.11 7.73 21.23 2.95 17.81
FRDC-SH †D 10 17.81 9.75 5.04 8.92 24.01 3.16 14.16
Anonymous ? 11 17.57 9.13 4.28 8.36 23.35 3.18 20.66
Anonymous ? 12 16.91 9.07 4.14 7.35 22.05 3.24 18.52
Anonymous ? 13 16.71 9.25 5.48 11.05 34.20 2.57 18.04
Anonymous ? 14 16.45 8.89 5.29 10.53 33.67 2.60 18.73
hyc123 D 15 15.92 9.17 8.25 13.88 43.88 4.11 0.74
ReadingLS †MD* 16 14.81 8.14 5.01 8.94 29.39 3.28 30.28
Baseline S 17 13.72 7.76 5.56 9.72 32.04 3.97 21.63
Anonymous ? 18 13.71 7.55 5.49 9.44 30.74 3.61 18.36
Anonymous ? 19 11.90 8.08 6.33 10.89 30.46 2.99 33.63
Elder Lab D 20 11.04 7.09 8.76 15.86 63.32 3.22 40.61
M=Monocular – S=Stereo – D*=Proxy Depth – D=Ground-truth Depth – †=Pre-trained Depth Anything model

Specifically, Team PICO-MR, which secured the top position on the leaderboard, achieved an F-Score of 23.72, outperforming the baseline method by a remarkable 72.9%. This represents a significant improvement over the previous state-of-the-art method, DJI&ZJU, which achieved an F-Score of 17.51 in the second edition of the challenge [91]. In particular, Team PICO-MR's result shows a 35.5% increase in performance compared to DJI&ZJU, highlighting the rapid progress made in monocular depth estimation within a relatively short period. This improvement can also be clearly observed in the other metrics considered, both accuracy and error – notably, achieving the second-best absolute results on F-Edges, MAE, and RMSE. Their success can be attributed to the fine-tuning of the Depth Anything model on the Cityscapes dataset using a combination of SILog, SSIL, Gradient, and Random Proposal Normalization losses, as well as their strategic choice of fine-tuning for only a few epochs to prevent overfitting and ensure robustness to unseen data.

Team RGA-Robot, in third place, achieved an F-Score of 22.79, outperforming the baseline by 66.1%. Their approach of augmenting the Depth Anything model, kept frozen, with an auxiliary network, NAFNet, to convert relative depth predictions into metric depth, combined with self-supervised loss terms, demonstrates the effectiveness of this strategy in enhancing depth accuracy. In terms of the F-Edges metric, this method achieves the best result.

Team EVP++, ranking fourth, achieved an F-Score of 20.87, surpassing the baseline by 52.1%. Their approach involved training the Depth Anything model on both indoor and outdoor datasets, adapting image sizes, batch sizes, and learning rates to each scenario, highlighting the importance of tailoring model parameters to the specific characteristics of the target environment. This strategy notably improves the results in terms of standard 2D error metrics, yielding the lowest MAE, RMSE, and AbsRel.

Several other teams also surpassed both the baseline method and the previous state-of-the-art from the second edition of the challenge. Team 3DCreators achieved an F-Score of 20.42, outperforming the baseline by 48.8%, by fine-tuning and combining predictions from the Depth Anything model and Metric3D. Team visioniitd follows, surpassing the baseline using ECoDepth, which conditions Stable Diffusion's UNet backbone with Comprehensive Image Detail Embeddings. Teams HIT-AIIA and FRDC-SH also achieved notable improvements, with F-Scores of 17.83 and 17.81, respectively, using specialized model instances and fine-tuning on diverse datasets.

Finally, the remaining teams outperformed the baseline on the F-Score or on some of the other metrics, yet without surpassing the winner of the previous edition. Team hyc123, with an F-Score of 15.92, outperformed the baseline by 16.0% using a Swin encoder with skip connections and a decoder with channel-wise self-attention modules, while Team ReadingLS outperformed the baseline by distilling knowledge from Depth Anything into a lightweight network based on SwiftDepth, further improved using the minimum reconstruction loss during training. Finally, Team Elder Lab employed an off-the-shelf semantic segmentation model and estimated depth using techniques such as predicting the camera angle, propagating depth along Manhattan and non-Manhattan segments, and completing the depth map using heat equations. They achieved an F-Score of 11.04, 19.5% lower than the baseline score of 13.72, yet they obtained an Acc-Edges score of 3.22, beating the baseline.

5.2. Qualitative Results

Figure 3 provides qualitative results for the depth predictions of each submission. A notable trend among the top-performing teams, such as PICO-MR, RGA-Robot, EVP++, and 3DCreators, is the adoption of the Depth Anything model as a backbone architecture. While Depth Anything represents the current state-of-the-art in monocular depth estimation, the qualitative results highlight that there are still significant challenges in accurately estimating depth, particularly for thin structures in complex outdoor scenes. This is evident in columns 2, 4, 5, and 6 of Figure 3, where objects like trees and branches are not well recovered, despite the impressive quantitative performance of these methods shown in Table 1.

Interestingly, Team visioniitd, which employs a novel approach called ECoDepth to condition Stable Diffusion's UNet backbone with Comprehensive Image Detail Embeddings, demonstrates a remarkable ability to estimate depth for thin structures. Yet, they are outperformed quantitatively by other methodologies, suggesting that estimating depth in smooth regions may be more challenging than in thin structures.

The qualitative results also reveal some method-specific anomalies. For instance, hyc123 exhibits salt-and-pepper noise artifacts, while Elder Lab's method, which ranks last, generates overly smooth depth maps that lose important scene objects. These anomalies highlight the importance of developing robust techniques that can handle diverse scene characteristics. Grid-like artifacts are observed in the predictions of top performers PICO-MR and RGA-Robot, particularly in regions where the network seems uncertain about its depth estimates. This suggests that further improvements in network architecture and training strategies may be necessary to mitigate these artifacts.
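Since the ranking in Table 1 is driven by the pointcloud F-Score, the sketch below illustrates how such a reconstruction metric can be computed from back-projected depth maps using SciPy nearest-neighbour queries. The function names, the simplified pinhole back-projection and the 10 cm correctness threshold are assumptions of this illustration, not the official SYNS-Patches evaluation toolkit.

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth, K):
    """Lift a depth map (H, W) to a 3D point cloud using intrinsics K (3x3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # per-pixel camera rays
    return rays * depth.reshape(-1, 1)       # (H*W, 3) points in camera space

def fscore(pred_pts, gt_pts, thresh=0.10):
    """Pointcloud F-Score: harmonic mean of precision and recall at `thresh` metres."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # drives precision
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # drives recall
    precision = np.mean(d_pred_to_gt < thresh)
    recall = np.mean(d_gt_to_pred < thresh)
    return 2 * precision * recall / max(precision + recall, 1e-8)
```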
Figure 3. SYNS-Patches Depth Visualization. Best viewed in color and zoomed in. Methods are ranked based on their F-Score in Table 1. Thin structures, such as branches and railings, still represent one of the hardest challenges for any method. Near depth discontinuities, most approaches tend to produce "halos", interpolating between foreground and background objects and thus failing to perceive sharp boundaries. Nonetheless, most methods exhibit a higher level of detail than the baseline.

The indoor scenario in the last column shows the strong performance of methods like PICO-MR, EVP++, HIT-AIIA, and FRDC-SH in estimating scene structure. This can be attributed to their use of large-scale pre-training, fine-tuning on diverse datasets, and carefully designed loss functions that capture both global and local depth cues. However, all methods still exhibit over-smoothing issues at depth discontinuities, manifesting as halo effects. While they outperform the baseline in this regard, likely due to their supervised training with ground-truth or proxy labels, there remains significant room for improvement.

A notable limitation across all methods is the inability to effectively estimate depth for non-Lambertian surfaces, such as glass or transparent objects. This is evident in the penultimate column and in the first column, corresponding to the windshield. The primary reason for this limitation is the lack of accurate supervision for such surfaces in the training data, highlighting the need for novel techniques and datasets that explicitly address this challenge.

In conclusion, the qualitative results provide valuable insights into the current state of monocular depth estimation methods. While the adoption of large-scale pre-training and carefully designed architectures has led to significant improvements, challenges persist in accurately estimating depth for thin structures, smooth regions, and non-Lambertian surfaces. Addressing these limitations through novel techniques, improved training strategies, and diverse datasets will be crucial for further advancing this field.

6. Conclusions & Future Work

This paper has summarized the results of the third edition of MDEC. Over the various editions of the challenge, we have seen a drastic improvement in performance, showcasing MDE – in particular real-world generalization – as an exciting and active area of research. With the advent of the first foundational models for MDE in recent months, we observed widespread use of frameworks such as Depth Anything [111]. This provided a major boost to the results submitted by the participants, with a much larger impact than the specific form of supervision chosen for the challenge. Nonetheless, as the qualitative results show, all methods still struggle to accurately predict fine structures and discontinuities, hinting that there is still room for improvement despite the massive amount of data used to train Depth Anything. We hope MDE will continue to attract new researchers and practitioners to this field, and we renew our invitation to participate in future editions of the challenge.

Acknowledgments. This work was partially funded by the EPSRC under grant agreements EP/S016317/1, EP/S016368/1, EP/S016260/1, EP/S035761/1.
8", "introduction": "Monocular depth estimation (MDE) aims at predicting the distance from the camera to the points of the scene de- picted by the pixels in the captured image. It is a highly ill-posed problem due to the absence of geometric priors usually available from multiple images. Nonetheless, deep learning has rapidly advanced this field and made it a real- ity, enabling results far beyond imagination. 1Independent 2University of Bologna 3Blue River Technology 4Oxford Internet Institute 5University of Surrey 6ByteDance 7University of Chinese Academy of Science 8RGA Inc. 9Space Research Institute NASU-SSAU, Kyiv, Ukraine 10Northwestern Polytechnical University, Xi\u2019an 11Indian Institute of Technology, Delhi 12Harbin Institute of Technology 13Fujitsu 14GuangXi University 15University of Reading 16York University For years, most proposed approaches have been tailored to training and testing in a single, defined domain \u2013 e.g., automotive environments [33] or indoor settings [64] \u2013 of- ten ignoring their ability to generalize to unseen environ- ments. Purposely, the Monocular Depth Estimation Chal- lenge (MDEC) in the last years has encouraged the commu- nity to delve into this aspect, by proposing a new benchmark for evaluating MDE models on a set of complex environ- ments, comprising natural, agricultural, urban, and indoor settings. The dataset comes with a validation and a testing split, without any possibility of training/fine-tuning over it thus forcing the models to generalize. While the first edition of MDEC [90] focused on bench- marking self-supervised approaches, the second [91] addi- tionally opened the doors to supervised methods. During the former, the participants outperformed the baseline [30, 92] in all image-based metrics (AbsRel, MAE, RMSE), but could not improve pointcloud reconstructions [65] (F- Score). The latter, instead, brought new methods capable of outperforming the baseline on both aspects, establishing a new State-of-the-Art (SotA). The third edition of MDEC, detailed in this paper, ran in conjunction with CVPR2024, following the successes of the second one by allowing sub- missions of methods exploiting any form of supervision, e.g. supervised, self-supervised, or multi-task. Following previous editions, the challenge was built around SYNS-Patches [1, 92]. This dataset was chosen because of the variegated diversity of environments it con- tains, including urban, residential, industrial, agricultural, natural, and indoor scenes. Furthermore, SYNS-Patches contains dense high-quality LiDAR ground-truth, which is very challenging to obtain in outdoor settings. This allows 1 arXiv:2404.16831v2 [cs.CV] 27 Apr 2024 for a benchmark that accurately reflects the real capabilities of each model, potentially free from biases. While the second edition counted 8 teams outperforming the SotA baseline in either pointcloud- or image-based met- rics, this year 19 submissions achieved this goal. Among these, 10 submitted a report introducing their approach, 7 of whose outperformed the winning team of the second edi- tion. This demonstrates the increasing interest \u2013 and efforts \u2013 in MDEC. In the remainder of the paper, we will provide an overview of each submission, analyze their results on SYNS-Patches, and discuss potential future developments." 
}, { "url": "http://arxiv.org/abs/2403.01569v1", "title": "Kick Back & Relax++: Scaling Beyond Ground-Truth Depth with SlowTV & CribsTV", "abstract": "Self-supervised learning is the key to unlocking generic computer vision\nsystems. By eliminating the reliance on ground-truth annotations, it allows\nscaling to much larger data quantities. Unfortunately, self-supervised\nmonocular depth estimation (SS-MDE) has been limited by the absence of diverse\ntraining data. Existing datasets have focused exclusively on urban driving in\ndensely populated cities, resulting in models that fail to generalize beyond\nthis domain.\n To address these limitations, this paper proposes two novel datasets: SlowTV\nand CribsTV. These are large-scale datasets curated from publicly available\nYouTube videos, containing a total of 2M training frames. They offer an\nincredibly diverse set of environments, ranging from snowy forests to coastal\nroads, luxury mansions and even underwater coral reefs. We leverage these\ndatasets to tackle the challenging task of zero-shot generalization,\noutperforming every existing SS-MDE approach and even some state-of-the-art\nsupervised methods.\n The generalization capabilities of our models are further enhanced by a range\nof components and contributions: 1) learning the camera intrinsics, 2) a\nstronger augmentation regime targeting aspect ratio changes, 3) support frame\nrandomization, 4) flexible motion estimation, 5) a modern transformer-based\narchitecture. We demonstrate the effectiveness of each component in extensive\nablation experiments. To facilitate the development of future research, we make\nthe datasets, code and pretrained models available to the public at\nhttps://github.com/jspenmar/slowtv_monodepth.", "authors": "Jaime Spencer, Chris Russell, Simon Hadfield, Richard Bowden", "published": "2024-03-03", "updated": "2024-03-03", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI", "cs.RO" ], "main_content": "Instead of using ground-truth depth annotations from LiDAR or RGB-D sensors, selfsupervised monocular depth estimation relies exclusively on photometric consistency constraints. The seminal approach by Garg et al. [6] combined the known baseline between stereo pairs with the predicted depth to obtain correspondences and perform view synthesis. Monodepth [17] complemented this with the virtual stereo consistency loss. Performance was further improved by introducing differentiable bilinear interpolation [23] and a reconstruction loss based on SSIM [24]. 3Net [25] extended the virtual stereo consistency into a trinocular setting. To extend this formulation to the monocular domain, it is necessary to replace the fixed stereo baseline with a network to predict the relative pose between frames. This was first proposed by SfM-Learner [7] and extended by DDVO [26], which introduced a differentiable DSO module [27]. Purely monocular approaches are sensitve to dynamic objects, as their additional motion is not accounted-for by the relative pose estimation. This results in incorrect correspondences, which further lead to inaccurate depth predictions. Therefore, future research aimed to minimize this impact via predictive masking [7], uncertainty estimation [28\u201330], optical flow [31\u201333] or motion masks [34\u201336]. Several works have instead focused on improving the robustness of the photometric loss. One notable example is Monodepth2 [8], which introduced the minimum reconstruction loss and static-pixel automasking. 
FeatDepth [37] applied the same view synthesis to dense feature descriptors, which should be invariant to viewpoint and illumination conditions. DeFeat-Net [38] learned the feature descriptors simultaneously, while Shu et al. [39] used intermediate autoencoder representations. Others complemented the photometric loss with semantic segmentation [40\u201342] or geometric constraints [43\u201345]. Finally, is is also common to introduce proxy depth label regression obtained from SLAM [28, 46], synthetic data [47] or hand-crafted disparity [48, 49]. The encoder network architecture has been improved by introducing 3-D (un)packing blocks, positional encoding [50], transformers [51] or high-resolution networks [52]. Updated decoders have focused on sub-pixel convolutions [53], selfattention [52, 54, 55] and progressive skip connections [9]. Akin to supervised MDE developments [4, 56], Johnston et al. [54] and Bello et al. [50, 57] obtained improvements by representing depth as a discrete volume. So far, these contributions have only been tested on automotive datasets, such as Kitti [10], CityScapes [58] or DDAD [12]. Recent benchmarks and challenges [1, 59, 60] have shown that these models fail to generalize beyond this restricted training domain. Meanwhile, recent supervised models [2, 3, 60] have leveraged collections of datasets to improve zero-shot generalization. In this paper, we aim to close the gap between supervised and self-supervised MDE in the challenging task of zero-shot generalization. This is achieved by greatly increasing the diversity and scale of the training data by leveraging unlabeled videos from YouTube, without requiring manual annotation or expensive pre-processing. 4 Fig. 2: SlowTV & CribsTV. Sample images from the proposed datasets, featuring diverse scenes for hiking, driving, scuba diving and real estate. The datasets consist of 45 videos curated from YouTube with a total of 2M training frames. Diversifying the training data allows our SS-MDE models to generalize to unseen datasets. We make the list of URLs and tools to process publicly available. Fig. 3: SlowTV Map. We show the map location for each sequence in SlowTV. The distribution of locations ensures that the training data is highly diverse. Green=Natural, Red=Driving, Blue=Underwater. 3 Datasets The proposed SlowTV and CribsTV datasets consist of 45 videos curated from YouTube, totaling more than 140 hours of content and 2 million training images. As shown in Table 1, this is an order of magnitude more data than any commonly used SS-MDE dataset. SlowTV contains three main outdoor categories (hiking, driving and scuba diving), while CribsTV focuses exclusively on real estate properties. When combined, these datasets provide an incredibly diverse set of training scenes for our models, allowing us to tackle the challenging task of zero-shot generalization. 5 Table 1: Datasets Comparison. The top half shows commonly used SS-MDE training datasets. SlowTV and CribsTV greatly diversify the training environments and scale to much larger data quantities. The bottom half summarizes the testing datasets used in our zero-shot generalization evaluation. 
Urban Natural Scuba Indoor Depth Acc Density #Img Kitti [10, 61]\u2020 \u2713 \u2717 \u2717 \u2717 LiDAR High Low 71k DDAD [12] \u2713 \u2717 \u2717 \u2717 LiDAR Mid Low 76k CityScapes [11] \u2713 \u2717 \u2717 \u2717 Stereo Low Mid 88k Mannequin [16]\u2020 \u2713 \u2717 \u2717 \u2713 SfM Mid Mid 115k SlowTV (Ours)\u2020 \u2713 \u2713 \u2713 \u2717 \u2717 \u2717 \u2717 1.7M CribsTV (Ours)\u2020 \u2713 \u2717 \u2717 \u2713 \u2717 \u2717 \u2717 330k Kitti [10, 61] \u2713 \u2717 \u2717 \u2717 LiDAR High Low 652 DDAD [12] \u2713 \u2717 \u2717 \u2717 LiDAR Mid Low 1k Sintel [62] \u2717 \u2713 \u2717 \u2717 Synth High High 1064 SYNS-Patches [1, 13] \u2713 \u2713 \u2717 \u2713 LiDAR High High 775 DIODE [63] \u2713 \u2717 \u2717 \u2713 LiDAR High High 771 Mannequin [16] \u2713 \u2717 \u2717 \u2713 SfM Mid Mid 1k NYUD-v2 [64] \u2717 \u2717 \u2717 \u2713 Kinect Mid High 654 TUM-RGBD [65] \u2717 \u2717 \u2717 \u2713 Kinect Mid High 2.5k \u2020Datasets used to train our networks. Hiking videos target natural scenes, such as forest, mountains, deserts or fields, which are non-existent in current datasets. Our driving split seeks to complement existing automotive datasets, which tend to focus on urban driving in densely populated cities [10\u201312, 66\u201369]. The proposed split instead features videos from scenic routes, traversing forest, mountainous or coastal roads with sparse traffic. We also feature a variety of weather and seasonal conditions. Underwater scuba diving represents yet another previously unexplored domain, which further increases data diversity. Finally, real estate properties are a natural counterpart to the previous outdoor data. They also complement the Mannequin Challenge [16], which primarily focuses on human beings in indoor settings, rather than the indoor scenes themselves. The proposed videos were collected from a diverse set of locations and conditions, as illustrated in Figure 3. This includes the USA, Canada, the Balkans, Eastern Europe, Indonesia and Hawaii, and conditions such as rain, snow, autumn and summer. Since CribsTV contains a large number of individual properties, it is challenging to obtain accurate information about each of their locations. However, they are predominantly located in the USA. Figure 2 shows sample frames from each of the available categories, illustrating the dataset\u2019s incredible diversity. Videos were downloaded at HD resolution (720 \u00d7 1280) and extracted at 10 FPS to reduce storage, while still providing smooth motion and large overlap between adjacent frames. In the case of SlowTV, only 100 consecutive frames out of every 250 were retrained. This reduces the self-similarity between training samples and keeps the dataset size tractable. The final SlowTV contains a total of 1.7M images, composed of 1.1M natural, 400k driving and 180k underwater. Since we target SS-MDE, the only annotations required are the camera intrinsic parameters, which can be estimated using COLMAP [70]. However, as discussed in 6 Section 4.2, it is possible to let the network jointly optimize camera parameters alongside depth and motion. We find that this is more robust and improves performance compared to training with potentially inaccurate COLMAP intrinsics. CribsTV instead consists of highly-produced cinematic house tours. In practice, this means they include cuts between shots of different viewpoints or each room. Some of these shots may also be unsuitable for SS-MDE, containing zooming/focusing/blurring effects or static shots. 
As such, it was first necessary to split each video into its individual scenes using an off-the-shelf scene detector [71]. We then performed a quick manual check to filter out potentially invalid scenes based on their first frame. The final dataset contains 330k training frames. CribsTV also presents additional challenges when estimating the camera intrinsics. Due to the short shot duration (5 seconds on average) and the lack of overlap between scenes, COLMAP is unable to produce any reconstructions. This again motivates the need for a more flexible depth estimation framework, capable of estimating the camera intrinsics itself. This reduces the complexity of dataset collection and allows us to train with much larger quantities of diverse data.

4 Methodology

Monocular depth estimation aims to reconstruct the 3-D structure of the scene using only a single 2-D image projection. However, additional support frames are required in order to synthesize the target view and compute the photometric reconstruction losses that drive optimization. In the case where only stereo pairs are used [6, 17], the predicted depth is combined with the known stereo baseline to perform the view synthesis. However, if only monocular video is available, such as YouTube videos from SlowTV or CribsTV, it becomes necessary to incorporate an additional pose network to estimate the relative motion between adjacent frames [7]. A key difference between these forms of supervision is that stereo approaches can estimate metric depth, while monocular approaches are only accurate up to unknown scale and shift factors.

Depth. The depth estimation network Φ_D can be formalized as D̂_t = Φ_D(I_t), where D̂_t is the predicted sigmoid disparity and I_t is the target image. Note that the disparity map must be inverted into a depth map and appropriately scaled in order to warp the support images.

Pose. Similarly, the pose estimation network is represented as P̂_{t+k} = Φ_P(I_t ⊕ I_{t+k}), where ⊕ is channel-wise concatenation, I_{t+k} is the support frame at time offset k ∈ {−1, +1} and P̂_{t+k} is the predicted motion as a translation and axis-angle rotation.

4.1 Losses

Supervised approaches such as MiDaS [2], DPT [3] or NewCRFs [5] require ground-truth depth annotations, in the form of LiDAR, depth cameras, SfM reconstructions or stereo disparity estimation. SS-MDE [6, 8, 9, 52] instead relies on the photometric consistency across the target and support frames. Using the predicted depth D_t and motion P̂_{t+k}, pixel-wise correspondences between these images can be obtained as

p'_{t+k} = K P̂_{t+k} D_t(p_t) K^{-1} p_t,   (1)

where K represents the camera's intrinsic parameters, p_t are the 2-D pixel coordinates in the target image and p'_{t+k} are its 2-D reprojected coordinates onto the corresponding support frame. The synthesized support frame is then obtained via I'_{t+k} = I_{t+k}⟨p'_{t+k}⟩, where ⟨·⟩ represents differentiable bilinear interpolation [23]. The reconstruction loss is then given by the weighted combination of SSIM+L1 [17], defined as

L_ph(I, I') = λ (1 − L_SSIM(I, I')) / 2 + (1 − λ) L1(I, I').   (2)

As is common, the loss balancing weight is set to λ = 0.85. It is well known that purely-monocular approaches [7] are sensitive to artifacts caused by dynamic objects. This is due to the additional motion of the object being unaccounted for in the correspondence estimation procedure from (1).
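To make Eqs. (1)-(2) concrete, the following is a simplified PyTorch sketch of the reprojection warp and the SSIM+L1 photometric loss. It assumes a pinhole model, a 4x4 homogeneous relative pose and a 3x3 mean-pooled SSIM; the function names and these conventions are assumptions of the sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp_support(img_sup, depth_t, pose_t2s, K):
    """Synthesize the target view from a support frame, following Eq. (1).
    img_sup: (B,3,H,W) support image, depth_t: (B,1,H,W) target depth,
    pose_t2s: (B,4,4) relative pose, K: (B,3,3) intrinsics."""
    B, _, H, W = depth_t.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).float()      # (3,H,W) homogeneous pixels
    pix = pix.view(1, 3, -1).expand(B, -1, -1).to(depth_t.device)     # (B,3,H*W)

    cam = (torch.inverse(K) @ pix) * depth_t.view(B, 1, -1)           # back-project to 3D
    cam_h = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)      # (B,4,H*W)
    proj = K @ (pose_t2s @ cam_h)[:, :3]                              # transform and project
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)                    # (B,2,H*W) pixel coords

    # Normalize coordinates to [-1, 1] for grid_sample.
    u_n = 2 * uv[:, 0] / (W - 1) - 1
    v_n = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u_n, v_n], dim=-1).view(B, H, W, 2)
    return F.grid_sample(img_sup, grid, padding_mode="border", align_corners=True)

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Simplified 3x3 mean-pooled SSIM, as commonly used in SS-MDE losses."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).clamp(0, 1)

def photometric_loss(target, synth, weight=0.85):
    """Per-pixel SSIM + L1 reconstruction loss, following Eq. (2)."""
    l1 = (target - synth).abs().mean(dim=1, keepdim=True)
    s = ssim(target, synth).mean(dim=1, keepdim=True)
    return weight * (1 - s) / 2 + (1 - weight) * l1
```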
Recent research [34\u201336] aimed to solve these challenges using motion masks or semantic segmentation maps. Whilst effective, the additional annotations and labeling required makes these contributions unsuitable for the proposed framework. Instead we opt for the simple, yet effective, contributions from Monodepth2 [8]. This includes the minimum reconstruction loss and static-pixel automasking. The minimum reconstruction loss reduces the impact of occlusions by assuming only support frames contains a correct correspondence. This frame is obtained by finding the minimum pixel-wise error across all support frames, defined as Lrec = X p min k Lph \u0000It, I\u2032 t+k \u0001 , (3) where P indicates averaging over a set. Automasking instead reduces the effect of static frames and objects moving at the same speed as the camera. These objects remain static across frames, giving the impression of an infinite depth. Automasking simply removes pixels from the loss where the original non-warped support frame has a lower reconstruction error that the synthesized view. This is computed as M = s min k Lph \u0000It, I\u2032 t+k \u0001 < min k Lph(It, It+k) { , (4) where J\u00b7K represents the Iverson brackets. Whilst being simple to implement, Figure 4 demonstrates the effectiveness of incorporating these contributions. Finally, we complement the reconstruction loss with the common edge-aware smoothness regularization [17]. These networks and losses constitute the core baseline required to train the desired zero-shot depth estimation models. The following sections describe additional contributions that help to further maximize performance and generalization capabilities. 8 Image GT Baseline Monodepth2 Fig. 4: Dynamic Object Robustness. Incorporating the contributions from Monodepth2 [8] mitigates artifacts from dynamic objects and improves the sharpness of depth discontinuities. This is achieved in a cost-effective manner, without requiring complex consistency losses or additional annotations. 4.2 Learning Camera Intrinsics Many datasets provide accurately calibrated camera intrinsic parameters. Unfortunately, in crowd-sourced [72], photo-tourism [73] or internet-curated datasets (such as SlowTV or CribsTV) these parameters are not freely available. Instead, it is common practice to rely on SfM reconstructions obtained from COLMAP [70]. Unfortunately, these reconstructions may be incorrect, incomplete or sometimes impossible to obtain. As such, especially in the case of internet-curated datasets that may continuously change or scale up in size, it would be extremely beneficial to omit these pre-processing requirements. We take inspiration from [34, 74] learn depth, pose and camera intrinsics simultaneously. To achieve this, two additional branches are incorporated into the pose estimation network as \u02c6 Pt+k, fxy, cxy = \u03a6P(It \u2295It+k) , (5) where fxy and cxy represent the focal lengths and principal point, respectively. These parameters are predicted as normalized and scaled accordingly based on the input image size. The branch predicting the focal lengths uses a softplus activation to ensure 9 (a) Image (b) Ground-truth (c) Base (\u03b4.25 = 61.82%) (d) Distorted (\u03b4.25 = 71.12%) Fig. 5: Image Shape Overfitting. The same model can produce significantly different results with different resolution images. Distorting the image to the original training resolution can improve performance, despite introducing stretching/squashing artifacts. 
Note, for instance, the improved boundary sharpness in (d). a positive output. The principal point instead uses sigmoid, under the assumption that it will lie within the image plane. Incorporating these decoders results in a negligible 2MParam increase in the pose estimation network, with no additional computation required for the losses. Instead, we simply modify (1) to use the predicted intrinsics instead of the ground-truth ones. Despite this, our ablations in Section 5.4 show that this increases performance compared to training with intrinsics estimated by COLMAP. 4.3 Augmentation Strategies Existing research [21, 75, 76] has shown the importance of incorporating more sophisticated augmentation strategies. This is especially the case in SSL, which relies exclusively on the the diversity of the available data. However, existing SS-MDE (SS-MDE) approaches use only traditional augmentations such as color jittering and horizontal flipping. This section describes the additional augmentations incorporated into our training regime to further boost the generalization capabilities of our final models. 4.3.1 Aspect Ratio Dense predictions networks, such as the depth network used in this paper, can process images of arbitrary shape. However, when trained only on a single dataset with a fixed image size, it is common for them to overfit to this size, resulting in poor performance on out-of-dataset examples. An example of this effect can be seen in Figure 5, where first resizing the image to match the training aspect ratio results in better performance, despite introducing stretching or squashing artifacts. We overcome this by introducing an augmentation that randomizes the image sizes and aspect ratios seen by the network during training. The proposed aspect ratio augmentation augmentation (AR-Aug) consists of two components: cropping 10 (a) Original (16:9) (b) 1:2 (c) 18:5 (d) 5:3 Fig. 6: AR-Aug. Sample training images generated using the proposed augmentation strategy. This augmentation prevents overfitting to image shapes and increases the diversity of images seen by the network. and resizing. The first stage uniformly samples from a set of predefined common aspect ratios1. A random crop is generated using this aspect ratio, covering 50-100% of original image height or width. The resizing stage ensures that the final crop has roughly the same number of pixels as the original input image. Figure 6 shows training samples obtained using this procedure. This augmentation is applied at the mini-batch level to ensure all images are the same shape. If using ground-truth intrinsics, these are rescaled accordingly. AR-Aug has the effect of drastically increasing the diversity of shapes and sizes seen by the network and prevents overfitting to a single shape. 4.3.2 RandAugment We additionally proposed to complement color jittering with RandAugment [21]. This strategy sequentially applies a random combination of photometric and geometric augmentations. Since MDE requires accurate re-projections across a sequence of images we remove the geometric augmentations (e.g. translate, rotate and shear) and focus 1Portrait: 6:13, 9:16, 3:5, 2:3, 4:5, 1:1. Landscape: 5:4, 4:3, 3:2, 14:9, 5:3, 16:9, 2:1, 24:10, 33:10, 18:5. 11 Fig. 7: Photometric Augmentations. Sample augmentations produced by RandAugment [21] and CutOut [22]. The resulting model is highly robust to changing illumination conditions and is capable of filling in masked regions from the surrounding context. 
This can be seen in large regions of the ground-plane and connecting the tree trunk. purely on photometric ones. The set of possible augmentations is thus reduced to: identity (i.e. no augmentation), auto-contrast, equalization, sharpness, brightness, color and contrast. At each training iteration, a random subset of three augmentations is chosen and applied to both the target and support frames. Sample augmented images using this strategy can be found in Figure 7. 4.3.3 CutOut Inspired by the recent success of transformer token-masking augmentations [20, 77, 78], we additionally propose to re-introduce CutOut augmentations [22]. While CutOut was originally used to boost holistic tasks like classification, it can also be applied to dense prediction tasks. In this case, the objective is to teach the network to predict the depth for a missing region in the image, based only on the context surrounding it. As such, these models should learn to incorporate additional context cues and be more robust to test-time artifacts such as reflections or highlights. To further increase the variability of the augmentations, we implement various fill modes for the masked-out regions: white, black, grayscale, RGB and random. A 12 Table 2: Model Complexity. KBR retains the architecture from Monodepth Benchmark [1], while KBR++ matches DPT [3]. Despite being of equivalent complexity, our models greatly outperforms the SSL baselines and can close the gap to supervised performance. Backbone MParam\u2193 FPS\u2191 KBR (Ours) [15] ConvNeXt-B [79] 92.65 61.50 KBR++ (Ours) BEiT-L [20] 345.01 9.60 MiDaS [2] ResNeXt-101 [80] 105.36 51.38 DPT [3] ViT-L [81] 344.06 14.54 DPT [3] BEiT-L [20] 345.01 9.60 NeWCRFs [82] Swin [83] 270.44 21.61 different fill mode is randomly selected at each training iteration. Figure 7 shows examples of applying this augmentation and the robustness of the model to it. 5 Results We carry out extensive evaluations to demonstrate the effectiveness of the techniques and datasets proposed in this paper. This includes the zero-shot experiments for the final models, as well as detailed ablations on each proposed component. We additionally evaluated our models in the challenging task of map-free relocalization [72] and the MDEC-2 challenge [59, 60]. Since both KBR and KBR++ were trained exclusively on monocular data, it is necessary to first align the predictions to the ground-truth metric scale. This alignment is obtained using the least-squares procedure proposed by MiDaS [3] and is applied equally to every baseline. 5.1 Baselines The SotA self-supervised models were obtained from the Monodepth Benchmark [1], which use a pretrained ConvNeXt-B backbone. These models were trained exclusively on the Kitti dataset [7, 84] and are therefore also zero-shot on all other datasets. We also compare our frameworks to current SotA supervised models, which require accurate ground-truth annotations to train. MiDaS [2] and DPT [3] were trained on a collection of 10/12 supervised datasets that do not overlap with our testing set (unless otherwise specified). As such, these models are also evaluated in the challenging zero-shot setting. We use the pretrained models provided in the PyTorch Hub. NewCRFs [5] instead provides separate outdoor/indoor models trained on Kitti and NYUD-v2 respectively. We evaluate the corresponding model in a zero-shot manner depending on the dataset category. 
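Since monocular predictions are only defined up to scale and shift, the following is a minimal sketch of a least-squares alignment in the spirit of the MiDaS procedure mentioned above. Whether the fit is performed in depth or inverse-depth space, how invalid pixels are masked, and the function name are assumptions of this illustration.

```python
import numpy as np

def lstsq_align(pred, gt, mask=None):
    """Solve for scale s and shift t minimizing ||s * pred + t - gt||^2
    over valid pixels, then return the aligned prediction."""
    if mask is None:
        mask = gt > 0  # assumption: invalid ground-truth pixels are stored as 0
    p, g = pred[mask].ravel(), gt[mask].ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)      # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)  # closed-form least squares
    return s * pred + t
```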
Even though NewCRFs [5] should be capable of predicting metric depth, we apply the least-squares alignment procedue to ensure that all results are comparable. 5.2 Implementation Details The proposed models were implemented in PyTorch [85] and based on the Monodepth Benchmark [1]. The original KBR [15] used a ConvNeXt-B backbone [79, 86] and a 13 DispNet decoder [17, 58]. The pose network instead used ConvNeXt-T for efficiency. As such, these models are comparable to the SSL baselines [1]. Table 2 shows a comparison between the computational complexity of the proposed models and the supervised SotA. As seen, these models use larger transformer-based backbones [20, 81, 83]. In order to make our results more comparable, KBR++ incorporates the same architecture used by DPT [3, 20]. Our ablation experiments in Section 5.4 show the impact of different backbone architectures. In our experiments and ablations, each model is trained using three different random seeds and we report average performance. This improves the reliability of the results and reduces the impact of non-determinism. We make the datasets, pretrained models and training code available at https://github.com/jspenmar/slowtv monodepth. The final KBR++ models were trained on a combination of SlowTV (1.7M), CribsTV (330k), Mannequin Challenge (115k) and Kitti Eigen-Benchmark (71k). To make the duration of each epoch tractable and balance the contribution of each dataset, we fix the number of images per epoch to 30k, 15k, 15k and 15k, respectively. The subset sampled from each dataset varies with each epoch to ensure a high data diversity. The models were trained for 60 epochs using AdamW [87] with weight decay 10\u22123 and a base learning rate of 10\u22124, decreased by a factor of 10 for the final 20 epochs. Empirically, we found that linearly warming up the learning rate for the first few epochs stabilized learning and prevented model collapse. When training with DPT backbones, finetuning the pretrained encoders at a lower learning rate was also found to be beneficial. We use a batch size of 4 and train the models on a single NVIDIA GeForce RTX 3090. SlowTV, CribsTV and Mannequin Challenge use a base image size of 384 \u00d7 640, while Kitti uses 192 \u00d7 640. We apply horizontal flipping and color jittering, along with the proposed RandAugment [21] and CutOut [22] augmentations, each with 30% probability. AR-Aug is applied with 70% probability, sampling from 16 predefined aspect ratios previously described. Since existing models are trained exclusively on automotive data, most of the motion occurs in a straight-line and forward-facing direction. It is therefore common practice to force the network to always make a forward-motion prediction by reversing the target and support frame if required. Handheld videos, while still primarily featuring forward motion, also exhibit more complex motion patterns. As such, removing the forward motion constraint results in a more flexible model that improves performance. Similarly, existing models are trained with a fixed set of support frames\u2014usually previous and next. Since SlowTV and Mannequin Challenge are mostly composed of handheld videos, the change from frame-to-frame is greatly reduced. We make the model more robust to different motion scales and appearance changes by randomizing the separation between target and support frames. 
In general, we sample such that handheld videos use a wider time-gap between frames, while automotive sequences use a small time-gap to ensure there is significant overlap between frames. As shown later, this leads to further improvements and greater flexibility.

Table 3: Learning Camera Intrinsics. Performance when training on a single dataset (Kitti or Mannequin Challenge) and learning camera intrinsics. If the cameras are not perfectly calibrated, learning the intrinsics can improve accuracy.
Kitti Eigen-Zhou: Rel↓ F↑ δ.25↑
Baseline 5.69 60.88 95.89
Learn K 5.68 60.81 95.90
Mannequin: Rel↓ F↑ δ.25↑
Baseline 16.66 14.20 77.18
Learn K 16.12 14.77 78.40

5.3 Evaluation Metrics

We follow the original evaluation procedure outlined in [15] and report the following metrics per dataset:
Rel. Absolute relative error (%) between target y and prediction ŷ as Rel = Σ |y − ŷ| / y, where Σ denotes averaging over all valid pixels.
Delta. Prediction threshold accuracy (%) as δ.25 = Σ (max(ŷ/y, y/ŷ) < 1.25).
F. Pointcloud reconstruction F-Score [88] (%) as F = (2PR) / (P + R), where P and R are the Precision and Recall of the 3-D reconstruction with a correctness threshold of 10cm.
We additionally compute multi-task metrics to summarize the performance across all datasets:
Rank. Average ordinal ranking order across all metrics as Rank = Σ_m r_m, where m represents each available metric and r is the ordinal rank.
Improvement. Average relative increase or decrease in performance (%) across all metrics as Δ = Σ_m (−1)^{l_m} (M_m − M^0_m) / M^0_m, where l_m = 1 if a lower value is better, M_m is the performance for a given metric and M^0_m is the baseline's performance.

5.4 Ablation

To demonstrate the effectiveness of each proposed component, we carry out a series of ablation studies. These experiments generally use a more efficient architecture (ConvNeXt-Tiny) and a smaller training dataset.

Learning K. Table 3 shows the benefits of learning the camera intrinsics, as outlined in Section 4.2. We train models on either Kitti or Mannequin Challenge and test them on the same dataset. If the dataset provides accurately calibrated intrinsics (Kitti), this procedure provides comparable performance. However, in the case where these were estimated by COLMAP [70], learning K results in a slight performance boost. This highlights the flexibility of this contribution, which requires only a negligible increase of 2MParams in the pose network, yet allows us to train without ground-truth intrinsics. This further simplifies the process of data collection and results in a framework that requires only uncalibrated monocular video to train.

Table 4: Ablations. We carry out ablations for each component in our framework. From top to bottom: KBR [15] contributions, network architecture, augmentations and datasets. In each case, the proposed contributions improve the zero-shot generalization of our models.
In-Distribution Outdoor Indoor Multi-task Kitti Mannequin DDAD DIODE Sintel SYNS DIODE NYUD-v2 TUM Rank\u2193 \u2206\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191 KBR 2.37 0.00 6.84 56.17 14.39 17.67 12.63 20.21 33.49 57.08 33.34 40.81 22.40 18.50 14.91 80.77 11.59 87.23 15.02 80.86 Fwd \u02c6 P 2.79 -0.64 7.27 54.61 14.36 17.52 12.43 19.76 33.52 56.97 32.25 41.05 22.32 18.45 14.89 80.54 11.68 87.14 15.50 80.29 k = \u00b11 3.16 -1.18 6.79 55.92 14.17 17.92 13.66 19.18 33.75 56.56 32.30 41.14 23.05 17.90 15.00 80.82 11.73 86.94 15.66 79.91 Fixed K 4.00 -2.76 7.09 55.19 14.95 17.11 14.30 18.15 33.49 57.10 33.39 41.43 22.56 18.67 15.41 79.98 12.06 86.14 15.75 80.07 No AR-Aug 3.53 -3.86 8.32 50.15 14.32 17.87 14.75 16.90 34.49 55.82 33.25 40.51 23.38 17.33 14.58 81.69 11.24 87.77 14.61 81.76 None 5.16 -7.59 8.66 48.33 14.62 17.01 18.46 15.17 34.38 55.62 31.88 40.27 23.32 17.97 15.15 80.30 11.88 86.35 15.55 79.81 ConvNeXt-B 3.47 0.00 6.84 56.17 14.39 17.67 12.63 20.21 33.49 57.08 33.34 40.81 22.40 18.50 14.91 80.77 11.59 87.23 15.02 80.86 ViT-B-384 4.68 -3.17 7.41 54.31 14.80 17.05 14.47 16.31 33.37 56.99 33.00 39.15 23.08 17.55 14.19 83.09 11.58 86.83 15.41 81.65 ViT-L-384 2.53 0.41 7.29 54.70 14.02 18.10 14.38 16.88 33.02 58.18 30.68 42.01 22.51 17.83 13.72 83.98 10.60 88.96 14.07 83.24 BEiT-B-384 2.79 0.32 6.93 55.28 14.23 17.98 13.50 17.84 32.52 58.33 32.41 41.34 22.83 17.75 13.87 83.53 11.02 88.05 14.55 82.28 BEiT-L-384 1.53 3.91 6.91 55.60 12.78 20.37 14.39 17.75 32.57 58.73 30.21 41.22 21.76 18.92 13.53 84.65 9.63 91.05 13.60 85.81 No Aug 3.68 0.00 6.39 56.55 17.12 14.28 21.54 9.85 35.62 52.92 35.38 39.54 24.90 15.88 16.69 76.55 14.41 80.27 17.50 76.18 ColorJitter 3.00 1.30 6.31 56.38 16.95 14.36 19.72 10.71 35.65 53.08 35.16 38.73 24.47 16.16 16.63 76.93 14.25 80.59 17.58 75.62 CutOut 3.68 1.55 6.51 55.93 17.25 14.35 18.98 11.86 35.73 52.81 35.75 38.86 25.17 15.92 16.71 76.61 14.09 80.76 17.43 75.86 RandAugment 2.37 2.09 6.36 56.34 16.89 14.61 19.25 11.05 35.36 53.30 34.11 39.40 24.82 15.83 16.91 76.34 13.98 81.16 16.98 76.65 All 2.26 2.97 6.36 56.30 17.09 14.36 17.17 11.66 35.52 53.44 35.41 39.12 24.58 16.31 16.80 76.54 13.61 81.96 17.08 76.48 Base 2.47 0.00 6.49 55.87 16.62 14.95 18.59 12.09 35.15 54.00 36.01 39.42 24.69 16.20 16.44 77.08 13.52 82.54 16.87 77.22 No Kitti 3.05 -14.82 17.98 23.73 16.02 15.57 21.42 10.85 36.04 52.89 34.93 39.39 27.32 14.50 16.41 77.35 13.19 83.43 16.36 78.12 No Mannequin 3.89 -12.62 6.83 55.20 24.42 9.12 17.07 11.60 35.73 53.12 37.04 33.11 24.67 15.95 17.87 73.60 18.36 70.35 22.49 63.12 No SlowTV 3.79 -8.87 7.15 54.72 16.51 14.59 30.15 6.43 36.48 51.85 35.85 37.47 27.07 13.92 16.30 77.67 13.65 82.62 17.13 77.20 With CribsTV 1.79 0.58 6.67 55.33 16.36 15.09 19.41 12.59 35.07 54.25 34.68 38.90 24.84 16.02 16.12 77.79 12.76 84.21 16.81 77.14 Highlighted cells are NOT zero-shot results. S=Stereo, M=Monocular, D=Ground-truth Depth. KBR. Table 4 (1st block) shows results when removing each component proposed by the original KBR [15]. Fwd \u02c6 P represents a network forced to always make a forward-motion prediction. k = \u00b11 uses a fixed set of support frames, instead of the randomization proposed in Section 5.2. Fixed K removes the learned camera intrinsics, while No AR-Aug removes the proposed aspect ratio augmentation. 
As expected, the model with the full set of contributions performs best, while the model without any contributions is worse by 7.6%. It is interesting to note that both learning the intrinsics and AR-Aug provide the biggest performance boost. Furthermore, it is worth remembering that none of the components (except learning K) results in a increase in model complexity. However, when combined together, they significantly improve the zero-shot generalization capabilities of our models. Network Architecture. To make our models more comparable with the supervised SotA, we modify our architecture to match the one from DPT [3]. These results are shown in Table 4 (2nd block). All models start from an encoder pretrained on ImageNet. Interestingly, we find that most transformer-based architectures do not significantly improve upon the baseline (ConvNeXt-B [79]), which is much more efficient. However, the largest version of BEiT [20] provides the best performance. Augmentations. Table 4 (3rd block) shows the results of incorporating the more advanced augmentation strategies from Section 4.3. All variants (except No Aug) additionally include horizontal flipping. As shown, the default color jittering augmentation used by most existing models is slightly better than using no augmentations. However, both CutOut and RandAugment provide larger improvements. Furthermore, the final row (All) demonstrates that these improvements are cumulative and that the model benefits from combining multiple augmentation strategies. Datasets. The final ablation experiment explores the effect of removing or adding each training dataset, shown in Table 4 (4th block). As expected, removing each dataset results in a significant drop in performance. Whilst SlowTV seems to impact performance the least, we believe this is due to the lack of natural data within our evaluation set. This is supported by the fact that SlowTV has the most impact on SYNS-Patches, which is the only dataset with natural scenes. Finally, incorporating the CribsTV dataset proposed in this paper slightly increases the overall performance, especially in indoor scenes. It is worth remembering this variant also includes Mannequin Challenge, which results in a less drastic improvement. Furthermore, training on more varied data is likely to be more beneficial when combined with larger models with better generalization capacity. 5.5 In-distribution We compare our final models to the (self-)supervised SotA on the two datasets from our training set with available ground-truth, namely Kitti and Mannequin Challenge. This represents the evaluation procedure commonly employed by most papers, where the test data is drawn from the same distribution as the training data. These results can be found in Table 5 (In-Distribution). Both of our models outperform every (self-)supervised baseline on both datasets, excluding NewCRFs [5] on 17 Kitti. This shows that SlowTV provides complementary driving data that can generalize across datasets. Furthermore, the improved KBR++ increases performance on Mannequin Challenge due to the additional indoor data provided by CribsTV. 5.6 Zero-shot Generalization The core of our evaluation takes place in a zero-shot setting, meaning that models are not fine-tuned on the target datasets. This tests the capability to adapt to out-of-distribution examples and previously unseen environments. Existing SS-MDE publications sometimes include zero-shot evaluations. 
However, this is frequently limited to CityScapes [11] and Make3D [89], which contain low-quality ground-truth and represent an urban automotive domain similar to the training Kitti. We instead opt for a much more challenging collection of datasets, constituting a mixture of urban, natural, synthetic and indoor scenes. Please refer to Table 1 for details regarding the evaluations datasets and their splits. Outdoor. These results can be found in Table 5 (Outdoor). Both of our models outperform the SSL baselines by a large margin. This is even the case on DDAD [12], which is also an urban automotive dataset. Meanwhile, our model is capable of generalizing to urban [12, 63], synthetic [62] and natural [1, 13] datasets, performing on par with the SotA supervised baselines. It is interesting to note that NeWCRFs generalizes across automotive datasets and provides the best performance on DDAD. However, it fails when evaluated in alternative domains and provides only minimal improvements over the SSL baselines. Finally, even the more efficient KBR provides impressive performance that matches more complex transformer-based backbones. Indoor. Table 5 (Indoor) shows performance on all indoor datasets. Note that NeWCRFs was trained exclusively on NYUD-v2, while DPT uses it as part of its training collection. As such, this subset of results is not zero-shot. Our models outperform the SSL baselines by an even larger margin, due to the large shift in distribution when moving from outdoor to indoor scenes. In this case, KBR++ provides a noticeable improvement over KBR, thanks to the additional indoor training data provided by CribsTV. This helps to further close the gap w.r.t. supervised approaches. Once again, our model is now capable of performing on-par with all supervised models except DPT-BEiT, despite requiring no ground-truth annotations. Overall. To summarize, Table 5 (Multi-task) reports the multi-task metrics across all datasets. Our models outperform the updated SS-MDE baselines from [1] by over 35%. Meanwhile, the contributions from this paper further improve our original model [15] by 4.5%. What\u2019s more, KBR++ is the second-best model overall, outperforming supervised baselines such as NeWCRFs and DPT-ViT. It is worth emphasizing once again that our model is entirely self-supervised, relying exclusively on the photometric consistency across frames. Thus, we present the first approach demonstrating the true capabilities of SS-MDE, which can leverage much larger and more diverse collections of data freely available on YouTube. 18 Table 5: Results. Outdoor and Indoor represent zero-shot evaluations. We outperform all SS-MDE baselines [1] (top block). Our original model (KBR) performs on par with the supervised SotA, while the updated model from this paper (KBR++) outperforms every model except DPT [3]. Our models do not required ground-truth annotations for training and instead leverage large-scale YouTube data. 
In-Distribution Outdoor Indoor Multi-task Kitti Mannequin DDAD DIODE Sintel SYNS DIODE NYUD-v2 TUM Train Rank\u2193 \u2206\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191 Garg [6] S 7.58 -38.52 7.65 53.28 27.63 9.08 26.93 7.80 39.60 44.15 39.41 31.93 26.05 15.17 19.18 70.54 22.49 59.60 23.53 62.82 Monodepth2 [8] MS 7.74 -38.34 7.90 50.50 27.44 7.97 24.31 8.25 39.53 44.71 40.09 29.49 25.31 14.83 19.40 70.42 22.41 60.09 23.50 62.36 DiffNet [52] MS 7.05 -36.84 7.98 49.60 27.46 7.76 23.03 9.43 38.87 46.14 39.93 28.77 25.09 14.64 19.11 70.94 21.82 61.30 23.21 63.08 HR-Depth [9] MS 5.95 -35.16 7.70 51.49 27.01 8.39 23.13 9.94 39.09 45.60 38.82 30.90 25.07 15.48 18.93 71.19 21.74 61.18 23.18 63.50 KBR (Ours) [15] M 3.37 0.00 6.84 56.17 14.39 17.67 12.63 20.21 33.49 57.08 33.34 40.81 22.40 18.50 14.91 80.77 11.59 87.23 15.02 80.86 KBR++ (Ours) M 2.95 4.58 6.77 56.57 12.95 20.08 13.10 17.87 32.81 57.77 30.46 41.89 21.92 18.79 14.26 82.94 9.24 91.85 12.71 85.86 MiDaS [2] D 5.68 -11.84 13.71 33.44 16.96 12.62 16.00 15.41 32.72 59.04 30.95 39.55 26.94 14.69 10.71 88.42 10.48 89.59 14.43 82.35 DPT-ViT [3] D 3.84 -1.74 10.98 40.56 15.52 14.46 15.49 18.25 32.59 59.82 25.53 43.57 23.24 17.44 9.60 91.38 10.10 90.10 12.68 86.25 DPT-BEiT [3] D 2.11 11.12 9.45 44.22 13.55 16.58 10.70 22.63 31.08 61.51 21.38 46.46 21.47 17.73 7.89 93.34 5.40 96.54 10.45 89.68 NeWCRFs [5] D 3.84 1.03 5.23 59.20 18.20 15.17 9.59 23.02 37.01 49.66 39.25 32.43 24.28 16.76 14.05 84.95 6.22 95.58 14.63 82.95 Highlighted cells are NOT zero-shot results. S=Stereo, M=Monocular, D=Ground-truth Depth. Tmp Kitti SYNS Sintel Mannequin DDAD DIODE DIODE NYUD-v2 TUM Image GT HR-Depth KBR KBR++ MiDaS DPT-BEiT NeWCRFs Fig. 8: Zero-shot SS-MDE. The proposed KBR models generalize to a wide range of scene types, greatly improving upon the SSL baselines. The contributions from this paper further improve robustness to indoor scenes and the accuracy at thin structures. Middle=SelfSupervised \u2013 Bottom=Supervised. 5.7 Qualitative Results Visualizations. Sample predictions for each model and dataset can be found in Figure 8. Our models are significantly more robust than the best SSL baseline [9], which fails on all domains except the automotive. This is most noticeable in indoor settings, where it treats human faces as background. Meanwhile, our model generalizes across all datasets and environments, providing high-quality predictions. It can also be seen how KBR++ improves over the base KBR, especially in thin structures (DIODE Outdoors) and depth boundaries (TUM). Failure Cases. Despite being a significant step forward for SS-MDE, our model still has a few limitations and failure cases. We show these examples in Figure 9. The main one is the lack of explicit modeling for dynamic objects. Whilst the minimum reconstruction loss and automasking [8] can reduce their impact, there are still cases where vehicles in front of the camera are predicted as holes of infinite depth. Another 20 Tmp Kitti SYNS Sintel Mannequin DDAD Diode Diode NYUD TUM Image GT HR-Depth KBR KBR++ MiDaS DPT-BEiT NeWCRFs Fig. 9: Failure Cases. The proposed model occasionally produces holes of infinite depth or texture-copy artifacts. However, complex regions such as foliage or object boundaries tend to be challenging for all approaches. Finally, the upright prior in training data makes the model sensitive to strong rotations. 
Another common failure case is texture-copying artifacts, where textures from the original image are incorrectly predicted as changes in depth. This can happen on objects such as textured walls or pavements made with bricks or text on shirts and signs. Finally, another interesting failure case is reflective or transparent surfaces, as they do not violate the photometric constraints during training. However, these are also challenging for supervised methods, as the data cannot be correctly captured using LiDAR either.

5.8 Map-Free Relocalization

Following our preliminary publication [15], we report our results on the MapFreeReloc [72] benchmark. Map-free relocalization aims to regress the 6DoF pose of a target frame based only on the known pose of a single reference frame. This is contrary to traditional localization pipelines, which typically contain a map-building or network-training phase that requires large-scale captures for each specific scene.

Table 6: Map-free Relocalization [72]. The feature-matching MapFreeReloc baselines [72] can be improved with our novel SS-MDE models. These results are zero-shot, without finetuning on the target dataset (which consists of portrait images). The improved variants from this paper (KBR++) perform on-par with DPT-BEiT and outperform every other (self-)supervised approach.
Multi-task | Pose | VCRE
Train Rank\u2193 \u2206\u2191 Trans\u2193 Rot\u2193 P\u2191 AUC\u2191 Error\u2193 P\u2191 AUC\u2191
Garg [6] S 10.50 -21.84 2.96 52.57 5.43 17.15 188.20 24.84 51.61
Monodepth2 [17] MS 11.25 -22.32 2.95 52.92 5.50 17.22 189.67 24.38 50.63
DiffNet [52] MS 10.88 -21.70 2.97 53.19 5.65 17.71 188.80 24.78 51.24
HR-Depth [9] MS 9.38 -21.07 2.94 52.95 5.67 17.95 187.83 25.06 51.52
KBR (Ours) M 4.50 0.00 2.63 49.01 11.54 32.02 181.21 29.96 58.89
KBR++ (Ours) M 2.00 4.86 2.58 46.22 12.50 33.76 178.26 31.93 61.48
MiDaS [2] D 4.00 0.35 2.60 46.92 11.39 30.44 180.64 30.45 59.72
DPT-ViT [3] D 3.38 1.09 2.56 45.62 11.27 30.92 181.34 30.60 60.03
DPT-BEiT [3] D 1.75 5.32 2.49 44.99 12.56 32.48 181.67 32.46 62.03
NeWCRFs [5] D 8.00 -16.98 2.89 51.92 6.69 20.77 184.63 25.89 52.93
DPT-NYUD [72] D+FT 6.50 -6.61 2.67 47.66 9.17 26.46 184.53 28.68 56.87
DPT-Kitti [72] D+FT 5.88 -3.05 2.66 49.21 10.86 29.99 178.49 28.37 56.86
Trans=meters, Rot=deg, VCRE=px, Precision=%, AUC=%.

Recent research [72, 90] has shown that the scale ambiguity in map-free relocalization feature-matching pipelines can be resolved by incorporating SotA MDE predictions. The models are evaluated on the validation split of the benchmark, which consists of 37k images from 65 small-scale landmarks. The data was crowd-sourced and collected using mobile phones, meaning that it features an uncommon portrait aspect ratio. This makes the task of zero-shot transfer even more challenging. The feature-matching baseline [72] uses LoFTR [91] correspondences, a PnP solver and DPT [3] fine-tuned on either Kitti or NYUD-v2. Metric depth for all evaluated models is obtained by aligning the predictions to the baseline fine-tuned DPT predictions. We report the evaluation metrics provided by the benchmark authors. This includes translation (meters), rotation (deg) and reprojection (px) errors. Pose Precision/AUC were computed with an error threshold of 25 cm & 5\u25e6, while Reprojection uses a threshold of 90px. The results can be found in Table 6, along with visualizations in Figure 10.
The updated models from this paper (KBR++) outperform every (self-)supervised model, except DPT-BEiT. This demonstrates the effectiveness of training with our diverse datasets, as well as the improved robustness to image aspect ratios provided by AR-Aug. Furthermore, it showcases the ability to incorporate SS-MDE into real-world problem pipelines. We find that the original DPT models perform better than their fine-tuned counterparts, despite using these as the metric scale reference. This suggests that the finetuning procedure of [72] may provide metric scale at the cost of generality. However, this highlights the need for models that predict accurate metric depth, rather than only relative depth.

Fig. 10: MapFreeReloc [72] Predictions. Our models are capable of adapting to the challenging portrait images of the MapFreeReloc benchmark, allowing us to outperform other (self-)supervised methods. Our predictions are sharper and more detailed than those provided by the baseline, which requires a large collection of ground-truth annotations during training.

6 Conclusion

This paper has introduced KBR and KBR++, the first SS-MDE models that match and even outperform SotA supervised algorithms. We demonstrated this in our challenging zero-shot experiments, which showcase the robustness and generalization capabilities of our models. This was made possible due to our approach to data collection, focusing on the scale of the training set and leveraging the lack of annotations needed for self-supervised learning. We curated two novel large-scale YouTube datasets, SlowTV and CribsTV, with a total of 2M training frames. These datasets contain an incredibly diverse set of environments, ranging from hikes in snowy forests, to luxurious houses and even underwater caves. Performance and generalization were further maximized by introducing stronger augmentation regimes (AR-Aug, RandAugment and CutOut), simultaneously learning the camera\u2019s intrinsics, making the training more flexible and modernizing the network architecture. Our extensive ablations demonstrated the benefits of introducing each respective component. The main limitation of the current models is their sensitivity to dynamic objects. Whilst the contributions from Monodepth2 [8] alleviate some of these artifacts, a more explicit motion model may be required to handle these scenarios. Introducing optical flow constraints may be the most feasible way to achieve this in a self-supervised manner. However, it is worth noting the increased computational requirements resulting from training a new network and computing the required consistency losses. Finally, estimating metric depth (for generic scenes) without ground-truth annotations is an open research problem that could further increase the applicability of SS-MDE to real-world tasks. By making the datasets and code freely available to the public, we hope to further drive the SotA in SS-MDE and inspire future research that addresses these challenging problems.

Acknowledgements This work was partially funded by the EPSRC under grant agreements EP/S016317/1 & EP/S035761/1.", "introduction": "Reliably reconstructing the 3-D structure of the world is a crucial component in many real-world applications, such as autonomous driving, robotics, camera relocalization or augmented reality.
While traditional depth estimation algorithms relied on correspondence estimation and triangulation, recent research has shown it is possible to train a neural network to reconstruct a scene from only a single image. Despite being an ill-posed task due to the scale ambiguity, monocular depth estimation (MDE) has become of great interest due to its flexibility and applicability to many fields. Recent supervised MDE [2\u20135] approaches have achieved impressive results, but are limited both by the availability and quality of annotated datasets. LiDAR data is expensive to collect and frequently exhibits boundary artifacts due to motion correction. Meanwhile, Structure-from-Motion (SfM) is computationally expensive and can produce noisy, incomplete or incorrect reconstructions. Self-supervised learning (SSL) should be able to scale to much larger data quantities, since only monocular or stereo video is required for training. These models instead leverage photometric constraints, using the predicted depth and motion to warp adjacent frames and synthesize the target image. However, in practice, existing SS-MDE models [6\u20139] rely exclusively on automotive datasets [10\u201312]. This lack of variety significantly impacts their generalization capabilities and results in failures when applied to natural or indoor scenes. Moreover, despite being fully convolutional, these models struggle to generalize to different image sizes. We argue that the lack of diversity is due to the challenges of data collection, with new datasets aiming to provide high-quality ground-truth annotations that can be used for testing [13, 14]. Whilst this is important to accurately evaluate the performance of models, it also places strong limitations on the achievable scale for the training splits. In this paper, we instead focus on creating datasets that specifically target self-supervised learning, exploiting the fact that no ground-truth annotations are required. Combined with our additional contributions, we train self-supervised models capable of zero-shot generalization beyond the automotive domain. Our models significantly outperform all existing SS-MDE approaches and can even match or outperform State-of-the-Art (SotA) supervised techniques. A preliminary version of this work [15] was published at the International Conference on Computer Vision. This paper introduced the SlowTV dataset, composed of 1.7M frames from 40 curated YouTube videos. These videos featured a wide diversity of settings, including seasonal hiking, scenic driving and scuba diving. SlowTV provided our models with the general foundation for natural scenes, such as forests, mountainous terrains or deserts. Our dataset was combined with Mannequin Challenge [16] and Kitti [10], which targeted indoor scenes with humans and urban driving. Our preliminary work introduced several contributions that further maximized zero-shot performance, whilst not increasing the complexity and computational requirements of the model. For instance, predicting the camera intrinsics prevented performance drops resulting from training with inaccurate intrinsics estimated via SfM. Meanwhile, the aspect ratio augmentation (AR-Aug) diversified the distribution of image shapes seen during training and facilitated transfer across datasets. This paper presents several significant extensions to Kick Back & Relax (KBR) [15] and thus introduces KBR++.
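As a concrete illustration of the aspect ratio augmentation (AR-Aug) mentioned above, the sketch below shows one possible way to implement such a crop-and-resize step in PyTorch. It is only an illustrative sketch: the centre crop and the 50-100% coverage range follow the description in Section 4.3 and Appendix B, but the short aspect-ratio list and the function itself are assumptions rather than the authors' implementation.

```python
import random

import torch
import torch.nn.functional as F


def ar_aug(img: torch.Tensor, aspect_ratios=((9, 16), (3, 2), (1, 1), (2, 1))) -> torch.Tensor:
    """Aspect-ratio augmentation sketch: centre-crop to a randomly sampled aspect
    ratio covering 50-100% of the image, then resize the crop so it keeps roughly
    the original number of pixels. `img` is a (C, H, W) tensor; the ratio list is
    a small illustrative subset (the paper samples from 16 predefined ratios)."""
    _, h, w = img.shape
    ar_h, ar_w = random.choice(aspect_ratios)
    scale = random.uniform(0.5, 1.0)  # crop covers 50-100% of the limiting dimension
    if ar_w / ar_h >= w / h:  # target is wider than the source: width is limiting
        crop_w = int(w * scale)
        crop_h = max(1, int(crop_w * ar_h / ar_w))
    else:  # target is taller than the source: height is limiting
        crop_h = int(h * scale)
        crop_w = max(1, int(crop_h * ar_w / ar_h))
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    crop = img[:, top:top + crop_h, left:left + crop_w]
    # Resize so the crop contains approximately the same number of pixels as the input.
    factor = ((h * w) / (crop_h * crop_w)) ** 0.5
    new_h, new_w = max(1, int(crop_h * factor)), max(1, int(crop_w * factor))
    return F.interpolate(crop[None], size=(new_h, new_w), mode="bilinear", align_corners=False)[0]
```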
Despite performing on par with several supervised SotA models, there was still a gap when evaluating on indoor datasets due to the exclusive reliance on Mannequin Challenge as a source of indoor data. Following the design philosophy of our original work, we introduce the CribsTV dataset. This is an extension to SlowTV consisting of 330k images from curated YouTube real estate virtual tours. As such, this new dataset focuses on bedrooms, living rooms and kitchens and is complemented by gardens, swimming pools and aerial outdoor shots. This further reduces the gap between supervision and self-supervision in indoor settings. We complement this novel dataset with additional augmentation strategy experiments. Since its inception [6, 7, 17], SS-MDE has restricted itself to simple augmentations, such as color jittering and horizontal flipping. However, recent contrastive SSL [18\u201320] research has shown the benefits of more aggressive augmentation schemes. We demonstrate this is also the case in SS-MDE and incorporate RandAugment [21] and CutOut [22] into the pipeline. Finally, we modernize the depth network architecture with the transformer-based backbones from DPT [3] and perform several new ablation experiments that give further insight into the performance of our models. Our updated models outperform all (self-)supervised approaches, except DPT-BEiT, despite not requiring any ground-truth annotations. The contributions of our work can be summarized as:
1. We introduce a novel SS-MDE dataset of SlowTV YouTube videos and complement it with CribsTV, resulting in a total of 2M training images. This dataset features an incredibly diverse set of environments, including worldwide seasonal hiking, scenic driving, scuba diving and real estate tours.
2. We leverage SlowTV and CribsTV to train zero-shot models that generalize across multiple datasets. We additionally apply these models to the task of map-free relocalization, demonstrating their applicability to real-world settings.
3. We introduce a range of contributions and best-practices that further maximize generalization. This includes: camera intrinsics learning, an aspect ratio augmentation, stronger photometric augmentations, support frame randomization, flexible motion estimation and a modernized depth network architecture. We demonstrate the effectiveness of these contributions in detailed ablation experiments.
4. We close the performance gap between supervision and self-supervision, greatly furthering the SotA in SS-MDE. We share these developments with the community, making the datasets, pretrained models and code available to the public." }, { "url": "http://arxiv.org/abs/2307.10713v1", "title": "Kick Back & Relax: Learning to Reconstruct the World by Watching SlowTV", "abstract": "Self-supervised monocular depth estimation (SS-MDE) has the potential to\nscale to vast quantities of data. Unfortunately, existing approaches limit\nthemselves to the automotive domain, resulting in models incapable of\ngeneralizing to complex environments such as natural or indoor settings.\n To address this, we propose a large-scale SlowTV dataset curated from\nYouTube, containing an order of magnitude more data than existing automotive\ndatasets. SlowTV contains 1.7M images from a rich diversity of environments,\nsuch as worldwide seasonal hiking, scenic driving and scuba diving. Using this\ndataset, we train an SS-MDE model that provides zero-shot generalization to a\nlarge collection of indoor/outdoor datasets.
The resulting model outperforms\nall existing SSL approaches and closes the gap on supervised SoTA, despite\nusing a more efficient architecture.\n We additionally introduce a collection of best-practices to further maximize\nperformance and zero-shot generalization. This includes 1) aspect ratio\naugmentation, 2) camera intrinsic estimation, 3) support frame randomization\nand 4) flexible motion estimation. Code is available at\nhttps://github.com/jspenmar/slowtv_monodepth.", "authors": "Jaime Spencer, Chris Russell, Simon Hadfield, Richard Bowden", "published": "2023-07-20", "updated": "2023-07-20", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI", "cs.RO" ], "main_content": "Garg et al. [17] proposed the first algorithm for SS-MDE, where the target view was synthesized using its stereo pair and predicted depth map. Monodepth [19] greatly improved performance by incorporating differentiable bilinear interpolation [27], an SSIM-weighted reconstruction loss [64] and left-right consistency. SfMLearner [77] extended SS-MDE into the purely monocular domain by replacing the fixed stereo transform with a trainable VO network. DDVO [63] further refined the predicted motion with a differentiable DSO module [16]. Purely monocular approaches are highly sensitive to dynamic objects, which cause incorrect correspondences. Many works have tried to minimize this impact by introducing predictive masking [77], uncertainty estimation [30, 69, 45], optical flow [70, 48, 35] and motion masks [23, 9, 14]. Monodepth2 [20] proposed the minimum reconstruction loss and static automasking, encouraging the loss to optimize unoccluded pixels and preventing holes in the depth. Other methods focused on the robustness of the photometric loss. This was achieved through the use of pretrained [74] or learnt [53, 52] feature descriptors and semantic constraints [11, 25, 29]. Mahjourian et al. [39] and Bian et al. [6] complemented the photometric loss with geometric constraints. ManyDepth [65] additionally incorporated the previous frame\u2019s prediction into a cost volume. Complementary to these developments, other works proposed changes to the network architecture, including both the encoder [24, 22, 76], and decoder [44, 24, 68, 76, 38, 75]. Akin to supervised MDE developments [4, 5], Johnston et al. [28] and Bello et al. [21, 22] obtained improvements by representing depth as a discrete volume. Finally, several works have complemented selfsupervision with proxy depth regression. These are typically obtained from SLAM [30, 49], synthetic data [37] or hand-crafted disparity estimation [60, 65]. In particular, DepthHints [65] improved the proxy depth robustness by generating estimates with multiple hyperparameters. The works described here train exclusively on automotive data, such as Kitti [18], CityScapes [13] or DDAD [24]. Recent benchmark studies [56, 54] have shown that this lack of variety limits generalization to out-of-distribution domains, such as forests, natural or indoor scenes. We propose to greatly increase the diversity and scale of the training data by leveraging unlabelled videos from YouTube, without requiring manual annotation or expensive pre-processing. 3. SlowTV Dataset SlowTV is a style of TV programming featuring uninterrupted shots of long-duration events. Our dataset consists of 40 curated videos ranging from 1\u20138 hours and a total of 135 hours. 2 Figure 2: SlowTV. Sample images from the proposed dataset, featuring diverse scenes for hiking, driving and scuba diving. 
The dataset consists of 40 videos curated from YouTube, totalling 1.7M frames. Diversifying the training data allows our SS-MDE models to generalize to unseen datasets.

Table 1: Datasets Comparison. The top half shows commonly used SS-MDE training datasets. The proposed SlowTV greatly diversifies training environments and scales to much larger quantities. The bottom half summarizes the testing datasets used in our zero-shot generalization evaluation.
Urban Natural Scuba Indoor Depth Acc Density #Img
Kitti [18, 61]\u2020 \u2713 \u2717 \u2717 \u2717 LiDAR High Low 71k
DDAD [24] \u2713 \u2717 \u2717 \u2717 LiDAR Mid Low 76k
CityScapes [13] \u2713 \u2717 \u2717 \u2717 Stereo Low Mid 88k
Mannequin [31]\u2020 \u2713 \u2717 \u2717 \u2713 SfM Mid Mid 115k
SlowTV (Ours)\u2020 \u2713 \u2713 \u2713 \u2717 \u2717 \u2717 \u2717 1.7M
Kitti [18, 61] \u2713 \u2717 \u2717 \u2717 LiDAR High Low 652
DDAD [24] \u2713 \u2717 \u2717 \u2717 LiDAR Mid Low 1k
Sintel [7] \u2717 \u2713 \u2717 \u2717 Synth High High 1064
SYNS-Patches [1, 56] \u2713 \u2713 \u2717 \u2713 LiDAR High High 775
DIODE [62] \u2713 \u2717 \u2717 \u2713 LiDAR High High 771
Mannequin [31] \u2713 \u2717 \u2717 \u2713 SfM Mid Mid 1k
NYUD-v2 [41] \u2717 \u2717 \u2717 \u2713 Kinect Mid High 654
TUM-RGBD [57] \u2717 \u2717 \u2717 \u2713 Kinect Mid High 2.5k
\u2020Datasets used to train our networks.

We focus on three categories: hiking, driving and scuba diving. Hiking videos target natural settings, including forests, mountains or fields, which are non-existent in current datasets. These videos were collected in a diverse set of locations and conditions. This includes the USA, Canada, the Balkans, Eastern Europe, Indonesia and Hawaii, and conditions such as rain, snow, autumn and summer. Existing automotive datasets tend to focus on urban driving in densely populated cities [18, 13, 26, 10, 24, 71, 8]. Our SlowTV dataset features complementary data in the form of long drives in scenic routes, such as mountain and natural trails. Finally, underwater is an otherwise unused domain, which increases the diversity of the training data and prevents overfitting to purely urban scenes. Figure 2 shows the variability of the proposed dataset, with additional examples and details in Appendix A. Videos were downloaded at HD resolution (720 \u00d7 1280) and extracted at 10 FPS to reduce storage, while still providing smooth motion and large overlap between adjacent frames. To make the dataset size tractable and reduce self-similarity, only 100 consecutive frames out of every 250 were retained. Despite this, the final training dataset consists of a total of 1.7M images, composed of 1.1M natural, 400k driving and 180k underwater. Table 1 compares existing datasets with those used in this publication. Since our dataset targets self-supervised methods, the only annotations required are the camera intrinsic parameters. We apply COLMAP [51] to a sub-sequence to estimate the intrinsics for each video. However, as discussed in Section 4.2, it is possible to let the network jointly optimize camera parameters alongside depth and motion. This improves performance and results in a truly self-supervised perception and navigation framework, requiring only monocular video to learn how to reconstruct.

4. Methodology

MDE is an alternative to traditional depth estimation techniques, such as stereo matching and cost volumes. Rather than relying on multi-view images, these depth networks take only a single image as input.
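To make the subsampling step above concrete, the following sketch selects which already-extracted frames to keep (the first 100 consecutive frames out of every 250, after extraction at 10 FPS). The directory layout and file extension are illustrative assumptions rather than part of the released pipeline.

```python
from pathlib import Path


def select_slowtv_frames(frame_dir: str, keep: int = 100, chunk: int = 250) -> list:
    """Keep only the first `keep` consecutive frames out of every `chunk` frames,
    mirroring the SlowTV curation (100 out of every 250 frames extracted at 10 FPS)."""
    frames = sorted(Path(frame_dir).glob("*.png"))  # frames extracted beforehand, e.g. with ffmpeg
    return [frame for idx, frame in enumerate(frames) if idx % chunk < keep]
```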
From this image, a disparity or inverse depth map is estimated as $\hat{\mathbf{D}}_t = \Phi_D(\mathbf{I}_t)$, where $\Phi_D$ represents a trainable DNN, $\mathbf{I}_t$ is the target image at time-step $t$ and $\hat{\mathbf{D}}_t$ the predicted sigmoid disparity. As SlowTV contains only monocular videos, we adopt a fully monocular pipeline [77], whereby our framework also estimates the relative pose $\hat{\mathbf{P}}_{t+k}$ between the target $\mathbf{I}_t$ and support frames $\mathbf{I}_{t+k}$, where $k = \pm 1$ is the offset between adjacent frames. This is represented as $\hat{\mathbf{P}}_{t+k} = \Phi_P(\mathbf{I}_t \oplus \mathbf{I}_{t+k})$, where $\oplus$ is channel-wise concatenation. Pose is predicted as a translation and axis-angle rotation.

4.1. Losses

The correspondences required to warp the support frames and compute the photometric loss are given by backprojecting the depth and re-projecting onto each support frame.
This process is summarized as

$$\mathbf{p}'_{t+k} = \mathbf{K} \, \hat{\mathbf{P}}_{t+k} \, \mathbf{D}_t(\mathbf{p}_t) \, \mathbf{K}^{-1} \mathbf{p}_t, \quad (1)$$

where $\mathbf{K}$ are the camera intrinsic parameters, $\mathbf{D}_t$ is the inverted and scaled disparity prediction $\hat{\mathbf{D}}_t$, $\mathbf{p}_t$ are the 2-D pixel coordinates in the target frame and $\mathbf{p}'_{t+k}$ are the reprojected coordinates in the support frame. We omit the transformation to homogeneous coordinates for simplicity. The warped support frames are then given by $\mathbf{I}'_{t+k} = \mathbf{I}_{t+k} \langle \mathbf{p}'_{t+k} \rangle$, where $\langle \cdot \rangle$ represents differentiable bilinear interpolation [27]. These warped frames are used to compute the photometric loss w.r.t. the original target frame.
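The backprojection, reprojection and bilinear sampling in Eq. (1) can be sketched as follows. This is a simplified, single-scale PyTorch implementation assuming a pinhole camera, a depth map already recovered from the predicted disparity and a 4x4 relative pose; it illustrates the warping operation rather than reproducing the authors' code.

```python
import torch
import torch.nn.functional as F


def warp_support_frame(img_k, depth_t, K, T_tk):
    """Warp a support frame I_{t+k} into the target view using predicted depth and pose.

    img_k:   (B, 3, H, W) support image
    depth_t: (B, 1, H, W) depth for the target frame (recovered from the predicted disparity)
    K:       (B, 3, 3) camera intrinsics
    T_tk:    (B, 4, 4) relative pose from the target to the support frame
    """
    B, _, H, W = depth_t.shape
    device = depth_t.device

    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(3, -1)

    # Backproject: X = D(p) K^{-1} p, then transform with the pose and reproject with K.
    cam_points = depth_t.view(B, 1, -1) * torch.matmul(torch.inverse(K), pix)      # (B, 3, H*W)
    cam_points = torch.cat([cam_points, torch.ones(B, 1, H * W, device=device)], 1)
    proj = torch.matmul(K, torch.matmul(T_tk, cam_points)[:, :3])                  # (B, 3, H*W)
    uv = proj[:, :2] / (proj[:, 2:3] + 1e-7)

    # Normalise coordinates to [-1, 1] and bilinearly sample the support frame.
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    return F.grid_sample(img_k, grid, mode="bilinear", padding_mode="border", align_corners=True)
```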
As is common, we use the weighted combination of SSIM+L1 [19], given by

$$\mathcal{L}_{ph}(\mathbf{I}, \mathbf{I}') = \lambda \, \frac{1 - \mathcal{L}_{ssim}(\mathbf{I}, \mathbf{I}')}{2} + (1 - \lambda) \, \mathcal{L}_{1}(\mathbf{I}, \mathbf{I}'), \quad (2)$$

where $\lambda = 0.85$ is the loss balancing weight. While Mannequin Challenge consists almost exclusively of static scenes, Kitti and SlowTV contain dynamic objects, such as vehicles, hikers, and wild marine life. Rather than introducing motion masks [23, 9, 14], commonly requiring semantic segmentation, we opt for the minimum reconstruction loss [20]. This loss reduces the impact of occluded pixels by optimizing only the pixels with the smallest loss across all support frames and is computed as

$$\mathcal{L}_{rec} = \sum_{\mathbf{p}} \min_{k} \, \mathcal{L}_{ph}\!\left(\mathbf{I}_t, \mathbf{I}'_{t+k}\right), \quad (3)$$

where $\sum$ indicates averaging over a set. Finally, automasking [20] helps remove holes of infinite depth caused by static frames and objects moving at similar speeds to the camera.
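A compact sketch of the photometric loss in Eq. (2) and the per-pixel minimum of Eq. (3) is given below. The SSIM term uses the standard 3x3 average-pooled formulation popularised by Monodepth-style pipelines; this is an illustrative implementation rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F


def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Simplified SSIM using 3x3 average pooling, returned as a per-pixel map in [0, 1]."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return torch.clamp(num / den, 0, 1)


def photometric_loss(target, synth, weight_ssim=0.85):
    """Weighted SSIM + L1 photometric error (Eq. 2), averaged over channels -> (B, 1, H, W)."""
    l1 = (target - synth).abs().mean(1, keepdim=True)
    dssim = ((1 - ssim(target, synth)) / 2).mean(1, keepdim=True)
    return weight_ssim * dssim + (1 - weight_ssim) * l1


def min_reconstruction_loss(target, synth_frames):
    """Per-pixel minimum over all warped support frames (Eq. 3), then averaged over pixels."""
    losses = torch.stack([photometric_loss(target, s) for s in synth_frames], dim=0)
    return losses.min(dim=0).values.mean()
```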
Automasking simply discards pixels where the photometric loss for the unwarped target frame is lower than the loss for the synthesized view, given by

$$\mathbb{M} = \left[ \min_{k} \mathcal{L}_{ph}\!\left(\mathbf{I}_t, \mathbf{I}'_{t+k}\right) < \min_{k} \mathcal{L}_{ph}\!\left(\mathbf{I}_t, \mathbf{I}_{t+k}\right) \right], \quad (4)$$

where $[\cdot]$ represents the Iverson brackets. Additional results showing the effectiveness of the minimum reconstruction loss and automasking can be found in Appendix E. This reconstruction loss is complemented by the common edge-aware smoothness regularization [19]. These networks and losses constitute the core baseline required to train the desired zero-shot depth estimation models. To improve existing performance and generalization, we incorporate several new components into the pipeline.

4.2. Learning Camera Intrinsics

As discussed in Section 3, we use COLMAP to estimate camera intrinsics for each dataset video. Whilst this is significantly less computationally demanding than obtaining full reconstructions, it introduces additional pre-processing requirements. Eliminating this step would simplify dataset collection and allow for even easier scale-up. We take inspiration from [23, 12] and predict camera intrinsics using the pose network $\Phi_P$. This is achieved by adding two decoder branches with the same architecture used to predict pose.

Figure 3: Generalizing to Image Shapes. (a) Image, (b) Ground-truth, (c) Base (\u03b4.25 = 61.82%), (d) Distorted (\u03b4.25 = 71.12%). The same model, at different resolutions, can produce significantly different predictions. Distorting the image (and resizing the prediction) can improve performance, despite introducing artefacts. Note the improved boundary sharpness in (d).

The modified network is defined as
$\hat{\mathbf{P}}_{t+k}, \mathbf{f}_{xy}, \mathbf{c}_{xy} = \Phi_P(\mathbf{I}_t \oplus \mathbf{I}_{t+k})$, where $\mathbf{f}_{xy}$ and $\mathbf{c}_{xy}$ are the focal lengths and principal point. Both quantities are predicted as normalized and scaled by the image shape prior to combining them into $\mathbf{K}$. The focal length decoder uses a softplus activation to guarantee a positive output. The principal point instead uses a sigmoid, under the assumption that it will lie within the image. All parameters\u2014depth, pose and intrinsics\u2014are optimized simultaneously, as they all establish the correspondences across support frames, given by (1).

4.3. Aspect Ratio Augmentation

The depth network is commonly a fully convolutional network that can process images of any size. In practice, these networks can overfit to the training size, resulting in poor out-of-dataset performance. Figure 3 shows this effect, where resizing to the training resolution improves results, despite introducing stretching or squashing distortions. Since both SlowTV and Mannequin Challenge were sourced from YouTube, they feature the common widescreen aspect ratio (16:9). However, the objective is to train a model that can be easily applied to real-world settings in a zero-shot fashion. To this end, we propose an aspect ratio augmentation (AR-Aug) that randomizes the image shape during training, increasing the data diversity. AR-Aug has two components: centre cropping and resizing. The cropping stage uniformly samples from a set of predefined aspect ratios. A random crop is generated using this aspect ratio, covering 50-100% of the original height or width. By definition, the sampled crop will be smaller than the original image and of different shape. The crop is therefore resized to match the number of pixels in the original image. Appendix B details the full set of aspect ratios used and shows training images obtained using this procedure. AR-Aug has the effect of drastically increasing the distribution of image shapes, aspect ratios and object scales seen by the network during training. As shown in Section 5.4, this greatly increases performance, especially when evaluating on datasets with different image sizes.

5. Results

We evaluate the proposed models in a variety of settings and datasets, including in-distribution and zero-shot. Since the trained model is purely monocular, the predicted depth is in arbitrary units. Instead of using traditional median alignment [77, 20], we follow MiDaS [47] and estimate scale and shift alignment parameters based on a least-squares criterion. We apply the same strategy to every baseline. Results using median alignment are shown in Appendix F.
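For completeness, the least-squares scale-and-shift alignment used at evaluation time can be written as a small closed-form solve over valid pixels. The sketch below follows the general MiDaS-style procedure described above; whether the fit is performed in depth or disparity space is an implementation detail assumed here.

```python
import torch


def align_scale_shift(pred: torch.Tensor, gt: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Solve min_{s,t} || s * pred + t - gt ||^2 over valid pixels and return s * pred + t.

    pred, gt: (H, W) prediction and ground truth; mask: (H, W) boolean validity map.
    """
    x, y = pred[mask].flatten(), gt[mask].flatten()
    A = torch.stack([x, torch.ones_like(x)], dim=1)        # (N, 2) design matrix [pred, 1]
    sol = torch.linalg.lstsq(A, y.unsqueeze(1)).solution   # (2, 1): scale and shift
    s, t = sol[0, 0], sol[1, 0]
    return s * pred + t
```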
Note that datasets with SfM ground-truth (e.g. Mannequin Challenge) are also scaleless and would require this step even for techniques that predict metric depth. 5.1. Implementation Details The proposed models are implemented in PyTorch [43] using the baselines from the Monodepth Benchmark [56]. The depth network uses a pretrained ConvNeXt-B backbone [33, 66] and a DispNet decoder [40, 19]. The pose network instead uses ConvNeXt-T for efficiency. Each model variant is trained with three random seeds and we report average performance. This improves the reliability of the results and reduces the impact of non-determinism. The final models were trained on a combination of SlowTV (1.7M), Mannequin Challenge (115k) and Kitti Eigen-Benchmark (71k). To make the duration of each epoch tractable and balance the contribution of each dataset, we fix the number of images per epoch to 30k, 15k and 15k, respectively. The subset sampled from each dataset varies with each epoch to ensure a high data diversity. The models were trained for 60 epochs using AdamW [34] with weight decay 10\u22123 and a base learning rate of 10\u22124, decreased by a factor of 10 for the final 20 epochs. Empirically, we found that linearly warming up the learning rate for the first few epochs stabilized learning and prevented model collapse. We use a batch size of 4 and train the models on a single NVIDIA GeForce RTX 3090. SlowTV and Mannequin Challenge use a base image size of 384 \u00d7 640, while Kitti uses 192 \u00d7 640. As is common, we apply horizontal flipping and colour jittering augmentations with 50% probability. AR-Aug is applied with 70% probability, sampling from 16 predefined aspect ratios. The full set of aspect ratios can be found in Appendix B. Since existing models are trained exclusively on automotive data, most of the motion occurs in a straight-line and forward-facing direction. It is therefore common practice to force the network to always make a forward-motion prediction by reversing the target and support frame if required. Handheld videos, while still primarily featuring forward motion, also exhibit more complex motion patterns. As such, removing the forward motion constraint results in a more flexible model that improves performance. Similarly, existing models are trained with a fixed set of support frames\u2014usually previous and next. Since SlowTV and Mannequin Challenge are mostly composed of handheld videos, the change from frame-to-frame is greatly reduced. We make the model more robust to different motion scales and appearance changes by randomizing the separation between target and support frames. In general, we sample such that handheld videos use a wider time-gap between frames, while automotive has a small time-gap to ensure there is significant overlap between frames. As shown later, this leads to further improvements and greater flexibility. 5.2. Baselines We use the SSL baselines from [56], trained on Kitti Eigen-Zhou with a ConvNeXt-B backbone. We minimize architecture changes and training settings w.r.t. the baselines to ensure models are comparable and improvements are solely due to the contributions from this paper. We also report results for recent State-of-the-Art (SotA) supervised MDE approaches, namely MiDaS [47], DPT [46] and NeWCRFs [72]. MiDaS and DPT were trained on a large collection of supervised datasets that do not overlap with our testing datasets (unless otherwise indicated). As such, these models are also evaluated in a zero-shot fashion. 
We use the pre-trained models and pre-processing provided by the PyTorch Hub. NeWCRFs provides separate indoor/outdoor models, trained on Kitti and NYUD-v2 respectively. We evaluate the corresponding model in a zero-shot manner depending on the dataset category. Despite predicting metric depth, we apply scale and shift alignment to ensure results are comparable.

5.3. Evaluation Metrics

We report the following metrics per dataset:
Rel. Absolute relative error (%) between target $y$ and prediction $\hat{y}$ as $\mathrm{Rel} = \Sigma \, |y - \hat{y}| / y$.
Delta. Prediction threshold accuracy (%) as $\delta_{.25} = \Sigma \, \big( \max(\hat{y}/y, \, y/\hat{y}) < 1.25 \big)$.
F. Pointcloud reconstruction F-Score [42] (%) as $F = 2PR / (P + R)$, where $P$ and $R$ are the Precision and Recall of the 3-D reconstruction with a correctness threshold of 10cm.

Table 2: Model Complexity. Supervised SotA approaches make use of computationally expensive transformer backbones. Despite being of equivalent complexity to the SSL baselines [56], our model closes the gap to supervised performance.
Backbone MParam\u2193 FPS\u2191
KBR (Ours) ConvNeXt-B [33] 92.65 61.50
MiDaS [47] ResNeXt-101 [67] 105.36 51.38
DPT [46] ViT-L [15] 344.06 14.54
DPT [46] BEiT-L [3] 345.01 9.60
NeWCRFs [73] Swin [32] 270.44 21.61
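The per-dataset image-based metrics defined above can be computed directly from the aligned predictions. A minimal sketch is shown below; the pointcloud-based F-Score is omitted since it requires reconstructing and matching 3-D points.

```python
import torch


def depth_metrics(pred: torch.Tensor, gt: torch.Tensor, mask: torch.Tensor) -> dict:
    """Absolute relative error and delta < 1.25 threshold accuracy (both in %) over valid pixels."""
    p, g = pred[mask], gt[mask]
    rel = ((p - g).abs() / g).mean()
    delta = (torch.maximum(p / g, g / p) < 1.25).float().mean()
    return {"AbsRel": 100 * rel.item(), "delta.25": 100 * delta.item()}
```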
As shown in Table 4 (In-Distribution), all variants of the proposed models outperform the improved SSL baselines from [56]. Even more surprising, our models also outperform most supervised baselines on Kitti, despite DPT-BEiT Table 3: Leave-one-out Ablation. We study the contribution of each proposed component. Randomizing the support frames, learning camera parameters and augmenting the image shape all contribute to improving overall performance. Multi-task Kitti Eigen-Zhou Mannequin SYNS (Val) R\u2193\u2206\u2191 Rel\u2193F\u2191 \u03b4.25\u2191 Rel\u2193F\u2191 \u03b4.25\u2191 Rel\u2193F\u2191 \u03b4.25\u2191 Full 2.20 0.00 6.16 57.60 95.52 14.39 17.67 82.23 20.34 17.08 69.88 Fwd \\ protect \\hat {\\ensuremath {\\textbf {\\MakeUppercase []{P}}}} 2.60 -0.08 6.18 57.47 95.47 14.36 17.52 82.22 20.24 17.20 69.52 \\protect \\ensuremath {k} = \u00b11 2.30 -0.60 6.03 58.23 95.67 14.17 17.92 82.43 21.04 16.04 68.33 Fixed \\protect \\ensuremath {\\textbf {\\MakeUppercase []{K}}} 4.00 -1.46 6.30 56.93 95.38 14.95 17.11 81.00 20.46 17.11 69.56 No AR-Aug 4.50 -4.72 7.42 52.89 93.99 14.32 17.87 82.16 21.32 16.10 67.64 None 5.40 -5.52 7.47 51.83 94.19 14.62 17.01 81.29 21.21 16.72 67.03 Highlighted cells indicate zero-shot results. being trained on it. NeWCRFs is the only supervised model to outperform ours by a slight margin. This may be due to the additional automotive data from SlowTV, which increases the variety and improves generalization. Finally, our model outperforms even the supervised SotA on Mannequin Challenge F-Score. 5.6. Zero-shot Generalization The core of our evaluation takes place in a zero-shot setting, i.e. models are not fine-tuned. This demonstrates the capability of our model to generalize to previously unseen environments. While several existing SS-MDE approaches provide zero-shot evaluations, this is usually limited to CityScapes [13] and Make3D [50]. These datasets provide low-quality ground-truths and focus exclusively on urban environments similar to Kitti. We instead opt for a collection of challenging datasets, constituting a mixture of urban, natural, synthetic and indoor scenes. Outdoor. These results can be found in Table 4 (Outdoor), where all evaluated models are zero-shot. Once again, our models outperform the SSL baselines in every metric, across all datasets. NeWCRFs is capable of generalizing to other automotive datasets and provides good performance on DDAD. However, our model adapts better to complex synthetic (Sintel) and natural (SYNS-Patches) scenes. Despite being fully self-supervised and requiring no depth annotations during training, our model outperforms MiDaS and DPT-ViT. DPT leverages expensive transformer-based backbones and additional datasets to improve performance. Indoor. Table 4 (Indoor) shows results for all indoor datasets. Note that NeWCRFs was trained exclusively on NYUD-v2, while DPT-BEiT used it as part of its training collection. As such, this subset of results is not zero-shot. As with the outdoor evaluations, our model provides significant improvements over all existing SSL approaches. This is due to the focus on Kitti and the lack of indoor training data, highlighting the need for more varied training sources. However, the supervised models still provide improvements over our method, likely due to the additional indoor datasets used for training. Once again, we emphasize that our model is fully self-supervised. Despite this, we close the performance gap on complex supervised models. Visualizations. 
We visualize the network predictions in Figure 4. As seen, the proposed model clearly outperforms the best SSL baseline. This is most noticeable in indoor settings, where the baseline treats human faces as background. In many cases, our self-supervised model provides similar or better depth maps than the supervised baselines. Once again, these rely on ground-truth annotations and expensive transformer-based backbones. Meanwhile, our model simply requires curated collections of freely-available monocular YouTube videos, without even camera intrinsics.

Table 4: Results. Outdoor and Indoor represent zero-shot evaluations. We outperform all SS-MDE baselines [56] (top block). In many cases, our model performs on par with supervised SotA (bottom block), without requiring ground-truth depth annotations for training.
In-Distribution (Kitti, Mannequin) | Outdoor (DDAD, DIODE, Sintel, SYNS) | Indoor (DIODE, NYUD-v2, TUM) | Multi-task
Train Rank\u2193 \u2206\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191
Garg [17] S 7.58 -38.52 7.65 53.28 27.63 9.08 26.93 7.80 39.60 44.15 39.41 31.93 26.05 15.17 19.18 70.54 22.49 59.60 23.53 62.82
Monodepth2 [20] MS 7.74 -38.34 7.90 50.50 27.44 7.97 24.31 8.25 39.53 44.71 40.09 29.49 25.31 14.83 19.40 70.42 22.41 60.09 23.50 62.36
DiffNet [76] MS 7.05 -36.84 7.98 49.60 27.46 7.76 23.03 9.43 38.87 46.14 39.93 28.77 25.09 14.64 19.11 70.94 21.82 61.30 23.21 63.08
HR-Depth [38] MS 5.95 -35.16 7.70 51.49 27.01 8.39 23.13 9.94 39.09 45.60 38.82 30.90 25.07 15.48 18.93 71.19 21.74 61.18 23.18 63.50
KBR (Ours) M 3.37 0.00 6.84 56.17 14.39 17.67 12.63 20.21 33.49 57.08 33.34 40.81 22.40 18.50 14.91 80.77 11.59 87.23 15.02 80.86
MiDaS [47] D 4.89 -11.84 13.71 33.44 16.96 12.62 16.00 15.41 32.72 59.04 30.95 39.55 26.94 14.69 10.71 88.42 10.48 89.59 14.43 82.35
DPT-ViT [46] D 3.32 -1.74 10.98 40.56 15.52 14.46 15.49 18.25 32.59 59.82 25.53 43.57 23.24 17.44 9.60 91.38 10.10 90.10 12.68 86.25
DPT-BEiT [46] D 1.84 11.12 9.45 44.22 13.55 16.58 10.70 22.63 31.08 61.51 21.38 46.46 21.47 17.73 7.89 93.34 5.40 96.54 10.45 89.68
NeWCRFs [72] D 3.26 1.03 5.23 59.20 18.20 15.17 9.59 23.02 37.01 49.66 39.25 32.43 24.28 16.76 14.05 84.95 6.22 95.58 14.63 82.95
Highlighted cells are NOT zero-shot results. S=Stereo, M=Monocular, D=Ground-truth Depth.

Figure 4: Zero-shot SS-MDE. The proposed model adapts to a wide range of datasets and environments. It greatly outperforms the updated self-supervised baselines from [56, 38] and performs on-par with SotA supervised baselines [47, 46, 73], whilst being more efficient. Middle=Self-Supervised \u2013 Bottom=Supervised.

Failure Cases. Our approach does not use explicit motion masks to handle dynamic objects. Instead, we rely only on the minimum reconstruction loss and automasking [20]. Whilst this improves the robustness, it can be seen how dynamic objects such as cars can cause incorrect predictions (e.g. Kitti or DDAD). This represents one of the most important avenues for future research. Further discussions regarding these failure cases and additional visualizations can be found in Appendix G.

MDEC-2. The Monocular Depth Estimation Challenge [54, 55] tested zero-shot generalization on SYNS-Patches. We compare our model to all submissions from the latest edition (CVPR2023).
Figure 5: MDEC-2 [55]. Our submission (jspenmar2) was top of the MDEC-2 leaderboard in F-Score reconstruction. The challenge evaluated zero-shot performance on SYNS-Patches for both supervised and self-supervised approaches.

As seen in Figure 5, our method (jspenmar2) achieves the highest F-Score reconstruction and is top-3 in all metrics except AbsRel and Edge-Accuracy. Once again, this illustrates the benefits of SlowTV, which contains large quantities of natural data not present in other datasets.

5.7. Map-Free Relocalization

Map-free relocalization is the task of localizing a target image using a single reference image. This is contrary to traditional pipelines, which require large image collections to first build a scene-specific map, such as SfM or training a CNN. Recent work [59, 2] has shown the benefit of incorporating metric MDE into feature matching pipelines to resolve the ambiguous scale of the predicted pose. We evaluate all depth models on the MapFreeReloc benchmark [2] validation split, serving as an example real-world task. The feature-matching baseline [2] consists of LoFTR [58] correspondences, a PnP solver and DPT [46] fine-tuned on either Kitti or NYUD-v2. Since this benchmark requires metric depth but does not provide ground-truth, we align all models to the baseline fine-tuned DPT predictions using least-squares. We report the metrics provided by the benchmark authors. This includes translation (meters), rotation (deg) and reprojection (px) errors. Pose Precision/AUC were computed with an error threshold of 25 cm & 5\u25e6, while Reprojection uses a threshold of 90px. As shown in Table 5, our method has the best performance across all SS-MDE approaches by a large margin. Our performance is on par with the supervised SotA, without requiring ground-truth supervision. This further demonstrates the benefits of the proposed SlowTV dataset and its applicability to real-world scenarios. Interestingly, we find that the original DPT models perform better than their fine-tuned counterparts, despite using these as the metric scale reference. This suggests that the fine-tuning procedure of [2] may provide metric scale at the cost of generality. However, this highlights the need for models that predict accurate metric depth, rather than only relative depth.

Table 5: Map-free Relocalization [2]. We incorporate KBR into a feature-matching pipeline for single-image relocalization. We once again outperform the SS-MDE baselines in every metric and perform on par with supervised SotA.
Pose | VCRE
Train Trans\u2193 Rot\u2193 P\u2191 AUC\u2191 Error\u2193 P\u2191 AUC\u2191
Garg [17] S 2.96 52.57 5.43 17.15 188.20 24.84 51.61
Monodepth2 [19] MS 2.95 52.92 5.50 17.22 189.67 24.38 50.63
DiffNet [76] MS 2.97 53.19 5.65 17.71 188.80 24.78 51.24
HR-Depth [38] MS 2.94 52.95 5.67 17.95 187.83 25.06 51.52
KBR (Ours) M 2.63 49.01 11.54 32.02 181.21 29.96 58.89
MiDaS [47] D 2.60 46.92 11.39 30.44 180.64 30.45 59.72
DPT-ViT [46] D 2.56 45.62 11.27 30.92 181.34 30.60 60.03
DPT-BEiT [46] D 2.49 44.99 12.56 32.48 181.67 32.46 62.03
NeWCRFs [72] D 2.89 51.92 6.69 20.77 184.63 25.89 52.93
DPT-NYUD [2] D+FT 2.67 47.66 9.17 26.46 184.53 28.68 56.87
DPT-Kitti [2] D+FT 2.66 49.21 10.86 29.99 178.49 28.37 56.86
Trans=meters, Rot=deg, VCRE=px, Precision=%, AUC=%.

6. Conclusion

This paper has presented the first approach to SS-MDE capable of generalizing across many datasets, including a wide range of indoor and outdoor environments.
We demonstrated that our models significantly outperform existing self-supervised models, even in the automotive domain where they are currently trained. By leveraging the large quantity and variety of data in the new SlowTV dataset, we are able to close the gap between supervised and self-supervised performance. Additional components, such as the novel AR-Aug, randomized support frames and more flexible pose estimation, further improve the performance and zero-shot generalization of the proposed models. Future work should explore alternative sources of data to incorporate even more scene variety. In particular, additional indoor data may significantly reduce the remaining gap between self-supervised and supervised approaches. Another key direction is improving the accuracy in dynamic scenes. A promising approach would be using optical flow to refine the estimated correspondences. This could be incorporated in a self-supervised manner, without requiring semantic segmentation or motion masks. However, it introduces additional costs due to the increased computational requirements from the new network. Developing models capable of predicting metric depth would further increase their applicability to real-world applications. Finally, as the diversity of training environments increases, it will become crucial to further diversify the benchmarks used to evaluate these models.

Acknowledgements This work was partially funded by the EPSRC under grant agreements EP/S016317/1 & EP/S035761/1.

A. SlowTV Dataset

Figure 7 shows a frame from each SlowTV video, while Figure 8 shows their map location. Sequences [00-27] are hiking scenes, [28-30] scuba diving and [31-39] driving. As seen, this dataset provides an incredible diversity of environments and locations, enabling us to train models capable of generalizing to previously unseen scene types.

B. Aspect Ratio Augmentation

To make the models invariant to the training image size, we propose to incorporate an aspect ratio augmentation. For more information see Section 4.3 in the main paper. Sample training images obtained using this procedure can be found in Figure 6. The centre crop is uniformly sampled from a set of predetermined aspect ratios:
\u2022 Portrait: 6:13, 9:16, 3:5, 2:3, 4:5, 1:1
\u2022 Landscape: 5:4, 4:3, 3:2, 14:9, 5:3, 16:9, 2:1, 24:10, 33:10, 18:5

C. Evaluation Datasets

Kitti Eigen-Benchmark [18]. (Test: 652) Subset of the common Kitti Eigen split with corrected LiDAR [61].
Kitti Eigen-Zhou [18]. (Val: 700) Subset of the Kitti Eigen-Zhou val split with corrected LiDAR [61].
Mannequin Challenge [18]. (Test: 1k) Subset of the original test split, using COLMAP [51] depth reconstructions.
SYNS-Patches [1, 56]. (Val: 400, Test: 775) Official val and test splits consisting of dense LiDAR maps.
DDAD [24]. (Test: 1k) Subset of the official val split, featuring LiDAR maps with an increased range up to 250m.
Sintel [18]. (Test: 1064) Official test split, consisting of synthetic image & depth pairs from highly dynamic scenes.
DIODE Indoors [62]. (Test: 325) Official val split with dense LiDAR depth maps.
DIODE Outdoors [62]. (Test: 446) Official val split with dense LiDAR depth maps.
NYUD-v2 [41]. (Test: 654) Official test split collected using a Kinect RGB-D camera.
TUM-RGBD [18]. (Test: 2.5k) Subset of dynamic scenes with moving people also collected using a Kinect.

D. Learning Camera Intrinsics

Estimating the intrinsic parameters is required when training with uncalibrated cameras.
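A minimal sketch of the learned-intrinsics heads described in Section 4.2 is shown below: normalized focal lengths use a softplus activation, the principal point uses a sigmoid, and both are scaled by the image shape to assemble K. The two 1x1-convolution heads and the spatial averaging are illustrative assumptions; only the activations and the normalization follow the paper.

```python
import torch
import torch.nn as nn


class IntrinsicsHead(nn.Module):
    """Predict normalized (fx, fy) and (cx, cy) from pose-encoder features and assemble K."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.focal = nn.Sequential(nn.Conv2d(in_channels, 2, 1), nn.Softplus())   # positive focal lengths
        self.centre = nn.Sequential(nn.Conv2d(in_channels, 2, 1), nn.Sigmoid())   # principal point inside the image

    def forward(self, feats: torch.Tensor, height: int, width: int) -> torch.Tensor:
        f = self.focal(feats).mean(dim=(2, 3))    # (B, 2) normalized focal lengths
        c = self.centre(feats).mean(dim=(2, 3))   # (B, 2) normalized principal point
        scale = feats.new_tensor([width, height])
        fx, fy = (f * scale).unbind(dim=1)
        cx, cy = (c * scale).unbind(dim=1)
        K = torch.zeros(feats.shape[0], 3, 3, device=feats.device)
        K[:, 0, 0], K[:, 1, 1] = fx, fy
        K[:, 0, 2], K[:, 1, 2] = cx, cy
        K[:, 2, 2] = 1.0
        return K
```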
However, this procedure can be applied even if the camera parameters are known. Table 6 shows results when training on either Kitti Eigen-Benchmark or Mannequin Challenge. If the dataset provides accurately calibrated cameras (Kitti), self-supervised learning of the intrinsics is on par with using the ground-truth parameters. However, when the ground-truth parameters are estimated using COLMAP [51], learning the intrinsics can slightly improve performance.

Figure 6: AR-Aug. Additional augmentations used to diversify the variety of image shapes and object scales seen by the network. Panels: (a) Original (16:9), (b) 4:5, (c) Original (16:9), (d) 5:3, (e) Original (16:9), (f) 2:1, (g) Original (16:9), (h) 1:1.

Table 6: Learning Camera Intrinsics. Performance when training on a single dataset (Kitti or Mannequin Challenge) and learning camera intrinsics. If the cameras are not perfectly calibrated, learning the intrinsics can improve accuracy.
Kitti Eigen-Zhou: Rel\u2193 F\u2191 \u03b4.25\u2191
Baseline 5.69 60.88 95.89
Learn $\mathbf{K}$ 5.68 60.81 95.90
Mannequin: Rel\u2193 F\u2191 \u03b4.25\u2191
Baseline 16.66 14.20 77.18
Learn $\mathbf{K}$ 16.12 14.77 78.40

Figure 7: SlowTV Dataset. We show one frame per video from the proposed SlowTV. The dataset contains a diverse set of environments in a range of environmental conditions. The final dataset has a total of 1.7M images, with 1.15M natural, 400k driving and 180k underwater.

Figure 8: SlowTV Map. Distribution of locations in the proposed dataset. Green=Natural, Red=Driving, Blue=Underwater.

E. Dynamic Objects

MDE models trained exclusively using monocular supervision are prone to artefacts from dynamic objects. For instance, vehicles moving at similar speeds to the camera can produce holes of infinite depth due to their static appearance across images. Meanwhile, other dynamic objects can result in underestimated depth when moving towards the camera, or overestimated depth when moving away from it. This is due to the additional motion causing incorrect correspondences in the warping procedure. Existing approaches that address these dynamic objects [23, 9, 14] rely on additional labels such as semantic or instance segmentation. We instead opt for the losses proposed by Monodepth2 [20] as a simpler proxy without increased computation or label requirements. We test the effectiveness of these constraints on a smaller subset of all three training datasets.

Table 7: Monodepth2 [20] Losses. The minimum reconstruction loss and automasking from Monodepth2 serve as valuable proxies to increase robustness to dynamic objects, while remaining simple and efficient.
Multi-task | Kitti | Mannequin | DDAD | DIODE | Sintel | SYNS | DIODE | NYUD-v2 | TUM
Rank\u2193 \u2206\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 F\u2191 Rel\u2193 F\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191 Rel\u2193 \u03b4.25\u2191
Baseline 1.89 0.00 9.00 53.50 16.89 14.66 23.57 11.13 35.99 52.70 35.33 38.15 25.47 15.73 17.91 75.03 21.68 71.41 17.69 75.67
MinRec+Automask 1.11 7.01 6.50 55.62 16.96 14.48 18.49 11.64 35.62 52.95 34.97 38.83 24.44 16.25 16.85 76.50 14.27 80.54 17.23 76.23

Figure 9: Monodepth2 Losses. Monodepth2 [20] reduces the presence of holes of infinite depth and dynamic object artefacts. The sharpness of object boundaries is also improved due to the refined correspondences from the minimum reconstruction loss.
These results can be found in Table 7 and Figure 9. Despite not explicitly modelling dynamic objects, Monodepth2 drastically increases the accuracy and robustness. This can be seen both in the improved metrics and the reduction in visual artefacts.

F. Median Alignment Results. Table 8 shows results when applying median depth alignment between prediction and ground-truth. As expected, this generally results in worse performance than estimating both scale and shift parameters. This is particularly noticeable for MiDaS, DPT and the SSL baselines. (A small illustrative sketch of the two alignment strategies is given after the failure-case discussion below.)

G. Failure Cases. Whilst representing a significant milestone in SS-MDE, our model still suffers from several failure cases. We show these in Figure 10. For instance, Kitti shows a car estimated as a hole of infinite depth, despite training with the minimum reconstruction loss and automasking [20]. Several visualizations are also characterized by texture-copy artefacts. In some cases, our models estimated incorrect relative object positions (e.g. Sintel or DDAD). An interesting failure case for all approaches is highly-reflective surfaces, such as mirrors or TVs. These are challenging because they do not violate the photometric error, and obtaining LiDAR or SfM ground-truth for them is highly challenging. Finally, due to the strong prior for upright images, our model struggles to adapt to extreme rotations (TUM-RGBD). This could be mitigated with additional augmentations. It is also worth pointing out that, in the vast majority of these cases, our model outperforms the SSL baselines.

Table 8: Median-Scaling Results. This represents the common self-supervised MDE (SS-MDE) evaluation procedure [77]. Removing the shift alignment reduces performance for all approaches. Our method still outperforms all existing SS-MDE models, and NeWCRFs in many cases.
In-Distribution / Outdoor / Indoor
Kitti Mannequin | DDAD DIODE Sintel SYNS | DIODE NYUD-v2 TUM
Train | Rel↓ F↑ | Rel↓ F↑ | Rel↓ F↑ | Rel↓ δ.25↑ | Rel↓ F↑ | Rel↓ F↑ | Rel↓ δ.25↑ | Rel↓ δ.25↑ | Rel↓ δ.25↑
Garg [17] | S | 7.65 53.28 | 34.55 9.29 | 26.77 4.77 | 57.87 42.85 | 53.16 30.98 | 31.68 13.58 | 30.63 51.00 | 26.78 54.29 | 27.37 55.26
Monodepth2 [20] | MS | 7.90 50.50 | 35.88 8.18 | 25.46 4.77 | 57.61 43.21 | 54.40 30.11 | 30.05 13.28 | 33.51 47.49 | 29.87 50.08 | 30.59 49.82
DiffNet [76] | MS | 7.98 49.60 | 35.50 8.15 | 24.17 4.75 | 55.68 45.37 | 55.23 29.44 | 29.75 13.41 | 28.67 53.82 | 26.62 54.69 | 28.56 53.07
HR-Depth [38] | MS | 7.70 51.49 | 35.89 8.62 | 24.01 5.08 | 57.88 43.92 | 53.91 30.89 | 29.87 14.03 | 32.88 47.67 | 27.32 53.06 | 29.22 52.31
KBR (Ours) | M | 7.23 54.63 | 18.73 15.04 | 14.01 14.01 | 43.80 60.84 | 37.06 36.01 | 24.92 16.49 | 18.88 72.09 | 13.27 83.65 | 16.60 76.48
MiDaS [47] | D | 18.45 20.13 | 26.02 10.61 | 18.38 8.28 | 48.63 60.15 | 39.09 32.72 | 35.30 9.18 | 18.08 74.48 | 23.11 69.67 | 17.75 76.99
DPT-ViT [46] | D | 14.23 36.25 | 28.54 11.38 | 17.83 8.99 | 72.46 49.09 | 128.86 29.58 | 32.69 12.93 | 36.82 55.15 | 24.82 67.95 | 24.33 78.16
DPT-BEiT [46] | D | 18.20 37.46 | 30.79 12.58 | 15.39 11.78 | 70.30 50.03 | 60.20 29.54 | 31.09 13.76 | 51.07 53.11 | 75.32 42.91 | 25.27 83.07
NeWCRFs [72] | D | 5.55 56.45 | 22.15 13.68 | 11.87 13.44 | 50.52 51.16 | 48.42 32.30 | 27.79 14.50 | 16.15 79.52 | 7.00 94.44 | 14.93 80.63
Highlighted cells are NOT zero-shot results. S=Stereo, M=Monocular, D=Ground-truth Depth.

Figure 10: Failure Cases. (Columns: Kitti, SYNS, Sintel, Mannequin, DDAD, DIODE, DIODE, NYUD, TUM; rows: Image, GT, HR-Depth, Ours, MiDaS, DPT-BEiT, NeWCRFs. Middle=Self-Supervised; Bottom=Supervised.) The proposed model occasionally produces holes of infinite depth or texture-copy artefacts. However, complex regions such as foliage or boundaries tend to be oversmoothed by all approaches. Finally, the upright prior in training data makes the model less robust to strong rotations.
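As referenced in Section F above, the difference between median alignment and scale-and-shift alignment can be illustrated with the following minimal NumPy sketch; the least-squares fit and variable names are assumptions, not the paper's evaluation code.

```python
import numpy as np

def median_align(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Scale-only alignment: multiply the prediction by median(gt) / median(pred)."""
    return pred * (np.median(gt) / np.median(pred))

def scale_shift_align(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Least-squares scale-and-shift alignment, pred * s + t ~= gt.

    Minimal sketch of the affine alignment used for relative-depth methods;
    robust variants (e.g. trimming outliers) are common in practice.
    """
    A = np.stack([pred, np.ones_like(pred)], axis=1)   # (N, 2) design matrix
    (s, t), *_ = np.linalg.lstsq(A, gt, rcond=None)
    return pred * s + t

# Usage on valid (non-zero ground-truth) pixels only, e.g.:
# aligned = scale_shift_align(pred[mask], gt[mask])
```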
", "introduction": "Reliably reconstructing the 3-D structure of the environment is a crucial component of many computer vision pipelines, including autonomous driving, robotics, augmented reality and scene understanding. Despite being an inherently ill-posed task, monocular depth estimation (MDE) has become of great interest due to its flexibility and applicability to many fields. While traditional supervised methods achieve impressive results, they are limited both by the availability and quality of annotated datasets. LiDAR data is expensive to collect and frequently exhibits boundary artefacts due to motion correction. Meanwhile, Structure-from-Motion (SfM) is computationally expensive and can produce noisy, incomplete or incorrect reconstructions. Self-supervised learning (SSL) instead leverages the photometric consistency across frames to simultaneously learn depth and Visual Odometry (VO) without ground-truth annotations. As only stereo or monocular video is required, SSL has the potential to scale to much larger data quantities. Unfortunately, existing SS-MDE approaches have relied exclusively on automotive data [18, 13, 24]. The limited diversity of training environments results in models incapable of generalizing to different scene types (e.g. natural or indoors) or even other automotive datasets. Moreover, despite being fully convolutional, these models struggle to adapt to different image sizes. This further reduces performance on sources other than the original dataset. Inspired by the recent success of supervised MDE [36, 47, 46], we develop an SS-MDE model capable of performing zero-shot generalization beyond the automotive domain. In doing so, we aim to bridge the performance gap between supervision and self-supervision. Unfortunately, most existing supervised datasets are unsuitable for SSL, as they consist of isolated image and depth pairs. On the other hand, existing SSL datasets focus only on the automotive domain. To overcome this, we make use of SlowTV as an untapped source of high-quality data. SlowTV is a television programming approach originating from Norway consisting of long, uninterrupted shots of relaxing events, such as train or boat journeys, nature hikes and driving. This represents an ideal training source for SS-MDE, as it provides large quantities of data from highly diverse environments, usually with smooth motion and limited dynamic objects. To improve the diversity of available data for SS-MDE, we have collated the SlowTV dataset, consisting of 1.7M frames from 40 videos curated from YouTube. This dataset consists of three main categories (natural, driving and underwater), each featuring a rich and diverse set of scenes. We combine SlowTV with Mannequin Challenge [31] and Kitti [18] to train our proposed models. SlowTV provides a general distribution across a wide range of natural scenes, while Mannequin Challenge covers indoor scenes with humans and Kitti focuses on urban scenes. The resulting models are trained with an order of magnitude more data than any existing SS-MDE approach. Contrary to many supervised approaches [4, 72], we train a single model capable of generalizing to all scene types, rather than separate indoor/outdoor models.
This closely resembles the zero-shot evaluation proposed by MiDaS [47] for supervised MDE. The contributions of this paper can be summarized as: 1. We introduce a novel SS-MDE dataset of SlowTV YouTube videos, consisting of 1.7M images. It fea- tures a diverse range of environments including world- wide seasonal hiking, scenic driving and scuba diving. 2. We leverage SlowTV to train zero-shot models capable of adapting to a wide range of scenes. The models are evaluated on 7 datasets unseen during training. 3. We show that existing models fail to generalize to dif- ferent image shapes and propose an aspect ratio aug- mentation to mitigate this. 4. We greatly reduce the performance gap w.r.t. super- vised models, improving the applicability of SS-MDE to the real-world. We make the dataset, pretrained model and code available to the public." }, { "url": "http://arxiv.org/abs/2304.07051v3", "title": "The Second Monocular Depth Estimation Challenge", "abstract": "This paper discusses the results for the second edition of the Monocular\nDepth Estimation Challenge (MDEC). This edition was open to methods using any\nform of supervision, including fully-supervised, self-supervised, multi-task or\nproxy depth. The challenge was based around the SYNS-Patches dataset, which\nfeatures a wide diversity of environments with high-quality dense ground-truth.\nThis includes complex natural environments, e.g. forests or fields, which are\ngreatly underrepresented in current benchmarks.\n The challenge received eight unique submissions that outperformed the\nprovided SotA baseline on any of the pointcloud- or image-based metrics. The\ntop supervised submission improved relative F-Score by 27.62%, while the top\nself-supervised improved it by 16.61%. Supervised submissions generally\nleveraged large collections of datasets to improve data diversity.\nSelf-supervised submissions instead updated the network architecture and\npretrained backbones. These results represent a significant progress in the\nfield, while highlighting avenues for future research, such as reducing\ninterpolation artifacts at depth boundaries, improving self-supervised indoor\nperformance and overall natural image accuracy.", "authors": "Jaime Spencer, C. Stella Qian, Michaela Trescakova, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James Elder, Richard Bowden, Ali Anwar, Hao Chen, Xiaozhi Chen, Kai Cheng, Yuchao Dai, Huynh Thai Hoa, Sadat Hossain, Jianmian Huang, Mohan Jing, Bo Li, Chao Li, Baojun Li, Zhiwen Liu, Stefano Mattoccia, Siegfried Mercelis, Myungwoo Nam, Matteo Poggi, Xiaohua Qi, Jiahui Ren, Yang Tang, Fabio Tosi, Linh Trinh, S. M. Nadim Uddin, Khan Muhammad Umair, Kaixuan Wang, Yufei Wang, Yixing Wang, Mochu Xiang, Guangkai Xu, Wei Yin, Jun Yu, Qi Zhang, Chaoqiang Zhao", "published": "2023-04-14", "updated": "2023-04-26", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI" ], "main_content": "Supervised. Eigen et al. [22] introduced the first end-toend CNN for MDE, which made use of a scale-invariant loss and a coarse-to-fine network. Further improvements to the network architecture included the use of CRFs [53, 100], regression forests [72], deeper architectures [67, 88], multi-scale prediction fusion [60] and transformer-based encoders [9,15,66]. Alternatively, depth estimation was formulated as a discrete classification problem [7,8,24,49]. 
In parallel, novel losses were proposed in the form of gradient-based regression [51, 84], the berHu loss [47], an ordinal relationship loss [14] and scale/shift invariance [67]. Recent approaches focused on the generalization capabilities of MDE by training with collections of datasets [7, 23, 66, 67, 69, 82]. This relied on the availability of ground-truth annotations, including automotive LiDAR data [27, 32, 38], RGB-D/Kinect [16, 61, 79], SfM reconstructions [50, 51], optical flow/disparity estimation [67, 88] or crowdsourced annotations [14]. These annotations varied in accuracy, which may have impacted the final model's performance. Furthermore, this increased the requirements for acquiring data from new sources, making it challenging to scale to larger amounts of data.

Self-Supervised. Instead of relying on costly annotations, Garg et al. [25] proposed an algorithm based on view synthesis and the photometric consistency across stereo pairs. Monodepth [28] incorporated differentiable bilinear interpolation [42], virtual stereo prediction and a SSIM+L1 reconstruction loss. SfM-Learner [108] required only monocular video supervision by replacing the known stereo transform with a pose estimation network. Artifacts due to dynamic objects were reduced by incorporating uncertainty [45, 65, 93], motion masks [12, 20, 31], optical flow [57, 68, 98] or the minimum reconstruction loss [29]. Meanwhile, robustness to unreliable photometric appearance was improved via feature-based reconstructions [76, 99, 105] and proxy-depth supervision [45, 73, 86]. Developments in network architecture design included 3D (un-)packing blocks [32], positional encoding [30], transformer-based encoders [2, 106], sub-pixel convolutions [64], progressive skip connections [58] and self-attention decoders [43, 91, 107].

Challenges & Benchmarks. The majority of MDE approaches have been centered around automotive data. This includes popular benchmarks such as Kitti [27, 81] or the Dense Depth for Autonomous Driving Challenge [32]. The Robust Vision Challenge series [104], while encouraging generalization across multiple datasets, has so far consisted only of automotive [27] and synthetic datasets [10, 70]. More recently, Ignatov et al. introduced the Mobile AI Challenge [40], investigating efficient MDE on mobile devices in urban settings. Finally, the NTIRE2023 [102] challenge, concurrent to ours, targeted high-resolution images of specular and non-lambertian surfaces. The Monocular Depth Estimation Challenge series [77], the focus of this paper, is based on the MonoDepth Benchmark [78], which provided fair evaluations and implementations of recent SotA self-supervised MDE algorithms. Our focus lies on zero-shot generalization to a wide diversity of scenes. This includes common automotive and indoor scenes, but complements them with complex natural, industrial and agricultural environments.

3. The Monocular Depth Estimation Challenge
The second edition of the Monocular Depth Estimation Challenge (https://codalab.lisn.upsaclay.fr/competitions/10031) was organized on CodaLab [63] as part of a CVPR2023 workshop. The initial development phase lasted four weeks, using the SYNS-Patches validation split. The leaderboard for this phase was anonymous: all method scores were publicly available, but usernames remained hidden. Each participant could see the metrics for their own submission. The final challenge stage was open for two weeks. In this case, the leaderboard was completely private and participants were unable to see their own scores.
This encouraged evaluation on the validation split rather than the test split. Combined with the fact that all ground-truth depths were withheld, the possibility of overfitting due to repeated evaluations was severely limited. This edition of the challenge was extended to any form of supervision, with the objective of providing a more comprehensive overview of the field as a whole. This allowed us to determine the gap between different techniques and identify avenues for future research. We report results only for submissions that outperformed the baseline in any pointcloud/image-based metric on the Overall dataset.

Dataset. The challenge is based on the SYNS-Patches dataset [1, 78], chosen due to the diversity of scenes and environments. A breakdown of images per category and some representative examples are shown in Table 1 and Figure 2. SYNS-Patches also provides extremely high-quality dense ground-truth LiDAR, with an average coverage of 78.20% (including sky regions). Given the dense ground-truth, depth boundaries were obtained using Canny edge-detection on the log-depth maps. This allows us to compute additional fine-grained metrics for these challenging regions. As outlined in [78], the images are manually checked to remove dynamic object artifacts.

Table 1. SYNS-Patches. Distribution of images per category in the val/test splits.
Agriculture Indoor Industry Misc Natural Recreation Residential Transport Woodland Total
Val 104 67 36 72 36 14 13 4 54 400
Test 211 81 71 0 147 48 110 17 90 775
Total 315 148 107 72 183 62 123 21 144 1,175

Figure 1. Depth Distribution Per Scene Type. (Panels: Outdoor-Urban, Outdoor-Natural, Outdoor-Agriculture, Indoor.) Indoor scenes are limited to 20m, while outdoor scenes reach up to 120m. Natural and Agriculture scenes contain a larger percentage of long-range depths (20-80m), while urban scenes focus on the mid-range (20-40m).

Figure 2. SYNS-Patches. Sample images from the diverse dataset scenes, including complex urban, natural and indoor settings. The dataset contains high-quality ground-truth with 78.20% coverage. Depth boundaries were computed as Canny edges in the log-depth maps.

Evaluation. Participants provided the unscaled disparity prediction for each dataset image. The evaluation server bilinearly upsampled the predictions to the target resolution and inverted them into depth maps. Self-supervised methods trained with stereo pairs and supervised methods using LiDAR or RGB-D data should be capable of predicting metric depth. Despite this, in order to ensure comparisons are as fair as possible, the evaluation aligned predictions with the ground-truth using the median depth. We set a maximum depth threshold of 100 meters.
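A rough sketch of the alignment steps just described is given below; the resizing call, the epsilon and the returned values are illustrative assumptions rather than the official evaluation server code.

```python
import numpy as np
import cv2

def align_prediction(disp: np.ndarray, gt_depth: np.ndarray, max_depth: float = 100.0):
    """Sketch of the server-side alignment for one image (not the official code).

    disp: unscaled disparity prediction at the submission resolution.
    gt_depth: dense ground-truth depth; zeros mark invalid pixels (e.g. sky).
    """
    h, w = gt_depth.shape
    disp = cv2.resize(disp, (w, h), interpolation=cv2.INTER_LINEAR)  # bilinear upsample
    depth = 1.0 / np.clip(disp, 1e-6, None)                          # disparity -> depth

    valid = gt_depth > 0
    depth *= np.median(gt_depth[valid]) / np.median(depth[valid])    # median alignment

    return np.clip(depth, 0.0, max_depth), np.clip(gt_depth, 0.0, max_depth), valid
```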
Metrics. We follow the metrics used in the first edition of the challenge [77], categorized as image-/pointcloud-/edge-based. Image-based metrics represent the most common metrics (MAE, RMSE, AbsRel) computed using pixel-wise comparisons between the predicted and ground-truth depth map. Pointcloud-based metrics [62] (F-Score, IoU, Chamfer distance) instead evaluate the reconstructed pointclouds as a whole. In this challenge, we report reconstruction F-Score as the leaderboard ranking metric. Finally, edge-based metrics are computed only at depth boundary pixels. This includes image-/pointcloud-based metrics and edge accuracy/completion metrics from IBims-1 [46].

4. Challenge Submissions
We outline the technical details for each submission, as provided by the authors. Each submission is labeled based on the supervision used, including ground-truth (D), proxy ground-truth (D*) and monocular (M) or stereo (S) photometric support frames. The first half represent supervised methods, while the remaining half are self-supervised.

Baseline – S
J. Spencer1 j.spencermartin@surrey.ac.uk C. Russell4 cmruss@amazon.de S. Hadfield1 s.hadfield@surrey.ac.uk R. Bowden1 r.bowden@surrey.ac.uk
Challenge organizers' submission from the first edition. Network. ConvNeXt-B encoder [56] with a base Monodepth decoder [28,59] from [78]. Supervision. Self-supervised with a stereo photometric loss [25] and edge-aware disparity smoothness [28]. Training. Trained for 30 epochs on Kitti Eigen-Zhou with an image resolution of 192 × 640.

Team 1: DJI&ZJU – D
W. Yin8 yvanwy@outlook.com K. Cheng9 chengkai21@mail.ustc.edu.cn G. Xu9 xugk@mail.ustc.edu.cn H. Chen7 haochen.cad@zju.edu.cn B. Li10 libo@nwpu.edu.cn K. Wang8 wkx1993@gmail.com X. Chen8 xiaozhi.chen@dji.com
Network. ConvNeXt-Large [56] encoder, pretrained on ImageNet-22k [21], and a LeReS decoder [97] with skip connections and a depth range of [0.3, 150] meters. Supervision. Supervised using ground-truth depths from a collection of datasets [3,6,13,16,17,26,32,36,90,92,103]. The final loss is composed of the SILog loss [22], pairwise normal regression loss [97], virtual normal loss [95] and a random proposal normalization loss (RPNL). RPNL enhances the local contrast by randomly cropping patches from the predicted/ground-truth depth and applying median absolute deviation normalization [75]. Training. The network was trained using a resolution of 512 × 1088. In order to train on mixed datasets directly with metric depth, all ground-truth depths were rescaled as $y' = y \, f_c / f$, where $f$ is the original focal length and $f_c$ is an arbitrary focal length. This way, the network assumed all images were taken by the same pinhole camera, which improved convergence.

Team 2: Pokemon – D
M. Xiang10 xiangmochu@mail.nwpu.edu.cn J. Ren10 renjiahui@mail.nwpu.edu.cn Y. Wang10 wangyufei777@mail.nwpu.edu.cn Y. Dai10 daiyuchao@nwpu.edu.cn
Network. Two-stage architecture. The first part was composed of a SwinV2 backbone [54] and a modified NeWCRFs decoder [100] with a larger attention window. The second stage used an EfficientNet [80] with 5 inputs (RGB, low-res depth and high-res depth) to refine the high-resolution depth. Supervision. Supervised training using LiDAR/synthetic depth and stereo disparities from a collection of datasets [5, 6, 11, 16–18, 22, 34, 37–39, 61, 71, 83–85, 88, 89, 92, 94, 96]. Losses included the SILog loss [22] (λ = 0.85) for metric datasets, SILog (λ = 1) for scale-invariant training, the Huber disparity loss for Kitti disparities and an affine disparity loss [67] for datasets with affine ambiguities. Training. The final combination of losses depended on the ground-truth available from each dataset, automatically mixed by learning an uncertainty weight for each dataset [44].
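Both this team and Team 1 build on the SILog loss [22]; for reference, a minimal sketch is given below. The masking convention and the square root are common choices but remain assumptions here, with λ corresponding to the values quoted above.

```python
import torch

def silog_loss(pred: torch.Tensor, gt: torch.Tensor, lam: float = 0.85) -> torch.Tensor:
    """Scale-invariant log loss of Eigen et al. [22], sketch implementation.

    pred, gt: positive depths at valid pixels, shape (N,).
    lam = 1.0 gives a fully scale-invariant loss; lam < 1 (e.g. 0.85)
    retains some sensitivity to absolute scale.
    """
    d = torch.log(pred) - torch.log(gt)
    return torch.sqrt((d ** 2).mean() - lam * d.mean() ** 2)

# Usage: loss = silog_loss(pred_depth[valid], gt_depth[valid], lam=0.85)
```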
Since each dataset contained differently-sized images, they were resized to have a shorter side of 352 and cropped into square patches. Some datasets used smaller crops of size 96 × 352, such that the deepest feature map fell entirely into the self-attention window (11 × 11). A fusion process based on [60] merged low-/high-resolution predictions into a consistent high-resolution prediction.

Team 3: cv-challenge – D
C. Li13 lichao@vivo.com Q. Zhang13 zhangqi.aiyj@vivo.com Z. Liu13 zhiwen.liu@vivo.com Y. Wang13 wangyixing@vivo.com
Network. Based on ZoeDepth [9] with a BEiT384-L backbone [4]. Supervision. Supervised with ground-truth depth from Kitti and NYUD-v2 [61] using the SILog loss. Training. The original ZoeDepth [9] and DPT [66] were pretrained on a collection of 12 datasets. The models were then finetuned on Kitti (384×768) or NYUD-v2 (384×512) for outdoor/indoor scenes, respectively. Different models were deployed according to an automatic scene classifier. The fine-tuned models were combined with a content-adaptive multi-resolution merging method [60], where patches were combined based on the local depth cue density. Since the transformer-based backbone explicitly captured long-term structural information, the original double-estimation step was omitted.

Team 4: DepthSquad – D
M. Nam11 mwn0221@deltax.ai H. T. Hoa11 hoaht@deltax.ai K. M. Umair11 mumairkhan@deltax.ai S. Hossain11 sadat@deltax.ai S. M. N. Uddin11 sayednadim@deltax.ai
Network. Based on the PixelFormer architecture [2], which used a Swin [55] encoder and self-attention decoder blocks with cross-attention skip connections. Disparity was predicted as a discrete volume [7], with the final depth map given as the weighted average using the bin probabilities. Supervision. Supervised using the SILog loss w.r.t. the LiDAR ground-truth. Training. The model was trained on the Kitti Eigen-Zhou (KEZ) split using images of size 370 × 1224 for 20 epochs. Additional augmentation was incorporated in the form of random cropping and rotation, left-right flipping and CutDepth [41]. When predicting on SYNS-Patches, images were zero-padded to 384×1248 to ensure compatibility with the training resolution. These borders were removed prior to submission.

Team 5: imec-IDLab-UAntwerp – MS
L. Trinh6 khaclinh.trinh@student.uantwerpen.be A. Anwar6 ali.anwar@uantwerpen.be S. Mercelis6 siegfried.mercelis@uantwerpen.be
Network. Pretrained ConvNeXt-v2-Huge [87] encoder with an HR-Depth decoder [58], modified with deformable convolutions [19]. The pose network instead used ResNet18 [35]. Supervision. Self-supervised using the photometric loss [29] and edge-aware smoothness. Training. Trained on the Kitti Eigen-Benchmark (KEB) split with images of size 192×640. The network was trained for a maximum of 30 epochs, with the encoder remaining frozen after 6 epochs.

Team 6: GMD – MS
B. Li12 1966431208@qq.com J. Huang12 huang176368745@gmail.com
Network. ConvNeXt-XLarge [56] backbone and an HR-Depth [58] decoder. Supervision. Self-supervised based on the photometric loss [29]. Training. Trained on KEZ using a resolution of 192 × 640.

Team 7: MonoViTeam – MSD*
C. Zhao15 zhaocq@mail.ecust.edu.cn M. Poggi14 m.poggi@unibo.it F. Tosi14 fabio.tosi5@unibo.it Y. Tang15 yangtang@ecust.edu.cn S. Mattoccia14 stefano.mattoccia@unibo.it
Network. MonoViT [106] architecture, composed of MPViT [48] encoder blocks and a self-attention decoder. Supervision. Self-supervised on Kitti Eigen (KE) using the photometric loss [29] (stereo and monocular support frames) and proxy depth regression.
Regularized using edge-aware disparity smoothness [28] and depth gradient consistency w.r.t. the proxy labels. Training. Proxy depths were obtained by training a self-supervised RAFT-Stereo network [52] on the trinocular Multiscopic [101] dataset. The stereo network was trained for 1000 epochs using 256×480 crops. The monocular network was trained on KE for 20 epochs using images of size 320 × 1024.

Team 8: USTC-IAT-United – MS
J. Yu9 harryjun@ustc.edu.cn M. Jing9 jing mohan@mail.ustc.edu.cn X. Qi9 xiaohua000109@163.com
Network. Predictions were obtained as a mixture of multiple networks: DiffNet [107], FeatDepth [74] and MonoDEVSNet [33]. DiffNet and FeatDepth used a ResNet backbone, while MonoDEVSNet used DenseNet [38]. Supervision. Self-supervised using the photometric loss [29]. Training. The three models were trained with different resolutions: 320 × 1024, 376 × 1242 and 384 × 1248, respectively. All predictions were interpolated to 376×1242 prior to ensembling using a weighted average with coefficients {0.35, 0.3, 0.35}.

5. Results
Participant submissions were evaluated on SYNS-Patches [1, 78]. As previously mentioned, this paper only discusses submissions that outperformed the baseline in any pointcloud-/image-based metric across the Overall dataset. Since both challenge phases ran independently and participants were responsible for generating the predictions, we cannot guarantee that the testing/validation metrics used the same model. We therefore report results only for the test split. All methods were median aligned w.r.t. the ground-truth, regardless of the supervision used. This ensures that the evaluations are identical and comparisons are fair.

Table 2. SYNS-Patches Results. We provide metrics across the whole dataset and per scene-category. As expected, supervised methods generally outperform self-supervised ones. The largest gap can be found in Indoor scenes, as self-supervised methods were trained exclusively on automotive data. Teams DepthSquad & imec-IDLab-UAntwerp outperformed the challenge baseline [78] by incorporating more advanced network architectures.
Train Rank F↑ F-Edges↑ MAE↓ RMSE↓ AbsRel↓ Acc-Edges↓ Comp-Edges↓
Overall
DJI&ZJU D 1 17.51 8.80 4.52 8.72 24.32 3.22 21.65
Pokemon D 2 16.94 9.63 4.71 8.00 25.35 3.56 19.95
cv-challenge D 3 16.70 9.36 4.91 8.63 24.33 3.02 18.07
imec-IDLab-UAntwerp MS 4 16.00 8.49 5.08 8.96 28.46 3.74 11.32
GMD MS 5 14.71 8.13 5.17 8.97 29.43 3.75 17.29
Baseline S 6 13.72 7.76 5.56 9.72 32.04 3.97 21.63
DepthSquad D 7 12.77 7.68 5.17 8.83 29.92 3.56 35.26
MonoViTeam MSD* 8 12.44 7.49 5.05 8.59 28.99 3.10 38.93
USTC-IAT-United MS 9 11.29 7.18 5.81 9.58 32.82 3.47 43.38
Outdoor-Urban
DJI&ZJU D 1 16.41 7.37 3.81 7.82 21.85 2.91 24.36
imec-IDLab-UAntwerp MS 4 16.28 7.27 4.49 7.98 26.18 3.67 13.11
GMD MS 5 15.21 6.80 4.60 8.00 27.55 3.73 16.26
Pokemon D 2 15.10 8.48 4.03 6.90 23.67 3.36 19.13
cv-challenge D 3 15.01 7.79 4.26 7.70 22.88 2.87 15.73
Baseline S 6 14.09 6.48 4.77 8.43 29.10 3.89 22.75
DepthSquad D 7 12.90 5.92 4.49 7.80 27.44 3.26 35.36
MonoViTeam MSD* 8 12.52 5.89 4.37 7.62 26.46 2.83 40.33
USTC-IAT-United MS 9 11.31 5.73 5.14 8.69 30.64 3.13 40.15
Outdoor-Natural
Pokemon D 2 14.90 6.75 6.26 10.47 28.40 3.54 14.44
cv-challenge D 3 14.66 6.79 6.35 10.86 27.09 3.08 19.73
imec-IDLab-UAntwerp MS 4 14.43 6.02 6.51 11.43 30.57 3.59 9.44
DJI&ZJU D 1 14.31 6.07 5.97 10.81 26.48 3.45 17.75
GMD MS 5 12.89 5.74 6.77 11.62 32.57 3.68 13.97
Baseline S 6 12.10 5.32 7.46 12.86 36.89 3.84 18.35
DepthSquad D 7 11.54 6.03 6.87 11.52 33.66 3.36 32.47
MonoViTeam MSD* 8 10.98 5.38 6.66 11.13 32.19 3.13 36.01
USTC-IAT-United MS 9 9.26 4.92 7.69 12.22 38.14 3.36 42.92
Outdoor-Agriculture
DJI&ZJU D 1 16.36 5.24 5.17 10.13 29.07 3.43 18.84
Pokemon D 2 15.58 6.40 5.25 9.09 27.45 3.64 18.30
imec-IDLab-UAntwerp MS 4 14.94 5.49 5.70 10.14 30.70 3.75 10.27
cv-challenge D 3 14.68 5.82 5.61 10.02 25.90 3.17 17.67
GMD MS 5 14.03 5.06 5.65 9.98 30.40 3.80 15.94
Baseline S 6 12.26 4.76 6.10 10.84 33.58 4.00 18.73
DepthSquad D 7 11.56 4.55 5.61 9.79 31.16 3.60 35.30
MonoViTeam MSD* 8 11.15 4.52 5.62 9.61 31.43 3.17 39.06
USTC-IAT-United MS 9 10.27 3.97 6.34 10.76 33.61 3.40 38.73
Indoor
DJI&ZJU D 1 33.20 33.12 0.70 1.63 13.08 2.89 33.52
cv-challenge D 3 33.08 33.57 0.87 1.35 16.52 2.90 21.76
Pokemon D 2 32.05 32.53 0.83 1.26 16.00 4.11 45.68
imec-IDLab-UAntwerp MS 4 22.49 29.53 1.06 1.59 23.38 4.38 14.52
Baseline S 6 21.11 28.96 1.04 1.51 22.77 4.60 37.09
GMD MS 5 20.25 29.52 1.03 1.48 23.37 3.96 35.68
MonoViTeam MSD* 8 19.82 28.62 0.97 1.42 20.91 3.63 43.46
USTC-IAT-United MS 9 19.81 28.93 1.02 1.49 21.83 5.19 69.50
DepthSquad D 7 19.18 28.34 1.09 1.61 23.24 5.14 44.02
M=Monocular – S=Stereo – D*=Proxy Depth – D=Ground-truth Depth

Figure 3. SYNS-Patches Depth Visualization. (Rows: GT, Baseline, DJI&ZJU, Pokemon, cv-challenge, imec-IDLab-UAntwerp, GMD, DepthSquad, MonoViTeam, USTC-IAT-United.) Best viewed in color and zoomed in. Most methods struggle with thin structures, such as branches and railings. Object boundaries are also characterized by "halos", caused by interpolation between foreground and background objects. Notable improvements can be seen in Natural and Agricultural scenes, where the top submissions provide much higher levels of detail than the baseline.

5.1. Quantitative Results
Table 2 shows the overall performance for each submission across the whole dataset, as well as each category. Each subset is ordered using F-Score performance.
We additionally show the ranking order based on Overall F-Score for ease of comparison across categories. The Overall top F-Score and AbsRel were obtained by Team DJI&ZJU, supervised using ground-truth depths from a collection of 10 datasets. This represents a relative improvement of 27.62% in F-Score (13.72% – Baseline) and 18% in AbsRel (29.66% – OPDAI) w.r.t. the first edition of the challenge [77]. The top-performing self-supervised method was Team imec-IDLab-UAntwerp, which leveraged improved pretrained encoders and deformable decoder convolutions. This submission provided relative improvements of 16.61% F-Score and 4.04% AbsRel over the first edition. As expected, supervised approaches using ground-truth depth generally outperformed self-supervised approaches based on the photometric error. However, it is interesting to note that supervising a model with only automotive data (e.g. Team DepthSquad, trained on KEZ) was not sufficient to guarantee generalization to other scene types. Meanwhile, as discussed in [78], improving the pretrained backbone (Teams imec-IDLab-UAntwerp & GMD) is one of the most reliable ways of increasing performance. Alternative contributions, such as training with proxy depths (MonoViTeam) or ensembling different architectures (USTC-IAT-United), can improve traditional image-based results but typically result in slightly inferior reconstructions. The top submission (DJI&ZJU) consistently outperformed the other submissions across each scene category, demonstrating good generalization capabilities. However, Teams Pokemon & cv-challenge provided slightly better pointcloud reconstructions in Natural scenes. We theorize this might be due to the use of additional outdoor datasets, while DJI&ZJU primarily relies on automotive data. It is further interesting to note that self-supervised approaches such as Teams imec-IDLab-UAntwerp & GMD outperformed even some supervised methods in Urban reconstructions, despite training only on Kitti. Finally, supervised methods provided the largest improvement in Indoor scenes, since self-supervised approaches were limited to urban driving datasets. DJI&ZJU relied on Taskonomy and DIML, Pokemon on ScanNet, SceneNet, NYUD-v2 and more, and cv-challenge made use of ZoeDepth [9] pretrained on the DPT dataset collection [66]. This demonstrates the need for more varied training data in order to generalize across multiple scene types.

5.2. Qualitative Results
Figure 3 shows visualizations for each submission's predictions across varied scene categories. Generally, all approaches struggle with thin structures, such as the railings in images two and five or the branches in image four. Models vary between ignoring these thin objects (Baseline), treating them as solid objects (USTC-IAT-United) and producing inconsistent estimates (cv-challenge). Self-supervised methods are more sensitive to image artifacts (e.g. saturation or lens flare in images one and three) due to their reliance on the photometric loss. Meanwhile, supervised methods can be trained to be robust to these artifacts as long as the ground-truth is correct. Object boundaries still present challenging regions, as demonstrated by the halos produced by most approaches. Even Team DJI&ZJU, while reducing the intensity of these halos, can sometimes produce over-pixelated boundaries. However, it is worth pointing out that many submissions significantly improve over the Baseline predictions [78].
In particular, Teams cv-challenge, imec-IDLab-UAntwerp & GMD show much greater levels of detail in Urban and Agricultural scenes, reflected by the improved Edge-Completion metric in Table 2. This is particularly impressive given the self-supervised nature of some of these submissions. Unfortunately, self-supervised approaches show significantly inferior performance in Indoor settings, as they lack the data diversity to generalize. This can be seen by the fact that many self-supervised approaches produce incorrect scene geometry and instead predict ground-planes akin to outdoors scenes. Images six, thirteen and sixteen highlight some interesting complications for monocular depth estimation. Transparent surfaces, such as the glass, are not captured when using LiDAR or photometric constraints. As such, most approaches ignore them and instead predict the depth for the objects behind them. However, as humans, we know that these represent solid surfaces and obstacles that cannot be traversed. It is unclear how an accurate supervision signal could be generated for these cases. This calls for more flexible depth estimation algorithms, perhaps relying on multimodal distributions and discrete volumes. 6. Conclusions & Future Work This paper has summarized the results for the second edition of MDEC. Most submissions provided significant improvements over the challenge baseline. Supervised submissions typically focused on increasing the data diversity during training, while self-supervised submissions improved the network architecture. As expected, there is still a performance gap between these two styles of supervision. This is particularly the case in Indoor environments. This motivates the need for additional data sources to train self-supervised models, which are currently only trained on automotive data. Furthermore, accurate depth boundary prediction is still a highly challenging problem. Most methods frequently predicted \u201chalos\u201d, representative of interpolation artifacts between the foreground and background. Future challenge editions may introduce additional tracks for metric vs. relative depth prediction, as predicting metric depth is even more challenging. We hope this competition will continue to bring researchers into this field and strongly encourage any interested parties to participate in future editions of the challenge. Acknowledgments This work was partially funded by the EPSRC under grant agreements EP/S016317/1, EP/S016368/1, EP/S016260/1, EP/S035761/1.", "introduction": "Monocular depth estimation (MDE) refers to the task of predicting the distance from the camera to each image pixel. Unlike traditional geometric correspondence and triangula- tion techniques, this requires only a single image. Despite the ill-posed nature of the problem, deep learning has shown rapid improvements in this field. Unfortunately, many existing approaches have focused solely on training and evaluating in an automotive urban setting. This puts into question their ability to adapt to previously unseen environments. The proposed Monocular Depth Estimation Challenge (MDEC) aims to mitigate this by evaluating models on a complex dataset consisting of natural, agricultural, urban and indoor scenes. Furthermore, this is done in a zero-shot fashion, meaning that the models must be capable of generalizing. The first edition of MDEC [77] focused on benchmark- ing self-supervised approaches. 
The submissions outperformed the baseline [25,78] in all image-based metrics (AbsRel, MAE, RMSE), but provided slightly inferior pointcloud reconstructions [62] (F-Score). The second edition of MDEC, detailed in this paper, ran in conjunction with CVPR2023. This edition was open to any form of supervision, e.g. supervised, self-supervised or multi-task. The aim was to evaluate the state of the field as a whole and determine the gap between different supervision strategies. The challenge was once again centered around SYNS-Patches [1, 78]. This dataset was chosen due to its diversity, which includes urban, residential, industrial, agricultural, natural and indoor scenes. Furthermore, SYNS-Patches contains dense high-quality LiDAR ground-truth, which is exceedingly rare in outdoor environments. This ensures that the evaluations accurately reflect the capabilities of each model. Eight teams out of the 28 final submissions outperformed the State-of-the-Art (SotA) baseline in either pointcloud- or image-based metrics. Half of these submissions were supervised using ground-truth depths, while the remaining half were self-supervised with the photometric reconstruction loss [25,28]. As expected, supervised submissions typically outperformed self-supervised ones. However, the novel self-supervised techniques generally outperformed the provided baseline, even in pointcloud reconstructions. The remainder of the paper will provide the technical details of each submission, analyze their results on SYNS-Patches and discuss potential directions for future research." }, { "url": "http://arxiv.org/abs/2211.12174v1", "title": "The Monocular Depth Estimation Challenge", "abstract": "This paper summarizes the results of the first Monocular Depth Estimation Challenge (MDEC) organized at WACV2023. This challenge evaluated the progress of self-supervised monocular depth estimation on the challenging SYNS-Patches dataset. The challenge was organized on CodaLab and received submissions from 4 valid teams. Participants were provided a devkit containing updated reference implementations for 16 State-of-the-Art algorithms and 4 novel techniques. The threshold for acceptance for novel techniques was to outperform every one of the 16 SotA baselines. All participants outperformed the baseline in traditional metrics such as MAE or AbsRel. However, pointcloud reconstruction metrics were challenging to improve upon. We found predictions were characterized by interpolation artefacts at object boundaries and errors in relative object positioning. We hope this challenge is a valuable contribution to the community and encourage authors to participate in future editions.", "authors": "Jaime Spencer, C. Stella Qian, Chris Russell, Simon Hadfield, Erich Graf, Wendy Adams, Andrew J. Schofield, James Elder, Richard Bowden, Heng Cong, Stefano Mattoccia, Matteo Poggi, Zeeshan Khan Suri, Yang Tang, Fabio Tosi, Hao Wang, Youmin Zhang, Yusheng Zhang, Chaoqiang Zhao", "published": "2022-11-22", "updated": "2022-11-22", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "To avoid using costly Light Detection and Ranging (LiDAR) annotations, self-supervised approaches to MDE instead rely on the proxy task of image reconstruction via view synthesis. The predicted depth is combined with a known (or estimated) camera transform to establish correspondences between adjacent images.
This means that, whilst the network can predict depth from a single input image at test time, the training procedure requires multiple support frames to perform the view synthesis. Methods can be categorized based on the source of these support frames. Stereo methods [9, 11, 47, 55] rely on stereo rectified images pairs with a known and fixed camera baseline. This allows the network to predict metric depth, but can result in occlusions artefacts if not trained carefully. On the other hand, monocular approaches [66, 23, 57] commonly use the previous and following frame from a monocular video. These approaches are more flexible, as no stereo data is required. However, they are sensitive to the presence of dynamic objects. Furthermore, depth is predicted only up to an unknown scale factor and requires median scaling during evaluation to align it with the ground-truth. Garg [9] introduced the first approach to MDE via stereo view synthesis, using AlexNet [26] and an L1 reconstruction loss. Monodepth [11] drastically improved the performance through bilinear synthesis [18] and a weighted combination of SSIM [58] and L1. It additionally incorporated virtual stereo supervision and a smoothness regularization weighted by the strength of the image edges. 3Net [45] extended this to a trinocular setting, while DVSO [47] and MonoResMatch [55] incorporated an additional residual refinement network. SfM-Learner [66] introduced the first fully monocular framework, replacing the fixed stereo baseline with a Visual Odometry (VO) regression network. A predictive mask was introduced to downweigh the photometric loss at independently moving dynamic objects. Future methods refined this masking procedure via uncertainty estimation [21, 23], object motion prediction [28, 22, 30] and automasking [11, 4]. Monodepth2 [12] additionally proposed the minimum reprojection loss as a simple way of handling varying occlusions in a sequence of frames. Instead of averaging the reconstruction loss over the sequence, they proposed to take only the minimum loss across each image pixel, assuming this will select the frame with the nonoccluded correspondence. Subsequent approaches focused on improving the robustness of the photometric loss by incorporating feature descriptors [63, 51, 50], affine brightness changes [61], scale consistency [37, 4] or adversarial losses [3, 44, 33]. Meanwhile, the architecture of the depth prediction network was improved to target higher-resolution predictions by incorporating sub-pixel convolutions [49, 42], 3-D packing blocks [15], improved skip connections [60, 65, 36], transformers [64] and discrete disparity volumes [20, 13, 14]. Several methods incorporated additional supervision in the form of (proxy) depth regression from LiDAR [27, 16], synthetic [35], SLAM [23, 57, 47], hand-crafted stereo [55, 59, 33], the matted Laplacian [14] and self-distillation [43, 41]. One notable example is DepthHints [59], which combined hand-crafted disparity [17] with the min reprojection loss [12]. This provided a simple way of fusing multiple disparity maps into a single robust estimate. 2.1. Datasets & Benchmarks This section reviews some of the most commonly used datasets and benchmarks used to evaluate MDE. Despite being a fundamental and popular computer vision task, there has not been a standard centralized challenge such as ImageNet [7], VOT [25] or IMC [19]. This makes it challenging to ensure that all methods use a consistent evaluation procedure. 
Furthermore, the lack of a withheld test set encourages overfitting due to repeated evaluation. Table 1 provides an overview of these datasets. Kitti [10] is perhaps the most common training and testing dataset for MDE. It was popularized by the Kitti Eigen (KE) split [8], containing 45k images for training and 697 for testing. However, this benchmark contains some long-standing errors that heavily impact the accuracy of the results. The ground-truth depth suffers from background bleeding at object boundaries due to the different sensor viewpoints, coupled with the motion artefacts produced by the moving LiDAR. Furthermore, the data preprocessing omitted the transformation to the camera reference frame. These issues are further exacerbated by the sparsity of the ground-truth depth maps, which contain measurements for only 4.10% of the image pixels. Uhrig et al. [56] aimed to correct these errors and provide a more reliable benchmark, dubbed the Kitti Eigen-Benchmark (KEB) split. The ground-truth density was improved to 15.28% by accumulating LiDAR data from ±5 adjacent frames. This data was aggregated and refined by adding consistency checks using a hand-crafted stereo matching algorithm [17]. The main drawback is that this refinement procedure removes points at object boundaries, which are common sources of errors even in State-of-the-Art (SotA) approaches. However, despite providing a clear improvement over KE, adoption by the community has been slow. We believe this to be due to the need to provide consistent comparisons against previous methods that only evaluate on KE, as this would require authors to re-run all preexisting approaches on this new baseline. The DDAD dataset [15] contains data from multiple cities in the USA and Japan, totalling 76k training and 3k testing images. It provides a density of 1.03%, an average of 24k points per image and an increased depth range up to 250 meters. This dataset was the focus of the DDAD challenge organized at CVPR 2021, which featured additional fine-grained performance metrics on each semantic class. Similar to KEB, we believe that adoption of these improved datasets is hindered by the need to re-train and re-evaluate preexisting methods. Spencer et al. [52] aimed to unify and update the training and benchmarking procedure for MDE. This was done by providing a public repository containing modernized SotA implementations of 16 recent approaches with common robust design decisions. The proposed models were evaluated on the improved KEB and SYNS-Patches, incorporating more informative pointcloud [39] and edge-based [24] metrics. This modern benchmark procedure constitutes the basis of the Monocular Depth Estimation Challenge.

Table 2: SYNS-Patches Category Distribution.
Agriculture Indoor Industry Misc Natural Recreation Residential Transport Woodland Total
Val 104 67 36 72 36 14 13 4 54 400
Test 211 81 71 0 147 48 110 17 90 775
Total 315 148 107 72 183 62 123 21 144 1,175

Figure 1: SYNS-Patches Challenge Dataset. We show some representative examples from the diverse set of testing categories. This includes complex urban, natural and indoor scenes with high-quality dense LiDAR. Depth boundaries were computed as Canny edges in the log-depth maps.

3. The Monocular Depth Estimation Challenge
The first edition of MDEC was organized as part of a WACV2023 workshop. The challenge was organized on CodaLab [40] due to its popularity and flexibility, allowing for custom evaluation scripts and metrics.
We plan to arrange a permanent leaderboard on CodaLab that remains open to allow authors to continue evaluating on SYNS-Patches. The first two weeks of the challenge constituted the development phase, where participants could submit predictions only on the validation split of SYNS-Patches. For the remainder of the challenge, participants were free to submit to either split. Participants only had access to the dataset images, while the ground-truth depth maps and depth boundaries were withheld to prevent overfitting.

3.1. Dataset
The evaluation for the challenge was carried out on the recently introduced SYNS-Patches dataset [52], which is a subset of SYNS [1]. The original SYNS is composed of aligned image and LiDAR panoramas from 92 different scenes belonging to a wide variety of environments, such as Agriculture, Natural (e.g. forests and fields), Residential, Industrial and Indoor. This is a departure from the commonly used datasets in the field, such as Kitti [10], CityScapes [6] or DDAD [15], which focus purely on urban scenes collected by automotive vehicles. SYNS also provides dense LiDAR maps with 78.30% coverage and 365k points per image, which are exceptionally rare in outdoor environments. This allows us to compute metrics targeting complex image regions, such as thin structures and depth boundaries, which are common sources of error. SYNS-Patches represents the subset of patches from each scene extracted at eye level at 20 degree intervals of a full horizontal rotation. This results in 18 images per scene and a total dataset size of 1656. Since the data collection procedure is highly sensitive to dynamic objects, additional manual verification is required. The final dataset consists of 1175 images, further separated into validation and testing splits of 400 and 775 images. We show some representative testing images in Figure 1 and the distribution of image categories per split in Table 2.

3.2. Training procedure
The first edition of MDEC focused on evaluating the State-of-the-Art in self-supervised monocular depth estimation. This included methods complemented by hand-crafted proxy depth maps or synthetic data. We expected most methods to be trained on Kitti [10] due to its widespread use. However, we placed no restrictions on the training dataset (excluding SYNS/SYNS-Patches) and encouraged participants to use additional training sources. To aid participants and give a strong entry point, we provided a public starting kit on GitHub (https://github.com/jspenmar/monodepth_benchmark). This repository contained the training and evaluating code for 16 recent SotA contributions to MDE. The baseline submission was the top F-Score performer out of all SotA approaches in this starting kit [9, 52]. This consisted of a ConvNeXt [34] backbone and DispNet [38] decoder. The model was trained on the Kitti Eigen-Zhou split with an image resolution of 192×640 using only stereo view synthesis, the vanilla photometric loss and edge-aware smoothness regularization.
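For reference, the edge-aware smoothness regularization used by this baseline (and by several submissions) typically takes the following form; this is a minimal sketch in the style of Monodepth [11], not the starting kit's exact code.

```python
import torch

def edge_aware_smoothness(disp: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
    """Edge-aware smoothness regularization sketch (Monodepth-style).

    disp: (B, 1, H, W) predicted disparity; img: (B, 3, H, W) input image.
    Disparity gradients are down-weighted at strong image edges, so the
    prediction is only encouraged to be smooth inside homogeneous regions.
    """
    # Mean-normalize the disparity, a common convention for this term.
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)

    ddx = (disp[:, :, :, 1:] - disp[:, :, :, :-1]).abs()
    ddy = (disp[:, :, 1:, :] - disp[:, :, :-1, :]).abs()
    idx = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean(dim=1, keepdim=True)
    idy = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean(dim=1, keepdim=True)

    ddx = ddx * torch.exp(-idx)   # suppress the penalty across image edges
    ddy = ddy * torch.exp(-idy)
    return ddx.mean() + ddy.mean()
```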
3.3. Evaluation procedure
Participants provided their unscaled disparity predictions at the training image resolution. Our evaluation script bilinearly upsampled the predictions to the full image resolution and applied median scaling to align the predicted and ground-truth depths. Finally, the prediction and ground-truth were clamped to a maximum depth of 100m. We omit test-time stereo blending [11] and border cropping [8].

Figure 2: SYNS-Patches Depth Visualization. (Methods shown: GT, Baseline, OPDAI, z.suri, Anonymous, MonoViT.) Models perform significantly better in urban environments that resemble the training automotive data. Thin structures, such as the railings and branches, are highly challenging to predict accurately and are commonly merged together.

3.4. Performance metrics
The predictions were evaluated using a wide variety of image/pointcloud/edge-based metrics. Submissions were ranked based on the F-Score performance [39], as this targets the structural quality of the reconstructed pointcloud. We provide the units of each metric, as well as an indication if lower (↓) or higher (↑) is better.

3.4.1 Image-based
MAE. Absolute error (m↓) as $\text{MAE} = \frac{1}{N} \sum_{\mathbf{p}} | \hat{y} - y |$, (1) where $y$ represents the ground-truth depth at a single image pixel $\mathbf{p}$, $\hat{y}$ is the predicted depth at that pixel and $N$ is the number of valid pixels.
RMSE. Absolute error (m↓) with higher outlier weight as $\text{RMSE} = \sqrt{\frac{1}{N} \sum_{\mathbf{p}} (\hat{y} - y)^2}$. (2)
AbsRel. Range-invariant relative error (%↓) as $\text{AbsRel} = \frac{1}{N} \sum_{\mathbf{p}} \frac{| \hat{y} - y |}{y}$. (3)

3.4.2 Pointcloud-based
F-Score. Reconstruction accuracy (%↑) given by the harmonic mean of Precision and Recall as $F = \frac{2 \, P \, R}{P + R}$. (4)
Precision. Percentage (%↑) of predicted 3-D points $\hat{\mathbf{q}}$ within a threshold $\delta$ of a ground-truth point $\mathbf{q}$ as $P = \frac{1}{|\hat{Q}|} \sum_{\hat{\mathbf{q}} \in \hat{Q}} \big[\!\big[ \min_{\mathbf{q} \in Q} \| \hat{\mathbf{q}} - \mathbf{q} \| < \delta \big]\!\big]$, (5) where $[\![\cdot]\!]$ represents the Iverson brackets and $\hat{Q}$, $Q$ denote the predicted and ground-truth pointclouds.
Recall. Percentage (%↑) of ground-truth 3-D points within a threshold of a predicted point as $R = \frac{1}{|Q|} \sum_{\mathbf{q} \in Q} \big[\!\big[ \min_{\hat{\mathbf{q}} \in \hat{Q}} \| \mathbf{q} - \hat{\mathbf{q}} \| < \delta \big]\!\big]$. (6)
Following Örnek et al. [39], the threshold for a correctly reconstructed point is set to 10 cm, i.e. $\delta = 0.1$. Note that Precision and Recall are only used to compute the F-Score and are not reported in the challenge leaderboard.

3.4.3 Edge-based
F-Score. Pointcloud reconstruction accuracy (%↑) computed only at ground-truth $\mathbf{M}$ and predicted $\hat{\mathbf{M}}$ depth boundaries, represented by binary masks.
Accuracy. Distance (px↓) from each predicted depth boundary to the closest ground-truth boundary as $\text{EdgeAcc} = \frac{1}{|\hat{\mathbf{M}}|} \sum_{\mathbf{p} \in \hat{\mathbf{M}}} \text{EDT}(\mathbf{M})(\mathbf{p})$, (7) where EDT represents the Euclidean Distance Transform.
Completeness. Distance (px↓) from each ground-truth depth boundary to the closest predicted boundary as $\text{EdgeComp} = \frac{1}{|\mathbf{M}|} \sum_{\mathbf{p} \in \mathbf{M}} \text{EDT}(\hat{\mathbf{M}})(\mathbf{p})$. (8)
These metrics were proposed as part of the IBims-1 [24] benchmark, which features dense indoor depth maps.
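A compact sketch of how the pointcloud F-Score (Eqs. 4-6) and the edge accuracy/completeness (Eqs. 7-8) can be computed is given below; the use of SciPy's KD-tree and distance transform is an implementation choice for illustration, not the organizers' evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import distance_transform_edt

def fscore(pred_pts: np.ndarray, gt_pts: np.ndarray, delta: float = 0.1) -> float:
    """F-Score at threshold delta (10 cm), Eqs. (4)-(6). Points are (N, 3) arrays."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # nearest GT point per prediction
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # nearest prediction per GT point
    precision = float((d_pred_to_gt < delta).mean())
    recall = float((d_gt_to_pred < delta).mean())
    return 2 * precision * recall / max(precision + recall, 1e-12)

def edge_accuracy_completeness(pred_edges: np.ndarray, gt_edges: np.ndarray):
    """Edge accuracy/completeness (px), Eqs. (7)-(8), via the Euclidean distance transform."""
    dt_gt = distance_transform_edt(~gt_edges)       # distance to the nearest GT boundary
    dt_pred = distance_transform_edt(~pred_edges)   # distance to the nearest predicted boundary
    accuracy = float(dt_gt[pred_edges].mean())
    completeness = float(dt_pred[gt_edges].mean())
    return accuracy, completeness
```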
4. Challenge Submissions
Baseline
J. Spencer1 j.spencermartin@surrey.ac.uk C. Russell3 cmruss@amazon.de S. Hadfield1 s.hadfield@surrey.ac.uk R. Bowden1 r.bowden@surrey.ac.uk
Challenge organizers' submission. Re-implementation of Garg [9] from the updated monocular depth benchmark [52]. Trained with stereo photometric supervision with edge-aware smoothness regularization. The network is composed of a ConvNeXt-B backbone [34] with a DispNet [38] decoder. Trained for 30 epochs on Kitti Eigen-Zhou with an image resolution of 192 × 640.

4.1. Team 1 OPDAI
H. Wang6 hwscut@126.com Y. Zhang6 yusheng.z1995@gmail.com H. Cong6 congheng@outlook.com
Based on a ConvNeXt-B [34] with an HRDepth [36] decoder. Trained with monocular and stereo data, along with proxy depth hints [59]. This submission uses a large combination of losses, including the photometric loss with an explainability mask [66], autoencoder feature-based reconstruction [50], virtual stereo [11], proxy depth regression, edge-aware disparity smoothness [11], feature smoothness [50], occlusion regularization [47] and explainability mask regularization [66]. The models were trained on Kitti Eigen-Zhou (KEZ) without depth hints and KEB with depth hints for 5 epochs and an image resolution of 192 × 640.

Table 3: SYNS-Patches Results. Each section reports results over a different set of scene categories. The updated SotA models from [52] provide a strong baseline for the challenge, outperforming all submissions in pointcloud reconstruction F-Score. However, all submissions significantly improved upon traditional image-based metrics by up to 6.47% in MAE and 7.42% in AbsRel. All approaches adapt better to unseen outdoor urban environments than the challenging natural and agricultural scenes.
F-Score↑ F-Score (Edges)↑ MAE↓ RMSE↓ AbsRel↓ EdgeAcc↓ EdgeComp↓
Overall
Baseline 13.72 7.76 5.56 9.72 32.04 3.97 21.63
OPDAI 13.53 7.41 5.20 8.98 29.66 3.67 27.31
z.suri 13.08 7.46 5.39 9.27 29.96 3.81 32.70
Anonymous 12.85 7.30 5.32 9.04 30.22 3.83 43.77
MonoViT 12.66 7.51 5.22 8.96 29.70 3.36 35.47
Outdoor-Urban
Baseline 14.09 6.48 4.77 8.43 29.10 3.89 22.75
OPDAI 13.17 5.99 4.53 7.93 27.12 3.47 27.71
z.suri 12.72 5.97 4.77 8.25 27.99 3.64 34.31
Anonymous 12.83 5.56 4.60 7.95 28.04 3.66 41.04
MonoViT 12.00 5.87 4.54 7.85 27.91 3.12 35.24
Outdoor-Natural
Baseline 12.11 5.32 7.46 12.86 36.89 3.84 18.35
OPDAI 11.61 5.26 6.82 11.52 33.53 3.52 24.11
z.suri 11.40 5.25 7.14 12.07 34.43 3.61 30.96
Anonymous 11.83 5.31 7.11 11.76 34.16 3.59 38.96
MonoViT 11.84 5.72 6.92 11.72 33.33 3.30 31.33
Outdoor-Agriculture
Baseline 12.26 4.77 6.10 10.84 33.58 4.00 18.73
OPDAI 12.26 4.47 5.78 10.20 31.53 3.69 27.38
z.suri 12.75 4.40 5.85 10.33 30.52 3.77 29.03
Anonymous 11.53 4.20 5.78 10.12 30.76 3.87 41.89
MonoViT 11.34 4.57 5.72 10.03 30.99 3.40 33.53
Indoor
Baseline 21.11 28.96 1.04 1.51 22.77 4.60 37.09
OPDAI 23.56 27.95 1.00 1.54 21.12 4.82 36.28
z.suri 19.95 28.84 0.98 1.42 21.44 5.20 43.93
Anonymous 19.32 28.82 1.07 1.55 23.94 5.10 74.43
MonoViT 20.45 27.65 1.01 1.49 21.16 4.28 55.52

4.2. Team 2 z.suri
K. Suri8 z.suri@eu.denso.com
The depth and pose estimation networks used ConvNeXt-B [34] as the encoder, with the depth network complemented by a DiffNet [65] decoder. Trained with both stereo and monocular inputs, using edge-aware regularization [11] and the min reconstruction photometric loss with automasking [12]. A strong pose network is essential for accurate monocular depth estimation. This submission introduced a stereo pose regression loss. The pose estimation network was additionally given a stereo image pair and supervised w.r.t. the known ground-truth camera baseline between them. The networks were trained on Kitti Eigen-Zhou with an image resolution of 192 \u00d7 640.

4.3. Team 3 Anonymous
The author of this submission did not provide any details.

4.4. Team 4 MonoViT C. Zhao9 y20180082@mail.ecust.edu.cn M. Poggi7 m.poggi@unibo.it F. Tosi7 fabio.tosi5@unibo.it Y. Zhang7 youmin.zhang2@unibo.it Y. Tang9 yangtang@ecust.edu.cn S. Mattoccia7 stefano.mattoccia@unibo.it
Trained on KE with an image resolution of 320 \u00d7 1024. The depth network used the MonoViT [64] architecture, combining convolutional and MPViT [29] encoder blocks. The network was trained using stereo and monocular support frames, based on the minimum photometric loss [12], edge-aware smoothness [11] and L1 proxy depth regression. Proxy depth labels were obtained by training a self-supervised stereo network [32, 54] on the Multiscopic dataset [62]. This dataset provides three horizontally aligned images, allowing the network to compensate for occlusions. The pretrained stereo network was trained using Center and Right pairs, but used the full triplet when computing the per-pixel minimum photometric loss. It was trained for 1000 epochs using 256 \u00d7 480 crops.

Figure 3: SYNS-Patches Pointcloud Visualization (columns: GT, Baseline, OPDAI, z.suri, Anonymous, MonoViT). Converting the depth maps into pointclouds allows us to evaluate the quality of the reconstructed scene. All approaches can reliably estimate the road surface ground plane. However, object boundaries exhibit smooth interpolation artefacts connecting them to background structures. Adapting to previously unseen indoor environments is still highly challenging.

5. Results
Table 3 shows the performance of the participants\u2019 submissions on the SYNS-Patches test set. As seen, most submissions outperformed the baseline in traditional image-based metrics (MAE, RMSE, AbsRel) across all scene types. However, the baseline still achieved the best performance in both pointcloud reconstruction metrics (F-Score and F-Score (Edges)). We believe this is due to the fact that most existing benchmarks report only image-based metrics. As such, novel contributions typically focus on improving performance on only these metrics. However, we believe pointcloud-based reconstruction metrics [39] are crucial to report, as they reflect the true objective of monocular depth estimation. As expected, all approaches transfer best to other Outdoor Urban environments, while the previously unseen Natural and Agriculture categories provided a more difficult challenge. In most outdoor environments the baseline provides the best F-Score performance, while OPDAI & MonoViT improve on image-based metrics (> 0.5 meter improvement in Outdoor Natural MAE). It is also interesting to note that all approaches improve the accuracy of the detected edges by roughly 15%.
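Both edge-based measures can be computed directly from binary boundary maps using distance transforms. The snippet below is a minimal sketch of this computation, assuming the boundaries have already been extracted (e.g. as Canny edges of the log-depth maps); the function and argument names are illustrative and are not taken from the challenge codebase.

```python
# Hypothetical sketch of the edge-based metrics (IBims-1 style):
# EdgeAcc  - mean distance (px) from each predicted boundary pixel to the
#            closest ground-truth boundary.
# EdgeComp - mean distance (px) from each ground-truth boundary pixel to the
#            closest predicted boundary (Eq. 8).
import numpy as np
from scipy.ndimage import distance_transform_edt


def edge_metrics(pred_edges: np.ndarray, gt_edges: np.ndarray):
    """pred_edges, gt_edges: boolean HxW maps of depth boundaries."""
    # The distance transform of the complement gives, for every pixel, the
    # distance to the nearest boundary pixel in the corresponding edge map.
    dist_to_gt = distance_transform_edt(~gt_edges)
    dist_to_pred = distance_transform_edt(~pred_edges)

    edge_acc = dist_to_gt[pred_edges].mean() if pred_edges.any() else np.nan
    edge_comp = dist_to_pred[gt_edges].mean() if gt_edges.any() else np.nan
    return edge_acc, edge_comp
```

EdgeAcc penalizes spurious predicted boundaries, while EdgeComp (Eq. 8) penalizes ground-truth boundaries with no nearby prediction, which is why oversmoothed predictions score poorly on the latter.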
Meanwhile, edge completeness is drastically reduced, implying that participant submissions are more accurate at extracting strong edges, but oversmooth predictions in highly textured regions. Finally, it is worth noting that the increased metric performance in indoor environments is likely due to the significantly decreased depth range. We show qualitative visualizations for the predicted depth maps and pointclouds in Figures 2 & 3, respectively. The target images were selected prior to evaluation to reflect the wide variety of available environments. Generally, we find that most predictions are oversmoothed and lack high-frequency detail. For instance, many models fill in gaps between thin objects, such as railings (second image) or branches (third image). As is expected, all submissions tend to perform better in urban settings, as they are more similar to the training distribution. The submission by MonoViT generally produces the highest-quality visualizations, with more detailed thin structures and sharper boundaries. This is reflected by the improved image-based metrics. However, as seen in the pointcloud visualizations in Figure 3, these predictions still suffer from boundary interpolation artefacts that are not obvious in the depth map visualizations. This reinforces the need for more detailed metrics in these complex image regions.

6. Conclusions & Future Work
This paper has presented the results of the first edition of the Monocular Depth Estimation Challenge. It was interesting to note that, while most submissions outperformed the baseline in traditional image-based metrics (MAE, RMSE, AbsRel), they did not improve pointcloud F-Score reconstruction.
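For reference, the pointcloud F-Score follows the reconstruction-based evaluation of [39], computed on depth maps back-projected into 3-D using the camera intrinsics. The sketch below illustrates one way such a score can be obtained; the 0.1 m threshold and the helper names are assumptions for illustration rather than the exact benchmark implementation.

```python
# Illustrative sketch of a pointcloud reconstruction F-Score in the spirit of
# [39]; the 0.1 m threshold is an assumption, not the official value.
import numpy as np
from scipy.spatial import cKDTree


def pointcloud_fscore(pred_pts: np.ndarray, gt_pts: np.ndarray, thresh: float = 0.1):
    """pred_pts, gt_pts: (N, 3) arrays of back-projected 3-D points in metres."""
    dist_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # nearest GT point per prediction
    dist_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # nearest prediction per GT point

    precision = (dist_pred_to_gt < thresh).mean()  # accuracy of the reconstruction
    recall = (dist_gt_to_pred < thresh).mean()     # completeness of the reconstruction
    f_score = 2 * precision * recall / max(precision + recall, 1e-8)
    return 100.0 * f_score  # reported as a percentage in the tables
```

Because precision captures reconstruction accuracy and recall captures completeness, boundaries that bleed between foreground and background are penalized even when image-based metrics look strong.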
The rise in popularity of this field has resulted in a plethora of contributions, including supervised [8, 46], self- supervised [9, 11, 12, 15] and weakly-supervised [47, 55, 59] approaches. Comparing these approaches in a fair and consistent manner is a highly challenging task, as it is the responsibility of each author to ensure they are fol- lowing the same procedures as preceding methods. The need to provide this backward-compatibility can result in long-standing errors in the benchmarking procedure, rang- ing from incorrect metric computation and data preprocess- ing to incorrect ground-truths. This paper covers the recent Monocular Depth Estima- tion Challenge (MDEC), organized as part of a workshop at WACV2023. The objective of this challenge was to pro- vide an updated and centralized benchmark to evaluate con- tributions in a fair and consistent manner. This first edi- tion focused on self-/weakly-supervised MDE, as they have the possibility to scale to larger amounts of data and do not require expensive LiDAR ground-truth. Despite this flexibility, the majority of published approaches train and evaluate only on automotive data. As part of this chal- lenge, we tested the generalization of these approaches to a wider range of scenarios, including natural, urban and in- door scenes. This was made possible via the recently re- leased SYNS-Patches dataset [1, 52]. In general, partici- pants found it challenging to outperform the updated Garg baseline [9, 52] in pointcloud-based reconstruction metrics (F-Score), but generally improved upon traditional image- based metrics (MAE, RMSE, AbsRel). arXiv:2211.12174v1 [cs.CV] 22 Nov 2022 Table 1: Dataset & Benchmark Comparison. We summarize recent datasets commonly used in self-supervised monocular depth estimation. CityScapes represents a common pretraining dataset, while SYNS-Patches is testing-only. SYNS-Patches is the only dataset providing high-quality dense depth maps in a wide variety of environments. Accuracy Density (%) Num Points Urban Natural Indoor Train Val Test CityScapes [6] \u2717 \u2717 \u2717 \u2713 \u2717 \u2717 88,250 \u2717 \u2717 Kitti Eigen-Zhou [10, 66] \u2717 \u2717 \u2717 \u2713 \u2717 \u2717 39,810 4,424 \u2717 Kitti Eigen [10, 8] Mid 4.10 19k \u2713 \u2717 \u2717 45,200 1,776 697 Kitti Eigen-Benchmark [10, 56] High 15.28 71k \u2713 \u2717 \u2717 71,633 5,915 652 DDAD [15] High 1.02 24k \u2713 \u2717 \u2717 75,900 23,700 3,080 SYNS-Patches [1, 52] High 78.30 365k \u2713 \u2713 \u2713 \u2717 400 775" }, { "url": "http://arxiv.org/abs/2208.01489v4", "title": "Deconstructing Self-Supervised Monocular Reconstruction: The Design Decisions that Matter", "abstract": "This paper presents an open and comprehensive framework to systematically\nevaluate state-of-the-art contributions to self-supervised monocular depth\nestimation. This includes pretraining, backbone, architectural design choices\nand loss functions. Many papers in this field claim novelty in either\narchitecture design or loss formulation. However, simply updating the backbone\nof historical systems results in relative improvements of 25%, allowing them to\noutperform the majority of existing systems. A systematic evaluation of papers\nin this field was not straightforward. The need to compare like-with-like in\nprevious papers means that longstanding errors in the evaluation protocol are\nubiquitous in the field. It is likely that many papers were not only optimized\nfor particular datasets, but also for errors in the data and evaluation\ncriteria. 
To aid future research in this area, we release a modular codebase\n(https://github.com/jspenmar/monodepth_benchmark), allowing for easy evaluation\nof alternate design decisions against corrected data and evaluation criteria.\nWe re-implement, validate and re-evaluate 16 state-of-the-art contributions and\nintroduce a new dataset (SYNS-Patches) containing dense outdoor depth maps in a\nvariety of both natural and urban scenes. This allows for the computation of\ninformative metrics in complex regions such as depth boundaries.", "authors": "Jaime Spencer, Chris Russell, Simon Hadfield, Richard Bowden", "published": "2022-08-02", "updated": "2022-12-21", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.CG", "cs.LG" ], "main_content": "We consider self-supervised approaches that do not use ground-truth depth data at training time, but instead learn to predict depth as a way to estimate high-fidelity warps from one image to another. While all approaches predict depth from a single image, they can be categorized based on the additional frames used to perform these warps. Stereo-supervised frameworks directly predict metric depth given a known stereo baseline. Approaches using only monocular video additionally need to estimate Visual Odometry (VO) and only predict depth up to an unknown scale. These depth predictions are scaled during evaluation and aligned with the ground-truth. However, monocular methods are more flexible, since they do not require a stereo pair. Despite having their own artifacts, they do not share stereo occlusion artifacts, making video a valuable cue complementary to stereo. 2.1 Stereo Garg et al. (2016) introduced view synthesis as a proxy task for self-supervised monocular depth estimation. The predicted depth map was used to synthesize the target view from its stereo pair, and optimized using an L1 reconstruction loss. An additional smoothness regularization penalized all gradients in the predicted depth. Monodepth (Godard et al., 2017) used Spatial Transformer Networks (Jaderberg et al., 2015) to perform view synthesis in a fully-differentiable manner. The reconstruction loss was improved via a weighted L1 and SSIM (Wang et al., 2004) photometric loss, while the smoothness regularization was softened in regions with strong image gradients. Monodepth additionally introduced a virtual stereo consistency term, forcing the network to predict both left and right depths from a single image. 3Net (Poggi et al., 2018) extended Monodepth to a trinocular setting by adding an extra decoder and treating the input as the central image in a three-camera rig. SuperDepth (Pillai et al., 2019) replaced Upsample-Conv blocks with sub-pixel convolutions (Shi et al., 2016), resulting in improvements when training with high-resolution images. They additionally introduced a differentiable stereo blending procedure based on test-time stereo blending (Godard et al., 2017). FAL-Net (Gonzalez Bello & Kim, 2020) proposed a discrete disparity volume network, complemented by a probabilistic view synthesis module and an occlusion-aware reconstruction loss. Other methods complemented the self-supervised loss with (proxy) ground-truth depth regression. Kuznietsov et al. (2017) introduced a reverse Huber (berHu) regression loss (Zwald & Lambert-Lacroix, 2012; Laina et al., 2016) using the ground-truth sparse Light Detection and Ranging (LiDAR). Meanwhile, SVSM (Luo et al., 2018) proposed a two-stage pipeline in which a virtual stereo view was first synthesized from a self-supervised depth network. 
The target image and synthesized view were then processed in a stereo matching cost volume trained on the synthetic FlyingThings3D dataset (Mayer et al., 2016). Similarly, DVSO (Rui et al., 2018) and MonoResMatch (Tosi et al., 2019) incorporated stereo matching refinement networks to predict a residual disparity. These approaches used proxy ground-truth depth maps from direct stereo Simultaneous Localization and Mapping (SLAM) (Wang et al., 2017) and hand-crafted stereo matching (Hirschm\u00fcller, 2005), respectively. DVSO (Rui et al., 2018) introduced an occlusion regularization, encouraging sharper predictions that prefer background depths and second-order disparity smoothness. DepthHints (Watson et al., 2019) introduced proxy depth supervision into Monodepth2 (Godard et al., 2019), also obtained using SGM (Hirschm\u00fcller, 2005). The proxy depth maps were computed as the fused minimum reconstruction loss from predictions with various hyperparameters. As with the automasking procedure of Monodepth2 (Godard et al., 2019), the proxy regression loss was only applied to pixels where the hint produced a lower photometric reconstruction loss. Finally, PLADE-Net (Gonzalez Bello & Kim, 2021) expanded FAL-Net (Gonzalez Bello & Kim, 2020) by incorporating positional encoding and proxy depth regression using a matted Laplacian. 3 Published in Transactions on Machine Learning Research (12/2022) 2.2 Monocular SfM-Learner (Zhou et al., 2017) introduced the first approach supervised only by a stream of monocular images. This replaced the known stereo baseline with an additional network to regress VO between consecutive frames. An explainability mask was predicted to reduce the effect of incorrect correspondences from dynamic objects and occlusions. Klodt & Vedaldi (2018) introduced uncertainty (Kendall & Gal, 2017) alongside proxy SLAM supervision, allowing the network to ignore incorrect predictions. This uncertainty formulation also replaced the explainability mask from SfM-Learner. DDVO (Wang et al., 2018) introduced a differentiable DSO module (Engel et al., 2018) to refine the VO network prediction. They further made the observation that the commonly used edge-aware smoothness regularization (Godard et al., 2017) suffers from a degenerate solution in a monocular framework. This was accounted for by applying spatial normalization prior to regularizing. Subsequent approaches focused on improving the robustness of the photometric loss. Monodepth2 (Godard et al., 2019) introduced several simple changes to explicitly address the assumptions made by the view synthesis framework. This included minimum reconstruction filtering to reduce occlusion artifacts, alongside static pixel automasking via the raw reconstruction loss. D3VO (Yang et al., 2020) additionally predicted affine brightness transformation parameters (Engel et al., 2018) for each support frame. Meanwhile, DepthVO-Feat (Zhan et al., 2018) observed that the photometric loss was not always reliable due to ambiguous matching. They introduced an additional feature-based reconstruction loss synthesized from a pretrained dense feature representation (Weerasekera et al., 2017). DeFeat-Net (Spencer et al., 2020) extended this concept by learning dense features alongside depth, improving the robustness to adverse weather conditions and low-light environments. Shu et al. (2020) instead trained an autoencoder regularized to learn discrimative features with smooth second-order gradients. Mahjourian et al. 
(2018) incorporated explicit geometric constraints via Iterative Closest Points using the predicted alignment pose and the mean residual distance. Since this process was non-differentiable, the gradients were approximated. SC-SfM-Learner (Bian et al., 2019) proposed an end-to-end differentiable geometric consistency constraint by synthesizing the support depth view. They included a variant of the absolute relative loss constrained to the range [0, 1], additionally used as automasking for the reconstruction loss. Poggi et al. (2020) compared various approaches for estimating uncertainty in the depth prediction, including dropout, ensembles, student-teacher training and more. Johnston & Carneiro (2020) proposed to use a discrete disparity volume and the variance along each pixel to estimate the uncertainty of the prediction. During training, the view reconstruction loss was computed using the Expected disparity, i.e. the weighted sum based on the likelihood of each bin. Recent approaches have focused on developing architectures to produce higher resolution predictions that do not suffer from interpolation artifacts. PackNet (Guizilini et al., 2020) proposed an encoder-decoder network using 3-D (un)packing blocks with sub-pixel convolutions (Shi et al., 2016). This allowed the network to encode spatial information in an invertible manner, used by the decoder to make higher quality predictions. However, this came at the cost of a tenfold increase in parameters. CADepth (Yan et al., 2021) proposed a Structure Perception self-attention block as the final encoder stage, providing additional context to the decoder. This was complemented by a Detail Emphasis module, which refined skip connections using channel-wise attention. Meanwhile, Zhou et al. (2021) replaced the commonly used ResNet encoder with HRNet due to its suitability for dense predictions. Similarly, concatenation skip connections were replaced with a channel/spatial attention block. HR-Depth (Lyu et al., 2021) introduced a highly efficient decoder based on SqueezeExcitation blocks (Hu et al., 2020) and progressive skip connections. While each of these methods reports improvements over previous approaches, they are frequently not directly comparable. Most approaches differ in the number of training epochs, pretraining datasets, backbone architectures, post-processing, image resolutions and more. This begs the question as to what percentage of the improvements are due to the fundamental contributions of each approach, and how much is unaccounted for in the silent changes. In this paper, we aim to answer this question by first studying the effect of changing components rarely claimed as contributions. Based on the findings, we train recent SotA methods in a comparable way, further improving their performance and evaluating each contribution independently. 4 Published in Transactions on Machine Learning Research (12/2022) (a) Input image (b) Raw GT (c) Improved G Figure 2: Inaccurate Ground-Truth. The original Kitti (Geiger et al., 2013) ground-truth data used by previous monocular depth benchmarks (Eigen & Fergus, 2015) is inaccurate and contains errors, especially at object boundaries. Note the background bleeding in the highlighted region. Uhrig et al. (2018) correct this by accumulating LiDAR data over multiple frames. 
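To make the view-synthesis objective reviewed above concrete, the following is a minimal PyTorch-style sketch of the weighted SSIM+L1 photometric error (Godard et al., 2017), combined with the per-pixel minimum over warped support frames and static-pixel automasking of Monodepth2 (Godard et al., 2019). It is a simplified illustration, not the exact implementation released with this benchmark.

```python
# Illustrative sketch of the core self-supervised objective: a weighted SSIM+L1
# photometric error, the per-pixel minimum over warped support frames, and
# static-pixel automasking. Simplified; not the exact benchmark code.
import torch
import torch.nn.functional as F


def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified per-pixel SSIM over a 3x3 neighbourhood (images in [0, 1])."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sig_x = F.avg_pool2d(x**2, 3, 1, 1) - mu_x**2
    sig_y = F.avg_pool2d(y**2, 3, 1, 1) - mu_y**2
    sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (sig_x + sig_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)


def photometric_error(pred, target, alpha=0.85):
    """Weighted SSIM+L1 error, averaged over colour channels -> (B, 1, H, W)."""
    l1 = (pred - target).abs().mean(1, keepdim=True)
    return alpha * ssim(pred, target).mean(1, keepdim=True) + (1 - alpha) * l1


def reconstruction_loss(warped_supports, raw_supports, target):
    """warped_supports/raw_supports: lists of (B, 3, H, W) support frames."""
    # Per-pixel minimum over warped views mitigates occlusion/out-of-view errors.
    warped_err = torch.stack([photometric_error(w, target) for w in warped_supports]).min(0)[0]
    # Automasking: ignore pixels where the un-warped support frame already matches
    # the target better than the warp (static scenes, objects moving with the camera).
    static_err = torch.stack([photometric_error(s, target) for s in raw_supports]).min(0)[0]
    mask = (warped_err < static_err).float()
    return (mask * warped_err).sum() / mask.sum().clamp(min=1)
```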
Figure 3: Depth Distribution (panels: Kitti Eigen, Kitti Eigen-Benchmark, SYNS-Patches; axes: Depth (m) vs. Density (%)). We show the distribution of depths for KE, KEB & SYNS-Patches. SYNS-Patches contains more varied depth values due to the indoor scenes. In all cases the maximum depth is clamped to 100 meters during evaluation.

3 Benchmark Datasets
The objective of this paper is to critically examine recent SotA contributions to monocular depth learning. One of the biggest hurdles to overcome is the lack of informative benchmarks due to erroneous evaluation procedures. By far, the most common evaluation dataset in the field is the Kitti Eigen (KE) split (Eigen & Fergus, 2015). Unfortunately, it has contained critical errors since its creation, the most egregious being the inaccuracy of the ground-truth depth maps. The original Kitti data suffered from artifacts due to the camera and LiDAR not being identically positioned. As such, each sensor had slightly different viewpoints, each with slightly different occlusions. This was exacerbated by the sparsity of the LiDAR, resulting in the background bleeding into the foreground. An example of this can be seen in Figure 2. Furthermore, the conversion to depth maps omitted the transformation to the camera reference frame and used the raw LiDAR depth instead. Finally, the Squared Relative error was computed incorrectly without the squared term in the denominator. Although these errors have been noted by previous works, they have nevertheless been propagated from method to method up to this day due to the need to compare like-with-like when reporting results. It is much easier to simply ignore these errors and copy the results from existing papers, than it is to correct the evaluation procedure and re-evaluate previous baselines. We argue this behavior should be corrected immediately and provide an open-source codebase to make the transition simple for future authors. Our benchmark consists of two datasets: the Kitti Eigen-Benchmark (KEB) split (Uhrig et al., 2018) and the newly introduced SYNS-Patches dataset.

3.1 Kitti Eigen-Benchmark
Uhrig et al. (2018) aimed to fix the aforementioned errors in the Kitti (Geiger et al., 2013) dataset. This was done by accumulating LiDAR data over \u00b15 frames to create denser ground-truth and removing occlusion artifacts via SGM (Hirschm\u00fcller, 2005) consistency checks. The final KEB split represents the subset of KE with available corrected ground-truth depth maps. This consists of 652 of the original 697 images.

Figure 4: SYNS-Patches. Top: Diverse testing images. Middle: Dense depth maps. Bottom: Log-depth Canny edges.

Table 1: SYNS-Patches Scenes. We show the distribution of images per scene in the proposed dataset. This evaluates the model\u2019s capability to generalize beyond purely automotive data.
Agriculture: 315, Natural: 183, Indoor: 148, Woodland: 144, Residential: 123, Industry: 107, Misc: 72, Recreation: 62, Transport: 21, Total: 1,175.

Similarly, we report the well-established image-based metrics from the official Kitti Benchmark. While some of these overlap with KE, we avoid saturated metrics such as \u03b4 < 1.25^3 or the incorrect SqRel error2. We also report pointcloud-based reconstruction metrics proposed by \u00d6rnek et al. (2022).
They argue that imagebased depth metrics are insufficient to accurately evaluate depth estimation models, since the true objective is to reconstruct the 3-D scene. Further details can be found in Section B.1. Nevertheless, the Kitti dataset is becoming saturated and a more varied and complex evaluation framework is required. 3.2 SYNS-Patches The second dataset in our benchmark is the novel SYNS-Patches, based on SYNS (Adams et al., 2016). The original SYNS is composed of 92 scenes from 9 different categories. Each scene contains a panoramic HDR image and an aligned dense LiDAR scan. This provides a previously unseen dataset to evaluate the generalization capabilities of the trained baselines. We extend depth estimation to a wider variety of environments, including woodland & natural scenes, industrial estates, indoor scenes and more. SYNS provides dense outdoor LiDAR scans. Previous dense depth datasets (Koch et al., 2018) are typically limited to indoor scenes, while outdoor datasets (Geiger et al., 2013; Guizilini et al., 2020) are sparse. These dense depth maps allow us to compute metrics targeting high-interest regions such as depth boundaries. Our dataset is generated by sampling 18 undistorted patches per scene, performing a full horizontal rotation roughly at eye level. To make this dataset more amenable to the transition from Kitti, we maintain the same aspect ratio and extract patches of size 376 \u00d7 1242. We follow the same procedure on the LiDAR to extract aligned dense ground-truth depth maps. Ground-truth depth boundaries are obtained via Canny edges in the dense log-depth map. After manual validation and removal of data with dynamic object artifacts, the final test set contains 1,175 of the possible 1,656 images. We show the distributions of depth values in Figure 3, the images per category in Table 1 and some illustrative examples in Figure 4. For each dataset, we additionally compute the percentage of image pixels with ground-truth depth values. The original Kitti Eigen has a density of only 4.10%, while the improved Kitti Eigen-Benchmark has 15.28%. Meanwhile, SYNS-Patches has ground-truth for 78.30% of the scene, demonstrating the high-quality of the data. We report image-based metrics from Uhrig et al. (2018) and pointcloud-based metrics from \u00d6rnek et al. (2022). For more granular results, we compute these metrics only at depth boundary pixels. Finally, we compute the edge-based accuracy and completeness from Koch et al. (2018) using the Chamfer pixel distance to/from predicted and ground-truth depth edges. Further dataset creation details can be found in Section B.3. Note that SYNS-Patches is purely a testing dataset never used for training. This represents completely unseen environments that test the generalization capabilities of the trained models. 2The squared term was missing from the denominator. 6 Published in Transactions on Machine Learning Research (12/2022) 4 The Design Decisions That Matter Most contributions to self-supervised monocular depth estimation focus on alterations to the view synthesis loss (Godard et al., 2017; 2019; Zhan et al., 2018), additional geometric consistency (Mahjourian et al., 2018; Bian et al., 2019) or regularization (Rui et al., 2018) and the introduction of proxy supervised losses (Kuznietsov et al., 2017; Tosi et al., 2019; Watson et al., 2019). In this paper, we return to first principles and study the effect of changing components in the framework rarely claimed as contributions. 
We focus on practical changes that lead to significant improvements, raising the overall baseline performance and providing a solid platform on which to evaluate recent SotA models. Since this paper primarily focuses on the benchmarking and evaluation procedure, we provide a detailed review of monocular depth estimation and each contribution as supplementary material in Section A. We encourage readers new to the field to refer to this section for additional details. It is worth reiterating that the depth estimation network only ever requires a single image as its input, during both training and evaluation. However, methods that use monocular video sequences during training require an additional relative pose regression network. This replaces the known fixed stereo baseline used by stereo-trained models. Note that this pose network is only required during training to perform the view synthesis and compute the photometric loss. As such, it can be discarded during the depth evaluation. Furthermore, since all monoculartrained approaches use the same pose regression system, it is beyond the scope of this paper to evaluate the performance of this component. 4.1 Implementation details We train these models on the common Kitti Eigen-Zhou (KEZ) training split (Zhou et al., 2017), containing 39,810 frames from the KE split where static frames are discarded. Most previous works perform their ablation studies on the KE test set, where the final models are also evaluated. This indirectly incorporates the testing data into the hyperparameter optimization cycle, which can lead to overfitting to the test set and exaggerated performance claims. We instead use a random set of 700 images from the KEZ validation split with updated ground-truth depth maps (Uhrig et al., 2018). Furthermore, we report the image-based metrics from the Kitti Benchmark and the pointcloud-based metrics proposed by \u00d6rnek et al. (2022), as detailed in Section B.2. For ease of comparison, we add the performance rank ordering of the various methods. Image-based ordering uses AbsRel, while pointcloud-based uses the F-Score. Models were trained for 30 epochs using Adam with a base learning rate of 1e\u22124, reduced to 1e\u22125 halfway through training. The default DepthNet backbone is a pretrained ConvNeXt-T (Liu et al., 2022), while PoseNet uses a pretrained ResNet-18 (He et al., 2016). We use an image resolution of 192\u00d7640 with a batch size of 8. Horizontal flips and color jittering are randomly applied with a probability of 0.5. We adopt the minimum reconstruction loss and static pixel automasking losses from Monodepth2 (Godard et al., 2019), due to their simplicity and effectiveness. We use edge-aware smoothness regularization (Godard et al., 2017) with a scaling factor of 0.001. These losses are computed across all decoder scales, with the intermediate predictions upsampled to match the full resolution. We train in a Mono+Stereo setting, using monocular video and stereo pair support frames. To account for the inherently random optimization procedure, each model variant is trained using three random seeds and mean performance is reported. We emphasize that the training code has been publicly released alongside the benchmark code to ensure the reproducibility of our results and to allow future researchers to build off them. 4.2 Backbone Architecture & Pretraining Here we evaluate the performance of recent SotA backbone architectures and their choice of pretraining. Results can be found in Table 2 & Figure 5. 
We test ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), MobileNet-v3 (Howard et al., 2019), EfficientNet (Tan & Le, 2019), HRNet (Wang et al., 2021) and ConvNeXt (Liu et al., 2022). ResNet variants are trained either from scratch or using pretrained supervised weights from Wightman (2019). ResNeXt-101 variants additionally include fully supervised (Wightman, 2019), weakly-supervised and self-supervised (Yalniz et al., 2019) weights. All remaining backbones are pretrained by default. 7 Published in Transactions on Machine Learning Research (12/2022) Table 2: Backbone Ablation. We study the effect of various backbone architectures & pretraining methods. PT indicates use of pretrained ResNet weights. All other baselines are pretrained by default. ConvNeXt provides the best performance, followed by HRNet-W64. ResNeXt can be further improved by using self-/weakly-supervised weights. Frames per second were measured on an NVIDIA GeForce RTX 3090 with an image of size 192 \u00d7 640. Image-based Pointcloud-based KEZ (val) MParam\u2193 FPS\u2191 # MAE\u2193 RMSE\u2193 AbsRel\u2193 LogSI\u2193 # Chamfer\u2193 F-Score\u2191 IoU\u2191 ResNet-18 14.33 237.1 16 1.50 3.58 7.70 11.45 16 0.59 50.71 35.20 ResNet-18 PT 14.33 238.2 10 1.23 3.06 6.29 9.03 10 0.49 54.35 38.79 ResNet-101 51.51 79.63 13 1.34 3.27 6.76 10.33 12 0.53 53.53 37.80 ResNet-101 PT 51.51 79.72 7 1.16 2.95 5.71 8.94 4 0.47 57.25 41.35 ResNeXt-101 95.76 72.07 11 1.29 3.17 6.55 9.74 11 0.51 53.80 38.16 ResNeXt-101 PT 95.76 71.98 8 1.09 2.64 5.74 7.77 9 0.44 55.10 39.55 ResNeXt-101 SSL 95.76 71.50 6 1.08 2.62 5.65 7.74 7 0.44 55.56 39.96 ResNeXt-101 SWSL 95.76 71.77 5 1.07 2.65 5.56 7.78 6 0.44 56.40 40.83 MobileNet-v3-S 2.20 158.3 17 1.95 4.27 10.25 14.74 17 0.76 45.13 30.35 MobileNet-v3-L 6.67 137.8 12 1.27 3.05 6.72 9.00 13 0.51 52.87 37.46 EfficientNet-B0 5.84 99.35 14 1.29 3.04 6.95 8.90 14 0.52 52.14 37.05 EfficientNet-B4 19.41 54.28 15 1.33 3.11 7.20 9.22 15 0.53 51.77 36.83 HRNet-W18 16.05 29.65 9 1.14 2.81 5.86 8.17 8 0.47 55.34 39.76 HRNet-W64 122.81 24.17 4 1.02 2.51 5.33 7.26 5 0.43 57.05 41.45 ConvNeXt-T 31.93 147.5 3 1.03 2.59 5.30 7.63 3 0.43 57.97 42.26 ConvNeXt-B 92.65 98.09 1 0.97 2.49 4.98 7.40 1 0.40 60.63 44.89 ConvNeXt-L 203.27 72.44 2 0.98 2.44 5.19 7.07 2 0.41 58.17 42.49 Figure 5: Backbone Ablation. We show the relative performance improvement in F-Score, MAE, LogSI and AbsRel obtained by different backbone architectures and pretraining methods. Most existing papers limit their backbone to a pretrained ResNet18, resulting in limited improvements. Full results in Table 2. As seen in Table 2, ConvNeXt variants outperform all other backbones, with HRNet-W64 following closely behind. Within lighter backbones, i.e. < 20 MParams, we find HRNet-W18 to be the most effective, greatly outperforming mobile backbones such as MobileNet-v3 and EfficientNet. Regarding pretraining, all ResNet backbones show significant improvements when using pretrained weights. ResNeXt variants are further improved by using pretrained weights from self-supervised or weakly-supervised training (Yalniz et al., 2019). 8 Published in Transactions on Machine Learning Research (12/2022) A summary of these results can be found in Figure 5, showing the relative improvement over ResNet-18 (from scratch) in F-Score, MAE, LogSI and AbsRel metrics. The vast majority of monocular depth approaches simply stop at a pretrained ResNet-18 backbone (\u223c20% improvement). 
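Swapping the encoder is largely an engineering exercise: the pretrained weights referenced above (Wightman, 2019) are distributed via the timm library, which exposes all of the ablated backbones behind a single interface. The sketch below shows how such a swap could look; it is an illustrative assumption rather than the released benchmark code.

```python
# Illustrative sketch: building interchangeable multi-scale encoders via timm
# (Wightman, 2019). Any of the backbones ablated in Table 2 can be requested by
# name; the decoder only needs the list of per-stage channel widths.
import timm
import torch


def make_encoder(name: str = "convnext_base", pretrained: bool = True):
    """Return a feature extractor producing one feature map per stage."""
    encoder = timm.create_model(name, pretrained=pretrained, features_only=True)
    return encoder, encoder.feature_info.channels()  # e.g. [128, 256, 512, 1024]


encoder, channels = make_encoder("convnext_base")     # or "resnet18", "hrnet_w18", ...
features = encoder(torch.randn(1, 3, 192, 640))        # list of multi-scale feature maps
print([f.shape for f in features], channels)
```

Because the decoder only consumes the multi-scale feature maps and their channel widths, the choice of backbone is decoupled from the losses, which is what makes the ablation in Table 2 possible without touching the rest of the framework.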
However, replacing this with a well-engineered modern architecture such as ConvNeXt or HRNet results in an additional 15% improvement. As such, the remainder of the paper will use the ConvNeXt-B backbone when comparing baselines.

4.3 Depth Regularization
The second study evaluates the performance of various commonly used depth regularization losses. We focus on variants of depth spatial gradient smoothing such as first-order (Garg et al., 2016), edge-aware (Godard et al., 2017), second-order (Rui et al., 2018) and Gaussian blurring. We additionally test two variants of occlusion regularization (Rui et al., 2018), favoring either background or foreground depths. Results are shown in Table 3. We evaluate these models on the proposed KEZ validation split, as well as the KE test set commonly used by previous papers. Once again, the KEZ split is used to limit the chance of overfitting to the target KEB and SYNS-Patches test sets. When evaluating on the updated ground-truth, using no additional regularization produces the best results. All variants of depth smoothness produce slightly inferior results, all comparable to each other. Incorporating occlusion regularization alongside smoothness regularization again leads to a decrease in performance. Meanwhile, on the inaccurate ground-truth (Eigen & Fergus, 2015), smoothness constraints produce slightly better results. We believe this is due to the regularization encouraging oversmoothing in boundaries, which mitigates the effect of the incorrect ground-truth boundaries shown in Figure 2. As such, this is overfitting to errors in the evaluation criteria, rather than improving the actual depth prediction. Meanwhile, the large overlapping receptive fields of modern architectures such as ConvNeXt are capable of implicitly providing the smoothness required by neighbouring depths. Once again, these results point towards the need for an up-to-date benchmarking procedure that is based on reliable data and more informative metrics.

Table 3: Depth Regularization Ablation. We study the effect of adding smoothness (edge-aware, first-/second-order, w/wo Gaussian blurring) and occlusion regularization (prefer background/foreground). Whilst all methods typically use these regularizations, omitting them provides the best performance when evaluating on the corrected ground-truth. Meanwhile, these regularizations provide slight improvements on the outdated Kitti Eigen split. This is likely due to oversmoothing to account for the inaccurate boundaries shown in Figure 2.
Image-based Pointcloud-based KEZ (val) # MAE\u2193 RMSE\u2193 AbsRel\u2193 LogSI\u2193 # Chamfer\u2193 F-Score\u2191 IoU\u2191 No Regularization 1 1.63 3.68 7.86 11.35 1 0.66 50.25 34.68 First-order 3 1.65 3.64 8.12 11.32 6 0.67 49.30 33.90 Fist-order Blur 5 1.65 3.66 8.19 11.36 4 0.67 49.39 33.96 Second-order 2 1.64 3.64 8.10 11.25 5 0.67 49.32 33.93 Second-order Blur 4 1.65 3.66 8.14 11.27 2 0.67 49.46 34.05 Occlusion (BG) 7 1.66 3.65 8.27 11.38 7 0.68 48.86 33.52 Occlusion (FG) 6 1.65 3.65 8.19 11.23 3 0.67 49.40 34.02 KE (test) # AbsRel\u2193 SqRel\u2193 RMSE\u2193 LogRMSE\u2193 \u03b4 < 1.251\u2191 \u03b4 < 1.252\u2191 \u03b4 < 1.253\u2191 No Regularization 2 0.1002 0.7580 4.5833 0.1905 0.8865 0.9589 0.9798 First-order 1 0.0993 0.7386 4.5274 0.1873 0.8870 0.9608 0.9810 Fist-order Blur 7 0.1013 0.7600 4.5431 0.1886 0.8849 0.9603 0.9807 Second-order 4 0.1004 0.7567 4.5260 0.1877 0.8871 0.9609 0.9808 Second-order Blur 3 0.1003 0.7661 4.5512 0.1881 0.8867 0.9607 0.9806 Occlusion (BG) 6 0.1004 0.7573 4.5454 0.1872 0.8850 0.9607 0.9811 Occlusion (FG) 5 0.1004 0.7623 4.5326 0.1872 0.8870 0.9611 0.9810 9 Published in Transactions on Machine Learning Research (12/2022) Table 4: State-of-the-Art Summary. We summarize the settings used by each evaluated method in the benchmark. Contributions made by each method are indicated by either bold font or an asterisk. Please note that these settings do not exactly reflect the original implementations by the respective authors, since we have introduced changes for the sake of comparability and performance improvement. Train Decoder Proxy Min+Mask Feat Virtual Blend Mask Smooth Occ SfM-Learner M Monodepth Explainability \u2713 Klodt M Monodepth Uncertainty \u2713 Monodepth2 M Monodepth \u2713* \u2713 Johnston M Discrete \u2713 \u2713 HR-Depth M HR-Depth \u2713 \u2713 Garg S Monodepth \u2713 Monodepth S Monodepth \u2713* \u2713* SuperDepth S Sub-Pixel \u2713* \u2713 \u2713 Depth-VO-Feat MS Monodepth DepthNet \u2713 Monodepth2 MS Monodepth \u2713* \u2713 FeatDepth MS Monodepth \u2713 AutoEnc \u2713 CADepth MS CADepth \u2713 \u2713 DiffNet MS DiffNet \u2713 \u2713 HR-Depth MS HR-Depth \u2713 \u2713 Kuznietsov SD* Monodepth berHu \u2713 DVSO SD* Monodepth berHu \u2713 \u2713* \u2713* MonoResMatch SD* Monodepth berHu \u2713 \u2713 DepthHints MSD* Monodepth LogL1 \u2713 \u2713 Ours MS HR-Depth Ours (Min) MS HR-Depth \u2713 Ours (Proxy) MSD* HR-Depth LogL1 Ours (Min+Proxy) MSD* HR-Depth LogL1 \u2713 5 Results This section presents the results obtained when combining recent SotA approaches with our proposed design changes. Once again, the focus lies in training all baselines in a fair and comparable manner, with the aim of finding the effectiveness of each contribution. For completeness and comparison with the original papers, we report results on the KE split in Section 5.2. However, it is worth reiterating that we strongly believe this evaluation should not be used by future authors. 5.1 Evaluation details We cap the maximum depth to 100 meters (compared to the common 50m (Garg et al., 2016) or 80m (Zhou et al., 2017)) and omit border cropping (Garg et al., 2016) & stereo-blending post-processing (Godard et al., 2017). Again, we show the rank ordering based on image-based (AbsRel), pointcloud-based (F-Score) and edge-based (F-Score) metrics. Stereo-supervised (S or MS) approaches apply a fixed scaling factor to the prediction, due to the known baseline between the cameras during training. 
Monocular-supervised (M) methods instead apply per-image median scaling to align the prediction and ground-truth. We largely follow the implementation details outlined in Section 4.1, except for the backbone architecture, which is replaced with ConvNeXt-B (Liu et al., 2022). As a baseline, all methods use the standard DispNet decoder (Mayer et al., 2016) (excluding methods proposing new architectures), the SSIM+L1 photometric loss (Godard et al., 2017), edge-aware smoothness regularization (Godard et al., 2017) and upsampled multiscale losses (Godard et al., 2019). We settle on these design decisions due to their popularity and prevalence in all recent approaches. This allows us to minimize the changes between legacy and modern approaches and focus on the contributions of each method. 10 Published in Transactions on Machine Learning Research (12/2022) The specific settings and contributions of each approach can be found in Table 4. For the full details please refer to the original publication, the review in Section 2 or the public codebase. We label the training supervision as follows: M = Monocular video synthesis, S = Stereo pair synthesis & D* = Proxy depth regression. In this case, all proxy depth maps used for regression (D*) are obtained via the hand-crafted stereo disparity algorithm SGM (Hirschm\u00fcller, 2005). We further improve their robustness via the min reconstruction fusion proposed by Depth Hints (Watson et al., 2019). Table 5: Kitti Eigen Evaluation. Results reported in the original publications (top) vs. those obtained by our updated baselines (bottom). The proposed baselines outperform those provided by the original authors in every case, most notably in the mono (Zhou et al., 2017) and stereo (Garg et al., 2016) baselines. For instance \u03b4 < 1.25 accuracy is improved by a raw 10% in both cases, while AbsRel error is decreased by 5%. As such, models with the proposed changes represent the new SotA. 
Original results Train # AbsRel\u2193 SqRel\u2193 RMSE\u2193 LogRMSE\u2193 \u03b4 < 1.251\u2191 \u03b4 < 1.252\u2191 \u03b4 < 1.253\u2191 SfM-Learner M 18 0.1830 1.5950 6.7090 0.2700 0.7340 0.9020 0.9590 Klodt M 17 0.1800 1.9700 6.8550 0.2700 0.7650 0.9130 0.9620 Monodepth2 M 13 0.1150 0.9030 4.8630 0.1930 0.8770 0.9590 0.9810 Johnston M 6 0.1060 0.8610 4.6990 0.1850 0.8890 0.9620 0.9820 HR-Depth M 9 0.1090 0.7920 4.6320 0.1850 0.8840 0.9620 0.9830 Garg S 16 0.1520 1.2260 5.8490 0.2460 0.7840 0.9210 0.9670 Monodepth S 14 0.1330 1.1420 5.5330 0.2300 0.8300 0.9360 0.9700 SuperDepth S 11 0.1120 0.8750 4.9580 0.2070 0.8520 0.9470 0.9770 Depth-VO-Feat MS 15 0.1350 1.1320 5.5850 0.2290 0.8200 0.9330 0.9710 Monodepth2 MS 5 0.1060 0.8180 4.7500 0.1960 0.8740 0.9570 0.9790 FeatDepth MS 1 0.0990 0.6970 4.4270 0.1840 0.8890 0.9630 0.9820 CADepth MS 3 0.1020 0.7520 4.5040 0.1810 0.8940 0.9640 0.9830 DiffNet MS 2 0.1010 0.7490 4.4450 0.1790 0.8980 0.9650 0.9830 HR-Depth MS 7 0.1070 0.7850 4.6120 0.1850 0.8870 0.9620 0.9820 Kuznietsov SD* 12 0.1130 0.7410 4.6210 0.1890 0.8620 0.9600 0.9860 DVSO SD* 8 0.1070 0.8520 4.7850 0.1990 0.8660 0.9500 0.9780 MonoResMatch SD* 10 0.1110 0.8670 4.7140 0.1990 0.8640 0.9540 0.9790 DepthHints MSD* 4 0.1050 0.7690 4.6270 0.1890 0.8750 0.9590 0.9820 Our implementation Train # AbsRel\u2193 SqRel\u2193 RMSE\u2193 LogRMSE\u2193 \u03b4 < 1.251\u2191 \u03b4 < 1.252\u2191 \u03b4 < 1.253\u2191 SfM-Learner M 18 0.1374 1.6426 5.3609 0.2201 0.8562 0.9467 0.9732 Klodt M 17 0.1325 1.4591 5.2738 0.2178 0.8604 0.9482 0.9740 Monodepth2 M 16 0.1145 0.9374 4.9131 0.1946 0.8767 0.9589 0.9803 Johnston M 15 0.1140 0.8577 4.7707 0.1912 0.8772 0.9606 0.9816 HR-Depth M 14 0.1114 0.9026 4.8276 0.1911 0.8817 0.9609 0.9815 Garg S 8 0.1007 0.8180 4.6693 0.1922 0.8859 0.9575 0.9788 Monodepth S 7 0.1003 0.7958 4.6448 0.1909 0.8846 0.9573 0.9793 SuperDepth S 10 0.1030 0.8122 4.6907 0.1939 0.8828 0.9562 0.9786 Depth-VO-Feat MS 9 0.1012 0.7802 4.6676 0.1954 0.8800 0.9551 0.9780 Monodepth2 MS 5 0.0994 0.7491 4.5438 0.1879 0.8883 0.9607 0.9806 FeatDepth MS 3 0.0986 0.7410 4.5521 0.1880 0.8868 0.9603 0.9805 CADepth MS 4 0.0988 0.7313 4.5117 0.1848 0.8876 0.9621 0.9816 DiffNet MS 2 0.0983 0.7377 4.5139 0.1852 0.8900 0.9622 0.9814 HR-Depth MS 1 0.0974 0.7306 4.4875 0.1854 0.8921 0.9621 0.9811 Kuznietsov SD* 13 0.1098 0.9175 4.7278 0.1846 0.8797 0.9625 0.9832 DVSO SD* 12 0.1056 0.8226 4.6427 0.1880 0.8822 0.9594 0.9812 MonoResMatch SD* 11 0.1048 0.8266 4.6263 0.1881 0.8835 0.9588 0.9806 DepthHints MSD* 6 0.0997 0.7958 4.5112 0.1834 0.8924 0.9631 0.9820 11 Published in Transactions on Machine Learning Research (12/2022) 5.2 Kitti Eigen We first validate the effectiveness of our improved implementations on the KE split. As discussed, we strongly believe that this dataset encourages suboptimal design decisions, and future work should not evaluate on it. We report the original metrics from Eigen & Fergus (2015), as detailed in Section B.1. Results can be found in Table 5, alongside results from the original published papers. As seen, the models trained with our design decisions improve over each of their respective baselines. This is particularly noticeable in the early baselines (Garg et al., 2016; Zhou et al., 2017), which have remained unchanged since publication. This again highlights the importance of providing up-to-date baselines that are trained in a comparable way. 5.3 Kitti Eigen-Benchmark We report results on the KEB split, described in Sections 3.1 & B.2. 
To re-iterate, this is an updated evaluation using the corrected depth maps from Uhrig et al. (2018). From these images, we select a subset of 10 interesting images to evaluate the qualitative performance of the trained models. Results can be found in Table 6 & Figure 7. As shown, when training and evaluating in a comparable manner, the improvements provided by recent contributions are significantly lower than those reported in the original papers. In fact, we find the seminal stereo method by Garg et al. (2016) to be one of the top performers across all metrics. Most notably, it outperforms all other methods in 3-D pointcloud-based metrics, indicating that the reconstructions are the most accurate. Incorporating the discussed design decisions along with SotA contributions provides the best image-based performance. However, these contributions were not developed to optimize pointcloud reconstruction. As seen, purely monocular approaches perform worse than stereo-based methods, despite the median scaling aligning the predictions to the ground truth. However, incorporating the contributions from Monodepth2 (Godard et al., 2019) results in significant improvements over SfM-Learner. Qualitative depth visualizations can be found in Figure 6. Once again, the seminal Garg et al. (2016) and SfMLearner (Zhou et al., 2017) baselines are drastically improved w.r.t. the original implementation. However, SfM-Leaner predictions are still characterized by artifacts common to purely monocular supervision. Most notably, objects moving at similar speeds to the camera are predicted as holes of infinite depth, due to the fact that they appear static across images. Static pixel automasking from Monodepth2 (Godard et al., 2019) GT (Interp) SfM-Learner Garg Monodepth2 (MS) HR-Depth (MS) DepthHints (MS) Figure 6: Kitti Visualization. Baseline models (Garg et al., 2016; Zhou et al., 2017) are greatly improved from their original implementations. Incorporating the minimum reconstruction loss & automasking from Monodepth2 (Godard et al., 2019) improves accuracy on thin structures and prevents holes of infinite depth. Full results in Table 6. 12 Published in Transactions on Machine Learning Research (12/2022) Table 6: Kitti Eigen-Benchmark Evaluation. When training in comparable conditions, the stereo baseline (Garg et al., 2016) is one of the top performing methods. The minimum reprojection and automasking losses (Godard et al., 2019) help to improve performance and mitigate monocular supervision artefacts. This is further improved via a high-resolution decoder (Lyu et al., 2021) and proxy depth supervision (Watson et al., 2019). However, these contributions only improve image-based depth metrics, but do not result in more accurate 3-D pointcloud reconstructions. 
Image-based Pointcloud-based KEB (test) Train # MAE\u2193 RMSE\u2193 AbsRel\u2193 LogSI\u2193 # Chamfer\u2193 F-Score\u2191 IoU\u2191 SfM-Learner M 22 1.98 4.57 10.69 15.80 22 0.73 44.77 30.03 Klodt M 21 1.96 4.54 10.49 15.86 21 0.72 45.26 30.40 Monodepth2 M 18 1.84 4.11 8.82 13.10 18 0.71 46.64 31.62 Johnston M 19 1.83 3.99 8.85 12.89 20 0.71 45.72 30.78 HR-Depth M 17 1.80 4.04 8.65 12.75 17 0.69 47.35 32.10 Garg S 2 1.60 3.75 7.65 11.39 1 0.60 53.28 37.33 Monodepth S 6 1.61 3.72 7.73 11.57 6 0.64 51.25 35.45 SuperDepth S 9 1.64 3.77 7.81 11.63 2 0.63 52.30 36.40 Depth-VO-Feat MS 4 1.63 3.72 7.70 11.64 3 0.62 52.01 36.15 Monodepth2 MS 10 1.61 3.62 7.90 10.99 7 0.64 50.50 34.98 FeatDepth MS 8 1.60 3.60 7.80 11.01 10 0.65 49.99 34.51 CADepth MS 14 1.63 3.60 8.09 10.84 14 0.66 49.32 34.06 DiffNet MS 12 1.62 3.63 7.97 10.93 12 0.65 49.63 34.23 HR-Depth MS 3 1.58 3.56 7.70 10.68 5 0.62 51.49 35.93 Kuznietsov SD* 20 1.82 3.98 9.32 11.80 19 0.71 45.80 30.63 DVSO SD* 13 1.66 3.77 8.05 11.32 11 0.67 49.82 34.18 MonoResMatch SD* 16 1.66 3.75 8.20 11.31 16 0.66 49.01 33.52 DepthHints MSD* 15 1.63 3.62 8.10 10.94 15 0.66 49.30 33.80 Ours MS 1 1.59 3.64 7.63 11.15 4 0.62 51.74 35.97 Ours (Min) MS 7 1.58 3.53 7.79 10.58 8 0.63 50.45 35.01 Ours (Proxy) MSD* 5 1.57 3.50 7.73 10.59 9 0.64 50.36 34.69 Ours (Min+Proxy) MSD* 11 1.61 3.56 7.97 10.74 13 0.65 49.46 33.90 Garg Monodepth Kuznietsov SfM-Learner Klodt DVSO Depth-VO-Feat SuperDepth MonoResMatch Monodepth2 (M) Monodepth2 (MS) Depth Hints Johnston FeatDepth CADepth DiffNet HR-Depth (M) HR-Depth (MS) 40 30 20 10 0 Improvement (%) F-Score LogSI AbsRel MAE Figure 7: Kitti Eigen-Benchmark Improvement. When training and evaluating in fair conditions, many contributions do not result in relative improvements w.r.t. the Garg et al. (2016) stereo baseline. Most notably, all monocular-supervised approaches perform significantly worse despite the per-image median scaling. Full results in Table 6. 13 Published in Transactions on Machine Learning Research (12/2022) fixes most of these artifacts. Similarly, the minimum reconstruction loss leads to more accurate predictions for thin objects, such as traffic signs and posts. DepthHints (Watson et al., 2019) and HR-Depth (Lyu et al., 2021) independently improve the quality of the predictions via proxy depth supervision and a high-resolution decoder, respectively. However, these contributions do not significantly improve the 3-D reconstructions. 5.4 SYNS-Patches Finally, we evaluate the baselines on the SYNS-Patches dataset. As discussed in Sections 3.2 & B.3, this dataset consists of 1175 images from a variety of different scenes, such as woodlands, natural scenes and urban residential/industrial areas. It is worth noting that we evaluate the models from previous section without re-training or fine-tuning. As such, SYNS-Patches represents a dataset completely unseen during training. This allows us to evaluate the robustness of the learned representations to new unseen environment types. We select a subset of 5 images per category to evaluate the qualitative performance. To make results more comparable, all models are evaluated using the monocular protocol, where each predicted depth map is aligned to the ground-truth using per-image median scaling. We reuse the metrics from the KEB split and additionally report edge-based accuracy and completion from Koch et al. (2018). Finally, we also compute the F-Score only at depth boundary pixels, reflecting the quality of the predicted discontinuities. 
Results for this evaluation can be found in Table 7. As seen, performance decreases drastically for all approaches, showing that models do not transfer well beyond the automotive domain. A further decrease in F-Score can be seen when evaluating only on depth edges, indicating this as a common source of error. Once again, models incorporating recent contributions provide SotA performance in traditional image-based Table 7: SYNS-Patches Evaluation. Overall performance is drastically reduced when evaluating outside the automotive training domain. Methods using minimum reconstruction loss and automasking improve image-based metrics, while Garg et al. (2016) still provides some of the top 3-D pointcloud reconstructions. Predicted edge boundaries are typically accurate (Edge-Acc <5 px), but incomplete (Edge-Comp >25 px). Image-Based Pointcloud-based Edge-based SYNS-Patches Train # MAE\u2193 RMSE\u2193 AbsRel\u2193 # F-Score\u2191 IoU\u2191 # F-Score\u2191 Acc\u2193 Comp\u2193 SfM-Learner M 22 5.43 9.25 31.58 22 11.79 6.43 20 8.47 3.46 36.12 Klodt M 20 5.40 9.20 31.20 21 12.00 6.57 19 8.48 3.44 35.22 Monodepth2 M 13 5.33 9.02 30.05 20 12.08 6.62 21 8.46 3.30 37.01 Johnston M 10 5.24 8.92 29.72 18 12.16 6.66 18 8.60 3.23 42.82 HR-Depth M 8 5.26 8.95 29.53 5 13.37 7.40 6 9.16 3.07 30.03 Garg S 15 5.29 9.20 30.73 2 13.48 7.45 1 9.53 3.37 26.79 Monodepth S 21 5.29 9.20 31.27 19 12.14 6.67 17 8.69 3.57 61.15 SuperDepth S 16 5.26 9.08 30.83 13 12.87 7.10 11 9.01 3.40 40.40 Depth-VO-Feat MS 17 5.30 9.17 30.83 16 12.43 6.82 15 8.77 3.50 38.49 Monodepth2 MS 6 5.18 8.91 29.04 8 13.18 7.27 13 8.95 3.38 32.69 FeatDepth MS 7 5.16 8.80 29.12 17 12.27 6.73 22 8.41 3.50 44.09 CADepth MS 11 5.22 8.97 29.80 14 12.83 7.06 16 8.70 3.42 35.89 DiffNet MS 2 5.16 8.91 28.80 9 13.16 7.26 14 8.81 3.45 39.46 HR-Depth MS 5 5.13 8.85 28.94 1 13.79 7.65 5 9.21 3.25 28.33 Kuznietsov SD* 19 5.47 9.50 31.08 10 13.15 7.26 7 9.11 3.39 47.13 DVSO SD* 9 5.18 8.93 29.66 11 13.08 7.23 2 9.29 3.34 40.23 MonoResMatch SD* 14 5.24 9.07 30.28 15 12.73 7.01 9 9.03 3.47 51.03 DepthHints MSD* 18 5.33 9.07 30.90 12 12.91 7.11 10 9.01 3.24 26.21 Ours MS 12 5.20 9.03 29.93 4 13.38 7.39 3 9.28 3.31 34.36 Ours (Min) MS 1 5.11 8.80 28.59 7 13.20 7.27 12 8.98 3.24 32.46 Ours (Proxy) MSD* 3 5.11 8.79 28.87 3 13.46 7.45 4 9.23 3.16 30.60 Ours (Min+Proxy) MSD* 4 5.08 8.71 28.91 6 13.23 7.30 8 9.11 3.16 31.32 14 Published in Transactions on Machine Learning Research (12/2022) GT SfM-Learner Garg Monodepth2 (MS) HR-Depth (MS) DepthHints (MS) Figure 8: SYNS Visualization. As evidenced by Table 7, models trained on Kitti do not transfer well to natural scenes, such as woodlands. Similar to Kitti, we find the contributions from Mondepth2 (Godard et al., 2019) to improve prediction accuracy in challenging areas such as thin structures and object boundaries. Full results in Table 7. depth metrics. However, Garg et al. (2016) consistently remains one of the top performers in 3-D pointcloud reconstruction and edge-based metrics. In general, predicted edges are typically accurate (\u223c3 px error). However, there are many missing edges, as reflected by the large edge completeness error (\u223c26 px error). Qualitative depth visualizations can be found in Figure 8. Similar to Kitti, Monodepth2 (Godard et al., 2019) and its successors (Watson et al., 2019; Lyu et al., 2021) greatly improve performance on thin structures. However, there is still room for improvement, as shown by the railing prediction in the second image. 
This is reflected by the low 3-D reconstruction metrics. Furthermore, all methods perform significantly worse in natural and woodlands scenes, demonstrating the need for more varied training data. 6 Conclusion This paper has presented a detailed ablation procedure to critically evaluate the current SotA in selfsupervised monocular depth learning. We independently reproduced 16 recent baselines, modernizing them with sensible design choices that improve their overall performance. When benchmarking on a level playing field, we show how many contributions do not provide improvements over the legacy stereo baseline. Even in the case where they do, the change in performance is drastically lower than that claimed by the original publications. This results in new SotA models that set the bar for future contributions. Furthermore, this work has shown how, in many cases, the silent changes that are rarely claimed as contributions can result in equal or greater improvements than those provided by the claimed contribution. Regarding future work, we identify two main research directions. The first of these is the generalization capabilities of monocular depth estimation. Given the results on SYNS-Patches, it is obvious that purely automotive data is not sufficient to generalize to complex natural environments, or even other urban environments. As such, it would be of interest to explore additional sources of data, such as indoor sequences, natural scenes or even synthetic data. The second avenue should focus on the accuracy on thin objects and depth discontinuities, which are challenging for all existing methods. This is reflected in the low F-Score and Edge Completion metrics in these regions. To aid future research and encourage good practices, we make the proposed codebase publicly available. We invite authors to contribute to it and use it as a platform to train and evaluate their contributions. 15 Published in Transactions on Machine Learning Research (12/2022) Acknowledgments This work was partially funded by the EPSRC under grant agreements EP/S016317/1 & EP/S035761/1.", "introduction": "Depth estimation is a fundamental low-level computer vision task that allows us to estimate the 3-D world from its 2-D projection(s). It is a core component enabling mid-level tasks such as SLAM, visual localization or object detection. More recently, it has heavily impacted fields such as Augmented Reality, autonomous vehicles and robotics, as knowing the real-world geometry of a scene is crucial for interacting with it\u2014both virtually and physically. This interest has resulted in a large influx of researchers hoping to contribute to the field and compare with previous approaches. New authors are faced with a complex range of established design decisions, such as the choice of backbone architecture & pretraining, loss functions and regularization. This is further complicated by the fact that papers are written to be accepted for publication. As such, they often emphasize the theoretical novelty of their work, over the robust design decisions that have the most impact on performance. 
Figure 1: Quantifying SotA Contributions ((a) SotA performance per baseline, (b) SotA improvement per baseline, (c) backbone performance, (d) backbone improvement). (a) Performance at the time vs. our re-implementation using common design decisions (lower is better). Kuznietsov et al. (2017) and SfM-Learner (Zhou et al., 2017) have much higher error as they do not use stereo training data. (b) Original papers' AbsRel relative performance improvement w.r.t. Garg et al. (2016) vs. real improvements observed training/evaluating baselines in a fair and comparable manner (higher is better). (c) Performance obtained by ablating the backbone architecture. (d) Backbone relative performance improvements (w.r.t. ResNet-18 from scratch) outweigh those provided by most recent contributions. See Section 4.2 for more details of design choices.

This paper offers a chance to step back and re-evaluate the state of self-supervised monocular depth estimation. We do this via an extensive baseline study by carefully re-implementing popular State-of-the-Art (SotA) algorithms from scratch (code is publicly available at https://github.com/jspenmar/monodepth_benchmark). Our modular codebase allows us to study the impact of each framework component and ensure all approaches are trained in a fair and comparable way. Figure 1 compares the performance reported by recent SotA against that obtained by the same technique on our updated benchmark. Our re-implementation improves performance for all evaluated baselines. However, the relative improvement resulting from each contribution is significantly lower than that reported by the original publications. In many cases, it is likely this is the result of arguably unfair comparisons against outdated baselines. For instance, as seen in Figures 1c & 1d, simply modernizing the choice of backbone in these legacy formulations results in performance gains of 25%. When applying these common design decisions to all approaches, it appears that 'legacy' formulations are still capable of outperforming many recent methods. As part of our unified benchmark, we propose a novel evaluation dataset in addition to the exclusively used urban driving datasets. This new dataset (SYNS-Patches) contains 1175 images from a wide variety of urban and natural scenes, including categories such as urban residential, woodlands, indoor, industrial and more. This allows us to evaluate the generality of the learned depth models beyond the restricted automotive domain that is the focus of most papers. To summarize, the contributions of this paper are: 1. We provide a modular codebase containing modernized baselines that are easy to train and extend. This encourages direct like-with-like comparisons and better research practices. 2. We re-evaluate the updated baseline algorithms consistently using higher-quality corrected ground-truth on the existing benchmark dataset.
This pushes the field away from commonly used flawed benchmarks, where errors are perpetuated for the sake of compatibility. 3. In addition to democratizing code and evaluation on the common Kitti benchmarks, we propose a novel testing dataset (SYNS-Patches) containing both urban and natural scenes. This focuses on the ability to generalize to a wider range of applications. The dense nature of the ground-truth allows us to provide informative metrics in complex regions such as depth boundaries. 4. We make these resources available to the wider research community, contributing to the further advancement of self-supervised monocular depth estimation." }, { "url": "http://arxiv.org/abs/2204.05698v1", "title": "Medusa: Universal Feature Learning via Attentional Multitasking", "abstract": "Recent approaches to multi-task learning (MTL) have focused on modelling connections between tasks at the decoder level. This leads to a tight coupling between tasks, which need retraining if a new task is inserted or removed. We argue that MTL is a stepping stone towards universal feature learning (UFL), which is the ability to learn generic features that can be applied to new tasks without retraining. We propose Medusa to realize this goal, designing task heads with dual attention mechanisms. The shared feature attention masks relevant backbone features for each task, allowing it to learn a generic representation. Meanwhile, a novel Multi-Scale Attention head allows the network to better combine per-task features from different scales when making the final prediction. We show the effectiveness of Medusa in UFL (+13.18% improvement), while maintaining MTL performance and being 25% more efficient than previous approaches.", "authors": "Jaime Spencer, Richard Bowden, Simon Hadfield", "published": "2022-04-12", "updated": "2022-04-12", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "main_content": "Multi-task learning. At its core, MTL [3, 32, 41] aims to train a single network to accomplish multiple tasks. Through feature sharing, these models can reduce compute requirements while performing better than expert network counterparts. Initial approaches consisted of multiple task encoders with additional feature sharing layers. The seminal UberNet [21] introduced a multi-scale, multi-head network capable of performing a large number of tasks simultaneously. Cross-stitch networks [27] introduced soft feature sharing by learning linear combinations of multiple task features. In practice, this requires first training each task separately and then finetuning their features. Sluice networks [33] extended this idea by incorporating subspace and skip-connection sharing. Meanwhile, NDDR-CNNs [17] replaced the linear combination of features with a dimensionality reduction mechanism. Kokkinos et al. [21] and Zhao et al. [46] showed that feature sharing in unrelated tasks results in a degradation in performance for both tasks, known as negative transfer. To account for this, MTAN [23] used convolutional attention to build task-specific decoders from a shared backbone. Other methods learn where to branch from the backbone and what layers to share. Vandenhende et al. [40] decide what layers to share based on precomputed task affinity scores [16].
FAFS [25] begins with a fully shared model, optimizing the separation between dissimilar tasks while minimizing model complexity. BMTAS [2] and LTB [19] instead use the Gumbel softmax to represent branching points in a tree structure. More recent approaches introduce additional refinement steps prior to making the final prediction. PAD-Net [43] was the first of these networks, using simple task heads to make intermediate predictions. Each possible pair of tasks was then connected via spatial attention, from which a final prediction was made. MTI-Net [42] extended this approach to multiple scales, incorporating feature propagation modules between them. PAP-Net [45] instead learned per-task pixel affinity matrices, estimating the pixel-wise correlation between each combination of tasks. Zhou et al. [47] additionally incorporated inter-task patterns. Due to the connections between all possible tasks, these approaches suffer from a quadratic growth of network parameters, leading to intractable compute requirements.

Figure 2. Proposed Medusa Architecture. We focus on building independent task heads. This allows us to efficiently scale to a larger number of tasks thanks to the dual attention mechanisms. (a) Shared Feature Attention, selecting relevant backbone features for each task and scale through per-channel spatial attention. (b) Novel Multi-Scale Attention head, combining task features at different scales to generate the final predictions.

Transfer learning. A topic closely related to UFL is transfer learning [31, 37, 39, 50]. However, these works typically focus on solving domain shift at the input level, performing the same task with a different input modality. In other cases, the target is a closely related task, e.g. classification on a different set of labels. More closely related to Medusa are feature- and network-based techniques for transfer learning. Feature-based approaches aim to transform the source feature representations into new representations for the target domain. This includes approaches such as feature augmentation [9, 14, 20], mapping [24, 29, 30], clustering [7, 8] and sharing [11, 18, 22]. Meanwhile, network-based techniques have instead focused on parameter sharing. Some notable examples include matrix factorization [48, 49] and parameter reuse from a network pretrained on a source domain [28, 38]. However, the focus lies mainly on the performance after finetuning on a specific new target task, rather than the overall performance on a wide range of tasks, which is the focus of MTL and UFL. More recently, approaches [16, 44] were proposed to learn and model the relationship between tasks.
However, these approaches do not truly solve the UFL problem since it is still necessary to train a separate network on each task. Instead, they follow a brute-force approach to find the best possible source to transfer for a given target task. In contrast, Medusa learns a single representation which can generalize well across future target tasks.

3. Methodology

The aim of this work is to introduce an architecture capable of learning universal features that perform well in multiple different tasks. As shown in Figure 2, Medusa consists of two main components: a shared backbone and individual task heads. One key design feature is that each task head is independent from the rest. This allows us to add new task heads a posteriori, which can be trained in conjunction or separately from the existing tasks.

3.1. Shared Feature Attention

The only part of the architecture common to all tasks is the shared backbone. The backbone produces the shared features B_s ∈ ℝ^{C_s} at each of its S scales, where C_s is the number of channels per scale. Our goal is to learn universal features useful in a wide range of tasks, which may not be known at training time. In order to let the backbone learn a broad range of features whilst allowing tasks to pick their own specific subsets, we introduce spatial attention between the backbone and each task head. We define the process of applying spatial attention SA to a generic feature map F as

SA(F) = σ(φ_1(F)) ⊙ φ_2(F),    (1)

where σ is the sigmoid operation, ⊙ the Hadamard product and φ a convolution operation followed by batch normalization and a ReLU activation. Note that the concept of spatial attention is also known as the GLU activation [10] and has previously been used in MTL [23, 42, 43]. In Medusa, the convolution weights for each scale and task are independent from each other. Therefore,

F_s^t = SA_s^t(B_s)

represents the initial task features for scale s and task t. The shared backbone can now learn a generic feature representation that suits a much wider range of tasks. Through the per-channel spatial attention σ(φ_1(F)), each task/scale retains only the specific subset of backbone features relevant to it. This alleviates the possibility of negative transfer, where sharing features between unrelated tasks can degrade the performance of both tasks. Whilst previous approaches also make use of spatial attention, they place a larger focus on modeling the connections between each pair of tasks. By creating an information bottleneck, Medusa places more importance on learning features common to all tasks that therefore provide better transfer capabilities.
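To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the shared feature attention applied between the backbone and a task head. It is an illustrative reconstruction: module names, kernel sizes and the assumption that φ_1 and φ_2 keep the channel count unchanged are ours, not the released Medusa code.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """phi in Eq. (1): convolution followed by batch normalization and ReLU."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class SharedFeatureAttention(nn.Module):
    """SA(F) = sigmoid(phi_1(F)) * phi_2(F), applied to backbone features."""
    def __init__(self, channels: int):
        super().__init__()
        self.phi1 = ConvBNReLU(channels, channels)
        self.phi2 = ConvBNReLU(channels, channels)

    def forward(self, backbone_feats):
        mask = torch.sigmoid(self.phi1(backbone_feats))   # per-channel spatial attention
        return mask * self.phi2(backbone_feats)           # F_s^t = SA_s^t(B_s)
```

In the full model, each task head would hold one such module per backbone scale, so that F_s^t uses attention weights specific to that task and scale.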
Additionally, our multi-scale approach provides the subsequent task features with a wide variety of information which, combined with the proposed MSA head, helps to provide optimal features for the final prediction.

3.2. Multi-Scale Task Predictions

Rather than building a per-task sequential decoder such as [23], we build parallel task heads by using each scale of backbone features to make an initial prediction for each task. This results in S predictions per task, used as additional supervision during training. The initial task features F_s^t are refined through

F̄_s^t = R_2(R_1(F_s^t)),    (2)

where F̄_s^t are the refined task features and R(F) = φ(F) + F is a residual convolutional block. The initial predictions are given by Ŷ_s^t = φ_s^t(F̄_s^t), where φ_s^t is the convolution mapping from the C_s channels provided by the backbone to those required by the task. These predictions are used as intermediate supervision exclusively during training, while the refined task features are combined in the MSA heads.

3.3. Multi-Scale Attention Task Heads

The final step is combining task features from multiple scales to make the final prediction for each task. In the naïve case, one could simply upsample all task features to the same resolution, concatenate channel-wise and process them together [42]. We refer to this task head as HRHead in the results further on. This assumes that the predictions from each scale are equally valid and important. However, due to the varying resolution and subsequent receptive field of each scale, this is not typically the case. In practice, higher resolution predictions can help to provide more accurate and sharp edges for some tasks. On the other hand, lower resolutions with more channels provide more descriptive features with a larger receptive field, making predictions more consistent on a global scale. We capture this information by introducing a novel Multi-Scale Attention task head. Given the processed task features F̄_s^t, the network is able to select the important information from each scale using the spatial attention SA previously defined in (1).
This results in

H_s^t = SA_s^t(F̄_s^t),    (3)
H^t = H_0^t ⊕ H_1^t ⊕ … ⊕ H_S^t,    (4)

where ⊕ represents channel-wise concatenation of the attended per-task, per-scale features H_s^t. Note that the spatial attention weights are independent from those previously used to extract F_s^t. The final per-task features H^t are used to obtain the final predictions as Ŷ^t = φ^t(H^t), where φ^t maps the final number of channels Σ_s C_s to the required task channels. Thanks to the design of the system, it becomes trivial to attach new task heads to the shared backbone. These task heads are able to choose relevant features from the shared backbone and adapt the multiple scales to the needs of new tasks. Furthermore, since the task heads are independent, the number of parameters increases only linearly with the number of tasks. Because these heads are lightweight, the resulting system is highly efficient. This is contrary to approaches such as [42, 43], where each task requires connections to every other task, resulting in a quadratic parameter complexity with regards to the number of tasks.

4. Results

Dataset. We use the NYUD-v2 dataset [34], containing labels for depth estimation, semantic segmentation, edge detection and surface normal estimation. Following existing benchmarks [26, 42], we focus on evaluating depth and semantic segmentation, leaving edges and surface normals as auxiliary tasks for use during training. Depth is evaluated through the Root Mean Squared Error (RMSE), while semantic segmentation uses the mean Intersection over Union (m-IoU).

Implementation details. We use HRNet-18 [36] pretrained on ImageNet [12] as the backbone, due to its suitability for dense prediction tasks. This produces features at downsampling scales of {4, 8, 16, 32} with {18, 36, 72, 144} channels, respectively. We use the Adam optimizer, with a base LR=1e-4 and a polynomial decay [4]. Experimentally, we found that training the shared backbone with a lower learning rate than the heads (typically LR*0.1) produced better results. Models are trained for 100 epochs. Regarding the losses, we use the L1 loss for depth and surface normal estimation, cross-entropy for semantic segmentation and a binary cross-entropy (with positive weighting of 0.95) for edge detection.
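As an illustration of the training losses just described, the following sketch wires up the per-task terms (L1 for depth and normals, cross-entropy for segmentation, positively weighted binary cross-entropy for edges). The dictionary layout and the exact way the 0.95 positive weighting enters the edge loss are assumptions made for the example, not the released training code.

```python
import torch
import torch.nn.functional as F

def medusa_losses(preds: dict, targets: dict, edge_pos_weight: float = 0.95) -> torch.Tensor:
    """Sum of per-task losses; preds/targets map task name -> tensor."""
    depth_loss = F.l1_loss(preds["depth"], targets["depth"])
    normal_loss = F.l1_loss(preds["normals"], targets["normals"])
    seg_loss = F.cross_entropy(preds["seg"], targets["seg"])

    # Binary cross-entropy for edges, weighting positives by 0.95 and
    # negatives by 0.05 (one plausible reading of the stated weighting).
    edge_bce = F.binary_cross_entropy_with_logits(
        preds["edges"], targets["edges"], reduction="none")
    w = edge_pos_weight * targets["edges"] + (1.0 - edge_pos_weight) * (1.0 - targets["edges"])
    edge_loss = (w * edge_bce).mean()

    return depth_loss + normal_loss + seg_loss + edge_loss
```

During training, the same terms would also be applied to the intermediate per-scale predictions Ŷ_s^t used as additional supervision.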
4.1. Multi-task Evaluation

Performance. We first evaluate Medusa's performance in a traditional MTL setting, following the procedure in [26]. As mentioned, the main tasks evaluated are depth estimation and semantic segmentation. However, during training we make additional use of edge detection and surface normal estimation to show the network a varied set of tasks. Following [26], we define multi-task learning performance as

Δ_MTL = (1/T) Σ_{t=0}^{T−1} (−1)^{l^t} (M_m^t − M_b^t) / M_b^t,    (5)

where M_m^t and M_b^t are the per-task performance of the multi-task or baseline network and l^t indicates if a lower value means a better performance for the given task. As such, Δ_MTL represents the average increase (or drop) in performance for each task, relative to the single-task baseline.

Table 1. Multi-task Evaluation. When performing MTL on NYUD-v2, Medusa is on par with the current SotA [42], whilst using fewer resources (see Figure 4). This is due to the novel lightweight MSA task head. We highlight the best and next best performing techniques.

Method | Backbone | Head | N+E | Seg ↑ | Depth ↓ | Δ_MTL % ↑
ST Baseline | ResNet-18 | DeepLab-v3+ |  | 35.77 | 0.600 | +0.00
MT Baseline | ResNet-18 | DeepLab-v3+ |  | 35.74 | 0.597 | +0.12
Cross-stitch [27] | ResNet-18 | DeepLab-v3+ |  | 36.01 | 0.600 | +0.30
NDDR-CNN [17] | ResNet-18 | DeepLab-v3+ |  | 34.72 | 0.611 | -2.47
MTAN [23] | ResNet-18 | DeepLab-v3+ |  | 36.00 | 0.594 | +0.79
ST Baseline | HRNet-18 | HRHead |  | 34.57 | 0.606 | +0.00
MT Baseline | HRNet-18 | HRHead |  | 33.21 | 0.614 | -2.63
MTAN [23] | HRNet-18 | DeepLab-v3+ |  | 35.25 | 0.581 | +3.02
MTAN | HRNet-18 | DeepLab-v3+ | ✓ | 36.19 | 0.567 | +5.57
PAD-Net [43] | HRNet-18 | HRHead |  | 34.39 | 0.617 | -1.23
PAD-Net | HRNet-18 | HRHead | ✓ | 35.46 | 0.604 | +1.43
MTI-Net [42] | HRNet-18 | HRHead |  | 36.94 | 0.559 | +7.26
MTI-Net | HRNet-18 | HRHead | ✓ | 37.40 | 0.540 | +9.48
Medusa (ours) | HRNet-18 | MSA (ours) |  | 36.99 | 0.573 | +6.19
Medusa | HRNet-18 | MSA | ✓ | 37.48 | 0.545 | +9.24

Table 2. Spatial Attention Ablation Study. The SFA column indicates the presence of spatial attention between the shared backbone and the task heads. Meanwhile, the MSA task head incorporates attention when combining each task's multi-scale features. Both types of attention lead to clear improvements. All models use the HRNet-18 backbone.

Method | SFA | Head | N+E | Seg ↑ | Depth ↓ | Δ_MTL % ↑
ST Baseline |  | HRHead |  | 34.57 | 0.606 | +0.00
MT Baseline |  | HRHead |  | 33.21 | 0.614 | -2.63
MT Baseline |  | MSA |  | 35.58 | 0.598 | +2.12
Medusa |  | HRHead | ✓ | 36.50 | 0.558 | +6.71
Medusa | ✓ | HRHead | ✓ | 36.64 | 0.553 | +7.31
Medusa |  | MSA | ✓ | 37.14 | 0.555 | +7.91
Medusa | ✓ | MSA | ✓ | 37.48 | 0.545 | +9.24

We obtain single-task baselines (ST) for each backbone by training expert networks on each task separately, resulting in two completely separate models. ResNet models use DeepLab-v3+ ASPP [5] task heads, while HRNet-18 uses the naïve multi-scale task head, upsampling all scales and concatenating channel-wise (HRHead).
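For reference, Eq. (5) can be computed with a few lines of code. The sketch below uses illustrative names and reproduces, for example, the +9.24% entry for Medusa (N+E) in Table 1 from its Seg/Depth values and the HRNet-18 single-task baseline.

```python
def multitask_delta(metrics_mtl: dict, metrics_base: dict, lower_is_better: dict) -> float:
    """Average relative gain over single-task baselines, as in Eq. (5).

    metrics_mtl / metrics_base map task name -> metric value;
    lower_is_better marks tasks where a smaller metric is better (l^t = 1).
    """
    deltas = []
    for task, m_b in metrics_base.items():
        m_m = metrics_mtl[task]
        sign = -1.0 if lower_is_better[task] else 1.0
        deltas.append(sign * (m_m - m_b) / m_b)
    return 100.0 * sum(deltas) / len(deltas)

# Example (values from Table 1, Medusa with N+E vs. the HRNet-18 ST baseline):
# multitask_delta({"seg": 37.48, "depth": 0.545},
#                 {"seg": 34.57, "depth": 0.606},
#                 {"seg": False, "depth": True})   # ~= +9.24
```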
Meanwhile, the multi-task baselines (MT) use a joint backbone with separate task heads. The baselines were obtained by retraining the code provided by the authors of [26, 42]. In order to make results more comparable, we also create and train a version of MTAN adapted to make use of the HRNet backbone. However, since MTAN builds a per-task decoder, rather than making initial predictions at multiple scales, it still requires use of the DeepLab-v3+ head. The results can be found in Table 1, where the column (N+E) indicates the presence of the auxiliary edges and surface normals tasks. It is interesting to note that some MT baselines and methods [17, 43] actually lead to a degradation in performance. This is likely due to a combination of negative transfer and task loss balancing during training. Meanwhile, despite not modelling task connections in the decoder, Medusa still shows improvements when incorporating the auxiliary (N+E) tasks. This demonstrates the ability of Medusa to learn generic features that complement all tasks, sharing only the useful information. To summarize, Medusa greatly outperforms all baselines with independent task heads [23] and is comparable to the current SotA [42] while using resources in a more efficient manner, as we will now discuss.

Figure 3. Qualitative Evaluation (columns: Input, GT, MT, MTI-Net, Medusa). Through the proposed MSA heads, Medusa's predictions are both globally consistent and have well-defined borders. Results are on par with the current State-of-the-Art (SotA) while using fewer resources.

Ablation. We perform an ablation study to understand the importance of Medusa's components, primarily focused on the uses of spatial attention. In the case of the Shared Feature Attention (SFA), we replace the spatial attention connecting the shared backbone to each task head with a convolutional block with BatchNorm and ReLU. On the other hand, we compare the proposed MSA head to the default HRNet head, which does not contain spatial attention. Table 2 shows that both attention components result in large benefits. Incorporating the SFA results in a consistent relative improvement across the different techniques of 8.94% and 16.81%. Meanwhile, the MSA head leads to even larger performance gains, from 17.88% to 180.60%. Most notably, incorporating the MSA head into the MT baseline results in an improvement over the ST baseline. Even across all Medusa variants, this novel task head leads to a consistent increase in accuracy. Similarly, incorporating the SFA between the backbone and task heads improves performance regardless of the task head used. Overall, the dual attention mechanisms used in Medusa lead to a relative improvement of 37.7% over the plain convolutional baseline. Once again, we believe that this is due to spatial attention providing an effective, yet efficient mechanism for routing information between different stages in the network. This allows it to easily decide what information should or should not be shared across either tasks or scales.

Visualizations. Figure 3 shows qualitative results based on the network predictions. As expected, the MT baseline shows the worst results. MTI-Net shows more spurious class predictions, especially in cluttered environments, as seen in the second image in Figure 3. Meanwhile, we find Medusa to be more globally coherent, while still having well-defined edges between classes and in depth discontinuities.
This is due to the proposed MSA head, which can effectively combine the best features from each scale.

Figure 4. Resource Usage ((a) parameters and GFLOPS vs. the number of tasks; (b) parameters and GFLOPS vs. multi-task performance). Modelling the relationships between each pair of tasks and scales [42] results in a quadratic increase in parameters/GFLOPS w.r.t. the number of tasks. This does not scale well to an increasing number of tasks. Medusa's independent task heads lead to a much more efficient scaling, while focusing on features that are more generic and reusable.

Resources. Figure 4 shows how different approaches scale w.r.t. the number of tasks. The most resource-efficient approach is the MT baseline. However, since it only uses basic task heads without intermediate predictions or attention, its performance is lacklustre. Other approaches with independent task heads (ST, MTAN) are relatively efficient, since the increase in task head parameters is linear, but their performance (see Table 1) is not on par with Medusa. MTI-Net is the only approach with results comparable to Medusa, but it does not scale well to increasing numbers of tasks. After only three tasks, MTI-Net requires more parameters than the ST baseline, which trains a completely separate network for each task. This gap only increases, due to the quadratic parameter complexity introduced by the connections between all possible pairs of tasks.

4.2. Universal Feature Learning

The following experiment evaluates Medusa in the highlighted UFL task. The objective is to learn generic shared features that can be adapted to new, unseen tasks on unseen datasets without additional finetuning at the backbone level. This is contrary to MTL, where the objective is to learn features that perform well in the specific set of training tasks without generalization to other tasks. It is also contrary to traditional transfer learning, where the objective is instead to solve the domain shift between different modalities of a single task. We show the ability of Medusa features to transfer to new tasks on new datasets through the PASCAL-Context [6] dataset, containing semantic segmentation, human part segmentation and edge detection. There are also pseudo ground-truth labels for surface normals and saliency [26] obtained from SotA models [1, 5]. Since three of the tasks are common to NYUD-v2, we evaluate on the two unique ones: human part segmentation and saliency estimation.

Table 3. Universal Feature Learning. We use the pretrained backbone features from Table 1 to train task heads on new tasks on new datasets. The features learned by Medusa provide large improvements over the commonly used ImageNet pretrained features, despite the fact that we train with orders of magnitude less data.

Method | NYUD-v2 Seg ↑ | NYUD-v2 Depth ↓ | NYUD-v2 Δ_MTL % ↑ | PASCAL Parts ↑ | PASCAL Sal ↑ | PASCAL Δ_MTL % ↑
ST Baseline | 34.57 | 0.606 | +0.00 | 48.73 | 56.44 | +0.00
MT Baseline | 33.21 | 0.614 | -2.63 | 36.13 | 51.96 | -12.93
MTAN [23] | 36.19 | 0.567 | +5.57 | 47.37 | 57.84 | +4.26
MTI-Net [42] | 37.40 | 0.540 | +9.48 | 51.50 | 60.19 | +10.76
Medusa | 37.48 | 0.545 | +9.24 | 52.24 | 61.91 | +13.18

To carry out this evaluation we use the previous models trained on NYUD-v2 in Section 4.1 with the auxiliary (N+E) tasks and check their transfer capability to the new target tasks in the PASCAL-Context dataset. This is done by freezing the shared feature backbone network and adding a new task head corresponding to either saliency estimation or human part segmentation. This can be seen as a form of continual learning. Since the shared backbone and previous task heads are frozen, we ensure that the network does not forget existing information. Instead, we expand its knowledge by learning a new task. Table 3 shows the results from this experiment, including the previous MTL results on NYUD-v2 for comparison. This highlights the main difference between UFL and MTL, where MTL only performs well in the original training tasks. This is only exacerbated by the naïve multi-task implementation, resulting in a large amount of negative transfer between tasks. Meanwhile, Medusa provides the best transfer capabilities. It is worth noting the large improvement over ImageNet pretrained features from the single-task baseline (ST), which are trained on orders of magnitude more data than the remaining MTL methods. However, since they are trained exclusively for global image classification, the learnt representations do not transfer well to complex dense tasks. Meanwhile, even though MTL performance is almost equal to MTI-Net (9.24% vs. 9.48%), the features learnt by Medusa generalize to a broader range of tasks (13.18% vs. 10.76%). This is due to Medusa's design, which places a larger focus on the shared feature representation, which is therefore able to learn a more effective feature representation.

5. Conclusions & Future Work

In this paper we have highlighted the importance of universal feature learning vs. multi-task learning, requiring a feature learning system to perform well over a large variety of tasks without additional finetuning. This is in contrast to most current MTL approaches, which focus on learning features specific to a given set of training tasks. To this end we proposed Medusa, capable of training on multiple tasks simultaneously, while allowing new task heads to be attached and trained jointly or separately. Furthermore, thanks to the novel MSA head, we are capable of doing this in a very efficient manner. This helps to provide comparable results whilst using fewer resources than previous approaches. We additionally demonstrated the generality of the features learnt by Medusa in the UFL task on unseen tasks and datasets, and showed its ability to outperform SotA features from both ImageNet and other MTL networks. Whilst Medusa has shown its effectiveness in both MTL and UFL, it is not without limitations and challenges to address in future work. For instance, the data used during training is currently required to have labels for all target tasks. In practice, these labels can be challenging to obtain, especially as the number of tasks and images grows. Medusa's performance is also dependent on the tasks used during training. If we wish to transfer to a task that is completely unrelated to the training tasks, it is likely that the features will not overlap.
Both of these issues could potentially be addressed by making the training process more flexible, without requiring each item to have labels for all tasks or by training with multiple unrelated datasets.", "introduction": "Classical approaches to computer vision relied on hand-crafted heuristics and features that encapsulated what researchers believed would be useful for a given task. With the advent of deep learning, features have become part of the learning process, leading to representations that would have never been developed heuristically. Unfortunately, most deep learning systems learn features that perform well on only one target task. Even if pretrained features are used, these require finetuning. Works that explored generic features [13, 15, 35] have focused on invariance to illumination and viewpoint changes, with the objective of establishing geometric correspondences. Whilst this is a useful step in many applications, these features are not suitable for a wider range of tasks. Meanwhile, there has been a recent surge in multi-task learning (MTL), since training a network to solve multiple tasks simultaneously can provide a performance increase over training each task independently [23, 26]. Nonetheless, these works have focused on maximizing accuracy, not generality. At their core, they still try to learn features that perform well on a specific subset of tasks. It is often difficult or impossible to include new tasks into a previously trained model. Moreover, modern approaches [42, 43] have such tight connections between tasks that it becomes impossible to evaluate a single task without all other training tasks, as illustrated in Figure 1. It is difficult to argue that such features are truly generic.

Figure 1. Transferable Feature Learning ((a) traditional decoder-coupled MTL vs. (b) the proposed design). Current high-performing approaches to MTL rely on connections between every combination of tasks, leading to a quadratic parameter complexity w.r.t. the number of tasks. We maintain independent task heads, making it possible to easily add/remove new tasks a posteriori. The dual spatial attention mechanisms (SFA & MSA) allow us to maintain performance while scaling linearly and learning highly reusable feature representations. See Figure 2 for a full overview.

This paper addresses the problem of universal feature learning (UFL), where a system is capable of learning generic features useful for all tasks. As discussed, MTL is evaluated on the same set of tasks used during training, i.e. the training and evaluation tasks are identical. In contrast, UFL aims to generalize beyond this training set. In other words, the training and evaluation tasks are not the same. As such, the resulting representations produced by the backbone are referred to as universal features. It is worth noting that, in order to insert a new task into the network, the layers corresponding to the task head still need training. However, the focus of UFL is on learning backbone feature representations that are left frozen while adding these new task heads. This results in shorter and more efficient training, as well as avoiding catastrophic forgetting in the shared backbone features. The method proposed in this paper, dubbed Medusa, aims to learn this universal representation. We design an architecture with completely independent task heads, where the only shared component is the backbone. Each task head retains only the specific subset of relevant backbone features via a spatial attention mechanism. This allows the backbone to learn generic features, while reducing the likelihood of negative transfer between tasks. The model then makes initial predictions at each backbone resolution, which are further combined in a novel Multi-Scale Attention (MSA) head. By feeding back diverse training tasks, we encourage the learned features to encode a wide variety of information across scales. Furthermore, independent task heads result in an efficient feature extraction process that utilizes significantly fewer resources but maintains competitive performance, while having a flexible architecture where new task heads can be easily added. Our contributions are summarized as: 1. We highlight the importance of universal feature learning in contrast to MTL. The main objective behind this is to learn a universal language for computer vision applications. This requires a system to learn features that require no additional finetuning to perform well in tasks they were not originally trained for. In practice, this means that the set of evaluation tasks is different from those used during feature training. 2. We present a novel Multi-Scale Attention task head and show how it can be used to develop an architecture capable of addressing the UFL problem. 3. Finally we show that Medusa can still be applied to traditional MTL, where it achieves competitive performance while requiring far fewer resources." }, { "url": "http://arxiv.org/abs/2003.13446v1", "title": "DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning", "abstract": "In the current monocular depth research, the dominant approach is to employ unsupervised training on large datasets, driven by warped photometric consistency. Such approaches lack robustness and are unable to generalize to challenging domains such as nighttime scenes or adverse weather conditions where assumptions about photometric consistency break down. We propose DeFeat-Net (Depth & Feature network), an approach to simultaneously learn a cross-domain dense feature representation, alongside a robust depth-estimation framework based on warped feature consistency. The resulting feature representation is learned in an unsupervised manner with no explicit ground-truth correspondences required. We show that within a single domain, our technique is comparable to both the current state of the art in monocular depth estimation and supervised feature representation learning. However, by simultaneously learning features, depth and motion, our technique is able to generalize to challenging domains, allowing DeFeat-Net to outperform the current state-of-the-art with around 10% reduction in all error measures on more challenging sequences such as nighttime driving.", "authors": "Jaime Spencer, Richard Bowden, Simon Hadfield", "published": "2020-03-30", "updated": "2020-03-30", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Here we review some of the most relevant previous work, namely in depth estimation and feature learning. 2.1. Depth Estimation Traditionally, depth estimation relied on finding correspondences between every pixel in pairs of images.
However, if the images have been stereo rectified, the problem can be reduced to a search for the best match along a single row in the target image, known as disparity estimation. Initial methods for disparity estimation relied on hand-crafted matching techniques based on the Sum of Squared Differences (SSD), smoothness and energy minimization. Supervised. Ladick` y [33] and \u02c7 Zbontar [79] showed how learning the matching function can drastically improve the performance of these systems. Mayer et al. [46] instead proposed DispNet, a Fully Convolutional Network (FCN) [40] capable of directly predicting the disparity map between two images, which was further extended by [50]. Kendall et al. [30] introduced GC-Net, where the disparity is processed as a matching cost-volume in a 3D convolutional network. PSMNet [9] and GA-Net [81] extended these cost-volume networks by introducing Spatial Pooling Pyramid (SPP) features and Local/Semi-Global aggregation layers, respectively. Estimating depth from a single image seemed like an impossible task without these disparity and perspective cues. However, Saxena [58] showed how it is possible to approximate the geometry of the world based on superpixel segmentation. Each superpixel\u2019s 3D position and orientation is estimated using a trained linear model and an MRF. Liu et al. [38, 39] improve on this method by instead learning these models using a CNN, while Ladick` y et al. [34] incorporate semantic information as an alternative cue. Eigen et al. [14, 15] introduced the first methods for monocular depth regression using end-to-end deep learning by using a scale-invariant loss. Laina [35] and Cao [7] instead treated the task of monocular estimation as a classification problem and introduced a more robust loss function. Meanwhile, Ummenhofer et al. [66] introduced DeMoN, jointly training monocular depth and egomotion in order to perform Structure-from-Motion (SfM). In this paper we go one step further, jointly learning depth, egomotion and the feature space used to support them. Unsupervised Stereo Training. In order to circumvent the need for costly ground truth training data, an increasing number of approaches have been proposed using photometric warp errors as a substitute. For instance, DeepStereo [17] synthesizes novel views using raw pixels from arbitrary nearby views. Deep3D [74] also performs novel view synthesis, but restricts this to stereo pairs and introduces a novel image reconstruction loss. Garg [18] and Godard [23] greatly improved the performance of these methods by introducing an additional autoencoder and left-right consistency losses, respectively. UnDeepVO [37] additionally learns monocular VO between consecutive frames by aligning the predicted depth pointclouds and enforcing consistency between both stereo streams. More recently, there have been several approaches making use of GANs [1, 53]. Most notably, [62] uses GANs to perform day-night translation and provide an additional consistency to improve performance in nighttime conditions. However, the lack of any explicit feature learning makes it challenging to generalize across domains. Unsupervised Monocular Training. In order to learn unsupervised monocular depth without stereo information, it is necessary to learn a surrogate task that allows for the use of photometric warp losses. Zhou et al. [82, 83] introduced some of the first methods to make use of VO estimation to warp the previous and next frames to reconstruct the target view. 
Zhan [80] later extended this by additionally incorporating a feature based warp loss. Babu et al. [3, 44] proposed an unsupervised version of DeMoN [66]. Other published methods are based upon video processing with RNNs [69] and LSTMs [51] or additionally predicting scene motion [67] or optical flow [29, 70, 78]. The current state-of-the-art has been pushed by methods that incorporate additional constraints [68] such as temporal [45], semantic [10], edge & normal [75, 76], cross-task [84] and cycle [52, 73] consistencies. Godard et al. [22] expanded on these methods by incorporating information from the previous frame and using the minimum reprojection error in order to deal with occlusions. They also introduce an automasking process which removes stationary pixels in the target frame. However, they still compute photometric losses in the original RGB colourspace, making it challenging to learn across domains. 2.2. Feature Learning Hand-Crafted. Initial approaches to feature description typically relied on heuristics based on intensity gradients in the image. Since these were computationally expensive, it became necessary to introduce methods capable of \ufb01nding interesting points in the image, i.e. keypoints. Some of the most well-know methods include SIFT [41] and its variant RootSIFT [2], based on a Difference of Gaussians and Non-Maxima Suppression (NMS) for keypoint detection and HOG descriptors. Research then focused on improving the speed of these systems. Such is the case with SURF [5], BRIEF [6] and BRISK [36]. ORB features [56] improved the accuracy, robustness and speed of BRIEF [6] and are still widely used. Sparse Learning. Initial feature learning methods made use of decision trees [55], convex optimization [63] and evolutionary algorithms [31, 32] in order to improve detection reliability and discriminative power. Intelligent Cost functions [24] took this a step further, by using Gaussian Processes to learn appropriate cost functions for optical/scene \ufb02ow. Since the widespread use of deep learning, several methods have been proposed to learn feature detection and/or description. Balntas et al. [4] introduced a method for learning feature descriptors using in-triplet hard negative mining. LIFT [77] proposes a sequential pipeline consisting of keypoint detection, orientation estimation and feature description, each performed by a separate network. LF-Net [49] builds on this idea, jointly generating dense score and orientation maps without requiring human supervision. On the other hand, several approaches make use of networks with shared encoder parameters in order to simultaneously learn feature detection and description. Georgakis et al. [20] learn 3D interest points using a shared Fast RCNN [21] encoder. Meanwhile, DeTone introduced SuperPoint [12] where neither decoder has trainable parameters, improving the overall speed and computational cost. More recently, D2-Net [13] proposed a describe-then-detect approach where the network produces a dense feature map, from which keypoints are detected using NMS. Dense Learning. Even though SuperPoint [12] and D2Net [13] produce dense feature maps, they still focus on the detection of interest points and don\u2019t use their features in a dense manner. Weerasekera et al. [72] learn dense features in the context of SLAM by minimizing multi-view matching cost-volumes, whereas [60] use generative feature learning with scene completion as an auxiliary task to perform visual localisation. 
The Universal Correspondence Network [11] uses optical correspondences to create a pixel-wise version of the contrastive loss. Schmidt [59] instead propose semi-supervised training with correspondences obtained from KinectFusion [47] and DynamicFusion [48] models. Fathy [16] and Spencer [65] extended the pixel-wise contrastive loss to multiple scale features through a coarse-to-fine network and spatial negative mining, respectively. On the other hand, SDC-Net [61] focuses on the design of the network architecture, increasing the receptive field through stacked dilated convolutions, and applies the learnt features to optical flow estimation. In this work we attempt to unify state-of-the-art feature learning with monocular depth and odometry estimation. This is done in such a way that the pixel-wise correspondences from monocular depth estimation can support dense feature learning in the absence of ground-truth labels. Meanwhile, computing match-costs in the learned feature space greatly improves the robustness of the depth estimation in challenging cross-domain scenarios.

3. Methodology

The main objective of DeFeat-Net is to jointly learn monocular depth and dense features in order to provide more robust estimates in adverse weather conditions. By leveraging the synergy between both tasks we are able to do this in a fully self-supervised manner, requiring only a monocular stream of images. Furthermore, as a byproduct of the training losses, the system additionally learns to predict VO between consecutive frames. Figure 2 shows an overview of DeFeat-Net. Each training sample is composed of a target frame I_t and a set of support frames I_{t+k}, where k ∈ {−1, 1}. Using the predicted depth for I_t and the predicted transforms to I_{t+k} we can obtain a series of correspondences between these images, which in turn can be used in the photometric warp and pixel-wise contrastive losses. The code and pre-trained models for this technique will be available at https://github.com/jspenmar/DeFeat-Net.

Figure 2. Overview of DeFeat-Net, which combines complementary networks (DispNet, PoseNet, FeatNet and the correspondence module) to simultaneously solve for feature representation, depth and ego-motion. The introduction of feature warping improves the robustness in complex scenarios.

3.1. Networks

DispNet. Given a single input image, I_t, its corresponding depth map is obtained through

D_t = 1 / (a Φ_D(I_t) + b),    (1)

where a and b scale the final depth to the range [0.1, 100]. Φ_D represents the disparity estimation network, formed by a ResNet [25] encoder and decoder with skip connections. This decoder also produces intermediate disparity maps at each stage, resulting in four different scales.

PoseNet. Similarly, the pose prediction network Φ_P consists of a multi-image ResNet encoder, followed by a 4-layer convolutional decoder. Formally,

P_{t→t+k} = Φ_P(I_t, I_{t+k}),    (2)

where P_{t→t+k} is the predicted transform between the cameras at times t and t + k. As in [22, 68], the predicted pose is composed of a rotation in axis-angle representation and a translation vector, scaled by a factor of 0.001.

FeatNet. The final network produces a dense n-dimensional feature map of the given input image, Φ_F : ℕ^{H×W×3} → ℝ^{H×W×n}. As such, we define the corresponding L2-normalized feature map as

F = Φ_F(I) / ||Φ_F(I)||.    (3)
In this case, Φ_F is composed of a residual-block encoder-decoder with skip connections, where the final encoder stage is made up of an SPP [9] with four scales.

3.2. Correspondence Module

Using the predicted D_t and P_{t→t+k} we can obtain a set of pixel-wise correspondences between the target frame and each of the support frames. Given a 2D point in the image p and its homogeneous coordinates ṗ, we can obtain its corresponding location q in the 3D world through

q = π⁻¹(ṗ) = K_t⁻¹ ṗ D_t(p),    (4)

where π⁻¹ is the backprojection function, K_t is the camera's intrinsics and D_t(p) the depth value at the 2D pixel location estimated using (1). We can then compute the corresponding point c_{t→t+k} by projecting the resulting 3D point onto a new image with

c_{t→t+k}(p) = π(q̇) = K_t P_{t→t+k} q̇,    (5)

where P_{t→t+k} is the transform to the new coordinate frame, i.e. the next or previous camera position from (2). Therefore, the final correspondence map is defined as

C_{t→t+k} = { c_{t→t+k}(p) : ∀p }.    (6)

These correspondences can now be used in order to determine the sampling locations for the photometric warp loss and the positive matches in a pixel-wise contrastive loss to learn an appropriate feature space.

3.3. Losses

Once again, it is worth noting that DeFeat-Net is entirely self-supervised. As such, the only ground truth inputs required are the original images and the camera's intrinsics.

Pixel-wise Contrastive. In order to train Φ_F, we make use of the well-established pixel-wise contrastive loss [11, 59, 65]. Given two feature vectors from the dense feature maps, f_1 = F_1(p_1) and f_2 = F_2(p_2), the contrastive loss is defined as

l(y, f_1, f_2) = ½ d²  if y = 1;  ½ max(0, m − d)²  if y = 0;  0 otherwise,    (7)

with y as the label indicating if the pair is a correspondence, d = ||f_1 − f_2|| and m the target margin between negative pairs. In this case, the set of positive correspondences is given by C_{t→t+k}. Meanwhile, the negative examples are generated using one of the spatial negative mining techniques from [65]. From both sets, a label mask Y is created indicating if each possible pair of pixels is a positive, a negative or should be ignored. As such, the final loss is defined as

L_C = Σ_{p_1} Σ_{p_2} l(Y(p_1, p_2), F_t(p_1), F_{t+k}(p_2)).    (8)

This loss serves to drive the learning of a dense feature space which enables matching regardless of weather and seasonal appearance variations.

Photometric and Feature Warp. We also use the correspondences in a differentiable bilinear sampler [28] in order to generate the warped support frames and feature maps

I_{t+k→t} = I_{t+k}⟨C_{t→t+k}⟩,    (9)
F_{t+k→t} = F_{t+k}⟨C_{t→t+k}⟩,    (10)

where ⟨·⟩ is the sampling operator. The final warp losses are a weighted combination of SSIM [71] and L1, defined by

Ψ(I_1, I_2) = α (1 − SSIM(I_1, I_2)) / 2 + (1 − α) ||I_1 − I_2||,    (11)
L_P = Ψ(I_t, I_{t+k→t}),    (12)
L_F = Ψ(F_t, F_{t+k→t}).    (13)

The photometric loss L_P serves primarily to support the early stages of training when the feature space is still being learned.
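To illustrate how the correspondences of Eqs. (4)-(6) feed the warped losses of Eqs. (9)-(13), here is a simplified PyTorch sketch. It assumes batched pinhole intrinsics and a 4x4 relative pose, omits occlusion handling, and treats `ssim_fn` as an assumed helper returning per-pixel SSIM values; it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def warp_to_target(img_support, depth_t, K, K_inv, T_t_to_support):
    """Warp a support frame into the target view (Eqs. 4-6 and 9).
    img_support: [B,C,H,W], depth_t: [B,1,H,W], K/K_inv: [B,3,3], T: [B,4,4]."""
    B, _, H, W = depth_t.shape
    device = depth_t.device
    ys, xs = torch.meshgrid(torch.arange(H, device=device),
                            torch.arange(W, device=device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()
    pix = pix.view(1, 3, -1).expand(B, -1, -1)                       # [B,3,HW]

    cam = K_inv @ pix * depth_t.view(B, 1, -1)                       # Eq. (4): backprojection
    cam = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    proj = K @ (T_t_to_support @ cam)[:, :3]                         # Eq. (5): projection
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-7)                  # Eq. (6): correspondences

    u = uv[:, 0] / (W - 1) * 2 - 1                                   # normalise for grid_sample
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    return F.grid_sample(img_support, grid, padding_mode="border", align_corners=True)  # Eq. (9)

def warp_loss(target, warped, ssim_fn, alpha=0.85):
    """Psi from Eq. (11): alpha * DSSIM + (1 - alpha) * L1, averaged over pixels."""
    dssim = (1.0 - ssim_fn(target, warped)) / 2.0
    l1 = (target - warped).abs().mean(dim=1, keepdim=True)
    return (alpha * dssim + (1.0 - alpha) * l1).mean()
```

The feature warp loss L_F of Eq. (13) is obtained by applying the same two functions to the dense feature maps instead of the images.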
Smoothness. As an additional regularizing constraint, we incorporate a smoothness loss [27]. This enforces local smoothness in the predicted depths proportional to the strength of the edge in the original image, ∂I_t. This is defined as

L_S = (λ / N) Σ_p |∂D_t(p)| e^(−||∂I_t(p)||),    (14)

where λ is a scaling factor typically set to 0.001. This loss is designed to avoid smoothing over edges by reducing the weighting in areas of strong intensity gradients.

3.4. Masking & Filtering

Some of the more recent improvements in monocular depth estimation have arisen from explicit edge-case handling [22]. This includes occlusion filtering and the masking of stationary pixels. We apply these automatic procedures to the correspondences used to train both the depth and dense features.

Minimum Reprojection. As the camera capturing the monocular stream moves throughout the scene, various elements will become occluded and disoccluded. In terms of a photometric-error-based loss, this means that some of the correspondences generated by the system will be invalid. However, when multiple consecutive frames are being used, i.e. k ∈ {−1, 1}, different occlusions occur in each image. By making the assumption that the photometric error will be greater in the case where an occlusion is present, we can filter these out by simply propagating the correspondence with the minimum error. This is defined as

C_{t→t+k} = c_{t→t−1}  where Ψ(I_t, I_{t→t−1}) < Ψ(I_t, I_{t→t+1}),  and c_{t→t+1} otherwise.    (15)

Automasking. Due to the nature of the training method and implicit depth priors (i.e. regions further away change less), stationary frames or moving objects can cause holes of infinite depth in the predicted depth maps. An automasking procedure is used to remove these stationary pixels from contributing to the loss,

μ = [ min_k Ψ(I_t, I_{t+k}) < min_k Ψ(I_t, I_{t+k→t}) ],    (16)

where μ is the resulting mask indicating if a correspondence is valid or not and [·] is the Iverson bracket. In other words, pixels that exhibit lower photometric error to the unwarped frame than to the warped frame are masked from the cost function.
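The minimum-reprojection filtering and automasking of Eqs. (15)-(16) reduce to a couple of tensor operations once per-pixel error maps are available. The sketch below is illustrative (names and shapes are assumptions) rather than the original code.

```python
import torch

def min_reprojection_automask(err_warped: list, err_unwarped: list):
    """err_warped[k]:  per-pixel Psi(I_t, I_{t+k->t}) for each warped support frame.
    err_unwarped[k]: per-pixel Psi(I_t, I_{t+k}) for the raw support frames.
    Each entry has shape [B, 1, H, W]."""
    min_warped, _ = torch.min(torch.stack(err_warped), dim=0)      # Eq. (15): keep lowest-error correspondence
    min_unwarped, _ = torch.min(torch.stack(err_unwarped), dim=0)
    # Eq. (16): mu = 1 where the unwarped frame already matches better,
    # i.e. stationary pixels that are removed from the cost function.
    mu = min_unwarped < min_warped
    return min_warped, mu

# The masked photometric term is then: min_warped[~mu].mean()
```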
4. Results
Each subsystem in DeFeat-Net follows a U-Net structure with a ResNet18 encoder pretrained on ImageNet, followed by a 7-layer convolutional decoder similar to [23]. The code and pre-trained models will be available at https://github.com/jspenmar/DeFeat-Net. In all our experiments, the warp loss parameter is set to α = 0.85 as per [28].
On the KITTI dataset [19] we follow the Eigen-Zhou evaluation protocol of [23, 83]. This dataset split provides 39,810 training images and 4,424 validation images. These images are all from a single domain (sunny daytime driving). We also make use of the RobotCar Seasons dataset [57]. This is a curated subset of the larger RobotCar dataset [43], containing 49 sequences. The dataset was specifically chosen to cover a wide variety of seasons and weather conditions, leading to greater diversity in appearance than KITTI. Unlike the KITTI dataset, which provides sparse ground-truth depth from LiDAR, RobotCar Seasons does not include any depth ground truth. Our proposed technique is unsupervised, and can still be trained on this varied dataset, but the lack of ground truth makes quantitative evaluation on RobotCar Seasons impossible. To resolve this, we returned to the original RobotCar dataset and manually created a validation dataset comprising 12,000 images with their corresponding ground-truth LiDAR depth maps, split evenly across day and night driving scenarios.

Method | Abs-Rel | Sq-Rel | RMSE | RMSE-log | A1 | A2 | A3
LEGO [75] | 0.162 | 1.352 | 6.276 | 0.252 | - | - | -
Ranjan [54] | 0.148 | 1.149 | 5.464 | 0.226 | 0.815 | 0.935 | 0.973
EPC++ [42] | 0.141 | 1.029 | 5.350 | 0.216 | 0.816 | 0.941 | 0.976
Struct2depth (M) [8] | 0.141 | 1.026 | 5.291 | 0.215 | 0.816 | 0.945 | 0.979
Monodepth V2 [22] | 0.123 | 0.944 | 5.061 | 0.197 | 0.866 | 0.957 | 0.980
DeFeat | 0.126 | 0.925 | 5.035 | 0.200 | 0.862 | 0.954 | 0.980
Table 1. Monocular depth evaluation on the KITTI dataset.

Method | µ+ | Global µ− | Global AUC | Local µ− | Local AUC
ORB [56] | N/A | N/A | 85.83 | N/A | 84.06
ResNet [26] | 8.5117 | 25.9872 | 94.77 | 11.1335 | 68.26
ResNet-L2 | 0.341 | 1.0391 | 99.25 | 0.4371 | 71.80
VGG [64] | 4.0077 | 12.6543 | 92.94 | 5.9088 | 70.03
VGG-L2 | 0.3905 | 1.2235 | 99.57 | 0.565 | 77.06
SAND-G [65] | 0.093 | 0.746 | 99.73 | 0.266 | 87.06
SAND-L | 0.156 | 0.592 | 98.88 | 0.505 | 94.34
SAND-GL | 0.183 | 0.996 | 99.28 | 0.642 | 93.34
DeFeat | 0.105 | 1.113 | 99.10 | 0.294 | 83.64
Table 2. Learned feature evaluation on the KITTI dataset.

4.1. Single Domain Evaluation
We first evaluate our approach on the KITTI dataset, which covers only a single domain. For evaluation of depth accuracy, we use the standard KITTI evaluation metrics, namely the absolute relative depth error (Abs-Rel), the relative square error (Sq-Rel) and the root mean square error (RMSE). For these measures, a lower number is better. We also include the inlier ratio measures (A1, A2 and A3) of [23], which measure the fraction of depth predictions within thresholds of 1.25, 1.25² and 1.25³ relative to the ground truth. For these measures, a larger fraction is better.
To evaluate the quality of the learned feature representations, we follow the protocol of [65]. We compute the average distance in the feature space for the positive pairs from the ground-truth (µ+), and the negative pairs (µ−). Naturally a smaller distance between positive pairs, and a larger distance between negative pairs, is best. We also compute the Area Under the Curve (AUC), which can be interpreted as the probability that a randomly chosen negative sample will have a larger distance than the corresponding positive ground truth match. Therefore, higher numbers are better. Following [65], all three errors are split into both local (within 25 pixels) and global measurements.
The results of the depth evaluation are shown in Table 1 and the feature evaluation is shown in Table 2. We can see that in this single-domain scenario, the performance of our technique is competitive with MonodepthV2 and clearly outperforms most other state-of-the-art techniques for monocular depth estimation. The results for [22] were obtained by training a network using the code provided by the authors. Regarding the features, L2 denotes the L2-normalized versions, whereas G, L & GL represent the different negative mining variants from [65]. We can also see that despite being unsupervised, our learned feature space is competitive with contemporary supervised feature learning techniques and greatly outperforms pretrained features when evaluating locally. It is interesting, however, to note that the simple act of L2-normalizing can improve the global performance of the pretrained features.
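For reference, the metrics reported in Tables 1 and 2 above are standard; a minimal NumPy sketch over pre-masked arrays is given below. Any scale alignment of the predictions is assumed to have been applied, and the AUC pairing is simplified relative to the matched-pair definition in the text.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard KITTI-style depth metrics as in Table 1.
    pred/gt: 1-D arrays of valid, aligned depth values."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1, a2, a3 = [(thresh < 1.25 ** k).mean() for k in (1, 2, 3)]
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3

def feature_scores(pos_dists, neg_dists):
    """mu+/mu- and AUC as in Table 2: mean positive/negative feature distances,
    plus the probability that a random negative pair is further apart in the
    embedding than a random positive pair."""
    pos, neg = np.asarray(pos_dists), np.asarray(neg_dists)
    auc = 100.0 * (neg[None, :] > pos[:, None]).mean()
    return pos.mean(), neg.mean(), auc
```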
Our feature space tends to perform better in the Global evaluation metrics than the local ones. This is unsurprising, as the negative samples for the contrastive loss in (7) are obtained globally across the entire image.

4.2. Multi-Domain Evaluation
However, performance in the more challenging RobotCar Seasons dataset demonstrates the real strength of jointly learning both depth and feature representations. RobotCar Seasons covers multiple domains, where traditional photometric-based monocular depth algorithms struggle and where a lack of cross-domain ground-truth has historically made feature learning a challenge. For this evaluation, we select the best competing approach from Table 1 (MonodepthV2) and retrain both it and DeFeat-Net on the RobotCar Seasons dataset. All techniques are trained from scratch.

Test domain | Method | Abs-Rel | Sq-Rel | RMSE | RMSE-log | A1 | A2 | A3
Day | Monodepth V2 [22] | 0.271 | 3.438 | 9.268 | 0.329 | 0.600 | 0.840 | 0.932
Day | DeFeat | 0.265 | 3.129 | 8.954 | 0.323 | 0.597 | 0.843 | 0.935
Night | Monodepth V2 [22] | 0.367 | 4.512 | 9.270 | 0.412 | 0.561 | 0.790 | 0.888
Night | DeFeat | 0.335 | 4.339 | 9.111 | 0.389 | 0.603 | 0.828 | 0.914
Table 3. Monocular depth evaluation on the RobotCar dataset.

Figure 3. Top: input images from the RobotCar dataset. Middle: estimated depth maps from Monodepth V2 [22]. Bottom: estimated depth maps from DeFeat-Net.

The results are shown in Table 3 and example depth map comparisons are shown in Figure 3. We can see that in this more challenging task, the proposed approach outperforms the previous state-of-the-art technique across all error measures. While for the daytime scenario the improvements are modest, on the nighttime data there is a significant improvement, with around a 10% reduction in all error measures. We believe that the main reason behind this difference is that in well-lit conditions, the photometric loss is already a good supervision signal. In this case, incorporating the feature learning adds to the complexity of the task. However, nighttime scenarios make photometric matching less discriminative, leading to weaker supervision. Feature learning provides the much needed invariance and robustness to the loss, leading to the significant increase in performance.
It is interesting to note that the proposed approach is especially robust with regard to the number of estimated outliers. The A1, A2 and A3 error measures are fairly consistent between the day and night scenarios for the proposed technique. This indicates that even in areas of uncertain depth (due to under-exposure and over-saturation), the proposed technique fails gracefully rather than producing catastrophically incorrect estimates.
Since previous state-of-the-art representations cannot be trained unsupervised, and RobotCar Seasons does not provide any ground-truth depth, it is not possible to repeat the feature comparison from Table 2 in the multi-domain scenario. Instead, Figure 4 compares qualitative examples of the learned feature spaces. For these visualizations, we find the linear projection that best shows the correlation between the feature map and the images and map it to the RGB color cube. This dimensionality reduction removes a significant amount of discriminative power from the descriptors, but allows for some form of visualization. In all cases, the feature descriptors can clearly distinguish scene structures such as the road. It is interesting to note that a significant degree of context has been encoded in the features, and they are capable of easily distinguishing a patch in the middle of the road from one on the left or right, and from a patch of similarly colored pavement.
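The visualizations discussed above rely on a linear projection of the n-dimensional features onto the RGB cube. As a rough stand-in, a per-image PCA projection to three channels (an assumption; the paper instead fits the projection that best correlates features and images, jointly across examples) yields qualitatively similar pictures.

```python
import numpy as np

def features_to_rgb(feat):
    """Rough stand-in for the Figure 4 visualizations: project an n-dimensional
    dense feature map to 3-D with PCA and min-max normalise for display.
    feat: (H, W, n) numpy array."""
    H, W, n = feat.shape
    x = feat.reshape(-1, n)
    x = x - x.mean(0, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)   # principal directions
    proj = x @ vt[:3].T
    proj = (proj - proj.min(0)) / (proj.max(0) - proj.min(0) + 1e-8)
    return proj.reshape(H, W, 3)
```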
The feature maps trained on the single-domain KITTI dataset can sometimes display more contrast than those trained on RobotCar Seasons. Although this implies a greater degree of discrimination between different image regions, this is likely because the latter representation can cover a much broader range of appearances from other domains. Regarding the nighttime features, it is interesting that those trained on a single domain seem to exhibit strange behaviour around external light sources such as the lampposts, traffic lights and headlights. This is likely due to the bias in the training data, with overall brighter image content.

4.3. Ablation
Finally, for each dataset we explore the benefits of concurrent feature learning, by re-training with the FeatNet subsystem disabled.

Figure 4. Feature space visualizations for DeFeat-Net trained on the single-domain KITTI dataset (centre) and multi-domain RobotCar Seasons dataset (right).

Dataset | Method | Abs-Rel | Sq-Rel | RMSE | RMSE-log | A1 | A2 | A3
KITTI | DeFeat (no feat) | 0.123 | 0.948 | 5.130 | 0.197 | 0.863 | 0.956 | 0.980
KITTI | DeFeat | 0.126 | 0.925 | 5.035 | 0.200 | 0.862 | 0.954 | 0.980
RobotCar Day | DeFeat (no feat) | 0.274 | 3.885 | 8.953 | 0.335 | 0.640 | 0.853 | 0.934
RobotCar Day | DeFeat | 0.265 | 3.129 | 8.954 | 0.323 | 0.597 | 0.843 | 0.935
RobotCar Night | DeFeat (no feat) | 0.748 | 13.502 | 8.956 | 0.657 | 0.393 | 0.624 | 0.759
RobotCar Night | DeFeat | 0.335 | 4.339 | 9.111 | 0.389 | 0.603 | 0.828 | 0.914
Table 4. Performance with and without concurrent feature learning, on each dataset.

As shown in Table 4, the removal of the concurrent feature learning from our technique causes a small and inconsistent change on the KITTI and RobotCar Day data. However, on the RobotCar Night data, our full approach drastically outperforms the version which does not learn a specialist matching representation. For many error measures, the performance doubles in these challenging scenarios, and the reduction in outliers causes a three-fold reduction in the Sq-Rel error. These findings reinforce the observation that the frequently used photometric warping loss is insufficient for estimating depth in challenging real-world domains.

5. Conclusions & Future Work
This paper proposed DeFeat-Net, a unified framework for learning robust monocular depth estimation and dense feature representations. Unlike previous techniques, the system is able to function over a wide range of appearance domains, and can perform feature representation learning with no explicit ground truth. This idea of co-training unsupervised feature representations has potential applications in many areas of computer vision beyond monocular depth estimation.
The main limitation of the current approach is that there is no way to enforce feature consistency across seasons. Although depth estimation and feature matching work robustly within any given season, it is currently unclear whether feature matching between different seasons is possible. It would be interesting in the future to explore cross-domain consistency as an additional training constraint. However, this will necessitate the collection of new datasets with cross-seasonal alignments.

Acknowledgements
This work was partially funded by the EPSRC under grant agreements (EP/R512217/1, EP/S016317/1 and EP/S035761/1). We would also like to thank NVIDIA Corporation for their Titan Xp GPU grant.", "introduction": "Recently there have been many advances in computer vision tasks related to autonomous vehicles, including monocular depth estimation [22, 83, 73] and feature learning [13, 61, 65].
However, as shown in Figure 1, these ap- proaches tend to fail in the most complex scenarios, namely adverse weather and nighttime conditions. In the case of depth estimation, this is usually due to the assumption of photometric consistency, which starts to break down in dimly-lit environments. Feature learning can overcome such strong photometric assumptions, but Figure 1. Left: Challenging lighting conditions during nighttime driving. Right: A catastrophic failure during depth map estimation for a current state-of-the-art monocular depth estimation frame- work, after being trained speci\ufb01cally for this scenario. these approaches tend to require ground truth pixel-wise correspondences and obtaining this ground truth in cross- seasonal situations is non-trivial. Inconsistencies between GPS measurements and drift from Visual Odometry (VO) makes automatic pointcloud alignment highly inaccurate and manual annotation is costly and time-consuming. We make the observation that depth estimation and fea- ture representation are inherently complementary. The pro- cess of estimating the depth for a scene also allows for the computation of ground-truth feature matches between any views of the scene. Meanwhile robust feature spaces are necessary in order to create reliable depth-estimation sys- tems with invariance to lighting and appearance change. Despite this relationship, all existing approaches tackle these challenges independently. Instead, we propose DeFeat-Net, a system that is capable of jointly learning depth from a single image in addition to a dense feature representation of the world and ego-motion between con- secutive frames. What\u2019s more, this is achieved in an en- tirely self-supervised fashion, requiring no ground truth other than a monocular stream of images. We show how the proposed framework can use the exist- ing relationships between these tasks to complement each other and boost performance in complex environments. As has become commonplace [23], the predicted depth and ego-motion can be used to generate a correspondence map arXiv:2003.13446v1 [cs.CV] 30 Mar 2020 between consecutive images, allowing for the use of photo- metric error based losses. However, these correspondences can also be used as positive examples in relative metric learning losses [65]. In turn, the learnt features can pro- vide a more robust loss in cases where photometric errors fail, i.e. nighttime conditions. The remainder of the paper provides a more detailed de- scription of the proposed DeFeat-Net framework in the con- text of previous work. We extensively show the bene\ufb01ts of our joint optimization approach, evaluating on a wide vari- ety of datasets. Finally, we discuss the current state-of-the- art and opportunities for future work. The contributions of this paper can be summarized as: 1. We introduce a framework capable of jointly and si- multaneously learning monocular depth, dense feature representations and vehicle ego-motion. 2. This is achieved entirely self-supervised, eliminating the need for costly and unreliable ground truth data collection. 3. We show how the system provides robust depth and in- variant features in all weather and lighting conditions, establishing new state-of-the-art performance." }, { "url": "http://arxiv.org/abs/2003.13431v1", "title": "Same Features, Different Day: Weakly Supervised Feature Learning for Seasonal Invariance", "abstract": "\"Like night and day\" is a commonly used expression to imply that two things\nare completely different. 
Unfortunately, this tends to be the case for current\nvisual feature representations of the same scene across varying seasons or\ntimes of day. The aim of this paper is to provide a dense feature\nrepresentation that can be used to perform localization, sparse matching or\nimage retrieval, regardless of the current seasonal or temporal appearance.\n Recently, there have been several proposed methodologies for deep learning\ndense feature representations. These methods make use of ground truth\npixel-wise correspondences between pairs of images and focus on the spatial\nproperties of the features. As such, they don't address temporal or seasonal\nvariation. Furthermore, obtaining the required pixel-wise correspondence data\nto train in cross-seasonal environments is highly complex in most scenarios.\n We propose Deja-Vu, a weakly supervised approach to learning season invariant\nfeatures that does not require pixel-wise ground truth data. The proposed\nsystem only requires coarse labels indicating if two images correspond to the\nsame location or not. From these labels, the network is trained to produce\n\"similar\" dense feature maps for corresponding locations despite environmental\nchanges. Code will be made available at:\nhttps://github.com/jspenmar/DejaVu_Features", "authors": "Jaime Spencer, Richard Bowden, Simon Hadfield", "published": "2020-03-30", "updated": "2020-03-30", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Historically, hand-crafted sparse features have been widely popular [53]. Notable examples include SIFT [28] and ORB [42]. These continue to be used in current applications for Simultaneous Localization and Mapping (SLAM) [32, 33] and VO estimation [58]. Meanwhile, Li et al. [27] and Sattler et al. [44] use SIFT descriptors in a 3D pointcloud to perform global localization. On the other hand, Krajnik et al. [24, 23] introduced GRIEF, based on an evolutionary algorithm to refine BRIEF [4] comparisons. These features were subsequently applied to relocalization in [25]. LIFT [57] and LF-Net [40] instead train a sequential pipeline of networks in order to learn keypoint detection, orientation estimation and feature description. However, Valgren and Lilienthal [54] demonstrate how the performance of sparse features degrades as the seasonal variation increases. Stylianou et al. [51] instead claim that changes in illumination and keypoint detection failures are the main degrading factors. Alternative approaches aggregate sparse features to form super-pixel image representations. Such is the case in the work of Neubert et al. [37, 38] and Naseer et al. [35], who aggregate SURF [3] and HOG [9] features, respectively. Other methods take this idea further and learn how to combine sparse features into a single holistic image descriptor. Some of the most notable examples include the Bag of Words [8] and the Fisher Kernel [19, 55]. Such methods have been applied to localization and mapping in [31, 12]. As an extension to these methods, Jegou et al. propose the Vector of Locally Aggregated Descriptors (VLAD) [20], simplifying the computational complexity whilst maintaining performance. Torii et al. introduced DenseVLAD [52], combining RootSIFT [1] and view synthesis to perform localization. Meanwhile, the contextual loss [30] has been proposed as a metric for similarity between non-aligned images. However, it has never been used in the context of explicit feature learning. 
Since the rise of deep learning, methods have focused on the aggregation of intermediate pretrained features [34, 36]. Xia et al. [56] incorporated PCANet [5] into a SLAM loop closure system. Meanwhile, VLAD was adapted into deep learning frameworks such as NetVLAD [2] and directly applied to place localization. Other approaches to holistic image feature learning include [6, 17]. These methods make use of relational labels indicating similarity or dissimilarity to train their networks. As such, they rely on losses such as contrastive [15] and triplet [47] loss. These deep learning methods focus on producing a single descriptor representing the whole image. However, Sattler et al. [45] conclude that in order to solve complex localization problems it is necessary to learn dense feature descriptors. Dusmanu et al. [10] opt for a \u201cdescribe-thendetect\u201d approach, where non-maximal-suppression is used to detect keypoints of interest in a dense feature map. Meanwhile, Schuster et al. [48] introduce SDC-Net, focused on the design of an architecture based on the use of stacked dilated convolutions. Schmidt et al. [46] introduce a pixelwise version of the contrastive loss used to train a network to produce dense matches between DynamicFusion [39] and KinectFusion [18] models. Fathy et al. [11] employ a Output + Deja-Vu Features ENCODER SPP DECODER Contextual Triplet Loss > Input P A N Figure 2: Proposed methodology overview. The network is trained with consecutive Anchor Positive Negative triplets corresponding to images of the same or different locations, respectively. The similarity metric doesn\u2019t require spatial alignment between images or perfect pixel-wise correspondences. combination of losses in order to train coarse-to-\ufb01ne dense feature descriptors. Spencer et al. [50] extended these methods to introduce a more generic concept of scale through spatial negative mining. The main drawback of these methods is that they do not tackle seasonal invariance. In this paper we propose a new framework to learn dense seasonal invariant representations. This is done in a largely unsupervised manner, greatly expanding the use cases of this feature learning framework. Furthermore, we extend contextual loss to create a relational loss based on a triplet con\ufb01guration. 3. Deja-Vu Features The aim of this work is to provide a dense feature descriptor representation for a given image. This representation must be capable of describing features uniquely such that short-term feature matching is possible, but with suf\ufb01cient invariance to temporal appearance variation such that features can also be matched between day & night or winter & summer. An overview of the proposed methodology can be found in Figure 2. At the core of the system lies a Fully Convolutional Network (FCN) formed from residual blocks and skip connections. By utilizing only convolutions, the network is not restricted to a speci\ufb01c input size and allows for the estimation of a feature at every pixel in the image. The \ufb01nal stage of the encoder makes use of a Spatial Pooling Pyramid (SPP) block, with average pooling branches of size 32, 16, 8 and 4, respectively. This allows the network to incorporate information from various scales and provide a more detailed representation of the input. 
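A minimal sketch of such an SPP encoder stage is given below. The per-branch channel width, the 1x1 convolutions and the interpretation of the pooling sizes as window sizes are assumptions, and the surrounding residual encoder-decoder is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    """Sketch of a Spatial Pooling Pyramid stage: parallel average-pooling
    branches at several scales, a 1x1 convolution per branch, and bilinear
    upsampling back to the input resolution before concatenation."""
    def __init__(self, in_ch, branch_ch=32, scales=(32, 16, 8, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1, bias=False) for _ in scales)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for s, conv in zip(self.scales, self.branches):
            pooled = F.avg_pool2d(x, kernel_size=s, stride=s)
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)   # input channels + one chunk per scale
```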
Formally, we de\ufb01ne Deja-Vu to produce a dense ndimensional representation at every pixel in the input image, F \u2208RH\u00d7W \u00d7n, obtained by F = \u03a6(I|w), (1) where I \u2208NH\u00d7W \u00d73 is the corresponding input image and \u03a6 a network parametrized by a set of weights w. 3.1. Contextual Similarity In order to determine the similarity between two images, I1 & I2, we \ufb01rst obtain their features, F 1 & F 2, using (1). We then take inspiration from [30] to quantify how uniquely each feature in F 1 matches to a single location in F 2. This allows us to compare feature maps of the same location without requiring pixel-wise matches or perfect photometric alignment. In the context of localization and revisitation, this means that two images of the same location should be regarded as similar, whereas any other pair should be dissimilar. To formalize this idea, two images are considered similar if each feature descriptor in I1 has a matching feature in I2 that is signi\ufb01cantly closer in the embedded feature space than any other features in that image. Given a single feature at a 2D point p1, its set of distances with respect to the other feature map is de\ufb01ned as D(p1) = { ||F 1(p1) \u2212F 2(p2)|| : \u2200p2 }. (2) We then normalize this set of distances according to e D(p1) = D(p1) min(D(p1)) + \u03f5, (3) where \u03f5 = 1e \u22125. Intuitively, this is similar to performing a traditional ratio test on the feature distances. In general, the best match will have e D = 1. The rest of the points are then described as the ratio with respect to the best match in the range e D = [1, \u221e). The set of normalized similarities between p1 and all of F 2 is then given using a softmax function S(p1) = exp 1 \u2212e D(p1) h ! , (4) e S(p1) = S(p1) P p2 S(p2), (5) where h represents the band-width parameter controlling the \u201chardness\u201d of the similarity margin. In this case, the Figure 3: Contextual triplet loss framework. The triplets formed by each training sample can be divided into seasonal or cross-seasonal triplets, each contributing to the short-term and long-term matching performance. In the case of positive pairs, each feature in the image should only match to a single feature in the other image. best match results in a value of S = 1 and S will tend to 0 for large values of e D. e S is therefore maximised by a single low and many high values of e D, i.e. cases where there is a unique match. Following these de\ufb01nitions, we can now represent the global similarity between the original pair of images as CX(F 1, F 2) = 1 N X p1 max e S(p1), (6) where N is the total number of features p1. Since this is an average of the normalised pixel-wise similarities, the resulting metric is constrained to the range [0, 1], indicating completely different or identical feature maps, respectively. As such, this encodes both the distances and uniqueness of the feature space without enforcing spatial constraints. This similarity metric can now be used at inference time to determine if two feature maps are likely to represent the same location. 3.2. Contextual Triplet Loss Since we make use of relational labels between images, i.e. if the images correspond to approximately the same location or not, the similarity metric is introduced into a triplet loss framework. In a traditional triplet loss, the aim is to minimize positive feature embedding distances, AP, and separate them from negative pairs AN by at least a set margin m. 
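A minimal sketch of Eqs. (2)-(6) for a pair of flattened dense feature maps is given below; the tensor layout and the value of the band-width h are assumptions rather than the authors' settings.

```python
import torch

def contextual_similarity(f1, f2, h=0.5, eps=1e-5):
    """Sketch of Eqs. (2)-(6): how uniquely each feature in F1 matches into F2.
    f1: (N1, n), f2: (N2, n) flattened dense feature maps."""
    d = torch.cdist(f1, f2)                                    # Eq. (2): pairwise distances
    d_tilde = d / (d.min(dim=1, keepdim=True).values + eps)    # Eq. (3): ratio to best match
    s = torch.exp((1.0 - d_tilde) / h)                         # Eq. (4)
    s_tilde = s / s.sum(dim=1, keepdim=True)                   # Eq. (5): normalised similarity
    return s_tilde.max(dim=1).values.mean()                    # Eq. (6): CX(F1, F2) in [0, 1]
```

At inference, this scalar can be thresholded to decide whether two feature maps depict the same location; it is also the quantity compared between anchor-positive and anchor-negative pairs by the triplet loss that follows.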
However, in the context of relocalization, positive pairs should be those with a high similarity. We therefore introduce a modi\ufb01ed triplet loss inspired by (6) to take this into account: T = {IA, IP , IN}, (7) l(T) = max(CX(F A, F N)\u2212CX(F A, F P )+m, 0). (8) Given an Anchor image, the Positive sample is obtained from the same location in a different weather/season. On the other hand, the Negative corresponds to an image from a different location and any season. Each training sample is composed of two consecutive frames of triplets. This framework allows us to introduce additional triplets, which help to provide additional consistency within each season and aid short-term matching. This results in a total of \ufb01ve triplets per training sample, illustrated in Figure 3. In order to incorporate the information from all the triplets, the \ufb01nal loss is de\ufb01ned as L = 1 NX X TX l(TX) + \u03b1 NS X TS l(TS), (9) where TX and TS are the sets of seasonal and cross-seasonal triplets, NX and NS are the respective number of triplets in each category and \u03b1 is a balancing weight in the range [0, 1]. Once again, the image-level labels of A, P & N are used to drive pixel-wise feature training. 4. Results Dataset. To train the proposed features we make use of the RobotCar Seasons dataset [45]. This is a subset of the original RobotCar dataset [29] focused on cross-seasonal revisitations. It provides a set of consecutive frames at 49 unique locations, each at different times of the year including sun, rain, dawn, overcast, dusk and night. Additionally, Figure 4: Learnt feature visualizations from PCA dimensionality reduction. Despite drastic appearance changes, the network is capable of correctly identifying the similarities between the anchor and positive pair, whilst discriminating the negative pair. a reference pointcloud and poses are provided. However, it it still not possible to obtain accurate cross-seasonal pixellevel matches due to pose inconsistency. Fortunately, our system does not require this type of correspondence supervision. The dataset is split into a training and validation set of 40 and 9 locations, respectively. The training triplets are generated on the \ufb02y. From an Anchor image at a given season and location, a random target season is selected for the Positive sample. The closest frame within that season is found by calculating the distance between the respective GPS readings. Finally, the Negative sample is obtained by randomly sampling from a different RobotCar Seasons location, without any restriction on the season. Training. Using this data, Deja-Vu is trained for 160 epochs with a base learning rate of 0.001 and an SGD optimizer. The contextual triplet loss margin was typically \ufb01xed to m = 0.5 since the similarity between images is constrained to the range [0, 1]. In order to provide a more compact representation, the dimensionality of the features was restricted to n = 10, with a consistency loss weight \u03b1 = [0, 1]. Feature visualization. In order to visualize the ndimensional features produced by the network, we apply PCA and map the features to the RGB cube. Three pairs of examples are shown in shown in Figure 4. This triplet helps to illustrate some of the most challenging aspects of the task at hand. This includes the drastic appearance changes between different times of day, night-time motion blur from the increased exposure and sunburst/re\ufb02ections. 
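Returning to the objective, the contextual triplet terms of Eqs. (7)-(9) above reduce to a margin on similarities rather than distances. In the sketch below, the batching of triplets and the choice of applying α to the within-season (consistency) triplets follow the description in the text and are otherwise assumptions.

```python
import torch

def contextual_triplet_loss(cx_ap, cx_an, margin=0.5):
    """Sketch of Eq. (8): the anchor-positive similarity should exceed the
    anchor-negative similarity by at least the margin m."""
    return torch.clamp(cx_an - cx_ap + margin, min=0.0)

def total_loss(cross_season_triplets, seasonal_triplets, alpha=0.2):
    """Sketch of Eq. (9): cross-seasonal triplet terms plus within-season
    consistency terms weighted by alpha. Each list holds
    (CX(FA, FP), CX(FA, FN)) pairs from contextual_similarity()."""
    lx = torch.stack([contextual_triplet_loss(ap, an)
                      for ap, an in cross_season_triplets]).mean()
    ls = torch.stack([contextual_triplet_loss(ap, an)
                      for ap, an in seasonal_triplets]).mean()
    return lx + alpha * ls
```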
Despite these challenges, the feature maps for the anchor and positive appear globally similar and distinct to the negative pair.

Cross-seasonal AUC. The baselines and proposed DVF are evaluated based on the Area Under the Curve (AUC) of the ROC curve when classifying a pair of images as corresponding to the same location or not. In this context, we consider all images within each RobotCar Seasons location as "true positives", regardless of the season and their exact alignment. These results are shown in Figure 5 as a series of performance matrices indicating the classification performance between all possible season combinations. The diagonal corresponds to classification within each season, whereas all other blocks represent cross-seasonal classification. This is summarized in Table 1, where we show that the proposed features outperform all baselines. These baselines were obtained by using the code and features provided by the corresponding authors/libraries. Additionally, we show how the used similarity metric can improve performance even in traditional methods using ORB, SIFT and RootSIFT.

Features | Seasonal AUC | Cross-season AUC
SIFT [28] | 80.78 | 46.79
RootSIFT [1] | 97.15 | 59.75
ORB [43] | 96.60 | 66.99
SIFT + CX | 94.42 | 64.58
RootSIFT + CX | 95.55 | 68.36
ORB + CX | 96.26 | 70.54
VGG [49] + CX | 99.05 | 73.03
NC-Net [41] + CX | 97.58 | 74.03
D2-Net [10] + CX | 98.70 | 74.96
SAND [50] + CX | 99.74 | 74.86
NetVLAD [2] + CX | 99.41 | 77.57
DVF α = 0 | 99.30 | 93.82
DVF α = 0.2 | 99.82 | 96.56
DVF α = 0.4 | 99.59 | 91.37
DVF α = 0.6 | 99.76 | 93.46
DVF α = 0.8 | 99.52 | 94.12
DVF α = 1 | 99.47 | 92.94
Table 1: Within-season and cross-season AUCs from Figure 5. Incorporating the contextual similarity improves performance on hand-crafted baselines. The proposed approach further increases accuracy by a large margin.

[Figure 5: Performance matrices for baselines and the proposed DVF, one 9x9 matrix per method: (a) ORB, (b) SAND + CX, (c) NetVLAD + CX, (d) DVF (Proposed). Rows (source season) and columns (target season) span Dawn, Dusk, Night, Night-Rain (N-R), Overcast-Summer (O-S), Overcast-Winter (O-W), Rain, Snow and Sun. The diagonals represent localization within each season, whereas all other cells perform cross-season localization. For summarized results see Table 1.]

Sparse feature matching. Despite producing primarily a dense feature map representation, Deja-Vu can also be used to perform sparse cross-seasonal matching. This is worthy of note, given that the proposed method does not make use of any spatial information when training. The system is only required to produce globally "similar" or "dissimilar" feature maps, with no context on what regions of the images match to each other. Recently, a new dataset [26] was proposed containing cross-seasonal correspondences. However, additional experiments in the supplementary material show that this dataset is still not accurate enough to provide meaningful evaluation data, especially in the case of the RobotCar Seasons dataset. As such, we provide quantitative results on [26] as supplementary material, and instead show qualitative performance compared to two recent state-of-the-art feature representations, SAND [50] and D2-Net [10], using the models provided by the respective authors.

Figure 6: Sample seasonal and cross-seasonal matches obtained by DVF (proposed), SAND and D2-Net, respectively. However, they fail when attempting to match scenes with drastic appearance changes.

In order to provide the keypoint locations at which to match, we use the well-established Shi-Tomasi corner detector [21]. In the case of D2-Net we use their own provided keypoint detection module. The detected descriptors are matched using traditional techniques, such as mutual nearest neighbour and the ratio test, and refined using RANSAC [13]. In all images we show a representative subset of the obtained inliers to avoid cluttering the visualizations. The first two columns in Figure 6 represent short-term matches between consecutive frames. Here it can be seen how all methods perform well, obtaining multiple matches. However, in the case where we try to perform matching between two different seasons at different times, i.e. the final column, performance drops significantly for SAND and D2-Net. Meanwhile, DVF is still capable of handling drastic changes in appearance.

Cross-Seasonal Relocalization. Finally, we show how Deja-Vu can be used to perform 6-DOF cross-seasonal relocalization. In practice this means that localization can be performed in previously unseen conditions without requiring additional training or fine-tuning. In order to demonstrate this, PoseNet [22] is trained on a subset of RobotCar sequences from one season and evaluated on a corresponding subset from a different season.
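For this kind of relocalization experiment, the pose regressor has to accept an n-dimensional feature map rather than an RGB image; one simple way to do this is to widen the first convolution of the backbone. The sketch below uses a torchvision ResNet as a stand-in (the original PoseNet used a GoogLeNet backbone), so the exact architecture and the 7-D translation-plus-quaternion head are assumptions.

```python
import torch.nn as nn
from torchvision import models

def make_feature_posenet(n_channels=10):
    """Sketch: adapt an ImageNet-style backbone to regress a 6-DOF pose
    (3-D translation + 4-D quaternion) directly from a dense feature map."""
    net = models.resnet34(weights=None)
    # Widen the first convolution to accept n feature channels instead of RGB.
    net.conv1 = nn.Conv2d(n_channels, 64, kernel_size=7, stride=2,
                          padding=3, bias=False)
    # Replace the classifier with a 7-D pose regression head.
    net.fc = nn.Linear(net.fc.in_features, 7)
    return net

# posenet = make_feature_posenet(n_channels=10)   # n = 10 matches the DVF setting
# pose = posenet(dvf_features)                    # dvf_features: (B, 10, H, W)
```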
The baseline is obtained by training PoseNet in a traditional manner. Meanwhile, all other feature variants are incorporated by replacing the input image to the network with its corresponding dense n-dimensional feature representation, namely D2-Net, SAND and the proposed DVF. These features correspond to those in Table 1, which are left \ufb01xed during PoseNet training. From the results in Table 2, it can be seen how the DVF variants clearly outperform the baselines, with the best one almost halving the error. Figure 7 shows some qualitative results from the localization pipeline. As expected, the proposed PoseNet variant using Deja-Vu features follows the ground truth poses more closely, despite having been trained on different weather conditions. Method P (m) R (deg/m) PoseNet [22] 10.3459 0.0170 D2-Net [10] 11.1858 0.0029 SAND [50] 7.3386 0.0045 DVF \u03b1 = 0 5.5759 0.0050 DVF \u03b1 = 1 7.2076 0.0036 Table 2: Position (meters) and Rotation (deg/meter) error when localizing in a RobotCar sequence of one season using a sequence of a different season. Figure 7: Predicted localization using PoseNet models trained under different seasonal conditions. The variant using the proposed Deja-Vu features follows the target trajectory more closely. 5. Conclusions & Future Work In this paper we have proposed Deja-Vu features, a novel approach to dense feature learning which is robust to temporal changes. We have achieved this in a largely unsupervised manner, removing the need for exact pixel-wise matches between cross-season sequences. In combination with the relational nature of the supervision, this can generate much larger amounts of training data by simply using rough alignment obtained automatically from GPS. We have shown how the use of contextual similarity can improve relocalization performance, even in well established methods using hand-crafted features. While stateof-the-art same season localization methods tend to perform with high accuracy, their cross-seasonal performance is not comparable. On the other hand, Deja-Vu has over 90% accuracy and can still perform pixel-level matching between complex seasons. We hope this is a step towards generalizing feature representation in complex tasks and environments. Interesting avenues for future work include introducing some level of spatial constraints into the proposed loss metrics. Acknowledgements This work was funded by the EPSRC under grant agreement (EP/R512217/1). We would also like to thank NVIDIA Corporation for their Titan Xp GPU grant.", "introduction": "Feature extraction and representation is a core compo- nent of computer vision. In this paper we propose a novel approach to feature learning with applications in a multitude of tasks. In particular, this paper addresses the highly chal- lenging task of learning features which are robust to tempo- ral appearance changes. This includes both short-term and long-term changes, e.g. day vs. night and summer vs. win- ter, respectively. This is important in scenarios such as au- Source Target (a) Traditional Anchor Positive Negative (b) Proposed Figure 1: Traditional methods for learning dense features require pixel-wise correspondences, making cross-seasonal training nearly impossible. We propose a novel method for dense feature training requiring only image level correspon- dences and relational labels. tonomous driving, where the vehicle must be capable of op- erating reliably regardless of the current season or weather. 
Traditional hand-crafted features, such as SIFT [28] and ORB [43], typically fail to obtain reliable matches in cross- domain environments since they haven\u2019t been designed to handle these changes. More recently, there have been sev- eral deep learning techniques proposed [46, 48, 50] to learn dense feature representations. These methods tend to use a set of pixel-wise correspondences to obtain relational labels indicating similarity or dissimilarity between different im- age regions. As such, these techniques focus on the spatial properties of the learned features. However, none of these methods address the huge vi- arXiv:2003.13431v1 [cs.CV] 30 Mar 2020 sual appearance variation that results from longer tempo- ral windows. This is likely due to the heavy biases in the commonly used training datasets [7, 14, 16], which do not incorporate seasonal variation. A limiting factor to this is acquiring ground truth correspondences for training. Even if the dataset does have data across multiple seasons [29], obtaining the pixel-wise ground truth correspondences re- quired for these techniques is non-trivial. The noise from GPS and drift from Visual Odometry (VO) make pointcloud alignment unreliable and, by the very de\ufb01nition of the prob- lem, appearance cannot be used to solve cross-seasonal cor- respondence. In order to overcome this, we instead opt for a weakly supervised approach. Rather than obtaining relational la- bels at the pixel level, we use coarse labels indicating if two images were taken at the same location. The network is then trained to produce globally \u201csimilar\u201d dense feature maps for corresponding locations. An illustration of this process can be found in Figure 1. This allows us to obtain large amounts of training data without requiring pixel-wise cross-seasonal alignment. This paper introduces one of the only approaches capable of using holistic image-level cor- respondence as ground truth to supervise dense pixel-wise feature learning. The remainder of this paper describes the details of the proposed approach. This includes the architecture of the Deja-Vu feature (DVF) network and the similarity metric used to train it. We show the properties of the learned fea- tures, demonstrating their seasonal invariance. Finally, we discuss the potential applications of these features, most no- tably in areas such as self-localization. The main contributions can be summarized as follows: 1. We propose a novel dense feature learning framework focused on invariance to seasonal and visio-temporal changes. 2. We achieve this in a weakly supervised manner, requir- ing only rough cross-seasonal image alignment rather than pixel-level correspondences and yet we solve the pixel-level feature description problem. 3. Finally, we propose a novel method for performing lo- calization based on the aforementioned similarity met- ric, which makes full use of the dense feature maps." }, { "url": "http://arxiv.org/abs/1903.10427v1", "title": "Scale-Adaptive Neural Dense Features: Learning via Hierarchical Context Aggregation", "abstract": "How do computers and intelligent agents view the world around them? Feature\nextraction and representation constitutes one the basic building blocks towards\nanswering this question. Traditionally, this has been done with carefully\nengineered hand-crafted techniques such as HOG, SIFT or ORB. However, there is\nno ``one size fits all'' approach that satisfies all requirements. 
In recent\nyears, the rising popularity of deep learning has resulted in a myriad of\nend-to-end solutions to many computer vision problems. These approaches, while\nsuccessful, tend to lack scalability and can't easily exploit information\nlearned by other systems. Instead, we propose SAND features, a dedicated deep\nlearning solution to feature extraction capable of providing hierarchical\ncontext information. This is achieved by employing sparse relative labels\nindicating relationships of similarity/dissimilarity between image locations.\nThe nature of these labels results in an almost infinite set of dissimilar\nexamples to choose from. We demonstrate how the selection of negative examples\nduring training can be used to modify the feature space and vary it's\nproperties. To demonstrate the generality of this approach, we apply the\nproposed features to a multitude of tasks, each requiring different properties.\nThis includes disparity estimation, semantic segmentation, self-localisation\nand SLAM. In all cases, we show how incorporating SAND features results in\nbetter or comparable results to the baseline, whilst requiring little to no\nadditional training. Code can be found at:\nhttps://github.com/jspenmar/SAND_features", "authors": "Jaime Spencer, Richard Bowden, Simon Hadfield", "published": "2019-03-25", "updated": "2019-03-25", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "On the other hand, most approaches to dedicated feature learning tend to focus on solving dense correspondence estimation rather than using sparse keypoints. Early work in this area did not perform explicit feature extraction and instead learns a task specific latent space. Such is the case with end-to-end VO methods [42, 22], camera pose regression [19, 4] or stereo disparity estimation [47]. Meanwhile, semantic and instance segmentation approaches such as those proposed by Long et al. [23], Noh et al. [31] or Wang et al. [41] produce a dense representation of the image containing each pixel\u2019s class. These require dense absolute labels describing specific properties of each pixel. Despite advances in the annotation tools [9], manual checking and refinement still constitutes a significant burden. Relative labels, which describe the relationships of similarity or dissimilarity between pixels, are much easier to obtain and are available in larger quantities. Chopra et al. [7], Sun et al. [40] and Kang et al. [18] apply these to face re-identification, which requires learning a discriminative feature space that can generalize over a large amount of unseen data. As such, these approaches make use of relational learning losses such as contrastive [16] or triplet loss [39]. Further work by Yu et al. [45] and Ge et al. [13] discusses the issues caused by triplet selection bias and provides methods to overcome them. As originally presented, these losses don\u2019t tackle dense image representation and instead compare holistic image descriptors. Schmidt et al. [38] propose a \u201cpixel-wise\u201d contrastive loss based on correspondences obtained from KinectFusion [34] and DynamicFusion [30]. Fathy et al. [10] incorporate an additional matching loss for intermediate layer representations. More recently, the contextual loss [26] has been proposed as a similarity measure for nonaligned feature representations. 
In this paper we generalise the concept of \u201cpixel-wise\u201d contrastive loss to generic correspondence data and demonstrate how the properties of the learned feature space can be manipulated. 3. SAND Feature Extraction The aim of this work is to provide a high-dimensional feature descriptor for every pixel within an image, capable of describing the context at multiple scales. We achieve this by employing a pixel-wise contrastive loss in a siamese network architecture. Each branch of the siamese network consists of a series of convolutional residual blocks followed by a Spatial Pooling Pyramid (SPP) module, shown in Figure 2. The convolution block and base residual blocks serve as the initial feature learning. In order to increase the receptive field, the final two residual blocks employ an atrous convolution with dilations of two and four, respectively. The SPP module is formed by four parallel branches, each with average pooling scales of 8, 16, 32 and 64, respectively. Each branch produces a 32D output with a resolution Conv Conv stride 2 Bilinear Avg. pool 32 32 64 128 128 32 320 128 32 n H x W x n H x W x 3 64 x 64 32 x 32 16 x 16 8 x 8 Figure 2: SAND architecture trained for dense feature extraction. The initial convolutions are residual blocks, followed by a 4-branch SPP module and multi-stage decoder. of (H/ 4, W/ 4). In order to produce the \ufb01nal dense feature map, the resulting block is upsampled in several stages incorporating skip connections and reducing it to the desired number of dimensions, n. Given an input image I, it\u2019s dense n-dimensional feature representation can be obtained by F(p) = \u03a6(I(p)|w), (1) where p represents a 2D point and \u03a6 represents a SAND branch, parametrized by a set of weights w. I stores RGB colour values, whereas F stores n-dimensional feature descriptors, \u03a6 : N3 \u2192Rn. 3.1. Pixel-wise Contrastive Loss To train this feature embedding network we build on the ideas presented in [38] and propose a pixel-wise contrastive loss. A siamese network with two identical SAND branches is trained using this loss to produce dense descriptor maps. Given a pair of input points, contrastive loss is de\ufb01ned as l(y, p1, p2) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 2(d)2 if y = 1 1 2{max(0, m \u2212d)}2 if y = 0 0 otherwise (2) where d is the euclidean distance of the feature embeddings ||F 1(p1) \u2212F 2(p2)||, y is the label indicating if the pair is a match and m is the margin. Intuitively, positive pairs (matching points) should be close in the latent space, while negative pairs (non-matching points) should be separated by at least the margin. The labels indicating the similarity or dissimilarity can be obtained through a multitude of sources. In the simplest case, the correspondences are given directly by disparity or optical \ufb02ow maps. If the data is instead given as homogeneous 3D world points \u02d9 q in a depth map or pointcloud, these can be projected onto pairs of images. A set of corresponding pixels can be obtained through p = \u03c0( \u02d9 q) = KP \u02d9 q, (3) (c1, c2) = (p1, p2) where \u03c01( \u02d9 q) 7\u2192\u03c02( \u02d9 q), (4) where \u03c0 is the projection function parametrized by the corresponding camera\u2019s intrinsics K and global pose P . (a) Source (b) (0, \u221e) (c) (0, 25) (d) (0, \u221e) (0, 25) Figure 3: Effect of (\u03b1, \u03b2) thresholds on the scale information observed by each individual pixel. 
Large values of \u03b1 and \u03b2 favour global features, while low \u03b2 values increase local discrimination. A label mask Y is created indicating if every possible combination of pixels is a positive example, negative example or should be ignored. Unlike a traditional siamese network, every input image has many matches, which are not spatially aligned. As an extension to (2) we obtain L(Y , F 1, F 2) = X p1 X p2 l(Y (p1, p2), p1, p2). (5) 3.2. Targeted Negative Mining The label map Y provides the list of similar and dissimilar pairs used during training. The list of similar pairs is limited by the ground truth correspondences between the input images. However, each of these points has (H \u00d7W)\u22121 potential dissimilar pairs \u02c6 c2 to choose from. This only increases if we consider all potential dissimilar pairs within a training batch. For converting 3D ground truth data we can de\ufb01ne an equivalent to (4) for negative matches, \u02c6 c2 \u223cp2 where \u03c0\u22121 1 (c1) \u21ae\u03c0\u22121 2 (p2). (6) It is immediately obvious that it is infeasible to use all available combinations due to computational cost and balancing. In the na\u00a8 \u0131ve case, one can simply select a \ufb01xed number of random negative pairs for each point with a ground truth correspondence. By selecting a larger number of negative samples, we can better utilise the variability in the available data. It is also apparent that the resulting highly unbalanced label distributions calls for loss balancing, where the losses attributed to negative samples are inversely weighted according to the total number of pairs selected. In practice, uniform random sampling serves to provide globally consistent features. However, these properties are not ideal for many applications. By instead intelligently targeting the selection of negative samples we can control the properties of the learned features. Typically, negative mining consists of selecting hard examples, i.e. examples that produce false positives in the network. Whilst this concept could still be applied within the proposed method, we instead focus on spatial mining strategies, as demonstrated in Figure 3. The proposed mining strategy can be de\ufb01ned as \u02c6 c\u20322 \u223c\u02c6 c2 where \u03b1 < ||\u02c6 c2 \u2212c2|| < \u03b2. (7) In other words, the negative samples are drawn from a region within a radius with lower and higher bounds of (\u03b1, \u03b2), respectively. As such, this region represents the area in which the features are required to be unique, i.e. the scale of the features. For example, narrow baseline stereo requires locally discriminative features. It is not important for distant regions to be distinct as long as \ufb01ne details cause measurable changes in the feature embedding. To encourage this, only samples within a designated radius, i.e. a small \u03b2 threshold, should be used as negative pairs. On the other hand, global descriptors can be obtained by ignoring nearby samples and selecting negatives exclusively from distant image regions, i.e. large \u03b1 and \u03b2 = \u221e. 3.3. Hierarchical Context Aggregation It is also possible to bene\ufb01t from the properties of multiple negative mining strategies simultaneously by \u201csplitting\u201d the output feature map and providing each section with different negative sampling strategies. For NS number of mining strategies, NC represents the number of channels per strategy, \u230an/ NS\u230b. 
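One possible implementation of the spatial mining of Eq. (7) is simple rejection sampling of locations whose distance to the positive correspondence lies in (α, β). The sampling scheme and the image size in the usage comments are assumptions, while the thresholds and the 10 negatives per positive follow the values quoted in the text.

```python
import numpy as np

def mine_spatial_negatives(pos, n_neg, alpha, beta, height, width, rng=None):
    """Sketch of Eq. (7): for a positive correspondence c2 = (x, y), draw
    negative locations whose distance to c2 lies within (alpha, beta)."""
    rng = rng or np.random.default_rng()
    x, y = pos
    beta = min(beta, np.hypot(height, width))     # cap an infinite upper bound
    negatives = []
    while len(negatives) < n_neg:
        r = rng.uniform(alpha, beta)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        nx, ny = int(x + r * np.cos(theta)), int(y + r * np.sin(theta))
        if 0 <= nx < width and 0 <= ny < height:  # reject out-of-image samples
            negatives.append((nx, ny))
    return negatives

# "local" descriptors:  mine_spatial_negatives((120, 80), 10, 0, 25, 192, 640)
# "global" descriptors: mine_spatial_negatives((120, 80), 10, 0, np.inf, 192, 640)
```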
As a modi\ufb01cation to (2), we de\ufb01ne the \ufb01nal pixel-level loss as l(y, p1, p1 2...pNS 2 )= \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1 2 NS P i=1 d2(i) if y = 1 1 2 NS P i=1 {max(0, mi-d(i))}2 if y = 0 0 otherwise (8) where pi 2 represents a negative sample from strategy i and d2(i) = (i+1)NC X z=iNC \u0000F 1(p1, z) \u2212F 2(pi 2, z) \u00012 . (9) This represents a powerful and generic tool that allows us to further adapt to many tasks. Depending on the problem at hand, we can choose corresponding features scales that best suit the property requirements. Furthermore, more complicated tasks or those requiring multiple types of feature can bene\ufb01t from the appropriate scale hierarchy. For the purpose of this paper, we will evaluate three main categories: global features, local features and the hierarchical combination of both. 3.4. Feature Training & Evaluation Training. In order to obtain the pair correspondences required to train the proposed SAND features, we make use of the popular Kitti dataset [14]. Despite evaluating on three of the available Kitti challenges (Odometry, Semantics and Stereo) and the Cambridge Landmarks Dataset, the feature network \u03a6 is pretrained exclusively on a relatively modest subsection of 700 pairs from the odometry sequence 00. Each of these pairs has 10-15 thousand positive correspondences obtained by projecting 3D data onto the images, with 10 negative samples each, generated using the presented mining approaches. This includes thresholds of (0, \u221e) for Global descriptors, (0, 25) for Local descriptors and the hierarchical combination of both (GL). Each method is trained for 3, 10 and 32 dimensional feature space variants with a target margin of 0.5. Visualization. To begin, a qualitative evaluation of the learned features can be found in Figure 4. This visualisation makes use of the 3D descriptors, as their values can simply be projected onto the RGB color cube. The exception to this is GL, which makes use of 6D descriptors reduced to 3D through PCA. It is immediately apparent how the selected mining process affects the learned feature space. When considering small image patches, G descriptors are found to be smooth and consistent, while they are discriminative regarding distant features. Contrary to this, L shows repeated features across the whole image, but sharp contrasts and edges in their local neighbourhood. This aligns with the expected response from each mining method. Finally, GL shows a combination of properties from both previous methods. Image Global Local G+L Figure 4: Learned descriptor visualizations for 3D. From top to bottom: source image, Global mining, 25 pixel Local mining and hierarchical approach. L descriptors show more de\ufb01ned edges and local changes, whereas GL provides a combination of both. D Mining \u00b5+ Global perf. Local perf. AUC \u00b5\u2212 AUC \u00b5\u2212 32 ORB NA 85.83 NA 84.06 NA 3 G 0.095 98.62 0.951 84.70 0.300 L 0.147 96.05 0.628 91.92 0.564 GL (6D) 0.181 97.86 1.161 90.67 0.709 10 G 0.095 99.43 0.730 86.99 0.286 L 0.157 98.04 0.579 93.57 0.510 GL 0.187 98.60 1.062 91.87 0.678 32 G 0.093 99.73 0.746 87.06 0.266 I 0.120 99.61 0.675 91.94 0.406 L 0.156 98.88 0.592 94.34 0.505 GL 0.183 99.28 0.996 93.34 0.642 GIL 0.214 98.88 1.217 91.97 0.784 Table 1: Feature metrics for varying dimensionality and mining method vs. ORB baseline. 
Global and Local provide the best descriptors in their respective areas, while GL and GIL maximise the negative distance and provides a balanced matching performance. Distance distributions. A series of objective measures is provided through the distribution of positive and negative distances in Table 1. This includes a similarity measure for positive examples \u00b5+ (lower is better) and a dissimilarity measure for negative examples \u00b5\u2212(higher is better). Additionally, the Area Under the Curve (AUC) measure represents the probability that a randomly chosen negative sample will have a greater distance than the corresponding positive ground truth match. These studies were carried out for both local (25 pixels radius) and global negative selection strategies. Additionally, the 32D features were tested with an intermediate (75 pixel radius) and fully combined GIL approach. From these results, it can be seen that the global approach G performs best in terms of positive correspondence representation, since it minimizes \u00b5+ and maximizes the global AUC across all descriptor sizes. On the other hand, L descriptors provide the best matching performance within the local neighbourhood, but the lowest in the global context. Meanwhile, I descriptors provide a compromise between G and L. Similarly, the combined approach provide an intermediate ground where the distance between all negative samples is maximised and the matching performance at all scales is balanced. All proposed variants signi\ufb01cantly outperform the shown ORB feature baseline. Finally, it is interesting to note that these properties are preserved across the varying number of dimensions of the learnt feature space, revealing the consistency of the proposed mining strategies. 4. Feature Matching Cost Volumes Inspired by [5], after performing the initial feature extraction on the stereo images, these are combined in a cost volume \u03c1 by concatenating the left and right features across all possible disparity levels, as de\ufb01ned by \u03c1(x, y, \u03b4, z) = ( F1(x, y, z) if z \u2264n F2(x + \u03b4, y, z) otherwise , (10) where n corresponds to the dimensionality of the feature maps. This results in \u03c1(H \u00d7 W \u00d7 D \u00d7 2n), with D representing the levels of disparity. As such, the cost volume provides a mapping from a 4-dimensional index to a single value, \u03c1 : N4 \u2192R. It is worth noting that this disparity replicated cost volume represents an application agnostic extension of traditional dense feature matching cost volumes [11]. The following layers are able to produce traditional pixel-wise feature distance maps, but can also perform multi-scale information aggregation and deal with viewpoint variance. The resulting cost volume is fed to a 3D stacked hourglass network composed of three modules. In order to reuse the information learned by previous hourglasses, skip connections are incorporated between corresponding sized layers. As a \ufb01nal modi\ufb01cation, additional skip connections from the early feature extraction layers are incorporated before the \ufb01nal upsampling and regression stages. To illustrate the generality of this system, we exploit the same cost volume and network in two very different tasks. Stereo disparity estimation represents a traditional application for this kind of approach. Meanwhile, semantic segmentation has traditionally made use of a single input image. 
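A minimal sketch of the disparity-replicated cost volume of Eq. (10) is given below: left features are paired with right features shifted by every candidate disparity. The zero padding of out-of-view columns is an implementation choice, and the shift direction follows Eq. (10) as written (PSMNet-style code shifts in the opposite direction, depending on which view is taken as the reference).

```python
import torch

def build_cost_volume(feat_left, feat_right, max_disp):
    """Sketch of Eq. (10): a (B, 2n, D, H, W) volume concatenating F1 with F2
    sampled at x + d for every disparity level d."""
    B, n, H, W = feat_left.shape
    volume = feat_left.new_zeros(B, 2 * n, max_disp, H, W)
    for d in range(max_disp):
        volume[:, :n, d] = feat_left
        if d == 0:
            volume[:, n:, d] = feat_right
        else:
            # F2(x + d, y, :) per Eq. (10); columns beyond the image stay zero.
            volume[:, n:, d, :, :-d] = feat_right[..., d:]
    return volume

# e.g. volume = build_cost_volume(F_left, F_right, max_disp=48)  # at 1/4 resolution
```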
To adapt the network to semantic segmentation, only the final layer is modified to produce an output with the desired number of segmentation classes. 5. Results In addition to the previously mentioned disparity and semantic segmentation, we demonstrate the applicability of the proposed SAND features in two more areas: self-localisation and SLAM. Each of these areas represents a different computer vision problem with a different set of desired properties. For instance, stereo disparity represents a narrow baseline matching task and as such may favour local descriptors in order to produce sharp response boundaries. Meanwhile, semantic segmentation makes use of implicit feature extraction in end-to-end learned representations. Due to the nature of the problem, feature aggregation and multiple scales should improve performance. On the other side of the spectrum, self-localisation emphasizes wide baselines and revisitation, where the global appearance of the scene helps determine the likely location. In this case, it is crucial to have globally robust features that are invariant to changes in viewpoint and appearance. Furthermore, the specific method chosen makes use of holistic image representations. Finally, SLAM has similar requirements to self-localisation, where global consistency and viewpoint invariance are crucial to loop closure and drift minimization. However, it represents a completely different style of application. In this case, revisitation is detected through sparse direct matching rather than an end-to-end learning approach. Furthermore, the task is particularly demanding of its features, requiring both wide baseline invariance (mapping) and narrow baseline matching (VO). As such, it is an ideal use case for the combined feature descriptors. 5.1. Disparity Estimation Based on the architecture described in Section 4, we compare our approach with the implementation in [5]. We compare against the original model trained exclusively on the Kitti Stereo 2015 dataset for 600 epochs. Our model fixes the pretrained features for the first 200 epochs and finetunes them at a lower learning rate for 250 epochs. The final error metrics on the original train/eval splits from the public Stereo dataset are found in Table 2 (lower is better).

Method       | Train (%) | Eval (%)
Baseline [5] | 1.49      | 2.87
10D-G        | 1.19      | 3.00
10D-L        | 1.34      | 2.82
10D-GL       | 1.16      | 2.91
32D-G        | 1.05      | 2.65
32D-L        | 1.09      | 2.85
32D-GL       | 1.06      | 2.79

Table 2: Disparity error on the Kitti Stereo train/eval split. With less training, the proposed methods achieve comparable or better performance than the baseline.

[Figure 5 (panels: (a) Ground Truth, (b) Baseline, (c) 32D-G-FT, (d) 32D-GL-FT): Semantic segmentation visualization for validation set images. The incorporation of SAND features improves the overall level of detail and consistency of the segmented regions.]

[Figure 6 (panels: (a) Baseline, (b) 10D-G, (c) 10D-L, (d) 10D-GL, (e) 32D-G, (f) 32D-L, (g) 32D-GL): Disparity visualization for two evaluation images (prediction vs. error). The proposed feature representation increases estimation robustness in complicated areas such as the vehicle windows.]

As seen, with 150 fewer epochs of training the 10D variant achieves comparable performance, while the 32D variants provide up to a 30% reduction in error. It is interesting to note that G features tend to perform better than the local and combined approaches, L and GL.
We theorize that the additional skip connections from the early SAND branch make up for any local information required, while the additional global features boost the contextual information. Furthermore, a visual comparison of the results is shown in Figure 6. The second and fourth rows provide a visual representation of the error, where red areas indicate larger errors. As seen in the bottom row, the proposed method increases robustness in areas such as the transparent car windows. 5.2. Semantic Segmentation Once again, this approach is based on the cost volume presented in Section 4, with the final layer producing a 19-class segmentation. The presented models are all trained on the Kitti pixel-level semantic segmentation dataset for 600 epochs. In order to obtain the baseline performance, the stacked hourglass network is trained directly with the input images, whereas the rest use the 32D variants with G and GL learned features. Unsurprisingly, L alone does not contain enough contextual information to converge and is therefore not shown in the following results. For the proposed approach, two SAND variants (G and GL) are trained, each in two configurations: the first fixes the features for the first 400 epochs, while the second starts from these models and finetunes the features at a lower learning rate for 200 additional epochs. As seen in the results in Table 3, the proposed methods significantly outperform the baseline. This is especially the case for Human and Object, the more complicated categories where the baseline fails almost completely. In terms of our features, global features tend to outperform their combined counterpart. Again, this shows that this particular task requires more global information, in order to determine what objects are present in the scene, than the exact location information provided by L features.

Method    | IoU Class | IoU Cat. | Flat | Nature | Object | Sky  | Construction | Human | Vehicle
Baseline  | 29.3      | 53.8     | 87.1 | 78.1   | 30.1   | 63.3 | 54.4         | 1.6   | 62.1
32D-G     | 31.1      | 55.8     | 87.3 | 78.5   | 36.0   | 59.8 | 57.5         | 6.7   | 66.8
32D-G-FT  | 35.4      | 59.9     | 88.7 | 83.0   | 46.7   | 62.7 | 63.3         | 6.7   | 68.1
32D-GL    | 29.4      | 51.7     | 85.1 | 76.6   | 33.8   | 51.8 | 54.4         | 4.3   | 56.3
32D-GL-FT | 33.1      | 56.6     | 87.4 | 91.5   | 42.6   | 56.7 | 60.4         | 3.9   | 63.7

Table 3: Intersection over Union (%) for the class and category averages, plus a per-category breakdown. The incorporation of the proposed features results in an increase in accuracy in complicated categories such as Object and Human.

5.3. Self-localisation As previously mentioned, self-localisation is performed using the well known method PoseNet [19]. While PoseNet has several disadvantages, including additional training for every new scene, it has proven highly successful and serves as an example application requiring a holistic image representation. The baseline was obtained by training a base ResNet34 architecture as described in [19] from scratch with the original dataset images. Once again, the proposed method replaces the input images with their respective SAND feature representation. Both approaches were trained for 100 epochs with a constant learning rate. Once again, only versions denoted FT include any additional finetuning of the original pretrained SAND features. As shown in Table 4, the proposed method with 32D finetuned features generally outperforms the baseline. The table reports errors for the regressed position, measured in meters from the ground truth, and for the rotation representing the orientation of the camera. As expected, increasing the dimensionality of the representation (3 vs.
32) increases the final accuracy, as does finetuning the learnt representations. Most notably, it performs well in sequences such as GreatCourt, KingsCollege or ShopFacade. We theorize that this is due to the distinctive features and shapes of the buildings, which allow for a more robust representation. However, the approach tends to perform worse in sequences containing similar or repeating surroundings, such as the Street sequence. This represents a complicated environment for the proposed features in the context of PoseNet, since the global representation cannot be reliably correlated with the exact position without additional information. 5.4. SLAM All previous areas of work explore the use of our features in a deep learning environment, where the dense feature representations are used. This set of experiments instead focuses on their use in a sparse matching domain with explicit feature extraction. The learned features serve as a direct replacement for hand-engineered features. The baseline SLAM system used is an implementation of S-PTAM [33]. This system makes use of ORB descriptors to estimate VO and create the environment maps. We perform no additional training or adaptation of our features, or of any other part of the pipeline, for this task. We simply drop our features into the architecture that was built around ORB. It is worth emphasising that we also do not aggregate our features over a local patch. Instead, we rely on the feature extraction network to have already encoded all relevant contextual information in each pixel's descriptor.

Method   | GreatCourt (P / R) | KingsCollege (P / R) | OldHospital (P / R) | ShopFacade (P / R) | StMarysChurch (P / R) | Street (P / R)
Baseline | 10.30 / 0.35       | 1.54 / 0.09          | 3.14 / 0.10         | 2.224 / 0.19       | 2.77 / 0.22           | 22.60 / 1.01
3D-G     | 12.05 / 0.33       | 2.18 / 0.09          | 4.07 / 0.09         | 2.66 / 0.29        | 4.21 / 0.26           | 36.13 / 1.53
32D-G    | 11.46 / 0.30       | 1.62 / 0.09          | 3.30 / 0.11         | 2.20 / 0.25        | 3.67 / 0.23           | 31.92 / 1.24
32D-G-FT | 8.226 / 0.26       | 1.52 / 0.08          | 3.21 / 0.9          | 2.01 / 0.22        | 3.16 / 0.22           | 29.89 / 0.99

Table 4: Position (m) and Rotation (deg/m) error for the baseline PoseNet vs. SAND feature variants. FT indicates a variant with finetuned features. The proposed method outperforms the baseline in half of the sequences in terms of position error, and in all except one in terms of rotation error.

Method   | 00 (APE / RPE) | 02 (APE / RPE) | 03 (APE / RPE) | 04 (APE / RPE) | 05 (APE / RPE)
Baseline | 5.63 / 0.21    | 8.99 / 0.28    | 6.39 / 0.05    | 0.69 / 0.04    | 2.35 / 0.12
32D-G    | 13.09 / 0.21   | 41.65 / 0.36   | 6.00 / 0.08    | 6.43 / 0.13    | 6.59 / 0.16
32D-L    | 5.99 / 0.21    | 9.83 / 0.29    | 4.40 / 0.04    | 1.13 / 0.05    | 2.37 / 0.12
32D-GL   | 4.84 / 0.20    | 9.66 / 0.29    | 3.69 / 0.04    | 1.35 / 0.05    | 1.93 / 0.11

Method   | 06 (APE / RPE) | 07 (APE / RPE) | 08 (APE / RPE) | 09 (APE / RPE) | 10 (APE / RPE)
Baseline | 3.78 / 0.09    | 1.10 / 0.19    | 4.19 / 0.13    | 5.77 / 0.43    | 2.06 / 0.28
32D-G    | 9.10 / 0.13    | 2.05 / 0.21    | 15.40 / 0.17   | 11.50 / 0.45   | 18.25 / 0.35
32D-L    | 2.54 / 0.09    | 0.88 / 0.19    | 5.26 / 0.13    | 6.25 / 0.42    | 2.03 / 0.30
32D-GL   | 2.00 / 0.08    | 0.96 / 0.19    | 6.00 / 0.13    | 5.48 / 0.42    | 1.36 / 0.29

Table 5: Absolute and relative pose error (lower is better) for all public Kitti odometry sequences except 01. APE represents the aligned trajectory absolute distance error, while RPE represents the motion estimation error. On average, 32D-GL provides the best results, with comparable performance from 32D-L.

A visual comparison between the predicted trajectories for two Kitti odometry sequences can be found in Figure 7. As seen, the proposed method follows the ground truth more closely and presents less drift. In turn, this shows that our features are generally robust to revisitation and are viewpoint invariant. Additionally, the average absolute and relative pose errors for the available Kitti sequences are shown in Table 5.
These measures represent the absolute distance between the aligned trajectory poses and the error in the predicted motion, respectively. [Figure 7: Kitti odometry trajectory predictions (x/z in meters) for varying SAND features vs. the baseline. Top row shows two full sequences, with zoomed details in the bottom row. The hierarchical approach GL provides both robust motion and drift correction.] In this application, it can be seen how the system greatly benefits from the hierarchical aggregation learning approach. This is due to SLAM requiring two different sets of features. In order to estimate the motion of the agent over a narrow baseline, the system requires locally discriminative features. On the other hand, loop closure detection and map creation require globally consistent features. This is reflected in the results, where G consistently drifts more than L (higher RPE) and GL provides a better absolute pose (lower APE). 6. Conclusions & Future Work We have presented SAND, a novel method for dense feature descriptor learning with a pixel-wise contrastive loss. By using sparsely labelled data from a fraction of the available training data, we demonstrate that it is possible to learn generic feature representations. While other methods employ hard negative mining as a way to increase robustness, we instead develop a generic contrastive loss framework allowing us to modify and manipulate the learned feature space. This results in a hierarchical aggregation of the contextual information visible to each pixel throughout training. In order to demonstrate the generality and applicability of this approach, we evaluate it on a series of different computer vision applications, each requiring different feature properties. These range from dense and sparse correlation detection to holistic image description and pixel-wise classification. In all cases SAND features are shown to outperform the original baselines. We hope this is a useful tool for most areas of computer vision research, providing easier-to-use features that require little or no training. Further work in this area could include exploring additional desirable properties for the learnt feature spaces and the application of these to novel tasks. Additionally, in order to increase the generality of these features, they could be trained with much larger datasets containing a wider variety of environments, such as indoor scenes or seasonal changes. Acknowledgements This work was funded by the EPSRC under grant agreement (EP/R512217/1). We would also like to thank NVIDIA Corporation for their Titan Xp GPU grant.", "introduction": "Feature extraction and representation is a fundamental component of most computer vision research. We propose to learn a feature representation capable of supporting a wide range of computer vision tasks. Designing such a system proves challenging, as it requires these features to be both unique and capable of generalizing over radical changes in appearance at the pixel level. [Figure 1 (panels: (a) Source, (b) Global, (c) Local, (d) Hierarchical): Visualization of SAND features trained using varying context hierarchies to target specific properties.] Areas such as
Simultaneous Localisation and Mapping (SLAM) or Visual Odometry (VO) tend to use feature extraction in an explicit manner [2, 17, 20, 46], where hand-crafted sparse features are extracted from pairs of images and matched against each other. This requires globally consistent and unique features that are recognisable from wide baselines. On the other hand, methods for optical \ufb02ow [32] or object tracking [3] might instead favour locally unique or smooth feature spaces since they tend to require iterative processes over narrow baselines. Finally, approaches typi- cally associated with deep learning assume feature extrac- tion to be implicitly included within the learning pipeline. End-to-end methods for semantic segmentation [6], dispar- ity estimation [44] or camera pose regression [29] focus on the learning of implicit \u201cfeatures\u201d speci\ufb01c to each task. Contrary to these approaches, we treat feature extraction as it\u2019s own separate deep learning problem. By employing sparsely labelled correspondences between pairs of images, we explore approaches to automatically learn dense repre- sentations which solve the correspondence problem while exhibiting a range of potential properties. In order to learn from this training data, we extend the concept of contrastive loss [16] to pixel-wise non-aligned data. This results in a \ufb01xed set of positive matches from the ground truth corre- spondences between the images, but leaves an almost in\ufb01- nite range of potential negative samples. We show how by carefully targeting speci\ufb01c negatives, the properties of the arXiv:1903.10427v1 [cs.CV] 25 Mar 2019 learned feature representations can be modi\ufb01ed to adapt to multiple domains, as shown in Figure 1. Furthermore, these features can be used in combination with each other to cover a wider range of scenarios. We refer to this framework as Scale-Adaptive Neural Dense (SAND) features. Throughout the remainder of this paper we demonstrate the generality of the learned features across several types of computer vision tasks, including stereo disparity estimation, semantic segmentation, self-localisation and SLAM. Dis- parity estimation and semantic segmentation \ufb01rst combine stereo feature representations to create a 4D cost volume covering all possible disparity levels. The resulting cost volume is processed in a 3D stacked hourglass network [5], using intermediate supervision and a \ufb01nal upsampling and regression stage. Self-localisation uses the popular PoseNet [19], replacing the raw input images with our dense 3D feature representation. Finally, the features are used in a sparse feature matching scenario by replacing ORB/BRIEF features in SLAM [33]. Our contributions can be summarized as follows: 1. We present a methodology for generic feature learning from sparse image correspondences. 2. Building on \u201cpixel-wise\u201d contrastive losses, we demonstrate how targeted negative mining can be used to alter the properties of the learned descriptors and combined into a context hierarchy. 3. We explore the uses for the proposed framework in several applications, namely stereo disparity, seman- tic segmentation, self-localisation and SLAM. This leads to better or comparable results in the correspond- ing baseline with reduced training data and little or no feature \ufb01netuning." 
} ], "Chris Russell": [ { "url": "http://arxiv.org/abs/1901.04909v1", "title": "Efficient Search for Diverse Coherent Explanations", "abstract": "This paper proposes new search algorithms for counterfactual explanations\nbased upon mixed integer programming. We are concerned with complex data in\nwhich variables may take any value from a contiguous range or an additional set\nof discrete states. We propose a novel set of constraints that we refer to as a\n\"mixed polytope\" and show how this can be used with an integer programming\nsolver to efficiently find coherent counterfactual explanations i.e. solutions\nthat are guaranteed to map back onto the underlying data structure, while\navoiding the need for brute-force enumeration. We also look at the problem of\ndiverse explanations and show how these can be generated within our framework.", "authors": "Chris Russell", "published": "2019-01-02", "updated": "2019-01-02", "primary_cat": "cs.LG", "cats": [ "cs.LG", "stat.ML" ], "main_content": "The desire for explanations of how complex computer systems make decisions dates back to some of the earliest work on expert systems (Buchanan and Shortliffe, 1984). In the context of machine learning, much prior work has focused upon providing humancomprehensible approximations (typically either linear models (Lundberg and Lee, 2017; Montavon et al., 2017; Ribeiro et al., 2016; Shrikumar et al., 2016), or decision trees (Craven and Shavlik, 1996)) of the true decision making criteria. This fact that the simplified model is only an approximation of the true decision making criteria means that these methods avoid the trade-off between accuracy and explainablity discussed in the introduction, but also raises the question of how accurate these approximations really are. These approximate models are either fitted globally (Craven and Shavlik, 1996; Martens et al., 2007; Sanchez et al., 2015) over the entire space of valid datapoints, or as a local approximation (Lundberg and Lee, 2017; Montavon et al., 2017; Ribeiro et al., 2016; Shrikumar et al., 2016) that only describes how decisions are made in the neighbourhood of a particular datapoint. Another important class of explanations comes from \u201ccase-based reasoning\u201d (Caruana et al., 1999; Kim et al., 2014) in which the method justifies the decision/score made by the algorithm by showing data points from the training set that the algorithm found similar in some sense. Finally, there are methods for contrastive or counterfactual explanations that seek a minimal change such that the response of the algorithm changes e.g. \u201cYou were denied a loan because you have an income of $30,000, if you had an income of $45,000 you would have been offered the loan.\u201d Martens and Provost (2013) was the first to propose the use of this technique in the context of removing words for website classification, while Wachter et al. (2018) proposed it as a general framework suitable for continuous and discrete data. Use of counterfactual explanations have strong support from the social sciences (Miller, 2017), and form part of the established philosophical literature on explanations (Kment, 2006; Lewis, 1973; Ruben, 2004). Others have called for the use of counterfactuals in explaining machine learning (Doshi-Velez et al., 2017). Finally, Binns et al. (2018) followed Lim and Dey (2009) in performing a user study of explanations1. 
1 Counterfactual explanations are referred to as "why-not explanations" by Lim and Dey (2009), and as "sensitivity" by Binns et al. (2018). Binns et al. (2018) found evidence that users prefer counterfactual explanations over case-based reasoning. For a more detailed review of the literature, please see Mittelstadt et al. (2019). Finally, concurrent with this work, Ustun et al. (2019) have also proposed the generation of diverse counterfactuals using mixed integer programmes for linear models. However, they do not consider the case of complex data in which individual variables may take either a value from a continuous range, or one of a set of discrete values. 2.1 Formalising Counterfactual Explanations We follow Lewis (1973) in describing a counterfactual as a "close possible world" in which a different outcome (or classifier response) occurs. In the context of classifier responses, we can formalise this as follows: given a datapoint x, the closest counterfactual x' can be found by solving the problem

\arg\min_{x'} d(x, x') \quad (1)
\text{such that: } f(x') = c \quad (2)

where d(·, ·) is a distance measure, f the classifier function and c the classifier response we desire. This is a much looser definition of counterfactual than that used in the causal literature (e.g. Pearl (2000)), and some thought needs to go into the choice of distance function to make the counterfactuals found useful. In the context of human comprehensible explanations, it is important that the change between the original datapoint and the counterfactual is simple enough that a person can understand it, and that the way the datapoint is altered to generate the counterfactual is also representative of the original dataset in some way. To meet these objectives, Wachter et al. suggested making use of the ℓ1 norm, weighted by the inverse Median Absolute Deviation, which we write as ||·||_{1,MAD}. This has two noticeable advantages: (i) the counterfactuals found are typically sparse, i.e. they differ from the original datapoint in a small number of factors, making the change easier to comprehend; (ii) in some limited sense the distance function is scale free, in that multiplying one dimension by a scalar will not alter the solution found, and it is robust to outliers. Wachter et al. (2018) proposed solving this problem as a Lagrangian (a sketch of this penalty-based search is given below):

\min_{x'} \max_{\lambda} \; ||x - x'||_{1,\text{MAD}} + \lambda \left(f(x') - c\right)^2 \quad (3)

As the term λ tends to infinity, this converges to a minimiser of ||x − x'||_{1,MAD} that satisfies f(x') = c, or at least is a local minimum of (f(x') − c)^2. Stability is a major concern when using the Lagrangian approach to generating counterfactual explanations. It is important that the counterfactuals generated do what they set out to do and satisfy the constraint f(x') ≤ 0 to within a very tight tolerance. For this to happen the value λ must be sufficiently large, and this induces stability issues (Wright and Nocedal, 1999). Moreover, the shape of the objective for large λ is reminiscent of pathological optimisation problems.
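For illustration only, the following is a minimal sketch of this penalty-based search of Eq. (3) for a differentiable classifier f, written with PyTorch autograd. It is not the approach adopted in the remainder of this paper, and the λ schedule, step count, learning rate and tolerance are assumptions made for the example.

```python
import torch

def wachter_counterfactual(f, x, target, mad,
                           lambdas=(0.1, 1.0, 10.0, 100.0),
                           steps=500, lr=1e-2, tol=1e-3):
    """Sketch of the penalised search in Eq. (3): for each fixed lambda,
    minimise the MAD-weighted l1 distance plus lambda * (f(x') - target)^2,
    then increase lambda until the classifier constraint holds within tol."""
    w = 1.0 / mad                          # inverse median absolute deviation weights
    x_cf = x.clone().requires_grad_(True)  # counterfactual, initialised at x
    opt = torch.optim.Adam([x_cf], lr=lr)
    for lam in lambdas:
        for _ in range(steps):
            opt.zero_grad()
            dist = torch.sum(w * torch.abs(x_cf - x))
            loss = dist + lam * (f(x_cf) - target) ** 2
            loss.backward()
            opt.step()
        if torch.abs(f(x_cf) - target) < tol:
            break
    return x_cf.detach()
```

The difficulty described in the text shows up here directly: the loop only terminates successfully once λ is large enough for the penalty to hold to within the tolerance, and it is precisely those large values of λ that make the inner optimisation badly conditioned.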
Noticeably, for large λ the objective forms a deep narrow valley around the decision boundary, similar to a high-dimensional analogue of the Rosenbrock or 'banana' function (Rosenbrock, 1960), while the sparsity of the solution found means that the minima occur at gradient discontinuities in the objective function. To avoid these issues we preserve the original formulation of equation (1), with explicit constraints. We show how this problem can be formulated as a linear programme when f is linear and the distance function d takes the form of a weighted ℓ1 norm. Where they occur, binary constraints (such as requiring a variable to take only the values 0 or 1) are treated as integer constraints, and our final formulation is efficiently solved using a Mixed Integer Program solver. 3 COHERENT COUNTERFACTUALS ON MIXED DATA We now outline our procedure for generating coherent counterfactual explanations for linear classifiers, including logistic and linear regression and SVMs, defined over complex datasets where the variables may take any value from a contiguous range or an additional set of discrete states. For such mixed data the notion of distance becomes problematic. For example, in the FICO dataset, one of the variables, which measures "Months Since Most Recent Delinquency", may take either a non-negative value corresponding to the number of months, or a set of special values: −7 "Condition not Met (e.g. No Inquiries, No Delinquencies)", −8 "No Usable/Valid Trades or Inquiries", or −9 "No Bureau Record or No Investigation". Beyond the computational challenges in searching over all valid values for all sets of variables, it is apparent that the change from special value −7 to −8 is fundamentally different from the shift between "7 months since most recent delinquency" and "8 months since most recent delinquency". A common trick among applied statisticians when training predictors on this kind of data is to augment it using a variant of the one-hot (or dummy variable) encoding. Here, a variable xi that takes either a contiguous value or one of k discrete states is replaced by k + 1 variables. The first of these variables takes either the contiguous value, if xi is in the contiguous range, or a fixed response Fi (typically 0) if xi is in a discrete state. The remaining k variables di,1, . . . , di,k are indicator variables that take value 1 if xi is in the appropriate discrete state and 0 otherwise. A linear classifier can be trained on these encoded datapoints instead of the original data with substantially higher performance. The challenge with using such an embedding into higher-dimensional spaces, and then computing counterfactuals in the embedding space, is that the extra degrees of freedom allow nonsense states (for example, turning all indicator variables on) which do not map back into the original data space. We show how a small set of linear constraints can avoid many of these failures and, by combining them with simple integer constraints for the indicator variables, guarantee that the counterfactual found is coherent. We will refer to the space enclosed by these linear constraints as the "mixed polytope". We refer to a particular datapoint a decision has been made about as x and its individual components as xi.
We write ci for the ith contiguous variable, which can take values in the range [Li, Ui], and use di,j for the jth component of the ith set of indicator variables, which has value 1 if xi is taking the jth discrete value. To make optimisation tractable under these constraints, we assume that the decision has been made by a linear function f(x) = w · x + b. The mixed polytope for variable i is then described by the linear constraints:

\sum_j d_{i,j} + d_{i,c} = 1 \quad (4)
F_i - l_i + u_i = c_i \quad (5)
0 \le l_i \le (F_i - L_i)\, d_{i,c} \quad (6)
0 \le u_i \le (U_i - F_i)\, d_{i,c} \quad (7)
d_{i,j} \in [0, 1] \quad \forall j \quad (8)

where di,c is an additional indicator variable that shows that variable i takes a contiguous value. It is immediately obvious that if the variables di,j are binary, i.e. take values {0, 1}, then any vector [ci, di,1, . . . , di,k] that lies in the mixed polytope is consistent with a standard mixed encoding of a coherent state. Moreover, the polytope is tight, in so much as optimising a linear objective defined directly over the variables d and c would result in a valid solution. However, we are unable to take advantage of this, as the additional constraint on the value of f(x) further constrains the polytope and potentially allows for fractional optimal solutions if di is not forced to be binary. We are now well placed to write down an integer program to generate counterfactuals. We write x̂ for the mixed encoding of datapoint x and assume that our classifier is linear in the embedding space. We seek:

\arg\min_{x'} ||\hat{x} - x'||_{1,w} \quad (9)
\text{such that: } f(x') \le 0 \quad (10)
x' \text{ lies on the mixed polytope} \quad (11)
d_{i,j} \in \{0, 1\} \quad \forall i, j \quad (12)

where ||·||_{1,w} is a weighted ℓ1 norm with weights to be discussed later. Note that we now use the constraint f(x') ≤ 0, rather than f(x') = 0, as it is possible that changing the state of one of the discrete variables will take us over the boundary rather than up to it. This can be expanded into a linear program. As f is a linear classifier, we can split it into linear sub-functions over the discrete and contiguous values (d and c respectively) and rewrite it as f(x') = a · c + \sum_i a'_i · d_i + b, allowing f(x') ≤ 0 to be replaced with a linear constraint. The objective \min_c ||\hat{x} - c||_{1,w} can be made linear using the standard transformation:

\min_c ||\hat{x} - c||_{1,w} = \min_{c, g, h} \sum_i (g_i + h_i) \quad (13)
\text{such that: } 0 \le g_i, \quad \hat{x}_i - c_i \le g_i \quad \forall i \quad (14)
0 \le h_i, \quad c_i - \hat{x}_i \le h_i \quad \forall i \quad (15)

Putting this all together gives us the following program:

\arg\min_{c, d, g, h} \; w \cdot (g + h) + \sum_i w'_i \cdot (d_i - \hat{d}_i) \quad (16)
\text{such that: } a \cdot c + \sum_i a'_i \cdot d_i + b \le 0 \quad (17)
0 \le g_i, \quad \hat{x}_i - c_i \le g_i \quad \forall i \quad (18)
0 \le h_i, \quad c_i - \hat{x}_i \le h_i \quad \forall i \quad (19)
\text{the mixed polytope conditions hold} \quad (20)
d_{i,j} \in \{0, 1\} \quad \forall i, j \quad (21)

The encoding in equation (13) used for the continuous variables is not needed for the discrete variables, as, owing to their binary nature, we can simply choose the sign of w' appropriately to penalise switching away from the state of d̂_i. These equations can be given to a standard MIP solver, such as Gurobi (Gurobi Optimization, 2018), allowing coherent counterfactuals to be automatically generated.
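A minimal sketch of how the program in Eqs. (16)-(21) can be assembled with the Gurobi Python interface is shown below. For readability it handles a single mixed variable (the index i is dropped), and the helper name and its arguments are illustrative; the released code linked in the conclusion is the authoritative implementation.

```python
import gurobipy as gp
from gurobipy import GRB

def coherent_counterfactual(x_c, x_d, a_c, a_d, b, w_c, w_d, L, U, F):
    """Illustrative single-variable version of the program in Eqs. (16)-(21).

    x_c, x_d : mixed encoding of the original datapoint (contiguous value,
               list of 0/1 indicators for the k discrete states)
    a_c, a_d, b : linear classifier weights on the contiguous value, on the
               indicators, and the intercept
    w_c, w_d : l1 penalty weights for the contiguous value and each state
    L, U, F  : contiguous range [L, U] and the neutral value F used when a
               discrete state is active
    """
    k = len(x_d)
    model = gp.Model("coherent-counterfactual")

    c = model.addVar(lb=-GRB.INFINITY, name="c")          # contiguous value
    d = model.addVars(k, vtype=GRB.BINARY, name="d")      # discrete indicators
    d_c = model.addVar(vtype=GRB.BINARY, name="d_cont")   # 'is contiguous' flag
    l = model.addVar(lb=0.0, name="l")                    # slack below F
    u = model.addVar(lb=0.0, name="u")                    # slack above F
    g = model.addVar(lb=0.0, name="g")                    # positive part of |x_c - c|
    h = model.addVar(lb=0.0, name="h")                    # negative part of |x_c - c|

    # Mixed polytope, Eqs. (4)-(7): exactly one state is active, and c is
    # pinned to F unless the contiguous indicator is switched on.
    model.addConstr(gp.quicksum(d[j] for j in range(k)) + d_c == 1)
    model.addConstr(c == F - l + u)
    model.addConstr(l <= (F - L) * d_c)
    model.addConstr(u <= (U - F) * d_c)

    # Weighted l1 distance to the original point, Eqs. (18)-(19).
    model.addConstr(x_c - c <= g)
    model.addConstr(c - x_c <= h)

    # The counterfactual must cross the decision boundary, Eq. (17).
    model.addConstr(a_c * c + gp.quicksum(a_d[j] * d[j] for j in range(k)) + b <= 0)

    # Objective, Eq. (16): contiguous l1 cost plus state-change penalties.
    model.setObjective(w_c * (g + h) +
                       gp.quicksum(w_d[j] * (d[j] - x_d[j]) for j in range(k)),
                       GRB.MINIMIZE)
    model.optimize()
    return c.X, [d[j].X for j in range(k)], d_c.X
```

Diverse explanations (Section 4) can then be generated by re-solving this model with additional constraints that forbid the state changes used by earlier solutions.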
3.1 Choices of Parameter The solution found depends strongly upon the choice of parameters w and F, which can be adjusted given better knowledge of the problem or of what the explanations found should look like. Here we present some simple heuristics that give good results in practice. Choice of w. We follow Wachter et al. in the use of the inverse median absolute deviation (MAD) for w, with some small modifications. We consider the contiguous and discrete values separately, and generate the inverse MAD for contiguous regions by discarding datapoints that take one of the given discrete labels. For the discrete labels, the measure of inverse MAD is inappropriate for any distribution over binary labels, as the median absolute deviation over any distribution of binary labels is always zero: with only two possible states, the median will coincide with the mode and therefore the median of the absolute deviation will be zero (in the case of a 50/50 split between the two binary states, the MAD is ill-defined, but one possible solution still remains zero). Instead, for binary variables we replace the MAD with the standard deviation over the data multiplied by a normalising constant k = Φ^{-1}(3/4) ≈ 1.48, to make it commensurate with the use of MAD elsewhere. We use m to refer to these choices of weights. Given m, we set w = m for all parameters penalising changes in the contiguous region of the data. For the parameters w' that govern the cost of transitions from discrete states, we adapt w' depending on the value taken by the original datapoint we are seeking counterfactuals for. We wish transitions to any new discrete state to be penalised by the scaled inverse standard deviation associated with that state, while a transition away from a current discrete state to the contiguous region should be penalised by the scaled inverse standard deviation of the current state. We achieve this as follows: given the scale parameters m', we set w'_{i,c} = 0 and, if xi is currently in discrete state i, set w'_j := m_j − m_i for all j ≠ i and finally set w'_i := −m_i. This has the required properties. Choice of Fi. Although we introduced the variable Fi in the context of training the model, it can also be adjusted on a per-explanation basis, providing the intercept value b is also altered to compensate. We choose the value Fi in such a way that, when x̂i is in the contiguous range, it does not incur an additional penalty to transition to a discrete state. This is done by setting Fi := xi. On the other hand, when x̂i takes a discrete value, we wish to ensure that transitioning to the contiguous range gives a representative and typical value without incurring an additional cost. This is done by setting Fi = median(Xi). 4 DIVERSE EXPLANATIONS In the original paper of Wachter et al. (2018), the authors note that diverse counterfactual explanations may often be useful: if someone wishes to improve their credit score, the first route to altering their data that you suggest may not be useful for them, and another explanation would be more useful. Equally, if no other explanation exists, this too is valuable information for that person. Wachter et al. suggested local optima might be one source of diversity. For linear classifiers their objective (3) is convex in x' for any choice of λ, and for problems of this particular form, only one minimum exists.
Instead, we take a different approach and induce diversity by restricting the states of variables altered in previously generated counterfactuals. This is done by obeying a simple set of rules, which we give in the following paragraph. Diversity constraints: If a particular discrete state has been selected in a counterfactual but not in the original data, we prohibit the transition to that state, but allow transitions to other discrete states of the same variable. If the counterfactual alters a discrete state to one in the contiguous range, we prohibit that transition, while if it alters an already contiguous state to a new contiguous value, we prohibit altering the contiguous state but allow transitions to one of the discrete states. Each constraint is added individually, and if the addition of a new constraint means that the mixed integer program can no longer be satisfied, the constraint is immediately removed. The process terminates when the new counterfactual explanation generated is the same as the previous explanation. Sample outputs of the entire procedure are discussed in the following section. 5 EXPERIMENTS To demonstrate the effectiveness of our approach, we generate diverse counterfactuals on a range of problems. All explanations generated will be human readable text that shows the sparse changes needed. All text will take the form: "You got score ____. One way you could have got score ____ is if: ____ took value ____ rather than ____. Another way you could have got score ____ is if: ____ had taken value ____ rather than ____." where the blanks are completed automatically. The list of explanations will be naturally ranked by their weighted ℓ1 distance from the original datapoint, as they are computed by greedily adding constraints. For completeness, we show a full list of explanations as they are generated. If the generated counterfactuals are to be offered to consumers, this list should be truncated, as many of the later elements are unwieldy. All explanations automatically generated by our approach will be shown in the typewriter font, hypothetical
One way you could have got score ' bad ' i n s t e a d i s i f : E x t e r n a l R i s k E s t i m a t e had taken value 57 r a t h e r than 69 Another way you could have got score ' bad ' i n s t e a d i s i f : NetFractionRevolvingBurden had taken value 41 r a t h e r than 0 Another way you could have got score ' bad ' i n s t e a d i s i f : NumInqLast6M had taken value 4 r a t h e r than 2 Another way you could have got score ' bad ' i n s t e a d i s i f : NumSatisfactoryTrades had taken value 27 r a t h e r than 41 Another way you could have got score ' bad ' i n s t e a d i s i f : AverageMInFile had taken value 49 r a t h e r than 6 3 ; NumInqLast6Mexcl7days had taken value 0 r a t h e r than 2 Another way you could have got score ' bad ' i n s t e a d i s i f : E x t e r n a l R i s k E s t i m a t e had taken value \u22129 , r a t h e r than 69 Another way you could have got score ' bad ' i n s t e a d i s i f : NumSatisfactoryTrades had taken value \u22129, r a t h e r than 41 Another way you could have got score ' bad ' i n s t e a d i s i f : PercentTradesNeverDelq had taken value \u22129, r a t h e r than 95 You got score ' bad ' . One way you could have got score ' good ' i s i f : E x t e r n a l R i s k E s t i m a t e took value 72 r a t h e r than \u22129 You got score ' bad ' . One way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentDelq had taken value \u22127, r a t h e r than 3 ; MSinceMostRecentInqexcl7days had taken value 14 r a t h e r than 0 Another way you could have got score ' good ' i n s t e a d i s i f : E x t e r n a l R i s k E s t i m a t e had taken value 66 r a t h e r than 6 1 ; MSinceMostRecentInqexcl7days had taken value \u22128, r a t h e r than 0 Another way you could have got score ' good ' i n s t e a d i s i f : NumSatisfactoryTrades had taken value 35 r a t h e r than 2 6 ; NumInqLast6M had taken value 0 r a t h e r than 1 ; NetFractionRevolvingBurden had taken value \u22129 r a t h e r than 57 Another way you could have got score ' good ' i n s t e a d i s i f : P e r c e n t I n s t a l l T r a d e s had taken value \u22129, r a t h e r than 5 7 ; NetFractionRevolvingBurden had taken value 0 r a t h e r than 57 Another way you could have got score ' good ' i n s t e a d i s i f : NumInqLast6Mexcl7days had taken value 6 r a t h e r than 1 ; NumRevolvingTradesWBalance had taken value \u22129, r a t h e r than 6 Another way you could have got score ' good ' i n s t e a d i s i f : AverageMInFile had taken value 238 r a t h e r than 86 Another way you could have got score ' good ' i n s t e a d i s i f : MaxDelqEver had taken value \u22129, r a t h e r than 6 ; P e r c e n t I n s t a l l T r a d e s had taken value 40 r a t h e r than 5 7 ; NumInqLast6M had taken value \u22129, r a t h e r than 1 ; NumRevolvingTradesWBalance had taken value 0 r a t h e r than 6 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentDelq had taken value 83 r a t h e r than 3 ; MaxDelqEver had taken value 5 r a t h e r than 6 ; N e t F r a c t i o n I n s t a l l B u r d e n had taken value \u22129, r a t h e r than 6 7 ; NumBank2NatlTradesWHighUtilization had taken value \u22129, r a t h e r than 2 Table 1: Two explanations for different pieces of data leading to a \u2018good\u2019 result on the FICO challenge (left, top) one of the few short explanations for \u2018bad\u2019 (left, bottom) on and a typical explanations for a datapoint scored as \u2018bad\u2019 (right). 
explanations and those generated by other methods will be given in quote blocks. We first turn our attention to the LSAT dataset. 5.1 LSAT The LSAT dataset is a simple prediction task to estimate how well a student is likely to do in their first year exams at law school based upon their race, GPA, and law school entry exams. It is regularly used in fairness community as the historic data has a strong racial bias, with classifiers trained on this data typically predicting that any black person will do worse than average, regardless of their exam scores. As such, counterfactual explanations generated on this dataset should provide evidence of racial bias, and provide immediate grounds for system administrators to block the deployment of the system, or for individuals suffering from from discrimination to challenge the decision. We train a logistic regression classifier to predict student\u2019s first year grade score and assume that a decision is being automatically made to reject students predicted to do worse than average. This mimics the setup of Wachter et al. (2018) although we do not use a neural network to predict. Wachter et al. had difficulty with the binary nature of the race variable ( value \u20181\u2019 indicates that an individual identified as black, and \u20180\u2019 for all other skin colours) and frequently predicted nonsense values such as a skin colour of \u2018-0.7\u2019. To get around this, they had to explicitly fix the race variable to take labels \u20180\u2019 and \u20181\u2019 over two runs and then pick the solution found that has the smallest weighted \u21131 distance. In contrast, we simply treat the variable as a mixed encoded variable that takes a continuous value in the region of [0, 0] (i.e. only the value 0), and with an additional discrete state of value 1. All other issues are taken care of automatically, and we automatically generate diverse counterfactuals. Wachter et al. (2018) consider five individuals Person 1 2 3 4 5 Race 0 0 1 1 0 LSAT 39.0 48.0 28.0 28.5 18.3 GPA 3.1 3.7 3.3 2.4 2.7 and reported the following explanations: Person 1: If your LSAT was 34.0, you would have an average predicted score (0). Person 2: If your LSAT was 32.4, you would have an average predicted score (0). Person 3: If your LSAT was 33.5, and you were \u2018white\u2019, you would have an average predicted score (0). Person 4: If your LSAT was 35.8, and you were \u2018white\u2019, you would have an average predicted score (0). Person 5: If your LSAT was 34.9, you would have an average predicted score (0). FAT*\u201919, January 2019, Atlanta, Georgia USA Chris Russell The explanations found using our method are as follows: You got score ' above average ' . One way you could have got score ' below average ' i s i f : l s a t took value 3 3 . 9 r a t h e r than 3 9 . 0 \u2212\u2212\u2212\u2212\u2212 Another way you could have got score ' below average ' i s i f : gpa had taken value 2 . 5 r a t h e r than 3 . 1 \u2212\u2212\u2212\u2212\u2212\u2212 Another way you could have got score ' below average ' i s i f : i s b l a c k had taken value 1 r a t h e r than 0 You got score ' above average ' . One way you could have got score ' below average ' i s i f : l s a t took value 3 2 . 3 r a t h e r than 4 8 . 0 \u2212\u2212\u2212\u2212\u2212\u2212 Another way you could have got score ' below average ' i s i f : i s b l a c k took value 1 r a t h e r than 0 . You got score ' below average ' . One way you could have got score ' above average ' i s i f : l s a t took value 3 1 . 
6 r a t h e r than 2 8 . 0 ; i s b l a c k took value 0 r a t h e r than 1 You got score ' below average ' . One way you could have got score ' above average ' i s i f : l s a t took value 3 8 . 8 r a t h e r than 2 8 . 5 ; i s b l a c k took value 0 r a t h e r than 1 You got score ' below average ' . One way you could have got score ' above average ' i s i f : l s a t took value 3 6 . 4 r a t h e r than 1 8 . 3 This is not a direct comparison with Wachter et al., as they made use of a different classifier. There are noticeable differences in the counterfactuals found starting with the small discrepancy between the first explanation of person 1. Beyond this, several benefits of our new approach are apparent. With the previous approach, the inherent racial bias of the algorithm was only detectable by computing counterfactuals for black students, as the lack of representation in the dataset (6% of the dataset identified as black) meant that counterfactuals that changed race were heavily penalised. In fact, with a classifier with a slightly weaker racial bias, it\u2019s possible that Wachter et al., might never observe the bias, as it would be always preferable to vary the LSAT score rather than to alter race. This is not the case for our approach where the diverse explanations offered makes the racial bias very apparent. Another factor also apparent is the absence of gratuitous diversity. If, as in the last example, changing the LSAT score is both sufficient to obtain a different outcome, and necessary, no additional explanations that jointly vary the LSAT score and GPA are shown. 5.2 The FICO Explainability Challenge We further demonstrate our approach set out in the previous section on the FICO Explainability Challenge. This new challenge is based upon an anonymized Home Equity Line of Credit (HELOC) Dataset released by FICO a credit scoring company. The aim is to train a classifier to predict whether a homeowner they will repay their HELOC account within 2 years. Potentially, this prediction is then used to decide whether the homeowner qualifies for a line of credit and how much credit should be extended. We do not compare against the previous baseline method of Wachter et al, as this would require on the order of 411 \u22484 million runs to compute all the counterfactuals over the valid binary states using their brute force approach. The target to predict is a binary variable FICO refer to as \u201cRisk Performance\u201d. It takes value \u201cBad\u201d indicating that a consumer was 90 days past due or worse at least once over a period of 24 months from when the credit account was opened. The value \u201cGood\u201d indicates that they made all payments without being 90 days overdue. The raw data has a total of 23 components excluding \u201cRisk Performance\u201d and after performing the mixed encoding this rises to 56. Although the only task given in the FICO challenge \u201cOne of the tasks is how well data scientists can use the explanation and their best judgements to make predictions of the selected test instances.\u201d3 does not require good explanations \u2013 it could potentially be solved by an uninterpretable algorithm that simply makes high accuracy predictions \u2013 the dataset in itself is still useful. In particular is possible to consider how helpful counterfactual explanations would be to an applicant who has been denied or offered a loan with respect to the three uses of explanation of Wachter et al., listed in the introduction. 
Namely if: (i) the explanations we offer would help a data-subject understand why a particular loan decision has been reached; (ii) to provide grounds to contest a decision if the outcome is undesired or (iii) to understand what if anything could be changed to receive a desired outcome. Although (i) is perhaps best evaluated with user studies as in Binns et al. (2018); for (ii) there are two possible ways as to how these counterfactual explanations could be used to contest a decision. If part of an explanations says: One way you could have got score \u2019good\u2019 is if: the number of months since recent delinquency was -9 rather than 15 and you know that you have not missed a payment in the last 16 months, this gives immediate grounds to contest. Another important example is shown in table 1, bottom left: One way you could have got score ' good ' i s i f : E x t e r n a l R i s k E s t i m a t e took value 72 r a t h e r than \u22129. Importantly, this explanation shows that the only thing wrong with the application is that the external risk estimate is missing (value \u2018-9\u2019 corresponds to \u2018No bureau record\u2019). This provides the data-subject with exactly the information they need to correct their score. Counterfactuals also provide additional grounds to contest; In the problem specification, FICO also require that the classification response with respect certain variables is monotonic; for example that the more recently you have missed a payment the less likely you are to receive a \u2018good\u2019 decision. If on the other hand, an explanation says One way you could have got score \u2019good\u2019 is if: the number of months since recent delinquency was 7 rather than 15 this provides direct evidence that the model violates these sensible constraints, and gives grounds to contest. Finally, regarding (iii), explanations that say that the \u201cNet Fractional Revolving Burden\u201d is too high or that \u201cMonths since Most Recent Delinquency\u201d is too low provide a direct pathway to getting a favourable decision in the future, even if that pathway is simply waiting till you become eligible in the future. To evaluate on this dataset, we train a logistic regressor on the mixed data using the dummy variable encoding described in section 3At the time of writing only one task has been released. Efficient Search for Diverse Coherent Explanations FAT*\u201919, January 2019, Atlanta, Georgia USA You got score ' bad ' . One way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentDelq had taken value \u22127 r a t h e r than 1 ; MSinceMostRecentInqexcl7days had taken value 24 r a t h e r than 0 You got score ' bad ' . 
One way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentDelq had taken value \u22127 r a t h e r than 9 ; MSinceMostRecentInqexcl7days had taken value 20 r a t h e r than 0 ; NetFractionRevolvingBurden had taken value \u22129 r a t h e r than 89 Another way you could have got score ' good ' i n s t e a d i s i f : E x t e r n a l R i s k E s t i m a t e had taken value 78 r a t h e r than 5 9 ; MSinceMostRecentInqexcl7days had taken value \u22128 r a t h e r than 0 Another way you could have got score ' good ' i n s t e a d i s i f : E x t e r n a l R i s k E s t i m a t e had taken value 67 r a t h e r than 5 4 ; MSinceMostRecentInqexcl7days had taken value \u22128 r a t h e r than 0 ; NumInqLast6M had taken value \u22129 r a t h e r than 4 Another way you could have got score ' good ' i n s t e a d i s i f : NumSatisfactoryTrades had taken value 33 r a t h e r than 3 1 ; NetFractionRevolvingBurden had taken value \u22129 r a t h e r than 6 2 ; NumRevolvingTradesWBalance had taken value \u22129 r a t h e r than 12 Another way you could have got score ' good ' i n s t e a d i s i f : NumSatisfactoryTrades had taken value 48 r a t h e r than 2 5 ; NumInqLast6M had taken value 0 r a t h e r than 4 ; NetFractionRevolvingBurden had taken value 0 r a t h e r than 89 Another way you could have got score ' good ' i n s t e a d i s i f : P e r c e n t I n s t a l l T r a d e s had taken value \u22129 r a t h e r than 4 7 ; NumInqLast6Mexcl7days had taken value 4 r a t h e r than 0 ; NetFractionRevolvingBurden had taken value 0 r a t h e r than 62 Another way you could have got score ' good ' i n s t e a d i s i f : P e r c e n t I n s t a l l T r a d e s had taken value \u22129 r a t h e r than 5 8 ; NumInqLast6Mexcl7days had taken value 13 r a t h e r than 4 ; NumRevolvingTradesWBalance had taken value \u22129 r a t h e r than 7 Another way you could have got score ' good ' i n s t e a d i s i f : AverageMInFile had taken value 298 r a t h e r than 78 Another way you could have got score ' good ' i n s t e a d i s i f : AverageMInFile had taken value 352 r a t h e r than 37 Another way you could have got score ' good ' i n s t e a d i s i f : MaxDelqEver had taken value \u22129 r a t h e r than 6 ; P e r c e n t I n s t a l l T r a d e s had taken value 23 r a t h e r than 4 7 ; NumRevolvingTradesWBalance had taken value 0 r a t h e r than 1 2 ; NumBank2NatlTradesWHighUtilization had taken value \u22129 r a t h e r than 3 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentDelq had taken value 52 r a t h e r than 9 ; MaxDelqEver had taken value \u22129 r a t h e r than 6 ; P e r c e n t I n s t a l l T r a d e s had taken value 0 r a t h e r than 5 8 ; NetFractionRevolvingBurden had taken value \u22128 r a t h e r than 8 9 ; NumRevolvingTradesWBalance had taken value 0 r a t h e r than 7 ; NumBank2NatlTradesWHighUtilization had taken value \u22129 r a t h e r than 2 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentDelq had taken value 83 r a t h e r than 1 ; N e t F r a c t i o n I n s t a l l B u r d e n had taken value \u22129 r a t h e r than 9 3 ; NumRevolvingTradesWBalance had taken value \u22128 r a t h e r than 1 2 ; NumBank2NatlTradesWHighUtilization had taken value 2 r a t h e r than 3 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentTradeOpen had taken value \u22129 r a t h e r than 7 ; MaxDelq2PublicRecLast12M had taken value 9 r a t h e r than 4 ; 
MaxDelqEver had taken value 0 r a t h e r than 6 ; NumTradesOpeninLast12M had taken value \u22129 r a t h e r than 3 ; MSinceMostRecentInqexcl7days had taken value \u22129 r a t h e r than 0 ; N e t F r a c t i o n I n s t a l l B u r d e n had taken value \u22129 r a t h e r than 7 6 ; NumRevolvingTradesWBalance had taken value \u22128 r a t h e r than 7 ; NumBank2NatlTradesWHighUtilization had taken value 0 r a t h e r than 2 ; PercentTradesWBalance had taken value \u22128 r a t h e r than 100 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceMostRecentTradeOpen had taken value \u22129 r a t h e r than 1 1 ; MaxDelq2PublicRecLast12M had taken value 8 r a t h e r than 4 ; MaxDelqEver had taken value 0 r a t h e r than 6 ; NumTradesOpeninLast12M had taken value \u22129 r a t h e r than 1 ; NetFractionRevolvingBurden had taken value \u22128 r a t h e r than 6 2 ; PercentTradesWBalance had taken value \u22128 r a t h e r than 94 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceOldestTradeOpen had taken value 803 r a t h e r than 8 8 ; MSinceMostRecentTradeOpen had taken value 0 r a t h e r than 7 ; NumTrades60Ever2DerogPubRec had taken value \u22129 r a t h e r than 0 ; NumTrades90Ever2DerogPubRec had taken value \u22129 r a t h e r than 0 ; PercentTradesNeverDelq had taken value 100 r a t h e r than 9 2 ; MSinceMostRecentDelq had taken value \u22129 r a t h e r than 9 ; NumTotalTrades had taken value \u22129 r a t h e r than 2 6 ; NumTradesOpeninLast12M had taken value 0 r a t h e r than 3 ; N e t F r a c t i o n I n s t a l l B u r d e n had taken value 0 r a t h e r than 7 6 ; NumRevolvingTradesWBalance had taken value \u22128 r a t h e r than 7 ; NumInstallTradesWBalance had taken value 8 r a t h e r than 7 ; NumBank2NatlTradesWHighUtilization had taken value 0 r a t h e r than 2 ; PercentTradesWBalance had taken value \u22128 r a t h e r than 100 Another way you could have got score ' good ' i n s t e a d i s i f : MSinceOldestTradeOpen had taken value \u22129 r a t h e r than 1 3 7 ; MSinceMostRecentTradeOpen had taken value 0 r a t h e r than 1 1 ; MSinceMostRecentDelq had taken value \u22128 r a t h e r than 1 ; NumTotalTrades had taken value \u22129 r a t h e r than 3 2 ; NumTradesOpeninLast12M had taken value 0 r a t h e r than 1 ; MSinceMostRecentInqexcl7days had taken value \u22129 r a t h e r than 0 ; NumInqLast6M had taken value \u22129 r a t h e r than 0 ; NumInqLast6Mexcl7days had taken value \u22129 r a t h e r than 0 ; N e t F r a c t i o n I n s t a l l B u r d e n had taken value 0 r a t h e r than 9 3 ; NumInstallTradesWBalance had taken value 15 r a t h e r than 4 ; PercentTradesWBalance had taken value \u22128 r a t h e r than 94 Table 2: Paired explanations generated on the FICO dataset. Results show two full sets of explanations for similar individuals. Later explanations are given for completeness only and are not suitable to be directly offered to a data-subject. FAT*\u201919, January 2019, Atlanta, Georgia USA Chris Russell 3. Example decisions can be seen in tables 1 and 2. The explanations are generated fully automatically using the method in the previous section, with variable names extracted from the provided data, and the meanings of special values provided by the dataset creators. None of the previously mentioned monotonic constraints were violated by the learnt algorithm. 
As can be seen in the tables, the individual explanations generated at the start of the process are short, human readable, and do not require the data subject to understand either the internal complexity of the classifier or the variable encoding. However, taken in their entirety, a complete set of explanations, such as that shown in table 1 (right) or table 2, can potentially be overwhelming, and more thought is needed as to how to interactively present and navigate them. Shown in table 2 is the complete set of explanations generated for two highly similar individuals. Several factors are worth remarking on: First, the stability of the generation of multiple counterfactuals is noteworthy. Although there are small differences both in the values proposed and occasionally in the variables selected, on the whole the generated sets of explanations are very similar to one another, and provide similar data subjects with very similar amounts of information. This consistent treatment is important for providing a sense of stability and coherence when offering repeated explanations to a data subject whose data slowly changes with time. Second, the results of the weighted \u21131 norm noticeably differ from simple sparsity constraints, with the number of factors selected fluctuating up and down as we proceed through the list of explanations that are ordered by their \u21131 distance from the original datapoint. Further work with data subjects is needed to determine which of these explanations are most comprehensible, and which are most useful for determining future action. Finally, it is worth remarking that the diverse counterfactuals become both less diverse and less comprehensible towards the end of the procedure. If a group of large changes is sufficient to \u201cpush\u201d a counterfactual almost to the decision boundary, it is possible for these variables to remain turned on as a necessary condition for any subsequent counterfactuals, while incidental variables that make little contribution to the decision are toggled on and off. Although one easy answer is to simply stop earlier, more diverse counterfactuals could also be generated by using a less greedy approach. Taken as a whole, the generated counterfactuals provide insight into the general behaviour of the classifier. One unexpected behaviour is that while a single missed payment is enough to move many people from a \u2018good\u2019 credit prediction to \u2018bad\u20194, it is not irredeemable, and a strong credit record in other areas can compensate for this. In such situations, diverse counterfactual explanations could be invaluable in providing direct pathways to obtaining a good credit rating. Although using counterfactuals in this way raises the spectre of people \u201cgaming the system\u201d and intentionally distorting their credit records to obtain a better score, perhaps the most pragmatic response to this is to build more accurate systems, so that as individuals make changes to improve their credit score, their underlying risk of default also decreases.
Footnote 4: As can be seen in the counterfactual explanations offered to \u2018good\u2019 decisions.
Figure 1: A visualisation of the weights learnt by logistic regression on the FICO dataset. Weights are ordered by their median contribution to the score of each datapoint over the entire dataset, with a positive sign indicating that they drive the classifier towards a score of \u201cgood\u201d.
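The ordering used in figure 1 can be reproduced directly from a fitted linear model. Below is an illustrative sketch under our own assumptions about data layout (it is not the paper's code): weights is the learnt coefficient vector, X is the encoded design matrix, and each feature is ranked by the median of its per-datapoint contribution; a weighted \u21131 distance of the kind used to order explanations is also shown for completeness.

import numpy as np

def median_contributions(weights, X):
    # Per-feature contribution of each datapoint to the linear score, then its median.
    return np.median(X * weights, axis=0)

def order_features(feature_names, weights, X):
    # Sort features as in figure 1: positive medians push the score towards 'good'.
    med = median_contributions(weights, X)
    return sorted(zip(feature_names, med), key=lambda pair: pair[1], reverse=True)

def weighted_l1(x, x_prime, scales):
    # Weighted l1 distance between a datapoint and a candidate counterfactual.
    return float(np.sum(scales * np.abs(np.asarray(x) - np.asarray(x_prime))))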
By way of contrast, a direct visualisation of the learnt linear weights is shown in figure 1, and the reader is invited to see what conclusions they can draw from them. One of the most counterintuitive aspects of the weights when presented like this is that a positive weight is associated with the external-risk estimate taking value -9. However, as discussed, an external-risk estimate of -9 may be the only counterfactual explanation offered for why someone gets a bad credit score. This is due to the much larger positive contribution of a typical external-risk estimate. 6 CONCLUSION This is the first work to show how coherent counterfactual explanations can be generated for the mixed datasets commonly used in the real world, and the first to propose a concrete method for generating diverse counterfactuals. As such, the methods proposed in this paper provide a significant step forward in what can be done with counterfactual explanations. Generalising the approach to non-linear functions, and indeed to non-differentiable classifiers such as k-nearest neighbour or random forests, looks to be a useful direction for future work. However, linear functions represent a part of machine learning that \u201cjust works\u201d and are consistently used by industry and data scientists in a wide range of scenarios. Reliable methods, such as those discussed, for generating both coherent and diverse explanations are needed if we want people to make use of them. Collaboration between policy and technology is a two-way street. Just as policy must respect the limitations of technology in what it calls for, it is important to build the supporting technology in response to policy proposals. Compelling ideas such as counterfactual explanations are of little use unless we develop the technology to make them work. This paper has addressed major technological issues in one of the most substantial use cases for counterfactual explanations, namely linear models for mixed financial data. As mentioned in section 5.2, the brute force enumeration of previous approaches does not scale to these datasets. Our work represents progress towards making methods for counterfactual explanation that \u201cjust work\u201d out of the box. Full source code can be found at https://bitbucket.org/ChrisRussell/diverse-coherent-explanations/.", "introduction": "A fundamental tension exists between the high performance of machine learning algorithms and the notion of transparency (Lipton, 2016). The large complex models of machine learning are created by researchers and system builders looking to maximise their performance on real-world data, and it is precisely their size and complexity that allows them to fit to the data, giving them such high performance. At the same time, such models are simply too complex to fit in their builders\u2019 minds, and even the people that created the systems need not understand why they make particular decisions. This tension becomes more apparent as we start using machine learning to make decisions that substantially alter people\u2019s lives.
As algorithms are used to make loan decisions; to recommend whether or not someone should be released on parole; or to detect cancer, it is vital that not only are the algorithms used as accurate as possible, but also that they justify themselves in some way, allowing the subject of the decisions to verify the data used to make decisions about them, and to challenge inappropriate decisions. A common remedy to avoid this trade-off is to learn the complex function, and then fit simple models around datapoints, providing human-comprehensible approximations of the underlying function. While popular in the machine learning community, there are many challenges in conveying the quality of the approximation, and the domain over which it is valid, to a lay audience. Another promising approach to explaining the incomprehensible models of machine learning lies in counterfactual explanations (Lewis, 1973; Wachter et al., 2018). This recent approach to explainability bypasses the problem of describing how a function works and instead focuses on the data. Specifically, counterfactual explanations attempt to answer the question \u201cHow would my data need to be changed to get a different outcome?\u201d. Wachter et al. make the argument that there are three important use cases for explanation: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model, and that counterfactual explanations satisfy all three. Although making the case for the use of counterfactuals and showing how they could be effectively calculated for common classifiers, Wachter et al. left many technical questions unanswered. Of particular concern is the issue of how we should generate counterfactuals efficiently and reliably for standard classifiers. This paper focuses on the technical aspects needed to generate coherent counterfactual explanations. Keeping the existing definition of counterfactual explanations intact, we look at how explanations can be reliably generated. We make two contributions: (1) Focusing primarily on the important problem of explaining financial decisions, we look at the most common case in which the classifier is linear (i.e. linear/logistic regression, SVM etc.) but the data has been transformed via a mixed encoding based upon 1-hot or dummy variable encoding. We present a novel integer program based upon a \u201cmixed polytope\u201d that is guaranteed to generate coherent counterfactuals that map back into the same form as the original data. (2) We provide a novel set of criteria for generating diverse counterfactuals and integrate them with our mixed polytope method. Previously, Wachter et al.
strongly made the case that diverse counterfactuals are important for informing a lay audience about the decisions that have been made, writing that: \u201c...individual counterfactuals may be overly restrictive. A single counterfactual may show how a decision is based on certain data that is both correct and unable to be altered by the data subject before future decisions, even if other data exist that could be amended for a favourable outcome. This problem could be resolved by offering multiple diverse counterfactual explanations to the data subject.\u201d but to date no one has proposed a concrete method for generating them. We evaluate our new approach for mixed data on standard explainability problems, and on the new FICO explainability dataset, where we show our fully automatic approach generates coherent and informative diverse explanations for a range of sample inputs." }, { "url": "http://arxiv.org/abs/1203.3512v1", "title": "Exact and Approximate Inference in Associative Hierarchical Networks using Graph Cuts", "abstract": "Markov Networks are widely used throughout computer vision and machine learning. An important subclass are the Associative Markov Networks, which are used in a wide variety of applications. For these networks a good approximate minimum cost solution can be found efficiently using graph cut based move making algorithms such as alpha-expansion. Recently a related model has been proposed, the associative hierarchical network, which provides a natural generalisation of the Associative Markov Network for higher order cliques (i.e. clique size greater than two). This method provides a good model for the object class segmentation problem in computer vision. Within this paper we briefly describe the associative hierarchical network and provide a computationally efficient method for approximate inference based on graph cuts. Our method performs well for networks containing hundreds of thousands of variables, with higher order potentials defined over cliques containing tens of thousands of variables. Due to the size of these problems, standard linear programming techniques are inapplicable. We show that our method has a bound of 4 for the solution of general associative hierarchical networks with arbitrary clique size, noting that few results on bounds exist for the labelling of Markov Networks with higher order cliques.", "authors": "Chris Russell, L'ubor Ladicky, Pushmeet Kohli, Philip H. S. Torr", "published": "2012-03-15", "updated": "2012-03-15", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.CV" ], "main_content": "Consider an amn defined over a set of latent variables x = {xi|i \u2208 V} where V = {1, 2, ..., n}. Each random variable xi can take a label from the label set L = {l1, l2, ..., lk}. Let C represent a set of subsets of V (i.e., cliques), over which the amn is defined. The map solution of an amn can be found by minimising an energy function E : Ln \u2192 R. These energy functions can typically be written as a sum of potential functions: E(x) = \u2211c\u2208C \u03c8c(xc), where xc represents the set of variables included in any clique c \u2208 C.
We refer to functions defined over cliques of size one as unary potentials and denote them by \u03c8i : L \u2192 R, where the subscript i denotes the index of the variable over which the potential is defined. Similarly, functions defined over cliques of size two are referred to as pairwise potentials and denoted \u03c8ij : L2 \u2192 R. Potentials defined over cliques of size greater than two, i.e. \u03c8c : L|c| \u2192 R, |c| > 2, will be called higher order potentials, where |c| represents the number of variables included in the clique (also called the clique order). We will call an energy function pairwise if it contains no potentials defined over cliques of size greater than 2. At points within the paper, we will want to distinguish between the original variables of the energy function, whose optimal values we are attempting to find, and the auxiliary variables which we will introduce to convert our higher order function into a pairwise one. We refer to the original variables as the base layer x(1) (as they lie at the bottom of the hierarchical network). All auxiliary variables at any level h of the hierarchy are denoted by x(h). The set of indices of variables constituting level h of the hierarchy is denoted by Vh. Similarly, the set of all pairwise interactions at level h is denoted by Eh. 3 ASSOCIATIVE HIERARCHICAL NETWORKS Existing higher-order models Taskar et al. (2004) proposed the use of higher order potentials that encourage the entirety of a clique to take some label, and discussed how they can be applied to predicting protein interactions and document classification. These potentials were introduced into computer vision along with an efficient graph cut based method of inference, as the strict P n Potts model (Kohli et al., 2007). A generalisation of this approach was proposed by Kohli et al. (2008), who observed that in the image labelling problem, most (but not all) pixels belonging to image segments computed using an unsupervised clustering/segmentation algorithm take the same object label. They proposed a higher order mrf over segment based cliques. The energy took the form: E(x) = \u2211i\u2208V \u03c8i(xi) + \u2211ij\u2208E \u03c8ij(xi, xj) + \u2211c\u2208C \u03c8c(xc), (3) where \u03c8c(xc) = min l\u2208L ( \u03b3max c, \u03b3l c + \u2211i\u2208c ki c \u2206(xi \u2260 l) ). (4) The potential function parameters ki c, \u03b3l c, and \u03b3max c are subject to the restriction that ki c \u2265 0 and \u03b3l c \u2264 \u03b3max c, \u2200l \u2208 L. \u2206 denotes the Kronecker delta, an indicator function taking a value of 1 if the statement following it is true and 0 if false. These potentials can be understood as a truncated majority voting scheme on the base layer. Where possible, they encourage the entirety of the clique to assume one consistent labelling. However, beyond a certain threshold of disagreement they implicitly recognise that no consistent labelling is likely to occur, and no further penalty is paid for increasing heterogeneity. We now demonstrate that the higher order potentials \u03c8c(xc) of the Robust P n model (4) can be represented by an equivalent pairwise function \u03c8c(x(1) c, x(2) c) defined over a two level hierarchical network with the addition of a single auxiliary variable x(2) c for every clique c \u2208 C.
This auxiliary variable take values from an extended label set Le = L \u222a{LF }, where LF , the \u2018free\u2019 label of the auxiliary variables, allows its child variables to take any label without paying a pairwise penalty. In general, every higher order cost function can be converted to a 2\u2212layer associative hierarchical network by taking an approach analogous to that of factor graphs (Kschischang et al., 2001) and adding a single multi-state auxiliary variable. However, to do this for general higher order functions requires the addition of an auxiliary variable with an exponential sized label set (Wainwright and Jordan, 2008). Fortunately, the class of higher order potentials we are concerned with can be compactly described as ahns with auxiliary variables that take a similar sized label set to the base layer, permitting fast inference. The corresponding higher order function can be written as: \u03c8c(x(1) c ) = min x(2) c \u03c8c(x(1) c , x(2) c ) = min x(2) c \" \u03c6c(x(2) c ) + X i\u2208c \u03c6ic(x(1) i , x(2) c ) # . (5) The unary potentials \u03c6c(x(2) c ) de\ufb01ned on the auxiliary variable x(2) c assign the cost \u03b3l if x(2) c = l \u2208L, and \u03b3max if x(2) c = LF . The pairwise potential \u03c6ic(xi, x(2) c ) is de\ufb01ned as: \u03c6ic(xi, x(2) c ) = ( 0 if x(2) c = LF , or x(2) c = xi. ki c if x(2) c = l \u2208L, and xi \u0338= l. (6) General Formulation The scheme described above can be extended by allowing pairwise and higher order potentials to be de\ufb01ned over x(2) and further over x(i), which corresponds to higher order potentials de\ufb01ned over the layer x(i\u22121). The higher order energy corresponding to the general hierarchical network can be written using the following recursive function: E(1)(x(1)) = X i\u2208V \u03c8(1) i (x(1) i ) + X ij\u2208E(1) \u03c8(1) ij (x(1) i , x(1) j ) + min x(2) E(2)(x(1), x(2)) (7) where E(2)(x(1), x(2)) is recursively de\ufb01ned as: E(n)(x(n\u22121), x(n)) = X c\u2208V (n) \u03c6c(x(n) c ) + X c\u2208V(n)i\u2208c \u03c6(n) ic (x(n\u22121) i , x(n) c ) + X cd\u2208E(n) \u03c8(n) cd (x(n) c , x(n) d ) + min x(n+1) E(n+1)(x(n), x(n+1)) (8) and x(n) = {x(n) c |c \u2208Vn} denotes the set of variables at the nth level of the hierarchy, E(n) represents the edges at this layer, and \u03c8(n) ic (x(n\u22121) c , x(n) c ) denotes the inter-layer potentials de\ufb01ned over variables of layer n \u22121 and n. While the hierarchical formulation of both Taskar\u2019s and Kohli\u2019s models can be understood as a mathematical convenience that allows for fast and e\ufb03cient bounded inference, our earlier work (Ladicky et al., 2009) used it for true multi-scale inference, modelling constraints de\ufb01ned over many quantisations of the image. 4 INFERENCE Inference in Pairwise Networks Although the problem of map inference is NP-hard for most associative pairwise functions de\ufb01ned over more than two labels, in real world problems many conventional algorithms provide near optimal solutions over grid connected networks (Szeliski et al., 2006). However, the dense structure of hierarchical networks results in frustrated cycles and makes traditional reparameterisation based message passing algorithms for map inference such as loopy belief propagation (Weiss and Freeman, 2001) and tree-reweighted message passing (Kolmogorov, 2006) slow to converge and unsuitable (Kolmogorov and Rother, 2006). 
Many of these frustrated cycles can be eliminated via the use of cycle inequalities (Sontag et al., 2008; Werner, 2009), but only by signi\ufb01cantly increasing the run time of the algorithm. Graph cut based move making algorithms do not suffer from this problem and have been successfully used for minimising pairwise functions de\ufb01ned over densely connected networks encountered in vision. Examples of move making algorithms include \u03b1expansion which can only be applied to metrics, \u03b1\u03b2 swap which can be applied to semi-metrics (Boykov et al., 2001), and range moves (Kumar and Torr, 2008; Veksler, 2007) for truncated convex potentials. These moves di\ufb00er in the size of the space searched for the optimal move. While expansion and swap search a space of size at most 2n while minimising a function of n variables, the range moves explores a much larger space of Kn where K is a parameter of the energy (see Veksler (2007) for more details). Of these move making approaches, only \u03b1\u03b2 swap can be directly applied to associative hierarchical networks as the term \u03c6ic(xi, xc), is not a metric nor truncated convex. These methods start from an arbitrary initial solution of the problem and proceed by making a series of changes each of which leads to a solution of the same or lower energy (Boykov et al., 2001). At each step, the algorithms project a set of candidate moves into a Boolean space, along with their energy function. If the resulting projected energy function (also called the move energy) is both submodular and pairwise, it can be exactly minimised in polynomial time by solving an equivalent st-mincut problem. These optima can then be mapped back into the original space, returning the optimal move within the move set. The move algorithms run this procedure until convergence, iteratively picking the best candidate as di\ufb00erent choices of range are cycled through. Minimising Higher Order Functions A number of researchers have worked on the problem of map inference in higher order amns. Lan et al. (2006) proposed approximation methods for bp to make e\ufb03cient inference possible in higher order mrfs. This was followed by the recent works of Potetz and Lee (2008); Tarlow et al. (2008, 2010) in which they showed how belief propagation can be e\ufb03ciently performed in networks containing moderately large cliques. However, as these methods were based on bp, they were quite slow and took minutes or hours to converge, and lack bounds. To perform inference in the P n models, Kohli et al. (2007, 2008), \ufb01rst showed that certain projection of the higher order P n model can be transformed into submodular pairwise functions containing auxiliary variables. This was used to formulate higher order expansion and swap move making algorithms The only existing work that addresses the problem of bounded higher order inference is (Gould et al., 2009) which showed how theoretical bounds could be derived given move making algorithms that proposed optimal moves by exactly solving some sub-problem. In application they used approximate moves which do not exactly solve the sub-problems proposed. Consequentially, the bounds they derive do not hold for the methods they propose. However, their analysis can be applied to the P n (Kohli et al., 2007) model and inference techniques, which do propose optimal moves, and it is against these bounds that we compare our results. 
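The expansion and swap schemes discussed above share the same outer structure. The sketch below shows only that outer loop, under our own assumptions: solve_binary_move stands in for the graph cut (st-mincut) solver of the projected move energy, and energy is any callable scoring a labelling; neither name comes from the paper, and this is not the implementation evaluated later.

def move_making(energy, labels, labelling, solve_binary_move, max_sweeps=10):
    # Generic expansion-style loop: repeatedly take the best move for each label
    # until no candidate move lowers the energy.
    for _ in range(max_sweeps):
        improved = False
        for alpha in labels:
            proposal = solve_binary_move(energy, labelling, alpha)  # optimal move for this alpha
            if energy(proposal) < energy(labelling):
                labelling, improved = proposal, True
        if not improved:
            break  # converged within the chosen move space
    return labelling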
4.1 INFERENCE WITH \u03b1-EXPANSION We show that by restricting the form of the inter-layer potentials \u03c8(n) c (x(n\u22121) c , x(n) c ) to that of the weighted Robust P n model (Kohli et al., 2008) (see (4)), we can apply \u03b1-expansion to the pairwise form of the ahn. This requires a transform of all functions in the pairwise representation, so that they can be representable as a metric (Boykov et al., 2001). This transformation is non-standard and should be considered a contribution of this work. We alter the form of the potentials in two ways. First, we assume that all variables in the hierarchy take values from the same label set Le = L \u222a{LF }. Where this is not true \u2014 original variables x(1) at the base of the hierarchy can not take label LF \u2014 we arti\ufb01cially augment the label set with the label LF and associate an in\ufb01nite unary cost with it. Secondly, we make the inter-layer pairwise potentials symmetric by performing a local reparameterisation operation. Lemma 1. The inter-layer pairwise functions \u03c6(n) ic (x(n\u22121) i , x(n) c ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 if x(n) c = LF or x(n) c = x(n\u22121) i ki c if x(n) c = l \u2208L and x(n\u22121) i \u0338= l (9) of (8) can be written as: \u03c6(n) ic (x(n\u22121) i , x(n) c ) = \u03c8(n\u22121) i (x(n\u22121) i ) + \u03c8(n) c (x(n) c ) + \u03a6(n) ic (x(n\u22121) i , x(n) c ), (10) where \u03a6(n) ic (x(n\u22121) i , x(n) c ) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 0 if x(n\u22121) i = x(n) c ki c/2 if x(n\u22121) i = LF or x(n) c = LF and x(n\u22121) i \u0338= x(n) c ki c otherwise, (11) and \u03c8(n) c (x(n) c ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 if x(n) c \u2208L \u2212ki c/2 otherwise, (12) \u03c8(n\u22121) i (x(n\u22121) i ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 if x(n\u22121) i \u2208L ki c/2 otherwise. (13) Proof Consider a clique containing only one variable, the general case will follow by induction. Note that if no variables take state LF the costs are invariant to reparameterisation. This leaves three cases: x(n) c = LF, x(n\u22121) i \u2208L \u03c8c(x(n) c ) + \u03c8ic(x(n) c , x(n\u22121) i ) = \u2212k/2 + k/2 = 0 x(n) c \u2208L, x(n\u22121) i = LF \u03c8i(x(n\u22121) i ) + \u03c8ic(x(n\u22121) i , x(n) c ) = k/2 + k/2 = k x(n) c = LF, x(n\u22121) i = LF \u03c8i(x(n\u22121) i ) + \u03c8ic(x(n\u22121) i , x(n) c ) + \u03c8c(x(n) c ) = k\u2212k 2 = 0 (14) Bounded Higher Order Inference We now prove bounds for \u03b1-expansion over an ahn. 1. The pairwise function of lemma 1, is positive de\ufb01nite, symmetric, and satis\ufb01es the triangle inequality \u03c8a,b(x, z) \u2264\u03c8a,b(x, y)+\u03c8a,b(y, z)\u2200x, y, z \u2208L\u222a{LF }. (15) Hence it is a metric, and the algorithms \u03b1 \u03b2 swap and \u03b1-expansion can be used to minimise it. 2. By the work of Boykov et al. (2001), the \u03b1-expansion algorithm is guaranteed to \ufb01nd a solution within a factor of 2 max \u00002, maxE\u2208E1 maxxi,xj \u2208L \u03c8E(xi,xj) minxi,xj \u2208L \u03c8E(xi,xj) \u0001 (i.e. 4 where the potentials de\ufb01ned over the base layer of hierarchy take the form of a Potts model) of the global optima. 3. 
The following two properties hold: min x(1) E(x(1)) = min x(1),xa E\u2032(x(1)) + Ea(x(1), xa), (16) E(x(1)) \u2264E\u2032(x(1)) + Ea(x(1), xa), (17) Hence, if there exists a labelling (x\u2032, x\u2217)) such that E\u2032(x\u2032)+Ea(x\u2032, x\u2217) \u2264k min x(1),xa E\u2032(x(1))+Ea(x(1), xa). (18) then E(x\u2032) \u2264k min x(1) E(x(1)). (19) Consequentially, the bound is preserved in the transformation that maps from the pairwise energy back to its higher order form. By way of comparison, the work of Gould et al. (2009) provides a bound of 2|c| for the higher order potentials of the strict P n model (Kohli et al., 2007), where c is the largest clique in the network. Using their approach, no bounds are possible for the general class of Robust P n models or for associative hierarchical networks. The moves of our new range-move algorithm (see next section) strictly contain those considered by \u03b1expansion and thus our approach automatically inherits the above approximation bound. 5 NOVEL MOVES AND TRANSFORMATIONAL OPTIMALITY In this section we propose a novel graph cut based move making algorithm for minimising the hierarchical pairwise energy function de\ufb01ned in the previous section. Let us consider a generalisation of the swap and expansion moves proposed in Boykov et al. (2001). In a standard swap move, the set of all moves considered is those in which a subset of the variables currently taking label \u03b1 or \u03b2 change labels to either \u03b2 or \u03b1. In our range-swap the moves considered allow any variables taking labels \u03b1,LF or \u03b2 to change their state to any of \u03b1,LF or \u03b2. Similarly, while a normal \u03b1 expansion move allows any variable to change to some state \u03b1, our range expansion allows any variable to change to states \u03b1 or LF . This approach can be seen as a variant on the ordered range moves proposed in Veksler (2007); Kumar and Torr (2008), however while these works require that an ordering of the labels {l1, l2, . . . , ln} exist such that moves over the range {li, li+1 . . . li+j} are convex for some j \u22652 and for all 0 < i \u2264n \u2212j, our range moves function despite no such ordering existing. We now show that the problem of \ufb01nding the optimal swap move can be solved exactly in polynomial time. Consider a label mapping function f\u03b1,\u03b2 : L \u2192{1, 2, 3} de\ufb01ned over the set {\u03b1, LF , \u03b2} that maps \u03b1 to 1, LF to 2 and \u03b2 to 3. Given this function, it is easy to see that the reparameterised inter-layer potential \u03a6(n) ic (x(n\u22121) i , x(n) c ) de\ufb01ned in lemma 1 can be written as a convex function of f\u03b1,\u03b2(x(n\u22121) i ) \u2212f\u03b1,\u03b2(x(n) c ) over the range \u03b1, LF , \u03b2. Hence, we can use the Ishikawa construct (Ishikawa, 2003) to minimise the swap move energy to \ufb01nd the optimal move. A similar proof can be constructed for the range-expansion move described above. The above de\ufb01ned move algorithm gives improved solutions for the hierarchical energy function used for formulating the object segmentation problem. We can improve further upon this algorithm. Our novel construction for computing the optimal moves explained in the following section, is based upon the original energy function (before reparameterisation) and has a strong transformational optimality property. 
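Before moving on, a short self-contained check of the reparameterisation of Lemma 1 (Eqs. 9-13), on which the \u03b1-expansion bound above relies, may be useful. The snippet is our own illustration rather than code from the paper; k plays the role of a single weight ki c and the label names are arbitrary.

FREE = "L_F"

def phi_original(x_i, x_c, k):
    # Eq. (9): zero cost if the auxiliary variable is free or agrees with its child.
    return 0.0 if x_c == FREE or x_c == x_i else k

def phi_reparameterised(x_i, x_c, k):
    # Eqs. (10)-(13): unary terms on both variables plus a symmetric pairwise term.
    psi_i = k / 2 if x_i == FREE else 0.0
    psi_c = -k / 2 if x_c == FREE else 0.0
    if x_i == x_c:
        big_phi = 0.0
    elif x_i == FREE or x_c == FREE:
        big_phi = k / 2
    else:
        big_phi = k
    return psi_i + psi_c + big_phi

labels = ["a", "b", FREE]
assert all(phi_original(i, c, 2.0) == phi_reparameterised(i, c, 2.0)
           for i in labels for c in labels)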
We \ufb01rst describe the construction of a three label range move over the hierarchical network, and then show in section 5.2 that under a set of reasonable assumptions, our methods are equivalent to a swap or expansion move that exactly minimises the equivalent higher order energy de\ufb01ned over the base variables E(x(1)) of the hierarchical network (as de\ufb01ned in (7)). 5.1 CONSTRUCTION OF THE RANGE MOVE We now explain the construction of the submodular quadratic pseudo boolean (qpb) move function for range expansion. The construction of the swap based move function can be derived from this range move. In essence, we demonstrate that the cost function of (9) over the range xc \u2208{\u03b2, LF , \u03b1}, xi \u2208{\u03b4, LF , \u03b1} where \u03b2 may or may not equal \u03b4 is expressible as a submodular qpb potential. To do this, we create a qpb function de\ufb01ned on 4 variables c1, c2, i1 and i2. We associate the states i1 = 1, i2 = 1 with xi taking state \u03b1, i1 = 0, i2 = 0 with the current state of xi = \u03b4, and i1 = 1, i2 = 0 with state LF . We prohibit the state i1 = 0, i2 = 1 by incorporating the pairwise term \u221e(1 \u2212i1)i2 which assigns an in\ufb01nite cost to the state i1 = 0, i2 = 1, and do the same respectively with xc and c1 and c2. To simplify the resulting equation, we write I instead of \u2206(\u03b2 \u0338= \u03b4), k\u03b4 for \u03c8i,c(LF , \u03b4) and k\u03b1 for \u03c8i,c(LF , \u03b1) then \u03c8i,c(xi, xc) = (1\u2212I)k\u03b4c2(1\u2212x2)\u2212Ik\u03b4c2+k\u03b1(1\u2212c1)x1 (20) over the range xc \u2208{\u03b2, LF , \u03b1}, xi \u2208{\u03b4, LF , \u03b1}. The proof follows from inspection of the function. Note that c2 = 1 if and only if xc = \u03b2 while c1 = 0 if and only if c = \u03b1. If xc = LF then c2 = 0 and c1 = 1 and the cost is always 0. If xc = \u03b1 the \ufb01rst two terms take cost 0, and the third term has a cost of k\u03b1 associated with it unless xi = \u03b1. Similarly, if xc = \u03b2 there is a cost of k\u03b2 associated with it, unless xi also takes label \u03b2. \u25a1 5.2 OPTIMALITY Note that both variants of unordered range moves are guaranteed to \ufb01nd the global optima if the label space of x(1) contains only two states. This is not the case for the standard forms of \u03b1 expansion or \u03b1\u03b2 swap as auxiliary variables may take one of three states. Transformational optimality Consider an energy function de\ufb01ned over the variables x = {x(h), h \u2208 {1, 2, . . . , H}} of a hierarchy with H levels. We call a move making algorithm transformationally optimal if and only if any proposed move x\u2217= {x(h) \u2217, h \u2208 {1, 2, . . . , H}} satis\ufb01es the property: E(x\u2217) = min xaux \u2217 E(x(1) \u2217, xaux \u2217 ) (21) where xaux \u2217 = S h\u22082,...,H x(h) \u2217 represents the labelling of all auxiliary variables in the hierarchy. Note that any move proposed by transformationally optimal algorithms minimises the original higher order energy (7). We now show that when applied to hierarchical networks, the range moves are transformationally optimal. Move Optimality To guarantee transformational optimality we need to constrain the set of higher order potentials. Consider a clique c with an associated auxiliary variable x(i) c . Let xl be a labelling such that x(i) c = l \u2208L and xLF be a labelling that only di\ufb00ers from it that the variable x(i) c takes label LF . 
We say a clique potential is hierarchically consistent if it satis\ufb01es the constraint: E(xl) \u2265E(xLF ) = \u21d2 P i\u2208c ki c\u2206(xi = l) P i\u2208c ki c > 0.5. (22) The property of hierarchical consistency is also required in computer vision for the cost associated with the hierarchy to remain meaningful. The labelling of an auxiliary variable within the hierarchy should be re\ufb02ected in the state of the clique associated with it. If an energy is not hierarchically consistent, it is possible that the optimal labelling of regions of the hierarchy will not re\ufb02ect the labelling of the base layer. The constraint (22) is enforced by construction, weighting the relative magnitude of \u03c8i(l) and \u03c8i,j(bj, x(i) c ) to guarantee that: \u03c8i(l) + X j\u2208Ni/c max xj\u2208L\u222a{Lf } \u03c8i,j(bj, x(i) c ) < 0.5 X Xi\u2208c ki\u2200l \u2208L. (23) If this holds, in the degenerate case where there are only two levels in the hierarchy, and no pairwise connections between the auxiliary variables, our network is exactly equivalent to the P n model. At most one l \u2208L at a time can satisfy (22), assuming the hierarchy is consistent. Given a labelling for the base layer of the hierarchy x(1), an optimal labelling for an auxiliary variable in x(2) associated with some clique must be one of two labels: LF and some l \u2208 L. By induction, the choice of labelling of any clique in x(j) must also be a decision between at most two labels: LF and some l \u2208L. 5.3 TRANSFORMATIONAL OPTIMALITY UNDER UNORDERED RANGE MOVES Swap range moves Swap based optimality requires an additional constraint to that of (22), namely that there are no pairwise connections between variables in the same level of the hierarchy, except in the base layer. From (6) if an auxiliary variable xc may take label \u03b3 or LF , and one of its children xi|i \u2208c take label \u03b4 or LF , the cost associated with assigning label \u03b3 or LF to xc is independent of the label of xi with respect to a given move. Under a swap move, a clique currently taking label \u03b4 \u0338\u2208{\u03b1, \u03b2} will continue to do so. This follows from (4) as the cost associated with taking label \u03b4 is only dependent upon the weighted average of child variables taking state \u03b4, and this remains constant. Hence the only clique variables that may have a new optimal labelling under the swap are those currently taking state \u03b1, LF or \u03b2, and these can only transform to one of the states \u03b1, LF or \u03b2. As the range moves map exactly this set of transformations, the move proposed must be transformationally optimal, and consequently the best possible \u03b1\u03b2 swap over the energy (7). Expansion Moves In the case of a range-expansion move, we can maintain transformational optimality while incorporating pairwise connections into the hierarchy \u2014 provided condition (22) holds, and the energy can be exactly represented in our submodular moves. In order for this to be the case, the pairwise connections must be both convex over any range \u03b1, LF , \u03b2 and a metric. The only potentials that satisfy this are linear over the ordering \u03b1, LF , \u03b2 \u2200\u03b1, \u03b2. Hence all pairwise connections must be of the form: \u03c8i,j(xi, xj) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 if xi = xj \u03bb/2 if xi = LF or xj = LF and xi \u0338= xj \u03bb otherwise. (24) where \u03bb \u2208R+ 0 . 
By lemma 1, it can be readily seen that the connections in the hierarchical network are a constrained variant of this form. A similar argument to that of the optimality of \u03b1\u03b2 swap can be made for \u03b1-expansion. As the label \u03b1 is \u2018pushed\u2019 out across the base layer, the optimal labelling of some x(n) where n \u22652 must either remain constant or transition to one of the labels LF or \u03b1. Again, the range moves map exactly this set of transforms and the suggested move is both transformationally optimal, and the best expansion of label \u03b1 over the higher order energy of (7). 6 EXPERIMENTS We evaluate \u03b1-expansion, \u03b1\u03b2 swap, trw-s, Belief Propagation, Iterated Conditional Modes, and both the expansion and swap based variants of our unordered range moves on the problem of object class segmentation over the MSRC data-set (Shotton et al., 2006), in which each pixel within an image must be assigned a label representing its class, such as grass, water, boat or cow. We express the problem as a three layer hierarchy. Each pixel is represented by a random variables of the base layer. The second layer is formed by performing multiple unsupervised segmentations over the image, and associating one auxiliary variable with each segment. The children of each of these variables in x(2) are the variables contained within the segment, and pairwise connections are formed between adjacent segments. The third layer is formed in the same manner as the second layer by clustering the image segments. Further details are given in Ladicky et al. (2009). We tested each algorithm on 295 test images, with an average of 70,000 pixels/variables in the base layer and up to 30,000 variables in a clique, and ran them either until convergence, or for a maximum of 500 iterations. In the table in \ufb01gure 1 we compare the \ufb01nal energies obtained by each algorithm, showing the number of times they achieved an energy lower than or equal to all other methods, the average di\ufb00erence E(method)\u2212 E(min) and average ratio E(method)/E(min). Empirically, the message passing algorithms trw-s and bp appear ill-suited to inference over these dense hierarchical networks. In comparison to the graph cut based move making algorithms, they had higher resulting energy, higher memory usage, and exhibited slower convergence. While it may appear unreasonable to test message passing approaches on hierarchical energies when higher order formulations such as (Komodakis and Paragios, 2009; Potetz and Lee, 2008) exist, we note that for the simplest hierarchy that contains only one additional layer of nodes and no pairwise connections in this second layer, higher order and hierarchical message-passing approaches will be equivalent, as inference over the trees that represent higher order potentials is exact. Similar relative performance by message passing schemes was observed in these cases. Further, application of such approaches to the general form of (7) would require the computation of the exact min-marginals of E(2), a di\ufb03cult problem in itself. 
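The comparison statistics just described are straightforward to reproduce once the final energy of every method on every test image is recorded. The sketch below is illustrative only and assumes a data layout of our choosing (a mapping from method name to the list of per-image final energies); it is not the evaluation code used for the paper.

def summarise(final_energies):
    # final_energies: {method: [final energy on each test image]}
    methods = list(final_energies)
    n_images = len(next(iter(final_energies.values())))
    best = [min(final_energies[m][i] for m in methods) for i in range(n_images)]
    stats = {}
    for m in methods:
        e = final_energies[m]
        stats[m] = {
            "best_count": sum(e[i] <= best[i] for i in range(n_images)),  # ties count as best
            "avg_diff": sum(e[i] - best[i] for i in range(n_images)) / n_images,
            "avg_ratio": sum(e[i] / best[i] for i in range(n_images)) / n_images,
        }
    return stats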
Method | Best | E(meth) \u2212 E(min) | E(meth)/E(min) | Time
Range-exp | 265 | 74.747887 | 1.000368 | 6.1s
Range-swap | 137 | 9033.847065 | 1.058777 | 19.8s
\u03b1-expansion | 109 | 255.500278 | 1.001604 | 6.3s
\u03b1\u03b2 swap | 42 | 9922.084163 | 1.060385 | 41.6s
trw-s | 12 | 38549.214994 | 1.239831 | 8.3min
bp | 6 | 13455.569713 | 1.081627 | 2min
icm | 5 | 45954.670836 | 1.277519 | 25.3s
Figure 1: Left: Typical behaviour of all methods along with the lower bound obtained from trw-s on an image from the MSRC (Shotton et al., 2006) data set. The dashed lines at the right of the graph represent final converged solutions. Right: Comparison of methods on 295 testing images. From left to right the columns show the number of times they achieved the best energy (including ties), the average difference (E(method) \u2212 E(min)), the average ratio (E(method)/E(min)) and the average time taken. All three approaches proposed by this paper, \u03b1-expansion under the reparameterisation of section 5 and the transformationally optimal range expansion and swap, significantly outperformed existing inference methods both in speed and accuracy. See the supplementary materials for more examples.
In all tested images both \u03b1-expansion variants outperformed trw-s, bp and icm. These latter methods only obtained minimal cost labellings in images in which the optimal solution found contained only one label, i.e. they were entirely labelled as grass or water. The comparison also shows that unordered range move variants usually outperform vanilla move making algorithms. The higher number of minimal labellings found by the range-move variant of \u03b1\u03b2 swap in comparison to those of vanilla \u03b1-expansion can be explained by the large number of images in which two labels strongly dominate, as unlike standard \u03b1-expansion both range move algorithms are guaranteed to find the global optima of such a two label sub-problem (see section 5.2). The typical behaviour of all methods alongside the lower bound of trw-s can be seen in figure 1 and further, alongside qualitative results, in the supplementary materials. 7 CONCLUSION This paper shows that higher order amns are intimately related to pairwise hierarchical networks. This observation allowed us to characterise higher order potentials which can be solved under a novel reparameterisation using conventional move making expansion and swap algorithms, and to derive bounds for such approaches. We also gave a new transformationally optimal family of algorithms for performing efficient inference in higher order amns that inherits such bounds. We have demonstrated the usefulness of our algorithms on the problem of object class segmentation, where they have been shown to outperform state of the art approaches over challenging data sets (Ladicky et al., 2009) both in speed and accuracy. References Boykov, Y., Veksler, O. and Zabih, R. (2001), \u2018Fast approximate energy minimization via graph cuts\u2019, IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 2001. 1, 4, 5 Gould, S., Amat, F. and Koller, D. (2009), Alphabet soup: A framework for approximate energy minimization, pp. 903\u2013910. 4, 5 Ishikawa, H. (2003), \u2018Exact optimization for markov random fields with convex priors\u2019, IEEE Transactions on Pattern Analysis and Machine Intelligence 25(10), 1333\u20131336. 5 Kohli, P., Kumar, M. and Torr, P. (2007), P 3 and beyond: Solving energies with higher order cliques, in \u2018CVPR\u2019. 1, 2, 4, 5 Kohli, P., Ladicky, L. and Torr, P.
(2008), Robust higher order potentials for enforcing label consistency, in \u2018CVPR\u2019. 1, 2, 4 Kolmogorov, V. (2006), \u2018Convergent tree-reweighted message passing for energy minimization.\u2019, IEEE Trans. Pattern Anal. Mach. Intell. 28(10), 1568\u20131583. 3 Kolmogorov, V. and Rother, C. (2006), C.: Comparison of energy minimization algorithms for highly connected graphs. in: Eccv, in \u2018In Proc. ECCV\u2019, pp. 1\u201315. 3 Komodakis, N. and Paragios, N. (2009), Beyond pairwise energies: E\ufb03cient optimization for higher-order mrfs, in \u2018CVPR09\u2019, pp. 2985\u20132992. 1, 7 Kschischang, F. R., Member, S., Frey, B. J. and andrea Loeliger, H. (2001), \u2018Factor graphs and the sum-product algorithm\u2019, IEEE Transactions on Information Theory 47, 498\u2013519. 3 Kumar, M. P. and Koller, D. (2009), MAP estimation of semi-metric MRFs via hierarchical graph cuts, in \u2018Proceedings of the Conference on Uncertainity in Arti\ufb01cial Intelligence\u2019. 1 Kumar, M. P. and Torr, P. H. S. (2008), Improved moves for truncated convex models, in \u2018Proceedings of Advances in Neural Information Processing Systems\u2019. 1, 4, 5 Ladicky, L., Russell, C., Kohli, P. and Torr, P. H. (2009), Associative hierarchical crfs for object class image segmentation, in \u2018International Conference on Computer Vision\u2019. 1, 3, 7, 8 Ladicky, L., Russell, C., Sturgess, P., Alahri, K. and Torr, P. (2010), What, where and how many? combining object detectors and crfs, in \u2018ECCV\u2019, IEEE. 1 Lan, X., Roth, S., Huttenlocher, D. and Black, M. (2006), E\ufb03cient belief propagation with learned higher-order markov random \ufb01elds., in \u2018ECCV (2)\u2019, pp. 269\u2013282. 4 Potetz, B. and Lee, T. S. (2008), \u2018E\ufb03cient belief propegation for higher order cliques using linear constraint nodes\u2019. 4, 7 Roth, S. and Black, M. (2005), Fields of experts: A framework for learning image priors., in \u2018CVPR\u2019, pp. 860\u2013867. 1 Shotton, J., Winn, J., Rother, C. and Criminisi, A. (2006), TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation., in \u2018ECCV\u2019, pp. 1\u201315. 7, 8 Sontag, D., Meltzer, T., Globerson, A., Jaakkola, T. and Weiss, Y. . (2008), Tightening lp relaxations for map using message passing, in \u2018UAI\u2019. 3 Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M. and Rother, C. (2006), A comparative study of energy minimization methods for markov random \ufb01elds., in \u2018ECCV (2)\u2019, pp. 16\u201329. 1, 3 Tarlow, D., Givoni, I. and Zemel, R. (2010), Hop-map: E\ufb03cient message passing with high order potentials, in \u2018Arti\ufb01cial Intelligence and Statistics\u2019. 4 Tarlow, D., Zemel, R. and Frey, B. (2008), Flexible priors for exemplarbased clustering, in \u2018Uncertainty in Arti\ufb01cial Intelligence (UAI)\u2019. 4 Taskar, B., Chatalbashev, V. and Koller, D. (2004), Learning associative markov networks, in \u2018Proc. ICML\u2019, ACM Press, p. 102. 1, 2 Veksler, O. (2007), Graph cut based optimization for mrfs with truncated convex priors, pp. 1\u20138. 4, 5 Vicente, S., Kolmogorov, V. and Rother, C. (2009), Joint optimization of segmentation and appearance models, in \u2018ICCV\u2019, IEEE. 1 Wainwright, M. and Jordan, M. (2008), \u2018Graphical Models, Exponential Families, and Variational Inference\u2019, Foundations and Trends in Machine Learning 1(1-2), 1\u2013305. 3 Weiss, Y. and Freeman, W. 
(2001), \u2018On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs.\u2019, Transactions on Information Theory . 3 Werner, T. (2009), High-arity interactions, polyhedral relaxations, and cutting plane algorithm for soft constraint optimisation (map-mrf), in \u2018CVPR\u2019. 3", "introduction": "The last few decades have seen the emergence of Markov networks or random \ufb01elds as the most widely used probabilistic model for formulating problems in machine learning and computer vision. This interest has led to a large amount of work on the problem of estimating the maximum a posteriori (map) solution of a random \ufb01eld (Szeliski et al., 2006). However, most of this research e\ufb00ort has focused on inference over pairwise Markov networks. Of particular interest are the families of associative pairwise potentials (Taskar et al., 2004), in which connected variables are assumed to be more likely than not to share the same label. In- ference algorithms targeting these associative poten- tials, which include truncated convex costs (Kumar and Torr, 2008), metrics (Boykov et al., 2001), and semi metrics (Kumar and Koller, 2009), often carry bounds which guarantee the cost of the solution found must lie within a bound, speci\ufb01ed as a \ufb01xed factor of n of the cost of the minimal solution. Although higher order Markov networks (i.e. those with a clique size greater than two) have been used to obtain impressive results for a number of challenging problems in computer vision (Roth and Black, 2005; Komodakis and Paragios, 2009; Vicente et al., 2009; Ladicky et al., 2010), the problem of bounded higher order inference has been largely ignored. In this paper, we address the problem of perform- ing graph cut based inference in a new model: the Associative Hierarchical Networks (ahns) (Ladicky et al., 2009), which includes the higher order Asso- ciative Markov Networks (amns) (Taskar et al., 2004) or P n potentials (Kohli et al., 2007) and the Robust P n (Kohli et al., 2008) model as special cases, and derive a bound of 4. This family of ahns have been successfully applied to diverse problems such as object class recognition, doc- ument classi\ufb01cation and texture based video segmen- tation, where they obtain state of the art results. Note that in our earlier work Ladicky et al. (2009), the prob- lem of inference is not discussed at all; it shows how these hierarchical models can be used for scene un- derstanding, and how learning is possible under the assumption that the model is tractable. For a set of variables x(1) ahns are characterised by energies (or costs) of the form: E(x(1)) = E\u2032(x(1)) + min xa Ea(x(1), xa) (1) where E\u2032 and Ea are pairwise amns and xa is a set of auxiliary variables. The ahn is a amn containing higher order cliques, de\ufb01ned only in terms of x(1), but can also be seen as a pairwise amn de\ufb01ned in terms of x(1) and xa. We propose new move making algorithms over the pairwise energy E\u2032(x(1))+Ea(x(1), xa) which have the important property of transformational opti- mality. Move making algorithms function by e\ufb03ciently search- ing through a set of candidate labellings and proposing the optimal candidate i.e. the one with the lowest en- ergy to move to. The set of candidates is then updated, and the algorithm repeats till convergence. 
We call a move making algorithm transformationally optimal if and only if any proposed move (x\u2217, xa) sat- is\ufb01es the property: E(x\u2217) = E\u2032(x\u2217) + Ea(x\u2217, xa). (2) Experimentally, our transformationally optimal algo- rithms converge faster, and to better solutions than standard approaches, such as \u03b1-expansion. Moreover, unlike standard approaches, our transformationally optimal algorithms always \ufb01nd the exact solution for binary ahns. Outline of the paper In section 2 we introduce the notation used in the rest of the paper. Existing mod- els generalised by the associative hierarchical network, and the full de\ufb01nition of ahns are given in section 3. In section 4 we discuss work on e\ufb03cient inference, and show how the pairwise form of associative hierarchical networks can be minimised using the \u03b1-expansion al- gorithm, and derive bounds for our approach. Section 5 discusses the application of novel move making algo- rithms to such energies, and we show that under our formulation the moves of the robust P n model become equivalent to a more general form of range moves over unordered sets. We derive transformational optimal- ity results over hierarchies of these potentials, guaran- teeing the optimality of the moves proposed. We ex- perimentally verify the e\ufb00ectiveness of our approach against other methods in section 6, and conclude in section 7." } ], "Fabio Tosi": [ { "url": "http://arxiv.org/abs/2303.17603v1", "title": "NeRF-Supervised Deep Stereo", "abstract": "We introduce a novel framework for training deep stereo networks effortlessly\nand without any ground-truth. By leveraging state-of-the-art neural rendering\nsolutions, we generate stereo training data from image sequences collected with\na single handheld camera. On top of them, a NeRF-supervised training procedure\nis carried out, from which we exploit rendered stereo triplets to compensate\nfor occlusions and depth maps as proxy labels. This results in stereo networks\ncapable of predicting sharp and detailed disparity maps. Experimental results\nshow that models trained under this regime yield a 30-40% improvement over\nexisting self-supervised methods on the challenging Middlebury dataset, filling\nthe gap to supervised models and, most times, outperforming them at zero-shot\ngeneralization.", "authors": "Fabio Tosi, Alessio Tonioni, Daniele De Gregorio, Matteo Poggi", "published": "2023-03-30", "updated": "2023-03-30", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.RO" ], "main_content": "Deep Stereo Matching. For decades, stereo matching has been tackled using hand-crafted algorithms [56] usually classified into local and global methods, according to their processing step and their speed/accuracy trade-off. In recent years, deep learning has become the dominant technique in the stereo matching field, achieving results that were previously unthinkable [50]. Early efforts in this field cast individual steps of the pipeline [56] as learnable components [37, 61, 62, 74]. Starting with DispNet [41], end-toend architectures rapidly replaced any alternative approach [10,14,27,33,63,72,82,85,86]. The latest advances in this field take inspiration from RAFT [67] to design recurrent architectures, either by performing lookups on a 3D correlation volume [35] or correlations in a local window [30], or exploiting Transformers [25, 31] to capture long-range dependencies between features of the input stereo pair. 
Despite their impressive results on public benchmarks, these methods strictly require dense in-domain ground-truth. Self-Supervised Stereo. This branch of the stereo literature aims to train deep models without the use of ground-truth depth data. A common strategy involves using photometric losses [24] across stereo images from single pairs [70, 71, 90] or videos [15, 29, 76]. An alternative line of works replaces it with proxy supervision either from hand-crafted algorithms [49, 68, 69] or distilled from other networks [3]. Although these strategies are practical, they have proven to only be effective at specializing or adapting to single domains, often lack generalization [3], and do not provide reliable supervision at occlusions. In contrast, we exploit multi-view geometry at its finest through neural rendering to learn stereo, much like single-image depth estimation frameworks can learn from stereo images [23]. Zero-Shot Generalization. This line of work focuses on training deep models on a set of labeled images and then preserving accuracy when tested across different domains, under the assumption that target domain-specific data is unavailable. Approaches initially explored include learning domain-invariant features [86], hand-crafted matching volumes [8], or casting disparity estimation as a refinement problem on top of hand-crafted stereo algorithms [2]. The latest trends in the field include using a contrastive feature loss and a stereo selective whitening loss [87], an ImageNet pre-trained classifier to extract general-purpose image features and graft them into the cost volume [36], or shortcut avoidance [16]. Among others, Mono-for-Stereo (MfS) [79] generates training stereo pairs from large-scale real-world monocular datasets. This comes at the expense of 1) requiring a pre-trained monocular depth network [53] \u2013 which in turn is typically trained on millions of images, also involving ground-truth labels \u2013 and 2) dealing with holes generated by the forward-warping operation used to obtain the right view. In contrast, our approach of generating stereo pairs from single images does not require any model pre-trained on millions of images [53], ground-truth labels or post-processing steps, and still achieves better results. Figure 2. Framework Overview. Top: data generation pipeline that trains NeRFs from user-collected single-camera frames to render stereo images (i.e. triplets), confidence and proxy depth maps. Bottom: NeRF-supervised training of a stereo network on rendered pairs. Neural Radiance Fields. NeRFs [44] belong to the family of neural fields [81]. These models implicitly parameterize a 5D lightfield using one or more Multi-Layer Perceptrons (MLPs). In just three years, they have become the dominant approach for generating novel views using neural rendering. Different flavors of NeRFs have been developed to deal with dynamic scenes [19, 32, 39, 52, 80], image relighting [7,65,88], camera pose refinement [34,78], anti-aliasing in multi-resolution images [5,6], cross-spectral imaging [51], deformable objects [18,46\u201348,73] or content generation [9, 28, 60].
Most recent NeRF variants focus on faster convergence, e.g. by exploiting multiple MLPs [54], factorization [12] or explicit representations [4,45,66]. Recent works partially explored the potential of NeRFs to serve as data factories at high-level – object detection [20], semantic labeling [89] or to learn descriptors [83].

3. Method
Fig. 2 illustrates our NeRF-Supervised (NS) learning framework. We first collect multi-view images from multiple static scenes. Then, we fit a NeRF on each single scene to render stereo triplets and depth. Finally, the rendered data is used to train any existing stereo matching network.

3.1. Background: Neural Radiance Field (NeRF)
A Neural Radiance Field (NeRF) [44] maps a 5D input – 3D coordinates x = (x, y, z) of a point in the scene and viewing directions (θ, ϕ) of the camera capturing it – into a color-density output (c, σ) by means of a network Fψ, modelling the radiance of an observed scene as Fψ(x, θ, ϕ) → (c, σ). Such a 5D function is approximated by the weights of an MLP Fψ. To render a 2D image, the following steps are taken: 1) sending camera rays through the scene to sample a set of points, 2) estimate density and color for each sampled point with Fψ and 3) exploit volume rendering [40] to synthesize the 2D image. In practice, the color C(r) rendered from a camera ray r(t) = o + td can be obtained by solving the following integral: C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,c(\mathbf{r}(t),\mathbf{d})\,dt (1) with T(t) = \exp\big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\big) representing the accumulated transmittance from tn to tf along the ray r, and tn, tf the near and far plane, respectively. The integral is computed via quadrature by dividing the ray into a predefined set of N evenly spaced bins: C(\mathbf{r}) = \sum_{i=1}^{N} T_i(1-\exp(-\sigma_i\delta_i))c_i, \quad T_i = \exp\big(-\sum_{j=1}^{i-1}\sigma_j\delta_j\big) (2) with δi being the step between adjacent samples ti, ti+1. Speed-up with Explicit Representations. The described model is effective, but slow to train due to two reasons: first, the MLP must learn from scratch the mapping for all points in the 5D space and second, for each individual input, the entire set of weights needs to be optimized. Explicit representations – e.g. voxel grids – can store additional features that can be rapidly indexed and interpolated, but this comes at the cost of higher memory requirements. This allows for 1) a shallower MLP, faster to converge and 2) a reduced number of parameters to optimize for each single input – i.e., features on a voxel grid and the few parameters of the shallow MLP.
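To make the quadrature of Eq. (2) concrete, the following is a minimal PyTorch sketch of how a batch of rays can be composited once densities, colors and bin widths have been sampled along each ray; tensor names and shapes are our own illustration rather than code from [44] or [45].

import torch

def composite_rays(sigma, rgb, deltas):
    # Quadrature of Eq. (2): sigma (R, N) densities, rgb (R, N, 3) colors and
    # deltas (R, N) bin widths for R rays with N samples each.
    alpha = 1.0 - torch.exp(-sigma * deltas)                      # (R, N)
    # T_i = exp(-sum_{j<i} sigma_j * delta_j): transmittance before sample i
    trans = torch.exp(-torch.cumsum(
        torch.cat([torch.zeros_like(sigma[:, :1]), sigma * deltas], dim=1),
        dim=1))[:, :-1]                                           # (R, N)
    weights = trans * alpha                                       # T_i * alpha_i
    color = (weights.unsqueeze(-1) * rgb).sum(dim=1)              # (R, 3)
    return color, weights

Summing the returned per-sample weights along a ray gives the accumulated opacity, which the pipeline later reuses as a confidence measure for the rendered depth. Explicit representations speed up precisely this evaluation by making the density and feature lookups cheap, which is the principle exploited by the variants discussed next.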
For instance, on this principle, DVGO [66] builds two voxel grids M(dens) and M(feat), with the former modeling density and the latter storing features that are queried by the MLP Fψ to compute color: \sigma(\mathbf{x}) = \text{interp}(\mathbf{x}, \mathbf{M}^{\text{(dens)}}) (3), \mathbf{c}(\mathbf{x},\mathbf{d}) = F_{\psi}(\text{interp}(\mathbf{x}, \mathbf{M}^{\text{(feat)}}), \mathbf{x}, \mathbf{d}) (4) while Instant-NGP [45] builds multi-resolution voxel grids accessed by means of index hashing: h(\mathbf{x}) = \Big(\bigoplus_{i=0}^{d} x_i\pi_i\Big) \mod T (5) with ⊕ being the bitwise XOR, xi – with i ∈ [0, d] – single bits of the location index x, πi unique, large prime numbers and T the maximum amount of elements in the grid.

3.2. NeRF as a Data Factory
With reference to Fig. 2, we now describe, step-by-step, how we use Neural Radiance Fields to generate countless image pairs for training any deep stereo network. Image Collection and COLMAP Pre-processing. We start by acquiring a sparse set of M images from a single static scene, for which any handheld device with a single camera is suitable, e.g., a mobile phone. We run COLMAP [57] on any single scene to estimate both intrinsics K and camera poses Ei, i ∈ [1, M]. This is a standard procedure for preparing user-collected data to be NeRFed [5,6,44]. NeRF Training. Then, we fit an independent NeRF – in one of its speeded-up flavors [4, 12, 45, 66] – for each scene. This is achieved by rendering, for a given batch R of rays shot from collected image positions, the corresponding color Ĉ(r) according to Eq. 2, and optimizing an L2 loss with respect to pixel colors C(r) in the collected frames: \mathcal{L}_{rend} = \sum_{\mathbf{r} \in \mathcal{R}} ||\hat{C}(\mathbf{r}) - C(\mathbf{r})||_2^2 (6) Stereo Pairs Rendering. Finally, to create the stereo training set, we generate a virtual set of stereo extrinsic parameters S = I|b. The rotation is represented by the 3 × 3 identity matrix I and the translation vector b = (b, 0, 0)^T has a magnitude b along the x axis in the camera reference system. This defines the baseline of a virtual stereo camera. Subsequently, we render two novel views, one originating from an arbitrary viewpoint Ek = Rk|tk, and one from its corresponding virtual stereo camera viewpoint E^R_k = Ek × S = Rk|(tk + b), which represent the reference and the target frames of a perfectly rectified stereo pair, with the latter positioned to the right of the former. This process allows for the generation of countless stereo samples for training deep stereo networks. Additionally, for each viewpoint Ek, we also render a third image from E^L_k = Ek × S^{-1} = Rk|(tk − b), which is a second target frame placed on the left of the reference one. This creates a stereo triplet in which the three images are perfectly rectified, as shown in Fig. 3 (a-c). The importance of this process, particularly in dealing with occlusions, will be discussed in Sec. 3.3. Finally, we extract the disparity dr from the rendered depth zr, which is aligned with the center image of the triplet, and use it to assist in the training of any deep stereo network existing in the literature: z(\mathbf{r}) = \sum_{i=1}^{N} T_i(1-\exp(-\sigma_i\delta_i))\sigma_i, \quad d(\mathbf{r}) = \frac{b \cdot f}{z(\mathbf{r})} (7) with f being the focal length estimated by COLMAP [57].
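As a concrete illustration of the triplet rendering described above, the snippet below sketches how the two virtual target poses and the proxy disparity can be derived. It assumes 4×4 extrinsic matrices E_k = [R_k | t_k], the baseline value is just an example, the function names are ours, and the sign of the x-offset may need to be flipped depending on whether poses are stored world-to-camera or camera-to-world.

import numpy as np

def virtual_triplet_poses(E_k, b=0.5):
    # Following Sec. 3.2: keep the rotation R_k and offset the translation by
    # (+/-b, 0, 0) along the camera x axis, yielding the right and left target
    # views of a perfectly rectified triplet centered on the reference pose.
    T_right = np.eye(4)
    T_right[0, 3] = +b
    T_left = np.eye(4)
    T_left[0, 3] = -b
    return T_right @ E_k, T_left @ E_k        # E_k^R, E_k^L

def disparity_from_depth(depth, b, f, eps=1e-6):
    # Eq. (7): proxy disparity for the center view, d = b * f / z,
    # with f the focal length estimated by COLMAP.
    return b * f / np.maximum(depth, eps)

Rendering the NeRF from E_k^R and E_k^L, together with the center-view depth converted by disparity_from_depth, yields one training triplet with its proxy label.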
3.3. NeRF-Supervised Training Regime
Data generated so far is then used to train stereo models. Given a rendered image triplet (Il, Ic, Ir), we estimate a disparity map d̂c by feeding the network with (Ic, Ir), which act as the left and right views of a standard stereo pair. Then, we propose an NS loss with two terms. Triplet Photometric Loss. We exploit image reconstruction to supervise disparity estimation [23]. Specifically, we backward-warp Ir according to d̂c and obtain Î^r_c – i.e., the reconstructed reference image. Then, we measure the photometric difference between Î^r_c and Ic as: \mathcal{L}_\rho(I_c, \hat{I}_c^r) = \beta \cdot \frac{1-\text{SSIM}(I_c, \hat{I}_c^r)}{2} + (1-\beta) \cdot |I_c - \hat{I}_c^r| (8) with SSIM being the Structural Similarity Index Measure [77]. Nevertheless, this formulation lacks adequate supervision in occluded regions, such as the left border of the frame or the left of each depth discontinuity, which are not visible in the right image. To overcome this limitation, we employ the third image, Il. By computing Lρ(Ic, Î^l_c), the occlusions will be complementary to those from the previous ones. Thus, to compensate for both, we compute the final, triplet photometric loss defined as the per-pixel minimum [24] between the two pairwise terms: \mathcal{L}_{3\rho}(\hat{I}_l^c, I_c, \hat{I}_r^c) = \min\big(\mathcal{L}_\rho(\hat{I}_l^c, I_c), \mathcal{L}_\rho(I_c, \hat{I}_r^c)\big) (9) Fig. 3 (left) shows the effect of occlusions when computing Lρ between center-left (d) and center-right (e) pairs with bright colors, whereas they are neglected by L3ρ (f). Finally, untextured regions are discarded by a mask µ [24]: \mu = [\min \mathcal{L}_{3\rho}(\hat{I}_l^c, I_c, \hat{I}_r^c) < \min \mathcal{L}_{3\rho}(I_l, I_c, I_r)] (10)

Figure 3. Visualization of NS loss components. On the left: (a-c) rendered left-center-right triplet; (d) left-to-center and (e) right-to-center photometric losses, both exposing their own occlusions; (f) per-pixel minimum, compensating for them. On the right: (g) NeRF rendered noisy disparity map; (h) Ambient Occlusion (AO); (i) AO-filtered NeRF disparity; (j) prediction by RAFT-Stereo trained on our dataset.

Rendered Disparity Loss. We further assist the photometric loss by exploiting rendered disparities as: \mathcal{L}_{disp} = |d_c - \hat{d}_c| (11) However, depth maps rendered by NeRF often exhibit artifacts and large errors [17], as shown in Fig. 3 (g). To address this issue, we employ a filtering mechanism to preserve only the most reliable pixels. We use Ambient Occlusion (AO) [45] to measure the confidence of dc: \text{AO} = \sum_{i=1}^{N} T_i\alpha_i, \quad \alpha_i = 1-\exp(-\sigma_i\delta_i) (12) and will use it to filter the disparity loss accordingly. More details are discussed in the supplementary material.
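The triplet photometric term and its mask (Eqs. 8-10) can be sketched in a few lines of PyTorch. Here I_l_warp and I_r_warp denote the left and right images backward-warped to the center view with the predicted disparity, and the 3×3 SSIM follows common self-supervised depth implementations rather than the authors' exact code.

import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    # Per-pixel SSIM over 3x3 windows, as in common self-supervised depth code.
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return torch.clamp(num / den, 0, 1)

def photometric(a, b, beta=0.85):
    # Eq. (8): weighted SSIM + L1, averaged over color channels.
    return (beta * (1 - ssim(a, b)) / 2 +
            (1 - beta) * (a - b).abs()).mean(1, keepdim=True)

def triplet_loss(I_l, I_c, I_r, I_l_warp, I_r_warp):
    # Eq. (9): per-pixel minimum of the two pairwise terms; Eq. (10): mask
    # discarding pixels where the unwarped images already match better.
    loss_warp = torch.min(photometric(I_l_warp, I_c), photometric(I_c, I_r_warp))
    loss_ident = torch.min(photometric(I_l, I_c), photometric(I_c, I_r))
    mu = (loss_warp < loss_ident).float()
    return (mu * loss_warp).sum() / mu.sum().clamp(min=1)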
NeRF-Supervised Loss. The two terms are summed as: \mathcal{L}_{NS} = \gamma_{disp} \cdot \eta_{disp} \cdot \mathcal{L}_{disp} + \mu \cdot \gamma_{3\rho} \cdot (1-\eta_{disp}) \cdot \mathcal{L}_{3\rho} (13) with γdisp, γ3ρ being weights balancing the impact of photometric and disparity losses, and ηdisp being defined as: \eta_{disp} = \begin{cases} 0 & \text{if AO} < th \\ \text{AO} & \text{otherwise} \end{cases} (14) according to a threshold th over AO, normalized in [0, 1].

4. Experimental Results
We introduce our experiments, first describing implementation details and datasets and, then, discussing our results.

4.1. Implementation Details
All experiments are conducted on a single 3090 NVIDIA GPU (more details in the supplementary material). Training Data Generation. We collect a total of 270 high-resolution scenes in both indoor and outdoor environments using standard camera-equipped smartphones. For each scene, we focus on one or more specific objects and acquire 100 images from different viewpoints, ensuring that the scenery is completely static. The acquisition protocol involves a set of either front-facing or 360° views. We use Instant-NGP [45] as the NeRF engine in our pipeline and train it for 50K steps. Running COLMAP and training Instant-NGP takes ∼25 minutes per scene, with the collected images having a resolution of ∼8Mpx. Afterwards, we generate data with three virtual baselines of b = 0.5, 0.3 and 0.1 units at different resolutions. We render a disparity map and a triplet from any image used to train Instant-NGP, aligning the center view to the original viewpoint. This results in a total of 65,148 triplets for training. Although more triplets could have been rendered (i.e., using additional random viewpoints and baselines), these are sufficient to achieve outstanding results. Deep Stereo Training. We adopt RAFT-Stereo [35] as the main architecture over which we build our evaluation due to its accuracy and fast convergence. Yet, we also consider PSMNet [10] and CFNet [63] to evaluate the effectiveness of our proposal on widely used stereo backbones. We train all models on our dataset with a batch size of 2 and a crop size of 384 × 768. We run 200k training steps for RAFT-Stereo and 250k for PSMNet and CFNet. For ablation experiments, we run 100k iterations following [35]. All the networks are trained from scratch without any pre-training on synthetic datasets. The augmentation procedure described in [35] is used for training. For PSMNet and CFNet, we set dmax to 256 and disabled ImageNet normalization. We use learning rate schedules and optimizers as in [10, 35, 63]. In our experiments, we fix β = 0.85, th = 0.5, γ3ρ = 0.1 and γdisp = 1.

4.2. Evaluation Datasets & Protocol
We use the KITTI [22], Middlebury [55] and ETH3D [59] datasets with publicly available ground-truth for evaluation. Specifically, we define validation and testing splits. Validation: 194 stereo images from KITTI 2012, 13 Additional images from the training set of Middlebury v3 (Midd-A) at Full, Half and Quarter resolutions (F, H, Q), and the Middlebury 2021 (Midd-21) dataset. On this split, we run ablation studies and direct comparisons with MfS [79]. Testing: 200 stereo images from KITTI 2015, 15 stereo pairs from the Middlebury v3 training set (Midd-T) and 27 pairs from ETH3D.
On them, we compare with existing methods that perform zero-shot generalization [16,36,86,87]. Evaluation Metrics. During evaluation, we compute the percentage of pixels having a disparity error greater than a given threshold \u03c4 with respect to the ground-truth. Specifically, we fix \u03c4 = 3 for KITTI, \u03c4 = 2 for Middlebury, \u03c4 = 1 for ETH3D, following the common protocol in the stereo 2-views 3-views KITTI-12 Midd-A Midd-21 L\u03c1 SGM L3\u03c1 SGM Ldisp > th \u03b7disp (> 3px) F (> 2px) H (> 2px) Q (> 2px) (> 2px) (A) \u2713 11.00 56.05 32.33 25.60 44.88 (B) \u2713 6.39 22.68 17.23 18.14 23.71 (B\u2019) [3]\u2713 5.46 19.50 15.26 17.36 21.29 (C) \u2713 5.11 28.33 13.57 12.09 24.38 (D) \u2713 5.57 19.10 13.22 14.91 20.08 (E) \u2713 5.79 21.71 9.85 8.73 22.69 (F) \u2713 \u2713 4.49 15.50 9.25 9.13 16.99 (G) \u2713 \u2713 \u2713 4.31 16.11 9.57 9.96 16.83 (H) \u2713 \u2713 \u2713 4.21 15.31 8.86 8.41 16.26 (I) \u2713 \u2713 \u2713 \u2713 4.31 14.92 8.75 8.28 14.87 (I\u2019) \u2713 \u2713 \u2713 \u2713 4.02 13.12 6.91 7.18 12.87 Table 1. Ablation Study \u2013 Loss Components. Impact of each component in our NS loss. [3]\u2713means using SGM plus [3]. (a) (b) (c) (d) (e) Figure 4. Effect of Training Losses. On the left: (a) center image and (b) corresponding disparity rendered by NeRF. On the right: disparity maps (and zoom-in) by RAFT-Stereo trained with (c) center-right or (d) triplet photometric loss, and (e) our full NeRF-Supervised loss. Baseline KITTI-12 Midd-A Midd21 0.5 0.3 0.1 (> 3px) F (> 2px) H (> 2px) Q (> 2px) (> 2px) \u2713 3.97 18.71 10.55 12.09 16.70 \u2713 \u2713 3.92 16.77 9.66 10.62 16.96 \u2713 \u2713 \u2713 4.31 14.92 8.75 8.28 14.87 Table 2. Ablation Study \u2013 Impact of Baselines. We render triplets with large, medium or small baselines \u2013 0.5, 0.3, 0.1 units. 0 100 200 300 Disparity 0.0% 2.0% 4.0% Percentage Training Set Small Baseline Medium Baseline Large Baseline 0 100 200 300 Disparity 0.0% 2.0% 4.0% Percentage Validation Set Midd-A (Q) Midd-A (H) Midd-A (F) KITTI-12 Midd-21 Figure 5. Disparity Distributions. On the left: our rendered dataset. On the right: datasets from the validation split. matching field. Unless stated otherwise, we evaluate the computed disparity maps considering both occluded as well as non-occluded regions with valid ground-truth disparity. 4.3. Ablation Study We ablate our NS loss and its impact on the training process, as well as other properties of our rendered dataset. Loss Analysis. Tab. 1 shows the results of various instances of RAFT-Stereo trained using different variations of our NS loss. We start by motivating the use of L\u03c1: by training the model using a conventional self-supervised loss on stereo pairs (A) leads to poor performance. This is mainly due to occlusions, for which no supervision can be provided. By using proxy-labels obtained through SGM [26] and filtering them with a n\u00a8 aive left-right consistency check to remove outliers at occlusions (B), the error rates are halved. The same labels processed by [3] further improve the results significantly (B\u2019). However, by exploiting the triplets peculiar to our dataset, L3\u03c1 alone (C) outperforms both (A) and (B), thanks to the stronger self-supervision recovered at occlusions. 
Interestingly, labels extracted by [3] (B\u2019) still produce better performance on the high-resolution datasets Midd-A (F) and Midd-21, despite being outperResolution KITTI-12 Midd-A Midd21 \u223c2Mpx \u223c0.5Mpx (> 3px) F (> 2px) H (> 2px) Q (> 2px) (> 2px) \u2713 5.34 14.77 11.56 12.32 15.53 \u2713 4.31 14.92 8.75 8.28 14.87 \u2713 \u2713 4.42 13.92 9.12 9.66 15.88 Table 3. Ablation Study \u2013 Impact of Rendering Resolution. We render images at both half and quarter of the native resolution. # Scenes KITTI-12 Midd-A Midd21 65 135 270 (> 3px) F (> 2px) H (> 2px) Q (> 2px) (> 2px) \u2713 3.98 18.23 11.07 11.30 17.44 \u2713 3.87 15.82 9.69 10.36 16.51 \u2713 4.31 14.92 8.75 8.28 14.87 Table 4. Ablation Study \u2013 Number of Collected Scenes. We render images from different amounts of collected scenes. formed by SGM labels over triplets (D). Considering the superiority of proxy labels over photometric losses alone, we then exploit the disparity maps rendered by NeRF to supervise the stereo model (E), resulting in mixed results \u2013 i.e. better on low-resolution Middlebury but worse on Midd-A (F), Midd-21 and KITTI. Indeed, such a supervision alone results sub-optimal due to the several artefacts shown in Fig. 3 (g) and recurring in most scenes, making it less effective than L3\u03c1 on KITTI, Midd-A (F) and Midd-21. By neglecting the contribution of labels having AO< th, results dramatically improve for KITTI and highresolution datasets, with a minor drop on Midd-A (Q). Instead, a major improvement is obtained over all datasets by combining Ldisp with L3\u03c1 (H). The triplet is crucial for this: combining Ldisp with L\u03c1 is less effective (G). Finally, our full LNS loss (I), balancing the two terms according to AO, results the most effective on the validation split. Furthermore, row (I\u2019) shows the impact of a longer training schedule (200K steps vs 100K). Fig. 4 qualitatively shows how the estimated disparities by RAFT-Stereo improve dramatically when switching from conventional image loss (c) to L3\u03c1 (d), although finer details are still missing. LNS recovers them with unprecedented fidelity (e). Impact of Virtual Baselines. We evaluate the impact of the virtual baseline used to render triplets on the disparity distribution. Tab. 2 shows the results of our study on the (A) (B) (C) (D) (E) (F) (G) (H) (I) Configuration KITTI-12 Midd-A Midd-21 Model Stereo Network Dataset # Images Pre-Train. (> 3px) F (> 2px) H (> 2px) Q (> 2px) (> 2px) MfS [79] + MidAs PSMNet MfS 535K \u223c2M 4.70 25.61 17.98 14.78 27.55 MfS [79] + MidAs PSMNet Ours 65K \u223c2M 5.02 28.72 20.66 16.92 26.40 NS (Ours) PSMNet Ours 65K 0 4.07 19.84 13.66 9.15 19.08 MfS [79] + MidAs CFNet MfS 535K \u223c2M 4.47 22.00 16.69 14.32 23.44 MfS [79] + MidAs CFNet Ours 65K \u223c2M 4.90 24.20 19.11 16.20 23.76 NS (Ours) CFNet Ours 65K 0 4.64 17.55 12.31 11.13 19.73 MfS [79] + MidAs RAFT-Stereo MfS 535K \u223c2M 4.45 19.79 12.67 9.63 22.26 MfS [79] + MidAs RAFT-Stereo Ours 65K \u223c2M 4.67 24.61 17.25 14.05 24.18 NS (Ours) RAFT-Stereo Ours 65K 0 4.02 13.12 6.91 7.18 12.87 Table 5. Direct Comparison with MfS [79]. We report results achieved by networks trained using MfS pipeline \u2013 both with their proposed dataset and ours \u2013 and trained with our NeRF-supervised approach (NS). (> 2px) MfS: 20.47% Ours: 6.57% MfS: 8.97% Ours: 4.74% (> 2px) MfS: 14.41% Ours: 6.63% MfS: 24.45% Ours: 3.57% Figure 6. Qualitative Comparison on Midd-A H (top) and Midd-21 (bottom) Datasets. 
From left to right: left images and disparity maps by RAFT-Stereo models, respectively trained with MfS or NS. Under each disparity map, the percentage of pixels with error > 2. training effectiveness. Specifically, we render our dataset using a single, large baseline of 0.5 units (21,716 triplets), as well as adding more images obtained with medium and small baselines of 0.3 and 0.1 units. We can observe that using only the large one yields the best results on KITTI, while rendering additional images with the medium baseline results in improvements on Midd-A only. Utilizing all three baselines leads to the best results on Midd-A and Midd-21, with a moderate drop on KITTI. We ascribe this to the disparity distributions generated by using the three baselines, that covers the full range defined by the combined validation sets, as shown in Fig. 5. Impact of Image Resolution. We evaluate the impact of image resolution on the training process. Purposely, we render images at approximately 2 and 0.5Mpx out of the original 8Mpx images \u2013 this because, in terms of computational burden, existing stereo networks can rarely deal with them, despite our pipeline would perfectly allow for this. As shown in Tab. 3, the best results are usually obtained by rendering 0.5Mpx images, except when testing at full resolution on Midd-A, for which rendering both higher and lower resolution images provides benefits. Impact of Scenes. Finally, we show the impact of a larger number of collected scenes on the training process. Tab. 4 highlights how the accuracy on the most challenging datasets \u2013 i.e., Middlebury \u2013 increases with it, unsurprisingly. This represents a key strength of our work, enabling anyone to generate their own extensive and scalable training data collections for stereo, resulting in better and better results, thanks to its ease of implementation. 4.4. Comparison with MfS To further evaluate the quality of our rendered data, we compare our approach with MfS [79] \u2013 the most recent method for generating stereo pairs from single images \u2013 by training three different stereo networks. Tab. 5 collects the outcome of this experiment using PSMNet [10], i.e. the baseline model used in [79], as well as with CFNet [63] and RAFT-Stereo [35]. Each model is trained on the dataset proposed in [79] (A,D,G), as well as on stereo pairs generated with their technique on our data (B,E,H) or by means of our NS paradigm (C,F,I). We point out that in the first two cases, 2 million labeled images were used to train MidAS [53], which is the key component of their pipeline. In contrast, our approach does not require any additional data. First, we note that using the MfS generation method on our data consistently leads to inferior results compared to using theirs \u2013 i.e. (B) vs (A), (E) vs (D), (H) vs (G). This is unsurprising, considering that our images were collected from only 270 scenes, while the original dataset used by MfS includes half a million images from COCO, ADE20K, DIODE, Mapillary, and DiW, which provide a much wider range of scenes and contexts. This excludes the fact that the superior results achieved by NS are a consequence of the quality of collected images solely. Eventually, any network trained with the NS supervision granted by our data, always outperforms its two counterparts by a large margin \u2013 except with CFNet on KITTI, where we register a 0.17% drop. 
This proves that our paradigm is effective with different stereo architectures and consistently outperforms MfS without the need for a large training dataset. Finally, Fig. 6 shows a comparison between RAFT(A) (B) (C) (D) KITTI-15 Midd-T ETH3D Method (> 3px) F (> 2px) H (> 2px) Q (> 2px) (> 1px) All Noc All Noc All Noc All Noc All Noc Training Set SceneFlow with GT GANet [85] 10.46 10.15 45.36 40.80 26.75 21.8 15.52 11.49 8.68 7.75 DSMNet [86] 5.50 5.19 29.95 24.79 16.88 12.03 13.75 9.44 12.52 11.62 CFNet [63] 6.01 5.94 29.12 24.15 20.11 15.84 13.77 10.32 5.77 5.32 MS-GCNet [8] * 6.21 18.52 8.84 RAFT-Stereo [35] * 5.74 18.33 12.59 9.36 3.28 RAFT-Stereo [35] \u2021 5.45 5.21 18.20 14.19 11.19 8.09 9.31 6.56 2.59 2.24 SGM + NDR [2] 5.41 5.12 27.27 21.09 17.70 13.51 11.75 7.93 5.20 4.78 STTR [31] 8.31 6.73 38.10 30.74 26.39 18.17 15.91 8.51 20.49 19.06 CEST [25] 7.61 6.13 27.44 19.53 19.89 11.82 14.71 7.56 10.99 9.78 FC-GANet [87] * 5.3x 10.2x 7.8x 5.8x ITSA-GWCNet [16] 5.60 5.39 29.46 25.18 19.38 15.95 14.36 10.76 7.43 7.12 ITSA-CFNet [16] 4.96 4.76 26.38 21.41 18.01 14.00 13.32 9.73 5.40 5.14 CREStereo [30] \u2021 5.79 5.40 34.78 30.52 17.57 13.87 12.88 8.85 8.98 8.14 PSMNet [10] \u2021 7.86 7.40 33.69 28.35 21.69 16.92 17.24 12.37 23.19 22.12 MS-PSMNet [8] * 7.76 19.81 16.84 Graft-PSMNet [36] 5.34 5.02 25.46 19.28 17.81 13.46 14.18 9.21 11.42 10.69 FC-PSMNet [87] * 5.8x 15.1x 9.3x 9.5x ITSA-PSMNet [16] 6.00 5.73 32.09 27.46 20.83 17.14 14.68 11.05 10.34 9.77 Training Set Real-world data without GT Reversing [3]-PSMNet [4.09] [3.88] 38.23 30.00 26.45 20.91 20.55 15.08 9.00 8.23 MfS [79]-PSMNet 5.18 4.91 26.42 21.38 17.56 13.45 12.07 9.09 8.17 7.44 NS-PSMNet (Ours) 5.05 4.80 20.60 15.83 12.91 9.07 11.03 7.15 11.69 11.00 NS-RAFT-Stereo (Ours) 5.41 5.23 16.45 12.08 9.67 6.42 8.05 4.82 2.94 2.23 Table 6. Zero-Shot Generalization Benchmark. We test all models using authors\u2019 weights. Exceptions: \u2217numbers from the original paper; \u2021 retrained model. Best results per macro-block in bold. We also highlight first , second and third absolute bests. For RAFT-Stereo (in dashed lines) only the best between \u2217and \u2021 is kept for rankings. [ ] means trained on the same domain, thus ignored for rankings. Stereo trained with MfS on the dataset adopted in [79], and its NS counterpart. The results showcase the much more detailed predictions of the latter, especially in thin structures, which is of unprecedented quality for methods not trained on ground-truth. More are reported in the supplement. 4.5. Zero-Shot Generalization Benchmark We conclude by evaluating stereo networks trained in an NS manner for zero-shot generalization. Table 6 collects the comparison with several state-of-the-art methods on the benchmark common to the latest works [16,36,87]. It is worth mentioning that different papers have often evaluated with different protocols1 on the Middlebury dataset: some compute the metrics only for Noc regions [16, 36], some limit the set of valid pixels to those having ground-truth disparity lower than 192 [16, 36], and others compute a weighted average over the dataset, setting challenging images to 0.5 as indicated on the Middlebury website [3, 16]. The protocol itself is often not reported in the papers, leading to the accumulation of several inconsistencies throughout the literature and, possibly, drawing biased conclusions. 
To address this, we re-evaluated any method with available code and weights, both over All / Noc pixels and considering the entire disparity range, to establish a common protocol from now on. For a few methods whose weights are no longer available, we either took numbers from the original paper (\u2217), although they may not be entirely comparable with the others, or retrained them (\u2021). We defined four main groups of methods: (A) existing stereo models, excluding (B) PSMNet variants, both 1We reached this verdict by checking the authors\u2019 code and, when not available, through private communications with authors themselves. of which were trained on synthetic data with ground-truth; (C) PSMNet models trained without ground-truth; and (D) our best model. (B) and (C) allow for comparison of several methods pursuing generalization while using a common backbone. We can see that our NS-PSMNet outperforms all PSMNet variants, except on ETH3D (it is worth noting that Reversing-PSMNet [3] was trained on raw KITTI [23], which gives it a significant advantage). Among the methods in group (A), only RAFT-Stereo outperforms NS-PSMNet on Middlebury, but performs worse on KITTI, where ITSA-CFNet is the best method among all. This suggests that RAFT-Stereo already has strong generalization capability. Combining RAFT-Stereo with NS (group D) consistently produces the best results across the entire Middlebury dataset, and results that are equivalent to RAFT-Stereo trained on synthetic ground-truth on ETH3D, all without requiring any ground-truth data. This results in a small drop in accuracy on KITTI, that is negligible in exchange for the improvement on Middlebury (often 30-40%). 5. Conclusion We have presented a pioneering pipeline that leverages NeRF to train deep stereo networks without the requirement of ground-truth depth or stereo cameras. By capturing images with a single low-cost handheld camera, we generate thousands of stereo pairs for training through our NS paradigm. This approach results in state-of-the-art zeroshot generalization, surpassing both self-supervised and supervised methods. Our work represents a significant advancement towards data democratization, putting the key to the success into the users\u2019 hands. Limitations. Samples collected so far are limited to small-scale, static scenes. Moreover, our NS \u2013 and any \u2013 stereo networks still fail in some challenging conditions, e.g. transparent surfaces [84] or nighttime images [11,21]. A larger-scale collection campaign, coupled with other NeRFs variants [43,64], may deal with them in the future. Future Research. Our NS pipeline can possibly be extended to generate labels for other dense, low-level tasks such as optical flow (similarly to [1]) or multi-view stereo.", "introduction": "Depth from stereo is one of the longest-standing research fields in computer vision [38]. It involves finding pixels correspondences across two rectified images to obtain the disparity \u2013 i.e., their difference in terms of horizontal coor- dinates \u2013 and then use it to triangulate depth. After years of studies with hand-crafted algorithms [56], deep learning radically changed the way of approaching the problem [74]. End-to-end deep networks [50] rapidly became the domi- nant solution for stereo, delivering outstanding results on benchmarks [42,55,58] given sufficient training data. This latter requirement is the key factor for their suc- cess, but it is also one of the greatest limitations. 
Annotated data is hard to source when dealing with depth estimation since additional sensors are required (e.g., LiDARs), and thus represents a thick entry barrier to the field. Over the years, two main trends have allowed to soften this problem: self-supervised learning paradigms [3, 23, 76] and the use of synthetic data [30, 41, 75]. Despite these advances, both approaches still have weaknesses to address. Self-supervised learning: despite the possibility to train on any unlabeled stereo pair collected by any user \u2013 and potentially opening to data democratization \u2013 the use of self-supervised losses is ineffective at dealing with ill-posed stereo settings (e.g. occlusions, non-Lambertian surfaces, etc.). Albeit recent approaches soften the occlusions prob- lem [3], predictions are far from being as sharp, detailed and accurate as those obtained through supervised training. Moreover, the self-supervised stereo literature [13, 29, 76] often focuses on well-defined domains (i.e., KITTI) and rarely exposes domain generalization capabilities [3]. Synthetic data: although training on densely annotated synthetic images can guide the networks towards sharp, de- tailed and accurate predictions, the domain-shift that occurs when testing on real data dampens the full potential of the trained model. A large body of recent literature addressing zero-shot generalization [2, 16, 30, 35, 36, 87] proves how relevant the problem is. However, obtaining stereo pairs as realistic as possible requires significant effort, despite syn- arXiv:2303.17603v1 [cs.CV] 30 Mar 2023 thetic depth labels being easily sourced through a graphics rendering pipeline. Indeed, modelling high-quality assets is crucial for mitigating the domain shift and requires ex- cellent graphics skills. While artists make various assets available, these are seldom open source and necessitate ad- ditional human labor to be organized into plausible scenes. In short, in a world where data is the new gold, obtain- ing flexible and scalable training samples to unleash the full potential of deep stereo networks still remains an open prob- lem. In this paper, we propose a novel paradigm to ad- dress this challenge. Given the recent advances in neural rendering [44, 45], we exploit them as data factories: we collect sparse sets of images in-the-wild with a standard, single handheld camera. After that, we train a Neural Ra- diance Field (NeRF) model for each sequence and use it to render arbitrary, novel views of the same scene. Specifi- cally, we synthesize stereo pairs from arbitrary viewpoints by rendering a reference view corresponding to the real ac- quired image, and a target one on the right of it, displaced by means of a virtual arbitrary baseline. This allows us to generate countless samples to train any stereo network in a self-supervised manner by leveraging popular photomet- ric losses [23]. However, this na\u00a8 \u0131ve approach would inherit the limitations of self-supervised methods [13,29,76] at oc- clusions, which can be effectively addressed by rendering a third view for each pair, placed on the left of the source view specularly to the other target image. This allows to compensate for the missing supervision at occluded regions. Moreover, proxy-supervision in the form of rendered depth by NeRF completes our NeRF-Supervised training regime. 
With it, we can train deep stereo networks by conducting a low-effort collection campaign, and yet obtain state-of-the- art results without requiring any ground-truth label \u2013 or not even a real stereo camera! \u2013 as shown on top of Fig. 1. We believe that our approach is a significant step towards democratizing training data. In fact, we will demonstrate how the efforts of just the four authors were enough to col- lect sufficient data (roughly 270 scenes) to allow our NeRF- Supervised stereo networks to outperform models trained on synthetic datasets, such as [2, 16, 30, 35, 36, 87], as well as existing self-supervised methods [3,79] in terms of zero- shot generalization, as depicted at the bottom of Fig. 1. We summarize our main contributions as: \u2022 A novel paradigm for collecting and generating stereo training data using neural rendering and a collection of user-collected image sequences. \u2022 A NeRF-Supervised training protocol that combines rendered image triplets and depth maps to address oc- clusions and enhance fine details. \u2022 State-of-the art, zero-shot generalization results on challenging stereo datasets [55], without exploiting any ground-truth or real stereo pair." }, { "url": "http://arxiv.org/abs/2206.07047v1", "title": "RGB-Multispectral Matching: Dataset, Learning Methodology, Evaluation", "abstract": "We address the problem of registering synchronized color (RGB) and\nmulti-spectral (MS) images featuring very different resolution by solving\nstereo matching correspondences. Purposely, we introduce a novel RGB-MS dataset\nframing 13 different scenes in indoor environments and providing a total of 34\nimage pairs annotated with semi-dense, high-resolution ground-truth labels in\nthe form of disparity maps. To tackle the task, we propose a deep learning\narchitecture trained in a self-supervised manner by exploiting a further RGB\ncamera, required only during training data acquisition. In this setup, we can\nconveniently learn cross-modal matching in the absence of ground-truth labels\nby distilling knowledge from an easier RGB-RGB matching task based on a\ncollection of about 11K unlabeled image triplets. Experiments show that the\nproposed pipeline sets a good performance bar (1.16 pixels average registration\nerror) for future research on this novel, challenging task.", "authors": "Fabio Tosi, Pierluigi Zama Ramirez, Matteo Poggi, Samuele Salti, Stefano Mattoccia, Luigi Di Stefano", "published": "2022-06-14", "updated": "2022-06-14", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Deep Stereo. First attempts to learn in stereo matching focused on the design of robust matching functions between image patches, implemented with shallow CNNs [13, 45, 75]. Then, along the established pipeline [56] other sub-tasks were cast in the form of a neural network, such as optimization [57, 58] and disparity refinement [1, 5, 19, 23, 33]. However, the development of endto-end architectures [46] represented a real turning point in the field, with more and more works focusing nowadays on the design of new architectures rather than on classical algorithms. According to [53], two main families of deep stereo networks exist, respectively 2D [32, 42, 44, 46, 49, 61, 72] and 3D architectures [9,12,17,34,60,71,76,77]. Self/Proxy-Supervised Stereo. In parallel to the spread of end-to-end stereo, some first strategies were developed to train such models without ground-truth labels [63,80]. 
Several works followed, exploiting image reprojection across the two views as a form of supervision [40, 65, 70] or, in alternative, the guidance given by traditional stereo algorithms [2, 52, 63, 64] was used to distill proxy labels to supervise the stereo network. Cross-spectral Matching. Finding correspondences between images sensing different parts of the wavelength spectrum represents an additional challenge. This task has been investigated in different settings, in most cases by matching RGB-IR [14, 47], RGB-thermal [50] and RGBNIR [36, 37, 59, 78] modalities, as well as between stereo images with very different radiometric variations [28, 29]. Traditionally, most approaches aimed at designing handcrafted functions [14, 27, 28, 59] or descriptors [29, 36, 50]. More modern approaches deployed deep-learning to learn a deep correlation function [37] starting from a self-similarity measure while Zhi et al. [78] designed an end-to-end network trained in semi-supervised manner on RGB-NIR, by explicitly leveraging knowledge about materials after manual annotation of the training set. Only a few recent works [10, 18, 73] on remote sensing focus on affine trasformations to tackle the RGB-MS registration task. However, in contrast to our method, they do not explicitly reason on searching dense correspondences between the two input signals. Among published papers, [78] is the most relevant to ours, as it proposes an end-to-end model trained without ground-truth, although in the RGBNIR setup. Conversely to [78], our novel proxy-supervised training paradigm does not require any manual annotation, conveniently exploiting an additional RGB camera during training data acquisition. Moreover, we propose a novel dataset with semi-dense ground-truth disparity maps, counting a total of more than 125M annotated pixels, whereas in [78] were limited to a few, sparse pixels, i.e. 5K. 3. RGB-MS Dataset In this section, we present our novel RGB-MS dataset. We start by describing the acquisition setup and then we dig into the procedure to annotate the RGB-MS pairs with semi-dense ground-truth labels. 3.1. Camera Setup Our acquisition setup consists of two synchronized Ximea cameras: an RGB camera equipped with a Sony IMX253LQR-C 12.4 Mpx sensor and an MS camera based on a IM-SM4X4-VIS2 2.2 Mpx sensor with 10 bands. Both cameras are mounted on a common support with a distance of about 4cm between their optical centers. The two cameras are calibrated, so as to form a stereo system where the RGB device (left) is used as the reference camera. This is achieved by the OpenCV tools for camera calibration [7], after having acquired a set of images with both cameras framing a planar chessboard. Then, corner detection is performed on both images after conversion to grayscale. As for the MS camera, we found it sufficient to average the 10 bands pixel-wise in order to obtain a pseudo-grayscale image in which corners are easily detectable. Thus, we estimate intrinsic and lens distortion parameters for both cameras and then, after undistortion, calibrate the RGB-MS stereo camera system and estimate the transformations to rectify the RGB and MS frames. Given the dramatic difference between the two images in terms of resolution, we cannot rely on the standard procedure to estimate the rectification parameters, since it would lead to rectified images resized to a resolution in between the two. 
Thus, we define the concept of unbalanced rectification, in which we consider two images as unbalanced rectified if, after resizing them to the same resolution, they are rectified. Additional details are provided in the supplementary material.

3.2. Annotation Pipeline
We now introduce our annotation pipeline, aimed at enriching our RGB-MS dataset with semi-dense ground-truth disparity maps. A variety of active depth sensors would fit this purpose, although, on the one hand, they would significantly limit the resolution at which our ground-truth could be collected and, on the other, they would introduce the need to carry out a registration between the RGB camera and the active sensor itself, a non-trivial task. To overcome these issues, we leverage a space-time stereo pipeline [16] by adding a second RGB sensor to our camera setup. Specifically, we mount this additional device on the right of the MS sensor, i.e. with a baseline of about 8cm with respect to the other RGB camera, thus implementing a 12.2 Mpx RGB stereo setup. This configuration allows us to fully exploit the high resolution of the RGB cameras to provide much more accurate ground-truth labels compared to the deployment of any ancillary active sensor (e.g. LiDAR or ToF), while removing the need for complex registration across sensors. Moreover, the larger baseline increases depth resolution and produces finer-grained labels. Image Acquisition. To collect a single scene for which we aim at providing an RGB-MS pair together with ground-truth disparities, we use our trinocular camera setup to acquire a set of passive RGB-MS images, i.e. the static scene is acquired in the absence of any external perturbation. For each scene, we make several acquisitions with different illuminations. These frames represent the actual images that will be distributed with our dataset. Then, we use a set of portable projectors to perturb the sensed environment with random black-and-white banded patterns generated through a non-recurring De Bruijn sequence [43], as shown in Fig. 2a-b. We acquire several dozens of active images for each scene to be processed by the next step in the pipeline. Space-Time Semi-Global Matching. We implement a robust space-time stereo pipeline [16] to process the active stereo pairs acquired on the same static scene and obtain a highly accurate disparity map (Fig. 2d). Such a framework leverages the variety of patterns projected during each acquisition, greatly increasing the distinctiveness of any single pixel and thus easing the matching process. In particular, our algorithm implements four steps: i) Single-pair cost computation. For each active RGB-RGB pair t, we compute an initial Disparity Space Image (DSI), storing dmax matching costs for any pixel in the reference image at coordinates (x, y). Such costs are obtained by applying a Census transform [74] on 9 × 7 windows to both frames and then computing the Hamming distance between 63-bit strings (a compact sketch of this cost computation is given further below): \text{DSI}_t(x,y,d) = \sum_i \mathcal{C}_i^{L(t)}(x,y) \oplus \mathcal{C}_i^{R(t)}(x-d,y) (1) with C^{L(t)}, C^{R(t)} being the t-th left and right census transformed images and i any single bit in the 63-bit strings.
ii) Space-time integration and SGM optimization. Once the single DSI_t volumes have been initialized, we integrate them over time to obtain more robust matching costs, thanks to the variety of different patterns projected onto the scene: \text{DSI}(x,y,d) = \sum_t \text{DSI}_t(x,y,d) (2) Then, we further optimize the matching cost distributions by means of the Semi-Global Matching algorithm [30], before selecting the final disparity d̂(x, y) by means of a Winner-Takes-All strategy. iii) Outliers suppression. Although SGM dramatically regularizes the initial cost volume and leads to smooth disparity maps, several outliers are still present. To remove them, we apply a confidence-based approach [68] to filter out the least confident pixels. Given a pool of conventional confidence measures, we discard all pixels marked as not reliable according to each measure. To this aim, we select two binary measures from [51] – ACC, LRC – together with a Weighted Median Disparity Deviation over a 41×41 window, considering pixels as not confident if it is larger than 1. iv) Sub-pixel interpolation. Finally, we estimate sub-pixel disparities to improve depth resolution. Starting from the DSI output of SGM, an interpolation algorithm based on [48] is applied to pixels that have not been filtered out previously. This is traditionally achieved by fitting, for any pixel (x, y) in the image, a function between its minimum cost c_{\hat{d}} = \text{DSI}(x, y, \hat{d}) and its disparity neighbours c_{\hat{d}-1}, c_{\hat{d}+1}. The interpolation function can be assumed as monotonically increasing in [0, 1], then defining the sub-pixel precise disparity d̂_sub as \hat{d}_{\text{sub}} = \begin{cases} \hat{d} - 0.5 + \frac{c_{\hat{d}-1} - c_{\hat{d}}}{c_{\hat{d}+1} - c_{\hat{d}}} & \text{if } c_{\hat{d}-1} > c_{\hat{d}+1} \\ \hat{d} - 0.5 + \frac{c_{\hat{d}+1} - c_{\hat{d}}}{c_{\hat{d}-1} - c_{\hat{d}}} & \text{otherwise} \end{cases} (3) Following [48], we fit a low-order polynomial function, i.e. a parabola ax^2 + bx + c, setting a = 1, b = 1 and c = 0.

Figure 2. Ground-truth Acquisition. Given a set of active RGB-RGB stereo pairs (a,b), we compute the ground-truth disparity (d) aligned with the left image of the RGB-RGB stereo system (c) with our space-time stereo algorithm. Then, we warp it (f) to be aligned with the left image of the RGB-MS stereo system (e).

Manual Cleaning on Point Clouds. Despite the aforementioned filtering strategies, the disparity maps obtained so far may still present some outliers. Thus, we manually clean the ground-truth maps of any noisy label by projecting them into 3D point clouds and selecting all points resulting isolated from the main structures in the scene. For pixels corresponding to points that have been filtered out in the point cloud, we remove them from the ground-truth maps.
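As a concrete reference for step i) above, the following NumPy sketch computes the census-based DSI of Eq. (1) for a single active pair; single-channel inputs, the illustrative dmax value and the function names are our assumptions rather than the authors' code.

import numpy as np

def census_transform(img, wh=7, ww=9):
    # 9x7 census window: compare every pixel of the window against its center,
    # producing a 63-bit string per pixel (the center bit is trivially 0).
    H, W = img.shape
    ph, pw = wh // 2, ww // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    planes = [padded[dy:dy + H, dx:dx + W] > img
              for dy in range(wh) for dx in range(ww)]
    return np.stack(planes, axis=-1)                 # (H, W, 63) boolean bits

def dsi_single_pair(left, right, dmax=192):
    # Eq. (1): Hamming distance between census strings of the reference (left)
    # image and the target (right) image shifted by each disparity hypothesis.
    cl, cr = census_transform(left), census_transform(right)
    H, W, B = cl.shape
    dsi = np.full((H, W, dmax), B, dtype=np.int32)   # init with the worst cost
    for d in range(dmax):
        dsi[:, d:, d] = np.count_nonzero(cl[:, d:] != cr[:, :W - d], axis=-1)
    return dsi

Summing the DSIs of all active pairs then realizes the space-time integration of Eq. (2); after SGM optimization, np.argmin over the disparity axis implements the Winner-Takes-All selection that is subsequently filtered and refined at sub-pixel precision.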
Disparity Warping. The ground-truth disparity maps produced so far are aligned to the reference image of the RGB-RGB setup, i.e. the left image rectified for the RGB-RGB setup (Fig. 2c). The reference image for the RGB-MS setup is the same, but it is rectified differently due to the different mechanical alignment and the smaller field of view of the MS camera (see supplementary for more details). Hence, we use the estimated rectification homographies to perform backward warping of the disparity map and align it with the RGB image rectified for the RGB-MS setup (Fig. 2e), while considering the rotation of the camera reference frame and the different baselines in the RGB-MS and RGB-RGB setups, thus producing the final ground-truth disparity maps for the RGB-MS pair (Fig. 2f).

4. Cross-Spectral Matching
Fig. 3 illustrates our deep architecture, specifically designed to address the RGB-MS matching problem. Given a rectified RGB-MS pair, we extract deep features from the high-resolution RGB image and compute the matching cost volume between the MS image and the RGB image resized to the same spatial resolution. Then, the extracted features and the cost volume are used to estimate a continuous dense correspondence field, which can be used to synthesize a high-resolution multi-spectral image aligned with the RGB one. In this section, we describe in detail the network architecture, the loss function and the training protocol.

Figure 3. Architecture Overview. Given an unbalanced stereo pair composed of a reference high-resolution image L and a target multi-spectral low-resolution image R, our network estimates a disparity map aligned with L by combining cross-spectral cost probabilities computed by a stereo backbone Ψθ and deep features from L obtained by the feature extractor Φθ.

4.1. Problem Statement
Let L ∈ R^{W_L×H_L×3} denote the RGB image acquired by the high-resolution camera and R ∈ R^{W_R×H_R×N} (N = 10 in our experiments) the image captured by the low-resolution MS sensor after unbalanced rectification. As highlighted in Fig. 1, the two cameras feature very different properties in terms of resolution and number of channels. Our goal is to estimate a disparity map D aligned with the reference image L at an arbitrary spatial resolution.

4.2. Deep Cross-Spectral Network
Our proposed architecture is composed of 1) a coarse sub-module responsible for integrating geometric information between the RGB image and the MS signal at low resolution and 2) a fine-grained sub-module aimed at recovering details by leveraging the original high-resolution RGB image.
By doing so, the two images can be considered rectified based on the definition of unbalanced rectification [1]. However, differently from the standard RGB-RGB setting where the stereo pair is processed by means of siamese towers, we modify \u03a8\u03b8 to compute deep features from L\u2193and R using two feature extractors \u2013 that are independent, because of the different information sensed in the two images and encoded in a different number of input channels (L\u2193:3, R: N). Once highdimensional features have been extracted from the two images, we keep unchanged the remaining part of the stereo backbone \u03a8\u03b8 in charge of building the cross-spectral cost volume and regularizing the aggregated costs. Our intuition is that, by doing so, the network effectively learns to match deep features computed from two completely different signals at low-resolution and, thus, to recover the final 3D structure of the scene. Yet, \u03a8\u03b8 does not make full use of the available high resolution RGB image, where important high-frequency details may be extracted. Thus, we deploy also image features learned from the high-resolution image by means of a fully convolutional module that has the same architecture of the low resolution feature extractor adopted in \u03a8\u03b8 but does not share weights with it and defined as: \\ Phi _\\th e ta : \\mathbb {R}^{W_L \\times H_L \\times 3} \\rightarrow \\mathbb {R}^{W_{\\Phi } \\times H_{\\Phi } \\times F_{\\Phi }} (5) where F\u03a6 represents the dimension of extracted features. Following recent advances in continuous function representations [1,67], we adopt a Multilayer Perceptron (MLP) to compute the final disparity map d \u2208R at arbitrary spatial location xL \u2208R2 where the input features for the MLP are obtained by concatenating the bilinear interpolated features f\u03a8 and f\u03a6 computed on the output of \u03a8\u03b8 and \u03a6\u03b8, respectively. We train our model by adopting the recent loss function proposed in [1] that allows estimating accurate disparity maps as well as sharp depth boundaries. More specifically, we use two MLPs: MLPC, in charge of computing a categorical distribution over disparity values [0, dmax] from which an initial integer disparity value is selected, and MLPR, which estimates a real-valued offset to recover subpixel information. The final disparity value predicted by our network can be expressed as: d = \\barf{d} + \\text {MLP}_R(f_{\\Psi }(\\mathbf {x}_{L_\\downarrow }), f_{\\Phi }(\\mathbf {x}_L), \\bar {d}) f (6) RGB MS Proxies (a) (b) (c) RGB 2\u25e6RGB Warped Proxies (d) (e) (f) Figure 4. Proxy Labels Distillation. A classical algorithm [30] struggles at dealing with RGB-MS images (a-c). Using a second RGB camera yields much more accurate proxy labels (d-f). where \u00af d is the integer disparity computed by MLPC, i.e. \\ bar {d} = \\textrm {argmax}(\\text {MLP}_C(f_{\\Psi }(\\mathbf {x}_{L_\\downarrow }), f_{\\Phi }(\\mathbf {x}_L))) f (7) where xL\u2193\u2208R2 is the 2D continuous location in L\u2193, while xL \u2208R2 is the corresponding location at high resolution, defined as xL = WL WL\u2193xL\u2193. 
Thus, by indicating the disparity labels as d*, our network can be trained by minimizing the following loss function: \mathcal{L} = -\mathcal{N}(d^*,\sigma) \cdot \log\big(\text{MLP}_C(f_\Psi(\mathbf{x}_{L_\downarrow}), f_\Phi(\mathbf{x}_L))\big) + \big|\text{MLP}_R(f_\Psi(\mathbf{x}_{L_\downarrow}), f_\Phi(\mathbf{x}_L)) - d^*_R\big| (8) where N(d*, σ) represents the Gaussian distribution centered at d* and sampled at integer values in [0, dmax], while d*_R = d* − d̄.

4.3. Proxy Label Distillation
We rely on proxy label distillation [2,52,63,66] to obtain a strong and reliable source of supervision for our architecture from the unlabeled images. Thus, following [66], we adopt the popular Semi-Global Matching (SGM) algorithm to obtain proxy-labels from single-channel frames, encoding the average of the image channels for the multi-spectral images and the converted grayscale for the RGB images. However, since SGM aims at computing matches between the two images, in our RGB-MS setup it is intrinsically hampered by the different modalities of the two, as shown in Fig. 4 (top). This leads to less accurate proxy-labels and, thus, to a less effective weakly-supervised training. Thus, to source a reliable supervision also on the challenging RGB-MS setup, we propose to enhance our proxy distillation pipeline according to a novel strategy. Specifically, we have already discussed that a second high-resolution RGB image allows us to obtain accurate semi-dense ground-truth maps by means of the space-time stereo framework. We argue that, in a similar manner, we can deploy the second RGB camera during unlabeled data acquisition as well to obtain better proxy supervision. This allows for complementing any RGB-MS pair with a corresponding RGB-RGB pair over which classical stereo algorithms can produce much more accurate proxy labels to be deployed at training time. Hence, we exploit the very same procedure described before to warp the proxy-labels from the rectified RGB-RGB left image to the rectified RGB-MS left image. This second strategy, illustrated in Fig. 4 (bottom), allows us to supervise our network and teach it how to match RGB-MS modalities supervised by established knowledge concerning the matching between RGB-RGB images, at the cost of requiring an additional RGB camera during the offline self-training. This kind of strategy has already been exploited to pursue training without ground-truth depth labels, e.g. in the case of monocular depth estimation by means of image reconstruction losses [24,25] or to distill proxy labels [66]. Despite representing an additional requirement for the acquisition setup – yet cheap and effortless compared to any annotation pipeline, such as the one used to collect our ground-truth data – we will show in our experiments how this strategy, when such an additional sensor is available, leads to a more effective training of our RGB-MS network and, thus, to more accurate disparity estimations.

5. Experimental Results
5.1. Additional Datasets
UnrealStereo4K: The UnrealStereo4K dataset [67] is a synthetic RGB-RGB stereo dataset containing around 8K high-resolution (3840 × 2160) stereo pairs with dense ground-truth disparities. Although it does not include multi-spectral images, we employ UnrealStereo4K as an additional source of training data to improve the overall final results yielded by our cross-modal networks.
In particular, as it is a standard RGB-RGB dataset, we simulate an RGB-MS setup by simply converting the right image of the stereo pair from RGB to grayscale and then stacking it to form a volume of 10 channels. Moreover, we resize the input images fed to Ψ using a downsampling factor of 6, to mimic the unbalance factor featured by our real RGB-MS setup.

5.2. Implementation Details
Our proposed architecture is implemented in PyTorch. We adopt Adam [38] as optimizer, with β1 = 0.9 and β2 = 0.999. In our implementation, we adopt different stereo backbones for our Ψθ submodule. In particular, we conduct experiments on two different 3D architectures, PSM [12] and GWC [26], showing that our proposal can be effectively combined with diverse stereo networks. For both selected backbones, we use the official code provided by the authors and modify the input channels of the feature extractor associated with the right image to match the number of bands of our multi-spectral camera (i.e., 10). For MLP_C and MLP_R, instead, we follow the implementation of [1]. We train each network for 70 epochs on a single NVIDIA 3090 GPU with a learning rate of 10^-4. As RGB-MS training set we use the whole set of 11K unlabeled images, computing SGM proxy labels on each stereo pair. We discard proxy disparity maps with less than 70% of valid pixels to avoid too sparse supervision, obtaining approximately 9K valid training images. When training also on UnrealStereo4K, we first pre-train for 30 epochs solely on synthetic data and then train for another 70 epochs on both real and synthetic data. We empirically noticed that this training procedure produces sharper final disparity maps compared to training on RGB-MS pairs only. During training we use a batch size of 2 and random crops of size 1152 × 1152 from L, selecting the corresponding crop of size 192 × 192 on the multi-spectral image R. Moreover, we randomly sample P = 30000 points from each crop at continuous spatial locations in the image domain, to pursue training based on our continuous formulation. Additional training details are reported in the supplementary material.

Table 1. Disparity/Depth Errors. Quantitative results on the proposed RGB-MS dataset; error metrics for the disparity/depth predictions (best results highlighted in bold in the original paper).
Method | Input | 2°RGB | Synth | D-AEPE | ADE (m)
SGM | Single-Channel | - | - | 14.66 | 0.215
Zhi et al. [78] (photo) | Single-Channel | - | - | 89.86 | 1.010
Zhi et al. [78] | Single-Channel | ✓ | - | 19.45 | 0.116
PSM | All-Channels | - | - | 14.51 | 0.103
PSM | All-Channels | ✓ | - | 10.04 | 0.084
PSM + MLPs | All-Channels | ✓ | - | 19.12 | 0.182
PSM + MLPs + Φ | All-Channels | ✓ | - | 8.36 | 0.065
PSM + MLPs + Φ | Single-Channel | ✓ | - | 24.13 | 0.737
PSM + MLPs + Φ | All-Channels | ✓ | ✓ | 7.31 | 0.057
GWC + MLPs + Φ | All-Channels | ✓ | ✓ | 8.67 | 0.100

5.3. Evaluation Metrics
To evaluate our RGB-MS model, we use several metrics for optical flow, disparity and depth estimation, computing all metrics at high resolution. In the case of the optical flow metrics, we consider the registration error from the RGB to the MS camera, which is indeed the most relevant performance measure in our scenario, since the latter in essence deals with enriching the RGB image with MS information taken from the available lower-resolution sensor.
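Returning briefly to the UnrealStereo4K-based training described above, the following is a minimal sketch of how an RGB-MS pair could be simulated from an RGB-RGB one, under the stated grayscale-stacking and factor-6 downsampling scheme; the function name and the (B, C, H, W) tensor layout are assumptions.

```python
import torch
import torch.nn.functional as F

def simulate_rgb_ms_pair(left_rgb, right_rgb, n_bands=10, unbalance=6):
    """Simulate the RGB-MS setup on an RGB-RGB dataset: the right image is
    converted to grayscale and replicated into n_bands channels, while the
    left image is downsampled by the unbalance factor before being fed to
    the stereo backbone Psi. Tensors are assumed to be (B, C, H, W) in [0, 1]."""
    # grayscale conversion of the right view (ITU-R BT.601 weights)
    weights = torch.tensor([0.299, 0.587, 0.114], device=right_rgb.device).view(1, 3, 1, 1)
    gray = (right_rgb * weights).sum(dim=1, keepdim=True)
    pseudo_ms = gray.repeat(1, n_bands, 1, 1)          # stack into a 10-channel volume
    # low-resolution left view mimicking the real unbalanced setup
    left_lowres = F.interpolate(left_rgb, scale_factor=1.0 / unbalance,
                                mode='bilinear', align_corners=False)
    return left_lowres, pseudo_ms
```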
Hence, the metrics used to evaluate the performance of our models are:
1) Flow Average End-Point Error (F-AEPE), defined as the norm of the difference between the ground-truth and predicted flow vectors, expressed in pixels;
2) Disparity Average End-Point Error (D-AEPE), measured in pixels between the predicted disparity map and the ground-truth disparity map, averaged across all pixels;
3) Absolute Depth Error (ADE), measured in meters (m) between the predicted depth map and the ground-truth depth map;
4) Percentage of bad pixels (bad-τ), where τ is a tolerance on the optical flow error below which an estimated flow is accepted as correct.

Table 2. Flow Errors. Quantitative results on the proposed RGB-MS dataset; error metrics for the flow maps employed during registration (best results highlighted in bold in the original paper).
Method | Input | 2°RGB | Synth | F-AEPE | bad1 | bad2 | bad3
SGM | Single-Channel | - | - | 2.32 | 52.40 | 25.73 | 14.28
Zhi et al. [78] (photo) | Single-Channel | - | - | 14.22 | 85.05 | 73.87 | 66.42
Zhi et al. [78] | Single-Channel | ✓ | - | 3.08 | 70.51 | 47.81 | 32.65
PSM | All-Channels | - | - | 2.30 | 44.49 | 21.42 | 13.07
PSM | All-Channels | ✓ | - | 1.59 | 50.90 | 21.41 | 11.20
PSM + MLPs | All-Channels | ✓ | - | 3.03 | 67.42 | 36.86 | 21.20
PSM + MLPs + Φ | All-Channels | ✓ | - | 1.32 | 34.89 | 13.32 | 7.05
PSM + MLPs + Φ | Single-Channel | ✓ | - | 3.82 | 73.20 | 43.08 | 22.20
PSM + MLPs + Φ | All-Channels | ✓ | ✓ | 1.16 | 32.15 | 11.54 | 6.39
GWC + MLPs + Φ | All-Channels | ✓ | ✓ | 1.37 | 37.91 | 15.12 | 7.64

5.4. Results
We report experimental results and ablation studies dealing with disparity/depth and flow error metrics in Tab. 1 and Tab. 2, respectively. As a reference, in the first row of both tables we show the results yielded by SGM on the RGB-MS test set when both stereo frames are converted to grayscale, averaging all channels in the case of the MS image. Moreover, we compare against [78] by running their network on our RGB-MS stereo setup.
Ablation on Supervision and Architecture. In rows 4 to 7 of both tables, we ablate the contributions of the proposed supervision strategy and architectural components. First of all, comparing the basic configuration of PSM trained on SGM proxy labels obtained with or without the adoption of the second additional RGB camera (rows 4 vs 5), we note an improvement in almost all metrics, with a notable decrease of 4 pixels and almost 1 pixel in the D-AEPE and F-AEPE error metrics, respectively. These results support our idea that better supervision, obtained by two cameras of the same modality, can significantly improve the matching performance across different modalities. In the sixth row, we add MLP_C and MLP_R to the PSM backbone, so as to introduce our continuous formulation, potentially yielding much more accurate and sharp high-resolution disparity maps. However, this causes a substantial deterioration of all performance metrics. We ascribe this issue to the task of recovering an accurate high-resolution disparity map through our continuous formulation being too difficult in the absence of any source of high-resolution information. Indeed, in the following rows, we add the Φ high-resolution feature extractor (Fig. 3) to the architecture comprising the PSM backbone and the MLPs, achieving the best results in all metrics with a large margin over all previous configurations. We can appreciate the effect of each component in the first 4 columns of Fig. 5.

[Figure 5. Ablation Qualitative Results. Disparity maps for the stereo pair in Fig. 1 computed by: PSM (a), PSM with the proposed 2°RGB supervision (b), PSM with the continuous and sharp formulation leveraging the two MLPs (c), and the full architecture with the high-resolution feature extractor Φ, without (d) and with (e) synthetic supervision.]
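For reference, the registration metrics defined in Sec. 5.3 (F-AEPE, D-AEPE, ADE and bad-τ) can be summarized by a short sketch such as the one below, which assumes dense ground-truth tensors and uses illustrative function and key names.

```python
import torch

def registration_metrics(pred_flow, gt_flow, pred_disp, gt_disp,
                         pred_depth, gt_depth, taus=(1, 2, 3)):
    """Sketch of the metrics in Sec. 5.3, assuming dense ground truth.
    pred_flow/gt_flow: (2, H, W) flow from the RGB to the MS camera;
    pred_disp/gt_disp, pred_depth/gt_depth: (H, W)."""
    flow_err = torch.norm(pred_flow - gt_flow, dim=0)          # per-pixel flow end-point error
    metrics = {
        "F-AEPE": flow_err.mean().item(),                      # average flow end-point error (px)
        "D-AEPE": (pred_disp - gt_disp).abs().mean().item(),   # average disparity error (px)
        "ADE": (pred_depth - gt_depth).abs().mean().item(),    # absolute depth error (m)
    }
    for tau in taus:                                           # bad-tau: % of pixels with flow error > tau
        metrics[f"bad{tau}"] = 100.0 * (flow_err > tau).float().mean().item()
    return metrics
```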
For instance, we can perceive how the continuous formulation based on the MLPs (third column) can yield disparity maps exhibiting sharp edges, but only with the introduction of the encoder Φ (fourth column) are we able to correctly recover the high-resolution details.
Comparison with Existing Baselines. In the second and third rows, we report the results obtained by [78] after training it on our RGB-MS dataset, using either the original photometric loss between the RGB image and the MS image averaged across channels, or the proxy labels extracted using the second RGB image. We can observe how, in both tables, the photometric loss (row 2) is ineffective, as the original formulation relies on photometrically calibrated RGB-NIR images and a mapping from RGB to pseudo-NIR, while an equivalent RGB to pseudo-MS mapping does not exist according to our MS camera manufacturer. On the contrary, training it on proxy labels notably improves the results, although our network still represents the best solution for this task, since the architecture by [78] is not designed to handle our unbalanced setup.
Comparison with Alternative Input Modality. In row 8, we perform an additional test comparing our proposal to a baseline strategy for handling input modalities with a different number of channels, i.e., collapsing both the left and the right image into a single-channel representation. We achieve this by converting RGB images into grayscale and averaging the 10 channels of the MS images. Thus, we can use the original formulation of the PSM architecture, which relies on a classical shared feature extractor. From Tab. 1 and 2 we notice how the network struggles with this concise input representation, suggesting that it is better to rely on the information from all bands to learn a reliable matching.
Auxiliary Synthetic Data. In both tables, we also report the results obtained by training the best architecture (row 7) with additional synthetic data (Sec. 5.1 and Sec. 5.2). We note that we can further push its performance (row 9 vs row 7), achieving the best results in all metrics and an average flow estimation error (F-AEPE) as small as roughly 1 pixel (1.16), a remarkable result for such a challenging registration task dealing with images featuring diverse modalities and highly unbalanced resolutions. In the last column of Fig. 5 we can perceive qualitatively how leveraging auxiliary synthetic training data can further improve the predicted disparity maps, in particular in terms of sharpness of the finer details, thanks to the perfect ground-truth supervision in challenging regions such as depth discontinuities. Finally, in the last rows we also report the results obtained using a different backbone, GWC [26], which proves effective as well.

6. Conclusion, Limitations and Future Work
In this paper, we have explored RGB-MS image registration for the first time. Purposely, we have collected a dataset of 34 synchronized RGB-MS pairs annotated with accurate semi-dense ground-truth disparity maps. We have also proposed a deep network architecture to perform cross-modal matching between the two image modalities.
Moreover, to avoid the need for hard-to-source ground-truth labels, we have proposed a novel supervision strategy aimed at distilling knowledge from an easier RGB-RGB matching problem. This strategy has been realized by training our network on 11K unlabeled RGB-MS/RGB-RGB image pairs acquired as part of our dataset. Testing the trained model on the annotated RGB-MS pairs shows promising results, with an average registration error close to 1 pixel despite the dramatic diversity in image modality and resolution. Although our dataset is the first to address the RGB-MS matching problem, it has several limitations. In particular, although comparable in terms of number of samples to some popular stereo datasets [55], more images could be collected to make our benchmark more complete and challenging. Moreover, at the moment, ground truth is limited to indoor scenes only. Finally, images are acquired with only one specific 10-band MS camera. As future work, we plan to increase the size and variety of the dataset by acquiring more environments and employing diverse MS devices. Many other ideas should be explored to better tackle the novel and challenging registration task addressed in this paper. In this respect, we believe that our proposed architecture and training methodology set forth a strong baseline to steer further investigation towards more effective and more general solutions. Acknowledgements. We gratefully acknowledge the funding support of Huawei Technologies Oy (Finland).", "introduction": "Traditional RGB sensors acquire images with three channels, approximately mimicking color perception in trichromatic mammals like humans, where three kinds of cone cells are sensitive to three different ranges of wavelengths in the visible spectrum. This is usually achieved by using bandpass optical filters corresponding to such colors, arranged in a Bayer pattern [6]. Multi-spectral (MS) imaging devices generalize such image acquisition mechanics and enable acquisition of images with a larger number of channels, i.e. ten or more, usually corresponding to narrower wavelength ranges. MS sensors may also be sensitive to wavelengths outside the visible spectrum, e.g. in the infra-red or ultra-violet bands. By extending the range of wavelengths as well as the granularity of their quantization into image channels, MS devices enable extraction of additional information about the sensed scene that human eyes fail to capture, which in turn forms the basis of peculiar applications. For instance, MS imaging devices are used to perform analysis of artworks [15], remote sensing for agriculture [81] and land use [79], target tracking [21], pedestrian detection [31], counterfeit detection, e.g. of banknotes [3], diagnostic medicine and skin inspection [39], food inspection [54] and contamination detection [20]. In spite of such a broad range of applications, deployment of MS sensors is usually limited to industrial settings or expensive equipment like satellites. However, the ability to run several of these applications on mobile phones and other consumer devices directly operated by end users could open up interesting scenarios, like, e.g., enabling early diagnosis of skin diseases [35], democratizing food quality control [4], making plant phenotyping easier [69], and others yet to imagine.
While the number of traditional cameras on high-end phones has been growing steadily in recent years, MS sensors have not yet been ported to consumer devices like phones or action cameras due to several limitations [22]. In particular, MS sensor resolution is significantly smaller than that of standard RGB cameras, which feature several Megapixels of resolution, because the most suitable technology to realize them extends the Bayer pattern used for color imaging into multi-spectral filter arrays [41], with each native pixel of the imaging sensor detecting one band by placing in front of it the corresponding optical filter; i.e., one MS "pixel" for a camera sensing 16 bands uses a 4×4 grid of native pixels. Thus, MS sensors also tend to be larger and bulkier than RGB cameras, and they are orders of magnitude more expensive, i.e. MS cameras cost at least tens of thousands of dollars/euros. There exist linear MS cameras [62] or MS cameras realized with filter-wheel technology [8] which may feature high resolution, but they can sense only static scenes and are not appropriate for deployment on mobile devices. As a result of these technological limitations, MS cameras compatible with the cost and size requirements of mobile devices feature very small resolution, insufficient for the applications listed above. Moreover, besides up-sampling the MS image to usable resolutions, it is usually important to align MS information with the RGB streams coming from the traditional cameras on the device, where objects or areas of interest can be easily identified with effective algorithms. Due to the challenges of matching images across spectra, the most explored setup to acquire MS and RGB images simultaneously has been to physically align their optical centers by using beam splitters [11,31], which are however unfeasible to deploy in mobile devices, where instead the dense registration between the images in the pair must be computed by computer vision algorithms. Although establishing dense image correspondences is one of the fundamental and most studied problems in computer vision, solutions that address cross-spectral images and/or unbalanced resolutions are rare. The only investigated case is the special incarnation of the problem where an RGB image is matched to a Near Infra-Red (NIR) or Infra-Red (IR) one at the same resolution [14, 36, 37, 59, 78]. To the best of our knowledge, the general MS-RGB case, both balanced and unbalanced in terms of resolution, is yet unexplored in the literature. Research on this topic has also been hindered by the lack of publicly available datasets: existing methods tackling the NIR/IR-RGB case have been tested on datasets acquired with custom hardware setups targeting specific use cases, like autonomous driving or object detection, and never made publicly available [14, 78], or are tested on small datasets with sparse ground truth: e.g. the dataset used by [36, 37] and proposed in [59] has 7 image pairs where 50-100 object corners on average per pair have been manually annotated, i.e. fewer than 700 ground-truth correspondences. In this work, we propose the first large-scale and publicly available dataset to study RGB-MS registration. In particular, we cast the registration problem as a stereo matching one, due to the two cameras being synchronized.
Our dataset features more than 11K stereo pairs composed of a low-resolution 510×254 MS image and a high-resolution 3222×1605 RGB image, which can be used to compute a high-resolution MS image registered at pixel level with the RGB image. Examples of pairs are shown in Fig. 1a-b. Another key feature of our dataset is that 34 stereo pairs coming from 13 different scenes are densely annotated (Fig. 1c), thanks to an original acquisition methodology whereby a second RGB image and several projectors are used to create a very accurate active space-time stereo setup [16], which results in more than 125 million ground-truth correspondences. We also propose a deep learning architecture to tackle the challenging cross-spectral and resolution-unbalanced problem, which can be used to compute registered MS images at arbitrary resolutions and serves as a baseline for the dataset; its results are shown in Fig. 1d. When training the network, we leverage the large body of unlabelled images in our dataset by sourcing proxy labels, which are obtained by exploiting again the second RGB camera to run passive stereo matching. Our dataset enables the community to study the challenging problem of cross-spectral and resolution-unbalanced matching of images, which is key to enable porting of existing MS applications to the consumer space as well as to unlock new applications of MS imaging specific to the mobile device setup. Our contributions are:
• we propose the first investigation into the challenging problem of cross-spectral and resolution-unbalanced dense matching, which is a key enabling technology to unlock MS applications on mobile devices;
• we present the first large-scale publicly available dataset for the problem;
• thanks to a peculiar acquisition methodology exploiting two registered high-resolution RGB cameras, we also make available the first densely labelled set of images for this problem and we propose a training methodology which can leverage unlabelled images via proxy supervision;
• we propose a deep architecture to compute correspondences between images at different resolutions and with different spectral content, which can be used to generate MS images registered at pixel level with the RGB stream.
The project page is available at https://cvlab-unibo.github.io/rgb-ms-web/." }, { "url": "http://arxiv.org/abs/2104.03866v1", "title": "SMD-Nets: Stereo Mixture Density Networks", "abstract": "Although stereo matching accuracy has been greatly improved by deep learning in the\nlast few years, recovering sharp boundaries and high-resolution outputs\nefficiently remains challenging. In this paper, we propose Stereo Mixture\nDensity Networks (SMD-Nets), a simple yet effective learning framework\ncompatible with a wide class of 2D and 3D architectures which ameliorates both\nissues. Specifically, we exploit bimodal mixture densities as output\nrepresentation and show that this allows for sharp and precise disparity\nestimates near discontinuities while explicitly modeling the aleatoric\nuncertainty inherent in the observations. Moreover, we formulate disparity\nestimation as a continuous problem in the image domain, allowing our model to\nquery disparities at arbitrary spatial precision. We carry out comprehensive\nexperiments on a new high-resolution and highly realistic synthetic stereo\ndataset, consisting of stereo pairs at 8Mpx resolution, as well as on\nreal-world stereo datasets.
Our experiments demonstrate increased depth\naccuracy near object boundaries and prediction of ultra high-resolution\ndisparity maps on standard GPUs. We demonstrate the flexibility of our\ntechnique by improving the performance of a variety of stereo backbones.", "authors": "Fabio Tosi, Yiyi Liao, Carolin Schmitt, Andreas Geiger", "published": "2021-04-08", "updated": "2021-04-08", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Deep Stereo Matching: Stereo has a long history in computer vision [41]. With the rise of deep learning, CNN based methods for stereo were pioneered in [56] with the aim of replacing the traditional matching cost computation. More recent works attempt to solve the stereo matching task without hand-crafted post processing steps. They can be categorized into 2D architectures and 3D architectures. In the first category, [22, 32, 15, 53, 54, 46, 1] extend the seminal DispNet [24], an end-to-end network for disparity regression. The second class, instead, consists of architectures that explicitly construct 3D feature cost volumes by means of concatenation/feature difference [4, 18, 47, 48, 8, 57, 49, 55, 29, 3, 7, 45, 50, 21] and groupwise correlation [14]. A thorough review of these works can be found in [36]. We stress once again how such networks, although achieving state-of-the-art results on most stereo benchmarks, suffer from severe over-smoothing at object discontinuities which are not captured by commonly employed disparity metrics, but which matter for many downstream applications. Therefore, the ideas proposed in this work to address this issue are orthogonal to the aforementioned networks and can be advantageously combined with nearly any stereo backbone. Disparity Output Representation: Standard stereo networks directly regress a scalar disparity at every pixel. This output representation suffers from over-smoothing and does not expose the underlying aleatoric uncertainty. The latter problem can be addressed by modeling the disparity using a parametric distribution, e.g., a Gaussian or Laplacian distribution [16, 25] while the over-smoothing issue remains unsolved. A key result of our work is to demonstrate that replacing the unimodal output representation with a bimodal one is sufficient to significantly alleviate this problem. Another line of methods estimate a non-parametric distribution over a set of discrete disparity values. However, this approach leads to inaccurate results when the estimated distribution is multi-modal [17]. Some works tackle the problem by enforcing a unimodal constraint during training [5, 58]. In contrast, we explicitly model the bimodal nature of the distribution at object boundaries by adopting a simple and effective bimodal representation. In concurrent work, [9] also predicts multi-modal distributions supervised by a heuristically designed multi-modal ground truth over a set of depth values. In contrast to them, our bimodal approach can be learned by maximizing the likelihood without requiring direct supervision on the distribution itself. Continuous Function Representation: Existing deep stereo networks use fully convolutional neural networks and make predictions at discrete pixel locations. Recently, continuous function representations have gained attention in many areas, including 3D reconstruction [27, 33, 6, 39, 44, 30, 35], texture estimation [31], image synthesis [28, 42] and semantic segmentation [20]. 
To the best of our knowledge, we are the first to adopt a continuous function representation for disparity estimation, allowing us to predict a disparity value at any continuous pixel location. In contrast to works that allow for high-resolution stereo matching by designing memory efficient architectures [51, 13], our simple output representation is able to exploit ground truth disparity maps at a higher resolution than the input stereo pair, thus effectively learning stereo super-resolution.

[Figure 2: Overcoming the Smoothness Bias with Mixture Density Networks. For clarity, the disparity d is visualized only for a single image row. (a) Stereo Regression Network: classical deep networks for stereo regression suffer from smoothness bias and hence continuously interpolate object boundaries; in addition, disparity values are typically predicted at discrete spatial locations. (b) Stereo Mixture Density Network: a bimodal Laplacian mixture distribution with weight π is used as output representation, which can be queried at any continuous spatial location x. This allows the model to accurately capture uncertainty close to depth discontinuities while recovering sharp edges at inference time by selecting the mode with the highest probability density. In this example, the first mode (µ1, b1) models the background and the second mode (µ2, b2) models the foreground disparity close to the discontinuity. When the probability density of the foreground mode becomes larger than the probability density of the background mode, the most likely disparity sharply transitions from the background to the foreground value.]

3. Method
Fig. 3 illustrates our model. We first encode a stereo pair into a feature map using a convolutional backbone (left). Next, we estimate parameters of a mixture density distribution at any continuous 2D location via a multi-layer perceptron head, taking the bilinearly interpolated feature vector as input (middle). From this, we obtain a disparity as well as uncertainty map (right). We now explain our model, loss function and training protocol in detail.
3.1. Problem Statement
Let I ∈ R^{W×H×6} denote an RGB stereo pair for which we aim to predict a disparity map D at arbitrary resolution. As shown in Fig. 2, classical stereo regression networks suffer from over-smoothing due to the smoothness bias of neural networks. In this work, we exploit a mixture distribution as output representation [2] to overcome this limitation. More specifically, we propose to use a bimodal Laplacian mixture distribution with weight π and two modes (µ1, b1), (µ2, b2) to model the continuous probability distribution over disparities at a particular pixel. Using two modes allows our model to capture both the foreground as well as the background disparity at object boundaries. At inference time, we recover sharp object boundaries by selecting the mode with the highest density value. Thus, our model is able to transition from one disparity to another in a discontinuous fashion while at the same time relying only on the regression of functions (π, µ1, b1, µ2, b2) which are smooth with respect to the image domain and which therefore can easily be represented using neural networks.
3.2.
Stereo Mixture Density Networks
We now formally describe our model. Let

\Psi_\theta : \mathbb{R}^{W \times H \times 6} \rightarrow \mathbb{R}^{W \times H \times D} (1)

denote a stereo backbone network with parameters θ as shown in Fig. 3 (left). Ψθ takes as input the stereo pair I and outputs a D-dimensional feature map, represented in the domain of the reference image (e.g. the left image of a stereo pair). Examples for such networks are standard 2D convolutional networks, or networks which perform 3D convolutions. For the 2D networks, the stereo pair can be concatenated as input or processed by means of siamese towers with shared weights as typically done for 3D architectures. Similarly, this generic formulation also applies to the structured light setting (e.g., Kinect setting where I ∈ R^{W×H}) and the monocular depth estimation problem (I ∈ R^{W×H×3}). As geometry is a piecewise continuous quantity, we apply a deterministic transformation to obtain feature points for any continuous location in R^{W×H}. More specifically, for every continuous 2D location x ∈ R², we bilinearly interpolate the features from its four nearest pixel locations in the feature map R^{W×H×D}. More formally, we describe this transformation as:

\psi : \mathbb{R}^{2} \times \mathbb{R}^{W \times H \times D} \rightarrow \mathbb{R}^{D} (2)

[Figure 3: Method Overview. A 2D or 3D stereo backbone network Ψθ takes as input a stereo pair I (either concatenated or processed by siamese towers), and outputs a D-dimensional feature map in the domain of the reference image. Given any continuous 2D location x, its feature is queried from the feature map via bilinear interpolation, denoted by ψ. The interpolated feature vector is then fed into a multi-layer perceptron fθ to estimate a five-dimensional vector (π, µ1, b1, µ2, b2) which represents the parameters of a bimodal distribution. N denotes the number of points randomly sampled at continuous 2D locations during training and the number of pixels during inference. On the right, maps of µ1, µ2, π, 1−π, the uncertainty h and the predicted disparity d̂ are shown.]

Finally, we employ a multi-layer perceptron to map this abstract feature representation to a five-dimensional vector (π, µ1, b1, µ2, b2) which represents the parameters of a univariate bimodal mixture distribution:

f_\theta : \mathbb{R}^{D} \rightarrow \mathbb{R}^{5} (3)

Note that we have re-used the parameter symbol θ to simplify notation. In the following, we use θ to denote all parameters of our model. We refer to fθ(ψ(·, ·)) as SMD Head, see Fig. 3 for an illustration. To robustly model a distribution over disparities which can express two modes close to disparity discontinuities, we choose a bimodal Laplacian mixture as output representation:

p(d) = \frac{\pi}{2 b_1} e^{-\frac{|d-\mu_1|}{b_1}} + \frac{1-\pi}{2 b_2} e^{-\frac{|d-\mu_2|}{b_2}} (4)

In summary, our model can be compactly expressed as:

p(d|\mathbf{x}, I, \theta) = p(d|f_\theta(\psi(\mathbf{x}, \Psi_\theta(I)))) (5)

At inference time, we determine the final disparity d̂ by choosing the mode with the highest density value:

\hat{d} = \underset{d \in \{\mu_1, \mu_2\}}{\textrm{argmax}} \; p(d) (6)

Note that our formulation allows to query the disparity d̂ ∈ R at any continuous 2D pixel location, enabling ultra high-resolution predictions with sharply delineated object boundaries. This is illustrated in Fig. 4.
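A minimal PyTorch-style sketch of the bimodal output head and the winner-takes-all selection of Eqs. (3)-(6) follows. The MLP layout and the activations used here to keep π in (0, 1) and b1, b2 positive are simplifying assumptions, not the released SMD-Nets head (which, as detailed later, uses sine activations and a sigmoid output layer).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BimodalSMDHead(nn.Module):
    """Maps an interpolated D-dimensional feature to (pi, mu1, b1, mu2, b2), Eqs. (3)-(4)."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5))

    def forward(self, feats):
        out = self.mlp(feats)
        pi = torch.sigmoid(out[:, 0])                 # mixture weight in (0, 1)
        mu1, mu2 = out[:, 1], out[:, 3]               # the two modes
        b1 = F.softplus(out[:, 2]) + 1e-3             # positive scale parameters
        b2 = F.softplus(out[:, 4]) + 1e-3
        return pi, mu1, b1, mu2, b2

def laplacian_mixture_density(d, pi, mu1, b1, mu2, b2):
    """Bimodal Laplacian density p(d), Eq. (4)."""
    return (pi / (2 * b1)) * torch.exp(-(d - mu1).abs() / b1) + \
           ((1 - pi) / (2 * b2)) * torch.exp(-(d - mu2).abs() / b2)

def winner_takes_all(pi, mu1, b1, mu2, b2):
    """Select the mode with the highest density value, Eq. (6)."""
    p1 = laplacian_mixture_density(mu1, pi, mu1, b1, mu2, b2)
    p2 = laplacian_mixture_density(mu2, pi, mu1, b1, mu2, b2)
    return torch.where(p1 >= p2, mu1, mu2)
```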
Our model also allows for capturing the aleatoric uncertainty of the predicted disparity by evaluating the differential entropy of the continuous mixture distribution as:

h = -\int p(d) \log p(d) \, \mathrm{d}d (7)

In practice, we use numerical quadrature to obtain an approximation of the integral.
3.3. Loss Function
We consider the supervised setting and train our model by minimizing the negative log-likelihood loss:

\mathcal{L}_{NLL}(\theta) = -\mathbb{E}_{d,\mathbf{x},I} \log p(d|\mathbf{x}, I, \theta) (8)

where the input I is randomly sampled from the dataset, x is a random pixel location in the continuous image domain Ω = [0, W−1] × [0, H−1], sampled as described in Sec. 3.4, and d is the ground truth disparity at location x.
3.4. Training Protocol
Sampling Strategy: While a naïve strategy samples pixel locations x randomly and uniformly from the image domain Ω, our framework also allows for exploiting custom sampling strategies to focus on depth discontinuities during training. We adopt a Depth Discontinuity Aware (DDA) sampling approach during training that explicitly favors points located near object boundaries while at the same time maintaining a uniform coverage of the entire image space. More specifically, given a ground truth disparity map at training time, we first compute an object boundary mask in which a pixel is considered to be part of the boundary if its (4-connected) neighbors have a disparity that differs by more than 1 from its own disparity. This mask is then dilated using a ρ × ρ kernel to enlarge the boundary region. We report an analysis using different ρ values in the experimental section. Given the total number of training points N, we randomly and uniformly select N/2 points from the domain of all pixels belonging to depth discontinuity regions and N/2 points uniformly from the continuous domain of all remaining pixels. At inference time, we leverage our model to predict disparity values at each location of an (arbitrary resolution) grid.
Stereo Super-Resolution: Our continuous formulation allows us to exploit ground truth at higher resolution than the input I, which we refer to as stereo super-resolution. In contrast, classical stereo methods cannot realize arbitrary super-resolution without changing their architecture.

[Figure 4: Ultra High-resolution Estimation. Comparison of our model using the PSM backbone at 128Mpx resolution (top) to the original PSM at 0.5Mpx resolution (bottom), both taking stereo pairs at 0.5Mpx resolution as input. Each column shows a different zoom level. Note how our method leads to sharper boundaries and higher-resolution outputs.]

4. Experimental Results
In this section, we first describe the datasets used for evaluation and the implementation details. We then present an extensive evaluation that demonstrates the benefits of the proposed SMD Head in combination with different stereo backbones on several distinct tasks.
4.1. Datasets
UnrealStereo4K: Motivated by the lack of large-scale, realistic and high-resolution stereo datasets, we introduce a new photo-realistic binocular stereo dataset at 3840 × 2160 resolution with pixel-accurate ground truth. We create this synthetic dataset using the popular game engine Unreal Engine combined with the open-source plugin UnrealCV [37]. We additionally create a synthetic active monocular dataset (mimicking the Kinect setup) at 4112 × 3008 resolution by warping a gray-scale reference dot pattern to each image, following [38].
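The Depth Discontinuity Aware sampling described above can be sketched roughly as follows. The boundary threshold of 1 disparity, the ρ×ρ dilation and the half-and-half split come from the text, while the function name, the use of max pooling as dilation and the uniform draw over the full image domain are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def dda_sample(gt_disp, n_points, rho=10):
    """Depth Discontinuity Aware sampling (Sec. 3.4): half of the points are drawn
    from (dilated) disparity boundaries, the other half uniformly over the image.
    gt_disp: (H, W) ground truth disparity map; returns (n_points, 2) continuous (x, y)."""
    H, W = gt_disp.shape
    d = gt_disp[None, None]                                       # (1, 1, H, W)
    # a pixel is a boundary if a 4-connected neighbor differs by more than 1 disparity
    dx = F.pad(((d[..., :, 1:] - d[..., :, :-1]).abs() > 1).float(), (0, 1))
    dy = F.pad(((d[..., 1:, :] - d[..., :-1, :]).abs() > 1).float(), (0, 0, 0, 1))
    boundary = torch.clamp(dx + dy, max=1.0)
    # dilate the boundary mask with a rho x rho kernel (max pooling as dilation)
    if rho > 0:
        boundary = F.max_pool2d(boundary, kernel_size=rho, stride=1,
                                padding=rho // 2)[..., :H, :W]
    boundary = boundary[0, 0] > 0
    ys, xs = torch.nonzero(boundary, as_tuple=True)
    if xs.numel() == 0:                                           # degenerate case: no boundaries
        xs = torch.zeros(1, dtype=torch.long)
        ys = torch.zeros(1, dtype=torch.long)
    # N/2 points on boundary pixels, N/2 uniformly over the continuous domain
    idx = torch.randint(0, xs.numel(), (n_points // 2,))
    pts_edge = torch.stack([xs[idx], ys[idx]], dim=1).float()
    pts_unif = torch.rand(n_points - n_points // 2, 2) * torch.tensor([W - 1.0, H - 1.0])
    return torch.cat([pts_edge, pts_unif], dim=0)
```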
We split the dataset into 7720 training pairs, 80 validation pairs and 200 in-domain test pairs. To evaluate the generalization ability of our method, we also create an out-of-domain test set by rendering 200 stereo pairs from an unseen scene. Similarly, the active dataset contains 3856 training images, 40 validation images and 100 test images.
RealActive4K: We further collect a small real-world active dataset of an indoor room with a Kinect-like stereo sensor, including 2570 images at a resolution of 4112×3008 pixels, from which we use 2500 for training, 20 for validation and 50 for testing. We perform Block Matching with a left-right consistency check to use as co-supervision for training models jointly on synthetic (UnrealStereo4K) and real data.
KITTI 2015 [26]: The KITTI dataset is a collection of real-world stereo images depicting driving scenarios. It contains 200 training pairs with sparse ground truth depth maps collected by a LiDAR and 200 testing pairs. We divide the KITTI training set into 160 training stereo pairs and 40 validation stereo pairs, following [45].
Middlebury v3 [40]: Middlebury v3 is a small high-resolution stereo dataset depicting indoor scenes under controlled lighting conditions, containing 10 training pairs and 10 testing pairs with dense ground truth disparities.
4.2. Implementation Details
Architecture: In principle, our SMD Head is compatible with any stereo backbone Ψθ from the literature. In our implementation, we build on top of two state-of-the-art 3D stereo architectures: the Pyramid Stereo Matching (PSM) network [4] and the Hierarchical Stereo Matching (HSM) network [51]. PSM is a well-known and popular stereo network, while HSM represents a method with a good trade-off between accuracy and computation. Moreover, we also adopt a naïve U-Net structure [12] that takes as input the concatenated images of a stereo pair, in order to show the effectiveness of our model on 2D architectures. For the aforementioned networks, we follow the official code provided by the authors. Our SMD Head fθ is implemented as a multi-layer perceptron (MLP) following [39]. More specifically, the number of neurons is (D, 1024, 512, 256, 128, 5). We use sine activations [44], except for the last layer that uses a sigmoid activation for regressing the five-parameter output. For the 3D backbones, we select the matching probabilities from the cost volume in combination with features of Ψθ at different resolutions as input to our SMD Head. For the 2D backbone case, instead, we select features from different layers of the decoder. We refer the reader to the supplementary material for details.
Training: We implement our approach in PyTorch [34] and use Adam with β1 = 0.9 and β2 = 0.999 as optimizer [19]. We train all models from scratch using a single NVIDIA V100 GPU. During training, we use random crops from I as input to the stereo backbone and sample N = 50,000 training points from each crop. We scale the ground truth disparity to [0, 1] for each dataset for numerical stability. Moreover, for RGB inputs we perform chromatic augmentations on the fly, including random brightness, gamma and color shifts sampled from uniform distributions. We further apply horizontal and vertical random flipping while adapting the ground truth disparities accordingly. Please see the supplementary material for details regarding the training procedure for each dataset.
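The chromatic augmentations mentioned above (random brightness, gamma and color shifts drawn from uniform distributions) might look roughly like the following sketch; the sampling ranges are assumptions, since the paper defers such details to its supplementary material.

```python
import torch

def chromatic_augmentation(img, brightness=(0.8, 1.2), gamma=(0.8, 1.2), color=(0.9, 1.1)):
    """Random brightness / gamma / per-channel color shift for an RGB tensor in [0, 1].
    img: (3, H, W). The sampling ranges are illustrative, not the paper's values."""
    b = torch.empty(1).uniform_(*brightness)      # global brightness factor
    g = torch.empty(1).uniform_(*gamma)           # gamma exponent
    c = torch.empty(3, 1, 1).uniform_(*color)     # per-channel color shift
    out = (img.clamp(0, 1) ** g) * b * c
    return out.clamp(0, 1)
```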
Evaluation Metrics: Following [5], we evaluate the Soft Edge Error (SEE_k) metric on pixels belonging to object boundaries, defined as the minimum absolute error between the predicted disparity and the corresponding local ground truth patch of size k × k (k = {3, 5} in our experiments). Intuitively, SEE penalizes over-smoothing artifacts more strongly than small misalignments in a local window, where the former are more harmful to subsequent applications. While not our main focus, we also report the End Point Error (EPE), the standard error metric obtained by averaging the absolute difference between predictions and ground truth disparity values, to evaluate the overall performance. For both SEE and EPE, we compute the average (Avg) and σ(Δ) metrics, with the latter representing the percentage of pixels having errors greater than Δ.
4.3. Ablation Study
We first examine the impact of different components and training choices of the proposed SMD-Nets on the in-domain UnrealStereo4K test set. Unless specified otherwise, we use 960×540 as the resolution of the binocular input I and 3840×2160 for the corresponding ground truth, used for both supervision and testing purposes. The active input images consist of random dot patterns where the dots become indistinguishable at low resolution (e.g., 960 × 540). Therefore, we use 2056 × 1504 as the active input size while keeping the ground truth dimension at 4112 × 3008.
Output Representation: In Tab. 1, we evaluate the effectiveness of our mixture density output representation across both 2D and 3D stereo backbones on multiple tasks, including binocular stereo, monocular depth and active depth. We adopt U-Net and PSM on the binocular stereo dataset as representatives of 2D and 3D backbones, and report results of HSM in the supplementary for the sake of space. We also use the same U-Net backbone for a monocular depth estimation task, by replacing the input with only the reference image of a binocular stereo pair, to show the advantage of our method on various tasks. For the active setup, we choose HSM as it represents a network designed specifically for high-resolution inputs, which takes as input the monocular active image and the fixed reference dot pattern. We compare our bimodal distribution to two other output representations, standard disparity regression and a unimodal Laplacian distribution [16]. For fairness, we implement these baselines by replacing the last layer of our SMD Head to predict the disparity d or the unimodal parameters (µ, b), respectively, where the former is trained with a standard L1 loss and the latter with a negative log-likelihood loss. For all cases we use the proposed bilinear feature interpolation and the naïve random sampling strategy. Tab. 1 shows that the proposed method effectively addresses the over-smoothing problem at object boundaries, achieving the lowest SEE for all backbones on all tasks, compared to both the standard disparity regression and the unimodal representation.

Table 1: Output Representation analysis on the UnrealStereo4K test set. "Dim." refers to the output dimension of the SMD Head, where 1 indicates the point estimate d, 2 the unimodal output representation (µ, b) [16] and 5 our bimodal formulation (π, µ1, b1, µ2, b2).
Task | Ψθ | Dim. | SEE3 (Avg / σ(1) / σ(2)) | SEE5 (Avg / σ(1) / σ(2)) | EPE (Avg / σ(3))
Binocular Stereo | U-Net (2D) [12] | 1 | 2.15 / 41.69 / 24.16 | 2.03 / 39.65 / 22.98 | 1.48 / 8.18
Binocular Stereo | U-Net (2D) [12] | 2 | 2.38 / 42.28 / 25.74 | 2.26 / 40.42 / 24.57 | 1.97 / 10.44
Binocular Stereo | U-Net (2D) [12] | 5 | 1.57 / 30.06 / 14.77 | 1.45 / 28.05 / 16.57 | 1.28 / 5.94
Binocular Stereo | PSM (3D) [4] | 1 | 1.98 / 36.32 / 20.35 | 1.85 / 34.42 / 19.21 | 1.10 / 5.52
Binocular Stereo | PSM (3D) [4] | 2 | 2.50 / 39.40 / 23.63 | 2.37 / 37.57 / 22.51 | 1.88 / 7.73
Binocular Stereo | PSM (3D) [4] | 5 | 1.52 / 26.98 / 12.68 | 1.38 / 24.93 / 11.49 | 1.11 / 4.80
Mono. | U-Net (2D) [12] | 1 | 3.29 / 60.18 / 41.37 | 3.25 / 58.49 / 40.08 | 4.21 / 35.92
Mono. | U-Net (2D) [12] | 2 | 4.01 / 61.06 / 43.19 | 3.86 / 59.40 / 41.90 | 5.49 / 41.88
Mono. | U-Net (2D) [12] | 5 | 2.92 / 51.32 / 32.33 | 2.78 / 49.54 / 31.06 | 4.06 / 30.59
Active | HSM (3D) [51] | 1 | 3.40 / 47.87 / 24.80 | 3.18 / 46.14 / 23.76 | 1.29 / 5.84
Active | HSM (3D) [51] | 2 | 4.93 / 57.05 / 33.44 | 4.69 / 55.47 / 32.41 | 2.83 / 10.70
Active | HSM (3D) [51] | 5 | 2.69 / 41.84 / 17.35 | 2.43 / 39.83 / 16.17 | 1.42 / 5.48
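For concreteness, here is a minimal sketch of the SEE_k metric defined in Sec. 4.2 (minimum absolute error against a local k×k ground-truth patch, evaluated on boundary pixels) together with the standard EPE; the function names, the externally provided boundary mask and the zero padding at image borders are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def epe(pred, gt):
    """Average End Point Error between predicted and ground truth disparities."""
    return (pred - gt).abs().mean()

def soft_edge_error(pred, gt, boundary_mask, k=3):
    """SEE_k: for each boundary pixel, the minimum absolute difference between the
    prediction and any ground truth value inside the local k x k patch.
    pred, gt: (H, W); boundary_mask: (H, W) boolean mask of object boundaries."""
    pad = k // 2
    # gather all k*k shifted versions of the ground truth around each pixel
    patches = F.unfold(gt[None, None], kernel_size=k, padding=pad)   # (1, k*k, H*W)
    patches = patches.view(k * k, *gt.shape)                         # (k*k, H, W)
    err = (patches - pred[None]).abs().min(dim=0).values             # min over the local patch
    return err[boundary_mask].mean()
```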
Moreover, we observe that the unimodal representation sacrifices EPE for capturing the uncertainty, while our method is on par with the standard L1 regression. On the stereo dataset, the 3D backbone (PSM) consistently outperforms the 2D backbone (U-Net); therefore we use PSM for the following ablation experiments.
Sampling Strategy: In Tab. 2, we show the impact of the sampling strategy adopted during training. More specifically, we compare the naïve uniform sampling strategy and the proposed DDA approach using different dilation kernel sizes ρ×ρ.

Table 2: Sampling Strategy analysis on the UnrealStereo4K test set using the PSM backbone.
Sampling | ρ | SEE3 (Avg / σ(1) / σ(2)) | SEE5 (Avg / σ(1) / σ(2)) | EPE (Avg / σ(3))
Random | - | 1.52 / 26.98 / 12.68 | 1.38 / 24.93 / 11.49 | 1.11 / 4.80
DDA | 0 | 1.34 / 21.62 / 9.77 | 1.19 / 19.58 / 8.59 | 1.08 / 4.44
DDA | 10 | 1.13 / 18.64 / 8.69 | 0.98 / 16.67 / 7.55 | 0.92 / 3.88
DDA | 20 | 1.30 / 20.42 / 9.88 | 1.15 / 18.40 / 8.71 | 1.11 / 4.44

As can be observed, DDA enables SMD-Nets to focus on depth discontinuities, resulting in better SEE compared to random point selection. Moreover, we observe that sampling exactly at depth boundaries (i.e., ρ = 0) leads to slightly degraded EPE and is less effective on SEE, which penalizes small misalignments in a local window. Instead, setting ρ = 10 allows the network to focus on larger regions near edges and results in the best performance, while increasing ρ further does not improve performance. Finally, it is worth noticing that this strategy also allows our model to improve the overall performance, achieving lower EPE metrics. In the following experiments, we thus adopt the DDA strategy using ρ = 10 for our SMD-Nets.

Table 3: Ground Truth Resolution analysis on the UnrealStereo4K test set using the PSM backbone.
Eval. GT | Training GT | SEE3 (Avg / σ(1) / σ(2)) | SEE5 (Avg / σ(1) / σ(2)) | EPE (Avg / σ(3))
960 × 540 | 960 × 540 | 1.19 / 20.36 / 9.16 | 0.93 / 16.55 / 7.08 | 1.02 / 4.30
960 × 540 | 3840 × 2160 | 0.98 / 15.42 / 7.05 | 0.78 / 12.44 / 5.54 | 0.89 / 3.81
3840 × 2160 | 960 × 540 | 1.33 / 23.35 / 10.82 | 1.19 / 21.34 / 9.63 | 1.03 / 4.30
3840 × 2160 | 3840 × 2160 | 1.13 / 18.64 / 8.69 | 0.98 / 16.67 / 7.55 | 0.92 / 3.88
Ground Truth Resolution: Tab. 3 shows the results of our model trained and tested on the stereo data using ground truth maps at different resolutions, while maintaining the input size at 960 × 540. Towards this goal, we train our model adopting ground truth disparities 1) resized to the same resolution as the input using nearest-neighbor interpolation and 2) at the original resolution (i.e. 3840 × 2160). We notice that sampling points from higher-resolution disparity maps always leads to better results compared to using low-resolution ground truth. We remark that the proposed model effectively leverages high-resolution ground truth thanks to its continuous formulation, without requiring additional memory compared to standard CNN-based stereo networks.
4.4. Comparison to Existing Baselines
We now compare to several baselines [23, 5] which aim to address the over-smoothing problem. Bilateral median filtering (BF) is often adopted to sharpen disparity predictions [23, 43]. Chen et al. [5] address the over-smoothing problem of 3D stereo backbones using 1) a post-processing step to extract a single-modal (SM) distribution from the full discrete distribution; 2) a cross-entropy (CE) loss to enforce a unimodal distribution during training. We re-implement [5] as no official code is available. As [5] has been proposed for 3D backbones only, we use PSM [4] and HSM [51] as the stereo backbones in the following experiments.
UnrealStereo4K: Tab. 4 collects results obtained from different models on both the in-domain and out-of-domain test splits of the binocular UnrealStereo4K dataset.

Table 4: Comparison on UnrealStereo4K. All methods are evaluated on ground truth at 3840×2160 given an input size of 960×540.
In-domain test set:
Method | SEE3 (Avg / σ(1) / σ(2)) | SEE5 (Avg / σ(1) / σ(2)) | EPE (Avg / σ(1) / σ(2) / σ(3))
PSM [4] | 1.73 / 33.06 / 16.57 | 1.61 / 31.11 / 15.44 | 1.09 / 11.88 / 6.94 / 5.19
PSM [4] + BF [23] | 1.65 / 30.93 / 15.26 | 1.52 / 28.92 / 14.10 | 1.10 / 11.81 / 6.95 / 5.23
PSM [4] + SM [5] | 1.50 / 29.22 / 12.71 | 1.37 / 27.16 / 11.54 | 1.10 / 11.65 / 6.69 / 4.97
PSM [4] + CE + SM [5] | 1.33 / 27.31 / 10.14 | 1.19 / 25.25 / 8.99 | 0.86 / 10.40 / 4.93 / 3.50
PSM [4] + Ours | 1.13 / 18.64 / 8.69 | 0.98 / 16.67 / 7.55 | 0.92 / 8.24 / 5.06 / 3.88
HSM [51] | 2.01 / 41.63 / 23.81 | 1.89 / 39.69 / 22.62 | 1.16 / 14.81 / 8.20 / 5.84
HSM [51] + BF [23] | 1.88 / 39.68 / 21.70 | 1.77 / 37.67 / 20.49 | 1.19 / 14.78 / 8.21 / 5.88
HSM [51] + SM [5] | 1.83 / 40.52 / 22.30 | 1.70 / 38.53 / 21.07 | 1.17 / 14.73 / 8.11 / 5.74
HSM [51] + CE + SM [5] | 2.00 / 45.71 / 25.99 | 1.87 / 43.72 / 24.71 | 1.17 / 16.17 / 8.12 / 5.46
HSM [51] + Ours | 1.31 / 24.31 / 10.81 | 1.17 / 22.30 / 9.67 | 1.00 / 11.40 / 6.09 / 4.34
Out-of-domain test set:
Method | SEE3 (Avg / σ(1) / σ(3)) | SEE5 (Avg / σ(1) / σ(3)) | EPE (Avg / σ(1) / σ(2) / σ(3))
PSM [4] | 2.19 / 36.94 / 20.07 | 1.99 / 34.09 / 18.25 | 1.53 / 16.92 / 10.25 / 7.83
PSM [4] + BF [23] | 2.16 / 35.64 / 19.16 | 1.95 / 32.76 / 17.32 | 1.56 / 18.89 / 10.28 / 7.89
PSM [4] + SM [5] | 2.03 / 33.91 / 16.74 | 1.82 / 30.92 / 14.82 | 1.54 / 16.43 / 9.73 / 7.36
PSM [4] + CE + SM [5] | 1.84 / 29.87 / 13.30 | 1.62 / 26.84 / 11.46 | 1.37 / 13.29 / 7.84 / 6.03
PSM [4] + Ours | 1.59 / 24.58 / 12.54 | 1.38 / 21.63 / 10.73 | 1.27 / 12.11 / 7.69 / 6.06
HSM [51] | 2.43 / 44.49 / 26.17 | 2.24 / 41.74 / 24.33 | 1.75 / 22.03 / 12.73 / 9.23
HSM [51] + BF [23] | 2.39 / 43.60 / 24.14 | 2.19 / 40.82 / 23.28 | 1.80 / 22.05 / 12.79 / 9.33
HSM [51] + SM [5] | 2.31 / 43.76 / 25.16 | 2.11 / 40.97 / 23.29 | 1.76 / 21.88 / 12.54 / 9.03
HSM [51] + CE + SM [5] | 2.61 / 48.27 / 28.84 | 2.41 / 45.56 / 26.98 | 1.91 / 26.12 / 14.40 / 10.14
HSM [51] + Ours | 2.03 / 34.82 / 17.75 | 1.82 / 31.88 / 15.83 | 1.66 / 19.16 / 10.72 / 7.77

[Figure 5: Qualitative Results on UnrealStereo4K, showing (a) PSM [4], (b) PSM [4] + CE + SM [5], (c) PSM + Ours and (d) GT / Input. The first row shows the predicted disparity maps while the second row depicts the corresponding error maps. A patch is zoomed-in in all images to better perceive details near depth boundaries.]
We use the same input resolution of 960 × 540 for all methods. While our baseline methods can only use supervision of the same size as the input, we leverage our continuous formulation to supervise SMD-Nets using ground truth at 3840 × 2160, on which we also evaluate all methods. For our competitors, we upsample their output using nearest-neighbor interpolation during testing. Both the original PSM and HSM follow the same training setting as our SMD-Nets. Tab. 4 suggests that BF [23] and SM [5] slightly improve SEE on both backbones while leading to degraded performance on EPE metrics. Using the CE loss combined with SM [5] leads to effective improvements on both SEE and EPE on the PSM backbone. Interestingly, we notice that adopting the same CE + SM strategy leads to worse performance on HSM. A possible explanation is that the CE loss requires trilinearly interpolating a matching cost probability distribution to the full resolution W × H × D_max (with D_max denoting the maximum disparity), whereas HSM predicts a less fine-grained cost distribution compared to PSM, thus making the cross-entropy loss less effective. Moreover, we remark that the CE loss is more expensive to compute than our simple continuous likelihood-based formulation, and that CE + SM can only be applied to 3D backbones. In contrast, our approach based on the bimodal output representation notably outperforms our competitors on SEE on both the in-domain and out-of-domain test sets, showing how our strategy predicts better disparities near boundaries. Moreover, we highlight that we achieve consistently better estimates on standard EPE metrics compared to the original backbone while performing comparably to the CE + SM baseline. Fig. 5 shows our gains at object boundaries.
KITTI 2015: We fine-tune all methods trained on UnrealStereo4K on the KITTI 2015 training set. Since the provided ground truth disparities are sparse, we rely on the naïve random sampling strategy to train our model. On the validation set, we evaluate SEE on boundaries of instance segmentation maps from the KITTI dataset, following the evaluation procedure described in [5].

Table 5: Comparison on the KITTI 2015 Validation Set, using boundaries extracted from instance segmentation masks to evaluate on depth discontinuity regions.
Method | SEE3 (Avg / σ(1) / σ(2)) | SEE5 (Avg / σ(1) / σ(2)) | EPE (Avg / σ(3))
PSM [4] | 1.10 / 20.57 / 9.74 | 0.99 / 17.83 / 9.02 | 0.73 / 2.49
PSM [4] + CE + SM [5] | 1.02 / 16.12 / 7.53 | 0.90 / 13.80 / 6.94 | 0.66 / 2.09
PSM [4] + Ours | 0.90 / 13.09 / 6.66 | 0.79 / 10.93 / 6.01 | 0.59 / 1.95

Table 6: Comparison on the KITTI 2015 Test Set, evaluated on the official online benchmark. All the reported numbers represent official submissions from the authors.
Method | All Areas (Bg / Fg / All) | Non Occluded (Bg / Fg / All)
GANet-deep [57] | 1.48 / 3.46 / 1.81 | 1.34 / 3.11 / 1.63
HD3-Stereo [54] | 1.70 / 3.63 / 2.02 | 1.56 / 3.43 / 1.87
GwcNet-g [14] | 1.74 / 3.93 / 2.11 | 1.61 / 3.49 / 1.92
PSM [4] | 1.86 / 4.62 / 2.31 | 1.71 / 4.31 / 2.14
PSM [4] + CE + SM [5] | 1.54 / 4.33 / 2.14 | 1.70 / 3.90 / 1.93
PSM [4] + Ours | 1.69 / 4.01 / 2.08 | 1.54 / 3.70 / 1.89

Table 7: Generalization on Middlebury v3. All models are trained on UnrealStereo4K and evaluated on the training set of the Middlebury v3 dataset.
Method | SEE3 (Avg / σ(1) / σ(2)) | SEE5 (Avg / σ(1) / σ(2)) | EPE (Avg / σ(3))
PSM [4] | 3.35 / 46.50 / 29.40 | 2.61 / 41.04 / 24.87 | 4.12 / 17.43
PSM [4] + CE + SM [5] | 2.62 / 34.80 / 19.02 | 1.83 / 28.92 / 14.11 | 2.80 / 12.12
PSM [4] + Ours | 2.61 / 34.26 / 19.83 | 1.88 / 28.71 / 15.32 | 3.03 / 13.60
Furthermore, we predict disparities on the test set using the same fine-tuned model and submit them to the online benchmark. Tab. 5 and Tab. 6 show our results using PSM as backbone (we provide additional results on the validation set adopting HSM in the supplement). Note that our SMD-Net not only achieves superior performance on both SEE and EPE metrics on the validation set compared to the original PSM and [5] (Tab. 5), but also outperforms both on the test set and is on par with state-of-the-art stereo networks on the standard metrics of the KITTI benchmark (Tab. 6).
4.5. Synthetic-to-Real Generalization
Lastly, we demonstrate how models trained on the synthetic dataset generalize to the real-world domain for both binocular stereo and active depth estimation.

[Figure 6: Generalization on RealActive4K using the HSM backbone. (a) Disparity Regression (L1): the point clouds of standard disparity regression using an L1 loss show bleeding artifacts. (b) SMD Head (Bimodal): our bimodal distribution leads to clean reconstructions.]

Middlebury v3: Tab. 7 reports the performance of supervised methods trained on UnrealStereo4K and tested without fine-tuning on the training set of the Middlebury v3 dataset. We evaluate them using the high-resolution ground truth. Compared to the original PSM baseline, our SMD-Net achieves much better generalization on both SEE and EPE metrics while performing on par with [5].
RealActive4K: Moreover, we fine-tune our active depth models jointly on active UnrealStereo4K and RealActive4K with pseudo ground truth from Block Matching. Fig. 6 shows that this allows for estimating sharp disparity edges on real captures even though Block Matching does not provide supervision in these areas. In contrast, standard disparity regression fails to predict clean object boundaries.
5. Conclusion
In this paper, we propose SMD-Nets, a novel stereo matching framework aimed at improving depth accuracy near object boundaries and suited for disparity super-resolution. By exploiting bimodal mixture densities as output representation combined with a continuous function formulation, our method is capable of predicting sharp and precise disparity values at arbitrary spatial resolution, notably alleviating the common over-smoothing problem in learning-based stereo networks. Our model is compatible with a broad spectrum of 2D and 3D stereo backbones. Our extensive experiments demonstrate the advantages of our strategy on a new high-resolution synthetic stereo dataset and on real-world stereo pairs. We plan to extend our bimodal output representation to other regression tasks such as optical flow and self-supervised depth estimation.
Acknowledgements. This work was supported by the Intel Network on Intelligent Systems, the BMBF through the Tübingen AI Center (FKZ: 01IS18039A), the ERC Starting Grant LEGO3D (850533) and the DFG EXC number 2064/1 project number 390727645. We thank the IMPRS-IS for supporting Carolin Schmitt. We acknowledge Stefano Mattoccia, Matteo Poggi and Gernot Riegler for their helpful feedback.", "introduction": "Stereo matching is a long-standing and active research topic in computer vision. It aims at recovering dense correspondences between image pairs by estimating the disparity between matching pixels, required to infer depth through triangulation. It also plays a crucial role in many areas like 3D mapping, scene understanding and robotics.
Traditional stereo matching algorithms apply hand-crafted matching costs and engineered regularization strategies. More recently, learning methods based on Convolutional Neural Networks (CNNs) have proven to be superior, given the increasing availability of large stereo datasets [11, 10, 52].

[Figure 1: Point Cloud Comparison between the stereo network PSM [4] (a) and PSM [4] + Ours, i.e. our Stereo Mixture Density Network (SMD-Net) (b), on the UnrealStereo4K dataset. Notice how SMD-Net notably alleviates bleeding artifacts near object boundaries, resulting in more accurate 3D reconstructions.]

Although such methods produce compelling results, two major issues remain unsolved: predicting accurate depth boundaries and generating high-resolution outputs with limited memory and computation. The first issue is shown in Fig. 1a: as neural networks are smooth function approximators, they often poorly reconstruct object boundaries, causing "bleeding" artifacts (i.e., flying pixels) when converted to point clouds. These artifacts can be detrimental to subsequent applications such as 3D reconstruction or 3D object detection. Thus, while being ignored by most commonly employed disparity metrics, accurate 3D reconstruction of contours is a desirable property for any stereo matching algorithm. Furthermore, existing methods are limited to discrete predictions at pixel locations of a fixed-resolution image grid, while geometry is a piecewise continuous quantity where object boundaries may not align with pixel centers. Increasing the output resolution by adding extra upsampling layers only partially addresses this problem, as it leads to a significant increase in memory and computation. In this work, we address both issues. Our key contribution is to learn a representation that is precise at object boundaries and scales to high output resolutions. In particular, we formulate the task as a continuous estimation problem and exploit bimodal mixture densities [2] as output representation. Our simple formulation lets us (1) avoid bleeding artifacts at depth discontinuities, (2) regress disparity values at arbitrary spatial resolution with constant memory and (3) provide a measure of aleatoric uncertainty. We illustrate the boundary bleeding problem and our solution to it in Fig. 2. While classical deep networks for stereo regression suffer from smoothness bias and are incapable of representing sharp disparity discontinuities, the proposed Stereo Mixture Density Networks (SMD-Nets) effectively address this issue. The key idea is to alter the output representation by adopting a mixture distribution such that sharp discontinuities can be regressed despite the fact that the underlying neural networks are only able to make smooth predictions (note that all curves in Fig. 2b are indeed smooth while the predicted disparity is discontinuous). Furthermore, the proposed model is capable of regressing disparity values at arbitrary continuous locations in the image, effectively solving a stereo super-resolution task. In combination with the proposed representation, this allows for regressing sharp discontinuities at sub-pixel resolution while keeping memory requirements constant.
In summary, we present: (i) A novel learning framework for stereo matching that exploits compactly parameterized bimodal mixture densities as output representation and can be trained using a simple likelihood-based loss function. (ii) A continuous function formulation aimed at estimating disparities at arbitrary spatial resolution with constant memory footprint. (iii) A new large-scale synthetic binocular stereo dataset with ground truth disparities at 3840 × 2160 resolution, comprising photo-realistic renderings of indoor and outdoor environments. (iv) Extensive experiments on several datasets demonstrating improved accuracy at depth discontinuities for various backbones on binocular stereo, monocular and active depth estimation tasks. Our source code and dataset are available at https://github.com/fabiotosi92/SMD-Nets." }, { "url": "http://arxiv.org/abs/2003.14030v1", "title": "Distilled Semantics for Comprehensive Scene Understanding from Videos", "abstract": "Whole understanding of the surroundings is paramount to autonomous systems. Recent works have shown that deep neural networks can learn geometry (depth) and motion (optical flow) from a monocular video without any explicit supervision from ground truth annotations, particularly hard to source for these two tasks. In this paper, we take an additional step toward holistic scene understanding with monocular cameras by learning depth and motion alongside with semantics, with supervision for the latter provided by a pre-trained network distilling proxy ground truth images. We address the three tasks jointly by a) a novel training protocol based on knowledge distillation and self-supervision and b) a compact network architecture which enables efficient scene understanding on both power hungry GPUs and low-power embedded platforms. We thoroughly assess the performance of our framework and show that it yields state-of-the-art results for monocular depth estimation, optical flow and motion segmentation.", "authors": "Fabio Tosi, Filippo Aleotti, Pierluigi Zama Ramirez, Matteo Poggi, Samuele Salti, Luigi Di Stefano, Stefano Mattoccia", "published": "2020-03-31", "updated": "2020-03-31", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.LG" ], "main_content": "We review previous works relevant to our proposal. Monocular depth estimation. At first, depth estimation was tackled as a supervised [24, 49] or semi-supervised task [48]. Nonetheless, self-supervision from image reconstruction is now becoming the preferred paradigm to avoid hard-to-source labels. Stereo pairs [25, 28] can provide such supervision and enable scale recovery, with further improvements achievable by leveraging trinocular assumptions [64], proxy labels from SGM [76, 80] or guidance from visual odometry [2]. Monocular videos [95] are a more flexible alternative, although they do not allow for scale recovery and mandate learning camera pose alongside depth. Recent developments of this paradigm deal with differentiable direct visual odometry [77] or ICP [57] and normal consistency [87]. Similarly to our work, [88, 96, 17, 3, 86, 56] model rigid and non-rigid components using the projected depth, relative camera transformations, and optical flow to handle independent motions, which can also be estimated independently in the 3D space [9, 83]. In [30], the authors show how to learn camera intrinsics together with depth and egomotion to enable training on any unconstrained video.
In [29, 94, 6], reasoned design choices such as a minimum reprojection loss between frames, self-assembled attention modules and auto-mask strategies to handle static cameras or dynamic objects proved to be very effective. Supervision from stereo and video has also been combined [91, 29], possibly improved by means of proxy supervision from stereo direct sparse odometry [84]. Uncertainty modeling for self-supervised monocular depth estimation has been studied in [63]. Finally, lightweight networks aimed at real-time performance on low-power systems have been proposed within self-supervised [62, 61] as well as supervised [81] learning paradigms. Semantic segmentation. Nowadays, fully convolutional neural networks [55] are the standard approach for semantic segmentation. Within this framework, multi-scale context modules and proper architectural choices are crucial to performance. The former rely on spatial pyramid pooling [31, 93] and atrous convolutions [14, 13, 15]. As for the latter, popular backbones [47, 74, 32] have been improved by more recent designs [34, 18]. While for years the encoder-decoder architecture has been the most popular choice [70, 4], recent trends in Auto Machine Learning (AutoML) [52, 12] leverage architectural search to achieve state-of-the-art accuracy. However, these latter have huge computational requirements. An alternative research path deals with real-time semantic segmentation networks. In this space, [60] deploys a compact and efficient network architecture, while [89] proposes a two-path network to attain fast inference while capturing high-resolution details. DABNet [50] finds an effective combination of depth-wise separable filters and atrous convolutions to reach a good trade-off between efficiency and accuracy. [51] employs cascaded sub-stages to refine results, while FCHardNet [11] leverages a new harmonic densely connected pattern to maximize the inference performance of larger networks. Optical flow estimation. The optical flow problem concerns the estimation of the apparent displacement of pixels in consecutive frames, and it is useful in various applications such as, e.g., video editing [10, 43] and object tracking [82]. Initially introduced by Horn and Schunck [33], this problem has traditionally been tackled by variational approaches [8, 7, 69]. More recently, Dosovitskiy et al. [21] showed the supremacy of deep learning strategies also in this field. Then, other works improved accuracy by stacking more networks [38] or exploiting traditional pyramidal [65, 75, 35] and multi-frame fusion [67] approaches. Unfortunately, obtaining even sparse labels for optical flow is extremely challenging, which renders self-supervision from images highly desirable. For this reason, an increasing number of methods propose to use image reconstruction and spatial smoothness [41, 68, 73] as main signals to guide the training, paying particular attention to occluded regions [58, 85, 53, 54, 40, 37]. Semantic segmentation and depth estimation.
Monocular depth estimation is tightly connected to the semantics of the scene. We can infer the depth of a scene from a single image mostly because of context and prior semantic knowledge. Prior works explored the possibility to learn both tasks with either full supervision [78, 23, 59, 45, 92, 44, 22] or supervision concerned with semantic labels only [90, 16]. Unlike previous works, we propose a compact architecture trained by self-supervision on monocular videos and exploiting proxy semantic labels. Figure 2. Overall framework for training ΩNet to predict depth, camera pose, camera intrinsics, semantic labels and optical flow; the architectures composing ΩNet are highlighted in red. Semantic segmentation and optical flow. Joint learning of semantic segmentation and optical flow estimation has already been explored [36]. Moreover, scene segmentation [72, 5] is required to disentangle potentially moving and static objects for focused optimizations. Differently, [66] leverages optical flow to improve semantic predictions of moving objects. Peculiarly w.r.t. previous work, our proposal features a novel self-distillation training procedure guided by semantics to improve occlusion handling. Scene understanding from stereo videos. Finally, we mention recent works approaching stereo depth estimation with optical flow [1] and semantic segmentation [42] for comprehensive scene understanding. In contrast, we are the first to rely on monocular videos to this aim. 3. Overall Learning Framework Our goal is to develop a real-time comprehensive scene understanding framework capable of learning strictly related tasks from monocular videos. Purposely, we propose a multi-stage approach to learn first geometry and semantics, then elicit motion information, as depicted in Figure 2. 3.1. Geometry and Semantics Self-supervised depth and pose estimation. We propose to solve a self-supervised single-image depth and pose estimation problem by exploiting geometrical constraints in a sequence of N images, in which one of the frames is used as the target view I_t and the other ones in turn as the source image I_s. Assuming a moving camera in a stationary scene, given a depth map D_t aligned with I_t, the camera intrinsic parameters K and the relative pose T_{t→s} between I_t and I_s, it is possible to sample pixels from I_s in order to synthesise a warped image Ĩ_t aligned with I_t. The mapping between corresponding homogeneous pixel coordinates p_t ∈ I_t and p_s ∈ I_s is given by: $p_s \sim K\, T_{t \to s}\, D_t(p_t)\, K^{-1} p_t$ (1). Following [95], we use the sub-differentiable bilinear sampling mechanism proposed in [39] to obtain Ĩ_t. Thus, in order to learn depth, pose and camera intrinsics we train two separate CNNs to minimize the photometric reconstruction error between Ĩ_t and I_t, defined as: $L^D_{ap} = \sum_{p} \psi(I_t(p), \tilde{I}_t(p))$ (2), where ψ is a photometric error function between the two images.
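To make Eq. (1) and (2) concrete, here is a NumPy sketch (an illustration under our own assumptions, not the paper's implementation) that reprojects target-view pixels into the source view given depth, intrinsics and relative pose; the warped image Ĩ_t would then be obtained by bilinearly sampling I_s at the computed coordinates, and ψ would compare it against I_t.

```python
import numpy as np

def reproject_pixels(depth_t, K, T_t2s):
    """Map homogeneous pixel coordinates of the target view into the source
    view, following p_s ~ K T_{t->s} D_t(p_t) K^{-1} p_t (Eq. 1).

    depth_t : (H, W) depth map of the target frame
    K       : (3, 3) camera intrinsics
    T_t2s   : (4, 4) relative pose from target to source
    Returns (H, W, 2) sampling coordinates in the source view.
    """
    H, W = depth_t.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # Back-project to 3D camera coordinates of the target view
    cam_t = np.linalg.inv(K) @ pix * depth_t.reshape(1, -1)
    cam_t_h = np.vstack([cam_t, np.ones((1, cam_t.shape[1]))])          # 4 x HW

    # Rigid transform into the source camera and project with K
    cam_s = (T_t2s @ cam_t_h)[:3]
    proj = K @ cam_s
    ps = (proj[:2] / np.clip(proj[2:3], 1e-6, None)).T.reshape(H, W, 2)
    return ps

def photometric_l1(I_t, I_t_warped):
    """One simple choice for the per-pixel error psi in Eq. (2)."""
    return np.abs(I_t - I_t_warped).mean()
```

In practice the paper adopts the SSIM+L1 combination of [28] as ψ (see Sec. 4.2) rather than the plain L1 used here for brevity.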
However, as pointed out in [29], such a formulation is prone to errors in occlusion/disocclusion regions or in static-camera scenarios. To soften these issues, we follow the same principles suggested in [29], where a minimum per-pixel reprojection loss is used to compute the photometric error, an automask method allows for filtering out spurious gradients when the static-camera assumption is violated, and an edge-aware smoothness loss term is used as in [28]. Moreover, we use the depth normalization strategy proposed in [77]. See the supplementary material for further details. We compute the rigid flow between I_t and I_s as the difference between the projected and original pixel coordinates in the target image: $F^{rigid}_{t \to s}(p_t) = p_s - p_t$ (3). Distilling semantic knowledge. The proposed distillation scheme is motivated by how time-consuming and cumbersome obtaining accurate pixel-wise semantic annotations is. Thus, we train our framework to estimate semantic segmentation masks S_t by means of supervision from cheap proxy labels S_p distilled by a semantic segmentation network, pre-trained on few annotated samples and capable of generalizing well to diverse datasets. The availability of proxy semantic labels for the frames of a monocular video enables us to train a single network to predict depth and semantic labels jointly. Accordingly, the joint loss is obtained by adding a standard cross-entropy term L_sem to the previously defined self-supervised image reconstruction loss L^D_ap. Moreover, similarly to [90], we deploy a cross-task loss term, L^D_edge (see supplementary), aimed at favouring spatial coherence between depth edges and semantic boundaries. However, unlike [90], we do not exploit stereo pairs at training time. 3.2. Optical Flow and Motion Segmentation Self-supervised optical flow. As the 3D structure of a scene includes stationary as well as non-stationary objects, to handle the latter we rely on a classical optical flow formulation. Formally, given two images I_t and I_s, the goal is to estimate the 2D motion vectors F_{t→s}(p_t) that map each pixel in I_t into its corresponding one in I_s. To learn such a mapping without supervision, previous approaches [58, 54, 88] employ an image reconstruction loss L^F_ap that minimizes the photometric differences between I_t and the back-warped image Ĩ_t obtained by sampling pixels from I_s using the estimated 2D optical flow F_{t→s}(p_t). This approach performs well for non-occluded pixels but provides misleading information within occluded regions. Pixel-wise motion probability. Non-stationary objects produce systematic errors when optimizing L^D_ap due to the assumption that the camera is the only moving body in an otherwise stationary scene. However, such systematic errors can be exploited to identify non-stationary objects: at pixels belonging to such objects, the rigid flow F^rigid_{t→s} and the optical flow F_{t→s} should exhibit different directions and/or norms. Therefore, a pixel-wise probability of belonging to an object independently moving between frames s and t, P_t, can be obtained by normalizing the differences between the two vectors.
Formally, denoting with θ(p_t) the angle between the two vectors at location p_t, we define the per-pixel motion probability as: $P_t(p_t) = \max\left\{ \frac{1 - \cos\theta(p_t)}{2},\; 1 - \rho(p_t) \right\}$ (4), where cos θ(p_t) can be computed as the normalized dot product between the vectors and evaluates the similarity in direction between them, while ρ(p_t) is defined as $\rho(p_t) = \frac{\min\{\lVert F_{t\to s}(p_t)\rVert_2,\, \lVert F^{rigid}_{t\to s}(p_t)\rVert_2\}}{\max\{\lVert F_{t\to s}(p_t)\rVert_2,\, \lVert F^{rigid}_{t\to s}(p_t)\rVert_2\}}$ (5), i.e. a normalized score of the similarity between the two norms. By taking the maximum of the two normalized differences, we can detect moving objects even when either the directions or the norms of the vectors are similar. A visualization of P_t(p_t) is depicted in Fig. 3(d). Semantic-aware Self-Distillation Paradigm. Finally, we combine semantic information, estimated optical flow, rigid flow and pixel-wise motion probabilities within a final training stage to obtain a more robust self-distilled optical flow network. In other words, we train a new instance of the model to infer a self-distilled flow SF_{t→s} given the estimates F_{t→s} from a first self-supervised network and the aforementioned cues. As previously discussed and highlighted in Figure 3(c), standard self-supervised optical flow is prone to errors in occluded regions due to the lack of photometric information, but can provide good estimates for the dynamic objects in the scene. On the contrary, the estimated rigid flow can properly handle occluded areas thanks to the minimum-reprojection mechanism [29]. Starting from these considerations, our key idea is to split the scene into stationary and potentially dynamic objects, and apply the proper supervision to each. Purposely, we can leverage several observations: 1. Semantic priors. Given a semantic map S_t for image I_t, we can binarize pixels into static M^s_t and potentially dynamic M^d_t, with M^s_t ∩ M^d_t = ∅. For example, we expect that points labeled as road are static in the 3D world, while pixels belonging to the semantic class car may move. In M^d_t, we assign 1 to each potentially dynamic pixel, 0 otherwise, as shown in Figure 3(e). 2. Camera Motion Boundary Mask. Instead of using a backward-forward strategy [96] to detect boundaries occluded due to the ego-motion, we analytically compute a binary boundary mask M^b_t from depth and ego-motion estimates as proposed in [57]. We assign a 0 value to out-of-camera pixels, 1 otherwise, as shown in Figure 3(f). 3. Consistency Mask. Because the inconsistencies between the rigid flow and F_{t→s} are not only due to dynamic objects but also to occluded/inconsistent areas, we can leverage Equation (4) to detect such critical regions. Indeed, we define the consistency mask as: $M^c_t = P_t < \xi,\; \xi \in [0, 1]$ (6). This mask assigns 1 where the condition is satisfied, 0 otherwise (i.e. inconsistent regions), as in Figure 3(g). Finally, we compute the final mask M, in Figure 3(h), as: $M = \min\{\max\{M^d_t, M^c_t\}, M^b_t\}$ (7). As a consequence, M effectively distinguishes regions in the image for which we cannot trust the supervision sourced by F_{t→s}, i.e. inconsistent or occluded areas. On such regions, we can leverage our proposed self-distillation mechanism.
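As an illustration (not the released code), the motion probability of Eq. (4)-(5) and the mask composition of Eq. (6)-(7) can be written in a few lines of NumPy; flows are assumed to be (H, W, 2) arrays and the masks binary (H, W) maps as defined above.

```python
import numpy as np

def motion_probability(flow, rigid_flow, eps=1e-6):
    """Per-pixel probability of independent motion, Eq. (4)-(5):
    high when optical flow and rigid flow disagree in direction or norm."""
    dot = np.sum(flow * rigid_flow, axis=-1)
    n1 = np.linalg.norm(flow, axis=-1)
    n2 = np.linalg.norm(rigid_flow, axis=-1)
    cos_theta = dot / (n1 * n2 + eps)                       # direction agreement
    rho = np.minimum(n1, n2) / (np.maximum(n1, n2) + eps)   # norm agreement
    return np.maximum((1.0 - cos_theta) / 2.0, 1.0 - rho)

def compose_mask(P_t, M_dyn, M_bound, xi=0.5):
    """Final mask M of Eq. (7): trust the teacher flow only on pixels that are
    potentially dynamic or consistent, and inside the camera frustum."""
    M_cons = (P_t < xi).astype(np.float32)                  # Eq. (6)
    return np.minimum(np.maximum(M_dyn, M_cons), M_bound)
```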
Then, we define the final total loss for the self-distilled optical flow network as: $L = \sum \alpha_r\,\varphi(SF_{t\to s}, F^{rigid}_{t\to s}) \cdot (1 - M) + \alpha_d\,\varphi(SF_{t\to s}, F_{t\to s}) \cdot M + \psi(I_t, \tilde{I}^{SF}_t) \cdot M$ (8), where φ is a distance function between two motion vectors, while α_r and α_d are two hyper-parameters. Figure 3. Overview of our semantic-aware and self-distilled optical flow estimation approach. We leverage semantic segmentation S_t (a) together with rigid flow F^rigid_{t→s} (b), teacher flow F_{t→s} (c) and motion probabilities P_t (d), the warmer the higher. From (a) we obtain semantic priors M^d_t (e), combined with the boundary mask M^b_t (f) and the consistency mask M^c_t (g) derived from (d) as in Eq. (6), in order to obtain the final mask M (h) as in Eq. (7). 3.3. Motion Segmentation At test time, from the pixel-wise probability P_t computed between SF_{t→s} and F^rigid_{t→s}, the semantic prior M^d_t and a threshold τ, we compute a motion segmentation mask as: $M^{mot}_t = M^d_t \cdot (P_t > \tau),\; \tau \in [0, 1]$ (9). Such a mask allows us to detect moving objects in the scene independently of the camera motion. A qualitative example is depicted in Figure 1(f). 4. Architecture and Training Schedule In this section we present the networks composing ΩNet (highlighted in red in Figure 2) and delineate their training protocol. We set N = 3, using 3-frame sequences. The source code is available at https://github.com/CVLAB-Unibo/omeganet. 4.1. Network architectures We highlight the key traits of each network, referring the reader to the supplementary material for exhaustive details. Depth and Semantic Network (DSNet). We build a single model, since shared reasoning about the two tasks is beneficial to both [90, 16]. To achieve real-time performance, DSNet is inspired by PydNet [62], with several key modifications due to the different goals. We extract a pyramid of features down to 1/32 resolution, estimating a first depth map at the bottom. Then, it is upsampled and concatenated with higher-level features in order to build a refined depth map. We repeat this procedure up to half resolution, where two estimators predict the final depth map D_t and the semantic labels S_t. These are bilinearly upsampled to full resolution. Each conv layer is followed by batch normalization and ReLU, except the prediction layers, and uses reflection padding. DSNet counts 1.93M parameters. Camera Network (CamNet). This network estimates both camera intrinsics and poses between a target I_t and some source views I_s (1 ≤ s ≤ 3, s ≠ t). CamNet differs from previous work by extracting features from I_t and I_s independently with shared encoders. We extract a pyramid of features down to 1/16 resolution for each image and concatenate them to estimate the 3 Euler angles and the 3D translation for each I_s. As in [30], we also estimate the camera intrinsics. Akin to DSNet, we use batch normalization and ReLU after each layer except the prediction layers. CamNet requires 1.77M parameters for pose estimation and 1.02K for the camera intrinsics. Optical Flow Network (OFNet). To pursue real-time performance, we deploy a 3-frame PWC-Net [75] network as in [54], which counts 4.79M parameters. Thanks to our novel training protocol leveraging semantics and self-distillation, our OFNet can outperform other multi-task frameworks [3] built on the same optical flow architecture.
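Before detailing the training schedule, a compact sketch of the self-distillation loss of Eq. (8) and the test-time motion segmentation of Eq. (9) is given below; this is our illustration, not the released code, with φ taken as the L1 distance (as stated in Sec. 4.2) and the photometric term of Eq. (8) left out for brevity.

```python
import numpy as np

def self_distilled_flow_loss(sf, teacher_flow, rigid_flow, M,
                             alpha_r=0.025, alpha_d=0.2):
    """Distillation part of Eq. (8), with phi = L1: supervise the student flow
    `sf` with the rigid flow where the mask M is 0 (untrusted teacher) and
    with the teacher flow where M is 1. The photometric term psi(I_t, I~_t)*M
    of Eq. (8) is omitted in this sketch."""
    M = M[..., None]  # broadcast the (H, W) mask over the flow channels
    l_rigid = np.abs(sf - rigid_flow) * (1.0 - M)
    l_teach = np.abs(sf - teacher_flow) * M
    return (alpha_r * l_rigid + alpha_d * l_teach).mean()

def motion_segmentation(P_t, M_dyn, tau=0.5):
    """Test-time motion mask of Eq. (9): pixels of potentially dynamic classes
    whose motion probability exceeds the threshold tau."""
    return M_dyn * (P_t > tau).astype(np.float32)
```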
4.2. Training Protocol Similarly to [88], we employ a two-stage learning process to facilitate network optimisation. At first, we train DSNet and CamNet simultaneously; then we train OFNet with the self-distillation paradigm described in Sec. 3.2. For both stages, we use a batch size of 4 and resize input images to 640×192 for the KITTI dataset (and to 768×384 for pre-training on Cityscapes), optimizing the output of the networks at the highest resolution only. We also report additional experimental results for different input resolutions where specified. We use the Adam optimizer [46] with β1 = 0.9, β2 = 0.999 and ε = 10^-8. As photometric loss ψ, we employ the same function defined in [28]. When training our networks, we apply losses using as I_s both the previous and the next image of our 3-frame sequence. Finally, we set both τ and ξ to 0.5 in our experiments. Depth, Pose, Intrinsics and Semantic Segmentation. In order to train DSNet and CamNet we employ sequences of 3 consecutive frames and semantic proxy labels yielded by a state-of-the-art architecture [12] trained on Cityscapes with ground-truth labels. We trained DSNet and CamNet for 300K iterations, setting the initial learning rate to 10^-4, manually halved after 200K, 250K and 275K steps. We apply data augmentation to images as in [28]. Training takes 5∼20 hours on a Titan Xp GPU. Optical Flow. We train OFNet by the procedure presented in Sec. 3.2. In particular, we perform 200K training steps with an initial learning rate of 10^-4, halved every 50K steps until convergence. Moreover, we apply strong data augmentation consisting of random horizontal and vertical flips, crops, random time-order switches and, peculiarly, time stop, replacing all I_s with I_t to learn a zero motion vector. This configuration requires about 13 hours on a Titan Xp GPU at the standard 640×192 resolution. We use an L1 loss as φ. Once a competitive network in non-occluded regions is obtained, we train a more robust optical flow network, denoted as SD-OFNet, starting from the pre-learned weights and the same structure as OFNet, by distilling knowledge from OFNet and the rigid flow computed by DSNet using the total mask M and 416×128 random crops applied to F_{t→s}, F^rigid_{t→s}, M and the RGB images. We train SD-OFNet for 15K steps only, with a learning rate of 2.5×10^-5 halved after 5K, 7.5K, 10K and 12.5K steps, setting α_r to 0.025 and α_d to 0.2. At test time, we rely on SD-OFNet only. 5. Experimental results Using standard benchmark datasets, we present here the experimental validation on the main tasks tackled by ΩNet. 5.1. Datasets. We conduct experiments on standard benchmarks such as KITTI and Cityscapes. We do not use feature extractors pre-trained on ImageNet or other datasets. For the sake of space, we report further studies in the supplementary material (e.g. results on pose estimation or generalization). KITTI (K) [27] is a collection of 42,382 stereo sequences taken in urban environments from two video cameras and a LiDAR device mounted on the roof of a car. This dataset is widely used for benchmarking geometric understanding tasks such as depth, flow and pose estimation. Cityscapes (CS) [19] is an outdoor dataset containing stereo pairs taken from a moving vehicle in various weather conditions. This dataset features higher-resolution and higher-quality images.
While sharing similar settings, this dataset contains more dynamic scenes compared to KITTI. It consists of 22,973 stereo pairs at 2048×1024 resolution; 2,975 and 500 images come with fine semantic annotations. 5.2. Monocular Depth Estimation In this section, we compare our results to other state-of-the-art proposals and assess the contribution of each component to the quality of our monocular depth predictions. Comparison with state-of-the-art. We compare with state-of-the-art self-supervised networks trained on monocular videos according to the protocol described in [24]. We follow the same pre-processing procedure as [95] to remove static images from the training split, while using all the 697 images for testing. LiDAR points provided in [27] are reprojected on the left input image to obtain ground-truth labels for evaluation, up to 80 meters [25]. Since the predicted depth is defined up to a scale factor, we align the scale of our estimates by multiplying them by a scalar that matches the median of the ground truth, as introduced in [95]. We adopt the standard performance metrics defined in [24]. Table 1 reports an extensive comparison with respect to several monocular depth estimation methods. We outperform our main competitors such as [88, 96, 17, 3] that address multi-task learning, as well as other strategies that exploit additional information during the training/testing phase [9, 83]. Moreover, our best configuration, i.e. pre-training on CS and using 1024×320 resolution, achieves better results in 5 out of 7 metrics with respect to the single-task, state-of-the-art proposal [29] (and is the second best and very close to it on the remaining 2), which, however, leverages a larger ImageNet pre-trained model based on ResNet-18. It is also interesting to note how our proposal without pre-training obtains the best performance in 6 out of 7 measures on 640×192 images (row 1 vs 15). These results validate our intuition about how the use of semantic information can guide geometric reasoning and make a compact network provide state-of-the-art performance even with respect to larger and highly specialized depth-from-mono methods. Ablation study. Table 2 highlights how progressively adding the key innovations proposed in [30, 29, 77] contributes to strengthening ΩNet, already comparable to other methodologies even in its baseline configuration (first row). Interestingly, a large improvement is achieved by deploying joint depth and semantic learning (rows 5 vs 7), which forces the network to simultaneously reason about geometry and content within the same shared features. By replacing DSNet within ΩNet with a larger backbone [88] (rows 5 vs 6) we obtain worse performance, validating the design decisions behind our compact model. Finally, by pre-training on CS we achieve the best accuracy, which increases alongside the input resolution (rows 8 to 10). Depth Range Error Analysis. We dig into our depth evaluation to explain the effectiveness of ΩNet with respect to much larger networks. Table 3 compares, at different depth ranges, our model with more complex ones [29, 88]. This experiment shows how ΩNet's superior performance comes from better estimation of large depths: ΩNet outperforms both competitors when we include distances larger than 8 m in the evaluation, while it turns out less effective in the close range.
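The evaluation protocol described above (median scaling followed by the standard metrics of [24]) is straightforward to reproduce; the sketch below is our own illustration of it, not the official evaluation code.

```python
import numpy as np

def eigen_metrics(pred, gt, cap=80.0):
    """Median-scale the prediction to the ground truth, then compute the
    standard depth metrics of [24] on valid LiDAR points up to `cap` meters."""
    valid = (gt > 0) & (gt < cap)
    pred, gt = pred[valid], gt[valid]

    pred = pred * (np.median(gt) / np.median(pred))   # scale alignment as in [95]
    pred = np.clip(pred, 1e-3, cap)

    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": np.mean(np.abs(gt - pred) / gt),
        "sq_rel": np.mean((gt - pred) ** 2 / gt),
        "rmse": np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "a1": np.mean(thresh < 1.25),
        "a2": np.mean(thresh < 1.25 ** 2),
        "a3": np.mean(thresh < 1.25 ** 3),
    }
```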
Table 1. Depth evaluation on the Eigen split [24] of KITTI [26]. We indicate additional features of each method (M: multi-task learning, A: additional information such as object knowledge or semantics, I: feature extractors pre-trained on ImageNet [20], CS: network pre-trained on Cityscapes [19]) and report Abs Rel, Sq Rel, RMSE, RMSE log (lower is better) and δ<1.25, δ<1.25², δ<1.25³ (higher is better). Among the entries, ΩNet at 640×192 without pre-training scores 0.126 Abs Rel, improving to 0.118 Abs Rel, 0.748 Sq Rel, 4.608 RMSE, 0.186 RMSE log and 0.865/0.961/0.985 δ accuracies at 1024×320 with CS pre-training, compared to 0.115 Abs Rel and 4.701 RMSE for Godard et al. [29] at 1024×320. Table 2. Ablation study of our depth network on the Eigen split [24] of KITTI, progressively enabling learned intrinsics [30], depth normalization [77], minimum reprojection and automasking [29], semantics [12] and pre-training: Abs Rel improves from 0.139 (baseline, 640×192) to 0.118 (1024×320, pre-trained); †: our network replaced by a ResNet50 backbone [88], which performs worse (0.134 Abs Rel). Table 3. Depth errors by varying the evaluation range (cap at 8, 50 and 80 m) for Godard et al. [29], ΩNet† and ΩNet; †: our network replaced by a ResNet50 backbone [88]. ΩNet is slightly behind in the 0-8 m range (0.062 vs 0.059 Abs Rel) but best at 0-50 m (0.124) and 0-80 m (0.126 vs 0.132 and 0.134). 5.3. Semantic Segmentation In Table 4, we report the performance of ΩNet on semantic segmentation for the 19 evaluation classes of CS according to the metrics defined in [19, 4]. We compare ΩNet
against state-of-the-art networks for real-time semantic segmentation [11, 50] when training on CS and testing either on the validation set of CS (rows 1-3) or the 200 semantically annotated images of K (rows 4-6). Table 4. Semantic segmentation on Cityscapes (CS) and KITTI (K); S: training on ground truth, P: training on proxy labels; columns: mIoU class, mIoU category, pixel accuracy. DABNet [50], CS(S)→CS: 69.62, 87.56, 94.62; FCHardNet [11], CS(S)→CS: 76.37, 89.22, 95.35; ΩNet, CS(P)→CS: 54.80, 82.92, 92.50; DABNet [50], CS(S)→K: 35.40, 61.49, 80.50; FCHardNet [11], CS(S)→K: 44.74, 68.20, 72.07; ΩNet, CS(P)→K: 43.80, 74.31, 88.31; ΩNet, CS(P)+K(P)→K: 46.68, 75.84, 88.12. Even though our network is not as effective as the considered methods when training and testing on the same dataset, it shows greater generalization capabilities to unseen domains: it significantly outperforms other methods when testing on K for mIoU category and pixel accuracy, and provides results similar to [11] for mIoU class. We relate this ability to our training protocol based on proxy labels (P) instead of ground truths (S). We validate this hypothesis with thorough ablation studies reported in the supplementary material. Moreover, as we have already effectively distilled the knowledge from DPC [12] during pre-training on CS, there is only a slight benefit in training on both CS and K (with proxy labels only) and testing on K (row 7). Finally, although achieving 46.68 mIoU on fine segmentation, we obtain 89.64 mIoU for the task of segmenting static from potentially dynamic classes, an important result to obtain accurate motion masks. 5.4. Optical Flow In Table 5, we compare the performance of our optical flow network with competing methods using the KITTI 2015 stereo/flow training set [26] as testing set, which contains 200 ground-truth optical flow measurements for evaluation. Table 5. Optical flow evaluation on the KITTI 2015 dataset (train Noc/All errors and F1, plus online test F1); †: pre-trained on ImageNet, SYN: pre-trained on SYNTHIA [71], *: trained on stereo pairs, **: using stereo at testing time. Among the entries, ΩNet (ego-motion only) scores 11.72/13.50 (Noc/All) with 51.22% F1, OFNet 3.48/11.61 with 25.78% F1, and SD-OFNet 3.29/5.39 with 20.0% train F1 and 19.47% test F1, whereas the best competing entries trained on K only report around 4.86-5.40 Noc error [17]. We exploit all the raw K images for training, but we exclude the images used at testing time as done in [96], to be consistent with the experimental results of previous self-supervised optical flow strategies [88, 96, 17, 3]. From the table, we can observe how our self-distillation strategy allows SD-OFNet to outperform by a large margin competitors trained on K only (rows 5-11), and it even performs better than models pre-initialized by training on synthetic datasets [71]. Moreover, we submitted our flow predictions to the online KITTI flow benchmark after retraining the network including images from the whole official training set.
In this configuration, we can observe how our model achieves state-of-the-art F1 performance with respect to other monocular multi-task architectures. 5.5. Motion Segmentation In Table 6 we report experimental results for the motion segmentation task on the KITTI 2015 dataset, which provides 200 images manually annotated with motion labels for the evaluation. We compare our methodology with other state-of-the-art strategies that perform multi-task learning and motion segmentation [3, 56, 79], using the metrics and evaluation protocol proposed in [56]. Table 6. Motion segmentation evaluation on the KITTI 2015 dataset (pixel accuracy, mean accuracy, mean IoU, f.w. IoU); *: trained on stereo pairs, **: using stereo at testing time. Yang et al. [86]*: 0.89, 0.75, 0.52, 0.87; Luo et al. [56]: 0.88, 0.63, 0.50, 0.86; Luo et al. [56]*: 0.91, 0.76, 0.53, 0.87; Wang et al. [79] (Full)**: 0.90, 0.82, 0.56, 0.88; Ranjan et al. [3]: 0.87, 0.79, 0.53, 0.85; ΩNet: 0.98, 0.86, 0.75, 0.97; ΩNet (Proxy [12]): 0.98, 0.87, 0.77, 0.97. It can be noticed how our segmentation strategy outperforms all the other existing methodologies by a large margin. This demonstrates the effectiveness of our proposal to jointly combine semantic reasoning and motion probability to obtain much better results. We also report, as an upper bound, the accuracy enabled by injecting semantic proxies [12] in place of ΩNet semantic predictions, to highlight the low margin between the two. 5.6. Runtime analysis Finally, we measure the runtime of ΩNet on different hardware devices, i.e. a Titan Xp GPU, an embedded NVIDIA Jetson TX2 board and an Intel i7-7700K@4.2 GHz CPU. Timings are averaged over 200 frames at 640×192 resolution. Table 7. Runtime analysis on different devices; we report the power consumption in Watt and the FPS (D: Depth, S: Semantic, OF: Optical Flow, Cam: camera pose, Ω: overall architecture), with columns Watt, D, DS, OF, Cam, Ω. Jetson TX2: 15, 12.5, 10.3, 6.5, 49.2, 4.5; i7-7700K: 91, 5.0, 4.2, 4.9, 31.4, 2.4; Titan Xp: 250, 170.2, 134.1, 94.1, 446.7, 57.4. Moreover, as each component of ΩNet may be used on its own, we report the runtime for each independent task. As summarized in Table 7, our network runs in real time on the Titan Xp GPU and at about 2.5 FPS on a standard CPU. It also fits the low-power NVIDIA Jetson TX2, achieving 4.5 FPS to compute all the outputs. Additional experiments are available in the supplementary material. 6. Conclusions In this paper, we have proposed the first real-time network for comprehensive scene understanding from monocular videos. Our framework reasons jointly about geometry, motion and semantics in order to accurately estimate depth, optical flow, semantic segmentation and motion masks at about 60 FPS on a high-end GPU and 5 FPS on embedded systems. To address the above multi-task problem we have proposed a novel learning procedure based on distillation of proxy semantic labels and semantic-aware self-distillation of optical-flow information. Thanks to this original paradigm, we have demonstrated state-of-the-art performance on standard benchmark datasets for depth and optical flow estimation as well as for motion segmentation. As for future research, we find it intriguing to investigate whether and how it would be possible to self-adapt ΩNet online. Although some very recent works have explored this topic for depth-from-mono [9] and optical flow [17], the key issue with our framework would be to conceive a strategy to deal with semantics. Acknowledgement.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.", "introduction": "What information would an autonomous agent be keen to gather from its sensory sub-system to tackle tasks like navigation and interaction with the explored environment? It would need to be informed about the geometry of the surroundings and the type of objects therein, and likely better know which of the latter are actually moving and how they do so. What if all such cues could be provided by as simple a sensor as a single RGB camera? Nowadays, deep learning is advancing the state-of-the-art in classical computer vision problems at such a quick pace that single-view holistic scene understanding seems to be no longer out of reach. Indeed, highly challenging problems such as monocular depth estimation and optical flow can nowadays be addressed successfully by deep neural networks, often through unified architectures [88, 3, 96]. Self-supervised learning techniques have yielded further major achievements [95, 58] by enabling effective training of deep networks without annotated images. In fact, labels are hard to source for depth estimation due to the need for active sensors and manual filtering, and are even more cumbersome in the case of optical flow. Concurrently, semi-supervised approaches [90, 16] proved how a few semantically labelled images can improve monocular depth estimation significantly. These works have also highlighted how, while producing per-pixel class labels is tedious yet feasible for a human annotator, manually endowing images with depth and optical flow ground truths is prohibitive. In this paper, we propose the first-ever framework for comprehensive scene understanding from monocular videos. As highlighted in Figure 1, our multi-stage network architecture, named ΩNet, can predict depth, semantics, optical flow, per-pixel motion probabilities and motion masks. This comes alongside estimating the pose between adjacent frames for an uncalibrated camera, whose intrinsic parameters are also estimated. Our training methodology leverages self-supervision, knowledge distillation and multi-task learning. In particular, peculiar to our proposal and key to performance is the distillation of proxy semantic labels gathered from a state-of-the-art pre-trained model [52] within a self-supervised and multi-task learning procedure addressing depth, optical flow and motion segmentation. Our training procedure also features a novel and effective self-distillation schedule for optical flow mostly aimed at handling occlusions and relying on tight integration of rigid flow, motion probabilities and semantics. Moreover, ΩNet is lightweight, counting less than 8.5M parameters, and fast, as it can run at nearly 60 FPS and 5 FPS on an NVIDIA Titan Xp and a Jetson TX2, respectively.
As vouched by thorough experiments, the main contributions of our work can be summarized as follows: • The first real-time network for joint prediction of depth, optical flow, semantics and motion segmentation from monocular videos • A novel training protocol relying on proxy semantics and self-distillation to effectively address the self-supervised multi-task learning problem • State-of-the-art self-supervised monocular depth estimation, largely improving accuracy at long distances • State-of-the-art optical flow estimation among monocular multi-task frameworks, thanks to our novel occlusion-aware and semantically guided training paradigm • State-of-the-art motion segmentation by joint reasoning about optical flow and semantics" }, { "url": "http://arxiv.org/abs/1904.04144v1", "title": "Learning monocular depth estimation infusing traditional stereo knowledge", "abstract": "Depth estimation from a single image represents a fascinating, yet challenging problem with countless applications. Recent works proved that this task could be learned without direct supervision from ground truth labels leveraging image synthesis on sequences or stereo pairs. Focusing on this second case, in this paper we leverage stereo matching in order to improve monocular depth estimation. To this aim we propose monoResMatch, a novel deep architecture designed to infer depth from a single input image by synthesizing features from a different point of view, horizontally aligned with the input image, performing stereo matching between the two cues. In contrast to previous works sharing this rationale, our network is the first trained end-to-end from scratch. Moreover, we show how obtaining proxy ground truth annotation through traditional stereo algorithms, such as Semi-Global Matching, enables more accurate monocular depth estimation still countering the need for expensive depth labels by keeping a self-supervised approach. Exhaustive experimental results prove how the synergy between i) the proposed monoResMatch architecture and ii) proxy-supervision attains state-of-the-art for self-supervised monocular depth estimation. The code is publicly available at https://github.com/fabiotosi92/monoResMatch-Tensorflow.", "authors": "Fabio Tosi, Filippo Aleotti, Matteo Poggi, Stefano Mattoccia", "published": "2019-04-08", "updated": "2019-04-08", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "In this section, we review the literature relevant to our work concerned with stereo/monocular depth estimation and proxy label distillation. Stereo depth estimation. Most conventional dense stereo algorithms rely on some or all of the well-known four steps thoroughly described in [46]. In this field, SGM [15] stood out for its excellent trade-off between accuracy and efficiency, thus becoming very popular. Žbontar and LeCun [61] were the first to apply deep learning to stereo vision, replacing the conventional matching cost calculation with a siamese CNN trained to predict the similarity between patches. Luo et al. [29] cast the correspondence problem as a multi-class classification task, obtaining better results. Mayer et al. [34] backed away from the previous approaches and proposed an end-to-end trainable network called DispNetC able to infer disparity directly from images. While DispNetC applies a 1-D correlation to mimic the cost volume, GCNet by Kendall et al.
[17] exploited 3D convolutions over a 4-D volume to obtain matching costs and finally applied a differentiable version of argmin to select the best disparity along this volume. Other works followed these two main strategies, building more complex architectures starting from DispNetC [37, 25, 57, 47] or GCNet [3, 26, 18], respectively. The domain shift issue affecting these architectures (e.g. synthetic to real) has been addressed in either offline [49] or online [50] fashion, or greatly reduced by guiding them with external depth measurements (e.g. LiDAR) [42]. Monocular depth estimation. Before the deep learning era, some works tackled depth-from-mono with MRFs [45] or boosted classifiers [22]. However, with the increasing availability of ground truth depth data, supervised approaches based on CNNs [23, 27, 56, 7] rapidly outperformed previous techniques. An attractive trend concerns the possibility of learning depth-from-mono in a self-supervised manner, avoiding the need for expensive ground truth depth labels, which are replaced by multiple views of the sensed scene. Then, supervision signals can be obtained by image synthesis according to the estimated depth, camera pose or both. In general, acquiring images from a stereo camera enables more effective training than using a single, moving camera, since the pose between frames is known. Concerning stereo supervision, Garg et al. [8] first followed this approach, while Godard et al. [11] introduced a spatial transformer network [16] and a left-right consistency loss. Other methods improved efficiency [40], deploying a pyramidal architecture, and accuracy, by simulating a trinocular setup [44] or including joint semantic segmentation [60]. In [38], a strategy was proposed to further improve the energy efficiency of [40] leveraging fixed-point quantization. The semi-supervised framework by Kuznietsov et al. [21] combined stereo supervision with sparse LiDAR measurements. The work by Zhou et al. [63] represents the first attempt to supervise a depth-from-mono framework with single-camera sequences. This approach was improved by including additional cues such as point-cloud alignment [32], differentiable DVO [53] and multi-task learning [64]. Zhan et al. [62] combined the two supervision approaches outlined so far by deploying stereo sequences. Another class of methods [2, 1, 5] applied a generative adversarial paradigm to the monocular scenario. Finally, relevant to our work is Single View Stereo matching (SVS) [30], processing a single image to obtain a second synthetic view using Deep3D [55] and then computing a disparity map between the two using DispNetC [34]. However, these two architectures are trained independently. Moreover, DispNetC is supervised with ground truth labels from synthetic [34] and real domains [35]. Differently, the framework we are going to introduce requires no ground truth at all and is elegantly trained in an end-to-end manner, outperforming SVS by a notable margin. Proxy labels distillation. Since for most tasks ground truth labels are difficult and expensive to source, some
works recently enquired about the possibility to replace them with easier-to-obtain proxy labels. Tonioni et al. [49] proposed to adapt deep stereo networks to unseen environments leveraging traditional stereo algorithms and confidence measures [43], Tosi et al. [51] learned confidence estimation selecting positive and negative matches by means of traditional confidence measures, while Makansi et al. [33] and Liu et al. [28] generated proxy labels for training optical flow networks using conventional methods. Specifically relevant to monocular depth estimation are the works proposed by Yang et al. [58], using stereo visual odometry to train monocular depth estimation, by Klodt and Vedaldi [20], leveraging structure-from-motion algorithms, and by Guo et al. [13], obtaining labels from a deep network trained with supervision to infer disparity maps from stereo pairs. Figure 2. Illustration of our monoResMatch architecture. Given one input image, the multi-scale feature extractor (in red) generates high-level representations in the first stage. The initial disparity estimator (in blue) yields multi-scale disparity maps aligned with the left and right frames of a stereo pair. The disparity refinement module (in orange) is in charge of refining the initial left disparity relying on features computed in the first stage, disparities generated in the second stage, matching costs between high-dimensional features F^0_L extracted from the input and synthetic features F^0_R generated from a virtual right viewpoint, together with the absolute error e_L between F^0_L and the back-warped synthetic features (see Section 3.3). 3. Monocular Residual Matching In this section, we describe in detail the proposed monocular Residual Matching (monoResMatch) architecture designed to infer accurate and dense depth estimates in a self-supervised manner from a single image. Figure 2 recaps the three key components of our network. First, a multi-scale feature extractor takes as input a single raw image and computes deep learnable representations at different scales, from quarter resolution F^2_L to full resolution F^0_L, in order to toughen the network against ambiguities in photometric appearance. Second, deep high-dimensional features at input image resolution are processed to estimate, through an hourglass structure with skip-connections, multi-scale inverse depth (i.e., disparity) maps aligned with the input and with a virtual right view learned during training. By doing so, our network learns to emulate a binocular setup, thus allowing further processing in the stereo domain [30]. Third, a disparity refinement stage estimates residual corrections to the initial disparity. In particular, we use deep features from the first stage and back-warped features of the virtual right image to construct a cost volume that stores the stereo matching costs using a correlation layer [34]. Our entire architecture is trained from scratch in an end-to-end manner, while SVS [30] is built by training its two main components, Deep3D [55] and DispNetC [34], on image synthesis and disparity estimation tasks separately (with the latter requiring additional, supervised depth labels from synthetic imagery [34]). Extensive experimental results will prove that monoResMatch enables much more accurate estimations compared to SVS and other state-of-the-art approaches. 3.1. Multi-scale feature extractor Inspired by [25], given one input image I we generate deep representations using layers of convolutional filters.
In particular, the first 2-stride layer convolves I with 64 learnable filters of size 7 × 7, followed by a second 2-stride convolutional layer composed of 128 filters with kernel size 4 × 4. Two deconvolutional blocks, with stride 2 and 4, are deployed to upsample features from lower spatial resolution to full input resolution, producing 32 feature maps each. A 1×1 convolutional layer with stride 1 further processes the upsampled representations. 3.2. Initial Disparity Estimation Given the features extracted by the first module, this component is in charge of estimating an initial disparity map. In particular, an encoder-decoder architecture inspired by DispNet processes deep features at quarter resolution from the multi-scale feature extractor (i.e., conv2) and outputs disparity maps at different scales, specifically from 1/128 to full resolution. Each down-sampling module, composed of two convolutional blocks with stride 2 and 1 each, produces a growing number of extracted features, respectively 64, 128, 256, 512, 1024, and each convolutional layer uses 3 × 3 kernels followed by ReLU non-linearities. Differently from DispNet, which computes matching costs in the early part of this stage using features from the left and right images of a stereo pair, our architecture lacks the information required to compute a cost volume, since it processes a single input image. Thus, no 1-D correlation layer can be imposed to encode geometrical constraints in this stage of our network. Then, upsampling modules are deployed to enrich feature representations through skip-connections and to extract two disparity maps, aligned respectively with the input frame and with a virtual viewpoint on its right, as in [11]. This process is carried out at each scale using 1-stride convolutional layers with kernel size 3 × 3. 3.3. Disparity Refinement Given an initial estimate of the disparity at each scale obtained in the second part of the network, often characterized by errors at depth discontinuities and occluded regions, this stage predicts corresponding multi-scale residual signals [14] through a few stacked nonlinear layers, which are then used to compute the final left-view-aligned disparity map. This strategy allows us to simplify the end-to-end learning process of the entire network. Moreover, motivated by [30], we believe that geometrical constraints can play a central role in boosting the final depth accuracy. For this reason, we embed matching costs in feature space, computed by employing a horizontal correlation layer, typically deployed in deep stereo algorithms. To this end, we rely on the right-view disparity map computed previously to generate right-view features from the left ones F^0_L using a differentiable bilinear sampler [16]. The network is also fed with the error e_L, i.e. the absolute difference between left and virtual right features at input resolution, with the latter back-warped at the same coordinates as the former, as in [24]. We point out once more that, differently from [30], our architecture both produces a synthetic right view, i.e. its feature representation, and computes the final disparity map following a stereo rationale. This makes monoResMatch a single end-to-end architecture, effectively performing stereo out of a single input view rather than the combination of two models (i.e., Deep3D [55] and DispNetC [34] for the two tasks outlined) trained independently as in [31].
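For concreteness, the horizontal (1-D) correlation used to build the cost volume in this refinement stage could be sketched as follows; this is our NumPy illustration under assumed (H, W, C) feature shapes and a hypothetical max_disp search range, not the released TensorFlow implementation.

```python
import numpy as np

def horizontal_correlation(feat_left, feat_right, max_disp=40):
    """1-D correlation layer in the style of DispNetC cost volumes: for each
    candidate disparity d, correlate left features with the right features
    shifted by d pixels along the horizontal axis.

    feat_left, feat_right : (H, W, C) feature maps
    Returns a (H, W, max_disp + 1) cost volume.
    """
    H, W, C = feat_left.shape
    volume = np.zeros((H, W, max_disp + 1), dtype=np.float32)
    for d in range(max_disp + 1):
        if d == 0:
            corr = np.sum(feat_left * feat_right, axis=-1)
        else:
            corr = np.zeros((H, W), dtype=np.float32)
            # a pixel at column x in the left view matches column x - d on the right
            corr[:, d:] = np.sum(feat_left[:, d:] * feat_right[:, :-d], axis=-1)
        volume[:, :, d] = corr / C   # normalize by the number of channels
    return volume
```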
Moreover, exhaustive experiments will highlight the superior accuracy achieved by our fully self-supervised, end-to-end approach. 3.4. Training Loss In order to train our multi-stage architecture, we define the total loss as the sum of two main contributions, an L_init term from the initial disparity estimation module and an L_ref term from the disparity refinement stage. Following [12], we embrace the idea of up-sampling the predicted low-resolution disparity maps to the full input resolution and then computing the corresponding signals. This simple strategy is designed to force the inverse depth estimation to reproduce the same objective at each scale, thus leading to much better outcomes. In particular, we obtain the final training loss as: $L_{total} = \sum_{s=1}^{n_i} L_{init} + \sum_{s=1}^{n_r} L_{ref}$ (1), where s indicates the output resolution and n_i and n_r the numbers of scales considered during loss computation, while L_init and L_ref are formalised as: $L_{init} = \alpha_{ap}(L^l_{ap} + L^r_{ap}) + \alpha_{ds}(L^l_{ds} + L^r_{ds}) + \alpha_{ps}(L^l_{ps} + L^r_{ps})$ (2), $L_{ref} = \alpha_{ap} L^l_{ap} + \alpha_{ds} L^l_{ds} + \alpha_{ps} L^l_{ps}$ (3), where L_ap is an image reconstruction loss, L_ds is a smoothness term and L_ps is a proxy-supervised loss. Each term contains both the left and right components for the initial disparity estimator, and the left components only for the refinement stage. Image reconstruction loss. A linear combination of the L1 loss and the structural similarity measure (SSIM) [54] encodes the quality of the reconstructed image Î with respect to the original image I: $L_{ap} = \frac{1}{N} \sum_{i,j} \alpha\, \frac{1 - SSIM(I_{ij}, \hat{I}_{ij})}{2} + (1 - \alpha)\,|I_{ij} - \hat{I}_{ij}|$ (4). Following [11], we set α = 0.85 and use SSIM with a 3 × 3 block filter. Disparity smoothness loss. This cost encourages the predicted disparity to be locally smooth. Disparity gradients are weighted by an edge-aware term from the image domain: $L_{ds} = \frac{1}{N} \sum_{i,j} |\partial_x d_{ij}|\, e^{-|\partial_x I_{ij}|} + |\partial_y d_{ij}|\, e^{-|\partial_y I_{ij}|}$ (5). Proxy-supervised loss. Given the proxy disparity maps obtained by a conventional stereo algorithm, detailed in Section 4, we coach the network using the reverse Huber (berHu) loss [36]: $L_{ps} = \frac{1}{N} \sum_{i,j} berHu(d_{ij}, d^{st}_{ij}, c)$ (6), $berHu(d_{ij}, d^{st}_{ij}, c) = \begin{cases} |d_{ij} - d^{st}_{ij}| & \text{if } |d_{ij} - d^{st}_{ij}| \le c \\ \frac{|d_{ij} - d^{st}_{ij}|^2 - c^2}{2c} & \text{otherwise} \end{cases}$ (7), where d_ij and d^st_ij are, respectively, the predicted disparity and the proxy annotation for the pixel at coordinates i, j of the image, while c is adaptively set as $\alpha \max_{i,j} |d_{ij} - d^{st}_{ij}|$, with α = 0.2. Figure 3. Examples of proxy labels computed by SGM. Given the source image (a), the network exploits the SGM supervision filtered with a left-right consistency check (b) in order to train monoResMatch to estimate the final disparity map (c). No post-processing from [11] is performed on (c) in this example. 4. Proxy labels distillation To generate accurate proxy labels, we use the popular SGM algorithm [15], a fast yet effective solution to infer depth from a rectified stereo pair without training. In our implementation, initial matching costs are computed for each pixel p and disparity hypothesis d by applying a 9×7 census transform and computing the Hamming distance on pixel strings. Then, scanline optimization along eight different paths refines the initial cost volume as follows: $E(p, d) = C(p, d) + \min_{j>1}\left[ C(p', d),\, C(p', d \pm 1) + P_1,\, C(p', d \pm q) + P_2 \right] - \min_k$