diff --git "a/abs_29K_G/test_abstract_long_2405.01270v1.json" "b/abs_29K_G/test_abstract_long_2405.01270v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01270v1.json" @@ -0,0 +1,106 @@ +{ + "url": "http://arxiv.org/abs/2405.01270v1", + "title": "The Importance of Model Inspection for Better Understanding Performance Characteristics of Graph Neural Networks", + "abstract": "This study highlights the importance of conducting comprehensive model\ninspection as part of comparative performance analyses. Here, we investigate\nthe effect of modelling choices on the feature learning characteristics of\ngraph neural networks applied to a brain shape classification task.\nSpecifically, we analyse the effect of using parameter-efficient, shared graph\nconvolutional submodels compared to structure-specific, non-shared submodels.\nFurther, we assess the effect of mesh registration as part of the data\nharmonisation pipeline. We find substantial differences in the feature\nembeddings at different layers of the models. Our results highlight that test\naccuracy alone is insufficient to identify important model characteristics such\nas encoded biases related to data source or potentially non-discriminative\nfeatures learned in submodels. Our model inspection framework offers a valuable\ntool for practitioners to better understand performance characteristics of deep\nlearning models in medical imaging.", + "authors": "Nairouz Shehata, Carolina Pi\u00e7arra, Anees Kazi, Ben Glocker", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Graph AND Structure AND Learning", + "gt": "This study highlights the importance of conducting comprehensive model\ninspection as part of comparative performance analyses. Here, we investigate\nthe effect of modelling choices on the feature learning characteristics of\ngraph neural networks applied to a brain shape classification task.\nSpecifically, we analyse the effect of using parameter-efficient, shared graph\nconvolutional submodels compared to structure-specific, non-shared submodels.\nFurther, we assess the effect of mesh registration as part of the data\nharmonisation pipeline. We find substantial differences in the feature\nembeddings at different layers of the models. Our results highlight that test\naccuracy alone is insufficient to identify important model characteristics such\nas encoded biases related to data source or potentially non-discriminative\nfeatures learned in submodels. Our model inspection framework offers a valuable\ntool for practitioners to better understand performance characteristics of deep\nlearning models in medical imaging.", + "main_content": "INTRODUCTION Understanding biological sex-based differences in brain anatomy provides valuable insights into both neurodevelopmental processes and cognitive functioning. Recent strides in the field of geometric deep learning [1], particularly the advent of Graph Neural Networks (GNNs), have revolutionised the analysis of complex, non-Euclidean data [2] to make predictions at a node, edge, or graph-level. This allows us to treat brain shapes as graphs, leveraging the power of GNNs to learn from complex structural anatomical data [3]. Discriminative feature embeddings can be withdrawn from these models, representing brain shapes as a continuous vector of numerical features that capture valuable structural and geometrical information for downstream prediction tasks [4]. 
Techniques like Principal Component Analysis (PCA) can be used to reduce the dimensionality of graph embeddings for visualisation, aiding the exploration of subgroup biases in the feature space beyond the target label. This analysis may help practitioners ensure the reliability of their predictions, and is particularly important in applications where GNN feature embeddings may be leveraged for new tasks, such as fine-tuning, domain transfer, or multi-modal approaches. In this study, we dissect GNN models trained under different settings for the task of sex classification using 3D meshes of segmented brain structures. We inspect the learned feature embeddings at different layers within a multi-graph neural network architecture. Through this granular analysis, we reveal critical insights into the inner workings of our models, identifying important effects of different modelling choices. This research demonstrates the utility of conducting a model inspection framework as part of model development, highlighting insights that may guide practitioners in the selection of models with desired characteristics, avoiding biases, overfitting and better understanding the driving forces behind predictions. 2. METHODS 2.1. Imaging datasets We used four neuroimaging datasets, including data from the UK Biobank imaging study (UKBB, accessed under application 12579) [5], the Cambridge Centre for Ageing and Neuroscience study (CamCAN) [6, 7], the IXI dataset (https://brain-development.org/ixi-dataset/), and OASIS3 [8]. Both UKBB and CamCAN brain MRI data were acquired with Siemens 3T scanners. The IXI dataset encompassed data collected from three clinical sites, each employing different scanning systems. CamCAN and IXI are acquired from healthy volunteers, while UKBB is an observational population study. The OASIS3 dataset consists of 716 subjects with normal cognitive function and 318 patients exhibiting varying stages of cognitive decline. For all four datasets, subjects with missing biological sex or age information were excluded. Data from UKBB was split into three sets, with 9,900 scans used for training, 1,099 for validation, and 2,750 for testing. CamCAN, IXI and OASIS3 were used as external test sets, with sample sizes of 652, 563, and 1,034, respectively. The UKBB data is provided with a comprehensive preprocessing already applied, using FSL FIRST [9] to automatically segment 15 subcortical brain structures from T1-weighted brain MRI, including the brain stem, left/right thalamus, caudate, putamen, pallidum, hippocampus, amygdala, and accumbens-area. We apply our own pre-processing pipeline to the CamCAN, IXI, and OASIS3 datasets, closely resembling the UKBB pre-processing. Our pipeline includes skull stripping using ROBEX (https://www.nitrc.org/projects/robex) [10], bias field correction using N4ITK [11], and brain segmentation via FSL FIRST. 2.2. Graph representation The anatomical brain structures are represented by meshes as an undirected graph composed of nodes, connected by edges forming triangular faces. The number of nodes for most structures is 642 and up to 1,068, whereas the number of edges per structure ranges between 3,840 and 6,396. The meshes are automatically generated by the FSL FIRST tool. 2.2.1. Node features Each graph node can carry additional information, encoded as feature vectors. This can include spatial coordinates or more complex geometric descriptors.
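As a concrete illustration of this graph representation, the sketch below shows one way a triangulated structure mesh could be converted into an undirected graph with per-node features for use with PyTorch Geometric. This is an illustrative sketch rather than the authors' code; the function name and the assumption that vertices, faces and node features are available as NumPy arrays are ours.

```python
import numpy as np
import torch
from torch_geometric.data import Data
from torch_geometric.utils import to_undirected

def mesh_to_graph(vertices: np.ndarray, faces: np.ndarray, node_feats: np.ndarray) -> Data:
    """Convert a triangular surface mesh into an undirected graph.

    vertices: (V, 3) node coordinates, faces: (F, 3) vertex indices per triangle,
    node_feats: (V, d) per-node feature vectors (e.g. coordinates or FPFH descriptors).
    """
    # Each triangle (i, j, k) contributes the edges (i, j), (j, k) and (k, i).
    edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], axis=0)
    edge_index = to_undirected(torch.as_tensor(edges.T, dtype=torch.long),
                               num_nodes=len(vertices))  # adds reverse edges, removes duplicates
    return Data(x=torch.as_tensor(node_feats, dtype=torch.float),
                edge_index=edge_index,
                pos=torch.as_tensor(vertices, dtype=torch.float))
```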
While computer vision has transitioned from hand-crafted features to end-to-end deep learning, we have previously demonstrated the value of using geometric feature descriptors in GNN-based shape classification [12]. We employ Fast Point Feature Histograms (FPFH) [13], a pose invariant feature descriptor shown to substantially boost classification performance. To compute FPFH features on a mesh, a point feature histogram is first generated, involving the selection of neighboring points within a defined radius around each query point. The Darboux frame is subsequently defined, and angular variations are computed. This process involves several steps, including the estimation of normals and the calculation of angular variations, resulting in a vector of 33 features at each node. 2.2.2. Mesh registration Mesh registration is an optional pre-processing step, with the goal to remove spatial variability across subjects and datasets. Here, we investigate the use of rigid registration aligning all meshes for a specific brain structure to a standardised orientation using the closed-form Umeyama approach [14]. This method employs a singular value decomposition-based optimisation to obtain an optimal rigid transformation between two given meshes. For each of the 15 brain structures, we select a reference mesh from a random subject from the UKBB dataset, and align the meshes from all other subjects to this reference. As a result, shape variability due to orientation and position differences is minimised and the remaining variability is expected to primarily represent anatomical differences across subjects. [Fig. 1: Model architecture consisting of a graph convolutional network (GCN) submodel feeding graph embeddings into a classification head with two fully connected layers (FC1 and FC2), where N is the number of brain substructures (15). For our model inspection, we read out the feature vectors from the GCN submodel, FC1, and FC2.] 2.3. Multi-graph neural network architecture Our general GNN architecture is comprised of two main components; the GCN submodel which aims to learn graph embeddings over 3D meshes using multiple graph convolutional layers [12] and an MLP classification head that takes the graph embeddings as inputs and performs the final classification using two fully connected layers (cf. Fig. 1). The input to our models are 15 subgraphs representing 15 brain structures, extracted from T1-weighted brain scans. We consider two approaches for learning graph embeddings with GCN submodels. The first approach, referred to as shared submodel, uses a single GCN submodel that learns from all 15 subgraphs. Here, the weights of the graph convolutional layers are shared across brain structures. The shared submodel approach is parameter-efficient and aims to learn generic shape features. For the second approach, referred to as non-shared submodel, each subgraph is fed into a structure-specific GCN submodel. The non-shared submodel approach has more parameters and may capture structure-specific shape features. In both approaches, the architecture of the GCN submodel is identical and consists of three graph convolutional layers [15] with Rectified Linear Unit (ReLU) activations. A global average pooling layer is used as a readout layer, aggregating node representations into a single graph-level feature embedding. The embeddings from individual structures are stacked to form a subject-level feature embedding which is passed to the classification head. 2.4.
Model inspection Our model inspection approach is focused on evaluating the separability of the target label (biological sex, Male and \fFemale) and data source classes (UKBB, CamCAN, IXI or OASIS3) through feature inspection. Each test set sample is passed through the complete pipeline and its feature embeddings are saved at three different stages: at the output layer of the GCN submodel and at the first (FC1) and second (FC2) fully connected layers of the classification head. The dimensions of these embeddings are, respectively, 480 (15 substructures times the hidden layer size, 32), 32 and 2. To allow for visual inspection, the feature embeddings from the GCN and FC1 layers are inputted to a PCA model to reduce their dimensionality. The PCA modes capture the directions of the largest variation in the high-dimensional feature space, allowing us to visualise feature separation in 2D scatter plots. We randomly sample 500 subjects from each dataset for the visualisations. Given that all the models were trained to classify biological sex, a clear separation should be expected between the Male and Female classes in the first PCA modes. 3. EXPERIMENTS & RESULTS For a thorough evaluation, we trained and tested the four models shared and non-shared GCN submodels, and with and without mesh rigid registration on identical data splits. All code was developed using PyTorch Geometric and PyTorch Lightning for model implementation and data handling. We used the Adam optimiser [16] with a learning rate of 0.001, and employed the standard cross entropy loss for classification. Random node translation was used as a data augmentation strategy with a maximum offset of 0.1mm [17]. This was shown to improve performance in our previous study [12]. Model selection was done based on the loss of the validation set. Our code is made publicly available4. 3.1. Classification performance Figure 2 summarises the classification performance of the four models, showing the ROC curves together with the area under the curve (AUC) metric, reported separately for each of the four test datasets. There are two main observations: (i) There are very little differences in the absolute performance across the four models. Comparing the shared vs non-shared submodel, the AUC performance is comparable. When comparing models with and without mesh registration, we find the generalisation gap decreases between in-distribution test (UKBB) and the external test data (CamCAN, IXI, OASIS3). However, we also observe a small drop in performance on the in-distribution test data when using mesh registration, compared to not using registration. A practitioner using internal test results for final model selection may opt for using a shared submodel, due to its parameter efficiency, without mesh registration, due to convenience. As we will see next, this choice may be suboptimal as test accuracy alone is insufficient to identify important model characteristics. 4https://github.com/biomedia-mira/medmesh 3.2. Effect of using structure-specific submodels For the models that use a shared submodel, we observe that the GCN feature embeddings are non-discriminative with respect to the target label. Separation seems completely missing in the shared model without registration (see Fig. 3a), with only weak separation in the shared model with registration (see Fig. 3c). For these models, the classification heads will primarily contribute to the model performance. 
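The feature-space visualisations discussed here are produced by the PCA inspection step from Section 2.4; it could be implemented along the following lines with scikit-learn and matplotlib. This is an illustrative sketch, not the authors' released code, and the embedding and label arrays (e.g. gcn_embeddings, sex_labels) are placeholder names.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def pca_scatter(embeddings: np.ndarray, groups: np.ndarray, title: str) -> None:
    """Project high-dimensional layer embeddings onto the first two PCA modes
    and colour each subject by a grouping variable (target label or data source)."""
    coords = PCA(n_components=2).fit_transform(embeddings)
    for g in np.unique(groups):
        mask = groups == g
        plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=str(g))
    plt.xlabel("PCA mode 1")
    plt.ylabel("PCA mode 2")
    plt.title(title)
    plt.legend()
    plt.show()

# Example usage with placeholder arrays: gcn_embeddings has shape (n_subjects, 480).
# pca_scatter(gcn_embeddings, sex_labels, "GCN embeddings coloured by sex")
# pca_scatter(gcn_embeddings, dataset_labels, "GCN embeddings coloured by data source")
```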
For the models with a non-shared submodel, we find a much better separability for the GCN features with and without mesh registration (cf. Figs. 3b, d). Here, the GCN features will meaningfully contribute to the models\u2019 classification performance. 3.3. Effect of mesh registration When studying the effect of mesh registration, we can clearly observe that without registration, the GCN feature embeddings from the submodel strongly encode data source, showing separate clusters for UKBB and external test data (cf. Figs. 3a,b). When introducing mesh registration as a pre-processing step, we note a significant improvement, with an almost entirely removed separation of datasets in the GCN layer independent of whether a shared and non-shared submodel is used (Figs. 3c, d). The separability of the target label in the GCN layer is well defined for the non-shared submodel (Fig. 3d), while remaining weak for the shared submodel (Fig. 3c). Rigid registration as a pre-processing step seems to not only improve the learning efficiency of the GCN submodel, but also its ability to generalise across data distributions. 4.", + "additional_graph_info": { + "graph": [ + [ + "Nairouz Shehata", + "Anees Kazi" + ], + [ + "Anees Kazi", + "Shayan Shekarforoush" + ] + ], + "node_feat": { + "Nairouz Shehata": [ + { + "url": "http://arxiv.org/abs/2405.01270v1", + "title": "The Importance of Model Inspection for Better Understanding Performance Characteristics of Graph Neural Networks", + "abstract": "This study highlights the importance of conducting comprehensive model\ninspection as part of comparative performance analyses. Here, we investigate\nthe effect of modelling choices on the feature learning characteristics of\ngraph neural networks applied to a brain shape classification task.\nSpecifically, we analyse the effect of using parameter-efficient, shared graph\nconvolutional submodels compared to structure-specific, non-shared submodels.\nFurther, we assess the effect of mesh registration as part of the data\nharmonisation pipeline. We find substantial differences in the feature\nembeddings at different layers of the models. Our results highlight that test\naccuracy alone is insufficient to identify important model characteristics such\nas encoded biases related to data source or potentially non-discriminative\nfeatures learned in submodels. Our model inspection framework offers a valuable\ntool for practitioners to better understand performance characteristics of deep\nlearning models in medical imaging.", + "authors": "Nairouz Shehata, Carolina Pi\u00e7arra, Anees Kazi, Ben Glocker", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION Understanding biological sex-based differences in brain anatomy provides valuable insights into both neurodevelopmental processes and cognitive functioning. Recent strides in the field of geometric deep learning [1], particularly the advent of Graph Neural Networks (GNNs), have revolutionised the analysis of complex, non-Euclidean data [2] to make predictions at a node, edge, or graph-level. This allows us to treat brain shapes as graphs, leveraging the power of GNNs to learn from complex structural anatomical data [3]. Discriminative feature embeddings can be withdrawn from these models, representing brain shapes as a continuous vector of numerical features that capture valuable structural and geometrical information for downstream prediction tasks [4]. 
Techniques like Principal Component Analysis (PCA) can be used to reduce the dimensionality of graph embeddings for visualisation, aiding the exploration of subgroup biases in the feature space beyond the target label. This analysis may help practitioners ensure the reliability of their predictions, and is particularly important in applications where GNNs feature embeddings may be leveraged for new tasks, such as fine-tuning, domain transfer, or multi-modal approaches. In this study, we dissect GNN models trained under different settings for the task of sex classification using 3D meshes of segmented brain structures. We inspect the learned feature embeddings at different layers within a multi-graph neural network architecture. Through this granular analysis, we reveal critical insights into the inner workings of our models, identifying important effects of different modelling choices. This research demonstrates the utility of conducting a model inspection framework as part of model development, highlighting insights that may guide practitioners in the selection of models with desired characteristics, avoiding biases, overfitting and better understanding the driving forces behind predictions. 2. METHODS 2.1. Imaging datasets We used four neuroimaging datasets, including data from the UK Biobank imaging study (UKBB) 1 [5], the Cambridge Centre for Ageing and Neuroscience study (CamCAN) [6, 7], the IXI dataset2, and OASIS3 [8]. Both UKBB and CamCAN brain MRI data were acquired with Siemens 3T scanners. The IXI dataset encompassed data collected from three clinical sites, each employing different scanning systems. CamCAN and IXI are acquired from healthy volunteers, while UKBB is an observational population study. The OASIS3 dataset consists of 716 subjects with normal cognitive function and 318 patients exhibiting varying stages of cognitive decline. For all four datasets, subjects with missing biological sex or age information were excluded. Data from UKBB was split into three sets, with 9,900 scans used for training, 1,099 for validation, and 2,750 for testing. CamCAN, IXI and OASIS3 1Accessed under application 12579. 2https://brain-development.org/ixi-dataset/ arXiv:2405.01270v1 [cs.LG] 2 May 2024 \fwere used as external test sets, with sample sizes of 652, 563, and 1,034, respectively. The UKBB data is provided with a comprehensive preprocessing already applied, using FSL FIRST [9] to automatically segment 15 subcortical brain structures from T1-weighted brain MRI, including the brain stem, left/right thalamus, caudate, putamen, pallidum, hippocampus, amygdala, and accumbens-area. We apply our own pre-processing pipeline to the CamCAN, IXI, and OASIS3 datasets, closely resembling the UKBB pre-processing. Our pipeline includes skull stripping using ROBEX 3 [10], bias field correction using N4ITK [11], and brain segmentation via FSL FIRST. 2.2. Graph representation The anatomical brain structures are represented by meshes as an undirected graph composed of nodes, connected by edges forming triangular faces. The number of nodes for most structures is 642 and up to 1,068, whereas the number of edges per structure ranges between 3,840 and 6,396. The meshes are automatically generated by the FSL FIRST tool. 2.2.1. Node features Each graph node can carry additional information, encoded as feature vectors. This can include spatial coordinates or more complex geometric descriptors. 
While computer vision has transitioned from hand-crafted features to end-to-end deep learning, we have previously demonstrated the value of using geometric feature descriptors in GNN-based shape classification [12]. We employ Fast Point Feature Histograms (FPFH) [13], a pose invariant feature descriptor shown to substantially boost classification performance. To compute FPFH features on a mesh, a point feature histogram is first generated, involving the selection of neighboring points within a defined radius around each query point. The Darboux frame is subsequently defined, and angular variations are computed. This process involves several steps, including the estimation of normals and the calculation of angular variations, resulting in a vector of 33 features at each node. 2.2.2. Mesh registration Mesh registration is an optional pre-processing step, with the goal to remove spatial variability across subjects and datasets. Here, we investigate the use of rigid registration aligning all meshes for a specific brain structure to a standardised orientation using the closed-form Umeyama approach [14]. This method employs a singular value decompositionbased optimisation to obtain an optimal rigid transformation between two given meshes. For each of the 15 brain structures, we select a reference mesh from a random subject from the UKBB dataset, and align the meshes from all other 3https://www.nitrc.org/projects/robex Fig. 1: Model architecture consisting of a graph convolutional network (GCN) submodel feeding graph embeddings into a classification head with two fully connected layers (FC1 and FC2). Where N is the number of brain substructures, 15. For our model inspection, we read out the feature vectors from the GCN submodel, FC1, and FC2. subjects to this reference. As a result, shape variability due to orientation and position differences is minimised and the remaining variability is expected to primarily represent anatomical differences across subjects. 2.3. Multi-graph neural network architecture Our general GNN architecture is comprised of two main components; the GCN submodel which aims to learn graph embeddings over 3D meshes using multiple graph convolutional layers [12] and an MLP classification head that takes the graph embeddings as inputs and performs the final classification using two fully connected layers (cf. Fig. 1). The input to our models are 15 subgraphs representing 15 brain structures, extracted from T1-weighted brain scans. We consider two approaches for learning graph embeddings with GCN submodels. The first approach, referred to as shared submodel, uses a single GCN submodel that learns from all 15 subgraphs. Here, the weights of the graph convolutional layers are shared across brain structures. The shared submodel approach is parameter-efficient and aims to learn generic shape features. For the second approach, referred to as non-shared submodel, each subgraph is fed into a structure-specific GCN submodel. The non-shared submodel approach has more parameters and may capture structure-specific shape features. In both approaches, the architecture of the GCN submodel is identical and consists of three graph convolutional layers [15] with Rectified Linear Unit (ReLU) activations. A global average pooling layer is used as a readout layer, aggregating node representations into a single graph-level feature embedding. The embeddings from individual structures are stacked to form a subject-level feature embedding which is passed to the classification head. 2.4. 
Model inspection Our model inspection approach is focused on evaluating the separability of the target label (biological sex, Male and \fFemale) and data source classes (UKBB, CamCAN, IXI or OASIS3) through feature inspection. Each test set sample is passed through the complete pipeline and its feature embeddings are saved at three different stages: at the output layer of the GCN submodel and at the first (FC1) and second (FC2) fully connected layers of the classification head. The dimensions of these embeddings are, respectively, 480 (15 substructures times the hidden layer size, 32), 32 and 2. To allow for visual inspection, the feature embeddings from the GCN and FC1 layers are inputted to a PCA model to reduce their dimensionality. The PCA modes capture the directions of the largest variation in the high-dimensional feature space, allowing us to visualise feature separation in 2D scatter plots. We randomly sample 500 subjects from each dataset for the visualisations. Given that all the models were trained to classify biological sex, a clear separation should be expected between the Male and Female classes in the first PCA modes. 3. EXPERIMENTS & RESULTS For a thorough evaluation, we trained and tested the four models shared and non-shared GCN submodels, and with and without mesh rigid registration on identical data splits. All code was developed using PyTorch Geometric and PyTorch Lightning for model implementation and data handling. We used the Adam optimiser [16] with a learning rate of 0.001, and employed the standard cross entropy loss for classification. Random node translation was used as a data augmentation strategy with a maximum offset of 0.1mm [17]. This was shown to improve performance in our previous study [12]. Model selection was done based on the loss of the validation set. Our code is made publicly available4. 3.1. Classification performance Figure 2 summarises the classification performance of the four models, showing the ROC curves together with the area under the curve (AUC) metric, reported separately for each of the four test datasets. There are two main observations: (i) There are very little differences in the absolute performance across the four models. Comparing the shared vs non-shared submodel, the AUC performance is comparable. When comparing models with and without mesh registration, we find the generalisation gap decreases between in-distribution test (UKBB) and the external test data (CamCAN, IXI, OASIS3). However, we also observe a small drop in performance on the in-distribution test data when using mesh registration, compared to not using registration. A practitioner using internal test results for final model selection may opt for using a shared submodel, due to its parameter efficiency, without mesh registration, due to convenience. As we will see next, this choice may be suboptimal as test accuracy alone is insufficient to identify important model characteristics. 4https://github.com/biomedia-mira/medmesh 3.2. Effect of using structure-specific submodels For the models that use a shared submodel, we observe that the GCN feature embeddings are non-discriminative with respect to the target label. Separation seems completely missing in the shared model without registration (see Fig. 3a), with only weak separation in the shared model with registration (see Fig. 3c). For these models, the classification heads will primarily contribute to the model performance. 
For the models with a non-shared submodel, we find a much better separability for the GCN features with and without mesh registration (cf. Figs. 3b, d). Here, the GCN features will meaningfully contribute to the models\u2019 classification performance. 3.3. Effect of mesh registration When studying the effect of mesh registration, we can clearly observe that without registration, the GCN feature embeddings from the submodel strongly encode data source, showing separate clusters for UKBB and external test data (cf. Figs. 3a,b). When introducing mesh registration as a pre-processing step, we note a significant improvement, with an almost entirely removed separation of datasets in the GCN layer independent of whether a shared and non-shared submodel is used (Figs. 3c, d). The separability of the target label in the GCN layer is well defined for the non-shared submodel (Fig. 3d), while remaining weak for the shared submodel (Fig. 3c). Rigid registration as a pre-processing step seems to not only improve the learning efficiency of the GCN submodel, but also its ability to generalise across data distributions. 4." + }, + { + "url": "http://arxiv.org/abs/2210.16670v1", + "title": "A Comparative Study of Graph Neural Networks for Shape Classification in Neuroimaging", + "abstract": "Graph neural networks have emerged as a promising approach for the analysis\nof non-Euclidean data such as meshes. In medical imaging, mesh-like data plays\nan important role for modelling anatomical structures, and shape classification\ncan be used in computer aided diagnosis and disease detection. However, with a\nplethora of options, the best architectural choices for medical shape analysis\nusing GNNs remain unclear. We conduct a comparative analysis to provide\npractitioners with an overview of the current state-of-the-art in geometric\ndeep learning for shape classification in neuroimaging. Using biological sex\nclassification as a proof-of-concept task, we find that using FPFH as node\nfeatures substantially improves GNN performance and generalisation to\nout-of-distribution data; we compare the performance of three alternative\nconvolutional layers; and we reinforce the importance of data augmentation for\ngraph based learning. We then confirm these results hold for a clinically\nrelevant task, using the classification of Alzheimer's disease.", + "authors": "Nairouz Shehata, Wulfie Bain, Ben Glocker", + "published": "2022-10-29", + "updated": "2022-10-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Geometric deep learning generalizes classical neural network models to non-Euclidean domains such as point clouds, graphs, or meshes (Wu et al., 2020). It has therefore become popular across various \ufb01elds from computer vision (Zhou et al., 2020b) and physics (Shlomi et al., 2020), to healthcare topics (Dash et al., 2019) such as disease prediction (Kazi et al., 2019), drug discovery (Li et al., 2017), and brain connectome analysis (Kim et al., 2021). A recent study (Sarasua et al., 2022) investigated the expressiveness of mesh representations for disease classi\ufb01cation. We complement these \ufb01ndings by conducting a comparative study evaluating di\ufb00erent graph neural networks (GNNs) for the classi\ufb01cation of anatomical meshes extracted from neuroimaging data. We propose a simple yet e\ufb00ective multi-graph architecture with a shared submodel for learning shape embeddings (see Fig. 1). 
Different graph convolutional layers are compared; GCNConv (Kipf and Welling, 2016), GraphConv (Morris et al., 2019), and SplineCNN (Fey et al., 2018). In all cases, we observe substantial performance improvements when using Fast Point Feature Histograms (FPFH) as node features, which to our knowledge has not been explored before. We also investigate the effect of data augmentation, finding improvements in generalization to data from new domains. Our findings on the proof-of-concept task of biological sex classification are confirmed on the clinically relevant diagnostic task of Alzheimer's disease classification. [Figure 1: Proposed multi-graph architecture; N is the number of meshes (here, N=15), H is the number of hidden features (H = 32), and FC is a fully connected layer. The pipeline runs from MRI segmentation and 3D reconstruction of the 15 substructures, through a shared submodel with global mean pooling, to stacked embeddings and an FC1/FC2 classification head.] 2. Graph Neural Network Architecture As the field of geometric deep learning has expanded, the architectural choices available to practitioners has proliferated. Here we outline our approach on three key aspects: the type of convolutional layer used, the number of convolutional submodels, and the type of geometric features encoded at the node level. 2.1. Selected Graph Convolutional Operators Graph convolutional operations are analogous to CNN operations on images, respecting the additional invariants that arise in this domain, permutation invariance being key due to the artificial ordering of nodes that arises when representing graphs. As shown in Bronstein et al. (2021), many GNNs follow a blueprint of 'message passing' (Gilmer et al., 2017), whereby node features are updated using an aggregation on the features of nodes in their neighbourhood, but there is significant variance in how this is done. In this paper, we compare three seminal graph convolutional layers from the literature: GCNConv (Kipf and Welling, 2016), GraphConv (Morris et al., 2019), and SplineCNN (Fey et al., 2018). These are selected as popular representatives of graph convolutional layers, that are easy to use as plugin replacements in generic architectures. We direct readers to the original papers for details. Existing literature has compared GCNConv and GraphConv (Xu et al., 2018; Morris et al., 2019), and we extend this to medical imaging. 2.2. Multi-graph Architecture As multiple subcortical structure subgraphs may be extracted simultaneously from a single sample brain scan, one must also choose how to utilise these. One option is to combine them into a single multigraph per sample (Wang et al., 2021; Chaari et al., 2022). However, it might not be obvious how to define edges between graphs of different anatomical structures. Alternatively, as in this paper, each subgraph can be input to a specific GNN, and the results combined into a sample level output. Practitioners must decide the number of GNNs to use. One approach is a single shared GNN that learns from all subgraphs, while another is inputting each subgraph to a separate GNN i.e. the number of (sub) GNNs is equal to the number of subgraphs per sample (Hong et al., 2021).
The latter approach allows each (sub) GNN to learn structure specific embeddings, whilst the former encourages the GNN to generalise learnings across structures. Initially, we tested both a single shared and structure specific GNN submodel, finding that the performance was comparable. Using a shared submodel significantly reduces the number of parameters. Given considerations on neural networks training time (Li, 2020), cost (Wiggers, 2020), and environmental impact (Strubell et al., 2019), our preliminary results led us to use a shared GNN in this paper: each brain substructure is passed to the shared submodel to obtain an embedding. We use three convolutional layers in the submodel with ReLU activations. A global average pooling layer is used as a readout layer to aggregate the node representations into one graph embedding. These embeddings are then stacked and passed through a fully connected layer for final classification (cf. Fig. 1). 2.3. Node and edge representation The meshes representing anatomical brain structures are defined by a set of nodes and edges, where both can carry additional information. Nodes can encode arbitrary feature vectors, from spatial information such as mesh coordinates to more complex, geometric feature descriptors. In computer vision, hand crafted features based on carefully designed descriptors have been largely abandoned in the end-to-end deep learning paradigm (Battaglia et al., 2018). However, in the case of shape analysis, we believe there is value in sophisticated, geometrical feature extractors, especially when there are limited amounts of training data. We evaluate the use of Fast Point Feature Histograms (FPFH) (Rusu et al., 2009) as node features, and compare these with positional node features in form of Cartesian coordinates, and no node features (realized by setting constant values). To calculate the FPFH features on a mesh, first a point feature histogram is computed: for each query point $p_r$, all neighbouring points inside a 3D sphere of radius $r$ centered at point $p_r$ are selected (k-neighbourhood points); then, for each pair $p_r$ and $p_k$ in the k-neighbourhood points of $p_r$, their normals are estimated as $n_r$ and $n_k$. The point with the smaller angle between the line joining the pair of points and the estimated normals is chosen to be $p_r$. Finally a Darboux frame is defined as $(u = n_r,\ v = (p_k - p_r) \times u,\ w = u \times v)$ and the angular variations of $n_r$ and $n_k$ are computed:

$$\alpha = v \cdot n_k, \qquad \phi = \frac{u \cdot (p_k - p_r)}{\lVert p_k - p_r \rVert}, \qquad \theta = \arctan(w \cdot n_k,\ u \cdot n_k)$$

Second, a Simple Point Feature Histogram (SPFH) is obtained by calculating the point features of each neighboring point $p_k$ (Rusu et al., 2008). Finally, to calculate FPFH, the SPFH of the k neighbours are used to calculate the final histogram of $p_r$, where they are weighted by the distances between $p_r$ and the neighbours $p_k$. $N$ is the number of points within the sampling radius (number of neighbours to the reference point). In our implementation, the sampling radius was set to 10mm and maximum number of neighbours to 100.

$$\mathrm{FPFH}(p_r) = \mathrm{SPFH}(p_r) + \frac{1}{N} \sum_{k=1}^{N} \frac{\mathrm{SPFH}(p_k)}{\lVert p_k - p_r \rVert}$$

Besides the node features, we also encode edge attributes in terms of relative spherical coordinates between two nodes. Edge attributes are processed only within SplineCNN layers, but otherwise ignored in both GCNConv and GraphConv layers, as these can only use edge weights and not attributes. 3.
Datasets We utilize four neuroimaging datasets to test generalization and robustness of the classification performance. We use data from the UK Biobank imaging study (UKBB; accessed under UK Biobank Resource Application Number 12579) (Sudlow et al., 2015; Miller et al., 2016), the Cambridge Centre for Ageing and Neuroscience study (Cam-CAN) (Shafto et al., 2014; Taylor et al., 2017), and the IXI dataset (https://brain-development.org/ixi-dataset/). Both UKBB and Cam-CAN use a similar imaging protocol with Siemens 3T scanners. IXI consists of data acquired at three different sites including Guy's Hospital using a Philips 1.5T system, Hammersmith Hospital using a Philips 3T scanner, and Institute of Psychiatry using a GE 1.5T system. UKBB, Cam-CAN, and IXI are data from healthy volunteers. We only discarded data related to subjects whose sex or age entries were unavailable. We also use the OASIS-3 dataset with 716 cognitively normal participants and 318 participants who reach various stages of cognitive decline during the study, allowing Alzheimer's disease (AD) related tasks such as classification (LaMontagne et al., 2019). The cognitive status is reflected in the clinical dementia rating (CDR) that accompanies the imaging dataset, with subjects receiving a score of: 0 for normal, 0.5 for very mild dementia, 1 for mild dementia, 2 for moderate dementia and 3 for severe dementia (Morris, 1991). The CDR is collected in clinical sessions, separate to the imaging sessions, meaning sessions must be 'matched' to get an {image, CDR score} pair. We match the clinical diagnosis closest in time to each scan, before filtering out samples where the absolute time difference between scan and clinical assessment is greater than 365 days. To avoid difficulties in assigning scans to training, validation, and testing, we only use one scan per subject, leaving 1,084 unique scans. We exclude 50 samples because their sex or age information was missing. The final set of 1,034 comprises 716, 188, 111, 18 and 1 samples, for CDR of 0, 0.5, 1, 2 and 3 respectively. We binarize CDR to 0 and 1 (for CDR score 0.5, 1, 2 and 3). The UKBB data comes pre-processed with already extracted meshes for 15 subcortical brain structures (brain stem, left/right thalamus, caudate, putamen, pallidum, hippocampus, amygdala, accumbens-area). We apply our own processing pipeline to Cam-CAN, IXI and OASIS-3 to match UKBB as closely as possible: 1) Skull stripping with ROBEX v1.2 (https://www.nitrc.org/projects/robex) (Iglesias et al., 2011); 2) Bias field correction with N4ITK (https://itk.org) (Tustison et al., 2010); 3) Sub-cortical brain structure segmentation and meshing using FSL FIRST (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FIRST) (Patenaude et al., 2011).

Table 1: Number of samples, percentage of females, and mean, min, and max age.
Dataset | Samples | Female (%) | Age (years)
UKBB | 13,749 | 47 | 61 [44, 73]
Cam-CAN | 652 | 51 | 54 [18, 88]
IXI | 563 | 58 | 49 [20, 86]
OASIS-3 | 1,034 | 55 | 72 [42, 97]

4. Experiments The experiments were designed to evaluate and compare three main aspects: (i) the choice of convolutional layers for the shared submodel; (ii) the choice for the node features; (iii) the effect of data augmentation on robustness and generalization. 4.1. Implementation and Training We use the Adam optimizer with a learning rate of 0.001 and the standard cross entropy loss as the classification objective function.
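To make the training setup concrete, a minimal PyTorch Geometric sketch of such a shared-submodel classifier trained with Adam and cross-entropy could look as follows. This is our illustrative reconstruction, not the paper's released code; class and variable names are ours, while the dimensions follow the values stated in the paper (33 FPFH features, hidden size 32, 15 substructures).

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class SharedSubmodelClassifier(torch.nn.Module):
    """One GCN submodel shared across all substructure graphs; per-structure
    embeddings are stacked and classified by two fully connected layers."""

    def __init__(self, in_dim: int = 33, hidden: int = 32, n_structures: int = 15, n_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, hidden)
        self.fc1 = torch.nn.Linear(n_structures * hidden, hidden)
        self.fc2 = torch.nn.Linear(hidden, n_classes)

    def encode(self, data):
        x = F.relu(self.conv1(data.x, data.edge_index))
        x = F.relu(self.conv2(x, data.edge_index))
        x = F.relu(self.conv3(x, data.edge_index))
        batch = getattr(data, "batch", None)
        if batch is None:  # a single graph: all nodes belong to graph 0
            batch = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
        return global_mean_pool(x, batch)  # readout: one embedding per graph

    def forward(self, structure_graphs):  # list of Data objects, one per substructure
        embedding = torch.cat([self.encode(g) for g in structure_graphs], dim=1)
        return self.fc2(F.relu(self.fc1(embedding)))

# model = SharedSubmodelClassifier()
# optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = F.cross_entropy(model(subject_graphs), target)  # standard cross-entropy objective
```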
To increase the variability of the training data and to avoid overfitting, we employ a simple data augmentation strategy (Zhou et al., 2020a). Individual graph nodes are randomly translated by a maximum offset. We evaluate the effect of the strength of augmentation and test maximum offsets of 0.1mm, 0.5mm, and 1.0mm. Given the limited amount of training data, data augmentation should be beneficial for improving classification accuracy across different datasets. All our implementations were done in PyTorch benefiting from the excellent PyTorch Geometric library (https://pytorch-geometric.readthedocs.io/). We use PyTorch Lightning (https://www.pytorchlightning.ai/) for ease of implementation of the model and data structures. The code is available on https://github.com/biomedia-mira/medmesh. 4.2. Task 1: Biological Sex Classification We use biological sex classification as a proof of concept task which has shown to yield good performance with the advantage that several neuroimaging datasets from different sources are available for extensive testing and evaluation of the effect of different model choices on predictive performance. We use the UKBB data for the model development, with a data split of 70%, 10%, and 20% for training, validation, and testing. The batch size was set to 128, all hidden features set to 32 (both for the convolutional layers and fully connected layers). When using SplineCNN, we set the kernel size to 5 and use the sum aggregation. The maximum number of training epochs was set to 50, and we retain the model with highest validation performance for final evaluation on the test set. Effect of node features To evaluate the effectiveness of different node features, in our first set of experiments we employ SplineCNN in the shared convolutional submodel (as these performed well in initial experimentation). We then evaluated classification performance on the UKBB test set, Cam-CAN, IXI, and OASIS-3 using constant, positional, and FPFH node features. The ROC curves in Figure 2 show that FPFH substantially outperforms other node features on all four datasets. It is worth noting that while positional features perform well on the in-distribution UKBB test set, these features underperform on out-of-distribution test sets. This is due to their reliance on Cartesian coordinates of mesh nodes which do not generalize well due to differences in data acquisition. FPFH, on the other hand, are invariant to the pose of the mesh and show much better generalization across datasets.
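Pose-invariant FPFH descriptors of the kind used above can be computed per mesh vertex, for example with Open3D. The sketch below uses the 10mm radius and 100-neighbour settings stated earlier; it is an illustrative example, not the authors' implementation, and the exact module path (o3d.pipelines.registration) may differ across Open3D versions.

```python
import numpy as np
import open3d as o3d

def fpfh_node_features(vertices: np.ndarray, radius: float = 10.0, max_nn: int = 100) -> np.ndarray:
    """Return a 33-dimensional FPFH descriptor for every mesh vertex."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(vertices))
    search = o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=max_nn)
    pcd.estimate_normals(search_param=search)          # normals are needed for the Darboux frames
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(pcd, search)
    return np.asarray(fpfh.data).T                     # shape (num_vertices, 33)
```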
[Figure 2: ROC curves for sex classification comparing different node features across the datasets UKBB, Cam-CAN, IXI, OASIS-3 using SplineCNN in the submodel. Panels (a)-(d) plot true positive rate against false positive rate; AUC for constant/FPFH/positional node features: UKBB 0.92/0.98/0.96, Cam-CAN 0.75/0.89/0.78, IXI 0.79/0.91/0.81, OASIS-3 0.78/0.89/0.83.] Effect of data augmentation Next, we evaluate the effect of varying strengths of data augmentation. The maximum offset for the random node translation is varied from 0 (no augmentation), to 0.1, 0.5, and 1.0mm. The ROC curves in Figure 3 demonstrate the benefit of data augmentation on robustness and generalization. The best performance is achieved using data augmentation of 0.1, which increased AUC by 4-5% compared to not using augmentation. While data augmentation slightly decreases the performance on the in-distribution UKBB test set, it substantially improves performance on all out-of-distribution test sets, confirming the importance of adding random perturbations to the training data. [Figure 3: ROC curves showing the effect of data augmentation for sex classification across domains, using SplineCNN as the shared submodel and FPFH as node features. AUC for augmentation offsets 0.1/0.5/1.0/none: UKBB 0.98/0.93/0.90/0.99, Cam-CAN 0.89/0.86/0.80/0.85, IXI 0.91/0.86/0.82/0.86, OASIS-3 0.89/0.86/0.80/0.85.] Effect of convolution layer Finally, we evaluate the three different convolutional layers, using FPFH as the node features and data augmentation of 0.1mm. In Figure 4(a) we observe similar performance for SplineCNN and GCNConv, closely followed by GraphConv. 4.3. Task 2: Alzheimer's Disease Classification To confirm whether the above findings hold for a clinically relevant task, we consider Alzheimer's disease (AD) classification on OASIS-3 with a 70%, 10%, and 20% train, validation, and test split. We evaluate the effect of the convolutional layer using a larger amount of data augmentation of 0.5mm due to the smaller amounts of training data. We then also evaluate the effect of node features for AD classification, using SplineCNN in the submodel for consistency with the sex classification experiments.
The results are shown in Fig. 4(b) and 4(c). GCNConv performs slightly better than SplineCNN, with a substantial decrease in performance for GraphConv. FPFH features again outperform other node features. 4.4. Bias Analyses We also investigated potential biases in the predictions in terms of subgroup performance disparities. To this end, we first analyzed the biological sex classification model stratified by age groups. As the training data from UKBB only covers a limited age range between 44 and 73 year old subjects, we wanted to understand whether the performance might degrade for younger subjects. The results shown in Figure 5(a), however, suggest that the sex classification model with SplineCNN and FPFH features generalizes well across the entire age range. Both Cam-CAN and IXI contain many subjects in the range of 18 to 40 years. [Figure 4: Effect of convolution layer for (a) sex and (b) Alzheimer's disease classification, and (c) effect of node features on AD classification, all evaluated on OASIS-3. AUC values: sex classification GCNConv 0.89, GraphConv 0.87, SplineCNN 0.89; AD classification GCNConv 0.88, GraphConv 0.78, SplineCNN 0.86; AD node features constant 0.82, FPFH 0.86, positional 0.82.] Next, we analyzed whether sex classification may be affected by disease status. Here we looked at the classification performance separately for the group of healthy controls and subjects with Alzheimer's disease. Again, we find no differences in the classification accuracy, suggesting that the sex classification model generalizes well (cf. Figure 5(b)). [Figure 5: Bias analysis for sex classification using SplineCNN with FPFH features. Classification performance (accuracy) is stratified by (a) age groups (<40, 40-60, >60) across UKBB, Cam-CAN, IXI and OASIS-3, and (b) presence of disease on OASIS-3.] 5." } ], "Shayan Shekarforoush": [ { "url": "http://arxiv.org/abs/1812.09954v1", "title": "Self-Attention Equipped Graph Convolutions for Disease Prediction", "abstract": "Multi-modal data comprising imaging (MRI, fMRI, PET, etc.) and non-imaging\n(clinical test, demographics, etc.) data can be collected together and used for\ndisease prediction. Such diverse data gives complementary information about the\npatient\\'s condition to make an informed diagnosis. A model capable of\nleveraging the individuality of each multi-modal data is required for better\ndisease prediction. We propose a graph convolution based deep model which takes\ninto account the distinctiveness of each element of the multi-modal data. We\nincorporate a novel self-attention layer, which weights every element of the\ndemographic data by exploring its relation to the underlying disease. 
We\ndemonstrate the superiority of our developed technique in terms of\ncomputational speed and performance when compared to state-of-the-art methods.\nOur method outperforms other methods with a significant margin.", + "authors": "Anees Kazi, S. Arvind krishna, Shayan Shekarforoush, Karsten Kortuem, Shadi Albarqouni, Nassir Navab", + "published": "2018-12-24", + "updated": "2018-12-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "eess.IV", + "stat.ML" + ], + "main_content": "INTRODUCTION Experts look at all the varied multi-modal data collected by imaging sources and non-imaging demographics (age, gender, weight, body-mass index) to take an informed decision for disease diagnosis. Such rich data is also exploited in Computer Aided Diagnosis systems (CADs) as complementary information. Current CAD systems combine all the complementary features by using feature selection [1], or by reducing the dimensionality with an autoencoder [2, 3, 4]. Works are also done with simply concatenating all the features to use deep learning based models [5]. All the above methods exploit the complementary information from available modalities at a global level but fail to optimally combine the varied information. For instance, the learned features are biased towards the single modality with dominant features and do not exploit the individuality of each modality. On top of that, each demographic information carries different relevance for the diagnosis of a disease. A model is required which is capable of evaluating the signi\ufb01cance of every element of the demographic data and performing the prediction task based on the selective and weighted procedure for elements of demographic data. Such a scheme will boost the disease prediction task to incorporate more clinical semantics. Graphs provide a more such a way of using multi-modal data [6, 7]. These methods leverage the similarities between subjects in terms of an af\ufb01nity graph in the training process itself. Most recent work [6] by presents an intelligent and novel use case of Graph Convolutional Networks (GCN) for the binary classi\ufb01cation task. This allows convolutions to be used on graph-structured data, where each patient represents a node in the population level graph. The method proposes to use each demographic information separately to construct a neighborhood graph. They eventually combine all the neighborhood graphs to get the average af\ufb01nity graph, unlike the conventional methods, which fuses the information for the prediction task. This method, however, yields varied results for distinct input neighborhood graphs. Each of these af\ufb01nity graphs and indirectly each element of the demographic data carries distinct neighborhood relationships (based on element dependent criteria) and statistical properties with respect to the entire population. Our motivation is to analyze the impact and relevance of the neighborhood de\ufb01nitions on the \ufb01nal task of disease prediction. In addition to that, we want to investigate whether the relative weighting of meta-data can be automated. Contributions: 1) We propose a model capable of incorporating the information of each graph separately, 2) our design architecture bears a parallel setting of Graph Convolutional (GC) layers 3) we introduce a \u2019Self-Attention layer\u2019 which automatically learns the weighting for each meta-data with respect to its relevance to the prediction task, and 4) Our model outperforms the state-of-the-art method. 2. 
METHODOLOGY Given a dataset $D = \{X, Y, \delta\}$ with $X \in \mathbb{R}^{N \times d}$ representing the feature matrix for $N$ patients, each provided with $d$-dimensional features. $Y$ represents the corresponding label matrix and $\delta$ the demographic data matrix. The task is to predict the class label $\hat{Y}$ for test subjects for $K$ classes. [Fig. 1: The Multi-Layered Parallel Graph Convolutional Network with M=2; the two branches share the same input features but receive different input affinity matrices.] $\delta \in \mathbb{R}^{N \times M}$ represents that for each patient $M$-dimensional demographic data is provided. The $m$-th affinity graph $G^{(m)} \in \mathbb{R}^{N \times N}$ is computed from the respective demographic element $\delta_m$. The model $f(\cdot)$ to solve the task is given by

$$\hat{Y} = f(X, G^{(m)}; \theta). \qquad (1)$$

The model takes $X$ and $G^{(m)}$ as input to train the parameters $\theta$ and outputs discriminative features for classification. Fig. 1 shows the entire methodology, which can be divided into three main parts: (1) affinity matrix $W^{(m)}$ construction, (2) the forward propagation model, where we describe the architecture that produces class-separable features, and (3) the self-attention layer, for automatic weighting of the graph-specific output features of each branch. Affinity Matrix $W^{(m)}$ Construction: We construct $M$ affinity matrices corresponding to each of the demographic elements. For the $m$-th element, let the graph $G^{(m)} = \{X, E^{(m)}\}$ be undirected and unweighted, where all the $M$ graphs have a common vertex set $X$. $E^{(m)} \in \mathbb{R}^{N \times N}$ is a demographic-element-specific edge matrix. Each graph $G^{(m)}$ reveals distinct intrinsic relationships between the vertices. Edges between vertices are defined based on the given demographic element as

$$E^{(m)}_{i,j} = \begin{cases} 1 & \text{if } |\delta_{i,m} - \delta_{j,m}| < \beta \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

where $\delta_m(\cdot)$ is the corresponding demographic element and $\beta$ is a threshold. We generate the affinity matrix from these graphs by weighting the edges. A similarity metric between the subjects $\mathrm{Sim}(X_i, X_j)$, e.g. the correlation coefficient, is incorporated to weight the edges as

$$W^{(m)}_{i,j} = \mathrm{Sim}(X_i, X_j) \circ E^{(m)}_{i,j}(X_i, X_j), \qquad (3)$$

where $\circ$ is the Hadamard product. Forward propagation model: We design our model such that it trains each affinity graph separately. The proposed model bears the parallel setting of $M$ branches as shown in Fig. 1. Each branch is equipped with spectral graph theory based GC layers. These layers help to adopt convolutions on graphs unlike grid based convolutions [7, 8]. The proposed forward propagation model is given by:

$$H^{(m)}_{l+1} = \sigma\!\left( D^{(m)-\frac{1}{2}} W^{(m)} D^{(m)-\frac{1}{2}} H^{(m)}_{l} \Theta^{(m)}_{l} \right) \qquad (4)$$

$D^{(m)}$ is the diagonal matrix with $D^{(m)}_{ii} = \sum_j W^{(m)}_{ij}$. $\Theta^{(m)}_{l}$ are the trainable layer-specific filters, which can be derived from a first-order approximation of localized spectral filters on graphs [7], and $H^{(m)}_{l}$ is the feature representation of the previous layer ($H^{(m)}_{0} = X$). $D^{(m)-\frac{1}{2}} W^{(m)} D^{(m)-\frac{1}{2}}$ is the normalized graph Laplacian, and $\sigma(\cdot)$ is the rectified linear unit function. The model outputs $H_{\mathrm{logits}} \in \mathbb{R}^{N \times K}$. Self-Attention Layer: The logits of the $M$ branches differ with respect to each other because of the graphs, although the features on each vertex are common. In order to rank the demographic data elements, we design a linear combination layer that ranks the logits coming from the last hidden layer as

$$\hat{Y} = \mathrm{Softmax}\!\left( \sum_{m=1}^{M} \omega_m H^{(m)}_{\mathrm{logits}} \right)$$
, (5) where \u03c9m is the trainable scalar weight associated with the demographic element and \u02c6 Y are the normalized log probabilities. We de\ufb01ne our objective function as binary weighted cross entropy loss on the labeled data to train the model parameter. 3. EXPERIMENTS Our experiments have been designed to (1) investigate the in\ufb02uence of each af\ufb01nity matrix on the performance of the \fpredictive models, (2) investigate the performance of the predictive model with multi-graph setting approaches [6], (3) we show comparison with 3 methods, linear classi\ufb01er, two-layered Dense Neural Network, baseline GCN method [6], proposed model and (4) investigate in-depth insight of self-attention layer with multi-graph setting. Dataset: We show results on a publicly available dataset namely Tadpole [9] for the prediction of Alzheimer's disease. The dataset is a subset of ADNI[10] consisting of 564 patients. The goal is to classify each patient into one of the three classes Normal, Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD). For each patient, the features are collected from various biomarkers (MR, PET imaging, cognitive tests, CSF biomarkers, etc). Further risk factors are provided for each subject in terms of APOE genotyping status and FDG PET imaging. Demographic elements (age and gender) are also provided. Entire data is pre-processed with ADNI\u2019s standard data-processing pipeline.Implementation: Number of features d = 354, dropout rate: 0.3, \u21132regularisation: 5 \u00d7 10\u22124. All the experiments are implemented in Tensor\ufb02ow1 and performed with Nvidia GeForce GTX 1080 Ti 10 GB GPU. We use early stopping criteria to decide the number of epochs for each setting. The model is evaluated based on the classi\ufb01cation mean accuracy (ACC) for 10-fold Cross-Validation. 4. RESULTS AND DISCUSSION: In this section, we discuss the results of all the experiments in detail. In\ufb02uence of individual af\ufb01nity matrix: For individual af\ufb01nities it should be noted from \ufb01g. 2 (a) that each graph shows different results. This means that the input af\ufb01nity matrices have unequal relevance to the task at hand. For example, the age graph shows the best performance and the FDG graph shows the worst. The performance reduces when all the graphs are averaged and used as input as in the baseline method [6]. This proves that averaging af\ufb01nity graphs degrade the performance that could have been obtained otherwise. Performance with different combinations of graphs: We perform another experiment by using all the different combinations of af\ufb01nity matrices as input. This validates that the performance varies if the combination of af\ufb01nity matrices is changed. According to [11] age and gender are the most important factors compared to APOE and FDG for the prediction of AD. The results are demonstrated in terms of boxplots of accuracies as shown in \ufb01g.2 (c) which con\ufb01rms the different combination show different result.Moreover, the combination of gender and age show the maximum performance and most of the combinations using FDG and APOE reduce the performance. This depicts that our model upholds the clinical semantic same as [11]. 
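As a rough illustration of the affinity construction and propagation rule of Eqs. (2)-(4) above (our own sketch, not the authors' released code; the helper names and toy data are made up), one branch of the model can be mimicked in a few lines of NumPy:

import numpy as np

def affinity_matrix(X, delta_m, beta):
    # W^(m): correlation-weighted edges between subjects whose demographic
    # values differ by less than beta (Eqs. 2-3).
    E = (np.abs(delta_m[:, None] - delta_m[None, :]) < beta).astype(float)
    Sim = np.abs(np.corrcoef(X))   # absolute correlation keeps this toy's weights non-negative
    return Sim * E                 # Hadamard product, Eq. 3

def gcn_layer(W, H, Theta):
    # One spectral GC layer: sigma(D^-1/2 W D^-1/2 H Theta), Eq. 4, with ReLU.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-8))
    W_norm = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(W_norm @ H @ Theta, 0.0)

# Toy example: N=6 subjects, d=4 features, age as the demographic element.
X = np.random.randn(6, 4)
age = np.array([62.0, 64.0, 71.0, 70.0, 80.0, 81.0])
W_age = affinity_matrix(X, age, beta=3.0)
H1 = gcn_layer(W_age, X, Theta=np.random.randn(4, 8) * 0.1)
print(H1.shape)   # (6, 8)

The absolute correlation is used here only to keep the toy edge weights non-negative; the paper simply weights edges by the correlation coefficient between subjects.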
This experiment also confirms that the overall performance drops when all the affinity graphs are weighted equally, and that averaging deteriorates the positive influence of the other affinity matrices due to the loss of neighborhood structure for the individual graphs. Our proposed model with self-attention outperforms all the combinations, since it captures the correct weighting required for optimal performance. Performance in comparison to other methods: We compare the proposed method to three state-of-the-art methods, namely a linear classifier, a neural network and [6], as shown in Fig. 2 (b). We chose these methods, respectively, to investigate 1) how linearly separable the features at each node are, 2) what the performance of the model is when features are concatenated, 3) what the significance of incorporating the graph is for the task, and 4) how important it is to weight the graphs. From Fig. 2 (b), it can be seen that the features are separable, as the linear classifier performs quite well compared to the two other methods shown. For the NN, where the features are concatenated, the model architecture becomes the limiting factor. We used the same number of hidden layers (2) and hidden units (16 and 3, respectively) with an input feature dimension of 354. The NN fails to perform well with this architecture. It can be seen that the baseline [6] improves the performance with respect to the NN, showing the strength of the GCN; however, it performs worse than the linear classifier and the proposed method. This is due to the corrupted combination of the neighborhood. Finally, our proposed method outperforms all the methods with the correctly weighted combination of neighborhood and $H_{\mathrm{logits}}$. Effect of self attention: We also investigated the weights learnt for each branch by our model. The self-attention layer learned maximum weight for gender and age (0.35 and 0.27, respectively) and lower weight for FDG and APOE (0.09 and 0.29, respectively). It is confirmed from [11] that age and gender are significant factors for predicting AD. 5." + } + ], + "Shayan Shekarforoush": [ + { + "url": "http://arxiv.org/abs/2309.08826v1", + "title": "Dual-Camera Joint Deblurring-Denoising", + "abstract": "Recent image enhancement methods have shown the advantages of using a pair of\nlong and short-exposure images for low-light photography. These image\nmodalities offer complementary strengths and weaknesses. The former yields an\nimage that is clean but blurry due to camera or object motion, whereas the\nlatter is sharp but noisy due to low photon count. Motivated by the fact that\nmodern smartphones come equipped with multiple rear-facing camera sensors, we\npropose a novel dual-camera method for obtaining a high-quality image. Our\nmethod uses a synchronized burst of short exposure images captured by one\ncamera and a long exposure image simultaneously captured by another. Having a\nsynchronized short exposure burst alongside the long exposure image enables us\nto (i) obtain better denoising by using a burst instead of a single image, (ii)\nrecover motion from the burst and use it for motion-aware deblurring of the\nlong exposure image, and (iii) fuse the two results to further enhance quality.\nOur method is able to achieve state-of-the-art results on synthetic dual-camera\nimages from the GoPro dataset with five times fewer training parameters\ncompared to the next best method.
We also show that our method qualitatively\noutperforms competing approaches on real synchronized dual-camera captures.", + "authors": "Shayan Shekarforoush, Amanpreet Walia, Marcus A. Brubaker, Konstantinos G. Derpanis, Alex Levinshtein", + "published": "2023-09-16", + "updated": "2023-09-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Capturing high quality images under challenging conditions of low-light and dynamic scenes can be daunting, especially for smartphone cameras that have limited physical sensor size. (*Work done during an internship at Samsung AI Center Toronto.) In such circumstances, the exposure time becomes a prominent factor that affects the quality of the final image. Increasing the exposure time allows more light to reach the sensor, yielding an image with a lower level of noise and rich colors. However, long exposure causes objects to appear unpleasantly blurry, due to either camera or scene motion. While lowering exposure time helps capture sharp details at high sensitivity settings (ISO), the resulting image becomes susceptible to noise and color distortion. This trade-off is illustrated in Fig. 1. (Figure 1. Given a long exposure blurry image (A), synchronized with a burst of short exposure images (B1, B2, B3), we approach the joint deblurring-denoising task, producing a high quality clean and sharp image (C).) Given a noisy or blurry image, one can recover an enhanced version using denoising or deblurring methods. These restoration tasks are long-standing problems in image processing that are often addressed independently using either traditional methods [7, 11], or recent learning-based ones [14,41,73] that owe their success to training convolutional neural networks on a large amount of clean/corrupted pairs of data. To make use of complementary information from both types of images, some recent works aim for the joint task of deblurring-denoising when corresponding short and long exposure images are available as input [8, 39, 77, 78]. These multi-image methods outperform single image baselines. Many modern smartphones have multiple rear-facing cameras, and acquisition of synchronized images is becoming an emerging capability [28] (multi-camera Android API). In this paper, we employ a two-camera imaging system for joint deblurring and denoising. Similar to earlier classical work [42, 56], we acquire a burst of short exposure images, rather than a single image, with a synchronized long exposure image. The benefit is two-fold. First, under the assumption of relative rigidity between the cameras, temporal synchronization of captures enables non-blind deblurring of the long exposure image based on the motion information in the burst [42, 56]. Some previous works have addressed the joint deblurring-denoising task in either synchronized [28] or unsynchronized [8, 39, 78] settings; however, they cannot infer the motion from the single short exposure image. Some others [59], despite capturing multiple images with different exposure times, are limited to a single camera without synchronization. Second, compared to the two-frame joint deblurring-denoising approaches [8, 28, 39, 77, 78], complementary information across multiple independent noisy images in the burst can be leveraged to more accurately estimate the underlying signal [36,64].
To this end, we first adapt a motion-aware deblurring network [76], equipped with deformable convolutions, to exploit externally provided optical flow; so the network deblurs a given long exposure image in a non-blind fashion. We compute the optical flow between frames in the burst using a pre-trained network. In case of spatial misalignment of cameras in a real dual-camera system, we align images before being passed to the method. In addition to the new flow-guided deblurring architecture, we employ a lightweight version of burst processing [3] to denoise short exposure images into a single clean version. We then fuse the resulting denoised and deblurred intermediate features, and a final clean and sharp image is reconstructed. We train the entire pipeline, consisting of deblurring, denoising and fusion, in an end-to-end fashion and evaluate the performance on synthetic data constructed by repurposing the GoPro dataset [41], as well as data obtained using a real hardware-synchronized dual-camera system. Our contributions are summarized as follows: \u2022 A novel flow-guided deblurring network, that uses the motion in the short exposure burst to remove nonuniform motion blur in the long exposure image. \u2022 A joint architecture for burst denoising and deblurring, combining complementary information from the short exposure burst and long exposure image for the final image restoration. Our approach to joint deblurring-denoising achieves stateof-the art results on challenging synthetic data and performs competitively with others on real data. We will release our code upon publication. 2. Related Work Image Deblurring. Image blur can arise from camera shake, object motion, focus, or a combination of these factors [1, 13, 21, 54]. Some early non-blind deconvolution methods, like Lucy-Richardson [34,47] and Wiener deconvolution [6], aim to restore the image assuming a known blur kernel. Other traditional deblurring methods aim to recover the sharp latent image (and blur kernel) from a blurry image using optimization techniques. Most techniques make assumption about camera motion [18, 63, 79] or use image priors [9, 10, 13, 22, 44, 49, 65] for regularization. Although these techniques can perform well under controlled settings justifying the chosen prior, their application to real-world images is limited. With recent advances of deep neural networks, learning-based methods [20, 23, 24, 27, 30, 40, 41, 48, 50\u201352, 55] are able to achieve state-of-the-art results even in challenging scenarios. CNNbased methods can recover the deblurred image (and blur kernels) from a blurry image either in an end-to-end fashion [16, 26, 41, 43, 46, 69, 76] or using deconvolution from predicted blur kernels [54, 66]. Some methods predict the deblurring result directly from the blurry image in a blind manner e.g., [43], [41], [26]. Other works [16], [46], [69], [76] decompose the deblurring problem into predicting motion from the blurry image and using it for non-blind image deblurring. We follow the latter approach, but use flow predicted from a synchronized short exposure burst, resulting in a more reliable motion estimate. Image Denoising. Classical denoising methods aim to reduce noise in an image by applying filtering either in spatial [7, 11, 57, 60] or frequency [45] domains. Similar to image deblurring, deep learning-based denoising methods [2,17,67,73,74] have been shown to outperform traditional methods. More complex noise synthesis models [17,38,62] can further boost denoising performance. 
Low-light imaging has especially benefited from burst processing methods. A burst of short exposure images allows the capture of more light by integrating information across the burst, while overcoming the problems of motion blur and non-uniform dynamic range pronounced in a long exposure image. Some approaches do not compute the burst motion explicitly, or represent it implicitly using spatially varying kernels [25,36,58,64], relying on end-to-end deep learning to integrate information across the burst. Others make use of explicit motion estimation [3, 4, 15, 29] using Lucas-Kanade [33] or deep optical flow methods. With a moderate amount of noise, when motion can still be estimated reliably, the latter methods achieve state-of-the-art performance [35]. In our work, we make use of a burst denoising method with an explicit motion estimate [3]. In addition to obtaining good denoising performance, we reuse the computed motion for flow-guided long exposure image deblurring and fuse the results. Image Restoration using Short and Long Exposure Images. To exploit complementary information available in short and long exposure images, some methods combine the two. Early dual-camera approaches [42, 56] propose a hybrid imaging system that uses predicted motion from one camera to deblur the long exposure image from a second camera. Other classical methods like [68] and [71] use degraded image pairs (noisy or blurry) to recover the sharp latent image. Recently, deep learning-based methods have been used to combine a single long and short exposure image for image restoration [8, 28, 39, 78]. LSD2 [39] processes concatenated short and long exposure images with a simple U-Net style architecture. LSF [8] proposes a more advanced joint data synthesis pipeline that improves results on real data. D2HNet [78] proposes a two-phase architecture trained on synthetically generated long/short exposure pairs to first deblur a long-exposure image and then enhance the result based on the guidance from a single noisy short-exposure image. Lai et al. [28] engineered a dual-camera system on Google's Pixel to synchronously capture frames for face deblurring. They train deep CNNs to align and fuse a blurry but low noise raw image from the main camera with a sharp but noisy raw image from the secondary camera, to generate a final clean and sharp image of the face region. Similar to the traditional methods [42,56], we use a dual-camera system to enhance image quality. However, we employ modern deep learning approaches for motion-guided deblurring, and fuse the deblurring and burst denoising results to further boost image quality. 3. Problem Definition In the joint denoising-deblurring task, the input consists of $H \times W$ resolution sRGB images, including a burst of short exposure images, $\{S_i\}_{i=1}^{N}$, and a single blurry image, $L$. In a dual-camera setting, recording short and long exposure images can occur in different temporal orders. Here, we assume that one camera captures the long exposure image while a second camera simultaneously acquires a burst. In other words, as illustrated in Fig. 2, the entire burst capture occurs within the time span of the long-exposure one. (Figure 2. Visualization of the temporal synchronization of the two captures in an ideal dual-camera setting. The short and long exposure times are denoted as $\Delta t_s$ and $\Delta t_l$, respectively. Read-out gaps cause missing information in the burst.)
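To make this capture model concrete before it is formalized in Eq. (1) below, the following toy NumPy snippet (our own illustration, not code from the paper; frame counts and array sizes are arbitrary) simulates a long exposure as the integration of all latent frames while the second camera records only every other frame, leaving read-out gaps:

import numpy as np

T, H, W = 9, 64, 64                   # 2N-1 = 9 latent sharp frames
frames = np.random.rand(T, H, W)      # stand-in for the incoming light I(t)

long_exposure = frames.sum(axis=0)    # R in Eq. 1: integration over the full interval
burst = frames[::2]                   # N = 5 short exposures, skipping read-out gaps
reference = burst[len(burst) // 2]    # middle frame used as the reference

print(long_exposure.shape, burst.shape, reference.shape)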
Due to physical limitations of the camera, there are read-out gaps between frames of the burst, causing missing information, whereas the long exposure camera keeps recording light during the entire capture. (Figure 3. Deblurring given a motion trajectory. On the left, a single moving point from a latent sharp scene is observed at different positions along a motion trajectory (dotted line). In a long exposure image (right), this results in a blur streak aligned with the motion trajectory. Deblurring can be achieved by integrating the information along the motion trajectory.) Assuming that the cameras are temporally synchronized, the clean raw measurements of the long and short exposure images, respectively denoted by $R$ and $R_i$, are formalized as $R = \int_{t_0}^{t_0+\Delta t_l} I(t)\,dt$, $R_i = \int_{t_i}^{t_i+\Delta t_s} I(t)\,dt$ (1), where $I(t)$ is the incoming light to the sensor at time $t$. The capture of the $i$-th image in the burst starts at $t_i$ and ends after $\Delta t_s$. Meanwhile, the long exposure spans an interval of $\Delta t_l$ starting from $t_0$. Due to sensor and photon noise, the recorded raw images are noisy. These are then processed by the camera's Image Signal Processing pipeline (ISP) to produce visually appealing images, $\{S_i\}_{i=1}^{N}$ and $L$. The final goal is to use pairs of burst and long exposure images $(\{S_i\}_{i=1}^{N}, L)$ to restore a single clean and sharp image, aligned with a reference image in the burst. In particular, we define the ground-truth image as $G = \mathrm{ISP}(R_m)$, where $m$ is the index of the middle image, considered as the reference. We set the total number of images in the burst to be odd, namely $N = 2k - 1$, implying $m = k$. 4. Methodology Our architecture is shown in Fig. 4. We first discuss our approach to each individual task, deblurring and denoising, and then describe a fusion process, trained end-to-end, that combines the results into a final sharp and clean output. 4.1. Flow-guided Deblurring In real dynamic scenes, motion blur is highly non-uniform and thus removing it requires the use of spatially varying kernels. Following Motion-ETR [76], we adopt a fully convolutional deblurring network that incorporates motion information to adaptively modulate the shape of its convolution kernels. Using flexible kernels is motivated by a fundamental result from previous works [46, 72]: the deconvolution of a blurry image requires filters with similar direction/shape as the blur kernel. Accordingly, to determine the shape of the filters, we employ the exposure trajectory model [76], an extension of the blur kernel with time dependency. The exposure trajectory characterizes how a point is displaced from the reference frame at different timesteps. (Figure 4. Overview of our architecture. Our workflow is composed of three parts: (i) a motion-aware deblurring module (FD), guided by frozen optical flows obtained from the burst, that removes blur from the given long exposure image; (ii) a denoiser (BD), conditioned on the same optical flows, which merges the burst into a clean image; and (iii) a fusion module that combines feature representations from deblurring and denoising and decodes the result into a final deblurred-denoised image.) Fig. 3 illustrates this for a single moving point in a scene. During capture, a point is observed at varying locations relative to the reference position at time $t_m$. A long exposure time yields a blurry streak aligned with the motion trajectory. Given the motion trajectory for a point, deblurring is achieved by integrating the information along it. To recover the trajectory for a given point, we need to obtain spatial offsets expressing shifts from the middle time step to the other time steps. Thanks to the temporal synchronization between the burst and the long exposure image, we leverage the motion observed in the burst to obtain discrete samples of the motion trajectories (red arrows in Fig. 3). While Motion-ETR [76] trains a separate network to predict relative offsets conditioned on the input blurry image, we use the optical flow between frames in the noisy burst. In particular, for a pixel $p$ in the reference frame, $S_m$, the motion to an arbitrary frame, $S_i$, can be represented as $S_m(p) = S_i(p + \Delta p_i)$ (2), where $\Delta p_i$ denotes the flow vector for the point $p$. As illustrated in Fig. 3, these time-dependent flow vectors across the burst, $\Delta p = \{\Delta p_i\}_{i=1}^{N}$, describe the trajectory of the corresponding pixel. We argue that our pre-trained flows are more accurate offsets than those provided by Motion-ETR. First, optical flow networks are typically trained and fine-tuned on a large amount of data, and are thus likely to generalize better to new scenes. Second, our offsets are computed from sharp frames in the burst, while those in Motion-ETR are estimated from the blurry image. We support our argument with examples in the results (6.3). Once flow vectors are obtained, we linearly interpolate them into a trajectory with $K^2$ points (currently $K = 3$) and reshape the result into $K \times K$ deformable convolution kernels [12]. Thus, the kernels have spatially varying support across the image domain. We use the same backbone architecture as Motion-ETR [76], which is based on DMPHN [70]. In this hierarchical network, the last convolution at each level is deformable. Given the input blurry image, $L \in \mathbb{R}^{H \times W \times 3}$, we formulate the deblurring operation as $\tilde{L} = FD(L, \{\Delta p_i\}_{i=1}^{N}; \theta_{FD})$, $\Delta p_i = F(S_i, S_m)$ (3), where $FD$ denotes the flow-guided deblurring network parameterized by $\theta_{FD}$, and the optical flow network $F$ returns flow vectors $\Delta p_i \in \mathbb{R}^{H \times W \times 2}$ computed between the short exposure images and the reference frame. We choose PWC-Net [53] as the flow estimator because of its high accuracy and speed. Empirically, we observe that this network is robust to the noise in the input burst and provides sufficiently accurate flows to be used in the downstream deblurring task. 4.2. Burst Denoising For burst denoising, we adopt DBSR [3] due to its strong performance and use of optical flow. The latter allows the same optical flow to be used for both flow-guided deblurring and burst denoising. Since DBSR was originally designed for RAW burst super-resolution, we adapt it to the sRGB burst denoising task. Our burst denoiser is composed of four modules, described in turn. Encoder. A shared encoder, Enc, is separately applied to each image, $S_i$, in the burst, obtaining individual feature representations $e_i = \mathrm{Enc}(S_i)$.
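The following sketch (ours, not the released code; the helper name and toy flow values are hypothetical) illustrates how the per-frame flow vectors of Eq. (2) for a single pixel could be linearly interpolated into the $K \times K = 9$ sample offsets that parameterize a deformable convolution kernel, as described above:

import numpy as np

def trajectory_offsets(flows, K=3):
    # flows: (N, 2) flow vectors from the reference frame to each burst frame.
    # Returns (K*K, 2) offsets sampled uniformly along the piecewise-linear path.
    N = flows.shape[0]
    t_src = np.linspace(0.0, 1.0, N)        # time stamps of the burst frames
    t_dst = np.linspace(0.0, 1.0, K * K)    # K^2 samples along the trajectory
    x = np.interp(t_dst, t_src, flows[:, 0])
    y = np.interp(t_dst, t_src, flows[:, 1])
    return np.stack([x, y], axis=1)

flows = np.array([[-2.0, 1.0], [-1.0, 0.5], [0.0, 0.0], [1.5, -0.5], [3.0, -1.0]])
offsets = trajectory_offsets(flows, K=3)    # reshaped to a 3x3 deformable kernel in practice
print(offsets.shape)                        # (9, 2)

In the actual network these offsets are computed densely per pixel and passed to deformable convolution layers; the sketch shows only the interpolation step for one trajectory.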
We reduce the final feature dimension from D=512 to D=96 to accommodate higher resolution training images (256x256 rather than 96x96). Alignment. Due to camera and scene motion, images in the burst are spatially misaligned. To effectively fuse the burst features $e_i$, we warp them to the reference frame $S_m$ using the flow $\Delta p_i$ in Eq. 3. We obtain aligned features $\tilde{e}_i = \phi(e_i, \Delta p_i)$, where $\phi$ is the backwarp operation with bilinear interpolation. Fusion. Once aligned, the features are combined across the burst to generate a final fused feature representation, $e \in \mathbb{R}^{H \times W \times D}$. We use the attention-based mechanism [3] to adaptively extract information from the images while allowing an arbitrary number of images as input. Specifically, a weight predictor, conditioned on the warped features and flow vectors, is learned to return unnormalized log attention weights $\tilde{w}_i \in \mathbb{R}^{H \times W \times D}$ for each warped encoding $\tilde{e}_i$. The fused feature map is obtained using the weighted sum $e = \sum_{i=1}^{N} \frac{\exp(\tilde{w}_i)}{\sum_j \exp(\tilde{w}_j)} \tilde{e}_i$ (4). See our supplement for details. Decoder. The decoder reconstructs the final denoised version of the reference image from the fused feature map, $\tilde{S} = \mathrm{Dec}(e)$. We use a similar decoder architecture as DBSR [3], but remove the upsampling layer from the decoder as we are not performing super-resolution. The entire burst denoising can be summarized as $\tilde{S} = BD(\{S_i\}_{i=1}^{N}; \theta_{BD})$ (5), where $BD$ is the burst denoiser network, $\tilde{S} \in \mathbb{R}^{H \times W \times 3}$ denotes the clean reconstructed image from the burst, and $\theta_{BD}$ contains all learnable parameters of all four modules. 4.3. Joint Denoising-Deblurring We now define a learnable module that combines information from the flow-guided deblurring, FD, and the burst denoiser, BD, into a high quality image. This module receives intermediate feature representations from the FD and BD modules, concatenates them into a single feature map, and then decodes the result into the final clean and sharp image. From the three-level hierarchical deblurring network, we choose the last-layer features of the decoders at all levels, yielding a feature map of total dimension D1 = 32 x 3 = 96. From the burst denoising branch, we select the fused feature map, $e$, of dimension D2 = 96. The concatenation is followed by a conv layer that merges the features into D = 96 features. We then employ a decoder with the same architecture as in Sec. 4.2 but separate parameters to generate the final output for the joint task, $J = \mathrm{Dec}(\mathrm{concat}(d, e); \theta_J)$ (6), where concat is the concatenation operator, $d \in \mathbb{R}^{H \times W \times 96}$ denotes the feature representation taken from the deblurring module, and Dec is the decoder parameterized by $\theta_J$. We train all modules in an end-to-end fashion by minimizing a multi-task weighted loss, $\mathcal{L} = L_1(J, G) + \lambda_{\mathrm{deblur}} L_1(\tilde{L}, G) + \lambda_{\mathrm{denoise}} L_1(\tilde{S}, G)$ (7), where $L_1$ is the average L1 distance and $G$ is the ground-truth image, as defined in Sec. 3. While $L_1(J, G)$ penalizes the final joint output, $L_1(\tilde{L}, G)$ and $L_1(\tilde{S}, G)$ regularize the intermediate deblurring and denoising outputs, respectively, with hyperparameters $\lambda_{\mathrm{deblur}}$ and $\lambda_{\mathrm{denoise}}$ determining their importance. We study the impact of these additional loss terms in the experiments section. (Figure 5. Data generation pipeline based on [5]. Starting from a burst of 2N-1 consecutive frames from GoPro, the tone mapping, gamma compression and color correction steps are inverted for all images. Afterwards, the pipeline branches into two: (i) the linearized intensities are averaged into a single raw image with realistic blur; (ii) the burst is subsampled to simulate the read-out gap, followed by a high level of noise addition and color distortion common in raw short exposure captures. Finally, the ISP is reapplied to obtain the final processed long and short exposure images.) 5. Datasets Synthetic Data. To evaluate our method, we synthetically generate a dataset consistent with our problem definition (Sec. 3). We use the GoPro dataset, which contains 33 high frame-rate 720 x 1280 resolution videos. (*Images in Figs. 1, 4, 5, 7, 8, 10, 11, 12, 14 are from the GoPro dataset [41], available under a CC BY 4.0 License.) We follow the original data split of 22 videos for training and 11 videos for testing. To generate pairs of realistic synchronized long and short exposure images as in Sec. 3, we use a procedure similar to [8]. To start, we obtain RAW images by unprocessing [5] 2N-1 consecutive frames. On one hand, the resulting images are averaged to obtain the clean RAW long exposure image, while on the other, they are divided by the under-exposure factor r to generate the clean short exposure RAW burst. To simulate read-out gaps in the short exposure burst, we skip every other frame, resulting in an N-frame burst. In the original GoPro dataset, blurry frames were mostly generated by averaging 11 consecutive frames. To remain consistent, while also simulating read-out gaps by dropping every other frame and having an odd number of burst frames, the maximum burst size is N = 5. The main results are presented with N = 5. The impact of smaller burst sizes is also examined in the ablation study. Subsequently, we add realistic degradations similar to [8]. For the short exposure burst, we apply color distortion to simulate the commonly present purple tint, similar to [8,39,78]. For both the long and short exposure images, we add heteroscedastic Gaussian noise. Finally, we process the images back into sRGB using an ISP. This results in triplets of long exposure image, short exposure burst, and ground truth: $(L, \{S_i\}_{i=1}^{5}, G)$. Our full synthesis process is shown in Fig. 5. More details can be found in the supplement. Real Data. In addition to our synthetic data experiments, we capture several real synchronized long exposure images and short exposure bursts using the hardware-synchronized FLIR camera rig shown in Fig. 6. (Figure 6. Synchronized dual-camera capture rig with FLIR BlackFly cameras, used to collect real data.) Images are captured at 2048 x 3072 resolution in RAW format and processed with the ISP used for the synthetic data. Unlike the synthetic data, the two cameras are spatially misaligned. Thus, prior to applying our method, the long exposure image is warped to the middle frame in the short exposure burst using RANSAC-based robust homography fitting. 6.
Experiments 6.1. Implementation Details. We train our method using an AdamW optimizer [32] with a cosine annealing learning rate scheduler [31] starting from lr = 0.0001 to lr = 0.00001 for a total of 200 epochs. The training data consists of 256 \u00d7 256 patches passed in mini-batches of size B = 16 to the network. During training, we do not update the pre-trained optical flow network. We set \u03bbdeblur = \u03bbdenoise = 0.25 unless otherwise specified. Experiments are implemented in PyTorch and models are trained on four NVIDIA A40 GPUs for 15 hours. 6.2. Joint Deblurring-Denoising To our knowledge, there is no previous work addressing the task of joint deblurring-denoising while using burst of short exposure images as input. Nevertheless, a handful of methods [8, 28, 39, 77, 78] approach a similar task where, rather than a burst, only a single short-exposure image is available. Classical works [42,56] are also close to our setting proposing a hybrid camera to correct the blur in a single long exposure image based on the motion blur estimated from multiple short exposure frames captured at the same time. However, they use short exposure images solely for motion blur estimation, whereas we further process them into a denoised high quality image. Through experiments on our synthetically generated dataset, we examine the image restoration quality of our proposed method and compare it with the SOTA: LSD2 [39], LSF [8], and D2HNet [78]. Experimental setting. Our problem setting differs from that in each of the above work and we modify the corresponding training procedure or method to have a fair evaluation. First, the baseline joint methods inherently accept only a single short exposure image in addition to the long exposure one. Hence, for the baselines, we pair the middle frame of the burst with the blurry image, jointly passed as the input. In contrast, our method leverages the entire burst. There are also baseline-specific differences. For example, the input and output images for LSF are in RAW space. Thus, during training and inference, it requires an ISP to post-process the output into an sRGB image. To maintain consistency in training and evaluation across all methods, we train the LSF network on our synthetic data with sRGB images as input and eliminate any post-processing. LSF also uses a pixel shuffle layer to output an image at \u00d72 resolution of the input. As our inputs and outputs match in size, we remove the pixel shuffle layer. Since the D2HNet model is large, training it from scratch on our synthetic dataset results in overfitting. Instead, we fine-tuned two different versions of D2HNet on our synthetic dataset, both of which are pretrained on the large dataset from [78]. We fine-tune one model on patches (row 1 of Table 1) and another on full-sized images (row 2 of Table 1). Results. After bringing all methods into a consistent setting, we evaluate their performance on our synthetic data. In addition to the two common metrics of PSNR and SSIM [61], we use LPIPS [75] as a perceptual metric to quantitatively assess performance. For the joint task of deblurring-denoising, quantitative results on test data are reported in Table 1. LSD2 achieves the lowest performance, despite having a relatively large number of learnable parameters. In contrast, LSF use more complex layers such as deformable convolutions, but overall require less parameters. 6 \fFigure 7. Qualitative comparison of our method with the state-of-the-art on two examples of test synthetic data. Table 1. 
Comparison of our method with joint deblurring-denoising SOTA in terms of PSNR, SSIM and LPIPS metrics. Best results are shown in bold.
Method | PSNR↑ | SSIM↑ | LPIPS↓ | # params
D2HNet* [78] | 31.30 | 0.933 | 0.114 | 82.3M
D2HNet† [78] | 37.87 | 0.987 | 0.026 |
LSD2 [39] | 36.42 | 0.981 | 0.035 | 31M
LSF [8] | 37.22 | 0.984 | 0.033 | 10.9M
Ours | 38.25 | 0.988 | 0.025 | 17.4M
(*Fine-tuned with the same input patch size used in [78]. †Fine-tuned with full-sized images.)
Our method significantly outperforms LSD2 and LSF in terms of PSNR. D2HNet fine-tuned on full-resolution synthetic data considerably outperforms the version trained on patches. Although D2HNet is pre-trained on more data and has significantly more parameters, our method achieves superior performance in terms of all metrics. Qualitative examples are also demonstrated in Fig. 7. In the first example, the simulated motion and noise in the input image inhibit perceiving the number written on the poster. Unlike other methods, our method is able to recover the number better. This is also confirmed by the higher PSNR measured within the selected region. In the other example, high frequency details such as cracks between tiles are the target; our method is able to recover all of them, unlike the others. Additional examples are provided in the supplement. 6.3. Ablation Study Auxiliary losses. We first explore the impact of the intermediate loss terms $L_1(\tilde{S}, G)$ and $L_1(\tilde{L}, G)$. We train our method with the multi-task loss using different combinations, $\lambda_{\mathrm{deblur}}, \lambda_{\mathrm{denoise}} \in \{0, 0.25\}$. When $\lambda = 0$, the corresponding term is effectively eliminated. The quantitative results in Table 2 indicate that the best PSNR is achieved by the setting $\lambda_{\mathrm{deb.}} = \lambda_{\mathrm{den.}} = 0.25$, while differences in SSIM and LPIPS are marginal. Also, the performance degrades with higher values of $\lambda = 0.5$. Burst size. We also examine the effect of the burst size N. We evaluate our method in a new experiment where every other frame is kept from the burst of size N = 5, yielding a new burst of size N = 3. We also examine the trivial case N = 1 by keeping only the middle frame. To train our method for N = 1, we drop the flow guidance for deblurring, and the feature warping and attention in the burst processing. The quantitative results in Table 2 show that PSNR significantly drops when reducing the burst size to N = 3. For N = 1, an even larger degradation is observed. Note that our joint model with N = 3 still outperforms the prior work of LSD2, LSF and D2HNet, while being more efficient than the N = 5 version as it requires fewer input images.
Table 2. Effect of regularization terms and burst size.
N | λ_deb. | λ_den. | PSNR↑ | SSIM↑ | LPIPS↓
5 | 0 | 0 | 38.14 | 0.9882 | 0.0254
5 | 0 | 0.25 | 38.19 | 0.9882 | 0.0252
5 | 0.25 | 0 | 38.17 | 0.9883 | 0.0251
5 | 0.25 | 0.25 | 38.25 | 0.9883 | 0.0250
5 | 0.5 | 0.5 | 38.18 | 0.9882 | 0.0254
3 | 0.25 | 0.25 | 37.94 | 0.987 | 0.028
1 | 0.25 | 0.25 | 37.28 | 0.984 | 0.033
Single-Task baselines. Next, we explore how the individual flow-guided deblurring and burst denoising modules perform compared to the joint model. We also compare our new deblurring sub-module with Motion-ETR to validate the improvement achieved by replacing learned offsets with pre-trained optical flow. We train the flow-guided deblurring and burst denoiser networks separately and then evaluate them on the synthetic data. For Motion-ETR, we follow the original training protocol [76]. Quantitative results are summarized in Table 3, showing that the joint model noticeably outperforms the individual modules. This verifies that our simple fusion process is able to take advantage of both worlds and output a more accurate image in the joint restoration task. Motion blur is often more difficult to tackle than noise, as reflected in the superiority of burst denoising over deblurring. Qualitative examples are also shown in Fig. 8. The burst denoiser outputs a sharp image, while flow-guided deblurring is slightly better at maintaining colors close to the ground truth. When integrated in our joint model, the final restoration does not suffer from color distortion and remains sharp. Finally, flow-guided deblurring significantly outperforms Motion-ETR in all provided metrics. (Figure 8. Qualitative comparison of our method in the ablation study on two examples of test synthetic data.) (Figure 9. Qualitative comparison of different methods on two data examples captured by our real dual-camera system.)
Table 3. Comparison of our method with baselines approaching the individual tasks in terms of PSNR, SSIM and LPIPS metrics.
Method | Task | PSNR↑ | SSIM↑ | LPIPS↓
Motion-ETR [76] | Deblurring | 30.17 | 0.936 | 0.102
Ours (flow-guided) | Deblurring | 32.23 | 0.958 | 0.078
Ours | Burst denoising | 35.92 | 0.986 | 0.032
Ours | Joint | 38.25 | 0.988 | 0.025
In Fig. 10, we use the spatial offsets to visualize the trajectory that a sparse grid of pixels follows during the exposure time. Comparing these trajectories shows that the pre-trained flow is capable of recovering highly non-linear motion, while the quadratic trajectory learned by Motion-ETR is not expressive enough, leading to a deblurred result of lower quality. (Figure 10. Visualization of trajectories provided to Motion-ETR [76] and our flow-guided deblurring, overlaid on top of the input blurry image. The respective network outputs are also displayed.) 6.4. Real Data Finally, we evaluate all methods on real data captured by the dual-camera system. The models are trained on synthetic data that does not perfectly match the real data in every aspect. Despite this gap, as shown in Fig. 9, our approach produces sharper and cleaner results while the other methods suffer from noise, incorrect colors or severe artifacts. Compared to our flow-guided deblurring, Motion-ETR [76] performs worse in removing the blur. Note that the individual burst denoiser outputs incorrect colors, whereas our method is able to compensate for this by using the deblurring branch. We also report the Natural Image Quality Evaluator (NIQE) [37], a non-reference-based metric commonly used for image quality assessment when the ground truth is not available. A lower value indicates better quality. We achieve the lowest value in all the provided examples. Please see the supplement for further examples. 7." + }, + { + "url": "http://arxiv.org/abs/2206.00746v2", + "title": "Residual Multiplicative Filter Networks for Multiscale Reconstruction", + "abstract": "Coordinate networks like Multiplicative Filter Networks (MFNs) and BACON\noffer some control over the frequency spectrum used to represent continuous\nsignals such as images or 3D volumes. Yet, they are not readily applicable to\nproblems for which coarse-to-fine estimation is required, including various\ninverse problems in which coarse-to-fine optimization plays a key role in\navoiding poor local minima. We introduce a new coordinate network architecture\nand training scheme that enables coarse-to-fine optimization with fine-grained\ncontrol over the frequency support of learned reconstructions. This is achieved\nwith two key innovations.
First, we incorporate skip connections so that\nstructure at one scale is preserved when fitting finer-scale structure. Second,\nwe propose a novel initialization scheme to provide control over the model\nfrequency spectrum at each stage of optimization. We demonstrate how these\nmodifications enable multiscale optimization for coarse-to-fine fitting to\nnatural images. We then evaluate our model on synthetically generated datasets\nfor the the problem of single-particle cryo-EM reconstruction. We learn high\nresolution multiscale structures, on par with the state-of-the art.", + "authors": "Shayan Shekarforoush, David B. Lindell, David J. Fleet, Marcus A. Brubaker", + "published": "2022-06-01", + "updated": "2022-10-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Coordinate networks have emerged as a powerful way to represent and reconstruct images, videos, and 3D scenes [1\u20133], and for solving challenging inverse problems such as 3D molecular reconstruction for cryo-electron microscopy (cryo-EM) [4\u20136]. They typically take as input a point de\ufb01ned on a continuous, low-dimensional domain (e.g., a 2D position for images), and they output the signal value at that point (e.g., the color). In recently proposed architectures [7, 8], the frequency content of the represented signal can also be explicitly controlled. Such networks are motivated in part by the effectiveness of multiscale methods in image processing and 3D reconstruction. Nevertheless, while current scale-aware coordinate network architectures offer some control over scale [8\u201310], they are not readily compatible with classical multiscale methods used for coarse-to-\ufb01ne optimization, like pyramids [11, 12] or multigrid solvers [13]. State-of-the-art cryo-EM models, for instance, use frequency-marching to progressively estimate 3D density, beginning with a coarse structure, then adding \ufb01ner structure at each iteration [14, 15]. Some networks use heuristics to empirically constrain the scale of network output for coarse-to-\ufb01ne optimization, but they have no explicit constraints on the represented frequencies [9, 16, 17]. Others represent signals at multiple scales at inference time, but do not enable control of the frequency spectrum during training [8, 10]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2206.00746v2 [cs.CV] 26 Oct 2022 \fFigure 1: Overview of residual multiplicative \ufb01lter networks. Skip connections ef\ufb01ciently combine low-frequency information from earlier layers with high-frequency information in later layers. With the proposed initialization scheme, each layer learns separate, increasing frequency bands. Here, we introduce a new coordinate network architecture and training scheme that enables one to effectively control the frequency support of the learned representations during multiscale optimization, building on recent neural networks with an analytical and controllable Fourier spectra [7, 8]. For coarse-to-\ufb01ne reconstruction, one can divide the training process into stages, each of which learns a single scale. Training starts at the coarsest resolution and then progressively adds \ufb01ner scales. Naively employing this procedure with existing architectures will damage the signal reconstruction at coarser scales when \ufb01tting \ufb01ner ones (e.g., see Fig. 3). 
To mitigate this, we modify MFNs by adding skip connections [18], so reconstruction at one scale naturally incorporates the learned structure at coarser scales. This alleviates the need to adapt previous layers in favor of \ufb01tting the \ufb01ner grained reconstructions, effectively obviating the need to re-learn signal structure captured by coarser scales. Second, we derive an initialization scheme which explicitly introduces hierarchical gaps in the frequency spectrum of a new layer so that, by construction, its frequency support has a controllable degree of overlap. This requires the network to use the skip connections in order to \ufb01ll the holes in the spectrum and ensures lower levels remain faithful to coarse scale representations of the signal. We apply our technique to the inverse problem of single-particle cryo-EM reconstruction [6, 4, 14, 19] to determine the 3D structure of macromolecular complexes, like proteins, from collections of noisy 2D tomographic projections. This is a challenging non-linear inverse problem, closely related to multiview reconstruction, requiring estimation of the pose and 3D structure of bio-molecules. Successful 3D reconstruction relies on coarse-to-\ufb01ne methods, sometimes called frequency marching [20], to reduce the risk of becoming trapped in poor local minima. Our results demonstrate effective and ef\ufb01cient ab initio reconstruction of 3D structures with cryo-EM to high resolution. This paper proposes a new architecture and training approach to enable powerful multiscale estimation techniques with coordinate-based networks. Speci\ufb01cally, we make the following contributions: \u2022 We develop residual multiplicative \ufb01lter networks, a new coordinate network architecture with a tailored initialization scheme for multiscale signal reconstruction and representation. \u2022 We design fast and ef\ufb01cient coarse-to-\ufb01ne training strategies for residual multiplicative \ufb01lter networks that leverage the multiscale representation. \u2022 We apply the proposed architecture and training strategy to cryo-EM reconstruction on two synthetic datasets and achieve \ufb01nal reconstructions competitive with the cryoSPARC [14], a state-of-the-art method. 2 \f2 Related Work Coordinate-based Networks. Coordinate-based networks (e.g., [2, 3]) offer a memory-ef\ufb01cient, continuous function parameterization that can be \ufb02exibly trained to reconstruct 3D appearance [1, 21\u2013 25] and structure [26\u201332], including applications in biomedical imaging [33, 34] and cryo-EM [5, 4, 6]. Although originally formulated as fully-connected MLP architectures, new models have been proposed to improve performance and interpretability. For instance, one can use multiple small fully-connected networks to improve representational capacity [35\u201337], or combine coordinate-based networks with explicit feature grids to improve ef\ufb01ciency [38, 39], or facilitate generalization across shapes or scenes [40\u201343, 29, 44, 45]. Multiplicative Filter Networks (MFNs) replace the conventional MLP architecture with successive layers of Hadamard products and sine non-linearities [7]. Closest to our own work, MFNs have an analytical Fourier spectrum whose bandwidth can be explicitly constrained [8], improving the controllability and interpretability of the representation. Inspired by this work, we establish Residual MFNs (rMFNs) with specialized control over the Fourier spectrum to enable coarse-to-\ufb01ne optimization. Multiscale Reconstruction. 
Multiscale representations and reconstruction methods are fundamental concepts in signal and image processing. For example, wavelets are a fast and ef\ufb01cient multiscale signal representation [46, 47] useful for denoising, compression, communications, and optical \ufb02ow estimation [48]. Multigrid frameworks are used for many differential equation solvers for physics simulations [13]. Gaussian and Laplacian pyramids [49, 11] have broad application to image processing and have, in part, inspired modern deep learning architectures [50]. Recent coordinate-based network architectures build on these fundamental concepts of multiscale signal representation and reconstruction. Some architectures use explicit feature grids at multiple scales to enable ef\ufb01cient training and inference for representing shape [39, 51] or rendering scenes [52, 38]. Multiscale fully-connected coordinate network architectures have been realized by progressively increasing the frequencies of positional encodings during optimization, for example, for bundle adjustment and neural scene representation [9, 17]. It is also possible to optimize separate networks with inductive biases towards low or high frequencies to improve shape representation [16]. Still, while these methods introduce multiscale training techniques, the architectures rely on inductive biases rather than explicit constraints on the scale or Fourier spectrum of the representation. Other methods use coordinate networks for multiscale representation rather than ef\ufb01cient optimization. For instance, scale-aware positional encodings are trained at all scales simultaneously so that different output scales can be queried at inference time [10]. Band-limited coordinate networks (BACON) extend MFNs to control the network bandwidth, but require training on multiple output scales simultaneously for multiscale representation at inference time [8]. Our work is inspired by BACON, but signi\ufb01cantly modi\ufb01es the architecture, initialization scheme, and training techniques to make ef\ufb01cient multiscale optimization techniques compatible with coarse-to-\ufb01ne reconstruction. Cryo-EM Reconstruction. While coordinate networks have previously been used for cryo-EM reconstruction [5, 4, 6], they differ from our method in that they explicitly represent signals in the Fourier domain. Our goal is to demonstrate coarse-to-\ufb01ne reconstruction methods with coordinate networks in the real domain, and we demonstrate this for cryo-EM reconstruction. The proposed method may also prove promising for advanced cryo-EM methods that resolve intrinsic motion of structures since spatial motion can be directly modeled in the primal domain [53]. 3 Background Coordinate neural networks have emerged as effective tools for approximating complex spatial data (e.g., see [1, 2, 7, 8, 10]). In the case of images, for example, they provide a mapping from continuous positions on the image plane to RGB values. Mapping 3D coordinates to volumetric density allows modeling 3D geometry. In particular, these networks have been shown to be readily trained to accurately approximate low-dimensional, complex signals in a memory-ef\ufb01cient way. Among this broad family of network architectures, in this paper we focus on Multiplicative Filter Networks (MFNs) [7] such as BACON [8], as they provide explicit control over the Fourier spectra of the function approximations. Multiplicative Filter Networks. 
Most coordinate-based networks, like SIREN [2] and Random Fourier Features [3], use an MLP architecture consisting of the successive composition of linear transformations and element-wise non-linearities. In contrast, Multiplicative Filter Networks (MFNs) use a Hadamard product (i.e., element-wise multiplication) instead of composition [7]. Concretely, in an L-layer MFN, the $d_{\mathrm{in}}$-dimensional input coordinates, $x \in \mathbb{R}^{d_{\mathrm{in}}}$, are transformed using $L + 1$ non-linear filter modules, denoted by $g^{(i)}(\cdot) : \mathbb{R}^{d_{\mathrm{in}}} \rightarrow \mathbb{R}^{d_h}$, $i = 0, 1, \ldots, L$, where $d_h$ is the hidden layer dimension. In [7], either sinusoidal or Gabor filters are used for this transformation. The sinusoidal case is given by $g^{(i)}(x) = \sin(\omega^{(i)} x + \phi^{(i)})$, where $\omega^{(i)} \in \mathbb{R}^{d_h \times 3}$ and $\phi^{(i)} \in \mathbb{R}^{d_h}$ are referred to as frequencies and phases. At layer $i$, the intermediate representation of the previous layer, $z^{(i-1)}$, after being linearly transformed, is multiplied by the non-linear filter of the input, $g^{(i)}(x)$. Formally, $z^{(0)} = g^{(0)}(x)$, $z^{(i)} = g^{(i)}(x) \odot \left( W^{(i)} z^{(i-1)} + b^{(i)} \right)$, $i = 1, \ldots, L$ (1). The behaviour of MFNs can be understood by analyzing the frequencies of the intermediate layers. Using the trigonometric identity $\sin(a)\sin(b) = \tfrac{1}{2}\sin(a + b - \pi/2) + \tfrac{1}{2}\sin(a - b + \pi/2)$ (2), closed-form expressions for the intermediate representations can be derived [7, 8]. Dropping the bias terms $b^{(i)}$ for simplicity, the individual components of the intermediate representation $z^{(i)}$ are equal to a weighted sum of an exponential number of sine terms [7]: $z^{(i)}_{n_i} = \sum_{\substack{n=(n_0,\cdots,n_{i-1},n_i) \\ s=(s_1,\cdots,s_i)}} \alpha^{(i)}(n) \sin\!\left( \omega^{(i)}(n, s)\, x + \phi^{(i)}(n, s) \right)$ (3), where $n = (n_0, \cdots, n_{i-1}, n_i)$ is a tuple of indices with $n_j \in \{1, \ldots, d_h\}$ and $s_j \in \{-1, 1\}$. Each term in the summation comprises an amplitude, frequency and phase shift, given by $\alpha^{(i)}(n) = \frac{1}{2^i} \prod_{l=1}^{i} W^{(l)}_{n_l, n_{l-1}}$, $\omega^{(i)}(n, s) = \omega^{(0)}_{n_0} + \sum_{l=1}^{i} s_l \omega^{(l)}_{n_l}$, $\phi^{(i)}(n, s) = \phi^{(0)}_{n_0} + \sum_{l=1}^{i} s_l \left( \phi^{(l)}_{n_l} - \tfrac{\pi}{2} \right)$ (4). BACON. This analysis of the frequency content of an MFN shows that the bandwidth of $z^{(i)}(x)$ is the sum of the input bandwidths. This was leveraged in BACON [8] to band-limit the representation of each individual layer. Additionally, an output was created for each layer to produce intermediate reconstructions; i.e., $y^{(i)} = W^{(i)}_{\mathrm{out}} z^{(i)} + b^{(i)}_{\mathrm{out}}$ (5), where $W^{(i)}_{\mathrm{out}} \in \mathbb{R}^{d_{\mathrm{out}} \times d_h}$ and $b^{(i)}_{\mathrm{out}} \in \mathbb{R}^{d_{\mathrm{out}}}$. However, to encourage $y^{(i)}$ to preserve its signal representation at scale $i$, one requires extra scale-specific losses during training, without which subsequent training will corrupt the lower resolution representation. Motivating Example. Although BACON does provide control over spectral bandwidth, when adopted in the context of a coarse-to-fine optimization procedure, it fails to preserve and carry over the learned coarse-scale representations when one moves to the next finer-scale stage of the optimization. Consequently, at each round, one must re-optimize the entire representation at all previous scales, hindering efficient coarse-to-fine optimization. As an example, in Fig. 3 we examine the representation learned by BACON within a coarse-to-fine training strategy, where successive output layers are trained to fit an image at finer scales.
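A minimal numerical sketch of the sinusoidal MFN recursion in Eq. (1) above (our own illustration, with arbitrary dimensions and random parameters, not the paper's code) is:

import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, L = 2, 16, 3
omega = [rng.normal(scale=30.0, size=(d_h, d_in)) for _ in range(L + 1)]   # filter frequencies
phi   = [rng.uniform(-np.pi, np.pi, size=d_h) for _ in range(L + 1)]       # filter phases
Wmat  = [rng.normal(scale=1.0 / np.sqrt(d_h), size=(d_h, d_h)) for _ in range(L)]
b     = [np.zeros(d_h) for _ in range(L)]

def mfn(x):
    g = lambda i: np.sin(omega[i] @ x + phi[i])     # sinusoidal filter g^(i)(x)
    z = g(0)                                        # z^(0)
    for i in range(1, L + 1):
        z = g(i) * (Wmat[i - 1] @ z + b[i - 1])     # Hadamard product, Eq. 1
    return z

print(mfn(np.array([0.3, -0.7])).shape)             # (16,)

A linear output head as in Eq. (5) would then map z to the reconstructed signal value at x.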
We apply a staged training procedure such that a loss is applied to the outputs at each scale in separate rounds of optimization. Unfortunately, when fitting finer scales, BACON completely forgets the learned representations of the coarse-scale outputs from previous optimization rounds (see supplement). For a fairer comparison, in any given round we keep all scale-specific losses, but allow only the linear output layers for the other scales to be updated. Still, the results of BACON (Fig. 3, top row) show that the representations at coarser scales nevertheless become corrupted while training at finer scales. In fact, reconstructions at different scales are highly coupled in BACON, since new layers can introduce new frequencies anywhere within the band limit. In the following, we modify the architecture and its initialization and training scheme (Fig. 1) to make coordinate networks applicable to coarse-to-fine multiscale reconstruction. (Figure 2: Illustration of the initialization scheme. (Left) The central red square depicts the base spectrum comprising the frequencies of the previous layer, $\omega^{(i-1)}(n, s)$. The other coloured regions depict copies of the base spectrum, shifted by $\lambda_2 B_{i-1}$ in four directions with $\lambda_2 \approx 2$. The copies are slightly larger due to the perturbation by the random $v^{(i)}_j$. Dashed regions are naturally introduced because of the sign factors $s_i = \pm 1$. (Right) The new frequency spectrum of $\omega^{(i)}(n, s)$ is the union of the shaded regions. The initialization controls the overlap with the spectrum of the previous layer.) 4 Residual Multiplicative Filter Networks From Eq. 4, we can express the frequencies in layer $i$ in terms of the frequencies in previous layers with the recursion $\omega^{(i)}(n, s) = \omega^{(i-1)}(\tilde{n}, \tilde{s}) + s_i \omega^{(i)}_{n_i}$ (6), where $\tilde{n}$ and $\tilde{s}$ are formed from $n$ and $s$ by removing the last entry, e.g., $\tilde{n} = (n_0, \cdots, n_{i-1}) = n$