{ "url": "http://arxiv.org/abs/2404.16678v1", "title": "Multimodal Semantic-Aware Automatic Colorization with Diffusion Prior", "abstract": "Colorizing grayscale images offers an engaging visual experience. Existing\nautomatic colorization methods often fail to generate satisfactory results due\nto incorrect semantic colors and unsaturated colors. In this work, we propose\nan automatic colorization pipeline to overcome these challenges. We leverage\nthe extraordinary generative ability of the diffusion prior to synthesize color\nwith plausible semantics. To overcome the artifacts introduced by the diffusion\nprior, we apply luminance conditional guidance. Moreover, we adopt\nmultimodal high-level semantic priors to help the model understand the image\ncontent and deliver saturated colors. Besides, a luminance-aware decoder is\ndesigned to restore details and enhance overall visual quality. The proposed\npipeline synthesizes saturated colors while maintaining plausible semantics.\nExperiments indicate that our proposed method considers both diversity and\nfidelity, surpassing previous methods in terms of perceptual realism and gaining\nthe most human preference.", "authors": "Han Wang, Xinning Chai, Yiwen Wang, Yuhong Zhang, Rong Xie, Li Song", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "label": "Original Paper", "paper_cat": "Diffusion AND Model", "gt": "Automatic colorization synthesizes a colorful and semantically plausible image given a grayscale image. It is a classical computer vision task that has been studied for decades. However, existing automatic colorization methods cannot provide a satisfactory solution due to two main challenges: incorrect semantic colors and unsaturated colors. Aiming to synthesize semantically coherent and perceptually plausible colors, generative models have been extensively incorporated into relevant research. Generative adversarial network (GAN) based [4, 5, 1] and autoregressive-based [6, 2, 7] methods have made notable progress. Although the issue of incorrect semantic colors has been partially addressed, significant challenges still remain. As shown by the yellow boxes in Figure 1, the semantic errors significantly undermine the visual quality. Recently, Denoising Diffusion Probabilistic Models (DDPM) [8] have demonstrated remarkable performance in the realm of image generation. With their exceptional generation capabilities, superior level of detail, and extensive range of variations, DDPMs have emerged as a compelling alternative to GANs. Moreover, controllable generation algorithms based on the diffusion model have achieved impressive performance in various downstream tasks such as T2I [9], image editing [10], super resolution [11], etc. In this work, we leverage the powerful diffusion prior to synthesize plausible images that align with real-world common sense. Unfortunately, applying pre-trained diffusion models directly to this pixel-wise conditional task leads to inconsistencies [12] that do not accurately align with the original grayscale input. Therefore, it becomes imperative to provide more effective condition guidance in order to ensure coherence and fidelity. We align the luminance channel both in the latent and pixel spaces. Specifically, our proposed image-to-image pipeline is fine-tuned based on pre-trained stable diffusion. 
The pixel-level conditions are injected into the latent space to assist the denoising U-Net in producing latent codes that are more faithful to grayscale images. A luminance-aware decoder is applied to mitigate pixel space distortion. In addition to incorrect semantics, another challenge in this task is unsaturated colors. For example, the oranges in the first two columns of Figure 1 suffer from unsaturated colors. To moderate the unsaturated colors, priors such as categories [5], bounding boxes [13] and saliency maps [14] have been introduced in relevant research. Based on this insight, we adopt multimodal high-level semantic priors to help the model understand the image content and generate vivid colors. To simultaneously generate plausible semantics and vivid colors, multimodal priors, including category, caption, and segmentation, are injected into the generation process in a comprehensive manner. In summary, we propose an automatic colorization pipeline to address the challenges in this task. The contributions of this paper are as follows: \u2022 We extend the stable diffusion model to automatic image colorization by introducing pixel-level grayscale conditions in the denoising diffusion. The pre-trained diffusion priors are employed to generate vivid and plausible colors. \u2022 We design a high-level semantic injection module to enhance the model\u2019s capability to produce semantically reasonable colors. \u2022 A luminance-aware decoder is designed to mitigate pixel domain distortion and make the reconstruction more faithful to the grayscale input. \u2022 Quantitative and qualitative experiments demonstrate that our proposed colorization pipeline provides high-fidelity, color-diversified colorization for grayscale images with complex content. A user study further indicates that our pipeline gains more human preference than other state-of-the-art methods. Fig. 1. We achieve saturated and semantically plausible colorization for grayscale images, surpassing the GAN-based (BigColor [1]), transformer-based (CT2 [2]) and diffusion-based (ControlNet [3]) methods.", "main_content": "Learning-based algorithms have been the mainstream of research on automatic colorization in recent years. Previous methods suffer from unsaturated colors and semantic confusion due to the lack of prior knowledge of color. In order to generate plausible colors, generative models have been applied to automatic colorization tasks, including generative adversarial networks [4, 5, 1] and transformers [6, 2, 7]. Besides, [15] shows that diffusion models are more creative than GANs. DDPM has achieved amazing results in diverse natural image generation. Research based on DDPM has confirmed its ability to handle a variety of downstream tasks, including colorization [16]. To alleviate semantic confusion and synthesize more satisfactory results, priors are introduced into related research, including categories [5], saliency maps [14], bounding boxes [13], etc. 3. METHOD 3.1. Overview A color image ylab, represented in CIELAB color space, contains three channels: lightness channel l and chromatic channels a and b. Automatic colorization aims to recover the chromatic channels from the grayscale image: xgray \u2192\u02c6 ylab. In this work, we propose an automatic colorization pipeline for natural images based on stable diffusion. The pipeline consists of two parts: a variational autoencoder [17] and a denoising U-Net. 
Explicitly, the VAE handles the transformation between the pixel space x \u2208RH\u00d7W\u00d73 and the latent space z \u2208Rh\u00d7w\u00d7c, while the denoising U-Net applies DDPM in the latent space to generate an image from Gaussian noise. The framework of our pipeline is shown in Figure 2. First, the VAE encodes the grayscale image xgray into the latent code zc. Next, the T-step diffusion process generates a clean latent code z0 from Gaussian noise zT under the guidance of the image latent zc and high-level semantics. Finally, z0 is reconstructed by a luminance-aware decoder to obtain the color image \u02c6 y. The pixel-level grayscale condition and the high-level semantic condition for the denoising process are introduced in the latent space, as shown in the yellow box in Figure 2. We elaborate on the detailed injections of these conditions in Section 3.2 and Section 3.3, respectively. As for the reconstruction process, the detailed design of the luminance-aware decoder is described in Section 3.4. Fig. 2. Overview of the proposed automatic colorization pipeline. It combines a semantic prior generator (blue box), a high-level semantic guided diffusion model (yellow box), and a luminance-aware decoder (orange box). (The diagram shows the denoising U-Net with cross attention to text embeddings in the latent space, the semantic prior generator with EfficientNet category, BLIP caption and Transfiner segmentation branches, and the luminance-aware encoder/decoder.) 3.2. Colorization Diffusion Model Large-scale diffusion models have the capability to generate high-resolution images with complex structures. While naive usage of diffusion priors generates serious artifacts, we introduce pixel-level luminance information to provide detailed guidance. Specifically, we use the encoded grayscale image zc as a control condition to enhance the U-Net\u2019s understanding of luminance information in the latent space. To involve the grayscale condition in the entire diffusion process, we simultaneously input the latent code zt generated in the previous time step and the noise-free grayscale latent code zc into the input layer of the U-Net at each time step t: z\u2032_t = conv_{1\u00d71}(concat(z_t, z_c)) (1) In this way, we take advantage of the powerful generative capabilities of stable diffusion while preserving the grayscale condition. The loss function for our denoising U-Net is defined in a similar way to stable diffusion [18]: L = E_{z,z_c,c,\u03f5\u223cN(0,1),t}[||\u03f5 \u2212\u03f5_\u03b8(z_t, t, z_c, c)||_2^2] (2) where z is the encoded color image, zc is the encoded grayscale image, c is the category embedding, \u03f5 is a noise term, t is the time step, \u03f5_\u03b8 is the denoising U-Net, and zt is the noisy version of z at time step t. 3.3. High-level Semantic Guidance To alleviate semantic confusion and generate vivid colors, we design a high-level semantic guidance module for inference. As shown in Figure 2, the multimodal semantics are generated by the pre-trained semantic generator in the blue box. Afterwards, text and segmentation priors are injected into the inference process through cross attention and segmentation guidance respectively, as shown in the yellow box in Figure 2. 
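For concreteness, a minimal PyTorch-style sketch of the grayscale condition injection of Eq. (1) follows. The module and tensor names are illustrative assumptions, not the authors' code, and the latent channel count is assumed to be 4 as in stable diffusion.

```python
import torch
import torch.nn as nn

class GrayscaleConditionInjector(nn.Module):
    """Sketch of Eq. (1): fuse the noisy latent z_t with the grayscale latent z_c."""
    def __init__(self, latent_channels: int = 4):
        super().__init__()
        # A 1x1 convolution maps the concatenated 2c channels back to c channels,
        # so the pre-trained denoising U-Net can be reused without architectural changes.
        self.fuse = nn.Conv2d(2 * latent_channels, latent_channels, kernel_size=1)

    def forward(self, z_t: torch.Tensor, z_c: torch.Tensor) -> torch.Tensor:
        # z_t: noisy latent at time step t, z_c: noise-free grayscale latent (both B x c x h x w)
        return self.fuse(torch.cat([z_t, z_c], dim=1))

# usage sketch: z_in = GrayscaleConditionInjector()(z_t, z_c); eps = unet(z_in, t, text_emb)
```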
Specifically, given the grayscale image xgray, the semantic generator produces the corresponding categories [19], captions [20] and segmentations [21]. The category, caption, and segmentation labels are in textual form, while the segmentation masks are binary masks. For textual priors, the CLIP [22] encoder is employed to generate the text embedding ct. The text embedding guidance is applied in the denoising U-Net via a cross-attention mechanism. Given the time step t, the concatenated noisy input zt and the text condition ct, the latent code z_{t\u22121} is produced by the Colorization Diffusion Model (CDM): z_{t\u22121} = CDM(z_t, t, z_c, c_t) (3) For segmentation priors, we use the pre-trained Transfiner [21] to generate paired segmentation masks M and labels L. For each instance, we first resize the binary mask M_i \u2208RH\u00d7W\u00d71 to align with the latent space. The resized mask is represented as \u00afM_i \u2208Rh\u00d7w\u00d71. Then we use the CDM to yield the corresponding latent code z^i_{t\u22121} of the masked region: z^i_{t\u22121} = CDM(z_t, t, z_c \u00d7 \u00afM_i, L_i) (4) Finally, we combine the original latent code z_{t\u22121} and the instances to yield the segment-aware latent code \u02c6z_{t\u22121}: \u02c6z_{t\u22121} = \u2211_{i=1}^{k} [z_{t\u22121} \u00d7 (1 \u2212\u00afM_i) + z^i_{t\u22121} \u00d7 \u00afM_i] (5) We set a coefficient i \u2208[0, 1] to control the strength of segmentation guidance. The threshold is defined as Tth = T \u00d7 (1 \u2212i). The segmentation mask is used to guide the synthesis process at inference time steps t > Tth. We set i = 0.3 for the experiments. Users have the flexibility to select a different value based on their preferences. 3.4. Luminance-aware Decoder As the downsampling to the latent space inevitably loses detailed structures and textures, we apply the luminance condition to the reconstruction process and propose a luminance-aware decoder. To align the latent space with stable diffusion, we freeze the encoder. The intermediate grayscale features obtained in the encoder are added to the decoder through skip connections. Specifically, the intermediate features f^i_{down} generated by the first three downsample layers of the encoder are extracted. These features are convolved, weighted, and finally added to the corresponding upsample layers of the decoder: \u02c6f^j_{up} = f^j_{up} + \u03b1_i \u00b7 conv(f^i_{down}), i = 0, 1, 2; j = 3, 2, 1 (6) We adopt an L2 loss L2 and a perceptual loss [23] Lp to train the luminance-aware decoder: L = L2 + \u03bb_p Lp (7) Fig. 3. Qualitative comparisons among InstColor [13], ChromaGAN [5], BigColor [1], ColTran [6], CT2 [2], ControlNet [3] and ours. More results are provided at https://servuskk.github.io/ColorDiff-Image/. 4. EXPERIMENT 4.1. Implementation We train the denoising U-Net and the luminance-aware decoder separately. First, we train the denoising U-Net on the ImageNet [24] training set at a resolution of 512 \u00d7 512. We initialize the U-Net using the pre-trained weights of [18]. The learning rate is fixed at 5e\u22125. We use the classifier-free guidance [25] strategy and set the conditioning dropout probability to 0.05. The model is updated for 20K iterations with a batch size of 16. Then we train the luminance-aware decoder on the same dataset and at the same resolution. The VAE is initialized using the pre-trained weights of [18]. We fix the learning rate at 1e\u22124 for 22,500 steps with a batch size of 1. We set the parameter \u03bb_p in Eq. (7) to 0.1. 
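To illustrate how the segmentation guidance of Eqs. (3)-(5) and the threshold Tth can be wired together at inference time, here is a rough sketch. The callable `cdm` (one reverse step of the Colorization Diffusion Model), the mask/label inputs and all names are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def segment_aware_step(cdm, z_t, t, z_c, text_emb, masks, labels, guidance_ratio=0.3, T=50):
    """Sketch of Eqs. (3)-(5): blend per-instance denoising results into the global latent.

    cdm(z_t, t, z_c, cond) -> z_{t-1} is assumed to be one reverse step of the CDM;
    masks are binary instance masks in image space, labels are their text embeddings.
    """
    # global step conditioned on the full caption/category embedding, Eq. (3)
    z_prev = cdm(z_t, t, z_c, text_emb)
    # segmentation guidance is only applied for time steps t > T_th, with T_th = T * (1 - i)
    if t <= T * (1.0 - guidance_ratio):
        return z_prev
    z_hat = z_prev
    for mask, label_emb in zip(masks, labels):
        # resize the binary mask to the latent resolution, as in Eq. (4)
        m = F.interpolate(mask[None, None].float(), size=z_t.shape[-2:], mode="nearest")
        z_i = cdm(z_t, t, z_c * m, label_emb)      # instance-conditioned step, Eq. (4)
        z_hat = z_hat * (1.0 - m) + z_i * m        # Eq. (5), applied instance by instance
    return z_hat
```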
Our tests are conducted on the COCO-Stuff [26] val set containing 5,000 images of complex scenes. At inference, we adopt the DDIM sampler [27] and set the number of inference time steps to T = 50. We conduct all experiments on a single Nvidia GeForce RTX 3090 GPU. 4.2. Comparisons We compare with 6 state-of-the-art automatic colorization methods covering 3 types: 1) GAN-based methods: InstColor [13], ChromaGAN [5], BigColor [1]; 2) transformer-based methods: ColTran [6], CT2 [2]; 3) diffusion-based method: ControlNet [3]. Qualitative Comparison. We show visual comparison results in Figure 3. The images in the first and second rows indicate the ability of the models to synthesise vivid colors. Both GAN-based and transformer-based algorithms suffer from unsaturated colors. Although ControlNet synthesises saturated colors, the marked areas contain significant artifacts. Images in the third and fourth rows demonstrate the ability of the models to synthesise semantically reasonable colors. InstColor, ChromaGAN, BigColor, CT2 and ControlNet fail to maintain the color continuity of the same object (discontinuity of colors between the head and tail of the train, and between the hands and shoulders of the girl), while ColTran yields colors that defy common sense (blue shadows and blue hands). In summary, our method provides vivid and semantically reasonable colorization results. User Study. To reflect human preferences, we randomly select 15 images from the COCO-Stuff val set for a user study. For each image, the 7 results and the ground truth are displayed to the user in a random order. We asked 18 participants to choose their top three favorites. Figure 4 shows the proportion of Top 1 votes selected by users. Our method has a vote rate of 22.59%, which significantly outperforms the other methods. Fig. 4. User evaluations. Quantitative Comparison. We use Fr\u00e9chet Inception Distance (FID) and colorfulness [28] to evaluate image quality and vividness. These two metrics have recently been used to evaluate colorization algorithms [1, 29]. Considering that colorization is an ill-posed problem, the ground-truth dependent metric PSNR used in previous works does not accurately reflect the quality of image and color generation [6, 29, 30], and the comparison here is for reference. Table 1. Quantitative comparison results (Method: FID\u2193/ Colorful\u2191/ PSNR\u2191). InstColor [13]: 14.40 / 27.00 / 23.85; ChromaGAN [5]: 27.46 / 27.06 / 23.20; BigColor [1]: 10.24 / 39.65 / 20.86; ColTran [6]: 15.06 / 34.31 / 22.02; CT2 [2]: 25.87 / 39.64 / 22.80; ControlNet [3]: 10.86 / 45.09 / 19.95; Ours: 9.799 / 41.54 / 21.02. As shown in Table 1, our proposed method demonstrates superior performance in terms of FID when compared to the state-of-the-art algorithms. Even though ControlNet outperforms our algorithm in the colorfulness metric, the results shown in the qualitative comparison indicate that the artifacts are meaningless and negatively affect the visual effect of the image. 4.3. Ablation Studies The significance of the main components of the proposed method is discussed in this section. The quantitative and visual comparisons are presented in Table 2 and Figure 5. Table 2. Quantitative comparison of ablation studies (FID\u2193/ Colorful\u2191): (a) luminance-aware decoder only: 10.05 / 33.73; (b) high-level guidance only: 9.917 / 42.55; Ours (both components): 9.799 / 41.54. Fig. 5. Visual comparison from ablation studies: (a) high-level guidance (w/o semantic vs. ours); (b) luminance-aware decoder (w/o luminance vs. ours). 
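Before the ablation discussion, note that the colorfulness score reported in Tables 1 and 2 is presumably the Hasler and Süsstrunk measure commonly used in colorization work (cited as [28]; the reference list is not included in this record). A small NumPy sketch under that assumption:

```python
import numpy as np

def colorfulness(rgb: np.ndarray) -> float:
    """Hasler-Suesstrunk colorfulness of an H x W x 3 RGB image with values in [0, 255]."""
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_root + 0.3 * mean_root
```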
High-level Semantic Guidance. We discuss the impact of high-level semantic guidance on model performance. The visuals shown in Figure 5(a) demonstrate that our high-level guidance improves the saturation of synthesised colors and mitigates failures caused by semantic confusion. The quantitative scores in Table 2 confirm the significant improvement in both color vividness and perceptual quality introduced by the high-level semantic guidance. Luminance-aware Decoder. The pipeline equipped with a luminance-aware decoder facilitates the generation of cognitively plausible colors. As shown in the first row of Figure 5(b), the artifacts are suppressed. Furthermore, the incorporation of this decoder yields a positive impact on the retrieval of image details, as demonstrated by the successful reconstruction of textual elements in the second row of Figure 5(b). Consequently, the full model outperforms the alternative in terms of FID. A slight decrease in the colorfulness score is observed after incorporating luminance awareness, which can be attributed to the suppression of outliers, as discussed in the analysis of ControlNet in Section 4.2. 5. CONCLUSION In this study, we introduce a novel automatic colorization pipeline that harmoniously combines color diversity with fidelity. It generates plausible and saturated colors by leveraging powerful diffusion priors with the proposed luminance and high-level semantic guidance. Besides, we design a luminance-aware decoder to restore image details and improve color plausibility. Experiments demonstrate that the proposed pipeline outperforms previous methods in terms of perceptual realism and attains the highest human preference compared to other algorithms. 6. ACKNOWLEDGEMENT This work was supported by the National Key R&D Project of China (2019YFB1802701) and the MoE-China Mobile Research Fund Project (MCM20180702), the Fundamental Research Funds for the Central Universities; in part by the 111 project under Grant B07022 and Sheitc No. 150633; and in part by the Shanghai Key Laboratory of Digital Media Processing and Transmissions.", "additional_graph_info": { "graph": [ [ "Eiji Kawasaki", "Markus Holzmann" ] ], "node_feat": { "Eiji Kawasaki": [ { "url": "http://arxiv.org/abs/2210.09141v1", "title": "Data Subsampling for Bayesian Neural Networks", "abstract": "Markov Chain Monte Carlo (MCMC) algorithms do not scale well for large\ndatasets leading to difficulties in Neural Network posterior sampling. In this\npaper, we apply a generalization of the Metropolis Hastings algorithm that\nallows us to restrict the evaluation of the likelihood to small mini-batches in\na Bayesian inference context. Since it requires the computation of a so-called\n\"noise penalty\" determined by the variance of the training loss function over\nthe mini-batches, we refer to this data subsampling strategy as Penalty\nBayesian Neural Networks - PBNNs. Its implementation on top of MCMC is\nstraightforward, as the variance of the loss function merely reduces the\nacceptance probability. Comparing to other samplers, we empirically show that\nPBNN achieves good predictive performance for a given mini-batch size. Varying\nthe size of the mini-batches enables a natural calibration of the predictive\ndistribution and provides an inbuilt protection against overfitting. 
We expect\nPBNN to be particularly suited for cases when data sets are distributed across\nmultiple decentralized devices as typical in federated learning.", "authors": "Eiji Kawasaki, Markus Holzmann", "published": "2022-10-17", "updated": "2022-10-17", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG" ], "main_content": "In the following we consider a vector \u03b8 that describes the parameters (weights and biases) of a Neural Network. We define p(\u03b8) as a prior distribution over this set of parameters. Commonly used priors are Gaussian prior and Laplace prior that correspond respectively to an L2 and a L1 regularization of the vector \u03b8. We refer as p(y|x, \u03b8) the probability of a data item y given a data item x and parameter \u03b8. As an example, we aim at sampling the posterior of a Neural Network designed for a supervised task. The posterior distribution over the parameters given a set of data can be written as p(\u03b8|D) \u221dp(\u03b8) \ufffdN i=1 p(yi|xi, \u03b8) where D = {(yi, xi)}N i=1. Up to a constant, the log of the posterior can be written as a loss LD(\u03b8) = \u2212log p(\u03b8) \u2212 N \ufffd i=1 nds \ufffd i=1 log p(yi|xi, \u03b8) (1) \ufffd where the last term corresponds to the Negative LogLikelihood (NLL). This is an illustrative choice that does not reduce the generality of PBNN as we could have also considered an unsupervised setup where D = {(xi)}N i=1 and LD(\u03b8) = \u2212log p(\u03b8) \u2212\ufffdN i=1 log p(xi|\u03b8). Note that in the following D indicates precisely the ensemble of datum used for the loss computation L \u2212 \u2212\ufffd | Note that in the following D indicates precisely the ensemble of datum (yi, xi) used for the loss computation LD(\u03b8) in the equation 1. In particular, this data set can be a sub sample (mini-batch) of the larger data set containing all known data points of the training set. 3 RELATED WORK We introduce in this section some relevant literature that studied how to take into account a noisy gradient estimate of \u2207\u03b8LD(\u03b8) computed from a subset of the data. The link between noisy gradient and BNN posterior sampling is detailed in section 4.2. Stochastic Gradient Langevin Dynamics Max Welling and Yee Whye Teh (2011) [9] showed that the iterates \u03b8t will converge to samples from the true posterior distribution as they anneal the stepsize by adding the right amount of noise to a standard stochastic gradient optimization algorithm. This is known as the Stochastic Gradient Langevin Dynamics (SGLD) where the parameter update is given by \u03b8t+1 = \u03b8t \u2212\u03b7t\u2207\u03b8 \ufffd LD(\u03b8t) + \ufffd \ufffd D(\u03b8t) = \u2212log p(\u03b8) \u2212N n n \ufffd i=1 log 2\u03b7t\u03f5t \ufffd \ufffd LD(\u03b8t) = \u2212log p(\u03b8) \u2212N n n n \ufffd i=1 \ufffd n \ufffd i=1 log p(yi|xi, \u03b8) (2) where \u03b7t \u2208R+ is a learning rate and \u03f5t is a centered normally distributed random vector. No rejection step is required for a vanishing step size. The positive whole number n corresponds to the size of the subsampled mini-batch. Chen (2014) [11] later extended this idea to HMC sampler. Noisy Posterior Sampling Bias Due to a potentially high variance of the stochastic gradients \u2207\u03b8 \ufffd LD(\u03b8), Brosse (2018) [12] showed that the SGLD algorithm has an invariant probability measure which in general significantly departs from the target posterior for any non vanishing stepsize \u03b7. 
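As an illustration of Eq. (2), one SGLD update on a mini-batch can be sketched as follows; the gradient callables and array shapes are assumptions, not code from the paper.

```python
import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik, step_size, n_total):
    """One SGLD update, Eq. (2): gradient step on the subsampled loss plus injected noise."""
    x_batch, y_batch = minibatch
    n = len(x_batch)
    # unbiased estimate of grad L_D: prior term plus the rescaled mini-batch likelihood term
    grad = -grad_log_prior(theta) - (n_total / n) * sum(
        grad_log_lik(theta, x, y) for x, y in zip(x_batch, y_batch)
    )
    noise = np.random.normal(size=theta.shape)
    return theta - step_size * grad + np.sqrt(2.0 * step_size) * noise
```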
Furthermore, a recent work from Garriga-Alonso (2020) [13] suggests that recent versions of SGLD implementing an additional Metropolis-Hastings rejection step do not improve this issue, because the resulting acceptance probability is likely to vanish too. Failures of Data Set Splitting Inference Other works have exploited parallel computing to scale Bayesian inference to large datasets by using a two-step approach. First, a MCMC computation is run in parallel on K (sub)posteriors defined on data partitions following p(\u03b8|D) \u221d\ufffdK i=1 p(\u03b8)1/Kp(Di|\u03b8). Then, a server combines local results. While efficient, this framework is very sensitive to the quality of subposterior sampling as showned by de Souza (2022) [14]. 4 PENALTY BAYESIAN NEURAL NETWORK 4.1 Unbiased Posterior Sampling We suppose that the data set points (yi, xi) are sampled from an unknown distribution p(y, x) that can provide an infinite number of independent and identically distributed (i.i.d) random data points. This allows us to properly define a true unbiased loss L(\u03b8) as the mean over all possible data sets L(\u03b8) \u2243LD(\u03b8) with L(\u03b8) = \u2212log p(\u03b8)\u2212NE(yi,xi)\u223cp(y,x)[log p(yi|xi, \u03b8)] (3) given a fixed size N of a random data set D. In order to sample from this log-posterior, we note that detailed balance is a sufficient but not necessary condition to ensure that a Markov process possesses a stationary distribution proportional to e\u2212L(\u03b8). Concretely, the detailed balance can be written as A(\u03b8, \u03b8\u2032)q(\u03b8|\u03b8\u2032)e\u2212\u2206(\u03b8\u2032,\u03b8) = A(\u03b8\u2032, \u03b8)q(\u03b8\u2032|\u03b8) (4) where A(\u03b8\u2032, \u03b8) corresponds to the acceptance of the move from \u03b8 to \u03b8\u2032 and q(\u03b8\u2032|\u03b8) is a proposal distribution. The loss difference writes \u2206(\u03b8\u2032, \u03b8) = L(\u03b8\u2032) \u2212L(\u03b8) (5) Eiji Kawasaki, Markus Holzmann In the following, we assume that the true loss difference \u2206(\u03b8\u2032, \u03b8) is unknown, and loss differences can only be estimated based on random data sets D. Then we can introduce a random variable \u03b4(\u03b8\u2032, \u03b8) providing an unbiased estimator of \u2206(\u03b8\u2032, \u03b8) which we assume as normally distributed \u03b4(\u03b8\u2032, \u03b8) \u223cN(\u2206(\u03b8\u2032, \u03b8), \u03c32(\u03b8\u2032, \u03b8)) (6) The variance \u03c32(\u03b8\u2032, \u03b8) typically decreases with the size N of the random data sets D. This noisy loss \u03b4(\u03b8\u2032, \u03b8) introduces a bias in the posterior sampling if not correctly taken into account. In the context of statistical physics and computational chemistry, Ceperley and Dewing (1999) [4] have generalized the Metropolis-Hastings random walk algorithm to the situation where the loss is noisy and can only be estimated. 
They showed that it is possible to still sample the exact distribution even with very strong noise by modifying the acceptance probability and applying a noise penalty \u2212\u03c32(\u03b8\u2032, \u03b8)/2 to the loss difference in the acceptance ratio A such that A(\u03b4, \u03b8\u2032, \u03b8) = min \u0010 1, e\u2212\u03b4(\u03b8\u2032,\u03b8)\u2212\u03c32(\u03b8\u2032,\u03b8)/2\u0011 (7) One can then show that detailed balance is satis\ufb01ed on average Z d\u03b4A(\u03b4, \u03b8, \u03b8\u2032)q(\u03b8|\u03b8\u2032)N(\u03b4; \u2206(\u03b8\u2032, \u03b8), \u03c32(\u03b8\u2032, \u03b8))e\u2212\u03b4 = Z d\u03b4A(\u03b4, \u03b8\u2032, \u03b8)q(\u03b8\u2032|\u03b8)N(\u03b4; \u2206(\u03b8, \u03b8\u2032), \u03c32(\u03b8, \u03b8\u2032)) (8) which is suf\ufb01cient condition for the Markov chain to sample the unbiased distribution in the stationary regime. The penalty method can further be extended to a non symmetric proposal distribution q(\u03b8\u2032|\u03b8) used in algorithm 1. Algorithm 1 PBNN Metropolis Adjusted Algorithm \u03b8t \u2190\u03b80 for t \u21900 to T do \u03b8\u2032 \u223cq(\u03b8\u2032|\u03b8t) A(\u03b4, \u03b8\u2032, \u03b8t) \u2190min \u0010 1, q(\u03b8t|\u03b8\u2032) q(\u03b8\u2032|\u03b8t)e\u2212\u03b4(\u03b8\u2032,\u03b8t)\u2212\u03c32(\u03b8\u2032,\u03b8t)/2\u0011 u \u223cU(0, 1) if u \u2264A(\u03b4, \u03b8\u2032, \u03b8t) then \u03b8t+1 \u2190\u03b8\u2032 else \u03b8t+1 \u2190\u03b8t end if t \u2190t + 1 end for From equation 7 one can immediately recognize the drawback of PBNN leading to an exponential suppression of the acceptance since the variance \u03c32(\u03b8\u2032, \u03b8) is always non negative. Note further, that in the case of BNN posterior sampling, \u03c32(\u03b8\u2032, \u03b8) is in general not known either, and can only be estimated, too. Whereas it is possible to extend the scheme to account for noisy variances [4], we will not pursue this here. Let us stress that the penalty term serves to exactly account for the uncertainty in calculating the loss L(\u03b8) of equation 3 for a \ufb01nite number of random data. However, it does not address the actual uncertainty introduced by setting L(\u03b8) \u2248 LD(\u03b8), e.g. equation 3 and equation 1. 4.2 Langevin Dynamic Penalty Choosing a non symmetric proposal distribution q(\u03b8\u2032|\u03b8) can speed up the mixing of the Markov Chain and help PBNN scale to larger systems by maximizing the acceptance A(\u03b8\u2032, \u03b8). We \ufb01rst consider the situation where the proposal distribution depends only on the two states \u03b8 and \u03b8\u2032 and not on any given mini-batch D. In the absence of noise, the Metropolis-Hastings acceptance writes A(\u03b8\u2032, \u03b8) = min \u0012 1, q(\u03b8|\u03b8\u2032) q(\u03b8\u2032|\u03b8)e\u2212\u2206(\u03b8\u2032,\u03b8) \u0013 \u2248min \u0012 1, q(\u03b8|\u03b8\u2032) q(\u03b8\u2032|\u03b8)e\u2212(\u03b8\u2212\u03b8\u2032)\u00b7(\u2207\u03b8L(\u03b8\u2032)+\u2207\u03b8L(\u03b8))/2 \u0013 (9) where we have Taylor expanded the loss L(\u03b8) around \u03b8\u2032 assuming a suf\ufb01ciently small step from \u03b8 to \u03b8\u2032. 
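A minimal sketch of Algorithm 1 with a symmetric Gaussian random-walk proposal, combined with mini-batch estimates of the loss difference and of its variance (these estimators are made precise later, in Eqs. (16)-(17)); `loss_fn(theta, batch)` evaluating L_D(θ) on a single mini-batch is an assumed callable.

```python
import numpy as np

def loss_difference_stats(theta_new, theta_old, minibatches, loss_fn):
    """Mean loss difference over M mini-batches and an estimate of the variance of that mean."""
    diffs = np.array([loss_fn(theta_new, b) - loss_fn(theta_old, b) for b in minibatches])
    delta = diffs.mean()
    chi2 = diffs.var(ddof=1) / len(diffs)   # unbiased estimate of Var[delta]
    return delta, chi2

def pbnn_random_walk_step(theta, minibatches, loss_fn, proposal_std=1e-2, rng=np.random):
    """One step of Algorithm 1 with a symmetric Gaussian proposal.

    The noise penalty chi2 / 2 is subtracted from the log acceptance ratio, as in Eq. (7).
    """
    theta_prop = theta + proposal_std * rng.normal(size=theta.shape)
    delta, chi2 = loss_difference_stats(theta_prop, theta, minibatches, loss_fn)
    log_accept = -delta - 0.5 * chi2
    if np.log(rng.uniform()) <= min(0.0, log_accept):
        return theta_prop   # accept the proposed parameters
    return theta            # reject and keep the current state
```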
The maximization of A(\u03b8\u2032, \u03b8) leads to a Langevin equation where the gradient of the loss introduces a drift in the Gaussian proposal distribution q(\u03b8\u2032|\u03b8) = N(\u03b8\u2032; \u03b8 \u2212\u03b7\u2207\u03b8L(\u03b8), 2\u03b7) (10) Sampling a new state \u03b8\u2032 from the proposal distribution q(\u03b8\u2032|\u03b8) corresponds exactly to drawing a centered reduced normal random variable \u03f5, and computing \u03b8\u2032 = \u03b8 \u2212\u03b7\u2207\u03b8L(\u03b8) + p 2\u03b7\u03f5 (11) The non-trivial term in the Metropolis-Hastings acceptance then writes log \u0012q(\u03b8|\u03b8\u2032) q(\u03b8\u2032|\u03b8)e\u2212\u2206(\u03b8\u2032,\u03b8) \u0013 = \u22121 4\u03b7 \r \r \r\u03b7 (\u2207\u03b8L(\u03b8\u2032) + \u2207\u03b8L(\u03b8)) \u2212 p 2\u03b7\u03f5 \r \r \r 2 + 1 4\u03b7 \r \r \r p 2\u03b7\u03f5 \r \r \r 2 \u2212L(\u03b8\u2032) + L(\u03b8) (12) In order to use a noisy gradient \u2207\u03b8L(\u03b8) \u2243\u2207\u03b8 f LD(\u03b8) as an approximation of the Gaussian mean\u2019s drift, SGLD [9] requires a vanishing learning rate to dominate the noise and maximize the acceptance. On the other hand, one could design an optimized proposal distribution q(\u03b8|\u03b8\u2032) and set a non zero step size while computing the full MetropolisHastings acceptance A(\u03b8\u2032, \u03b8) = min \u0012 1, q(\u03b8t|\u03b8\u2032) q(\u03b8\u2032|\u03b8t)e\u2212\u03b4(\u03b8\u2032,\u03b8t)\u2212\u03c32(\u03b8\u2032,\u03b8t)/2 \u0013 (13) Data Subsampling for Bayesian Neural Networks as shown in algorithm 1. The PBNN\u2019s noise penalty explicitly targets the bias introduced by a noisy loss. In fact, in the equation 13, the corresponding Monte Carlo loss minimization (i.e. introducing a zero temperature limit) corresponds to \ufb01nding the set of parameters \u03b8 that minimizes both the noisy regularized loss and its associated uncertainty. For large size models, biased samplers like the Unrestricted Langevin Algorithm (ULA) are known to be very effective as they skip the rejection step i.e set A(\u03b8\u2032, \u03b8) = 1 for a suf\ufb01ciently small step size \u03b7 resulting in an unrestricted Langevin sampling as \u03b8t+1 = \u03b8t \u2212\u03b7\u2207\u03b8L(\u03b8t) + p 2\u03b7\u03f5t (14) In order to model a noisy estimate of the loss, it is tempting to replace the drift \u03b7\u2207\u03b8L(\u03b8t) with an unbiased estimator such that \u03b7\u2207\u03b8L(\u03b8) = \u03b7\u2207\u03b8 f LD(\u03b8) + \u03b7\u03c3(\u03b8) (15) where f LD(\u03b8) is de\ufb01ned in the equation 2. For a vanishing step size \u03b7 \u21920, one may then expect that the additional noise term \u03b7\u03c3(\u03b8) gets negligible compared to the random noise of order \u03b71/2 in the equation 14. However, the uncertainty of the loss gradient \u03c3(\u03b8) does in general not result in white noise, but is correlated between different parameters \u03b8. For non-vanishing \u03b7 the noisy loss gradient can thus trigger a signi\ufb01cant departure from the target posterior, see also [12]. As we will see in the numerical experiments section, PBNN\u2019s ability to evaluate the likelihood over small minibatches even in the presence of a strong noise allows us to calibrate the Bayesian predictive distribution. This is especially convenient in comparison with usual BNNs as the regularization is handled solely by the prior distribution p(\u03b8). 
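For completeness, the drifted Gaussian proposal of Eq. (10) and the proposal log-density entering the acceptance ratio of Eqs. (12)-(13) can be sketched as follows; `grad_loss` is an assumed callable (in practice it would itself be a noisy mini-batch estimate), and normalization constants that cancel in the ratio are omitted.

```python
import numpy as np

def langevin_proposal(theta, grad_loss, eta, rng=np.random):
    """Draw theta' from q(theta'|theta) = N(theta - eta * grad L(theta), 2 * eta), Eq. (10)."""
    return theta - eta * grad_loss(theta) + np.sqrt(2.0 * eta) * rng.normal(size=theta.shape)

def log_q(theta_to, theta_from, grad_loss, eta):
    """Unnormalized log q(theta_to | theta_from) for the drifted Gaussian proposal."""
    mean = theta_from - eta * grad_loss(theta_from)
    return -np.sum((theta_to - mean) ** 2) / (4.0 * eta)

# The proposal ratio log q(theta|theta') - log q(theta'|theta) is then combined with the
# penalized loss difference -delta - chi2 / 2 in the acceptance probability of Eq. (13).
```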
Commonly used uninformative priors can lead to poor performances as they do not target explicitly over\ufb01tting but rather the complexity of the model i.e. L2 and L1 penalties. As a reminder, other conventional methods such as early stopping are not compatible with the Bayesian approach developed for BNN. 4.3 Noise Penalty Estimation As showed in equation 7 in the case of a symmetric proposal distribution q(\u03b8\u2032|\u03b8), the energy difference \u03b4(\u03b8\u2032, \u03b8t) has to dominate the always positive variance \u03c32(\u03b8\u2032, \u03b8t)/2 in order to obtain a reasonable acceptance A(\u03b4, \u03b8\u2032, \u03b8t). However, in practice a noise penalty usually strongly dominates any gain in energy from \u03b8t to \u03b8\u2032 if the energy difference is computed only on a single small mini-batch D. This leads to an exponentially suppressed acceptance and long correlation times of the MCMC. To prevent this situation, we de\ufb01ne \u03b4(\u03b8\u2032, \u03b8) as an empirical average over the loss difference \u03b4(\u03b8\u2032, \u03b8) = 1 M M X j=1 \u0000LDj(\u03b8\u2032) \u2212LDj(\u03b8) \u0001 (16) where Dj corresponds to randomly chosen mini-batches. By de\ufb01nition the average is an unbiased estimator such that E[\u03b4(\u03b8\u2032, \u03b8)] = \u2206(\u03b8\u2032, \u03b8). We notice that the central limit theorem ensures that \u03b4(\u03b8\u2032, \u03b8) is normally distributed in the limit of large M as required by equation 6. The variance of the random variable \u03b4(\u03b8\u2032, \u03b8) strictly decreases with the number of mini-batches M since \u03c32(\u03b8\u2032, \u03b8) = \u03c32 D(\u03b8\u2032, \u03b8)/M where \u03c32 D corresponds to the expected variance of a single loss difference computed over a mini-batch D. Both \u03c32 D and \u03c32 are unknown, but we can compute an estimate of \u03c32(\u03b8\u2032, \u03b8) \u2243\u03c72(\u03b8\u2032, \u03b8) using an unbiased chi-squared estimator \u03c72(\u03b8\u2032, \u03b8) = 1 M(M \u22121) M X j=1 \u0000LDj(\u03b8\u2032) \u2212LDj(\u03b8) \u2212\u03b4(\u03b8\u2032, \u03b8) \u00012 (17) In the following, we do not take into account the error over the estimation of the variance \u03c32(\u03b8\u2032, \u03b8) which corresponds to the hypothesis that variations of \u03c72 as a function of \u03b8\u2032 and \u03b8 largely dominate over the noise. Leading order corrections in this noise are discussed in [4]. A second approximation can be introduced in case of a limited access to the data: drawing M mini-batches with replacement arti\ufb01cially decreases the variance at the cost of introducing a bias (i.e. violates the i.i.d. hypothesis of the energy differences). 5 EXPERIMENTS 5.1 Data Set We study the performance of PBNN on a synthetic data set that contains the positions of a double pendulum over a simulated time t. These positions are obtained by integrating Euler-Lagrange equations casted as ordinary differential equations. We turn a time series forecasting problem into a supervised regression task. We model the distribution p(y|x) where y \u2208R4 corresponds to the four Cartesian coordinates of the two masses of the double pendulum and x \u2208R4\u22175 are 5 given y past positions of the masses. The data set D = {(yi, xi)}N i=1 inputs xi write xt \u2261(yt\u221220, ..., yt\u221224) (18) where t is the discrete simulation time. 
The system is strongly chaotic such that learning on a given limited data Eiji Kawasaki, Markus Holzmann set can lead to strong predictive overcon\ufb01dence as suggested in \ufb01gure 1. We thus expect the noise penalty to play an important role on the realization of a supervised task based on this data set. The code that generates the double pendulum dataset is available following this link: https://gitlab.com/ eijikawasaki/double-pendulum Figure 1: Pendulum data set extracts from a single simulation run. The blue curve corresponds to one of the Cartesian coordinates of one of the masses in function of time. As an example, the behavior on the bottom (part of the test data set) is hard to predict knowing only the data from the top (part of the training data). 5.2 Benchmark setup The goal of this section is to compare the performances between PBNN and other BNN that do not include any noise penalty. We show empirically that the PBNN obtains good predictive performances even while evaluating the loss on small mini-batches and therefore introducing a strong noise. On the other hand, as expected PBNN has a much lower MCMC acceptance rate. We also compare the PBNN to SGLD which is designed to take into account a stochastic noise in the loss computation. In the following, we sample the posterior based on MCMC random walkers where q(\u03b8\u2032|\u03b8t) is a symmetric proposal density. We use a Gaussian distribution centered around \u03b8t and adjust its variance to ensure the ergodicity of the Markov process. We model the data distribution with a single multivariate Gaussian likelihood p(y|x, \u03b8) = N(y; \u00b5\u03b8(x), \u03a32 \u03b8(x)) where \u00b5\u03b8(x) \u2208R4 and \u03a32 \u03b8(x) is a positive diagonal covariance matrix parameterized by \u03c32 \u03b8,d(x) where d \u2208{1, 2, 3, 4}. This model is thus heteroscedastic: we use a Mixture Density Network [15] such that both \u00b5\u03b8(x) and \u03a32 \u03b8(x) are outputs of a neural network that takes x as an input. We observe empirically that a homoscedastic model based on a Mean Squared Error loss has a noise penalty that is several orders of magnitude smaller than its heteroscedastic counterpart. The loss of a NN is known to have numerous local minima and we don\u2019t use in the experiment any transition kernel for PBNN designed to jump between separate minima. We thus limit our study on the impact of the penalty method on a relatively small model that is easier to sample. The dimension of the vector parameter \u03b8 is equal to 419 as we consider a NN with 2 hidden layers each containing 10 neurons. The data consists in 9975 data points sequentially (i.e. not randomly) split into a 2992 points as a training data and 6983 points as a test data set. We use a Gaussian uninformative prior p(\u03b8) \u221de\u2212\u03bb\u2225\u03b8\u22252 2 corresponding to a tiny L2 regularization of the NN parameters of magnitude \u03bb = 10\u22125. During the posterior sampling, only the training data is used in order to compute the loss LD(\u03b8) and thus both \u03b4(\u03b8\u2032, \u03b8) and \u03c72(\u03b8\u2032, \u03b8). The performance of the prediction (based on the inferred parameters \u03b8 sampled from the posterior) is measure by the the average Negative-Log-Likelihood (NLL) that we de\ufb01ne as NLLD = \u22121 L L X i=1 log 1 J J X j=1 p(yi|xi, \u03b8(j)) ! (19) where the data set D of size L corresponds either to the train or the test sets. \u03b8(j) are J i.i.d. samples of the MDN parameters obtained from the Markov chain. 
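A short sketch of how the marginalized metric of Eq. (19) can be evaluated from posterior samples; `log_likelihood(theta, x, y)` returning log p(y|x, θ) for the Mixture Density Network is an assumed callable.

```python
import numpy as np
from scipy.special import logsumexp

def average_nll(thetas, xs, ys, log_likelihood):
    """Eq. (19): NLL_D = -(1/L) * sum_i log( (1/J) * sum_j p(y_i | x_i, theta_j) )."""
    nll = 0.0
    for x, y in zip(xs, ys):
        log_p = np.array([log_likelihood(theta, x, y) for theta in thetas])
        # log of the posterior-averaged predictive density, computed stably in log space
        nll -= logsumexp(log_p) - np.log(len(thetas))
    return nll / len(xs)
```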
Note that this measure should not depend on \u03b8 since the MDN prediction is marginalized over the posterior parameters distribution. The mini-batch size N in equation 1 determines the target posterior distribution that is sampled and therefore changes the value of NLLD. For a constant uninformative prior, decreasing N corresponds to increasing the variance of the predictive function. This is a straightforward consequence of Bayes theorem using an uninformative Gaussian prior with a huge variance. In order to compare the performances of the prediction of a BNN and a reference standard \u201dvanilla\u201d BNN that does not use mini-batches, we compute one-sigma con\ufb01dence intervals as drawn in the \ufb01gure 2. As an example, we compare them to a Gaussian predictive distribution and target an expected coverage of approximately 68.2%. To check the accuracy of the UQ method, we compute the Average Coverage Error (ACE) de\ufb01ned in the equation 20. ACE = \f \f \f \f \f68.2% \u22121 4L L X i=1 4 X d=1 \u03c1i,d \f \f \f \f \f with \u03c1i,d = ( 1 if yi,d \u2208[\u00b5\u2217 d(xi) \u2212\u03c3\u2217 d(xi), \u00b5\u2217 d(xi) + \u03c3\u2217 d(xi)] 0 otherwise (20) where \u00b5\u2217 d and \u03c3\u2217 d are the mean and the standard deviation in one of the four dimension d of the empirical predictive distribution as de\ufb01ned in equation 21. 5.3 Numerical Results From table 1 we empirically show that PBNN achieves a better overall performance than other biased sub sampled models for a small mini-batch size. As an illustration, \ufb01gure 2 shows the error bars of the prediction over a random period of time for each model. The empirical predictive distribution is de\ufb01ned as E[p(yt|xt, \u03b8)] \u22431 J J X j=1 p(yt|xt, \u03b8(j)) (21) where t is the simulated time and \u03b8(j) are J samples obtained by the MCMC computation. The expected value in the equation 21 is computed over the targeted posteriors which are different for every model studied in the benchmark. It is important to note here that even for a same likelihood weight Data Subsampling for Bayesian Neural Networks Table 1: Performance benchmark. The top and bottom groups of models correspond to the predictive performance based on a likelihood weight corresponding respectively to N = 2992 and N = 60. Model Test NLLD Train NLLD Test ACE Vanilla BNN \u22124.11 \u00b1 0.01 \u22125.36 \u00b1 0.01 7.1% \u00b1 0.3% Tempered BNN \u22121.96 \u00b1 0.08 \u22122.05 \u00b1 0.10 3.9% \u00b1 1.1% Batched BNN \u22121.68 \u00b1 0.17 \u22121.74 \u00b1 0.19 1.7% \u00b1 1.6% pseudo-SGLD \u22122.35 \u00b1 0.10 \u22122.48 \u00b1 0.11 4.8% \u00b1 0.8% PBNN -3.91\u00b10.07 -4.83\u00b10.09 0.4% \u00b1 2.3% Figure 2: Models predictions over a test data set example extract. The blue line corresponds to one of the Cartesian coordinate of one of the two masses. Mean models predictions are in red and one standard deviation regions are plotted in grey. The horizontal axis corresponds to the the simulated time t. The four \ufb01gures from top to bottom correspond respectively to the models: Tempered BNN, Batched BNN, pseudo-SGLD and PBNN. N in the Bayes theorem, we do not expect the same prediction between models that use the whole train set, like \u201dvanilla\u201d BNNs and the PBNN as they do not target the same posterior sampling. 
The vanilla BNN samples a posterior proportional to e\u2212LD(\u03b8) whereas PBNN aims at sampling a posterior proportional to e\u2212L(\u03b8) where L(\u03b8) is the loss expected for a given size N of mini-batch D as de\ufb01ned in the equation 3. Vanilla BNN We call this \ufb01rst model \u201dvanilla\u201d as it corresponds to a standard MCMC random walk based BNN with no mini batches. Vanilla BNN\u2019s acceptance writes A(\u03b8\u2032, \u03b8t) = min(1, e\u2212LD(\u03b8\u2032)+LD(\u03b8t))) where D contains all the 2992 available train data points. This model thus uses the whole train set to compute the loss difference for each new proposed state \u03b8\u2032. Note that Langevin algorithms such as MALA or HMC all sample the same posterior. The only difference with this model is in their ef\ufb01ciency as they are designed to maximize the MCMC acceptance Eiji Kawasaki, Markus Holzmann and the ergodicity of the Markov Chain. The ACE is approximately zero on the training set and signi\ufb01cantly different from zero on the test data set. In the limit of a single mini-batch containing all the training data set, the PBNN coincides with this model. In the following we aim at calibrating the PBNN predictive prediction while maintaining good predictive performances. Tempered BNN In order to provide a comparison between the usual \u201dvanilla\u201d BNN and a PBNN, we adjust the likelihood weight following \u2212log p(\u03b8) \u2212 N 2992 2992 X i=1 log p(yi|xi, \u03b8) (22) Indeed, we balance the loss to be equal to PBNN\u2019s likelihood that uses mini-batches with N = 60. This adjusted weight over the likelihood corresponds to the Safe Bayes approach where we vary the weight of the likelihood thanks to a temperature T following p(D|\u03b8)1/T as discussed by Wilson (2020) [16]. In our case T > 1 is known to help under model misspeci\ufb01cation as it is the case in the double pendulum Gaussian prediction example. The resulting temperature T = 2992/60 is however such a high temperature that the prior regularization dominates the likelihood for this model. The resulting predictive performance is unsatisfactory as shown in the table 1. Batched BNN The acceptance for this model is de\ufb01ned as A(\u03b4, \u03b8\u2032, \u03b8t) = min(1, e\u2212\u03b4(\u03b8\u2032,\u03b8t)) with \u03b4(\u03b8\u2032, \u03b8t) computed with M = 100 and N = 60. The number of mini-batches used by this model is the same as the number used by the PBNN for a fair comparison. The only difference with the PBNN model is the noise penalty e\u2212\u03c72(\u03b8\u2032,\u03b8t)/2 in the acceptance. Table 1 therefore demonstrates the impact of the penalty method, strongly improving the overall performance of the model. pseudo-SGLD The SGLD algorithm is designed to naturally take into account noise from sub-sampled data. Standard SGLD with a weight N = 2992, i.e. the number of training samples, in equation 2 leads to results that are similar to the Vanilla BNN both in terms of negative loglikelihood and coverage. In this benchmark we want to test the ability of the algorithm to handle a noisy loss. We therefore set N = n = 60 with a constant learning rate \u03b7 = 10\u22125 and call the resulting model a pseudo-SGLD. Comparing to PBNN, we observe in both table 1 and \ufb01gure 2 that the noise is too high for the SGLD in this setup. The performance of SGLD could probably be improved by decaying the step size \u03b7 polynomially as suggested in the literature [9]. 
As a reminder, this model has the great advantage of not requiring a rejection step. PBNN The noise penalty is estimated following equations 16 and 17 with M = 100 and N = 60. The random walk acceptance for PBNN writes A(\u03b4, \u03b8\u2032, \u03b8t) = min(1, e^{\u2212\u03b4(\u03b8\u2032,\u03b8t)\u2212\u03c72(\u03b8\u2032,\u03b8t)/2}). We have adjusted the mini-batch size N = 60 by hand in order to calibrate the models to optimize the test data set coverage. There is a noticeable gap in PBNN\u2019s performance between the train and the test data sets that is not intuitively described in the theory of PBNN. This overfitting is probably caused partially by the access to a limited amount of data. Indeed, during the Monte Carlo sampling, all mini-batches D are part of the same training data and not i.i.d. sampled from a probability distribution p(D). In an ideal situation where we could evaluate both \u03b4 and \u03c72 from i.i.d. samples D \u223cp(D) as required by equations 16 and 17, we expect a lower overfitting effect. 5.4 Mini-Batch Size N For the purpose of the benchmark we have calibrated the PBNN error bars in table 1 by tuning the mini-batch size N. One can wonder what the optimal value of N is in general, outside the scope of calibration and for a given data set size. Figure 3 shows that, as expected, the prediction performance measured as the negative loglikelihood NLLD over the test data set increases with N. On the other hand, the acceptance rapidly drops because the number of mini-batches M in equation 17 decreases for a constant data set size. In figure 3 we indeed notice a linear decrease of the log-acceptance due to the decrease of the number of available mini-batches M. The optimal value of N is therefore a trade-off between reasonable acceptance and good predictive performance. Figure 3: PBNN performance measured by NLLD over the test data set, acceptance in base log10, and one-sigma coverage as a function of the mini-batch size N for a constant prior. The standard deviation continuously decreases as a function of the batch size. We notice however that the coverage is not monotonous: it is determined by both the error bar size and the loglikelihood. It is important to note in figure 3 that different batch sizes result in different coverages for a constant uninformative prior. As we have shown, the PBNN predictive distribution can be calibrated. In practical setups, as discussed by Hermans (2021) [17], it is recommended to compare the expected coverage probability of the predictive distribution defined in equation 21 to the empirical coverage probability as shown in equation 20. 6 CONCLUSION Uncertainty quantification for the predictions of large size neural networks remains an open issue. In this work, we have shown a new way to enable data sub-sampling for Bayesian Neural Networks independent from gradient-based approximations such as Stochastic Gradient Langevin Dynamics. First, we have demonstrated that a raw estimation of the likelihood based on a noisy loss introduces a bias in the posterior sampling if not taken into account. We then have shown that a generalization of the Metropolis Hastings algorithm allows us to eliminate the bias and to exactly sample the posterior even with very strong noise. 
This necessitates an additional \u201dnoise penalty\u201d that corresponds to the variance of the noisy loss difference and exponentially suppresses the MCMC acceptance probability. In practice, the noise penalty corresponds to replacing a single large data set by multiple smaller sub sampled mini batches associated with an uncertainty over their losses. We have shown how to interpret this term as a regularization. Varying the size of the mini-batches enables a natural calibration that we have compared to other techniques such as tempered Safe Bayes approaches. Based on this calibration principle, we have provided a benchmark that empirically showed good predictive performances of PBNNs. We hope that combining data sub-sampling with other Monte Carlo acceleration techniques such as HMC could allow to compute uncertainties for model sizes not reachable until now. Lastly, PBNN could be particularly suited in the case when the data sets D are distributed across multiple decentralized devices as in the typical federated learning setup. Indeed, the noise penalty is determined by the variance of the losses computed on each individual data set. In principle, PBNN should enable the possibility to compute uncertainty with separate data sets without exchanging them. 7 Acknowledgments This work has been supported by the French government under the \u201dFrance 2030\u201d program, as part of the SystemX Technological Research Institute. Authors thank Victor Berger for useful comments and discussions.", "introduction": "The development of an effective Uncertainty Quanti\ufb01ca- tion (UQ) method that computes the predictive distribu- tion by marginalizing over Deep Neural Network (DNN) parameter sets remains an important, challenging task [1]. Bayesian methods provide the posterior distribution which can be obtained either from Variational inference or via Monte Carlo sampling techniques, Markov chain Monte Carlo (MCMC) generally considered as the gold standard of Bayesian inference [2]. However, sampling a DNN pa- rameter space by a Markov chain does not scale well for large systems and data sets, as it requires the evaluation of the log-likelihood over the whole data set at each iteration step. This obstacles the use of Bayesian Neural Networks (BNNs) posterior sampling in practice. Up to now, MCMC based BNNs only manage to handle limited sizes of data sets. Therefore, uninformative priors are commonly used to prevent over\ufb01tting, which on the other hand strongly harm the predictive performance. A current \ufb01eld of research is dedicated to developing BNN speci\ufb01c priors as reviewed by Fortuin (2021) [3]. In this article, we develop a data sub-sampling strategy for BNN posterior sampling explicitly reducing predictive overcon\ufb01dence. This leads us to a variant of MCMC, based on subsampled batch data, which we refer to as Penalty Bayesian Neural Network - PBNN. The so-called \u201dpenalty method\u201d [4] was \ufb01rst developed in the context of statisti- cal and computational physics e.g. in the work of Pierleoni et al (2004) [5] to ef\ufb01ciently sample distributions with loss functions affected by statistical noise. Naive BNN posterior sampling based on sub sampled data introduces a strong bias. This issue has already been care- fully studied in the context of Bayesian inference where several MCMC sub-sampling methodologies have been proposed for scaling up the Metropolis-Hastings algorithm [6, 7, 8]. 
We show, both theoretically and empirically, that PBNN enables an unbiased posterior sampling by explic- itly computing the variance of the loss of a subsampled data batch. In the following we \ufb01rst introduce some related works, especially the Stochastic Gradient Langevin Dynamic (SGLD) algorithm [9] that confronts the effect of a noise in the loss computation. We then introduce PBNN as an unbiased posterior sampling strategy and show its bene\ufb01ts focusing on its ability to mitigate the predictive overcon- \ufb01dence effects. We then continue by giving some prac- tical details on how to evaluate both, the noisy loss, and its uncertainty. We show that PBNN is compatible with state of the art MCMC proposal distribution for the Markov chain such as Langevin dynamics and Hybrid Monte Carlo (HMC) [10]. A benchmark is provided on a data set de- signed to trigger over\ufb01tting. Based on this example, PBNN arXiv:2210.09141v1 [stat.ML] 17 Oct 2022 Data Subsampling for Bayesian Neural Networks obtains good predictive performance. It includes a natural calibration parameter in the form of the size of the mini- batch. Lastly we study the impact of this parameter on the acceptance and on the overall performance of the model." } ], "Markus Holzmann": [ { "url": "http://arxiv.org/abs/1701.05107v1", "title": "Spectral analysis of photonic crystals made of thin rods", "abstract": "In this paper we address the question how to design photonic crystals that\nhave photonic band gaps around a finite number of given frequencies. In such\nmaterials electromagnetic waves with these frequencies can not propagate; this\nmakes them interesting for a large number of applications. We focus on crystals\nmade of periodically ordered thin rods with high contrast dielectric\nproperties. We show that the material parameters can be chosen in such a way\nthat transverse magnetic modes with given frequencies can not propagate in the\ncrystal. At the same time, for any frequency belonging to a predefined range\nthere exists a transverse electric mode that can propagate in the medium. These\nresults are related to the spectral properties of a weighted Laplacian and of\nan elliptic operator of divergence type both acting in $L^2(\\mathbb{R}^2)$. The\nproofs rely on perturbation theory of linear operators, Floquet-Bloch analysis,\nand properties of Schroedinger operators with point interactions.", "authors": "Markus Holzmann, Vladimir Lotoreichik", "published": "2017-01-18", "updated": "2017-01-18", "primary_cat": "math.SP", "cats": [ "math.SP", "math-ph", "math.AP", "math.MP" ], "main_content": "In this preliminary section we fix some notations that are associated to lattices of points. Furthermore, we introduce Schr\u00f6dinger operators with point interactions supported on a lattice and discuss their spectral properties. These preparations will be useful in the spectral analysis of \u0398r. Let two linearly independent vectors a1, a2 \u2208R2 be given and let the lattice \u039b and the period cell \ufffd \u0393 be defined by (1.2). Next, we introduce the associated dual lattice \u0393 by \u0393 := \ufffd nb + nb \u2208R2 : n, n \u2208Z \ufffd , (2.1) \ufffd \u0393 := \ufffd n1b1 + n2b2 \u2208R2 : n1, n2 \u2208Z \ufffd , (2.1) a ambl = 2\u03c0\u03b4ml for m, l = 1, 2. The Brillouin zone \ufffd \u039b \u2282R2 corresponding \ufffd \ufffd where b1, b2 \u2208R2 are defined via ambl = 2\u03c0\u03b4ml for m, l = 1, 2. 
The Brillouin zone Λ̂ ⊂ R² corresponding to the lattice Λ is defined by

Λ̂ := { s_1 b_1 + s_2 b_2 ∈ R² : s_1, s_2 ∈ [−1/2, 1/2) }.   (2.2)

In what follows we are going to discuss Hamiltonians with point interactions supported on Λ following the lines of [AGHH, Sec. III.4]. Let −∆ be the self-adjoint free Laplacian in L²(R²) with the domain dom(−∆) = H²(R²). Its resolvent is denoted by R_0(ν) := (−∆ − ν)^{−1}. For ν ∈ ρ(−∆) = C \ [0, ∞) the integral kernel G_ν of R_0(ν) is given by

G_ν(x − y) = (i/4) H^{(1)}_0(√ν |x − y|),   (2.3)

where Im √ν > 0 and H^{(1)}_0 is the Hankel function of the first kind and order zero; cf. [AS, Chap. 9] for details on Hankel functions. Next, we set

G̃_ν(x) := G_ν(x) for x ≠ 0,  and  G̃_ν(0) := 0.   (2.4)

For α ∈ R and m, l ∈ Z² we define

q^{ml}_{α,Λ}(ν) := ( α − (1/(2π)) ( γ − ln(√ν/(2i)) ) ) δ_{ml} − G̃_ν(y_m − y_l),

where γ = 0.5772... is the Euler-Mascheroni constant and y_p = p_1 a_1 + p_2 a_2 for p = (p_1, p_2) ∈ Z². Eventually, we introduce for ν ∈ C \ R the matrix

Q_{α,Λ}(ν) := ( q^{ml}_{α,Λ}(ν) )_{m,l ∈ Z²},   (2.5)

which induces a closed operator in ℓ²(Z²) that admits a bounded and everywhere defined inverse, if Im √ν is sufficiently large; cf. [AGHH, Thm. III.4.1]. We denote this operator again by Q_{α,Λ}(ν), as no confusion will arise. The matrix elements of the inverse Q_{α,Λ}(ν)^{−1} in ℓ²(Z²) are denoted by r^{ml}_{α,Λ}(ν).

Definition 2.1. The Schrödinger operator −∆_{α,Λ} with point interactions supported on Λ with coupling constant α ∈ R is defined as the self-adjoint operator in L²(R²) with the resolvent

R_{α,Λ}(ν) := (−∆_{α,Λ} − ν)^{−1} = R_0(ν) + Σ_{m,l ∈ Z²} r^{ml}_{α,Λ}(ν) ( · , G_ν(· − y_l) )_{L²(R²)} G_ν(· − y_m),   (2.6)

where ν ∈ C \ R and y_p = p_1 a_1 + p_2 a_2 for p = (p_1, p_2) ∈ Z².

Next, we are going to investigate the spectrum of −∆_{α,Λ}. For this purpose, we introduce for α ∈ R the numbers E_j = E_j(α, Λ), j ∈ {0, 1, 2}, as follows: E_0 is the smallest zero of the function¹

E ↦ g(E, 0) + (1/(2π)) (γ + ln 2) − α,   (2.7)

where g(E, θ) is defined for θ ∈ Λ̂ and E ∉ {|x + θ|² : x ∈ Γ} by

g(E, θ) := (1/(4π²)) lim_{R→∞} [ Σ_{x ∈ Γ: |x+θ| ≤ R} |Λ̂| / (|x + θ|² − E) − 2π ln R ].

Similarly, the number E_1 is given by the smallest zero of the function

E ↦ g(E, θ_0) + (1/(2π)) (γ + ln 2) − α,   (2.8)

where θ_0 := −(b_1 + b_2)/2.
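For a concrete lattice, the zeros in (2.7) and (2.8) can be located numerically. The following sketch is illustrative only: a square lattice of period a is assumed, the limit R → ∞ in the definition of g is replaced by a fixed truncation radius, and the search bracket for the zero is ad hoc.

```python
# Crude numerical sketch: truncated lattice sum g(E, theta) and an estimate of E_0, cf. (2.7).
import numpy as np
from scipy.optimize import brentq

EULER_GAMMA = 0.5772156649015329

def g(E, theta, a=1.0, R=200.0):
    """Truncated version of the regularized lattice sum defining g(E, theta)."""
    b = 2.0 * np.pi / a                  # dual lattice is b * Z^2 for the square lattice
    area_BZ = b ** 2                     # |Brillouin zone|
    n_max = int(R / b) + 2
    n = np.arange(-n_max, n_max + 1) * b
    X, Y = np.meshgrid(n, n)
    px, py = X + theta[0], Y + theta[1]
    r2 = px ** 2 + py ** 2
    mask = np.sqrt(r2) <= R
    s = np.sum(area_BZ / (r2[mask] - E))
    return (s - 2.0 * np.pi * np.log(R)) / (4.0 * np.pi ** 2)

alpha = 0.0                              # hypothetical coupling constant
f = lambda E: g(E, (0.0, 0.0)) + (EULER_GAMMA + np.log(2.0)) / (2.0 * np.pi) - alpha
E0_approx = brentq(f, -50.0, -1e-6)      # ad-hoc bracket on the negative half-line
print("approximate E0:", E0_approx)
```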
Eventually, let b_− ∈ {b_1, b_2} be a vector satisfying |b_−| = min{|b_1|, |b_2|}. Then, we set E_2 := min{Ẽ, |b_−|²/4}, where Ẽ is the smallest positive solution of equation (2.7)². All the numbers E_0, E_1, and E_2 are well defined; cf. [AGHH, Sec. III.4]. In the next proposition we summarize some fundamental spectral properties of −∆_{α,Λ} that can be found e.g. in [AGHH, Thm. III.4.7].

Proposition 2.2. Let α ∈ R and Λ be as in (1.2). Let the Schrödinger operator −∆_{α,Λ} be as in Definition 2.1 and let θ_0, g(·, ·) and E_j = E_j(α, Λ), j = 0, 1, 2, be as above. Then the following claims hold.
(i) σ(−∆_{α,Λ}) = [E_0, E_1] ∪ [E_2, ∞).
(ii) E_0 < 0 and E_2 > 0 for all α ∈ R.
(iii) E_1 < 0 if and only if α < g(0, θ_0) + (1/(2π))(γ + ln 2).³
(iv) There exists an α_1 = α_1(Λ) ∈ R such that E_2 ≤ E_1 for any α ≥ α_1. In particular, σ(−∆_{α,Λ}) = [E_0, +∞) holds for all α ≥ α_1.

By Proposition 2.2 the operator −∆_{α,Λ} has a gap in its spectrum, if the interaction strength is chosen in a proper way. In the rest of this section, we are going to investigate this gap in more detail. In particular, we will show that for a given compact interval [a, b] ⊂ R there exist a lattice Λ and an interaction strength α such that [a, b] is contained in the spectral gap of −∆_{α,Λ}. To this aim we introduce for k > 0 the unitary scaling operator

U_k : L²(R²) → L²(R²),  (U_k f)(x) := k^{−1} f(k^{−1} x).

Its inverse U_k^{−1} : L²(R²) → L²(R²) clearly acts as (U_k^{−1} f)(x) = k f(kx). In the next proposition we show that this rescaling yields, up to multiplication with a constant, a unitary equivalence between point interaction operators with suitably modified geometries of lattices and strengths of interactions.

¹ Eq. (2.7) differs from the condition in [AGHH, Eq. (4.42) in Sec. III.4], as the term (1/(2π)) ln 2 was forgotten there (it disappeared in the convergence analysis in [AGHH, Eq. (4.29) in Sec. III.4]).
² Note that Ẽ is equal to E^{α,Λ}_{b_−}(0) in the notation of [AGHH, Sec. III.4]. The fact that E^{α,Λ}_{b_−}(0) is the smallest positive solution of equation (2.7) can be shown in the same way as in the proof of [AGHH, Thm. III.1.4.4].
³ This condition differs from Eq. (4.51) in [AGHH, Thm. III.4.7]: the term (1/(2π))(γ + ln 2) was forgotten there, but it must be present; cf. [AGHH, Eq. (4.29) and (4.42) in Sec. III.4].

Proposition 2.3. Let α ∈ R and Λ be as in (1.2). For k > 0 set Λ_k := k^{−1}Λ and α_k := α − (ln k)/(2π). Let the Schrödinger operators −∆_{α,Λ} and −∆_{α_k,Λ_k} be as in Definition 2.1. Then it holds

U_k^{−1} (−∆_{α,Λ}) U_k = k^{−2} (−∆_{α_k,Λ_k}).

Proof. Let ν ∈ C \ R. We show U_k^{−1} R_{α,Λ}(ν) U_k = k² R_{α_k,Λ_k}(k²ν), which then yields the claim. By (2.6) it holds

U_k^{−1} R_{α,Λ}(ν) U_k = U_k^{−1} R_0(ν) U_k + Σ_{m,l ∈ Z²} r^{ml}_{α,Λ}(ν) ( · , U_k^{−1} G_ν(· − y_l) )_{L²(R²)} U_k^{−1} G_ν(· − y_m).
Since U \u22121 k (\u2212\u2206\u2212\u03bd)Uk = k\u22122(\u2212\u2206\u2212k2\u03bd), we get U \u22121 k R0(\u03bd)Uk = k2R0(k2\u03bd). (2.9) Using the de\ufb01nition of U \u22121 k we obtain for any y \u2208\u039b the relation U \u22121 k G\u03bd(\u00b7 \u2212y) = kGk2\u03bd(\u00b7 \u2212k\u22121y) (2.10) almost everywhere in R2. This implies \u0000\u00b7 , U \u22121 k G\u03bd(\u00b7 \u2212y) \u0001 L2(R2) = \u0000\u00b7 , kGk2\u03bd(\u00b7 \u2212k\u22121y) \u0001 L2(R2). Eventually, a straightforward calculation yields qml \u03b1,\u039b(\u03bd) = \u0012 \u03b1 \u22121 2\u03c0 \u0012 \u03b3 \u2212ln \u221a\u03bd 2i \u0013\u0013 \u03b4ml \u2212e G\u03bd(ym \u2212yl) = \u0012 \u03b1k \u22121 2\u03c0 \u0012 \u03b3 \u2212ln k\u221a\u03bd 2i \u0013\u0013 \u03b4ml \u2212e Gk2\u03bd(k\u22121(ym \u2212yl)) = qml \u03b1k,\u039bk(k2\u03bd). (2.11) Hence, the identity rml \u03b1,\u039b(\u03bd) = rml \u03b1k,\u039bk(k2\u03bd) follows. Finally, employing (2.9), (2.10) and (2.11) we get U \u22121 k R\u03b1,\u039b(\u03bd)Uk = U \u22121 k R0(\u03bd)Uk + X m,l\u2208Z2 rml \u03b1,\u039b(\u03bd) \u0000\u00b7 , U \u22121 k G\u03bd(\u00b7 \u2212yl) \u0001 L2(R2)U \u22121 k G\u03bd(\u00b7 \u2212ym) = k2R0(k2\u03bd) + k2 X m,l\u2208Z2 rml \u03b1k,\u039bk(k2\u03bd) \u0000\u00b7 , Gk2\u03bd(\u00b7 \u2212k\u22121yl) \u0001 L2(R2)Gk2\u03bd(\u00b7 \u2212k\u22121ym) = k2R\u03b1k,\u039bk(k2\u03bd). The following useful statement follows immediately from Propositions 2.2 and 2.3. Proposition 2.4. Let a, b \u2208R with a < b be given. Then there exists a lattice \u039b and a coupling \u03b1 \u2208R such that the interval [a, b] belongs to a gap of the spectrum of the Schr\u00f6dinger operator \u2212\u2206\u03b1,\u039b in De\ufb01nition 2.1, i.e. [a, b] \u2282\u03c1(\u2212\u2206\u03b1,\u039b). Proof. According to Proposition 2.2 one can \ufb01nd a lattice \u039b0 = \b n1a1 + n2a2 \u2208R2 : n1, n2 \u2208Z \t and a coupling constant \u03b10 \u2208R such that 0 / \u2208\u03c3(\u2212\u2206\u03b10,\u039b0) = [E0, E1] \u222a[E2, \u221e). Furthermore, by Proposition 2.3 it holds for any k > 0 \u03c3(\u2212\u2206\u03b1k,\u039bk) = k2\u03c3(\u2212\u2206\u03b10,\u039b0) = [k2E0, k2E1] \u222a[k2E2, \u221e), where \u03b1k = \u03b10 \u2212 1 2\u03c0 ln k and \u039bk := k\u22121\u039b0. It remains to choose the parameter k > 0 so large that k2E1 < a < b < k2E2. Then the lattice \u039b = \u039bk and the coupling coef\ufb01cient \u03b1 = \u03b1k ful\ufb01ll all the requirements. 8 Finally, we de\ufb01ne Schr\u00f6dinger operators with point interactions supported on a shifted lattice. For this purpose we introduce for y \u2208R2 the unitary translation operator Ty : L2(R2) \u2192L2(R2) by (Tyf)(x) := f(x \u2212y). Then \u2212\u2206\u03b1,y+\u039b := T \u22121 y (\u2212\u2206\u03b1,\u039b)Ty (2.12) is the Schr\u00f6dinger operator with point interactions supported on y + \u039b. Since Ty is a unitary operator, we have \u03c3(\u2212\u2206\u03b1,y+\u039b) = \u03c3(\u2212\u2206\u03b1,\u039b). 3 Spectral analysis of the operator \u0398r This section is devoted to the proof of Theorem 1.1 on the operator \u0398r de\ufb01ned in (1.6a). Since the spectrum of \u0398r is still dif\ufb01cult to investigate, we consider instead the spectral problem for the auxiliary family of Schr\u00f6dinger operators Hr,\u03bb in (1.7). Since wr, w\u22121 r \u2208L\u221e(R2; R), the operator Hr,\u03bb is wellde\ufb01ned and self-adjoint in L2(R2) and it holds that \u03bb \u2208\u03c3(\u0398r) if and only if \u03bb \u2208\u03c3(Hr,\u03bb) for all \u03bb \u22650. 
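The scaling argument used in the proof of Proposition 2.4 above translates into simple arithmetic once approximate band data for one reference pair (α_0, Λ_0) are available. A hedged numerical sketch follows; all values below are hypothetical placeholders, not quantities computed in this paper.

```python
# Illustrative sketch of Proposition 2.4: if sigma(-Delta_{alpha0,Lambda0}) = [E0,E1] U [E2,inf)
# with E1 < 0 < E2, then the rescaled operator has spectrum [k^2*E0, k^2*E1] U [k^2*E2, inf),
# so a target interval [a, b] in (0, inf) lies in the gap as soon as k^2 * E2 > b.
import numpy as np

E0, E1, E2 = -3.0, -0.5, 1.2       # hypothetical band data for (alpha0, Lambda0)
alpha0 = 0.1                       # hypothetical coupling
a, b = 4.0, 9.0                    # target interval to be placed inside the gap

k = np.sqrt(b / E2) * 1.1          # any k with k^2 * E2 > b works here, since E1 < 0 < a
alpha_k = alpha0 - np.log(k) / (2.0 * np.pi)   # coupling for the scaled lattice Lambda_k = Lambda0 / k

gap = (k**2 * E1, k**2 * E2)
assert gap[0] < a < b < gap[1]
print(f"scale k = {k:.3f}, coupling alpha_k = {alpha_k:.3f}, gap = {gap}")
```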
Let the numbers 0 < \u03bb1 < \u03bb2 < \u00b7 \u00b7 \u00b7 < \u03bbN be given. First, we prove that Hr,\u03bbn, n = 1, . . . , N, converges in the norm resolvent sense to a Schr\u00f6dinger operator with point interactions supported on y(n)+\u039b. In view of the spectral properties of these Hamiltonians with point interactions (summarized in Section 2), it turns out that there exists a lattice \u039b and constants c1, . . . , cN (that appear in the de\ufb01nition of wr) such that \u03bbn belongs to a gap of \u03c3(Hr,\u03bbn). Finally, employing a perturbation argument, we deduce the claim of Theorem 1.1. The following theorem treats the convergence of Hr,\u03bb to a Schr\u00f6dinger operator with point interactions. Since the proof of this statement is rather long and technical, it is postponed to Appendix A. Theorem 3.1. Let Hr,\u03bb, \u03bb \u22650, and \u2212\u2206\u03b1,y+\u039b, \u03b1 \u2208R, y \u2208R2, be de\ufb01ned as in (1.7) and in (2.12), respectively, and let \u03bd \u2208C \\ R. Then the following claims hold. (i) There exists a constant \u03ba = \u03ba(\u039b, \u03bb1, . . . , \u03bbN, \u2126, \u03bd) > 0 such that for any n \u2208{1, . . . , N} and all suf\ufb01ciently small r > 0 \r \r(Hr,\u03bbn \u2212\u03bd)\u22121 \u2212 \u0000\u2212\u2206\u03b1n,y(n)+\u039b \u2212\u03bd \u0001\u22121\r \r \u2264\u03ba| ln r|\u22121, where the coef\ufb01cient \u03b1n is given by \u03b1n = \u2212cn \u03bbn|\u2126| 4\u03c02 + C 2\u03c0|\u2126|2 with C = Z \u2126 Z \u2126 ln |x \u2212z|dxdz. (3.1) (ii) For \u03bb / \u2208{\u03bb1, . . . , \u03bbN} there exists a constant \u03ba\u2032 = \u03ba\u2032(\u039b, \u03bb, \u2126, \u03bd) > 0 such that for all suf\ufb01ciently small r > 0 \r \r(Hr,\u03bb \u2212\u03bd)\u22121 \u2212(\u2212\u2206\u2212\u03bd)\u22121\r \r \u2264\u03ba\u2032| ln r|\u22121. Remark 3.2. The assumption \u03bbn \u0338= \u03bbm for n \u0338= m is motivated by our application, but it is only technical. If we drop this assumption, then one can still prove convergence of Hr,\u03bbn to a Schr\u00f6dinger operator with point interactions supported on a more complicated lattice with (in general) non-constant interaction strength; cf. [BHL14]. However, in this case the spectral analysis of the limit operator presents a rather dif\ufb01cult problem. For special interesting geometries there are results available in the literature [L16]. Combining the statements of Theorem 3.1 and of Proposition 2.4 with the perturbation result [W, Satz 9.24 b)], we obtain the following claim on the spectrum of Hr,\u03bbn. Proposition 3.3. Let 0 < \u03bb1 < \u03bb2 < \u00b7 \u00b7 \u00b7 < \u03bbN, let a > 0 be \ufb01xed and de\ufb01ne \u03b7 := 2\u03c0 |\u2126| + 1. Let the operator Hr,\u03bbn be as in (1.7). Then there exist a lattice \u039b and constants c1, . . . , cN (that appear in (1.5)) such that \u0000\u03bbn \u2212\u03b7 \u2212a, \u03bbn + \u03b7 + a \u0001 \u2282\u03c1(Hr,\u03bbn) for all suf\ufb01ciently small r > 0 and all n \u2208{1, . . . , N}. 9 Proof. Let In := (\u03bbn \u2212\u03b7 \u2212a, \u03bbn + \u03b7 + a) and Jn := (\u03bbn \u2212\u03b7 \u22122a, \u03bbn + \u03b7 + 2a). Then, by Proposition 2.4 there exists a lattice \u039b and a coupling constant \u03b1 \u2208R such that (\u03bb1 \u2212\u03b7 \u22122a, \u03bbN + \u03b7 + 2a) \u2282\u03c1(\u2212\u2206\u03b1,\u039b). This implies, in particular, that for any n \u2208{1, . . 
., N} In \u2282Jn \u2282\u03c1(\u2212\u2206\u03b1,y(n)+\u039b) = \u03c1(\u2212\u2206\u03b1,\u039b), where the last equation holds due to translational invariance. Next, choose the constants cn in (1.5) as cn = 4\u03c02 \u03bbn|\u2126| \u0012 C 2\u03c0|\u2126|2 \u2212\u03b1 \u0013 , (3.2) where C is given as in (3.1). Theorem 3.1 (i) implies that Hr,\u03bbn converges in the norm resolvent sense to \u2212\u2206\u03b1,y(n)+\u039b. Finally, let E := E(In) and Er := Er(In) be the spectral projections corresponding to the interval In and the operators \u2212\u2206\u03b1,y(n)+\u039b and Hr,\u03bbn, respectively. Since Hr,\u03bbn converges in the norm resolvent sense to \u2212\u2206\u03b1,y(n)+\u039b, it follows from [W, Satz 9.24 b)] that \u2225E \u2212Er\u2225< 1 for all suf\ufb01ciently small r > 0. Hence, employing [W, Satz 2.58 a)] we conclude dim ran Er = dim ran E = 0 for all suf\ufb01ciently small r > 0. This implies In \u2282\u03c1(Hr,\u03bbn). Now, we are prepared to prove the main result about the spectrum of \u0398r. Proof of Theorem 1.1. Let a > 0 be given. Set \u03b7 := 2\u03c0|\u2126|\u22121 + 1 and In := (\u03bbn \u2212\u03b7 \u2212a, \u03bbn + \u03b7 + a). Choose a lattice \u039b and the constants c1, . . . , cN (that appear in (1.5)) such that In \u2282\u03c1(Hr,\u03bbn) for all suf\ufb01ciently small r > 0, which is possible by Proposition 3.3. Recall that \u03bb \u2208\u03c1(\u0398r) if and only if \u03bb \u2208\u03c1(Hr,\u03bb); we are going to verify this property for \u03bb belonging to a small neighborhood of \u03bbn. Since In \u2282\u03c1(Hr,\u03bbn), it follows from the spectral theorem that \r \r(Hr,\u03bbn \u2212\u03bd)f \r \r L2(R2) \u2265\u03b7\u2225f\u2225L2(R2) (3.3) for all \u03bd \u2208(\u03bbn \u2212a, \u03bbn + a) and all f \u2208H2(R2). Note that the de\ufb01nition of wr in (1.5) implies \u2225wr \u22121\u2225L\u221e\u2264\u03b7 \u03bb1 1 r2| ln r| for r > 0 small enough. Therefore, it holds for \u03b6 \u2208R with |\u03b6| < \u03bb1r2| ln r| that |\u03b6|\u2225wr \u22121\u2225L\u221e< \u03bb1r2| ln r| \u00b7 \u03b7 \u03bb1 1 r2| ln r| = \u03b7. (3.4) For small enough r > 0 we have |\u03b6| < \u03bb1r2| ln r| < a and the estimate (3.3) implies for f \u2208H2(R2) \r \rHr,\u03bbn+\u03b6f \u2212(\u03bbn + \u03b6)f \r \r L2(R2) = \r \r(\u2212\u2206\u2212(\u03bbn + \u03b6)(wr \u22121) \u2212(\u03bbn + \u03b6))f \r \r L2(R2) \u2265 \r \r(Hr,\u03bbn \u2212(\u03bbn + \u03b6))f \r \r L2(R2) \u2212 \r \r\u03b6(wr \u22121)f \r \r L2(R2) \u2265(\u03b7 \u2212|\u03b6|\u2225wr \u22121\u2225L\u221e) \u2225f\u2225L2(R2). This and (3.4) imply \u03bbn + \u03b6 \u2208\u03c1(Hr,\u03bbn+\u03b6), which yields \u03bbn + \u03b6 \u2208\u03c1(\u0398r). 10 We conclude this section with an explanation how to construct a crystal such that given numbers 0 < \u03bb1 < \u03bb2 \u00b7 \u00b7 \u00b7 < \u03bbN belong to gap(s) of \u03c3(\u0398r). First, for a given lattice \u039b0 we choose \u03b10 \u2208R such that \u03b10 < g(0, \u03b80) + 1 2\u03c0(\u03b3 + ln 2), where \u03b80 = \u22121 2(b1 + b2) and b1 and b2 are the basis vectors of the dual lattice \u03930. By Proposition 2.2 it holds that 0 \u2208\u03c1(\u2212\u2206\u03b10,\u039b0). Finding (or estimating) the smallest zeros of the function E 7\u2192g(E, \u03b8) + 1 2\u03c0(\u03b3 + ln 2) \u2212\u03b10 yields an approximation for the upper and the lower endpoints of the bands of the spectrum of \u2212\u2206\u03b10,\u039b0; cf. Proposition 2.2. 
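The remaining ingredient of this construction, described in the next paragraph, is the choice of the constants c_n via formula (3.2). As a hedged illustration (assuming for concreteness that Ω is the unit disk, estimating the constant C from (3.1) by Monte Carlo, and using placeholder values for α and the λ_n), this choice can be evaluated as follows:

```python
# Sketch of formula (3.2): c_n = 4*pi^2/(lambda_n*|Omega|) * ( C/(2*pi*|Omega|^2) - alpha ),
# with C = int_Omega int_Omega ln|x - z| dx dz estimated by Monte Carlo for the unit disk.
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_disk(n):
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = np.sqrt(rng.uniform(0.0, 1.0, n))
    return np.column_stack([rad * np.cos(phi), rad * np.sin(phi)])

def estimate_C(n=200_000):
    x, z = sample_unit_disk(n), sample_unit_disk(n)
    area = np.pi                       # |Omega| for the unit disk
    return area**2 * np.mean(np.log(np.linalg.norm(x - z, axis=1)))

def coupling_constants(lambdas, alpha, area=np.pi, C=None):
    C = estimate_C() if C is None else C
    return [4.0 * np.pi**2 / (lam * area) * (C / (2.0 * np.pi * area**2) - alpha)
            for lam in lambdas]

print(coupling_constants([1.0, 2.5, 4.0], alpha=-2.0))   # placeholder frequencies and coupling
```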
Next, choose k > 0 as in the proof of Proposition 2.4 such that (\u03bb1 \u2212\u03b7 \u22122a, \u03bbN + \u03b7 + 2a) \u2282\u03c1(\u2212\u2206\u03b1k,\u039bk), where \u03b7 = 2\u03c0|\u2126|\u22121 + 1, \u039bk := k\u22121\u039b0, \u03b1k = \u03b10 \u2212ln k 2\u03c0 , and a is a small positive constant. Finally, we de\ufb01ne the constants cn via the formula (3.2) in the proof of Proposition 3.3 (with \u03b1 replaced by \u03b1k). Then the crystal that is speci\ufb01ed via the lattice \u039bk and wr as in (1.5) satis\ufb01es {\u03bb1, . . . , \u03bbN} \u2282\u03c1(\u0398r) for all suf\ufb01ciently small r > 0. 4 Spectral analysis of the operator \u03a5r In this section we prove that there are no gaps in the spectrum of the operator \u03a5r = \u2212div (w\u22121 r gradf) in bounded subsets of [0, \u221e), if r > 0 is suf\ufb01ciently small. The methods employed in this section are completely different from the methods in Section 3, partly because the aim is to prove the statement of an opposite type. Using the Floquet-Bloch theory for differential operators with periodic coef\ufb01cients we will see that \u03c3(\u03a5r) consists of bands and that the \u2018lowest bands\u2019 overlap for small r > 0. The proof of this result is inspired by ideas coming from [FK96a] and makes additionally use of a result in [RT75] on the convergence of eigenvalues of the Laplace operator on domains with small holes. First, we set up some notations. For a \ufb01xed r \u22650 we de\ufb01ne the sesquilinear form hr[f, g] := \u0000w\u22121 r \u2207f, \u2207g \u0001 L2(R2;C2), dom hr := H1(R2), with wr given by (1.5) for r > 0 and wr \u22611 for r = 0. It is clear that hr is well-de\ufb01ned and symmetric. Moreover, by the de\ufb01nition of wr, there exists for any suf\ufb01ciently small \ufb01xed r \u22650 a constant \u03bar \u2208(0, 1] such that 0 \u2264\u03bar\u2225\u2207f\u22252 L2(R2;C2) \u2264hr[f] \u2264\u2225\u2207f\u22252 L2(R2;C2) for all f \u2208H1(R2). This implies that hr is closed. Thus, by the \ufb01rst representation theorem [K, Thm. VI 2.1] there exists a uniquely determined self-adjoint operator associated to the form hr, which is \u03a5r as in (1.6b) for r > 0 and the free Laplacian \u2212\u2206for r = 0. In order to describe the spectrum of \u03a5r, r \u22650, we use that its coef\ufb01cients are periodic with respect to the lattice \u039b given by (1.2). Let the period cell b \u0393 and the Brillouin zone b \u039b associated to \u039b be given by (1.2) and (2.2), respectively, and de\ufb01ne for \u03b8 \u2208b \u039b the subspace H(\u03b8) of L2(b \u0393) as the set of all f \u2208H1(b \u0393) that satisfy the so-called semi-periodic boundary conditions, i.e. H(\u03b8) := n f \u2208H1(b \u0393): f(ta2) = e\u2212i\u03b8a1f(ta2 + a1), f(ta1) = e\u2212i\u03b8a2f(ta1 + a2), t \u2208[0, 1) o . De\ufb01ning now for r \u22650 and \u03b8 \u2208b \u039b the form hr,\u03b8[f, g] := \u0000w\u22121 r \u2207f, \u2207g \u0001 L2(b \u0393;C2), dom hr,\u03b8 := H(\u03b8), we see, similarly as above, that it satis\ufb01es the assumptions of the \ufb01rst representation theorem. Hence, there exists a uniquely determined self-adjoint operator \u03a5r,\u03b8 in L2(b \u0393) associated to hr,\u03b8. 11 It is not dif\ufb01cult to see that for all \u03b8 \u2208b \u039b the operator \u03a5r,\u03b8, r \u22650, has a compact resolvent. Hence, its spectrum is purely discrete and we denote its eigenvalues (counted with multiplicity) by 0 \u2264\u03bbr,1(\u03b8) \u2264\u03bbr,2(\u03b8) \u2264\u03bbr,3(\u03b8) \u2264. . 
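Before stating the band structure of Υ_r, it is instructive to look at the unperturbed case r = 0: the fiber operator is then the free Laplacian with θ-quasi-periodic boundary conditions, whose eigenvalues are given by |x + θ|², x ∈ Γ (a standard Floquet-Bloch fact), so the band functions and their overlap can be tabulated directly. A short illustrative check, assuming a square lattice of period 1 only for this sketch:

```python
# Band functions lambda_{0,n}(theta) = n-th smallest value of |x + theta|^2, x in the dual lattice.
import numpy as np

b = 2.0 * np.pi                      # dual lattice 2*pi*Z^2 for the unit square lattice
n_modes, n_cut = 6, 5
thetas = [(s1 * b, s2 * b) for s1 in np.linspace(-0.5, 0.5, 41)
                            for s2 in np.linspace(-0.5, 0.5, 41)]

def band_values(theta):
    n = np.arange(-n_cut, n_cut + 1) * b
    X, Y = np.meshgrid(n, n)
    vals = np.sort(((X + theta[0])**2 + (Y + theta[1])**2).ravel())
    return vals[:n_modes]

bands = np.array([band_values(th) for th in thetas])      # shape: (number of theta, n_modes)
a_n, b_n = bands.min(axis=0), bands.max(axis=0)
for n in range(n_modes - 1):
    print(f"band {n+1}: [{a_n[n]:.2f}, {b_n[n]:.2f}]  overlaps next band: {b_n[n] > a_n[n+1]}")
```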
. Since w\u22121 r is periodic, we can apply the results from [BHPW11, Sec. 4] (cf. also the footnote on p. 3 of [BHPW11]) and get, combined with [DT82] or [K16, Thm. 5.9], the following characterization for the spectrum of \u03a5r. Proposition 4.1. For any r \u22650 it holds \u03c3(\u03a5r) = \u221e [ n=1 \u0002 ar,n, br,n \u0003 , ar,n := min \u03b8\u2208b \u039b \u03bbr,n(\u03b8), br,n := max \u03b8\u2208b \u039b \u03bbr,n(\u03b8). Moreover, for all n0 \u2208N there exists \u03b2 = \u03b2(n0) \u2208(0, 1) such that b0,n > (1+\u03b2)a0,n+1 holds for n = 1, 2, . . ., n0. Our goal is to show that for suf\ufb01ciently small r > 0 the relation br,n > ar,n+1 is persisted. For this purpose, we need the following auxiliary lemma, which provides a useful estimate for the L2-norm of a function in a \ufb01nite union of disks with radius r in terms of the H1-norm over the whole domain. In what follows it will be convenient to use the notation Bs := Y + Bs(0), s > 0, (4.1) where Y is as in (1.3). Lemma 4.2. Let r > 0 be suf\ufb01ciently small and let Br be as in (4.1). Then, there exists a constant \u03ba > 0 such that \u2225f\u22252 L2(Br) \u2264\u03bar\u2225f\u22252 H1(b \u0393\\Br) + \u03bar2\u2225f\u22252 H1(b \u0393) holds for all f \u2208H1(b \u0393). Proof. Throughout the proof \u03ba > 0 denotes a generic constant. Choose R > 0 so small that B(n) R := BR(y(n)) \u2282b \u0393 for all n \u2208{1, . . . , N} and B(n) R \u2229B(m) R = \u2205 for n \u0338= m. Moreover, assume also that r \u2208(0, R). For \ufb01xed n \u2208{1, . . ., N} we are going to prove the inequality \u2225f\u22252 L2(B(n) r ) \u2264\u03bar\u2225f\u22252 H1(b \u0393\\Br) + \u03bar2\u2225f\u22252 H1(b \u0393). (4.2) By summing up over n the claimed result follows. We denote by cl(b \u0393) the closure of b \u0393. Since C\u221e\u0000cl(b \u0393) \u0001 is dense in H1(b \u0393), it suf\ufb01ces to prove (4.2) for smooth functions. Let f \u2208C\u221e\u0000cl(b \u0393) \u0001 be \ufb01xed. We use for its equivalent in polar coordinates (\u03c1, \u03c6) centered at y(n) the symbol f(\u03c1, \u03c6) := f(y(n) 1 + \u03c1 cos \u03c6, y(n) 2 + \u03c1 sin \u03c6), where y(n) 1 and y(n) 2 denote the coordinates of y(n). Let \u03c1 > 0 be such that \u03c1 < r. Employing the main theorem of calculus we conclude f(\u03c1, \u03c6) = f(r, \u03c6) \u2212 Z r \u03c1 (\u2202tf)(t, \u03c6)dt. This implies \u03c1 Z 2\u03c0 0 |f(\u03c1, \u03c6)|2d\u03c6 \u22642\u03c1 Z 2\u03c0 0 |f(r, \u03c6)|2d\u03c6 + 2\u03c1 Z 2\u03c0 0 \f \f \f \f Z r \u03c1 (\u2202tf)(t, \u03c6)dt \f \f \f \f 2 d\u03c6. (4.3) 12 Similarly as above, one \ufb01nds \u03c1 Z 2\u03c0 0 |f(r, \u03c6)|2d\u03c6 \u22642\u03c1 Z 2\u03c0 0 |f(R, \u03c6)|2d\u03c6 + 2\u03c1 Z 2\u03c0 0 \f \f \f \f \f Z R r (\u2202tf)(t, \u03c6)dt \f \f \f \f \f 2 d\u03c6. (4.4) By the trace theorem [M, Thm. 3.37] applied for the domain b \u0393 \\ BR and using that b \u0393 \\ BR \u2282b \u0393 \\ Br we get Z 2\u03c0 0 |f(R, \u03c6)|2\u03c1d\u03c6 \u2264 Z 2\u03c0 0 |f(R, \u03c6)|2Rd\u03c6 = \u2225f|\u2202B(n) R \u22252 L2(\u2202B(n) R ) \u2264\u03ba\u2225f\u22252 H1(b \u0393\\BR) \u2264\u03ba\u2225f\u22252 H1(b \u0393\\Br). 
(4.5) Using the expression for the gradient in polar coordinates and the Cauchy-Schwarz inequality, we obtain \u03c1 Z 2\u03c0 0 \f \f \f \f \f Z R r (\u2202tf)(t, \u03c6)dt \f \f \f \f \f 2 d\u03c6 \u2264rR Z 2\u03c0 0 Z R r |(\u2202tf)(t, \u03c6)|2 dtd\u03c6 \u2264R Z 2\u03c0 0 Z R r r t \f \f \f(\u2207f)(y(n) 1 + t cos \u03c6, y(n) 2 + t sin \u03c6) \f \f \f 2 tdtd\u03c6 \u2264\u03ba\u2225f\u22252 H1(b \u0393\\Br). (4.6) Equations (4.4), (4.5) and (4.6) imply \u03c1 Z 2\u03c0 0 |f(r, \u03c6)|2d\u03c6 \u2264\u03ba\u2225f\u22252 H1(b \u0393\\Br). (4.7) In a similar way as in (4.6) one shows \u03c1 Z 2\u03c0 0 \f \f \f \f Z r \u03c1 (\u2202tf)(t, \u03c6)dt \f \f \f \f 2 d\u03c6 \u2264r\u03c1 Z 2\u03c0 0 Z r \u03c1 |(\u2202tf)(t, \u03c6)|2 dtd\u03c6 = r Z 2\u03c0 0 Z r \u03c1 \u03c1 t |(\u2202tf)(t, \u03c6)|2 tdtd\u03c6 \u2264r\u2225f\u22252 H1(b \u0393). (4.8) Hence, integrating (4.3) from 0 to r with respect to \u03c1 and using (4.7) and (4.8) we obtain \u2225f\u22252 L2(B(n) r ) = Z r 0 Z 2\u03c0 0 |f(\u03c1, \u03c6)|2\u03c1d\u03c6d\u03c1 \u2264\u03ba Z r 0 \u0010 \u2225f\u22252 H1(b \u0393\\Br) + r\u2225f\u22252 H1(b \u0393) \u0011 d\u03c1 = \u03bar\u2225f\u22252 H1(b \u0393\\Br) + \u03bar2\u2225f\u22252 H1(b \u0393). After these preliminary considerations, we are prepared to show that \u03a5r has no gaps in the spectrum in any \ufb01xed compact subinterval of [0, +\u221e), if r > 0 is suf\ufb01ciently small. Proof of Theorem 1.2. Throughout this proof \u03ba, \u03ba\u2032, \u03ba\u2032\u2032, \u03ba\u2032\u2032\u2032 > 0 denote generic constants. As no confusion can arise, we will use the abbreviation (\u00b7, \u00b7) for both scalar products (\u00b7, \u00b7)L2(b \u0393) and (\u00b7, \u00b7)L2(b \u0393;C2), and the shorthand \u2225\u00b7 \u2225for the respective norms \u2225\u00b7 \u2225L2(b \u0393) and \u2225\u00b7 \u2225L2(b \u0393;C2). Fix L > 0 and let n0 := min{n \u2208N0 : b0,n > L}. Furthermore, we choose \u03b2 \u2208(0, 1) such that b0,n > (1 + \u03b2)a0,n+1 for all n \u2264n0 (note that such a \u03b2 exists by Proposition 4.1). Using the min-max principle [RS-IV, \u00a7XIII.1] we obtain \u03bbr,n(\u03b8) = min V \u2282H(\u03b8) dim V =n max f\u2208V \u2225f\u2225=1 (w\u22121 r \u2207f, \u2207f) \u2264 min V \u2282H(\u03b8) dim V =n max f\u2208V \u2225f\u2225=1 \u2225\u2207f\u22252 = \u03bb0,n(\u03b8), for all \u03b8 \u2208b \u039b. 13 Moreover, let the vectors \u03b8\u00b1 n \u2208b \u039b be such that a0,n = \u03bb0,n(\u03b8\u2212 n ) and b0,n = \u03bb0,n(\u03b8+ n ) for all n \u2208N. We aim to prove that \u03bbr,n+1(\u03b8\u2212 n+1) < \u03bbr,n(\u03b8+ n ) holds for all suf\ufb01ciently small r > 0 and for any n < n0. In addition, we will show that \u03bbr,n0(\u03b8+ n ) > L, which yields then the claim. Fix n \u2208N such that n < n0. Choose an n-dimensional subspace W + = W +(n, r) \u2282H(\u03b8+ n ) such that \u03bbr,n(\u03b8+ n ) = min V \u2282H(\u03b8+ n ) dim V =n max f\u2208V \u2225f\u2225=1 (w\u22121 r \u2207f, \u2207f) = max f\u2208W + \u2225f\u2225=1 (w\u22121 r \u2207f, \u2207f). Let f \u2208W + with \u2225f\u2225= 1 and \ufb01x R > 0 such that \u2126\u2282BR(0). Furthermore, we de\ufb01ne B := BrR as in (4.1); in particular, we have Y + r\u2126\u2282B. Since wr \u22611 on b \u0393 \\ B, we get (w\u22121 r \u2207f, \u2207f \u0001 \u2265\u2225\u2207f\u22252 L2(b \u0393\\B;C2). 
(4.9) Combining the inequalities b0,n0 \u2265\u03bb0,n(\u03b8+ n ) \u2265\u03bbr,n(\u03b8+ n ), the estimate (4.9), and Lemma 4.2 we obtain \u2225f\u22252 L2(b \u0393\\B) = \u2225f\u22252 \u2212\u2225f\u22252 L2(B) \u22651 \u2212\u03bar\u2225f\u22252 H1(b \u0393\\B) \u2212\u03bar2\u2225f\u22252 H1(b \u0393) \u22651 \u2212\u03bar(1 + r) \u2212\u03bar\u2225\u2207f\u22252 L2(b \u0393\\B;C2) \u2212\u03bar2\u2225\u2207f\u22252 \u22651 \u2212\u03ba\u2032r \u2212\u03ba\u2032\u2032\u0000r + | ln r|\u22121\u0001 \u00b7 \u0000w\u22121 r \u2207f, \u2207f \u0001 \u22651 \u2212\u03ba\u2032r \u2212\u03ba\u2032\u2032\u0000r + | ln r|\u22121\u0001 b0,n0 \u22651 \u2212\u03ba\u2032\u2032\u2032| ln r|\u22121 for all suf\ufb01ciently small r > 0. Thus, we conclude (w\u22121 r \u2207f, \u2207f) \u2265\u2225\u2207f\u22252 L2(b \u0393\\B;C2) \u2265 \u2225\u2207f\u22252 L2(b \u0393\\B;C2) \u2225f\u22252 L2(b \u0393\\B) \u00001 \u2212\u03ba| ln r|\u22121\u0001 . Taking now the maximum over all normalized functions f \u2208W + we deduce \u03bbr,n(\u03b8+ n ) = max f\u2208W + \u2225f\u2225=1 (w\u22121 r \u2207f, \u2207f) \u2265 \u00001 \u2212\u03ba| ln r|\u22121\u0001 max f\u2208W + \u2225f\u2225=1 \u2225\u2207f\u22252 L2(b \u0393\\B;C2) \u2225f\u22252 L2(b \u0393\\B) . Note that dim span {f|b \u0393\\B : f \u2208W +} = n holds for all suf\ufb01ciently small r > 0. Indeed, suppose that this is not the case. Then, there exists f \u2208W +, \u2225f\u2225= 1, such that \u2225f\u2225L2(b \u0393\\B) = 0. Thus, in view of supp f \u2282B, Lemma 4.2 implies 1 = \u2225f\u22252 L2(B) \u2264\u03bar2\u2225f\u22252 H1(b \u0393) \u2264\u03bar2 + \u03ba| ln r|\u22121hr,\u03b8+ n [f] \u2264\u03bab0,n0| ln r|\u22121, which is a contradiction. Hence, we obtain \u03bbr,n(\u03b8+ n ) 1 \u2212\u03ba| ln r|\u22121 \u2265max f\u2208W + \u2225f\u2225=1 \u2225\u2207f\u22252 L2(b \u0393\\B;C2) \u2225f\u22252 L2(b \u0393\\B) \u2265 min V \u2282e H(\u03b8+ n ) dim V =n max f\u2208V f\u0338=0 \u2225\u2207f\u22252 L2(b \u0393\\B;C2) \u2225f\u22252 L2(b \u0393\\B) = \u00b5n(\u03b8+ n ), where e H(\u03b8+ n ) := {f|b \u0393\\B : f \u2208H(\u03b8+ n )} \u2282L2(b \u0393 \\ B) and \u00b5n(\u03b8+ n ) is the n-th eigenvalue of the self-adjoint operator in L2(b \u0393 \\ B) associated to the closed, symmetric and densely de\ufb01ned form e H(\u03b8+ n ) \u220bf 7\u2192\u2225\u2207f\u22252 L2(b \u0393\\B;C2). 14 The above form corresponds to the Laplace operator in L2(b \u0393 \\ B) with semi-periodic boundary conditions on \u2202b \u0393 and Neumann boundary conditions on \u2202B. Finally, it is known from [RT75, Sec. 3], that \u00b5n(\u03b8+ n ) converges to \u03bb0,n(\u03b8+ n ) = b0,n, as r \u21920+. Thus, it follows that for suf\ufb01ciently small r > 0 br,n = \u03bbr,n(\u03b8+ n ) \u2265 \u00001 \u2212\u03ba| ln r|\u22121\u0001 \u00b5n(\u03b8+ n ) \u2265(1 + \u03b2)\u22121b0,n > a0,n+1 \u2265\u03bbr,n+1(\u03b8\u2212 n+1) = ar,n+1. (4.10) Therefore, the \ufb01rst n0 bands in \u03c3(\u03a5r) overlap. It follows by a similar argument that br,n0 = \u03bbr,n0(\u03b8+ n ) \u2265 \u00001 \u2212\u03ba| ln r|\u22121\u0001 \u00b5n0(\u03b8+ n ) > L (4.11) for suf\ufb01ciently small r > 0. We deduce from (4.10) and (4.11) the claimed inclusion [0, L] \u2282\u03c3(\u03a5r). A Approximation of Schr\u00f6dinger operators with in\ufb01nitely many point interactions in R2 This appendix is devoted to the proof of Theorem 3.1. Let N \u2208N, \u03bb1, . . . 
, \u03bbN \u2208(0, \u221e), Y , \u039b, wr be as in Subsection 1.3 and let the self-adjoint operator Hr,\u03bb be as in (1.7). Let \u03bb > 0 and let a suf\ufb01ciently small r > 0 be \ufb01xed. First, we derive a resolvent formula for Hr,\u03bb. To this aim, we de\ufb01ne the set \u2126r := [ y\u2208Y +\u039b (y + r\u2126) \u2282R2, and introduce the operators ur : L2(R2) \u2192L2(\u2126r), (urf)(x) := (wr(x) \u22121)f(x), (A.1) and vr : L2(\u2126r) \u2192L2(R2), (vrf)(x) := ( f(x), x \u2208\u2126r, 0, else. (A.2) Note that \u2225ur\u2225= \u2225wr \u22121\u2225L\u221e= maxn\u2208{1,...,N} \u00b5n \u0000| ln r|\u22121\u0001 r\u22122 and \u2225vr\u2225= 1. Moreover, the multiplication operator in L2(R2) associated to (wr \u22121) can be factorized as (wr \u22121) = vrur. Recall that we denote (\u2212\u2206\u2212\u03bd)\u22121, \u03bd \u2208\u03c1(\u2212\u2206) = C \\ [0, \u221e), by R0(\u03bd). With these notations in hands we can derive an auxiliary resolvent formula for Hr,\u03bb. Proposition A.1. Let \u03bb, r > 0 and \u03bd \u2208C \\ R \u2282\u03c1(Hr,\u03bb) be such that |Im \u03bd| > \u03bb\u2225wr \u22121\u2225L\u221e. Then, it holds 1 \u2208\u03c1(\u03bburR0(\u03bd)vr) and \u0000Hr,\u03bb \u2212\u03bd \u0001\u22121 = R0(\u03bd) + \u03bbR0(\u03bd)vr \u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd). (A.3) Proof. Note that \u2225ur\u2225\u00b7 \u2225vr\u2225= \u2225wr \u22121\u2225L\u221e. Thus, by our assumptions on \u03bd and by the spectral theorem we obtain that \u03bb\u2225urR0(\u03bd)vr\u2225< 1. Hence, the operator T (\u03bd) := R0(\u03bd) + \u03bbR0(\u03bd)vr \u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd) is bounded and everywhere de\ufb01ned in L2(R2). Moreover, thanks to \u03bb(wr \u22121) = \u03bbvrur we get for any f \u2208L2(R2) that \u0000Hr,\u03bb \u2212\u03bd \u0001 T (\u03bd)f = \u0000\u2212\u2206\u2212\u03bd \u2212\u03bbvrur \u0001 T (\u03bd)f = f + \u03bbvr \u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd)f \u2212\u03bbvrurR0(\u03bd)f \u2212\u03bbvr \u00001 \u22121 + \u03bburR0(\u03bd)vr \u0001\u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd)f = f + \u03bbvr \u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd)f \u2212\u03bbvrurR0(\u03bd)f \u2212\u03bbvr \u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd)f + \u03bbvrurR0(\u03bd)f = f. 15 Since \u03bd \u2208C \\ R \u2282\u03c1(Hr,\u03bb), we obtain the resolvent identity in (A.3). In order to rewrite the resolvent formula (A.3) in a way which is convenient to study its convergence, we set H := L y\u2208Y +\u039b L2(\u2126) and de\ufb01ne the function \u00b5: R+ \u00d7 (Y + \u039b) \u2192R+, \u00b5(r, y) = \u00b5n \u0000| ln r|\u22121\u0001 for y \u2208y(n) + \u039b. Furthermore, for \u03bd \u2208C \\ R we de\ufb01ne the operators Ar(\u03bd): H \u2192L2(R2), Er(\u03bd): L2(R2) \u2192H by Ar(\u03bd)\u039e := X y\u2208Y +\u039b Z \u2126 G\u03bd(\u00b7 \u2212y \u2212rz)[\u039e]y(z)dz, (A.4a) [Er(\u03bd)f]y := Z R2 G\u03bd(r \u00b7 \u2212z + y)f(z)dz, (A.4b) and Br(\u03bd), Cr(\u03bd), Dr(\u03bd): H \u2192H by [Br(\u03bd)\u039e]y := \u00b5(r, y) Z \u2126 G\u03bd(r(\u00b7 \u2212z))[\u039e]y(z)dz, (A.5a) [Cr(\u03bd)\u039e]y := X y1\u2208(Y +\u039b)\\{y} Z \u2126 G\u03bd(r(\u00b7 \u2212z) + y \u2212y1)[\u039e]y1(z)dz, (A.5b) [Dr(\u03bd)\u039e]y := \u00b5(r, y)[\u039e]y. (A.5c) In the above formulae, [\u039e]y denotes the component of \u039e \u2208L y\u2208Y +\u039b Hy belonging to Hy, where Hy are separable Hilbert spaces. 
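The factorized resolvent formula (A.3) in Proposition A.1 has an elementary finite-dimensional analogue that can serve as a quick sanity check. The following sketch is illustrative only, with random matrices standing in for the operators involved:

```python
# Finite-dimensional analogue of (A.3): with R0 = (A - nu)^(-1) and perturbation lam * v @ u,
# (A - lam*v@u - nu)^(-1) = R0 + lam * R0 @ v @ inv(I - lam*u@R0@v) @ u @ R0.
import numpy as np

rng = np.random.default_rng(1)
n, m, lam, nu = 8, 3, 0.7, 0.3 + 1.0j      # m plays the role of the "small" space L2(Omega_r)

A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # self-adjoint stand-in for -Delta
u = rng.standard_normal((m, n))                       # analogue of u_r
v = rng.standard_normal((n, m))                       # analogue of v_r

R0 = np.linalg.inv(A - nu * np.eye(n))
lhs = np.linalg.inv(A - lam * v @ u - nu * np.eye(n))
rhs = R0 + lam * R0 @ v @ np.linalg.inv(np.eye(m) - lam * u @ R0 @ v) @ u @ R0
print("max deviation:", np.max(np.abs(lhs - rhs)))    # should be at machine precision
```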
To analyse the properties of the operators in (A.4) and (A.5) we require several auxiliary unitary mappings: the identi\ufb01cation mapping Ir : L y\u2208Y +\u039b L2(y + r\u2126) \u2192L2(\u2126r), the translation operator Tr : L y\u2208Y +\u039b L2(r\u2126) \u2192L y\u2208Y +\u039b L2(y + r\u2126), and the scaling transformation Sr : H \u2192L y\u2208Y +\u039b L2(r\u2126) de\ufb01ned by (Ir\u039e)(x) := [\u039e]y(x) (x \u2208y + r\u2126, y \u2208Y + \u039b), [Tr\u039e]y(x) := [\u039e]y (x \u2212y) (x \u2208y + r\u2126), [Sr\u039e]y(x) := 1 r [\u039e]y \u0010x r \u0011 (x \u2208r\u2126). Note that the inverses of these mappings act as \u0002 I\u22121 r f \u0003 y(x) = f(x) (x \u2208y + r\u2126), \u0002 T \u22121 r \u039e \u0003 y(x) = [\u039e]y (x + y) (x \u2208r\u2126), \u0002 S\u22121 r \u039e \u0003 y(x) = r[\u039e]y(rx) (x \u2208\u2126). It will also be convenient to de\ufb01ne the product Jr := r\u22121IrTrSr. In the following lemma we state some of the basic properties of the operators Ar(\u03bd), Br(\u03bd), Cr(\u03bd), Dr(\u03bd) and Er(\u03bd). Lemma A.2. Let r > 0 be suf\ufb01ciently small and let \u03bd \u2208C \\ R. Then the following identities are true: Ar(\u03bd) = R0(\u03bd)vrJr, Br(\u03bd) + Dr(\u03bd)Cr(\u03bd) = J\u22121 r urR0(\u03bd)vrJr, Dr(\u03bd)Er(\u03bd) = J\u22121 r urR0(\u03bd). In particular, the operators Ar(\u03bd), Br(\u03bd), Cr(\u03bd), Dr(\u03bd) and Er(\u03bd) are bounded and everywhere de\ufb01ned. 16 Proof. Let a suf\ufb01ciently small r > 0 and an arbitrary \u03bd \u2208C \\ R be \ufb01xed. First, we prove the formula Ar(\u03bd) = R0(\u03bd)vrJr, which automatically implies that Ar(\u03bd) is bounded and everywhere de\ufb01ned, as the operators R0(\u03bd), vr and Jr separately possess this property. By the de\ufb01nition of Tr and Sr it follows for \u039e \u2208H , y \u2208Y + \u039b, and z \u2208y + r\u2126that 1 r [\u039e]y \u0012z \u2212y r \u0013 = [Sr\u039e]y (z \u2212y) = [TrSr\u039e]y(z). Hence, we conclude \u0000R0(\u03bd)vrJr\u039e \u0001 (x) = X y\u2208Y +\u039b Z y+r\u2126 G\u03bd(x \u2212z) 1 r2 [\u039e]y \u0012z \u2212y r \u0013 dz. Employing now in each single integral in the above sum a translation \u03b6 := z \u2212y and a transformation \u03be := \u03b6 r, we end up with \u0000R0(\u03bd)vrJr\u039e \u0001 (x) = X y\u2208Y +\u039b Z y+r\u2126 G\u03bd(x \u2212z) 1 r2 [\u039e]y \u0012z \u2212y r \u0013 dz = X y\u2208Y +\u039b Z r\u2126 G\u03bd(x \u2212\u03b6 \u2212y) 1 r2 [\u039e]y \u0012\u03b6 r \u0013 d\u03b6 = X y\u2208Y +\u039b Z \u2126 G\u03bd(x \u2212r\u03be \u2212y)[\u039e]y (\u03be) d\u03be = (Ar(\u03bd)\u039e)(x). (A.6) Next, we show the identity Dr(\u03bd)Er(\u03bd) = J\u22121 r urR0(\u03bd). Indeed, using the de\ufb01nitions of the operators Ir, Tr, Sr, Jr and ur we get for f \u2208L2(R2), x \u2208\u2126and y \u2208Y + \u039b \u0002 J\u22121 r urR0(\u03bd)f \u0003 y(x) = r2 \u0002 T \u22121 r I\u22121 r urR0(\u03bd)f \u0003 y (rx) = r2 \u0002 I\u22121 r urR0(\u03bd)f \u0003 y (rx + y) = \u00b5(r, y) Z R2 G\u03bd \u0000rx + y \u2212z \u0001 f(z)dz = [Dr(\u03bd)Er(\u03bd)f]y(x). (A.7) Clearly, the operator Dr(\u03bd) is bounded and everywhere de\ufb01ned. Moreover, since J\u22121 r , ur and R0(\u03bd) are bounded and everywhere de\ufb01ned as well and since Dr(\u03bd) is boundedly invertible for all suf\ufb01ciently small r > 0, it follows that also Er(\u03bd) is bounded. It remains to prove the identity Br(\u03bd) + Dr(\u03bd)Cr(\u03bd) = J\u22121 r urR0(\u03bd)vrJr. 
As in (A.7) and (A.6) we get for \u039e \u2208H , x \u2208\u2126, and y \u2208Y + \u039b \u0002 J\u22121 r urR0(\u03bd)vrJr\u039e \u0003 y(x) = X y1\u2208Y +\u039b \u00b5(r, y) Z \u2126 G\u03bd(r(x \u2212z) + y \u2212y1)[\u039e]y1(z)dz = \u0002\u0000Br(\u03bd) + Dr(\u03bd)Cr(\u03bd) \u0001 \u039e \u0003 y(x). Finally, since Br(\u03bd) and Dr(\u03bd) are both obviously bounded and everywhere de\ufb01ned due to their diagonal structure and since Dr(\u03bd) is also boundedly invertible for suf\ufb01ciently small r, it follows that Cr(\u03bd) is also a bounded operator. Thus, the proof of the lemma is complete. After all these preparations it is not dif\ufb01cult to transform the resolvent formula for Hr,\u03bb from Proposition A.1 into another one, which is more convenient for the investigation of its convergence. For this purpose, we de\ufb01ne for \u03bb \u22650 and \u03bd \u2208C \\ R the operator Fr(\u03bd, \u03bb) := \u03bb \u00001 \u2212\u03bbBr(\u03bd) \u0001\u22121Dr(\u03bd). (A.8) 17 Note that Fr(\u03bd, \u03bb) is well de\ufb01ned, as it is known from the one-center case that each component of the diagonal operator 1\u2212\u03bbBr(\u03bd) is boundedly invertible; see [AGHH, eq. (5.49) in Chap. I.5]. Hence, thanks to its diagonal structure it is clear that also (1 \u2212\u03bbBr(\u03bd))\u22121 exists as a bounded and everywhere de\ufb01ned operator. Theorem A.3. Let \u03bb \u22650, r > 0, and let Hr,\u03bb be de\ufb01ned as in (1.7). Let Ar(\u03bd), Er(\u03bd) be as in (A.4), let Br(\u03bd), Cr(\u03bd), Dr(\u03bd) be as in (A.5) and let Fr(\u03bd, \u03bb) be given by (A.8). Then, for any \u03bd \u2208C\\R with \u2225Fr(\u03bd, \u03bb)Cr(\u03bd)\u2225< 1 it holds (Hr,\u03bb \u2212\u03bd)\u22121 = R0(\u03bd) + Ar(\u03bd) \u0002 1 \u2212Fr(\u03bd, \u03bb)Cr(\u03bd) \u0003\u22121Fr(\u03bd, \u03bb)Er(\u03bd). Proof. Let wr be as in (1.5) and the operators ur, vr be as in (A.1), (A.2). Choose now a non-real number \u03bd such that additionally |Im \u03bd| > \u2225wr \u22121\u2225L\u221e. A simple computation shows now \u0002 1 \u2212\u03bb(Br(\u03bd) + Dr(\u03bd)Cr(\u03bd)) \u0003\u22121 = \u0002 (1 \u2212\u03bbBr(\u03bd))(1 \u2212Fr(\u03bd, \u03bb)Cr(\u03bd)) \u0003\u22121 = \u0002 1 \u2212Fr(\u03bd, \u03bb)Cr(\u03bd) \u0003\u22121(1 \u2212\u03bbBr(\u03bd))\u22121. Hence, it holds by Proposition A.1 and Lemma A.2 (Hr,\u03bb \u2212\u03bd)\u22121 = R0(\u03bd) + \u03bbR0(\u03bd)vr \u00001 \u2212\u03bburR0(\u03bd)vr \u0001\u22121urR0(\u03bd) = R0(\u03bd) + \u03bbAr(\u03bd)J\u22121 r \u0002 1 \u2212\u03bbJr (Br(\u03bd) + Dr(\u03bd)Er(\u03bd)) J\u22121 r \u0003\u22121 JrDr(\u03bd)Er(\u03bd) = R0(\u03bd) + \u03bbAr(\u03bd) [1 \u2212\u03bb(Br(\u03bd) + Dr(\u03bd)Er(\u03bd))]\u22121 Dr(\u03bd)Er(\u03bd) = R0(\u03bd) + Ar(\u03bd) \u0002 1 \u2212Fr(\u03bd, \u03bb)Cr(\u03bd) \u0003\u22121Fr(\u03bd, \u03bb)Er(\u03bd). For general \u03bd \u2208C \\ R with \u2225Fr(\u03bd, \u03bb)Cr(\u03bd)\u2225< 1 the statement follows by analytic continuation. Now we have all the tools to analyse the convergence of Hr,\u03bb in the norm resolvent sense. For this purpose, it is suf\ufb01cient to compute the limits of the operators Ar(\u03bd), Cr(\u03bd), Er(\u03bd) and Fr(\u03bd, \u03bb) separately. The obvious candidates for the limits of Ar(\u03bd), Cr(\u03bd) and Er(\u03bd), as r \u21920+, are given by A0(\u03bd), C0(\u03bd), and E0(\u03bd) that are de\ufb01ned as in (A.4) and (A.5) with r = 0. The convergence of Fr(\u03bd, \u03bb) is more subtle, as G\u03bd(0) is not de\ufb01ned. 
The known analysis of the convergence in the one-center case [AGHH, Chap. I.5] suggests the following limit operator: F(\u03bd, \u03bb): H \u2192H , \u0002 F(\u03bd, \u03bb)\u039e]y(x) := q(y, \u03bd, \u03bb) |\u2126|2 \u27e8[\u039e]y\u27e9\u2126, (A.9) where \u27e8f\u27e9\u2126= R \u2126fdx and q(y, \u03bd, \u03bb) is given by q(y, \u03bd, \u03bb) = \uf8f1 \uf8f2 \uf8f3 2\u03c0 n ln \u221a\u03bd 2i \u2212\u03b3 + 2\u03c0\u03b1n o\u22121 , \u03bb = \u03bbn, y \u2208y(n) + Y, n \u2208{1, . . . , N}, 0, else, with \u03b1n as in (3.1). Before going further with the proof of the convergence of (Hr,\u03bb \u2212\u03bd)\u22121, we recall the asymptotics of the integral kernel G\u03bd(x \u2212y) of R0(\u03bd). In a way similar to [BEHL16, Prop. A.1], one can prove the following claim. Lemma A.4. Let \u03bd \u2208C \\ R and let G\u03bd(x) = i 4H(1) 0 \u0000\u221a\u03bd|x| \u0001 be as in (2.3). Then, there exist constants \u03c1 = \u03c1(\u03bd) > 0, \u03ba = \u03ba(\u03bd) > 0, K = K(\u03bd) > 0, \u03ba\u2032 = \u03ba\u2032(\u03bd) > 0 and K\u2032 = K\u2032(\u03bd) > 0 such that \f \fG\u03bd(x) \f \f \u2264 ( \u03ba \u00001 + \f \f ln |x| \f \f\u0001 , |x| \u2264\u03c1, Ke\u2212Im \u221a\u03bd|x|, |x| \u2265\u03c1, \f \f\u2207G\u03bd(x) \f \f \u2264 ( \u03ba\u2032|x|\u22121, |x| \u2264\u03c1, K\u2032e\u2212Im \u221a\u03bd|x|, |x| \u2265\u03c1. In particular, G\u03bd and \u2207G\u03bd are integrable functions. 18 Now, we are prepared to investigate the convergence of Ar(\u03bd), Cr(\u03bd), Er(\u03bd), and Fr(\u03bd, \u03bb), as r \u21920+. Lemma A.5. Let \u03bd \u2208C \\ R, let the operators Ar(\u03bd), Cr(\u03bd), Er(\u03bd) be de\ufb01ned as in (A.4) and (A.5). Let the operators Fr(\u03bd, \u03bb) and F(\u03bd, \u03bb) be as in (A.8) and (A.9), respectively. Then there exists a constant M = M(\u03bd, \u03bb) > 0 such that \r \rAr(\u03bd) \u2212A0(\u03bd) \r \r \u2264Mr1/4, \r \rCr(\u03bd) \u2212C0(\u03bd) \r \r \u2264Mr, \r \rEr(\u03bd) \u2212E0(\u03bd) \r \r \u2264Mr1/4, \r \rFr(\u03bd, \u03bb) \u2212F(\u03bd, \u03bb) \r \r \u2264M| ln r|\u22121, for all suf\ufb01ciently small r > 0. Proof. Let \u03bd \u2208C \\ R. First, we analyze convergence of Er(\u03bd). For f \u2208L2(R2) we get, using the CauchySchwarz inequality, \r \r(Er(\u03bd) \u2212E0(\u03bd))f \r \r2 H = X y\u2208Y +\u039b Z \u2126 \f \f \f \f Z R2 (G\u03bd(rx \u2212z + y) \u2212G\u03bd(z \u2212y)) f(z)dz \f \f \f \f 2 dx \u2264 X y\u2208Y +\u039b Z \u2126 \u0012Z R2 |G\u03bd(rx \u2212z + y) \u2212G\u03bd(z \u2212y)|2 eIm \u221a\u03bd|z\u2212y|dz \u00b7 Z R2 e\u2212Im \u221a\u03bd|z\u2212y||f(z)|2dz \u0013 dx = \u0012 Z \u2126 Z R2 |G\u03bd(z \u2212rx) \u2212G\u03bd(z)|2 eIm \u221a\u03bd|z|dzdx \u0013 \u00b7 \u0012 Z R2 X y\u2208Y +\u039b e\u2212Im \u221a\u03bd|z\u2212y||f(z)|2dz \u0013 . The term P y\u2208Y +\u039b e\u2212Im \u221a\u03bd|z\u2212y| is uniformly bounded in z, as this sum can be estimated by a convergent z-independent geometric series. In fact, one can \ufb01nd for each z \u2208R2 points b z \u2208b \u0393 and b y \u2208Y with z = b z + b y. Then, it holds because of the periodicity of \u039b X y\u2208Y +\u039b e\u2212Im \u221a\u03bd|z\u2212y| = X y\u2208Y +\u039b e\u2212Im \u221a\u03bd|b z+b y\u2212y| \u2264eIm \u221a\u03bd|b z| X y\u2208Y +\u039b e\u2212Im \u221a\u03bd|b y\u2212y| \u2264\u03ba X y\u2208Y +\u039b e\u2212Im \u221a\u03bd|y| < \u221e, where the last sum is independent of z. 
Next, using the mean value theorem we obtain that for almost all (x, z) \u2208\u2126\u00d7 R2 G\u03bd(z \u2212rx) \u2212G\u03bd(z) = \u2212 Z 1 0 \u2207G\u03bd(z \u2212r\u03b8x) \u00b7 rxd\u03b8 holds. This implies Z \u2126 Z R2 |G\u03bd(z \u2212rx) \u2212G\u03bd(z)| dzdx \u2264M1 Z \u2126 Z R2 Z 1 0 r |\u2207G\u03bd(z \u2212r\u03b8x)| d\u03b8dzdx = rM1|\u2126| Z R2 |\u2207G\u03bd(z)| dz = M2r, (A.10) where we used the translational invariance of the Lebesgue measure and that \u2207G\u03bd is integrable by Lemma A.4. Hence, it follows with the help of the Cauchy-Schwarz inequality \r \rEr(\u03bd) \u2212E0(\u03bd) \r \r4 \u2264M3 \u0012Z \u2126 Z R2 |G\u03bd(z \u2212rx) \u2212G\u03bd(z)|2 eIm \u221a\u03bd|z|dzdx \u00132 \u2264M3 \u0012Z \u2126 Z R2 |G\u03bd(z \u2212rx) \u2212G\u03bd(z)|3 e2Im \u221a\u03bd|z|dzdx \u0013 \u00b7 \u0012Z \u2126 Z R2 |G\u03bd(z \u2212rx) \u2212G\u03bd(z)| dzdx \u0013 \u2264M2M3r \u0012Z \u2126 Z R2 |G\u03bd(z \u2212rx) \u2212G\u03bd(z)|3 e2Im \u221a\u03bd|z|dzdx \u0013 . 19 Employing the triangle inequality and the estimates of G\u03bd in Lemma A.4, we see that the last integral is \ufb01nite and we end up with \u2225Er(\u03bd) \u2212E0(\u03bd) \r \r \u2264Mr1/4. A similar argument yields \u2225Ar(\u03bd)\u2217\u2212A0(\u03bd)\u2217\u2225\u2264 Mr1/4 and therefore, \u2225Ar(\u03bd) \u2212A0(\u03bd)\u2225\u2264Mr1/4. Next, we analyze the convergence of Cr(\u03bd). Using that the Hilbert-Schmidt norm of an integral operator is an upper bound for its operator norm and a symmetry argument, one sees similarly as in the appendix of [HHKJ84] \r \rCr(\u03bd) \u2212C0(\u03bd) \r \r \u2264 sup y\u2208Y +\u039b X y1\u2208(Y +\u039b)\\{y} \u0012Z \u2126 Z \u2126 |G\u03bd(r(x \u2212z) + y \u2212y1) \u2212G\u03bd(y \u2212y1)|2 dzdx \u00131/2 . In the same way as in (A.10), we get |G\u03bd(r(x \u2212z) + y \u2212y1) \u2212G\u03bd(y \u2212y1)| \u2264M4r Z 1 0 |\u2207G\u03bd(r\u03b8(x \u2212z) + y \u2212y1)|d\u03b8. By the estimates in Lemma A.4 (ii), we get using that y \u0338= y1 |\u2207G\u03bd(r\u03b8(x \u2212z) + y \u2212y1)| \u2264\u03bae\u2212Im \u221a\u03bd|y\u2212y1| for all suf\ufb01ciently small r > 0 and all x, z \u2208\u2126. Hence, we deduce \u0012Z \u2126 Z \u2126 |G\u03bd(r(x \u2212z) + y \u2212y1) \u2212G\u03bd(y \u2212y1)|2 dzdx \u00131/2 \u2264M5re\u2212Im \u221a\u03bd|y\u2212y1|. This implies \r \rCr(\u03bd) \u2212C0(\u03bd) \r \r \u2264 sup y1\u2208Y +\u039b X y\u2208(Y +\u039b)\\{y1} M5re\u2212Im \u221a\u03bd|y\u2212y1| \u2264Mr, where we estimated the last sum by a convergent geometric series. Thus, the claim on the convergence of Cr(\u03bd) is shown. It remains to analyze the convergence of Fr(\u03bd, \u03bb). For this purpose, we de\ufb01ne for n \u2208{1, . . ., N} the bounded auxiliary operator e Br,n(\u03bd) in L2(\u2126) via \u0000 e Br,n(\u03bd)f \u0001 (x) = \u00b5n \u0000| ln r|\u22121\u0001 Z \u2126 G\u03bd(r(x \u2212z))f(z)dz. From the one-center case [AGHH, Chap. I.5] we know4 \u0000\u00001 \u2212\u03bb e Br,n(\u03bd) \u0001\u22121f \u0001 (x) = 1 2\u03c0|\u2126| \u00b7 | ln r| \u00b7 q(yn, \u03bd, \u03bbn) \u00b7 \u27e8f\u27e9\u2126+ O(1), r \u21920 + . Using this and the diagonal structure of (1 \u2212\u03bbBr(\u03bd))\u22121Dr(\u03bd) it follows immediately that \r \rFr(\u03bd, \u03bb) \u2212F(\u03bd, \u03bb) \r \r = \r \r\u03bb(1 \u2212\u03bbBr(\u03bd))\u22121Dr(\u03bd) \u2212F(\u03bd, \u03bb) \r \r \u2264M| ln r|\u22121 for all suf\ufb01ciently small r > 0. This \ufb01nishes the proof of the lemma. 
Since we know now the convergence properties of all the involved operators in the resolvent formula of Hr,\u03bb, we are ready to prove Theorem 3.1. 4Note that an inverse sign is missing in eq. (5.61) in [AGHH, Chap. I.5]. 20 Proof of Theorem 3.1. We split the proof of this theorem into three steps. First, we show that \u0000Hr,\u03bb \u2212\u03bd \u0001\u22121 converges in the operator norm, if the imaginary part of \u03bd \u2208C \\ R has a suf\ufb01ciently large absolute value. Then, in the second step we prove that the limit operator is indeed the Sch\u00f6dinger operator with point interactions speci\ufb01ed as in the theorem. Finally, we extend this convergence result to any \u03bd \u2208C \\ R. Step 1. Let the operators Ar(\u03bd), Cr(\u03bd), and Er(\u03bd) be de\ufb01ned as in (A.4) and (A.5), let Fr(\u03bd, \u03bb) be as in (A.8) and let F(\u03bd, \u03bb) be given by (A.9). Fix \u03bd \u2208C with |Im \u03bd| so large that Q\u03b1,y(n)+\u039b(\u03bd) given by (2.1) has a bounded and everywhere de\ufb01ned inverse and that \u2225F(\u03bd, \u03bb)C0(\u03bd)\u2225< 1. Note that such a choice is possible, as \u2225F(\u03bd, \u03bb)\u2225\u2264|q(y, \u03bd, \u03bb)| |\u2126|2 \u21920, |Im \u03bd| \u2192\u221e, and \r \rC0(\u03bd) \r \r \u2264 sup y\u2208Y +\u039b X y1\u2208(Y +\u039b)\\{y} \u0012Z \u2126 Z \u2126 |G\u03bd(y \u2212y1)|2 dzdx \u00131/2 , where we used the Holmgren bound for the operator norm of C0(\u03bd) from the appendix of [HHKJ84], a symmetry argument and that the Hilbert-Schmidt norm of an integral operator is an upper bound for its operator norm. Because of the asymptotics of G\u03bd from Lemma A.4 the last sum can be estimated by a convergent geometric series uniformly in |Im \u03bd|, which yields \ufb01nally the justi\ufb01cation of our assumption. Employing [K, Thm. IV 1.16] and Lemma A.5 we deduce that [1\u2212Fr(\u03bd, \u03bb)Cr(\u03bd)]\u22121 is a bounded and everywhere de\ufb01ned operator and \r \r[1 \u2212Fr(\u03bd, \u03bb)Cr(\u03bd)]\u22121 \u2212[1 \u2212F(\u03bd, \u03bb)C0(\u03bd)]\u22121\r \r \u2264 \u2225[1 \u2212F(\u03bd, \u03bb)C0(\u03bd)]\u22121\u22252 \u00b7 \u2225Fr(\u03bd, \u03bb)Cr(\u03bd) \u2212F(\u03bd, \u03bb)C0(\u03bd)\u2225 1 \u2212\u2225Fr(\u03bd, \u03bb)Cr(\u03bd) \u2212F(\u03bd, \u03bb)C0(\u03bd)\u2225\u00b7 \u2225[1 \u2212F(\u03bd, \u03bb)C0(\u03bd)]\u22121\u2225\u2264M| ln r|\u22121 (A.11) for some constant M > 0. This together with Theorem A.3 and Lemma A.5 yields eventually lim r\u21920+ \u0000Hr,\u03bb \u2212\u03bd \u0001\u22121 = lim r\u21920+ \u0002 R0(\u03bd) + Ar(\u03bd)[1 \u2212Fr(\u03bd, \u03bb)Cr(\u03bd)]\u22121Fr(\u03bd, \u03bb)Er(\u03bd) \u0003 = R0(\u03bd) + A0(\u03bd)[1 \u2212F(\u03bd, \u03bb)C0(\u03bd)]\u22121F(\u03bd, \u03bb)E0(\u03bd). (A.12) Moreover, Lemma A.5 and (A.11) imply that the order of convergence is | ln r|\u22121. Note that F(\u03bd, \u03bb) = 0, if \u03bb / \u2208{\u03bb1, . . . , \u03bbN}. Therefore, the above considerations are true for any \u03bd \u2208C \\ R and item (ii) of this theorem follows. Step 2. From now on assume that \u03bb = \u03bbn for some n \u2208{1, . . ., N}. We are going to prove that the limit operator in (A.12) is equal to (\u2212\u2206\u03b1n,y(n)+\u039b \u2212\u03bd)\u22121 with the coupling constant \u03b1n as in the formulation of the theorem. 
For that purpose we set H := L y\u2208Y +\u039b C = \u21132(Y + \u039b) and introduce the bounded and everywhere de\ufb01ned operators U : H \u2192H and V : H \u2192H via [U\u03be]y := q(y, \u03bd, \u03bb) |\u2126| [\u03be]y and [V \u039e]y := \u27e8[\u039e]y\u27e9\u2126 and the operators G: L2(R2) \u2192H and H : H \u2192H via [Gf]y = Z R2 G\u03bd(z \u2212y)f(z)dz and [H\u03be]y := X y1\u2208Y +\u039b e G\u03bd(y \u2212y1)[\u03be]y1, where e G\u03bd is given by (2.4). Note that the operator H is bounded. Indeed, the Holmgren bound (cf. the appendix of [HHKJ84]) and a symmetry argument imply \u2225H\u2225\u2264 sup y\u2208Y +\u039b X y1\u2208Y +\u039b \f \f e G\u03bd(y \u2212y1) \f \f. 21 Thanks to the estimates of G\u03bd in Lemma A.4 the last sum is bounded by a convergent geometric series which can be estimated by a value independent of y \u2208Y + \u039b. We \ufb01nd for f \u2208L2(R2) and x \u2208\u2126 [F(\u03bd, \u03bb)E0(\u03bd)f]y(x) = q(y, \u03bd, \u03bb) |\u2126| Z R2 G\u03bd(z \u2212y)f(z)dz = \u0002 UGf \u0003 y. Similarly, it holds for any \u039e \u2208H [F(\u03bd, \u03bb)C0(\u03bd)\u039e]y = q(y, \u03bd, \u03bb) |\u2126| X y1\u2208Y +\u039b e G\u03bd(y1 \u2212y)\u27e8[\u039e]y1\u27e9\u2126= \u0002 UHV \u039e \u0003 y. Finally, we see for \u039e \u2208H \u0000A0(\u03bd)\u039e \u0001 (x) = X y\u2208Y +\u039b G\u03bd(x \u2212y)\u27e8[\u039e]y\u27e9\u2126= X y\u2208Y +\u039b G\u03bd(x \u2212y) \u0002 V \u039e \u0003 y. This implies R0(\u03bd) + A0(\u03bd)[1 \u2212F(\u03bd, \u03bb)C0(\u03bd)]\u22121F(\u03bd, \u03bb)E0(\u03bd) = R0(\u03bd) + X y\u2208Y +\u039b G\u03bd(x \u2212y) \u0002 V \u00001 \u2212UHV \u0001\u22121UGf \u0003 y. In order to simplify the last formula, we have to investigate the operator 1\u2212V UH. Set H1 := \u21132(y(n)+\u039b) and H2 := \u21132((Y \\ y(n)) + \u039b). The decompositions of the operators V U and 1 \u2212V UH with respect to H = H1 \u2295H2 are given by V U = \u0012 q(y, \u03bd, \u03bb)IH1 0 0 0 \u0013 and 1 \u2212V UH = \u0012 q(y, \u03bd, \u03bb)Q B 0 IH2 \u0013 , where Q := Q\u03b1n,y(n)+\u039b(\u03bd) is the operator which is de\ufb01ned via the matrix (2.5) and B : H2 \u2192H1 is a bounded operator which does not need to be speci\ufb01ed because it will cancel in the further computations. Recall that due to our assumptions on \u03bd the operator Q and hence also 1 \u2212V UH are both boundedly invertible. A simple calculation shows \u00001 \u2212V UH \u0001\u22121 = \u0012 q(y, \u03bd, \u03bb)\u22121Q\u22121 \u2212q(y, \u03bd, \u03bb)\u22121Q\u22121B 0 IH2 \u0013 . Therefore, we \ufb01nd \u00001 \u2212V UH \u0001\u22121V U = \u0012 Q\u22121 0 0 0 \u0013 . Recall that the points in \u039b are denoted by yp = p1a1 +p2a2, p = (p1, p2)\u22a4\u2208Z2, where a1 and a2 span the basis cell b \u0393, and the elements of the in\ufb01nite matrix Q\u22121 are denoted by rml \u03b1n,y(n)+\u039b(\u03bd). 
Then, we deduce for f \u2208L2(R2) lim r\u21920+ \u0000Hr,\u03bb \u2212\u03bd \u0001\u22121f = R0(\u03bd)f + A0(\u03bd)[1 \u2212F(\u03bd, \u03bb)C0(\u03bd)]\u22121F(\u03bd, \u03bb)E0(\u03bd)f = R0(\u03bd) + X y\u2208Y +\u039b G\u03bd(x \u2212y) \u0002 V \u00001 \u2212UHV \u0001\u22121UGf \u0003 y = R0(\u03bd) + X y\u2208Y +\u039b G\u03bd(x \u2212y) \u0002\u00001 \u2212V UH \u0001\u22121V UGf \u0003 y = R0(\u03bd)f + X m,l\u2208Z2 rml \u03b1n,y(n)+\u039b(\u03bd) \u0000f, G\u03bd(\u00b7 \u2212y(n) \u2212yl) \u0001 L2(R2)G\u03bd(\u00b7 \u2212y(n) \u2212ym) = (\u2212\u2206\u03b1n,y(n)+\u039b \u2212\u03bd)\u22121f, 22 which is the desired result for our special choice of \u03bd. Step 3. Finally, we extend the result from Step 2 to any e \u03bd \u2208C \\ R. With the shorthand D(\u03bd) := (Hr,\u03bb \u2212 \u03bd)\u22121 \u2212(\u2212\u2206\u03b1n,y(n)+\u039b \u2212\u03bd)\u22121 we get via a simple calculation D(e \u03bd) = \u0002 1 + (e \u03bd \u2212\u03bd) \u0000\u2212\u2206\u03b1n,y(n)+\u039b \u2212e \u03bd \u0001\u22121\u0003 \u00b7 D(\u03bd) \u00b7 \u0002 1 + (e \u03bd \u2212\u03bd) \u0000Hr,\u03bb \u2212e \u03bd \u0001\u22121\u0003 . Thus, the claimed convergence result is true for any e \u03bd \u2208C \\ R and the order of convergence is | ln r|\u22121. This \ufb01nishes the proof of Theorem 3.1. Acknowledgment The authors thank S. Albeverio, J. Behrndt, P. Exner, F. Gesztesy, and D. Krej\u02c7 ci\u02c7 r\u00edk for useful hints to solve the approximation problems. Moreover, M. Holzmann acknowledges \ufb01nancial support under a scholarship of the program \u201cAktion Austria Czech Republic\u201d during a research stay in Prague by the Czech Centre for International Cooperation in Education (DZS) and the Austrian Agency for International Cooperation in Education and Research (OeAD). V. Lotoreichik was supported by the grant No. 17-01706S of the Czech Science Foundation (GA \u02c7 CR).", "introduction": "1.1 Motivation Photonic crystals gained a lot of attention in the recent decades both from the physical and the math- ematical side. An electromagnetic wave can propagate in the crystal if and only if its frequency does not belong to a photonic band gap. Therefore, photonic crystals can be seen as an optical analogue of \u2217Institut f\u00fcr Numerische Mathematik, Technische Universit\u00e4t Graz, Steyrergasse 30, A 8010 Graz, Austria, holzmann@math.tugraz.at \u2020Department of Theoretical Physics, Nuclear Physics Institute, Czech Academy of Sciences, 250 68, \u02c7 Re\u017e near Prague, Czechia, lotoreichik@ujf.cas.cz 1 semiconductors giving a physical motivation to study them. The idea of designing periodic dielectric materials with photonic band gaps was proposed in [J87, Y87]. In the recent years a great advance in fabrication of such crystals was achieved. Despite a substantial progress in the physical and mathematical investigation of photonic crystals with a given geometry (see [DLPSW, JJWM] and the references therein), the important task of designing photonic crystals having band gaps of a certain prede\ufb01ned structure still remains challenging. In this paper we are interested in the following inverse problem: How to design a photonic crystal such that \ufb01nitely many prede\ufb01ned frequencies \u03c91, . . . , \u03c9N belong to photonic band gaps? In order to tackle this problem we employ a special class of photonic crystals made of very thin in\ufb01- nite rods with high dielectric permittivity embedded into vacuum. 
To be more precise, let $\widehat\Gamma$ be a parallelogram in $\mathbb{R}^2$ which is spanned by two linearly independent vectors and let the points $y^{(1)}, \ldots, y^{(N)} \in \widehat\Gamma$ be pairwise distinct. The basis cell of the crystal consists of $N$ infinite rods with large relative dielectric permittivities $\varepsilon^{(n)} \gg 1$, $n = 1, \ldots, N$, whose cross sections are small bounded domains in $\mathbb{R}^2$ localized near the points $y^{(n)}$, $n = 1, \ldots, N$, and surrounded by vacuum with relative dielectric permittivity $\varepsilon = 1$. Then the crystal is built by repeating the basis cell in a periodic way such that the whole Euclidean space $\mathbb{R}^3$ is filled; cf. Figure 1.1 and Subsection 1.3 for a more precise definition. Special crystals of the above type have already been investigated in [JJWM, JVF97, MM93, MRBJ93, PM91] by physical and numerical experiments. They were among the first photonic crystals treated in the literature because they are comparably simple to produce. The results in the above mentioned papers indicate that these crystals may have band gaps for electromagnetic waves polarized in a special way. Our goal is to provide an analytic proof of this and related results.

[Figure 1.1: Cross section of the crystal. The basis period cell is colored in gray and contains three rods, located at $y^{(1)}$, $y^{(2)}$, $y^{(3)}$ ($N = 3$).]

1.2 Maxwell's equations and propagation of electromagnetic waves

Under conventional physical assumptions for our model (absence of currents and electric charges, linear constitutive relations and no magnetic properties, i.e. the relative magnetic permeability satisfies $\mu \equiv 1$) and a suitable choice of the units, Maxwell's equations for time harmonic fields take the following simple form (see [FK96b]):
\[
\operatorname{div}(\varepsilon E) = 0, \qquad \operatorname{curl} E = \frac{i\omega}{c}\, H, \qquad \operatorname{div} H = 0, \qquad \operatorname{curl} H = -\frac{i\omega}{c}\,\varepsilon E. \tag{1.1}
\]
Here, the three-dimensional vector fields $E$ and $H$ are the electric and the magnetic field, respectively, $\omega > 0$ is the frequency of the wave, $\varepsilon$ is the relative dielectric permittivity and $c > 0$ stands for the speed of light. Choose a system of coordinates such that the $x_3$-axis is parallel to the rods building our photonic crystal. In this paper, we are interested in $x_3$-independent waves, i.e. $E = E(x_1,x_2)$ and $H = H(x_1,x_2)$. This assumption is reasonable, as the physical parameters also depend only on $x_1$ and $x_2$. In the physical literature such waves are often called standing waves, as they propagate strictly parallel to the $x_1$-$x_2$-plane perpendicular to the rods and not in the $x_3$-direction. The frequency $\omega > 0$ belongs to a photonic band gap if the Maxwell equations (1.1) possess no bounded solutions. Maxwell's equations can be regarded as a generalized eigenvalue problem for the Maxwell operator
\[
M \begin{pmatrix} E \\ H \end{pmatrix} = \begin{pmatrix} 0 & i c\,\varepsilon^{-1} \operatorname{curl} \\ -ic\, \operatorname{curl} & 0 \end{pmatrix} \begin{pmatrix} E \\ H \end{pmatrix},
\]
which is defined on an appropriate subspace of $L^2(\mathbb{R}^2, \mathbb{C}^3, \varepsilon\,dx) \times L^2(\mathbb{R}^2, \mathbb{C}^3, dx)$ that takes the constraints $\operatorname{div}(\varepsilon E) = 0$ and $\operatorname{div} H = 0$ into account; cf. [FK96b] for more details. According to [FK96b, Sec. 7.1] the operator $M$ is self-adjoint in $L^2(\mathbb{R}^2, \mathbb{C}^3, \varepsilon\,dx) \times L^2(\mathbb{R}^2, \mathbb{C}^3, dx)$. Using periodicity and the results of [KKS02] it can be shown that $\omega$ belongs to a photonic band gap of the crystal if and only if $\omega \notin \sigma(M)$. Therefore, the existence and location of photonic band gaps can be analyzed by means of the spectral analysis of $M$.
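To make this reduction concrete, here is a standard formal computation (a sketch, not quoted from the paper) for an $x_3$-independent field of the form $E = (0,0,E_3)^\top$, $H = (H_1,H_2,0)^\top$, anticipating the splitting into polarized modes discussed next. The equation $\operatorname{curl} E = \frac{i\omega}{c} H$ gives $H_1 = -\frac{ic}{\omega}\,\partial_2 E_3$ and $H_2 = \frac{ic}{\omega}\,\partial_1 E_3$, and inserting this into $\operatorname{curl} H = -\frac{i\omega}{c}\,\varepsilon E$ yields
\[
\partial_1 H_2 - \partial_2 H_1 = \frac{ic}{\omega}\,\Delta E_3 = -\frac{i\omega}{c}\,\varepsilon E_3
\qquad\Longleftrightarrow\qquad
-\frac{1}{\varepsilon}\,\Delta E_3 = \Big(\frac{\omega}{c}\Big)^2 E_3,
\]
while the constraints $\operatorname{div}(\varepsilon E) = 0$ and $\operatorname{div} H = 0$ are satisfied automatically for such fields. This scalar eigenvalue problem is exactly the one encoded by the operator $\Theta_r$ introduced in Subsection 1.3 below (with $\varepsilon = w_r$).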
Since $\varepsilon$ is periodic in $x_1$ and $x_2$ and independent of $x_3$, the operator $M$ can be decomposed as $M = M_1 \oplus M_2$, where $M_1$ acts on so-called transverse magnetic (TM) modes having the form $E = (0,0,E_3)^\top$, $H = (H_1,H_2,0)^\top$, and $M_2$ acts on transverse electric (TE) modes given by $E = (E_1,E_2,0)^\top$, $H = (0,0,H_3)^\top$; see [FK96b, Sec. 7.1]. Therefore, it holds $\sigma(M) = \sigma(M_1) \cup \sigma(M_2)$ and it suffices to perform the spectral analysis of $M_1$ and $M_2$ separately to characterize $\sigma(M)$. The spectra of $M_j$, $j = 1,2$, have their own independent physical meaning when the polarization of electromagnetic waves into TE- and TM-modes is taken into account. Moreover, $\sigma(M_j)$, $j = 1,2$, are in simple direct correspondence with the spectra of certain scalar differential operators on $L^2(\mathbb{R}^2)$; cf. Subsection 1.3.

Several mathematical approaches to treat the spectral problems for the operators $M_j$, $j = 1,2$, have been developed. Purely numerical methods are elaborated in e.g. [MM93, MRBJ93, PM91]. A combination of numerical and analytical methods is suggested in [HPW09]. For a wide class of geometries a method based on boundary integral equations is efficient [AKL, AKL09]. An analytic approach for high contrast media is proposed by A. Figotin and P. Kuchment for crystals of a geometry different from ours, which are composed of periodically ordered vacuum bubbles surrounded by an optically dense material with very large dielectric permittivity and of small width. In a series of papers [FK94, FK96a, FK96b, FK98a, FK98b] these authors showed that such crystals have an arbitrarily large number of photonic band gaps. Their approach largely inspired the methods used in the present paper. Finally, we point out that topics of recent interest in this active research field include the analysis of guided perturbations of photonic crystals [AS04, BHPW14, KO12], of materials with non-linear constitutive relations [EKE12, ELT17], and of photonic crystals made of metamaterials [CL13, E10].

1.3 Notations and statement of the main results

In order to formulate our main results, we fix some notations. We set $0 := (0,0) \in \mathbb{R}^2$. For $x \in \mathbb{R}^2$ and $r > 0$ we define $B_r(x) := \{y \in \mathbb{R}^2 : |y - x| < r\}$. For $\alpha, \beta \ge 0$ and $A, B \subset \mathbb{R}^2$ we use the notation $\alpha A + \beta B := \{\alpha x + \beta y \in \mathbb{R}^2 : x \in A,\ y \in B\}$. For a measurable set $\Omega \subset \mathbb{R}^2$ we denote its Lebesgue measure by $|\Omega|$ and its characteristic function by $1_\Omega$. As usual, $L^1(\Omega)$ stands for the space of integrable functions over $\Omega$. For $f \in L^1(\Omega)$ we introduce the notation $\langle f \rangle_\Omega = \int_\Omega f(x)\,dx$. The $L^2$-space over $\Omega \subset \mathbb{R}^2$ with the usual inner product is denoted by $\big(L^2(\Omega), (\cdot,\cdot)_{L^2(\Omega)}\big)$ and the $L^2$-based Sobolev spaces by $H^k(\Omega)$, $k = 1,2$, respectively. For a self-adjoint operator $T$ in a Hilbert space we denote its spectrum by $\sigma(T)$ and its resolvent set by $\rho(T) := \mathbb{C} \setminus \sigma(T)$. Let $N \in \mathbb{N}$ be fixed and let $\lambda_1, \ldots, \lambda_N \in (0,\infty)$ be pairwise distinct; without loss of generality we assume that $0 < \lambda_1 < \lambda_2 < \cdots < \lambda_N$. These numbers are associated with the frequencies $\omega_1, \ldots, \omega_N$ that are desired to be contained in photonic band gaps of the crystal via the relations $\lambda_n = \big(\frac{\omega_n}{c}\big)^2$. Moreover, let $a_1, a_2 \in \mathbb{R}^2$ be linearly independent.
We set
\[
\Lambda := \big\{ n_1 a_1 + n_2 a_2 \in \mathbb{R}^2 : (n_1,n_2) \in \mathbb{Z}^2 \big\} \qquad\text{and}\qquad \widehat\Gamma := \big\{ s_1 a_1 + s_2 a_2 \in \mathbb{R}^2 : s_1, s_2 \in [0,1) \big\}. \tag{1.2}
\]
For the points in $\Lambda$ we often use the notation $y_n = n_1 a_1 + n_2 a_2$, $n = (n_1,n_2) \in \mathbb{Z}^2$. We choose pairwise distinct points $y^{(1)}, \ldots, y^{(N)} \in \widehat\Gamma$ and define
\[
Y := \big\{ y^{(1)}, \ldots, y^{(N)} \big\}. \tag{1.3}
\]
Let $\Omega \subset \mathbb{R}^2$ be a bounded domain with $0 \in \Omega$ and let $r > 0$ be sufficiently small such that $\big(y^{(n)} + r\Omega\big) \cap \big(y^{(m)} + r\Omega\big) = \emptyset$ for $n \ne m$. For $n = 1, \ldots, N$ we define
\[
\mu_n(x) := \frac{2\pi}{\lambda_n |\Omega|}\, x + c_n x^2, \tag{1.4}
\]
where $c_n \in \mathbb{R}$, $n = 1, \ldots, N$, are some constant parameters. Finally, we introduce the function $w_r\colon \mathbb{R}^2 \to \mathbb{R}$ by
\[
w_r := 1 + \frac{1}{r^2} \sum_{n=1}^N \mu_n\Big(\frac{1}{|\ln r|}\Big)\, 1_{\Lambda + y^{(n)} + r\Omega}. \tag{1.5}
\]
The relative dielectric permittivity $\varepsilon_r\colon \mathbb{R}^3 \to \mathbb{R}$, which describes the physical properties of the crystal, is expressed through $w_r$ by $\varepsilon_r(x_1,x_2,x_3) := w_r(x_1,x_2)$. In order to treat the spectral problem for the associated Maxwell operator $M = M_1 \oplus M_2$ described in Subsection 1.2, we introduce two partial differential operators in $L^2(\mathbb{R}^2)$ by
\[
\Theta_r f := -w_r^{-1} \Delta f, \qquad \operatorname{dom} \Theta_r := H^2(\mathbb{R}^2), \tag{1.6a}
\]
\[
\Upsilon_r f := -\operatorname{div}\big(w_r^{-1} \operatorname{grad} f\big), \qquad \operatorname{dom} \Upsilon_r := \big\{ f \in H^1(\mathbb{R}^2) : \operatorname{div}\big(w_r^{-1} \operatorname{grad} f\big) \in L^2(\mathbb{R}^2) \big\}. \tag{1.6b}
\]
According to [FK96b, Sec. 7.1] we have
\[
\omega \in \sigma(M_1) \iff \Big(\frac{\omega}{c}\Big)^2 \in \sigma(\Theta_r), \qquad \omega \in \sigma(M_2) \iff \Big(\frac{\omega}{c}\Big)^2 \in \sigma(\Upsilon_r).
\]
Following the strategy of [FK96b, FK98b], in order to investigate the spectral properties of $\Theta_r$ we introduce a family of auxiliary Schrödinger operators
\[
H_{r,\lambda} f := -\Delta f - \lambda (w_r - 1) f, \qquad \operatorname{dom} H_{r,\lambda} := H^2(\mathbb{R}^2), \qquad \lambda \ge 0. \tag{1.7}
\]
It is not difficult to check that $\lambda \in \sigma(\Theta_r) \iff \lambda \in \sigma(H_{r,\lambda})$. We show that the Schrödinger operators $H_{r,\lambda}$ converge (as $r \to 0+$) in the norm resolvent sense to Hamiltonians with point interactions supported on $Y + \Lambda$. This convergence result is already demonstrated in a more general setting in [BHL14], but for our special form of $w_r$ we provide a refined analysis of the approximation including an estimate for the order of convergence. For similar results in the case of a single point interaction in $\mathbb{R}^2$ and in other space dimensions see [AGHH87, AGHK84, HHKJ84] or the monograph [AGHH] and the references therein. Using the known spectral properties of these limit operators with point interactions and continuity arguments one can prove that the initially given number $\lambda_n$ belongs to a gap of $\sigma\big(H_{r,\lambda_n}\big)$, $n = 1, \ldots, N$, if the geometry of the crystal and the parameters in the definition of $w_r$ are chosen appropriately. This leads to the existence of gaps in $\sigma(\Theta_r)$ in the vicinities of the $\lambda_n$, which is the first main result of this paper; its proof is provided in Section 3.

Theorem 1.1. There exist linearly independent vectors $a_1, a_2 \in \mathbb{R}^2$ and coefficients $c_1, \ldots, c_N$ such that
\[
\bigcup_{n=1}^N \big( \lambda_n - \lambda_1 r^2 |\ln r|,\ \lambda_n + \lambda_1 r^2 |\ln r| \big) \subset \rho(\Theta_r)
\]
for all sufficiently small $r > 0$.

Concerning the analysis of $\Upsilon_r$, there are several works on similar divergence type operators with high contrast coefficients, see e.g. [HL00, Z05].
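For orientation, here is a short worked evaluation of (1.4)–(1.5) (an illustration, not a statement from the paper). On the $n$-th family of rods $\Lambda + y^{(n)} + r\Omega$ the dielectric contrast equals
\[
w_r - 1 = \frac{1}{r^2}\,\mu_n\Big(\frac{1}{|\ln r|}\Big)
= \frac{2\pi}{\lambda_n |\Omega|\, r^2 |\ln r|} + \frac{c_n}{r^2 |\ln r|^2},
\]
so the permittivity on the shrinking rods blows up like $r^{-2}|\ln r|^{-1}$ as $r \to 0+$. In particular, for $\lambda = \lambda_n$ the potential in (1.7) satisfies, on this family of rods,
\[
\lambda_n (w_r - 1) = \frac{2\pi}{|\Omega|\, r^2 |\ln r|} + \frac{\lambda_n c_n}{r^2 |\ln r|^2},
\]
with a leading term independent of $\lambda_n$; such logarithmically scaled, shrinking potentials are exactly the kind known to approximate two-dimensional point interactions, in line with the norm resolvent convergence stated above.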
The works [HL00, Z05] have in common that the parameter becomes large on a domain whose diameter divided by the size of the period cell is constant; thus, these results do not apply in our setting. Other closely related results on the spectral analysis of divergence type operators with high contrast coefficients can be found e.g. in [CC16, FK96a, Kh13, Z00]. In our setting we use the Floquet-Bloch decomposition to show that any compact subinterval of $[0,\infty)$ is contained in the spectrum of $\Upsilon_r$ for sufficiently small $r > 0$. This is the second main result of the paper; its proof is provided in Section 4.

Theorem 1.2. For any $L > 0$ there exists $r_0 = r_0(L) > 0$ such that $[0,L] \subset \sigma(\Upsilon_r)$ for all $0 < r < r_0$.

We conclude this section with a discussion and interpretation of our results. According to Theorem 1.1, for a given set of pairwise distinct frequencies $\omega_1, \ldots, \omega_N \in (0,+\infty)$ there exists a geometry of the crystal (a suitable period cell and coefficients $c_1, \ldots, c_N$) such that
\[
\Big\{ \Big(\frac{\omega_1}{c}\Big)^2, \Big(\frac{\omega_2}{c}\Big)^2, \ldots, \Big(\frac{\omega_N}{c}\Big)^2 \Big\} \subset \rho(\Theta_r),
\]
if the diameter of the rods (related to $r > 0$) is sufficiently small. Moreover, we have an estimate for the size of the gap around each $\omega_n$. In particular, our results demonstrate a way to construct photonic crystals such that TM-modes with frequencies in the vicinities of $\omega_n$, $n = 1, \ldots, N$, cannot propagate through them. At the same time, in view of Theorem 1.2 there are no gaps in compact subintervals of the spectrum of $\Upsilon_r$ for small $r > 0$. Restricting the frequencies to certain ranges, as is typically the case in applications, there exists an $r_0 > 0$ such that for any frequency in this range there is a TE-mode with this frequency which can propagate through the crystal for any $r \in (0, r_0)$. These results perfectly match the experimental data and numerical tests in [JJWM, JVF97, MRBJ93] performed for the special case of the square lattice and $N = 1$.

Organization of the paper. In Section 2 we introduce Schrödinger operators with point interactions supported on a lattice and collect some results about their spectra. These results are employed in the spectral analysis of $\Theta_r$ in Section 3. Next, the operator $\Upsilon_r$ is investigated in Section 4. Finally, Appendix A contains the technical analysis of the convergence of $H_{r,\lambda}$ in the norm resolvent sense to a Schrödinger operator with point interactions supported on a lattice." } ] }, "edge_feat": {} } }